Research Methods in Distance Education

By: James Kline, Jingwei Li, Wenjing Luo, and Zaina Sheets

Learning Objectives

After reading this chapter, you will be able to:

  • Distinguish among common arguments comparing the effectiveness of distance education and traditional face-to-face education.
  • Demonstrate knowledge of research methodologies that result in significant findings in distance education.
  • Describe the key research design and methodological shortcomings in distance education research that still persist within the current literature.
  • Assess the major arguments in support of and against the various theoretical approaches to distance education research (descriptive, correlational, experimental, and case study frameworks).
  • Explain the benefits of using quantitative, qualitative, and mixed-methods approaches to distance education research.
  • Identify common gaps in distance education research in the areas of:
    • Classroom instruction at the higher education level
    • Improving whole program effectiveness
    • Individual difference effects, and
    • How the interaction of multiple technologies affects distance education effectiveness.
  • Evaluate the major findings and criticisms in distance education research.
  • Interpret how common distance education research implications like the digital divide, learner access, the uniqueness of the instructor role, and commonly overlooked system elements affect the quality of research in this field.
  • Discuss the benefits and limitations of using meta-analyses in distance education research.
  • Summarize the criteria and future directions of distance education research.
  • Provide an overall assessment of the research in distance education by synthesizing the major topics presented in this chapter.

Create instructional tools using basic instructional technologies to demonstrate your knowledge of the various topics discussed in this chapter.

Introduction

In this chapter we discuss research in the field of distance education.  The chapter is structured into five major sections with various subsections included within each.  In section I, the general topics prevalent in distance education research are introduced, primarily those pertaining to the common arguments on distance education effectiveness as compared with traditional classroom education.  In section II, we introduce you to several key design and methodological shortcomings in distance education research that still persist within the current literature.  In section III, we review various common gaps in the distance education research.  For instance, we discuss issues concerning classroom instruction at the higher education level, improving whole program effectiveness, individual difference effects, and how the interaction of multiple technologies affects distance education effectiveness.  From here, we move on to section IV, where we discuss some of the special implications of distance education research like the digital divide and learner access, the uniqueness of the instructor role in the distance education environment, and commonly overlooked system elements.  Finally, in section V, we discuss the overall quality of the research done in the field of distance education by presenting a case for the use of meta-analyses, providing an overall assessment of the research literature, offering suggestions for improving methodological quality in distance education research, and highlighting some new directions that distance education research is currently heading.

Common Arguments on Distance Education Effectiveness

With the development and application of technology, distance education emerged and developed as a trend in K-12 schools and the higher education system.  With this expansion of distance education, there is a clear debate concerning the effectiveness of distance education versus classroom instruction, with the latter traditionally assumed to be the standard.  Researchers in this field have conducted numerous studies to investigate the effectiveness of distance education in general.  In this section, we compile arguments from various meta-analysis studies on the effectiveness of the fully online condition, the partially online condition (e.g., blended/hybrid learning), and the face-to-face condition.  Specifically, we focus on three arguments: that there is no significant difference, that distance education is less effective, and that distance education is more effective than classroom instruction.

Argument #1: No Significant Difference

With the adoption of media for instructional purposes, a variety of studies have been conducted to measure the effectiveness of technology in mediated instruction, in what is called the “media comparison study” (Lockee, Burton, and Cross, 1999, p. 33).  Those studies compared the learning outcomes of instruction in traditional classroom settings with those of instruction delivered through specific instructional methods and media.  Russell (1997) asserted that these comparison studies “show such inconsistent results that the film, slide, and print appear to possess no distinct advantage one over the other as far as these particular experiments are concerned” (p. 257).  Russell’s assertion is the “earliest evidence” (Lockee et al., 1999, p. 33) of no significant difference.
Clark (1983) criticized previous media comparison studies and declared that “these findings were incorrectly offered as evidence that different media were equally effective as conventional means in promoting learning” (p. 447).  In Clark’s view, the finding of no significant difference revealed that “changes in outcome scores did not result from any systematic difference in the treatments compared” (Clark, 1983, p. 447).  In addition, Clark (1994) stated that “learning is caused by the instructional methods embedded in the media presentation” (p. 26).  He believed that media are simply a delivery vehicle for instructional content and do not influence learning under any conditions.  What impacts learning is the instructional method, not the delivery medium.  Gagne, Briggs, and Wager (2005) similarly argued that media are a “vehicle for the communications and stimulation that make up instruction” (p. 205).  In their view, media per se do not transform instructional methods or learning content, so media cannot directly influence learning.

Argument #2: Distance Education is Not as Effective

According to Morrison, Ross, Kemp, and Kalman (2010), distance education encounters several restrictions.  For instance, distance education requires complex and intricate telecommunication environments and resources.  Compared with the synchronous presentation possible in classrooms, the audio and video used for distance education transmission may lag or fall out of sync, causing distraction.  Also, communication and interactions between learners and instructors in the distance education environment are constrained and not as fluid as those in face-to-face classroom settings.  Because of these difficulties and constraints on interaction, students’ learning interest and their learning outcomes may decrease.  Similarly, the cost of the resources and equipment required in distance education settings is unaffordable for some schools or learners.  Finally, the dropout rate for learners in distance education delivery systems is higher than in traditional education.

Argument #3: Distance Education is More Effective

We discussed Clark’s position under the no-significant-difference argument.  In contrast, Kozma (1994) believed that “media and methods are inextricably interconnected” (p. 16) in instructional design: “Media must be designed to give us powerful new methods, and our methods must take appropriate advantage of media’s capabilities” (p. 16).  In other words, media do influence learning outcomes.  This is a famous debate in the field of instructional design, and it continues today.

As researchers continued to compare learning outcomes between instruction with and without media, and to measure the effectiveness of media, more of them argued that distance education is “at least equal to campus-based, face-to-face version” (Lockee et al., 1999, p. 34).  Moore and Thompson (1990) argued that distance education can be as effective as conventional classroom instruction in terms of learning achievement, student and instructor attitude, and cost-effectiveness.  When appropriate instructional approaches and technologies were applied, along with frequent student-to-student interaction and timely teacher-to-student feedback, students achieved higher learning outcomes.

Beyond the aforementioned media comparison studies, Means et al. (2013) found that blended or hybrid learning is significantly more effective than either fully online distance education or face-to-face education.  To improve learning effectiveness and outcomes, teachers blend technologies such as computers, projectors, presentation software, and virtual courses with face-to-face learning.  In practice, blended learning combines face-to-face and purely online learning within the same course.  In other words, as a medium, technology should be regarded as a tool that facilitates learning, rather than a substitute that replaces face-to-face instruction.  More recent studies find that, compared with face-to-face conditions, distance education provides flexible access to rich content and qualified instruction whenever and wherever learners need it.  Distance education enhances learning accessibility for learners who are unable to attend face-to-face classes.  Technologies support distance learning in terms of “interactivity, social networking, and collaboration” (Means et al., 2013, p. 3) and increase learning effectiveness.

Means et al. (2013) designed a meta-analysis to synthesize and contrast the learning outcomes of fully online and hybrid learning conditions with those of face-to-face classroom conditions.  Only random-assignment experimental studies and statistically controlled quasi-experimental studies were included.  The authors calculated effect sizes and average effect sizes to indicate the differences between learning with technology and traditional classroom instruction, and applied a coding scheme to identify condition, practice, and methodological variables.  Means et al. (2013) concluded that purely online learning is as effective as face-to-face learning, while blended learning is more effective than traditional face-to-face instruction.  In addition, students in distance learning conditions that incorporated interaction outperformed those receiving face-to-face instruction.

According to Bernard et al. (2004), it is not easy to answer the distance education effectiveness question in a single study: “It is only through careful reviews of the general state of affairs in a research literature that large questions can be addressed and the quality of the research itself and the veracity of its findings can be assessed” (p. 383).  Previous attempts to summarize distance education research suffer from problems such as researcher bias, subjectivity, and an inability to resolve the question.  Meta-analysis offers an alternative that integrates studies with varying sample sizes “by extracting an effect size from all studies” (Bernard et al., 2004, p. 384).  In addition, by investigating “moderator variables” (Bernard et al., 2004, p. 384), meta-analyses can explore complicated and specific relationships in the data.  Next, we present research findings on the effectiveness of, and differences between, distance education and face-to-face education.

Additional Findings on the Effectiveness of Distance Education

The differences between distance education and face-to-face education lie in the “proximity of learner and teacher, and differential means through which interaction and learner engagement can occur” (Bernard et al., 2004, p. 381).  Synchronicity and asynchronicity, issues of instructional design, student motivation, direct and indirect communication, and perceptions of isolation are the key contrasting features of distance education and face-to-face education.  Clark (2000) and Smith and Dillon (1999) argued for the importance of conducting studies that compare distance education and face-to-face education in order to investigate the effectiveness of comparable types of distance education technologies, discover media attributes and their hypothesized effects on learning, and push forward our understanding of both forms of education.  According to Bernard et al. (2004), comparing the effectiveness of distance education with classroom alternatives will dispel the misconception that distance education is a mere alternative to campus-based education and demonstrate its worth for certain content domains, learners, and pedagogical circumstances.

Current comparative studies are limited in their findings with regard to quality comparisons between distance education and face-to-face methods of delivery. The most frequently researched questions regarding comparisons between distance education and face-to-face education were formulated around the quality of learning and instruction, the cost-effectiveness, learners’ attitudes toward distance education, needs of distance education learners, and factors affecting the quality of education in both situations.

Traditionally, research in distance education has been dominated by “comparison studies” or “media comparison studies” (Clark, 1983; Clark & Salomon, 1986), most of which compare the effectiveness of distance education with that of face-to-face education, or the effectiveness of one technology with that of another (Gunawardena & McIsaac, 2004; Joy & Garcia, 2000).  Some of these studies conclude that there are no significant differences between distance education and face-to-face education in learning outcomes (Gunawardena & McIsaac, 2004; Russell, 1999).  Others conclude that positive effects found in some distance education studies are balanced by negative effects found in others (Gunawardena & McIsaac, 2004).  Moreover, comparison studies have been criticized for their poor selection of factors, flawed methodologies, biased sampling, and improper measures of outcomes (Clark & Salomon, 1986).  Therefore, some researchers suggest that a systematic analysis of the research findings may reveal the significant differences between distance education and face-to-face instruction implied by some individual studies.

In the upcoming sections of this chapter, we will explain and demonstrate the problems in current distance education research, the gaps in the research, the necessity of using meta-analysis studies and their findings, as well as new directions for distance education research.

Key Shortcomings in Distance Education Research

Evolving Research Approaches and Frameworks

Researchers have argued that distance education research is devoid of a consistent theoretical framework (Moore, 2003; Berge & Mrozowski, 2001; Perraton, 2000; Saba, 2000).  For instance, Saba (2000) noted that “Research questions are rarely posed within a theoretical framework or based on its fundamental concepts and constructs” (p. 2).  Likewise, Berge and Mrozowski’s (2001) review of the literature between 1990 and 1999 found that of 1,419 articles and dissertation abstracts pertaining to distance education, only 62.7% (n=890) specified a clear research methodology or theoretical framework.

If much of the distance education research does indeed lack a consistent theoretical framework, it becomes difficult to compare studies using objective criteria such as those presented within formal standards like those of the Institute for Higher Education Policy (i.e., the IHEP benchmarks), which are designed to provide a basis for judging the level of quality in distance education practice and theory.  One purpose of this section is to show the importance of having a consistent theoretical framework or clear research methodology when conducting distance education research.  A secondary purpose is to demonstrate, through more recent literature, a current trend toward more empirical approaches to distance education research.

Traditional Approaches to Distance Education Research

Early research on distance education often employed a predominantly descriptive, correlational, experimental, or case study framework (Perraton, 2000; Phipps and Merisotis, 1999).

Descriptive studies are works that simply seek to describe the distance education phenomenon (Naidu, 2009).  Berge and Mrozowski (2001) found that three-fourths of the articles and dissertations in their study used a descriptive approach, with all other approaches spread across the remaining quarter.  While descriptive studies are useful for identifying gaps in existing research, they cannot demonstrate causation among variables, which is a key requirement for determining the relative effectiveness of a distance education course or program compared with a traditional learning environment, or for comparing satisfaction levels between the two instructional settings.  In the case of distance learning, descriptive studies are most useful for demonstrating “the need for a research agenda and future vision in the field of distance education” (Berge & Mrozowski, 2001), but little else.

Correlational studies involve collecting data to determine the degree to which a relationship exists between two or more variables (Fraenkel & Wallen, 2014).  Although correlational studies help researchers make general predictions regarding learning and satisfaction outcomes, this type of research does not allow for identification of cause-and-effect relationships.  Correlational studies are limited in that they only “show if there is a positive correlation, negative correlation, or no correlation between data sets” (What are the disadvantages of correlation research?, n.d.).  They are not meant to prove that one variable causes another, although they are often used for that purpose.
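
To make this limitation concrete, here is a minimal, hypothetical sketch in Python (all variable names and data are invented for illustration): two measures correlate strongly only because a third, unmeasured variable drives both, which is precisely the causal possibility a correlational design cannot rule out.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

# Hypothetical confounder: weekly study hours for 200 learners.
study_hours = [random.gauss(10, 3) for _ in range(200)]

# Both observed variables depend on the confounder, not on each other.
forum_posts = [0.8 * h + random.gauss(0, 1) for h in study_hours]
exam_scores = [60 + 2.5 * h + random.gauss(0, 4) for h in study_hours]

r = statistics.correlation(forum_posts, exam_scores)
print(f"Pearson r between forum posts and exam scores: {r:.2f}")
# A large r here does not show that posting causes higher scores;
# study hours produce both, so treating the correlation as causal is a mistake.
```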

The approach to distance education research that provides greater control over a study and allows the researcher to identify causation among variables is the experimental approach.  Here the researcher uses manipulation and controlled testing to determine effects between variables.  However, the experimental situation is nonetheless artificial at times, with results that may not generalize well for practitioner use.  Likewise, it is sometimes difficult to design experimental conditions completely free from experimenter effects or extraneous bias.  According to Bullen (1999):

“Ethical and practical considerations make it almost impossible to conduct truly experimental studies in education. Students cannot be randomly assigned to control and treatment groups in these kinds of situations. Controlling extraneous variables means that technologies cannot be used in ways that take advantage of their unique characteristics. For example, imposing this kind of control when comparing video with classroom instruction would mean simply producing a videorecording of the classroom presentation for the distance students instead of exploiting the unique symbol system offered by video” (p. 103).

Finally, a case study framework is a “systematic inquiry into an event or a set of related events which aims to describe and explain the phenomenon of interest” (Bromley, 1990, p. 302).

This approach examines the phenomenon of distance learning by employing a content-based scenario of the phenomenon under consideration to stimulate the imagination of readers as they place themselves into the particular case.  The phenomena studied within a case study approach vary widely, from an individual to an entire system.  Case study data come from archival records, direct observations, documentation, interviews, physical artifacts, and participant observation (Yin, 1994).

Objectivism and Quantitative Frame of Reference

Although empiricism, rationalism, pragmatism, and humanism are well-established philosophical approaches that inform research in distance education theory (cf. chapter 2 of this eBook), the traditional, and still primary, approach to critiquing distance education research has been to use an objectivist epistemology with a quantitative (i.e., statistical) frame of reference.  Objectivism claims that “one reality exists independent of anyone perceiving it, humankind is capable of knowing this reality only by the faculty of reason, and objective knowledge and truth is possible” (Peikoff, 1993).  Because knowledge is held to exist independent of the learner, the objectivist designs conditions to promote acquisition of pre-established objectives (Hannafin & Hill, 2007).  In one comprehensive study of the IHEP benchmarks within the distance education literature, Shimp (2008) found that objectivist/quantitative (and descriptive) approaches accounted for 61% of all research methodologies utilized in the 278 journal articles in the literature sample.  This study covered journal articles from 2002 through 2006 and included articles from The American Journal of Distance Education, Distance Education, Journal of Distance Education, and Open Learning.

Although there has been no lack of demand for more qualitative approaches as a means of obtaining a richer and deeper range of data (Minnes, 1985; Saba, 2000), relatively few distance education studies are informed by constructivist epistemologies (i.e., humanist; cf. chapter 2 of this eBook), which use interpretivist/qualitative methods, or use a mixed-methods approach that combines quantitative and qualitative methods as a form of “triangulation” (Neumann, 2007, p. 149) across methods.  In further support of a mixed-methods approach to distance education research, Garrison and Shale (1994) state, “Researchers are realizing that in practice the methodologies can be viewed as complementary… Researchers who advocate combining quantitative and qualitative methods are thus on solid epistemological ground” (p. 25).  Particularly within a diverse research field like distance education, the ability to explore issues from multiple perspectives through a wide range of instruments, methods, and data collection techniques is invaluable for mutually validating results.

Current Research Trends

Lee, Driscoll, and Nelson (2004) have noted, “Understanding trends and issues in terms of topics and methods is pivotal in the advancements of research on distance education” (p. 225).  In their review of distance education research from 2000 to 2008, Zawacki-Richter, Bäcker, and Vogt (2009) found “a significant trend towards collaborative research and more qualitative studies” (p. 1).  For this study, they reviewed 695 articles published in five major distance education journals between 2000 and 2008.  Unlike Shimp (2008), Zawacki-Richter et al. (2009) found an apparent trend towards more empirical research, with only 38.1% of the articles being descriptive in nature and 12.9% following a mixed-methods approach.  They contrast their results with Berge and Mrozowski (2001), who classified 75.9% of 727 articles published in four major distance education journals between 1990 and 1999 as descriptive, and with Mishra (1997), whose review of 361 articles from 1991 to 1996 found 47.6% to be descriptive papers.

In addition to showing a movement away from predominantly quantitative approaches to distance education research, Zawacki-Richter et al. (2009) also reveal a highly significant association between the research methods used and the theoretical preference of a particular journal.  The majority of papers accepted for publication in the journals Open Learning (OL) and the International Review of Research in Open and Distance Learning (IRRODL) between 2000 and 2008 employed a more traditional descriptive or theoretical approach: 48.1% for OL and 56.6% for IRRODL.  Similarly, Zawacki-Richter et al. (2009) found that the American Journal of Distance Education (AJDE) had a preference for quantitative studies, with 63.4% of all articles accepted for publication between 2000 and 2008 employing a quantitative design.  Yet traditional approaches did not predominate across all journals.  In the journal Distance Education (DE), nearly one-third of all publications (29.5%) employed qualitative methods.  Likewise, another top journal, the Journal of Distance Education (JDE), showed a significant interest in mixed-methods approaches, with 28.1% of all published papers following some form of triangulation.  Thus, approaches to distance education research vary greatly and depend largely on the research interests of the particular journal, yet are moving away from predominantly descriptive and quantitative methods overall.

Flawed Research Methodologies

Another key limitation in much of the distance education research is the use of research methodologies of flawed or questionable value (Randolph, 2007; Phipps and Merisotis, 1999).  At a basic level, experimental or quasi-experimental studies need to 1) be able to show cause and effect relationships by controlling for extraneous variables, 2) randomly assign subjects to treatment and control groups, 3) use valid and reliable instruments to measure effects and attitudes, and 4) control for the “reactive effects” of subjects.

1.    Failure to Control for Extraneous Variables

Particularly in the case of experimental approaches to research, a lack of control for extraneous variables (i.e., competing outside causes) prevents the researcher from accurately comparing control group and experimental group outcomes.  Since much of the experimental research in distance education aims to measure the effects of specific technological tools on learning outcomes or learner satisfaction, the researcher needs to be able to appraise this relationship accurately by first removing any extraneous variables that could influence the outcomes.  Unfortunately, such control for extraneous variables is frequently missing from distance education research.  For instance, Randolph (2007), in his review of the distance education literature from 1999 to 2005, found that posttest-only designs with nonequivalent controls left the studies open to “selection and selection-interaction threats to internal validity” (p. 8).  Likewise, he found that although researchers often compare demographic data to measure selection threats, the chosen demographic variables were inadequate for measuring the factors most relevant to outcomes.  Out of 66 studies, Randolph (2007) found that only one (Litchfield, Oakland, and Anderson, 2002) “used a design strong enough to control for most extraneous variables” (p. 8).

2.    Failure to Randomly Assign Subjects

Another frequent problem encountered in the literature is the failure of distance education studies to randomly assign subjects to treatment and control groups.  Random assignment of students into experimental and control groups is the commonly accepted way to control for extraneous variables.  However, many studies, particularly those from the 1990s and early 2000s, keep intact groups as the means of comparison with other groups.  As a result, there is no way to determine whether it is extraneous variables or the actual technology used to provide the distance learning experience that affects student achievement or satisfaction.  Randolph (2007) notes that none of the 66 studies he examined followed the basic statistical procedure of using random selection, a research design flaw that greatly hindered the researchers’ ability to make causal generalizations between independent and dependent variables.
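
As a contrast with the intact-group designs criticized above, the following minimal Python sketch (with a hypothetical roster; names and group labels are invented) shows the random-assignment step itself, which distributes extraneous learner characteristics across groups by chance rather than by class membership:

```python
import random

random.seed(42)  # fixed seed so the assignment can be audited and reproduced

# Hypothetical roster of consenting participants.
students = [f"student_{i:03d}" for i in range(1, 61)]

shuffled = random.sample(students, k=len(students))
half = len(shuffled) // 2
treatment = shuffled[:half]   # e.g., the distance education condition
control = shuffled[half:]     # e.g., the face-to-face condition

print(f"{len(treatment)} in treatment, {len(control)} in control")
```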

3.    Use of Invalid or Unreliable Measurement Instruments

A well-designed educational research study also needs to use consistently reliable and valid measurement instruments to ascertain actual learning outcomes and student attitudes.  For instance, to produce accurate research results with a high level of validity and reliability, measurement instruments like student exams, surveys, and rating scales all need to measure what they are designed to measure.  In much of the distance education research examined, the use of invalid or unreliable measurement instruments puts the results in question.  Common instruments like self-report Likert surveys and teacher-designed evaluation tools are subject to strong reactive effects (Randolph, 2007; Phipps and Merisotis, 1999).
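
One routine reliability check that such studies could report is Cronbach’s alpha for internal consistency.  Below is a minimal sketch with invented Likert data; note that a high alpha indicates consistency only, not validity (it does not show that the instrument measures what it claims to measure).

```python
import statistics

def cronbach_alpha(item_scores):
    """Internal consistency of a survey.

    item_scores: one list per item, each containing one score per respondent.
    """
    k = len(item_scores)
    item_variances = [statistics.variance(scores) for scores in item_scores]
    totals = [sum(per_respondent) for per_respondent in zip(*item_scores)]
    total_variance = statistics.variance(totals)
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Hypothetical 5-point Likert responses: 4 items, 6 respondents each.
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [3, 5, 2, 4, 1, 5],
    [4, 5, 3, 5, 2, 4],
]
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```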

4.    Failure to Control for Reactive Effects of Subjects

Finally, many distance education studies do not properly control for the reactive effects of the test subjects.  Phipps and Merisotis (1999) define reactive effects as “a number of factors associated with the way in which a study is conducted and the feelings and attitudes of the students involved” (p. 4).  They describe one common reactive effect that is often unaccounted for in distance education research, the novelty effect, which refers to “increased interest, motivation, or participation on the part of students simply because they are doing something different, not better per se” (p. 4).  According to Clark (1983), novelty effects with newer media are a confounding variable because participants tend to pay increased attention to technology that is new to them.  This is of particular concern in distance education, as students are introduced to various new online technologies like podcasts, video collaboration techniques, discussion boards, grouping systems, online presentation formats, and the many intricacies of the different LMS options available for distance learning, all of which the student must master in order to succeed in the distance education format.  One such instance is Malan (2007), who found that novelty effects were indeed the cause of some students’ enthusiasm for using podcasts.  Another reactive effect, called the John Henry effect, refers to “control groups or their teachers feeling threatened or challenged by being in competition with a new program or approach and, as a result, outdoing themselves and performing well beyond what would normally be expected” (Phipps & Merisotis, 1999, p. 4).

What are the Gaps in the Research?

Effectiveness of Traditional Classroom Instruction in Higher Education

Data from traditional classroom research is needed before making claims about or assessments of distance education.  According to Major and Palmer (2001), traditional classroom instruction in higher education is lecture based.  Lecture-based instruction involves delivering as much information as possible, as quickly as possible, and Major and Palmer (2001) consider it the most effective and efficient way to disseminate information.  The problem with this form of instruction is that many faculty members are poor lecturers and learners are often poor participants; this type of instruction allows learners to be passive in the classroom (Major & Palmer, 2001).  Learners construct knowledge; they do not take it in as it is disseminated, but rather build on knowledge they have gained previously (Cross, 1998).  This form of instruction does not give learners the opportunity to collaborate and make cognitive, social, and experiential connections (Cross, 1999).  Even though face-to-face education allows for easier collaboration than distance education, that advantage is usually not utilized, due to large class sizes and the pace at which the instructor must teach (“LEADING CHANGE,” 2013).  Learners make connections in different ways, which means they learn in different ways as well.

Higher education today involves a multitude of instructional styles; however, lecture-based instruction is still a popular choice.  Giving learners the individual attention they need to develop critical thinking and problem-solving skills in a typical higher education classroom setting is difficult (Wilson, 1996).  One suggested way to help develop these skills is hybrid/blended learning.  Hybrid (or blended) learning keeps the traditional face-to-face setting but replaces some of the class time with distance education activities.  The infusion of technology and technology-based services in higher education is commonly thought to have revolutionized the learning process and provided institutions with an efficient alternative to traditional instructional methods (Perez Pena, 2012).  Hybrid learning offers the advantages of both face-to-face instruction and distance education, but it also carries the drawbacks of both (“LEADING CHANGE,” 2013).  Porter, Graham, Spring, and Welch (2014) suggest the following ways to improve hybrid learning: develop advocates for hybrid-learning methods at multiple institutional levels to establish a shared implementation, and obtain the resources needed to implement the learning methods.

If instructors are able to incorporate hybrid learning into their instruction, traditional classroom instruction has a chance at being successful.  However, if faculty continue to take the “easy way out” by giving strictly lecture-based instruction, learners won’t fully develop all the necessary skills.

Research and Suggestions on Distance Education Effectiveness

To determine whether a “significant difference” truly exists between distance and traditional education, whole-program effectiveness through improved facilitation needs to be examined, instead of focusing on the more common single-course effectiveness studies.  One effective strategy to consider is Ragan’s (2010) use of Penn State’s World Campus report entitled “Online Instructor Performance Best Practices and Expectations” to define the core behaviors of successful distance education facilitators that produce effective whole-program distance education.  Ragan frames the need for explicit expectations with the question, “If you don’t tell us what is expected, how will we know what to do to succeed?”  The ten principles are as follows: Show up and Teach, Practice Proactive Course Management Strategies, Establish Patterns of Course Activities, Plan for the Unplanned, Response Requested and Expected, Think Before You Write, Help Maintain Forward Progress, Safe and Secure, Quality Counts, and (Double) Click a Mile on My Connection.  If these ten principles were used in all distance education course development, distance education facilitation might have an even higher success rate for the overall program, the instructor/facilitator, and the learners.

  1. Show up and Teach: instructors should have all core teaching materials, resources, and instructional strategies in place prior to the start of class.  This frees the instructor from having to create course material throughout the length of the course and lets them participate actively in the class instead.  It also gives the instructor the opportunity to reach all learners equally in synchronous or asynchronous environments.
  2. Practice Proactive Course Management Strategies: instructors need to define the expectations of the course, communicate the learning objectives and the responsibilities of the learner.  Instructors need to monitor all learning activities and communicate information on assignment submissions (upcoming, missed, etc.).
  3. Establish Patterns of Course Activities: it is important to define and communicate pace and work pattern.  This allows learners and instructors to plan and manage the coursework.  It also allows the instructor to know when the class day is over and take time for themselves/their family.
  4. Plan for the Unplanned: this is how the instructor is communicating to learners the strategy for managing communication when “life happens.”  It takes away the panic for both the learner and the instructor. For example, if there was an unexpected emergency (like bad weather or campus closing for a threat), the instructor would know how to reach their learners to inform them of any changes. The learners would know where to check for any updates and how to get in contact with the instructor.
  5. Response Requested and Expected: timely feedback is essential in a distance education course.  Instructors need to define the timeframe for responding to learner inquiries.  Also, instructors must monitor commonality amongst inquiries to see if there should be refinement or additional clarification added.
  6. Think Before You Write: instructors need to be clear and concise in communication.  If a learner comes to them with an inquiry, instructors should view that as an opportunity to improve communication on the assignment.  Instructors should establish and communicate the etiquette expectations for inquiries.
  7. Help Maintain Forward Progress: instructors should communicate response times for submitted assignments.  Feedback enables a learner to monitor their progress and make adjustments in the course.  Establishing response times also allows the instructor to maintain their deadlines and plan for events.
  8. Safe and Secure: instructors should clearly define communication method expectations: how they wish to be contacted, and which types of questions should go where.
  9. Quality Counts: instructors should be monitoring the quality of the distance education experience and seeking ways to improve it.  Feedback from learners should be taken into consideration as well.
  10. (Double) Click a Mile on My Connection: instructors should be experienced and understand the distance education platform learners are expected to use. Learner feedback about the platform should be monitored and taken into consideration for improving the system.

If instructors/facilitators followed the ten principles there might be more consistency and less confusion. Ragan (2010) adds that programs don’t necessarily have to follow this exactly.  However, if programs required guidelines like these throughout their facilitation efforts across their entire program, overall distance education effectiveness might increase dramatically.

Individual Difference Effects

Individual differences refer to the variations among individuals with regard to a single characteristic or a number of characteristics.  These characteristics include, but are not limited to, age, gender, culture, and motivation.  They are common factors that should influence the design of distance education, but they are sometimes forgotten by designers.  Distance education could improve further if more research were conducted on these individual differences and how to incorporate them successfully into design.

Motivation in adult learners can be intrinsic, driven by the need for autonomy or personal growth, or instrumental, impelled by social and environmental pressures (Harvey, 1995).  Women return to college for a number of reasons, including career advancement, higher wages, and personal fulfillment (American Association of University Women, 1999).  Research shows that women face significant barriers that hinder their completion of both face-to-face and distance education programs.  These are often internal barriers, including fear of failure, lack of self-confidence, and discomfort (Furst-Bowe, 2002; Gorback, 1994; Garland & Martin, 2005).  However, national data show that more women than men are currently enrolling in distance education courses and graduating faster (Shea & Bidjerano, 2016).  Furst-Bowe’s (2002) findings “suggest that women are returning to college primarily for job-related reasons and that they are deliberately selecting programs delivered via distance education because of the convenience associated with distance education courses and other types of distance education courses delivered at sites near their homes” (p. 87).  This is due to the flexibility of distance education courses, which give learners the freedom to complete work on their own schedule.  They also allow learners to spend time formulating a response, rather than being called on and having to answer instantly in a face-to-face setting, which benefits all learners (Coombs, 2000).  In face-to-face classrooms, female learners may typically speak out less frequently and less confidently than male learners due to role socialization (Anderson & Haddad, 2005).  In distance education courses, where participation is required, female learners appear less hesitant to speak out and engage in dialogue (Anderson & Haddad, 2005).

Another issue educators need to take into consideration is the age gap, and technology use is one place where that gap appears.  Adult learners, both male and female, may need additional assistance to participate in distance education courses (Furst-Bowe, 2002).  Adult learners have unique characteristics that vary in terms of educational and life experiences (Burge, 1998).  Central concepts in adult education include experiential learning, self-directed learning, and transformative learning theory (Cercone, 2008).  “Experiential learning is composed of three components: (a) knowledge of concepts, facts, information, and experience; (b) prior knowledge applied to current, ongoing events; and (c) reflection with a thoughtful analysis and assessment of learners’ activity that contributes to personal growth” (Cercone, 2008, p. 147).  Self-directed learning resides in the learner, who may initiate learning with or without assistance from others (Lowry, 1989).  Transformative learning helps adult learners understand their experiences: how they make sense or “meaning of their experiences, the nature of the structures that influence the way they construe experience, the dynamics involved in modifying meanings, and the way the structures of meaning themselves undergo changes when learners find them to be dysfunctional” (Mezirow, p. xii).  Distance education instructors should take these concepts into consideration, though no one theory can explain how adults learn (Cercone, 2008).

The Interaction of Multiple Technologies

The interaction of multiple technologies has had a profound impact on the world of education.  The different technologies created over the years have allowed designers and instructors to be creative and effective in their delivery of instruction.  Multiple modes of technology have affected the social organization of teaching and learning in higher education by expanding the delivery of higher education and opening opportunities to rethink the fundamentals of the higher education setting, such as the roles of students and teachers, the time and place of instruction, and the organizational participants (Gumport & Chun, 2002).

“No single technology is likely to address all the teaching and learning requirements across a full course or program, satisfy the needs of different learners, or address the variations in their learning environments. Using a mixture of media allows for differences in student learning styles or capabilities” (Moore & Kearsley, 2011, p. 91). Also, the use of multiple technologies and multiple media provides redundancy and flexibility. Even though redundancy is usually discouraged, if there was a problem with one of the technologies, the other could compensate (Moore & Kearsley, 2011). With technology comes concern, “a common concern among faculty is that technology may be used inappropriately given differences in student proficiency with technological tools as well as the types of course delivery modes available” (Frantzen, 2014, p. 567). Instructors must ensure that the selected forms of media work together and that students understand how the media work together. Providing a course map (usually a study guide) that depicts how the different technologies work together and relate to each other is desirable (Moore & Kearsley, 2011).

Multiple modes of technology also enable methods that build on students’ inquiry and problem-solving skills and their acquisition of content knowledge (O’Lawrence, 2016).  Using multiple technologies ineffectively, simply because they are available, would hurt more than help.  Used effectively, however, these technologies can greatly enhance a student’s learning experience.  Thus, the influence of multiple technologies is a necessary topic to include when debating distance education effectiveness.

Implications

Access is Not Only an Issue of Technology

Access depends on learner skills and other individual qualities, as well as adequate training to use the technologies inherent in distance education.  The digital divide refers to the gap between those who do and those who do not have access to new forms of information technology (Hargittai, 2001).  The problem with this framing is that it usually focuses on two extremes, those with access and those without: “the haves and have-nots of the digital age” (Hargittai, 2001).  Figure 1 (Boogaard, 2006) illustrates the digital divide in computer ownership across the globe.  As more and more people use the web for communication and information retrieval, it becomes less and less useful to look merely at demographic differences (Hargittai, 2001).  Because most people have access to the Internet, whether at home, work, or school, what should be observed instead is how those pursuing distance education differ in their ability to use the medium (Hargittai, 2001; Moore & Kearsley, 2011).

Figure 1. Global digital divide.

In 2001, DiMaggio and Hargittai coined the term “digital inequality.”  Digital inequality better captures the complexity of the inequalities relevant to understanding differences in access to and use of information technologies (DiMaggio and Hargittai, 2001).  Digital inequality considers five dimensions: differences in the technical apparatus people use to access the Internet, location of access, the extent of one’s social support network, the types of uses to which one puts the medium, and one’s level of skill.  Skill access itself develops in stages (Van Dijk, 2006): users first acquire operational skills, then develop and apply information skills, and finally strategic skills (using a computer and network sources as a means toward particular goals).  Even a user who is able to develop these skills might lack motivational access; a user who refuses to use the Internet would sit at the have-not end of the scale.  This is why the digital divide needs further research.  Most existing research is quantitative and tries to describe the large picture of the problem (Van Dijk, 2006).  Qualitative research is needed to bring forward the precise mechanisms that explain how the technology concerned is divided in everyday life (Van Dijk, 2006).

Instructor Role is Different in Distance Education

In distance education, the role of the instructor is more important than ever.  Instructors serve as a guide, creator, facilitator, curator, and instructor (Ragan, 2010).  In distance education, learners tend to rely on their instructor more than in the face-to-face environment because of the lack of in-person interaction, as well as the ease of access.  Instructors need to define communication expectations, availability, feedback response times, and course expectations to help learners manage their learning process.  Defining those items also lets the instructor manage their life outside of class without feeling the need to constantly monitor and answer learner inquiries (Ragan, 2010).

Isman, Altinay, and Altinay (2004) defined the learner’s main role as learning, and the instructor’s main role as designing the course and addressing the needs of learners.  The distance between the learner and the instructor changes the instructor’s role from content expert only to a blend of content expert, resource manager, learning process expert, and process implementation manager.  Garrison, Cleveland-Innes, and Fung (2004) describe the learner’s role as involving both independence and interdependence (learner-content, learner-instructor, and learner-learner).  In distance education, instructors usually base grades on learners’ interaction with the content and with other learners in the course.  Another difference is that distance learners have to plan their own schedules and strategies; they have to take responsibility for themselves.  In face-to-face courses, the recurrence of class meetings forces learners to stay on track, while in distance education, instructors have to build learner motivation and make sure they update communications, feedback, and information in a timely manner (Isman, Altinay Z., & Altinay F., 2004).  Instructors are responsible for guiding learners to take responsibility for their own learning, and they can gauge their success based on how learners perform.

System Elements are as Important as Technology

One important element to consider when designing instruction is learner characteristics.  Richey, Klein, and Tracey (2011) describe these as demographic characteristics and individual differences, beliefs and attitudes, and mental models that may influence the selection of instructional strategies.  Proven demographic characteristics that impact learner attitudes and performance in education include age, work experience, and educational level (Richey, 1992).  Take age as an example: a learner in their teens and a learner in their 70s will more than likely have vastly different levels of technological knowledge, and instructional designers have to consider the technology and instruction they use for these two types of learners.  Attitude is a strong predictor of motivation toward learning and of how well material transfers to other settings (Richey, Klein & Tracey, 2011).  Carroll (1963) considers factors such as general ability, the time required to learn a skill, and the amount of time a learner is willing to spend learning as learner characteristics.  Bloom (1976) emphasized the importance of learner background in terms of cognitive and affective entry behaviors.  Keller (1979) explains that learner characteristics, prior knowledge, and learning design all impact learning and performance.  Keller (1987, 2010) related motivational problems and strategies to four main components: attention, relevance, confidence, and satisfaction, which resulted in the ARCS model of motivation.

Quality of Research in Distance Education

The Importance of Meta-Analysis for Distance Education Research

Meta-analysis is a collection of systematic techniques for resolving apparent contradictions in research findings (Shachar, 2008).  According to Glass et al. (1981), meta-analysis is an “alternative to the selectivity of narrative views and the problem of conclusions based on test statistics from studies with different sample sizes” (p. 80).  Meta-analysis collects findings from different studies and translates them into a common metric called the effect size, which is then used to analyze the relationship between research characteristics and findings.

The effect size is the standardized mean difference between the experimental and control groups.  Calculating effect sizes quantifies the magnitude of the difference between groups, independent of sample size, and thus helps researchers estimate a study’s statistical power.  In addition, effect sizes provide guidance for future research designs in planning for adequate power, rather than wasting time on minimal effects.
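
For reference, the standardized mean difference described here is commonly written as Cohen’s d; the following is a sketch of the textbook formula (the meta-analyses cited in this chapter may use closely related, bias-corrected variants such as Hedges’ g):

```latex
d = \frac{\bar{X}_{E} - \bar{X}_{C}}{SD_{pooled}},
\qquad
SD_{pooled} = \sqrt{\frac{(n_{E}-1)\,SD_{E}^{2} + (n_{C}-1)\,SD_{C}^{2}}{n_{E}+n_{C}-2}}
```

Here the numerator is the difference between the experimental (e.g., distance education) and control (e.g., face-to-face) group means, and the pooled standard deviation combines the two groups’ standard deviations weighted by their sample sizes.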

A meta-analysis provides a feasible way to collect empirical findings from individual studies for the purpose of integrating, synthesizing, and making sense of them.  Valid statistical findings are achieved through strict adherence to research procedures, systematic treatment, and analysis of data.  One benefit of conducting a meta-analysis is that it gives voice to small but distinctive studies whose individual findings are not robust enough to warrant serious consideration, but whose integrated findings contribute to the bigger picture.  It is also an approach to estimating the differences between treatment groups across a large set of studies.  Moreover, moderator variables serve to reveal more detailed relationships that exist in the data (Bernard et al., 2005).
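
To illustrate the pooling step, here is a minimal fixed-effect meta-analysis sketch in Python, using invented study results and the standard inverse-variance weighting (real meta-analyses such as those cited in this chapter also code moderators and may use random-effects estimators):

```python
import math

# Hypothetical studies: (effect size d, treatment n, control n).
studies = [(0.25, 40, 42), (0.10, 120, 118), (-0.05, 60, 55)]

def variance_of_d(d, n_t, n_c):
    # Approximate sampling variance of a standardized mean difference.
    return (n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c))

weights = [1 / variance_of_d(d, n_t, n_c) for d, n_t, n_c in studies]
pooled_d = sum(w * d for w, (d, _, _) in zip(weights, studies)) / sum(weights)
se = math.sqrt(1 / sum(weights))  # standard error of the pooled estimate

print(f"Pooled d = {pooled_d:.3f}, 95% CI "
      f"[{pooled_d - 1.96 * se:.3f}, {pooled_d + 1.96 * se:.3f}]")
```

Note how larger studies receive more weight: this is what allows small but distinctive studies to contribute to the bigger picture without dominating it.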

Meta-analysis has received many criticisms with regard to its suitability and validity, and it has been questioned for its capacity to compare studies undertaken with heterogeneous methodologies.  Potential answers to this criticism include correcting flaws to improve the reliability and validity of the included studies, or accepting the approach because there is currently no better way to synthesize numerous studies.  Some of the distance education research conducted through meta-analysis is listed below:

a.    Internet-based distance education programs for K-12 students (Cavanaugh, 1999; Cavanaugh, 2001; Cavanaugh et al., 2004)
b.    Comparing distance education with classroom instruction (Bernard et al., 2004)
c.    Distance education courses delivered via multiple technologies (Zhao et al., 2005)
d.    Comparing web-based training with face-to-face training for job-related knowledge or skills (Sitzmann et al., 2006)
e.    Effectiveness of online and blended learning (Means et al., 2009; Means et al., 2013)

The Overall Quality of Research in Distance Education

The overall quality of the original research in distance education is questionable, which accounts for many of the inconclusive findings.  Some commentators criticize the poor quality of distance education research on issues such as a) lack of experimental control; b) lack of procedures for randomly selecting research participants; c) lack of random assignment of participants to treatment conditions; d) poorly designed dependent measures that lack reliability and validity; and e) failure to account for a variety of variables related to the attitudes of students and instructors (Anglin & Morrison, 2000; Diaz, 2000; Perraton, 2000; Saba, 2000).  However, distance education has its own inherent obstacles to producing high-quality research findings.  First, it is difficult to involve learners who are studying from a distance and get them to fill out surveys, finish interventions, and participate in interviews.  Second, even if they agree to participate, other issues arise, such as the fidelity of evaluations and the thoroughness of responses.  Third, experimental controls are hard to implement because it is impossible to randomly assign learners to either control or experimental groups (Bernard et al., 2004).

Overall Assessment of Research Literature

Journal articles encompass the largest amount of information about research in distance education, compared with research reports and dissertations.  Because journal articles are usually scrutinized by peer reviewers, they tend to report research methodology fully and completely.  It is surprising to find that dissertations omit some important information even though they, too, are scrutinized by a panel of academic experts.

More information about the traditional classroom environment should be provided in the public record.  Studies that compared the effectiveness of distance education and face-to-face education usually included a rich description of the distance education condition but only limited descriptions of the traditional classroom condition, which are key to understanding the varying effectiveness (Zhao et al., 2005).  In addition, some descriptive statistical information is missing from research reports.  If means and standard deviations are listed, precise effect sizes can be calculated; otherwise, there is no way for readers to calculate them for themselves.
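
As a hypothetical illustration of why those statistics matter: a study reporting an online-group mean of 78, a classroom-group mean of 74, and a pooled standard deviation of 10 lets any reader recover the standardized effect size, d = (78 - 74) / 10 = 0.40, whereas a study reporting only “a significant difference (p < .05)” leaves the magnitude of the effect unrecoverable.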

Some aspects of research design are also amenable to improvement.  To better control for selection bias, random assignment or pre-testing is needed.  Measurements need to be refined to capture the data relevant to the research.  In addition, materials, media, length of instruction, and choice of instructor need to be controlled in order to create equivalent experimental groups.  It is also ideal to compare programs that are similar in length so that results can be generalized.  Below is a list of major distance education journals.

Table 1. List of distance education journals.

Primary Distance Education Journals
  • American Journal of Distance Education
  • International Journal of Distance Education and E-Learning
  • Distance Education: An International Journal
  • Quarterly Review of Distance Education
  • Open Learning: The Journal of Open and Distance Learning
  • Online Learning
  • Online Journal of Distance Learning Administration
  • International Review of Research in Open and Distance Learning (IRRODL)
  • European Journal of Open, Distance and E-Learning (EURODL)
  • Turkish Online Journal of Distance Education
  • Australasian Journal of Educational Technology
  • Campus Technology Magazine
  • Canadian Journal of Learning and Technology
  • Chief Learning Officer
  • CITE Journal: Contemporary Issues in Technology and Teacher Education
  • DEOSNEWS Archive
  • Distance Education Report
  • Distance Learning Magazine
  • Educational Technology & Society
  • Educause Quarterly Archives
  • Innovate: Journal of Online Education Archive

Information Available in the Literature

There is a lack of in-depth reporting in the current literature base: about 60% of study features were missing from the reports examined (Bernard et al., 2005). The most frequent problem was insufficient reporting of the characteristics of the comparison condition. Without knowing what a distance education condition is being compared with, it is difficult to draw any conclusion about the extent of the difference between the distance education and face-to-face conditions. This problem was persistent across reports, conference papers, journal articles, and dissertations.

The Quality of Methodologies

The nature of educational practice makes field experiments vulnerable to rival explanations of research hypotheses. Thus, field experiments are typically higher in external validity than in internal validity (Bernard et al., 2005). Moreover, missing information in some studies made many codable aspects of methodological quality unavailable. Taken together, these experimental and methodological inadequacies, along with the missing information, undermine the foundation of the distance education research literature.

Research studies in distance education often produce contradictory results because of their diverse interventions, settings, measurement instruments, and methods.  Such a broad range of research designs is used to examine the effectiveness of distance education because the field encompasses a multitude of delivery methods, instructional methods, and technologies.

Research Quality of “No-Difference” Studies

The most famous no-significant-difference compilation was completed by Russell (1999), who analyzed 355 articles spanning seven decades, from 1928 to 1998, to support the claim that there was no significant difference between distance education and face-to-face education.  However, according to Machtmes and Asher (2000), most of the studies on this list were not experimental studies but merely surveys distributed to small samples, with no mention of survey return rates or learner demographics (p. 31). Moreover, these studies included no systematic approaches. Bernard et al. (2005) also criticized Russell’s approach, pointing out the unequal quality and rigor of the included studies, unsampled differences in the population, and test-statistic issues arising from the different sample sizes of individual studies. Therefore, the validity and reliability of Russell’s findings are questionable. Ungerleider and Burns (2003) conducted a meta-analysis of the literature on networked and online learning; they found poor methodological quality and an overall effect size of zero for achievement and -0.509 for satisfaction.

Cavanaugh et al. (2004) conducted a meta-analysis focusing on Internet-based distance education programs for K-12 students. The study found that Internet-based education resulted in no significant difference in student achievement. However, Means et al. (2013) criticized this study for treating multiple outcomes drawn from the same study as though they were independent. In addition, Borenstein et al. (2009) noted that this approach weights studies unequally, counting those that report more outcomes more heavily than those that report fewer.

Research Quality of “Difference” Studies

Some meta-analyses (Cavanaugh, 2001; Machtmes & Asher, 2000; Shachar & Neumann, 2003; Bernard et al., 2005; Means et al., 2013; Sitzmann et al., 2006) found different learning outcomes between distance education and face-to-face delivery methods. Cavanaugh (2001) calculated the effect sizes of 19 studies that compared learners using interactive distance education technology with those learning in traditional classrooms and found a mean effect size of 0.147, which is minimal.  Machtmes and Asher (2000) similarly found that learning outcomes did differ between distance education and face-to-face education.  However, since distance education programs vary in their content, targeted learner population, instructor characteristics, and delivery method, the studies that showed positive effects for distance education take different research variables into consideration and have different methodological features.  For example, Cavanaugh (2001) found that learning content, as one of the instructional features, strongly influences the effect size. Because different studies attend differently to the technological and instructional features of delivery systems, their final effect sizes are not directly comparable, and the heterogeneous effect sizes claimed across studies are difficult to reconcile.  Even though mining the distance education literature for support for each claim is promising, the number of distance education studies conducted in the past 30 years is quite limited, and many have low statistical power.  In addition, the coding systems adopted in different studies differ from one another: some are categorical, while others are continuous.  Unless there is a standard for coding the variables, the effect sizes from different studies remain incomparable.

Shachar and Neumann (2003) reviewed 86 studies from 1990 to 2002 and found an effect size of 0.37 for student achievement, offering credible evidence of modest differences between distance education and classroom instruction. Allen et al. (2002) analyzed 25 empirical studies that compared distance education and classroom conditions on measures of student satisfaction. Their results favored classroom instruction over distance education and found no effect for the “channel of communication.” However, this meta-analysis has flaws: it included only one outcome measure, student satisfaction, which is among the least important indicators of effectiveness (Bernard et al., 2005). Its sample size and moderator variables reveal little about the question of distance education effectiveness.

Zhao et al. (2005) also conducted a meta-analysis, examining 98 effect sizes from 51 studies published from 1996 to 2002. The study focused on distance education courses delivered through different generations of technology and found an overall effect size near zero, although studies of blended approaches produced more positive effects than face-to-face instruction. In this study, the authors included a wide range of outcomes and averaged them to compute an overall effect size. According to Means et al. (2013), this is problematic because factors that are beneficial for one learning outcome may be detrimental to others, thus obscuring the results (p. 10). A small numeric illustration of this problem follows.
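
The following minimal illustration uses invented values to show how averaging across outcome types can mask two substantial, opposing effects behind a near-zero overall effect size.

    # Hypothetical effect sizes for two outcomes from the same study.
    achievement_es = 0.40    # the factor helps achievement...
    satisfaction_es = -0.36  # ...but hurts satisfaction
    overall = (achievement_es + satisfaction_es) / 2
    print(round(overall, 2))  # 0.02 -- near zero, though neither effect is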

Improving Methodological Quality in Distance Education Research

Increasing the number of high-quality studies will improve the quality of the evidence about distance education that can be derived from literature reviews.  According to analyses of the distance education literature, improving research practices will shape the future “practice, development, and implementation of policies regarding distance education” (Bernard et al., 2004).  Because the implementation of distance education is steadily growing, efforts are being made to improve the quality of its research practices.

According to Bernard et al. (2004), there are three primary sources of standards that will guide future distance education research:
●    The What Works Clearinghouse (WWC) has established a set of criteria for determining the quality of primary studies in a quantitative synthesis of the literature.  It includes eight composite questions addressing the validity of a study: construct validity, internal validity, external validity, and statistical validity (Cook & Campbell, 1979).
●    The Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) takes a more flexible approach, developing methodologies that combine qualitative and quantitative research evidence so that more perspectives can be included.
●    The Campbell Collaboration (C2) develops protocols for experimental and quasi-experimental reviews, generates standards for literature reviews, and makes them available to public audiences.

These three sources of standards are concerned not only with the quality of research but also with how studies are reported.  A complete, publicly available record makes a high-quality review possible; a poor public record yields only a mediocre contribution to the review literature.  Together, these standards will guide and help ensure the credibility of research in distance education, supplying practitioners and policy makers with evidence-based arguments.

New Directions for Distance Education Research

As distance education has become popular around the globe, new research directions are needed to assess its quality, and researchers have proposed several (Bernard et al., 2004). The famous debate between Clark (1983) and Kozma (1994) has shaped comparative research on synchronous and asynchronous distance education applications. Clark (1983) claimed that media have no effect on learning; rather, it is the instructional method and the nature of student involvement that make the difference. In contrast, Kozma (1994) argued that the highly interactive functions of media do make a difference in learning and fundamentally change the relationship between learners and teachers. Another line of research was generated by Cobb (1997), who proposed that the “efficiency of a medium or symbol system can be judged by how much of the learner’s cognitive work it performs” (Bernard et al., 2004, p. 191). There is little research evidence bearing on Cobb’s view, making it a theoretical perspective worth investigating. Also, Schwab’s four commonplaces (teacher, student, what is taught, and the milieux of teaching and learning) provide a useful framework for conducting future research in distance education (Zhao et al., 2005).

Below are some possibilities for future research in distance education (Bernard et al., 2005):
  • A theoretical framework is needed for the design and analysis of distance education (Smith & Dillon, 1999). A good starting point might be to adapt the learner-centered principles to explore the cognitive and motivational factors present in distance education (American Psychological Association, 1997; Lambert & McCombs, 1998).
  • Paying more attention to students’ motivational dispositions in distance education, such as mental effort, persistence, and task choice. Although interest and satisfaction are also related to students’ learning habits, they may not indicate success, since students may be satisfied with their choice simply because it requires less effort to learn.
  • Aspects of pedagogical effectiveness and efficiency are worth researching, such as faculty professional development in online teaching, faculty teaching time, student access and learning time, and cost effectiveness.
  • Different levels of learning in distance education could be studied. For example, research could compare the learning outcomes of different instructional strategies, such as problem-based learning and collaborative online learning.
  • Students’ existing characteristics, such as prior knowledge of distance education, skills, behaviors, and attitudes, need to be studied to determine their readiness for distance and online education (Bernard et al., 2004).
  • Teachers’ tutoring skills and readiness should be examined from different perspectives, such as their skill in using media and technology in distance education, their adaptation of traditional classroom practices to distance education, and their nurturing of communities of learners (Schoenfeld-Tacher & Persichitte, 2000).
  • Examining the extent to which distance education can involve home learners, rural and remote learners, and learners with various disabilities.
  • Implementing more rigorous and complete research methodologies.

Chapter Summary

In this chapter, we discussed existing research in the field of distance education. Drawing on a variety of primary studies and meta-analyses, we examined common arguments about distance education effectiveness; the problems and criticisms surrounding traditional and current distance education research approaches and methodologies; gaps within distance education research; current and future implications of distance education; the necessity of meta-analysis studies and their findings; and future directions for research on distance education.

In the upcoming chapter, we will discuss the development and characteristics of the delivery methods involved in distance education. It is crucial to choose or combine appropriate technologies, based on their pros and cons, to facilitate instruction and learning.

Research Methods Practice Assessment

The end-of-chapter practice assessment retrieves 10 items from a database and scores the quiz, reporting the correctness of each response to the learner. You should score above 80% on the quiz; otherwise, consider re-reading some of the material in this chapter. The quiz is not time-limited, but it does record your time to complete it. Scores are stored on the website, and a learner can optionally submit them to the leaderboard. You can take the quiz as many times as you want. A hypothetical sketch of this kind of quiz flow appears below.
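
For readers curious about the mechanics, here is a purely hypothetical sketch of the quiz flow just described; it is not the actual engine behind this book’s website, and the item bank, function name, and pass mark shown are illustrative assumptions.

    # A hypothetical quiz runner: draw items, score correctness, time the
    # attempt, and report whether the 80% target for this chapter was met.
    # This is an illustrative sketch, not the website's implementation.

    import random
    import time

    def run_quiz(item_bank, num_items=10, pass_mark=0.80):
        items = random.sample(item_bank, num_items)  # bank must hold >= num_items
        start = time.time()
        correct = 0
        for prompt, answer in items:
            response = input(f"{prompt} ")
            correct += int(response.strip().lower() == answer.lower())
        score = correct / num_items
        return {"score": score,
                "seconds": round(time.time() - start, 1),
                "passed": score >= pass_mark}

    # Example with a tiny hypothetical bank, shortened to two items:
    # bank = [("Does meta-analysis synthesize many studies (y/n)?", "y"),
    #         ("Is 0.15 a large effect size (y/n)?", "n")]
    # print(run_quiz(bank, num_items=2))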

Discussions

  • If you were to conduct research on the effectiveness of an online tutoring video for promoting learners’ understanding of the concepts, what procedures would you take to collect and analyze your data?
  • Discuss the findings from current research comparing the effectiveness of distance education and face-to-face education.  Explain your rationale regarding their advantages and disadvantages.
  • What are some of the benefits of using a traditional quantitative approach to distance education research?  How might a qualitative approach affect the research?  How might a mixed methods approach be used to conduct research on distance education?
  • Discuss the idea of the digital divide vs. digital inequality.  Which do you think is the more appropriate term for where we are today?
  • Define the term meta-analysis and discuss the benefits of using meta-analyses for distance education research.  How can meta-analysis be applied to generate significant results and guide future research?  Give an example of meta-analysis research and explain its significance.

Assignment Exercises

  1. Think about the benefits of using meta-analysis for distance education research.  Create a concept map to illustrate how meta-analysis can be applied to generate significant results and guide future research.
  2. What’s your opinion on the effectiveness of distance education?  In a brief 3-5 minute screencast or video presentation, please demonstrate and support your opinions with concrete examples. Use proper APA citations.
  3. Explain in a brief one-page paper why it is necessary for distance education research to have a clear and consistent theoretical framework.  How might the epistemology of the researcher influence his or her research questions?  Please provide clear examples.
  4. In a brief one-page paper, discuss some of the benefits of using a traditional quantitative approach to distance education research.  How might a qualitative approach affect the research?  How might a mixed methods approach be used to conduct research on distance education?  Please be sure to use concrete examples to support your ideas.
  5. Create a graphic organizer comparing and contrasting traditional classroom instruction, distance education, and hybrid/blended-learning.

References

American Psychological Association (Division 15, Committee on Learner-Centered Teaching Education for the 21st Century).     (1995, 1997). Learner-centered psychological principles: Guidelines for teaching educational psychology in teacher education programs. Washington, DC: Author.

Anglin, G. J., & Morrison, G. R. (2000). An Analysis of Distance Education Research: Implications for the Instructional Technologist. Quarterly Review of Distance Education, 1(3), 189-94.

Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with
distance education to traditional classrooms in higher education: A meta-analysis. The American Journal of Distance Education, 16(2), 83-97.

American Association of University Women Educational Foundation, DYG, Inc., & Lake Snell Perry and Associates. (1999). Gaining a foothold: Women’s transitions through work and college. Washington, DC: American Association of University Women.

Anderson, D. M., & Haddad, C. J. (2005). Gender, voice, and learning in online course
environments. Journal of Asynchronous Learning Networks, 9(1), 3-14.

Barr, R. B., & Tagg, J. (1995). From teaching to learning—A new paradigm for undergraduate
education. Change: The magazine of higher learning, 27(6), 12-26.

Barrett, E., & Lally, V. (1999). Gender differences in an online learning environment. Journal of Computer Assisted Learning, 15(1), 48-60.

Berge, Z. L., & Mrozowski, S. (2001). Review of research in distance education, 1990 to 1999.
American Journal of Distance Education, 15(3), 5-19. doi:10.1080/08923640109527090

Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., . . . Huang, B.
(2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379-439. doi:10.3102/00346543074003379

Bernard, R. M., Brauer, A., Abrami, P. C., & Surkes, M. (2004). The development of a questionnaire for predicting online learning achievement. Distance Education, 25(1), 31-47.

Bonwell, C. C., & Eison, J. A. (1991). Active Learning: Creating Excitement in the Classroom.
1991 ASHE-ERIC Higher Education Reports. ERIC Clearinghouse on Higher Education, The George Washington University, One Dupont Circle, Suite 630, Washington, DC 20036-1183.

Boogaard, D. (2006, December 19). The global digital divide [in German: the digital divide; undated map]. Retrieved November 26, 2016, from https://commons.wikimedia.org/wiki/File:Global_Digital_Divide1.png

Bromley, D. B. (1990). Academic contributions to psychological counselling: I. A philosophy of
science for the study of individual cases. Counselling Psychology Quarterly, 3(3), 299-307.

Bullen, M. (1990). Learner responses to television in distance education: The need for a
qualitative approach to research. In B. Clough (Ed.), Proceedings of the ninth annual conference of the Canadian Association for the Study of Adult Education (pp. 48-53). Victoria, BC: University of Victoria

Bullen, M. (1999). What’s the difference: A review of contemporary research on the effectiveness of distance learning in higher education [Review of http://www.ihep.com/difference.pdf]. Journal of Distance Education / Revue de l’enseignement à distance, 14(1), 102-114. Retrieved October 9, 2016, from http://www.ijede.ca/index.php/jde/article/viewFile/433/372

Burge, E. (1998). Gender in distance education. In C. Campbell Gibson (Ed.), Distance learners in higher education: Institutional responses for quality outcomes (pp. 25-45). Madison, WI: Atwood Publishing.

Burgstahler, S. (2002). Distance learning: Universal design, universal access. Educational Technology Review: International Forum on Educational Technology Issues and Applications, 10(1). Retrieved December 7, 2004, from http://www.aace.org/pubs/etr/issue2/burgstahler.cfm

Cavanaugh, C. S. (2001). The effectiveness of interactive distance education technologies in K-12
learning: A meta-analysis. International Journal of Educational Telecommunications, 7(1), 73-88.

Cercone, K. (2008). Characteristics of adult learners with implications for online learning design.
AACE journal, 16(2), 137-159.

Clark, R. E. (1983). Reconsidering research on learning from media. Review of educational research, 53(4), 445-459.

Clark, R. E. (1994). Media will never influence learning. Educational technology research and development, 42(2), 21-29.

Clark, R. E., & Salomon, G. (1986). Media in teaching. Handbook of research on teaching, 3, 464-478.

Cook, T. D., Campbell, D. T., & Day, A. (1979). Quasi-experimentation: Design & analysis issues for field settings (Vol. 351). Boston: Houghton Mifflin.

Cross, K. P. (1998). Opening Windows on Learning. The Cross Papers Number 2.

Cross, K. P. (1999). Learning Is about Making Connections. The Cross Papers Number 3.

Diaz, D. P. (2000). Carving a new path for distance education research. The Technology Source.
Retrieved October 10, 2016, from http://ts.mivu.org

DiMaggio, P., & Hargittai, E. (2001). From the ‘digital divide’ to ‘digital inequality’: Studying
Internet use as penetration increases. Princeton: Center for Arts and Cultural Policy
Studies, Woodrow Wilson School, Princeton University, 4(1), 4-2.

Dixson, M. D. (2012). Creating effective student engagement in online courses: What do
students find engaging?. Journal of the Scholarship of Teaching and Learning, 10(2), 1-13.

Fraenkel, J. R., & Wallen, N. E. (2014). How to design and evaluate research in education (9th
ed.). New York: McGraw-Hill.

Frantzen, D. (2014). Is Technology a One-Size-Fits-All Solution to Improving Student
Performance? A Comparison of Online, Hybrid and Face-to-Face Courses. Journal of
Public Affairs Education, 565-578.

Furst-Bowe, J. (2002). Identifying the needs of adult women in distance learning programs.

Gagne, R. M., Wager, W. W., Golas, K. C., Keller, J. M., & Russell, J. D. (2005). Principles of instructional design.
Garland, D., & Martin, B. N. (2005). Do gender and learning style play a role in how online
courses should be designed. Journal of Interactive Online Learning, 4(2), 67-81.

Garrison, D. R., & Shale, D. (1994). Methodological issues: Philosophical differences and complementary methodologies. In D. R. Garrison (Ed.), Research perspectives in adult education (pp. 17-37). Florida: Krieger.

Gorback, K. (1994). Adult education. Thrusts for Educational Leadership, 23(5), 18-22.

Gumport, P. J., & Chun, M. (2002). Collaboration in distance education: From local to international perspective. In L. Foster, B. L. Bower, & L. W. Watson (Eds.), ASHE reader: Distance education: Teaching and learning in higher education (pp. 602-612). Needham Heights, MA: Simon & Schuster.

Gunawardena, C. N., & McIsaac, M. S. (2004). Distance education. Handbook of research on educational communications and technology, 2, 355-395.

Hannafin, M. J., & Hill, J. R. (2007). Epistemology and the design of learning environments. In R.
A. Reiser & J. V. Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp.  53-71). Upper Saddle River, NJ: Pearson Education.

Hargittai, E. (2001). Second-level digital divide: mapping differences in people’s online skills.
arXiv preprint cs/0109068.

Harvey, C. (1995). Increasing Course Completion Rates. Adults Learning (England), 6(6),
178-79.

Hew, K. F. (2008). Use of audio podcast in K-12 and higher education: A review of research topics
and methodologies. Educational Technology Research and Development, 57(3), 333-357. doi:10.1007/s11423-008-9108-3

İşman, A., Dabaj, F., Altinay, Z., & Altınay, F. (2004). Roles of the students and teachers in distance education. Retrieved October 10, 2016, from http://itdl.org/Journal/May_04/article05.htm

Joy, E. H., & Garcia, F. E. (2000). Measuring learning effectiveness: A new look at no-significant-difference findings. Journal of Asynchronous Learning Networks, 4(1), 33-39.

Kozma, R. (1991). Learning with Media. Review of Educational Research 61(2), 179-211.

Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.

Lambert, N. M., & McCombs, B. L. (1998). How students learn: Reforming schools through learner-centered education. American Psychological Association.

Leading change in public higher education: A provost report series on trends and issues facing higher education. Exploring the pros and cons of online, hybrid, and face-to-face class formats. (2013). Retrieved from http://www.washington.edu/provost/files/2012/11/edtrends_Pros-Cons-ClassFormats.pdf

Lee, Y., Driscoll, M. P., & Nelson, D. W. (2004). The past, present, and future of research in
distance education: Results of a content analysis. American Journal of Distance Education, 18(4), 225-241.

Litchfield, R. E., Oakland, M. J., & Anderson, J. A. (2002). Relationship between intern characteristics, computer attitudes, and use of online instruction in a dietetic training program. American Journal of Distance Education, 16(1), 23-36.

Lockee, B. B., Burton, J. K., & Cross, L. H. (1999). No comparison: Distance education finds a
new use for ‘no significant difference’. Educational Technology Research and Development, 47(3), 33-42.

Lowry, C. M. (1989). Supporting and Facilitating Self-Directed Learning. ERIC Digest No. 93.

Machtmes, K., & Asher, J. W. (2000). A meta‐analysis of the effectiveness of telecourses in distance education. American Journal of Distance Education, 14(1), 27-46.

Major, C. H., & Palmer, B. (2001). Assessing the effectiveness of problem-based learning in
higher education: Lessons from the literature. Academic exchange quarterly, 5(1), 4-9.

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service.

Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended
learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1-47.

Merisotis, J. P., & Phipps, R. A. (1999). What’s the difference?: Outcomes of distance vs.
traditional classroom-based learning. Change: The Magazine of Higher Learning, 31(3), 12-17.

Mezirow, J. (1997). Transformative learning. New Directions for Adult and Continuing
Education, 74, 5-12.

Minnis, J. R. (1985). Ethnography, case study, grounded theory, and distance education research.
Distance Education, 6(2), 189-198.

Moore, M. G. (2003). Editorial. American Journal of Distance Education, 17(3), 141-143.
doi:10.1207/s15389286ajde1703_1

Moore, M. G., & Thompson, M. M. (1990). The Effects of Distance Learning: A Summary of  Literature. Research Monograph Number 2.

Morrison, G. R., Ross, S. M., Kemp, J. E., & Kalman, H. (2010). Designing effective instruction. John Wiley & Sons.

Naidu, S. (2009). Researching distance education. Encyclopedia of Distance Learning,
Second Edition, 1786-1793. doi:10.4018/978-1-60566-198-8.ch263

Neuhauser, C. (2010). Learning style and effectiveness of online and face-to-face instruction.
The American Journal of Distance Education.

O’Lawrence, H. (2016). Managing workforce development in the 21st century: Global reflections
and forward thinking in the new millennium. Santa Rosa, CA: Informing Science Press.

Oblinger, D. G. (2006). The myth about no significant difference: Using technology produces no
significant difference. Educause Review, 41(6), 14-15.

Peikoff, L., (1993). Objectivism: The philosophy of Ayn Rand. New York: Penguin Books.

Pérez-Peña, R. (2012, July 17). Top universities test the online appeal of free. The New York Times. Retrieved from http://www.nytimes.com/2012/07/18/education/top-universities-test-the-online-appeal-of-free.html

Perraton, H. (2000). Rethinking the research agenda. The International Review of Research in Open and Distributed Learning, 1(1).

Phipps, R. A., & Merisotis, J. P. (1999). What’s the difference?: A review of contemporary
research on the effectiveness of distance learning in higher education. Washington, DC: Institute for Higher Education Policy.

Porter, W. W., Graham, C. R., Spring, K. A., & Welch, K. R. (2014). Blended learning in higher education: Institutional adoption and implementation. Computers & Education, 75, 185-195.

Ragan, L. C. (2010). Principles of effective online teaching: Best practices in distance education.
Faculty Focus.

Randolph, J. (2007). What’s the Difference, Still? A Follow up Methodological Review of the
Distance Education Research. Informatics in Education, 6(1), 179–188.

Richey, R., Klein, J. D., & Tracey, M. W. (2011). The instructional design knowledge base:
Theory, research, and practice. New York: Routledge.

Rovai, A. P., Wighting, M. J., Baker, J. D., & Grooms, L. D. (2009). Development of an
instrument to measure perceived cognitive, affective, and psychomotor learning in traditional and virtual classroom higher education settings. The Internet and Higher Education, 12(1), 7-13.

Russell, T. L. (1997). The “no significant difference” phenomenon as reported in 248 research reports, summaries, and papers. North Carolina State University.

Russell, T. L. (1999). The no significant difference phenomenon: A comparative research annotated bibliography on technology for distance education: As reported in 355 research reports, summaries and papers. North Carolina State University.

Saba, F. (2000). Research in distance education: A status report. The International Review of Research in Open and Distributed Learning, 1(1).

Schoenfeld-Tacher, R., & Persichitte, K. A. (2000). Differential skills and competencies required of faculty teaching distance education courses. International Journal of Educational Technology, 2(1), 1-16.

Shachar, M. (2008). Meta-Analysis: The preferred method of choice for the assessment of distance learning quality factors. The International Review of Research in Open and Distributed Learning, 9(3).

Shea, P., & Bidjerano, T. (2016). A national study of differences between distance and non-distance community college students in time to first associate degree attainment, transfer, and dropout. Online Learning, 20(3).

Shimp, U. R. (2008). Evaluation of the distance education literature: A content analysis using the
Institute for Higher Education Policy benchmarks and selected bibliometric methods (Doctoral dissertation).

Sitzmann, T., Kraiger, K., Stewart, D., & Wisher, R. (2006). The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Personnel Psychology, 59(3), 623-664. doi:10.1111/j.1744-6570.2006.00049.x

Smith, P. L., & Dillon, C. L. (1999). Lead article: Comparing distance learning and classroom learning: Conceptual considerations. American Journal of Distance Education, 13(2), 6-23.

Stone, M. T., & Perumean-Chaney, S. (2011). The benefits of online teaching for traditional
classroom pedagogy: A case study for improving face-to-face instruction. Journal of Online Learning and Teaching, 7(3), 393.

Taylor, J. C. (2001). Fifth generation distance education. Instructional Science and Technology,
4(1), 1-14.

Van Dijk, J. A. (2006). Digital divide research, achievements and shortcomings. Poetics, 34(4),
221-235.

What are the disadvantages of correlation research? (n.d.). Retrieved October 09, 2016, from
https://www.reference.com/world-view/disadvantages-correlation-research-1531107f6262ee55

Wilson, B. G. (1996). Constructivist learning environments: Case studies in instructional design.
Educational Technology.

Yin, R. K. (1994). Case study research: Design and methods (2nd ed.). Newbury Park, CA: Sage
Publications.

Zawacki-Richter, O., Baecker, E., & Vogt, S. (2009). Review of distance education research (2000 to 2008): Analysis of research areas, methods, and authorship patterns. The International Review of Research in Open and Distributed Learning, 10(6), 21-50. doi:10.19173/irrodl.v10i6.741

Zhao, Y., Lei, J., Yan, B., Lai, C., & Tan, H. S. (2005). What makes the difference? A practical
analysis of research on the effectiveness of distance education. Teachers College Record,
107(8), 1-83.