(2017). What do instructional designers in higher education really do? International Journal on E-Learning, 16(4), 371-393.
Abstract: What do instructional designers in higher education really do? With the rise of online courses and programs in higher education, this question is especially important. We interviewed eight instructional designers from across the United States using a semi-structured interview protocol. The results were analyzed using the constant comparative qualitative procedure. Results demonstrate that instructional designers primarily serve faculty in their roles, but also perceive students as their final audience. Faculty are often both the clients and the subject-matter experts in this context. Instructional designers in higher education use a wide variety of tools for a wide variety of purposes, ranging from course design to supporting faculty in delivering online courses to facilitating meaningful workshops for faculty. Further, instructional designers in higher education employ project management techniques to manage the many projects they may be assigned. Our paper concludes with a discussion of our findings and their connection to instructional design practice in higher education contexts.
(2017). A Meta-Analysis of Pair-Programming in Computer Programming Courses: Implications for Educational Practice. ACM Transactions on Computing Education, 17(4), 16:1-16:13.
Abstract: Several experiments on the effects of pair programming versus solo programming in the context of education have been reported in the research literature. We present a meta-analysis of these studies that accounted for 18 manuscripts with 28 independent effect sizes in the domains of programming assignments, exams, passing rates, and affective measures. In total, our sample accounts for N = 3,308 students either using pair programming as a treatment variable or using traditional solo programming in the context of a computing course. Our findings suggest positive results in favor of pair programming in three of the four domains, with the exception of affective measures. We provide a comprehensive review of our results and discuss our findings.
Hohlfeld, T. N., Ritzhaupt, A. D., Dawson, K., & Wilson, M. L. (2017). An examination of seven years of technology integration in Florida schools: Through the lens of the Levels of Digital Divide in Schools. Computers & Education, 113, 135–161. Retrieved from http://www.sciencedirect.com/science/article/pii/S0360131517301227
Abstract: The purpose of this longitudinal research is to document the Information and Communication Technology (ICT) integration patterns in the state of Florida in relation to the Socio-Economic Status (SES) and school type (Elementary, Middle, and High Schools). This research is characterized by the Levels of Digital Divide in Schools model presented by Hohlfeld, Ritzhaupt, Barron, and Kemker (2008). We use seven years of secondary data collected by the Florida Department of Education: Technology Resource Inventory (TRI), and the percentage of students on Free-and-Reduced Lunch as a proxy for SES. The current study uses descriptive statistics, internal consistency reliability, exploratory factor analysis, and longitudinal multi-level models to examine the trends in ICT integration in the state of Florida by SES (High and Low) in each school type (Elementary, Middle, and High) over the seven-year period. Our results suggest that Florida has improved on several indicators related to the digital divide; however, some important differences still exist. For instance, Low-SES students generally use software more for computer-directed activities such as drill and practice or remedial work, while their High-SES counterparts are using software more for student-controlled activities such as creating with or communicating through technology. We discuss our findings in relation to the three-level model presented by Hohlfeld et al. (2008) and make recommendations to relevant stakeholders within the community.
Ritzhaupt, A. D., Huggins-Manley, A. C., Dawson, K., Ağaçlı-Doğan, N., & Doğan, S. (2017). Validity and Appropriate Uses of the Revised Technology Uses and Perceptions Survey (TUPS). Journal of Research on Technology in Education, 49(1-2), 73-87. Retrieved from http://dx.doi.org/10.1080/15391523.2017.1289132
Abstract: The purpose of this article is to explore validity evidence and appropriate uses of the revised Technology Uses and Perceptions Survey (TUPS) designed to measure in-service teacher perspectives about technology integration in K–12 schools and classrooms. The revised TUPS measures 10 domains, including Access and Support; Preparation of Technology Use; Perceptions of Professional Development; Perceptions of Technology Use; Confidence and Comfort Using Technology; Technology Integration; Teacher Use of Technology; Student Use of Technology; Perceived Technology Skills; and Technology Usefulness. We first provide a review of the literature supporting the design of the revised TUPS. We collected data from N = 1,376 teachers from one medium-sized school district in the state of Florida and conducted a variety of psychometric analyses. We performed internal structure analysis, correlation analysis, and factor analysis with these data. The results demonstrate that data collected from the TUPS are best used as descriptive, granular information about reported behaviors and perceptions related to technology, rather than treated as a series of 10 scales. These findings have implications for the appropriate uses of the TUPS. (Keywords: technology integration, K-12 teachers, survey, validity, reliability)
Kleinheksel, A. J., & Ritzhaupt, A. D. (2017). Measuring the adoption and integration of virtual patient simulations in nursing education: An exploratory factor analysis. Computers & Education, 108, 11–29.
Abstract: This study sought to develop a valid and reliable instrument to identify the characteristics of computer-based, interactive, and asynchronous virtual patient simulations that nurse educators identify as important for adoption, and the subsequent curricular integration strategies they employed. Once these factors were identified, this study also sought to explore any relationships between the influential features for adoption and the ways in which the adopted virtual patients are integrated. Data were collected with the Virtual Patient Adoption and Integration in Nursing (VPAIN) survey, which was completed by 178 nurse educators who were currently using, or had previously used, virtual patient simulations. Both exploratory factor analysis and correlation analysis were conducted. Through exploratory factor analysis, 55.6% of the variance in the VPAIN adoption subscale data was accounted for by the nine adoption factors identified: Trustworthiness, Worldbuilding, Pedagogy, Differentiation, Encouragement, Clarity, Evaluation, Administrative Pressure, and Visibility. The factor analysis also identified five factors within the integration subscale, which accounted for 53.3% of the variance: Hour Replacement, Intensive Integration, Leveling, Preparation, and Benchmarking. A correlation analysis was then conducted to identify relationships between the adoption and integration factors.