Category Archives: Professional

Digital Divide Article Published

Hohlfeld, T. N., Ritzhaupt, A. D., Dawson, K., & Wilson, M. L. (2017). An examination of seven years of technology integration in Florida schools: Through the lens of the Levels of Digital Divide in Schools. Computers & Education, 113, 135–161. Retrieved from http://www.sciencedirect.com/science/article/pii/S0360131517301227

Abstract: The purpose of this longitudinal research is to document the Information and Communication Technology (ICT) integration patterns in the state of Florida in relation to the Socio-Economic Status (SES) and school type (Elementary, Middle, and High Schools). This research is characterized by the Levels of Digital Divide in Schools model presented by Hohlfeld, Ritzhaupt, Barron, and Kemker (2008). We use seven years of secondary data collected by the Florida Department of Education: the Technology Resource Inventory (TRI) and the percentage of students on Free-and-Reduced Lunch as a proxy for SES. The current study uses descriptive statistics, internal consistency reliability, exploratory factor analysis, and longitudinal multi-level models to examine the trends in ICT integration in the state of Florida by SES (High and Low) in each school type (Elementary, Middle, and High) over the seven-year period. Our results suggest that Florida has improved on several indicators related to the digital divide; however, some important differences still exist. For instance, Low-SES students generally use software more for computer-directed activities such as drill and practice or remedial work, while their High-SES counterparts use software more for student-controlled activities such as creating with or communicating through technology. We discuss our findings in relation to the three-level model presented by Hohlfeld et al. (2008) and make recommendations to relevant stakeholders within the community.
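For readers unfamiliar with the longitudinal multi-level models mentioned above, the sketch below shows the general shape of such an analysis in Python with statsmodels: a random-intercept growth model of an ICT-use outcome over time, moderated by an SES indicator. The data, variable names, and effect sizes here are simulated for illustration only; they are not the study's Florida TRI dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: 40 schools, each observed over 7 years (hypothetical data).
rng = np.random.default_rng(0)
n_schools, n_years = 40, 7
school = np.repeat(np.arange(n_schools), n_years)
year = np.tile(np.arange(n_years), n_schools)
low_ses = np.repeat(rng.integers(0, 2, n_schools), n_years)  # school-level SES flag
school_intercepts = rng.normal(0, 0.5, n_schools)[school]    # random intercepts
ict_use = (2.0 + 0.3 * year - 0.4 * low_ses
           + school_intercepts + rng.normal(0, 0.3, school.size))
df = pd.DataFrame({"school": school, "year": year,
                   "low_ses": low_ses, "ict_use": ict_use})

# Random-intercept growth model: ICT use over time, moderated by SES group.
model = smf.mixedlm("ict_use ~ year * low_ses", df, groups=df["school"])
result = model.fit()
print(result.params["year"])  # estimated average yearly change in ICT use
```

The `year * low_ses` interaction term is what lets the model test whether the two SES groups follow different trajectories over the seven years, which is the substance of a digital-divide trend comparison.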

TUPS Article Published

Ritzhaupt, A. D., Huggins-Manley, A. C., Dawson, K., Ağaçlı-Doğan, N., & Doğan, S. (2017). Validity and Appropriate Uses of the Revised Technology Uses and Perceptions Survey (TUPS). Journal of Research on Technology in Education, 49(1-2), 73-87. Retrieved from http://dx.doi.org/10.1080/15391523.2017.1289132

Abstract: The purpose of this article is to explore validity evidence and appropriate uses of the revised Technology Uses and Perceptions Survey (TUPS) designed to measure in-service teacher perspectives about technology integration in K–12 schools and classrooms. The revised TUPS measures 10 domains, including Access and Support; Preparation of Technology Use; Perceptions of Professional Development; Perceptions of Technology Use; Confidence and Comfort Using Technology; Technology Integration; Teacher Use of Technology; Student Use of Technology; Perceived Technology Skills; and Technology Usefulness. We first provide a review of the literature supporting the design of the revised TUPS. We collected data from N = 1,376 teachers from one medium-sized school district in the state of Florida and conducted a variety of psychometric analyses. We performed internal structure analysis, correlation analysis, and factor analysis with these data. The results demonstrate that data collected from the TUPS are best used as descriptive, granular information about reported behaviors and perceptions related to technology, rather than treated as a series of 10 scales. These findings have implications for the appropriate uses of the TUPS. (Keywords: technology integration, K-12 teachers, survey, validity, reliability)

Nursing Simulations Article Published

Kleinheksel, A. J., & Ritzhaupt, A. D. (2017). Measuring the adoption and integration of virtual patient simulations in nursing education: An exploratory factor analysis. Computers & Education, 108, 11–29.

Abstract: This study sought to develop a valid and reliable instrument to identify the characteristics of computer-based, interactive, and asynchronous virtual patient simulations that nurse educators identify as important for adoption, and the subsequent curricular integration strategies they employed. Once these factors were identified, this study also sought to explore any relationships between the influential features for adoption and the ways in which the adopted virtual patients are integrated. Data were collected with the Virtual Patient Adoption and Integration in Nursing (VPAIN) survey, which was completed by 178 nurse educators who were currently using, or had previously used, virtual patient simulations. Both exploratory factor analysis and correlation analysis were conducted. Through exploratory factor analysis, 55.6% of the variance in the VPAIN adoption subscale data was accounted for by the nine adoption factors identified: Trustworthiness, Worldbuilding, Pedagogy, Differentiation, Encouragement, Clarity, Evaluation, Administrative Pressure, and Visibility. The factor analysis also identified five factors within the integration subscale, which accounted for 53.3% of the variance: Hour Replacement, Intensive Integration, Leveling, Preparation, and Benchmarking. A correlation analysis was then conducted to identify relationships between the adoption and integration factors.
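The "percentage of variance accounted for" figures reported above come from the eigenvalues of the item correlation matrix: the retained factors' eigenvalues divided by the total. A minimal NumPy sketch of that arithmetic, using simulated loadings rather than the published VPAIN data:

```python
import numpy as np

# Hypothetical illustration: build a correlation matrix from simulated factor
# loadings, then compute the proportion of variance the retained factors explain.
rng = np.random.default_rng(1)
n_items, n_factors = 12, 3
loadings = rng.normal(0, 0.8, (n_items, n_factors))
uniqueness = rng.uniform(0.2, 0.5, n_items)
cov = loadings @ loadings.T + np.diag(uniqueness)
d = np.sqrt(np.diag(cov))
corr = cov / np.outer(d, d)  # rescale to a proper correlation matrix

# Eigenvalues of a correlation matrix sum to the number of items, so the
# ratio below is the familiar "% of variance explained" from an EFA report.
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
explained = eigvals[:n_factors].sum() / eigvals.sum()
print(f"{explained:.1%} of total variance explained by {n_factors} factors")
```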

Games and Simulations Article Published

Ritzhaupt, A. D., Poling, N., Frey, C., Kang, Y., & Johnson, M. (2016). A phenomenological study of games, simulations, and virtual environments courses: What are we teaching and how? International Journal of Gaming and Computer-Mediated Simulations, 8(3), 59–73.

Abstract: Educational technology programs from across the United States are offering graduate courses in games, simulations, and virtual environments (GSVE) to their students. However, these courses, until now, have not been systematically studied. This research uses a hermeneutical phenomenological approach to answer the research question: “How do instructors describe their experience teaching GSVE courses?” Five professors of educational technology who have taught GSVE courses were interviewed using a semi-structured protocol based on the TPACK (Technological Pedagogical Content Knowledge) framework. These data were analyzed both analytically and thematically. The results of the study showed that a wide variety of topics, tools, and pedagogies are used within GSVE courses. Five themes emerged from the results: Focus on Application and Theory, Experiential Learning and Constructivism, Instructor’s Prior Experience with Games, Heterogeneous Student Populations, and Range of Technology Tools. These themes, as well as the courses themselves, are highlighted within the paper, and a discussion is provided.

TPACK Article Published

Ritzhaupt, A. D., Huggins, A. C., Madley, S., Ruggles, K., & Wilson, M. (2016). Validation of the Survey of Preservice Teachers’ Knowledge of Teaching and Technology: A multi-institutional sample. Journal of Digital Learning in Teacher Education, 32(1), 26–37.

Abstract: The TPACK (technological pedagogical content knowledge) framework (Mishra & Koehler, 2006) has gained tremendous momentum within the educational technology community. Specifically, much discourse has focused on how to measure this multidimensional construct to further define the contours of the framework and potentially make some meaningful predictions. Some have proposed observation scales, while others have proposed self-report measures to gauge the phenomenon. The Survey of Preservice Teachers’ Knowledge of Teaching and Technology is one popular instrument designed to measure TPACK (Schmidt et al., 2009), specifically among preservice teachers in teacher education programs. This study extends the measurement framework by providing a confirmatory factor analysis of the theoretical model proposed by Schmidt et al. (2009) on a sample of 227 preservice teachers from four public institutions of higher education in the southeastern United States. The data did not fit the theoretical 10-factor model implied by Schmidt et al. (2009); thus, an exploratory factor analysis was conducted to determine the optimal structure of the measurement tool for these data. This resulted in a nine-factor model, and there were measurement issues for several of the constructs. Additionally, the article provides evidence of external validity by correlating the instrument scores with other known technology constructs.
