A comparison of model-data fit for parametric and nonparametric item response theory models using ordinal-level ratings
This study compared the model-data fit of a parametric item response theory (PIRT) model with that of a nonparametric item response theory (NIRT) model to determine the best-fitting model for ordinal-level alternate assessment ratings. The PIRT Generalized Graded Unfolding Model (GGUM) was compared to the NIRT Mokken model. Chi-square statistics were examined to assess the fit of the GGUM; for the Mokken model, Loevinger's H coefficients and the numbers of monotone homogeneity (MH) and double monotonicity (DM) violations were examined. Participants were 4,449 students ages 7 to 21 with autism spectrum disorder or a mild, moderate, or severe intellectual disability. These students participated in Indiana's alternate assessment, the Indiana Assessment System of Educational Proficiencies (IASEP), in 2000, 2001, or 2002. The IASEP consists of five 20-item subscales in the domains of Information Acquisition and Use (Language Arts and Math), Personal Adjustment, Social Adjustment, Recreation and Leisure, and Vocational Experience; teachers rate students on each item using a universal rubric ranging from 0 to 4. The GGUM showed better overall model-data fit, with most items fitting well across all disability groups. The Mokken model's stricter assumption of double monotonicity did not hold, but its required property of monotone homogeneity held across most items and disability groups, making the model useful for selected applications. These findings may help researchers and practitioners choose an IRT model for alternate assessment ratings.
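The Loevinger's H coefficient mentioned above can be illustrated with a minimal sketch for the dichotomous (0/1) case, where H for an item pair equals one minus the ratio of observed Guttman errors to the count expected under independence. Note this is a simplified assumption-laden illustration: the IASEP items are polytomous (rated 0 to 4), for which Mokken scale analysis uses a covariance-based generalization, and the function name here is purely illustrative.

```python
import numpy as np

def loevinger_h_pair(x, y):
    """Loevinger's H for a dichotomous item pair:
    H_ij = 1 - F_ij / E_ij, where F_ij is the observed count of
    Guttman errors (passing the harder item while failing the
    easier one) and E_ij is the count expected if the two items
    were statistically independent."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # Order the pair so that x is the easier (more popular) item.
    if x.mean() < y.mean():
        x, y = y, x
    n = len(x)
    errors = np.sum((x == 0) & (y == 1))       # failed easy item, passed hard item
    expected = n * (1 - x.mean()) * y.mean()   # baseline under independence
    return 1 - errors / expected
```

A perfect Guttman pattern (nobody passes the harder item without also passing the easier one) yields H = 1, while values near 0 indicate the pair carries no common scalability; Mokken scaling conventionally requires H above some positive lower bound (often 0.3) for items to form a scale.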
Deborah E. Bennett, Purdue University.
Education, Tests and Measurements|Education, Educational Psychology|Psychology, Psychometrics