A comparison of item parameters across item types

Gregory M Applegate, Purdue University

Abstract

Seven item formats from a professional licensure examination program (n=4,706) were evaluated to determine whether item parameter estimates differ significantly by item format. The formats were standard multiple choice, multiple choice with a graphic or exhibit attached, multiple choice in which the examinee was asked to choose the best of four possible correct solutions, multiple choice in which examinees were directed to choose the exception, calculation, ordered response, and multiple response. Analysis showed similar item parameters across all multiple-choice item types, but better item fit and significantly longer response times for calculation items. Multiple response items produced item parameters similar to those of multiple-choice items, with the exception of increased item difficulty, likely due to dichotomous scoring. Ordered response items tended to have longer response times and were generally more difficult than multiple-choice items. Calculation items were found to be the most discriminating, while ordered response items were the least discriminating.
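As a point of reference only (the abstract does not name the calibration model, so the two-parameter logistic IRT model below is an assumption, not the dissertation's stated method), the "item parameters" compared across formats would correspond to a discrimination parameter and a difficulty parameter, for example:

P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}}

where \theta_j is examinee ability, a_i is item discrimination, and b_i is item difficulty; a larger a_i indicates a more discriminating item and a larger b_i a more difficult one.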

Degree

Ph.D.

Advisors

Gorham, Purdue University.

Subject Area

Educational tests & measurements
