Fitting cognitive diagnostic assessment to the Concept Assessment Tool for Statics (CATS)

Aidsa Ivette Santiago-Román, Purdue University

Abstract

A concept inventory (CI) is a multiple-choice instrument designed to evaluate whether a person has an accurate, working knowledge of a specific set of concepts. An important role of CIs is to provide instructors with clues about the preconceptions (or misconceptions) their students hold, which may actively interfere with learning. Only a few engineering CIs have been applied successfully in instructional settings, due in part to the statistical analysis techniques typically applied to these instruments. These include psychometric techniques such as Classical Test Theory (CTT) and Item Response Theory (IRT), which model the item performance data of a CI. However, these approaches do not measure students’ cognitive attributes or misconceptions. To begin filling this gap, the objective of this study was to determine the applicability of a new statistical method, the Fusion Model, to the Concept Assessment Tool for Statics (CATS) among engineering students from several US universities. Specifically, the research question that guided this study was: Can the Fusion Model be appropriately used with the Concept Assessment Tool for Statics (CATS) to diagnostically measure students’ cognitive understanding of Statics concepts? In this study, the Fusion Model was applied to CATS through a four-phase procedure, with each phase serving a specific objective tied to the primary research question. The analysis generated a Q-matrix that relates a set of cognitive attributes to specific items. These attributes were determined using the expertise of the author of this study and, most importantly, of the developer of CATS. Results indicated that CATS is well suited for use as a diagnostic assessment; the analysis also identified items that need revision because they did not discriminate between examinees who were masters and non-masters of the specified attributes. Finally, examinees’ expected performance patterns were generated. These patterns can help instructors target instruction on the problematic attributes and thus on the corresponding concepts that are expected to be difficult for students.
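For readers unfamiliar with the formalism, the sketch below illustrates what a Q-matrix encodes, using a hypothetical toy example in Python. The items, attributes, and the deterministic conjunctive scoring rule are illustrative assumptions only; the Fusion Model itself is probabilistic, with item-level parameters that allow for slips and lucky guesses, and the dissertation’s actual CATS Q-matrix is not reproduced here.

```python
import numpy as np

# Hypothetical toy Q-matrix: rows are items, columns are cognitive
# attributes; entry (i, k) = 1 means item i requires attribute k.
# Values are illustrative, not the CATS Q-matrix from the study.
Q = np.array([
    [1, 0, 1],  # item 1 requires attributes A1 and A3
    [0, 1, 0],  # item 2 requires attribute A2
    [1, 1, 0],  # item 3 requires attributes A1 and A2
])

# A hypothetical examinee mastery pattern over the three attributes
# (1 = master, 0 = non-master).
alpha = np.array([1, 0, 1])

# Under a noise-free conjunctive rule, an examinee is expected to
# answer an item correctly only if they master every attribute the
# item requires, i.e., Q[i, k] <= alpha[k] for all k.
expected = np.all(Q <= alpha, axis=1).astype(int)
print(expected)  # [1 0 0]: only item 1's required attributes are all mastered
```

Roughly speaking, a mastery pattern like alpha is what the expected performance patterns in the study summarize: from such a pattern, an instructor can read off which attributes, and hence which Statics concepts, a student has likely not yet mastered.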

Degree

Ph.D.

Advisors

Ruth A. Streveler, Purdue University.

Subject Area

Engineering|Quantitative psychology|Cognitive psychology|Science education
