Difficulty as a concept inventory design consideration: An exploratory study of the Concept Assessment Tool for Statics (CATS)

Dana L Denick, Purdue University

Abstract

The ability of engineering students to apply mathematical, scientific, and engineering knowledge to real-life problems depends greatly on developing deep conceptual knowledge that structures and relates the meaning of underlying principles. Concept inventories (CIs) have emerged as a class of multiple-choice tests, typically developed for use in higher-education science and engineering courses, that are designed to assess students' conceptual understanding within a specific content domain. The CI explored in this study, the Concept Assessment Tool for Statics (CATS), is intended to measure students' understanding of the concepts underlying the domain of engineering statics. High-quality, reliable CIs may be used for formative and summative assessment and can help address the need for measures of conceptual understanding. Evidence of test validity is often found through calculation of psychometric parameters. Prior research has applied multiple theoretical measurement models, including classical test theory and item response theory, to find psychometric evidence that characterizes student performance on CATS. Common to these approaches is the calculation of item difficulty, a parameter used to distinguish which items are more difficult than others. The purpose of this dissertation study is to provide context for and description of what makes some CI items more difficult than others within the content area of statics, based on students' reasoning in response to CATS items. Specifically, the research question guiding this study is: how does student reasoning in response to CATS items explain variance in item difficulty across test items? Think-aloud interviews were conducted in combination with a content analysis of selected CATS items, and thematic analysis was performed on interview transcripts and on CATS development and evaluation documentation. Two themes emerged as possible explanations for why some CATS items are more difficult than others: (1) a Direction of Problem Solving theme describes the direction of reasoning required or used to respond to CATS items, and may also provide some description of students' reasoning in response to determinate and indeterminate multiple-choice problems; and (2) a Distractor Attractiveness theme describes problematic reasoning that is targeted and observed as argumentation for incorrect CATS responses. The findings from this study hold implications for the interpretation of CATS performance and for the consideration of difficulty in concept inventory design. In particular, the findings suggest that item difficulty may be associated with complexity, relating to theories of cognitive load. The complexity that contributes to item difficulty is not solely dependent on the content of the concept inventory item, but may also arise from the item's design and the context of the test question.
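
For context, under classical test theory the difficulty of an item is commonly summarized as the proportion of respondents who answer it correctly (the p-value), so lower values indicate harder items; item response theory instead estimates a difficulty parameter within a latent-trait model. A minimal sketch of the CTT index, using hypothetical item names and scored responses rather than actual CATS data, might look like:

    # Classical test theory (CTT) difficulty index: the proportion of
    # respondents answering an item correctly. Lower p = harder item.
    def difficulty_index(scored_responses):
        """scored_responses: list of booleans, True = correct answer."""
        return sum(scored_responses) / len(scored_responses)

    # Hypothetical scored responses for three CATS-style items.
    scored = {
        "item_1": [True, True, False, True, True],
        "item_2": [True, False, False, False, True],
        "item_3": [False, False, True, False, False],
    }

    for item, responses in scored.items():
        print(f"{item}: p = {difficulty_index(responses):.2f}")

Here item_1 (p = 0.80) would be the easiest item and item_3 (p = 0.20) the hardest; the study described above asks what, in students' reasoning, accounts for such differences.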

Degree

Ph.D.

Advisors

Ruth A. Streveler, Purdue University.

Subject Area

Educational tests & measurements | Engineering
