Research to Practice: Leveraging Concept Inventories in Statics Instruction

Location

West Lafayette, Indiana

DOI

10.5703/1288284316845

Keywords

classroom assessment; validation; concept inventory

Abstract

Classroom assessment presents many common challenges, especially in first-year, large-enrollment courses, including managing high-quality assessment within time constraints and promoting effective study strategies. This paper presents two studies: (1) using the Concept Assessment Tool for Statics (CATS) instrument to validate multiple-choice exams for classroom assessment, and (2) using the CATS instrument as a measure of metacognitive growth over time. The first study focused on validating instructor-generated multiple-choice exams because they are easier to administer, grade, and return for timely feedback, especially in large-enrollment classes. The limitation of multiple-choice exams, however, is that it is very difficult to construct questions that measure higher-order content knowledge beyond recall of facts. A correlational study was used to compare multiple-choice exam scores with relevant portions of the CATS assessment (taken within a week of one another). The results indicated a strong relationship between student performance on the CATS assessment and the instructor-generated exams, which suggests that both assessments were measuring similar content areas. The second study focused on metacognition, more specifically on students’ ability to self-assess the extent of their own knowledge. In this study, students were asked to rate their confidence for each CATS item on a Likert-type scale from 1 (not at all confident) to 4 (very confident). Because the 4-point scale provided no neutral option, students were forced to indicate some degree of confidence or lack of confidence. A regression analysis was used to examine the relationship between performance and confidence on the pre-, post-, and delayed-post assessments. Results suggested that students’ self-knowledge of their own performance improved over time.
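The abstract describes two analyses: a correlation between instructor-generated exam scores and the corresponding CATS subscores, and a regression of item performance on self-reported Likert-scale confidence. The sketch below illustrates the general shape of such analyses on synthetic data; the variable names, effect sizes, and data are assumptions for illustration only, not the authors' actual dataset, models, or results.

# Minimal sketch of the two analyses described in the abstract (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 120  # hypothetical number of students

# --- Study 1: correlate instructor exam scores with CATS subscores ---
cats_sub = rng.uniform(0, 1, n)                      # CATS subscore (proportion correct)
exam = 0.7 * cats_sub + rng.normal(0, 0.1, n)        # instructor exam score with noise
r, p = stats.pearsonr(exam, cats_sub)
print(f"Exam vs. CATS subscore: r = {r:.2f}, p = {p:.3g}")

# --- Study 2: regress item performance on 1-4 confidence ratings ---
confidence = rng.integers(1, 5, n)                   # 1 = not at all, 4 = very confident
score = 0.15 * confidence + rng.normal(0.2, 0.1, n)  # performance proxy
slope, intercept, r_val, p_val, se = stats.linregress(confidence, score)
print(f"Performance ~ confidence: slope = {slope:.3f}, R^2 = {r_val**2:.2f}, p = {p_val:.3g}")

In the paper's design, the regression in the second block would be repeated for the pre-, post-, and delayed-post administrations to see whether the performance-confidence relationship strengthens over time.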
