EVALUATING MICROCOMPUTER COURSEWARE: COMPARING THE RESULTS OF THREE EVALUATION METHODOLOGIES WITH EXTERNAL DATA COLLECTED DURING FIELD-TESTING
Abstract
This study investigates whether three different courseware evaluation methods accurately predict the relative effectiveness of computer-assisted instruction (CAI) programs that teach the same instructional skill. The researcher randomly assigned 144 eighth- and ninth-grade students to six treatment groups, each of which studied a different CAI program teaching the same mathematics fractions skill. Gains in student achievement were used to rank the CAI programs in order of effectiveness. The three courseware evaluation methods were: (1) local teachers using simple checklists, (2) MicroSIFT, and (3) Computer Program Appraisal (CPA). The correlation of concurrence, which measures the agreement among the three methods, was positive but not significant. The local teacher evaluation was the most accurate in predicting the effectiveness of the six CAI programs; the interplay among the teachers contributed greatly to that accuracy. The MicroSIFT evaluators' inability to observe students studying the CAI caused inaccuracies in that evaluation. The researcher also found that the evaluation of a complete CAI package varies depending on the specific skill used during student tryout.
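The abstract describes agreement among the three methods' rankings of the six CAI programs and their correspondence with the field-test ranking derived from achievement gains. As an illustrative sketch only (the dissertation's actual statistic and data are not reproduced here), the following Python snippet shows one common way such agreement can be quantified: Kendall's coefficient of concordance across the three rankings, plus each method's Spearman rank correlation with a field-test ranking. All rankings below are hypothetical placeholders.

```python
# Hypothetical sketch: quantifying agreement among evaluators' rankings.
# Rankings are placeholders, not data from the dissertation.

def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m raters ranking n items (no ties)."""
    m = len(rankings)           # number of evaluation methods (raters)
    n = len(rankings[0])        # number of CAI programs ranked
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = sum(rank_sums) / n
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation between two rankings with no ties."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical rankings of the six CAI programs (1 = most effective).
teacher_checklist = [1, 2, 3, 4, 5, 6]
microsift         = [2, 1, 4, 3, 6, 5]
cpa               = [1, 3, 2, 5, 4, 6]
field_test        = [1, 2, 4, 3, 5, 6]   # ranking from achievement gains

print("Agreement among methods (Kendall's W):",
      round(kendalls_w([teacher_checklist, microsift, cpa]), 3))
for name, ranking in [("Teachers", teacher_checklist),
                      ("MicroSIFT", microsift),
                      ("CPA", cpa)]:
    print(f"{name} vs. field test (Spearman rho):",
          round(spearman_rho(ranking, field_test), 3))
```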
Degree
Ph.D.
Subject Area
Curricula | Teaching