Date of Award

January 2015

Degree Type


Degree Name

Doctor of Philosophy (PhD)


Educational Studies

First Advisor

Marcia Gentry

Committee Member 1

Helen Patrick

Committee Member 2

Jenny Daugherty

Committee Member 3

Kristina Ayers Paul


Few program evaluation models unique to gifted education currently exist. The Depth and Complexity Program Evaluation Tool (DC-PET) is a new method for evaluating a gifted program that combines the Kaplan Depth and Complexity Model with tools and techniques from the fields of program evaluation and organizational change. The tool was designed to help local school district personnel generate data for gifted program improvement by requiring a close examination of critical issues in the field (e.g., defensible differentiation, underserved populations, twice-exceptional learners). The DC-PET is meant to provide a framework for guiding those with little or no knowledge of the evaluation process, using a paper-based workbook and a computer- or tablet-based application. Gifted coordinators from five districts were asked to create one or more evaluation teams, each consisting of at least five stakeholders willing to pilot the DC-PET. In total, nine evaluation teams comprising 55 participants were formed. A sample of 40 individuals from seven other school districts served as a comparison group. Data collected from treatment group participants included pre- and post-study qualitative surveys, pre- and post-study measures of evaluative thinking, weekly status checks, and optional focus group participation. Thirty-seven participants completed pre- and post-study assessments of their evaluation knowledge on a 4-point response scale from 1 (Novice) to 4 (Expert). Mean scores increased after 10-18 hours of using the DC-PET, from M = 1.46 (SD = 0.61) at pretest to M = 2.19 (SD = 0.57) at posttest. A repeated measures ANOVA of the pre- and post-study administrations of the Evaluative Thinking Inventory (Buckley & Archibald, 2011) revealed a statistically significant interaction between the intervention and time on evaluative thinking, F(1, 70) = 115.562, p = .027, ηp² = .068.
Further analysis of between-group differences revealed no statistically significant difference between the treatment group and the comparison group on the pre-study administration of the Evaluative Thinking Inventory, F(1, 70) = 0.031, p = .862, indicating that both groups began with about the same level of evaluative thinking. However, there was a significant difference between the treatment group and the comparison group on the post-study administration, F(1, 70) = 4.022, p = .049, ηp² = .054. Mean evaluation team member ratings of the degree to which the DC-PET aligned with the 10 empowerment evaluation principles, on a scale from 1 (not at all) to 4 (a lot), were 3.21 or greater. Despite early concerns about the time commitment and self-doubt about their ability to participate meaningfully, eight of the nine evaluation teams successfully completed an evaluation of a gifted program. Participants reported learning new skills and evaluation methods, as well as gaining a greater appreciation for the importance of evaluation.