Publications & Resources

On the Cognitive Interpretation of Performance Assessment Scores

Jul 2001

Carlos Cuauhtemoc Ayala, Richard Shavelson, and Mary Ann Ayala

We investigated aspects of the reasoning needed to complete science performance assessments, i.e., students’ hands-on investigations scored for the scientific justifiability of their findings. Whereas others have characterized the content demands of science performance assessments as rich or lean and their processes as constrained or open, or have described task demands as calling for different cognitive processes, we studied the reasoning demands of these assessments along three dimensions drawn from a previous analysis of NELS:88 data: basic knowledge and reasoning, spatial-mechanical reasoning, and quantitative science reasoning. In this pilot study, six subjects (three experts and three novices) were asked to think aloud (talk aloud) while they completed one of three science performance assessments, chosen because their tasks appeared to elicit differences in reasoning along these three dimensions. Comparisons were then made across the performance assessments and across expertise levels. The talk-alouds provided evidence of the three reasoning dimensions consistent with our nominal analysis of the assessment tasks. Furthermore, experts were more likely to use scientific reasoning in addressing the tasks, while novices verbalized more “doing something” and “monitoring” statements.

Ayala, C. C., Shavelson, R., & Ayala, M. A. (2001). On the cognitive interpretation of performance assessment scores (CSE Report 546). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).