
On the “Exchangeability” of Hands-On and Computer-Simulated Science Performance Assessments

Nov 2000

Anders Rosenquist, Richard J. Shavelson, and Maria Araceli Ruiz-Primo

Inconsistencies in scores from computer-simulated and “hands-on” science performance assessments have raised questions about the exchangeability of the two methods (e.g., Baxter & Shavelson, 1994), despite the former’s highly touted potential (e.g., Bennett, 1999). Five possible explanations for students’ inconsistent performances were considered: (1) inadequate exposure to computers and simulations, (2) differential views of computer-simulated (two-dimensional icon) and hands-on tasks, (3) different methods tapping different aspects of achievement, (4) partial or incomplete knowledge, and (5) a combination of partial knowledge and method differences. The first explanation was ruled out by the fact that students had computers in their classrooms and used them for a variety of purposes, including simulation. The second was ruled out using talk-aloud data, randomized experiments, and student questionnaire responses. If the third explanation were tenable, the correlation between scores on the Electric Mysteries (electric circuits) task at time 1 and time 2 within either method, hands-on or computer simulation, should be higher than the correlation between hands-on scores and computer-simulation scores at either point in time. Shavelson, Ruiz-Primo, and Wiley (1999) provided correlations that did not jibe with this expectation. To explore the remaining two explanations, both dealing with student expertise, we compared the performance of high school physics students (“experts”) to that of Baxter and Shavelson’s elementary school students and found, somewhat surprisingly, that these “experts” were far from expert; indeed, they were no more expert than the elementary students. Consequently, we have narrowed the possible explanations for the lack of exchangeability between computer-simulated and hands-on performance assessments to two: partial knowledge, or the interaction of partial knowledge with method. The jury is still out.
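The test of explanation 3 follows multitrait-multimethod logic: if the two methods measured different aspects of achievement, same-method test-retest correlations should exceed cross-method correlations. A minimal sketch of that comparison follows, using NumPy; the score arrays and variable names are entirely hypothetical illustrations, not the study’s data.

```python
import numpy as np

# Hypothetical scores for the same students on an Electric Mysteries-style
# task, measured at two time points with each method. All values are
# invented for illustration; they are not the study's results.
hands_on_t1 = np.array([4, 6, 5, 7, 3, 8, 5, 6])
hands_on_t2 = np.array([5, 6, 4, 7, 4, 7, 5, 6])
simulated_t1 = np.array([3, 7, 4, 6, 4, 6, 6, 5])
simulated_t2 = np.array([4, 6, 5, 6, 3, 7, 5, 5])

def r(x, y):
    """Pearson correlation between two score vectors."""
    return np.corrcoef(x, y)[0, 1]

# Same-method (test-retest) correlations.
same_method = {
    "hands-on t1 x t2": r(hands_on_t1, hands_on_t2),
    "simulated t1 x t2": r(simulated_t1, simulated_t2),
}

# Cross-method correlations at each time point.
cross_method = {
    "hands-on x simulated, t1": r(hands_on_t1, simulated_t1),
    "hands-on x simulated, t2": r(hands_on_t2, simulated_t2),
}

# Explanation 3 predicts every same-method correlation exceeds every
# cross-method correlation; finding otherwise rules it out.
prediction_holds = min(same_method.values()) > max(cross_method.values())
print(same_method, cross_method, prediction_holds)
```

Under this logic, the correlations reported by Shavelson, Ruiz-Primo, and Wiley (1999) correspond to the case where the prediction fails, which is why explanation 3 was set aside.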

Rosenquist, A., Shavelson, R. J., & Ruiz-Primo, M. A. (2000). On the exchangeability of hands-on and computer-simulated science performance assessments (CSE Report 531). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).