The Dependability and Interchangeability of Assessment Methods in Science

Jan 2000

Noreen M. Webb, Jonah Schlackman, and Brenda Sugrue

In this study, we investigated the importance of occasion as a hidden source of error variance in (a) estimates of the dependability (generalizability) of science assessment scores and (b) the interchangeability of science test formats. Two science tests were developed to measure eighth-grade students’ knowledge of concepts related to electricity and electric circuits: a hands-on assessment, which provided students with equipment to manipulate, and an analogous paper-and-pencil version. Students were administered both tests on two occasions, approximately one month apart. Results of the univariate generalizability analyses showed that explicitly recognizing occasion as a facet of error variance altered the interpretation of the substantial sources of error in the measurement and yielded lower estimates of the dependability of science scores. Including occasion as an explicit source of variance in the multivariate generalizability analyses influenced the interpretation of the observed correlation between hands-on and paper-and-pencil scores but had little influence on the estimated disattenuated correlation between assessment methods.
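For readers unfamiliar with the quantities the abstract refers to, the following is a minimal sketch in standard generalizability-theory notation; the person × item × occasion design and symbols below are assumed for illustration, not taken from the report itself. The dependability (generalizability) coefficient for relative decisions and the classical disattenuated correlation between two assessment methods X and Y are typically written as

\[
E\rho^2 \;=\; \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\delta},
\qquad
\sigma^2_\delta \;=\; \frac{\sigma^2_{pi}}{n_i} + \frac{\sigma^2_{po}}{n_o} + \frac{\sigma^2_{pio,e}}{n_i\, n_o}
\]

\[
\hat{\rho}_{T_X T_Y} \;=\; \frac{r_{XY}}{\sqrt{E\rho^2_X \; E\rho^2_Y}}
\]

where \(\sigma^2_p\) is universe-score (person) variance, the \(\sigma^2\) terms in \(\sigma^2_\delta\) are person-by-facet interaction variances averaged over the numbers of items \(n_i\) and occasions \(n_o\), and \(r_{XY}\) is the observed correlation between methods. Ignoring the occasion facet removes the \(\sigma^2_{po}\) term from error, which is why treating occasion explicitly can lower estimated dependability while leaving the disattenuated correlation largely unchanged.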

Webb, N. M., Schlackman, J., & Sugrue, B. (2000). The dependability and interchangeability of assessment methods in science (CSE Report 515). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).