Validation of ELA and Mathematics Assessments: A General Approach

Jul 2012

Joan L. Herman and Kilchan Choi

Validity refers to the degree to which an assessment actually measures what it claims to measure and serves its intended purposes well. From this perspective, assessments themselves are not inherently valid or invalid; rather, evidence of validity must be established in the context of specific interpretations and uses of test scores. A test may be well suited for one use and not for another. Moreover, validity is a matter of degree; validation requires the accumulation of evidence to support the argument that scores derived from a given test yield accurate inferences that support intended interpretations and uses (AERA, APA, & NCME, 1999; Kane, 2001). Finally, our definition of validity requires consideration of both (1) what is measured, that is, whether an assessment measures what it is intended to measure, and (2) what interpretations and uses the assessment is intended to serve. For both summative and formative assessments, this definition implies that assessments must yield technically sound measures of student learning. Moreover, results should be useful and used for intended purposes, and should not carry serious unintended negative consequences.

Modern validity theory suggests that validation be approached as an argument to be substantiated. A validity argument lays out the claims that an assessment and its scores must satisfy in order to serve their proposed purposes and uses. Validation efforts then focus on collecting evidence to document how well the assessment satisfies each claim. Below, we set out these claims as a set of criteria that educational assessments used for summative or accountability purposes should meet, and we present plans to systematically collect evidence on these claims throughout our test development and field-testing process.

Herman, J. L., & Choi, K. (2012). Validation of ELA and mathematics assessments: A general approach. Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).