Publications & Resources
Examining the Generalizability of Direct Writing Assessment Tasks
Eva Chen, David Niemi, Jia Wang, Haiwen Wang and Jim Mirocha
This study investigated the generalizability of a small set of high-quality writing assessment tasks and the validity of measuring students' writing ability with a limited number of essay tasks. More specifically, the research team explored how well writing prompts could measure general writing ability and whether student performance on one writing task could be generalized to other, similar tasks. Four writing prompts were used: three literature-based tasks and one task based on a short story. A total of 397 students participated, and each student was randomly assigned to complete two of the four tasks. The research team found that three to five essays were required to make a reliable judgment of student writing performance.
Chen, E., Niemi, D., Wang, J., Wang, H., & Mirocha, J. (2007). Examining the generalizability of direct writing assessment tasks (CSE Report 718). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).