Publications & Resources

Automatic Short Essay Scoring Using Natural Language Processing to Extract Semantic Information in the Form of Propositions

Aug 2013

Deirdre Kerr, Hamid Mousavi and Markus R. Iseli

The Common Core assessments emphasize short essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time-consuming to be used in national assessments unless a way to score them automatically can be found. Current automatic essay-scoring techniques are inappropriate for scoring the content of an essay because they rely either on grammatical measures of quality or on machine learning techniques, neither of which identifies statements of meaning (propositions) in the text. In this report, we introduce a novel technique that uses domain-independent, deep natural language processing to automatically extract meaning from student essays in the form of propositions and match the extracted propositions to the expected response. The empirical results indicate that our technique accurately extracts propositions from student short essays, reaching moderate agreement with human rater scores.
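To illustrate the general idea, the following is a minimal sketch in Python, assuming spaCy and its en_core_web_sm model are available. It approximates proposition extraction with simple subject-verb-object lemma triples taken from a dependency parse and scores an essay by the fraction of expected propositions it covers. This is not the report's actual pipeline, which uses a deeper NLP framework and richer matching; the expected propositions and example essay below are hypothetical.

import spacy

nlp = spacy.load("en_core_web_sm")

def extract_propositions(text):
    """Approximate propositions as (subject, verb, object) lemma triples."""
    props = set()
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for s in subjects:
                for o in objects:
                    props.add((s.lemma_.lower(), token.lemma_.lower(), o.lemma_.lower()))
    return props

def score_essay(essay, expected_propositions):
    """Return the fraction of expected propositions found in the essay."""
    found = extract_propositions(essay)
    return len(expected_propositions & found) / len(expected_propositions)

# Hypothetical expected response for a science item.
expected = {("heat", "cause", "expansion"), ("metal", "conduct", "heat")}
essay = "Metals conduct heat well. The heat causes expansion of the rail."
print(score_essay(essay, expected))  # 1.0 if both triples are recovered

In practice, matching would also need to handle paraphrase, synonymy, and passive constructions, which is why the report emphasizes deeper semantic processing over surface triples like these.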

Kerr, D., Mousavi, H., & Iseli, M. R. (2013). Automatic short essay scoring using natural language processing to extract semantic information in the form of propositions (CRESST Report 831). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).