Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models With Applications in Test Scoring, Scale Alignment, and Model Fit Testing

Jul 2013

Li Cai

Lord and Wingersky’s (1984) recursive algorithm for creating summed-score-based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature, because the recursions can be defined on a grid formed by direct products of quadrature points. However, the computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method specific to the Lord-Wingersky recursions is developed. This method takes advantage of the restrictions implied by hierarchical item factor models (e.g., the bifactor model [Gibbons & Hedeker, 1992], the testlet model [Wainer, Bradlow, & Wang, 2007], or the two-tier model [Cai, 2010b]) so that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism for producing summed-score-to-IRT-scaled-score translation tables properly adjusted for residual dependence, but also leads to new applications in test scoring, linking, and model fit checking. Simulated and empirical examples are used to illustrate the new applications.
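To make the starting point concrete, the following is a minimal sketch of the classic unidimensional Lord-Wingersky (1984) recursion that the paper extends — not the dimension-reduced Version 2.0 algorithm itself. Given each dichotomous item's probability of a correct response at a fixed quadrature point θ, the recursion builds the likelihood of every possible summed score. The 2PL item parameters below are purely illustrative.

```python
import math

def lord_wingersky(probs):
    """Return [P(summed score = s | theta) for s = 0..n] via the recursion
    L_k(s) = L_{k-1}(s) * (1 - p_k) + L_{k-1}(s - 1) * p_k."""
    scores = [1.0]  # with zero items administered, score 0 has probability 1
    for p in probs:
        new = [0.0] * (len(scores) + 1)
        for s, mass in enumerate(scores):
            new[s] += mass * (1.0 - p)   # item answered incorrectly
            new[s + 1] += mass * p       # item answered correctly
        scores = new
    return scores

def two_pl(theta, a, b):
    """2PL item response function: P(correct | theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Example: three hypothetical items evaluated at theta = 0
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]
probs = [two_pl(0.0, a, b) for a, b in items]
likelihood = lord_wingersky(probs)
# The score distribution is a proper probability mass function over 0..3
assert abs(sum(likelihood) - 1.0) < 1e-12
```

Each pass over an item is a single convolution step, so the cost is linear in the number of items per quadrature point; with a D-dimensional fixed-quadrature grid the same recursion must run at every grid point, which is the exponential burden the paper's dimension reduction removes.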
Editor’s Note: This item was updated with a new version on Friday, July 26, 2013.

Cai, L. (2013). Lord-Wingersky Algorithm Version 2.0 for hierarchical item factor models with applications in test scoring, scale alignment, and model fit testing (CRESST Report 830). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).