Using Data and Big Ideas: Teaching Distribution as an Instance of Repeated Addition

May 2008

Terry P. Vendlinski, Keith E. Howard, Bryan C. Hemberg, Laura Vinyard, Annabel Martel, Elizabeth Kyriacou, Jennifer Casper, Yourim Chai, Julia C. Phelan, and Eva L. Baker

Many American students fail to become proficient in algebra. One reason often cited for this difficulty is that instruction seldom builds on prior knowledge. Research suggests that teacher effectiveness is the most critical controllable variable in improving student achievement. This report details a process of formative assessment and professional development (called PowerSource©) designed to improve teacher effectiveness and student achievement. We describe the process we used to develop a model of distribution over addition and subtraction, one of three big ideas developed during the year, and the interactions we had with teachers about teaching distribution in various ways. As a consequence of these interactions, we were able to test whether teaching distribution using the notion of multiplication as repeated addition (a concept students had learned previously), using array or area models, or using a purely procedural approach had the greatest effect on student learning. We found not only that the repeated addition model was less likely to create certain student misconceptions, but also that students taught using the repeated addition model were more likely to correctly answer questions involving distribution than were their counterparts taught using either of the other methods. Teachers subsequently reported that they preferred teaching distribution as an instance of repeated addition to teaching it using the other available methods.
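To illustrate the idea behind the repeated addition model (this sketch is ours, not from the report): distribution treats a product such as 3(b + c) as (b + c) added three times, which regroups into 3b + 3c. A minimal numerical demonstration:

```python
def repeated_addition(a, x):
    """Model a * x as x added to itself a times (the repeated-addition view)."""
    total = 0
    for _ in range(a):
        total += x
    return total

a, b, c = 3, 4, 5

# 3(4 + 5) modeled as (4 + 5) + (4 + 5) + (4 + 5)
lhs = repeated_addition(a, b + c)

# Regrouping the same addends gives 3*4 + 3*5 -- the distributive property
rhs = repeated_addition(a, b) + repeated_addition(a, c)

print(lhs, rhs)  # both equal 27
```

The point of the model is that the distributive property is not a new rule to memorize but a regrouping of an operation (repeated addition) students already know.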

Vendlinski, T. P., Howard, K. E., Hemberg, B. C., Vinyard, L., Martel, A., Kyriacou, E., … Baker, E. L. (2008). Using data and big ideas: Teaching distribution as an instance of repeated addition (CRESST Report 734). Los Angeles: University of California, Los Angeles, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).