Publications

A Theoretical Framework for Data-Driven Decision Making

April 1, 2006

In the wake of the No Child Left Behind legislation (NCLB, 2001), data-driven decision making has become a central focus of education policy and practice. Schools seeking to meet the Adequate Yearly Progress (AYP) requirements of NCLB face tremendous pressure to carefully monitor student performance on the high-stakes assessments that determine their success or failure. The complexity of disaggregating, analyzing, and reporting these testing data has increasingly led administrators to embrace commercial and home-grown data-driven decision making tools and support systems to help track and drive improvement in student performance. One consequence of the increased use of these tools is a growing gap between the use of test data to satisfy administrative demands and the use of test data in concert with other data sources to aid instructional decision making. While these tools have the potential to support teachers' classroom-level instructional decisions, they tend to privilege an approach to data analysis that allows for the examination and reporting of system-wide or school-wide test trends and patterns but reveals only limited information about individual students and the multiple factors that influence student performance. As a result, they meet the needs of school administrators much more readily than those of classroom teachers.

Recent research conducted at the Education Development Center's Center for Children and Technology (EDC/CCT) has found that school administrators use high-stakes test data to understand general patterns of performance, identifying class-, grade-, and school-wide strengths and weaknesses so that they can allocate resources and plan professional development and other kinds of targeted intervention activities (e.g., after-school remediation, summer school attendance, etc.). Teachers, in contrast, are wary of using any single data source, such as high-stakes test data, to make decisions about their students' strengths and weaknesses. They prefer to draw on multiple sources of data - homework assignments, in-class tests, classroom performances, as well as impressionistic, anecdotal, and experiential information - to inform their thinking about student learning. While this approach yields a richer profile of individual student performance, it also has a downside. Our research and that of others suggest that teachers are more inclined to examine factors that contribute to individual patterns of behavior and to think on a case-by-case basis, rather than to look for patterns in data at different levels of aggregation, such as classroom-wide patterns. As a result, teachers' decision-making strategies often lack systematicity from student to student, class to class, and year to year; are unintentionally tinged with personal bias; and ignore key statistical concepts such as distribution, variation, and reliability.

This paper builds on a project sponsored by the National Science Foundation to explore and create an evaluative framework for data-driven decision making. This work, together with that of other EDC/CCT projects, the project advisory board, and others at this conference, has led us to an emerging conceptual framework for data-driven decision making. We present our current model, couched in the context of data-driven decision making in classrooms, schools, and districts.

STAFF

Margaret Honey
Daniel Light
Ellen Mandinach