Measuring the Efficacy of Big Math for Little Kids: A Look at Fidelity of Implementation

April 1, 2006

Since the inception of the No Child Left Behind Act (NCLB), the need and desire for research-based curricula have increased substantially. Not only do these curricula need to reflect previous research findings from developmental and cognitive science, but each curriculum itself needs to undergo rigorous scientific evaluation to show that it "works." Shavelson and Towne (2002) and the What Works Clearinghouse (2004) describe and advocate for these rigorous research standards in the hope that such methodologies will authoritatively indicate "what works." Often, this vetting process involves randomized controlled trials (RCTs) to show that the curriculum produces strong, positive learning outcomes in diverse groups of students (Brass, Nunez-Neto, & Williams, 2006).

RCTs involve randomly assigning subjects, delivering the intervention only in the treatment condition, and comparing the average outcome measure of each group to determine whether differences between groups are due to chance (Brass et al., 2006). However, even this experimental methodology may not be enough (Mowbray, Holter, Teague, & Bybee, 2003). For a research study to examine the impact of a curriculum, it must ensure that the curriculum is implemented as the designers intended (Mowbray et al., 2003). Thus, researchers need a tool that sensitively measures the degree to which teachers adhere to the intervention, in this case a curriculum.
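
To make the comparison step described above concrete, the short Python sketch below runs an independent-samples t-test on two groups' mean outcome scores. It is only an illustrative sketch: the scores are invented, and the use of SciPy's ttest_ind is our choice of tool for the example, not a method specified by the studies cited here.

    # Minimal sketch of the RCT outcome comparison described above.
    # The scores below are invented for illustration; a real study
    # would use measured learning outcomes for randomly assigned students.
    from scipy import stats

    treatment_scores = [78, 85, 90, 72, 88, 95, 81, 84]  # curriculum group
    control_scores   = [70, 75, 80, 68, 77, 82, 74, 71]  # comparison group

    # Independent-samples t-test: is the difference between group means
    # larger than we would expect from chance alone?
    t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)

    print(f"Treatment mean: {sum(treatment_scores) / len(treatment_scores):.1f}")
    print(f"Control mean:   {sum(control_scores) / len(control_scores):.1f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A small p-value would suggest the between-group difference is unlikely to be due to chance; but, as the passage notes, such a result is interpretable only if the intervention was actually implemented as designed.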

This paper discusses the concept of implementation fidelity, details our process of developing and using a fidelity measure, and explores ways in which our experience can generalize beyond our study and inform other researchers.

STAFF

Barbrina Ertle
Herbert Ginsburg