Publications

Balancing Priorities in the Evaluation of Educational Technology

June 1, 2003

For 22 years, CCT has explored how new technologies foster learning and improve teaching in schools and informal learning settings.

CCT partners with diverse organizations, including schools and districts, private and corporate philanthropies, government and policy groups, cultural institutions such as museums and libraries, and after-school programs working to bridge the digital divide. Working closely with practicing educators, administrators, policymakers, and curriculum and tool developers has pushed us as researchers to reflect regularly on our theoretical and methodological groundings, and to strive to conduct our research in ways that are respectful of and responsive to the needs and priorities of the educators whose work we hope to inform and support.

In our evaluation projects, we seek to produce findings that can improve programs and practice at every level of program development, delivery, and implementation. All of our evaluation work begins from the belief that effective evaluation must produce both research-based knowledge of how technological applications can best support teaching and learning and practice-based knowledge of how the technology integration process can best be designed to meet locally defined learning goals in schools. Conducting rigorous research while maintaining a high level of local validity and utility is our primary goal for all of our evaluation projects.

Below we share two examples of our approach to evaluation, each of which illustrates a response to a specific challenge to balance these multiple priorities.

What Is the Right Level of Analysis?
Some of the most challenging work we have done recently has involved bringing our evaluation strategies to bear on large-scale projects that reach tens and even hundreds of thousands of teachers. Working in this context has challenged us to find new ways to sustain our emphasis on producing locally relevant findings while also developing a broad portrait of program reception and impact. Intel Teach to the Future is a professional development program for K-12 teachers that reaches in-service and pre-service teachers in 29 countries. The primary focus of our evaluation has been the U.S. program for in-service teachers, which has reached over 100,000 teachers over the past three years. The Intel Teach to the Future curriculum focuses on helping teachers support students in pursuing original inquiries and long-term projects. It invites teachers to create a unit plan that includes student use of software to create publications, allowing teachers to expand their technical skills in the context of a curriculum development process.

Throughout this evaluation, now in its third year, we have combined in-depth examination of the program's implementation and impact in specific contexts with broad looks at the same topics across a much larger sample of the participants.

Our initial focus was on how participation in the program might be changing teachers' practices and overall use of technology. However, over time, both elements of our data collection have demonstrated that while the program has had a considerable impact on participants' ability to integrate technology into their teaching, it is also having a more systemic impact at the school and district level.

For example, some districts are reshaping their overall professional development programs about technology to create a sequence of trainings that are consistent with the approach taken by Intel Teach to the Future. Consequently, we have designed our third-year data collection to examine these school- and district-level impacts more systematically, which will result in a more complete picture of the impact of this unusually large-scale professional development initiative.

What Are the Key Factors for Success?
Clients often hope that evaluations will not only determine the relative success of their program, but also help them to understand what makes the program successful. Paying careful attention to emerging formative evaluation findings has allowed us, in many evaluations, to begin hypothesizing about these factors early on and to refine our examination and definition of them over time.

Since 1997, CCT has evaluated IBM's Reinventing Education initiative. Reinventing Education has charted a distinctive course focused on cultivating long-term, flexible educational research and development partnerships with urban school districts and state education departments. These partnerships respond directly to the immediate needs of each partner state or district, whether focused on student achievement, teacher professional development, or another dimension of the reform process, with an eye toward producing systemic reforms that can alter teaching practice and the circumstances of student learning across subject areas.

After a pilot study in 1996-1997, CCT engaged in a comprehensive study of how the specific solutions to each site's needs were being developed and implemented, and the impact they were having on schools. Our study focused on how the Reinventing Education program was helping to address the specific educational challenges identified by each site, but also sought to understand what common challenges and opportunities were emerging across sites and what factors shaped the likelihood of success in each. The evaluation included: (1) measures customized to gather data about the nature of each site's particular problem, the design and implementation of its solution, and its impact over several years; and (2) questions and methods common across districts, which helped us to understand implementation and sustainability issues more generally.

Over time, we have found that while there is enormous variation among the participating districts and states, a clear set of common factors, including consistent and sustained leadership, clarity of goals for student learning, and a commitment to investing in professional development, has shaped their ability to build on and benefit from their partnership with IBM.

Article reprinted with permission from the Harvard Family Research Project's periodical The Evaluation Exchange, Spring 2003, Volume 9, Number 2, pp. 19-20.

STAFF

Katherine Culp
Margaret Honey