Publications

Adventures in Supercomputing: 1994-1995 Evaluation Final Report

January 1, 1996

The 1994-1995 evaluation of Adventures in Supercomputing (AiS) was designed to expand upon evaluation work conducted during the 1993-1994 school year. In 1993-1994, evaluators worked with nineteen schools, all in their second year in the program and representing three states (Iowa, New Mexico and Tennessee). Demographic data, contextual information, and videotaped student performances were used to determine the types of experiences students were having in AiS and the factors that most prominently supported or impeded their work.

This year, the evaluation team used the same research design to study second-year schools in Alabama and Colorado. Additionally, eight third-year AiS schools in Iowa, New Mexico and Tennessee were included in the sample. This selection allowed us to establish a uniform body of evaluation data from second-year AiS schools in all five participating states and to continue evaluating the development of the program in schools undertaking their third year in AiS.

The goal of this evaluation was to determine what types of learning experiences were typical of students participating in the AiS program. To answer this question, three types of data were collected and analyzed: demographic data describing the participating students, teachers and schools; contextual data describing the particular circumstances in which the AiS curriculum is implemented; and student learning data documenting the process and the outcomes of students' work.

The data documenting student learning outcomes (videotapes of student groups presenting their projects and being questioned about them) was analyzed according to a set of performance criteria. Students were then clustered according to the scores they received on their presentations.

This analysis yielded three clusters (High, Middle, and Low) that were distinct from one another in the quality of student performance on the five dimensions of the performance criteria: understanding, critical thinking, clarity, teamwork, and technical competence. The clusters were then analyzed in relation to the demographic data and the learning process data to isolate the variables that correlated significantly with membership in each cluster. Contextual data was used to aid in the interpretation of the significant variables.
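
The report does not name the specific clustering procedure used, so the following is only an illustrative sketch: it assumes a k-means-style clustering (via scikit-learn) applied to hypothetical five-dimension presentation scores, showing how student groups might be partitioned into High, Middle, and Low clusters and profiled by their mean scores.

    # Illustrative sketch only; the clustering algorithm, column names,
    # and sample scores below are assumptions, not the report's method.
    import pandas as pd
    from sklearn.cluster import KMeans

    # Hypothetical presentation scores, one row per student group,
    # each dimension rated 1 (poor) to 5 (outstanding).
    scores = pd.DataFrame(
        {
            "understanding": [4.5, 3.2, 2.0, 4.8, 3.5],
            "critical_thinking": [4.4, 3.0, 1.8, 4.6, 3.3],
            "clarity": [4.3, 3.1, 2.1, 4.7, 3.4],
            "teamwork": [4.9, 3.8, 2.6, 5.0, 4.0],
            "technical_competence": [4.0, 2.8, 1.7, 4.2, 3.0],
        }
    )

    # Partition the groups into three clusters based on their score profiles.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
    scores["cluster"] = kmeans.fit_predict(scores)

    # Mean score profile of each cluster, used to label them High/Middle/Low.
    print(scores.groupby("cluster").mean())

In an analysis like this, cluster membership would then be cross-tabulated against demographic and learning-process variables to identify significant correlates, as described above.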

Overall, the three clusters of student achievement exhibited similar profiles, but their mean scores centered around different points on the 1-5 (poor to outstanding) scale. Each cluster had consistent scores across understanding, critical thinking, and clarity; a relatively higher score on teamwork; and a relatively lower score on technical competence.

The High Cluster (37 percent of students) had very high scores across the dimensions, with a nearly perfect teamwork score and very strong scores for understanding, critical thinking and clarity. These scores reflect students' ability to define a tractable problem for investigation, use computational techniques effectively in their work, and understand the relationship between the content and the techniques of their inquiry.

The Middle Cluster (35 percent) had consistently above-average scores, but its members were less successful in fully integrating their project work. Students in this cluster were often successful in defining and completing a piece of work, but their ability to conjecture based on their knowledge, to extract further information or ideas from their findings, or to grasp the implications of their work was limited.

The Low Cluster (27 percent), the smallest of the three, included students who most often were unable to create a well-defined project. Their presentations were often diffuse and provided little evidence that the students had a clear sense of what they had set out to find or determine, or that they were able to make sense of the results of their work.

STAFF

Katherine Culp
Margaret Honey
Daniel Light