
Using Formative Assessments: The Role of Policy Supports

Principal Investigator: 
Co-Investigator: 
Project Overview
Background & Purpose: 

The purpose of this exploratory study is to examine the use of interim assessments and the policy supports that promote their use to improve instruction, focusing on elementary school mathematics. In particular, we examine how elementary school teachers, individually and collectively, learn from interim assessment results in mathematics and apply that knowledge to instructional decisions about content, pedagogy, and work with individual students; and how teachers’ schedules, the nature of professional development, the sophistication of local data systems, and other district and school supports affect teachers’ use of formative assessment data.

Setting: 

Research was conducted in nine elementary schools located in one urban school district (six schools) and one suburban school district (three schools) in the northeastern United States.

Research Design: 

The research design for this project is comparative and generates both descriptive evidence [case study and observational] and associative/correlational evidence [interpretive commentary].

This project collects original data using school records/policy documents, assessments of learning, achievement tests, personal observation, and survey research [paper-and-pencil self-completion questionnaires and face-to-face semi-structured/informal interviews]. The following data collection instruments and procedures were developed specifically for this project: (1) semi-structured interview protocols for classroom teachers and school and district leaders; (2) Data Analysis Scenarios, which consisted of mock-ups of how each district reports its interim assessment results, populated with hypothetical student scores, to learn about teachers’ familiarity with their district’s assessment and reporting systems; (3) “misconception” probes, in which teachers were presented with a hypothetical student who answered two items on the actual interim assessment incorrectly, to examine teachers’ understanding of and instructional responses to student errors; (4) classroom observations of each teacher for one math period, three times during the school year; (5) observations of school and district meetings; and (6) document analysis. In addition, we distributed a survey composed of nine multiple-choice items from the Content Knowledge for Teaching – Mathematics (CKT-M) instrument, developed by researchers at the University of Michigan, to measure participating teachers’ mathematical knowledge for teaching.

Interview data were analyzed using qualitative data analysis software. A descriptive, a priori code set was developed from the project's conceptual framework and applied to all interview transcripts. Coded data were organized both thematically and by case. Case profiles were developed by integrating interview and classroom observation data. Patterns and analytic relationships were identified using analytic memos, subsequent coding iterations, and process-ordered matrices. In addition, responses to the CKT-M survey were scored according to the developers' specifications, resulting in z-scores that indicate variation in mathematical knowledge for teaching among the teachers in our sample.
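To make the scoring step concrete, the sketch below standardizes hypothetical raw survey totals into z-scores relative to the sample mean. It is a minimal illustration only: the teacher labels and score values are invented, and the developers' actual CKT-M scoring specifications are more involved than simple standardization.

```python
from statistics import mean, stdev

# Hypothetical raw totals (items correct out of nine) for an
# illustrative sample of teachers; all labels and values are invented.
raw_scores = {"teacher_a": 7, "teacher_b": 4, "teacher_c": 9, "teacher_d": 5}

def standardize(scores: dict[str, int]) -> dict[str, float]:
    """Convert raw scores to z-scores relative to this sample's mean
    and standard deviation."""
    mu = mean(scores.values())
    sd = stdev(scores.values())  # sample (n - 1) standard deviation
    return {name: (raw - mu) / sd for name, raw in scores.items()}

for name, z in standardize(raw_scores).items():
    print(f"{name}: z = {z:+.2f}")
```

Read this way, a positive z-score simply places a teacher above the sample mean; the scores describe variation within the sample rather than performance against an external benchmark.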

Findings: 

We found that while elementary school teachers seek to use benchmark assessments in mathematics, varying supports and constraints influence the extent to which such assessments help improve classroom instruction. Findings from our district and school interviews, as well as from classroom observations, indicate that strict curricular pacing and a lack of collaboration time may hamper teachers’ use of mathematics assessment results to address student misconceptions and procedural errors. For example, while the administration of benchmark assessments is scheduled into the curriculum, teachers in most cases analyze their students’ scores outside of the school day or during prep time, and technological or pedagogical support for such analysis is haphazard. At the same time, information management systems that collect, organize, and report student results were found to influence the ways in which teachers analyze their students’ performance and, in many cases, helped reinforce teachers’ knowledge of the state mathematics standards. In addition, we identified a need for professional development aimed at helping teachers move from analysis and planning to addressing students’ conceptual understanding and skill development in the classroom.

Publications & Presentations: 

Nabors Olah, L., Lawrence, N. R., Goertz, M. E., Weathers, J., Riggan, M., & Anderson, J. (2007). Testing to the Test? Expectations and Supports for Interim Assessment Use. Paper presented at the American Educational Research Association (AERA) Annual Meeting, Chicago, IL. http://www.cpre.org/images/stories/cpre_pdfs/aera_2007_testing_to_test.pdf

Nabors Olah, L., Lawrence, N. R., & Riggan, M. (2008). Learning to Learn from Benchmark Assessment Data: How Teachers Analyze Results. Paper presented at the American Educational Research Association (AERA) Annual Meeting, New York, NY. http://www.cpre.org/images/stories/cpre_pdfs/aera2008_olah_lawrence_rigg...

Bulkley, K., Christman, J. B., Goertz, M. E., & Lawrence, N. R. (2008). Building with Benchmarks: The Role of the District in Philadelphia's Benchmark Assessment System. Paper presented at the American Educational Research Association (AERA) Annual Meeting, New York, NY. http://www.cpre.org/images/stories/cpre_pdfs/aera2008%20goertz%20lawrenc...
