Influences of teacher knowledge and professional learning on teachers' scoring of student writing assessments

Date

5/13/2020

Abstract

Although standardized assessment has been part of the educational landscape since the late 20th century, the reauthorization of the Elementary and Secondary Education Act in 2001 (No Child Left Behind) ushered in a new emphasis on assessment for the purpose of state accountability. In response, states have sought ways to assess students adequately so that, in turn, schools and districts could be held accountable for closing the achievement gap. In Texas, motivated by pressure from parents, teachers, and other stakeholders who expressed concerns about too much testing, the 84th Texas Legislature in 2015 passed legislation requiring the Texas Education Agency (TEA) to examine alternative methods of writing assessment by designing and implementing a writing assessment pilot study. As the state explores alternatives to the current writing assessment system, the purpose of this qualitative study was to explore the factors that come into play as teachers make scoring decisions while evaluating student writing. The study also considered the teacher knowledge and professional learning that contribute to the different scoring approaches teachers use when making those decisions. To situate the study and consider current efforts underway across the state, a document analysis was conducted of documents related to the Texas Writing Pilot. The study focused on six writing teachers from the grades in which writing is assessed across the state (i.e., grades 4, 7, and 9). Data were collected through Public Information Requests to TEA, interviews, and think-aloud protocols that captured teachers' verbal thinking about their scoring decisions while evaluating student writing. Findings were presented in three manuscripts written for publication in peer-reviewed journals. These findings revealed a clear disconnect between how educators teach writing and how the state assesses writing. The analysis of the interview and think-aloud protocol transcripts shed light on the complexity of teacher decision-making. It provided a look into the processes teachers use when making scoring decisions and revealed that teachers do not make these decisions in isolation; rather, they rely on personal experience, professional learning, and mentorship. The findings are a step toward better understanding the influences on teacher decision-making when scoring student writing and offer important considerations for a state or educational institution seeking to design assessments with improved inter-rater reliability among educators.

Keywords

Writing, Writing assessment, Teacher decision-making, High-stakes testing, State assessment, Student-centered assessment, Authentic assessment, Alternative assessment, Performance assessment, Rater agreement, Inter-rater reliability, Scorer cognition, Rater cognition, Analytic writing rubric, Accountability, Policy
