This is a guest post I did for The Oregonian. You can see the article on their website at this link.
The U.S. Department of Education's requirement that Oregon and Washington use year-to-year changes in annual standardized test scores as the measure of student growth for teacher evaluation represents a serious violation of accepted standards of educational measurement practice. When the federal position was simply that student growth be a significant factor in teacher evaluation, states and local districts had a wide variety of acceptable options for complying with high-quality practices. That is no longer true, and as a result, Oregon and Washington teachers and students will be harmed.
To understand the problem, one must learn a bit about the nature of these standardized accountability tests. They are developed to sample broad domains of knowledge and long lists of achievement standards, often spanning multiple grade levels. As a result, it is commonly the case that no single standard is tested in enough depth to support an inference about student mastery of it. Moreover, large numbers of standards or learning targets are not sampled at all. Further, these tests tend to rely on multiple-choice test items to reduce scoring costs, which severely restricts the learning targets that can be covered and leaves out many important ones. Finally, these accountability tests are never evaluated during development to determine in advance whether they are even capable of detecting differences in the quality of instruction. Their sensitivity to differences in the quality of teaching is therefore in question, and that should be a deal breaker in the context of teacher evaluation.
As a result of all of this, there is a high probability of misalignment between what is tested and the actual teaching responsibilities of any individual teacher. And even where alignment exists at some level, the samples of student performance are so thin as to prevent a confident judgment about students’ mastery of the tested standards. Under these circumstances, it is impossible to detect the effect of an individual teacher on the achievement of students.
But the problems don’t stop there. Next there is the issue of the year-long gap between pre- and post-testing required to track student growth using annual scores. During this span, a wide variety of factors clearly beyond the control of the teacher can affect student learning: the impact of a student’s previous teachers, learning ability, family background and home environment, the nature and abilities of classmates, and more. A strong body of research documents the effects of these variables on student learning. The federal requirements would attribute any changes in student scores to the teacher and afford local educators no opportunity to sort out the contributions of these confounding factors. Attempts to address this issue often employ “value-added analysis,” a treatment of evidence that has been widely discredited in the technical measurement literature.
If Oregon and Washington meekly comply with these federal requirements, chances are that good teachers will be mislabeled as poor, and poor teachers will be misidentified as acceptable. Who suffers on both counts? Students.
Far superior options are available to our teachers and school leaders, and both states should put in place practices that give them access to those options. The time has come to challenge the federal guidelines on this point and demand that their architects defend them.