State Representative Jeremy Faison of Cosby says the state’s teacher evaluation system, especially the portion that relies on student scores on TNReady, is causing headaches for Tennessee’s teachers.
Faison made the remarks at a hearing of the House Government Operations Committee, which he chairs. The hearing featured teachers, administrators, and representatives from the Department of Education and Tennessee’s testing vendor, Questar.
Zach Vance of the Johnson City Press reports:
“What we’re doing is driving the teachers crazy. They’re scared to death to teach anything other than get prepared for this test. They’re not even enjoying life right now. They’re not even enjoying teaching because we’ve put so much emphasis on this evaluation,” Faison said.
Faison also said that if the Department of Education were getting ratings on a scale of 1 to 5, as teachers do under the state’s evaluation system (the TEAM model), there are a number of areas where the Department would receive a 1. Chief among them is communication:
“We’ve put an immense amount of pressure on my educators, and when I share with you what I think you’d get a one on, I’m speaking for the people of East Tennessee, the 11th House District, from what I’m hearing from 99.9 percent of my educators, my principal and my school superintendents.”
Rather frankly, Faison said both the state Department of Education and Questar should receive a one for their communication with local school districts regarding the standardized tests.
Faison’s concerns about the lack of communication from the TNDOE echo concerns expressed by Wilson County Director of Schools Donna Wright recently related to a different issue. While addressing the state’s new A-F report card to rate schools, Wright said:
We have to find a way to take care of our kids and particularly when you have to look at kids in kindergarten, kids in the 504 plan and kids in IEP. When you ask the Department of Education right now, we’re not getting any answers.
As for including student test scores in teacher evaluations, a system known as the Tennessee Value-Added Assessment System (TVAAS) is currently used to estimate the impact a teacher has on a student’s growth over the course of the year. At best, TVAAS is a very rough estimate of a fraction of a teacher’s impact. The American Statistical Association notes that teachers account for only about 1 to 14 percent of the variability in student test scores.
Now, however, Tennessee is in the midst of a testing transition. While Education Commissioner Candice McQueen notes that value-added scores count for less in evaluations (15 percent this past year, 20 percent for the current year), why count any percentage of a flawed score? When changing tests, the value of TVAAS is particularly limited:
Here’s what Lockwood and McCaffrey (2007) had to say in the Journal of Educational Measurement:
We find that the variation in estimated effects resulting from the different mathematics achievement measures is large relative to variation resulting from choices about model specification, and that the variation within teachers across achievement measures is larger than the variation across teachers. These results suggest that conclusions about individual teachers’ performance based on value-added models can be sensitive to the ways in which student achievement is measured.
These findings align with those of Martineau (2006) and Schmidt et al. (2005).
You get different results depending on the type of question you’re measuring.
The researchers tested various VAM models (including the type used in TVAAS) and found that teacher effect estimates changed significantly based on both what was being measured AND how it was measured.
And they concluded:
Our results provide a clear example that caution is needed when interpreting estimated teacher effects because there is the potential for teacher performance to depend on the skills that are measured by the achievement tests.
If you measure different skills, you get different results. That decreases (or eliminates) the reliability of those results. TNReady is measuring different skills in a different format than TCAP. It’s BOTH a different type of test AND a test on different standards. Any value-added comparison between the two tests is statistically suspect, at best.
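The Lockwood and McCaffrey result can be illustrated with a toy simulation. This is a hypothetical sketch, not TVAAS itself or any real test data: each simulated teacher gets two correlated but distinct skill-specific effects (say, procedures versus problem solving), teacher effects are then estimated separately from two noisy tests, and we count how many teachers’ rankings change depending on which test is used. All numbers and parameter choices are invented for illustration.

```python
import random
import statistics

random.seed(42)

N_TEACHERS = 20
N_STUDENTS = 25  # students per classroom (arbitrary)

# Each teacher gets two correlated skill-specific "true" effects,
# mimicking the finding that a teacher's impact differs by the
# skill being measured.
teachers = []
for _ in range(N_TEACHERS):
    shared = random.gauss(0, 1)
    effect_a = shared + random.gauss(0, 0.7)  # skill A (e.g., procedures)
    effect_b = shared + random.gauss(0, 0.7)  # skill B (e.g., problem solving)
    teachers.append((effect_a, effect_b))

# Estimate each teacher's effect from each test as the classroom
# mean score (a crude stand-in for a value-added estimate).
est_a, est_b = [], []
for effect_a, effect_b in teachers:
    scores_a = [effect_a + random.gauss(0, 2) for _ in range(N_STUDENTS)]
    scores_b = [effect_b + random.gauss(0, 2) for _ in range(N_STUDENTS)]
    est_a.append(statistics.mean(scores_a))
    est_b.append(statistics.mean(scores_b))

# Rank teachers under each measure and count rank changes.
rank_a = sorted(range(N_TEACHERS), key=lambda i: est_a[i])
rank_b = sorted(range(N_TEACHERS), key=lambda i: est_b[i])
moved = sum(1 for i in range(N_TEACHERS)
            if rank_a.index(i) != rank_b.index(i))

print(f"Teachers whose rank changed across measures: {moved} of {N_TEACHERS}")
```

Even in this simplified setup, the same teachers land in different places in the rankings depending on which test is used, which is the core of the reliability concern when TNReady replaces TCAP.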
After the meeting, Faison confirmed that legislation will be forthcoming that detaches TNReady data from teacher evaluation and student grades.
Faison’s move reflects an acknowledgment that TNReady is in its early stages and that more years of data are needed to produce a reliable performance estimate. Or, as one principal who testified before the committee put it, there’s nothing wrong with taking the time to get this right.
For more on education politics and policy in Tennessee, follow @TNEdReport