The State Continues to Fail

Here’s another take on “Eric’s Story” about the Kindergarten portfolio evaluation process. The bottom line: Teachers are being disrespected and students are losing valuable learning time. All in the name of assigning a number to teachers in an evaluation process that leaves much to be desired.

Here’s what this teacher had to say:

I’m a teacher who has experienced this process from the view of teacher, portfolio district lead, and portfolio reviewer. I was also chosen for the second round of scoring. I received both the emails you discussed as well as a third stating I’d been chosen for more scoring with the “guidance document” attached.

So I begin my second round of scoring tomorrow. A process none of us knew would exist. We thought our deadline was May 15 on scoring and we would be done.

I spent two full 8-hour days trying to score submissions (pulled away from my kindergarten screening duties), only for them not to be available to me, so I did not complete the task or score the number they wanted me to score. Was this my fault? No! I tried, but the state wouldn’t push them out to us. So that’s why I was chosen for round two.

Now summer is beginning. Teachers need summer to recuperate mentally and prepare for our next class, which we happily look forward to receiving. We don’t need to spend it stressing over a continued workload.

MORE on K portfolios >

If you have a story to tell about the portfolio process or another aspect of the intersection between policy and practice, send it to: andy@tnedreport.com

For more on education politics and policy in Tennessee, follow @TNEdReport

Keep the stories alive!


 

Driving Teachers Crazy

State Representative Jeremy Faison of Cosby says the state’s teacher evaluation system, and especially the portion that relies on student scores on TNReady, is causing headaches for Tennessee’s teachers.

Faison made the remarks at a hearing of the House Government Operations Committee, which he chairs. The hearing featured teachers, administrators, and representatives from the Department of Education and Tennessee’s testing vendor, Questar.

Zach Vance of the Johnson City Press reports:

“What we’re doing is driving the teachers crazy. They’re scared to death to teach anything other than get prepared for this test. They’re not even enjoying life right now. They’re not even enjoying teaching because we’ve put so much emphasis on this evaluation,” Faison said.

Faison also said that if the Department of Education were getting ratings on a scale of 1 to 5, as teachers do under the state’s evaluation system (the TEAM model), there are a number of areas where the Department would receive a 1. Chief among them is communication:

“We’ve put an immense amount of pressure on my educators, and when I share with you what I think you’d get a one on, I’m speaking for the people of East Tennessee, the 11th House District, from what I’m hearing from 99.9 percent of my educators, my principal and my school superintendents.”

Rather frankly, Faison said both the state Department of Education and Questar should receive a one for their communication with local school districts regarding the standardized tests.

Faison’s concerns about the lack of communication from the TNDOE echo concerns expressed recently by Wilson County Director of Schools Donna Wright on a different issue. Addressing the state’s new A-F report card for rating schools, Wright said:

We have to find a way to take care of our kids and particularly when you have to look at kids in kindergarten, kids in the 504 plan and kids in IEP. When you ask the Department of Education right now, we’re not getting any answers.

As for including student test scores in teacher evaluations, Tennessee currently uses the Tennessee Value-Added Assessment System (TVAAS) to estimate the impact a teacher has on a student’s growth over the course of the year. At best, TVAAS is a very rough estimate of a fraction of a teacher’s impact. The American Statistical Association notes that most value-added studies find teachers account for about 1% to 14% of the variability in student test scores.

Now, however, Tennessee is in the midst of a testing transition. While Education Commissioner Candice McQueen notes that value-added scores count less in evaluation (15% this past year, 20% for the current year), why count any percentage of a flawed score? When changing tests, the value of TVAAS is particularly limited:

Here’s what Lockwood and McCaffrey (2007) had to say in the Journal of Educational Measurement:

We find that the variation in estimated effects resulting from the different mathematics achievement measures is large relative to variation resulting from choices about model specification, and that the variation within teachers across achievement measures is larger than the variation across teachers. These results suggest that conclusions about individual teachers’ performance based on value-added models can be sensitive to the ways in which student achievement is measured.
These findings align with similar findings by Martineau (2006) and Schmidt et al. (2005). You get different results depending on the type of question you’re measuring.

The researchers tested various VAM models (including the type used in TVAAS) and found that teacher effect estimates changed significantly based on both what was being measured AND how it was measured.

And they concluded:

Our results provide a clear example that caution is needed when interpreting estimated teacher effects because there is the potential for teacher performance to depend on the skills that are measured by the achievement tests.

If you measure different skills, you get different results. That decreases (or eliminates) the reliability of those results. TNReady is measuring different skills in a different format than TCAP. It’s BOTH a different type of test AND a test on different standards. Any value-added comparison between the two tests is statistically suspect, at best.
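The Lockwood and McCaffrey finding can be made concrete with a minimal simulation (the numbers and the two-skill setup here are hypothetical illustrations, not TVAAS itself): if teachers affect two distinct skills, and two tests weight those skills differently, the two sets of “value-added” estimates for the very same teachers can disagree substantially.

```python
import numpy as np

rng = np.random.default_rng(42)
n_teachers = 200

# Each teacher's true effect on two distinct skills
# (say, procedural fluency vs. conceptual understanding)
proc = rng.normal(0, 1, n_teachers)
conc = rng.normal(0, 1, n_teachers)

# Two tests weight those skills differently, plus measurement noise
test_a = 0.8 * proc + 0.2 * conc + rng.normal(0, 0.3, n_teachers)
test_b = 0.2 * proc + 0.8 * conc + rng.normal(0, 0.3, n_teachers)

# Correlation between the two sets of estimated teacher effects
corr = np.corrcoef(test_a, test_b)[0, 1]
print(f"correlation between the two sets of estimates: {corr:.2f}")
```

With these assumed weights the correlation comes out well below 1, meaning a teacher who looks strong on one test can look mediocre on the other, which is exactly the sensitivity the researchers warn about.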

After the meeting, Faison confirmed that legislation will be forthcoming that detaches TNReady data from teacher evaluation and student grades.

Faison’s move represents policy grounded in the acknowledgment that TNReady is in its early stages and that more years of data are needed to ensure a better performance estimate. Or, as one principal who testified before the committee put it, there’s nothing wrong with taking the time to get this right.

For more on education politics and policy in Tennessee, follow @TNEdReport


 

Value Added Changes

 

In what is certain to be welcome news to many teachers across the state, Governor Bill Haslam announced yesterday that he will be proposing changes to the state’s teacher evaluation process in the 2015 legislative session.

Perhaps the most significant proposal is to reduce the weight of value-added data on teacher evaluations during the transition to a new test for Tennessee students.

From the Governor’s press release explaining the proposed changes:

The governor’s proposal would:
•        Adjust the weighting of student growth data in a teacher’s evaluation so that the new state assessments in ELA and math will count 10 percent of the overall evaluation in the first year of administration (2016), 20 percent in year two (2017), and 35 percent in year three (2018). Currently, 35 percent of an educator’s evaluation is comprised of student achievement data based on student growth;
•        Lower the weight of student achievement growth for teachers in non-tested grades and subjects from 25 percent to 15 percent;
•        And make explicit local school district discretion in both the qualitative teacher evaluation model used for the observation portion of the evaluation and the specific weight student achievement growth will play in personnel decisions made by the district.

 

The proposal does not go as far as some have urged, but it does represent the transition period to new tests that teachers have been seeking. It also provides more local discretion in how evaluations are conducted.

Some educators and critics question the ability of value-added modeling to accurately predict teacher performance.

In fact, the American Statistical Association released a statement on value-added models that says, in part:

Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores

Additional analysis of the ability of value-added modeling to predict significant differences in teacher performance finds that this data doesn’t effectively differentiate among teachers.

I certainly have been critical of the over-reliance on value-added modeling in the TEAM evaluation model used in Tennessee. While the proposed change ultimately returns to using VAM for a significant portion of teacher scores, it also represents an opportunity to both transition to a new test AND explore other options for improving the teacher evaluation system.

For more on value-added modeling and its impact on the teaching profession:

Saving Money and Supporting Teachers

Real World Harms of Value-Added Data

Struggles with Value-Added Data

An Ineffective Teacher?

Principals’ Group Challenges VAM

 

For more on education policy and politics in Tennessee, follow @TNEdReport