Dear Educator

The Tennessee Department of Education explains the case of the missing students as some 900 teachers see their TVAAS scores recalculated.

Here’s the email those educators were sent:

Dear Educator,

We wanted to share an update with you regarding your individual TVAAS data.

The department has processed about 1.5 million records to generate individual TVAAS scores for nearly 19,000 educators based on the assessment results from over 1.9 million student tests in grades 2-8 and high school. During the review process with districts, we found that a small number of educators did not have all of their teacher-student claiming linkage records fully processed in data files released in early September. All linkage data that was captured in EdTools directly was fully incorporated as expected. However, due to a coding error in their software, our data processing vendor, RANDA Solutions, did not fully apply the linkage information that districts provided in supplemental Excel files over the summer. As a result, we are working with Randa to ensure that this additional data is included in final TVAAS processing.


You have been identified as an educator with some linkage data submitted via an Excel file that was not fully processed. This means after our statistical analysis vendor, SAS, receives these additional linkage records your score may be revised to reflect all the students you identified in the teacher-student claiming process. Only students marked “F” for instructional availability are used when calculating individual TVAAS data. Based on our records, there will be [X] additional students marked “F” for instructional availability linked to you when the additional data is incorporated.


Your district’s and school’s TVAAS scores are not affected by this situation given that all students are included in these metrics, regardless of which teacher is linked to them, so no other part of your evaluation composite would change. Moreover, only those teachers with this additional linkage data in Excel files are impacted, so the vast majority of your colleagues across the state have their final individual TVAAS composites, which are inclusive of all student data.


We expect to share your final growth score and overall level of effectiveness later this year. While we do not have more specific timing to share right now, we are expediting this process with our vendors to get you accurate feedback. We will follow-up with more detailed information in the next couple of weeks. Also, as announced to districts earlier this month, the department and your districts will be using new systems and processes this year that will ensure that this type of oversight does not happen again.


Thank you for your patience as we work to share complete and accurate feedback for you. We deeply value each Tennessee educator and apologize for this delay in providing your final TVAAS results. Please contact our office via the email address below if you have any questions.


Respectfully,


Office of Assessment Logistics

Tennessee Department of Education

A few things stand out about this communication:
  1. Tennessee continues to experience challenges with the rollout of TNReady. That’s to be expected, but it raises the question: Why are we rushing this? Why not take some time, hit pause, and get this right?
  2. The Department says, “Thank you for your patience as we work to share complete and accurate feedback for you.” If accurate feedback were important, the state would take the time to build a value-added data set based on TNReady. This would take three to five years, but would improve the accuracy of the information provided to educators. As it stands, the state is comparing apples to oranges and generating value-added scores of little real value.
  3. On the topic of value-added data generally, it is important to note that even with a complete data set, TVAAS data is of limited value in terms of evaluating teacher effectiveness. A recent federal lawsuit settlement in Houston ended the use of value-added data for teacher evaluation there. Additionally, a judge in New York ruled the use of value-added data in teacher evaluation was “arbitrary and capricious.”
  4. When will teachers have access to this less-than-accurate data? Here’s what the TDOE says: “We expect to share your final growth score and overall level of effectiveness later this year. While we do not have more specific timing to share right now, we are expediting this process with our vendors to get you accurate feedback.” Maybe they aren’t setting a clear deadline because they have a track record of missing deadlines?
  5. It’s amazing to me that a teacher’s “overall level of effectiveness” can only be determined once TVAAS data is included in their evaluation score. It’s as if there’s no other way to determine a teacher’s overall level of effectiveness. Not through principal observation. Not through analysis of data points on student progress taken throughout the year. Not through robust peer-evaluation systems.
  6. Let’s assume for a moment that the “level of effectiveness” indicator is useful for teacher development. Providing that score “later” is not exactly helpful. Ideally, actionable insight would be provided to a teacher and his/her administrators near the end of a school year. This would allow for targeted professional development to address areas that need improvement. Of course, this assumes targeted PD is even available.
  7. Accountability. This is the latest in a series of mishaps related to the new testing regimen known as TNReady. Teachers are held accountable through their evaluation scores, and in some districts, their pay is tied to those scores. Schools and districts are held accountable for growth and achievement scores and must develop School Improvement Plans to target areas of weakness. On the other hand, the Department of Education continues to make mistakes in the TNReady transition and no one is held accountable.

The email to impacted teachers goes to great lengths to establish the enormous scope of the TNReady transition. Lots of tests, lots of students, not too many mistakes. If this were the only error so far in the TNReady process, all could be forgiven. Instead, it is the latest in a long line of bumps. Perhaps it will all smooth out in time, which only makes the case for hitting pause all the stronger.

For more on education politics and policy in Tennessee, follow @TNEdReport



It May Be Ready, But Is It Valid?

In today’s edition of her Educator Update, Commissioner Candice McQueen discusses pending legislation addressing teacher evaluation and TNReady.

Here’s what McQueen has to say about the issue:

As we continue to support students and educators in the transition to TNReady, the department has proposed legislation (HB 309) that lessens the impact of state test results on students’ grades and teachers’ evaluations this year.

In 2015, the Tennessee Teaching Evaluation Enhancement Act created a phase-in of TNReady in evaluation to acknowledge the state’s move to a new assessment that is fully aligned to Tennessee state standards with new types of test questions. Under the current law, TNReady data would be weighted at 20 percent for the 2016-17 year.

However, in the spirit of the original bill, the department’s new legislation resets the phase-in of growth scores from TNReady assessments as was originally proposed in the Tennessee Teaching Evaluation Enhancement Act. Additionally, moving forward, the most recent year’s growth score will be used for a teacher’s entire growth component if such use results in a higher evaluation score for the teacher.

We will update you as this bill moves through the legislative process, and if signed into law, we will share detailed guidance that includes the specific options available for educators this year. As we announced last year, if a teacher’s 2015-16 individual growth data ever negatively impacts his or her overall evaluation, it will be excluded. Additionally, as noted above, teachers will be able to use 2016-17 growth data as 35 percent of their evaluation if it results in a higher overall level of effectiveness.

And here’s a handy graphic that describes the change:

TNReady Graphic
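The selection rule described in the email (use the 2016-17 growth score for the entire growth component only when doing so helps the teacher) can be sketched in a few lines. This is a hypothetical illustration only: the two-component composite, the 10 percent fallback weight, and all function names are assumptions, not the department’s actual formula.

```python
def evaluation_composite(observation, growth, growth_weight):
    """Weighted blend of an observation score and a growth score,
    both on a 1-5 scale. A two-component model is a simplification;
    real evaluation composites include additional measures."""
    return (1 - growth_weight) * observation + growth_weight * growth

def final_score(observation, prior_growth, growth_2016_17):
    """Hold-harmless sketch: count 2016-17 growth at 35% only if the
    resulting composite is higher; otherwise keep the composite built
    on prior growth data at a lower phase-in weight (10% assumed)."""
    baseline = evaluation_composite(observation, prior_growth, 0.10)
    with_new = evaluation_composite(observation, growth_2016_17, 0.35)
    return max(baseline, with_new)
```

Under these assumed weights, a teacher with a 4 on observation, a 3 in prior growth, and a 5 in 2016-17 growth would take the 35 percent option (4.35 vs. 3.9); with a 2 in 2016-17 growth, the prior-data composite (3.9) wins.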


Of course, there’s a problem with all of this: There’s not going to be valid data to use for TVAAS. Not this year. It’s bad enough that the state is transitioning from one type of test to another. That alone would call into question the validity of any comparison used to generate a value-added score. Now, there’s a gap in the data. As you might recall, there wasn’t a complete TNReady test last year. So, to generate a TVAAS score, the state will have to compare 2014-15 data from the old TCAP tests to 2016-17 data from what we hope is a sound administration of TNReady.

We really need at least three years of data from the new test to make anything approaching a valid comparison. Or, we should start over building a data set with this year as the baseline. Better yet, we could go the way of Hawaii and Oklahoma and just scrap the use of value-added scores altogether.

Even in the best of scenarios — a smooth transition from TCAP to TNReady — data validity was going to be a challenge.

As I noted when the issue of testing transition first came up:

Here’s what Lockwood and McCaffrey (2007) had to say in the Journal of Educational Measurement:

We find that the variation in estimated effects resulting from the different mathematics achievement measures is large relative to variation resulting from choices about model specification, and that the variation within teachers across achievement measures is larger than the variation across teachers. These results suggest that conclusions about individual teachers’ performance based on value-added models can be sensitive to the ways in which student achievement is measured.
These findings align with similar results from Martineau (2006) and Schmidt et al. (2005): you get different results depending on the type of question you’re measuring.

The researchers tested various VAM models (including the type used in TVAAS) and found that teacher effect estimates changed significantly based on both what was being measured AND how it was measured.

And they concluded:

Our results provide a clear example that caution is needed when interpreting estimated teacher effects because there is the potential for teacher performance to depend on the skills that are measured by the achievement tests.

If you measure different skills, you get different results. That decreases (or eliminates) the reliability of those results. TNReady is measuring different skills in a different format than TCAP. It’s BOTH a different type of test AND a test on different standards. Any value-added comparison between the two tests is statistically suspect, at best. In the first year, such a comparison is invalid and unreliable.

So, we’re transitioning from TCAP to TNReady AND we have a gap in years of data. That’s especially problematic, but not problematic enough to keep the Department of Education from plowing ahead (and patting themselves on the back) with a scheme that treats as valid a result sure to be invalid.

For more on education politics and policy in Tennessee, follow @TNEdReport