Bias Confirmed

Last year, I wrote about a study of Tennessee TVAAS scores conducted by Jessica Holloway-Libell. She examined 10 Tennessee school districts and their TVAAS score distribution. Her findings suggest that ELA teachers are less likely than Math teachers to receive positive TVAAS scores, and that middle school teachers generally, and middle school ELA teachers in particular, are more likely to receive lower TVAAS scores.

The findings, based on a sampling of districts, suggest one of two things:

1) Tennessee’s ELA teachers are NOT as effective as Tennessee’s Math teachers, and middle school teachers are less effective than high school teachers

OR

2) TVAAS scores are biased against ELA teachers (or in favor of Math teachers) due to the nature of the subjects being tested.

The second option actually has support from data analysis, as I indicated at the time and repeat here:

Holloway-Libell’s findings are consistent with those of Lockwood and McCaffrey (2007) published in the Journal of Educational Measurement:

The researchers tested various VAM models and found that teacher effect estimates changed significantly based on both what was being measured AND how it was measured.

That is, it’s totally consistent with VAM to have different estimates for math and ELA teachers, for example. Math questions are often asked in a different manner than ELA questions, and the assessments cover different subject matter.
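To make that point concrete, here’s a minimal, hypothetical simulation (my own sketch, not Lockwood and McCaffrey’s model, and not Tennessee data). It assumes two tests that emphasize partly overlapping skills; the same teachers end up with noticeably different value-added estimates depending on which test supplies the outcome.

```python
# Minimal sketch: a crude value-added estimate depends on which test is used.
# Everything here is an illustrative assumption, not an actual VAM or real data.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 50, 30

# "True" teacher effects on two partly overlapping skill dimensions
effect_a = rng.normal(0, 1, n_teachers)
effect_b = 0.5 * effect_a + rng.normal(0, 1, n_teachers)

def value_added(effect):
    """Crude proxy: mean student score gain per teacher, with student-level noise."""
    gains = effect[:, None] + rng.normal(0, 3, (n_teachers, n_students))
    return gains.mean(axis=1)

va_test_a = value_added(effect_a)  # estimates if the test emphasizes dimension A
va_test_b = value_added(effect_b)  # estimates if the test emphasizes dimension B

# The two sets of estimates rank the same teachers quite differently.
print("correlation between the two sets of estimates:",
      round(float(np.corrcoef(va_test_a, va_test_b)[0, 1]), 2))
```

With these made-up numbers, the two sets of estimates are far from interchangeable even though the teachers themselves never change, which is the kind of sensitivity to the measure that Lockwood and McCaffrey describe.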

Now, there’s even more evidence to suggest that TVAAS scores vary based on subject matter and grade level, which would limit their ability to provide meaningful information about teacher effectiveness.

A recently released study about effective teaching in Tennessee includes the following information:

The study used TVAAS scores alone to determine a student’s access to “effective teaching.” A teacher receiving a TVAAS score of 4 or 5 was considered “highly effective” for the purposes of the study. The findings indicate that Math teachers are more likely than ELA teachers to be rated effective by TVAAS, and that ELA teachers in grades 4-8 (mostly middle school grades) were the least likely to be rated effective. These results support the similar findings Holloway-Libell made in her sample of districts, and they are particularly noteworthy because they are more comprehensive, covering most districts in the state.

Here’s a breakdown of the findings by the percentage of teachers rated effective, along with the number of districts used to determine each average.

4-8 Math     47.5% effective     126 districts
HS Math      38.9% effective      94 districts
4-8 ELA      24.2% effective     131 districts
HS ELA       31.1% effective     100 districts

So, math teachers are more likely than ELA teachers to be rated effective by TVAAS, and middle school ELA teachers are the least likely to receive effective ratings.
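To put those gaps in plain numbers, here’s a quick sketch using only the figures in the table above (statewide averages as reported; the district counts differ by row, so treat the comparisons as approximate):

```python
# Percent of teachers rated effective (TVAAS level 4 or 5), from the table above
rates = {
    ("4-8", "Math"): 47.5,
    ("4-8", "ELA"): 24.2,
    ("HS", "Math"): 38.9,
    ("HS", "ELA"): 31.1,
}

for grade_band in ("4-8", "HS"):
    gap = rates[(grade_band, "Math")] - rates[(grade_band, "ELA")]
    print(f"{grade_band}: Math leads ELA by {gap:.1f} percentage points")
# 4-8: Math leads ELA by 23.3 percentage points
# HS: Math leads ELA by 7.8 percentage points
```

By this rough comparison, the Math-to-ELA gap in grades 4-8 is about three times the gap in high school.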

Again, the question is: Are Tennessee’s ELA teachers really worse than our Math teachers? And, are middle school ELA teachers the worst teachers in Tennessee?

Alternatively, one might suppose that TVAAS, as data from other value-added models suggest, is susceptible to subject-matter bias and, to a lesser extent, grade-level bias.

That is, the data generated by TVAAS is not a reliable predictor of teacher performance.

For more on education politics and policy in Tennessee, follow @TNEdReport


6 thoughts on “Bias Confirmed”

  1. Thanks for sharing. TVAAS, and VAM in general, is not valid. If you look deeply into 7th grade ELA scores from 2011-2014, very few teachers across our state were level 4 or 5 teachers. Most (over 70%) were level 1 or 2. Are 7th grade ELA teachers the worst of the worst, or was the test an invalid measure? No one from the TNDOE would discuss this issue with our district.

  2. “Again, the question is: Are Tennessee’s ELA teachers really worse than our Math teachers? And, are middle school ELA teachers the worst teachers in Tennessee?”

    No, that’s not the question. The question TVAAS is designed to answer is “Using (up to) the previous three years as a reference period, how does the growth in scores among students a teacher taught in the current period compare to the growth that would be predicted from the trend line established by the scores from the reference period?”

    There are numerous reasons why middle school ELA teachers may be less effective, but TVAAS is silent on those reasons. TVAAS simply shows that they are less effective. And TVAAS scores bear out anecdotal wisdom about the difficulty of teaching and remediating ELA in the middle school grades.

    TVAAS scores are not zero sum like effect scores; it is entirely possible for more teachers to underperform than overperform. This is a feature of the measure, not a bug.

    I see this blog frequently misrepresent the nature of the assumptions behind TVAAS. I don’t know if the reason is ignorance or mendacity on the part of the author, but this post is idiotic.

    I’m not even going to get into the table the author created, except to say that it is gibberish. The “number of districts used to determine the average” is different for each measure. This type of comparison is meaningless if it doesn’t compare the same districts!

    • Thanks for your comment, John. While I understand that you disagree with my interpretation of TVAAS and value-added in general, I did want to let you know it was the TNDOE that created the table that I reproduced. It’s for information purposes — so one can see the number of districts reporting data used to determine the average.

  3. Let’s remember those ELA teachers teach writing, grammar, & reading in the same amount of time that those math teachers teach math!!! From a consistent “5” ELA 4th grade teacher

  4. Math teachers have the ability to use calculators and “tricks” to help their students post higher scores than what they actually understand. I have been complaining about this the last 4 years of TCAP as I saw my scores decline. Each year was supposed to be the last year. I refused to focus on tricks and stayed with conceptual understanding and my scores suffered. But I would put my students up against any students when it comes to understanding. Teachers are incentivized to get the best score regardless because of the way we are set up. When I ask about calculators, students will say, we only used them for the test. We are short changing our students when we hand them calculators and say that the test score is the most important thing.

  5. Pingback: VAM-Based Bias | Spears Strategy
