Key Driver

Much is being made of Tennessee’s teacher evaluation system as a “key driver” in recent “success” in the state’s schools.

A closer look, however, reveals there’s more to the story.

Here’s a key piece of information in a recent story in the Commercial Appeal:

The report admits an inability to draw a direct, causal link from the changes in teacher evaluations, implemented during the 2011-12 school year, and the subsequent growth in classrooms across the state.

Over the same years, the state has also raised its education standards, overhauled its assessment and teacher preparation programs and implemented new turnaround programs for struggling schools.

Of course, it’s also worth noting that BEFORE any of these changes, Tennessee students were scoring well on the state’s TCAP test — teachers were given a mark and were consistently hitting the mark, no matter the evaluation style.

Additionally, it’s worth noting that “growth” as it relates to the current TNReady test is difficult to measure due to the unreliable test administration, including this year’s problems with hackers and dump trucks.

While the TEAM evaluation rubric is certainly more comprehensive than those used in the past, the full picture of classroom practice is difficult to capture in a single observation, and the TVAAS-based growth component is fraught with problems even under the best circumstances.

Let’s look again, though, at the claim of sustained “success” since the implementation of these evaluation measures as well as other changes.

We’ll turn to the oft-lauded NAEP results for a closer look:

First, notice that between 2009 and 2011, Tennessee saw drops in 4th and 8th grade reading and 8th grade math. That helps explain the “big gains” seen in 2013. Next, note that in 4th and 8th grade reading and 4th grade math, our 2017 scores are lower than the 2013 scores. There’s that leveling off I suggested was likely. Finally, note that in 4th and 8th grade reading, the 2017 scores are very close to the 2009 scores. So much for “fastest-improving.”

Tennessee is four points below the national average in both 4th and 8th grade math. When it comes to reading, we are 3 points behind the national average in 4th grade and 5 points behind in 8th grade.

All of this to say: You can’t say you’re the fastest-improving state on NAEP based on one testing cycle. You also shouldn’t make long-term policy decisions based on seemingly fabulous results in one testing cycle. Since 2013, Tennessee has doubled down on reforms with what now appears to be little positive result.

In other words, in terms of a national comparison of education “success,” Tennessee still has a long way to go.

That may well be because we have yet to actually meaningfully improve investment in schools:

Tennessee is near the bottom. The data shows we’re not improving since Bill Haslam became governor, at least not faster than other states.

We ranked 44th in the country for investment in public schools back in 2010 — just before these reforms — and we rank 44th now.

Next, let’s turn to the issue of assessing growth. Even in good years, that’s problematic using value-added data:

And so perhaps we shouldn’t be using value-added modeling for more than informing teachers about their students and their own performance, treating it as one small tool as they seek to continuously improve practice. One might even mention a VAM score on an evaluation — but one certainly wouldn’t base 35-50% of a teacher’s entire evaluation on such data. In light of these numbers from the Harvard researchers, that seems entirely irresponsible.
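For readers who want a concrete picture of what this kind of modeling does, here is a minimal sketch of a generic value-added calculation in Python. It is emphatically not the proprietary EVAAS/TVAAS model: the data is invented, and the model is the simplest possible version (predict each student’s current score from the prior-year score, then average each teacher’s prediction errors). Still, it shows where a “growth score” comes from and why a small number of students per teacher makes the estimate noisy.

```python
# A minimal, generic value-added sketch with invented data.
# NOT the proprietary EVAAS/TVAAS model; for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 students, 20 teachers, a prior-year score and a
# current-year score for each student.
n_students, n_teachers = 200, 20
prior = rng.normal(50, 10, n_students)
teacher = rng.integers(0, n_teachers, n_students)
true_effect = rng.normal(0, 2, n_teachers)            # what we'd love to recover
current = 5 + 0.95 * prior + true_effect[teacher] + rng.normal(0, 8, n_students)

# Step 1: predict each student's current score from the prior-year score.
slope, intercept = np.polyfit(prior, current, 1)
predicted = intercept + slope * prior

# Step 2: a teacher's "value-added" estimate is the average prediction error
# (residual) across that teacher's students.
residual = current - predicted
vam = {t: residual[teacher == t].mean() for t in range(n_teachers)}

for t, score in sorted(vam.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"Teacher {t:2d}: estimated value-added = {score:+.2f}")
```

With only about ten students per teacher in this toy example, the estimates bounce around considerably if you change the random seed, which is exactly the reliability concern raised above.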

Then, there’s the issue of fairness when it comes to using TVAAS. Two different studies have shown notable discrepancies in the value-added scores of middle school teachers at various levels:

Last year, I wrote about a study of Tennessee TVAAS scores conducted by Jessica Holloway-Libell. She examined 10 Tennessee school districts and their TVAAS score distribution. Her findings suggest that ELA teachers are less likely than Math teachers to receive positive TVAAS scores, and that middle school teachers generally, and middle school ELA teachers in particular, are more likely to receive lower TVAAS scores.

A second, more comprehensive study indicates a similar challenge:

The study used TVAAS scores alone to determine a student’s access to “effective teaching.” A teacher receiving a TVAAS score of a 4 or 5 was determined to be “highly effective” for the purposes of the study. The findings indicate that Math teachers are more likely to be rated effective by TVAAS than ELA teachers and that ELA teachers in grades 4-8 (mostly middle school grades) were the least likely to be rated effective. These findings offer support for the similar findings made by Holloway-Libell in a sample of districts. They are particularly noteworthy because they are more comprehensive, including most districts in the state.

These studies are based on TVAAS when everything else is going well. But, testing hasn’t been going well and testing is what generates TVAAS scores. So, the Tennessee Department of Education has generated a handy sheet explaining all the exceptions to the rules regarding TVAAS and teacher evaluation:

However, to comply with the Legislation and ensure no adverse action based on 2017-18 TNReady data, teachers and principals who have 2017-18 TNReady data included in their LOE (school-wide TVAAS, individual TVAAS, or achievement measure) may choose to nullify their entire evaluation score (LOE) for the 2017-18 school year at their discretion. No adverse action may be taken against a teacher or principal based on their decision to nullify his or her LOE. Nullifying an LOE will occur in TNCompass through the evaluation summative conference.

Then, there’s the guidance document which includes all the percentage options for using TVAAS:

What is included in teacher evaluation in 2017-18 for a teacher with 3 years of TVAAS data? There are three composite options for this teacher:

• Option 1: TVAAS data from 2017-18 will be factored in at 10%, TVAAS data from 2016-17 will be factored in at 10% and TVAAS data from 2015-16 will be factored in at 15% if it benefits the teacher.

• Option 2: TVAAS data from 2017-18 and 2016-17 will be factored in at 35%.

• Option 3: TVAAS data from 2017-18 will be factored in at 35%.

The option that results in the highest LOE for the teacher will be automatically applied. Since 2017-18 TNReady data is included in this calculation, this teacher may nullify his or her entire LOE this year.

That’s just one of several scenarios described to make up for the fact that the State of Tennessee simply cannot reliably deliver a test.
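To make the “best of three options” logic concrete, here is a small sketch of how the selection described above could work. The growth-weight splits (10/10/15, 35, and 35) come from the guidance; everything else, including the score scale, the observation and achievement weights, and the way two years of TVAAS data are combined under Option 2, is a hypothetical placeholder rather than the state’s actual formula.

```python
# A sketch of the "best of three options" selection described above.
# The growth-weight splits (10/10/15, 35, 35) come from the guidance document;
# the score scale, the observation/achievement weights, and the way two years
# are combined under Option 2 are hypothetical placeholders, not state policy.

def composite(growth_part, observation, achievement):
    # Hypothetical remainder of the composite: observation 50%, achievement 15%.
    return growth_part + 0.50 * observation + 0.15 * achievement

def growth_option_1(tvaas_17_18, tvaas_16_17, tvaas_15_16):
    # 2015-16 counts at 15% only if it benefits the teacher (not modeled here).
    return 0.10 * tvaas_17_18 + 0.10 * tvaas_16_17 + 0.15 * tvaas_15_16

def growth_option_2(tvaas_17_18, tvaas_16_17):
    # Assumption: the two years are simply averaged before the 35% weight.
    return 0.35 * (tvaas_17_18 + tvaas_16_17) / 2

def growth_option_3(tvaas_17_18):
    return 0.35 * tvaas_17_18

# A hypothetical teacher, with all scores on a 1-5 scale.
tvaas = {"2017-18": 3, "2016-17": 4, "2015-16": 5}
observation, achievement = 4.2, 4.0

options = {
    "Option 1": growth_option_1(tvaas["2017-18"], tvaas["2016-17"], tvaas["2015-16"]),
    "Option 2": growth_option_2(tvaas["2017-18"], tvaas["2016-17"]),
    "Option 3": growth_option_3(tvaas["2017-18"]),
}
scores = {name: composite(g, observation, achievement) for name, g in options.items()}

for name, score in scores.items():
    print(f"{name}: composite = {score:.2f}")
print("Applied automatically:", max(scores, key=scores.get), "(highest result)")
```

The takeaway is the one the guidance itself makes: the system now computes several alternative composites for the same teacher and keeps whichever one comes out highest.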

Let’s be clear: Using TVAAS to evaluate a teacher AT ALL in this climate is educational malpractice. But, Commissioner McQueen and Governor Haslam have already demonstrated they have a low opinion of Tennesseans:

Let’s get this straight: Governor Haslam and Commissioner McQueen think no one in Tennessee understands Google? They are “firing” the company that messed up this year’s testing and hiring a new company that owns the old one and that also has a reputation for messing up statewide testing.

To summarize, Tennessee is claiming success off of one particularly positive year on NAEP and on TNReady scores that are consistently unreliable. Then, Tennessee’s Education Commissioner is suggesting the “key driver” of all this success is a highly flawed evaluation system, a significant portion of which is based on junk science.

The entire basis of this spurious claim is that two things happened around the same time. You know what else has happened since Tennessee implemented new teacher evaluations and TNReady? Really successful seasons for the Nashville Predators.

Correlation does NOT equal causation. Claiming teacher evaluations are a “key driver” of some fairly limited success story is highly problematic, though typical of this Administration.

Take a basic stats class, Dr. McQueen.

 

For more on education politics and policy in Tennessee, follow @TNEdReport



 

Dear Educator

The Tennessee Department of Education explains the case of the missing students as some 900 teachers see their TVAAS scores recalculated.

Here’s the email those educators were sent:

Dear Educator,

We wanted to share an update with you regarding your individual TVAAS data.

The department has processed about 1.5 million records to generate individual TVAAS scores for nearly 19,000 educators based on the assessment results from over 1.9 million student tests in grades 2-8 and high school. During the review process with districts, we found that a small number of educators did not have all of their teacher-student claiming linkage records fully processed in data files released in early September. All linkage data that was captured in EdTools directly was fully incorporated as expected. However, due to a coding error in their software, our data processing vendor, RANDA Solutions, did not fully apply the linkage information that districts provided in supplemental Excel files over the summer. As a result, we are working with Randa to ensure that this additional data is included in final TVAAS processing.

 

You have been identified as an educator with some linkage data submitted via an Excel file that was not fully processed. This means after our statistical analysis vendor, SAS, receives these additional linkage records your score may be revised to reflect all the students you identified in the teacher-student claiming process. Only students marked “F” for instructional availability are used when calculating individual TVAAS data. Based on our records, there will be [X] additional students marked “F” for instructional availability linked to you when the additional data is incorporated.

 

Your district’s and school’s TVAAS scores are not affected by this situation given that all students are included in these metrics, regardless of which teacher is linked to them, so no other part of your evaluation composite would change. Moreover, only those teachers with this additional linkage data in Excel files are impacted, so the vast majority of your colleagues across the state have their final individual TVAAS composites, which are inclusive of all student data.

 

We expect to share your final growth score and overall level of effectiveness later this year. While we do not have more specific timing to share right now, we are expediting this process with our vendors to get you accurate feedback. We will follow-up with more detailed information in the next couple of weeks. Also, as announced to districts earlier this month, the department and your districts will be using new systems and processes this year that will ensure that this type of oversight does not happen again.

 

Thank you for your patience as we work to share complete and accurate feedback for you. We deeply value each Tennessee educator and apologize for this delay in providing your final TVAAS results. Please contact our office via the email address below if you have any questions.

 

Respectfully,

 

Office of Assessment Logistics

Tennessee Department of Education

A few things stand out about this communication:

  1. Tennessee continues to experience challenges with the rollout of TNReady. That’s to be expected, but it raises the question: Why are we rushing this? Why not take some time, hit pause, and get this right?
  2. The Department says, “Thank you for your patience as we work to share complete and accurate feedback for you.” If accurate feedback were important, the state would take the time to build a value-added data set based on TNReady. This would take three to five years, but would improve the accuracy of the information provided to educators. As it stands, the state is comparing apples to oranges and generating value-added scores of little real value.
  3. On the topic of value-added data generally, it is important to note that even with a complete data set, TVAAS data is of limited value in terms of evaluating teacher effectiveness. A recent federal lawsuit settlement in Houston ended the use of value-added data for teacher evaluation there. Additionally, a judge in New York ruled the use of value-added data in teacher evaluation was “arbitrary and capricious.”
  4. When will teachers have access to this less-than-accurate data? Here’s what the TDOE says, “We expect to share your final growth score and overall level of effectiveness later this year. While we do not have more specific timing to share right now, we are expediting this process with our vendors to get you accurate feedback.” Maybe they aren’t setting a clear deadline because they have a track record of missing deadlines?
  5. It’s amazing to me that a teacher’s “overall level of effectiveness” can only be determined once TVAAS data is included in their evaluation score. It’s as if there’s no other way to determine an overall level of a teacher’s effectiveness. Not through principal observation. Not through analysis of data points on student progress taken throughout the year. Not through robust peer-evaluation systems.
  6. Let’s assume for a moment that the “level of effectiveness” indicator is useful for teacher development. Providing that score “later” is not exactly helpful. Ideally, actionable insight would be provided to a teacher and his/her administrators near the end of a school year. This would allow for targeted professional development to address areas that need improvement. Of course, this assumes targeted PD is even available.
  7. Accountability. This is the latest in a series of mishaps related to the new testing regimen known as TNReady. Teachers are held accountable through their evaluation scores, and in some districts, their pay is tied to those scores. Schools and districts are held accountable for growth and achievement scores and must develop School Improvement Plans to target areas of weakness. On the other hand, the Department of Education continues to make mistakes in the TNReady transition and no one is held accountable.

The email to impacted teachers goes to great lengths to establish the enormous scope of the TNReady transition. Lots of tests, lots of students, not too many mistakes. If this were the only error so far in the TNReady process, all could be forgiven. Instead, it is the latest in a long line of bumps. Perhaps it will all smooth out in time, which only makes the case for hitting pause all the stronger.

For more on education politics and policy in Tennessee, follow @TNEdReport


 

Apples and Oranges

Here’s what Director of Schools Dorsey Hopson had to say amid reports that schools in his Shelby County district showed low growth according to recently released state test data:

Hopson acknowledged concerns over how the state compares results from “two very different tests which clearly are apples and oranges,” but he added that the district won’t use that as an excuse.

“Notwithstanding those questions, it’s the system upon which we’re evaluated on and judged,” he said.

State officials stand by TVAAS. They say drops in proficiency rates resulting from a harder test have no impact on the ability of teachers, schools and districts to earn strong TVAAS scores, since all students are experiencing the same change.

That’s all well and good, except when the system upon which you are evaluated is seriously flawed, it seems there’s an obligation to speak out and fight back.

Two years ago, ahead of what should have been the first year of TNReady, I wrote about the challenges of creating valid TVAAS scores while transitioning to a new test. TNReady was not just a different test; it was (and is) a different type of test than the previous TCAP. For example, it included constructed-response questions instead of simply multiple-choice bubble-in questions.

Here’s what I wrote:

Here’s the problem: There is no statistically valid way to predict expected growth on a new test based on the historic results of TCAP. First, the new test has (supposedly) not been fully designed. Second, the test is in a different format. It’s both computer-based and it contains constructed-response questions. That is, students must write-out answers and/or demonstrate their work.

Since Tennessee has never had a test like this, it’s impossible to predict growth at all. Not even with 10% confidence. Not with any confidence. It is the textbook definition of comparing apples to oranges.

To support this claim, I cited Lockwood and McCaffrey (2007) in the Journal of Educational Measurement. Here’s what they had to say:

We find that the variation in estimated effects resulting from the different mathematics achievement measures is large relative to variation resulting from choices about model specification, and that the variation within teachers across achievement measures is larger than the variation across teachers.

You get different value-added results depending on the type of test you use. That is, you can’t just say this is a new test but we’ll compare peer groups from the old test and see what happens. Plus, TNReady presents the added challenge of not having been fully administered last year, so you’re now looking at data from two years ago and extrapolating to this year’s results.

Of course, the company paid millions to crunch the TVAAS numbers says that this transition presents no problem at all. Here’s what their technical document has to say about the matter:

In 2015-16, Tennessee implemented new End-of-Course (EOC) assessments in math and English/language arts. Redesigned assessments in Math and English/language arts were also implemented in grades 3-8 during the 2016-17 school year. Changes in testing regimes occur at regular intervals within any state, and these changes need not disrupt the continuity and use of value-added reporting by educators and policymakers. Based on twenty years of experience with providing value-added and growth reporting to Tennessee educators, EVAAS has developed several ways to accommodate changes in testing regimes.

Prior to any value-added analyses with new tests, EVAAS verifies that the test’s scaling properties are suitable for such reporting. In addition to the criteria listed above, EVAAS verifies that the new test is related to the old test to ensure that the comparison from one year to the next is statistically reliable. Perfect correlation is not required, but there should be a strong relationship between the new test and old test. For example, a new Algebra I exam should be correlated to previous math scores in grades seven and eight and to a lesser extent other grades and subjects such as English/language arts and science. Once suitability of any new assessment has been confirmed, it is possible to use both the historical testing data and the new testing data to avoid any breaks or delays in value-added reporting.

A couple of problems with this. First, there was NO complete administration of a new testing regime in 2015-16. It didn’t happen.

Second, EVAAS doesn’t get paid if there’s not a way to generate these “growth scores” so it is in their interest to find some justification for comparing the two very different tests.

Third, researchers who study value-added modeling are highly skeptical of the reliability of comparisons between different types of tests when it comes to generating value-added scores. I noted Lockwood and McCaffrey (2007) above. Here are some more:

John Papay (2011) did a similar study using three different reading tests, with similar results. He stated his conclusion as follows:

[T]he correlations between teacher value-added estimates derived from three separate reading tests — the state test, SRI [Scholastic Reading Inventory], and SAT [Stanford Achievement Test] — range from 0.15 to 0.58 across a wide range of model specifications. Although these correlations are moderately high, these assessments produce substantially different answers about individual teacher performance and do not rank individual teachers consistently. Even using the same test but varying the timing of the baseline and outcome measure introduces a great deal of instability to teacher rankings.

Two points worth noting here: First, different tests yield different value-added scores. Second, even using the same test but varying the timing can create instability in growth measures.

Then, there’s data from the Measures of Effective Teaching (MET) Project, which included data from Memphis. In terms of reliability when using value-added among different types of tests, here’s what MET reported:

Once more, the MET study offered corroborating evidence. The correlation between value-added scores based on two different mathematics tests given to the same students the same year was only .38. For 2 different reading tests, the correlation was .22 (the MET Project, 2010, pp. 23, 25).
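To see what correlations like .38 and .22 mean in practice, here is a quick hypothetical simulation (not Tennessee data): generate value-added estimates for the same teachers from two tests whose estimates correlate at roughly the MET level, then check how many teachers rated in the top quintile on one test stay in the top quintile on the other.

```python
# Hypothetical illustration of what a .38 correlation between value-added
# estimates from two different tests means for teacher rankings. Not TN data.
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 1000
target_corr = 0.38  # roughly the MET math figure quoted above

# Draw paired "value-added" estimates for the same teachers from two tests.
cov = [[1.0, target_corr], [target_corr, 1.0]]
est_a, est_b = rng.multivariate_normal([0.0, 0.0], cov, n_teachers).T

# Of the teachers in the top 20% on test A, how many are top 20% on test B?
top_a = est_a >= np.quantile(est_a, 0.8)
top_b = est_b >= np.quantile(est_b, 0.8)
stay_on_top = np.mean(top_b[top_a])

print(f"Observed correlation: {np.corrcoef(est_a, est_b)[0, 1]:.2f}")
print(f"Top-quintile on test A who are also top-quintile on test B: {stay_on_top:.0%}")
```

Run it and a substantial share of the “top” teachers on one test fall out of the top group on the other, which is the ranking instability Papay and the MET study describe.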
Despite the claims of EVAAS, the academic research raises significant concerns about extrapolating results from different types of tests. In short, when you move to a different test, you get different value-added results. As I noted in 2015:

If you measure different skills, you get different results. That decreases (or eliminates) the reliability of those results. TNReady is measuring different skills in a different format than TCAP. It’s BOTH a different type of test AND a test on different standards. Any value-added comparison between the two tests is statistically suspect, at best. In the first year, such a comparison is invalid and unreliable. As more years of data become available, it may be possible to make some correlation between past TCAP results and TNReady scores.

Or, if the state is determined to use growth scores (and wants to use them with accuracy), they will wait several years and build completely new growth models based on TNReady alone. At least three years of data would be needed in order to build such a model.

Dorsey Hopson and other Directors of Schools should be pushing back aggressively. Educators should be outraged. After all, this unreliable data will be used as a portion of their teacher evaluations this year. Schools are being rated on a 1-5 scale based on a growth model grounded in suspect methods.

How much is this apple like last year’s orange? How much will this apple ever be like last year’s orange?

If we’re determined to use value-added modeling to measure school-wide growth or district performance, we should at least be determined to do it in a way that ensures valid, reliable results.

For more on education politics and policy in Tennessee, follow @TNEdReport


 

It Doesn’t Matter Except When It Does

This year’s TNReady quick score setback means some districts will use the results in student report cards and some won’t. Of course, that’s nobody’s fault. 

One interesting note out of all of this came as Commissioner McQueen noted that quick scores aren’t what really matters anyway. Chalkbeat reports:

The commissioner emphasized that the data that matters most is not the preliminary data but the final score reports, which are scheduled for release in July for high schools and the fall for grades 3-8. Those scores are factored into teachers’ evaluations and are also used to measure the effectiveness of schools and districts.

“Not until you get the score report will you have the full context of a student’s performance level and strengths and weaknesses in relation to the standards,” she said.

The early data matters to districts, though, since Tennessee has tied the scores to student grades since 2011.

First, tying the quick scores to student grades is problematic. Assuming TNReady is a good, reliable test, we’d want the best results to be used in any grade calculation. Using pencil and paper this year makes that impossible. Even when we switch to a test fully administered online, it may not be possible to get the full scores back in time to use those in student grades.

Shifting to a model that uses TNReady to inform and diagnose rather than evaluate students and teachers could help address this issue. Shifting further to a project-based assessment model could actually help students while also serving as a more accurate indicator of whether they have met the standards.

Next, the story notes that teachers will be evaluated based on the scores. This will be done via TVAAS — the state’s value-added modeling system. Even as more states move away from value-added models in teacher evaluation, Tennessee continues to insist on using this flawed model.

Again, let’s assume TNReady is an amazing test that truly measures student mastery of standards. It’s still NOT designed for the purpose of evaluating teacher performance. Further, this is the first year the test has been administered. That means it’s simply not possible to generate valid data on teacher performance from this year’s results. You can’t just take this year’s test (TNReady) and compare it to the TCAP from two years ago. They are different tests designed to measure different standards in a different way. You know, the old apples and oranges thing.

One teacher had this to say about the situation:

“There’s so much time and stress on students, and here again it’s not ready,” said Tikeila Rucker, a Memphis teacher who is president of the United Education Association of Shelby County.

For more on education politics and policy in Tennessee, follow @TNEdReport


 

Mike Stein on the Teachers’ Bill of Rights

Coffee County teacher Mike Stein offers his thoughts on the Teachers’ Bill of Rights (SB14/HB1074) being sponsored at the General Assembly by Senator Mark Green of Clarksville and Representative Jay Reedy of Erin.

Here’s some of what he has to say:

In my view, the most impactful elements of the Teachers’ Bill of Rights are the last four items. Teachers have been saying for decades that we shouldn’t be expected to purchase our own school supplies. No other profession does that. Additionally, it makes much-needed changes to the evaluation system. It is difficult, if not impossible, to argue against the notion that we should be evaluated by other educators with the same expertise. While good teaching is good teaching, there are content-specific strategies that only experts in that subject would truly be able to appreciate fully. Both the Coffee County Education Association and the Tennessee Education Association support this bill.

And here are those four items he references:

This bill further provides that an educator is not: (1) Required to spend the educator’s personal money to appropriately equip a classroom; (2) Evaluated by professionals, under the teacher evaluation advisory committee, without the same subject matter expertise as the educator; (3) Evaluated based on the performance of students whom the educator has never taught; or (4) Relocated to a different school based solely on test scores from state mandated assessments.

The legislation would change the teacher evaluation system by effectively eliminating TVAAS scores from the evaluations of teachers in non-tested subjects — those scores may be replaced by portfolios, an idea the state has rolled out but not funded. Additionally, identifying subject matter specific evaluators could prove difficult, but would likely provide stronger, more relevant evaluations.

Currently, teachers aren’t required to spend their own money on classrooms, but many teachers do because schools too often lack the resources to meet the needs of students. It’s good to see Senator Green and Rep. Reedy drawing attention to the important issue of classroom resources.

For more on education politics and policy in Tennessee, follow @TNEdReport


 

Reform is Working

That’s the message from the Tennessee Department of Education based on recently released TCAP results and an analysis of the data over time.

You can see for yourself here and here.

The one area of concern is reading, but overall, students are performing better than they were when new TCAP tests were started and standards were raised.

Here’s the interesting thing: This is true across school districts and demographic subgroups. The trend is positive.

Here’s something else: A similar trend could be seen in results before the change in the test in 2009.

Tennessee students were steadily making gains. Teachers and schools were hitting the mark set for them by policymakers. This was in an age of collective bargaining for teachers and no TVAAS-based evaluation or pay schemes.

When the standards were made higher — certainly a welcome change — teachers again hit the mark.

Of course, since the standards change, lots of other reforms have taken place. Most of these have centered around teachers and the incorporation of TVAAS in teacher evaluation and even pay schemes. The State Board of Education even gutted the old state salary schedule to promote pay differentiation, ostensibly based on TVAAS scores.

But does pay for TVAAS actually lead to improved student outcomes as measured by TVAAS?

Consider this comparison of Putnam County and Cumberland County. Putnam was one of the original TIF (federal Teacher Incentive Fund) recipients and among the first to develop a pay scheme based on teacher evaluations and TVAAS.

Putnam’s 2014 TVAAS results are positive, to be sure. But neighboring Cumberland County (a district that is demographically similar and has a similar assortment of schools) also shows positive TVAAS results.  Cumberland relies on the traditional teacher pay scale. From 2012-13 to 2013-14, Putnam saw a 50% increase in the number of categories (all schools included) in which they earned TVAAS scores of 5. So did Cumberland County.

Likewise, from 2012-13 to 2013-14, Putnam saw a 13% decline in the number of categories in which they earned TVAAS scores below a 3. In Cumberland County, the number was cut by 11%.

This is one example over a two-year cycle. New district-level results for 2015 will soon be available and will warrant an update. But, it’s also worth noting that these results track with those seen in Denver in analyses of its ProComp pay system. Specifically, the University of Colorado’s Denver ProComp Evaluation Report (2010-2012) finds little impact of ProComp on student achievement, or on teachers’ professional practices, including their teaching practices or retention.

The Putnam-Cumberland initial analysis tracks with that of the ProComp studies: Teacher performance pay, even if devised in conjunction with teacher groups, cannot be said to have a significant impact on student performance over time.

So, prior to 2008, student academic achievement as measured by Tennessee standardized tests showed steady improvement over time. This occurred in an environment with no performance pay. Again, from 2009 to 2015, across districts and demographic groups, student achievement improved. Only a small number of Tennessee districts have performance pay schemes — so that alone would indicate that performance pay is not driving improved student outcomes. Then, a preliminary comparison of two districts suggests that both performance pay and non-performance pay districts see significant (and similar) TVAAS gains.

Reform may be working — but it may not be the reform the reformers want to push.

For more on education politics and policy in Tennessee, follow @TNEdReport

The Value of the Report Card on Teacher Training

Every year, the Tennessee Higher Education Commission issues a Report Card on the state’s teacher training program. To evaluate educator effectiveness, THEC uses the Tennessee Value-Added Assessment System.

Which effectively renders the Report Card of little value.

Not included in the report is a teacher’s overall effectiveness score on the TEAM model. That would include both observed scores and value-added data, plus other achievement measures. That would be a more robust score to report, but it’s not included.

I’ve written before on the very limited value of value-added data.

Here are some highlights of why we learn almost nothing from the THEC report in terms of whether or not a teacher education program is actually doing a good job:

Here’s the finding that gets all the attention: A top 5 percent teacher (according to value-added modeling or VAM) can help a classroom of students (28) earn $250,000 more collectively over their lifetime.

Now, a quarter of a million sounds like a lot of money.

But, in their sample, a classroom was 28 students. So, that equates to $8928.57 per child over their lifetime. That’s right, NOT $8928.57 MORE per year, MORE over their whole life.

For more math fun, that’s $297.62 more per year over a thirty-year career with a VAM-designated “great” teacher vs. with just an average teacher.
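For the record, here is the arithmetic behind those figures, using the same assumptions as above: a 28-student classroom and a thirty-year working career.

```python
# The arithmetic behind the figures above: a 28-student classroom and,
# for the per-year figure, a thirty-year working career.
lifetime_gain_per_class = 250_000
class_size = 28
career_years = 30

per_student_lifetime = lifetime_gain_per_class / class_size    # about $8,929
per_student_per_year = per_student_lifetime / career_years     # about $298

print(f"Per student, over a lifetime: ${per_student_lifetime:,.2f}")
print(f"Per student, per year:        ${per_student_per_year:,.2f}")
```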

Yep, get your kid into a high value-added teacher’s classroom and they could be living in style, making a whole $300 more per year than their friends who had the misfortune of being in an average teacher’s room.

If we go all the way down to what VAM designates as “ineffective” teaching, you’d likely see that number double, or maybe go a little higher. So, let’s say it doubles plus some. Now, your kid has a low VAM teacher and the neighbor’s kid has a high VAM teacher. What’s that do to his or her life?

Well, it looks like this: The neighbor kid gets a starting job offer of $41,000 and your kid gets a starting offer of $40,000.

So, THEC uses a marginal indicator of educator effectiveness to make a significant determination about whether or not educator training programs are effective. At the very least, such a determination should also include observed scores of these teachers over time or the entire TEAM score.

Until then, the annual Report Card on teacher training will add little value to the education policy discussion in Tennessee.

For more on education politics and policy in Tennessee, follow @TNEdReport