Inherently Unstable

That’s how TEA’s top lobbyist described the state’s teacher evaluation system, which is based on so-called “value-added” modeling. The remarks came during testimony before the House Education Committee. Here’s more from a TEA Facebook post:

For more on education politics and policy in Tennessee, follow @TNEdReport

Your support — $5 or more — makes publishing education news possible.


Growth Scores

Get your kid assigned to the right teacher and they just might grow a little taller, new research suggests.

Tennessee has long used something called “value-added assessment” to determine the amount of academic growth students make from year to year. These “growth scores” are then used to generate a score for teachers. The formula in Tennessee is known as TVAAS — Tennessee Value-Added Assessment System. Tennessee was among the first states in the nation to use value-added assessment, and the formula became a part of teacher evaluations in 2011.

Here’s how the Tennessee Department of Education describes the utility of TVAAS:


Because students’ performance is compared to that of their peers, and because their peers are moving through the same standards and assessment transitions at the same time, any drops in proficiency during these transitions have no impact on the ability of teachers, schools, and districts to earn strong TVAAS scores.

Now, research on value-added modeling indicates teacher assignment is almost as likely to predict the future height of students as it is their academic achievement. Here’s the abstract from a National Bureau of Economic Research working paper:

Estimates of teacher “value-added” suggest teachers vary substantially in their ability to promote student learning. Prompted by this finding, many states and school districts have adopted value-added measures as indicators of teacher job performance. In this paper, we conduct a new test of the validity of value-added models. Using administrative student data from New York City, we apply commonly estimated value-added models to an outcome teachers cannot plausibly affect: student height. We find the standard deviation of teacher effects on height is nearly as large as that for math and reading achievement, raising obvious questions about validity. Subsequent analysis finds these “effects” are largely spurious variation (noise), rather than bias resulting from sorting on unobserved factors related to achievement. Given the difficulty of differentiating signal from noise in real-world teacher effect estimates, this paper serves as a cautionary tale for their use in practice.

The researchers offer a word of caution:

Taken together, our results provide a cautionary tale for the naïve application of VAMs to teacher evaluation and other settings. They point to the possibility of the misidentification of sizable teacher “effects” where none exist. These effects may be due in part to spurious variation driven by the typically small samples of children used to estimate a teacher’s individual effect.
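The “noise” problem the researchers describe is easy to reproduce in a toy simulation. The sketch below is my own illustration, not the paper’s actual model: it gives 2,500 students purely random heights, groups them into 100 classes of 25, and computes a naive “teacher effect” for each class.

```python
import random
import statistics

random.seed(42)

# Illustrative numbers only: 100 "teachers", 25 students each.
# Heights are pure noise -- no teacher has any true effect.
N_TEACHERS, CLASS_SIZE = 100, 25

classes = [[random.gauss(150, 8) for _ in range(CLASS_SIZE)]
           for _ in range(N_TEACHERS)]

overall_mean = statistics.mean(h for cls in classes for h in cls)

# Naive "teacher effect": class mean height minus the overall mean.
effects = [statistics.mean(cls) - overall_mean for cls in classes]

# The spread is far from zero even though every true effect is zero.
print("SD of estimated teacher effects:", round(statistics.stdev(effects), 2))
```

With 25 students per class and a student-level standard deviation of 8 cm, sampling noise alone produces “effects” with a standard deviation around 1.6 cm (8 divided by the square root of 25), which is exactly the kind of spurious variation the paper warns about.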

In short: Using TVAAS to make decisions regarding hiring, firing, and compensation is bad policy.

However, the authors note that policymakers thirst for low-cost, convenient solutions:

In the face of data and measurement limitations, school leaders and state education departments seek low-cost, unbiased ways to observe and monitor the impact that their teachers have on students. Although many have criticized the use of VAMs to evaluate teachers, they remain a widely-used measure of teacher performance. In part, their popularity is due to convenience: while observational protocols which send observers to every teacher’s classroom require expensive training and considerable resources to implement at scale, VAMs use existing data and can be calculated centrally at low cost.

While states like Hawaii and Oklahoma have moved away from value-added models in teacher evaluation, Tennessee remains committed to this flawed method. Perhaps Tennessee lawmakers are hoping for the formula that will ensure a crop of especially tall kids ready to bring home a UT basketball national title.




Your TVAAS Story

TVAAS is Tennessee’s misguided approach to evaluating teachers.

While some states are moving away from value-added modeling for teacher evaluation, Tennessee remains focused on this flawed method.

I want to tell the story of how TVAAS impacts Tennessee teachers.

Has TVAAS impacted you through a merit pay scheme, a tenure decision, or a transfer? What impact have these scores had on your career?

If you want to share your story, get in touch by emailing me at andy@tnedreport.com.

 

 

Nullification

Remember when the Tennessee General Assembly first passed “hold harmless” legislation and then added “no adverse action” language so that TNReady scores from another failed administration would not negatively impact students, teachers, or schools?

It turns out, the return of TVAAS scores may in fact result in some adverse actions. I’ve reported on how the incorporation of TVAAS scores based on this year’s TNReady test into overall student growth projections could have lasting, negative impacts on teachers.

Now, Coffee County educator Mike Stein has a blog post up about this year’s TVAAS scores and a teacher’s Level of Effectiveness (LOE).

Here are a couple key takeaways:

Today is Thursday, October 25th and, as of today, I am 29% of the way into the school year. This afternoon, I received my overall teacher evaluation score from last school year (called the “level of effectiveness,” or L.O.E. for short). I have some major issues with how all of this is playing out.

To begin with, why am I just now finding out how well I did last school year? Teachers often times use the summer to make any kind of major adjustments to their curriculum and to their teaching strategies. It’s quite difficult to make changes in the middle of a unit in the middle of the second grading period–a situation where most teachers will find themselves right now. I remember a time not so long ago when teachers knew their L.O.E. by the end of the school year. Since the state’s implementation of TNReady, that hasn’t happened.

If I were a principal, I’d also be upset about the timing of the release of the L.O.E. scores. They shouldn’t have to wait this long into the school year before finding out who their effective and ineffective teachers were last year. Part of their job is to help the ineffective teachers get back on track. Granted, a good principal will probably already know who these teachers are, but nothing can be made official until the L.O.E. scores are released. These scores are also used to determine whether teachers are rehired the following school year and if teachers will be granted tenure. Personnel decisions should be made over the summer, and the late release of these teacher effectiveness scores is not helpful in the least.

NULLIFY

If you find out, as Mike did, that including the scores may have an undesirable impact, you have the option of nullifying your entire LOE — in fact, even if the score is good, if TNReady makes up any part of your overall LOE, you have the nullification option. Here’s more from Mike on that:

What immediately struck me is that all three of these options include my students’ growth on a flawed test that, by law, isn’t supposed to hurt me if last year’s test results are included, which they are. My overall L.O.E. score is a 4 out of 5, which still isn’t too bad, but the previous three years it has been a 5 out of 5. This means that the TNReady scores are, in fact, hurting my L.O.E. So what do I do now?

As the president of the Coffee County Education Association, I received the following message from TEA today that I quickly forwarded to my members: “To comply with the [hold harmless] legislation, teachers and principals who have 2017-18 TNReady data included in their LOE may choose to nullify their entire evaluation score (LOE) for the 2017-18 school year at their discretion. An educator’s decision to nullify the LOE can be made independently or in consultation with his/her evaluator during the evaluation summative conference. Nullification is completed by the educator in the TNCompass platform. The deadline for an educator to nullify his/her LOE is midnight CT on Nov. 30.”

In addition to the valid concerns Mike raises, I’ve heard from teachers in several districts noting mistakes in the LOE number. These may result from including TVAAS data in a way that negatively impacts a teacher or using the incorrect option when it comes to factoring in scores. It is my understanding that several districts have alerted TDOE of these errors and are awaiting a response.

One key question is: What happens if you nullify your scores, and therefore have no LOE this year? Here’s an answer from TDOE:

Educators who choose to nullify their 2017-18 LOE may still be able to earn Professional Development Points (PDPs). Educators who choose to nullify their 2017-18 LOE may use their 2016-17 score to earn applicable PDPs;

So, PDPs are covered if you nullify. Great.

For educators who nullify their 2017-18 LOE, the number of observations required in 2018-19 will be calculated based on 2016-17 data in conjunction with the educator’s current license type.

Looks like classroom observations have also been covered.

If a teacher chooses to nullify his or her 2017-18 LOE, he or she may still become eligible for tenure this year. Pursuant to T.C.A. § 49-5-503(4), “a teacher who has met all other requirements for tenure eligibility but has not acquired an official evaluation score during the last one (1) or two (2) years of the probationary period due to an approved extended leave; transfer to another school or position within the school district; or invalidated data due to a successful local level evaluation grievance pursuant to § 49-1-302(d)(2)(A) may utilize the most recent two (2) years of available evaluation scores achieved during the probationary period.”

The bottom line: If you do nullify (and many are in situations where that’s a good idea), there should be no future adverse impact according to TDOE’s guidance.

The larger issue, in my view, is the one Mike raises: It’s pretty late in the year to be returning evaluation feedback to teachers and principals. The LOE determines the number of observations a teacher is to have (which impacts principal workload). It could, as Mike indicates, also point to areas for improvement or teachers who need additional support. But providing those numbers well into the school year significantly reduces the opportunity for meaningful action on those fronts.

Despite all these stubborn facts, Tennessee’s Commissioner of Education points to the teacher evaluation process as a “key driver” of our state’s education success.

It seems highly unlikely a process this flawed is making much of a positive impact on teachers and schools.

 


Key Driver

Much is being made of Tennessee’s teacher evaluation system as a “key driver” in recent “success” in the state’s schools.

A closer look, however, reveals there’s more to the story.

Here’s a key piece of information in a recent story in the Commercial Appeal:

The report admits an inability to draw a direct, causal link between the changes in teacher evaluations, implemented during the 2011-12 school year, and the subsequent growth in classrooms across the state.

Over the same years, the state has also raised its education standards, overhauled its assessment and teacher preparation programs and implemented new turnaround programs for struggling schools.

Of course, it’s also worth noting that BEFORE any of these changes, Tennessee students were scoring well on the state’s TCAP test — teachers were given a mark and were consistently hitting the mark, no matter the evaluation style.

Additionally, it’s worth noting that “growth” as it relates to the current TNReady test is difficult to measure due to the unreliable test administration, including this year’s problems with hackers and dump trucks.

While the TEAM evaluation rubric is certainly more comprehensive than those used in the past, the classroom observation piece becomes difficult to capture in a single observation and the TVAAS-based growth component is fraught with problems even under the best circumstances.

Let’s look again, though, at the claim of sustained “success” since the implementation of these evaluation measures as well as other changes.

We’ll turn to the oft-lauded NAEP results for a closer look:

First, notice that between 2009 and 2011, Tennessee saw drops in 4th and 8th grade reading and 8th grade math. That helps explain the “big gains” seen in 2013. Next, note that in 4th and 8th grade reading and 4th grade math, our 2017 scores are lower than the 2013 scores. There’s that leveling off I suggested was likely. Finally, note that in 4th and 8th grade reading, the 2017 scores are very close to the 2009 scores. So much for “fastest-improving.”

Tennessee is four points below the national average in both 4th and 8th grade math. When it comes to reading, we are three points behind the national average in 4th grade and five points behind in 8th grade.

All of this to say: You can’t say you’re the fastest-improving state on NAEP based on one testing cycle. You also shouldn’t make long-term policy decisions based on seemingly fabulous results in one testing cycle. Since 2013, Tennessee has doubled down on reforms with what now appears to be little positive result.

In other words, in terms of a national comparison of education “success,” Tennessee still has a long way to go.

That may well be because we have yet to actually meaningfully improve investment in schools:

Tennessee is near the bottom. The data shows we’re not improving (since Bill Haslam became Governor). At least not faster than other states.

We ranked 44th in the country for investment in public schools back in 2010 — just before these reforms — and we rank 44th now.

Next, let’s turn to the issue of assessing growth. Even in good years, that’s problematic using value-added data:

And so perhaps we shouldn’t be using value-added modeling for more than informing teachers about their students and their own performance. Using it as one small tool as they seek to continuously improve practice. One might even mention a VAM score on an evaluation — but one certainly wouldn’t base 35-50% of a teacher’s entire evaluation on such data. In light of these numbers from the Harvard researchers, that seems entirely irresponsible.

Then, there’s the issue of fairness when it comes to using TVAAS. Two different studies have shown notable discrepancies in the value-added scores of middle school teachers at various levels:

Last year, I wrote about a study of Tennessee TVAAS scores conducted by Jessica Holloway-Libell. She examined 10 Tennessee school districts and their TVAAS score distribution. Her findings suggest that ELA teachers are less likely than Math teachers to receive positive TVAAS scores, and that middle school teachers generally, and middle school ELA teachers in particular, are more likely to receive lower TVAAS scores.

A second, more comprehensive study indicates a similar challenge:

The study used TVAAS scores alone to determine a student’s access to “effective teaching.” A teacher receiving a TVAAS score of a 4 or 5 was determined to be “highly effective” for the purposes of the study. The findings indicate that Math teachers are more likely to be rated effective by TVAAS than ELA teachers and that ELA teachers in grades 4-8 (mostly middle school grades) were the least likely to be rated effective. These findings offer support for the similar findings made by Holloway-Libell in a sample of districts. They are particularly noteworthy because they are more comprehensive, including most districts in the state.

These studies are based on TVAAS when everything else is going well. But, testing hasn’t been going well and testing is what generates TVAAS scores. So, the Tennessee Department of Education has generated a handy sheet explaining all the exceptions to the rules regarding TVAAS and teacher evaluation:

However, to comply with the Legislation and ensure no adverse action based on 2017-18 TNReady data, teachers and principals who have 2017-18 TNReady data included in their LOE (school-wide TVAAS, individual TVAAS, or achievement measure) may choose to nullify their entire evaluation score (LOE) for the 2017-18 school year at their discretion. No adverse action may be taken against a teacher or principal based on their decision to nullify his or her LOE. Nullifying an LOE will occur in TNCompass through the evaluation summative conference.

Then, there’s the guidance document which includes all the percentage options for using TVAAS:

What is included in teacher evaluation in 2017-18 for a teacher with 3 years of TVAAS data? There are three composite options for this teacher:

• Option 1: TVAAS data from 2017-18 will be factored in at 10%, TVAAS data from 2016-17 will be factored in at 10% and TVAAS data from 2015-16 will be factored in at 15% if it benefits the teacher.

• Option 2: TVAAS data from 2017-18 and 2016-17 will be factored in at 35%.

• Option 3: TVAAS data from 2017-18 will be factored in at 35%.

The option that results in the highest LOE for the teacher will be automatically applied. Since 2017-18 TNReady data is included in this calculation, this teacher may nullify his or her entire LOE this year.
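To see how these composites play out, here is a minimal sketch of the option comparison. The TVAAS scores are hypothetical, and I am assuming Option 2 splits its 35% evenly between the two years, which the guidance excerpt does not specify.

```python
# Hypothetical TVAAS scores (1-5 scale) for one teacher, by school year.
tvaas = {"2015-16": 5, "2016-17": 4, "2017-18": 2}

# The three composite options, as (year, weight-of-overall-LOE) pairs.
# Option 2's even 17.5/17.5 split is an assumption, not from the guidance.
options = {
    "Option 1": [("2017-18", 0.10), ("2016-17", 0.10), ("2015-16", 0.15)],
    "Option 2": [("2017-18", 0.175), ("2016-17", 0.175)],
    "Option 3": [("2017-18", 0.35)],
}

def growth_contribution(weights):
    """Weighted TVAAS contribution to the overall LOE (each option totals 35%)."""
    return sum(tvaas[year] * w for year, w in weights)

scores = {name: round(growth_contribution(w), 3) for name, w in options.items()}
best = max(scores, key=scores.get)
print(scores, "-> applied:", best)
```

For a teacher whose 2017-18 score dropped, Option 1 leans hardest on older data and wins; that is the mechanism by which the “highest LOE” rule cushions a bad TNReady year. Any option still folds the 2017-18 data in, though, which is why nullification remains on the table.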

That’s just one of several scenarios described to make up for the fact that the State of Tennessee simply cannot reliably deliver a test.

Let’s be clear: Using TVAAS to evaluate a teacher AT ALL in this climate is educational malpractice. But, Commissioner McQueen and Governor Haslam have already demonstrated they have a low opinion of Tennesseans:

Let’s get this straight: Governor Haslam and Commissioner McQueen think no one in Tennessee understands Google? They are “firing” the company that messed up this year’s testing and hiring a new company that owns the old one and that also has a reputation for messing up statewide testing.

To summarize, Tennessee is claiming success off of one particularly positive year on NAEP and on TNReady scores that are consistently unreliable. Then, Tennessee’s Education Commissioner is suggesting the “key driver” to all this success is a highly flawed evaluation system a significant portion of which is based on junk science.

The entire basis of this spurious claim is that two things happened around the same time. What else has happened since Tennessee implemented new teacher evaluations and TNReady? Really successful seasons for the Nashville Predators.

Correlation does NOT equal causation. Claiming teacher evaluations are a “key driver” of some fairly limited success story is highly problematic, though typical of this Administration.

Take a basic stats class, Dr. McQueen.

 



 

Deleted

In the wake of last year’s TNReady troubles, the Tennessee General Assembly passed legislation saying “no adverse action” could be taken against teachers, students, or schools based on the results. While legislators passed the bill late in the session, the Tennessee Department of Education was left to implement policy.

As this school year is up and running, teachers and administrators are asking what to do with data from 2017-18. Helpfully, the TDOE released this handy guidance document. The document lets teachers know they can choose to nullify their entire Level of Effectiveness (LOE) score from 2017-18 if TNReady scores were included in any part of a teacher’s overall TEAM evaluation score.

But nullifying your score could lead to unintended “adverse actions,” couldn’t it? Well, maybe. But, the always thoughtful TDOE is ahead of the game. They also have a guide to nullification.

This guide makes clear that even if a teacher chooses to nullify his or her entire LOE for 2017-18, no adverse action will impact that teacher.

Here are a couple key points:

Educators who choose to nullify their 2017-18 LOE may still be able to earn Professional Development Points (PDPs). Educators who choose to nullify their 2017-18 LOE may use their 2016-17 score to earn applicable PDPs;

So, PDPs are covered if you nullify. Great.

For educators who nullify their 2017-18 LOE, the number of observations required in 2018-19 will be calculated based on 2016-17 data in conjunction with the educator’s current license type.

Looks like classroom observations have also been covered.

If a teacher chooses to nullify his or her 2017-18 LOE, he or she may still become eligible for tenure this year. Pursuant to T.C.A. § 49-5-503(4), “a teacher who has met all other requirements for tenure eligibility but has not acquired an official evaluation score during the last one (1) or two (2) years of the probationary period due to an approved extended leave; transfer to another school or position within the school district; or invalidated data due to a successful local level evaluation grievance pursuant to § 49-1-302(d)(2)(A) may utilize the most recent two (2) years of available evaluation scores achieved during the probationary period.”

Worried about tenure? TDOE has you covered!

So far, so good, right?

Well, then there was an email from the Education Value-Added Assessment System (EVAAS), the SAS platform used to calculate TVAAS.

Here’s what teachers saw in their inboxes this week:

Due to the upcoming release of TVAAS reports for the 2017-18 school year, some of the data from the 2016-17 reporting will no longer be available.

*    The current student projections will be removed and replaced with new projections based on the most recent year of assessment data.
*    Current Custom Student reports will be removed.
*    District administrators will lose access to Teacher Value-Added reports and composites for teachers who do not receive a Teacher Value-Added report in their district in 2017-18.
*    School administrators will lose access to Teacher Value-Added reports and composites for teachers in their school who do not receive a Value-Added report in 2017-18.

If you would like to save value-added and student projection data from the 2016-17 reporting, you must print or export that data by September 26. TVAAS users are reminded to follow all local data policies when exporting or printing confidential data.

But wait, the 2016-17 data is crucial for teachers who choose to nullify their 2017-18 LOE. Why is a significant portion of this data being deleted?

Also, note that student projections are being updated based on the 2017-18 scores.

What?

The 2017-18 test was plagued by hackers, dump trucks, and mixed up tests. Still, the TDOE plans to use that data to update student projections. These projections will then be used to assign value-added scores going forward.

That’s one hell of an adverse impact. Or, it could be. It really depends on how the 2017-18 scores impact the projected performance of given students.

The legislation in plain language indicated teachers and schools would face “no adverse action” based on the 2017-18 TNReady administration. Now, teachers are being told that future student growth projections will be based on data from this test. It’s possible that could have a positive impact on a teacher’s future growth score. It certainly could also have a rather negative impact.

The potentially adverse action of allowing the 2017-18 TNReady scores to impact future growth scores for teachers and schools has not been addressed.

By the way, we now have the following set of apples, oranges, and bananas from which we are determining student growth:

2015 — TCAP

2016 — NO TNReady

2017 — pencil and paper TNReady

2018 — Hacker and Dump Truck TNReady

It’s difficult to see how any reliable growth score can be achieved using these results.

 



 

Survey Says

Teacher and blogger Mary Holden writes about her experience with TNReady this year as she reflects on a survey sent by the Comptroller.

Here’s some of what she has to say:

Let me see if I can sum up this year’s TNReady experience:

  • Some students couldn’t log on at all because their login information was incorrect.
  • Some students couldn’t log on at all because their laptops were offline and we had to find the IT person to help. Or get another laptop and hope it worked.
  • Some students logged on, started their tests, and then got booted off the testing site in the middle of testing. Then they had trouble logging back on.
  • Some students logged back in after being booted off the site and their progress hadn’t been saved so they had to start all over again.
  • Some students completed their whole test, clicked on the “Submit test” button, and then got booted off the site. Then they couldn’t log back on. Then maybe, hours later, when they were called back, they logged back on the site and then, hopefully, their progress had been saved and they were finally able to submit their test.
  • Some students needed an extra password – a proctor password – to log back in, so we had to find the person who had that.

Through all this frustration and stress with the online testing platform and connectivity issues, students were told to do their best because this test was going to count for 20 percent of their class grade. They were stressed. They were angry. They felt they were being jerked around by the state of Tennessee. And they weren’t wrong. In the middle of the testing window, we learned that scores would not count. And they still had to continue testing! It was unreal.

And that is only what I personally experienced as a test proctor.

Statewide, we had even more ridiculous things happening – the testing platform was hacked (a “deliberate attack” was made on the site)(ummmm…. should we be more worried about this?), the testing site was down, a dump truck may or may not have been involved in a severed cable line – a line that just happened to be responsible for the testing site (for real?), and some students took the wrong test – and I could go on and on and on.



 

No Adverse Action

After much wrangling in a day that saw the Tennessee House of Representatives hold up proceedings in order to move forward with an effort to truly hold students, teachers, and schools harmless in light of this year’s TNReady trouble, it appears a compromise of sorts has been reached.

Here’s the language just adopted by the Senate and subsequently passed by the House:

SECTION 1. Tennessee Code Annotated, Title 49, Chapter 6, Part 60, is amended by adding the following language as a new section: Notwithstanding any law to the contrary, no adverse action may be taken against any student, teacher, school, or LEA based, in whole or in part, on student achievement data generated from the 2017-2018 TNReady assessments. For purposes of this section, “adverse action” includes, but is not limited to, the identification of a school as a priority school and the assignment of a school to the achievement school district.

This language does not explicitly address the issue of using TNReady for TVAAS, but it has an effect similar to legislation passed in 2016 during that year’s TNReady trouble. Yes, it seems problems with testing in Tennessee are the norm rather than the exception.

Here’s what this should mean for teachers: Yes, a TVAAS score will be calculated based on this year’s TNReady. But, if that TVAAS score lowers your overall TEAM score, it will be excluded — lowering your TEAM score would be an “adverse action.”

While not perfect, this compromise is a victory — the TNReady data from a messed up test will not harm grades or be used in the state’s A-F report card for schools or be used to give a negative growth score to a teacher via TVAAS.

Yes, TVAAS is still suspect, but there’s an election in November and a new Commissioner of Education coming after that. Heading into the November election is a great time to talk with candidates for the legislature and for Governor about the importance of evaluations that are fair and not based on voodoo math like TVAAS. Remember, even under the best of circumstances, TVAAS would not have yielded valid results this year.

While it is disappointing that Senators did not want to follow the lead of their House counterparts and explicitly deal with the TVAAS issue, there’s no doubt that persistent outreach by constituents moved the needle on this issue.



 

Would You Eat This Pie?

After last week’s TNReady failure, the Tennessee General Assembly took some action to mitigate the impact the test would have on students and teachers.

I wrote at the time that the legislature’s action was a good step, but not quite enough:

  1. The law does say that districts and schools will not receive an “A-F” score based on the results of this year’s test. It also says schools can’t be placed on the state’s priority list based on the scores. That’s good news.

  2. The law gives districts the option of not counting this year’s scores in student grades. Some districts had already said they wouldn’t count the test due to the likelihood the scores would arrive late. Now, all districts can take this action if they choose.

  3. The law says any score generated for teachers based on this year’s test cannot be used in employment/compensation decisions.

Here’s what the law didn’t say: There will be NO TVAAS scores for teachers this year based on this data.

In other words, this year’s TNReady test WILL factor into a teacher’s evaluation.

The Department of Education took some steps to clarify what that means for teachers and offered a handy pie chart to explain the evaluation process:

First, this chart makes clear that this year’s TNReady scores WILL factor into a teacher’s overall evaluation.

Second, this chart is crazy. A teacher’s growth score is based on tests from three different years and three different types of tests.

15% of the growth score comes from the old TCAP (the test given in 2014-15, because the 2015-16 test had some problems). Then, 10% comes from last year’s TNReady, which was given on paper and pencil. Last year was the first full administration of TNReady, and there were problems with the data calculation. A final 10% comes from this year’s TNReady, given online.

So, you have data from the old test, a skipped year, data from last year’s test (the first time TNReady had truly been administered), and data from this year’s messed up test.

There is no way this creates any kind of valid score related to teacher performance. At all.

In fact, transitioning to a new type of test creates validity issues. The way to address that is to gather three or more years of data and then build on that.

Here’s what I noted from statisticians who study the use of value-added to assess teacher performance:

Researchers studying the validity of value-added measures asked whether value-added gave different results depending on the type of question asked. That question is particularly relevant now, because Tennessee is shifting to a new test with different types of questions.

Here’s what Lockwood and McCaffrey (2007) had to say in the Journal of Educational Measurement:

We find that the variation in estimated effects resulting from the different mathematics achievement measures is large relative to variation resulting from choices about model specification, and that the variation within teachers across achievement measures is larger than the variation across teachers. These results suggest that conclusions about individual teachers’ performance based on value-added models can be sensitive to the ways in which student achievement is measured.

These findings align with similar findings by Martineau (2006) and Schmidt et al. (2005): you get different results depending on the type of question you’re measuring.

The researchers tested various VAM models (including the type used in TVAAS) and found that teacher effect estimates changed significantly based on both what was being measured AND how it was measured.

And they concluded:

Our results provide a clear example that caution is needed when interpreting estimated teacher effects because there is the potential for teacher performance to depend on the skills that are measured by the achievement tests.

If you measure different skills, you get different results. That decreases (or eliminates) the reliability of those results. TNReady is measuring different skills in a different format than TCAP. It’s BOTH a different type of test AND a test on different standards. Any value-added comparison between the two tests is statistically suspect, at best. In the first year, such a comparison is invalid and unreliable. As more years of data become available, it may be possible to establish some correlation between past TCAP results and TNReady scores.
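The sensitivity the researchers describe can be shown with a toy simulation. To be clear, this is my own sketch, not the Lockwood and McCaffrey analysis; the teacher count and noise levels are invented for illustration. When two tests each mix a teacher’s true effect with measure-specific skills and noise, many teachers land in the top quintile on one measure but not the other.

```python
# Toy illustration (not the actual Lockwood-McCaffrey analysis): two tests
# that measure partly different skills can rank the same teachers differently.
import random

random.seed(1)
n_teachers = 500

# Each estimate = shared true effect + measure-specific skill/noise component.
true_effect = [random.gauss(0, 1) for _ in range(n_teachers)]
test_a = [t + random.gauss(0, 1.5) for t in true_effect]   # e.g., TCAP-style items
test_b = [t + random.gauss(0, 1.5) for t in true_effect]   # e.g., TNReady-style items

def top_quintile(xs):
    """Return the indices of the top 20% of scores."""
    cutoff = sorted(xs, reverse=True)[n_teachers // 5 - 1]
    return {i for i, x in enumerate(xs) if x >= cutoff}

overlap = len(top_quintile(test_a) & top_quintile(test_b))
print(f"Teachers rated top-20% on both measures: {overlap} of {n_teachers // 5}")
```

With the made-up noise levels above, well under half of the “top” teachers on one measure stay on top under the other, which is the basic validity worry in miniature.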

I’ve written before about the shift to TNReady and any comparisons to prior tests being like comparing apples and oranges.

Here’s what the TN Department of Education’s pie chart does: It compares an apple to nothing to an orange to a banana.

Year 1: Apple (which counts 15%)

Year 2: Nothing, test was so messed up it was cancelled

Year 3: Orange – first year of TNReady (on pencil and paper)

Year 4: Banana – Online TNReady is a mess; students experience login and submission problems across the state.

From these four events, the state is suggesting that somehow, a valid score representing a teacher’s impact on student growth can be obtained. The representative from the Department of Education at today’s House Education Instruction and Programs Committee hearing said the issue was not that important because this year’s test counts for only 10% of a teacher’s overall growth score. Some teachers disagree.

Also, look at that chart again. Too far up? Too confusing? Don’t worry, I’ve made a simpler version:

For more on education politics and policy in Tennessee, follow @TNEdReport

Your support keeps Tennessee Education Report going strong — thank you!

Driving Teachers Crazy

State Representative Jeremy Faison of Cosby says the state’s teacher evaluation system, and especially the portion that relies on student TNReady scores, is causing headaches for Tennessee’s teachers.

Faison made the remarks at a hearing of the House Government Operations Committee, which he chairs. The hearing featured teachers, administrators, and representatives from the Department of Education and Tennessee’s testing vendor, Questar.

Zach Vance of the Johnson City Press reports:

“What we’re doing is driving the teachers crazy. They’re scared to death to teach anything other than get prepared for this test. They’re not even enjoying life right now. They’re not even enjoying teaching because we’ve put so much emphasis on this evaluation,” Faison said.

Faison also said that if the Department of Education were getting ratings on a scale of 1 to 5, as teachers do under the state’s evaluation system (the TEAM model), there are a number of areas where the Department would receive a 1. Chief among them is communication:

“We’ve put an immense amount of pressure on my educators, and when I share with you what I think you’d get a one on, I’m speaking for the people of East Tennessee, the 11th House District, from what I’m hearing from 99.9 percent of my educators, my principal and my school superintendents.”

Rather frankly, Faison said both the state Department of Education and Questar should receive a one for their communication with local school districts regarding the standardized tests.

Faison’s concerns about the lack of communication from the TNDOE echo concerns expressed by Wilson County Director of Schools Donna Wright recently related to a different issue. While addressing the state’s new A-F report card to rate schools, Wright said:

We have to find a way to take care of our kids and particularly when you have to look at kids in kindergarten, kids in the 504 plan and kids in IEP. When you ask the Department of Education right now, we’re not getting any answers.

As for including student test scores in teacher evaluations, currently a system known as the Tennessee Value Added Assessment System (TVAAS) is used to estimate the impact a teacher has on a student’s growth over the course of the year. At best, TVAAS is a very rough estimate of a fraction of a teacher’s impact. The American Statistical Association notes that teachers account for between 1% and 14% of the variability in student test scores.

Now, however, Tennessee is in the midst of a testing transition. While Education Commissioner Candice McQueen notes that value-added scores count for less in evaluations during the transition (15% this past year, 20% for the current year), why count any percentage of a flawed score? When changing tests, the value of TVAAS is particularly limited:

Here’s what Lockwood and McCaffrey (2007) had to say in the Journal of Educational Measurement:

We find that the variation in estimated effects resulting from the different mathematics achievement measures is large relative to variation resulting from choices about model specification, and that the variation within teachers across achievement measures is larger than the variation across teachers. These results suggest that conclusions about individual teachers’ performance based on value-added models can be sensitive to the ways in which student achievement is measured.

These findings align with similar findings by Martineau (2006) and Schmidt et al. (2005): you get different results depending on the type of question you’re measuring.

The researchers tested various VAM models (including the type used in TVAAS) and found that teacher effect estimates changed significantly based on both what was being measured AND how it was measured.

And they concluded:

Our results provide a clear example that caution is needed when interpreting estimated teacher effects because there is the potential for teacher performance to depend on the skills that are measured by the achievement tests.

If you measure different skills, you get different results. That decreases (or eliminates) the reliability of those results. TNReady is measuring different skills in a different format than TCAP. It’s BOTH a different type of test AND a test on different standards. Any value-added comparison between the two tests is statistically suspect, at best.

After the meeting, Faison confirmed that legislation will be forthcoming that detaches TNReady data from teacher evaluation and student grades.

Faison’s move represents policymaking based on the recognition that TNReady is in its early stages and that more years of data are needed to ensure a better performance estimate. Or, as one principal who testified before the committee put it, there’s nothing wrong with taking the time to get this right.

For more on education politics and policy in Tennessee, follow @TNEdReport