Turns Out, TNReady Still Isn’t

Tennessee's statewide testing system for students (TNReady) is a spectacular failure, and it just keeps failing. So much so that when districts announce that TNReady scores won't be factored into student grades yet AGAIN, it's not even a major news story.

Sure, the state pays in excess of $100 million for the test, and yes, teachers are evaluated based on the results, but the test is a colossal waste of time year after year after year.

Here’s a recent announcement from Sumner County Schools about this year’s test scores:

Dear Parents,

Earlier this week, we were informed by the Tennessee Department of Education that the TNReady scores for third, fourth and fifth grade were incorrect for several elementary schools and were scored again by the state. The new scores were not returned before final report cards were sent home on Thursday. TNReady scores for grades 6–8 were received 3.5 school days before report cards were issued.

State law requires TNReady testing to count a minimum of 15% of a student's grade. School Board Policy 4.600 states that in the event of testing modifications by the state, such as a delay in scores being returned to the district, Sumner County Schools can waive the 15% TNReady grade. Due to this issue in testing, we will not include the TNReady score in your student's final report card. Your student's grade will be calculated by averaging the final grade from the first and second semester.

In the fall, you will receive your child's full TNReady scores.
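To see what the waiver actually changes, here's a rough sketch of the grade arithmetic the letter describes. All names and numbers below are hypothetical illustrations, not Sumner County's actual formula; the only figure taken from the letter is the 15% weight required by state law:

```python
# Hypothetical illustration of the grade arithmetic described in the letter.
# The function names and example grades are assumptions for illustration only.

def final_grade_with_tnready(sem1: float, sem2: float, tnready: float,
                             weight: float = 0.15) -> float:
    """Blend the two-semester average with a TNReady score at the given weight."""
    semester_avg = (sem1 + sem2) / 2
    return (1 - weight) * semester_avg + weight * tnready

def final_grade_waived(sem1: float, sem2: float) -> float:
    """With the TNReady component waived, simply average the two semesters."""
    return (sem1 + sem2) / 2

# A student with semester grades of 90 and 80 and a TNReady-equivalent score of 70:
print(round(final_grade_with_tnready(90, 80, 70), 2))  # 82.75
print(final_grade_waived(90, 80))                      # 85.0
```

In this invented example, the waiver moves the student's final grade from 82.75 to 85, which is why whether scores arrive before report cards go home is more than a paperwork question.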

And here’s a notice from Metro Nashville Public Schools about TNReady:

I’m sure similar notices went out in other districts across the state.

So, the state spends millions on the test, schools spend hours prepping for it, students spend days taking the exams, and then — NOTHING. No score that is useful for grades, no return of data in a timely fashion.

In fact, TNReady has failed so often and in so many ways, the clown show is now just accepted as an annual rite of passage. We’ll give the test because the state can’t imagine NOT testing every year and then we’ll fully expect there to be one or several problems. A surprising TNReady year would be one in which there were no problems with administration AND the results came back on time.

It’s bad public policy when the bare minimum acceptable outcome IS the surprising outcome. Alas, that’s the case with TNReady.

For more on education politics and policy in Tennessee, follow @TNEdReport

Your support – $5 or more – makes publishing education news possible.

Muddy Waters

Laura Faith Kebede of Chalkbeat reports on the challenges in generating reliable TVAAS scores as a result of TNReady trouble last year. Her story cites a statistician from the Center for Assessment who explains the issue this way:

Damian Betebenner, a senior associate at the Center for Assessment who regularly consults with state departments, said missing data on top of a testing transition “muddies the water” on results.

“When you look at growth over two years, so how much the student grew from third to fifth grade, then it’s probably going to be a meaningful quantity,” he said. “But to then assert that it isolates the school contribution becomes a pretty tenuous assertion… It adds another thing that’s changing underneath the scene.”

In other words, it’s difficult to get a meaningful result given the current state of testing in Tennessee. I wrote recently about this very issue and the problem with the validity of the growth scores this year.

Additionally, two years ago, I pointed out the challenges the state would face when shifting to a new test. Keep in mind, this was before all the TNReady trouble that further muddied the waters. Here’s what I said in March of 2015:

Here’s the problem: There is no statistically valid way to predict expected growth on a new test based on the historic results of TCAP. First, the new test has (supposedly) not been fully designed. Second, the test is in a different format. It is both computer-based and contains constructed-response questions. That is, students must write out answers and/or demonstrate their work.

Since Tennessee has never had a test like this, it’s impossible to predict growth at all. Not even with 10% confidence. Not with any confidence. It is the textbook definition of comparing apples to oranges.

The way to address this issue? Build multiple years of data in order to obtain reliable results:

If you measure different skills, you get different results. That decreases (or eliminates) the reliability of those results. TNReady is measuring different skills in a different format than TCAP. It’s BOTH a different type of test AND a test on different standards. Any value-added comparison between the two tests is statistically suspect, at best. In the first year, such a comparison is invalid and unreliable. As more years of data become available, it may be possible to make some correlation between past TCAP results and TNReady scores.

So, now we have two challenges: We have two different types of tests AND we have a missing year of data. Either one of these challenges creates statistical problems. The combination of the two calls for a serious reset of the state’s approach to accountability.

As I suggested yesterday, taking the time to get this right would mean not using the TNReady data for accountability for teachers, students, or schools until 2019 at the earliest. If our state is committed to TNReady, we should be committed to getting it right. We’re spending a lot of money on both TNReady and on TVAAS. If we’re going to invest in these approaches, we should also take the time to be sure that investment yields useful, reliable information.

Why does any of this matter? Because, as Kebede points out:

At the same time, TVAAS scores for struggling schools will be a significant factor in determining which improvement tracks they will be placed on under the state’s new accountability system, as outlined in its plan to comply with the federal Every Student Succeeds Act. For some schools, their TVAAS score will be the difference between continuing under a local intervention model or being eligible to enter the state-run Achievement School District. The school growth scores will also determine which charter schools are eligible for a new pot of state money for facilities.

TVAAS scores also count in teacher evaluations. TNReady scores were expected to count in student grades until the quick scores weren’t back in time. If all goes well with the online administration of TNReady this year, the scores will count for students.

The state says TNReady matters. The state evaluates schools based on TVAAS scores. The state teacher evaluation formula includes TVAAS scores for teachers and TNReady scores as one measure of achievement that can be selected.

In short: Getting this right matters.



PET Releases Testing Survey

Professional Educators of Tennessee (PET) released a survey this week on teacher attitudes toward standardized testing.

Here’s the release and a link to a detailed report:

In April of 2015, Professional Educators of Tennessee surveyed Tennessee educators regarding their opinion of standardized testing in the state of Tennessee. The survey was distributed via email to all members and on social media, as well as being made available to all educators on the Professional Educators of Tennessee website.

A total of 208 educators completed the survey, 134 of them classroom teachers. Eighty-five percent of educators stated that standardized testing takes up “too much” of classroom instructional time. And, as the state moves to online testing, there appear to be numerous glitches in the testing procedure.

Based on the survey results, PET recommends:

Based on these survey results, standardized testing in Tennessee proves to be a major driving force in classroom instruction. This survey indicates that virtually every school has broadband internet, yet 89% indicated there were issues with the online testing provided. These issues can and will negatively impact test results. Professional Educators of Tennessee proposes that all testing continue to be done on paper and pencil; or that testing sessions interrupted by technical difficulties be coded in a special way and either discarded or given again with different test items; or that schools endure the tests with possible technical difficulties and be held harmless until the percentage of tests taken without technical interference or interruption reaches a threshold of 95% or higher. Also, before a teacher’s TVAAS scores are linked to students’ testing performance, these online testing malfunctions (computers/websites freezing, connectivity issues) must be addressed.
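To make PET's hold-harmless proposal concrete, here is a minimal sketch of how a district might check whether its share of uninterrupted tests clears the bar. The test counts below are invented for illustration; the only figure taken from the release is the 95% threshold:

```python
# Sketch of PET's proposed 95% hold-harmless threshold.
# The function and the example test counts are hypothetical;
# only the threshold comes from the PET release.

def clears_threshold(total_tests: int, interrupted: int,
                     threshold: float = 0.95) -> bool:
    """True if the share of tests completed without technical
    interference or interruption meets the threshold."""
    clean_share = (total_tests - interrupted) / total_tests
    return clean_share >= threshold

print(clears_threshold(1000, 89))  # False: only 91.1% of tests were clean
print(clears_threshold(1000, 30))  # True: 97% of tests were clean
```

Under a rule like this, even a modest rate of frozen screens or dropped connections would keep a school in hold-harmless status.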


Is THAT even legal?

That’s the question the Tennessee Education Association is asking about the use of value-added data (TVAAS) in teacher evaluations.

The TEA, joining with the National Education Association, has filed a lawsuit challenging the constitutionality of Tennessee’s use of TVAAS data in teacher evaluations.

According to a press release, TEA is specifically concerned about teachers who receive value-added scores based on students they have never taught. A significant number of Tennessee teachers currently receive a portion of their evaluation score from school-wide or other TVAAS data, meaning they are graded on students who were never in their classrooms.

The release states:

More than half of the public school teachers in Tennessee receive evaluations that are based substantially on standardized test scores of students in subjects they do not teach. The lawsuit seeks relief for those teachers from the arbitrary and irrational practice of measuring their effectiveness with statistical estimates based on standardized test scores from students they do not teach and may have never met. 

While Governor Haslam is proposing that the legislature reduce the impact of TVAAS scores on teacher evaluations during the state’s transition to new standardized tests, his proposal does not address the issues of statistical validity with the transition. There is no way to determine how TCAP scores will interface with the scores from a test that has not even been developed yet. To hold teachers accountable for data generated in such an unreliable fashion is not only statistically suspect, it’s disrespectful.

Finally, it’s worth noting that value-added data doesn’t do much in terms of differentiating teacher performance. Of course, even if it did, holding teachers accountable for students they don’t teach defies logic.
