As Tennessee prepares to test more students than ever via an online platform, there are some signs of potential trouble.
State officials said Thursday they are confident the new digital platform will work under heavy traffic, even as their new testing vendor, Questar, had headaches administering computer-based tests in New York on Wednesday. Some students there struggled to log on and submit their exam responses — issues that Questar leaders blamed on a separate company providing the computer infrastructure that hosts the tests.
Tennessee officials say they are working with Questar to ensure similar problems don’t occur here. They also point out that Tennessee’s online testing infrastructure is different and that Questar will have troubleshooting staff in the state during test administration.
Here’s the problem: across multiple testing vendors, and dating back to TCAP, Tennessee has had problems with testing, including the now-perennial failure to deliver scores back to districts in a timely manner. In fact, in December, districts were told scores might not come back in a timely fashion this year, either.
It’s possible the state and Questar have all the issues worked out and this year’s test administration will be nearly flawless. However, the record over the past few years is not encouraging.
Then, there’s the issue of what happens with the results. If they are available for factoring into student scores, it is up to districts to choose the method. I’ve written before about why that’s problematic. Here’s a quick summary:
The cube root method yielded a quick score (the score that counts toward a course grade) that was, on average, 4.46 points higher. In other words, a student scoring basic with a raw score of 30 or higher would, on average, receive an extra 4.46 points on the final quick score that goes on their report card. A student who scored a 70 last year could expect to receive a 74 under the new quick score calculation.
The additional points do shrink as one goes up the raw score scale, however. The average basic student in grades 3-8, with a raw score between 30 and 47, would receive an extra 5.41 points under the new method.
The average proficient student in grades 3-8, with a raw score between 48 and 60, would get an extra 4.32 points under the new method.
The average advanced student in grades 3-8, with a raw score between 61 and 67, would receive an extra 1.97 points under the new method.
The difference varies much more widely for below basic students and can be as much as 25 points in some cases.
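To see why the choice of conversion method matters, here is a minimal sketch. The actual TDOE quick score formulas are not reproduced in this post, so both conversions below are hypothetical stand-ins (a simple percent-of-max scaling versus a cube-root scaling), and the numbers they produce will not match the averages cited above. The point is only the shape of the effect: a cube-root-style curve lifts mid-range raw scores more than scores near the top.

```python
# Hypothetical illustration only: neither formula below is the actual
# Tennessee quick score calculation. They exist to show how two
# different conversion methods can assign different report-card
# grades to the same raw score.

def linear_quick_score(raw: int, max_raw: int) -> float:
    """Hypothetical linear conversion: percent of maximum, 0-100 scale."""
    return 100 * raw / max_raw

def cube_root_quick_score(raw: int, max_raw: int) -> float:
    """Hypothetical cube-root conversion: compresses the scale upward,
    so mid-range raw scores gain the most relative to a linear scale."""
    return 100 * (raw / max_raw) ** (1 / 3)

if __name__ == "__main__":
    max_raw = 67  # top of the raw-score range cited for grades 3-8
    for raw in (30, 48, 61):
        lin = linear_quick_score(raw, max_raw)
        cube = cube_root_quick_score(raw, max_raw)
        print(f"raw {raw}: linear {lin:.1f}, cube root {cube:.1f}, "
              f"bonus {cube - lin:+.1f}")
```

Running the sketch shows the cube-root bonus shrinking as the raw score climbs toward the maximum, which mirrors the pattern in the averages above: basic students gain the most, advanced students the least.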
So, for those districts using quick scores in report cards, there could be wide variance across districts depending on the method chosen. It seems to me districts should already have communicated to families how they will calculate quick scores, with some justification for that choice. Alternatively, the state could have (should have?) mandated a single method so that scores are consistent across the state.
Then, of course, there’s the issue of using these scores in teacher evaluation. Let’s say testing goes well this year. This would be the first year of a test without problems. If that happens, this should serve as the baseline for any test-based teacher evaluation. Yes, I think using value-added scores is a misguided approach, but if Tennessee is going to go this route, the state ought to take steps to ensure the data is as accurate as possible. That would require at least three years of successful test administration. So far, we have zero.
If TNReady is a great test that has the potential to offer us useful insight into student learning, it’s worth taking the time to get it right. So far, it seems Tennessee has yet to learn the lesson of the NAEP outlier — we don’t need rapid acceleration, we need to be patient, take our time, and focus.
For more on education politics and policy in Tennessee, follow @TNEdReport