Over at Bluff City Ed, Jon Alfuth digs into the questions surrounding this year’s release of TCAP quick scores and their correlation to student performance on the TCAP.
This year, the way quick scores were calculated in relation to raw scores was shifted so that grades 3-8 (TCAP) scores matched the EOC scores students see in high school.
One key question is why make this change in the last year of TCAP? Next year, Tennessee students will see TNReady — so, making the calculation change now doesn’t seem to serve much purpose.
Alfuth does a nice job of explaining what’s going on and why it matters. Here are some key highlights:
Lack of Communication
They (TN DOE) didn’t make it clear to teachers, parents or students that they were changing the policy, resulting in a lot of confusion and frustration over the past few days as everyone grapples with these new quick scores.
From the second memo, they note that they changed to raw scores because of concerns about getting final quick scores out on time during the transition to a new test, stating that if they did it based on proficiency, it would take until the middle of the summer to make them happen.
I’d buy that…except that the Department of Education has always been able to get the quick scores out on time before. And last I checked, we weren’t transitioning to TNReady this year – the transition occurs next year. So why mess with the cut scores this year? Is this just a trial run, an experiment? It feels like we’re either not getting the whole story, or, if we are, that there is some seriously faulty logic behind this decision that someone is trying to explain away.
It’s worth noting that last year, the quick scores weren’t available on time and most districts received a waiver from including TCAP scores in student grades. I note this to say that concern about getting quick scores out on time has some merit given recent history.
To me, though, this raises the question: Why are TCAP scores factored into a student’s grades? Ostensibly, this is so 1) students take the tests seriously and 2) how a teacher assesses a student matches up with the desired proficiency levels on the appropriate standards.
Of course, quick scores are only available for tested subjects, leaving one to wonder if other subjects are less important or valuable to a student’s overall academic well-being. Or, if there’s another way to assess student learning beyond a bubble-in test or even a test with some constructed response, such as TNReady.
I’d suggest a project-based learning approach as a means of assessing what students have actually learned across disciplines. Shifting to project-based learning with some grade-span testing would allow for the accountability necessary to ensure children are meeting state standards while also giving students (and their teachers) a real opportunity to demonstrate the learning that has occurred over an academic year.
The Department has also opened itself to additional criticism that it is “massaging” the scores – that is, trying to make parents happy by bringing grades up in the last year under the old testing regime. We can’t say for certain that this is the motivating factor, but by taking this step without more transparency, the Department of Education has invited that charge. And some people will certainly accuse the state of doing this very thing, especially given the reasons cited in the memo. I personally don’t ascribe any sinister motives to the state, but you have to admit it looks a little fishy.
In fact, TC Weber is raising some important questions about the process. He notes:
If people don’t believe in the fidelity of the system, it becomes too easy to attribute outside factors to the results. In other words, they start to feel that data is being manipulated to augment an agenda that they are not privy to and not included in. I’m not saying results are or are not being manipulated when it comes to our student evaluation system, but I am saying that there seems to be a growing belief that they are, and without some kind of change, that perception will only grow. I’ve always maintained that perception is nine-tenths of reality.
As both Alfuth and Weber note, the central problem is lack of communication and transparency. As we shift to a new testing regime with uncertain results, establishing confidence in the system and those administering it is critical. After last year’s late score debacle and this year’s quick score confusion, establishing that trust will be difficult. Open communication and a transparent process can go a long way to improving perception and building support.
For more on education politics and policy in Tennessee, follow @TNEdReport