New and Not Ready

Connie Kirby and Carol Bomar-Nelson, English teachers at Warren County High School, share their frustration with the transition to TNReady and what it means for teacher evaluation.

Connie Kirby:

This is going to be long, but I don’t usually take to social media to “air my grievances.” Today, though, I feel like there’s no better option than to share how I feel. It’s been a long year with some of the highest of the highs and lowest of the lows. I work in a wonderful department at a great school with some of the most intelligent, hard-working people I know. As the years have progressed, we have gone through many changes together and supported each other through the good and the bad (personally and professionally). We do our best to “comply” with the demands that the state has put on us, but this year everything that we’ve been hearing about and preparing for over the years has come to fruition. We’re finally getting familiar with the “real deal” test, instead of dealing with EOCs and wondering how it’s going to change. I’ve seen the posts and rants about Common Core and have refrained from jumping on the bandwagon because I have had no issues with the new standards. I do, however, see an issue with the new assessment, so I have held my tongue in the hopes that I might find something worth sharing and putting my name next to. Today, I witnessed an exchange between one of my colleagues and the state, and I couldn’t have said it better myself. With her permission, I am sharing her words.

Carol Bomar-Nelson:

I don’t know how to fix the problems with the test. I agree that teachers should have accountability, and I think student test scores are one way of providing it. Having said that, if the state is going to hold teachers accountable for student test scores, then the test needs to be fair. From what I have seen, I firmly believe that is not the case. I am not basing this conclusion solely on the one “Informational Test” in MICA; other quizzes I have generated in MICA have had similar flaws. When my department and I design common assessments in our PLCs, we all take the tests and compare answers to see which questions are perhaps ambiguous or fallacious in some way. I do not see any evidence that the state is doing this for the tests that it is manufacturing. A team of people can make a test that is perfect with respect to having good distractors, clear wording, complex passages, and all the other components that make up a “good” test, but until several people take the test, compare answers, and discuss what they missed, that test is not ready for students to take – especially not as a high-stakes test that is supposed to measure teacher effectiveness. I understand that this is the first year of this test. I am sympathetic to the fact that everyone is going through a “learning process” as they adapt to it. Students have to learn how to use the technology; teachers have to learn how to prepare their students for a new type of test; administrators have to figure out how to administer the test; the state has to work out the kinks in the test itself. The state is asking everyone to be “patient” with the new system. But what about the teachers? Yes, the teacher effectiveness data only counts for 10% this year, but that 10% still represents my effectiveness as a teacher. In essence, this new test is like a pretest, correct? A pretest to get a benchmark of where students stand at the end of the year – on a new test that has so many flaws and so many unknowns.
In the teaching profession, I think all would agree that it is bad practice to count a pretest AT ALL for a student’s grade. Not 35%, not 25%, not even 10%. So how is it acceptable practice to count a flawed test for 10% of a teacher’s evaluation? We can quibble all day about which practice questions…are good and which questions are flawed, but that will not fix the problem. The problem lies in the test development process. If the practice questions go through the same process as the real questions, it would stand to reason that the real test questions are just as flawed as the practice questions. My students have to take that test; I never get to see it to determine if it is a fair test or not, and yet it still counts as 10% of my evaluation that shows my effectiveness as a teacher. How is that fair in any way whatsoever? In what other profession are people evaluated on something that they never get to see? Especially when that evaluation ‘tool’ is new and not ready for use?

I know how to select complex texts. I know how to collaborate with my PLC. I can teach my students how to read, think critically, analyze, and write. When I do not know how to do something, I have no problem asking other teachers or administrators for suggestions, advice, and help. I am managing all of the things that are in my control to give my students the best possible education. Yet in the midst of all of these things, my teacher accountability is coming from a test that is generated by people who have no one holding them accountable. And at the end of the year, when those scores come back to me, I have no way to see the test to analyze its validity and object if it is flawed.

For more on education politics and policy in Tennessee, follow @TNEdReport

CAPEd Crusaders

At last night’s MNPS Board meeting, members of the newly formed education advocacy group CAPE spoke out about the time spent testing students this year as the state shifts to the new TNReady tests.

Here’s what one member and teacher had to say to WSMV:

“It disrupts our schedules. It demoralizes the students. It demoralizes the teachers. It creates chaos,” Kale said. “Our students don’t even know what their schedules are … because they’re interrupted so many times for testing.”

The new state tests significantly increase the time students will spend testing, especially in the earlier grades.

The increased time spent testing comes at a time when a state task force has recommended both reduced testing and more testing transparency.

While the 2016 session of the Tennessee General Assembly may take up the issue, that likely won’t stop the administration of this year’s TNReady.

For more on education politics and policy in Tennessee, follow @TNEdReport

Not Yet TNReady?

As students and teachers prepare for this year’s standardized tests, there is more anxiety than usual due to the switch to the new TNReady testing regime, according to a story in the Tennessean by Jason Gonzalez.

Teachers ask for “grace”

In his story, Gonzalez notes:

While teachers and students work through first-year struggles, teachers said the state will need to be understanding. At the Governor’s Teacher Cabinet meeting Thursday in Nashville, 18 educators from throughout the state told Gov. Bill Haslam and McQueen there needs to be “grace” over this year’s test.

The state has warned this year’s test scores will likely dip as it switches to a new baseline measure. TCAP scores can’t be easily compared to TNReady scores.

Despite the fact that the scores “can’t be easily compared,” the state will still use them in teacher evaluations. At the same time, the state is allowing districts to waive the requirement that the scores count toward student grades, as the TCAP and End of Course tests have in the past.

In this era of accountability, it seems odd that students would be relieved of accountability while teachers will still be held accountable.

While that may be one source of anxiety, another is that by using TNReady in the state’s TVAAS formula, the state is introducing a highly suspect means of evaluating teachers. It is, in fact, a statistically invalid approach.

As I noted back in March, citing an article from the Journal of Educational Measurement:

These results suggest that conclusions about individual teachers’ performance based on value-added models can be sensitive to the ways in which student achievement is measured.

The researchers tested various VAM models (including the type used in TVAAS) and found that teacher effect estimates changed significantly based on both what was being measured AND how it was measured. 


That means that the shift to TNReady will change the way TVAAS estimates teacher effect. How? No one knows. We can’t know. We can’t know because the test hasn’t been administered, so we don’t have any results. Without results, we can’t compare TNReady to TCAP. And even once we have this year’s results, we can’t fairly establish a pattern, because we will only have one year of data. What if this year’s results are an anomaly? With three or more years of results, we MAY be able to make some estimates as to how TCAP compares to TNReady and then possibly translate those findings into teacher effect estimates. But we could just end up compounding error rates.

Nevertheless, the state will count the TNReady results on this year’s teacher evaluations using a flawed TVAAS formula. And the percentage these results will count will grow in subsequent years, even if the confidence we have in the estimate does not. Meanwhile, students are given a reprieve…some “grace” if you will.

I’d say that’s likely to induce some anxiety.

For more on education politics and policy in Tennessee, follow @TNEdReport

Phil Williams, Testing, and MNPS

NewsChannel5’s Phil Williams sent this tweet today teasing his story on alleged testing irregularities in MNPS:

Phil Williams (@NC5PhilWilliams)
Coming up on @NC5 at 6, #NC5investIgates: Have some Metro high schools been #FakingTheGrade? pic.twitter.com/tRRYeUl4lk

Here’s the full response from MNPS:

Tonight, November 2, 2015, investigative reporter Phil Williams of News Channel 5 plans to air a story containing accusations about end-of-course exams in Metro Schools. Below is our full and detailed response to Phil, as well as a record of our communication with him during his reporting.

DOWNLOAD a PDF copy of this statement.

Beginning late in the week of October 19 and continuing throughout the week of October 26, there have been regular email and telephone conversations – often daily – to address your questions related to accusations that some Metro high schools are using various methods to avoid administering state-mandated End-of-Course (EOC) exams to certain students in order to inflate their performance data. As stated numerous times throughout these conversations, we take these accusations extremely seriously. We asked for evidence of specific wrongdoing in your possession so that the instances in question can be thoroughly investigated and so that we can fully respond to your story.

Below is a comprehensive response to the questions you have posed thus far related to the “general EOC concerns” story you say is scheduled to air this evening, Monday, Nov. 2, 2015. This response includes the questions and requests you have made of us, along with a summary of how we have fulfilled them. Further responses may follow related to other specific concerns you plan to address in future stories.

General Statement on EOC Exams

Students are required to take all state-mandated EOC exams at the end of the second semester of a course regardless of when or how they complete the course. To determine if there is evidence of a widespread trend of students not completing the required EOCs, over the last week our Research and Evaluation department has been carefully reviewing transcript and EOC exam files for the most recent cohort of MNPS graduates.

Records reviewed to date indicate that there is no evidence of systematic avoidance of EOC exams. We have found a relatively small number of students who received a regular high school diploma in the spring of 2015 and who took EOC courses in our schools but do not appear to have ever attempted the EOC exam. The department went through several years of files in order to track students’ course and test history. Our investigation is focused on the courses for which the Tennessee Department of Education establishes accountability targets, called Annual Measurable Objectives (AMOs), which require each high school to have a 95% participation rate on EOC exams.

With a 2015 graduating class of 4,221 students, they should have collectively taken 16,884 exams with AMOs over the course of their high school careers. Of those 16,884 exams, the district lacks a test record for only 231, or 1.37%. These cases appear to be spread out and not unusually high for any particular school. All high schools fall within the 1-2% range. Given an average daily attendance rate of 93%, there will be students who never make up an EOC. There may also be some who took the EOC at another time outside of MNPS or whose student ID was incorrectly coded on an EOC answer sheet and who do not match our course enrollment files.

The 231 missed EOC exams are broken down as follows: There were 44 students missing an Algebra I EOC test record and 10 students marked absent. An answer sheet is supposed to be turned in for every student enrolled in the course, and those that do not test or make up the test should be coded as absent. It is likely that many, if not most, of those students missing an EOC document were absent during testing and an answer sheet marked “absent” was not submitted. There were 32 missing an Algebra II EOC and 32 more marked absent. For English II, 26 had no test record and 16 were shown as absent. There were 35 missing for English III and 36 absent.
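For readers checking the math, the district’s figures are internally consistent. A quick sketch (using only the numbers quoted in the statement above) confirms the totals:

```python
# Cross-check of the figures in MNPS's statement.
graduates = 4221
exams_per_student = 4  # one AMO exam each: Algebra I, Algebra II, English II, English III
expected = graduates * exams_per_student
print(expected)  # 16884, matching the stated total

missing = 231
print(round(missing / expected * 100, 2))  # 1.37 (percent), matching the stated rate

# Per-subject breakdown of the 231 records, counting both "no test record"
# and "marked absent" figures from the statement:
breakdown = {
    "Algebra I": 44 + 10,
    "Algebra II": 32 + 32,
    "English II": 26 + 16,
    "English III": 35 + 36,
}
print(sum(breakdown.values()))  # 231
```

Note that the 231 total only reconciles when the “marked absent” answer sheets are counted alongside the missing test records.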

If NewsChannel 5 is in possession of documentation that contradicts the district’s findings of its own internal review described above, Metro Schools requests to be given access to the documentation immediately to allow us to thoroughly investigate the claims. Likewise, if former or current MNPS employees are in possession of documentation that indicates a systematic attempt to inflate performance data for individual schools, those individuals are urged to bring their concerns forward to district leadership so that they can be properly investigated. We have no record of an open complaint of this nature.

Use of Credit Recovery in High Schools

Metro Nashville Public Schools has made personalized learning the focus of our instructional practice. Our goal is to prepare every student for success in college and career, which personalized learning allows us to do. Personalized learning involves teachers meeting students where they are, regularly monitoring their progress, and moving students forward only when they’re able to demonstrate mastery of the content. This includes intervening as early as possible when a student’s performance indicates he or she is failing to master the content of a course.

As part of this approach, credit recovery is offered to high school students who fail a semester of a course. If a student fails a course in the fall to the degree that grade-averaging the two semesters is unlikely to result in the student passing the course as a whole, the student is given the option to take the fall course through credit recovery before proceeding to the spring course. For example, a student who fails “Algebra I Fall” will be given the option to retake the fall course of Algebra I during the spring semester. The student will then take “Algebra I Spring” during the summer semester or subsequent fall semester. All attempts are made to place the student in “Algebra I Spring” during the following summer or fall. If there is a scheduling conflict, the student may have to wait until the following spring to take the spring course.

It is in the best interest of the student to take this approach because if he or she has not mastered the content of a fall course, he or she will be ill-prepared to succeed in the spring course, which builds on the content knowledge from the fall. The decision to enter into credit recovery is made by the student and his or her parent/guardian in consultation with the teacher and the student’s counselor.

If a student takes a spring course during the summer or fall semester, he or she will take the EOC at that time. This means a student who fails Algebra I this fall may take the Algebra I EOC in July or December of 2016, depending on when he or she completes both courses.

The opinion that this approach to instruction is intended solely to inflate EOC scores is misguided. This is a standard practice used by school districts in our state. The fact that the state’s testing calendar allows for EOCs to be taken in the spring and summer is evidence that this practice is supported by the state.

The state does not use EOCs to measure the academic performance of a specific grade level. Unlike grades K through 8, high school courses are offered to students based on their individual academic level. For example, an advanced student may take Algebra I in eighth grade instead of ninth grade, in which case the EOC score is calculated into the middle school’s math data, rather than that of the high school the student goes on to attend. Similarly, students who take AP classes do not take EOC exams for those subjects, so their academic performance is not included in the high school’s overall EOC data. EOC data is intended to reflect the high school’s ability to successfully teach the state standards in main subject areas, regardless of when the student takes the course during his or her time in high school.

There is a clear disincentive for high schools to unnecessarily delay a student’s promotion among courses, since the state calculates a high school’s graduation rate based on “on-time” graduates, defined as students who graduate within four years and one summer of starting high school. Because all students are required to earn four math credits and four English credits, delaying a student from completing one of those required credits risks requiring the student to take more than four years to graduate.

Most importantly, our focus is on helping students succeed. Ultimately, our goal is to prepare every student for college and career. If a student requires extra time to successfully master the content of a course, we believe the student should be allowed that time. Forcing students to progress in course schedules when they are not prepared to understand or master the content would equate to setting our students up for failure.    

Use of Content Recovery in High Schools

In addition to “credit recovery,” which is a student re-taking a failed semester of a course, Metro Schools also offers “content recovery” courses to support students who are struggling with the foundational skills needed to succeed in an EOC course.

For example, the district offers “Algebra I A,” a content recovery course to support students enrolled in Algebra I. The Algebra I A course may cover basic math skills, such as fractions, based on what underlying knowledge is needed for a student to understand the Algebra lessons. Similar classes are offered for English courses, and are listed as “English I CAR,” with “CAR” standing for Content Area Reading.

It is district practice for students to be enrolled in content recovery courses either simultaneously or prior to taking an EOC course. A content recovery course cannot be taken in place of an EOC course. Although students do earn credits for content recovery courses, the credits do not qualify for the math or English credits required for graduation. Additionally, enrollment in a content recovery course does not negate a student’s requirement to take the EOC exam at the end of the second semester of the EOC course.

Pearl-Cohn Entertainment Magnet High School

  • You claim:
    • Pearl-Cohn has removed students from EOC exam classes and placed them in independent study courses to keep their scores from affecting the school’s overall EOC score. You intimate in an email to Principal Sonia Stewart that direction for this practice is coming from supervisors in the district office.
  • We responded:
    • Verbally on the phone the week of Oct. 26 explaining the district’s practice of remediation with students who are failing EOC classes. Further detail and explanation is provided above in the statements on credit recovery and content recovery.
  • You asked for:
    • All course offerings for Fall 2015 and number of students enrolled in each class
  • We fulfilled this request on Friday, Oct. 30, 2015

Stratford STEM Magnet School

  • You claim:
    • Students being “physically pulled” from EOC exam rooms or barred from entering EOC exam rooms.
  • We responded:
    • Verbally on the phone the week of Oct. 26, explaining that Stratford’s EOC participation rate has been consistently 95% or above over the last two years. The data is as follows:
      • Algebra I – 100% in 2014 and 97% in 2015
      • Algebra II – 95% in 2014 and 96% in 2015
      • English II – 98% in 2014 and 98% in 2015
      • English III – 96% in 2014 and 95% in 2015
    • We further explained that given the AMOs of 95% participation and average daily attendance of 93%, there is no incentive for principals to withhold students from EOC exams, lest they risk failing to meet the AMO.
  • You asked for:
    • All course offerings for Fall 2015 and number of students enrolled in each class
  • We fulfilled this request on Friday, Oct. 30, 2015.

Hunters Lane High School

  • You claim:
    • Hunters Lane has removed students from EOC exam classes and placed them in elective courses to keep their scores from affecting the school’s overall EOC score.
  • We responded:
    • Verbally on the phone the week of Oct. 26 explaining the district’s practice of remediation with students who are failing EOC classes. Further detail and explanation is provided in the above statements on credit recovery and content recovery.
  • You asked for:
    • All course offerings for Fall 2015 and number of students enrolled in each class
  • We fulfilled this request on Oct. 30, 2015.
  • On Oct. 29, you asked for:
    • Insight into the situation of a specific Hunters Lane student who was allegedly removed from EOC courses she was passing.
  • Our response:
    • We are still investigating the details of this student, including a close look at the student’s data. However, there are extenuating circumstances surrounding this particular student, which are part of her private record and may not be discussed with you without a written waiver from the parent/guardian.

Maplewood High School

  • You claim:
    • Without knowing the specific mechanism being used, that students are being either pulled from EOC classes or prevented from taking EOC exams.
  • We responded:
    • Verbally on the phone the week of Oct. 26 explaining the district’s practice of remediation with students who are failing EOC classes. Further detail and explanation is provided in the above statements on credit recovery and content recovery.
  • You claim:
    • A source reported to you seeing an email from Jay Steele giving direction in this practice.
  • We responded:
    • Verbally on the phone the week of Oct. 26 that no such email is known to exist, but that it could have been confused with an email sent by Aimee Wyatt on Feb. 11, 2014, to high school principals giving guidance on how to use credit recovery for course remediation. You were provided a copy of this email.
  • You asked for:
    • All course offerings for Fall 2015 and number of students enrolled in each class
  • We fulfilled this request on Oct. 30, 2015.


For more on education politics and policy in Tennessee, follow @TNEdReport


The NAEP Spin Room

Yesterday, I wrote about the very rosy interpretation of NAEP data being advanced by Tennessee leaders. Governor Haslam said:

“Today, we’re very excited to say that based on 2015 NAEP results, we’re still the fastest improving state in the nation since 2011. What this means is a new set of fourth- and eighth-graders proved that the gains that we made in 2013 were real.”

After analyzing the Tennessee results and putting them in context with national results (both of which essentially remained steady from 2013), I noted:

It’s also worth noting that states that have adopted aggressive reforms and states that haven’t both remained flat. The general trend was “holding steady,” and it didn’t seem to matter whether your state was using a reform agenda (charters, vouchers, value-added teacher scores in teacher evaluations) or not.

Again, this makes it difficult to suggest that any one or even a package of educational practices drives change.

Then, I read the statement issued by SCORE (Statewide Collaborative on Reforming Education) Executive Director Jamie Woodson. Here’s what she had to say:

Since 2011, Tennessee has made record-setting gains, held them, and progressed in state rankings because of a multi-faceted strategy of high standards, great teaching, accountability, and common-sense adjustments based on the feedback of educators and citizens.

Note that she assigns causality based on these results. I wonder, then, what to make of the states that didn’t adopt the multi-faceted strategy she references? Last year, a number of states showed significant gains on NAEP. Some, like DC and Tennessee, were reform-oriented states; others were not.

Additionally, in a post about the NAEP results two years ago, I noted:

Kentucky and Tennessee have posted gains over time on NAEP — in most categories, Kentucky started out tied or very slightly ahead of Tennessee and today, Kentucky remains ahead.  Kentucky posted some pretty big gains in the mid-90s and again from 2003-2009.  Since then, they’ve held fairly steady.  That’s an expected result, by the way — a big gain followed by steady maintenance of the new level.  For Tennessee, that won’t be enough, but celebrating the big gain is certainly warranted.  It’s also important to take care in assigning causality.

Note here that what I suggested then was an expected result (big gain, followed by holding steady) is exactly what happened in Tennessee this year. That’s good news — it means we’re not declining. But it also means we can’t really say that 2013 was something special.  As I noted last year, Kentucky had a series of big gains in the 1990s and then again in the early 2000s. It wasn’t just a big bump one time. So far, Tennessee has had one banner year (2013) and this year, returned to normal performance.

However, the narrative of “fastest-improving” keeps being repeated. In fact, Bethany Bowman of Professional Educators of Tennessee (PET) released a statement that said in part:

Tennessee students are still the fastest improving in the nation since 2011, according to the 2015 National Assessment of Educational Progress (NAEP), commonly known as the Nation’s Report Card. “This year’s results from the National Assessment of Educational Progress (NAEP) show that Tennessee has maintained the positive gains that we achieved in 2013.”

We had one year in which we made a big splash and then, as I noted in 2013:

As the data shows, Kentucky and Tennessee in many cases posted similar net gains over time, with Kentucky seeing big jumps in the mid-90s and again in the early part of the last decade.

That is to say, over a 20-year period, both states saw similar net gains. This year’s scores, in which Tennessee remained steady relative to 2013, suggest, if anything, that the 2013 jump was likely an outlier. Had the 2013 gains been followed by gains in 2015 and again in 2017, more could be suggested. And frankly, it is my hope that we see gains (especially in reading) in 2017. But it’s problematic to suggest that any specific reform or set of reforms caused the one-time jump we saw in 2013. Saying we are the fastest-improving state in the nation over the last four years when we only saw a jump in 2013 is like saying we started the first quarter of a football game way behind, scored a bunch in the second quarter (so we’re not as far behind), and then scored the same number of points as the other team in the third quarter. The result is we’re still behind and still have a long way to go.

So, yes, let’s celebrate that we made a big jump and held it steady. But, let’s also put those results in context and focus on how we can move forward instead of using these results to advance our favorite plays. For example, I’m not a huge fan of vouchers, but NAEP data doesn’t really help me make the case for or against. Likewise, states with and without strong collective bargaining posted gains in 2013 and held steady in 2015 — that is, the presence or absence of bargaining has no impact on NAEP scores.

NAEP can be an important source of information — but, too often, the results are subjected to spin that benefits a political agenda. As that narrative gets reinforced, focus on progress can be lost.

For more on education politics and policy in Tennessee, follow @TNEdReport


NAEP: First Take

The 2015 NAEP results are out today and there is already discussion about what they mean both state-by-state and nationally.

Here’s what Governor Haslam had to say:

“Today, we’re very excited to say that based on 2015 NAEP results, we’re still the fastest improving state in the nation since 2011. What this means is a new set of fourth- and eighth-graders proved that the gains that we made in 2013 were real.”

That’s pretty strong language. Proved. Governor Haslam said this year’s results proved that the gains seen in 2013 were real.

Here’s what we know: Tennessee remained relatively flat – no significant growth, and a relatively small decline in reading scores. Basically, we are where we were in 2013.

Here’s what else we know: The entire nation remained relatively flat — no significant growth, some decline in math.

So, here’s what that means: In 2013, Tennessee gained faster than the national average. In exactly one testing cycle. In 2015, Tennessee didn’t do worse than the rest of the country. We also didn’t do better. Like the rest of America, we remained steady.

That is, it’s entirely possible the 2013 gains seen in Tennessee were a one-time occurrence. An outlier.

Had Tennessee again made gains that outpaced the nation, one could say the results suggest something special or different is happening in Tennessee that may be causing the gains. It’s important to be cautious until you have several years of data and more thorough analysis.

It’s also worth noting that states that have adopted aggressive reforms and states that haven’t both remained flat. The general trend was “holding steady,” and it didn’t seem to matter whether your state was using a reform agenda (charters, vouchers, value-added teacher scores in teacher evaluations) or not.

Again, this makes it difficult to suggest that any one or even a package of educational practices drives change.

Was Tennessee’s performance on NAEP in 2013 a blip or an indicator of actual progress? The 2015 results don’t provide much insight.

The good news: Tennessee held steady. The related news: So did everyone else.

I’ll be doing some more digging in to the data to examine trends over time and what more can be learned from 2015. Stay tuned…

For more on education politics and policy in Tennessee, follow @TNEdReport


Quickly Dropped?

Some members of the Knox County School Board are considering action that would result in removing standardized testing “quick scores” from a student’s final grades.

This follows a year of changes to quick score calculations that created confusion for school districts across the state.

Discussing the matter, board member Karen Carson said:

“I think it’s one of those laws that generally you do it to hold students accountable and motivate them to do their best, but frankly it only increases the stakes for students,” she said.

“I don’t see that it benefits our students in any way. I don’t think student test scores, this test, should impact a student’s grade.”

Because of the transition to TNReady, scores will not be ready in time to be included in student grades this year. This prompted the Knox County Board to ponder asking the General Assembly to remove the requirement altogether.

For more on education politics and policy in Tennessee, follow @TNEdReport

Grassroots Education Groups Applaud Testing Task Force Findings

Following the release of Tennessee’s Assessment Task Force findings recommending reduced use of standardized tests in Tennessee schools and transparency for the tests that are administered, a coalition of groups that in June had called for just this sort of testing reform issued a press release applauding the findings and urging timely action to make them reality.

Here’s the release:

Pro-education groups today announced their support for recommendations issued by the Tennessee Assessment Task Force, chaired by state education commissioner Candice McQueen. The recommendations call for the elimination of standardized testing for kindergartners and first graders; fewer standardized tests for older students; a parent advisory group; and greater testing transparency.

“This is a great step in the right direction,” said Lyn Hoyt, president of Tennesseans Reclaiming Educational Excellence (TREE) and public school parent. “Professional educators, teachers, and students all know that the singular focus on standardized tests is counterproductive. The science is clear: Forcing the youngest students to take these tests is both useless and developmentally inappropriate.” Hoyt also lamented the shroud of secrecy the Department of Education wraps around the tests, citing its habitual inconsistency in reporting test scores, including the delayed release of TCAP scores in 2014, seemingly artificially inflated “quick scores” in 2015, and cut scores that change every year. “It is time for the secrecy surrounding these tests to end,” Hoyt said. “We called for testing transparency months ago and now it is time for Governor Haslam and the legislature to act.”

TREE in partnership with a dozen other advocacy groups circulated a petition earlier this summer calling for the publication of standardized test questions and answers; pre-determined cut scores; and a reduction in the use of standardized tests.

“We urge the state to adopt these recommendations in a timely manner and continue to make efforts to both reduce the testing burden, increase instruction time away from test prep and increase confidence in the process,” Hoyt said. “Standardized tests should be used as tools to guide future learning, not as a weapon to use against our teachers and students.”

The coalition includes the following groups:

Strong Schools (Sumner County)
Williamson Strong (Williamson County)
SPEAK (Students, Parents, Educators Across Knox County)
SOCM (Statewide Organizing for Community eMpowerment)
Momma Bears Blog
Gideon’s Army, Grassroots Army for Children (Nashville)
Advocates for Change in Education (Hamilton County)
Concerned Parents of Franklin County (Franklin County)
The Dyslexia Spot
Parents of Wilson County, TN, Schools
Friends of Oak Ridge Schools (City of Oak Ridge Schools)
TNBATs (State branch of National BATs)
East Nashville United
Tennessee Against Common Core (Statewide)
Coalition Advocating for Public Education (CAPE)

For more on education politics and policy in Tennessee, follow @TNEdReport

PET Talks Testing

Audrey Shores, Director of Communications and Technology for Professional Educators of Tennessee (PET), offers some thoughts on standardized testing in Tennessee.

The 2015-2016 school year ushers in some big changes to assessments that have been developed as the state has reacted to changing standards and legislation. Professional Educators of Tennessee Board President Cathy Kolb and Director of Technology & Communications Audrey Shores participated in the Assessment Practices Task Force that convened in April and continued meeting each month throughout the summer. A final report from the TN Department of Education on the findings and recommendations of the task force was released today.

The task force was established by the Department of Education to gather and analyze opinions about the assessment landscape in Tennessee from a variety of stakeholders, including classroom teachers, district leaders, legislators, and parents. The goal was to establish a set of principles and recommendations to guide decision-making around assessments, particularly in regard to the new TNReady assessments that will be implemented this year for ELA and Math.

While a university degree is not appropriate for everyone, studies show wide income gaps for those who do not go on to some type of post-secondary training. This is why standards are developed with “college and career readiness” in mind, and TNReady is designed to assess students’ proficiency in relation to the standards. One feature of the new tests is the more varied and interactive nature of the questions. Designed to be administered online, TNReady will utilize a variety of question types in addition to multiple choice. Some math questions will allow the use of a calculator instead of banning them outright, and ELA questions will involve activities such as highlighting passages. Sample questions are available online through MICA, a platform accessible to students and the public through any browser. This also gives students who would like more practice with the system the ability to access it outside of the classroom. The MIST system will be available to teachers for creating practice tests for students in the classroom. There is a waiver option for districts that are not ready for online test-taking, but overall the online system will reduce costs and, after the first year, should reduce the time it takes the department to provide results.

A series of surveys this past year, including a statewide survey we conducted last spring (https://proedtn.site-ym.com/news/249809/), uncovered a pattern of concerns regarding the culture of testing. Disruptions to regular instruction that affect the entire school and the sheer amount of testing are two of the key concerns expressed, which the Department says it is working hard to address.

Scheduling and Class Disruption
One of the biggest complaints that surfaced from teachers and district leaders was how disruptive assessments are, leading to a loss of valuable instruction time. Past assessments were not designed with the variety of schedules used by different districts in mind, which often led to a virtual shutdown of the entire school during testing. First, the new TNReady assessments are designed to fit within a regular 45-60 minute class period. Rules regarding the testing environment have also been relaxed, so teachers will no longer have to cover their walls with paper or move the class to another location. Testing windows have also been developed to provide more flexibility at both the school and district level. Districts can choose their own windows within those provided by the state, and not all schools within a district have to test on the same day. The Scheduling and Logistics Task Force began meeting over the summer to develop exemplary schedules based on a variety of scheduling models. It will continue to meet throughout the next year to provide feedback and guidance.

Too Much Testing!
The message has been clear – kids are getting too many tests, and not enough learning. Parents are upset by the stress they see their children coping with as they are pressured to perform well on tests throughout the year. Teachers are frustrated that they lose opportunities to teach more freely because they are constantly preparing the students to perform well on tests. Superintendents are stressed by trying to meet accountability requirements while responding to the concerns of educators and the community.

The number of tests must be addressed at the summative, interim, and formative levels. The state-required assessments are summative tests, of which there are only a few. Interim assessments are often required at the district level to address potential gaps that will affect student performance on the summative tests, and formative assessments include a wide array of tests typically administered at the classroom level. Many feel that the high-stakes nature of testing at the state level drives a large quantity of tests at the other levels and leads to a disproportionately large amount of instructional time being devoted to test prep. While studies have found that most people believe assessments and accountability are important pieces of the education puzzle, they also feel that too much importance is placed on these aspects, to the detriment of overall student learning.

Better Feedback
Relevance was a recurring topic during the task force's discussions. Assessments need to provide feedback that is useful to students, teachers, and parents. The Department of Education will be designing new reports this year that are both more aesthetically pleasing and easier to read, in order to present relevant information more clearly to parents and students. One of the primary goals of the new reports is to offer clear, specific recommendations based on areas of weakness, giving students more actionable information for improvement.

Because this is the first year of implementation for TNReady, results will likely be delayed relative to previous years. One of the proposed benefits of the online system is that results will be available sooner in subsequent years. Criticism from teachers remains, however, because there is little they can do with this information once a child has left their classroom.

Testing and Evaluation
This spring, the 109th Tennessee General Assembly passed the Tennessee Teaching Evaluation Enhancement Act (http://wapp.capitol.tn.gov/apps/BillInfo/Default.aspx?BillNumber=HB0108) to lessen the effect that implementation of a new assessment will have on accountability measures for educators. The key portion of this legislation is the adjustment of the weighting of student growth data in teacher evaluations. This applies to the new TNReady ELA and Math assessments as well as the social studies and science TCAP tests. New assessments will represent only 10% of the evaluation for the 2015-2016 school year and 20% the following year, returning to 35% for the 2017-2018 school year. Only the most recent year's data will be used if it results in a higher rating for the teacher. The act also decreases the weighting of growth data for teachers in non-tested subjects from 25% to 10% for the 2015-2016 school year, rising to a maximum of 15% thereafter. For graphs showing growth score weighting for tested vs. non-tested subjects and more, view the Tennessee Teaching Evaluation Enhancement Act page on the TEAMTN website.

Developing a plan for the new assessments has involved having conversations with and gathering feedback from a variety of stakeholders across the state. Legislators have taken steps to ease the transition and a variety of resources (see Resources, right) have been developed to assist everyone involved in understanding the changes that are being implemented this year. The process of gathering feedback and developing various components will continue throughout the year as the new tests are put to the test themselves.

For more on education politics and policy in Tennessee, follow @TNEdReport

Does TCAP Measure Proficiency or Poverty?

Ken Chilton, a professor at Tennessee State University, has a column in yesterday’s Chattanooga Times-Free Press in which he theorizes that poverty is a much better predictor of student performance on TCAP than teacher performance or other school-based factors.

Moreover, Chilton argues that the current emphasis on testing is misplaced and that frequent changes in standards and tests prevent meaningful long-term trend analysis.

He says:

Despite the proclamations of systemic failure, we don’t have enough longitudinal data to really know what is or is not working. The standards and the tests used to measure success change frequently. Consequently, it’s difficult to compare apples to apples. So, when scores change in one year we tend to mistake one data point for a trend by touting success or placing blame. Yet, most of us don’t know what proficiency means.

And he laments the expectations game played by policymakers and state education leaders:

Educators are under immense pressure to show improvement. Resources, careers and jobs are on the line. But, is it realistic to expect big jumps in proficiency from one academic year to the next, to the next and to the next? No, it’s incredibly unrealistic. And, it sets up a series of public expectations that are crushed year after year.

These unmet expectations contribute to the false perception that public schools are broken and thus are undeserving of additional tax revenues.

As for education reforms that get much attention in our state, Chilton says:

…but the annual TCAP gnashing of the teeth suggests that our expectations are out of whack with reality. None of the education reforms implemented in Tennessee address the underlying root causes that threaten the viability of our public schools — inequality.

Chilton’s analysis and claims regarding inequality and the impact of poverty are supported by (admittedly short-term) analysis of TCAP data from the top- and bottom-performing districts in the state:

An analysis of TCAP performance over time indicates that those school systems with consistently high levels of poverty tend to have consistently low scores on TCAP. Likewise, those systems with the least amount of poverty tend to have consistently higher scores on TCAP.

Additional analysis suggests:

The top 10 districts spend an average of 3 times more than the bottom 10 in terms of investment over the BEP formula. They also have an ACT average that is 5 points higher and a TCAP average that is nearly 20 points higher than the bottom 10.

In short, as Chilton suspects, there is a glaring inequality in the educational opportunities offered to Tennessee students. Add to that a growing inadequacy in state investment in schools, and you have a recipe for certain failure.

For more on education politics and policy in Tennessee, follow @TNEdReport