Ready to Waive

Governor Bill Haslam and Commissioner of Education Candice McQueen announced today that in light of difficulties with the administration of the TNReady test, they are proposing that TNReady data NOT be included in this year’s round of teacher evaluations.

The statement comes after the Knox County Board of Education made a similar request by way of resolution in December. That resolution was followed by a statewide call for a waiver by a coalition of education advocacy groups. More recently, principals in Hamilton County weighed in on the issue.

Here’s Governor Haslam’s press release on the waiver:
Tennessee Gov. Bill Haslam today announced he would seek additional flexibility for teachers as the state continues its transition to the TNReady student assessment.

Under the proposal, teachers would have the choice of whether to include student results from the 2015-2016 TNReady assessment in their evaluation scores, which typically consist of multiple years of data. The proposal keeps student learning and accountability as factors in an educator’s evaluation while giving teachers the option to include this year’s results if the results benefit them. The governor will work with the General Assembly on specific language and a plan to move the proposal through the legislative process.

“Tennessee students are showing historic progress. The state made adjustments to teacher evaluation and accountability last year to account for the transition to an improved assessment fully aligned with Tennessee standards, which we know has involved a tremendous amount of work on the part of our educators,” Haslam said. “Given recent, unexpected changes in the administration of the new assessment, we want to provide teachers with additional flexibility for this first year’s data.”

Tennessee has led the nation with a teacher evaluation model that has played a vital role in the state’s unprecedented progress in education. Tennessee students are the fastest improving students in the country since 2011. The state’s graduation rate has increased three years in a row, standing at 88 percent. Since 2011, 131,000 more students are on grade level in math and nearly 60,000 more are on grade level in science. The plan builds upon the Teaching Evaluation Enhancement Act proposed by the governor and approved by the General Assembly last year. This year is the first administration of TNReady, which is fully aligned with the state’s college and career readiness benchmarks.

“Providing teachers with the flexibility to exclude first-year TNReady data from their growth score over the course of this transition will both directly address many concerns we have heard and strengthen our partnership with educators while we move forward with a new assessment,” Department of Education Commissioner Candice McQueen said. “Regardless of the test medium, TNReady will measure skills that the real world will require of our students.”

Most educator evaluations have three main components: qualitative data, which includes principal observations and always counts for at least half of an educator’s evaluation; a student achievement measure that the educator chooses; and a student growth score, which usually comprises 35 percent of the overall evaluation.
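The weighting described in the release reduces to a simple weighted average. The sketch below is only an illustration: the release specifies observation at “at least half” and growth at “usually 35 percent,” so the 15 percent achievement share and the 1-to-5 scale here are assumptions.

```python
# Hypothetical sketch of a weighted evaluation composite on a 1-5 scale.
# The 50/15/35 split is an assumption for illustration; the release only
# describes observation ("at least half") and growth ("usually 35 percent").
def composite_score(observation, achievement, growth,
                    weights=(0.50, 0.15, 0.35)):
    w_obs, w_ach, w_growth = weights
    return w_obs * observation + w_ach * achievement + w_growth * growth

# A strong observation score can still be pulled down by a weak growth score.
print(round(composite_score(observation=4.4, achievement=4.0, growth=2.0), 2))  # 3.5
```

Under these assumed weights, a teacher observed at 4.4 lands at 3.5 overall when the growth component comes in at 2, which is why the growth number draws so much attention.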


While the release mentions last year’s changes to teacher evaluation to account for TNReady, it fails to note the validity problems created when an evaluation system moves from a multiple-choice test (TCAP) to a constructed-response test (TNReady).

Here’s the Tennessee Education Association on the announcement:

“TEA applauds Gov. Haslam on his proposal to give teachers the flexibility to not use TNReady test data in their 2015-16 evaluations. It is encouraging to see the governor listen to the widespread calls from educators, parents and local school boards for a one-year moratorium on TNReady data in teacher evaluations.”

“It is important that schools are given the same leniency as students and teachers during the transition to TNReady. These test scores, which Gov. Haslam is acknowledging are too unreliable for use in teacher evaluations, are the same scores that can place a school on the priority list and make it eligible for state takeover. All high-stakes decisions tied to TNReady test data need to be waived for the 2015-16 school year.”

“While the governor’s proposal is a step in the right direction toward decoupling standardized test scores from high-stakes decisions, these measurements have proven to be unreliable statistical estimates that are inappropriate for use in teacher evaluations at all. TEA will continue its push to eliminate all standardized test scores from annual teacher evaluations.”

For more on education politics and policy in Tennessee, follow @TNEdReport

Hamilton Principals Call for TNReady Waiver

A group of school principals in Hamilton County is joining the call for a waiver of the use of TNReady scores in teacher evaluations and accountability data in light of day one problems with the administration of the online assessment.

Here’s the resolution:

HCPA Resolution Regarding State Assessments

(NOT) Ready on Day One

It’s campaign season and candidate after candidate is telling voters they are the clear choice because they will be “ready on day one.”

Likewise, it’s the beginning of statewide testing season in Tennessee and districts have been told the state’s new system would be ready on day one.

Except it wasn’t.

Brian Wilson at the Murfreesboro Daily News Journal reports:

A technology failure from a state vendor halted standardized testing across Tennessee on the first day that TNReady, the state’s new online exam program, was set to be administered on a widespread basis.

The state’s testing platform “experienced major outages across the state” Monday morning because of network issues with Measurement Inc., which is contracted to administer the standardized exams, according to a memo Commissioner of Education Candice McQueen sent to schools directors across the state.

Don’t call us, we’ll call you …

As problems began this morning, the Department of Education sent the following notice to school districts:

At 8:25 a.m. CST the MIST platform experienced major outages across the state. These outages were caused because the network utilized by Measurement Inc. experienced a failure. We are urgently working with Measurement Inc. to identify the causes and correct the problem. At this time, we are advising that schools experiencing problems with the test discontinue testing, and return to their normal classes. Please do not begin any new additional testing you had planned for today until the department provides further information. However, if you have students that are successfully testing, please allow them to complete the current session.

Note, this problem affects both the MICA and MIST platforms. 

The MIST Help Desk is aware of the problem and will not be accepting additional phone calls on this issue. Please encourage your technology directors to call the department’s TNReady Focus Room.

We will provide frequent updates as information becomes available. Thank you for your patience.

It’s not clear how today’s delay will impact testing schedules across the state or whether the TNReady platforms will be ready tomorrow.

Williamson County Schools had already pushed the start of their TNReady testing back to Wednesday as a precaution against the sort of testing glitches that occurred today.

A Call for Fairness

The Tennessee Education Association issued a statement from their President, Barbara Gray, calling for fair treatment of teachers in light of the TNReady problems:

TEA has long had concerns about this transition to a statewide online assessment. We have seen problems with pilot assessments and practice tests in the past, and unfortunately the first day of TNReady resulted in more issues and frustrations for our students and teachers.

Leading up to today’s testing, we have heard from educators and parents statewide about concerns with the state’s capacity to handle so many students on the server at one time, as well as concerns about local districts having enough resources to complete the testing with so little funding from the state.

It is unacceptable to have this kind of statewide failure when the state has tied so many high-stakes decisions to the results of this assessment. Our students and teachers have enough stress and anxiety around these assessments without adding additional worries about technical issues.

The state must grant a one-year waiver – at a minimum – from including TNReady scores in teacher evaluations. It is unfair and inappropriate to stake our teachers’ professional standing on flawed, unreliable test scores in any year, but there are even greater implications and uncertainty while implementing a new assessment.

School Boards Expressing Concern

Ahead of the TNReady tests, several school boards have expressed concern about the use of the results in teacher evaluations this year.

MNPS and Knox County are among those asking the state to waive the results this year.

No word on whether state officials are still perplexed about why teachers are wary of having TNReady count toward this year’s evaluations.

Again, it’s not clear when we’ll actually be TNReady, just that it wasn’t on day one.

For more on education politics and policy in Tennessee, follow @TNEdReport


As Flexible as a Brick Wall

Grace Tatter reports that officials at the Tennessee Department of Education are “perplexed” by concerns over using TNReady data in this year’s teacher evaluations.

While a number of districts have passed resolutions asking for a waiver from including TVAAS scores in this year’s teacher evaluations due to the transition to TNReady, a department spokesperson said:

“Districts have complete discretion to choose how they want to factor that data,” Ball said Thursday. “They don’t have to use TNReady or growth data in hiring, firing, retention or promotion.”

As Tatter’s story notes, however, data from TNReady will still be a part of a teacher’s TVAAS score — 10%. And that score becomes a part of a teacher’s overall evaluation score — a ranking from 1 to 5 that purports to measure a teacher’s relative effectiveness.

10% is enough to move a ranking up or down a level, and that can have significant impacts on a teacher’s career, even if the teacher is not fired and their pay is not affected. Of course, some districts may use this year’s data for those purposes, since doing so is not prohibited under the evaluation changes passed last year.
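To see why even a 10% component matters, consider a composite sitting near the cutoff between two levels. The rounding rule and the numbers below are assumptions for illustration, not the state’s actual formula:

```python
# Hypothetical illustration of how a 10% TVAAS component can move a
# final 1-5 effectiveness level. The round-to-a-level rule and the
# example numbers are assumptions, not the state's actual formula.
def final_level(other_components, tvaas, tvaas_weight=0.10):
    composite = (1 - tvaas_weight) * other_components + tvaas_weight * tvaas
    return max(1, min(5, round(composite)))

# The same teacher, near a cutoff, lands on different levels depending
# solely on the TVAAS score.
print(final_level(other_components=3.5, tvaas=5))  # 4
print(final_level(other_components=3.5, tvaas=1))  # 3
```

With everything else held fixed at 3.5, the TVAAS number alone decides whether this hypothetical teacher is a 3 or a 4 — and, as the tenure rules below show, that distinction carries real consequences.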

Dan Lawson outlines some of the impact faced by teachers based on that final number:

The statutorily revised “new tenure” requires five years of service (a probationary period) as well as an overall score of “4” or “5” for the two consecutive years preceding the recommendation to the Board of Education. Last year, no social studies assessment score was provided since the test was field tested, and the teacher was compelled to select a school-wide measure of growth. He chose POORLY: his observation score of “4.38,” paired with a school-wide growth score of “2” in the selected area, produced a sum teacher score of “3,” thereby making him ineligible for tenure nomination.

According to TCA 49-5-503, a teacher may not be awarded tenure unless she achieves a TEAM score of 4 or 5 in two consecutive years immediately prior to being tenure eligible. That means a TVAAS score that takes a teacher from a 4 to a 3 would render her ineligible.

Further, a tenured teacher who receives a TEAM score of a 1 or 2 in two consecutive years is returned to probationary status (TCA 49-5-504). So, that tenured teacher who was a 2 last year could be impacted by a TNReady-based TVAAS score that moves a TEAM score of a 3 down to a 2.
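The two statutes, as summarized here, reduce to a pair of simple checks on a teacher’s recent TEAM levels. A minimal sketch, assuming levels are integers from 1 to 5 with the most recent year last (the statutes’ full conditions are broader than this summary):

```python
# Sketch of the tenure rules as summarized above (TCA 49-5-503 and
# 49-5-504). TEAM levels are assumed to be integers 1-5, most recent
# last; the statutes contain further conditions not modeled here.
def tenure_eligible(team_levels):
    """Eligible only with a TEAM level of 4 or 5 in the two consecutive
    years immediately preceding the recommendation."""
    return len(team_levels) >= 2 and all(level >= 4 for level in team_levels[-2:])

def returned_to_probation(team_levels):
    """A tenured teacher scoring 1 or 2 in two consecutive years is
    returned to probationary status."""
    return len(team_levels) >= 2 and all(level <= 2 for level in team_levels[-2:])

print(tenure_eligible([5, 4, 4]))        # True
print(tenure_eligible([4, 3]))           # False: one drop blocks eligibility
print(returned_to_probation([3, 2, 2]))  # True
```

Under these rules, a single TVAAS-driven slide from a 4 to a 3 in either of the two qualifying years is enough to block tenure, which is the scenario the post describes.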

Districts don’t have “complete discretion” to waive state law as TNDOE spokesperson Ashley Ball seems to imply.

Further, basing any part of a teacher’s evaluation on TVAAS scores based on TNReady creates problems with validity. Why include a number in a teacher’s evaluation that is fundamentally invalid?

Teachers want an evaluation process that is fair and transparent. There’s nothing perplexing about that.

For more on education politics and policy in Tennessee, follow @TNEdReport

Growth Scores and Teacher Tenure

Dan Lawson is the Director of Schools at Tullahoma City Schools. This post reflects his thoughts on the current use of TVAAS as it relates to teacher tenure.

The issue: “growth scores” as a determinant for teacher tenure recommendations.

The Background: I employ an outstanding young teacher who enjoyed three consecutive years of level “4”+ evaluations, and those scores were moved to a “5” based on the TCAP growth score. In the “old tenure” model, that teacher would have been eligible for tenure recommendation to our Board of Education upon completion of three years of service and the recommendation of the superintendent.

The statutorily revised “new tenure” requires five years of service (a probationary period) as well as an overall score of “4” or “5” for the two consecutive years preceding the recommendation to the Board of Education. Last year, no social studies assessment score was provided since the test was field tested, and the teacher was compelled to select a school-wide measure of growth. He chose POORLY: his observation score of “4.38,” paired with a school-wide growth score of “2” in the selected area, produced a sum teacher score of “3,” thereby making him ineligible for tenure nomination.

This is a very real example of an inequity in our current tenure eligibility metrics. In the 2014-15 evaluation cycle, more than 66.6% of Tullahoma teachers did not have an individual assessment score and so were compelled to select some other measure. In this case, we have a teacher we are happy with, who produces great student outcomes, and whom we would like to recognize with tenured status, but we are unable to do so. More than anything, this sends a message that the process for the majority of our teachers is little more than an arbitrary guessing game, and that guessing game does little more than erode confidence. Our teachers deserve better.

A second teacher visited with his building principal and me about standards that are taught in an order not aligned with the state assessment. He went on to produce competition results, ACT scores and AP calculus scores of the students in that “pipeline” in support of his math department’s teaching practice. His request was simple: Allow me to teach with a focus on the end product instead of a focus on a test this May. Within that dialogue, he was quick to share that he expected his growth score to suffer that year, but that in the long term our students would be better served. Furthermore, he opined that as long as his principal and superintendent were in place and understood the “big picture,” he really had no concerns. I concurred. However, his next statement was deeply troubling. He said, “While the number doesn’t mean anything to us, when I retire, that next teacher may believe that number is the most important measure of progress.”

I believe in accountability.  My board and I embrace expectations of high performance and I am comfortable in making personnel decisions aligned with school improvement and the best academic and developmental opportunities for our children. In this circumstance, however, we are letting the “tail” of growth scores “wag the dog” of teacher evaluations and subsequent tenure eligibility.

A Proposed Solution: We are supportive of the award of tenure returning to a local decision, with eligibility determined by service and evaluations. If, however, that change is not palatable, I believe an amendment to the current “tenure” statute is warranted, allowing a school district to present “mitigating and compelling reason(s),” sponsored by the superintendent, to the TDOE for review. We find the current system of “growth scores” serving as the overwhelming criterion to be an ineffective measure, since in our school system a majority of our teachers do not have those scores available for their use and are thereby compelled to use some school-wide measure over which they may have limited influence.

For more on education politics and policy in Tennessee, follow @TNEdReport

A Simple Wish

Amanda Kail, a teacher in MNPS and a member of CAPE, has released her prepared remarks ahead of tonight’s MNPS School Board meeting.

Here’s what she plans to say:

Dear ladies and gentlemen of the board. My name is Amanda Kail. I am an EL teacher at Margaret Allen Middle Prep. And I am here to talk about my wish list for this district.
So what do I wish? How would I make things different? I wish that this district would take teaching and learning seriously. I wish that instructional time was treated as the MOST important part of the school year. I wish that no one would even dream of asking teachers to shift their schedules and lesson plans constantly to make room for assessments that give us very little useful feedback.
Why don’t these assessments give us useful feedback? Because they are riddled with confusing formats, questions that are developmentally inappropriate, and require students to navigate unfamiliar technology. Because the internet connection is slow or the laptop malfunctions, or the test kicks them out for unknown reasons. Because they do not differentiate for our vastly diverse student population. Recently, one of my students, Z, told me that he has given up on school. Z is a bright, caring EL student with significant learning disabilities. When I asked him why, he told me that none of the work that he does in the classroom matters, because he is going to fail all the tests anyway. He said, “When my teachers give me work in the classroom, I understand it. But then the tests come and I just fail. I don’t understand anything. I give up.”
Z knows that he can learn. And so do his teachers. He can’t get there by the same path as everyone else, but he can get there. But the barrage of tests, which insist on assessing everyone the same way, tell him otherwise. We have got to stop putting so much trust in these tests that tell us our students are below basic, that our teachers are ineffective, and that our schools are failures. And on behalf of Z and every student like him, I am not giving up.
At some level, the state agrees with me. The TN Department of Education has given students a grace period of a year before TN Ready counts for them. However, this test will STILL count for teacher evaluations. So I am back to wishing that the district would take teaching and learning seriously. How many teachers do you think are going to continue to commit professional suicide by getting low evaluations due to test scores? Tests that they know, and even the state knows, our students have no hope of passing? Would you stay? Are we as a district weary of the teacher retention problem?
Luckily, dear board members, there is something you can do. The Knoxville school board recently passed a resolution asking the state to not count TN Ready scores in teacher evaluations. I am asking you to do the same. The state needs to hear from district leaders as a united front on this issue. It will go a long way to show that you do take teaching and learning seriously. That is my wish.
For more on education politics and policy in Tennessee, follow @TNEdReport

A Matter of Fairness

A coalition of education advocacy groups released an online petition today calling for a one year waiver from using student test scores in teacher evaluations in Tennessee.

Here’s the press release:

A coalition of groups supporting public education today launched an online petition asking the Tennessee General Assembly and Governor Bill Haslam to grant teachers a grace period from the use of student test scores in their evaluations in the first year of new TNReady tests. The petition tracks language adopted unanimously by the Knox County School Board, which passed a resolution last week opposing the use of student test scores in teacher evaluation for this academic year.

“The state has granted waivers so that TNReady scores aren’t required to be counted in student grades for this year,” said Lyn Hoyt, president of Tennesseans Reclaiming Educational Excellence (TREE). “If TNReady won’t count in student grades, it’s only fair that it shouldn’t count for teacher evaluation.” Hoyt noted that the transition to the new test means entering uncharted territory in terms of student scores and impact on teacher evaluation scores. As such, she said, there should be a one year or more grace period to allow for adjustment to the new testing regime.

“TNReady is different than the standardized tests we’ve had in the past,” Hoyt said. “Our students and teachers both deserve a reasonable transition period. We support the Knox County resolution and we are calling on the General Assembly to take notice and take action. Taking a thoughtful path transitioning to the new test can also build confidence and trust in the process.”

Hoyt also cited a recent policy statement by the American Educational Research Association that cautions against using value-added data in teacher evaluations and for high-stakes purposes. “Researchers who study value-added data are urging states to be cautious in how it is used to evaluate teachers,” Hoyt said. “The transition to TNReady is the perfect time to take a closer look at how test scores are used in teacher evaluations. Let’s take a year off, and give our students and teachers time to adjust. It’s a matter of fundamental fairness.”

Groups supporting the petition include:

Strong Schools (Sumner County)
Williamson Strong (Williamson County)
SPEAK (Students, Parents, Educators Across Knox County)
SOCM (Statewide Organizing for Community eMpowerment)
Middle TN CAPE (Coalition Advocating for Public Education)
Momma Bears Blog
Advocates for Change in Education (Hamilton County)
Concerned Parents of Franklin County (Franklin County)
Parents of Wilson County, TN, Schools
Friends of Oak Ridge Schools (City of Oak Ridge Schools)
TNBATs (State branch of National BATs)
TREE (Tennesseans Reclaiming Educational Excellence)
TEA (Tennessee Education Association)

For more on education politics and policy in Tennessee, follow @TNEdReport

New and Not Ready

Connie Kirby and Carol Bomar-Nelson, English teachers at Warren County High School, share their frustration with the transition to TNReady and what it means for teacher evaluation.

Connie Kirby:

This is going to be long, but I don’t usually take to social media to “air my grievances.” Today I feel like there’s no better answer than to share how I feel. It’s been a long year with some of the highest of the highs and lowest of the lows. I work in a wonderful department at a great school with some of the most intelligent, hard-working people I know. As the years have progressed, we have gone through many changes together and supported each other through the good and the bad (personally and professionally). We do our best to “comply” with the demands that the state has put on us, but this year everything that we’ve been hearing about and preparing for for years has come to fruition. We’re finally getting familiar with the “real deal” test, instead of dealing with EOCs and wondering how it’s going to change. I’ve seen the posts and rants about Common Core and have refrained from jumping on the bandwagon because I have had no issues with the new standards. I do, however, see an issue with the new assessment, so I have held my hand in the hopes that I might find something worth sharing and putting my name next to. Today, I witnessed an exchange between one of my colleagues and the state, and I couldn’t have said it better myself. With her permission, I am sharing her words.

Carol Bomar-Nelson:

I don’t know how to fix the problems with the test. I agree that teachers should have accountability, and I think student test scores are one way of doing that. Having said that, if the state is going to hold teachers accountable for student test scores, then the test needs to be fair. From what I have seen, I firmly believe that is not the case. I am not just basing this conclusion on the one “Informational Test” in MICA. Other quizzes I have generated in MICA have had similar flaws. When my department and I design common assessments in our PLCs, we all take the tests and compare answers to see which questions are perhaps ambiguous or fallacious in some way. I do not see any evidence that the state is doing this for the tests that it is manufacturing. A team of people can make a test that is perfect with respect to having good distractors, clear wording, complex passages, and all the other components that make up a “good” test, but until several people take the test, compare answers, and discuss what they missed, that test is not ready for students to take–especially not a high-stakes test that is supposed to measure teacher effectiveness. I understand that this is the first year of this test. I am sympathetic to the fact that everyone is going through a ‘learning process’ as they adapt to the new test. Students have to learn how to use the technology; teachers have to learn how to prepare their students for a new type of test; administrators have to figure out how to administer the test; the state has to work out the kinks in the test itself. The state is asking everyone to be “patient” with the new system. But what about the teachers? Yes, the teacher effectiveness data only counts for 10% this year, but that 10% still represents how I am as a teacher. In essence, this new test is like a pretest, correct? A pretest to get a benchmark on where students stand at the end of the year with this new test that has so many flaws and so many unknowns.
In the teaching profession, I think all would agree that it is bad practice to count a pretest AT ALL for a student’s grade. Not 35%, not 25%, not even 10%. So how is it acceptable practice to count a flawed test for 10% of a teacher’s evaluation? We can quibble all day about which practice questions…are good and which questions are flawed, but that will not fix the problem. The problem lies in the test development process. If the practice questions go through the same process as the real questions, it would stand to reason that the real test questions are just as flawed as the practice questions. My students have to take that test; I never get to see it to determine if it is a fair test or not, and yet it still counts as 10% of my evaluation that shows my effectiveness as a teacher. How is that fair in any way whatsoever? In what other profession are people evaluated on something that they never get to see? Especially when that evaluation ‘tool’ is new and not ready for use?

I know how to select complex texts. I know how to collaborate with my PLC. I can teach my students how to read, think critically, analyze, and write. When I do not know how to do something, I have no problem asking other teachers or administrators for suggestions, advice, and help. I am managing all of the things that are in my control to give my students the best possible education. Yet in the midst of all of these things, my teacher accountability is coming from a test that is generated by people who have no one holding them accountable. And at the end of the year, when those scores come back to me, I have no way to see the test to analyze its validity and object if it is flawed.

For more on education politics and policy in Tennessee, follow @TNEdReport

Not Yet Ready for Teacher Evaluation?

Last night, the Knox County Board of Education passed a resolution asking the state to not count this year’s new TNReady test in teacher evaluation.

Board members cited the grace period the state is granting to students as one reason for the request. While standardized test scores count in student grades, the state has granted a waiver of that requirement in the first year of the new test.

However, no such waiver was granted for teachers, who are evaluated using student test scores and a metric known as value-added modeling that purports to reflect student growth.

Instead, the Department of Education proposed and the legislature supported a plan to phase in the TNReady scores in teacher evaluations. This plan presents problems in terms of statistical validity.

Additionally, the American Educational Research Association released a statement recently cautioning states against using value-added models in high-stakes decisions involving teachers:

In a statement released today, the American Educational Research Association (AERA) advises those using or considering use of value-added models (VAM) about the scientific and technical limitations of these measures for evaluating educators and programs that prepare teachers. The statement, approved by AERA Council, cautions against the use of VAM for high-stakes decisions regarding educators.

So, regardless of the phase-in of TNReady, value-added models for evaluating teachers are problematic. When you add the transition to a new test to the mix, you only compound the existing problems, making any “score” assigned to a teacher even more unreliable.

Tullahoma City Schools Superintendent Dan Lawson spoke to the challenges with TVAAS recently in a letter he released in which he noted:

Our teachers are tasked with a tremendous responsibility and our principals who provide direct supervision assign teachers to areas where they are most needed. The excessive reliance on production of a “teacher number” produces stress, a lack of confidence and a drive to first protect oneself rather than best educate the child.

It will be interesting to see if other school systems follow Knox County’s lead on this front. Even more interesting: Will the legislature take action and at the least, waive the TNReady scores from teacher evaluations in the first year of the new test?

A more serious, long-term concern is the use of value-added modeling in teacher evaluation and, especially, in high-stakes decisions like the granting of tenure, pay, and hiring/firing.

More on Value-Added Modeling

The Absurdity of VAM

Unreliable and Invalid

Some Inconvenient Facts About VAM

For more on education politics and policy in Tennessee, follow @TNEdReport


It All Comes Down to a Number

Dan Lawson is the Director of Schools for Tullahoma City Schools. He sent this message and the American Educational Research Association press release to a group of Tennessee lawmakers.

I am the superintendent of Tullahoma City Schools, and in light of the media coverage associated with Representative Holt and a dialogue with teachers in west Tennessee, I wanted to share a few thoughts with each of you who represent teachers in other districts in Tennessee. I am thankful that each of you has a commitment to service and works to cultivate a great relationship with the teachers and communities you represent.

While it is certainly troubling that many question the developmental appropriateness of the standards being taught, and that the actual test administration may be a considerable challenge due to hardware, software and capacity concerns, I think one of the major issues has been overlooked, and it is one that could easily address many concerns and restore a sense of confidence in many of our teachers.

Earlier this week the American Educational Research Association released a statement (see below) cautioning states “against the use of VAM for high-stakes decisions regarding educators.” It seems to me that no matter what counsel I provide, what resources I bring to assist and how much I share our corporate school district priorities, we boil our work and worth as a teacher down to a number. And for many that number is a product of how well they guess on what a school-wide number could be since they don’t have a tested area.

Our teachers are tasked with a tremendous responsibility and our principals who provide direct supervision assign teachers to areas where they are most needed. The excessive reliance on production of a “teacher number” produces stress, a lack of confidence and a drive to first protect oneself rather than best educate the child. As an example, one of my principals joined me in meeting with an exceptional middle school math teacher, Trent Stout. Trent expressed great concerns about the order in which the standards were presented (grade level) and advised that our math department was confident that a different order would better serve our students developmentally and better prepare them for higher level math courses offered in our community. He went on to opine that while he thought we (and he) would take a “hit” on our eighth grade assessment it would serve our students better to adopt the proposed timeline. I agreed. It is important to note that I was able to dialogue with this professional out of a sense of joint respect and trust and with knowledge that his status with our district was solely controlled by local decision makers. He is a recipient of “old tenure.” However, don’t mishear me, I am not requesting the restoration of “old tenure,” simply a modification of the newly enacted statute. I propose that a great deal of confidence in “listening and valuing” teachers could be restored by amending the tenure statute to allow local control rather than state eligibility.

I have teachers in my employ with no test data who guess well and are eligible for tenure status, while I have others who guess poorly and are not eligible. Certainly, the final decision to award tenure is a local one, but it is a local decision based on state-produced data that may be flawed or based on teachers other than the potential nominee. Furthermore, if we opine that tenure does indeed have value, I am absolutely lost when I attempt to explain to new teachers that if they are not eligible for tenure I may employ them for an unlimited number of additional contracts, but if they are eligible based on their number and our BOE decides that it will not award tenure to anyone, I am compelled to non-renew those who may be highly effective teachers. The thought that the statute allows me to reemploy a level 1 teacher while compelling me to non-renew a level 5 teacher seems more than a bit ironic and ridiculous.

I greatly appreciate your service to our state and our future and would love to see an extensive dialogue about the adoption of Common Sense.

The American Educational Research Association Statement on Value-Added Modeling:

In a statement released today, the American Educational Research Association (AERA) advises those using or considering use of value-added models (VAM) about the scientific and technical limitations of these measures for evaluating educators and programs that prepare teachers. The statement, approved by AERA Council, cautions against the use of VAM for high-stakes decisions regarding educators.

In recent years, many states and districts have attempted to use VAM to determine the contributions of educators, or the programs in which they were trained, to student learning outcomes, as captured by standardized student tests. The AERA statement speaks to the formidable statistical and methodological issues involved in isolating either the effects of educators or teacher preparation programs from a complex set of factors that shape student performance.

“This statement draws on the leading testing, statistical, and methodological expertise in the field of education research and related sciences, and on the highest standards that guide education research and its applications in policy and practice,” said AERA Executive Director Felice J. Levine.

The statement addresses the challenges to the validity of inferences drawn from VAM and specifies eight technical requirements that must be met for the use of VAM to be accurate, reliable, and valid. It cautions that these requirements cannot be met in most evaluative contexts.

The statement notes that, while VAM may be superior to some other models of measuring teacher impacts on student learning outcomes, “it does not mean that they are ready for use in educator or program evaluation. There are potentially serious negative consequences in the context of evaluation that can result from the use of VAM based on incomplete or flawed data, as well as from the misinterpretation or misuse of the VAM results.”

The statement also notes that there are promising alternatives to VAM currently in use in the United States that merit attention, including the use of teacher observation data and peer assistance and review models that provide formative and summative assessments of teaching and honor teachers’ due process rights.

The statement concludes: “The value of high-quality, research-based evidence cannot be over-emphasized. Ultimately, only rigorously supported inferences about the quality and effectiveness of teachers, educational leaders, and preparation programs can contribute to improved student learning.” Thus, the statement also calls for substantial investment in research on VAM and on alternative methods and models of educator and educator preparation program evaluation.

The AERA Statement includes 8 technical requirements for the use of VAM:

  1. “VAM scores must only be derived from students’ scores on assessments that meet professional standards of reliability and validity for the purpose to be served…Relevant evidence should be reported in the documentation supporting the claims and proposed uses of VAM results, including evidence that the tests used are a valid measure of growth [emphasis added] by measuring the actual subject matter being taught and the full range of student achievement represented in teachers’ classrooms” (p. 3).
  2. “VAM scores must be accompanied by separate lines of evidence of reliability and validity that support each [and every] claim and interpretative argument” (p. 3).
  3. “VAM scores must be based on multiple years of data from sufficient numbers of students…[Related,] VAM scores should always be accompanied by estimates of uncertainty to guard against [simplistic] overinterpretation[s] of [simple] differences” (p. 3).
  4. “VAM scores must only be calculated from scores on tests that are comparable over time…[In addition,] VAM scores should generally not be employed across transitions [to new, albeit different tests over time]” (AERA Council, 2015, p. 3).
  5. “VAM scores must not be calculated in grades or for subjects where there are not standardized assessments that are accompanied by evidence of their reliability and validity…When standardized assessment data are not available across all grades (K–12) and subjects (e.g., health, social studies) in a state or district, alternative measures (e.g., locally developed assessments, proxy measures, observational ratings) are often employed in those grades and subjects to implement VAM. Such alternative assessments should not be used unless they are accompanied by evidence of reliability and validity as required by the AERA, APA, and NCME Standards for Educational and Psychological Testing” (p. 3).
  6. “VAM scores must never be used alone or in isolation in educator or program evaluation systems…Other measures of practice and student outcomes should always be integrated into judgments about overall teacher effectiveness” (p. 3).
  7. “Evaluation systems using VAM must include ongoing monitoring for technical quality and validity of use…Ongoing monitoring is essential to any educator evaluation program and especially important for those incorporating indicators based on VAM that have only recently been employed widely. If authorizing bodies mandate the use of VAM, they, together with the organizations that implement and report results, are responsible for conducting the ongoing evaluation of both intended and unintended consequences. The monitoring should be of sufficient scope and extent to provide evidence to document the technical quality of the VAM application and the validity of its use within a given evaluation system” (AERA Council, 2015, p. 3).
  8. “Evaluation reports and determinations based on VAM must include statistical estimates of error associated with student growth measures and any ratings or measures derived from them…There should be transparency with respect to VAM uses and the overall evaluation systems in which they are embedded. Reporting should include the rationale and methods used to estimate error and the precision associated with different VAM scores. Also, their reliability from year to year and course to course should be reported. Additionally, when cut scores or performance levels are established for the purpose of evaluative decisions, the methods used, as well as estimates of classification accuracy, should be documented and reported. Justification should [also] be provided for the inclusion of each indicator and the weight accorded to it in the evaluation process…Dissemination should [also] include accessible formats that are widely available to the public, as well as to professionals” ( p. 3-4).

The bottom line: Tennessee’s use of TVAAS in teacher evaluations is highly problematic.

More on TVAAS:

Not Yet TNReady

The Worst Teachers

Validating the Invalid

More on Peer Assistance and Review:

Is PAR a Worthy Investment?

For more on education politics and policy in Tennessee, follow @TNEdReport