Testing Resolve

The Tullahoma City Schools Board of Education will vote September 19th on a resolution related to state standardized tests. Specifically, the resolution calls for a shift to the use of ACT/SAT assessments and a significant reduction in the amount of time students spend taking tests.

A similar resolution was passed by the Board last year.

Here is the resolution:

A RESOLUTION OF THE TULLAHOMA CITY BOARD OF EDUCATION
IN SUPPORT OF ADMINISTRATION OF THE ACT OR SAT SUITE OF ASSESSMENTS TO MEET TCAP AND “EVERY STUDENT SUCCEEDS ACT” REQUIREMENTS IN OUR END OF COURSE ASSESSMENTS AT THE HIGH SCHOOL LEVEL AND AT THE 3-8 GRADE LEVELS

WHEREAS, the Tullahoma City Board of Education is the local governmental body responsible for providing a public education to the students and families of Tullahoma City, Tennessee; and

WHEREAS, the State of Tennessee through the work of the Tennessee General Assembly, the Tennessee Department of Education, the Tennessee Board of Education, and local boards of education has established nationally recognized standards and measures for accountability in public education; and

WHEREAS, the Tennessee Department of Education is currently working to implement a replacement to the former Tennessee Comprehensive Assessment Program (TCAP) for the 2016-2017 school year; and

WHEREAS, these new assessments are called TNReady for the areas of English/language arts and math, grades 3 – 8 and TCAP Social Studies Achievement and U.S. History End of Course (EOC) exams; and

WHEREAS, during the assessment cycle of the 2015-16 school year an attempt to administer the assessments was deemed by the Tennessee Department of Education to be a “No-Go;” and

WHEREAS, the Tennessee Department of Education terminated the contract with Measurement, Incorporated and has secured the services of Questar to provide assessment services for Tennessee; and

WHEREAS, ACT and SAT have been used for decades as standard measures of college readiness, and all universities and colleges in Tennessee and the United States utilize the ACT as an admission assessment; and

WHEREAS, pursuant to T.C.A. § 49-6-6001, all public school students must participate in a postsecondary readiness assessment such as the ACT or SAT. Districts may choose to administer the ACT or the SAT. Districts can also provide both assessments and allow their students to choose the assessment that is right for them; and

WHEREAS, one of the strategic goals of the Tennessee Department of Education is an increase of the average composite score to 21, and the benchmark for college readiness is a composite score of 21. The ACT has further broken down the benchmarks into an 18 for English, 22 for Math, 22 for Reading, and 23 for Science. If a student is able to score at, or above, these important benchmarks, they have a high probability of success in credit-bearing college courses. School districts are distinguished by the percentage of students meeting college readiness benchmarks; and

WHEREAS, both the ACT and the SAT are designed to assist colleges, universities, employers, and policy makers in the determination of college and career ready students, and experts in education administration, child development, and child psychology endorse standardized testing as a limited measure of progress and effectiveness in the important task of learning;

NOW, THEREFORE, BE IT RESOLVED

The Tullahoma City Board of Education implores the Tennessee General Assembly and the Tennessee Department of Education to allow school districts the opportunity to select either the math and English language arts assessments provided by the State of Tennessee or an English or math test that is part of the suites of standardized assessments available from either ACT or SAT.

BE IT FURTHER RESOLVED,

The Tullahoma City Board of Education implores the Tennessee General Assembly and the Tennessee Department of Education to direct psychometricians, contractors, and developers to construct assessments designed to inform instructional practice and to provide accountability that would not require for administration a period of time in hours greater in aggregate than the specific grade level of the said child, and not to exceed eight hours in length per academic year.

More on testing:

Still Too Much Testing?

Testing Time Reductions Announced

Questar’s Challenge

For more on education politics and policy in Tennessee, follow @TNEdReport

A Modification

As we reported yesterday, Phase II of TNReady is not so ready. In fact, Grace Tatter reports that the problem is statewide, impacting grades 3-8.

Tatter cites an email from the Department of Education indicating the state is not sure when the Phase II tests will be delivered to districts.

The email also says:

“…Districts may modify their testing schedules as needed, without any prior approval or notice to the (state),”

The thing is, some districts have already been trying to modify their schedules by not giving the test at all. The idea of refusing to administer Phase II surfaced in Murfreesboro in late March and early April. The state responded by issuing a vague threat regarding withholding BEP funds.

Tullahoma City Schools on Monday unanimously approved a resolution calling on the state to cancel testing for the remainder of this year.

All of this was before the realization that Phase II tests would not make it to Tennessee districts on time. Now, though, the Department of Education’s own words suggest that districts may modify as they see fit without consulting the state. One possible modification would be to not administer the test at all. Another would be to schedule it for a time in June when students aren’t in school. Districts could say they offered the test, but no one showed up to take it.

The state has also made a big fuss about what happens to students/districts if students simply refuse to take the test. Trouble is, the state’s memo is based on some pretty fuzzy reasoning.

As this piece was being written, the Department of Education announced it will not ask districts to reschedule tests beyond the current testing window, which expires on May 10th. That means if materials are not received in time for administration by that date, districts don’t have to administer the tests. The Department also indicated it would provide additional flexibility to districts.

From Jason Gonzales:

The Tennessee Department of Education announced to districts Friday it won’t reschedule the TNReady testing window again this year and for those districts that don’t receive tests on time, will provide flexibility.

“We will not ask districts to reschedule again beyond what has been communicated to date, and we will not extend the testing window beyond May 10,” according to a statement sent to districts Friday.

So, what’s next? Will the state cancel the contract with testing vendor Measurement, Inc.? Will Commissioner McQueen assume responsibility for the failed transition to a new test?

Only time will tell, and there’s not much time left.


Still Not F*&#ing Ready

TNReady Phase II is supposed to be starting, except it won’t. It seems that shipping delays will prevent at least eight school districts from starting the planned administration of Phase II next week.

In Sumner County, emails have gone out confirming the delay and a new planned start date of May 2nd.

Officials in seven other districts have confirmed they have yet to receive the testing materials.

This comes after a disastrous first day of TNReady testing back in February and subsequent shipping delays of Phase I paper materials.

It also comes after the Murfreesboro City School Board discussed refusing to administer Phase II and the Tullahoma City Schools considered a resolution calling on the state to stop any further testing this year.

From the start, the transition to TNReady has been bungled. While Commissioner McQueen continues to make excuses, blame the vendor, and promise a better outcome next time, students in Tennessee schools face disrupted schedules and loss of learning time.

Instead of issuing threats to districts, perhaps the Department of Education should have been developing solutions or simply responding to the frustrations of students, parents, and teachers across the state. Maybe stopping after Phase I would have allowed for a true course correction.

In any case, we’re still not TNReady.


Phasing Out

As Tennessee schools prepare to administer Phase II of the TNReady tests in late April and early May, parents are petitioning the General Assembly to stop the second phase altogether.

Grace Tatter reports:

Nearly 2,000 parents have signed a petition asking Gov. Bill Haslam and other state leaders to nix the entire second part of Tennessee’s new standardized assessment for students grades 3-11.

The change.org petition, which was started last week, garnered 1,000 signatures in its first three days from parents across the state.

The petition was started by Tullahoma parent and School Board member Jessica Fogarty.

While the Department of Education indicates it has no plans to suspend TNReady testing for this year, the Tullahoma School Board is set to vote on a resolution asking for just that at a meeting on Monday, April 18th.

Here’s a draft of that resolution:

A RESOLUTION OF THE TULLAHOMA CITY BOARD OF EDUCATION

TO SUPPORT A DELAY IN THE ADMINISTRATION OF TCAP ASSESSMENTS AT THE 3-8 GRADE LEVELS UNTIL SUCH A TIME THAT THE ASSESSMENTS AT EACH GRADE LEVEL NOT EXCEED A TOTAL NUMBER OF HOURS AS ENUMERATED BY THE GIVEN GRADE

WHEREAS, the Tullahoma City Board of Education is the local governmental body responsible for providing a public education to the students and families of Tullahoma City, Tennessee; and

WHEREAS, the State of Tennessee through the work of the Tennessee General Assembly, the Tennessee Department of Education, the Tennessee Board of Education, and local boards of education, has established nationally recognized standards and measures for accountability in public education; and

WHEREAS, the Tennessee Department of Education is currently working to implement a replacement to the former Tennessee Comprehensive Assessment Program (“TCAP”) for the 2015-2016 school year; and

WHEREAS, these new assessments are called TNReady for the areas of English/language arts and math, grades 3 – 8, and TCAP Social Studies Achievement and U.S. History End of Course exams; and

WHEREAS, this school year is the first year that the new assessments will be administered and as such, the new assessments are more appropriate tools for establishing baseline performance than they are for evaluating or comparing performance; and

WHEREAS, because of the testing transition within TCAP including TNReady and other issues, the Tennessee Department of Education has already acknowledged that, for the 2015-2016 school year, public school systems in Tennessee will likely not be able to integrate the test results into each student’s final grades; and

WHEREAS, the Senate Education Committee of the Tennessee General Assembly has scheduled a hearing to address issues and concerns associated with the delivered assessment product provided by Measurement, Incorporated; and

WHEREAS, experts in education administration, child development, and child psychology endorse standardized testing as a limited measure of progress and effectiveness in the important task of learning; and

WHEREAS, current TCAP-TNReady mandated assessments in grade 3 exceed 11.23 hours per student, or more than the ACT Test at 2.95 hours, the SAT Test at 3.00 hours, the Graduate Record Examinations (GRE) at 3.75 hours, the Law School Admission Test (LSAT) at 2.83 hours, or the Medical College Admission Test (MCAT) at 6.25 hours; and

WHEREAS, current TCAP-TNReady mandated assessments in grades four and five (4, 5) exceed 11.08 hours per student, or more than the ACT Test at 2.95 hours, the SAT Test at 3.00 hours, the Graduate Record Examinations (GRE) at 3.75 hours, the Law School Admission Test (LSAT) at 2.83 hours, or the Medical College Admission Test (MCAT) at 6.25 hours; and

WHEREAS, current TCAP-TNReady mandated assessments in grades six, seven, and eight (6, 7, 8) exceed 11.83 hours per student, or more than the ACT Test at 2.95 hours, the SAT Test at 3.00 hours, the Graduate Record Examinations (GRE) at 3.75 hours, the Law School Admission Test (LSAT) at 2.83 hours, or the Medical College Admission Test (MCAT) at 6.25 hours;

NOW, THEREFORE, BE IT RESOLVED

The Tullahoma City Board of Education implores the Tennessee General Assembly and the Tennessee Department of Education to direct school districts to delay administrations of the TNReady suite of assessments until such a time that the assessments are of a reasonable amount of time for student completion of the assessment.

BE IT FURTHER RESOLVED,

The Tullahoma City Board of Education implores the Tennessee General Assembly and the Tennessee Department of Education to direct psychometricians, contractors, and developers to construct assessments designed to inform instructional practice and to provide accountability that would not require for administration a period of time in hours greater in aggregate than the specific grade level of the said child.

Thoughts on Annual Student Assessments

Dan Lawson is the Director of Schools at Tullahoma City Schools

The Issue: Assessments of student academic progress. As you well know, the State of Tennessee is transitioning from our former assessment suite to the new TNReady assessments. Furthermore, you are also well aware that many of the standards on which the current assessment is based are under review, with additions and removals under consideration.
The Background: In the scenario I described about teachers and growth scores, a senior teacher representing a lauded math department was able to present data that clearly and convincingly aligned our instruction with two critical components of our academic program in Tullahoma City Schools: ACT and Advanced Placement. As he visited with me, he did so with a concern that can best be characterized by this summary: our instructional path produces ACT and Advanced Placement scores significantly above the state average, and if we teach all the prescribed TNReady standards in timeframes aligned with TNReady assessments, we are concerned that our student performance on ACT and Advanced Placement assessments will decline.

Certainly, that statement is based on the experiences and anecdotes of my staff members, but there is tremendous logic in this fundamental question.  Since one of our primary purposes and expected outcomes is to produce students who are “college and career ready” as measured by ACT or SAT, why don’t we allow schools and districts with the desire to do so to assess based on the ACT or SAT suite of services aligned with the measure we aspire to accomplish?  While the issue of assessment often is directly linked to the issue of accountability, I submit that the accountability of most schools and districts would be enhanced by reporting scores that both our students and their parents readily understand.  To that end, nearly every high school student enrolled in Tennessee high schools clearly understands the difference between a “15” and a “30” on the ACT.  That understanding makes it much easier for a teacher and school leader to discuss and propose interventions to address the “15” that has been reported for that student.
A Proposed Solution: There has been a misalignment in the testing/teaching standards from SAT 10 to TCAP to ACT, and this misalignment has caused some systems to experience low TVAAS scores on K-2, 3-8, and 9-12 assessments. Until we pick a plan and follow that plan, we will be hard pressed to see college and career readiness expand in Tennessee. If college and career readiness is really our goal, then don’t we need a clearly and cleanly aligned set of standards to reach that goal?
Align the state assessment with the ACT or SAT suite of services. I understand concerns exist suggesting that we cannot accomplish that outcome and remain compliant with state procurement, but I am also well aware that other states utilize the ACT suite today. I am confident that we have the ability to accomplish anything the state of Alabama has accomplished.


Growth Scores and Teacher Tenure

Dan Lawson is the Director of Schools at Tullahoma City Schools. This post reflects his thoughts on the current use of TVAAS as it relates to teacher tenure.
The issue: “growth scores” as a determinant for teacher tenure recommendations.
The Background: I employ an outstanding young teacher who enjoyed three consecutive years of level “4”+ evaluations, and those scores were moved to a “5” based on the TCAP growth score. In the “old tenure” model, that teacher would have been eligible for a tenure recommendation to our Board of Education upon completion of three years of service and the recommendation of the superintendent.
The statutorily revised “new tenure” requires five years of service (a probationary period) as well as an overall score of “4” or “5” for the two consecutive years preceding the recommendation to the Board of Education. Last year, no social studies assessment score was provided since the assessment was field tested, and the teacher was compelled to select a school-wide measure of growth. He chose POORLY: his observation score of “4.38,” paired with a school-wide growth score of “2” in the selected area, produced an overall teacher score of “3,” thereby making him ineligible for tenure nomination.
This is a very real example of an inequity in our current tenure eligibility metrics. In the 2014-15 evaluation cycle, more than 66.6% of Tullahoma teachers did not have an individual assessment score and so were compelled to select some other measure. In this case, we have a teacher we are happy with, who produces great student outcomes, and whom we would like to recognize with tenured status, but we are unable to do so. More than anything, this sends a message that the process for the majority of our teachers is little more than an arbitrary guessing game, and that guessing game does little more than erode confidence. Our teachers deserve better.

A second teacher visited with his building principal and me regarding standards that are not taught in alignment with the state assessment. He went on to produce competition results, ACT scores, and AP Calculus scores of the students in that “pipeline” in support of his math department’s teaching practice. His request was simple: allow me to teach with a focus on the end product instead of a focus on a test this May. Within that dialogue, he was quick to share that he expected his growth score to suffer that year, but that in the long term our students would be better served. Furthermore, he opined that as long as his principal and superintendent were in place and understood the “big picture,” he really had no concerns. I concurred. However, his next statement was deeply troubling. He said, “while the number doesn’t mean anything to us, when I retire, that next teacher may believe that number is the most important measure of progress.”

I believe in accountability.  My board and I embrace expectations of high performance and I am comfortable in making personnel decisions aligned with school improvement and the best academic and developmental opportunities for our children. In this circumstance, however, we are letting the “tail” of growth scores “wag the dog” of teacher evaluations and subsequent tenure eligibility.

A Proposed Solution: We are supportive of the award of tenure returning to a local decision, with eligibility determined by service and evaluations. If, however, that change is not palatable, I believe an amendment to the current “tenure” statute allowing a school district to present “mitigating and compelling reason(s),” sponsored by the superintendent, to the TDOE for review is warranted. We find the current system of “growth scores” serving as the overwhelming criterion to be an ineffective measure, since in our school system a majority of our teachers do not have those scores available for their use and are thereby compelled to use some school-wide measure over which they may have limited influence.


Not Yet Ready for Teacher Evaluation?

Last night, the Knox County Board of Education passed a resolution asking the state to not count this year’s new TNReady test in teacher evaluation.

Board members cited the grace period the state is granting to students as one reason for the request. While standardized test scores count in student grades, the state has granted a waiver of that requirement in the first year of the new test.

However, no such waiver was granted for teachers, who are evaluated using student test scores and a metric known as value-added modeling that purports to reflect student growth.

Instead, the Department of Education proposed, and the legislature supported, a plan to phase in the TNReady scores in teacher evaluations. This plan presents problems of its own in terms of statistical validity.

Additionally, the American Educational Research Association released a statement recently cautioning states against using value-added models in high-stakes decisions involving teachers:

In a statement released today, the American Educational Research Association (AERA) advises those using or considering use of value-added models (VAM) about the scientific and technical limitations of these measures for evaluating educators and programs that prepare teachers. The statement, approved by AERA Council, cautions against the use of VAM for high-stakes decisions regarding educators.

So, regardless of the phase-in of TNReady, value-added models for evaluating teachers are problematic. When you add the transition to a new test to the mix, you only compound the existing problems, making any “score” assigned to a teacher even more unreliable.
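A simplified sketch may help show what is at stake. Value-added models, in essence, credit a teacher with the average gap between her students' actual scores and the scores a statistical model predicted from prior performance. The toy example below is illustrative only: TVAAS itself is a proprietary and far more elaborate model, and every name and number here is invented.

```python
# Toy value-added sketch (NOT TVAAS): fit a least-squares line predicting
# this year's score from last year's score across all students, then
# credit each teacher with the average residual of her own students.
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares intercept and slope for y ~ a + b*x."""
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def value_added(students):
    """students: list of (teacher, prior_score, current_score) tuples."""
    a, b = fit_line([s[1] for s in students], [s[2] for s in students])
    residuals = {}
    for teacher, prior, current in students:
        residuals.setdefault(teacher, []).append(current - (a + b * prior))
    return {t: mean(r) for t, r in residuals.items()}

# Invented data: two teachers, three students each.
data = [
    ("Smith", 40, 50), ("Smith", 60, 72), ("Smith", 80, 93),
    ("Jones", 40, 44), ("Jones", 60, 66), ("Jones", 80, 86),
]
print(value_added(data))  # Smith's students beat the prediction; Jones's trail it
```

Notice how completely the result depends on the prediction line being trustworthy: when the current-year test changes, as with the TNReady transition, the predictions, and therefore every teacher's "effect," inherit that disruption.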

Tullahoma City Schools Superintendent Dan Lawson spoke to the challenges with TVAAS recently in a letter he released in which he noted:

Our teachers are tasked with a tremendous responsibility and our principals who provide direct supervision assign teachers to areas where they are most needed. The excessive reliance on production of a “teacher number” produces stress, a lack of confidence and a drive to first protect oneself rather than best educate the child.

It will be interesting to see if other school systems follow Knox County’s lead on this front. Even more interesting: Will the legislature take action and at the least, waive the TNReady scores from teacher evaluations in the first year of the new test?

A more serious, long-term concern is the use of value-added modeling in teacher evaluation and, especially, in high-stakes decisions like the granting of tenure, pay, and hiring/firing.

More on Value-Added Modeling

The Absurdity of VAM

Unreliable and Invalid

Some Inconvenient Facts About VAM

It All Comes Down to a Number

Dan Lawson is the Director of Schools for Tullahoma City Schools. He sent this message and the American Educational Research Association press release to a group of Tennessee lawmakers.

I am the superintendent of Tullahoma City Schools, and in light of the media coverage associated with Representative Holt and a dialogue with teachers in west Tennessee, I wanted to share a few thoughts with each of you who represent teachers in other districts in Tennessee. I am thankful that each of you has a commitment to service and works to cultivate a great relationship with the teachers and communities you represent.

While it is certainly troubling that the developmental appropriateness of the standards being taught is questioned by many, and that the actual test administration may be a considerable challenge due to hardware, software, and capacity concerns, I think one of the major issues has been overlooked, an issue that could easily address many concerns and restore a sense of confidence in many of our teachers.

Earlier this week the American Educational Research Association released a statement (see below) cautioning states “against the use of VAM for high-stakes decisions regarding educators.” It seems to me that no matter what counsel I provide, what resources I bring to assist and how much I share our corporate school district priorities, we boil our work and worth as a teacher down to a number. And for many that number is a product of how well they guess on what a school-wide number could be since they don’t have a tested area.

Our teachers are tasked with a tremendous responsibility and our principals who provide direct supervision assign teachers to areas where they are most needed. The excessive reliance on production of a “teacher number” produces stress, a lack of confidence and a drive to first protect oneself rather than best educate the child. As an example, one of my principals joined me in meeting with an exceptional middle school math teacher, Trent Stout. Trent expressed great concerns about the order in which the standards were presented (grade level) and advised that our math department was confident that a different order would better serve our students developmentally and better prepare them for higher level math courses offered in our community. He went on to opine that while he thought we (and he) would take a “hit” on our eighth grade assessment it would serve our students better to adopt the proposed timeline. I agreed. It is important to note that I was able to dialogue with this professional out of a sense of joint respect and trust and with knowledge that his status with our district was solely controlled by local decision makers. He is a recipient of “old tenure.” However, don’t mishear me, I am not requesting the restoration of “old tenure,” simply a modification of the newly enacted statute. I propose that a great deal of confidence in “listening and valuing” teachers could be restored by amending the tenure statute to allow local control rather than state eligibility.

I have teachers in my employ with no test data who guess well and are eligible for tenure status, while I have others who guess poorly and are not eligible. Certainly, the final decision to award tenure is a local one, but it is local based on state-produced data that may be flawed or based on teachers other than the potential nominee. Furthermore, if we opine that tenure does indeed have value, I am absolutely lost when I attempt to explain to new teachers that if they are not eligible for tenure I may employ them for an unlimited number of added contracts, but if they are eligible based on their number and our BOE decides that it will not award tenure to anyone, I am compelled to non-renew those who may be highly effective teachers. The thought that the statute allows me to reemploy a level 1 teacher while compelling me to non-renew a level 5 teacher seems more than a bit ironic and ridiculous.

I greatly appreciate your service to our state and our future and would love to see an extensive dialogue associated to the adoption of Common Sense.

The American Educational Research Association Statement on Value-Added Modeling:

In a statement released today, the American Educational Research Association (AERA) advises those using or considering use of value-added models (VAM) about the scientific and technical limitations of these measures for evaluating educators and programs that prepare teachers. The statement, approved by AERA Council, cautions against the use of VAM for high-stakes decisions regarding educators.

In recent years, many states and districts have attempted to use VAM to determine the contributions of educators, or the programs in which they were trained, to student learning outcomes, as captured by standardized student tests. The AERA statement speaks to the formidable statistical and methodological issues involved in isolating either the effects of educators or teacher preparation programs from a complex set of factors that shape student performance.

“This statement draws on the leading testing, statistical, and methodological expertise in the field of education research and related sciences, and on the highest standards that guide education research and its applications in policy and practice,” said AERA Executive Director Felice J. Levine.

The statement addresses the challenges facing the validity of inferences from VAM, as well as specifies eight technical requirements that must be met for the use of VAM to be accurate, reliable, and valid. It cautions that these requirements cannot be met in most evaluative contexts.

The statement notes that, while VAM may be superior to some other models of measuring teacher impacts on student learning outcomes, “it does not mean that they are ready for use in educator or program evaluation. There are potentially serious negative consequences in the context of evaluation that can result from the use of VAM based on incomplete or flawed data, as well as from the misinterpretation or misuse of the VAM results.”

The statement also notes that there are promising alternatives to VAM currently in use in the United States that merit attention, including the use of teacher observation data and peer assistance and review models that provide formative and summative assessments of teaching and honor teachers’ due process rights.

The statement concludes: “The value of high-quality, research-based evidence cannot be over-emphasized. Ultimately, only rigorously supported inferences about the quality and effectiveness of teachers, educational leaders, and preparation programs can contribute to improved student learning.” Thus, the statement also calls for substantial investment in research on VAM and on alternative methods and models of educator and educator preparation program evaluation.

The AERA Statement includes 8 technical requirements for the use of VAM:

  1. “VAM scores must only be derived from students’ scores on assessments that meet professional standards of reliability and validity for the purpose to be served…Relevant evidence should be reported in the documentation supporting the claims and proposed uses of VAM results, including evidence that the tests used are a valid measure of growth [emphasis added] by measuring the actual subject matter being taught and the full range of student achievement represented in teachers’ classrooms” (p. 3).
  2. “VAM scores must be accompanied by separate lines of evidence of reliability and validity that support each [and every] claim and interpretative argument” (p. 3).
  3. “VAM scores must be based on multiple years of data from sufficient numbers of students…[Related,] VAM scores should always be accompanied by estimates of uncertainty to guard against [simplistic] overinterpretation[s] of [simple] differences” (p. 3).
  4. “VAM scores must only be calculated from scores on tests that are comparable over time…[In addition,] VAM scores should generally not be employed across transitions [to new, albeit different tests over time]” (AERA Council, 2015, p. 3).
  5. “VAM scores must not be calculated in grades or for subjects where there are not standardized assessments that are accompanied by evidence of their reliability and validity…When standardized assessment data are not available across all grades (K–12) and subjects (e.g., health, social studies) in a state or district, alternative measures (e.g., locally developed assessments, proxy measures, observational ratings) are often employed in those grades and subjects to implement VAM. Such alternative assessments should not be used unless they are accompanied by evidence of reliability and validity as required by the AERA, APA, and NCME Standards for Educational and Psychological Testing” (p. 3).
  6. “VAM scores must never be used alone or in isolation in educator or program evaluation systems…Other measures of practice and student outcomes should always be integrated into judgments about overall teacher effectiveness” (p. 3).
  7. “Evaluation systems using VAM must include ongoing monitoring for technical quality and validity of use…Ongoing monitoring is essential to any educator evaluation program and especially important for those incorporating indicators based on VAM that have only recently been employed widely. If authorizing bodies mandate the use of VAM, they, together with the organizations that implement and report results, are responsible for conducting the ongoing evaluation of both intended and unintended consequences. The monitoring should be of sufficient scope and extent to provide evidence to document the technical quality of the VAM application and the validity of its use within a given evaluation system” (AERA Council, 2015, p. 3).
  8. “Evaluation reports and determinations based on VAM must include statistical estimates of error associated with student growth measures and any ratings or measures derived from them…There should be transparency with respect to VAM uses and the overall evaluation systems in which they are embedded. Reporting should include the rationale and methods used to estimate error and the precision associated with different VAM scores. Also, their reliability from year to year and course to course should be reported. Additionally, when cut scores or performance levels are established for the purpose of evaluative decisions, the methods used, as well as estimates of classification accuracy, should be documented and reported. Justification should [also] be provided for the inclusion of each indicator and the weight accorded to it in the evaluation process…Dissemination should [also] include accessible formats that are widely available to the public, as well as to professionals” ( p. 3-4).
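Requirement 3 above, multiple years of data accompanied by estimates of uncertainty, comes down to simple arithmetic: the standard error of an average residual shrinks only with the square root of the number of student scores behind it. The figures in the sketch below are invented for illustration, not Tennessee data.

```python
# Back-of-the-envelope: how much a teacher's mean-residual "score" can
# wobble with one class of students versus three years of pooled classes.
# The residual spread (sd) is an assumed, illustrative value.
from math import sqrt

def standard_error(residual_sd, n_students):
    """Standard error of a mean over n independent student residuals."""
    return residual_sd / sqrt(n_students)

sd = 12.0                               # assumed spread of student residuals
one_year = standard_error(sd, 25)       # a single class of 25
three_years = standard_error(sd, 75)    # three classes pooled
print(f"1 year:  +/- {one_year:.1f} points")
print(f"3 years: +/- {three_years:.1f} points")
```

Pooling three years cuts the error band from roughly 2.4 points to roughly 1.4, which is precisely why AERA insists on multiple years of data and reported error before any high-stakes call, and why a single-year score straddling a test transition is the weakest case of all.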

The bottom line:  Tennessee’s use of TVAAS in teacher evaluations is highly problematic.

More on TVAAS:

Not Yet TNReady

The Worst Teachers

Validating the Invalid

More on Peer Assistance and Review:

Is PAR a Worthy Investment?
