Dan Lawson is the Director of Schools for Tullahoma City Schools. He sent this message and the American Educational Research Association press release to a group of Tennessee lawmakers.
I am the superintendent of Tullahoma City Schools, and in light of the media coverage associated with Representative Holt and a dialogue with teachers in west Tennessee, I wanted to share a few thoughts with each of you who represent teachers in other districts in Tennessee. I am thankful that each of you has a commitment to service and works to cultivate a great relationship with the teachers and communities that you represent.
While it is certainly troubling that many question the developmental appropriateness of the standards being taught, and that the actual test administration may pose a considerable challenge due to hardware, software, and capacity concerns, I think one of the major issues has been overlooked, and it is one that could easily address many concerns and restore a sense of confidence in many of our teachers.
Earlier this week the American Educational Research Association released a statement (see below) cautioning states “against the use of VAM for high-stakes decisions regarding educators.” It seems to me that no matter what counsel I provide, what resources I bring to assist, and how much I share our corporate school district priorities, we boil our work and worth as a teacher down to a number. And for many teachers, that number is a product of how well they guess at what a school-wide number might be, since they do not teach in a tested area.
Our teachers are tasked with a tremendous responsibility, and our principals, who provide direct supervision, assign teachers to the areas where they are most needed. The excessive reliance on production of a “teacher number” produces stress, a lack of confidence, and a drive to first protect oneself rather than best educate the child. As an example, one of my principals joined me in meeting with an exceptional middle school math teacher, Trent Stout. Trent expressed great concerns about the order in which the standards were presented (grade level) and advised that our math department was confident that a different order would better serve our students developmentally and better prepare them for the higher-level math courses offered in our community. He went on to opine that while he thought we (and he) would take a “hit” on our eighth grade assessment, it would serve our students better to adopt the proposed timeline. I agreed.

It is important to note that I was able to dialogue with this professional out of a sense of mutual respect and trust, and with the knowledge that his status with our district was controlled solely by local decision makers. He is a recipient of “old tenure.” However, don’t mishear me: I am not requesting the restoration of “old tenure,” simply a modification of the newly enacted statute. I propose that a great deal of confidence in “listening and valuing” teachers could be restored by amending the tenure statute to allow local control rather than state eligibility.
I have teachers in my employ with no test data who guess well and are eligible for tenure status, while I have others who guess poorly and are not eligible. Certainly, the final decision to award tenure is a local one, but it is local control based on state-produced data that may be flawed or based on teachers other than the potential nominee. Furthermore, if we opine that tenure does indeed have value, I am absolutely lost when I attempt to explain to new teachers that if they are not eligible for tenure I may employ them for an unlimited number of additional contracts, but if they are eligible based on their number and our BOE decides that it will not award tenure to anyone, I am compelled to non-renew teachers who may be highly effective. The thought that the statute allows me to reemploy a level 1 teacher while compelling me to non-renew a level 5 teacher seems more than a bit ironic and ridiculous.
I greatly appreciate your service to our state and our future and would love to see an extensive dialogue associated with the adoption of Common Sense.
The American Educational Research Association Statement on Value-Added Modeling:
In a statement released today, the American Educational Research Association (AERA) advises those using or considering use of value-added models (VAM) about the scientific and technical limitations of these measures for evaluating educators and programs that prepare teachers. The statement, approved by AERA Council, cautions against the use of VAM for high-stakes decisions regarding educators.
In recent years, many states and districts have attempted to use VAM to determine the contributions of educators, or the programs in which they were trained, to student learning outcomes, as captured by standardized student tests. The AERA statement speaks to the formidable statistical and methodological issues involved in isolating either the effects of educators or teacher preparation programs from a complex set of factors that shape student performance.
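To make that estimation problem concrete, consider the general form a value-added model takes. This is an illustrative sketch only, not AERA’s formulation or the specification of any particular state system:

$$y_{ijt} = \beta\, y_{i,t-1} + \gamma^{\prime} X_{it} + \theta_j + \varepsilon_{ijt}$$

Here $y_{ijt}$ is the test score of student $i$ taught by teacher $j$ in year $t$, $y_{i,t-1}$ is the student’s prior-year score, $X_{it}$ collects whatever student and classroom covariates the model includes, and $\varepsilon_{ijt}$ is unexplained error. The teacher’s VAM score is the estimate of $\theta_j$. The formidable issue the statement describes is that $\theta_j$ absorbs every influence on scores that is correlated with teacher assignment but missing from $X_{it}$, such as non-random sorting of students into classrooms, peer effects, and school resources.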
“This statement draws on the leading testing, statistical, and methodological expertise in the field of education research and related sciences, and on the highest standards that guide education research and its applications in policy and practice,” said AERA Executive Director Felice J. Levine.
The statement addresses the challenges facing the validity of inferences from VAM and specifies eight technical requirements that must be met for the use of VAM to be accurate, reliable, and valid. It cautions that these requirements cannot be met in most evaluative contexts.
The statement notes that, while VAM may be superior to some other models of measuring teacher impacts on student learning outcomes, “it does not mean that they are ready for use in educator or program evaluation. There are potentially serious negative consequences in the context of evaluation that can result from the use of VAM based on incomplete or flawed data, as well as from the misinterpretation or misuse of the VAM results.”
The statement also notes that there are promising alternatives to VAM currently in use in the United States that merit attention, including the use of teacher observation data and peer assistance and review models that provide formative and summative assessments of teaching and honor teachers’ due process rights.
The statement concludes: “The value of high-quality, research-based evidence cannot be over-emphasized. Ultimately, only rigorously supported inferences about the quality and effectiveness of teachers, educational leaders, and preparation programs can contribute to improved student learning.” Thus, the statement also calls for substantial investment in research on VAM and on alternative methods and models of educator and educator preparation program evaluation.
The AERA Statement includes eight technical requirements for the use of VAM:
- “VAM scores must only be derived from students’ scores on assessments that meet professional standards of reliability and validity for the purpose to be served…Relevant evidence should be reported in the documentation supporting the claims and proposed uses of VAM results, including evidence that the tests used are a valid measure of growth [emphasis added] by measuring the actual subject matter being taught and the full range of student achievement represented in teachers’ classrooms” (p. 3).
- “VAM scores must be accompanied by separate lines of evidence of reliability and validity that support each [and every] claim and interpretative argument” (p. 3).
- “VAM scores must be based on multiple years of data from sufficient numbers of students…[Related,] VAM scores should always be accompanied by estimates of uncertainty to guard against [simplistic] overinterpretation[s] of [simple] differences” (p. 3). (A brief numeric illustration of what such uncertainty estimates imply follows this list.)
- “VAM scores must only be calculated from scores on tests that are comparable over time…[In addition,] VAM scores should generally not be employed across transitions [to new, albeit different tests over time]” (AERA Council, 2015, p. 3).
- “VAM scores must not be calculated in grades or for subjects where there are not standardized assessments that are accompanied by evidence of their reliability and validity…When standardized assessment data are not available across all grades (K–12) and subjects (e.g., health, social studies) in a state or district, alternative measures (e.g., locally developed assessments, proxy measures, observational ratings) are often employed in those grades and subjects to implement VAM. Such alternative assessments should not be used unless they are accompanied by evidence of reliability and validity as required by the AERA, APA, and NCME Standards for Educational and Psychological Testing” (p. 3).
- “VAM scores must never be used alone or in isolation in educator or program evaluation systems…Other measures of practice and student outcomes should always be integrated into judgments about overall teacher effectiveness” (p. 3).
- “Evaluation systems using VAM must include ongoing monitoring for technical quality and validity of use…Ongoing monitoring is essential to any educator evaluation program and especially important for those incorporating indicators based on VAM that have only recently been employed widely. If authorizing bodies mandate the use of VAM, they, together with the organizations that implement and report results, are responsible for conducting the ongoing evaluation of both intended and unintended consequences. The monitoring should be of sufficient scope and extent to provide evidence to document the technical quality of the VAM application and the validity of its use within a given evaluation system” (AERA Council, 2015, p. 3).
- “Evaluation reports and determinations based on VAM must include statistical estimates of error associated with student growth measures and any ratings or measures derived from them…There should be transparency with respect to VAM uses and the overall evaluation systems in which they are embedded. Reporting should include the rationale and methods used to estimate error and the precision associated with different VAM scores. Also, their reliability from year to year and course to course should be reported. Additionally, when cut scores or performance levels are established for the purpose of evaluative decisions, the methods used, as well as estimates of classification accuracy, should be documented and reported. Justification should [also] be provided for the inclusion of each indicator and the weight accorded to it in the evaluation process…Dissemination should [also] include accessible formats that are widely available to the public, as well as to professionals” (p. 3-4).
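To make the uncertainty and error-reporting requirements above concrete, here is a minimal sketch in plain Python. The teacher labels, point estimates, and standard errors are invented for illustration and are not real TVAAS output; the point is only what VAM scores look like once the required confidence intervals are attached.

```python
# Illustrative sketch only: hypothetical teachers and invented numbers,
# not real TVAAS data or methodology.

# (teacher label, VAM point estimate, standard error of that estimate)
teachers = [
    ("A", 0.15, 0.12),
    ("B", 0.05, 0.12),
    ("C", -0.02, 0.12),
]

Z95 = 1.96  # normal-approximation multiplier for a 95% confidence interval

for name, estimate, se in teachers:
    low, high = estimate - Z95 * se, estimate + Z95 * se
    # A teacher is statistically distinguishable from "average" (zero)
    # only if the entire interval lies on one side of zero.
    verdict = ("distinguishable from average"
               if low > 0 or high < 0
               else "indistinguishable from average")
    print(f"Teacher {name}: {estimate:+.2f} "
          f"(95% CI {low:+.2f} to {high:+.2f}) -> {verdict}")
```

With standard errors of this magnitude, all three intervals overlap zero and one another, so ranking A over B over C on the point estimates alone would not survive the error reporting the statement requires.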
The bottom line: Tennessee’s use of TVAAS (the Tennessee Value-Added Assessment System, the state’s value-added model) in teacher evaluations is highly problematic.
For more on education politics and policy in Tennessee, follow @TNEdReport