Test Score Resolution 2016

Thank you to all who have contacted me regarding the resolution to limit the use of test score data for this school year. I believe the media may have created some confusion about the purpose of the resolution and about the test data that will be returned to us, and I want to clear up some of that confusion.

  • The resolution does not seek to remove the test or the data that will be returned from the test. Teachers and students WILL have test data to use however they may need it, whether or not the resolution is adopted.
  • With or without this resolution, this test cannot be used to accurately measure student growth in comparison to past years, as it is not the same as any of the tests we have used previously. 
    • Since we have not yet used this test, we cannot say whether the items tested will match previous tests or our curriculum.
    • Comparing scores on three completely different achievement tests cannot give accurate results.
  • Data from the tests is expected to be delivered several months after the tests are given.
    • We have been told to expect data to be delivered sometime in November, months after the tests have been given.
    • Knox County Schools is still waiting on test data from last year’s tests, and we have been told that it may not be available until the end of November.
    • Teachers who DID use last year’s test data as part of their scores have been unable to determine when they may expect to receive any compensation for those scores.
    • If comparing scores on three completely different achievement tests is somehow useful to someone, that data will be available to both teachers and students, in the same time-frame as the previous test – which is many months too late to guide instruction.

I believe that our students should be evaluated by their teachers, based on the curriculum they are using in the classroom, using multiple measures. To put blind faith, student grades, and teacher evaluation scores all in one high-stakes test that nobody has seen, used, or vetted is premature, at best.

Especially concerning is the message the Tennessee Department of Education is pushing out, and how similar it is to the message we were given last year. Add to that the recent visits from the Commissioner of Education: one in which she and a legislative candidate met with only select people to answer questions about the test and refused to allow Knox County Board of Education members to attend, and another in which she held a celebration event at an elementary school in the same legislator’s district, during early voting, claiming that the students had made exceptional gains in science on the NAEP tests. (NAEP does not report scores at the individual or school level, which makes it impossible to say how this group of students performed.)

I have proctored TCAPS with elementary children when the test was familiar and READY and have seen how hard it is on kids. This test is as unfamiliar and potentially as UN-ready as last year’s fiasco. (I’m still shocked that there wasn’t more public outcry when people learned there was proof that the TN DOE knew the test wasn’t ready even as they were telling the public there were no problems.) The fact that the state is willing to push ahead with a test, just for the sake of testing, and without regard for kids, is shameful. To go further and allow that test to negatively impact all of the valid measures teachers use to assess student performance throughout the year is inexcusable.

As someone elected to look out for the needs of our students in one of the largest districts in the nation, I strongly believe that the Knox County BOE has a duty and a responsibility to take a leadership role on this issue and demand that our Department of Education spend its time looking out for the best interests of our students, rather than for the interests of their own high-paying jobs and the big-money interests that put them there.

 

Additional Information

 
From the American Statistical Association Statement on Using Value-Added Models for Educational Assessment, April 8, 2014
  • VAMs are generally based on standardized test scores, and do not directly measure potential teacher contributions toward other student outcomes.
  • VAMs typically measure correlation, not causation: Effects, positive or negative, attributed to a teacher may actually be caused by other factors that are not captured in the model.
  • Under some conditions, VAM scores and rankings can change substantially when a different model or test is used, and a thorough analysis should be undertaken to evaluate the sensitivity of estimates to different models.
  • VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools. Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality.