Sunday, July 31, 2016

Frequent Changes in State Assessments Create More Problems

I have been fortunate to have had numerous phone and email conversations with Fred Smith, a retired testing expert from NYC schools.  Fred certainly helped me learn some of the insider terminology and methodology of test creation and data.  Here Mr. Smith analyzes the release of the 2016 Grades 3-8 Assessment results and how NYSED is trying to compare this year's results to those of recent years.

_______________________________________________________

Trying to make sense of changes in proficiency from one year to the next should involve careful analysis and cautious interpretation.  This is especially true when there are large differences.  But instead of thoughtful examination of the data and reasonable explanation, the State Education Department's announcement of 2016's test scores provides conjecture and contradiction leading to confusion rather than any understanding of the educational meaning of the results.

Here are three points:

1- SED says the 2016 results are not comparable to 2015's. Yet, slides containing bar graphs of student proficiency levels over the last four years accompany SED's press release and clearly invite comparison. SED fails to explicitly note that the lack of comparability was largely due to Education Commissioner Elia's decision to suspend time limits on all parts of the ELA and math exams--each of which was given over a three-day period. This ill-conceived concession to criticisms of the exams meant that the time students were allowed to complete the tests could vary widely from class to class, school to school and district to district. Thus, standardized testing conditions ceased to exist in 2016. And without standardization, there can be no meaningful comparisons of growth over time or, for that matter, of achievement between classes, schools and districts within the same year. 

2- By extension, how will it be possible to make true comparisons between the 2016 and 2017 outcomes after next year's tests are administered, given the lack of uniform test administration procedures? This portends another year that will be wasted on a testing program that will yield little useful information. Since test publisher Pearson conducted its first New York statewide exams in 2012, SED has now been unable to make legitimate comparisons twice. Look for that to recur in 2017. What have we been paying Pearson for? And why do we remain on this testing treadmill? 

3- The press release leaves us with a slanted picture of the continued success of the opt-out movement. A close reading of the "test refusal" data indicates that the percentage of opt-out students increased by two percent (2%). SED characterizes this as evidence that the percentage has remained "relatively flat." SED apparently wants to create the impression that opt-out has hit its peak, reached a plateau or, perhaps, run its course. At the same time, a leading point in the release is that math proficiency has increased by one percent (1%). So, 1% marks a highlighted gain in math, but 2% more test refusals are minimized as representing no change. 

SED and the Education Commissioner continue to deal in distortion and duplicity. 

Fred Smith