we told you so; or, the times fails to learn the lessons of history

The moment many people have been dreading, or eagerly awaiting, came this week, as the New York Times published its online tool for searching the results of NYC’s Teacher Data Reports for non-charter public school teachers in the city. Comments on the Times’ website were almost unanimously negative, and several readers and bloggers swiftly pointed out that the Times had repeated the 2010 mistake of the LA Times, which compelled the Los Angeles Unified School District to turn over individual-level data and then published teacher value-added scores with names attached.

For those who aren’t familiar with the story, the LA Times teacher data release has become a touchstone for many opponents of education “reform”, in large part because of the subsequent suicide of an LAUSD teacher (reported by the paper itself) who was said to have been despondent over the release of his less-than-perfect value-added scores. (At the time, U.S. Secretary of Education Arne Duncan publicly supported the data release as a means of recognizing good teachers, and the California Secretary of Education spoke of the “market-driven approach to results” to which the data release would contribute.) The paper still has the value-added tool up on its website, searchable by individual teacher and/or school name. And more than one of its op-ed columnists still calls for the district to release individual names with the value-added data, which the district currently refuses to do.

With this example fresh in memory, the reaction to the New York Times’ push for the city’s public schools to release individual-level teacher data was loud and harsh, and occasionally came from otherwise unexpected places. Bill Gates, whose private foundation funds research on (and implementation of) value-added teacher evaluation, wrote an op-ed for the Times explaining his objections to the release of the data, mostly having to do with the ways in which public scrutiny and reporting can undermine internal employee improvement processes. At least one New York City principal has stated that many of the scores for her school are “simply wrong” and that the interpretations made in the Times reports are “arbitrary and often flawed”. And even after joining other outlets in requesting that city school officials release the data, the education reporting site GothamSchools.com refused to publish the New York teacher data reports, citing serious problems with both their construction and their interpretation.

But New York Mayor Michael Bloomberg, among others, supported the release of names with data as a “right to know”. The city school district did not fight the newspapers’ request for the data under the state’s Freedom of Information Law. The Times and the Wall Street Journal published the data as planned. And the New York Post went one step further, publishing the name and photo of the teacher it judged the worst in the city on the basis of her data report. (Really.)

Both the LA Times and the New York Times encouraged teachers to “respond” to their scores; the LA Times, for instance, allowed teachers in 2010 to see their scores before the paper published the data. While this might well have been an attempt at fairness, it places teachers who do not rank near the top in an awkward position: whatever a teacher says can easily be construed as defensive rationalization, while silence can just as easily be construed as acquiescence to the interpretation of the scores as an accurate representation of individual aptitude and professional performance. Jose Vilson has already written about why he refuses to take the New York Times up on the offer.

From the beginning, the federal Race to the Top grant program has emphasized the adoption of longitudinal teacher-evaluation data systems, placing the disaggregation of teacher-associated student test data (and its use for compensation and employment decisions) high on the list of ways that states could earn points on their applications. In its analysis of RTTT, the National Council on Teacher Quality considered student-teacher data linkage one of the basic, non-negotiable requirements for an application to be competitive. So some states that did not win either of the first two rounds of RTTT nonetheless adopted such data systems in the hopes of improving their chances, meaning that the types of teacher data used in the New York and LA Times reports already exist for many teachers in other states and districts, ready to be FOIAed by other media outlets or made into a parental-rights issue whenever a politician or pundit wishes. (These longitudinal student and teacher data systems are also quite expensive, in both the short and the long term; the low quality and similarly low utility of the resulting information suggest that much of the money spent on across-the-board value-added analyses should be directed elsewhere, if improving educational equity and quality are actually the end goals.)

In that sense, it may seem inevitable that more data releases will follow. When the RTTT guidelines were released in late 2009, I was incredulous at how overtly the federal Department of Education was leading states, many of which would otherwise have resisted, to adopt individual-teacher data disaggregation on a fast track. I recall an even stronger sense of foreboding about how these inevitably flawed data would affect teachers, particularly in terms of recruitment and attrition. At the time, it hadn’t occurred to me that the data might eventually make their way to the public without proper context and analysis, with names attached. And through two of the top newspapers in the country, no less.

If anything, that incredulity and foreboding have led me to read more about disaggregated test scores and the ways they’re being used and abused in public. I’ve read a lot of blogs, op-eds, and reportage about the New York Times’ repetition of the LA Times’ mistakes, and two pieces in particular I was actually glad to have read. First, at ShankerBlog, Matt DiCarlo offers a thorough, extremely technical breakdown of the problems with the NYC data as it relates to the interpretation of the reported margins of error and the relative scoring of individual teachers. Then, the Assailed Teacher explains teaching as an art that cannot be assessed by value-added measures, regardless of margins of error or improvements upon the mousetrap of quantitative teacher assessment. They’re very different pieces, but together they describe the major problems with the two Times’ approaches and with the whole enterprise of making quantitative comparisons among individual teachers and students: serious, potentially harmful design flaws and inaccuracies; wide circulation of unsupported interpretations and assumptions; and fundamental irrelevance to the actual practices of teaching and learning.
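To make the margin-of-error point concrete, here is a minimal sketch, using hypothetical numbers rather than figures from any actual report, of why two teachers whose published percentile scores differ by twenty points can still be statistically indistinguishable once the reported confidence intervals are taken into account:

```python
# Hypothetical illustration of the margin-of-error problem with
# value-added percentile rankings. All numbers are invented for the
# example; they are not drawn from the actual NYC Teacher Data Reports.

def confidence_interval(score, margin):
    """Return the (low, high) percentile range implied by a point
    estimate and its reported margin of error, clamped to 0-100."""
    return max(0, score - margin), min(100, score + margin)

def distinguishable(score_a, score_b, margin):
    """A conservative rule of thumb: treat two teachers as genuinely
    different only if their confidence intervals do not overlap."""
    a_lo, a_hi = confidence_interval(score_a, margin)
    b_lo, b_hi = confidence_interval(score_b, margin)
    return a_hi < b_lo or b_hi < a_lo

# Teacher A scores at the 40th percentile, Teacher B at the 60th,
# each with a margin of error of 30 percentile points.
print(distinguishable(40, 60, margin=30))  # False: the 20-point gap
                                           # in the rankings tells us
                                           # nothing reliable
```

With intervals that wide, even a large apparent gap in the published rankings is consistent with the two teachers being equally effective, which is the kind of interpretive problem DiCarlo describes.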

So, what do you think?
