A thoughtful response from Chris Cerf

Chris Cerf, the current superintendent of Newark and the former state superintendent of New Jersey, posted a thoughtful response to my post on the Newark Harvard study.

A couple of thoughts.

First, I’ve had the opportunity to interact with Chris on and off over the past decade, and I have immense respect for his heart and mind. Kids in New Jersey are better off because of his leadership.

Second, I appreciated the tenor of Chris’ post. A primary reason I write this blog is so that people in the education reform family can have public disagreements and learn from each other. Chris’ tone and use of data helped me get smarter on Newark.

While there is some cost to this approach (those who oppose our work can try to exploit our disagreements), ultimately, I think the gain of learning through public debate is well worth it.

Over the long haul, our success will have more to do with whether we continually deliver great educational opportunities for children than with whether we win or lose the PR fire of the week. And putting our ideas out there for public testing is a good way to get smarter about how we deliver better opportunities for children.

As for the substance of Chris’ points, you should read the piece for yourself, but I found the two graphs below useful.

The first compares Newark’s overall performance to that of similarly situated districts in New Jersey (DFG A, the green line). Newark’s performance relative to these districts has improved greatly over the past seven years.

[Graph 1: Newark overall performance vs. DFG A districts (green line)]

The second compares Newark’s traditional school performance to that of similarly situated districts in New Jersey (DFG A, the green line). Newark’s traditional school performance was fairly flat until 2014 but has grown rapidly since then.

[Graph 2: Newark traditional school performance vs. DFG A districts (green line)]

I’m not sure that Chris and I disagree on the data story.

Both of us, I think, would say that the early gains in Newark were driven by the open / close / shift strategy.

As for the strategy of improving the traditional sector, Chris points to the last few years of growth to demonstrate that the reforms are starting to deliver for all schools – and that, with the foundation now set, these gains will likely continue.

I emphasized the flat results for the first four years of the district improvement strategy and wondered whether the improve strategy was worth it – or whether putting more resources into the open / close / shift strategy would have laid an even better foundation for long-term gains.

It’s a great question to grapple with.

Lastly, our interpretations of the study go to an even more fundamental question of how we measure success: should a city’s reform efforts be evaluated on the cumulative gains it achieved during the transformation process, or should it be evaluated on the gains being delivered once a city is through a transformation process?

When we partner with a city, our team holds ourselves accountable for cumulative gains over the initial 5-10 years of reform, but perhaps a more important metric is whether a city achieves a new and better equilibrium by the end of the reforms.

Chris has pushed me to think hard about this question, and our team will be smarter for it.

2 thoughts on “A thoughtful response from Chris Cerf”

  1. Mike G

    1. Agree that the tone of the back-and-forth discussion is civil and productive, alas rare. Props to you both.

    2. I have a question:

    The 2 main “Kane et al” graphs show:

    a. “Within school” English: Year 1 down, Year 2 down more, Year 3 down more, Year 4 up, Year 5 up. Total is .02 SD and not signif.

    b. “Within school” Math: Year 1 down, Year 2 down more, Year 3 up, Year 4 up a lot, Year 5 down a lot. Total is -.08 SD and (??) signif.

    How do the green-and-orange graphs (showing uptick) line up with the Kane main graphs (showing downtick)?

  2. nkingsl (post author)

    The results roughly match up in the direction of the trend lines (note that the 2016-17 data are not in the Kane data set, so the green lines show an extra year of data). For ELA you see the uptick in green that mirrors the uptick in the Kane study. Similar with math – up then down (and then the green bar up again for the newest year, which is not included in Kane).

    The thing I’m not sure about is the magnitude of the Kane effects in a given year vs. the magnitude of the up and down ticks in the green line in a given year.

    What am I missing?

