Mathematica just released a study on how charter middle schools impact college enrollment and completion. Before I dig into the study, some quick context.
The research base on the connection between test scores and life outcomes
An important research finding over the past decade has been that schools with neutral test score impacts sometimes achieve positive lifetime outcomes for their students.
Equally important, schools with positive test score impacts often don’t achieve positive life outcomes for their students.
Another important finding, one often glossed over by my libertarian-leaning friends, is that schools with negative test score impacts are much less likely to achieve positive lifetime outcomes for their students.
Taken together, these findings have weakened my confidence in test-based accountability. Previously, I put a lot of weight on the differences among schools with positive, neutral, and negative test score impacts.
Now I focus more attention on selectively transforming schools with negative test score impacts.
I also care much more about parent demand than I used to.
Limitations of the Mathematica study
The Mathematica study has two major limitations.
First, it covers charter schools from a previous era (2005-2007). In the subsequent decade, many charter schools (especially those we work closely with) have increased their focus on getting kids to and through college and into meaningful careers.
Second, the study is very small. The sample included only 10 urban charter schools. Additionally, only 3 schools in the entire study had poverty rates over 75%!
The small sample means the study is fairly underpowered: with a sample this small, only large effects can be detected. A modest but positive correlation between achievement and college graduation, for example, would not be statistically significant in this study.
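To make the power problem concrete, here's a back-of-the-envelope sketch (my own illustration, not a calculation from the study): with only 10 schools, the smallest Pearson correlation that reaches significance at the conventional two-sided 0.05 level is roughly 0.63, so a modest correlation of, say, 0.4 would go undetected.

```python
from scipy import stats

# Illustration only: with n = 10 schools, how large must a Pearson
# correlation be to reach significance at the two-sided 0.05 level?
n = 10
df = n - 2  # degrees of freedom for a correlation test
t_crit = stats.t.ppf(0.975, df)  # critical t value at alpha = 0.05

# Invert t = r * sqrt(df) / sqrt(1 - r^2) to solve for the critical r
r_crit = t_crit / (t_crit**2 + df) ** 0.5
print(f"critical r with n={n}: {r_crit:.2f}")  # ~0.63
```

Any true correlation below about 0.63 would show up as "no significant relationship" in a sample of 10, which is why a null result from this study is weak evidence on its own.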
Other similar studies have looked at entire states, and included many more high poverty charter schools, so I place much less weight on this study.
The study did not find a statistically significant relationship between test score impacts and college completion results. The chart below details the major findings, with each dot representing a school.
When you eyeball the graph, some trends emerge, especially the low number of schools in the bottom right quadrant (high test effects, low college effects).
When I emailed the authors about this trend, they said it was not statistically significant and that we should be cautious about drawing conclusions from patterns in the scatter plot.
The small sample size and the decade-old study window mean that the study itself won't move my opinion on the issue very much.
Only 3 schools in the study were high-poverty schools; nearly all of the schools that we work with are high-poverty schools.
In partnership with the Arnold Foundation, we're supporting a good deal of additional research on the issue, and I'm excited to learn more.
I will change my mind if the findings change.