
Is Roland Fryer Right? Or Has the RCT Fallacy Reared Its Ugly Head?

Roland Fryer just published a compilation guide to 196 RCTs in education. HT to my colleague Stuart Buck for passing it along.

The compilation is a good review of a bunch of interesting studies. Roland’s contributions always make me think. He also won the John Bates Clark Medal, which is basically the Nobel Prize of economics for people under 40.

Yet, while this RCT compilation is informative, I’d be very, very, very hesitant to pass a bunch of laws and regulations based on this type of meta-research.

___

Increasingly, policy makers and pundits are using RCT evidence to make policy. This is generally a step in the right direction, and it’s great to see evidence playing a bigger role in policy making.

Yet, sometimes RCTs are more about Rigorously Contorted Tales than Randomized Controlled Trials.

Call it the RCT Fallacy.

In statistical terms, the RCT Fallacy is pretty close to the concept of external validity, but I think the RCT Fallacy has a little more psychology to it.

So here goes:

The RCT Fallacy occurs when thought leaders propose adoption of policies based on the results of RCTs so as to avoid the messiness of politics, ideology, history, psychology, and evolution.

Fryer is more balanced than most, but, in this case, I think he still succumbs to the fallacy.

___

The RCT Fallacy is grounded in the following:

___

In other words, RCTs will never tell us:

Yes, well-designed RCTs can inform our decisions on the above issues, but they will not provide definitive evidence.

___

Fryer’s paper ends with his summary of the RCT evidence in education.

He argues that RCTs have demonstrated that four interventions work: pre-k, high-dosage tutoring, managed teacher PD, and charter schools.

The paper ends with the following rallying cry:

I’m not sure courage is what we need:

Pre-K: There is pretty mixed evidence on our ability to scale effective pre-k. Fryer himself notes: “of the 64 treatment effects recorded in these randomized studies [on pre-k], 21 were statistically positive; zero were statistically negative and 43 were statistically indistinguishable from zero.”

Again, I’m not sure “courage” is the term I’d use to describe scaling an intervention that shows no detectable effect 67% of the time.
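For what it’s worth, that 67% is just the share of null results in the quote above; a quick back-of-the-envelope check:

```python
# Share of null results among the pre-k treatment effects Fryer reports:
# 21 statistically positive, 0 negative, 43 indistinguishable from zero.
positive, negative, null = 21, 0, 43
total = positive + negative + null  # 64

print(f"null:     {null / total:.0%}")      # ~67%
print(f"positive: {positive / total:.0%}")  # ~33%
```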

Tutoring: Fryer covers some high-dosage tutoring studies that show strong effects. However, the costs of these programs are sometimes upwards of 20% of total per-student spending. Moreover, there would likely be severe human capital limitations if we tried to give high-dosage tutoring to all the students who needed it.
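To put that cost figure in rough dollar terms (the per-pupil number below is purely an assumption for illustration, not from Fryer’s paper):

```python
# Illustrative only: per-pupil spending varies widely by district.
per_pupil_spending = 13_000   # assumed annual per-student spending, in dollars
tutoring_share = 0.20         # "upwards of 20%" of total per-student spending

tutoring_cost = per_pupil_spending * tutoring_share
print(f"~${tutoring_cost:,.0f} per tutored student per year")  # ~$2,600
```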

Managed Teacher PD: Fryer covers studies that show positive results for the Success for All and Reading Recovery programs. The data seem robust, and schools should surely consider adopting these programs. But here’s the thing: nothing is preventing districts from adopting these programs right now!

Either districts know something that these RCTs aren’t picking up, or districts are so poorly run that it takes a dramatic intervention to get them to adopt effective programs that have been around for 10+ years.

Charter Schools: While I clearly support charter expansion, charter RCTs often run into the issue of relying on lottery data, which limits trials to schools that are oversubscribed (and thus creates positive bias); as such, I generally view CREDO’s far-reaching urban quasi-experimental studies as more useful.
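To make the oversubscription point concrete, here’s a toy simulation (all numbers made up) in which parental demand tracks school quality, so the lottery sample skews toward better-than-average charters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools = 10_000

# Hypothetical: each charter has a true effect on achievement, and parental
# demand is positively correlated with that effect (better schools fill up).
true_effect = rng.normal(0.0, 0.10, size=n_schools)
demand = true_effect + rng.normal(0.0, 0.10, size=n_schools)

# Only oversubscribed schools (here, the top third of demand) hold admissions
# lotteries, so only they can appear in lottery-based RCTs.
oversubscribed = demand > np.quantile(demand, 2 / 3)

print(f"avg effect, all charters:   {true_effect.mean():+.3f}")                # ~0
print(f"avg effect, lottery sample: {true_effect[oversubscribed].mean():+.3f}")  # > 0
```

The gap between those two averages is exactly the kind of bias I worry about when lottery-based results get generalized to the full charter sector.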

___

Again, I don’t mean to pick on Fryer. I’ve learned a ton from reading his research, and children would be better off if universities were filled with thinkers like him. His work on “looking under the hood” of high-performing charters greatly influenced my thinking on schools, as has his research on tutoring.

Moreover, it’s much better to try and build a policy regime from RCTs than from the weak theory that comes out of many education departments.

But, ultimately, I don’t think (a) that the RCTs covered in his study make a strong case for scaling his preferred interventions or (b) that RCTs can ever really tell us how best to design our public education systems.

I do think we should use RCTs to help schools make choices about which practices to adopt, but, ultimately, we should rely on theory and quasi-experimental evidence to handle the major public policy questions in education, which in my mind have more to do with system structure than educational practice.

“We” (researchers, thought leaders, policy makers, etc.) shouldn’t be operationally scaling much; rather, we should be running experiments that give empowered educators and families more information to make great choices.