Hattie vs. Simpson: can you use effect size to rank classroom interventions?

I recently listened to two very interesting episodes of the Education Research Reading Room podcast: the first with mathematician, maths teacher, and education researcher Adrian Simpson, and the second a follow-up with educational research legend John Hattie. The episodes are rather long (but well worth the listen if this is in your sphere of interest), but this blog post from podcast host Ollie Lovell summarises them beautifully and adds his own honest and intelligent reflection.

The main point is this: effect size is influenced by so many things that when you average effect sizes across studies, as is done in a meta-analysis, you are essentially averaging apples and oranges. You are making a category error. For example, one study may use a control group that receives no intervention at all, while another compares against a different intervention, so the two effect sizes are answering different questions.
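To see why this matters, here is a minimal sketch in Python (the numbers are invented purely for illustration, not taken from any study Simpson or Hattie discusses): the same intervention group produces very different Cohen's d values depending on what it is compared against.

```python
# A toy illustration of how the choice of control group changes the
# effect size for the *same* intervention.
# Cohen's d = (difference in means) / pooled standard deviation.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d using the pooled standard deviation of both groups."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# The same (hypothetical) feedback-intervention class in both comparisons...
feedback_group = [72, 75, 78, 80, 83, 85, 88]
# ...but one study compares it to "business as usual"...
no_intervention = [60, 63, 65, 68, 70, 72, 75]
# ...while another compares it to a different active intervention.
other_intervention = [68, 70, 73, 75, 78, 80, 82]

print(f"d vs. no intervention:    {cohens_d(feedback_group, no_intervention):.2f}")
print(f"d vs. other intervention: {cohens_d(feedback_group, other_intervention):.2f}")
```

The first comparison asks "is this better than nothing?" while the second asks "is this better than something else?", and averaging the two d values in a meta-analysis blurs those two questions together.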

At first I thought this might undermine my own research, because Hattie's ranking of feedback as one of the most important classroom interventions is one of the reasons I chose my subject. And I have to say, John Hattie was VERY defensive when it was his turn to respond. But having read more of Hattie's work and listened to him speak on later podcasts, I have noticed that he himself is trying to move away from effect sizes and talk more about the mechanisms behind effective teaching, and feedback is still on that list, so I am reassured 🙂