Hattie vs. Simpson: can you use effect size to rank classroom interventions?

I recently listened to two very interesting episodes of the Education Research Reading Room podcast: the first with mathematician, maths teacher and educational researcher Adrian Simpson, and the second a follow-up with educational research legend John Hattie. The episodes are rather long (though well worth the listen if this is in your sphere of interest), but this blog post from podcast host Ollie Lovell summarises them beautifully and adds his own honest and intelligent reflection.

The main point is this: effect size is influenced by so many things that when you average effect sizes across studies, as is done in a meta-analysis, you are essentially averaging apples and oranges. You are making a category error. For example, one study might use a control group that receives no intervention at all, while another compares against a different intervention, so the two effect sizes answer different questions.
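To make that concrete, here is a minimal Python sketch with entirely made-up test scores: the same hypothetical intervention produces a very different Cohen's d depending on whether the control group received nothing or a different, moderately effective intervention. Averaging the two values, as a naive meta-analysis would, mixes incomparable quantities.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Made-up test scores for the same hypothetical intervention group.
intervention = [72, 75, 78, 80, 83, 85]

# Study A: control group received no intervention at all.
control_nothing = [60, 63, 65, 68, 70, 72]

# Study B: control group received a different, moderately effective intervention.
control_other = [68, 71, 73, 76, 78, 80]

d_a = cohens_d(intervention, control_nothing)
d_b = cohens_d(intervention, control_other)
print(f"Effect size vs. no intervention:      {d_a:.2f}")
print(f"Effect size vs. another intervention: {d_b:.2f}")
# The two d values measure different comparisons, so their average
# does not describe "the" effect of the intervention.
```

With these numbers, Study A reports a much larger effect size than Study B even though the intervention group is identical in both; only the baseline changed.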

I have to say I found John Hattie quite defensive when it was his turn to respond. But having since read more of Hattie's work and listened to him in later podcasts, I gather that he himself is trying to move away from effect sizes and to talk more about the mechanisms behind effective teaching. So yay to him, and yay to everyone doing education research: a very difficult subject to study, which makes it all the more important that people do!

