July 26, 2024

The Scientific Principles of Reading Instruction – Education Rickshaw

When I was just starting out in my career as an elementary teacher, I attended a staff meeting where the principal asked us what we thought about ending the practice of giving out homework. I was one of the few in the room who expressed concern that such a move would deny students sufficient opportunities to practice with the material. But the principal had research in her back pocket that I didn’t yet have: John Hattie.

Hattie says, we were told, that homework in elementary school has an effect size close to zero. We’d be better off doing pretty much any other intervention than wasting precious time designing, distributing, and collecting homework. And that was that.

This wouldn’t be the last time that I felt the silencing effect of Hattie’s meta-meta-analysis methodology on my teaching practice. No amount of contradictory research (e.g., What about the importance of spaced practice for learning?) or rational argumentation (e.g., Shouldn’t we focus on making our homework better, rather than eliminating it entirely?) could defeat the powerful and authoritative image of Hattie’s league table, which compares various teaching strategies by effect size. It was with this frustration that I read The Scientific Principles of Reading Instruction by Nathaniel Hansford, a book that re-establishes the secondary meta-analysis methodology popularized by John Hattie as a tool for examining reading research.

The first chapters of The Scientific Principles of Reading Instruction are a handy summary of research methods. As someone who often finds himself grasping for his old PhD methods textbooks when engaging in online debate about the merits of a particular study, I found Hansford’s summary to be extremely useful. Readers will come away with an awareness of what distinguishes a good study from a bad study, but also a better appreciation of the state of education research today. I completed these chapters wanting to emphasize two things in particular: 1) the effect sizes reported in meta-analyses are highly dependent on the quality and design of the underlying studies, and 2) if we eliminated the low-quality studies, we’d basically be discarding much of the evidence that exists in education. Before reading this book, my solution to this problem was that researchers should focus on creating practitioner-friendly narrative reviews that emphasize the highest quality studies, and fill in the blanks with descriptions of lower quality studies and areas for future research. However, after reading Hansford’s book, I’m persuaded that meta-analysis is an important check that we need to apply to our understanding of the evidence, as it removes the potential for researchers to ignore studies that do not confirm their views. Perhaps a blind adherence to researchers’ interpretations of the research – aka guruship via narrative review – would be just as hazardous as my former principal’s blind adherence to effect sizes from secondary meta-analyses.
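As a quick aside for readers who mostly meet effect sizes as rows on a league table: the statistic itself is not mysterious. As a rough sketch (the standard Cohen’s d formulation, not necessarily the exact computation behind every meta-analysis the book cites), an effect size is just a standardized difference between a treatment group’s mean and a control group’s mean:

$$ d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} $$

A meta-analysis then pools these d values across studies, typically weighting each study by its precision, which is exactly why the pooled number can only be as trustworthy as the studies feeding into it.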

Chapter 3 of the book is really special. In it, Hansford takes John Hattie’s most recent effect size list and categorizes its strategies into principles – hence the title of the book. Reading how Hansford has grouped the various strategies into Quality Time Under Instruction, Teacher Clarity, Specificity, Appropriately Challenging Curriculum, Growth Mindset (which Hansford retracted recently, by the way, in this excellent article), and Reflective Teaching made me wonder why nobody, at least to my knowledge, had done this before. While Hansford told me previously that he is generally more interested in the “what works” question than the “why” behind it all, the principles he outlines in chapter 3 certainly armed me with a better understanding of why, for example, inquiry-based learning is superior to discovery learning, but inferior to direct instruction. Another illuminating chapter is chapter 6, which gives a breezy analysis of the reading wars. Again, we can relate the relative superiority of phonics-intensive programs over balanced literacy, and of balanced literacy over whole language, to Hansford’s principles from chapter 3.

The bulk of the rest of the book contains chapters that describe the state of the science for each of the major areas of reading instruction. If you’re looking for a reference for what works in fluency, comprehension, vocabulary, and so on, so that you can refer to it when someone makes a silly claim about “what the research says”, this book does the trick. What differentiates this book from the mountain of education books in my collection is that it is a teacher synthesizing previous meta-analyses rather than describing or arguing from individual studies. Throughout the book, Hansford is transparent about which meta-analyses we should pay attention to, and which seem to be outliers or use inferior statistical methods and should probably be excluded. Hansford is at his best when he tells the story of the research through his practitioner’s lens and qualifies the conclusions he draws with appeals for more research. The final chapters provide a useful summary of the grade-level specific implications of the research, which Hansford described to me in great detail in his episode of Progressively Incorrect.

This is a book (did I mention it’s really large?) that I will be strategically placing on my instructional coaching table for some time. If my colleagues do choose to flip through it, I sincerely hope they land on the last line of the book: “Ultimately, if I could have the reader have one take-away from this book, it would be to question everything, hold nothing sacred, and be willing to change your own mind.”

Indeed.


TL;DR: I encourage teachers and education leaders to read this book, especially those who are thirsty for more insights into effective reading instruction in a post-Sold a Story landscape.