Blogs About Reading
Shanahan on Literacy
Literacy expert Timothy Shanahan shares best practices for teaching reading and writing. Dr. Shanahan is an internationally recognized professor of urban education and reading researcher who has extensive experience with children in inner-city schools and children with special needs. All posts are reprinted with permission from Shanahan on Literacy.
“It Works” and Other Myths of the Science of Reading Era
Recently, I wrote about the science of reading. I explained how I thought the term should be defined and described the kind of research needed to prescribe instruction.
Today I thought I’d put some meat on the bone, adding some details that might help readers grasp the implications of a scientific or research-based approach to reading.
What does it mean when someone says an approach to reading instruction “works”?
The term “it works” has gnawed at me for more than fifty years! I remember as a teacher how certain activities or approaches grabbed me. They just seemed right. Then I’d try them out in my classroom and judge some to work, and others not so much.
What was it that led me to believe some of them “worked” and some didn’t?
It puzzled me even then.
Teachers, administrators, and researchers seem to have different notions of “what works.”
Teachers, I think, depend heavily on student response. If an activity engages the kids, we see it as hopeful. We give credence to whether an activity elicits groans or a buzz of activity.
When I do a classroom demonstration and students say they liked the activity and want to do more, most likely I’ve won that teacher over.
Teachers recognize that learning requires engagement, so when an activity pulls kids in, they’re convinced that it’s a good idea.
That satisfaction is sometimes denigrated because of its potential vapidity. Let’s face it. Bozo the Clown engages kids, too, but with how much learning?
What those complaints fail to recognize is that the teacher has already bought into the pedagogical value of the activity. They assume it is effective. Student engagement is like gaining a third Michelin star.
What about administrators?
Their needs are different. To them, “it works” is more about adult acceptance. If a program is adopted, the materials shipments arrive as promised, and neither teachers nor parents complain, it works!
And, to researchers?
To them, it means there has been an experimental study that compared that approach with some other and found it to be superior in terms of fostering learning.
If a method does no better than “business as usual” classroom practice, then it doesn’t work — which, confusingly, isn’t entirely accurate, since the difference is never that everybody in one group learned and nobody in the other did.
I’ve worn all those hats — teacher, administrator, researcher — and I prefer the last one. The reason? Because it’s the only one that explicitly bases the judgment on student learning.
Will we accomplish higher achievement if we follow research and make our teaching consistent with the science?
That’s the basic idea, but even that doesn’t appear to be well understood.
I think we tend to get misled by medical science, particularly pharmacology.
New drugs are studied so thoroughly it’s possible for scientists to say that a particular nostrum will provide benefit 94% of the time and that 28% of patients will probably suffer some unfortunate side effect.
When I tell you that the research shows that a particular kind of instruction works (i.e., it led to more learning), I can’t tell you how likely it is that you will be able to make it work, too.
Our education studies reveal whether someone has managed to make an approach successful.
Our evidence indicates possibility, not certainty.
When we encourage you to teach as it was done in the studies, we are saying, “If they made it work, you may be able to make it work, too.”
That’s why I’m such a fan of multiple studies.
The more times other people have made an approach work under varied circumstances, the more likely you’ll be able to find a way to make it work as well.
If you show me one such study, it seems possible I could match their success. Show me 38, and it seems even more likely that I could pull it off.
That nuance highlights an important point: Our instructional methods don’t have automatic effects. We, as teachers, make these methods work.
Lackadaisical implementation of instruction is never likely to have good results. The teacher who thinks passive implementation of a science-based program is what works is in for a sad awakening.
I assure you that in the studies, everyone worked hard to make sure there were learning payoffs for the kids. That’s part of what made it work better than the business-as-usual approach.
That point is too often muffled by our rhetoric around science-based reading. But teacher buy-in, teacher effort, and teacher desire to see a program work for the kids are all ingredients in success.
I don’t get it. I’m hearing that some approach (e.g., 3-cueing) is harmful, and yet I know of research-based programs that teach it. Does that make any sense?
You’re right that 3-cueing is part of some successful programs. But that doesn’t mean it’s a good idea. Instructional programs usually include multiple components. Studies of them tell us whether the program as a whole has been effective, but they usually say little about the various components that are integral to it.
Without a direct test of the individual components, there are three possibilities: (1) a component may be an active ingredient, one of the reasons for the success; (2) it may be neutral — drop it and kids would do as well; or (3) it may be hurtful — the instruction would be even more effective without it.
Logically, 3-cueing makes no sense. It emphasizes behaviors good readers eschew.
That said, I know of no research that has evaluated 3-cueing specifically.
Claims that it’s harmful (beyond being a likely time waster) are, for the time being, overstatements. These claims rely on logic, not data.
The problem that you identify is a common one — people will tell you that multisensory instruction, a sole focus on decodable texts, advanced phonemic awareness, more social studies lessons, word walls, sound walls, and so on are all certain roads to improved achievement. Each is part of at least one successful program or another. But none have been evaluated directly. The truth is, we really don’t know if they have any value at all.
They might provide benefits, but that isn’t the same thing as knowing that they have done so before.
Our district has adopted new programs and instructional routines based on science. But our kids aren’t doing any better than before. Does that make any sense?
No, that makes no sense at all. The purpose of any kind of educational reform — including science-based reform — is to increase learning. The whole point is higher average reading scores or a reduction in the numbers of struggling students.
Whoever’s in charge should take this lack of success seriously and should be asking — and finding answers to — the following questions:
Were these changes really based on the science and what does that mean?
Administrators often make choices based on minimal information. It is better to vet these things before adopting them, but in a case like this one, it is never too late to find out if the reform scheme was really consistent with the science.
How has the amount of reading instruction to students changed?
Some approaches work better than others because they have a bigger footprint. They provide a greater amount of teaching than business-as-usual approaches. Adopting such programs without making the schedule changes needed to facilitate their implementation will likely undermine potential success. Are kids getting more instruction, less instruction, or about the same as before?
How is the amount of reading instruction apportioned among phonemic awareness, phonics, text reading fluency, reading comprehension strategies, written language ability, and writing?
Often the adoption of new programs or reform efforts aimed at a particular piece of the puzzle leads to greater attention to certain abilities but diminished attention to other key parts of literacy. Make sure that you aren’t trading more phonics for less fluency work, or more vocabulary for less comprehension. You want to make sure that all components of reading are receiving adequate attention — not going overboard with some and neglecting others.
To what extent are teachers using the programs?
Compliance matters in program implementation. The adage that “teachers can do whatever they want when the door is closed” highlights one of the biggest roadblocks to making such efforts work. You need to make sure you have sufficient buy-in from the men and women who do the daily teaching. You bought a new program or set new instructional policies. Are they being used or followed?
How well prepared are the teachers to provide the required instruction?
Program adoption requires a lot more than issuing a policy proclamation. Research shows that program implementation supported by substantial professional development is much more successful than just buying a program. You need to make sure that you’ve built the capacity for success and not just expected magic to happen.
Comment from Joan S.
Thank you for the practical explanations and examples. I want to pick up on a couple of lines you wrote. 1) "Lackadaisical and passive implementation" — yes, some teachers may not be interested in using a program or instructional practice with fidelity, but more often I find the problem is that they have not received sufficient professional development to understand the why and the how of implementing effectively. 2) "teacher buy-in, teacher effort, and teacher desire to see a program work for the kids are all ingredients in success" — again, professional development is often the missing link. I work with a lot of teachers who buy in, give lots of effort, and really want to see a program work, but because they do not have the foundational knowledge about effective instruction and quality training for using a program, they are not able to teach the program in a way that will show results.
Comment from Laurel
There are many variables to consider within the classroom from year to year: the number of students in the class; the makeup of the class in terms of students who are generally focused and cooperative and students who demonstrate disruptive, distracted behaviors; students who read above grade level in the same class with students who are not reading at all; English-speaking and second-language students; students with special needs; the amount of extra adult support in the classroom (usually not much); and the number of pull-out programs which take students out of the classroom at various times.
I'm curious about the educational research process. Do they carry out the research under these real-life circumstances? In some senses the actual classroom is the ideal place to try things and see if they work. It is the ideal place to refine and innovate. Is this ever part of the process? Just because a particular approach or program works for one student, it does not mean it will work for another student... I know that there has been a piloting process in our district in the past, but this is a selection process after several full programs have already been developed. If I am given a decodable story to teach r-controlled vowels with "or" — "Scorch and Dorn are forlorn due to the storm and the thorn in Scorch's foot" — I feel like someone rushed this curriculum and did not thoroughly test it and try it with students. Teaching does not have a Hippocratic oath, but I'm still going to rewrite some "or" sentences rather than using the Scorch and Dorn text.
Reply from Tim Shanahan
Indeed, there is a good deal of classroom research (which varies in quality, of course). The Institute of Education Sciences of the U.S. Department of Education supports a good deal of this kind of research, and there are other sources for it as well (including lone researchers doing as well as they can with the disparate support they cobble together).
Comment from George
I have not heard the debate about 3-cueing reference the work on Cognitive Load Theory (CLT). CLT's notion of "dual coding" suggests the load on working memory is reduced by using pictures in addition to text, thus improving memory. Recently they've added the concept of "embodied cognition," with the claim that dual coding is extended to multiple coding by including physical experiences — e.g., Sweller et al. (2019), "asserting that cognitive processes, including information processing and learning, are inextricably linked with sensory and motor functions within the environment, including gestures and other human movements (Barsalou 1999). Research supporting the embodied cognition view shows that observing or making gestures leads to richer encoding and therefore richer cognitive representations. Interestingly, the involvement of the more basic motor system seems to reduce load on working memory during instruction (e.g. Goldin-Meadow et al. 2001), which means that this richer encoding is less cognitively demanding and which confirms the evolutionary account of CLT."
Reply from Tim Shanahan
Okay, George, let's say that you are correct. Then it should be possible for proponents of 3-cueing to show that such instruction does better than explicit decoding instruction. However, those claims haven't been tested directly in more than 60 years. I won't recommend an approach to teaching that runs so counter to existing research (no matter how interesting the logic) unless the people promoting it believe in it enough that they'd be willing to go to the trouble to see if it actually works.
Much of the theoretical work on word reading supports the idea that memories for words are consolidated (including orthographic, phonological, semantic, and syntactic information), but even so, existing research on the topic gives no reason to believe that readers rely on that information to initially recognize words.
Comment from Lisa J.
I have been a bilingual Special Education / ELD teacher in some fashion for over 30 years. I was one of those straight-A university students who had NO idea how to teach reading (decoding/encoding/comprehension) when I got out of college. I have trained extensively since then and consider myself to be very science-based (reading this blog is an example). I despise the phrase "well, the kids love it" as the major reason to perpetuate an activity. I have seen time and time again that actual achievement is the treasure my students expect, yearn for, and love.
With all that said, in all this time I have never experienced a school board and/or a curriculum planning committee that adopted science-based methods, reasonably developed the methods within the teachers (very much imposed, not instructed), or stuck with such a program past a few years. Reasons? Superintendents, school boards, curriculum planners, and principals come and go. Also, if a program has been adopted and doesn't make enough progress, the state requires we make a plan to improve, which always means abandoning what we are currently doing in order to purchase another "evidence-based" program and then implement it with fidelity.
So in order for me to follow a "science-based" reading program while interacting with many differing teachers and situations, I have to "close my door and do what I think is best." I don't want to, but I don't feel I have a choice. My best-case scenario is a principal who watches to see how my students actually perform and then lets me go. I guess that phrase really strikes a chord with me. I wish to be an accelerated, diagnostically responsive teacher who loves her students and does what's best for them; that's my definition of a professional. When a doctor does this, we applaud or choose another doctor. When a teacher does this, we can be accused of malpractice, as we are "closing our door and doing what we think best." Thoughts?
Reply from Tim Shanahan
Indeed, one thing that makes medical analogies not work in education is that no matter how successful an approach is, you can bet the schools will dump it in a few years when the new school board is elected, the new school superintendent is hired, or the curriculum director has attended a conference. Success means building quality on quality — changing programs should have nothing to do with fashion, but should be a data-based enterprise aimed at making things better for kids. Unfortunately, you are right.