The literacy field has long been beleaguered by generic terms that no one seems to understand — or, more precisely, whose definitions nobody agrees on. Terms like whole language, balanced literacy, direct instruction, dyslexia, sight words, and guided reading are bandied about in journals, conference presentations, newspaper articles, and teachers’ lounges as if there were some shared dictionary out there that we were all accessing. Even terms that seem like they would be widely understood, like research or fluency, often turn out to be problematic.
This plague of vagueness is exasperating, and I think it prevents productive dialogue or any kind of substantive progress in the field.
Over the decades, reporters and policymakers have often asked me my opinion of [insert any of those undefined terms]. My usual response has been something along the lines of:
“Tell me what ________ is, and I’ll give you my opinion,” not-so-cleverly shifting the responsibility for definition to my questioner.
If they say, “balanced literacy means providing explicit instruction in key reading skills while trying to provide a motivational and supportive classroom environment,” I say, “I’m all for it.” If they tell me, “it means teaching reading with a minimum of explicit instruction, particularly in foundational skills like spelling and decoding,” then I’m strongly opposed.
That approach keeps me out of the soup, but it really doesn’t solve any important problem. My clarity and consistency aside, teachers are still inundated with invitations to professional development programs, textbooks, and classroom instructional practices that are supposedly aligned with some unspecified definition of today’s hot jargon.
The biggest offender now — if my Twitter feed is representative — is the “science of reading.”
I can’t believe the number of webinars, blogs, textbooks, professional development opportunities, and the like that aim to provide the latest and greatest information from the science of reading (whatever that is).
My advice to everyone: Grab your wallets and run!
Okay, I admit that isn’t very helpful, but it should save you a lot of money and aggravation.
What would be more helpful?
Consumers of a science of reading should start out with a definition of what would fairly constitute such a science. That way they could always check to see if what was being promoted was what they were seeking.
Back in the late 1990s, federal education law — recognizing how misleadingly the term “research” was being used by textbook companies, consultants, and the like — provided a definition of “scientifically based reading research” (SBRR).
Unfortunately, in one fell swoop, the feds stopped promoting instructional approaches based on research and did away with the legal definition of scientific evidence, moves that coincided, I might point out, with the last round of gains in national reading scores.
I’d suggest that, though that definition no longer has legal standing, it is a good starting point for deciding what should be in your personal definition of “a science of reading.”
What would that look like?
First, the evidence must be derived from a scientific method that is appropriate to the claim being made. If you want to claim that a particular instructional method or approach improves reading achievement, you need to prove it: that such instruction is more beneficial than other approaches.
That can only be accomplished through an educational experiment, one that provides a sound comparison between students who are receiving that instruction and those who aren’t.
Other scientific methods can provide valuable information, but they can’t answer a “what works” kind of question.
Descriptive and correlational research methods are appropriate for many other important questions (e.g., Are kids of different races or genders making equal gains? What kinds of library books are students most interested in? Have reading scores risen in the past three years?). Those other research methods, if implemented appropriately, can provide sound answers to such questions.
You might be surprised how many fine scientists are out there telling teachers how and what to teach — even though their research has never tested the effectiveness of what they are recommending.
Evidence from their studies can be usefully provocative — that is, it may suggest worthwhile questions. If, for example, you noticed greater student engagement when kids were allowed to choose what to read, you might wonder, “Would such choice lead to more learning?” Unfortunately, too often, people see or think they see that kind of pattern and jump right to a conclusion, “Student choice must lead to more learning,” without bothering to test that claim through a rigorous experiment. (Sometimes research supports such a claim and sometimes it doesn’t, but it certainly can’t be recommended as being based on science without such a test.)
Something we should remember: when science identifies a potentially valuable avenue to better learning, that doesn’t mean we know how best to exploit that knowledge.
Basically, all I’m saying is, if you want to claim that something works, you need to try it out and show that it can be beneficial.
Second, a science of reading would require studies that provided a rigorous analysis of the data derived from educational experiments. Such analysis must ensure that the results are due to the instruction and not just to normal variations in performance. It also must ensure that the comparisons being made are sound. Some studies try to compare results with groups that are so different in the beginning that it would be impossible to attribute outcome differences to the instruction.
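To make “normal variations in performance” concrete, here is a minimal sketch of the kind of comparison such an analysis rests on: a standardized mean difference (Cohen’s d) between a treatment group and a comparison group. All the scores and group names below are hypothetical, invented purely for illustration; they come from no actual study.

```python
# Illustrative only: comparing hypothetical post-test reading scores from a
# treatment group and a comparison group using a standardized mean difference.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference between two groups, using the pooled
    sample standard deviation as the yardstick for 'normal variation'."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)  # sample SDs (n - 1)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical scores (not real data)
phonics_group = [52, 58, 61, 55, 60, 57, 63, 54]
comparison_group = [50, 49, 55, 51, 48, 53, 52, 47]

d = cohens_d(phonics_group, comparison_group)
print(f"Effect size (Cohen's d): {d:.2f}")
```

The point of expressing the difference in standard-deviation units is exactly the one in the paragraph above: a raw score gap means little until it is measured against how much students normally vary, and a real analysis would also ask whether the groups were comparable to begin with.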
Third, the studies need to go through peer review or some other kind of independent scientific evaluation to protect against serious flaws in the reasoning or analysis.
Fourth, the studies need to be replicated or generalized. That’s why I depend so heavily on meta-analysis; it combines the results of multiple studies. It is not enough to know that the XYZ reading method had great results in one study if there are nine other investigations that showed it to be ineffective. That kind of pattern says to me that this technique can work, but it rarely does. That’s not something I’d be likely to adopt or to recommend to schools.
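The meta-analytic logic above (one glowing study swamped by many null results) can be sketched numerically. This is a toy fixed-effect pooling, with every number invented for illustration: each study’s effect size is weighted by the inverse of its variance, so larger and more precise studies count for more.

```python
# Illustrative only: fixed-effect meta-analytic pooling of invented results.

def pooled_effect(effects, variances):
    """Inverse-variance weighted average of study effect sizes."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# A made-up "XYZ method": one strong result, nine near-zero ones
effects   = [0.90, 0.05, 0.00, 0.10, -0.05, 0.08, 0.02, 0.00, 0.04, -0.02]
variances = [0.04, 0.02, 0.02, 0.03,  0.02, 0.02, 0.03, 0.02, 0.02, 0.03]

print(f"Pooled effect: {pooled_effect(effects, variances):.2f}")
```

Despite the one study with a large effect, the pooled estimate lands near zero, which is the numerical version of “this technique can work, but it rarely does.” Real meta-analyses are far more careful about study quality and heterogeneity; this sketch only shows why a single great result carries little weight against many null findings.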
Fifth, it helps if there are convergent findings — in other words, other evidence that appears to be consistent with these findings. Like the U.S. Department of Education of two decades ago, I would never place the imprimatur of science upon an instructional approach that had not actually been tried out in classrooms and shown to be effective. But once I have that evidence, I am heartened to know of other supporting information.
I don’t talk much about the brain research in reading. Not because I’m unaware of its potential importance, but because of its insufficiency. Any pattern revealed in neurological investigations that suggests an instructional possibility still must be evaluated in the classroom. Sometimes a basic idea is sound, but it is more challenging or complicated to implement than you realize.
In any event, descriptive and correlational studies, theories, neurological investigations, and studies of other kinds of learning may bolster your trust in the instructional studies that you have.
We have many studies showing the effectiveness of decoding instruction. Those are studies that have compared the results of a strong phonics emphasis versus a no-phonics or weak-phonics approach. My trust in those results goes up when I see the MRI studies showing how the brain connects the visual recognition of letters and words with the part of the brain that carries out phonological processing. That neurological evidence, on its own, wouldn’t be enough to scientifically endorse phonics as an effective instructional approach, but it sure provides convergent evidence that should strengthen my resolve to offer such instruction. (The same, in this case, could be said about digital simulation studies of reading as well.)
Where does this leave us?
If I were invited to a science of reading seminar, and wondered if it would be worthwhile, I’d ask the sponsors whether the presenters will either
1. Limit their endorsement of instructional approaches to those that have been evaluated through rigorous and well-analyzed classroom experiments, published in peer-reviewed outlets, and replicated; or
2. Distinguish which of their instructional recommendations have such evidence and which do not?
If I had no choice but to attend, those would be the kinds of questions I’d be asking the presenters if their presentations didn’t make the foundations of their claims clear.
If we are serious about improving reading achievement for all children, we are only likely to get there if we hold ourselves to the highest standards of professional practice. Having a sound definition for what constitutes a “science of reading” is more than a game of semantics. Employing instructional approaches that have repeatedly benefited learners in rigorously implemented and analyzed studies is likely to be the most productive way to progress.
These days I’m seeing schools mandating instructional practices that have no direct research evidence in the name of the science of reading. Those practices don’t become part of the science of reading because someone wrote them down, or because they were recommended by a researcher, or because they address a particular aspect of reading development.
Comment from Rebecca
Thank you, Dr. Shanahan, for offering clarity and precision again. The body of actual experimental research and other types of research is so large that it is difficult even for experts to read and digest it all. Teachers look for “recipes” and quick fixes because they are so overloaded, though that is no excuse. I appreciate that your posts offer summaries of research and citations to follow. A few examples of topics where I find the SOR community not quite in line with the research are 1) use of 100% decodable or mostly decodable text only until all sound patterns are learned, and 2) learning syllable types and rules as always effective. There are fights about “three-cueing,” which is nuanced, and you’ve addressed it here. I appreciated your careful analysis of “Sold a Story” as well. As a teacher educator in a small private college, I keep up as best I can, but often both sides are critical of those in my role.
Comment from Harriet
Great points, Tim. Along these same lines, I recommend the following: The Reading League Journal (Jan. 2020) where Mark Seidenberg and Matt Cooper Borkenhagen provide eight tenets for teachers in their article: Reading Science and Educational Practice: Some Tenets for Teachers.
These tenets are:
1) “Evidence Based” doesn’t mean “true.”
2) Teachers can make use of scientific findings, but be cautious.
3) Teachers are cognitive theorists.
4) Reading problems are not necessarily about reading.
5) Skilled word-reading is like a reflex.
6) Most learning is implicit, but explicit instruction matters.
7) Balancing implicit learning and explicit instruction is hard.
8) “Components of reading are for teachers,” not for children.
Comment from Mark
Okay, Tim. You more than hinted at it saying, “These days I’m seeing schools mandating instructional practices that have no direct research evidence in the name of the science of reading.” Let’s get a famous Shanahan list of these instructional practices, please.
Comment from Joanne
Tim, no matter how much I read I continue to become more confused than ever! I would also really appreciate the list that Mr. Pennington would like you to create. I’ve done a great deal of reading, but I’m not well versed in scientific methods which is why I read your articles. I am in the position of teaching at the university level and there is debate about a push from some lawmakers to mandate science of reading as a part of the content in a methods class. What is your opinion on this?
Response from Tim Shanahan
While I sympathize with the impulse behind state legislators mandating various instructional practices, I cringe at the thought of what the outcome of these efforts may be (and, so, I don’t support most of the legal maneuvering that is going on).
First, I oppose the practice of teaching kids to recognize words by looking at pictures or by three-cueing. However, once that becomes a law, where does that leave the teacher who, in frustration over a child’s bungled attempt to decode a word, lets “Look at the picture” slip mindlessly through her lips? In a moment her career, license, and reputation may be ruined. That sounds far-fetched, but a teacher who doesn’t obey such laws may inadvertently find herself in that kind of soup. Perhaps this is not the best way to address what may be a problem (three-cueing certainly is not consistent with how people appear to read, but there is no evidence that giving kids such guidance in the context of a substantial and appropriate phonics program does any actual harm — in other words, I think it should be discouraged and avoided, but if someone lapses into it, I’m not sure it is actually harmful).
Second, it is clear that some bureaucrats, legislators, and governors are idiots (I mean that in the politest possible way). They see the very real pain in the faces of mothers and fathers of dyslexic children whom the schools have failed to nurture, and want to do something. The press says they should follow the science, and there are plenty of parents, teachers, administrators, professors, consultants, lobbyists, publishers, etc. who are happy to tell them what the science says. However, with no understanding of the science and no standards for what kinds and qualities of research are needed to vet a favored practice, they are mandating a dog’s breakfast of regulations that range from those supported by solid evidence to those that have never been studied at all. Making such decisions may be politically sound, but it doesn’t do much to promote reading.
Comment from Helen
Appreciate your reasoned insight to help educators approach science of reading promoted methods and materials.
I wonder, too, about the danger of positioning research and teacher practice as separate and one-way (research studies informing teaching practice and not the other way around).
Ultimately, teachers need to be their own researchers, too. Informed by “the science” to be sure, they also need to be positioned to learn to effectively measure the impact of their instruction and adjust instruction accordingly. In my experience, this piece of the puzzle is often left out of the (sometimes very one-sided) science of reading conversation.
Response from Tim Shanahan
I have a slightly different take on that. Teachers lack the training, resources, time, and purpose for conducting research — just like the typical family physician in medicine. It’s possible for practitioners to play a role in research, but it is really a different job. However, I am always surprised that public agencies do not involve teachers in a substantial way in the determination of research priorities — in terms of research that gets funded and summaries of research that are commissioned. Researchers (me included) tend to focus on questions that interest the research community. That often means the practitioner is left out. These days I get tons of questions about how to teach various aspects of phonics. The research community still seems to be interested in whether phonics can be made to work, and less interested in the best ways to teach phonics effectively (just the opposite of what practitioners are asking). What goes under the name “teacher research” tends to be a pale imitation of what is usually meant by rigorous research.
Comment from Mat
Another great article, thank you Tim! So I’m assuming we could apply this to the knowledge building trend that we see in schools and that is being spoken about a lot on social media. There was a well known baseball study showing that students who knew more about baseball had better reading comprehension than students who didn’t know about baseball, but it hasn’t been replicated as far as I know. So we need to see more studies like that before we can be sure about this, right? In your article you wrote that practices don’t become part of the science of reading just because someone wrote them down or because they were recommended by a researcher. Well, E.D. Hirsch wrote down the need for a knowledge curriculum in his Why Knowledge Matters book, and an educational journalist, Natalie Wexler, wrote a well known book called the Knowledge Gap. Some would say these have now found their way into what we call the science of reading, but I’m assuming you would say they have not earned their place there, and that claiming otherwise would be a little presumptuous considering that evidence is still lacking.
Response from Tim Shanahan
Indeed, my comments cover all aspects of reading instruction. The baseball study was not an instructional study; it was a demonstration of the role that knowledge plays in reading comprehension. The instructional idea that has (incorrectly) been drawn from that demonstration, namely that building up students’ knowledge of factual information (declarative knowledge, world knowledge, background knowledge) is the best way to increase reading comprehension, has not been evaluated through instruction. When one thinks about the millions of topics it is possible to read about, it would be impossible for teachers to intentionally build children’s knowledge of all of them so that students will be able to read about them. Sadly, by that logic, kids (and later, the adults they become) would not be able to read about anything they haven’t already learned. In classrooms, it increasingly seems to mean that if a text is factual it is worth reading, and if it deals only with human development, emotions, and relationships (stories) it is not. All of these claims are interesting, but none of them have been rigorously evaluated in practice.
Comment from Mary
What are your thoughts on E.D. Hirsch’s Core Knowledge curricula and the Knowledge Matters Campaign?
Response from Tim Shanahan
I’m a big fan of people knowing stuff and am a strong believer that schools should teach biology, physics, chemistry, geography, history, economics, literature, world culture, and the arts. I think reading (and viewing) should be an opportunity to learn about our social and natural worlds. However, until there is evidence that teaching those things improves general reading achievement, I’m going to focus my attention on those things that have been found to improve general reading achievement (e.g., decoding, written language, comprehension strategies, combinations of reading and writing).
Comment from Lynn
I find this 100% spot on, but it did leave me wondering about the huge cost to a company that has developed a program based on the information put out by researchers and based upon what has already been proven to work. If there is not yet double-blind scientific research into a program, do we eschew such programs until there is, and how do said programs go about getting the money for said research? It’s a bit of a quandary.
Response from Tim Shanahan
My argument isn’t that no one should publish programs or that schools shouldn’t purchase them. If anything, I’m pro-program because of the consistency that programs can add to a school’s offerings — and they keep teachers from constantly having to reinvent the wheel. The issue is more of a truth-in-advertising kind of thing. Teachers should know which aspects of the program are consistent with actual research and which parts have just been made up on the basis of the designers’ beliefs, theories, hopes, inferences, etc. That way teachers can think along with the curriculum designers and observe how things are going in their classrooms. If something that has a lot of research behind it isn’t working, you either have to figure out what you are doing differently than the research or how your circumstance varies from that of the research. If something that is not based on research is not working, you might want to come up with a plan for changing it.