
I talk a lot about research in this space.

I argue for research-based instruction and policy.

I point out a dearth of empirical evidence behind some instructional schemes, and champion others that have been validated or verified to my satisfaction.

Some readers are happy to find out what is “known,” and others see me as a killjoy because the research findings don’t match well with what they claim to “know.”

Members of this latter group are often horrified by my conclusions. They are certain that I’m wrong because they read a book for teachers with lots of impressive citations that seem to contradict my claims.

What is clear from these exchanges is that many educators don’t know what research is, why we should rely on it, or how to interpret research findings.

Research is used to try to answer a question, solve a problem, or figure something out. It requires the systematic and formal collection and analysis of empirical data. Research can never prove something with 100 percent certainty, but it can reduce our uncertainty.

“Systematic and formal” means that there are rules or conventions for how data in a research study need to be handled; the rigor of these methods is what makes the data trustworthy and allows the research to reduce our uncertainty. Thus, if a researcher wants to compare the effectiveness of two instructional approaches, he or she has to make sure the groups to be taught with these approaches are equivalent at the beginning. Likewise, we are more likely to trust a survey that defines its terms, or an anthropological study that immerses the observer in the environment for a long period of time.

Research reports don’t just provide the results or outcomes of an investigation, but they explain — usually in great detail — the methods used to arrive at those results. Most people don’t find research reports very interesting because of this kind of detail, but it is that detail that allows us to determine how much weight to place on a study.

Given all of that, here are some guidelines to remember.

1. Just because something is written doesn’t make it research

Many practitioners think that if an idea is in a book or magazine, it is research. Some even think my blog is research. It is not, and neither is the typical Reading Teacher article or Heinemann book.

That’s not a comment on their quality or value, but a recognition of what such writing can provide. In some cases, as with my blog, there is a serious effort to summarize research findings accurately; I work hard to distinguish my opinions from actual research findings.

Many publications for teachers are no more than compendia of opinions or personal experiences, which is fine. However, these have all of the limits of anecdote and testimonial.

Just because someone likes what they’re doing (e.g., teaching, investing, cooking) and then writes about how well they’ve done it … doesn’t necessarily mean it is really so great. That’s why 82% of people believe that they’re in the top 30% of drivers, something that obviously can’t be right.

As human beings we all fall prey to overconfidence, selective memory, and just a plain lack of systematicity in how we gain information about our impact.

Often when teachers tell me that kids now love reading as a result of how they teach, I ask, “How do you know? What evidence do you have?” Usually the answer is something like, “A parent told me that their child now likes to read.” Of course, that doesn’t tell us how the other 25 kids are doing, whether the parent is a good observer of such things, or even the motivation for the seemingly offhand comment.

Even when you’re correct about things improving, it’s impossible — from personal experience alone — to know the source of the success. It could be the teaching method, or maybe just the force of your personality. If another teacher adopted your methods, things might not be so magical.

And, then there is opportunity cost. We all struggle with this one. No matter how good an outcome, I can’t possibly know how well things might have gone had I done it differently. The roads not traveled may have gotten me someplace less positive — but not necessarily. You simply can’t know.

That’s where research comes in … it allows us to avoid overconfidence, selective memory, lack of systematicity, lack of reliable evidence, incorrect causal attribution, and the narrowness of individual experience.

2. Research should not be used selectively

Many educators use research the same way advertisers and politicians do — selectively, to support their beliefs or claims — rather than trying to figure out how things work or how they could be made to work better.

I wish I had a doughnut for every time a school official has asked me to identify research that could be used to support their new policy! They know what they want to do and want research to sell it, rather than studying the research to determine what they should do.

Cherry-picking an aberrant study outcome that matches one’s claims or ignoring a rigorously designed study in favor of one with a preferred outcome may be acceptable debater’s tricks but are bad science. And, they can only lead to bad instructional practice.

When it comes to determining what research means, you must pay attention not just to results that you like. Research is at its best when it challenges us to see things differently.

I vividly remember early in my career when Scott Paris challenged our colleagues to wonder why DISTAR, a scripted teaching approach, was so effective, despite the fact that most of us despised it. Clearly, we were missing something; our theories were so strong that they were blinding us to the fact that what we didn’t like was positive for kids, at least for some kids or under some conditions (the kinds of things that personal experience can’t reveal).

3. Research, and the interpretation of research, require consistency

Admittedly, interpreting research studies is as much an art as a science. During the nearly 50 years of my professional career, the interpretation of research has changed dramatically. It used to be entirely up to the discretion of each individual researcher which studies they’d include in a review and what criteria they would use to weigh those studies.

That led to some pretty funky science: research syntheses that identified only studies supporting a particular teaching method, or inconsistent criteria for impeaching studies (dismissing one study because of a serious design weakness, then accepting studies with preferred findings even though they suffered the same flaw).

I’ve been running into this problem a lot lately, not among researchers but among practitioners. When I point out a research-supported instructional practice (Reading Recovery) that is inconsistent with phonics theories, I’m told “anything works if it is taught one-on-one.” That sounds great, but those same people are offended when there is insufficient attention to phonics instruction, in spite of the evidence supporting phonics, such as the National Reading Panel report. The problem with this: the instruction in many of those positive phonics studies was delivered one-on-one.

I’m persuaded that both phonics and Reading Recovery work (because they both have multiple studies of sufficient quality showing their effectiveness). That doesn’t mean I think they work equally well, or that they are equally efficient, or that they even accomplish the same things for students.

I agree with those who argue against teaching cueing systems, because research evidence reveals that poor readers use non-orthographic information to identify words and that good readers do not. Teaching kids to read like poor readers makes no sense to me. Nevertheless, Reading Recovery clearly gives kids a learning advantage, and we’d be wise to look hard at it to see why (one study found adding more explicit phonics to it improved kids’ progress, and that’s a clue that may help us understand what it does and what it doesn’t).

The point isn’t phonics or Reading Recovery. When we make those kinds of choices, we need to weigh evidence consistently, treating studies that challenge our deepest beliefs the same as those that are wind beneath our wings. What works in teaching, who it helps, how it helps them … those are complex questions requiring sound evidence and wise analysis rather than rage and cheap “hooray for our side” tweets.

Let’s do better.

 


About the Author

Literacy expert Timothy Shanahan shares best practices for teaching reading and writing. Dr. Shanahan is an internationally recognized professor of urban education and reading researcher who has extensive experience with children in inner-city schools and children with special needs. All posts are reprinted with permission from Shanahan on Literacy.

Publication Date
November 15, 2018