I have recently encountered some severe criticism leveled at reviews and reviewers from What Works Clearinghouse (for example, this from the National Institute for Direct Instruction). I am concerned about recommending this site to teachers as a resource for program evaluations. I'm wondering if you agree with the criticisms, and if so, where you would recommend teachers go for evidence-based program reviews. I know that the NELP and NRP reports are possibilities, but they are also static documents that do not get updated frequently with new findings, so some of the information really isn't current. Perhaps the Florida Center for Reading Research is an alternative? Do you have others that you would recommend?

I don’t agree with these criticisms and believe What Works Clearinghouse (WWC) has a valuable role to play in offering guidance to educators. I often recommend it to teachers and will continue to do so. It is the best source for this kind of information.

WWC is operated by the U.S. Department of Education. It reviews research claims about commercial programs and products in education. WWC serves as a kind of Good Housekeeping seal of approval. It is helpful because it takes conflict of interest out of the equation. WWC and its reviewers have no financial interest in whether a research claim is upheld or not.

I am an advisor to the WWC. Basically, that means I'm available, on a case-by-case basis, to help their review teams when questions come up about reading instruction or assessment. Such inquiries arise two or three times per year. I don't think my modest involvement in WWC taints my opinion, but the whole point of WWC is to reduce the commercial influence on the interpretation of research findings, so it would be dishonorable for me not to be open about my involvement.

I wish the "studies" and "reports" you referred me to were as disinterested. The National Institute for Direct Instruction, cited in the question here, has long been chagrined that the WWC reviews of Direct Instruction (DI) products and programs haven't been more positive. That the authors of these reports have a rooting interest in the results should be noted.

Unlike the disinterested reviews of the Clearinghouse, which follow a consistent, rule-based set of review procedures developed openly by a team of outstanding scientists, these reports are biased, probably because they are aimed at poking a finger in the eye of the reviewers who were unwilling to endorse their programs. That's why there is so much non-parallel analysis, questionable assumption, and biased language.

For example, one of the reports indicates how many complaints have been sent to the WWC (62 over approximately seven years of reviewing). This sounds like a lot, but what is the appropriate denominator: is it 62 complaints out of X reviews, or 62 complaints about the many decisions included in each of those reviews? Baseball umpires make mistakes, too, but we evaluate them not on the number of mistakes but on the proportion of mistakes to decisions. (I recommend WWC reviews, in part, because they will re-review the studies and revise as necessary when there are complaints.)

Or, another example: These reports include a table citing the "reasons for requesting a quality review of WWC findings," which lists the numbers and percentages of times that complaints have focused on particular kinds of problems (e.g., misinterpretation of study findings, inclusion/exclusion of studies). But there is no comparable table showing the disposition of these complaints. I wonder why not? (Apparently, one learns in another portion of the report that there were 146 specific complaints, 37 of which led to some kind of revision, often minor changes in a review for the sake of clarity; that doesn't sound so terrible to me.)

The biggest complaint leveled here is that some studies should not have been included as evidence since they were studies of incomplete or poor implementations of a program. The problem with that complaint is that issues of implementation quality only arise when a report doesn't support a program's effectiveness. There is no standard for determining how well or how completely a program is implemented, so for those with an axe to grind, any time their program works it must have been well implemented, and when it doesn't work, it must not have been.

Schoolchildren need to be protected from such scary and self-interested logic.

About the Author

Literacy expert Timothy Shanahan shares best practices for teaching reading and writing. Dr. Shanahan is an internationally recognized professor of urban education and reading researcher who has extensive experience with children in inner-city schools and children with special needs. All posts are reprinted with permission from Shanahan on Literacy.

Publication Date
April 6, 2015