Blogs About Reading
Shanahan on Literacy
Literacy expert Timothy Shanahan shares best practices for teaching reading and writing. Dr. Shanahan is an internationally recognized professor of urban education and reading researcher who has extensive experience with children in inner-city schools and children with special needs. All posts are reprinted with permission from Shanahan on Literacy.
Should We Administer Weekly Tests Linked to Standards?
Teacher question: My district instituted a weekly "checkpoint" (a short passage and multiple-choice assessment aligned to our standardized test). Teachers are required to give this, and then break it down by standard in a meeting with a coach. I've argued that these tests are likely not measuring what they think they are. They believe that these can tell teachers whether students are mastering certain standards and question types. We have a large proportion of students below grade level.
I'm concerned that valuable teaching time, time for working with complex texts, is going to be spent on testing, and that the nature of the assessments will lead to skills-focused teaching that won't result in better readers. I've been told that "teachers need something" to know how their kids are doing, and that this is what strong districts do. They've asked what I would suggest. How would you answer my admin's question about how best to know whether teaching is resulting in learning, particularly for less-experienced teachers?
Man, if I had a nickel ….
I believe that your assessment of the situation is spot on.
It is not possible to reliably or validly assess those individual reading comprehension standards. That’s why the multi-billion-dollar testing companies that are capable of doing amazing things don’t even pretend to do that. With the approach that you describe, kids get less instructional time (to accommodate all of the unnecessary testing), and the testing can’t possibly reveal anything specific that teachers need to know to improve or shape the intensity or quality of their instruction.
The kids’ ability to answer the questions will be due more to the difficulty levels of the texts they are asked to read in the assessments than to the types of questions on the test … that’s why research repeatedly finds that reading comprehension tests measure a single factor — not all the individual factors that the questions or the standards supposedly represent.
This scheme is a time waster. It serves to make administrators feel good because they appear to be taking positive action and looking rigorous …. But think about it. The most effective doctors aren’t the ones who prescribe placebos! And that is just what this approach is; it is a sugar pill that will make you think you are really doing something — but, remember, it is just a sugar pill. It has no therapeutic value. (I find the statement that this is what “strong districts do” to be stunning. I assure you that it is not how districts with high reading achievement got that way.)
Kids’ ability to answer the questions will likely be due to how well they are able to read the particular texts (and the degree of prior knowledge they have about the topics addressed in those texts). That means such testing should be done less often and should focus on identifying the difficulty levels of the texts that kids can and can’t handle, rather than on whether they can answer particular kinds of questions.
If administrators don’t believe this, they should look at their own data to see how reliably the kids perform week-to-week on each item type. If something valid were being measured reliably, then those scores should be pretty consistent — main idea ability or key ideas and details ability shouldn’t bounce up and down. They also might want to make readability estimates of the texts that they use and compare these with how the kids perform on the various sets of questions. What they are likely to see there is the same thing that ACT reports with its tests: if the passages are complex, then kids have trouble answering even the most straightforward or supposedly easy questions, and if the passages are relatively easy, they will be able to answer the supposedly hard questions.
I’d suggest that instead of a weekly test, the district provide an assessment two or possibly three times per year. What you want to test for isn’t which comprehension skills they do well on, but what levels of text they can handle. From that, you can make a pretty good estimate of who will be able to do well on the state assessment. And, you’ll know which kids you most need to stretch in terms of helping them develop the abilities to read those more complex texts.
The coaches should be supporting the teachers’ efforts to teach vocabulary effectively, to develop fluency, to extend kids’ reading stamina, to handle complex sentences and subtle or confusing cohesive links, and to make use of texts’ structures, rather than focusing on teaching kids to answer particular kinds of questions.
Why would I avoid the practices that you describe? Because they don’t work. Because they hurt kids by wasting their educational time. Because they make teachers, principals, and other administrators look stupid — since they don’t improve achievement.
Why would I take the approaches described here? Because they work. Because research shows that they work. Because my own personal experience as a district administrator tells me that they work on scale.