
Teacher question: I’ve attached a Student Work Analysis tool that we are using. I have read that you oppose attempts to grade students on the individual reading standards. Although this tool is not used for grading students, it is a standard-by-standard analysis of the students’ work, and I wonder what you think of it? [The form that was included provided spaces for teachers to analyze student success with each of their state’s math standards].

Shanahan’s response:

In the blog entry that you refer to, I spoke specifically about evaluating reading comprehension standards (not math or even the more foundational or skills-oriented decoding, vocabulary, or morphology).

A common error in reading education is to treat reading comprehension as if it were a skill or a collection of discrete skills.

Skills tend to be highly repeatable things…

Many of the items listed as comprehension skills are not particularly repeatable. All these standards or question types aimed at main idea, central message, key details, supporting details, inferencing, application, tone, comparison, purpose, etc. are fine, but none is repeatable in real reading situations.

Each of these actions is unique, or at least highly particularized. Each time one of them occurs, it occurs in a completely different context, and executing it requires different steps from instance to instance.

Not only does each text have its own main ideas, but because the expression of each text is so different, what it takes to locate, identify, or construct a main idea will vary greatly from text to text. Contrast this with forming the appropriate phoneme for /sh/ or /ph/, computing the product of 3 × 3, or defining photosynthesis.

Another problem is that these supposed comprehension skills aren’t individually measurable.

My point isn’t that teachers can’t ask questions that would require students to figure out particular things about a text — of course they can — but performance on such questions is startlingly unreliable. Today, Johnny might answer the tone question like a champ, but tomorrow he won’t, since that will be a different story whose author reveals tone in a totally different way.

Also, comprehension questions asked about a particular text aren’t independent of each other (and item independence is imperative in assessment). The reason little Johnny struggled with tone the next day wasn’t because he forgot what he knew about tone, nor even because tone was handled more subtly in the second text, but because his reading was deeply affected by that text’s more challenging vocabulary, complex sentences, or complicated time sequence — none of which are specifically tone issues.

That means that when teachers try to suss out how well Johnny can meet Standard 6 by asking tone questions, his answers will reveal how well he could make sense of tone in one particular text, but they won’t likely indicate how well he’ll handle tone in any other. (Not at all what one would expect to see with math, decoding, or vocabulary assessments.)

Reading comprehension is so affected by the readers’ prior knowledge of the subject matter being read about and by the language used to express those ideas (e.g., vocabulary, sentence structure, cohesion, text organization, literary devices, graphics) that focusing one’s attention on which kinds of questions the kids can answer is a fool’s errand.

If I were trying to assess reading comprehension to determine who might need more help, the kind of help to provide, or whom I should worry about concerning end-of-year testing, then I wouldn’t hesitate to ask questions that seemed to reflect the standards… but the information I’d use for assessment would ignore how well the kids could answer particular types of questions.

My interest would be in how well students did with particular types of texts.

Keep track of their overall comprehension with different types of text. I’d record the following information:

  1. How the student did on each overall text (the percentage of questions answered correctly, or an estimate of the percentage of key information the student could include in a summary).
  2. The topics of the texts (with, perhaps, some rating of each child’s familiarity with those topics).
  3. An estimate of the text difficulty (in terms of Lexiles or another readability estimate).
  4. The lengths of the texts (in numbers of words, preferably).
  5. Whether the text was Literary (narrative or poetry), or Informational (expository or argumentative).

Thus, a student record may look something like this:

         Comprehension   Lexile   Familiarity    Text Type           Length
Week 1   90%             400L     4              Fiction/Narrative   300 words
Week 2   60%             570L     2 (habitats)   Info/Exposition     550 words
Week 3   75%             500L     2              Fiction/Narrative   575 words
Week 4   75%             570L     4 (robots)     Info/Exposition     500 words
Week 5   80%             490L     4              Fiction/Narrative   400 words
Week 6   65%             580L     3 (climate)    Info/Exposition     500 words
Week 7   85%             525L     3              Fiction/Narrative   250 words

Over time, you’ll get some sense that Junior does great with texts that are easier than 500L, but not so well with texts that are harder than 550L (unless they’re about robots).

Or, perhaps over the report card marking period, you may notice a difference in performance on the literary versus the informational texts (which you can see in my example above). But you also need to notice that the informational texts were relatively harder here, so it isn’t certain that the student would struggle more with informational text than with literature (though one might make an effort to sort this out to see if there is a consistent pattern). Likewise, the student seemed able to handle the silent reading demands of the shorter texts, but comprehension tended to fall off with the longer ones. That may lead you to try to do more to build reading stamina with this student.
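For teachers who keep these weekly records in a spreadsheet or script rather than on paper, here is a minimal sketch of how such summaries could be computed automatically. Python is used only for illustration; the field layout and the cut points (500L, 400 words) are assumptions chosen to mirror the example above, not part of any tool Shanahan describes.

```python
from statistics import mean

# Weekly records copied from the sample table above:
# (comprehension %, Lexile, topic familiarity, text type, length in words)
records = [
    (90, 400, 4, "Fiction/Narrative", 300),
    (60, 570, 2, "Info/Exposition",   550),
    (75, 500, 2, "Fiction/Narrative", 575),
    (75, 570, 4, "Info/Exposition",   500),
    (80, 490, 4, "Fiction/Narrative", 400),
    (65, 580, 3, "Info/Exposition",   500),
    (85, 525, 3, "Fiction/Narrative", 250),
]

def summarize(records, key):
    """Average comprehension for each group defined by key(record)."""
    groups = {}
    for rec in records:
        groups.setdefault(key(rec), []).append(rec[0])
    return {label: round(mean(scores)) for label, scores in groups.items()}

# Average comprehension by text type, by Lexile band, and by length band
print(summarize(records, key=lambda r: r[3]))
print(summarize(records, key=lambda r: "500L or below" if r[1] <= 500 else "above 500L"))
print(summarize(records, key=lambda r: "400 words or fewer" if r[4] <= 400 else "over 400 words"))
```

Run against the sample table, these groupings show the same tentative pattern discussed above: higher average comprehension on the narrative, easier, and shorter texts.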

And so on.

Basically, the information that you are collecting should describe how well the student does with particular types of texts (in terms of discourse types, length, topic familiarity, and difficulty), rather than trying to figure out which comprehension skills the individual question responses may reveal.

If a student does well with many of the passages, then he or she will likely do well with the comprehension standards — as long as these weekly dipsticks are reflective of the difficulty, lengths, and types of texts that will appear on the end-of-year tests.

And if students perform poorly with many of the passages, then their performance on all question types will be affected.

About the Author

Literacy expert Timothy Shanahan shares best practices for teaching reading and writing. Dr. Shanahan is an internationally recognized professor of urban education and reading researcher who has extensive experience with children in inner-city schools and children with special needs. All posts are reprinted with permission from Shanahan on Literacy.

Publication Date
November 11, 2019