
Teacher question

I teach first grade, and this year I switched schools. In my previous school, we tested our students with DIBELS three times a year. The idea was to figure out whether the students were having trouble with decoding so that we could help them. That isn’t how my new principal does it. He has us giving kids a reading comprehension test with leveled books. I asked him about it, and he said that the district didn’t care about DIBELS and he didn’t care about DIBELS (he only cares about how kids do on the ____ test). I’m confused. I thought the idea of teaching phonics and fluency was to enable comprehension, but the emphasis on this test seems to suggest — at least in my new school — that this is no longer necessary. What should I do?

Shanahan’s response

Educators often complain about the intrusive accountability testing imposed by politicians and bureaucrats who wouldn’t know the difference between a textbook and a whiteboard. But many of the dumbest decisions made by teachers and principals are in pursuit of the tests we ourselves impose.

The accountability tests — PARCC, SBAC, and all the other state tests — are there to check up on how well we are doing our jobs. It’s not surprising that we don’t like those, and that, consequently, we might bend over backward to try to look good on such tests. That’s why many schools sacrifice real reading instruction — that is, the instruction that could actually be expected to help kids read better — in favor of so much test prep and test practice.

But what of the formative assessments that are supposed to help us do our jobs? You know, the alphabet soup of inventories, diagnostic tests, screeners, monitors, and dipsticks that pervade early reading instruction like DIBELS, ERDA, PALS, CTOPP, TPRI, ISIP, CAP, TRC, NWEA, AIMSweb, and TOWRE.

None of these instruments is problematic in and of itself. Most set out to measure young children’s ability to … well, to … do something relevant to early literacy learning. For instance, they might evaluate how many letters the children can name, or how well they can hear the sounds within words. Sometimes, as in your new school, they ask kids to read graded passages or little books and to answer questions about them; or, as in your previous school, they might gauge students’ ability to correctly perceive the sounds within words.

The basic idea of these testing schemes is to find lacks and limitations. If Johnny doesn’t know his letters, then his kindergarten teacher should provide extra tuition in that. If Mary can’t understand the first-grade text, perhaps she should get her teaching from a somewhat easier book. And so on.

That is all well and good … but how we do twist those schemes out of shape! My goodness.

As a result, educators have increasingly grown restive about the “instructional validity” of assessment. Instructional validity refers to the appropriateness of the impact these tests have upon instruction.

DIBELS itself has often been the target of these complaints. These tests shine a light on parts of the reading process — and teachers and principals tend to focus their attention on those tested parts — neglecting any aspect of literacy development that doesn’t get this kind of light. Thus, one sees first-grade teachers spending inordinate amounts of time on word attack trying to raise NWF (nonsense word fluency) scores, but with little teaching of untested skills like vocabulary or comprehension or writing.

Even worse, we sometimes find instruction aimed at mastery of the nonsense words themselves, with the idea that this will result in higher scores.

Of course, this is foolishness. The idea of these formative testing regimes is to figure out how the children are doing with some skill that supports their reading progress, not to see who can obtain the best formative test scores.

The reason why DIBELS evaluates how well kids can read (decode or sound out) nonsense words is that research is clear that decoding ability is essential to learning to read, and instruction that leads students to decode better eventually improves reading ability itself (including reading comprehension). Nonsense words can provide a good avenue for the assessment of this skill because they don’t favor any particular curriculum (as real words would), they correlate with reading ability as well as real words do, and no one in their right mind would have children memorizing nonsense words. Oops … apparently, that last consideration is not correct. Teachers, not understanding or caring about the purpose of the test, are sometimes willing to raise scores artificially through just this kind of memorization.

And, to what end? Remember, the tests are aimed at identifying learning needs that can be addressed with extra teaching. If I artificially make it appear that Hector can decode well when he can’t (memorizing the test words is one way to do this), then I get out of having to provide him the instruction that he needs. In other words, I’ve made it look like I’m a good teacher, but what I’ve really done is disguised the fact that Hector isn’t succeeding, and I’m delaying any help that may be provided until it is too late.

Another example of this kind of educational shortsightedness has to do with using the tests to determine who gets extra help, such as from a Title I reading teacher. In most schools, the idea is to catch kids’ literacy learning gaps early so we can keep them on the right track from the beginning. But what if you are in a school with high mobility (your kids move a lot)?

I know of principals who deploy these resources later — grades 2 or 3 — to try to make certain that these bucks improve reading achievement at their schools. Research might find it best to use these tests early to target appropriate interventions in kindergarten and grade 1, but these schmos don’t want to “waste” resources that way, since so many students don’t stick around all the way to the accountability testing. Instead of targeting the testing and intervention at the points where these will help kids the most, these principals aim them at what might make the principals themselves look better (kind of like the teachers teaching kids the nonsense words).

Back to your question … your school is only going to test an amalgam of fluency (oral reading of the graded passages) and reading comprehension. If all that you want to know is how well your students can read, that is probably adequate. If all the first-grade teachers tested their charges with that kind of test, the principal would end up with a pretty good idea of how well the first-graders in his school are reading so far. Your principal is doing nothing wrong in imposing that kind of test if that is what he wants to know. I assume those results will be used to identify which kids need extra teaching.

I get your discomfort with this, however. You are a teacher. You are wondering … if Mary needs extra teaching, what should that extra teaching focus on?

Because of the nature of reading, that kind of assessment simply can’t identify which reading skills are causing the problem. Mary might not read well — the test is clear about that — but we can’t tell whether this poor reading is due to gaps in phonological awareness (PA), phonics, oral reading fluency, vocabulary, or reading comprehension itself.

The default response of too many teachers, with this test or any other, is to teach something that looks like the test. In first grade, that would mean neglecting the very skills that improve reading ability. The official panels that carefully examined the research concluded that decoding instruction was essential because such teaching resulted in better overall reading achievement (not just improvements in the skill that was taught). The same can be said of PA, fluency, and vocabulary instruction.

I’d love to tell you I have a great solution to your problem … for instance, perhaps all the children could be tested in the way that your principal requires, and then anyone who failed to reach a particular reading level could be tested further using DIBELS or something like it to identify the underlying skills that are likely holding those kids back. That sounds pretty sensible, since it would keep teachers from focusing only on the underlying skills (and then ignoring reading comprehension). And yet, I quake at the thought of teachers who will now teach reading with the test passages, or who will coach the kids on answering the test questions so that no one needs to be tested further — in other words, hiding the fact that their kids are struggling.

The key to making all of this work for kids is: 

  1. All teachers and principals need to know what skills are essential to reading success. There are skills and abilities inherent in reading comprehension itself (so testing comprehension is not unreasonable) but there are also enabling skills that make comprehension possible for young readers (and testing those skills makes sense too). Knowledge of letters, ability to perceive sounds, decoding facility, knowledge of high-frequency words, oral reading fluency, awareness of word meanings, and ability to make sense of text are all part of the reading process — and all of these should be taught and tested from the start.
  2. It is also critical for educators to know that this list of essential skills is not a sequence of teaching … in which one starts with letters and sounds and ends up with ideas. In fact, good early reading instruction provides a combination of instruction in decoding, fluency, comprehension, and writing — from the very beginning.
  3. Formative assessment can help us to monitor student progress in all of these areas; no one area is more important than another. That a student lags in one area is no reason to neglect instruction in the others. If you find that a youngster does not decode well, I would provide added or improved decoding instruction — but I would also maintain a daily teaching regimen with all of the other literacy components.
  4. It is essential that educators know what tests can be used to measure the various components of literacy and how these assessments work. A nonsense word test, for instance, isn’t trying to find out whether kids can read nonsense words; it is measuring how well they can decode any word. A fluency test is not about speed reading, but about being able to read the text so that it sounds like language. A low comprehension test score might reveal comprehension problems, but you can’t determine that without additional tests of the underlying skills, since low comprehension may be, and at this point usually is, the result of poor decoding or a lack of fluency.
  5. No educator should ever teach the test, nor should lessons look like the test. These kinds of tests are not competitive. They are there to help us identify who needs help and what they may need help with. Every screwy thing we do to make the scores look better than they are is harmful to the education of children.

So, a pox on both your houses … That your principal doesn’t care why kids are having reading trouble is a serious impediment for the boys and girls in that school. That you don’t recognize the value of a test of your students’ actual reading ability concerns me as it might indicate a willingness to go off the deep end, teaching some aspects of reading to the neglect of others. Teach it all, monitor it all … and help these children to succeed.


About the Author

Literacy expert Timothy Shanahan shares best practices for teaching reading and writing. Dr. Shanahan is an internationally recognized professor of urban education and reading researcher who has extensive experience with children in inner-city schools and children with special needs. All posts are reprinted with permission from Shanahan on Literacy.

Publication Date
January 29, 2018