I believe in being upfront with my readers, so let me start with a confession: I don’t hate testing.
I know it is a horrible thing for a so-called “educator” to admit. It’s sort of a social disease.
Perhaps someone has a 12-step program that could help me. Assessment Anonymous. Perhaps.
When I was a practicing teacher working on my Master’s degree, I loved collecting tests in a big notebook: sight word lists, multiple-choice phonics quizzes, informal reading inventories, motivation questionnaires. Three holes punched in their left margins. Organized by purpose. I loved them all.
In one of my jobs I even did school entry testing, putting prospective kindergartners through their paces.
Then, later, as my habit worsened, I started working on tests — the ACT, the SAT, the National Assessment — eventually even co-authoring a state test in Illinois.
You probably know how this story comes out … everyone hooked on testing eventually hits bottom, the dark night of the soul when you know you have to change or it will be all over. I reached my nadir when I found myself writing a positive review of DIBELS for the Buros Mental Measurements Yearbook.
Okay, now that I have that off my chest, let’s get real.
Over the past 15 years or so, we have so overdone the testing thing. Not just the tests that educators usually don’t like — the high stakes accountability tests — but even the instructionally relevant ones that we believe can be beneficial — the running records, informal reading inventories, DIBELS-style screeners and monitors, and a slew of acronym-titled diagnostic measures. All of them. Too much. Too damn much.
Accountability testing was not a bad idea — it just hasn’t worked the way its proponents thought it would. Nothing wrong with that: You have an idea; you try it out on millions of kids without any empirical evidence that it will work; then after a couple of decades of doing that with few victories … you keep doing it?
The basic idea was this: schools should be run more like businesses. Business figured out how to improve quality by measuring quality. By carefully monitoring their products and services — by testing them — businesses could ensure higher quality. It’s why your car starts in the morning, every morning.
By analogy, the idea was that if we tested kids, we’d see which districts, schools, and teachers weren’t getting the job done; then resources and efforts would be focused, and kids’ learning would improve. That movement started back in the 1970s, but really got going full-bore in the 1990s — more than 20 years ago. Needless to say, we are still waiting with bated breath for the uptick.
I still like the idea of the public knowing how well schools are doing, even if that has no direct impact on kids’ learning. However, we don’t need to test as much as we do to find out how schools are doing. Such tests need to be as brief as possible, and they only should be administered to samples of children, not all children (the National Assessment does a very good job of this on a national basis, testing fewer than 100,000 kids every two years).
But whether or not we adopt an accountability-testing plan that makes sense, there is NO excuse for teachers to spend inordinate amounts of time getting kids ready for these exams. So-called “test prep” should be banned if it takes more than a couple of hours a week — something like having kids take a practice test the week before testing. Almost all of the time currently devoted to prepping kids for the PARCC, SBAC, STAAR, Aspire, and the other state tests should be devoted to … wait for it … teaching! That time could be profitably spent teaching reading, writing, math, science, social studies, and the rest of the curriculum.
Why would I recommend such a crazy thing? Because the surest way to raise reading achievement is not through test prep, but through teaching kids to read.
But the testing glut is not due just to the politicians and their accountability schemes. A good deal of the over-testing we have brought on ourselves. Again, the theory has seemed reasonable.
If we know which kids are lagging in which skills, then we can be sure to teach those skills to the right kids, and voilà, higher reading achievement. This idea is especially prevalent among those responsible for kids with learning problems; often it is proposed that those children be tested weekly! The claim is that such testing represents a more rigorous effort on behalf of the strugglers.
But that claim has no basis in research, at least as far as reading achievement goes. I’m not arguing against occasionally testing certain skills to see what kind of progress is being made and whether anyone is falling through the cracks — but that can be accomplished well by testing 2-3 times per school year. I’m also not talking about the teachers who observe kids’ performance within daily instruction and who look carefully at kids’ written work (in fact, they’re my heroes).
But interrupting instruction frequently to have kids take tests — even tests aimed at focusing instruction — is a big time waster. There is no evidence that such testing regimens actually improve learning, but there is plenty of evidence supporting the teaching of reading. Our New Year’s resolution should be, “Let’s teach, not test!” Let’s devote our instructional time to teaching kids to read — not to preparing them for tests, and not to administering tests.