
Teacher question: What are your thoughts on standards-based grading in ELA, which is used in many districts? For example, teachers may be required to assign a number from 1 to 4 (4 being “mastery”) that indicates a student’s proficiency level on each ELA standard. Teachers need to provide evidence to document how they determined the level of mastery. Often, tests are created with items that address particular standards. If students get those items correct, that is taken as evidence of mastery. What do you recommend?

Shanahan’s response: 

Oh boy… this answer is going to make me popular with your district administration!

The honest answer is that this kind of standards-based grading makes no sense at all.

It is simply impossible to reliably or meaningfully measure performance on the individual reading standards. Consequently, I would not encourage teachers to try to do that.

If you doubt me on this, contact your state department of education and ask them why the state reading test doesn’t provide such information.

Or better yet, see if you can get those administrators who are requiring this kind of testing and grading to make the call.

You (or they) will find out that there is a good reason for that omission, and it isn’t that the state education officers never thought of it themselves.

Or check with the agencies that designed the tests for your state. Call AIR, Educational Testing Service, or ACT, or the folks who designed PARCC, SBAC, or any of the rest of the alphabet soup of accountability assessments.

What you’ll find out is that no one has been able to come up with a valid or reliable way of providing scores for individual reading comprehension “skills” or standards.

Those companies hire the best psychometricians in the world and have collectively spent billions of dollars designing tests, and they still haven’t been able to do what your administration wants. And if those experts can’t do it, why would you assume that Mrs. Smith in second grade can do it in her spare time?

Studies have repeatedly shown that standardized reading comprehension tests measure a single factor, not a list of skills represented by the various types of questions asked.

What should you do instead?

Test kids’ ability to comprehend texts at target readability levels. For instance, in third grade you might test kids with passages at 475L, 600L, 725L, and 850L at each report-card marking period. What you want to know is whether kids can make sense of such texts through silent reading.

You can still ask questions about these passages based on the “skills” that seem to be represented in your standards — you just can’t score them that way.

Specifically, you want to know whether kids can handle such texts with at least 75% comprehension.
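To make that concrete, here is a minimal sketch in Python of what this kind of passage-level scoring could look like. The data shapes, function names, and item counts are illustrative assumptions, not a prescribed implementation; the only inputs taken from this post are the 75% criterion and the four third-grade Lexile levels.

```python
# Minimal sketch of passage-level scoring (illustrative assumptions throughout).
# The shape of the data and all names here are hypothetical.

MASTERY_THRESHOLD = 0.75  # the "make sense of the text" criterion from the post


def percent_correct(item_results):
    """Fraction of comprehension questions answered correctly for one passage."""
    return sum(item_results) / len(item_results)


def highest_level_comprehended(results_by_lexile):
    """Return the highest Lexile level at which the student met the threshold.

    results_by_lexile maps a Lexile level (int) to a list of 1/0 item scores
    for the passage tested at that level.
    """
    passed = [
        level
        for level, items in results_by_lexile.items()
        if percent_correct(items) >= MASTERY_THRESHOLD
    ]
    return max(passed) if passed else None


# Example: a third grader tested on the four passages mentioned above.
student = {
    475: [1, 1, 1, 1, 1, 1, 1, 0],  # 87.5% -> comprehends
    600: [1, 1, 1, 0, 1, 1, 1, 1],  # 87.5% -> comprehends
    725: [1, 1, 0, 1, 1, 0, 1, 1],  # 75.0% -> comprehends
    850: [1, 0, 0, 1, 0, 1, 0, 1],  # 50.0% -> does not
}

print(highest_level_comprehended(student))  # -> 725
```

Note the design choice: the only reportable result is a text level the student can handle, not a profile of standard-by-standard subscores. That is the point.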

In other words, it’s the passages and text levels that should be your focus, not the question types or individual standards.

If kids can read such passages successfully, they’ll be able to answer your questions. And if they can’t, then you need to focus on increasing their ability to read such texts. That means teaching things like vocabulary, text structure, and cohesion, and having kids read texts that are sufficiently challenging, not practicing answering particular types of questions.

Sorry, administrators: you’re sending teachers on a fool’s errand, one that will not lead to higher reading achievement, just misleading information for parents and kids and wasted effort for teachers.

References

ACT. (2006). Reading between the lines. Iowa City, IA: American College Testing.

Davis, F. B. (1944). Fundamental factors of comprehension in reading. Psychometrika, 9(3), 185–197.

Kulesz, P. A., Francis, D. J., Barnes, M. A., & Fletcher, J. M. (2016). The influence of properties of the test and their interactions with reader characteristics on reading comprehension: An explanatory item response study. Journal of Educational Psychology, 108(8), 1078–1097.

Muijselaar, M. M. L., Swart, N. M., Steenbeek-Planting, E., Droop, M., Verhoeven, L., & de Jong, P. F. (2017). The dimensions of reading comprehension in Dutch children: Is differentiation by text and question type necessary? Journal of Educational Psychology, 109(1), 70–83.

Spearritt, D. (1972). Identification of subskills of reading comprehension by maximum likelihood factor analysis. Reading Research Quarterly, 8(1), 92–111.

Thorndike, R. (1973). Reading as reasoning. Reading Research Quarterly, 9(2), 135–147.


About the Author

Literacy expert Timothy Shanahan shares best practices for teaching reading and writing. Dr. Shanahan is an internationally recognized professor of urban education and reading researcher who has extensive experience with children in inner-city schools and children with special needs. All posts are reprinted with permission from Shanahan on Literacy.

Publication Date
September 10, 2019