
Strategies for Conducting Outcome Evaluations of Early Intervention Literacy Programs

By: Jessica Blom-Hoffman, Julie F. Dwyer, Angela T. Clarke, Thomas J. Power
Learn how school psychologists can partner with reading specialists and classroom teachers to evaluate the benefits of early intervention reading programs in their districts.

Approximately 20% of U.S. children have difficulty learning to read (Grossen, 1997). It is widely recognized that early reading difficulties tend to persist over time. Juel (1988) demonstrated that first grade students who struggled with letter and word recognition and phonological processing were highly likely to have reading difficulties in the later elementary grades. In response, there have been numerous efforts to develop early intervention programs for young children at risk for reading difficulties (e.g., Adams, Foorman, Lundberg, & Beeler, 1998; Slavin, Madden, & Wasik, 1996; Torgesen & Bryant, 1993).

In a recent article related to the prevention of reading difficulties, Torgesen (2002) identified two types of skills that are required for successful reading comprehension. These include:

  1. General language comprehension
  2. Word recognition fluency

Torgesen emphasized that a prerequisite for recognizing and comprehending words is the acquisition of phonemic awareness skills. Increasingly, educators are recognizing the importance of phonological awareness as a building block to literacy. Therefore, it is important to include strategies that promote phonological awareness, in addition to strategies that promote letter and word recognition fluency and oral language skills, in programs aimed at helping young children at risk for reading difficulties.

Many school psychologists have a strong knowledge base in the areas of instructional intervention and outcome evaluation, enabling them to serve as a resource to educators in the design and evaluation of literacy development programs. The purpose of this article is to describe how school psychologists can partner with reading specialists and classroom teachers to evaluate the benefits of early intervention reading programs in their districts. This article will describe five domains to be considered in comprehensive outcome evaluations of early intervention reading programs:

  1. Instructional outcomes (e.g., phonemic awareness skills and letter recognition fluency)
  2. Process variables (e.g., amount of instruction and active engagement in reading)
  3. Social validity
  4. Procedural integrity
  5. Family involvement
Additionally, specific measures to evaluate outcomes in each of these domains will be discussed (see Table 1).

Table 1. Domains of assessment for early literacy programs

| Assessment Domain | Brief Description | Examples of Measures |
| --- | --- | --- |
| Instructional Outcomes (Alphabet Recognition) | Student's fluency in recognizing upper- and lower-case letters of the alphabet | CBM; DIBELS; WIAT-II |
| Instructional Outcomes (Phonological Skills) | Student's fluency with a variety of phonological skills, including rhyming, blending, and segmenting | DIBELS; CTOPP; WIAT-II |
| Academic Learning Time | Amount of time a student is engaged in instruction (i.e., on-task behavior) | Code for Instructional Structure and Student Academic Response (CISSAR) |
| Amount of Instruction | Amount of time a student is provided with reading instruction (i.e., the dose of the intervention) | Number and length of sessions provided |
| Social Validity | The degree to which participants in the intervention find it acceptable, fair, and appropriate | Treatment Evaluation Inventory (TEI); Intervention Rating Profile (IRP); Children's Intervention Rating Profile (CIRP) |
| Procedural Integrity | The degree to which a program is implemented as intended | Integrity checklists |
| Family Involvement | A wide range of activities that can include engaging in learning activities at home and in the community | Family Involvement Questionnaire; Parent-Teacher Involvement Questionnaire |

Instructional outcomes

Dynamic Indicators of Basic Early Literacy Skills (DIBELS)

The DIBELS (Good & Kaminski, 1996) was designed to assess early literacy skills of emergent readers. This measure identifies students who are not making sufficient progress in the acquisition of important early literacy skills, and it is useful for monitoring the effectiveness of reading interventions (Kaminski & Good, 1996). The DIBELS measures were designed to assess phonological awareness, knowledge of the alphabet and fluency with text. The measures assess a broad range of important early literacy skills (i.e., initial sounds, letter naming, phoneme segmentation, nonsense word reading) that are predictive of later reading proficiency (see http://dibels.uoregon.edu).

Comprehensive Test of Phonological Processing (CTOPP)

The CTOPP (Wagner, Torgesen, & Rashotte, 1999) is a norm-referenced measure used to assess phonological awareness, phonological memory, and rapid naming. The primary uses of this instrument are to:

  • Identify individuals who are below their peers in phonological skills
  • Document a student's progress in response to intervention
  • Determine strengths and weaknesses among phonological processes
  • Validate systematic instruction programs through research

The Phonological Awareness Composite of the CTOPP is especially useful for evaluating early literacy instruction. This composite combines three subtests (Elision, Blending Words and Sound Matching) to obtain a measure of a student's ability to segment and blend sounds. These skills are thought to be of primary importance in later word decoding. The subtests also contain practice items that enable the examinee to learn the task and receive feedback before scoring begins. However, the samples used to derive the norms do not closely match those populations typically found within urban schools (i.e., students from racially and ethnically diverse backgrounds). In addition, this measure does not have alternate forms; therefore, it has limited utility for repeated measurement.

Wechsler Individual Achievement Test – Second Edition (WIAT-II)

The Word Reading subtest of the WIAT-II (Wechsler, 2001) provides a norm-referenced measure of reading decoding. The examinee is required to name letters of the alphabet, identify and generate rhyming words, match similar beginning and ending sounds, match sounds with letters and letter blends, and read words in isolation. The discrete skills measured in this subtest can provide useful information to parents and teachers about which skills the student has developed and which should be targeted for supplemental instruction. However, as with the CTOPP, the sample used to obtain the norms for the WIAT-II does not closely match populations typically found within urban schools. Additionally, the test floor of this instrument can be inadequate for children below 6 years of age (Flanagan, Ortiz, Alfonso, & Mascolo, 2002).

Process variables

Academic learning time

Academic learning time (ALT), defined as the amount of time a student is actively, successfully, and productively involved in learning, is strongly related to academic achievement (Gettinger & Seibert, 2002). ALT comprises a number of components, including allocated time, instructional time, engaged time, and successful and productive learning time. For further explanation of ALT and its components, the reader is directed to Gettinger and Seibert (2002). School psychologists can use systematic, direct observations to assess academic learning time during reading interventions. The Code for Instructional Structure and Student Academic Response (CISSAR; Stanley & Greenwood, 1981) is an example of a system that school psychologists can use to assess environmental instructional variables. The CISSAR can be used to measure students' active responses, including reading aloud, asking questions, answering questions, and engaging in academic talk. Off-task behaviors and teacher behaviors can also be assessed with this system.

Amount of instruction

Torgesen (2002) argued that children with reading difficulties need more intense instruction in reading (i.e., more learning opportunities) compared with peers with average reading skills. Amount of instruction is a process variable that can be recorded quite simply by logging the number and length of the reading sessions that made up the intervention.

Social validity

Social validity refers to the degree to which participants (e.g., students, teachers, parents and administrators) in behavioral and academic interventions find them acceptable in terms of:

  • The goals of the intervention
  • The appropriateness of the procedures
  • The importance of the treatment implications (Wolf, 1978)

Treatment acceptability is one aspect of social validity, referring to the perceived fairness and appropriateness of intervention procedures. Treatment acceptability is an important variable to assess in outcome evaluations of early literacy programs, as it is helpful in determining the likelihood that the instructional procedures in the program will be implemented (Reimers, Wacker, & Koeppl, 1987). There are several scales that can be used to assess treatment acceptability. The Treatment Evaluation Inventory (TEI; Kazdin, 1980) for parents and the Intervention Rating Profile (IRP-15; Witt & Elliott, 1985) for teachers are two examples. The Children's Intervention Rating Profile (CIRP; Witt & Elliott, 1985) is a brief scale for assessing child perceptions of acceptability.

Procedural integrity

Procedural integrity refers to the degree to which a program is implemented as intended. With regard to interpreting results from literacy development programs, measures of procedural integrity provide an index of the degree to which discrete components of the program were implemented. Procedural integrity can be assessed through direct observations of instruction or by audio- or video-taping sessions and coding them at a later date (Ehrhardt, Barnett, Lentz, Stollar, & Reifin, 1996). Integrity checklists should be developed prior to the implementation of the program and should include the steps to follow in the lessons. Additionally, checklists can include process variables such as the amount of praise provided to children during the session and the level of student engagement.

Family involvement

Both education researchers (Christenson & Sheridan, 2001; Miedel & Reynolds, 2000) and policymakers, such as the U.S. Department of Education (1999), encourage family involvement in early reading programs. Family involvement represents a wide range of activities, including meeting children's basic health and safety needs, communicating with teachers and administrators, serving as a parent volunteer or in school governance, and engaging in learning activities at home and in the community (Epstein & Dauber, 1991). In comparison to interventions based solely in the school, reading programs with a family involvement component have the potential for better academic outcomes because caregivers are able to support the acquisition of reading skills in the home (Hoover-Dempsey & Sandler, 1995). In addition, caregivers can support their child's education by ensuring that the child regularly attends school and by explicitly teaching the value of learning.

Although research on the assessment of family involvement in education is still in its infancy, two measurement tools show particular promise for evaluating early reading programs. The Family Involvement Questionnaire (FIQ; Fantuzzo, Tighe, & Childs, 2000) is administered to caregivers of children in preschool to first grade and measures three dimensions of family involvement: school-based involvement, home-school conferencing and home-based involvement. Another sound measure of family involvement is the Parent-Teacher Involvement Questionnaire (PTIQ; Kohl, Lengua, McMahon, & Conduct Problems Prevention Research Group, 2000). The PTIQ has both a parent and a teacher version and has been validated with children in kindergarten and first grade. Other methods of assessing family involvement in early reading programs include asking parents to estimate the level of their involvement in various education-related activities (e.g., Hoover-Dempsey, Bassler, & Brissie, 1992) and reviewing school records to calculate the frequency and purpose of home-school contact.

Conclusions

Early recognition of reading difficulties and effective intervention to promote literacy skills are important to prevent life-long educational and social struggles. Given their skills in assessment and outcome evaluation, school psychologists can play an important role in working with educators to assess the effectiveness of early intervention reading programs in their school districts. The purpose of this article was to describe domains to consider when developing an outcome evaluation plan, as well as specific measures that can be used to assess each of these domains. Information from outcome evaluations of early literacy programs can be helpful to: monitor children's progress, understand whether the current literacy program in the school district is effective or needs to be modified, and provide a rationale to administrators for continued program funding.

References


Adams, M. J., Foorman, B. R., Lundberg, I., & Beeler, T. (1998). Phonemic Awareness in Young Children. Baltimore: Brookes.

Christenson, S. L., & Sheridan, S. M. (2001). School and families: Creating essential connections for learning. New York: Guilford Press.

Ehrhardt, K. E., Barnett, D. W., Lentz, F. E., Stollar, S. A. & Reifin, L. H. (1996). Innovative methodology in ecological consultation: Use of scripts to promote treatment acceptability and integrity. School Psychology Quarterly, 11, 149-168.

Epstein, J. L., & Dauber, S. L. (1991). School programs and teacher practices of parent involvement in inner-city elementary and middle schools. The Elementary School Journal, 91, 289-305.

Fantuzzo, J. F., Tighe, E., & Childs, S. (2000). Family involvement questionnaire: A multivariate assessment of family participation in early childhood education. Journal of Educational Psychology, 92, 367-376.

Flanagan, D.P., Ortiz, S.O., Alfonso, V.C., & Mascolo, J.T. (2002). The achievement test desk reference (ATDR): Comprehensive assessment and learning disabilities. Boston: Allyn & Bacon.

Gettinger, M., & Seibert, J. K. (2002). Best practices in increasing academic learning time. In A. Thomas (Ed.), Best practices in school psychology IV: Vol. 1 (4th ed., pp. 773-787). Bethesda, MD: National Association of School Psychologists.

Good, R. H. & Kaminski, R. A. (1996). Assessment for instructional decisions: toward a proactive/prevention model of decision-making for early literacy skills. School Psychology Quarterly, 11, 326-336.

Grossen, B. (1997). Thirty years of research: What we know about how children learn to read: A synthesis of research on reading from the National Institute of Child Health and Human Development. Santa Cruz, CA: Center for the Future of Teaching and Learning.

Hoover-Dempsey, K. V., Bassler, O. C., & Brissie, J. S. (1992). Explorations in parent-school relations. Journal of Educational Research, 85, 287-294.

Hoover-Dempsey, K. V., & Sandler, H. M. (1995). Parental involvement in children's education: Why does it make a difference? Teachers College Record, 97, 310-331.

Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology, 80, 437-447.

Kaminski, R. A., & Good, R. H. (1996). Toward a technology for assessing basic early literacy skills. School Psychology Review, 25(2), 215-227.

Kazdin, A. E. (1980). Acceptability of alternative treatments for deviant child behavior. Journal of Applied Behavior Analysis, 13, 259-273.

Kohl, G. O., Lengua, L. J., McMahon, R. J., & Conduct Problems Prevention Research Group (2000). Parent involvement in school: Conceptualizing multiple dimensions and their relations with family and demographic risk factors. Journal of School Psychology, 38, 501-523.

Miedel, W. T., & Reynolds, A. J. (2000). Parent involvement in early intervention for disadvantaged children: Does it matter? Journal of School Psychology, 37, 379-402.

Official DIBELS Home Page (n.d.). Retrieved August 12, 2002, from http://dibels.uoregon.edu.

Reimers, T. M., Wacker, D. P., & Koeppl, G. (1987). Acceptability of behavioral interventions: A review of the literature. School Psychology Review, 16, 212-227.

Slavin, R. E., Madden, N. A., & Wasik, B. A. (1996). Success for all and roots and wings: Summary of research on achievement outcomes. Baltimore, MD: Johns Hopkins University, Center for Research on the Education of Students Placed at Risk.

Stanley, S. D. & Greenwood, C. R. (1981). Code for instructional structure and student academic response: Observer's manual. Kansas City, KS: Juniper Gardens Children's Project, Bureau of Child Research, University of Kansas.

Torgesen, J. K. (2002). The prevention of reading difficulties. Journal of School Psychology, 40, 7-26.

Torgesen, J. K. & Bryant, B. R. (1993). Phonological awareness training for reading. Austin, TX: ProEd.

U.S. Department of Education (1999). School-home links reading kit. Retrieved from www.ed.gov/pubs/CompactforReading/kit_ack.html.

Wagner, R. K., Torgesen, J. K., & Rashotte, C. A. (1999). The Comprehensive Test of Phonological Processing: Examiner's manual. Austin, TX: Pro-Ed.

Wechsler, D. (2001). Wechsler Individual Achievement Test-Second Edition. San Antonio, TX: The Psychological Corporation.

Witt, J. C. & Elliott, S. N. (1985). Acceptability of classroom management strategies. In T. R. Kratochwill (Ed.), Advances in school psychology: Vol. 4. (pp. 251-288). Hillsdale, NJ: Erlbaum.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203-214.

Jessica Blom-Hoffman, Ph.D., is an Assistant Professor of School Psychology at Northeastern University in Boston, MA; Julie F. Dwyer, M.Ed., is a fourth-year doctoral student in the school psychology program at Temple University in Philadelphia, PA; Thomas Power, Ph.D., NCSP, is Associate Professor of School Psychology in Pediatrics at the University of Pennsylvania School of Medicine and Program Director of the Center for Management of ADHD and Community Schools Program at The Children's Hospital of Philadelphia; Angela T. Clarke, Ph.D., is a Post-Doctoral Fellow in the Community Schools Program at The Children's Hospital of Philadelphia.

This article has been posted with the permission of NASP as part of the NASP-Reading Rockets Partnership. NASP retains the copyright of these materials. All reprint or use permission should be directed to NASP via publications@naspweb.org.
