We hear a lot about fidelity of implementation when talking about RTI. What does this really mean?
Response from Doug Fuchs
Fidelity of implementation, I think, can best be explained this way. The people who first promoted RTI were very much interested, and I think rightly interested, in promoting best-evidence practices in schools. They promoted the idea that teachers and ancillary personnel should be using research-backed or research-validated instruction. When I say research-backed instruction, I mean instruction that was developed through a process, usually directed by researchers, that is very carefully conceptualized and operationalized to determine its effects on student performance.
As someone who has developed instructional programs, I know that this process is often iterative; it's sometimes trial and error. You work very hard to develop a program, find out that it doesn't work, go back to the drawing board, and try again. Over time you develop an explicit, carefully delineated program, and through research you can say that if this program is implemented as the researcher implemented it, you can expect X, Y, or Z student outcomes. The researchers then share these instructional programs with practitioners, and they should be saying to practitioners, "Look, this is how we developed the program, and this is the program. If you deliver it the way we have detailed it, it's a good bet that you will get the results we did." So what we're really saying is that we're encouraging fidelity of treatment implementation, meaning we're encouraging you to implement our program the way we implemented it when we validated it. Importantly, this doesn't mean that practitioners can't take a validated instructional program, customize it to their own students and circumstances, and have their children do even better than the children who participated in our research. But it's also possible that if they customize it, change it in some fashion, their students could do worse.
The point is, we don't know. When practitioners take validated programs and customize them, we simply don't know what the effects of those changes will be on students. So a prudent course of action would be to take a researcher's validated program of instruction, use it as closely as possible to the way the researcher did, and then, over time, customize it. See if you can tweak it in ways that make sense to the practitioner, given the types of students involved, the school system, its policies, and so forth. As long as you continue to take data on children, you can determine whether the tweaking, the customizing, enhances or diminishes the effectiveness of the validated program.