A quality assurance program is a key component of evidence-based programming. Quality assurance for evidence-based programs has two main components: fidelity monitoring and evaluation.
Fidelity Monitoring
Evidence-based programs depend on being implemented with fidelity, meaning the program is delivered as it was originally designed. Delivering a program with fidelity improves the quality of what participants receive and ensures that all program activities are implemented correctly for their benefit. Fidelity should be considered at every phase of implementing an evidence-based program, from recruitment and training through program delivery and evaluation. While leaders learn the basics of program fidelity during training, it is up to the license holder or host organization to make sure the program is delivered accurately so workshop participants experience positive outcomes similar to those in the original research. You can learn more about program-specific fidelity requirements through the CDSME and Falls Prevention Fidelity Hubs.
Furthermore, the Administration for Community Living published a fidelity evaluation of evidence-based health promotion and disease prevention programs implemented by the aging network. Review the following report and tools:
- Fidelity Evaluation of ACL’s Evidence-Based Programs
- Appendices
- Fidelity Monitoring Tool
- Fidelity Worksheet
Your organization may want to implement a Continuous Quality Improvement plan or develop a Logic Model to support quality assurance. Other strategies for ensuring quality program delivery include:
- Fidelity checks: Check with the program developer to see if they provide a fidelity checklist
- Fidelity agreement: This agreement can be a standalone document or incorporated into your leader or master trainer agreement
- Monitoring participant attendance rates by leader (see the sketch after this list)
- Calling leaders before and after the workshop to check in
- Pairing experienced leaders with new instructors
- Developing fidelity in-service training for leaders
- Offering leader self-evaluation and peer feedback opportunities
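As a concrete illustration of the attendance-monitoring strategy above, the Python sketch below tallies per-leader attendance and completion rates from a hypothetical CSV export of participant records. The file name, the column names, the six-session workshop length, and the four-of-six "completer" threshold are all assumptions to adapt to your own data and your program's requirements.

```python
# attendance_by_leader.py - a minimal sketch, not an official monitoring tool.
# Assumes a hypothetical CSV export ("attendance.csv") with one row per
# participant and columns: leader, participant_id, sessions_attended.
# The six-session workshop length and the 4-of-6 "completer" threshold are
# common for CDSMP-style workshops but should be confirmed for your program.
import csv
from collections import defaultdict

TOTAL_SESSIONS = 6   # assumption: standard six-session workshop
COMPLETER_MIN = 4    # assumption: completer = attended 4 or more sessions

def summarize(path="attendance.csv"):
    stats = defaultdict(lambda: {"participants": 0, "sessions": 0, "completers": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            attended = int(row["sessions_attended"])
            s = stats[row["leader"]]
            s["participants"] += 1
            s["sessions"] += attended
            s["completers"] += attended >= COMPLETER_MIN
    for leader, s in sorted(stats.items()):
        attendance_rate = s["sessions"] / (s["participants"] * TOTAL_SESSIONS)
        completion_rate = s["completers"] / s["participants"]
        print(f"{leader}: {s['participants']} participants, "
              f"{attendance_rate:.0%} attendance, {completion_rate:.0%} completers")

if __name__ == "__main__":
    summarize()
```

A leader whose completion rate lags well behind peers is not necessarily drifting from the curriculum, but the gap is a useful trigger for a check-in call or a peer observation.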
Health Foundation of South Florida's Quality Assurance Plan
Massachusetts Healthy Living Center of Excellence's logic model for implementing and monitoring fidelity
Massachusetts-Fidelity-and-QI-Logic-Model
This presentation from the Massachusetts Healthy Living Center of Excellence is an example of what might be presented during a leader training.
Massachusetts-Presentation-Fidelity-101
If you operate a network of organizations that provide a program, you may consider implementing a fidelity self-assessment survey similar to this one developed by the Maryland Department for the Aging.
Maryland-CQI-Self-Assessment-2013
Evaluation
The first step in evaluation is to ensure that all required participant data is collected appropriately. It may be helpful to provide leaders with a data collection checklist. Many organizations also hold regular trainings on data collection and reporting for facilitators.
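To make the checklist idea concrete, here is a minimal sketch of an automated completeness check. It assumes a hypothetical participants.csv file and hypothetical required field names; substitute whatever fields your funder or program developer actually requires.

```python
# data_check.py - a minimal sketch of an automated data collection checklist.
# The file name and required fields below are hypothetical examples only.
import csv

REQUIRED_FIELDS = ["participant_id", "workshop_id", "age", "zip_code",
                   "sessions_attended", "survey_returned"]

def find_incomplete(path="participants.csv"):
    """Return (row_number, missing_fields) for every incomplete record."""
    problems = []
    with open(path, newline="") as f:
        # start=2 so row numbers match the spreadsheet (row 1 is the header)
        for i, row in enumerate(csv.DictReader(f), start=2):
            missing = [k for k in REQUIRED_FIELDS if not (row.get(k) or "").strip()]
            if missing:
                problems.append((i, missing))
    return problems

for row_num, missing in find_incomplete():
    print(f"Row {row_num}: missing {', '.join(missing)}")
```

Running a check like this before each reporting deadline flags gaps while leaders can still follow up with participants.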
Evidence-based programs have already completed a rigorous evaluation process to earn approval from the U.S. Administration for Community Living, but many organizations still choose to track outcome and participant satisfaction data locally.
Below are examples from other organizations of the types of evaluations and reports they have completed:
Virginia Department of Health – 2012 CDSMP Evaluation Report
Living Well Alabama – Assessment of CDSME at Work Sites
Measuring participant satisfaction is a key factor in determining the success of your program. Satisfaction surveys and testimonials can also be used to encourage new participants to join and to recruit new partner organizations. An Area Agency on Aging in Maine developed participant satisfaction surveys for its CDSMP classes.
See NCOA’s Key Components of Evidence-Based Programming: Evaluation webpage for additional guidance.