Outcome Evaluation
Outcome evaluation measures your diabetes program’s impact on participants and assesses whether the program achieved its intended goals. These methods can help to answer the question, “Did our program work?” Evidence about program effectiveness is often of particular interest to funders and other stakeholders.
“You really need to establish yourself as a sound program that really benefits the community in order to start continuing on with funding and support…You have to have the stats.”
– DSMES implementer
What Should We Evaluate?
Diabetes programs may influence a wide range of outcomes, including factors like knowledge and beliefs, health behaviors, and health markers. It is important to keep in mind that no program can evaluate every possible outcome! You should work with a program evaluator to identify which outcomes should be prioritized, considering a range of factors, such as the following:
- Alignment with goals: Which outcomes does your program aim to achieve most?
- Feasibility: Which outcomes are possible to measure accurately and to analyze effectively given the time, staff, and resources you have available?
- Participant burden: Which outcome measurements will minimize time burden, discomfort, or distress for program participants?
- Prioritization by stakeholders: Which outcomes do key actors, like food bank leadership or funders, care about most?
The table below describes different types of outcomes that food banks have used to evaluate their diabetes services:
| Type of Measure | Examples | Common Method |
|---|---|---|
| Psychological and social factors | Diabetes knowledge, attitudes, and beliefs | Participant survey |
| Health behaviors | Dietary intake and other self-management behaviors | Participant survey |
| Food security | Household food security status | Participant survey |
| Clinical markers | Hemoglobin A1c, body mass index, blood pressure | Measurements taken by trained staff |
| Healthcare cost saving | Healthcare costs | |
This survey was used by one food bank to evaluate its diabetes food pantry and diabetes self-management education and support (DSMES) services and includes validated questions for many of the variables described above.
Demographic Data
In addition to the outcome measures described above, it is important to collect data about participants themselves, including factors like gender, age, race/ethnicity, and socioeconomic status. Collecting these data can help you:
- Understand whom your program is reaching
- Assess whether priority groups are represented
- Characterize your participant population for funders or other stakeholders
- Evaluate whether your program is having equitable effects for all participants
When Should We Collect Data?
Community organizations commonly use a pre-post design to evaluate changes in participant outcomes. This type of design involves:
- Taking measurements with participants before the program starts
- Repeating those same measurements after the program is complete
- Analyzing whether outcomes improved on average among program participants; if so, that suggests your program may be having a positive effect (see the sketch below)
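For organizations with access to basic analytic tools, summarizing a pre-post comparison can take only a few lines of code. The sketch below is purely illustrative: the file name (outcomes.csv) and column names (a1c_pre, a1c_post) are hypothetical, and your program evaluator may recommend a different statistical test for your data.

```python
# A sketch of a simple pre-post analysis, assuming a hypothetical CSV
# file "outcomes.csv" with one row per participant and hypothetical
# columns "a1c_pre" and "a1c_post" holding hemoglobin A1c values
# measured before and after the program.
import pandas as pd
from scipy import stats

df = pd.read_csv("outcomes.csv")

# Keep only participants measured at both time points, so the
# comparison is paired.
paired = df.dropna(subset=["a1c_pre", "a1c_post"])

# Average change from baseline to follow-up; for A1c, a negative
# mean change indicates improvement.
change = paired["a1c_post"] - paired["a1c_pre"]
print(f"Participants with both measurements: {len(paired)}")
print(f"Mean change in A1c: {change.mean():+.2f} percentage points")

# Paired t-test: is the average change larger than we would expect
# by chance alone?
t_stat, p_value = stats.ttest_rel(paired["a1c_post"], paired["a1c_pre"])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```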
Organizations may consider other evaluation designs, depending on the time, resources, and expertise available. You can work with your program evaluator to determine which design works best for your organization.
Collecting Quality Data
Your evaluation results are only as good as the data you collect. It is important to take steps to increase the likelihood that your methods are telling you what you want to know.
- Using survey instruments developed and rigorously tested by experts can increase your confidence in the quality of the data you collect.
- It is important to have clear, consistent procedures for collecting data (see the sketch after this list for one way to check data quality)
- An evaluation expert can help you to identify the best measures to use and to develop protocols for effectively administering your measurements.
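If your data are entered electronically, a simple automated check can catch entry errors before they affect your results. The sketch below is a hypothetical example, not a standard tool: the file name, column names, and the plausible range for hemoglobin A1c values are all assumptions to adapt with your evaluator.

```python
# A hypothetical data-quality check, run before any analysis. File
# and column names ("baseline_survey.csv", "participant_id", "a1c")
# are assumptions; adapt the rules to your own measures.
import pandas as pd

def check_survey_data(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in the records."""
    problems = []
    # Every record needs an ID so pre and post surveys can be linked.
    missing_ids = df["participant_id"].isna().sum()
    if missing_ids:
        problems.append(f"{missing_ids} record(s) missing a participant ID")
    # Duplicate IDs often indicate double data entry.
    dupes = df["participant_id"].duplicated().sum()
    if dupes:
        problems.append(f"{dupes} duplicate participant ID(s)")
    # Implausible A1c values usually signal typos during data entry.
    out_of_range = (~df["a1c"].dropna().between(3.0, 20.0)).sum()
    if out_of_range:
        problems.append(f"{out_of_range} A1c value(s) outside 3-20%")
    return problems

issues = check_survey_data(pd.read_csv("baseline_survey.csv"))
print("\n".join(issues) if issues else "No data-quality problems found.")
```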
Using and Disseminating Evaluation Results
Once your evaluation data have been collected and analyzed, it is critical to consider the implications of the results and how they should be disseminated. Your evaluation may highlight areas where your program is working well or identify areas that can be improved, and both types of findings are valuable. For example, evidence of success can be used to justify program maintenance or to attract program funding. Findings about areas of improvement may help you identify programmatic changes that will enhance program implementation and effectiveness in the future.
You may wish to consider sharing evaluation results with a range of stakeholders, including the following:
- Current or potential funders
- Food bank leadership
- Program staff
- Partner organizations
- Past or future program participants
Equity and Cultural Competence in Evaluation
Issues of equity and cultural competence are critical to consider as you develop and implement your evaluation plans. Some example questions to consider include the following:
- Are questions available in the preferred language of your program participants?
- Are the types of instruments you use culturally appropriate for your priority audience? For example, if you measure dietary outcomes, do you use measures that include culturally relevant foods?
- What are the literacy and numeracy levels among program participants, and are your evaluation methods understandable?
- Are questions presented in ways that accommodate the full range of participant identities? For example, if you are asking questions about participant gender, do you ask those questions in ways that accommodate identities aside from male and female, such as non-binary?
- Are your evaluation plans designed in ways that will help you understand whether your program has equitable effects for priority groups? (One approach is sketched below.)
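One concrete way to examine that last question is to disaggregate your pre-post results by the demographic characteristics you collected at baseline. The brief sketch below uses hypothetical column names (race_ethnicity, fs_score_pre, fs_score_post, where "fs" stands for a food security score); large differences in average change between groups may signal inequitable effects worth investigating.

```python
# A sketch of disaggregating pre-post change by demographic group.
# Column names are hypothetical; "fs_score" stands in for a food
# security score measured before and after the program.
import pandas as pd

df = pd.read_csv("outcomes.csv")
df["fs_change"] = df["fs_score_post"] - df["fs_score_pre"]

# Average change and sample size per group. Watch for groups that
# improve much less than others, and for groups too small to
# interpret reliably.
summary = df.groupby("race_ethnicity")["fs_change"].agg(["count", "mean"])
print(summary.round(2))
```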
A food bank decided to implement a program providing DSMES services in a community where individuals had very low literacy. To meet those individuals' needs, the food bank developed and administered a short pre-post survey to evaluate changes in food security and diabetes knowledge. The surveys were administered verbally to avoid reading challenges, and the knowledge scale in the survey was specifically designed for people with low literacy. The baseline survey also included demographic questions to assess characteristics like age, gender, race/ethnicity, and socioeconomic status. In addition, the program evaluated pre-post changes in clinical measures, including hemoglobin A1c, body mass index, and blood pressure, which were collected by trained staff. Feedback forms were also developed with minimal words and smiley-face rating scales that participants could use to report their satisfaction with different aspects of the program. These approaches were responsive to participants' needs and generated useful data about the participants and the changes they demonstrated during the program.