Step 5: Evaluate impact and improve the quality of activities

Incorporating evaluation to document and improve climate and health adaptation efforts

The last step of the BRACE process is evaluation and improvement. Though it is technically the last step of BRACE, it is a crucial consideration throughout all of the steps. Evaluation* can provide needed insight along the way about the viability, acceptability, functioning, and effectiveness of a program.

Evaluative thinking is an ongoing, iterative process that fosters an inquisitive and open-minded approach to program implementation. This is especially important for upstream-focused programs trying to change fundamental drivers of health outcomes or achieve health equity. Evaluation can be a critical tool for generating buy-in, cultivating a culture of transparency, building trust with communities, and demonstrating that your actions are working.

USE A STANDARDS FRAMEWORK

An important first step is to use an existing framework for evaluation, which can help break down a complex and potentially overwhelming process into simpler steps.

One public health-oriented tool is the CDC Framework for Program Evaluation.

This is a flexible, non-prescriptive 6-step guide, designed to summarize and organize essential elements of program evaluation for public health practitioners.

Using this framework can be a driving force for planning effective climate adaptation strategies, improving ongoing work, and demonstrating the results of your investments.

ENGAGE STAKEHOLDERS IN EVALUATION PLANNING

Similar to each of the other BRACE steps, stakeholder engagement is the first key element of planning a strong evaluation, from the simplest to the most complex designs. This fundamental aspect of good evaluation practice is in alignment with core JEDI principles. The stakeholders involved in Steps 1-4 should continue to have the opportunity to provide input into decisions about evaluation questions, designs, methods, data collection instruments, analysis, interpretation of results, and dissemination plans.

CDC’s Practical Strategies for Culturally Competent Evaluation highlights some key activities.1

  • An essential first step is for those planning the evaluation to reflect on their own cultural backgrounds and life experiences, all of which shape thoughts and behaviors and thus influence how the evaluation is conducted. Reflecting on personal history challenges us to uncover our biases, prejudices, and assumptions.
  • Engage stakeholders who reflect the diversity of the community or those impacted. Consider diversity to be more than demographic characteristics; differences in beliefs, ideologies, knowledge, institutions, or religion also influence what people do, how they think, and how they interact with others.
  • Lay clear ground rules for participation to establish equality. Use the evaluation skill of facilitation to “check in” regularly with participants to elicit their perspectives. Power imbalances are often entrenched in our behaviors and can inhibit equal participation.
INCORPORATE EQUITY INTO EVALUATION QUESTIONS

Evaluation questions guide the overall evaluation process, so it is critical to explicitly incorporate JEDI into these questions. This applies to both process and outcome questions.

Typically, process questions focus on planning, engagement, and implementation of a strategy. Identify areas where equity-oriented process questions can assess program implementation. Incorporate questions related to inclusion such as, “How did the execution of activities take into consideration the community and cultural context of the priority population?” Questions regarding reach are also applicable here. For example, “To what extent did the program reach the intended audience?” Importantly, the answers to these questions may later help explain successful (or unsuccessful) outcomes, especially as they relate to health inequities. Outcome questions focus on the effect of an initiative, including across different populations.

Similarly, the logic model can be used to identify equity-oriented outcome questions, such as: “To what extent has this program been effective for those most impacted by climate change? Did Black residents benefit more, less, or the same as White residents? Did certain characteristics, such as gender or income, moderate the effect of the adaptation action?”
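
To make this concrete, the sketch below shows one way such a moderation question could be examined quantitatively, using an interaction model on hypothetical evaluation data. The file, column, and variable names are placeholders for illustration only and are not part of the Playbook or the BRACE process.

```python
# Hypothetical sketch: testing whether an adaptation's effect differs across
# population groups, using a simple interaction model. Data, file name, and
# column names are illustrative placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per census tract, with
#   outcome      - e.g., heat-related ED visits per 1,000 residents
#   intervention - 1 if covered by the adaptation action, 0 otherwise
#   race, income_bracket - population characteristics of interest
df = pd.read_csv("evaluation_data.csv")

# The intervention * race interaction asks whether the program's effect
# differed for Black residents compared with White residents.
model = smf.ols(
    "outcome ~ intervention * C(race) + C(income_bracket)", data=df
).fit()
print(model.summary())

# A simpler stratified view of the same question: mean outcomes by group and arm.
print(df.groupby(["race", "intervention"])["outcome"].mean().unstack())
```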

When developing evaluation questions, ensure they reflect the stakeholders’ values. The choice of questions should reflect the knowledge gained through engaging stakeholders and describing the program, the first two steps of the CDC Framework for Program Evaluation. Discussing the community’s information needs and plans can build buy-in and support for the evaluation.

TRACK POPULATIONS EXPERIENCING INEQUITIES

And include health equity indicators in performance monitoring

To answer equity-oriented evaluation questions, you will need to identify appropriate data sets, methods, sampling strategies, and variables. You may need to create indicators and ensure access to multiple measurements over time to assess progress. These can focus both on the health outcomes of the population intended to benefit and on the implementation of the adaptation action itself. Consider indicators that reflect vulnerability as well as resilience. In general, this kind of tracking allows you to monitor progress, identify necessary mid-course corrections, and answer emergent questions.
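
As a hypothetical illustration of what such tracking might look like in practice, the sketch below summarizes indicator data by subgroup and reporting quarter; the file and column names are assumptions made for the example only.

```python
# Minimal sketch of stratified performance monitoring. Assumes a CSV of
# indicator measurements with hypothetical columns: date, indicator,
# subgroup (e.g., race/ethnicity or income bracket), and value.
import pandas as pd

records = pd.read_csv("equity_indicators.csv", parse_dates=["date"])

# Summarize each indicator by subgroup and reporting quarter so that gaps
# between groups, and changes after mid-course corrections, stay visible.
quarterly = (
    records.assign(quarter=records["date"].dt.to_period("Q"))
    .groupby(["indicator", "subgroup", "quarter"])["value"]
    .mean()
    .unstack("quarter")
)
print(quarterly)

# For each indicator and quarter, the spread between the best- and
# worst-performing subgroup; a widening spread over time signals growing inequity.
gap = quarterly.groupby(level="indicator").agg(lambda col: col.max() - col.min())
print(gap)
```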

Do not wait until late in the process to confirm the availability of these key evaluation elements. For example, create analysis plans and data collection schedules early so that the needed data will be available at the right time. Carefully plan sampling methods to ensure sufficient diversity, or minimum sample sizes, to conduct the stratified analyses required to answer equity-oriented questions.
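
For example, a standard power calculation can give a rough, up-front sense of the minimum sample needed in each subgroup before data collection begins. The figures below (effect size, power, and expected response counts) are illustrative placeholders rather than recommendations.

```python
# Hypothetical sketch: checking whether planned sampling can support
# stratified, equity-oriented comparisons. Effect size, error rates, and
# expected response counts are illustrative placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Minimum respondents needed per arm, within each subgroup, to detect a
# moderate standardized difference (Cohen's d = 0.5) with 80% power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"About {n_per_group:.0f} respondents per arm, per subgroup")

# Compare against expected response counts to spot subgroups that would
# need oversampling (counts below are placeholders).
expected_responses = {"renters": 40, "homeowners": 260}
for group, n in expected_responses.items():
    shortfall = max(0, round(n_per_group) - n)
    print(f"{group}: expected {n}, additional respondents needed: {shortfall}")
```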

USE MIXED METHODS TO UNDERSTAND AN INTERVENTION'S IMPACT

Similar to Steps 2 and 3, even simple evaluation plans can be improved by using different types of methods, especially a combination of quantitative and qualitative approaches. This is especially true for complex adaptation plans, where contextual issues such as politics or organizational contingencies may be key factors in success. Combining approaches, such as GIS analysis, focus groups, assessments of environmental improvements, or key informant interviews, can paint a more comprehensive picture of an intervention’s effect, provide more nuance, and broaden the range of credible evidence. Also consider the time horizon needed to answer your evaluation questions.

Additionally, consider whether proposed data collection methods are culturally appropriate and will reflect the real, lived experience of populations experiencing marginalization. For example, if a community has low participation in government surveys or the census because of a lack of trust, those data may not be seen as credible or relevant. In other settings that value oral traditions, stories may be far more compelling than statistics. When selecting data collection instruments, ensure the language of the instrument is appropriate, or use an existing instrument that has already been vetted by intended participants. Include the priority community's perspective when establishing an approach to evaluation by seeking guidance from community leaders on the adaptation’s responsiveness to the needs of the population.
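
As one hypothetical illustration of combining the quantitative and qualitative sources mentioned above, the sketch below joins a neighborhood-level indicator from a GIS analysis with themes coded from focus-group transcripts; all file, column, and theme names are assumptions for the example.

```python
# Hypothetical sketch of mixed-methods triangulation: a quantitative,
# neighborhood-level indicator from a GIS analysis joined with theme counts
# coded from focus-group transcripts. File and column names are placeholders.
import pandas as pd

canopy = pd.read_csv("canopy_change_by_neighborhood.csv")  # neighborhood, canopy_gain_pct
themes = pd.read_csv("focus_group_theme_counts.csv")       # neighborhood, theme, mentions

# Pivot qualitative theme counts so each neighborhood has one row.
theme_matrix = themes.pivot_table(
    index="neighborhood", columns="theme", values="mentions", fill_value=0
)

# Side-by-side view: neighborhoods with large canopy gains but frequent
# heat-related concerns in focus groups suggest the quantitative result
# alone would overstate success.
combined = canopy.set_index("neighborhood").join(theme_matrix)
print(combined.sort_values("canopy_gain_pct", ascending=False))
```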

IDENTIFY POTENTIAL NEGATIVE UNINTENDED CONSEQUENCES

Climate and health adaptation is inherently complex because of its many contingencies and uncertainties. Maladaptation occurs when an adaptation results in increased, rather than decreased, vulnerability. This can happen in several ways. First, an adaptation may reinforce existing vulnerabilities or inequalities. For example, an adaptation that focuses only on increasing the resiliency of individuals who own land or a home, rather than those who rent or who are unstably housed, may increase inequity. A second type of maladaptation occurs when adaptation redistributes vulnerability. For example, by spatially shifting vulnerability along coastlines, flood embankments protecting one community can increase the vulnerability of downstream communities or local ecology. Alternatively, if a city invests in green spaces that drive up the desirability of a low-income neighborhood, existing residents may be forced to relocate due to higher rents. A third pathway of maladaptation is when interventions introduce new risks and sources of vulnerability. Well-known examples include agricultural programs that promote increased use of fertilizer and pesticide, creating risks to human health and ecological systems, or increased reliance on irrigation without considering future water availability.2

Therefore, it is critical to think ahead about potential unintended consequences of strategies, especially novel approaches that have not been tried before. Consider a pilot program to test new interventions and support learning when possible. In terms of evaluation, establish mechanisms to identify negative consequences before implementation starts or early in the process. Work with a diverse set of partners and community members to identify potential barriers and negative consequences, and build in support and mechanisms to address them.

WIDELY DISSEMINATE THE EVALUATION RESULTS

It is essential to commit to disseminating evaluation findings and sharing lessons learned with the stakeholders who enabled and contributed to the adaptation and the evaluation. To help ensure this, create a dissemination plan at the start of your evaluation. Disseminating results contributes to the evidence base and is especially important if results identified disparate effects that either increased or decreased health inequities. Share findings in accessible formats and use clear language that can be understood by stakeholders, partners, and community members. Sharing results can be a fundamentally democratizing process; by building capacity and increasing awareness among community members and stakeholders, you enable informed engagement to prioritize or decide on next steps.1

Figure: Cultural competence flow chart (courtesy Centers for Disease Control and Prevention. Practical Strategies for Culturally Competent Evaluation. Atlanta, GA: US Dept of Health and Human Services; 2014)

CDC’s Practical Strategies for Culturally Competent Evaluation highlights some key steps:

  • Tailor the dissemination of evaluation findings to stakeholder needs. This helps prevent preparing documents that gather dust on the shelf. How results are disseminated can determine how they are received and put into action. Some specific tips include:
    1. Work with stakeholders to find out what they need to be able to act on the information presented.
    2. Make sure the messenger fits the message and the audience.
    3. Pilot dissemination products.
  • Ensure that uses and action steps are culturally appropriate and draw on community strengths.
  • Identify lessons learned.
  • Encourage the use of evaluation information. This can also be achieved by helping stakeholders create concrete action plans for use of the results.
SELF-REFLECT

On how health equity has been integrated into your adaptation and evaluation plans

Even though you may not be an evaluation expert, you can incorporate evaluative thinking into JEDI adaptation planning. Consider periodically asking yourself and your colleagues the following questions to help keep you on track. Think about whether you have evidence to substantiate your answers.

  • What are our current health equity strategies, activities, and goals? Are these reflected in our logic model?
  • How are we currently assessing the effect(s) of our efforts to address health equity?
  • Do we have a reasonable estimate of the health impacts of climate change and the most at-risk populations in our jurisdiction?
  • Has the process allowed us to prioritize health impacts of greatest concern in priority communities?
  • Has the process allowed us to prioritize interventions based on available evidence and community partner expertise?
  • To what extent are indicators or metrics in place to evaluate the implementation of selected interventions?
  • How can the process be improved in the next iteration?
  • What is hindering our progress? What barriers and unintended consequences need to be tracked and addressed?
  • What are our top learning priorities in the next iteration of BRACE with a JEDI lens?
KEY RESOURCES

Consider these resources from different areas that have implemented community adaptation programs to mitigate the impacts of climate change, with a specific focus on vulnerable populations.

CDC Climate and Health Program: Video series on climate and health evaluation
The Climate and Health Program uses CDC’s Framework for Program Evaluation to monitor and evaluate the effectiveness of climate and health work. To better understand and improve its impact across climate and health programs, interventions, and projects, the program has translated the first three steps of the framework into a video series.
CDC Framework for Program Evaluation
This framework guides public health professionals in their use of program evaluation. It is a practical, nonprescriptive tool, designed to summarize and organize essential elements of program evaluation.
CDC Practical Strategies for Culturally Competent Evaluation
This resource provides step-by-step guidance on how to incorporate the principles of cultural competency into evaluation.
Oregon Health Authority Climate Change Resilience Planning Toolkit: Evaluate and Improve
The toolkit provides resources for evaluating both the planning process and impact.
CalBRACE Step Five: Evaluating Impact and Improving Quality of Activities
This resource highlights how public health departments are responding to climate change as an emerging public health threat and illustrates how case stories can be used for evaluation purposes.
Addendum to the Arizona Climate and Health Adaptation Plan 2018
This addendum to the 2017 State of Arizona Climate and Health Adaptation Plan describes the progress on specific preparedness activities that protect the health and well-being of Arizonans from current and future climate sensitive hazards.
WORKS CITED

1. Practical Strategies for Culturally Competent Evaluation. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention; 2014. https://www.cdc.gov/asthma/program_eval/cultural_competence_guide.pdf

2. Eriksen S, et al. Adaptation interventions and their effect on vulnerability in developing countries: Help, hindrance or irrelevance? World Development. 2021;141:105383. https://doi.org/10.1016/j.worlddev.2020.105383

*TYPES OF EVALUATION

There are many types of evaluation. Formative evaluations can include a needs assessment, which may be conducted early in the planning process to gain insight about the community’s perspectives, needs, and assets. A process evaluation, another type of formative evaluation, is conducted throughout the implementation of a program to ensure that the program is being executed as expected and to determine whether there are serious gaps or areas for improvement. Summative evaluation is a second category. Here, an outcome evaluation can be implemented to assess the effectiveness of an intervention in achieving the intended short- to medium-term outcomes or objectives. An impact evaluation also focuses on outcomes achieved by a program or a suite of programs, but is typically concerned with longer-term results at the community level that occurred as a result of the program. Which types of evaluation are conducted depends on the needs of the program stakeholders and the potential use of the findings.