This video provides an overview of EvaluATE’s Checklist for Program Evaluation Report Content, and three reasons why this checklist is useful to evaluators and clients.
Project evaluators are aware that evaluation aims to support learning and improvement. Through a series of planned interactions, event observations, and document reviews, the evaluator is charged with reporting to the project leadership team and ultimately the project’s funding agency, informing audiences of the project’s merit. This is not to suggest that reporting should only aim to identify positive impacts and outcomes of the project. Equally, there is substantive value in informing audiences of unintended and unattained project outcomes.
Evaluation reporting should discuss aspects of the project's outcomes, whether anticipated, questionable, or unintended. When examining project outcomes, the evaluator analyzes the information obtained and guides project leadership through reflective thinking exercises to define the significance of the project and summarize why its outcomes matter.
Let's be clear: outcomes are not to be regarded as something negative. In fact, with the projects that I have evaluated over the years, outcomes have frequently served as a platform for reflection, informing future curriculum decisions and directions within the funded institution. For example, the outcomes of one STEM project that focused on renewable energy technicians provided the institution with information that prompted the development of subsequent proposals and projects targeting engineering pathways.
Discussion and reporting of project outcomes also encapsulates lessons learned and affords the opportunity for the evaluator to ask questions such as:
- Did the project increase the presence of the target group in identified STEM programs?
- What initiatives will be sustained after funding ends to maintain an increased presence of the target group in STEM programs?
- Did project activities contribute to the retention/completion rates of the target group in identified STEM programs?
- Which activities seemed to have the greatest/least impact on retention/completion rates?
- On reflection, are there activities that could have more significantly contributed to retention/completion rates that were not implemented as part of the project?
- To what extent did the project supply regional industries with a more diverse STEM workforce?
- What effect will this have on regional industries after project funding ends?
- Were partners identified in the proposal realistic contributors to the funded project? Did they help ensure a successful implementation and the attainment of anticipated outcomes?
- What was learned about the characteristics of “good” and “bad” partners?
- What are characteristics to look for and avoid to maximize productivity with future work?
Factors influencing outcomes include, but are not limited to:
- Institutional changes, e.g., leadership;
- Partner constraints or changes; and
- Project/budgetary limitations.
It is not unusual for a proposed project to be somewhat grandiose in identifying intended outcomes. Yet, when project implementation gets underway, intended activities may be compromised by external challenges. For example, when equipment is needed to support various aspects of a project, procurement and production channels may contribute to delays in equipment acquisition, adversely affecting project leadership's ability to launch planned components of the project.
As a tip, it is worthwhile for those seeking funding to pose the outcome questions at the front-end of the project – when the proposal is being developed. Doing this will assist them in conceptualizing the intellectual merit and impact of the proposed project.
Resources and Links:
Developing an Effective Evaluation Report: Setting the Course for Effective Program Evaluation. Atlanta, Georgia: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, Division of Nutrition, Physical Activity and Obesity, 2013.
In this blog, I provide advice for Advanced Technological Education (ATE) principal investigators (PIs) on how to include information from their project evaluations in their annual reports to the National Science Foundation (NSF).
Annual reports for NSF grants are due within the 90 days before the award's anniversary date. That means if your project's initial award date was September 1, your annual reports will be due between June and August each year until the final year of the grant (at which point a project outcomes report is due within 90 days after the award ends).
When you prepare your first annual report for NSF at Research.gov, you may be surprised to see there is no specific request for results from your project’s evaluation or a prompt to upload your evaluation report. That’s because Research.gov is the online reporting system used by all NSF grantees, whether they are researching fish populations in Wisconsin lakes or developing technician education programs. So what do you do with the evaluation report your external evaluator prepared or all the great information in it?
1. Report evidence from your evaluation in the relevant sections of your annual report.
The Research.gov system for annual reports includes seven sections: Cover, Accomplishments, Products, Participants, Impact, Changes/Problems, and Special Requirements. Findings and conclusions from your evaluation should be reported in the Accomplishments and Impact sections, as described in the table below. Sometimes evaluation findings will point to a need for changes in project implementation or even its goals. In this case, pertinent evidence should be reported in the Changes/Problems section of the annual report. Highlight the most important evaluation findings and conclusions in these report sections. Refer to the full evaluation report for additional details (see Point 2 below).
| NSF annual report section | What to report from your evaluation |
| --- | --- |
Do you have a logic model that delineates your project’s activities, outputs, and outcomes? Is your evaluation report organized around the elements in your logic model? If so, a straightforward rule of thumb is to follow that logic model structure and report evidence related to your project activities and outputs in the Accomplishments section and evidence related to your project outcomes in the Impacts section of your NSF annual report.
2. Upload your evaluation report.
Include your project’s most recent evaluation report as a supporting file in the Accomplishments or Impact section of Research.gov. If the report is longer than about 25 pages, make sure it includes a 1-3 page executive summary that highlights key results. Your NSF program officer is very interested in your evaluation results, but probably doesn’t have time to carefully read lengthy reports from all the projects he or she oversees.
Evaluation reports have a reputation for being long, overly complicated, and impractical. The recent buzz about fresh starts and tidying up for the new year got me thinking about the similarities between these infamous evaluation reports and the disastrously cluttered homes featured on reality makeover shows. The towering piles of stuff overflowing from these homes remind me of the technical language and details that clutter up so many evaluation reports. Informational clutter, like physical clutter, can turn reports, just like homes, into difficult-to-navigate obstacle courses that render the contents virtually unusable. If you are looking for ideas on how to organize and declutter your reports, check out the Checklist for Straightforward Evaluation Reports that Lori Wingate and I developed. The checklist provides guidance on how to produce comprehensive evaluation reports that are concise, easy to understand, and easy to navigate. Main features of the checklist include:
- Quick reference sheet: A one-page summary of content to include in an evaluation report and tips for presenting content in a straightforward manner.
- Detailed checklist: A list and description of possible content to include in each report section.
- Straightforward reporting tips: General and section-specific suggestions on how to present content in a straightforward manner.
- Recommended resources: List of resources that expand on information presented in the checklist.
Evaluators, evaluation clients, or other stakeholders can use the checklist to set reporting expectations, such as what content to include and how to present information.
Straightforward Reporting Tips
Here are some tips, inspired by the checklist, on how to tidy up your reports:
- Use short sentences: Each sentence should communicate one idea. Sentences should contain no more than 25 words. Downsize your words to only the essentials, just like you might downsize your closet.
- Use headings: Use concise and descriptive headings and subheadings to clearly label and distinguish report sections. Use report headings, like labels on boxes, to make it easier to locate items in the future.
- Organize results by evaluation questions: Organize the evaluation results section by evaluation question with separate subheadings for findings and conclusions under each evaluation question. Just like most people don’t put decorations for various holidays in one box, don’t put findings for various evaluation questions in one findings section.
- Present takeaway messages: Label each figure with a numbered title and separate takeaway message. Similarly, use callouts to grab readers' attention and highlight takeaway messages. For example, use a callout in the results section to summarize the conclusion in one sentence under the evaluation question.
- Minimize report body length: Reduce page length as much as possible without compromising quality. One way to do this is to place details that enhance understanding—but are not critical for basic understanding—in the appendices. Only information that is critical for readers’ understanding of the evaluation process and results should be included in the report body. Think of the appendices like a storage area such as a basement, attic, or shed where you keep items you need but don’t use all the time.
If you'd like to provide feedback, you can write your comments in an email or return a review form to firstname.lastname@example.org. We are especially interested in getting feedback from individuals who have used the checklist to develop evaluation reports.
Last year I went to the D23 Expo in Anaheim, California. This was a conference for Disney fans everywhere. I got to attend panels where I learned past Disney secrets and upcoming Disney plans. I went purely for myself, since I love Disney everything, and I never dreamed I would learn something that could be applicable to my evaluation practice.
In a session with John Lasseter, Andrew Stanton, Pete Docter, and others from Pixar, I learned about a technique created by Ralph Eggleston (who was there too) called color scripting. Color scripting is a type of storyboarding, but Ralph would change the main colors of each panel to reflect the emotion the animated film was supposed to portray at that time. It helped the Pixar team understand what was going on in the film emotionally at a quick glance, and it also made it easier to create a musical score to enhance those emotions.
Then, a few weeks later, I was sitting in a large event for a client, observing from the back of the room. I started taking notes on the engagement and energy of the audience based on who was presenting. I created some metrics on the spot, including the number of people on their mobile devices, the number of people leaving the event, laughter, murmuring, applause, etc. I thought I would create a simple chart with a timeline of the event, highlighting who was presenting at different times and indicating whether engagement was high/medium/low and whether energy was high/medium/low. When analyzing the data, I quickly realized that engagement and energy were completely related: if engagement was high, energy soon followed. So, instead of charting two dimensions, I really only needed to chart one: engagement & energy combined (see definitions of engagement and energy in the graphic below). That's when it hit me – color scripting! Okay, I'm no artist like Ralph Eggleston, so I created a simple color scheme to use.
In sharing this with the clients who put on the event, they could clearly see how the audience reacted to the various elements of the event. It was helpful in determining how to improve the event in the future. This was a quick and easy visual, made in Word, to illustrate the overall reactions of the audience.
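As a rough illustration, this kind of color-scripted timeline can be sketched in a few lines of Python. The segments, ratings, and hex color scheme below are all invented for illustration; the original visual was made in Word.

```python
# Hypothetical sketch of "color scripting" an event timeline: each observed
# segment gets a color reflecting its combined engagement & energy rating,
# so clients can see at a glance how the audience reacted.

# Combined engagement & energy rating -> color (an invented scheme)
COLOR_SCHEME = {"high": "#2E7D32", "medium": "#F9A825", "low": "#C62828"}

def color_script(segments):
    """Attach a color swatch to each timed segment of the event.

    segments: list of (start_time, presenter, level) tuples, where
    level is 'high', 'medium', or 'low'.
    Returns a list of dicts ready to drop into a table or chart.
    """
    return [
        {"time": start, "presenter": who, "level": level,
         "color": COLOR_SCHEME[level]}
        for start, who, level in segments
    ]

# Observations jotted down from the back of the room (made-up data)
observed = [
    ("9:00", "Welcome remarks", "medium"),
    ("9:20", "Keynote speaker", "high"),
    ("10:30", "Panel discussion", "low"),
]

for row in color_script(observed):
    print(f'{row["time"]}  {row["presenter"]:<20} {row["color"]}')
```

The colored rows could then be pasted into a table in Word, with each row shaded by its swatch, to reproduce the quick at-a-glance effect described above.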
I have since also applied this to a STEM project, color scripting how the teachers in a two-week professional development workshop felt at the end of each day, based on one word they shared upon exiting the workshop each day. Mapping participant feelings across the different cohorts and comparing what and how things were taught each day led to thoughtful conversations with the trainers about how they want the participants to feel and what they need to change to match reality with intention.
You never know where you’re going to learn a technique or tool that could be useful in your evaluation practice and useful to the client. Be open to learning everywhere you go.
In a retrospective pretest,1 trainees rate themselves before and after a training in a single data collection event. It is useful for assessing individual-level changes in knowledge and attitudes as one part of an overall evaluation of an intervention. This method fits well with the Kirkpatrick Model for training evaluation, which calls for gathering data about participants’ reaction to the training, their learning, changes in their behavior, and training outcomes. Retrospective pretest data are best suited for evaluating changes in learning and attitudes (Level 2 in the Kirkpatrick Model).
The main benefit of using this method is that it reduces response-shift bias, which occurs when respondents change their frame of reference for answering questions. It is also convenient, more accurate than self-reported data gathered using traditional pre-post self-assessment methods, adaptable to a wide range of contexts, and generally more acceptable to adult learners than traditional testing. Theodore Lamb provides a succinct overview of the strengths and weaknesses of this method in a Harvard Family Research Project newsletter article—see bit.ly/hfrp-retro.
The University of Wisconsin Extension’s Evaluation Tip Sheet 27: Using the Retrospective Post-then-Pre Design provides practical guidelines about how to use this method: bit.ly/uwe-tips.
The focus of retrospective pretest questions should be on the knowledge, skills, attitudes, or behaviors that are the focus of the intervention being evaluated. General guidelines for formatting questions: 1) Use between 4 and 7 response categories in a Likert-type or partially anchored rating scale; 2) Use formatting to distinguish pre and post items; 3) Provide clear instructions to respondents. If you are using an online survey platform, check your question type options before committing to a particular format. To see examples and learn more about question formatting, see University of Wisconsin Extension’s Evaluation Tip Sheet 28: “Designing a Retrospective Post-then-Pre Question” at bit.ly/uwe-tips.
For several examples of Likert-type rating scales, see bit.ly/likert-scales—be careful to match question prompts to rating scales.
Analysis and Visualization
Retrospective pretest data are usually ordinal, meaning the ratings are hierarchical, but the distances between the points on the scale (e.g., between “somewhat skilled” and “very skilled”) are not necessarily equal. Begin your analysis by creating and examining the frequency distributions for both the pre and post ratings (i.e., the number and percentage of respondents who answer in each category). It is also helpful to calculate change scores—the difference between each respondent’s before and after ratings—and look at those frequency distributions (i.e., the number and percentage of respondents who reported no change, reported a change of 1 level, 2 levels, etc.).
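The frequency distributions and change scores described above can be computed with a short script. Here is a minimal Python sketch using made-up ratings on a hypothetical 4-point skill scale (1 = not at all skilled … 4 = very skilled), where each respondent gives a "before" and an "after" rating in the same survey.

```python
from collections import Counter

# Made-up retrospective pretest data: each respondent's pre and post
# self-ratings, collected in a single survey after the training.
responses = [
    {"pre": 1, "post": 3},
    {"pre": 2, "post": 3},
    {"pre": 2, "post": 2},
    {"pre": 3, "post": 4},
    {"pre": 1, "post": 2},
]

n = len(responses)

# Frequency distributions: counts and percentages per rating category
pre_freq = Counter(r["pre"] for r in responses)
post_freq = Counter(r["post"] for r in responses)
for level in range(1, 5):
    print(f"Level {level}: pre {pre_freq[level]} ({pre_freq[level]/n:.0%}), "
          f"post {post_freq[level]} ({post_freq[level]/n:.0%})")

# Change scores: post minus pre, then their frequency distribution
changes = Counter(r["post"] - r["pre"] for r in responses)
for shift in sorted(changes):
    print(f"Changed {shift:+d} level(s): {changes[shift]} ({changes[shift]/n:.0%})")
```

Note that the analysis stays within counts and percentages, consistent with the ordinal nature of the ratings; no means of the raw scale values are computed.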
For more on how to analyze retrospective pretest data and ordinal data in general, see the University of Wisconsin Extension’s Evaluation Tip Sheet 30: “Analysis of Retrospective Post-then-Pre Data” and Tip Sheet 15: “Don’t Average Words” bit.ly/uwe-tips.
For practical guidance on creating attractive, effective bar, column, and dot plot charts, as well as other types of data visualizations, visit stephanieevergreen.com.
To use retrospective pretest data to improve an intervention, examine the data to determine whether some groups (based on characteristics such as job role, other demographics, or incoming skill level) gained more or less than others, and compare the results to the intervention's relative strengths and weaknesses in achieving its objectives. Adjust future offerings based on lessons learned and monitor to see if the changes lead to improved outcomes.
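As a sketch of this kind of subgroup comparison, the following Python snippet computes mean gains by group. The respondent data and the "job role" grouping are invented for illustration.

```python
from collections import defaultdict

# Hypothetical respondents with a grouping characteristic and their
# pre/post self-ratings from a retrospective pretest.
responses = [
    {"role": "technician", "pre": 1, "post": 3},
    {"role": "technician", "pre": 2, "post": 4},
    {"role": "instructor", "pre": 3, "post": 3},
    {"role": "instructor", "pre": 2, "post": 3},
]

# Collect each group's change scores (post minus pre)
gains = defaultdict(list)
for r in responses:
    gains[r["role"]].append(r["post"] - r["pre"])

# Compare average gains across groups to see who benefited most
for role, scores in gains.items():
    print(f"{role}: mean gain {sum(scores)/len(scores):.1f} (n={len(scores)})")
```

In this made-up data, technicians gain more than instructors, which might prompt a closer look at whether the training pitched content at the right level for each audience.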
To learn more, see the slides and recording of EvaluATE’s December 2015 webinar on this topic: https://www.evalu-ate.org/webinars/2015-dec/
For a summary of research on this method, see Klatt and Powell's (2005) white paper, "Synthesis of Literature Relative to the Retrospective Pretest Design": bit.ly/retro-syn.
1 This method has other names, such as post-then-pre and retrospective pretest-posttest.
It’s all too easy for our evaluation reports to become a lifeless pile of numbers that gather dust on a shelf. As evaluators and PIs, we want to tell our stories and we want those stories to be heard. Data visualizations (like graphs and infographics) can be powerful ways to share evaluation findings, quickly communicate key themes, and ultimately have more impact.
Communicating evaluation findings visually can also help your stakeholders become better data analysts themselves. I've found that when stakeholders see a graph showing survey results, they are much more likely to spend time examining the findings, asking questions, and thinking about what the results might mean for the project than if the same information is presented in a traditional table of numbers.
Here are a few tips to get you started with data visualization:
- Start with the data story. Pick one key finding that you want to communicate to a specific group of stakeholders. What is the key message you want those stakeholders to walk away with?
- Put the mouse down! When you’re ready to develop a data viz, start by sketching various ways of showing the story you want to tell on a piece of paper.
- Use Stephanie Evergreen’s and Ann Emery’s checklist to help you plan and critique your data visualization: http://stephanieevergreen.com/dataviz-checklist/.
- Once you’ve drafted your data viz, run it by one or two colleagues to get their feedback.
Some PIs, funders, and other stakeholders still want to see tables with all the numbers. We typically include tables with the complete survey results in an appendix.
Some of my favorite data viz resources:
- Stephanie Evergreen has written a number of blogs with step-by-step instructions showing you how to create various charts in Excel: http://stephanieevergreen.com/tag/step-by-step/
- Ann Emery’s website has a section with step-by-step videos showing how to produce charts in Excel: http://annkemery.com/excel/charts/
- Cole Nussbaumer also has a great blog that includes several examples of how to redesign charts to make them more effective: http://www.storytellingwithdata.com/
Finally, don’t expect to hit a home run your first time at bat. (I certainly didn’t!) You will get better as you become more familiar with the software you use to produce your data visualizations and as you solicit and receive feedback from your audience. Keep showing those stories!
This week I am in Atlanta at the American Evaluation Association (AEA) Summer Evaluation Institute, presenting a workshop on Translating Evaluation Findings into Actionable Recommendations. Although the art of crafting practical, evidence-based recommendations is not covered in depth in evaluation textbooks or academic courses, most evaluators (86% according to Fleischer and Christie's survey of AEA members) believe that making recommendations is part of an evaluator's job. By reading as much as I can on this topic and reflecting on my own practice, I have assembled 14 tips for how to develop, present, and follow up on evaluation recommendations:
- Determine the nature of recommendations needed or expected. At the design stage, ask stakeholders: What do you hope to learn from the evaluation? What decisions will be influenced by the results? Should the evaluation include recommendations?
- Generate possible recommendations throughout the evaluation. Keep a log of ideas as you collect data and observe the program. I like Roberts-Gray, Buller, and Sparkman’s (1987) evaluation question-driven framework.
- Base recommendations on evaluation findings and other credible sources. Findings are important, but they’re often not sufficient for formulating recommendations. Look to other credible sources, such as program goals, stakeholders/program participants, published research, experts, and the program’s logic model.
- Engage stakeholders in developing and/or reviewing recommendations prior to their finalization. Clients should not be surprised by anything in an evaluation report, including the recommendations. If you can engage stakeholders directly in developing recommendations, they will feel more ownership. (Read Adrienne Adams's article about a great process for this.)
- Focus recommendations on actions within the control of intended users. If the evaluation client doesn’t have control over the policy governing their programs, don’t bother recommending changes at that level.
- Provide multiple options for achieving desired results. Balance consideration of the cost and difficulty of implementing recommendations with the degree of improvement expected; if possible, offer alternatives so stakeholders can select what is most feasible and important to do.
- Clearly distinguish between findings and recommendations. Evaluation findings reflect what is; recommendations are a prediction about what could be. Developing recommendations requires a separate reasoning process.
- Write recommendations in clear, action-oriented language. I often see words like consider, attend to, recognize, and acknowledge in recommendations. Those call the clients’ attention to an issue, but don’t provide guidance as to what to do.
- Specify the justification sources for each recommendation. It may not be necessary to include this information in an evaluation report, but be prepared to explain how and why you came up with the recommendations.
- Explain the costs, benefits, and challenges associated with implementing recommendations. Provide realistic forecasts of these matters so clients can make informed decisions about whether to implement the recommendations.
- Be considerate—exercise political and interpersonal sensitivity. Avoid “red flag” words like fail and lack, don’t blame or embarrass, and be respectful of cultural and organizational values.
- Organize recommendations, such as by type, focus, timing, audience, and/or priority. If many recommendations are provided, organize them to help the client digest the information and prioritize their actions.
- Meet with stakeholders to review and discuss recommendations in their final form. This is an opportunity to make sure they fully understand the recommendations as well as to lay the groundwork for action.
- Facilitate decision making and action planning around recommendations. I like the United Nations Development Programme’s “Management Response Template” as an action planning tool.
See also my handy one-pager of these tips for evaluation recommendations.