EvaluATE Blog: Reporting

Blog: What Goes Where? Reporting Evaluation Results to NSF

Posted on April 26, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this blog, I provide advice for Advanced Technological Education (ATE) principal investigators (PIs) on how to include information from their project evaluations in their annual reports to the National Science Foundation (NSF).

Annual reports for NSF grants are due within the 90 days before the award’s anniversary date. That means if your project’s initial award date was September 1, your annual reports will be due between June and August each year until the final year of the grant (at which point an outcome report is due within 90 days after the award anniversary date).

When you prepare your first annual report for NSF at Research.gov, you may be surprised to see there is no specific request for results from your project’s evaluation or a prompt to upload your evaluation report. That’s because Research.gov is the online reporting system used by all NSF grantees, whether they are researching fish populations in Wisconsin lakes or developing technician education programs. So what should you do with the evaluation report your external evaluator prepared and all the great information in it?

1. Report evidence from your evaluation in the relevant sections of your annual report.

The Research.gov system for annual reports includes seven sections: Cover, Accomplishments, Products, Participants, Impacts, Changes/Problems, and Special Requirements. Findings and conclusions from your evaluation should be reported in the Accomplishments and Impacts sections, as described in the table below. Sometimes evaluation findings will point to a need for changes in project implementation or even in the project’s goals. In that case, the pertinent evidence should be reported in the Changes/Problems section of the annual report. Highlight the most important evaluation findings and conclusions in these report sections, and refer to the full evaluation report for additional details (see Point 2 below).

NSF annual report section | What to report from your evaluation

Accomplishments
  • Number of participants in various activities
  • Data related to participant engagement and satisfaction
  • Data related to the development and dissemination of products (Note: The Products section of the annual report is simply for listing products, not reporting evaluative information about them.)

Impacts
  • Evidence of the nature and magnitude of changes brought about by project activities, such as changes in individual knowledge, skills, attitudes, or behaviors or larger institutional, community, or workforce conditions
  • Evidence of increased participation by members of groups historically underrepresented in STEM
  • Evidence of the project’s contributions to the development of infrastructure that supports STEM education and research, including physical resources, such as labs and instruments; institutional policies; and enhanced access to scientific information

Changes/Problems
  • Evidence of shortcomings or opportunities that point to a need for substantial changes in the project

Do you have a logic model that delineates your project’s activities, outputs, and outcomes? Is your evaluation report organized around the elements in your logic model? If so, a straightforward rule of thumb is to follow that logic model structure and report evidence related to your project activities and outputs in the Accomplishments section and evidence related to your project outcomes in the Impacts section of your NSF annual report.

2. Upload your evaluation report.

Include your project’s most recent evaluation report as a supporting file in the Accomplishments or Impacts section of Research.gov. If the report is longer than about 25 pages, make sure it includes a one- to three-page executive summary that highlights key results. Your NSF program officer is very interested in your evaluation results but probably doesn’t have time to carefully read lengthy reports from all the projects he or she oversees.

Blog: Declutter Your Reports: The Checklist for Straightforward Evaluation Reports

Posted on February 1, 2017 in Blog

Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation reports have a reputation for being long, overly complicated, and impractical. The recent buzz about fresh starts and tidying up for the new year got me thinking about the similarities between these infamous evaluation reports and the disastrously cluttered homes featured on reality makeover shows. The towering piles of stuff overflowing from those homes remind me of the technical language and details that clutter up so many evaluation reports. Informational clutter, like physical clutter, can turn reports into difficult-to-navigate obstacle courses and render their contents virtually unusable.

If you are looking for ideas on how to organize and declutter your reports, check out the Checklist for Straightforward Evaluation Reports that Lori Wingate and I developed. The checklist provides guidance on how to produce comprehensive evaluation reports that are concise, easy to understand, and easy to navigate. Main features of the checklist include:

  • Quick reference sheet: A one-page summary of content to include in an evaluation report and tips for presenting content in a straightforward manner.
  • Detailed checklist: A list and description of possible content to include in each report section.
  • Straightforward reporting tips: General and section-specific suggestions on how to present content in a straightforward manner.
  • Recommended resources: List of resources that expand on information presented in the checklist.

Evaluators, evaluation clients, or other stakeholders can use the checklist to set reporting expectations, such as what content to include and how to present information.

Straightforward Reporting Tips

Here are some tips, inspired by the checklist, on how to tidy up your reports:

  • Use short sentences: Each sentence should communicate one idea. Sentences should contain no more than 25 words. Downsize your words to only the essentials, just like you might downsize your closet.
  • Use headings: Use concise and descriptive headings and subheadings to clearly label and distinguish report sections. Use report headings, like labels on boxes, to make it easier to locate items in the future.
  • Organize results by evaluation questions: Organize the evaluation results section by evaluation question with separate subheadings for findings and conclusions under each evaluation question. Just like most people don’t put decorations for various holidays in one box, don’t put findings for various evaluation questions in one findings section.
  • Present takeaway messages: Label each figure with a numbered title and a separate takeaway message. Similarly, use callouts to grab readers’ attention and highlight takeaway messages. For example, use a callout in the results section to summarize the conclusion in one sentence under the evaluation question.
  • Minimize report body length: Reduce page length as much as possible without compromising quality. One way to do this is to place details that enhance understanding—but are not critical for basic understanding—in the appendices. Only information that is critical for readers’ understanding of the evaluation process and results should be included in the report body. Think of the appendices like a storage area such as a basement, attic, or shed where you keep items you need but don’t use all the time.

If you’d like to provide feedback, you can write your comments in an email or return a review form to info@evalu-ate.org. We are especially interested in getting feedback from individuals who have used the checklist as they develop evaluation reports.

Blog: Color Scripting to Measure Engagement

Posted on March 30, 2016 in Blog

President, iEval

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Last year I went to the D23 Expo in Anaheim, California. This was a conference for Disney fans everywhere. I got to attend panels where I learned past Disney secrets and upcoming Disney plans. I went purely for myself, since I love Disney everything, and I never dreamed I would learn something that could be applicable to my evaluation practice.

In a session with John Lasseter, Andrew Stanton, Pete Docter, and others from Pixar, I learned about a technique created by Ralph Eggleston (who was there too) called color scripting. Color scripting is a type of storyboarding, but Ralph would change the main colors of each panel to reflect the emotion the animated film was supposed to convey at that point. It helped the Pixar team understand at a quick glance what was going on in the film emotionally, and it also made it easier to create a musical score to enhance those emotions.

Then, a few weeks later, I was sitting in a large event for a client, observing from the back of the room. I started taking notes on the engagement and energy of the audience based on who was presenting. I created some metrics on the spot, including the number of people on their mobile devices, the number of people leaving the event, laughter, murmuring, applause, etc. I planned to create a simple chart with a timeline of the event, highlighting who was presenting at different times and indicating whether engagement was high/medium/low and whether energy was high/medium/low. When analyzing the data, I quickly realized that engagement and energy were 100% related: if engagement was high, energy was soon high as well. So, instead of charting two dimensions, I really only needed to chart one: engagement and energy combined (see the definitions of engagement and energy in the graphic below). That’s when it hit me – color scripting! Okay, I’m no artist like Ralph Eggleston, so I created a simple color scheme to use.

[Graphic 1: Color-scripted timeline of the event, including definitions of engagement and energy]

When I shared this with the clients who put on the event, they could clearly see how the audience reacted to the various elements of the event. It was helpful in determining how to improve the event in the future. This was a quick and easy visual, made in Word, to illustrate the overall reactions of the audience.
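If you prefer to script this kind of visual rather than build it by hand, here is a minimal sketch in Python with matplotlib. The segment names, times, ratings, and colors below are hypothetical, not the ones from the event or graphic above.

    # Color-scripting sketch (hypothetical data): one colored block per event
    # segment, shaded by the combined engagement & energy rating.
    import matplotlib.pyplot as plt
    import matplotlib.patches as mpatches

    # (segment name, start minute, end minute, combined engagement & energy)
    segments = [
        ("Welcome",        0,  15, "high"),
        ("Keynote",       15,  60, "medium"),
        ("Panel",         60, 105, "low"),
        ("Student demos", 105, 135, "high"),
    ]
    colors = {"high": "#2c7bb6", "medium": "#abd9e9", "low": "#fdae61"}

    fig, ax = plt.subplots(figsize=(8, 1.8))
    for name, start, end, rating in segments:
        ax.barh(0, end - start, left=start, color=colors[rating], edgecolor="white")
        ax.text((start + end) / 2, 0, name, ha="center", va="center", fontsize=8)

    ax.set_yticks([])
    ax.set_xlabel("Minutes into the event")
    ax.set_title("Audience engagement & energy by event segment")
    ax.legend(handles=[mpatches.Patch(color=c, label=l) for l, c in colors.items()],
              loc="upper right", ncol=3, frameon=False)
    plt.tight_layout()
    plt.savefig("color_script.png", dpi=150)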

I have since also applied this to a STEM project, color scripting how the teachers in a two-week professional development workshop felt at the end of each day, based on one word they shared as they exited the workshop. Mapping participant feelings across the different cohorts and comparing them with what was taught each day (and how it was taught) led to thoughtful conversations with the trainers about how they want the participants to feel and what they need to change to match reality with intention.

[Graphic 2: Color-scripted map of daily participant feelings across workshop cohorts]

You never know where you’re going to learn a technique or tool that could be useful in your evaluation practice and useful to the client. Be open to learning everywhere you go.

Blog: The Retrospective Pretest Method for Evaluating Training

Posted on March 16, 2016 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In a retrospective pretest,[1] trainees rate themselves before and after a training in a single data collection event. It is useful for assessing individual-level changes in knowledge and attitudes as one part of an overall evaluation of an intervention. This method fits well with the Kirkpatrick Model for training evaluation, which calls for gathering data about participants’ reaction to the training, their learning, changes in their behavior, and training outcomes. Retrospective pretest data are best suited for evaluating changes in learning and attitudes (Level 2 in the Kirkpatrick Model).

The main benefit of using this method is that it reduces response-shift bias, which occurs when respondents change their frame of reference for answering questions. It is also convenient, more accurate than self-report data gathered using traditional pre-post self-assessment methods, adaptable to a wide range of contexts, and generally more acceptable to adult learners than traditional testing. Theodore Lamb provides a succinct overview of the strengths and weaknesses of this method in a Harvard Family Research Project newsletter article—see bit.ly/hfrp-retro.

The University of Wisconsin Extension’s Evaluation Tip Sheet 27: "Using the Retrospective Post-then-Pre Design" provides practical guidelines about how to use this method: bit.ly/uwe-tips.

Design

Retrospective pretest questions should focus on the knowledge, skills, attitudes, or behaviors that are targeted by the intervention being evaluated. General guidelines for formatting questions: 1) Use between 4 and 7 response categories in a Likert-type or partially anchored rating scale; 2) Use formatting to distinguish pre and post items; 3) Provide clear instructions to respondents. If you are using an online survey platform, check your question type options before committing to a particular format. To see examples and learn more about question formatting, see the University of Wisconsin Extension’s Evaluation Tip Sheet 28: "Designing a Retrospective Post-then-Pre Question" at bit.ly/uwe-tips.

For several examples of Likert-type rating scales, see bit.ly/likert-scales—be careful to match question prompts to rating scales.

Analysis and Visualization

Retrospective pretest data are usually ordinal, meaning the ratings are hierarchical, but the distances between the points on the scale (e.g., between “somewhat skilled” and “very skilled”) are not necessarily equal. Begin your analysis by creating and examining the frequency distributions for both the pre and post ratings (i.e., the number and percentage of respondents who answer in each category). It is also helpful to calculate change scores—the difference between each respondent’s before and after ratings—and look at those frequency distributions (i.e., the number and percentage of respondents who reported no change, reported a change of 1 level, 2 levels, etc.).
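As a concrete but hypothetical sketch of this analysis in Python with pandas (the column names, 1-5 scale, and ratings below are made up for illustration):

    # Frequency distributions and change scores for retrospective
    # post-then-pre ratings on a hypothetical 1-5 ordinal scale.
    import pandas as pd

    # Each row is one respondent's retrospective "before" and "after" self-rating.
    df = pd.DataFrame({
        "pre":  [2, 3, 1, 2, 4, 2, 3, 1],
        "post": [4, 4, 3, 3, 5, 3, 4, 2],
    })
    scale = range(1, 6)  # 1 = not at all skilled ... 5 = very skilled

    # Number and percentage of respondents in each category, before and after.
    for col in ["pre", "post"]:
        counts = df[col].value_counts().reindex(scale, fill_value=0)
        pct = (counts / len(df) * 100).round(1)
        print(f"{col} ratings:\n", pd.DataFrame({"n": counts, "%": pct}), "\n")

    # Change score = post rating minus pre rating for each respondent.
    df["change"] = df["post"] - df["pre"]
    print("Distribution of change scores (levels gained):")
    print(df["change"].value_counts().sort_index())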

For more on how to analyze retrospective pretest data and ordinal data in general, see the University of Wisconsin Extension’s Evaluation Tip Sheet 30: "Analysis of Retrospective Post-then-Pre Data" and Tip Sheet 15: "Don’t Average Words" at bit.ly/uwe-tips.

For practical guidance on creating attractive, effective bar, column, and dot plot charts, as well as other types of data visualizations, visit stephanieevergreen.com.

Using Results

To use retrospective pretest data to make improvements to an intervention, examine the data to determine whether some groups (based on characteristics such as job role, other demographics, or incoming skill level) gained more or less than others, and compare the results with the intervention’s relative strengths and weaknesses in achieving its objectives. Make adjustments to future offerings based on lessons learned, and monitor to see if the changes lead to improvements in outcomes.
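Continuing the hypothetical pandas sketch above, a simple subgroup comparison might look like this (the job categories are invented, and the median is used because the ratings are ordinal):

    # Compare change scores across hypothetical subgroups.
    import pandas as pd

    df = pd.DataFrame({
        "job":  ["technician", "faculty", "technician", "faculty", "technician"],
        "pre":  [2, 3, 1, 4, 2],
        "post": [4, 4, 3, 5, 3],
    })
    df["change"] = df["post"] - df["pre"]

    # Median change per group avoids averaging ordinal ratings directly.
    print(df.groupby("job")["change"].median())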

To learn more, see the slides and recording of EvaluATE’s December 2015 webinar on this topic: http://www.evalu-ate.org/webinars/2015-dec/

For a summary of research on this method, see Klatt and Powell’s (2005) white paper, "Synthesis of Literature Relative to the Retrospective Pretest Design": bit.ly/retro-syn.

[1] This method has other names, such as post-then-pre and retrospective pretest-posttest.

Blog: Show Me a Story: Using Data Visualization to Communicate Evaluation Findings

Posted on January 13, 2016 in Blog

Senior Research Associate, Education Development Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


It’s all too easy for our evaluation reports to become a lifeless pile of numbers that gather dust on a shelf. As evaluators and PIs, we want to tell our stories and we want those stories to be heard. Data visualizations (like graphs and infographics) can be powerful ways to share evaluation findings, quickly communicate key themes, and ultimately have more impact.

Communicating evaluation findings visually can also help your stakeholders become better data analysts themselves. I’ve found that when stakeholders see a graph showing survey results, they are much more likely to spend time examining the findings, asking questions, and thinking about what the results might mean for the project than if the same information is presented in a traditional table of numbers.
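As a minimal sketch of that idea in Python with matplotlib (the survey items and percentages below are invented for illustration), a single chart with a takeaway-message title can do the work of a dense table:

    # Turn a small table of (hypothetical) survey results into a bar chart
    # whose title states the takeaway message.
    import matplotlib.pyplot as plt

    items = ["Hands-on labs", "Industry mentoring", "Online modules"]
    pct_useful = [88, 72, 54]  # % of respondents rating each component useful

    fig, ax = plt.subplots(figsize=(6, 3))
    ax.barh(items, pct_useful, color="#4c72b0")
    ax.set_xlim(0, 100)
    ax.set_xlabel("% of respondents rating the component useful")
    ax.set_title("Hands-on labs were rated useful most often")
    for y, v in enumerate(pct_useful):
        ax.text(v + 2, y, f"{v}%", va="center")
    plt.tight_layout()
    plt.savefig("survey_results.png", dpi=150)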

Here are a few tips to get you started with data visualization:

  • Start with the data story. Pick one key finding that you want to communicate to a specific group of stakeholders. What is the key message you want those stakeholders to walk away with?
  • Put the mouse down! When you’re ready to develop a data viz, start by sketching various ways of showing the story you want to tell on a piece of paper.
  • Use Stephanie Evergreen’s and Ann Emery’s checklist to help you plan and critique your data visualization: http://stephanieevergreen.com/dataviz-checklist/.
  • Once you’ve drafted your data viz, run it by one or two colleagues to get their feedback.
  • Some PIs, funders, and other stakeholders still want to see tables with all the numbers. We typically include tables with the complete survey results in an appendix.

Finally, don’t expect to hit a home run your first time at bat. (I certainly didn’t!) You will get better as you become more familiar with the software you use to produce your data visualizations and as you solicit and receive feedback from your audience. Keep showing those stories!


Blog: Tips for Evaluation Recommendations

Posted on June 3, 2015 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This week I am in Atlanta at the American Evaluation Association (AEA) Summer Evaluation Institute, presenting a workshop on Translating Evaluation Findings into Actionable Recommendations. Although the art of crafting practical, evidence-based recommendations is not covered in depth in either evaluation textbooks or academic courses, most evaluators (86%, according to Fleischer and Christie’s survey of AEA members) believe that making recommendations is part of an evaluator’s job. By reading as much as I can on this topic[1] and reflecting on my own practice, I have assembled 14 tips for how to develop, present, and follow up on evaluation recommendations:

DEVELOP

  1. Determine the nature of recommendations needed or expected.  At the design stage, ask stakeholders: What do you hope to learn from the evaluation? What decisions will be influenced by the results? Should the evaluation include recommendations?
  2. Generate possible recommendations throughout the evaluation. Keep a log of ideas as you collect data and observe the program. I like Roberts-Gray, Buller, and Sparkman’s (1987) evaluation question-driven framework.
  3. Base recommendations on evaluation findings and other credible sources. Findings are important, but they’re often not sufficient for formulating recommendations.  Look to other credible sources, such as program goals, stakeholders/program participants, published research, experts, and the program’s logic model.
  4. Engage stakeholders in developing and/or reviewing recommendations prior to their finalization. Clients should not be surprised by anything in an evaluation report, including the recommendations. If you can engage stakeholders directly in developing recommendations, they will feel more ownership. (Read Adrienne Adams’s article about a great process for this.)
  5. Focus recommendations on actions within the control of intended users. If the evaluation client doesn’t have control over the policy governing their programs, don’t bother recommending changes at that level.
  6. Provide multiple options for achieving desired results.  Balance consideration of the cost and difficulty of implementing recommendations with the degree of improvement expected; if possible, offer alternatives so stakeholders can select what is most feasible and important to do.

PRESENT

  1. Clearly distinguish between findings and recommendations. Evaluation findings reflect what is; recommendations are a prediction about what could be. Developing recommendations requires a separate reasoning process.
  2. Write recommendations in clear, action-oriented language. I often see words like consider, attend to, recognize, and acknowledge in recommendations. Those call the clients’ attention to an issue, but don’t provide guidance as to what to do.
  3. Specify the justification sources for each recommendation. It may not be necessary to include this information in an evaluation report, but be prepared to explain how and why you came up with the recommendations.
  4. Explain the costs, benefits, and challenges associated with implementing recommendations. Provide realistic forecasts of these matters so clients can make informed decisions about whether to implement the recommendations.
  5. Be considerate—exercise political and interpersonal sensitivity. Avoid “red flag” words like fail and lack, don’t blame or embarrass, and be respectful of cultural and organizational values.
  6. Organize recommendations, such as by type, focus, timing, audience, and/or priority. If many recommendations are provided, organize them to help the client digest the information and prioritize their actions.

FOLLOW-UP

  1. Meet with stakeholders to review and discuss recommendations in their final form.  This is an opportunity to make sure they fully understand the recommendations as well as to lay the groundwork for action.
  2. Facilitate decision making and action planning around recommendations. I like the United Nations Development Programme’s “Management Response Template” as an action planning tool.

See also my handy one-pager of these tips for evaluation recommendations.

[1] See especially Hendricks & Papagiannis (1990) and Utilization-Focused Evaluation (4th ed.) by Michael Quinn Patton.