Kelly Robertson

Senior Research Associate, The Evaluation Center at Western Michigan University

Kelly has a Ph.D. in evaluation and more than eight years of experience in the field. She works as a Senior Research Associate at The Evaluation Center at Western Michigan University. Dr. Robertson has worked on evaluations at the local, regional, national, and international levels, spanning a wide variety of sectors (e.g., STEM education, adult education, career and technical education, and evaluation capacity development). Her research interests primarily focus on evaluation as it relates to equity, cultural competence, and making evaluation more user-friendly.


Pilot Testing: Checklist for Program Evaluation Report Content

Posted on June 15, 2017

EvaluATE invites individuals who are currently writing evaluation reports to pilot test the Checklist for Program Evaluation Report Content and provide feedback for improvement by August 15, 2017. 

A form for providing feedback is available at bit.ly/rep-check-pilot.

After a few questions about the context of your work, this form will prompt you to answer four open-ended questions about your experience with the checklist:

  1. What was especially helpful about this checklist?
  2. What frustrated or confused you?
  3. What would you add, change, or remove?
  4. If using this checklist affected how you wrote your report or what you included in it, please describe how it influenced you.

About the Checklist: This checklist identifies and describes the elements of an evaluation report. It should not be treated as a rigid set of requirements; rather, it should be used as a flexible guide for determining an evaluation report’s content. Each checklist element is a prompt for decision making about what content is appropriate for a particular evaluation context. Those decisions should be made with consideration of the report audience’s information needs and the resources available for report development.

Blog: Declutter Your Reports: The Checklist for Straightforward Evaluation Reports

Posted on February 1, 2017 in Blog

Kelly Robertson, Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation reports have a reputation for being long, overly complicated, and impractical. The recent buzz about fresh starts and tidying up for the new year got me thinking about the similarities between these infamous evaluation reports and the disastrously cluttered homes featured on reality makeover shows. The towering piles of stuff overflowing from these homes remind me of the technical language and details that clutter up so many evaluation reports. Informational clutter, like physical clutter, can turn reports into difficult-to-navigate obstacle courses and render their contents virtually unusable. If you are looking for ideas on how to organize and declutter your reports, check out the Checklist for Straightforward Evaluation Reports that Lori Wingate and I developed. The checklist provides guidance on how to produce comprehensive evaluation reports that are concise, easy to understand, and easy to navigate. Main features of the checklist include:

  • Quick reference sheet: A one-page summary of content to include in an evaluation report and tips for presenting content in a straightforward manner.
  • Detailed checklist: A list and description of possible content to include in each report section.
  • Straightforward reporting tips: General and section-specific suggestions on how to present content in a straightforward manner.
  • Recommended resources: A list of resources that expand on the information presented in the checklist.

Evaluators, evaluation clients, and other stakeholders can use the checklist to set reporting expectations, such as what content to include and how to present information.

Straightforward Reporting Tips

Here are some tips, inspired by the checklist, on how to tidy up your reports:

  • Use short sentences: Each sentence should communicate one idea. Sentences should contain no more than 25 words. Downsize your words to only the essentials, just like you might downsize your closet.
  • Use headings: Use concise and descriptive headings and subheadings to clearly label and distinguish report sections. Use report headings, like labels on boxes, to make it easier to locate items in the future.
  • Organize results by evaluation questions: Organize the evaluation results section by evaluation question with separate subheadings for findings and conclusions under each evaluation question. Just like most people don’t put decorations for various holidays in one box, don’t put findings for various evaluation questions in one findings section.
  • Present takeaway messages: Label each figure with a numbered title and a separate takeaway message. Similarly, use callouts to grab readers’ attention and highlight takeaway messages. For example, use a callout in the results section to summarize the conclusion in one sentence under the evaluation question.
  • Minimize report body length: Reduce page length as much as possible without compromising quality. One way to do this is to place details that enhance understanding—but are not critical for basic understanding—in the appendices. Only information that is critical for readers’ understanding of the evaluation process and results should be included in the report body. Think of the appendices like a storage area such as a basement, attic, or shed where you keep items you need but don’t use all the time.

If you’d like to provide feedback, you can write your comments in an email or return a review form to info@evalu-ate.org. We are especially interested in feedback from individuals who have used the checklist as they develop evaluation reports.

Checklist: Program Evaluation Report Content

Posted on December 13, 2016

This checklist identifies and describes the elements of an evaluation report. It should not be treated as a rigid set of requirements; rather, it should be used as a flexible guide for determining an evaluation report’s content. Each checklist element is a prompt for decision making about what content is appropriate for a particular evaluation context. Those decisions should be made with consideration of the report audience’s information needs and the resources available for report development.

File: Click Here
Type: Checklist
Category: Resources
Author(s): Kelly Robertson, Lori Wingate

Webinar: Anatomy of a User-Friendly Evaluation Report

Posted on October 17, 2016 in Webinars

Presenter(s): Kelly Robertson, Lori Wingate
Date(s): December 14, 2016
Time: 1:00-2:00 p.m. Eastern
Recording: https://youtu.be/61oKkQ4EGfg

While long-form evaluation reports are the main way information about evaluations is communicated, they are largely underutilized. Barriers to report use most often relate to user-friendliness: the degree to which reports are easy to navigate, easy to understand, and provide information that readers can use. In this webinar, participants will see firsthand what user-friendly evaluation reports look like and learn strategies for creating readable, meaningful, and useful reports. Participants will also be introduced to the Checklist for User-Friendly Evaluation Reports. Evaluators can use this checklist as a guide to developing reports, and project leaders can use it to communicate their needs and preferences for reports to their evaluators.

Resources:
Slides
Checklist for Straightforward Evaluation Reports
Example of an Infographic-Style Evaluation Report

2016 High Impact Technology Exchange Conference (HI-TEC)

Posted on July 15, 2016 in Conferences

2016 High Impact Technology Exchange Conference (HI-TEC)
Pittsburgh, PA
July 25-28, 2016

Workshop

Logic Models: The Swiss Army Knife of Project Planning and Evaluation
Kelly Robertson
Lyssa Wilson

July 27, 2016 | 3:45-4:30 p.m.

A logic model is a graphic depiction of how a project translates its resources and activities into outcomes. Logic models are useful tools for succinctly communicating a project’s goals and activities, but they have many other applications. They provide a foundation for a project evaluation plan (and subsequent reporting) and can be used to organize the content of a grant proposal. In this session, participants will learn the basics of how to create a logic model, and we will demonstrate its use for planning a project evaluation and organizing a grant proposal. Participants will receive the Evaluation Planning Checklist for ATE Proposals and the ATE Project Logic Model Template.

For more information about the conference, and for conference registration, please visit http://www.highimpact-tec.org/

Resources:
Slides
Handout

Newsletter: Bridging the Gap: Using Action Plans to Facilitate Evaluation Use

Posted on April 1, 2016 in Newsletter

Kelly Robertson, Senior Research Associate, The Evaluation Center at Western Michigan University

NSF requires evaluation, in part, because it is an essential tool for project improvement. Yet all too often, evaluation results are not used to inform project decision making. There is a gap in the project improvement cycle between dissemination of evaluation results and decision making. One way to bridge this gap is through the use of an action plan for project improvement informed by evaluation findings, conclusions, and recommendations. The United Nations Development Programme’s (UNDP) “Management Response Template” (http://bit.ly/undp-mrt) provides a format for such an action plan. The template was created to encourage greater use of evaluation by projects. Although UNDP’s template is designed for use in international development contexts, it could be used for any type of project, including ATE centers and projects.

The template is organized around evaluation recommendations or issues. Any important issue that emerged from an evaluation would be an appropriate focus for action planning. The form allows for documentation of evaluation-based decisions and tracking of their implementation. The inclusion of a time frame for each key action, the party responsible, and the current status encourages structure and accountability around the use of evaluation results and project improvement.
