Archive: evaluation design

Blog: Designing Accessible Digital Evaluation Materials

Posted on August 19, 2020 in Blog

Developmental Evaluator

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

* This blog was originally published on AEA365 on July 23, 2020:

[Title graphic: Designing Accessible Digital Evaluation Materials]

Hi, I am Don Glass, a DC-based developmental evaluator, learning designer, and proud member of the AEA Disabilities and Underrepresented Populations TIG.

COVID-19 has increased our reliance on, and perhaps fast-tracked, our use of digital and online communication to serve our diverse evaluation clients and audiences. This is an opportunity to push our evaluation communication design to the next level. Just as AEA members enthusiastically embraced Stephanie Evergreen's and Sheila Robinson's contributions to Potent Presentations and established a flourishing Data Visualization TIG, we can now integrate inclusive design routines into our communication practice!

Being inclusive is part of the AEA mission, and for some of us a legal duty: we must make sure that our digital communications are barrier-free and accessible to all. This article is a quick reference guide to design considerations for digital communication such as AEA365 blogs, social media, online webinars/courses, virtual conference presentations, and evaluation reports: any digital content, really, that uses text, images, and media.

The evaluation field has a solid foundation in its literature to guide inclusive evaluation thinking and design. Donna M. Mertens's 1999 AEA Presidential Address crystallized the rationale for inclusive approaches to evaluation. In 2011, Jennifer Sulewski and June Gothberg first developed a Universal Design for Evaluation Checklist to help evaluators systematically think about the inclusive design of all aspects of their evaluation practice. The guidance in this blog focuses on:

Principle 4: Perceptible Information. The design communicates necessary information effectively to the user, regardless of ambient conditions or the user’s sensory abilities.

Social Media Accessibility: Plain language, CamelCase Hashtags, Image Descriptions, Captioning and Audio, Link Shorteners

Hot Tips

Text: Provide supports to access this primary form of content and navigate its organization.

  • Structured Text: Use headers and bulleted/numbered lists. Think about reading order.
  • Fonts and Font Size: Make text large and legible enough to easily read. Avoid serif fonts.
  • Colors and Contrast: Make sure text and background are not too similar. Consider a contrast checker tool.
  • Descriptive Hyperlinks: Embed links in text that describe the destination. Remember, links should look like links.
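The contrast check mentioned above can be done programmatically. The sketch below implements the WCAG 2.x contrast-ratio formula (relative luminance of each color, then the ratio of lighter to darker); the function names are my own, and WCAG AA requires a ratio of at least 4.5:1 for body text.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance from 0-255 sRGB channel values."""
    def linearize(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ranges from 1:1 (identical colors) to 21:1 (black on white)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background is the maximum possible contrast.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Dedicated contrast checker tools apply this same formula, so the sketch is mainly useful for batch-checking a palette.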

Images: Provide a barrier-free and purposeful use of images beyond aesthetics.

  • Alternative Text: Write a short description of the content and function of an image, to be read by screen readers, web browsers, and search engines.
  • Accessible Images: Select or design images and diagrams to enhance comprehension and communication.

Media: Provide supports to make media content accessible and searchable.

  • Closed Captioning: Make text versions of the spoken word presented in multimedia. Consider auto-captioning on YouTube.
  • Transcripts: Make a full text version of spoken word presented in multimedia. Explore searching transcripts as a way of navigating media.
  • Audio Description: A narration that describes visual-only content in media. Check out examples of Descriptive Video Service on your streaming service.

Rad Resources

Blog: Integrating Perspectives for a Quality Evaluation Design

Posted on August 2, 2017 in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
John Dorris

Director of Evaluation and Assessment, NC State Industry Expansion Solutions

Dominick Stephenson

Assistant Director of Research Development and Evaluation, NC State Industry Expansion Solutions

Designing a rigorous and informative evaluation depends on communication with program staff to understand planned activities and how those activities relate to the program sponsor's objectives and the evaluation questions that reflect those objectives (see white paper related to communication). At NC State Industry Expansion Solutions, we have worked long enough on evaluation projects to know that such communication is not always easy, because program staff and the program sponsor often look at the program from two different perspectives: the program staff focus on work plan activities (WPAs), while the program sponsor may be more focused on the evaluation questions (EQs). So, to help facilitate communication at the beginning of the evaluation project and assist in the design and implementation, we developed a simple matrix technique to link the WPAs and the EQs (see below).


For each of the WPAs, we link one or more EQs and indicate what types of data collection events will take place during the evaluation. During project planning and management, the crosswalk of WPAs and EQs will be used to plan out qualitative and quantitative data collection events.
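The crosswalk described above can be sketched as a simple data structure. All WPA, EQ, and data collection names below are hypothetical placeholders; the point is just that each activity links to one or more questions and planned data collection events, so coverage gaps are easy to spot.

```python
# Hypothetical crosswalk: each work plan activity (WPA) maps to the
# evaluation questions (EQs) it informs and the planned data collection events.
crosswalk = {
    "WPA 1: Recruit program participants": {
        "eqs": ["EQ1: Is the program reaching its target audience?"],
        "data_collection": ["intake survey", "enrollment records review"],
    },
    "WPA 2: Deliver training workshops": {
        "eqs": [
            "EQ2: Are activities implemented as planned?",
            "EQ3: Do participants gain the intended skills?",
        ],
        "data_collection": ["observation", "retrospective pretest"],
    },
}

# Print the matrix row by row; every WPA should link to at least one EQ.
for wpa, links in crosswalk.items():
    print(wpa)
    for eq in links["eqs"]:
        print("  ->", eq, "|", ", ".join(links["data_collection"]))
```

A spreadsheet works just as well in practice; the value is in making the WPA-to-EQ links explicit before data collection begins.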


The above framework may be most helpful for the formative assessment (process questions and activities). However, it can also enrich the knowledge gained from the participant outcomes analysis in the summative evaluation in the following ways:

  • Understanding how the program has been implemented will help determine fidelity to the program as planned, which in turn helps determine the degree to which participant outcomes can be attributed to the program design.
  • Details on program implementation gathered during the formative assessment, when combined with evaluation of participant outcomes, can suggest hypotheses about factors that would lead to program success (positive participant outcomes) if the program is continued or replicated.
  • Details about the data collection process gathered during the formative assessment will help assess the quality and limitations of the participant outcome data, and the reliability of any conclusions based on those data.

So, for us, this matrix approach is a quality check on our evaluation design that also helps during implementation. Maybe you will find it helpful, too.

Webinar: Meeting Requirements, Exceeding Expectations: Understanding the Role of Evaluation in Federal Grants

Posted on March 22, 2016 in Webinars

Presenter(s): Ann Beheler, Leslie Goodyear, Lori Wingate
Date(s): May 25, 2016
Time: 3-4:00 p.m.

External evaluation is a requirement of many federal grant programs. Understanding and addressing these requirements is essential both for successfully seeking grants and for achieving the objectives of funded projects. In this webinar, we will review the evaluation language from a variety of federal grant programs and translate the specifications into practical steps. Topics will include finding a qualified evaluator, budgeting for evaluation, understanding evaluation design basics, reporting and using evaluation results, and integrating past evaluation results into future grant submissions.

Additional Resource

Blog: The Retrospective Pretest Method for Evaluating Training

Posted on March 16, 2016 in Blog

Executive Director, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In a retrospective pretest [1], trainees rate themselves before and after a training in a single data collection event. It is useful for assessing individual-level changes in knowledge and attitudes as one part of an overall evaluation of an intervention. This method fits well with the Kirkpatrick Model for training evaluation, which calls for gathering data about participants’ reaction to the training, their learning, changes in their behavior, and training outcomes. Retrospective pretest data are best suited for evaluating changes in learning and attitudes (Level 2 in the Kirkpatrick Model).

The main benefit of using this method is that it reduces response-shift bias, which occurs when respondents change their frame of reference for answering questions. It is also convenient, more accurate than self-reported data gathered using traditional pre-post self-assessment methods, adaptable to a wide range of contexts, and generally more acceptable to adult learners than traditional testing. Theodore Lamb provides a succinct overview of the strengths and weaknesses of this method in a Harvard Family Research Project newsletter article.

The University of Wisconsin Extension’s Evaluation Tip Sheet 27: “Using the Retrospective Post-then-Pre Design” provides practical guidelines about how to use this method.


Retrospective pretest questions should focus on the knowledge, skills, attitudes, or behaviors that the intervention being evaluated targets. General guidelines for formatting questions:

  1. Use between 4 and 7 response categories in a Likert-type or partially anchored rating scale.
  2. Use formatting to distinguish pre and post items.
  3. Provide clear instructions to respondents.

If you are using an online survey platform, check your question type options before committing to a particular format. To see examples and learn more about question formatting, see the University of Wisconsin Extension’s Evaluation Tip Sheet 28: “Designing a Retrospective Post-then-Pre Question.”

When reviewing examples of Likert-type rating scales, be careful to match question prompts to rating scales.

Analysis and Visualization

Retrospective pretest data are usually ordinal, meaning the ratings are hierarchical, but the distances between the points on the scale (e.g., between “somewhat skilled” and “very skilled”) are not necessarily equal. Begin your analysis by creating and examining the frequency distributions for both the pre and post ratings (i.e., the number and percentage of respondents who answer in each category). It is also helpful to calculate change scores—the difference between each respondent’s before and after ratings—and look at those frequency distributions (i.e., the number and percentage of respondents who reported no change, reported a change of 1 level, 2 levels, etc.).
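The analysis described above, frequency distributions for the pre and post ratings plus a distribution of change scores, can be sketched in a few lines. The ratings below are hypothetical responses on a 5-point scale, each pair being one respondent's (before, after) self-rating; note the change score counts levels moved, not an average, since the data are ordinal.

```python
from collections import Counter

# Hypothetical (before, after) self-ratings from a single retrospective
# pretest, on a 5-point scale (1 = not at all skilled ... 5 = very skilled).
ratings = [(2, 4), (1, 3), (3, 3), (2, 5), (3, 4), (2, 4)]

# Frequency distributions: how many respondents chose each category.
pre_freq = Counter(before for before, _ in ratings)
post_freq = Counter(after for _, after in ratings)

# Change score = after - before, interpreted as ordinal levels moved.
change_freq = Counter(after - before for before, after in ratings)

n = len(ratings)
for change in sorted(change_freq):
    count = change_freq[change]
    print(f"moved {change:+d} level(s): {count} respondent(s), {100 * count / n:.0f}%")
```

Reporting the share of respondents who moved zero, one, or two-plus levels keeps the analysis honest for ordinal data, rather than averaging the ratings as if the scale points were equally spaced.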

For more on how to analyze retrospective pretest data, and ordinal data in general, see the University of Wisconsin Extension’s Evaluation Tip Sheet 30: “Analysis of Retrospective Post-then-Pre Data” and Tip Sheet 15: “Don’t Average Words.”

For practical guidance on creating attractive, effective bar, column, and dot plot charts, as well as other types of data visualizations, explore the many data visualization guides available online.

Using Results

To use retrospective pretest data to improve an intervention, examine the data to determine whether some groups (based on characteristics such as job, other demographics, or incoming skill level) gained more or less than others, and compare the results to the intervention’s relative strengths and weaknesses in achieving its objectives. Make adjustments to future offerings based on lessons learned, and monitor whether the changes lead to improved outcomes.

To learn more, see the slides and recording of EvaluATE’s December 2015 webinar on this topic.

For a summary of research on this method, see Klatt and Powell’s (2005) white paper, “Synthesis of Literature Relative to the Retrospective Pretest Design.”

[1] This method has other names, such as post-then-pre and retrospective pretest-posttest.

Blog: Needs Assessment: What is it and why use it?

Posted on January 27, 2016 in Blog

Owner/Evaluator, IMSA Consulting

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hi! I am Mary Siegrist from IMSA, a research and evaluation company located in Colorado. I would like to talk about needs assessment and why it is an important step in an evaluation process. So many projects skip this step and jump right into implementing a solution.

What is a needs assessment? It is a study of the current knowledge, ability, interest, or attitude of a defined audience or group. This definition can be broken into two goals:

Goal 1: To learn what your audience already knows and thinks, so that you can determine what educational services are needed. Think of it as establishing the current state of your audience’s skills, knowledge, and abilities.

Goal 2: To understand what you can do to make your educational services more acceptable and useful to your audience.

And when a needs assessment is properly thought out, it will provide the following information:

  • Impact: Insights about how education and training can impact your audience
  • Approaches: Knowledge about educational approaches that may be most effective
  • Gaps: Identification of gaps in available training
  • Outcomes: Information about the current situation that you will use to document outcomes in your logic model
  • Demand: Knowledge about the potential demand for future programs and products
  • Credibility: Evidence that the program is serving the target audience

Ready to start but not sure how? Begin with developing a needs assessment plan. This plan will be a description of the what, when, who, how, and why of your project. Use these seven steps to help with writing your needs assessment plan.

  1. Write objectives: What do you want to learn?
  2. Select audience: Who is the target audience?
  3. Select audience sample: How will you select the sample?
  4. Pick an instrument: What will you use to collect the data?
  5. Collect data: How will you collect data?
  6. Analyze data: How will you make sense of the data that will be gathered?
  7. Follow-up: What will you do with this information?

Have I convinced you yet? A needs assessment allows you to demonstrate the foundation of your logic model to funders. Because most funding sources insist that a project be evaluated, the information in a needs assessment helps form the basis for a program evaluation.

An example:

A university decided to develop a GIS (Geographic Information System) program for its undergraduate students but wanted to make sure the program would teach the GIS skills that community businesses were looking for when hiring new employees. A needs assessment was conducted in the community: businesses that use GIS technology were contacted by phone and in person and asked what skills they would like to see in potential new hires. Based on this information, the university created a curriculum that ensured its students graduated with these skills.

Next time you write your proposal to ATE to fund a new idea, consider including a needs assessment in the first year of the grant.


Blog: Logic Models and Evaluation Planning – Working Together!

Posted on January 20, 2016 in Blog

Research Scientist, Education Development Center

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

[Title graphic: Logic Models and Evaluation Planning - Working Together!]

As an evaluator, I am often asked to work on evaluation plans for National Science Foundation proposals. I believe it is important for evaluators and clients to work together, so I start the conversation by asking about project goals and outcomes and then suggest that we work together to develop a project logic model. Developing a logic model helps create a unified vision for the project and promotes common understanding. There are many types and formats of logic models and while no model is “best,” a logic model usually has the following key elements: inputs, activities, outputs, outcomes, and impacts.

Reasons to develop logic models:

  • A logic model is a visual “elevator speech” that can be helpful when reviewing the proposal as it provides a quick overview of the project.
  • It is logical! It aligns the resources, activities, deliverables (outputs), and outcomes (short- and medium-term) with impacts (long-term outcomes). I have often been told by clients that it helped them organize their proposals.

Focus: I love logic models because they help me, the evaluator, focus my work on critical program elements. When a logic model is developed collaboratively by the project team (client) and the evaluator, there is a shared understanding of how the project will work and what it is designed to achieve.

Frame the evaluation plan: Now comes the bonus! A logic model helps form the basis of an outcomes-based evaluation plan. I start the plan by developing indicators with my client for each of the outcomes on the logic model. Indicators are the criteria used for measuring the extent to which projected outcomes are being achieved. Effective indicators align directly to outcomes and are clear and measurable. And while measurable, indicators do not always need to be quantifiable. They can be qualitative and descriptive, such as “Youth will describe that they ….” Note that this example states how you will determine whether an outcome has been met (youth self-report). You will likely have more than one indicator for each outcome. An indicator answers questions like these: How will you know it when you see it? What does it look like when an outcome is met? What is the evidence?

Guide the evaluation questions: After the indicators are developed we decide on the guiding evaluation questions (what we will be evaluating), and I get to work on the rest of the evaluation plan. I figure out an overall design and then add methods, measures, sampling, analysis, reporting, and dissemination (potential topics for future blog posts). Once the project is funded, we refine the evaluation plan, develop a project/evaluation timeline, and determine the ongoing evaluation management and communication – then we are ready for action.

1. W.K. Kellogg Foundation Logic Model Development Guide
2. W.K. Kellogg Foundation Evaluation Handbook (also available in Spanish)
3. EvaluATE’s Logic Model Template for ATE Projects and Centers

Newsletter: How can you make sure your evaluation meets the needs of multiple stakeholders?

Posted on October 1, 2015 in Newsletter

Executive Director, The Evaluation Center at Western Michigan University

We talk a lot about “stakeholders” in evaluation. These are the folks who are involved in, affected by, or simply interested in the evaluation of your project.  But what these stakeholders want or need to know from the evaluation, the time they have available for the evaluation, and their level of interest are probably quite variable.  Here is a generic guide to types of ATE evaluation stakeholders, what they might need, and how to meet those needs.

For each stakeholder group below, we list what they might need from the evaluation and tips for meeting those needs.
Project leaders (PI, co-PIs)
  • Information that will help you make improvements to the project as it is unfolding
  • Results you can include in your annual reports to NSF to demonstrate accountability and impact
Communicate your needs clearly to your evaluator, including when you need the information in order to make use of it.
Advisory committees or National Visiting Committees
  • Results from the evaluation that show whether the project is on track for meeting its goals, if changes in direction or operations are warranted
  • Summary information about the project’s strengths and weaknesses
Many advisory committee members donate their time, so they probably aren’t interested in reading lengthy reports.  Provide a brief memo and/or short presentation at meetings with key findings and invite questions about the evaluation. Be forthcoming about strengths and weaknesses.
Participants who provide data for the evaluation
  • Access to reports where their information was used
  • Summaries of what actions were taken based on the information they needed to provide
The most important thing for this group is to demonstrate use of the information they provided.  You can share reports, but a personal message from project leaders along the lines of “we heard you and here is what we’re doing in response” is most valuable.
NSF program officers
  • Evidence that the project is on track for meeting its goals
  • Evidence of impact (not just what was done, but what difference the work is making)
  • Evidence that the project is using evaluation results to make improvements
Focus on Intellectual Merit (the intrinsic quality of the work and potential to advance knowledge) and Broader Impacts (the tangible benefits for individuals and progress toward desired societal outcomes). If you’re not sure about what your program officer needs from your evaluation, ask him or her for clarification.
College administrators (department chairs, deans, executives, etc.)
  • Results that demonstrate impact on students, faculty, institutional culture, infrastructure, and reputation.
Make full reports available upon request, but most busy administrators probably don’t have the time to read technical reports or need the fine-grained data points. Prepare memos or share presentations that focus on the information they’re most interested in.
Partners and collaborators
  • Information that helps them assess the return on the investment of their time or other resources
As with college administrators, focus on providing the information most pertinent to this group.

In case you didn’t read between the lines: the underlying message here is to provide stakeholders with the information that is most relevant to their particular “stake” in your project. A sure way to miss their needs is to send everyone only a long, detailed technical report with every data point collected. It’s good to have a full report available for those who request it, but many simply won’t have the time or level of interest needed to consume that quantity of evaluative information about your project. Most importantly, don’t take our word for what they might need: ask them!

Not sure which stakeholders to involve in your evaluation, or how? Check out our worksheet on Identifying Stakeholders and Their Roles in an Evaluation.

Newsletter: Creating an Evaluation Scope of Work

Posted on October 1, 2015 in Newsletter

Executive Director, The Evaluation Center at Western Michigan University

One of the most common requests we get at EvaluATE is for examples of independent contractor agreements and scope of work statements for external evaluators. First, let’s be clear about the difference between these two types of documents.

An independent contractor agreement is typically 90 percent boilerplate language required by your institution. Here at Western Michigan University, contracts are run through one of several offices (Business Services, Research and Sponsored Programs, Grants and Contracts, or Purchasing), depending on the type of contract and the nature of the work or service. We can’t tell you the name of the office at your institution, but there definitely is one, and it probably has boilerplate contract forms that you will need to use.

A scope of work statement should be attached to and referenced by the independent contractor agreement (or other type of contract). Unlike the contract, it should be written not in legalese but in plain language understandable to all parties involved. The key issues to cover in a scope of work statement include the following:

Evaluation questions (or objectives): Including information about the purpose of the evaluation is a good reminder to those involved about why the evaluation is being done. It may serve as a useful reference down the road if the evaluation starts to experience scope creep (or shrinkage).

Main tasks and deliverables (with timelines or deadlines): This information should make clear what services and products the evaluator will provide. Common examples include a detailed evaluation plan (what was included in your proposal probably doesn’t have enough detail), data collection instruments, reports, and presentations.

It’s critical to include timelines (generally when things will occur) and deadlines (when they must be finished) in this statement.

Conditions for payment: You most likely specified a dollar amount for the evaluation in your grant proposal, but you probably do not plan on paying that in a lump sum at the beginning or end of the evaluation, or even yearly. Specify in what increments payments should be made and what conditions must be met for payment. Rather than tying payments to certain dates, consider making payments contingent on the completion of certain tasks or deliverables.

Be sure to come to agreement on these terms in collaboration with your evaluator. This is an opportunity to launch your working relationship from a place of open communication and shared expectations.