Lyssa Wilson Becho

Research Associate, Western Michigan University

Lyssa is a research associate with EvaluATE and contributes to various projects including the ATE annual survey report, survey snapshots, conference presentations, and blog posts. She is a student in the Interdisciplinary Ph.D. in Evaluation program at Western Michigan University. She has worked on a number of different evaluations, both big and small. Her interests lie in improving the way we conduct evaluation through research on evaluation methods and theories, as well as creating useful and understandable evaluation reports through data visualization.


Webinar: Getting Everyone on the Same Page: Practical Strategies for Evaluator-Stakeholder Communication

Posted on May 1, 2019 in Webinars

Presenter(s): Kelly Robertson, Lyssa Wilson Becho, Michael Lesiecki
Date(s): May 22, 2019
Time: 1:00-2:00 p.m. Eastern
Recording: https://youtu.be/vld5Z9ZLxD4

To ensure high-quality evaluation, evaluators and project staff must collaborate on evaluation planning and implementation. Whether at the proposal stage or the official start of the project, setting up a successful dialog begins at the very first meeting between evaluators and project staff and continues throughout the duration of the evaluation. Intentional conversations and planning documents can help align expectations for evaluation activities, deliverables, and findings. In this webinar, participants will learn about innovative and practical strategies to improve communication between those involved in evaluation planning, implementation, and use. We will describe and demonstrate strategies developed from our own evaluation practice for

  • negotiating evaluation scope
  • keeping project staff up-to-date on evaluation progress and next steps
  • ensuring timely report development
  • establishing and maintaining transparency
  • facilitating use of evaluation results.


Resources:
Slides
Handouts

Blog: Repackaging Evaluation Reports for Maximum Impact

Posted on March 20, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Emma Perk and Lyssa Wilson Becho

Evaluation reports take a lot of time to produce and are packed full of valuable information. To get the most out of your reports, think about “repackaging” your traditional report into smaller pieces.

Repackaging involves breaking up a long-form evaluation report into digestible pieces to target different audiences and their specific information needs. The goals of repackaging are to increase stakeholders’ engagement with evaluation findings, increase their understanding, and expand their use.

Let’s think about how we communicate data to various readers. Bill Shander from Beehive Media created the 4×4 Model for Knowledge Content, which illustrates different levels at which data can be communicated. We have adapted this model for use within the evaluation field. As you can see below, there are four levels, and each has a different type of deliverable associated with it. We are going to walk through these four levels and how an evaluation report can be broken up into digestible pieces for targeted audiences.

Figure 1. The four levels of delivering evaluative findings (image adapted from Shander’s 4×4 Model for Knowledge Content).

The first level, the Water Cooler, is for quick, easily digestible pieces of data. The idea is to use a single piece of data from your report to intrigue viewers into wanting to learn more. Examples include a newspaper headline, a postcard, or a social media post. A social media post should include a graphic (photo or graph), a catchy title, and a link to the next communication level’s document. This information should be succinct and exciting. Use this level to catch the attention of readers who might not otherwise be invested in your project.

Figure 2. Example of social media post at the Water Cooler level.

The Café level allows you to highlight three to five key pieces of data that you really want to share. A Café level deliverable is great for busy stakeholders who need to know detailed information but don’t have time to read a full report. Examples include one-page reports, a short PowerPoint deck, and short briefs. Make sure to include a link to your full evaluation report to encourage the reader to move on to the next communication level.

Figure 3. One-page report at the Café level.

The Research Library is the level at which we find the traditional evaluation report. Deliverables at this level require the reader to have an interest in the topic and to spend a substantial amount of time to digest the information.

Figure 4. Full evaluation report at the Research Library level.

The Lab is the most intensive and involved level of data communication. Here, readers have a chance to interact with the data. This level goes beyond a static report and allows stakeholders to personalize the data for their own interests. For those with the knowledge and expertise to create dashboards and interactive data displays, the Lab level is a great way to engage your audience and let readers manipulate the data to suit their needs.

Figure 5. Data dashboard example from Tableau Public Gallery (click image to interact with the data).

We hope this blog has sparked some interest in the different ways an evaluation report can be repackaged. Different audiences have different information needs and different amounts of time to spend reviewing reports. We encourage both project staff and evaluators to consider who their intended audience is and what would be the best level to communicate their findings. Then use these ideas to create content specific for that audience.

Blog: Using Think-Alouds to Test the Validity of Survey Questions

Posted on February 7, 2019 in Blog


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Those who have spent time creating and analyzing surveys know that surveys are complex instruments that can yield misleading results when not well designed. A great way to test your survey questions is to conduct a think-aloud (sometimes referred to as a cognitive interview). A type of validity testing, a think-aloud asks potential respondents to read through a survey and discuss out loud how they interpret the questions and how they would arrive at their responses. This approach can help identify questions that are confusing or misleading to respondents, questions that take too much time and effort to answer, and questions that don’t seem to be collecting the information you originally intended to capture.

Distorted survey results generally stem from four problem areas associated with the cognitive tasks of responding to a survey question: failure to comprehend, failure to recall, problems summarizing, and problems reporting answers. First, respondents must be able to understand the question. Confusing sentence structure or unfamiliar terminology can doom a survey question from the start.

Second, respondents must have access to or be able to recall the answer. Problems in this area can arise when questions ask for specific details from far in the past or ask for information the respondent simply does not know.

Third, sometimes respondents remember things in different ways from how the survey is asking for them. For example, respondents might remember what they learned in a program but are unable to assign these different learnings to a specific course. This might lead respondents to answer incorrectly or not at all.

Finally, respondents must translate the answer constructed in their heads to fit the survey response options. Confusing or vague answer formats can lead to unclear interpretation of responses. It is helpful to think of these four problem areas when conducting think-alouds.

Here are some tips when conducting a think-aloud to test surveys:

    • Make sure the participant knows the purpose of the activity is to have them evaluate the survey and not just respond to the survey. I have found that it works best when participants read the questions aloud.
    • If a participant seems to get stuck on a particular question, it might be helpful to probe them with one of these questions:
      • What do you think this question is asking you?
      • How do you think you would answer this question?
      • Is this question confusing?
      • What does this word/concept mean to you?
      • Is there a different way you would prefer to respond?
    • Remember to give the participant space to think and respond. It can be difficult to hold space for silence, but it is particularly important when asking for thoughtful answers.
    • Ask the participant reflective questions at the end of the survey. For example:
      • Looking back, does anything seem confusing?
      • Is there something in particular you hoped was going to be asked but wasn’t?
      • Is there anything else you feel I should know to truly understand this topic?
    • Perform think-alouds and revisions as an iterative process. This allows you to test the changes you make and confirm they resolve the original problem.

Report: 2018 ATE Annual Survey

Posted on February 1, 2019 in Annual Survey

This report summarizes data gathered in the 2018 survey of ATE program grantees. Conducted by EvaluATE — the evaluation support center for the ATE program, located at The Evaluation Center at Western Michigan University — this was the 19th annual ATE survey. Included here are findings about ATE projects and the activities, accomplishments, and impacts of the projects during the 2017 calendar year (2017 fiscal year for budget-related questions).

File: Click Here
Type: Report
Category: ATE Annual Survey
Author(s): Lori Wingate, Lyssa Becho

Webinar: Basic Principles of Survey Question Development

Posted on January 30, 2019 in Webinars

Presenter(s): Lori Wingate, Lyssa Wilson Becho, Mike Lesiecki
Date(s): February 20, 2019
Time: 1:00-2:00 p.m. Eastern
Recording: https://youtu.be/64nXDeRm-9c

Surveys are a valuable source of evaluation data. Obtaining quality data relies heavily on well-crafted survey items that align with the overall purpose of the evaluation. In this webinar, participants will learn fundamental principles of survey question construction to enhance the validity and utility of survey data. We will discuss the importance of considering data analysis during survey construction and ways to test your survey questions. Participants will receive an overview of survey do’s and don’ts to help apply fundamental principles of survey question development in their own work.

Resources:
Slides
Handout

Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Kelly Robertson and Lyssa Wilson Becho

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Examples of items we like to include on our cheat sheets are shown in Figures 1-3 and include the following:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with a lot of data collection sources. The example shown in Figure 2 identifies who implements the instrument, when the instrument will be implemented, the purpose of the instrument, and the data source. It is helpful to identify who is responsible for data collection activities in the cheat sheet, so nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved. That way, if we need to update the plan, we can be sure we are working on the most recent version.


Figure 1. Cheat Sheet Example Page 1. (Click to enlarge.)

Figure 2. Cheat Sheet Example Page 2. (Click to enlarge.)

Figure 3. Cheat Sheet Example Page 3. (Click to enlarge.)


Webinar: Creating One-Page Reports

Posted on March 13, 2018 in Webinars

Presenter(s): Emma Perk, Lyssa Becho
Date(s): April 18, 2018
Time: 1:00-2:00 p.m. Eastern
Recording: https://youtu.be/V2TBfz24RpY

One-page evaluation reports are a great way to provide a snapshot of a project’s activities and impact to stakeholders such as advisory groups, college administrators, and NSF program officers. Summarizing key evaluation facts in a format that is easily and quickly digestible engages the busy reader and can make your project stand out.

Although traditional, long-form evaluation reports are still an excellent way to distribute evaluation results, one-page reports can increase engagement with, understanding of, and use of evaluation findings, both for the current grant and for leveraging results in follow-up proposals.

In this webinar, we will provide you with the tools and resources you need to create effective one-page reports and share some examples that have worked well in our practice.


Resources:
10 steps to creating one-page reports
One-page report worksheet
Slides
South Seattle One-Page Report

Blog: One Pagers: Simple and Engaging Reporting

Posted on December 20, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Traditional, long-form reports are often used to detail the depth and specifics of an evaluation. However, many readers simply don’t have the time or bandwidth to digest a 30-page report. Distilling the key information into one page can help catch the eye of busy program staff, college administrators, or policy makers.

When we say “one pager,” we mean a single-page document that summarizes evaluation data, findings, or recommendations. It’s generally a stand-alone document that supplements a longer report, dataset, or presentation.

One pagers are a great way to give your clients the information they need to make data-driven decisions. These summaries work well as companion documents for long reports or as highlight pieces for interim reports. We created a 10-step process to help facilitate the creation of a one pager. Additional materials are available, including detailed slides, grid layouts, videos, and more.

Ten-step process for creating a one pager:

1. Identify the audience

Be specific about who you are talking to and their information priorities. The content and layout of the document should be tailored to meet the needs of this audience.

2. Identify the purpose

Write a purpose statement that identifies why you are creating the one pager. This will help you decide what information to include or to exclude.

3. Prioritize your information

Categorize the information most relevant to your audience. Then rank each category from highest to lowest priority to help inform the layout of the document.

4. Choose a grid

Use a grid to intentionally organize elements visually for readers. Check out our free pre-made grids, which you can use for your own one pagers, and instructions on how to use them in PowerPoint (video).

5. Draft the layout

Print out your grid layout and sketch your design by hand. This will allow you to think creatively without technological barriers and will save you time.

6. Create an intentional visual path

Pay attention to how the reader’s eye moves around the page. Use elements like large numbers, ink density, and icons to guide the reader’s visual path. Keep in mind page symmetry and the need to balance visual density. For more tips, see Canva’s Design Elements and Principles.

7. Create a purposeful hierarchy

Use headings intentionally to help your readers navigate and identify the content.

8. Use white space

The brain subconsciously views content grouped together as a cohesive unit. Add white space to indicate that a new section is starting.

9. Get feedback

Run your designs by a colleague or client to help catch errors, note areas needing clarification, and ensure the document makes sense to others. You will likely need to go through a few rounds of feedback before the document is finalized.

10. Triple-check consistency

Triple-check, and possibly quadruple-check, for consistency of fonts, alignment, size, and colors. Style guides can be a useful way to keep track of consistency in and across documents. Take a look at EvaluATE’s style guide here.

The demand for one pagers is growing, and now you are equipped with the information you need to succeed in creating one. So, start creating your one pagers now!