Lyssa Wilson Becho

Research Associate, Western Michigan University

Lyssa is a research associate with EvaluATE and contributes to various projects, including the ATE annual survey report, survey snapshots, conference presentations, and blog posts. She is a student in the Interdisciplinary Ph.D. in Evaluation program at Western Michigan University and has worked on a range of evaluations, both large and small. Her interests lie in improving how evaluation is conducted through research on evaluation methods and theories, and in creating useful, understandable evaluation reports through data visualization.


Webinar: Evaluation: The Secret Sauce in Your ATE Proposal

Posted on July 3, 2019 in Webinars

Presenter(s): Emma Perk, Lyssa Wilson Becho, Michael Lesiecki
Date(s): August 21, 2019
Time: 1:00-2:30 p.m. Eastern
Recording: https://youtu.be/XZCfd7m6eNA

Planning to submit a proposal to the National Science Foundation’s Advanced Technological Education (ATE) program? Then this is a webinar you don’t want to miss! We will cover the essential elements of an effective evaluation plan and show you how to integrate them into an ATE proposal. We will also provide guidance on how to budget for an evaluation, locate a qualified evaluator, and use evaluative evidence to describe the results from prior NSF funding. Participants will receive the Evaluation Planning Checklist for ATE Proposals and other resources to help integrate evaluation into their ATE proposals.

An extended 30-minute Question and Answer session will be included at the end of this webinar. So, come prepared with your questions!

 

Resources:
Slides
External Evaluator Visual
External Evaluator Timeline
ATE Evaluation Plan Checklist
ATE Evaluation Plan Template
Guide to Finding and Selecting an ATE Evaluator
ATE Evaluator Map
Evaluation Data Matrix
NSF Evaluator Biosketch Template
NSF ATE Program Solicitation
Question and Answer Panel Recording

Webinar: Getting Everyone on the Same Page: Practical Strategies for Evaluator-Stakeholder Communication

Posted on May 1, 2019 in Webinars

Presenter(s): Kelly Robertson, Lyssa Wilson Becho, Michael Lesiecki
Date(s): May 22, 2019
Time: 1:00-2:00 p.m. Eastern
Recording: https://youtu.be/vld5Z9ZLxD4

To ensure high-quality evaluation, evaluators and project staff must collaborate on evaluation planning and implementation. Whether at the proposal stage or the official start of the project, a successful dialogue begins at the very first meeting between evaluators and project staff and continues throughout the evaluation. Intentional conversations and planning documents can help align expectations for evaluation activities, deliverables, and findings. In this webinar, participants will learn about innovative and practical strategies to improve communication between those involved in evaluation planning, implementation, and use. We will describe and demonstrate strategies developed from our own evaluation practice for

  • negotiating evaluation scope
  • keeping project staff up-to-date on evaluation progress and next steps
  • ensuring timely report development
  • establishing and maintaining transparency
  • facilitating use of evaluation results.

Resources:
Slides
Handouts

Blog: Repackaging Evaluation Reports for Maximum Impact

Posted on March 20, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Emma Perk, Managing Director, EvaluATE
Lyssa Wilson Becho, Research Manager, EvaluATE

Evaluation reports take a lot of time to produce and are packed full of valuable information. To get the most out of your reports, think about “repackaging” your traditional report into smaller pieces.

Repackaging involves breaking up a long-form evaluation report into digestible pieces to target different audiences and their specific information needs. The goals of repackaging are to increase stakeholders’ engagement with evaluation findings, increase their understanding, and expand their use.

Let’s think about how we communicate data to various readers. Bill Shander from Beehive Media created the 4×4 Model for Knowledge Content, which illustrates different levels at which data can be communicated. We have adapted this model for use within the evaluation field. As you can see below, there are four levels, and each has a different type of deliverable associated with it. We are going to walk through these four levels and how an evaluation report can be broken up into digestible pieces for targeted audiences.

Figure 1. The four levels of delivering evaluative findings (image adapted from Shander’s 4×4 Model for Knowledge Content).

The first level, the Water Cooler, is for quick, easily digestible pieces of data. The idea is to use a single piece of data from your report to intrigue viewers into wanting to learn more. Examples include a newspaper headline, a postcard, or a social media post. A social media post should include a graphic (photo or graph), a catchy title, and a link to the next communication level’s document. This information should be succinct and exciting. Use this level to catch the attention of readers who might not otherwise be invested in your project.

Figure 2. Example of social media post at the Water Cooler level.

The Café level allows you to highlight three to five key pieces of data that you really want to share. A Café level deliverable is great for busy stakeholders who need to know detailed information but don’t have time to read a full report. Examples include one-page reports, a short PowerPoint deck, and short briefs. Make sure to include a link to your full evaluation report to encourage the reader to move on to the next communication level.

Figure 3. One-page report at the Café level.

The Research Library is the level at which we find the traditional evaluation report. Deliverables at this level require the reader to have an interest in the topic and to spend a substantial amount of time to digest the information.

Figure 4. Full evaluation report at the Research Library level.

The Lab is the most intensive and involved level of data communication. Here, readers have a chance to interact with the data. This level goes beyond a static report and allows stakeholders to personalize the data for their own interests. For those with the expertise to create dashboards and interactive data displays, the Lab level is a great way to engage your audience and let readers manipulate the data to fit their needs.

Figure 5. Data dashboard example from Tableau Public Gallery (click image to interact with the data).

We hope this blog has sparked some interest in the different ways an evaluation report can be repackaged. Different audiences have different information needs and different amounts of time to spend reviewing reports. We encourage both project staff and evaluators to consider who their intended audience is and what would be the best level to communicate their findings. Then use these ideas to create content specific for that audience.

Blog: Using Think-Alouds to Test the Validity of Survey Questions

Posted on February 7, 2019 in Blog

Lyssa Wilson Becho, Research Associate, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Those who have spent time creating and analyzing surveys know that surveys are complex instruments that can yield misleading results when not well designed. A great way to test your survey questions is to conduct a think-aloud (sometimes referred to as a cognitive interview). A type of validity testing, a think-aloud asks potential respondents to read through a survey and discuss out loud how they interpret the questions and how they would arrive at their responses. This approach can help identify questions that are confusing or misleading to respondents, questions that take too much time and effort to answer, and questions that don’t seem to be collecting the information you originally intended to capture.

Distorted survey results generally stem from four problem areas associated with the cognitive tasks of responding to a survey question: failure to comprehend, failure to recall, problems summarizing, and problems reporting answers. First, respondents must be able to understand the question. Confusing sentence structure or unfamiliar terminology can doom a survey question from the start.

Second, respondents must have access to or be able to recall the answer. Problems in this area arise when questions ask for specific details from the distant past or when the respondent simply does not know the answer.

Third, respondents sometimes remember information in a different form than the survey asks for it. For example, respondents might remember what they learned in a program but be unable to assign that learning to a specific course. This might lead respondents to answer incorrectly or not at all.

Finally, respondents must translate the answer constructed in their heads to fit the survey response options. Confusing or vague answer formats can lead to unclear interpretation of responses. It is helpful to think of these four problem areas when conducting think-alouds.

Here are some tips for conducting a think-aloud to test surveys:

    • Make sure the participant knows the purpose of the activity is to have them evaluate the survey and not just respond to the survey. I have found that it works best when participants read the questions aloud.
    • If a participant seems to get stuck on a particular question, it might be helpful to probe them with one of these questions:
      • What do you think this question is asking you?
      • How do you think you would answer this question?
      • Is this question confusing?
      • What does this word/concept mean to you?
      • Is there a different way you would prefer to respond?
    • Remember to give the participant space to think and respond. It can be difficult to hold space for silence, but it is particularly important when asking for thoughtful answers.
    • Ask the participant reflective questions at the end of the survey. For example:
      • Looking back, does anything seem confusing?
      • Is there something in particular you hoped was going to be asked but wasn’t?
      • Is there anything else you feel I should know to truly understand this topic?
    • Perform think-alouds and revisions as an iterative process. This allows you to test the changes you make and confirm they resolve the problems originally identified.

Report: 2018 ATE Annual Survey

Posted on February 1, 2019 in Annual Survey

This report summarizes data gathered in the 2018 survey of ATE program grantees. Conducted by EvaluATE — the evaluation support center for the ATE program, located at The Evaluation Center at Western Michigan University — this was the 19th annual ATE survey. Included here are findings about ATE projects and the activities, accomplishments, and impacts of the projects during the 2017 calendar year (2017 fiscal year for budget-related questions).

File: Click Here
Type: Report
Category: ATE Annual Survey
Author(s): Lori Wingate, Lyssa Becho

Webinar: Basic Principles of Survey Question Development

Posted on January 30, 2019 in Webinars

Presenter(s): Lori Wingate, Lyssa Wilson Becho, Mike Lesiecki
Date(s): February 20, 2019
Time: 1:00-2:00 p.m. Eastern
Recording: https://youtu.be/64nXDeRm-9c

Surveys are a valuable source of evaluation data. Obtaining quality data relies heavily on well-crafted survey items that align with the overall purpose of the evaluation. In this webinar, participants will learn fundamental principles of survey question construction to enhance the validity and utility of survey data. We will discuss the importance of considering data analysis during survey construction and ways to test your survey questions. Participants will receive an overview of survey do’s and don’ts to help apply fundamental principles of survey question development in their own work.

Resources:
Slides
Handout

Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Kelly Robertson, Principal Research Associate, The Evaluation Center
Lyssa Wilson Becho, Research Manager, EvaluATE

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Examples of items we like to include on our cheat sheets are shown in Figures 1-3 and include the following:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with a lot of data collection sources. The example shown in Figure 2 identifies who implements the instrument, when the instrument will be implemented, the purpose of the instrument, and the data source. It is helpful to identify who is responsible for data collection activities in the cheat sheet, so nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved. That way, if we need to update the plan, we can be sure we are working on the most recent version.

 

Figure 1. Cheat Sheet Example Page 1. (Click to enlarge.)

Figure 2. Cheat Sheet Example Page 2. (Click to enlarge.)

Figure 3. Cheat Sheet Example Page 3. (Click to enlarge.)

 

Webinar: Creating One-Page Reports

Posted on March 13, 2018 in Webinars

Presenter(s): Emma Perk, Lyssa Becho
Date(s): April 18, 2018
Time: 1:00-2:00 p.m. Eastern
Recording: https://youtu.be/V2TBfz24RpY

One-page evaluation reports are a great way to provide a snapshot of a project’s activities and impact to stakeholders such as advisory groups, college administrators, and NSF program officers. Summarizing key evaluation facts in a format that is easily and quickly digestible engages the busy reader and can make your project stand out.

Although traditional, long-form evaluation reports are still an excellent way to distribute evaluation results, one-page reports increase engagement with, understanding of, and use of evaluation findings, both for the current grant and for leveraging results in potential follow-up proposals.

In this webinar, we will provide you with the tools and resources you need to create effective one-page reports and share some examples that have worked well in our practice.

Resources:
10 steps to creating one-page reports
One-page report worksheet
Slides
South Seattle One-Page Report