Lyssa Wilson Becho

Research Associate, Western Michigan University

Lyssa leads the training elements of EvaluATE, including webinars, workshops, resources, and evaluation coaching. She also works with Valerie on strategy and reporting for the ATE annual survey. Lyssa is a senior research associate at The Evaluation Center at Western Michigan University and co-principal investigator for EvaluATE. She holds a Ph.D. in evaluation and has 7 years of experience conducting evaluations for a variety of local, national, and international programs.


Blog: Making the Most of Virtual Conferences: An Exercise in Evaluative Thinking

Posted on September 2, 2020 in Blog


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We at EvaluATE affectionately call the fall “conference season.” Both the ATE PI Conference and the American Evaluation Association’s annual conference usually take place between October and November every year. This year, both conferences will be virtual events. Planning how our project will engage in this new virtual venue got me thinking: What makes a virtual conference successful for attendees? What would make a virtual conference successful for me?

I started by considering what makes an in-person conference successful, and I quickly realized that this was an exercise in evaluative thinking. The concept of evaluative thinking has been defined in a variety of ways—as a “type of reflective practice” (Baker & Bruner, 2012, p. 1), a combination of “critical thinking, creative thinking, inferential thinking, and practical thinking” (Patton, 2018, p. 21), and a “problem-solving approach” (Vo, 2013, p. 105). In this case, I challenged myself to consider what my personal evaluation criteria would be for a successful conference and what my ideal outcomes would look like.

In my reflection process, I came up with a list of key outcomes for attending a conference. Specifically, at conferences, I hope to:

  • build new relationships with peers;
  • grow relationships with existing partners;
  • learn about new trends in research and practice;
  • learn about future research opportunities (places I might be able to fill in the gaps); and
  • feel part of a community and re-energized about my work.

I realized that many of these outcomes are typically achieved through happenstance. For example, at previous conferences, most of my new relationships with peers occurred because of a hallway chat or because I sat next to someone in a session and we struck up a conversation and exchanged information. It’s unlikely these situations would occur organically in a virtual conference setting. I would need to be intentional about how I participated in a virtual conference to achieve the same outcomes.

I began to work backwards to determine what actions I could take to ensure I achieved these outcomes in a virtual conference format. In true evaluator fashion, I constructed a logic model for my virtual conference experience (shown in Figure 1). I realized I needed to identify specific activities—agreements with myself—to get the most out of the experience and have a successful virtual conference.

For example, one of my favorite parts of a conference is feeling like I am part of a larger community and becoming re-energized about my work. Being at home, it can be easy to become distracted and not fully engage with the virtual platform, potentially threatening these important outcomes. To address this, I have committed to blocking off time on my schedule during both conferences to authentically engage with the content and attendees.

How do you define a successful conference? What outcomes do you want to achieve in upcoming conferences that have gone virtual? While you don’t have to make a logic model out of your thoughts, I challenge you to think evaluatively about upcoming conferences, asking yourself what you hope to achieve and how you can ensure that it happens.

Figure 1. Lyssa’s Logic Model to Achieve a Successful Virtual Conference


Webinar: How to Avoid Common Pitfalls When Writing Evaluation Plans for ATE Proposals

Posted on July 28, 2020 in Webinars

Presenter(s): Anastasia Councell, Emma Leeburg, Lyssa Wilson Becho
Date(s): August 19, 2020
Time: 1 p.m. – 2 p.m. Eastern
Recording: https://youtu.be/LTMShY2tM0o

Join this webinar to learn what pitfalls to watch out for when writing evaluation plans for grant proposals! In this webinar, we will share some of the biggest mistakes made in evaluation plans for ATE proposals and how to fix them. This webinar will go beyond EvaluATE’s current checklist for writing evaluation plans to highlight the good and the bad from real-world examples. Grant writers, project staff, and evaluators are encouraged to attend! Those completely new to grant writing may want to review the basic elements of an evaluation plan in our short video series prior to attending this webinar.

Resources:
Slides
Toolkit for Writing Evaluation Plans for ATE Proposals
Blog: Kirkpatrick Model for ATE Evaluation
Blog: Three Questions to Spur Action from Your Evaluation Report
Video Series: Evaluation: The Secret Sauce

Webinar: Adapting Evaluations in the Era of Social Distancing

Posted on April 27, 2020 in Webinars

Presenter(s): Anastasia Councell, Lyssa Wilson Becho, Michael Lesiecki
Date(s): May 27, 2020
Time: 1:00 p.m. – 2:00 p.m. Eastern
Recording: https://youtu.be/Ylo9p111Mcc

As we continue to social distance to keep our communities safe, evaluators and project stakeholders must think about and conduct evaluations in new ways. In this webinar, we will share 10 strategies for adapting to this new evaluation reality. These strategies will help participants rethink evaluation plans amidst project changes and disruptions, engage stakeholders virtually, and adapt to remote data collection. Participants will have a chance to hear from other evaluators and share their own successes and struggles with adjusting evaluation practices in the era of social distancing. This webinar will provide practical tools to apply to evaluation work during this time of uncertainty and change. 

Resources:
Slides
Chat Transcript
Handout
Additional Resources

Checklist: Communication Plan for ATE Principal Investigators and Evaluators

Posted on March 31, 2020 in Checklist

Creating a clear communication plan at the beginning of an evaluation can help project personnel and evaluators avoid confusion, misunderstandings, and uncertainty. The communication plan should be an agreement between the project’s principal investigator and the evaluator, and it should be followed by members of their respective teams. This checklist highlights the decisions that need to be made when developing a clear communication plan.

  • Designate one primary contact person from the project staff and one from the evaluation team. Clearly identify who should be contacted regarding questions, changes, or general updates about the evaluation. The project staff person should be someone who has authority to make decisions or approve small changes that might occur during the evaluation, such as the principal investigator or project manager.
  • Set up recurring meetings to discuss evaluation matters. Decide on the meeting frequency and platform for the project staff and evaluation team to discuss updates on the evaluation. These regular meetings should occur throughout the life of a project.
    • Frequency — At minimum, plan to meet monthly. Increase the frequency as needed to maintain momentum and meet key deadlines.
    • Platform — Real-time interaction via phone calls, web meetings, or in-person meetings will help ensure those involved give adequate attention to the matters being discussed. Do not rely on email or other asynchronous communication platforms.
    • Agenda — Tailor the agendas to reflect the aspects of the evaluation that need attention. In general, the evaluator should provide a status update, identify challenges, and explain what the project staff can do to facilitate the evaluation. The project staff should share important changes or challenges in the project, such as delays in timelines or project staff turnover. Conversations should close with clear action items and deadlines.
  • Agree on a process for reviewing and finalizing data collection instruments and procedures, and evaluation reports. Determine the project staff’s role in providing input on instruments (such as questionnaires or interview protocols), the mechanisms by which data will be collected, and reports. Establish a turnaround time for feedback, to avoid delays in implementing the evaluation.
  • Clarify who is responsible for disseminating reports. As a rule of thumb, responsibility and authority for distributing evaluation reports lie with the project’s principal investigator. Make it clear whether the evaluator may use the reports for their own purposes and under what conditions.

Downloads

Communication Checklist (PDF)

 

Webinar: Impact Evaluation: Why, What, and How

Posted on October 31, 2019 in Webinars

Presenter(s): Lyssa Wilson Becho, Michael Lesiecki
Date(s): December 11, 2019
Time: 1:00 pm – 2:00 pm EST
Recording: https://youtu.be/mRSoGtHQa7Q

Impact evaluation can be a powerful way to assess the long-term or broader effects of a project. Attention to causal inference, which attributes change to the project and its activities, sets impact evaluation apart from other types of evaluation. Impact evaluation can support deeper learning and direction for project scaling and future sustainability.

This webinar is an introduction to impact evaluation and how it can be realistically implemented in ATE projects. ATE principal investigators, project and center staff, and evaluators who attend this webinar will learn:
(1) the basic tenets of impact evaluation,
(2) strategies for determining causal attribution, and
(3) the resources needed to implement impact evaluation for your project.

Further Reading:
Impact Evaluation Video Series by UNICEF
Understanding Causes of Outcomes and Impacts by Better Evaluation
Strategies for Causal Attribution by Patricia Rogers and UNICEF
Establishing Cause and Effect by Web Center for Social Research Methods

Resources:
Slides
Three questions to determine causality handout

Webinar: Evaluation: The Secret Sauce in Your ATE Proposal

Posted on July 3, 2019 in Webinars

Presenter(s): Emma Perk, Lyssa Wilson Becho, Michael Lesiecki
Date(s): August 21, 2019
Time: 1:00 p.m. – 2:30 p.m. Eastern
Recording: https://youtu.be/XZCfd7m6eNA

Planning to submit a proposal to the National Science Foundation’s Advanced Technological Education (ATE) program? Then this is a webinar you don’t want to miss! We will cover the essential elements of an effective evaluation plan and show you how to integrate them into an ATE proposal. We will also provide guidance on how to budget for an evaluation, locate a qualified evaluator, and use evaluative evidence to describe the results from prior NSF funding. Participants will receive the Evaluation Planning Checklist for ATE Proposals and other resources to help integrate evaluation into their ATE proposals.

An extended 30-minute Question and Answer session will be included at the end of this webinar. So, come prepared with your questions!

 

Resources:
Slides
External Evaluator Visual
External Evaluator Timeline
ATE Evaluation Plan Checklist
ATE Evaluation Plan Template
Guide to Finding and Selecting an ATE Evaluator
ATE Evaluator Map
Evaluation Data Matrix
NSF Evaluator Biosketch Template
NSF ATE Program Solicitation
Question and Answer Panel Recording

Webinar: Getting Everyone on the Same Page: Practical Strategies for Evaluator-Stakeholder Communication

Posted on May 1, 2019 in Webinars

Presenter(s): Kelly Robertson, Lyssa Wilson Becho, Michael Lesiecki
Date(s): May 22, 2019
Time: 1:00-2:00 p.m. Eastern
Recording: https://youtu.be/vld5Z9ZLxD4

To ensure high-quality evaluation, evaluators and project staff must collaborate on evaluation planning and implementation. Whether at the proposal stage or the official start of the project, setting up a successful dialog begins at the very first meeting between evaluators and project staff and continues throughout the duration of the evaluation. Intentional conversations and planning documents can help align expectations for evaluation activities, deliverables, and findings. In this webinar, participants will learn about innovative and practical strategies to improve communication between those involved in evaluation planning, implementation, and use. We will describe and demonstrate strategies developed from our own evaluation practice for:

  • negotiating evaluation scope
  • keeping project staff up-to-date on evaluation progress and next steps
  • ensuring timely report development
  • establishing and maintaining transparency
  • facilitating use of evaluation results.

Resources:
Slides
Handouts

Checklist: Do’s and Don’ts: Basic Principles of Data Visualization

Posted on March 26, 2019 in Checklist

This quick guide covers 14 do’s and don’ts of data visualization. It is not intended to teach these principles but rather to serve as a quick reminder.

Type: Doc
Category: Reporting & Use
Author(s): Emma Leeburg, Lyssa Wilson Becho

Blog: Repackaging Evaluation Reports for Maximum Impact

Posted on March 20, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Emma Perk, Managing Director, EvaluATE
Lyssa Wilson Becho, Research Manager, EvaluATE

Evaluation reports take a lot of time to produce and are packed full of valuable information. To get the most out of your reports, think about “repackaging” your traditional report into smaller pieces.

Repackaging involves breaking up a long-form evaluation report into digestible pieces to target different audiences and their specific information needs. The goals of repackaging are to increase stakeholders’ engagement with evaluation findings, increase their understanding, and expand their use.

Let’s think about how we communicate data to various readers. Bill Shander from Beehive Media created the 4×4 Model for Knowledge Content, which illustrates different levels at which data can be communicated. We have adapted this model for use within the evaluation field. As you can see below, there are four levels, and each has a different type of deliverable associated with it. We are going to walk through these four levels and how an evaluation report can be broken up into digestible pieces for targeted audiences.

Figure 1. The four levels of delivering evaluative findings (image adapted from Shander’s 4×4 Model for Knowledge Content).

The first level, the Water Cooler, is for quick, easily digestible data pieces. The idea is to use a single piece of data from your report to intrigue viewers and make them want to learn more. Examples include a newspaper headline, a postcard, or a social media post. In a social media post, include a graphic (photo or graph), a catchy title, and a link to the next communication level’s document. This information should be succinct and exciting. Use this level to catch the attention of readers who might not otherwise be invested in your project.

Figure 2. Example of social media post at the Water Cooler level.

The Café level allows you to highlight three to five key pieces of data that you really want to share. A Café level deliverable is great for busy stakeholders who need to know detailed information but don’t have time to read a full report. Examples include one-page reports, a short PowerPoint deck, and short briefs. Make sure to include a link to your full evaluation report to encourage the reader to move on to the next communication level.

Figure 3. One-page report at the Café level.

The Research Library is the level at which we find the traditional evaluation report. Deliverables at this level require the reader to have an interest in the topic and to spend a substantial amount of time to digest the information.

Figure 4. Full evaluation report at the Research Library level.

The Lab is the most intensive and involved level of data communication. Here, readers have a chance to interact with the data. This level goes beyond a static report and allows stakeholders to personalize the data for their interests. For those with the knowledge and expertise to create dashboards and interactive data displays, providing data at the Lab level is a great way to engage your audience and let readers tailor the data to their own questions.
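For readers curious what a Lab-level deliverable might look like under the hood, here is a minimal sketch in Python using pandas and Plotly. It is illustrative only: the blog itself points to Tableau, and the file name and column names below are hypothetical placeholders, not data from any actual report.

# Minimal sketch of a Lab-level deliverable: an interactive chart exported as a
# standalone HTML file that stakeholders can open in a browser and explore.
# Assumes pandas and plotly are installed; the CSV file and column names are
# hypothetical placeholders.
import pandas as pd
import plotly.express as px

# Hypothetical evaluation data: enrollment counts by term and program
df = pd.read_csv("enrollment_by_term.csv")  # columns: term, program, enrollment

fig = px.bar(
    df,
    x="term",
    y="enrollment",
    color="program",
    barmode="group",
    title="Enrollment by Term and Program",
)

# Readers can hover for exact values and click the legend to isolate a program,
# personalizing the view to their own questions.
fig.write_html("enrollment_dashboard.html")

Publishing a single self-contained HTML file keeps a Lab-level piece nearly as easy to share as the static deliverables at the other levels, while still letting readers explore the data themselves.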

Figure 5. Data dashboard example from Tableau Public Gallery.

We hope this blog has sparked some interest in the different ways an evaluation report can be repackaged. Different audiences have different information needs and different amounts of time to spend reviewing reports. We encourage both project staff and evaluators to consider who their intended audience is and which level would best communicate their findings, then to use these ideas to create content specific to that audience.