Blog: Tips for Building and Strengthening Stakeholder Relationships

Posted on November 23, 2020 by Valerie Marshall in Blog

Project Manager, EvaluATE at The Evaluation Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! I am Valerie Marshall. I work on a range of projects at The Evaluation Center, including EvaluATE, where I serve as the administrator and analyst for the annual ATE Survey.

A cornerstone of evaluation is working with stakeholders. Stakeholders are individuals or groups who are part of an evaluation or are otherwise interested in its findings. They may be internal or external to the program being evaluated.

Stakeholders’ interests and involvement in evaluation activities may vary. But they are a key ingredient to evaluation success. They can provide critical insight into project activities and evaluation questions, serve as the gatekeepers to other stakeholders or data, and help determine if evaluation findings and recommendations are implemented.

Given their importance, identifying ways to build and nurture relationships with stakeholders is pivotal.

So the question is: how can you build relationships with evaluation stakeholders?

Below is a list of tips based on my own research and evaluation experience. This list is by no means exhaustive. If you are an ATE PI or evaluator, please join EvaluATE’s Slack community to continue the conversation and share some of your own tips!

Tip 1: Be intentional and adaptive about how you communicate. Not all stakeholders will prefer the same mode of communication, and how stakeholders want to communicate can change over the course of a project’s lifecycle. In my experience, using communication styles and tools that align with stakeholders’ needs and preferences often results in greater engagement. So, ask stakeholders how they would like to communicate at various points throughout your work together.

Tip 2: Build rapport. ATE evaluator and fellow blogger George Chitiyo previously noted that building rapport with stakeholders can make them feel valued and, in turn, help lead to quality data. Rapport is defined as a friendly relationship that makes communication easier (Merriam-Webster). Chatting during “down time” in a videoconference, sharing helpful resources, and mentioning a lighthearted story are great ways to begin fostering a friendly relationship.

Tip 3: Support and maintain transparency. Communicate with stakeholders about what is being done, when, and why. This not only reduces confusion but also facilitates trust. Trust is pivotal to building  productive, healthy relationships with stakeholders. Providing project staff with a timeline of research or evaluation activities, giving regular progress updates, and meeting with stakeholders one-on-one or in small groups to answer questions or address concerns are all helpful ways to generate transparency.

Tip 4: Identify roles and responsibilities. When stakeholders know what is expected of them and how they can and cannot contribute to different aspects of a research or evaluation project, they can engage in a more meaningful way. The clarity generated from the process of outlining the roles and responsibilities of both stakeholders and research and evaluation staff can help reduce misunderstandings. At the beginning of a project, and as new staff and stakeholders join the project, make sure to review roles and expectations with everyone.

Blog: Four Warning Signs of a Dusty Shelf Report

Posted on November 11, 2020 in Blog

Data Visualization Designer, Depict Data Studio, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Four Warning Signs of a Dusty Shelf Report

When was the last time your report influenced real-life decision-making?

I used to write lengthy reports filled with statistical jargon. Important information sat around and gathered dust.

Now I design reports people actually want to read. Fewer paragraphs. More visuals. My audience can understand the information, so the data actually gets used.

What Reports Are Supposed to Do Every Single Time

Maybe a policy maker voted differently after reading your report…

Maybe your board of directors changed your programming after reading your report…

Maybe your supervisor adjusted your budget or staffing based on your findings…

Maybe your stakeholder group formed a task force to fix the issues brought up by your report…

We’ve all had successes here and there.

But does this happen every single time?

Four Red Flags to Watch For

A dusty shelf report is a report that people refuse to read. Or they glance at it once, don’t read it all the way through, and then repurpose it as a dust collector. Here are four signs that you’ve got a dusty shelf report on your hands. (Or a dusty dashboard, dusty infographic, or dusty slideshow. Watch for these red flags with all dissemination formats.)

1. No Response

You email your report to the recipient. Or you post it on your website.

You don’t get any response. The silence is deafening.

2. A Promise to Follow Up Later

You email your report to the recipient. They respond!

But the response is, “Thanks. We received the report. We’ll follow up later if we have any questions.”

This is not use! This is not engagement! We can do better.

3. “Compliments”

You email your report to the recipient. They respond!

But the response is, “Thanks. We received the report. We’ll follow up later if we have any questions. I can tell that a really technical team worked on this report.”

Yikes… that “compliment” is a red flag.

I used to hear this a lot. I thought, “Wow, they must’ve checked out our LinkedIn profiles! They can tell that our entire team has master’s degrees and Ph.D.s! They know we speak at national conferences!”

Later, I realized the reader was (kindly) mentioning our statistical jargon.

Watch for this one. It’s a red flag in disguise.

4. Won’t Read It

The recipients flat-out say, “We’re not going to read this.”

Sometimes, this red flag is expressed as a request for another format:

“Do you happen to have an infographic?” Red flag.

“Do you happen to have a slideshow?” Red flag.

I’ve seen this with several government agencies over the past couple of years. They explicitly require a two-pager in addition to the technical report. They recognize that the traditional format doesn’t meet their needs.

How to Transform Your Reports

If you’ve experienced any of the red flags, you’ve got a dusty shelf report on your hands.

But there’s hope! Dusty shelf reports aren’t inevitable.

Want to start transforming your reports? Try the 30-3-1 approach to reporting, use data storytelling, or get better at translating technical information for non-technical audiences. Our courses on these and other data visualization topics will help you soar beyond the dusty shelf report.

Blog: Strategies for Communicating in Virtual Settings

Posted on October 21, 2020 by Ouen Hunter and Jeffrey Hillman in Blog

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Ouen Hunter, Doctoral Student, The Evaluation Center
Jeffrey Hillman, Doctoral Student, The Evaluation Center

We are Ouen and Jeffrey, the authors of the recently published resource “Effective Communication Strategies for Interviews and Focus Groups.” Thank you to everyone who provided feedback. During the review, we noticed a need to address strategies for conducting online interviews and focus groups.

Your interview environment can promote sharing of stories or deter it. Here are some observations we find helpful to improve communication in virtual settings:

1. Keep your video on, but do not require this of your interviewees. People feel more at ease sharing their stories if they can see the person receiving their information.

2. Keep your background clear of clutter! If this is not an option, test out a neutral virtual background or use a high-quality photo of an uncluttered space of your choice. For example, your office space as a picture background provides a personalized yet professional touch to your virtual setting. Be warned that virtual backgrounds can cut certain body parts out! Test the background, and plan your outfits accordingly (don’t wear green!).

3.  Exaggerate your nonverbal expressions a little to ensure that you are not interrupting the people sharing their stories. Additionally, typical verbal cues of attentiveness can cause delays and skips in a virtual setting. Show your attentiveness by nodding a few times purposefully for affirmations instead of saying “Yes” or “Agreed.” Move your body every now and then to assure people that you are listening and have not lost your internet connection.

4. If you have books in the background, turn the spines of the books away. The titles of the books can be distracting and can communicate unintended messages to the interviewees. More importantly, certain book titles can be trauma triggers. If you want to include decorations, use plants. Additionally, you can place your camera facing the corner of a room to provide visual depth.

5. Be in a quiet room free of other people or pets. Noise and movement can distract your participants from concentrating on the interview.

6. Be sure you have good lighting. People depend on your facial expressions for communication. Face a window (do not have the window behind you), or use lamps or selfie rings if you need additional light.

7. On video calls, most people naturally tend to look at the person’s image. So, it’s important to arrange your camera at the proper angle to see the participants on your screen.

On a laptop, place the laptop camera or separate webcam at eye level; this can be accomplished by using a stand or even a stack of books. Tilt the camera down approximately 30 degrees and keep it about an arm’s length away from you. Experiment with the angle to achieve a more natural appearance.

If you use a monitor with a webcam, place the webcam at eye level, tilted down approximately 30 degrees and about an arm’s length away from you. If needed, you can use a small tripod.

Whatever your arrangement, keeping the participant’s picture on the screen close to the camera will remind you where to look.

8. If possible, use a separate webcam, microphone, and headset. A pre-installed webcam generally has a lower resolution than a separate webcam.

Using a separate microphone will provide clearer speech, and a separate set of headphones will help you hear better. Compare the laptop microphone recording with the separate condenser microphone recording (audio samples below).

Be sure to place the microphone away from view so the microphone does not block the view of your face. Using a plug-in headset instead of a Bluetooth headset will ensure you do not run out of battery.

[Audio samples: pre-installed laptop microphone vs. separate condenser microphone]

HOT TIP: Try out the office setup described above for your next online interview or focus group!

We would love to hear from you regarding tips that we could not cover in this blog!

Ouen Hunter: Ouen.C.Hunter@wmich.edu
Jeffrey Hillman: Jeffrey.A.Hillman@wmich.edu

Blog: Examining STEM Participation Through an Equity Lens: Measurement and Recommendations

Posted on October 14, 2020 in Blog

Director of Evaluation Services, Higher Ed Insight

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Hey there—my name is Tashera, and I’ve served as an external evaluator for dozens of STEM interventions and innovations. I’ve learned that a primary indicator of program success is recruitment of learners to participate in project activities.

Given that this metric is foundational to most evaluations, measurement of this outcome is rarely thought to be a challenge. A simple count of learners enrolled in programming provides information about participants rather easily.

However, this tells a very limited story. As many of us know, a major priority of STEM initiatives is to broaden participation to be more representative of diverse populations—particularly among groups historically marginalized. As such, we must move beyond reporting quantitative metrics as collectives and instead shift towards disaggregation by student demographics.

This critical analytical approach lets us identify where potential disparities exist. And it can help transform evaluation from a passive system of assessment into a mechanism that helps programs reach more equitable outcomes.
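
For those of us who work with participant data in code, that shift is straightforward to operationalize. Below is a minimal sketch of disaggregating enrollment counts, assuming a pandas DataFrame of participant records; the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical participant records; column names and values are illustrative only.
participants = pd.DataFrame({
    "student_id": range(1, 7),
    "race_ethnicity": ["Black", "White", "Latina/o", "White", "Black", "White"],
    "gender": ["F", "M", "F", "F", "M", "M"],
})

# The simple count: total learners enrolled in programming.
print("Total enrolled:", len(participants))

# The fuller story: enrollment disaggregated by student demographics.
by_group = (
    participants
    .groupby(["race_ethnicity", "gender"])
    .size()
    .rename("enrolled")
    .reset_index()
)
by_group["share_of_enrollment"] = by_group["enrolled"] / len(participants)
print(by_group)
```

Pairing the disaggregated counts with a reference distribution, such as the demographics of the eligible student body, turns a routine enrollment metric into evidence about equity of access.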

Moreover, program implementation efforts must be deliberate. Activities must be intentionally designed to reach and support populations disproportionately underrepresented within STEM. We can aid this process in our role as evaluators. I would even go so far as to argue that it is our responsibility—as stipulated by AEA’s Guiding Principles for Evaluators—to do so.

During assessment, make it a practice to examine whether program efforts are equitable, inclusive, and accessible. If you find that clients are experiencing challenges relating to locating or recruiting diverse students, the following recommendations can be provided during formative feedback:

  1. Go to the target population: “Traditional” marketing and outreach strategies that have been used time and time again won’t attract the diverse learners you are seeking—otherwise, there wouldn’t be such a critical call for broadened STEM participation today. You can, however, successfully reach these students if you go where they are.

a. Looking for Black, Latino, or female students to partake in your innovative engineering or IT program? Try reaching out to professional campus-based STEM organizations (e.g., National Society of Black Engineers, Black and Latinx Information Science and Technology Support, Women in Science and Engineering).

b. Numerous organizations on college campuses serve the students you are seeking to engage.

          • Locate culture-based organizations: the National Pan-Hellenic Council, National Association of Latino Fraternal Organizations, National Black Student Union, or Latino Student Council.
          • Leverage programs that support priority student groups (e.g., first-generation, low-income, students with disabilities): Higher Education Opportunity Program, Student Support Services, or Office for Students with Disabilities.

2. Cultural responsiveness must be embedded throughout the program’s design.

a. Make sure that implementation approaches—including recruitment—and program materials (e.g., curriculum, marketing and outreach) are culturally responsive, interventions are culturally relevant, and staff are culturally sensitive.

b. Ensure staff diversity at all levels of leadership (e.g., program directors and staff, faculty, mentors).

Students are more likely to participate and persist when they feel they belong, which at a minimum means seeing themselves represented across a program’s spectrum.

As an evaluation community, we cannot allow the onus of equitable STEM opportunity to be placed solely on programs or clients. A lens of equity must also be deeply embedded throughout our evaluation approach, including during analyses and recommendations. It is this shift in paradigm—a model of shared accountability—that allows for equitable outcomes to be realized.


Blog: Bending Our Evaluation and Research Studies to Reflect COVID-19

Posted on September 30, 2020 in Blog

CEO and President, CSEdResearch.org

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting education research and evaluation during the season of COVID-19 may make you feel like you are the lone violinist playing tunes on the deck of a sinking ship. You desperately want to continue your research, which is important and meaningful to you and to others. You know your research contributes to important advances in the large mural of academic achievement among student learners. Yet reality has derailed many of your careful plans.

 If you are able to continue your research and evaluation in some capacity, attempting to shift in a meaningful way can be confusing. And if you are able to continue collecting data, understanding how COVID-19 affects your data presents another layer of challenges.

In a recent discussion with other K–12 computer science evaluators and researchers, I learned that some were rapidly developing scales to better understand how COVID-19 has impacted academic achievement. In their generous spirit of sharing, these collaborators have shared scales and items they are using, including two complete surveys, here:

  • COVID-19 Impact Survey from Panorama Education. This survey considers the many ways (e.g., well-being, internet access, engagement, student support) in which the shift to distance, hybrid, or in-person learning during this pandemic may be impacting students, families, and teachers/staff.
  • Parent Survey from Evaluation by Design. This survey is designed to measure environment, school support, computer availability and learning, and other concerns from the perspective of parents.

These surveys are designed to measure critical aspects within schools that are being impacted by COVID-19. They can provide us with information needed to better understand potential changes in our data over the next few years.

One of the models I’ve been using lately is the CAPE Framework for Assessing Equity in Computer Science Education, recently developed by Carol Fletcher and Jayce Warner at the University of Texas at Austin. This framework measures capacity, access, participation, and experiences (CAPE) in K–12 computer science education.

Figure 1. Image from https://www.tacc.utexas.edu/epic/research. Used with permission. From Fletcher, C. L., & Warner, J. R. (2019). Summary of the CAPE Framework for Assessing Equity in Computer Science Education.


Although this framework was developed for use in “good times,” we can use it to assess current conditions by asking how COVID-19 has impacted each of the critical components of CAPE needed to bring high-quality computer science learning experiences to underserved students. For example, if computer science is classified as an elective course at a high school, and all electives are cut for the 2020–21 academic year, this will have a significant impact on access for those students.

The jury is still out on how COVID-19 will impact students this year, particularly minoritized and low-socio-economic-status students, and how its lingering effects will change education. In the meantime, if you’ve created measures to understand COVID-19’s impact, consider sharing those with others. It may not be as meaningful as sending a raft to a violinist on a sinking ship, but it may make someone else’s research goals a bit more attainable.

(NOTE: If you’d also like your instruments/scales related to COVID-19 shared in our resource center, please feel free to email them to me.)

Blog: Shorten the Evaluation Learning Curve: Avoid These Common Pitfalls*

Posted on September 16, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This EvaluATE blog is focused on getting started with evaluation. It’s oriented to new ATE principal investigators who are getting their projects off the ground, but I think it holds some good reminders for veteran PIs as well. To shorten the evaluation learning curve, avoid these common pitfalls:

Searching for the truth about “what NSF wants from evaluation.” NSF is not prescriptive about what an ATE evaluation should or shouldn’t look like. So, if you’ve been concerned that you’ve somehow missed the one document that spells out exactly what NSF wants from an ATE evaluation—rest assured, you haven’t overlooked anything. But there is information that NSF requests from all projects in annual reports and that you are asked to report on the annual ATE survey. So it’s worthwhile to preview the Research.gov reporting template (bit.ly/nsf_prt) and the ATE annual survey questions (bit.ly/ATEsurvey16). And if you’re doing research, be sure to review the Common Guidelines for Education Research and Development – which are pretty cut-and-dried criteria for different types of research (bit.ly/cg-checklist). Most importantly, put some time into thinking about what you, as a project leader, need to learn from the evaluation. If you’re still concerned about meeting expectations, talk to your program officer.

Thinking your evaluator has all the answers. Even for veteran evaluators, every evaluation is new and has to be tailored to context. Don’t expect your evaluator to produce a detailed, actionable evaluation plan on Day 1. He or she will need to work out the details of the plan with you. And if something doesn’t seem right to you, it’s OK to ask for something different.

 Putting off dealing with the evaluation until you are less busy. “Less busy” is a mythical place and you will probably never get there. I am both an evaluator and a client of evaluation services, and even I have been guilty of paying less attention to evaluation in favor of “more urgent” matters. Here are some tips for ensuring your project’s evaluation gets the attention it needs: (a) Set a recurring conference call or meeting with your evaluator (e.g., every two to three weeks); (b) Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation matters; (c) Give someone other than the PI responsibility for attending to the evaluation—not to replace the PI’s attention, but to ensure the PI and other project members are staying on top of the evaluation and communicating regularly with the evaluator; (d) Commit to using the evaluation results in a timely way—if you do something on a recurring basis, make sure you gather feedback from those involved and use it to improve the next activity.

Assuming you will need your first evaluation report at the end of Year 1. PIs must submit their annual reports to NSF 90 days prior to the end of the current budget period. So if your grant started on September 1, your first annual report is due around June 1. And it will take some time to prepare, so you should probably start writing in early May. You’ll want to include at least some of your evaluation results, so start working with your evaluator now to figure out what information is most important to collect right now.

Veteran PIs: What tips do you have for shortening the evaluation learning curve?  Submit a blog to EvaluATE and tell your story and lessons learned for the benefit of new PIs.

*This blog is a reprint of a 2015 newsletter article.

Blog: Making the Most of Virtual Conferences: An Exercise in Evaluative Thinking

Posted on September 2, 2020 by Lyssa Becho in Blog

Research Associate, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We at EvaluATE affectionately call the fall “conference season.” Both the ATE PI Conference and the American Evaluation Association’s annual conference usually take place between October and November every year. This year, both conferences will be virtual events. Planning how our project will engage in this new virtual venue got me thinking: What makes a virtual conference successful for attendees? What would make a virtual conference successful for me?

I started by considering what makes an in-person conference successful, and I quickly realized that this was an exercise in evaluative thinking. The concept of evaluative thinking has been defined in a variety of ways—as a “type of reflective practice” (Baker & Bruner, 2012, p. 1), a combination of “critical thinking, creative thinking, inferential thinking, and practical thinking” (Patton, 2018, p. 21), and a “problem-solving approach” (Vo, 2013, p. 105). In this case, I challenged myself to consider what my personal evaluation criteria would be for a successful conference and what my ideal outcomes would look like.

In my reflection process, I came up with a list of key outcomes for attending a conference. Specifically, at conferences, I hope to:

  • build new relationships with peers;
  • grow relationships with existing partners;
  • learn about new trends in research and practice;
  • learn about future research opportunities (places I might be able to fill in the gaps); and
  • feel part of a community and re-energized about my work.

I realized that many of these outcomes are typically achieved through happenstance. For example, at previous conferences, most of my new relationships with peers occurred because of a hallway chat or because I sat next to someone in a session and we struck up a conversation and exchanged information. It’s unlikely these situations would occur organically in a virtual conference setting. I would need to be intentional about how I participated in a virtual conference to achieve the same outcomes.

I began to work backwards to determine what actions I could take to ensure I achieved these outcomes in a virtual conference format. In true evaluator fashion, I constructed a logic model for my virtual conference experience (shown in Figure 1). I realized I needed to identify specific activities—agreements with myself—to get the most out of the experience and have a successful virtual conference.

For example, one of my favorite parts of a conference is feeling like I am part of a larger community and becoming re-energized about my work. Being at home, it can be easy to become distracted and not fully engage with the virtual platform, potentially threatening these important outcomes. To address this, I have committed to blocking off time on my schedule during both conferences to authentically engage with the content and attendees.

How do you define a successful conference? What outcomes do you want to achieve in upcoming conferences that have gone virtual? While you don’t have to make a logic model out of your thoughts, I would challenge you to think evaluatively about upcoming conferences, asking yourself what you hope to achieve and how you can ensure that it happens.

Figure 1. Lyssa’s Logic Model to Achieve a Successful Virtual Conference


Blog: Designing Accessible Digital Evaluation Materials

Posted on August 19, 2020 by Don Glass in Blog

Developmental Evaluator

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

* This blog was originally published on AEA365 on July 23, 2020:
https://aea365.org/blog/designing-accessible-digital-evaluation-materials-by-don-glass/


Hi, I am Don Glass, a DC-based developmental evaluator, learning designer, and proud member of the AEA Disabilities and Underrepresented Populations TIG.

COVID-19 has increased our reliance on, and maybe fast-tracked, our use of digital and online communication to serve our diverse evaluation clients and audiences. This is an opportunity to push our evaluation communication design to the next level. Just like AEA members enthusiastically embraced Stephanie Evergreen’s and Sheila Robinson’s contributions to Potent Presentations and established a flourishing Data Visualization TIG, we can now integrate inclusive design routines into our communication practice!

Being inclusive is part of the AEA mission, and for some of us a legal duty: we must make sure that our digital communications are barrier-free and accessible to all. This article is a quick reference guide to design considerations for digital communication like AEA365 blogs, social media, online webinars/courses, virtual conference presentations, and evaluation reports—any digital content, really, that uses text, images, and media.

The evaluation field has a solid foundation in its literature to guide inclusive evaluation thinking and design. Donna M. Mertens’s 1999 AEA Presidential Address crystallized the rationale for inclusive approaches to evaluation. In 2011, Jennifer Sulewski and June Gothberg first developed a Universal Design for Evaluation Checklist to help evaluators systematically think about the inclusive design of all aspects of their evaluation practice. The guidance in this blog focuses on:

Principle 4: Perceptible Information. The design communicates necessary information effectively to the user, regardless of ambient conditions or the user’s sensory abilities.

Social Media Accessibility: Plain language, CamelCase Hashtags, Image Descriptions, Captioning and Audio, Link Shorteners

Hot Tips

Text: Provide supports to access this primary form of content and navigate its organization.

  • Structured Text: Use headers and bulleted/numbered lists. Think about reading order.
  • Fonts and Font Size: Make text large and legible enough to easily read. Avoid serif fonts.
  • Colors and Contrast: Make sure text and background are not too similar. Consider a contrast checker tool (see the sketch after this list).
  • Descriptive Hyperlinks: Embed links in text that describe the destination. Remember, links should look like links.
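
On the contrast point, a checker tool is essentially applying the WCAG contrast-ratio calculation. Here is a minimal Python sketch of that calculation; the hex colors are just examples, and dedicated tools will also report the separate thresholds for large text and non-text elements.

```python
def _linearize(channel: float) -> float:
    """Convert an sRGB channel in [0, 1] to linear light, per the WCAG definition."""
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a color given as a hex string like '#336699'."""
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(foreground: str, background: str) -> float:
    """WCAG contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Example: dark gray text on a white background.
ratio = contrast_ratio("#333333", "#FFFFFF")
print(f"Contrast ratio: {ratio:.2f}:1")              # roughly 12.6:1
print("Passes WCAG AA for normal text:", ratio >= 4.5)
```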

Images: Provide a barrier-free and purposeful use of images beyond aesthetics.

  • Alternative Text: Write a short description of an image’s content and function; it is read by screen readers, web browsers, and search engines (see the sketch after this list).
  • Accessible Images: Select or design images and diagrams to enhance comprehension and communication.
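
If your materials are exported as HTML, you can also flag missing alternative text automatically. Here is a minimal sketch using Python’s built-in html.parser; the sample HTML string is hypothetical, and note that decorative images may legitimately carry an empty alt="".

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the sources of <img> tags that have no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            if "alt" not in attr_dict:
                self.missing.append(attr_dict.get("src", "<unknown source>"))

# Hypothetical report snippet; in practice, feed in your exported HTML.
sample_html = (
    '<p>Findings</p>'
    '<img src="enrollment_chart.png" alt="Enrollment by year, 2016-2020">'
    '<img src="logo.png">'
)

checker = MissingAltChecker()
checker.feed(sample_html)
print("Images missing alt text:", checker.missing)  # ['logo.png']
```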

Media: Provide supports to make media content accessible and searchable.

  • Closed Captioning: Make text versions of the spoken word presented in multimedia. Consider auto-captioning on YouTube.
  • Transcripts: Make a full text version of spoken word presented in multimedia. Explore searching transcripts as a way of navigating media.
  • Audio Description: A narration that describes visual-only content in media. Check out examples of Descriptive Video Service on your streaming service.


Blog: Quick Reference Guides Evaluators Can’t Live Without

Posted on August 5, 2020 by Kelly Robertson in Blog

Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

* This blog was originally published on AEA365 on May 15, 2020:
https://aea365.org/blog/quick-reference-guides-evaluators-cant-live-without-by-kelly-robertson/

My name is Kelly Robertson, and I work at The Evaluation Center at Western Michigan University and EvaluATE, the National Science Foundation–funded evaluation hub for Advanced Technological Education.

I’m a huge fan of quick reference guides. Quick reference guides are brief summaries of important content that can be used to improve practice in real time. They’re also commonly referred to as job aids or cheat sheets.

I found quick reference guides to be especially helpful when I was just learning about evaluation. For example, Thomas Guskey’s Five Critical Levels of Professional Development Evaluation helped me learn about different levels of outcomes (e.g., reaction, learning, organizational support, application of skills, and target population outcomes).

Even with 10-plus years of experience, I still turn to quick reference guides every now and then. Here are a few of my personal favorites:

My colleague Lyssa Becho is also a huge fan of quick reference guides, and together we compiled a list of over 50 evaluation-related quick reference guides. The list draws on the results from a survey we conducted as part of our work at EvaluATE. It includes quick reference guides that 45 survey respondents rated as most useful for each stage of the evaluation process.

Here are some popular quick reference guides from the list:

  • Evaluation Planning: Patton’s Evaluation Flash Cards introduce core evaluation concepts such as evaluation questions, standards, and reporting in an easily accessible format.
  • Evaluation Design: Wingate’s Evaluation Data Matrix Template helps evaluators organize information about evaluation indicators, data collection sources, analysis, and interpretation.
  • Data Collection: Wingate and Schroeter’s Evaluation Questions Checklist for Program Evaluation provides criteria to help evaluators understand what constitutes high-quality evaluation questions.
  • Data Analysis: Hutchinson’s You’re Invited to a Data Party! explains how to engage stakeholders in collective data analysis.
  • Evaluation Reporting: Evergreen and Emery’s Data Visualization Checklist is a guide for the development of high-impact data visualizations. Topics covered include text, arrangement, color, and lines.

If you find that any helpful evaluation-related quick reference guides are missing from the full collection, please contact kelly.robertson@wmich.edu.

Blog: Shift to Remote Online Work: Assets to Consider

Posted on July 22, 2020 in Blog

Principal Partner, Education Design, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m the principal partner of Education Design in Boston, focusing on STEM program evaluation. I first engaged in online instruction and design in 1994 with CU-SeeMe, a very early desktop videoconferencing app (without audio… that came in 1995!). While I’m certainly no expert in online learning, I’ve observed this newly accelerated shift toward virtual learning for several decades.

During 2020 we’ve seen nearly all of our personal and professional meetings converted to online interactions. In education this has been both challenging and illuminating. For decades, many in our field have planned and designed for the benefits online and digital learning might offer, often with predictive optimism. Clearly the future we anticipated is upon us.

Here, I want to identify some of the key assets and benefits of online and remote learning. I don’t intend to diminish the value of in-person human contact, but rather to help projects thrive in the current environment.

More Embrace than Rejection of Virtual

In nearly all our STEM learning projects, I’ve noticed far more embrace than rejection of virtual learning and socializing spaces.

In one project with partner colleges located in different states, online meetings and remote professional training were part of the original design. Funded in early 2020, the work has begun seamlessly, pandemic notwithstanding, owing to the colleges’ commitment to remote sharing and learning. These partners, leaders from a previous ATE project, will now become mentors for technical college partners, and that work will most likely be done remotely as well.

While forced to change approaches and learning modes, these partners haven’t just accepted remote interactions. Rather than focus on what is missing (site visits will not occur at this time), they’re actively seeking to understand the benefits and assets of connecting remotely.

“Your Zoom face is your presence”

Opportunities of the Online Context

  1. Videoconferencing presents some useful benefits: facial communication enables trust and human contact. Conversations flow more easily. Chat text boxes provide a platform for comments and freeform notes, and most platforms allow recording of sessions for later review. In larger meetings, group breakout functionality helps facilitate smaller sub-sessions.
  2. Online, sharing and retaining documents and artifacts becomes part of the conversation without depending on the in-person promise to “email it later.”
  3. There is an inherent scalability to online models, whether for instructional activities, such as complete courses or teaching examples, or for materials.
  4. It’s part of tomorrow’s landscape, pandemic or not. Online working, learning, and sharing has leapt forward out of necessity. It’s highly likely that when we return to a post-virus environment, many of the online shifts that have shown value and efficiency will remain in schools and the workforce, leading toward newer hybrid models. If you’re part of the development now, you’re better positioned for those changes.

Tip

As an evaluator, my single most helpful action has been to attend more meetings and events than originally planned, engaging with the team more, building the trust necessary to collect quality data. Your Zoom face is your presence.

Less Change than You’d Think

In most projects, some recalibration has been necessary, but you’d be surprised how few changes may be required to continue your project work successfully in this new context. Often, a change of perspective is enough.