Blog: Building ATE Social Capital Through Evaluation Activities

Posted on February 24, 2021 in Blog

President, Mullins Consulting, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Social networks have value. Social capital refers to the value of social networks, or whom people know, and the inclinations that arise from these networks to do things for each other. Thus, people benefit from the trust, reciprocity, information, and cooperation of these social networks” (Robert D. Putnam, Harvard Kennedy School of Government, 2018).

Within the context of “new-to-ATE” grants, many novice PIs have low social capital compared to more experienced PIs. New PIs are often not familiar with the norms of NSF grant proposal writing, reporting, and other communication; other PIs and collaborators in the community; and other elements that empower more experienced PIs. While proposal-writing mentoring programs are available, not all ATE applicants are granted this opportunity, and this mentoring typically ends once a program is funded.

The evaluator is in a unique position to strengthen social capital by offering new PIs access to their client pool of ATE grantees to facilitate networking and the sharing of information. Connections can be made through the evaluator, new knowledge shared, and relationships cultivated. Increasing access to networks and information can lead to stronger program implementation strategies as well as increased PI confidence in the process.

Here are three tips on when and how an evaluator can connect clients to each other.

1.     The First Six Months. My evaluation team continually discusses how the ATE programs we are evaluating might logically connect (e.g., discipline/area, program components). When a challenge arises, we see what connections can be made so that novice PIs have someone to use as a resource in navigating the challenge. Most experienced PIs are willing to share their experiences in order to help others.

2.     National ATE PI Conference. As a lead evaluator, I find time at the ATE conference to introduce clients to one another over coffee or before or after sessions they are attending. I check with clients beforehand to make sure they are interested in meeting and have time available. Most report that it is helpful to meet others in similar fields and to talk with each other about their programs.

3.     Year One Reporting Time. I have found that, traditionally, novice ATE PIs are very anxious about writing their first annual report to the NSF. To address this challenge, I established a meeting of new and more experienced PIs to discuss year one reporting. In the meeting, a seasoned PI presents how they approached first-year reporting and answers questions alongside a former NSF program officer who provides further guidance. The positive feedback from this meeting has been tremendous.

Connecting new PIs with more experienced PIs facilitates the growth of social capital, resulting in better collaborative inquiry, stronger networks, persistence with project implementation, and subsequent reporting of impact.

 

Reference:

Harvard Kennedy School of Government. (2018). Social Capital Primer. http://robertdputnam.com/bowling-alone/social-capital-primer/

 

Blog: Lessons Learned Moderating Virtual Focus Groups During a Pandemic

Posted on February 10, 2021 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

* This blog was originally published on AEA365 on December 22, 2020:
https://aea365.org/blog/lessons-learned-moderating-virtual-focus-groups-during-a-pandemic-by-jacob-schreiber-by-cha-chi-fung/

Hello! We are Jacob Schreiber, Instructor of Clinical Medical Education, and Cha-Chi Fung, Assistant Dean of Medical Education, at the Keck School of Medicine of USC. We frequently employ focus groups in evaluation and institutional research to collect valuable insights about our medical students’ classroom and clinical activities.

The COVID-19 pandemic has made it impossible to conduct in-person focus groups where 6-12 participants gather around a conference table for an intimate 90+ minute conversation.

Though virtual meeting platforms like Zoom, Skype, and Microsoft Teams have created a space for us to keep in touch with colleagues, many have struggled with issues such as poor internet connections, difficulty sharing screens, disruptions from around the home “office,” trying to speak while muted, or forgetting to mute during off-screen conversations.

These issues can stall conversation in meetings and are a nightmare scenario for a focus group moderator who relies on the swift back-and-forth of face-to-face conversation. Over the last year, through much trial and error, we have developed best practices to make virtual focus groups run as smoothly as possible so we can continue collecting the valuable data they yield.

Lessons Learned:

  • Reduce the average size of your focus groups. If you would typically recruit 8-12 participants for an in-person focus group, consider inviting only 6 or 7 so that the screen won’t be as cluttered and audio issues will be easier to manage.
  • Ensure your participants have a working webcam prior to the group beginning and encourage them to keep it on for the duration of the conversation.
  • Disable text chatting and discourage use of hand raising functions so participants are more likely to speak up.
  • Encourage all participants to utilize the gallery viewing option. This will most closely recreate the experience of being in a room together.
  • Ask that all participants keep their microphones open. Continuously muting and unmuting pauses conversation only briefly, but it can completely shut down a fast-paced exchange of ideas.
  • Consider technological aspects of your role as the moderator. Let your participants know you may use the host capabilities to mute people with a bad connection or in a loud environment so that everyone is able to hear each other.
  • Plan an extra 15-20 minutes into the focus group to account for troubleshooting with participants in the virtual environment.
  • Plug in! As the moderator, you want to ensure you have the best connection possible, so you don’t miss any information. We strongly encourage you to use an ethernet connection rather than Wi-Fi. It is also helpful to suggest this to participants in the group invitation so they can plan to be plugged in if the option is available to them.

Rad Resources: For more information about the value of focus groups, check out aea365 curator Sheila B. Robinson’s post.

Richard A. Krueger offers a wealth of resources about best practices for moderating focus groups on his website.

Blog: Creating an Evaluation Design That Allows for Flexibility

Posted on January 13, 2021 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Holly Connell, Evaluator, Kansas State University
Allison Teeter, Assistant Director, Strategic Initiatives and Development

There is no better time than now to talk about the need for flexibility in evaluation design and implementation. It is natural for long-term projects involving many partners, institutions, and objectives to experience changes as they progress. This is especially apparent in the age of the coronavirus pandemic, where many projects are faced with decisions about how to move forward, while still needing to make and demonstrate impact. Having an evaluation design that is too rigid does not allow for adjustments throughout the implementation process.

This blog provides a general guide for building a flexible evaluation design.

Design the Evaluation

Develop an evaluation plan with four to six evaluation questions that align with the project’s goals and objectives but leave ample flexibility for changes throughout the project’s implementation. A sound evaluation design will guide how you conduct the evaluation activities while answering your key evaluation questions. The design will include factors such as:

  • Methods of data collection: Consider your audience and which method will work best and yield the most robust results. If a chosen method does not yield results, consider whether it should be used again later, or at all. Ensure that no single activity is responsible for collecting data for all or most of the evaluation questions. It is best practice to triangulate: use multiple methods of data collection to strengthen the quality of your results, and gather evidence toward as many evaluation questions as applicable in each data collection. If an evaluation activity falls through or does not pan out as anticipated, you will still have data to provide evidence for the evaluation.
  • Sample sizes: Consider at what point a sample size would be too small (or too large) for what you originally planned, and develop a backup plan for that situation. Collect data from a variety of stakeholders: changes in project implementation can affect your target audiences differently, so build this into your evaluation plan by ensuring all applicable target audiences are represented throughout your data collections.
  • Timing of data collection: Be mindful of major events in the lives of the target audience. For example, holding an online survey during exam season will likely reduce your sample size. Do not limit yourself to specific timing of an evaluation activity unless necessary. For example, if a survey can take place at any time during the summer, specify “Summer 2021” rather than “August 2021.”

Keep in mind that most evaluation projects do not go completely as planned and that various aspects of the project may undergo changes.

Being flexible with your design can yield much more meaningful and impactful results than rigidly following the original plan. Changes and revisions may be needed as the project evolves or due to unforeseen circumstances. Don’t hesitate to revise the evaluation plan; just make sure to document and justify the changes being made. Defining a list of potential limitations (e.g., of methods, data sources, potential bias) while developing your initial evaluation design can help later on when deciding whether to stay the course with the original plan or revise the evaluation design.

Find out more about developing evaluation plans in the Pell Institute Evaluation Toolkit.

Blog: What is the best way to coordinate internal and external evaluation?

Posted on December 11, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

All ATE projects are required to allocate funds for external evaluation services. So, when it comes to internal and external evaluation, the one certain thing is that you must have an external evaluator. (In rare cases, alternative arrangements may be approved by a program officer.)

There are two types of external evaluators:

Type 1: Evaluators who are completely external to the institution.

Type 2: Evaluators who are external to the project but internal to the institution (such as an institutional researcher or a faculty member from a department other than the one where the project is located).

Both types are considered external, as long as the Type 2 external evaluator is truly independent from the project.

An internal evaluator is a member of the project staff who is directly funded by the project, such as a project manager. More commonly, internal evaluation is a shared responsibility among team members. There are many options for coordinating internal and external evaluation functions. Over the years, I have noted four basic approaches:

(1) External Evaluator as Coach: The external evaluator provides guidance and feedback to the internal project team throughout the life of the grant. This is a good approach when there is already some evaluation competence among team members. The external evaluator’s involvement enhances the credibility of the evaluation and helps the team continue to build their evaluation knowledge and skills.

(2) External Evaluator as Heavy-Lifter: The external evaluator takes the lead in planning the evaluation, designing instruments, analyzing results, and writing reports. The internal team mainly gathers data and provides it to the external evaluator for processing. In this approach, the external evaluator should provide clear-cut data collection protocols to ensure systematic collection and handling of data by the internal team before they turn the information over to the external evaluator.

(3) External Evaluator as Architect: The external evaluator designs the overall evaluation and develops data collection instruments. The project team executes the plan, with technical assistance from the external evaluator as needed—particularly at critical junctures in the evaluation such as analysis and reporting. With this approach, it is important to front-load the evaluation budget in the first year of the project to allow for intensive involvement by the external evaluator.

(4) Divide-and-Conquer: The internal team is responsible for evaluating project implementation and immediate results. The external evaluator handles the evaluation of longer-term outcomes. This is the approach that EvaluATE uses. We carefully track and analyze data about how well we are reaching and engaging our audience. We are responsible for assessing immediate outcomes of our webinars and workshops (i.e., participants’ satisfaction, self-reported learning, and intent to use content). Our external evaluator is responsible for determining and assessing EvaluATE’s impact on the evaluation practices of our users.

A note of caution: Taking on part of an evaluation internally is often seen as a means of conserving project resources, and it can have that effect. But do not make the mistake of thinking internal evaluation is cost-free. At minimum, it takes time, which is sometimes a rarer commodity than money. In short, there is no one best way to coordinate internal and external evaluation. Your approach should make sense for your project in light of available resources (including staff time and expertise) and what you need your evaluation to do for your project.

Want to know more? Check out this 5-minute video in which I explain what counts as independent, how to find an external evaluator, and how to divide responsibilities between internal and external evaluators: Evaluation Basics Part 4: Who Can Do It?

Blog: Tips for Building and Strengthening Stakeholder Relationships

Posted on November 23, 2020 in Blog

Project Manager, EvaluATE at The Evaluation Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! I am Valerie Marshall. I work on a range of projects at The Evaluation Center, including EvaluATE, where I serve as the administrator and analyst for the annual ATE Survey.

A cornerstone of evaluation is working with stakeholders. Stakeholders are individuals or groups who are part of an evaluation or are otherwise interested in its findings. They may be internal or external to the program being evaluated.

Stakeholders’ interests and involvement in evaluation activities may vary, but they are a key ingredient in evaluation success. They can provide critical insight into project activities and evaluation questions, serve as gatekeepers to other stakeholders or data, and help determine whether evaluation findings and recommendations are implemented.

Given their importance, identifying ways to build and nurture relationships with stakeholders is pivotal.

So the question is: how can you build relationships with evaluation stakeholders?

Below is a list of tips based on my own research and evaluation experience. This list is by no means exhaustive. If you are an ATE PI or evaluator, please join EvaluATE’s Slack community to continue the conversation and share some of your own tips!

Tip 1: Be intentional and adaptive about how you communicate. Not all stakeholders will prefer the same mode of communication, and how stakeholders want to communicate can change over the course of a project’s lifecycle. In my experience, using communication styles and tools that align with stakeholders’ needs and preferences often results in greater engagement. So, ask stakeholders how they would like to communicate at various points throughout your work together.

Tip 2: Build rapport. ATE evaluator and fellow blogger George Chitiyo previously noted that building rapport with stakeholders can make them feel valued and, in turn, help lead to quality data. Rapport is defined as a friendly relationship that makes communication easier (Merriam-Webster). Chatting during “down time” in a videoconference, sharing helpful resources, and mentioning a lighthearted story are great ways to begin fostering a friendly relationship.

Tip 3: Support and maintain transparency. Communicate with stakeholders about what is being done, when, and why. This not only reduces confusion but also facilitates trust. Trust is pivotal to building  productive, healthy relationships with stakeholders. Providing project staff with a timeline of research or evaluation activities, giving regular progress updates, and meeting with stakeholders one-on-one or in small groups to answer questions or address concerns are all helpful ways to generate transparency.

Tip 4: Identify roles and responsibilities. When stakeholders know what is expected of them and how they can and cannot contribute to different aspects of a research or evaluation project, they can engage in a more meaningful way. The clarity generated from the process of outlining the roles and responsibilities of both stakeholders and research and evaluation staff can help reduce misunderstandings. At the beginning of a project, and as new staff and stakeholders join the project, make sure to review roles and expectations with everyone.

Blog: Four Warning Signs of a Dusty Shelf Report

Posted on November 11, 2020 in Blog

Data Visualization Designer, Depict Data Studio, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


When was the last time your report influenced real-life decision-making?

I used to write lengthy reports filled with statistical jargon. Important information sat around and gathered dust.

Now I design reports people actually want to read. Fewer paragraphs. More visuals. My audience can understand the information, so the data actually gets used.

What Reports Are Supposed to Do Every Single Time

Maybe a policy maker voted differently after reading your report…

Maybe your board of directors changed your programming after reading your report…

Maybe your supervisor adjusted your budget or staffing based on your findings…

Maybe your stakeholder group formed a task force to fix the issues brought up by your report…

We’ve all had successes here and there.

But does this happen every single time?

Four Red Flags to Watch For

A dusty shelf report is a report that people refuse to read. Or they glance at it once, don’t read it all the way through, and then repurpose it as a dust collector. Here are four signs that you’ve got a dusty shelf report on your hands. (Or a dusty dashboard, dusty infographic, or dusty slideshow. Watch for these red flags with all dissemination formats.)

1.  No Response

You email your report to the recipient. Or you post it on your website.

You don’t get any response. The silence is deafening.

2.   A Promise to Follow Up Later

You email your report to the recipient. They respond!

But the response is, “Thanks. We received the report. We’ll follow up later if we have any questions.”

This is not use! This is not engagement! We can do better.

3.  “Compliments”

You email your report to the recipient. They respond!

But the response is, “Thanks. We received the report. We’ll follow up later if we have any questions. I can tell that a really technical team worked on this report.”

Yikes… that “compliment” is a red flag.

I used to hear this a lot. I thought, “Wow, they must’ve checked out our LinkedIn profiles! They can tell that our entire team has master’s degrees and Ph.D.s! They know we speak at national conferences!”

Later, I realized the reader was (kindly) mentioning our statistical jargon.

Watch for this one. It’s a red flag in disguise.

4.  Won’t Read It

The recipients flat-out say, “We’re not going to read this.”

Sometimes, this red flag is expressed as a request for another format:

“Do you happen to have an infographic?” Red flag.

“Do you happen to have a slideshow?” Red flag.

I’ve seen this with several government agencies over the past couple of years. They explicitly require a two-pager in addition to the technical report. They recognize that the traditional format doesn’t meet their needs.

How to Transform Your Reports

If you’ve experienced any of the red flags, you’ve got a dusty shelf report on your hands.

But there’s hope! Dusty shelf reports aren’t inevitable.

Want to start transforming your reports? Try the 30-3-1 approach to reporting, use data storytelling, or get better at translating technical information for non-technical audiences. Our courses on these and other data visualization topics will help you soar beyond the dusty shelf report.

Blog: Strategies for Communicating in Virtual Settings

Posted on October 21, 2020 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Ouen Hunter, Doctoral Student, The Evaluation Center
Jeffrey Hillman, Doctoral Student, The Evaluation Center

We are Ouen and Jeffrey, the authors of the recently published resource “Effective Communication Strategies for Interviews and Focus Groups.” Thank you to everyone who provided feedback. During the review, we noticed a need to address strategies for conducting online interviews and focus groups.

Your interview environment can promote sharing of stories or deter it. Here are some observations we find helpful to improve communication in virtual settings:

1. Keep your video on, but do not require this of your interviewees. People feel more at ease sharing their stories if they can see the person receiving their information.

2. Keep your background clear of clutter! If this is not an option, test out a neutral virtual background or use a high-quality photo of an uncluttered space of your choice. For example, your office space as a picture background provides a personalized yet professional touch to your virtual setting. Be warned that virtual backgrounds can cut certain body parts out! Test the background, and plan your outfits accordingly (don’t wear green!).

3.  Exaggerate your nonverbal expressions a little to ensure that you are not interrupting the people sharing their stories. Additionally, typical verbal cues of attentiveness can cause delays and skips in a virtual setting. Show your attentiveness by nodding a few times purposefully for affirmations instead of saying “Yes” or “Agreed.” Move your body every now and then to assure people that you are listening and have not lost your internet connection.

4. If you have books in the background, turn the spines of the books away. The titles of the books can be distracting and can communicate unintended messages to the interviewees. More importantly, certain book titles can be trauma triggers. If you want to include decorations, use plants. Additionally, you can place your camera facing the corner of a room to provide visual depth.

5. Be in a quiet room free of other people or pets. Noise and movement can distract your participants from concentrating on the interview.

6. Be sure you have good lighting. People depend on your facial expressions for communication. Face a window (do not have the window behind you), or use lamps or selfie rings if you need additional light.

7. On video calls, most people naturally tend to look at the person’s image. So, it’s important to arrange your camera at the proper angle to see the participants on your screen.

On a laptop, place the laptop camera or a separate webcam at eye level; this can be accomplished by using a stand or even a stack of books. Tilt the camera down approximately 30 degrees and position it about arm’s length away from you. Experiment with the angle until you achieve a natural appearance.

If you use a monitor with a webcam, place the webcam at eye level, tilted down approximately 30 degrees, and arm’s-length away from you. If needed, you can use a small tripod.

Whatever your arrangement, keeping the participant’s picture on the screen close to the camera will remind you where to look.

8. If possible, use a separate webcam, microphone, and headset. A pre-installed webcam generally has a lower resolution than a separate webcam.

Using a separate microphone will provide clearer speech, and a separate set of headphones will help you hear better. Listen to the laptop microphone recording versus the separate condenser microphone recording below.

Be sure to place the microphone out of view so it does not block your face. Using a plug-in headset instead of a Bluetooth headset will ensure you do not run out of battery.

[Audio samples: pre-installed laptop microphone vs. separate condenser microphone]

HOT TIP: Try out the following office setup for your next online interview or focus group!

We would love to hear from you regarding tips that we could not cover in this blog!

Ouen Hunter: Ouen.C.Hunter@wmich.edu
Jeffrey Hillman: Jeffrey.A.Hillman@wmich.edu

Blog: Examining STEM Participation Through an Equity Lens: Measurement and Recommendations

Posted on October 14, 2020 in Blog

Director of Evaluation Services, Higher Ed Insight

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Hey there—my name is Tashera, and I’ve served as an external evaluator for dozens of STEM interventions and innovations. I’ve learned that a primary indicator of program success is recruitment of learners to participate in project activities.

Given that this metric is foundational to most evaluations, measurement of this outcome is rarely thought to be a challenge. A simple count of learners enrolled in programming provides information about participants rather easily.

However, this tells a very limited story. As many of us know, a major priority of STEM initiatives is to broaden participation to be more representative of diverse populations—particularly groups that have been historically marginalized. As such, we must move beyond reporting quantitative metrics in the aggregate and instead disaggregate them by student demographics.

This critical analytical approach lets us identify where potential disparities exist. And it can help transform evaluation from a passive system of assessment into a mechanism that helps programs reach more equitable outcomes.
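For evaluators who work with participation data programmatically, below is a minimal sketch of this kind of disaggregation in Python with pandas. The column names, demographic categories, and baseline shares are hypothetical placeholders, not data from any real program; in practice you would load your own participant roster and institutional enrollment figures.

import pandas as pd

# Hypothetical participant roster: one row per learner enrolled in the program.
participants = pd.DataFrame({
    "student_id": [101, 102, 103, 104, 105, 106, 107, 108],
    "race_ethnicity": ["Black", "White", "Latino", "White",
                       "Black", "White", "White", "Latino"],
})

# Hypothetical baseline: each group's share of the institution's overall
# enrollment, used as the point of comparison for representation.
baseline_share = pd.Series(
    {"Black": 0.15, "Latino": 0.25, "White": 0.60}, name="baseline_share"
)

# Disaggregate: participant counts and shares by demographic group.
counts = participants["race_ethnicity"].value_counts().rename("n_participants")
participant_share = (counts / counts.sum()).rename("participant_share")

# Compare each group's share of participants to its share of overall
# enrollment; negative gaps flag potential disparities in recruitment.
comparison = pd.concat([counts, participant_share, baseline_share], axis=1)
comparison["gap"] = comparison["participant_share"] - comparison["baseline_share"]
print(comparison.sort_values("gap"))

The same comparison can just as easily be done in a spreadsheet; the point is simply to report counts and shares by group alongside a representation baseline rather than a single aggregate enrollment total.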

Moreover, program implementation efforts must be deliberate. Activities must be intentionally designed to reach and support populations disproportionally underrepresented within STEM. We can aid this process in our role as evaluators. I would even go so far as to argue that it is our responsibility—as stipulated by AEA’s Guiding Principles for Evaluators—to do so.

During assessment, make it a practice to examine whether program efforts are equitable, inclusive, and accessible. If you find that clients are experiencing challenges relating to locating or recruiting diverse students, the following recommendations can be provided during formative feedback:

  1. Go to the target population: “Traditional” marketing and outreach strategies that have been used time and time again won’t attract the diverse learners you are seeking—otherwise, there wouldn’t be such a critical call for broadened STEM participation today. You can, however, successfully reach these students if you go where they are.

a. Looking for Black, Latino, or female students to partake in your innovative engineering or IT program? Try reaching out to professional campus-based STEM organizations (e.g., National Society of Black Engineers, Black and Latinx Information Science and Technology Support, Women in Science and Engineering).

b. Numerous organizations on college campuses serve the students you are seeking to engage.

          • Locate culture-based organizations: the National Pan-Hellenic Council, National Association of Latino Fraternal Organizations, National Black Student Union, or Latino Student Council.
          • Leverage programs that support priority student groups (e.g., first-generation, low-income, students with disabilities): Higher Education Opportunity Program, Student Support Services, or Office for Students with Disabilities.

2. Cultural responsiveness must be embedded throughout the program’s design.

a. Make sure that implementation approaches—including recruitment—and program materials (e.g., curriculum, marketing and outreach) are culturally responsive, interventions are culturally relevant, and staff are culturally sensitive.

b. Ensure staff diversity at all levels of leadership (e.g., program directors and staff, faculty, mentors).

There is increased likelihood of students’ participation and persistence when they feel they belong, which at minimum encompasses seeing themselves represented across a program’s spectrum.

As an evaluation community, we cannot allow the onus of equitable STEM opportunity to be placed solely on programs or clients. A lens of equity must also be deeply embedded throughout our evaluation approach, including during analyses and recommendations. It is this shift in paradigm—a model of shared accountability—that allows for equitable outcomes to be realized.

 

Blog: Bending Our Evaluation and Research Studies to Reflect COVID-19

Posted on September 30, 2020 in Blog

CEO and President, CSEdResearch.org

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting education research and evaluation during the season of COVID-19 may make you feel like you are the lone violinist playing tunes on the deck of a sinking ship. You desperately want to continue your research, which is important and meaningful to you and to others. You know your research contributes to important advances in the large mural of academic achievement among student learners. Yet reality has derailed many of your careful plans.

 If you are able to continue your research and evaluation in some capacity, attempting to shift in a meaningful way can be confusing. And if you are able to continue collecting data, understanding how COVID-19 affects your data presents another layer of challenges.

In a recent discussion with other K–12 computer science evaluators and researchers, I learned that some were rapidly developing scales to better understand how COVID-19 has impacted academic achievement. In their generous spirit of sharing, these collaborators have shared scales and items they are using, including two complete surveys, here:

  • COVID-19 Impact Survey from Panorama Education. This survey considers the many ways (e.g., well-being, internet access, engagement, student support) in which the shift to distance, hybrid, or in-person learning during this pandemic may be impacting students, families, and teachers/staff.
  • Parent Survey from Evaluation by Design. This survey is designed to measure environment, school support, computer availability and learning, and other concerns from the perspective of parents.

These surveys are designed to measure critical aspects within schools that are being impacted by COVID-19. They can provide us with information needed to better understand potential changes in our data over the next few years.

One of the models I’ve been using lately is the CAPE Framework for Assessing Equity in Computer Science Education, recently developed by Carol Fletcher and Jayce Warner at the University of Texas at Austin. This framework measures capacity, access, participation, and experiences (CAPE) in K–12 computer science education.

Figure 1. Image from https://www.tacc.utexas.edu/epic/research. Used with permission. From Fletcher, C. L., & Warner, J. R. (2019). Summary of the CAPE Framework for Assessing Equity in Computer Science Education.

 

Although this framework was developed for use in “good times,” we can use it to assess current conditions by asking how COVID-19 has impacted each of the critical components of CAPE needed to bring high-quality computer science learning experiences to underserved students. For example, if computer science is classified as an elective course at a high school, and all electives are cut for the 2020–21 academic year, this will have a significant impact on access for those students.

The jury is still out on how COVID-19 will impact students this year, particularly minoritized and low-socio-economic-status students, and how its lingering effects will change education. In the meantime, if you’ve created measures to understand COVID-19’s impact, consider sharing those with others. It may not be as meaningful as sending a raft to a violinist on a sinking ship, but it may make someone else’s research goals a bit more attainable.

(NOTE: If you’d also like your instruments/scales related to COVID-19 shared in our resource center, please feel free to email them to me.)

Blog: Shorten the Evaluation Learning Curve: Avoid These Common Pitfalls*

Posted on September 16, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This EvaluATE blog is focused on getting started with evaluation. It’s oriented to new ATE principal investigators who are getting their projects off the ground, but I think it holds some good reminders for veteran PIs as well. To shorten the evaluation learning curve, avoid these common pitfalls:

Searching for the truth about “what NSF wants from evaluation.” NSF is not prescriptive about what an ATE evaluation should or shouldn’t look like. So, if you’ve been concerned that you’ve somehow missed the one document that spells out exactly what NSF wants from an ATE evaluation—rest assured, you haven’t overlooked anything. But there is information that NSF requests from all projects in annual reports and that you are asked to report on the annual ATE survey. So it’s worthwhile to preview the Research.gov reporting template (bit.ly/nsf_prt) and the ATE annual survey questions (bit.ly/ATEsurvey16). And if you’re doing research, be sure to review the Common Guidelines for Education Research and Development – which are pretty cut-and-dried criteria for different types of research (bit.ly/cg-checklist). Most importantly, put some time into thinking about what you, as a project leader, need to learn from the evaluation. If you’re still concerned about meeting expectations, talk to your program officer.

Thinking your evaluator has all the answers. Even for veteran evaluators, every evaluation is new and has to be tailored to context. Don’t expect your evaluator to produce a detailed, actionable evaluation plan on Day 1. He or she will need to work out the details of the plan with you. And if something doesn’t seem right to you, it’s OK to ask for something different.

 Putting off dealing with the evaluation until you are less busy. “Less busy” is a mythical place and you will probably never get there. I am both an evaluator and a client of evaluation services, and even I have been guilty of paying less attention to evaluation in favor of “more urgent” matters. Here are some tips for ensuring your project’s evaluation gets the attention it needs: (a) Set a recurring conference call or meeting with your evaluator (e.g., every two to three weeks); (b) Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation matters; (c) Give someone other than the PI responsibility for attending to the evaluation—not to replace the PI’s attention, but to ensure the PI and other project members are staying on top of the evaluation and communicating regularly with the evaluator; (d) Commit to using the evaluation results in a timely way—if you do something on a recurring basis, make sure you gather feedback from those involved and use it to improve the next activity.

Assuming you will need your first evaluation report at the end of Year 1. PIs must submit their annual reports to NSF 90 days prior to the end of the current budget period. So if your grant started on September 1, your first annual report is due around June 1. And it will take some time to prepare, so you should probably start writing in early May. You’ll want to include at least some of your evaluation results, so start working with your evaluator now to figure out what information is most important to collect right away.

Veteran PIs: What tips do you have for shortening the evaluation learning curve?  Submit a blog to EvaluATE and tell your story and lessons learned for the benefit of new PIs.

*This blog is a reprint of a 2015 newsletter article.