
Blog: Bending Our Evaluation and Research Studies to Reflect COVID-19

Posted on September 30, 2020 in Blog

CEO and President, CSEdResearch.org

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting education research and evaluation during the season of COVID-19 may make you feel like you are the lone violinist playing tunes on the deck of a sinking ship. You desperately want to continue your research, which is important and meaningful to you and to others. You know your research contributes to important advances in the large mural of academic achievement among students. Yet reality has derailed many of your careful plans.

If you are able to continue your research and evaluation in some capacity, shifting your work in a meaningful way can be confusing. And if you can still collect data, understanding how COVID-19 affects that data presents another layer of challenges.

In a recent discussion with other K–12 computer science evaluators and researchers, I learned that some were rapidly developing scales to better understand how COVID-19 has impacted academic achievement. In a generous spirit of sharing, these collaborators have made available the scales and items they are using, including two complete surveys:

  • COVID-19 Impact Survey from Panorama Education. This survey considers the many ways (e.g., well-being, internet access, engagement, student support) in which the shift to distance, hybrid, or in-person learning during this pandemic may be impacting students, families, and teachers/staff.
  • Parent Survey from Evaluation by Design. This survey is designed to measure environment, school support, computer availability and learning, and other concerns from the perspective of parents.

These surveys are designed to measure critical aspects within schools that are being impacted by COVID-19. They can provide us with information needed to better understand potential changes in our data over the next few years.

One of the models I’ve been using lately is the CAPE Framework for Assessing Equity in Computer Science Education, recently developed by Carol Fletcher and Jayce Warner at the University of Texas at Austin. This framework measures capacity, access, participation, and experiences (CAPE) in K–12 computer science education.

Figure 1. Image from https://www.tacc.utexas.edu/epic/research. Used with permission. From Fletcher, C. L., and Warner, J. R. (2019). Summary of the CAPE Framework for Assessing Equity in Computer Science Education.

 

Although this framework was developed for use in “good times,” we can use it to assess current conditions by asking how COVID-19 has impacted each of the critical components of CAPE needed to bring high-quality computer science learning experiences to underserved students. For example, if computer science is classified as an elective course at a high school, and all electives are cut for the 2020–21 academic year, this will have a significant impact on access for those students.

The jury is still out on how COVID-19 will impact students this year, particularly minoritized and low-socioeconomic-status students, and how its lingering effects will change education. In the meantime, if you’ve created measures to understand COVID-19’s impact, consider sharing them with others. It may not be as meaningful as sending a raft to a violinist on a sinking ship, but it may make someone else’s research goals a bit more attainable.

(NOTE: If you’d also like your instruments/scales related to COVID-19 shared in our resource center, please feel free to email them to me.)

Blog: Untangling the Story When You’re Part of the Complexity

Posted on April 16, 2019 in Blog

Evaluator, SageFox Consulting Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

I am wrestling with a wicked evaluation problem: How do I balance evaluation, research, and technical assistance work when they are so interconnected? I will discuss strategies for managing different aspects of work and the implications of evaluating something that you are simultaneously trying to change.

Background

In 2017, the National Science Foundation solicited proposals calling for researchers and practitioners to partner in conducting research that directly informs problems of practice through the Research Practice Partnership (RPP) model. I work on one project funded under this program: Using a Researcher-Practitioner Partnership Approach to Develop a Shared Evaluation and Research Agenda for Computer Science for All (RPPforCS). RPPforCS aims to learn how projects supported by this funding conduct research and improve practice. It also brings together a community of researchers and evaluators from across the funded partnerships for collective capacity building.

The Challenge

The RPPforCS work requires a dynamic approach to evaluation, and it challenges conventional boundaries between research, evaluation, and technical assistance. I am both part of the evaluation team for individual projects and part of a program-wide research project that aims to understand how projects are using an RPP model to meet their computer science and equity goals. Given the novelty of the program and research approach, the RPPforCS team also provides these projects with targeted technical assistance to improve their ability to use an RPP model, assistance that typically grows out of what we are learning across projects.

Examples in Practice

The RPPforCS team examines changes through a review of project proposals and annual reports, yearly interviews with a member of each project, and an annual community survey. Through these data collection mechanisms, we ask about the impact of the technical assistance on the functioning of each project. Rigorously documenting how the technical assistance aspect of our research project influences their work allows us to track change effected by the RPPforCS team separately from change originating within the individual projects.

We use the technical assistance (e.g., tools, community meetings, webinars) both to help projects further their goals and as research and evaluation data collection opportunities for understanding partnership dynamics. The technical assistance tools are all shared through Google’s G Suite, which allows us to see how teams engage with them. Teams can also use these tools to improve their partnership practice (e.g., using our Health Assessment Tool to establish shared goals with partners). Structured table discussions at our community meetings help us understand more about the specific elements of partnership demonstrated within a given project. We share all of our findings with the community on a frequent basis to foreground the research effort while still providing necessary support to individual projects.

Hot Tips

  • Rigorous documentation: The best way I have found to account for our external impact is rigorous documentation. It may sound like a basic approach to evaluation, but it is the easiest way to track change over time and to distinguish change you have introduced from organic change arising within the project.
  • Multi-use activities: Turn your technical assistance into a data collection opportunity. It both builds capacity within a project and gives you access to information for your own evaluation and research goals.