Archive: research

Blog: Bending Our Evaluation and Research Studies to Reflect COVID-19

Posted on September 30, 2020 in Blog

CEO and President

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting education research and evaluation during the season of COVID-19 may make you feel like you are the lone violinist playing tunes on the deck of a sinking ship. You desperately want to continue your research, which is important and meaningful to you and to others. You know your research contributes to important advances in the large mural of academic achievement among student learners. Yet reality has derailed many of your careful plans.

If you are able to continue your research and evaluation in some capacity, shifting your approach in a meaningful way can be confusing. And if you can still collect data, understanding how COVID-19 affects those data presents another layer of challenges.

In a recent discussion with other K–12 computer science evaluators and researchers, I learned that some were rapidly developing scales to better understand how COVID-19 has impacted academic achievement. In their generous spirit of sharing, these collaborators have shared scales and items they are using, including two complete surveys, here:

  • COVID-19 Impact Survey from Panorama Education. This survey considers the many ways (e.g., well-being, internet access, engagement, student support) in which the shift to distance, hybrid, or in-person learning during this pandemic may be impacting students, families, and teachers/staff.
  • Parent Survey from Evaluation by Design. This survey is designed to measure environment, school support, computer availability and learning, and other concerns from the perspective of parents.

These surveys are designed to measure critical aspects within schools that are being impacted by COVID-19. They can provide us with information needed to better understand potential changes in our data over the next few years.

One of the models I’ve been using lately is the CAPE Framework for Assessing Equity in Computer Science Education, recently developed by Carol Fletcher and Jayce Warner at the University of Texas at Austin. This framework measures capacity, access, participation, and experiences (CAPE) in K–12 computer science education.

Figure 1. Summary of the CAPE Framework for Assessing Equity in Computer Science Education. From Fletcher, C. L., & Warner, J. R. (2019). Used with permission.


Although this framework was developed for use in “good times,” we can use it to assess current conditions by asking how COVID-19 has impacted each of the critical components of CAPE needed to bring high-quality computer science learning experiences to underserved students. For example, if computer science is classified as an elective course at a high school, and all electives are cut for the 2020–21 academic year, this will have a significant impact on access for those students.

The jury is still out on how COVID-19 will impact students this year, particularly minoritized and low-socioeconomic-status students, and how its lingering effects will change education. In the meantime, if you’ve created measures to understand COVID-19’s impact, consider sharing those with others. It may not be as meaningful as sending a raft to a violinist on a sinking ship, but it may make someone else’s research goals a bit more attainable.

(NOTE: If you’d also like your instruments/scales related to COVID-19 shared in our resource center, please feel free to email them to me.)

Blog: A Call to Action: Advancing Technician Education through Evidence-Based Decision-Making

Posted on May 1, 2019 in Blog


Faye R. Jones, Senior Research Associate, Florida State
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State


Evaluators contribute to developing the Advanced Technological Education (ATE) community’s awareness and understanding of theories, concepts, and practices that can advance technician education at the discrete project level as well as at the ATE program level. Regardless of focus, project teams explore, develop, implement, and test interventions designed to lead to successful outcomes in line with ATE’s goals. At the program level, all ATE community members, including program officers, benefit from reviewing and compiling project outcomes to build an evidence base that better prepares the technical workforce.

Evidence-based decision-making is one way to ensure that project outcomes lead to quality and systematic program outcomes. As indicated in Figure 1, good decision-making depends on three domains of evidence within an environmental or organizational context: contextual evidence; experiential evidence (i.e., resources, including practitioner expertise); and the best available research evidence (Satterfield et al., 2009).

Figure 1. Domains that influence evidence-based decision-making (Satterfield et al., 2009) [Click to enlarge]

As Figure 1 suggests, at the project level, as National Science Foundation (NSF) ATE principal investigators (PIs) work, evaluators can assist PIs in making project design and implementation decisions based on the best available research evidence, considering participant, environmental, and organizational dimensions. For example, researchers and evaluators work together to compile the best research evidence about specific populations (e.g., underrepresented minorities) in which interventions can thrive. Then, they establish mutually beneficial researcher-practitioner partnerships to make decisions based on their practical expertise and current experiences in the field.

At the NSF ATE program level, program officers often review and qualitatively categorize project outcomes provided by project teams, including their evaluators, as shown in Figure 2.


Figure 2. Quality of Evidence Pyramid (Paynter, 2009) [Click to enlarge]

As Figure 2 suggests, aggregated project outcomes tell a story about what the ATE community has learned and needs to know about advancing technician education. At the highest levels of evidence, program officers strive to obtain strong evidence that can lead to best practice guidelines and manuals grounded by quantitative studies and trials, and enhanced by rich and in-depth qualitative studies and clinical experiences. Evaluators can meet PIs’ and program officers’ evidence needs with project-level formative and summative feedback (such as outcomes and impact evaluations) and program-level data, such as outcome estimates from multiple studies (i.e., meta-analyses of project outcome studies). Through these complementary sources of evidence, evaluators facilitate the sharing of the most promising interventions and best practices.

In this call to action, we charge PIs and evaluators with working closely together to ensure that project outcomes are clearly identified and supported by evidence that benefits the ATE community’s knowledge base. Evaluators’ roles include guiding leaders to 1) identify new or promising strategies for making evidence-based decisions; 2) use or transform current data for making informed decisions; and when needed, 3) document how assessment and evaluation strengthen evidence gathering and decision-making.


Paynter, R. A. (2009). Evidence-based research in the applied social sciences. Reference Services Review, 37(4), 435–450. doi:10.1108/00907320911007038

Satterfield, J., Spring, B., Brownson, R., Mullen, E., Newhouse, R., Walker, B., & Whitlock, E. (2009). Toward a transdisciplinary model of evidence-based practice. The Milbank Quarterly, 86, 368–390.

Blog: Evaluator, Researcher, Both?

Posted on June 21, 2017 in Blog

Professor, College of William & Mary


Having served as both a project evaluator and a project researcher, I have seen how critical it is to discuss roles at the onset of funded projects. Early and open conversations can help avoid confusion, prevent missed windows for collecting critical data, and highlight where responsibilities differ for each project team role. Because the strict lines between evaluator and researcher have blurred over time, project teams, evaluators, and researchers must create new definitions for project roles, understand the scope of responsibility for each role, and build data systems that allow information to be shared across roles.

Evaluation serves a central role in funded research projects. The lines between the role of the evaluator and that of the researcher can blur, however, because many researchers also conduct evaluations. Scriven (2003/2004) saw the role of evaluation as a means to determine “the merit, worth, or value of things” (para. 1), whereas social science research instead is “restricted to empirical (rather than evaluative) research, and bases its conclusion only on factual results—that is, observed, measured, or calculated data” (para. 2). Consider, too, how Powell (2006) posited, “Evaluation research can be defined as a type of study that uses standard social research methods for evaluative purposes” (p. 102). It is easy to see how confusion arises.

Taking a step back can shed light on the differences between these roles and the ways they are now being redefined. The researcher brings a different perspective to a project: a goal of research is the production of knowledge, whereas the role of the external evaluator is to provide an “independent” assessment of the project and its outcomes. Typically, an evaluator is seen as a judge of a project’s merits, which assumes that a “right” outcome exists. Yet inherent in the role of evaluation are the values held by the evaluator, the project team, and the stakeholders, as context influences the process and who decides where to focus attention, why, and how feedback is used (Skolits, Morrow, & Burr, 2009). Knowing how the project team intends to use evaluation results to improve project outcomes requires a shared understanding of the evaluator’s role (Langfeldt & Kyvik, 2011).

Evaluators seek to understand what information is important to collect and review and how best to use the findings to relate outcomes to stakeholders (Levin-Rozalis, 2003). Researchers instead focus on investigating a particular issue or topic in depth, with the goal of producing new ways of understanding in these areas. In a perfect world, the roles of evaluator and researcher are distinct and separate. But given the requirement that funded projects produce outcomes that inform the field, new knowledge is also discovered by evaluators. This swirl of roles results in evaluators publishing project results that inform the field, researchers leveraging their evaluator roles to publish scholarly work, and both borrowing strategies from each other to conduct their work.

The blurring of roles requires project leaders to provide clarity about evaluator and researcher team functions. The following questions can help in this process:

  • How will the evaluator and researcher share data?
  • What are the expectations for publication from the project?
  • What kinds of formative evaluation might occur that ultimately changes the project trajectory? How do these changes influence the research portion of the project?
  • How does shared meaning of terms, role, scope of work, and authority for the project team occur?

Knowing how the evaluator and researcher will work together provides an opportunity to leverage expertise in ways that move beyond the simple additive effect of the two roles. Opportunities to share information are possible only when roles are coordinated, which requires advance planning. It is important to move beyond siloed roles and toward more collaborative models of evaluation and research within projects. Collaboration requires more time and attention to sharing information and defining roles, but the time spent coordinating these joint efforts is worth it, given the contributions to both the project and the field.


Levin-Rozalis, M. (2003). Evaluation and research: Differences and similarities. The Canadian Journal of Program Evaluation, 18(2), 1–31.

Powell, R. R. (2006).  Evaluation research:  An overview.  Library Trends, 55(1), 102-120.

Scriven, M. (2003/2004).  Michael Scriven on the differences between evaluation and social science research.  The Evaluation Exchange, 9(4).

Newsletter: Survey Says Winter 2016

Posted on January 1, 2016 in Newsletter

On the 2015 ATE survey, 65 of 230 principal investigators (28%) reported spending some portion of their annual budgets on research. Six of these projects were funded as targeted research. Among the other 59 projects, expenditures on research ranged from 1% to 65% with a median of 14%. With just six targeted research projects and less than a third of all ATE grantees engaging in research, there is immense opportunity within the ATE program to expand research on technician education.
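The reported figures are easy to verify; a quick arithmetic check (using only the numbers stated above, not the underlying survey data) looks like this:

```python
# Sanity check of the shares reported in the 2015 ATE survey summary.
reporting_research = 65   # PIs who reported spending some budget on research
respondents = 230         # total PI respondents

share = reporting_research / respondents * 100
print(f"{share:.0f}% of respondents reported research spending")

targeted = 6              # projects funded as targeted research
other = reporting_research - targeted
print(f"{other} other projects reported research expenditures")
```

This reproduces the 28% share and the 59 non-targeted projects mentioned in the paragraph; the 14% median applies to those 59 projects' expenditure percentages, which are not listed here.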



The full report of 2015 ATE survey findings, along with data snapshots and downloadable graphics, is available from

Blog: Building Effective Partnerships to Conduct Targeted Research on Student Pathways

Posted on March 4, 2015 in Blog

Associate Professor of Sociology, University of South Florida


My name is Will Tyson, associate professor of sociology at the University of South Florida. I am also principal investigator of PathTech (“Successful Academic and Employment Pathways in Advanced Technologies” [NSF #1104214]), an NSF ATE targeted research project aimed at better understanding pathways into technician education and into the workforce. In this post, I describe effective models through which ATE projects and centers can develop targeted research partnerships with STEM education researchers.

Personnel from 2-year and 4-year institutions bring different expertise to the table, but there is great potential for mutually beneficial partnerships built around the desire to learn more about student pathways and student outcomes. Within ATE, centers and projects are typically led by educators and practitioners with expertise in program development, curricular development, and professional development within their areas of technical expertise and technician education. Targeted research projects in technician education are led by STEM education researchers with backgrounds in social science and education who are interested in learning more about student pathways and outcomes while placing their findings in a broader social context. What we do is very different, but our goals are the same.

When I discuss my research with ATE grantees and other stakeholders in K–12 education, community colleges, and local industry, I get the same revealing responses: “NSF always wants to know about student outcomes, but we don’t really know how to do the research” and “We didn’t know there were people like you out there who did this research.” On the other hand, experienced NSF grantees who conduct research in K–12 education and/or four-year universities often know little about the “T” in STEM in community colleges and the work being done through ATE centers and projects. Developing ways to bridge knowledge gaps between practitioners and researchers is necessary to increase our understanding of the processes of technician education.

PathTech is a partnership between social science and education researchers at the University of South Florida and the Florida Advanced Technological Education Center (FLATE), an NSF-ATE regional center of excellence. Such a partnership is both mandated by the ATE program solicitation and necessary to conduct high-impact research that can effectively be put into practice. This collaboration is an essential element of the PathTech research model, along with the proactive and enthusiastic participation of our community college, high school, and industry partners.

Through this multifaceted, interdisciplinary collaboration, we have been able to create a regional-scale model that allows research objectives to develop organically, driven by the experiences and needs of college personnel as well as by theory and scholarship. This is the foundation on which knowledge is constructed through interaction with those experiencing technician educational and occupational pathways as administrators, teachers, students, employers, and policymakers. Most importantly, this collaboration also allows us to develop a mechanism for real-time dissemination of emerging findings and developing knowledge, so that all parties benefit from the research.