Archive: utilization-focused evaluation

Blog: Utilization-focused Evaluation

Posted on December 11, 2019 by John Cosgrove and Maggie Cosgrove in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
John Cosgrove
Senior Partner, Cosgrove & Associates
Maggie Cosgrove
Senior Partner, Cosgrove & Associates

 

As seasoned evaluators committed to utilization-focused evaluation, we partner with clients to craft evaluation questions and data analyses tied to continuous improvement. We emphasize developmental evaluation[1] to help link implementation and outcome evaluation. Sounds good, right? Well, not so fast.

Confession time. At times a client’s attention to data wanes as a project progresses, and efforts to actively engage clients in using data for continuous improvement do not generate the desired enthusiasm. Although interest in data re-emerges as the project concludes, that enthusiasm seems more related to answering “How did we do?” than to exploring “What did we learn?” This phenomenon, depicted in the U-shaped curve in Figure 1, suggests that when data may have the greatest potential to impact continuous improvement (“the Messy Middle”), clients may be less curious about their data.

To address this issue, we revisit Stufflebeam’s guiding principle: the purpose of evaluation is to improve, not prove.[2] Generally, clients have good intentions to use data for improvement and are interested in such endeavors. However, as Bryk points out in his work with networked improvement communities (NIC),[3] sometimes practitioners need help learning to improve. Borrowing from NIC concepts,[4] we developed the Thought Partner Group (TPG) and incorporated it into our evaluation. This group’s purpose is to assist with data interpretation, sharing, and usage. To achieve these goals, we invite practitioners or stakeholders who are working across the project and who have a passion for the project, an interest in learning, and an eagerness to explore data. We ask this group to go beyond passive data conversations and address questions such as:

  • What issues are getting in the way of progress and what can be done to address them?
  • What data and actions are needed to support sustaining or scaling?
  • What gaps exist in the evaluation?

The TPG’s focus on improvement and data analysis breathes life into the evaluation and improvement processes. Group members are carefully selected for their deep understanding of local context and their willingness to support the transfer of knowledge gained during the evaluation. Evaluation data has a story to tell, and the TPG helps clients give a voice to their data.

Although not a silver bullet, the TPG has helped improve our clients’ use of evaluation data and has helped them get better at getting better. The TPG model supports the evaluation process and mirrors Engelbart’s C-level activity[5] by helping shed light on the evaluator’s and the client’s understanding of the Messy Middle.

 

 


[1] Patton, M. Q. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.
[2] Stufflebeam, D. L. (1971). The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education.
[3] Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Cambridge, MA: Harvard Education Press.
[4] Bryk, A. S., Gomez, L. M., & Grunow, A. (2010). Getting ideas into action: Building networked improvement communities in education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching. Also see McKay, S. (2017, February 23). Quality improvement approaches: The networked improvement model [blog post].
[5] Engelbart, D. C. (2003, September). Improving our ability to improve: A call for investment in a new future. IBM Co-Evolution Symposium.

Blog: Building Research-Practice Collaborations for Effective STEM + Computing Education Evaluation Design

Posted on November 29, 2018 in Blog

Director of Measurement, Evaluation, and Learning, Kapor Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

At the Kapor Center, our signature three-summer educational program (SMASH Academy) aims to prepare underrepresented high school students of color to pursue careers in science, technology, engineering, and mathematics (STEM) and computing through access to courses, support networks, and opportunities for social and personal development.

In the nonprofit sector, evaluations can be driven by funder requirements, which often focus on outcomes. A sole focus on outcomes, however, can cause teams to lose sight of the goal of STEM evaluation: to inform programming (through process evaluation tools such as observation protocols and course evaluations) so that youth of color are prepared for the future STEM economy.

To keep that goal in focus, the Kapor Center grounds its work in utilization-focused evaluation, which begins with the premise that the success metric of an evaluation is the extent to which it is used by key stakeholders (Patton, 2008). This framework requires joint decision making between the evaluator and stakeholders to determine the purpose of the evaluation, the kind of data to be collected, the type of evaluation design to be created, and the uses of the evaluation. Using this framework shifts evaluation from a linear, top-down approach to a feedback loop involving practitioners.

Figure 1. Evaluation Cycle of SMASH Academy

The evaluation cycle at the Kapor Center, a collaboration between our research team and SMASH’s program team, is outlined below:

  1. Inquiry: This stage begins with conversations with stakeholders (e.g., program and leadership teams) to build common understandings of short-, medium-, and long-term outcomes as well as the key strategies that drive those outcomes. Delineating outcomes has been integral to working transparently toward program priorities.
  2. Instrument Development: Once groups are in agreement about the goal of the evaluation and our path to it, we develop instruments. Instrument mapping, linking each tool and question to specific outcomes, has been a good practice to open the communication channels among teams.
  3. Instrument Administration: When working with seasonal staff at the helm of evaluation administration, documentation of processes has been crucial for fidelity. Not surprisingly, with varying levels of experience among program staff, the creation of systems to standardize data collection has been key, including scoring rubrics to be used during observations and guides for survey administration.

  4. Data Analysis and Reporting: When synthesizing data, analyses and reporting need to not only tell a broad impact story but also provide concrete targets and priorities for the program. In this regard, analyses have encompassed pre-post outcome differences and reports on program experiences (a minimal sketch of this kind of analysis follows the list).
  5. Reflection and Integration: At the end of the program cycle, the program team reflects on the data together to inform its path forward. In such a meeting, the team engages in answering three questions: 1) What did you observe about the data? 2) What can you infer about the data and what evidence supports your inference? and 3) What are the next steps to develop and prioritize program modifications?
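
To make the instrument-mapping and pre-post analysis steps above concrete, here is a minimal Python sketch. It is not the Kapor Center’s actual tooling; it simply links hypothetical survey items to the outcomes they are meant to measure and then summarizes mean pre-post change for each outcome. All item IDs, outcome names, and scores are illustrative.

    # Minimal sketch (not the Kapor Center's actual tooling): an instrument map
    # linking hypothetical survey items to outcomes, plus a simple pre-post
    # change summary per outcome. All item IDs, outcomes, and scores are illustrative.
    from statistics import mean

    # Instrument map: each survey item is tied to the outcome it is meant to measure.
    INSTRUMENT_MAP = {
        "q1_coding_confidence": "STEM self-efficacy",
        "q2_math_confidence": "STEM self-efficacy",
        "q3_college_plans": "STEM career interest",
        "q4_cs_career_interest": "STEM career interest",
    }

    # Hypothetical pre/post responses (1-5 Likert scale), keyed by item,
    # with scores listed in the same student order in both dictionaries.
    pre = {"q1_coding_confidence": [3, 2, 4], "q2_math_confidence": [3, 3, 4],
           "q3_college_plans": [4, 3, 3], "q4_cs_career_interest": [2, 3, 3]}
    post = {"q1_coding_confidence": [4, 3, 5], "q2_math_confidence": [4, 4, 4],
            "q3_college_plans": [5, 4, 4], "q4_cs_career_interest": [3, 4, 4]}

    def prepost_by_outcome(instrument_map, pre_scores, post_scores):
        """Average pre-post change for each outcome across its mapped items."""
        changes = {}
        for item, outcome in instrument_map.items():
            deltas = [b - a for a, b in zip(pre_scores[item], post_scores[item])]
            changes.setdefault(outcome, []).extend(deltas)
        return {outcome: mean(deltas) for outcome, deltas in changes.items()}

    for outcome, change in prepost_by_outcome(INSTRUMENT_MAP, pre, post).items():
        print(f"{outcome}: mean pre-post change = {change:+.2f}")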

Developing stronger research-practice ties has been integral to the Kapor Center’s understanding of what works, for whom, and in what context to ensure that more youth of color pursue and persist in STEM fields. Beyond the SMASH program, the practice of collective cooperation between researchers and practitioners provides an opportunity to impact strategies across the field.

 

References

Patton, M. Q. (2008). Utilization-focused evaluation. Newbury Park, CA: Sage.