John Cosgrove

Senior Partner, Cosgrove & Associates

John Cosgrove is a senior partner with Cosgrove & Associates. Mr. Cosgrove has extensive evaluation and community college experience. He is currently leading the evaluations of a number of Department of Labor- and National Science Foundation-funded projects. In addition, he works with colleges to help improve their strategic planning, assessment, and internal research and decision-support data systems. Specific areas of expertise include developmental and utilization-focused evaluation, institutional research and strategic planning, development of user-friendly decision-support data systems, return on investment analysis, and academic program review. Mr. Cosgrove is committed to social justice and to efforts to enhance equity in outcomes for all students.


Blog: Utilization-focused evaluation

Posted on December 11, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
John Cosgrove
Senior Partner, Cosgrove & Associates
Maggie Cosgrove
Senior Partner, Cosgrove & Associates


As seasoned evaluators committed to utilization-focused evaluation, we partner with clients to create questions and data analysis connected to continuous improvement. We stress developmental evaluation[1] to help link implementation and outcome evaluation. Sounds good, right? Well, not so fast.

Confession time. At times a client’s attention to data wanes as a project progresses, and efforts to actively engage clients in using data for continuous improvement do not generate the desired enthusiasm. Although interest in data re-emerges as the project concludes, that enthusiasm seems more related to answering “How did we do?” than to exploring “What did we learn?” This phenomenon, depicted in the U-shaped curve in Figure 1, suggests that when data may have great potential to impact continuous improvement (“the Messy Middle”), clients may be less curious about their data.

To address this issue, we revisit Stufflebeam’s guiding principle: the purpose of evaluation is to improve, not prove.[2] Generally, clients have good intentions to use data for improvement and are interested in such endeavors. However, as Bryk points out in his work with networked improvement communities (NIC),[3] sometimes practitioners need help learning to improve. Borrowing from NIC concepts,[4] we developed the Thought Partner Group (TPG) and incorporated it into our evaluation. This group’s purpose is to assist with data interpretation, sharing, and usage. To achieve these goals, we invite practitioners or stakeholders who are working across the project and who have a passion for the project, an interest in learning, and an eagerness to explore data. We ask this group to go beyond passive data conversations and address questions such as:

  • What issues are getting in the way of progress and what can be done to address them?
  • What data and actions are needed to support sustaining or scaling?
  • What gaps exist in the evaluation?

The TPG’s focus on improvement and data analysis breathes life into the evaluation and improvement processes. Group members are carefully selected for their deep understanding of local context and a willingness to support the transfer of knowledge gained during the evaluation. Evaluation data has a story to tell, and the TPG helps clients give a voice to their data.

Although not a silver bullet, the TPG has helped improve our clients’ use of evaluation data and has helped them get better at getting better. The TPG model supports the evaluation process and mirrors Engelbart’s C-level activity[5] by helping shed light on the evaluator’s and the client’s understanding of the Messy Middle.

[1] Patton, M. Q. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.
[2] Stufflebeam, D. L. (1971). The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education.
[3] Bryk, A., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Cambridge, MA: Harvard Education Publishing.
[4] Bryk, A. S., Gomez, L. M., & Grunow, A. (2010). Getting ideas into action: Building networked improvement communities in education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching. Also see McKay, S. (2017, February 23). Quality improvement approaches: The networked improvement model [Blog post].
[5] Engelbart, D. C. (2003, September). Improving our ability to improve: A call for investment in a new future. IBM Co-Evolution Symposium.

Blog: Measure What Matters: Time for Higher Education to Revisit This Important Lesson

Posted on May 23, 2018 in Blog

John Cosgrove
Senior Partner, Cosgrove & Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If one accepts Peter Drucker’s premise that “what gets measured, gets managed,” then two things are apparent: measurement is valuable, but measuring the wrong thing has consequences. Data collection efforts focusing on the wrong metrics lead to mismanagement and failure to recognize potential opportunities. Focusing on the right measures matters. For example, in Moneyball, Michael Lewis describes how the Oakland Athletics improved their won-loss record by revising player evaluation metrics to more fully understand players’ potential to score runs.

The higher education arena has equally high stakes concerning evaluation. A growing number of states (more than 30 in 2017)[1] have adopted performance funding systems to allocate higher education funding. Such systems focus on increasing the number of degree completers and have been fueled by calls for increased accountability. The logic of performance funding seems clear: Tie funding to the achievement of performance metrics, and colleges will improve their performance. However, research suggests we might want to re-examine this logic.  In “Why Performance-Based College Funding Doesn’t Work,” Nicholas Hillman found little to no evidence to support the connection between performance funding and improved educational outcomes.

Why are more states jumping on the performance-funding train? States face political pressure, with calls for increased accountability and limited taxpayer dollars. But do the chosen performance metrics capture the full impact of education? Do the metrics result in a more efficient allocation of state funding? The jury may still be out on these questions, but Hillman’s evidence suggests the answer is no.

The disconnect between performance funding and improved outcomes may widen even more when one considers open-enrollment colleges or colleges that serve a high percentage of adult, nontraditional, or low-income students. For example, when a student transfers from a community college (without a two-year degree) to a four-year college, should that behavior count against the community college’s degree completion metric? Might that student have been well-served by their time at the lower-cost college? When community colleges provide higher education access to adult students who enroll on a part-time basis, should they be penalized for not graduating such students within the arbitrary three-year time period? Might those students and that community have been well-served by access to higher education?

To ensure more equitable and appropriate use of performance metrics, colleges and states would be well-served to revisit current metrics and more clearly define appropriate measures and data collection strategies. Most importantly, states and colleges should connect the analysis of performance metrics to clear and funded pathways for improvement. Stepping back to remember that the goal of performance measurement is to help build capacity and improve performance will place both parties in a better position to support and evaluate higher education performance in a more meaningful and equitable manner.

[1] Jones, T., & Jones, S. (2017, November 6). Can equity be bought? A look at outcomes-based funding in higher ed [Blog post].

Blog: Partnering with Clients to Avoid Drive-by Evaluation

Posted on November 14, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

John Cosgrove
Senior Partner, Cosgrove & Associates

Maggie Cosgrove
Senior Partner, Cosgrove & Associates

If a prospective client says, “We need an evaluation, and we will send you the dataset for evaluation,” our advice is that this type of “drive-by evaluation” may not be in their best interest.

As calls for program accountability and data-driven decision making increase, so does demand for evaluation. Given this context, evaluation services are being offered in a variety of modes. Before choosing an evaluator, we recommend the client pause to consider what they would like to learn about their efforts and how evaluation can add value to such learning. This perspective requires one to move beyond data analysis and reporting of required performance measures to examining what is occurring inside the program.

By engaging our clients in conversations about what they would like to learn, we are able to begin a collaborative, discovery-oriented evaluation. Our goal is to partner with our clients to identify and understand strengths, challenges, and emerging opportunities related to program/project implementation and outcomes. This process helps clients understand not only which strategies worked but also why they worked, and it lays the foundation for sustainability and scaling.

These initial conversations can be a bit of a dance, as clients often focus on funder-required accountability and performance measures. This is when it is critically important to elucidate the differences between evaluation and auditing or inspecting. Ann-Murray Brown examines this question and provides guidance as to why evaluation is more than just keeping score in Evaluation, Inspection, Audit: Is There a Difference? As we often remind clients, “we are not the evaluation police.”

During our work with clients to clarify logic models, we encourage them to think of their logic model in terms of storytelling. We pose commonsense questions such as: When you implement a certain strategy, what changes do you expect to occur? Why do you think those changes will take place? What do you need to learn to support current and future strategy development?

Once our client has clearly outlined their “story,” we move quickly to connect data collection to client-identified questions and, as soon as possible, we engage stakeholders in interpreting and using their data. We incorporate Veena Pankaj and Ann Emery’s (2016) data placemat process to engage clients in data interpretation.  By working with clients to fully understand their key project questions, focus on what they want to learn, and engage in meaningful data interpretation, we steer clear of the potholes associated with drive-by evaluations.

Pankaj, V., & Emery, A. (2016). Data placemats: A facilitative technique designed to enhance stakeholder understanding of data. In R. S. Fierro, A. Schwartz, & D. H. Smart (Eds.), Evaluation and Facilitation. New Directions for Evaluation, 149, 81-93.