Archive: collaborative evaluation

Blog: Partnering with Clients to Avoid Drive-by Evaluation

Posted on November 14, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
John Cosgrove
Senior Partner, Cosgrove & Associates

Maggie Cosgrove
Senior Partner, Cosgrove & Associates

If a prospective client says, “We need an evaluation, and we will send you the dataset for evaluation,” our advice is that this type of “drive-by evaluation” may not be in their best interest.

As calls for program accountability and data-driven decision making increase, so does demand for evaluation. Given this context, evaluation services are being offered in a variety of modes. Before choosing an evaluator, we recommend that the client pause to consider what they would like to learn about their efforts and how evaluation can add value to that learning. This perspective requires moving beyond data analysis and reporting of required performance measures to examining what is actually occurring inside the program.

By engaging our clients in conversations about what they would like to learn, we are able to begin a collaborative and discovery-oriented evaluation. Our goal is to partner with our clients to identify and understand strengths, challenges, and emerging opportunities related to program/project implementation and outcomes. This process helps clients understand not only which strategies worked but also why they worked, and it lays the foundation for sustainability and scaling.

These initial conversations can be a bit of a dance, as clients often focus on funder-required accountability and performance measures. This is when it is critically important to elucidate the differences between evaluation and auditing or inspecting. Ann-Murray Brown examines this question and provides guidance as to why evaluation is more than just keeping score in Evaluation, Inspection, Audit: Is There a Difference? As we often remind clients, “we are not the evaluation police.”

During our work with clients to clarify logic models, we encourage them to think of their logic model in terms of storytelling. We pose commonsense questions such as: When you implement a certain strategy, what changes do you expect to occur? Why do you think those changes will take place? What do you need to learn to support current and future strategy development?

Once our client has clearly outlined their “story,” we move quickly to connect data collection to client-identified questions and, as soon as possible, we engage stakeholders in interpreting and using their data. We incorporate Veena Pankaj and Ann Emery’s (2016) data placemat process to engage clients in data interpretation. By working with clients to fully understand their key project questions, focus on what they want to learn, and engage in meaningful data interpretation, we steer clear of the potholes associated with drive-by evaluations.

Pankaj, V. & Emery, A. (2016). Data placemats: A facilitative technique designed to enhance stakeholder understanding of data. In R. S. Fierro, A. Schwartz, & D. H. Smart (Eds.), Evaluation and Facilitation. New Directions for Evaluation, 149, 81-93.

Newsletter: Collaborative Evaluation

Posted on October 1, 2015 in Newsletter

A collaborative evaluation is one “in which there is a significant degree of collaboration or cooperation between evaluators and stakeholders in planning and/or conducting the evaluation.”[1]

Project leaders who are new to grant project evaluation may assume that evaluation is something that is done to them, rather than something they do with an evaluator. Although the degree of collaboration may vary, it is generally advisable for project leaders to work closely with their evaluators on the following tasks:

Define the focus of an evaluation: Be clear about what you, as a project leader, need to learn from the evaluation to help improve your work and what you need to be able to report to NSF to demonstrate accountability and impact.

Minimize barriers to data collection: Inform your evaluator about the best times and places to gather data. If the evaluator needs to collect data directly from students or faculty, an advance note from you or another respected individual from your institution can help a great deal. Help your evaluator connect with your institutional research office or other sources of organizational data.

Review data collection instruments: Your evaluator has expertise in evaluation and research methods, but you know your project’s content area and audience best. Review instruments (e.g., questionnaires, interview/focus group protocols) to ensure they make sense for your audience.

To learn more, visit the website of the American Evaluation Association’s topical interest group on collaborative, participatory, and empowerment evaluation.

[1] Cousins, J. B., Donohue, J. J., & Bloom, G. A. (1996). Collaborative evaluation in North America: Evaluators’ self-reported opinions, practices and consequences. American Journal of Evaluation, 17(3), p. 210.