
Blog: Kirkpatrick Model for ATE Evaluation

Posted on October 2, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Jim Kirkpatrick, Senior Consultant, Kirkpatrick Partners
Wendy Kayser Kirkpatrick, President, Kirkpatrick Partners

The Kirkpatrick Model is an evaluation framework organized around four levels of impact: reaction, learning, behavior, and results. It was developed more than 50 years ago by Jim’s father, Dr. Don Kirkpatrick, specifically for evaluating training initiatives in business settings. For decades, it has been widely believed that the four levels are applicable only to evaluating the effectiveness of corporate training programs. However, we and hundreds of global “four-level ambassadors” — including Lori Wingate and her colleagues at EvaluATE — have successfully applied Kirkpatrick outside of the typical “training” box. The Kirkpatrick Model has broad appeal because of its practical, results-oriented approach.

The Kirkpatrick Model provides the foundation for evaluating almost any kind of social, business, health, or education intervention. The process starts with identifying what success will look like and driving through with a well-coordinated, targeted plan of support, accountability, and measurement. It is a framework for demonstrating ultimate value through a compelling chain of evidence.

Kirkpatrick Model Visual

Whether your Advanced Technological Education (ATE) grant focuses on enhancing a curricular program, providing professional development to faculty, developing educational materials, or serving as a resource and dissemination center, the four levels are relevant.

At the most basic level (Level 1: Reaction), you need to know what your participants think of your work and your products. If they don’t value what you’re providing, you have little chance of producing higher-level results.

Next, it’s important to determine how and to what extent participants’ knowledge, skills, attitudes, confidence, and/or commitment changed because of the resources and follow-up support you provided (Level 2: Learning). Many evaluations, unfortunately, don’t go beyond Level 2. But it’s a big mistake to assume that if learning takes place, behaviors will change and results will follow. It’s critical to determine the extent to which people are doing things differently because of their new knowledge and skills (Level 3: Behavior).

Finally, you need to be able to answer the question “So what?” In the ATE context, that means determining how your work has impacted the landscape of advanced technological education and workforce development (Level 4: Results).

The four levels are the foundation of the model, but there is much more to it. We hope you’ll take the time to examine and reflect on how this approach can bring value to your initiative and its evaluation. To learn more about Kirkpatrick, visit our website, kirkpatrickpartners.com, where you’ll find a wealth of free resources, as well as information on our certificate and certification programs.

Want to learn more about this topic? View EvaluATE’s webinar ATE Evaluation: Measuring Reaction, Learning, Behavior, and Results.

 

Blog: Partnering with Clients to Avoid Drive-by Evaluation

Posted on November 14, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
John Cosgrove, Senior Partner, Cosgrove & Associates
Maggie Cosgrove, Senior Partner, Cosgrove & Associates

If a prospective client says, “We need an evaluation, and we will send you the dataset for evaluation,” our advice is that this type of “drive-by evaluation” may not be in their best interest.

As calls for program accountability and data-driven decision making increase, so does demand for evaluation. Given this context, evaluation services are being offered in a variety of modes. Before choosing an evaluator, we recommend that the client pause to consider what they would like to learn about their efforts and how evaluation can add value to that learning. This perspective requires moving beyond data analysis and reporting of required performance measures to examining what is occurring inside the program.

By engaging our clients in conversations related to what they would like to learn, we are able to begin a collaborative and discovery-oriented evaluation. Our goal is to partner with our clients to identify and understand strengths, challenges, and emerging opportunities related to program/project implementation and outcomes. This process helps clients understand not only which strategies worked but also why they worked, and it lays the foundation for sustainability and scaling.

These initial conversations can be a bit of a dance, as clients often focus on funder-required accountability and performance measures. This is when it is critically important to elucidate the differences between evaluation and auditing or inspecting. Ann-Murray Brown examines this question and provides guidance as to why evaluation is more than just keeping score in Evaluation, Inspection, Audit: Is There a Difference? As we often remind clients, “we are not the evaluation police.”

During our work with clients to clarify logic models, we encourage them to think of their logic model in terms of storytelling. We pose commonsense questions such as: When you implement a certain strategy, what changes do you expect to occur? Why do you think those changes will take place? What do you need to learn to support current and future strategy development?

Once our client has clearly outlined their “story,” we move quickly to connect data collection to client-identified questions and, as soon as possible, we engage stakeholders in interpreting and using their data. We incorporate Veena Pankaj and Ann Emery’s (2016) data placemat process to engage clients in data interpretation.  By working with clients to fully understand their key project questions, focus on what they want to learn, and engage in meaningful data interpretation, we steer clear of the potholes associated with drive-by evaluations.

Pankaj, V., & Emery, A. (2016). Data placemats: A facilitative technique designed to enhance stakeholder understanding of data. In R. S. Fierro, A. Schwartz, & D. H. Smart (Eds.), Evaluation and Facilitation. New Directions for Evaluation, 149, 81–93.