
Blog: Kirkpatrick Model for ATE Evaluation

Posted on October 2, 2019 by Jim Kirkpatrick and Wendy Kayser Kirkpatrick in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Jim Kirkpatrick, Senior Consultant, Kirkpatrick Partners
Wendy Kayser Kirkpatrick, President, Kirkpatrick Partners

The Kirkpatrick Model is an evaluation framework organized around four levels of impact: reaction, learning, behavior, and results. It was developed more than 50 years ago by Jim’s father, Dr. Don Kirkpatrick, specifically for evaluating training initiatives in business settings. For decades, it has been widely believed that the four levels are applicable only to evaluating the effectiveness of corporate training programs. However, we and hundreds of global “four-level ambassadors” — including Lori Wingate and her colleagues at EvaluATE — have successfully applied Kirkpatrick outside of the typical “training” box. The Kirkpatrick Model has broad appeal because of its practical, results-oriented approach.

The Kirkpatrick Model provides the foundation for evaluating almost any kind of social, business, health, or education intervention. The process starts with identifying what success will look like and driving through with a well-coordinated, targeted plan of support, accountability, and measurement. It is a framework for demonstrating ultimate value through a compelling chain of evidence.

[Image: Kirkpatrick Model visual]

Whether your Advanced Technological Education (ATE) grant focuses on enhancing a curricular program, providing professional development to faculty, developing educational materials, or serving as a resource and dissemination center, the four levels are relevant.

At the most basic level (Level 1: Reaction), you need to know what your participants think of your work and your products. If they don’t value what you’re providing, you have little chance of producing higher-level results.

Next, it’s important to determine how and to what extent participants’ knowledge, skills, attitudes, confidence, and/or commitment changed because of the resources and follow-up support you provided (Level 2: Learning). Many evaluations, unfortunately, don’t go beyond Level 2. But it’s a big mistake to assume that if learning takes place, behavior will change and results will follow. It’s critical to determine the extent to which people are doing things differently because of their new knowledge and skills (Level 3: Behavior).

Finally, you need to be able to answer the question “So what?” In the ATE context, that means determining how your work has impacted the landscape of advanced technological education and workforce development (Level 4: Results).

The four levels are the foundation of the model, but there is much more to it. We hope you’ll take the time to examine and reflect on how this approach can bring value to your initiative and its evaluation. To learn more about the Kirkpatrick Model, visit our website, kirkpatrickpartners.com, where you’ll find a wealth of free resources, as well as information on our certificate and certification programs.

Want to learn more about this topic? View EvaluATE’s webinar ATE Evaluation: Measuring Reaction, Learning, Behavior, and Results.


Newsletter: Survey Says Winter 2016

Posted on January 1, 2016 in Newsletter

On the 2015 ATE survey, 65 of 230 principal investigators (28%) reported spending some portion of their annual budgets on research. Six of these projects were funded as targeted research. Among the other 59 projects, expenditures on research ranged from 1% to 65%, with a median of 14%. With just six targeted research projects and less than a third of all ATE grantees engaging in research, there is immense opportunity within the ATE program to expand research on technician education.


The full report of 2015 ATE survey findings, along with data snapshots and downloadable graphics, is available from www.evalu-ate.org/annual_survey/.

Newsletter: Communicating Results from Prior NSF Support

Posted on January 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

The ATE proposal deadline in early October is many months away, but if you are submitting for new funding this year, now is the time to reflect on your project’s achievements and make sure you will be able to write a compelling account of your current or past project’s results as they relate to the NSF review criteria of Intellectual Merit and Broader Impacts. A section titled “Results from Prior NSF Support” is required whenever a proposal’s PI or co-PI has received NSF funding in the past five years. A proposal may be returned without review if this section does not use the specific headings “Intellectual Merit” and “Broader Impacts.”

Given that these specific headings are required, you should have something to say about your project’s achievements in these distinct areas. It is OK for some projects to emphasize one area over another (Intellectual Merit or Broader Impacts), but grantees should be able to demonstrate value in both areas. Descriptions of achievements should be supported with evidence. Bold statements about a proposed project’s potential broader impacts, for example, will be more convincing to reviewers if the proposer can describe tangible benefits of previously funded work.

To help with this aspect of proposal development, EvaluATE has created a Results from Prior NSF Support Checklist (see http://bit.ly/prior-check). This one-page checklist covers the NSF requirements for this section of a proposal, along with our additional suggestions for what to include and how to present it.

Two EvaluATE blog posts offer additional guidance in this area: Amy Germuth (http://bit.ly/ag-reapply) offers specific guidance regarding wording and structure, and Lori Wingate (http://bit.ly/nsf-merit) shares tips for assessing the quality and quantity of evidence of a project’s Intellectual Merit and Broader Impacts, with links to helpful resources.

The task of identifying and collecting evidence of results from prior support should not wait until proposal writing time. It should be embedded in a project’s ongoing evaluation.