Archive: evaluation use

Evaluation Responsibility Diagram

Posted on March 14, 2018 in Resources

This diagram provides an overview of evaluation responsibilities for the project staff, external evaluator, and combined responsibilities. This example is an excerpt from the Evaluation Basics for Non-evaluators webinar. Access slides, recording, handout, and additional resources from bit.ly/mar18-webinar.

File: Click Here
Type: Doc
Category: Getting Started
Author(s): Lori Wingate

Blog: Strategic Knowledge Mapping: A New Tool for Visualizing and Using Evaluation Findings in STEM

Posted on January 6, 2016 in Blog

Director of Research and Evaluation, Meaningful Evidence, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A challenge in designing effective STEM programs is that they address very large, complex goals, such as increasing the number of underrepresented students in advanced technology fields.

To design the best possible programs to address such a large, complex goal, we need a correspondingly broad and deep understanding, one that comes from looking at the big picture. It is like medical researchers developing a new cure: they need a deep understanding of how a medication interacts with the body and with other medications, and how it will affect patients of different ages and medical histories.

A new method, Integrative Propositional Analysis (IPA), lets us visualize and assess information gained from evaluations. (For details, see our white papers.) At the 2015 American Evaluation Association conference, we demonstrated how to use the method to integrate findings from the PAC-Involved (Physics, Astronomy, Cosmology) evaluation into a strategic knowledge map. (View the interactive map.)

A strategic knowledge map supports program design and evaluation in many ways.

Measures understanding gained.
The map is an alternative logic model format that provides broader and deeper understanding than usual logic model approaches. Unlike other modeling techniques, IPA lets us quantitatively assess information gained. Results showed that the new map incorporating findings from the PAC-Involved evaluation had much greater breadth and depth than the original logic model. This indicates increased understanding of the program, its operating environment, how they work together, and options for action.

Graphic 1

Shows which parts of our program model (map) are better understood.
In the figure below, the yellow shadow around the concept “Attendance/attrition challenges” indicates that this concept is better understood. We understand something better when multiple causal arrows point to it, just as a map is more useful when it shows multiple roads leading to each destination.

Graphic 2

Shows which parts of the map are best supported by evidence.
We have more confidence in causal links that are supported by data from multiple sources. The thick arrow below shows a relationship supported by all five evaluation data sources: project team interviews, a student focus group, a review of student reflective journals, observations, and student surveys all provided evidence that more experiments, demos, and hands-on activities made students more engaged in PAC-Involved.

Graphic 3

Shows the invisible.
The map also helps us to “see the invisible.” If a concept has no causal arrows pointing to it, we know that something is missing from the map. This indicates that more research is needed to fill those “blank spots on the map” and improve our model.

Graphic 4
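If it helps to think of the map as data, below is a minimal sketch (in Python) of the general idea behind Graphics 1 through 4, assuming the map is stored as a simple directed graph whose links record which evaluation data sources support them. This is only an illustration, not IPA itself; IPA’s actual metrics are described in our white papers, and every concept name, link, and count below is a made-up example.

```python
# Hypothetical sketch only -- not the IPA method itself. It treats a strategic
# knowledge map as a directed graph so the ideas behind Graphics 1-4 become
# concrete: breadth/depth, well-understood concepts, evidence-backed links,
# and "blank spots." All concepts, links, and sources are invented examples.

# Each causal link: (cause, effect, evaluation data sources supporting it)
links = [
    ("Hands-on experiments/demos", "Student engagement",
     ["team interviews", "focus group", "journals", "observation", "surveys"]),
    ("Student engagement", "Attendance/attrition challenges", ["team interviews"]),
    ("Transportation barriers", "Attendance/attrition challenges", ["focus group"]),
    ("Attendance/attrition challenges", "Interest in STEM careers", ["surveys"]),
]

concepts = {name for cause, effect, _ in links for name in (cause, effect)}

# Graphic 1: breadth ~ number of concepts; depth ~ number of causal links.
print("Breadth (concepts):", len(concepts))
print("Depth (causal links):", len(links))

# Count incoming arrows for each concept.
incoming = {name: 0 for name in concepts}
for _, effect, _ in links:
    incoming[effect] += 1

# Graphic 2: concepts with multiple incoming arrows are better understood.
print("Better understood:", [c for c, n in incoming.items() if n >= 2])

# Graphic 3: the link backed by the most data sources is the "thick arrow."
thick = max(links, key=lambda link: len(link[2]))
print("Best-evidenced link:", thick[0], "->", thick[1], len(thick[2]), "sources")

# Graphic 4: concepts with no incoming arrows are "blank spots" needing research.
print("Blank spots:", [c for c, n in incoming.items() if n == 0])
```

Run as written, this toy example flags “Attendance/attrition challenges” as better understood (two incoming arrows) and the experiments-to-engagement link as the best-evidenced one, mirroring the kinds of patterns highlighted in the figures above.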

Supports collaboration.
The integrated map can support collaboration within the project team. We can zoom in on the parts that are most relevant for action.

Graphic 5

Supports strategic planning.
The integrated map also supports strategic planning. Solid arrows leading to our goals indicate things that help. Dotted lines show the challenges.

Graphic 6

Clarifies short-term and long-term outcomes.
We can create customized map views to show concepts of interest, such as outcomes for students and connections between the outcomes.

Graphic 7

We encourage you to add a strategic knowledge map to your next evaluation. The evaluation team, project staff, students, and other stakeholders all stand to benefit.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Evaluator’s Perspective

Posted on December 16, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Manu Platt and Ayesha Boyce

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

In this second part of the conversation, the principal investigator (PI/client) interviews the independent evaluator to unearth key points within our professional relationship that led to clarity and increased evaluation use, and key takeaways are suggested for evaluation clients (see our prior post, in which the tables are turned). This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients.

Understanding of Evaluation

PI (Manu): What were your initial thoughts about evaluation before we began working together?

Evaluator (Ayesha): “I thought evaluation was this amazing field where you had the ability to positively impact programs. I assumed that everyone else, including my clients, would believe evaluation was just as exciting and awesome as I did.”

Key takeaway: Many evaluators are passionate about their work and ultimately want to provide valid and useful feedback to clients.

Evaluation Reports

PI: What were your initial thoughts when you submitted the evaluation reports to me and the rest of the leadership team?

Evaluator: “I thought you (stakeholders) were all going to rush to read them. I had spent a lot of time writing them.”

PI: Then you found out I wasn’t reading them.

Evaluator: “Yes! Initially I was frustrated, but I realized that, perhaps because you hadn’t been exposed to evaluation before, I should set up a meeting to sit down and go over the reports with you. I also decided to write brief evaluation memos with just the highlights.”

Key takeaway: As a client, you may need to explicitly ask for the type of evaluation reporting that will be useful to you. You may need to let the evaluator know that it is not always feasible for you to read and digest long evaluation reports.

Ah ha moment!

PI: When did you have your “Ah ha! – I know how to make this evaluation useful” moment?

Evaluator: “I had two. The first was when I began to go over the qualitative formative feedback with you. You seemed really excited and interested in the data and recommendations.”

“The second was when I began comparing your program to other similar programs I was evaluating. I saw that it was incredibly useful to you to see what their pitfalls and successful strategies were.”

Key takeaway: As a client, you should check in with the evaluator and explicitly state the type of data you find most useful. Don’t assume that the evaluator will know. Additionally, ask if the evaluator has evaluated similar programs and whether they can share some of the strengths and challenges those programs faced.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Principal Investigator’s Perspective

Posted on December 10, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Ayesha Boyce and Manu Platt

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

In this blog post, an independent evaluator and a principal investigator (client) interview each other to unearth key points in their professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the evaluator interviews the client, and key takeaways are suggested for evaluators (watch for our follow-up post, in which the tables are turned).

Understanding of Evaluation

Evaluator (Ayesha): What were your initial thoughts about evaluation before we began working together?
PI (Manu): “Before this I had no idea about evaluation, never thought about it. I had probably been involved in some before as a participant or subject but never really thought about it.”

Key takeaway: Clients come to a project with varying levels of experience with evaluation, which can make it harder for them to appreciate the power of evaluation at first.

Evaluation Reports

Evaluator: What were your initial thoughts about the evaluation reports provided to you?
PI: “So for the first year, I really didn’t look at them. And then you would ask, ‘Did you read the evaluation report?’ and I responded, ‘Uuuuhhh… no.’”

Key takeaway: Don’t assume that your client is reading your evaluation reports. It might be necessary to check in with them to ensure utilization.

Evaluator: Then I pushed you to read them thoroughly. What happened?
PI: “Well, I heard the way you put it and thought, ‘Oh, I should probably read it.’ I found out that it was part of your job and not just your Ph.D. project, and it became more important. Then when I read it, it was interesting! Part of what I noticed (you know we’re three institutions partnering) was what people thought about the other institutions. I was hearing from some of the faculty at the other institutions about the program. I love the qualitative data even more nowadays. That’s the part that I care about the most.”

Key takeaway: Check with your client to see what type of data and what structure of reporting they find most useful. Sometimes a final summative report isn’t enough.

Ah ha moment!

Evaluator: When did you have your “Ah ha! – the evaluation is useful” moment?
PI: “I had two. I realized as diversity director that I was the one who was supposed to stand up and comment on evaluation findings to the National Science Foundation representatives during the project’s site visit. I would have to explain the implementation, satisfaction rate, and effectiveness of our program. I would be standing there alone trying to explain why there was unhappiness here, or why the students weren’t going into graduate school at these institutions.

“The second was, as you’ve grown as an evaluator and worked with more and more programs, you would also give us comparisons to other programs. You would say things like, ‘Oh, other similar programs have had these issues and they’ve done these things. I see that they’re different from you in these aspects, but this is something you can consider.’ Really, the formative feedback has been so important.”

Key takeaway: You may need to talk to your client about how they plan to use your evaluation results, especially when it comes to being accountable to the funder. Also, if you evaluate similar programs, it can be valuable to share triumphs and challenges across them, without naming the programs or otherwise compromising their confidentiality.

Blog: Tips for Evaluation Recommendations

Posted on June 3, 2015 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This week I am in Atlanta at the American Evaluation Association (AEA) Summer Evaluation Institute, presenting a workshop on Translating Evaluation Findings into Actionable Recommendations. Although the art of crafting practical, evidence-based recommendations is not covered in depth in evaluation textbooks or academic courses, most evaluators (86%, according to Fleischer and Christie’s survey of AEA members) believe that making recommendations is part of an evaluator’s job. By reading as much as I can on this topic[1] and reflecting on my own practice, I have assembled 14 tips for how to develop, present, and follow up on evaluation recommendations:

DEVELOP

  1. Determine the nature of recommendations needed or expected.  At the design stage, ask stakeholders: What do you hope to learn from the evaluation? What decisions will be influenced by the results? Should the evaluation include recommendations?
  2. Generate possible recommendations throughout the evaluation. Keep a log of ideas as you collect data and observe the program. I like Roberts-Gray, Buller, and Sparkman’s (1987) evaluation question-driven framework.
  3. Base recommendations on evaluation findings and other credible sources. Findings are important, but they’re often not sufficient for formulating recommendations.  Look to other credible sources, such as program goals, stakeholders/program participants, published research, experts, and the program’s logic model.
  4. Engage stakeholders in developing and/or reviewing recommendations prior to their finalization. Clients should not be surprised by anything in an evaluation report, including the recommendations. If you can engage stakeholders directly in developing recommendations, they will feel more ownership. (Read Adrienne Adams’s article about a great process for this.)
  5. Focus recommendations on actions within the control of intended users. If the evaluation client doesn’t have control over the policy governing their programs, don’t bother recommending changes at that level.
  6. Provide multiple options for achieving desired results.  Balance consideration of the cost and difficulty of implementing recommendations with the degree of improvement expected; if possible, offer alternatives so stakeholders can select what is most feasible and important to do.

PRESENT

  7. Clearly distinguish between findings and recommendations. Evaluation findings reflect what is; recommendations are a prediction about what could be. Developing recommendations requires a separate reasoning process.
  8. Write recommendations in clear, action-oriented language. I often see words like consider, attend to, recognize, and acknowledge in recommendations. Those call the client’s attention to an issue but don’t provide guidance as to what to do.
  9. Specify the justification sources for each recommendation. It may not be necessary to include this information in an evaluation report, but be prepared to explain how and why you came up with the recommendations.
  10. Explain the costs, benefits, and challenges associated with implementing recommendations. Provide realistic forecasts of these matters so clients can make informed decisions about whether to implement the recommendations.
  11. Be considerate: exercise political and interpersonal sensitivity. Avoid “red flag” words like fail and lack, don’t blame or embarrass, and be respectful of cultural and organizational values.
  12. Organize recommendations, such as by type, focus, timing, audience, and/or priority. If many recommendations are provided, organize them to help the client digest the information and prioritize their actions.

FOLLOW-UP

  13. Meet with stakeholders to review and discuss recommendations in their final form. This is an opportunity to make sure they fully understand the recommendations as well as to lay the groundwork for action.
  14. Facilitate decision making and action planning around recommendations. I like the United Nations Development Programme’s “Management Response Template” as an action planning tool.

See also my handy one-pager of these tips for evaluation recommendations.

[1] See especially Hendricks & Papagiannis (1990) and Utilization-Focused Evaluation (4th ed.) by Michael Quinn Patton.

Newsletter: Why Does the NSF Worry about Project/Center Evaluation?

Posted on April 1, 2015 in Newsletter

Lead Program Director, ATE, National Science Foundation

I often use a quick set of questions that Dr. Gerhard Salinger developed in response to the question, “How do you develop an excellent proposal?” Question 4 is especially relevant to the issue of project/center evaluation:

  1. What is the need that will be addressed?
  2. How do you specifically plan to address this need?
  3. Does your project team have the necessary expertise to carry out your plan?
  4. How will you know if you succeed?
  5. How will you tell other people about the results and outcomes?

Question 4 addresses the evaluation activities of a project or center, and I hope you consider evaluation essential to conducting an effective and successful project. Formative assessment guides you and lets you know whether your strategy is working; it gives you the information you need to shift strategies if necessary. Summative assessment then provides you and others with information on how well the overall project goals and objectives were met. Evaluation adds the concept of value to your project. For example, evaluation activities might provide you with information on participants’ perceived value of a workshop, and follow-on evaluation activities might tell you how many faculty used what they learned in a course. A final step might be to evaluate the impact on student learning in the course following the course change.

As a program officer, I can quickly scan the project facts (e.g., how many of this or that), but I tend to spend much more time on the evaluation data because it provides the value component of your project activities. Let’s go back to the faculty professional development workshops. Program officers definitely want to know whether the workshops were held and how many people attended, but it is essential to provide information on the value of the workshops. It’s great to know that faculty “liked” a workshop, but of greater importance is the impact on their teaching practices and on the student learning that resulted from those changes. Your annual reports (yes, we do read them carefully) can include the entire evaluation report as an attachment, but it would be really helpful if you, the PI, provided an overview of what you see as your project’s value added within the body of the report.

There are several reasons evaluation information is important to NSF program officers. First, each federal dollar that you expend carrying out your project is one that the taxpayers expect both you and the NSF to be accountable for. Second, within the NSF, program portfolios are scrutinized to determine programmatic impact and effectiveness. Third, the ATE program is congressionally mandated and program data and evaluation are often used to respond to congressional questions. Put more concisely, NSF wants to know if the investment in your project/center was a wise one and if value was generated from this investment.

Newsletter: Survey Says

Posted on April 1, 2015 in Newsletter

Doctoral Associate, EvaluATE, Western Michigan University

Each year, ATE PIs are asked what types of reports their evaluators provide and how they use the information. The majority of ATE PIs receive both oral and written reports from their evaluators.

[Two figures: the types of reports ATE PIs receive from their evaluators, and rates of evaluation use by report format]

PIs who receive reports in both oral and written forms report higher rates of evaluation use, as shown in the figures above.

You can find more at evalu-ate.org/annual_survey/

Newsletter: Expectations to Change (E2C): A Process to Promote the Use of Evaluation for Project Improvement

Posted on April 1, 2014 in Newsletter

How can we make sure evaluation findings are used to improve projects? This is a question on the minds of evaluators, project staff, and funders alike. The Expectations to Change (E2C) process is one answer. E2C is a six-step process through which evaluation stakeholders are guided from establishing performance standards (i.e., “expectations”) to formulating action steps toward desired change. The process can be completed in one or more working sessions with those evaluation stakeholders best positioned to put the findings to use. E2C is designed as a process of self-evaluation for projects, and the role of the evaluator is that of facilitator, teacher, and technical consultant. The six steps of the E2C process are summarized in the table below. While the specific activities used to carry out each step should be tailored to the setting, the suggested activities are based on various implementations of the process to date.

E2C Process Overview

1. Set Expectations. Objective: Establish standards to serve as a frame of reference for determining whether the findings are “good” or “bad.” Suggested activities: instruction, worksheets, and a consensus-building process.
2. Review Findings. Objective: Examine the findings, compare them to the established expectations, and form an initial reaction; celebrate successes. Suggested activities: instruction, individual processing, and round-robin group discussion.
3. Identify Key Findings. Objective: Identify the findings that fall below expectations and require immediate attention. Suggested activities: a ranking process and facilitated group discussion.
4. Interpret Key Findings. Objective: Generate interpretations of what the key findings mean. Suggested activities: a brainstorming activity such as “Rotating Flip Charts.”
5. Make Recommendations. Objective: Generate recommendations for change based on interpretations of the findings. Suggested activities: a brainstorming activity such as “Rotating Flip Charts.”
6. Plan for Change. Objective: Formulate an action plan for implementing recommendations. Suggested activities: planning activities that enlist all of the stakeholders and result in concrete next steps, such as a sticky wall and small-group work.

To find out whether the E2C process does in fact encourage projects to use evaluation for improvement, we asked a group of staff and administrators from a nonprofit human service organization to participate in an online survey one year after their E2C workshop. The findings revealed an increase in staff knowledge and awareness of clients’ experiences receiving services, as well as specific changes to the way services were delivered. The findings also showed that participation in the E2C workshop fostered the service providers’ appreciation for, increased their knowledge of, and enhanced their ability to engage in evaluation activities.

Based on these findings and our experiences with the process to date, we believe the E2C process facilitates self-evaluation for the purpose of project improvement by giving program stakeholders the opportunity to systematically compare their evaluation results to agreed-upon performance standards, celebrate successes, and address weaknesses.

E2C Process Handout

E2C was co-created with Nkiru Nnawulezi, M.A., and Lela Vandenberg, Ph.D., Michigan State University. For more information, contact Adrienne Adams at adamsadr@msu.edu.