
Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Evaluator’s Perspective

Posted on December 16, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Manu Platt
Associate Professor, Georgia Institute of Technology

Ayesha Boyce
Assistant Professor, Department of Educational Research Methodology
University of North Carolina Greensboro

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

In this second part of the conversation, a principal investigator (client) interviews the independent evaluator to unearth key points within our professional relationship that lead to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the principal investigator (PI)/client interviews the evaluator, and key takeaways are suggested for evaluation clients (see our prior post, in which the tables are turned).

Understanding of Evaluation

PI (Manu): What were your initial thoughts about evaluation before we began working together?

Evaluator (Ayesha): “I thought evaluation was this amazing field where you had the ability to positively impact programs. I assumed that everyone else, including my clients, would believe evaluation was just as exciting and awesome as I did.”

Key takeaway: Many evaluators are passionate about their work and ultimately want to provide valid and useful feedback to clients.

Evaluation Reports

PI: What were your initial thoughts when you submitted the evaluation reports to me and the rest of the leadership team?

Evaluator: “I thought you (stakeholders) were all going to rush to read them. I had spent a lot of time writing them.”

PI: Then you found out I wasn’t reading them.

Evaluator: “Yes! Initially I was frustrated, but I realized that because you hadn’t been exposed to evaluation, I should set up a meeting to sit down and go over the reports with you. I also decided to write brief evaluation memos with just the highlights.”

Key takeaway: As a client, you may need to explicitly ask for the type of evaluation reporting that will be useful to you. You may need to let the evaluator know that it is not always feasible for you to read and digest long evaluation reports.

Ah ha moment!

PI: When did you have your “Ah ha! – I know how to make this evaluation useful” moment?

Evaluator: “I had two. The first was when I began to go over the qualitative formative feedback with you. You seemed really excited and interested in the data and recommendations.

“The second was when I began comparing your program to other similar programs I was evaluating. I saw that it was incredibly useful for you to see what their pitfalls and successful strategies were.”

Key takeaway: As a client, you should check in with the evaluator and explicitly state the type of data you find most useful. Don’t assume that the evaluator will know. Additionally, ask whether the evaluator has evaluated similar programs and can share some of the strengths and challenges those programs faced.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Principal Investigator’s Perspective

Posted on December 10, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Ayesha Boyce
Assistant Professor, Department of Educational Research Methodology
University of North Carolina Greensboro

Manu Platt
Associate Professor, Georgia Institute of Technology

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

In this blog post, an independent evaluator and principal investigator (client) interview each other to unearth key points in their professional relationship that lead to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. Here, the evaluator interviews the client, and key takeaways are suggested for evaluators (watch for our follow-up post, in which the tables are turned).

Understanding of Evaluation

Evaluator (Ayesha): What were your initial thoughts about evaluation before we began working together?
PI (Manu): “Before this I had no idea about evaluation, never thought about it. I had probably been involved in some before as a participant or subject but never really thought about it.”

Key takeaway: Clients have different experiences with evaluation, which can make it harder for them to initially appreciate the power of evaluation.

Evaluation Reports

Evaluator: What were your initial thoughts about the evaluation reports provided to you?
PI: “So for the first year, I really didn’t look at them. And then you would ask, ‘Did you read the evaluation report?’ and I responded, ‘Uuuuhhh… no.’”

Key takeaway: Don’t assume that your client is reading your evaluation reports. It might be necessary to check in with them to ensure utilization.

Evaluator: Then I pushed you to read them thoroughly, and what happened?
PI: “Well, I heard the way you put it and thought, ‘Oh, I should probably read it.’ I found out that it was part of your job and not just your Ph.D. project, and it became more important. Then when I read it, it was interesting! Part of what I noticed (you know, we’re three institutions partnering) was what people thought about the other institutions. I was hearing from some of the faculty at the other institutions about the program. I love the qualitative data even more nowadays. That’s the part I care about the most.”

Key takeaway: Check with your client to see what type of data and what structure of reporting they find most useful. Sometimes a final summative report isn’t enough.

Ah ha moment!

Evaluator: When did you have your “Ah ha! – the evaluation is useful” moment?
PI: “I had two. I realized as diversity director that I was the one who was supposed to stand up and comment on evaluation findings to the National Science Foundation representatives during the project’s site visit. I would have to explain the implementation, satisfaction rate, and effectiveness of our program. I would be standing there alone trying to explain why there was unhappiness here, or why the students weren’t going into graduate school at these institutions.

“The second was, as you’ve grown as an evaluator and worked with more and more programs, you would also give us comparisons to other programs. You would say things like, ‘Oh, other similar programs have had these issues and they’ve done these things. I see that they’re different from you in these aspects, but this is something you can consider.’ Really, the formative feedback has been so important.”

Key takeaway: You may need to talk to your client about how they plan to use your evaluation results, especially when it comes to being accountable to the funder. Also, if you evaluate similar programs, it can be valuable to share triumphs and challenges across programs without compromising confidentiality (share the feedback without naming specific programs).

Blog: The Shared Task of Evaluation

Posted on November 18, 2015 in Blog

Independent Educational Program Evaluator

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation was an important strand at the recent ATE meeting in Washington, DC. As I reflected on my own practice as an external evaluator and listened to the comments of my peers, I was impressed once again with how dependent evaluation is on a shared effort by project stakeholders. Ironically, the more external an evaluator is to a project, the more important it is to collaborate closely with PIs, program staff, and participating institutions. Many assessment and data collection activities that are technically part of the outside evaluation are logistically and financially dependent on the internal workings of the project.

This has implications for the scope of work for evaluation and for the evaluation budget. A task might appear in the project proposal as “survey all participants,” and it would likely fall within the evaluator’s scope of work. But in practice, tasks such as deciding what to ask on the survey, reaching the participants, and following up with nonresponders are likely to require work by the PIs or their assistants.

Occasionally you hear certain percentages cited as appropriate levels of effort for evaluation. Whatever portion of a project’s overall effort evaluation represents, my approach is to think of that portion as the sum of my efforts and those of my clients. This has several advantages:

  • During planning, it immediately highlights data that might be difficult to collect. It is much easier to come up with a solution or an alternative in advance and avoid a big gap in the evidence record.
  • It makes clear who is responsible for what activities and avoids embarrassing confrontations along the lines of, “I thought you were going to do that.”
  • It keeps innocents on the project and evaluation staffs from being stuck with long (and possibly uncompensated) hours trying to carry out tasks outside their expected job descriptions.
  • It allows for more accurate budgeting. If I know that a particular study involves substantial clerical support for pulling records from school databases, I can reduce my external evaluation fee, while at the same time warning the PI to anticipate those internal evaluation costs.

The simplest way to ensure that these dependencies are identified is to consider them during the initial logic modeling of the project. If an input is professional development, its output is instructors who apply that professional development, and the evidence for the output is their use of project resources, who will have to be involved in collecting that evidence? Even if the evaluator proposes to visit every instructor and watch them in practice, those visits will likely have to be coordinated by someone close to the instructional calendar and daily schedule. Specifying and fairly sharing those tasks produces more data, better data, and happier working relationships.