We EvaluATE - Evaluation Management

Getting Your New ATE Project’s Evaluation off to a Great Start

Posted on October 17, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

New ATE project principal investigators (PIs): When you worked with your evaluator to develop an evaluation plan for your project proposal, you were probably focused on the big picture—how to gather credible and meaningful evidence about the quality and impact of your work. To ensure your evaluation achieves its aims, take these steps now to make sure your project provides the human resources, time, and information needed for a successful evaluation:

  1. Schedule regular meetings with your evaluator. Regular meetings help ensure that your project’s evaluation receives adequate attention. These exchanges should be in real time—via phone call, web meetings, or face-to-face—not just email. See EvaluATE’s new Communication Plan Checklist for ATE PIs and Evaluators for a list of other communication issues to discuss with your evaluator at the start of a project.
  2. Work with your evaluator to create a project evaluation calendar. This calendar should span the life of your project and include the following:
  • Due dates for National Science Foundation (NSF) annual reports: You should include your evaluation reports or at least information from the evaluation in these reports. Work backward from their due dates to determine when evaluation reports should be completed. To find out when your annual report is due, go to Research.gov, enter your NSF login information, select “Awards & Reporting,” then “Project Reports.”
  • Advisory committee meeting dates: You may want your evaluator to attend these meetings to learn more about your project and to communicate directly with committee members.
  • Project events: Activities such as workshops and outreach events present valuable opportunities to collect data directly from the individuals involved in the project. Make sure your evaluator is aware of them.
  • Due dates for new proposal submissions: If submitting to NSF again, you will need to include evidence of your current project’s intellectual merit and broader impacts. Working with your evaluator now will ensure you have compelling evidence to support a future submission.
  3. Keep track of what you’re doing and who is involved. Don’t leave these tasks to your evaluator or wait until the last minute. Taking an active—and proactive—role in documenting the project’s work will save you time and result in more accurate information. Your evaluator can then use that information when preparing their reports. Moreover, you will find it immensely useful to have good documentation at your fingertips when preparing your annual NSF report.
  • Maintain a record of project activities and products—such as conference presentations, trainings, outreach events, competitions, publications—as they are completed. Check out EvaluATE’s project vita as an example.
  • Create a participant database (or spreadsheet): Everyone who engages with your project should be listed. Record their contact information, role in the project, and pertinent demographic characteristics (such as whether a student is a first-generation college student, a veteran, or part of a group that has been historically underrepresented in STEM). You will probably find several uses for this database, such as for follow-up with participants for evaluation purposes, for outreach, and as evidence of your project’s broader impacts.
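If you keep that participant list as a plain spreadsheet, a few lines of code can seed it with consistent columns from the start. Below is a minimal, hypothetical Python sketch (the column names and example entry are illustrative assumptions, not fields prescribed by NSF or EvaluATE) that writes a CSV you can later filter for follow-up surveys, outreach lists, or broader-impacts evidence.

```python
# Hypothetical participant-tracking spreadsheet, written with Python's standard
# csv module. Adjust the columns to match the roles and demographic
# characteristics your own evaluation plan calls for.
import csv

FIELDS = ["name", "email", "project_role", "first_generation",
          "veteran", "underrepresented_in_stem", "events_attended"]

participants = [
    {"name": "Jane Doe", "email": "jdoe@example.edu", "project_role": "student",
     "first_generation": "yes", "veteran": "no",
     "underrepresented_in_stem": "yes", "events_attended": "summer workshop"},
]

with open("participants.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(participants)
```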

An ounce of prevention is worth a pound of cure: Investing time up front to make sure your evaluation is on solid footing will save headaches down the road.

Sustaining Private Evaluation Practices: Overcoming Challenges by Collaborating within Our ATE Community of Practice

Posted on September 27, 2017 in Blog

President, Impact Allies

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

My name is Ben Reid. I am the founder of Impact Allies, a private evaluation firm. This post focuses on the business, rather than the technical, aspects of evaluation. My purpose is to present a challenge to sustaining a private evaluation practice while serving clients well, and to propose an opportunity to overcome that challenge by collaborating within our community of practice.

Challenge

Evaluators often act as one-person shows. It is important to give the principal investigator (PI) and project team a single point of contact, and for that evaluator of record to have thorough knowledge of the project and its partners. However, the many different jobs required by an evaluation contract simply cross too many specialties and personality types for one person to serve a client best.

Opportunity

The first opportunity is to become more professionally aware of our strengths and weaknesses. What are your skills? And, equally important, where are you skill-deficient (you don’t know how to do it) and where are you performance-deficient (you have the skill but aren’t suited for it—because of anxiety, frustration, lack of enthusiasm, etc.)?

The second opportunity is to build relationships within our community of practice. Get to know other evaluators, learn where their strengths are unique, and find out whom they use for ancillary services (their book of contractors). (The upcoming NSF ATE PI conference is a great place to do this.)

Example

My Strengths: Any evaluator can satisfactorily perform the basics – EvaluATE certainly has done a tremendous job of educating and training us. In this field, I am unique in my strengths of external communications, opportunity identification and assessment, strategic and creative thinking, and partnership development. Those skills, along with a background in education, marketing and branding, and project management, have helped me contribute broadly, which has proven useful time and again when working with small teams. Knowing clients well and having an entrepreneurial mindset allow me to do what is encouraged in NSF’s 2010 User-Friendly Handbook for Project Evaluation: “Certain evaluation activities can help meet multiple purposes, if used judiciously” (p. 119).

My Weaknesses: However, an area where I could use some outside support is graphic design and data visualization. Because this work succinctly tells the story and successes of a project, it is very important when communicating with multiple stakeholders, in published works, or for promotional purposes. Where I once performed these tasks myself (with much time and frustration, and at a level that wasn’t noteworthy), I now contract with an expert—and my clients are thereby better served.

Takeaway

“Focus on the user and all else will follow” is the number one philosophy of Google, the company that has given us so much and in turn done so well for itself. Let us also focus on our clients, serving their needs by building our businesses where we are skilled and enthusiastic, and collaborating (partnering, outsourcing, or referring) within our community of practice where another professional can do a better job for them.

Blog: Evaluation’s Role in Helping Clients Avoid GroupThink

Posted on July 10, 2017 in Blog

Senior Evaluator, SmartStart Evaluation & Research

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In December of 2016, I presented a poster on a STEM-C education project at the Restore America’s Estuaries National Summit, co-hosted by The Coastal Society. Having a social science background, I assumed I’d be “out of my depth” amid restoration science topics. However, a documentary on estuarine restoration projects along New Jersey’s Hidden Coast inspired me with insights on the importance of evaluation in helping projects achieve effective outcomes. The film highlights the vital importance of horseshoe crabs as a keystone species beset by myriad threats: Their sustainability as a renewable resource was overestimated and their ecological importance undervalued until serious repercussions became impossible to ignore. Teams of biologists, ecologists, military veterans, communication specialists, and concerned local residents came together to help restore their habitat and raise awareness to help preserve this vital species.

This documentary was not the only project presented at the conference in which diverse teams of scientists, volunteers, educators, and others came together to work toward a shared goal. I began to reflect on how similar the composition of these groups, and their need for successful collaboration, was to the teams contributing to many projects I evaluate. Time and again it was revealed that well-intended interdisciplinary team members often struggled at first to communicate effectively because of different expectations, priorities, and perspectives. Presenters often spoke about ways these challenges had been overcome, most frequently through extensive communication and open exchanges of ideas. However, these were only the successful projects, promoting their outcomes as inspiration and guidance for others. How often might a lack of open communication lead projects down a different path? When does this occur? And how can an evaluator help project leaders foresee and avoid potential pitfalls?

Often, the route to undesired and unsuccessful outcomes lies in lack of effective communication, which is a common symptom of GroupThink. Imagine the leadership team on any project you evaluate:

  • Are they a highly cohesive group?
  • Do they need to make important decisions, often under deadlines or other pressure?
  • Do members prefer consensus to conflict?

These are ideal conditions for GroupThink, in which team members disregard information that does not fit with their shared beliefs, and dissenting ideas or opinions are unwelcome. Partners’ desire for harmony can lead them to ignore early warning signs of threats to their goals and to make poor decisions.

How do we, as evaluators, help them avoid GroupThink?

  • Examine perceived sustainability objectively: Horseshoe crabs are an ancient species, once so plentiful they covered Atlantic beaches during spawning, each laying 100,000 or more eggs. Because they were perceived as endlessly sustainable, their usefulness as bait and fertilizer led to overharvesting. Similarly, project leaders may have misconceptions about resources or little knowledge of other factors influencing their capacity to maintain activities. By using validated measures, such as Washington University’s Program Sustainability Assessment Tool (PSAT), evaluators can raise project leaders’ awareness of the factors contributing to sustainability and facilitate planning sessions to identify adaptation strategies and increase the chances of success.
  • Investigate unintended consequences of a project’s activities: Horseshoe crabs’ copper-based blood is crucial to the pharmaceutical industry. However, they cannot be successfully raised in captivity. Instead, they are captured, drained of about 30 percent of their blood, and returned to the ocean. While survival rates are 70 percent or more, researchers are becoming concerned that the trauma may affect breeding and other behaviors. Evaluators can help project leaders delve into the cause-and-effect relationships underlying problems by employing techniques such as the Five Whys to identify root causes and by developing logic models to clarify relationships between resources, activities, outputs, and outcomes.
  • Anticipate unintended chains of events: Horseshoe crab eggs are the primary source of protein for migrating birds. The declining horseshoe crab population has put the survival of at least three bird species at risk. As evaluators, we have many options (e.g., key informant interviews, risk assessments, negative program theory) to identify aspects of program activities with potentially negative impacts and to recommend ways to mitigate the harm.

A horseshoe crab in a bottle sits on my desk to remind me not to be reticent about offering constructive criticism to help project leaders avoid GroupThink.

Blog: Evaluation Management Skill Set

Posted on April 12, 2017 in Blog

CEO, SPEC Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We, as evaluators, all know that managing an evaluation is quite different from managing a scientific research project. Sure, we need to exercise due diligence in completing the basic inquiry tasks: deciding on study questions and hypotheses; figuring out the strongest design, sampling plan, data collection methods, and analysis strategies; and interpreting and reporting results. But evaluation’s purposes extend well beyond proving or disproving a research hypothesis. Evaluators must also focus on how the evaluation will lead to enlightenment and what role it plays in supporting decision making. Evaluations can leave in place important processes that extend beyond the study itself, like data collection systems and a changed organizational culture that places greater emphasis on data-informed decision making. Evaluations also exist within local and organizational political contexts, which are of less importance to academic and scientific research.

Very little has been written in the evaluation literature about evaluation management. Compton and Baizerman are the most prolific authors, having edited two issues of New Directions for Evaluation on the topic. They approach evaluation management from a theoretical perspective, discussing issues like the basic competencies of evaluation managers within different organizational contexts (2009) and the role of evaluation managers in advice giving (2012).

I would like to describe good evaluation management in terms of the actual tasks that an evaluation manager must excel in—what evaluation managers must be able to actually do. For this, I looked to the field of project management. There is a large body of literature about project management, and whole organizations, like the Project Management Institute, are dedicated to the topic. Overlaying evaluation management onto the core skills of a project manager, here is the skill set I see as needed to effectively manage an evaluation:

Technical Skills:

  • Writing an evaluation plan (including but not limited to descriptions of basic inquiry tasks)
  • Creating evaluation timelines
  • Writing contracts between the evaluation manager and various members of the evaluation team (if they are subcontractors), and with the client organization
  • Completing the application for human subjects institutional review board (HSIRB) approval, if needed

Financial Skills:

  • Creating evaluation budgets, including accurately estimating hours each person will need to devote to each task
  • Generating or justifying billing rates of each member of the evaluation team
  • Tracking expenditures to assure that the evaluation is completed within the agreed-upon budget
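As a rough illustration of the estimating and tracking tasks above, here is a minimal Python sketch. The roles, rates, hours, and expenditures are hypothetical placeholders, not figures from any actual contract; the point is simply to keep estimates and actuals side by side at the task level.

```python
# Hypothetical evaluation budget: estimated hours per task per role, multiplied
# by billing rates, then compared against expenditures logged to date.
RATES = {"lead_evaluator": 120.0, "research_assistant": 45.0}  # dollars per hour

ESTIMATED_HOURS = {  # task -> {role: hours}
    "evaluation plan":    {"lead_evaluator": 16},
    "survey development": {"lead_evaluator": 10, "research_assistant": 20},
    "annual report":      {"lead_evaluator": 24, "research_assistant": 12},
}

def task_cost(hours_by_role):
    """Cost of one task, summed across the roles working on it."""
    return sum(RATES[role] * hours for role, hours in hours_by_role.items())

budget = {task: task_cost(hours) for task, hours in ESTIMATED_HOURS.items()}
total_budget = sum(budget.values())

expenditures = {"evaluation plan": 1900.0, "survey development": 1450.0}
spent = sum(expenditures.values())

print(f"Budgeted: ${total_budget:,.2f}  Spent to date: ${spent:,.2f}  "
      f"Remaining: ${total_budget - spent:,.2f}")
```

Keeping estimates and actuals at the task level makes it easier to spot early which activities are consuming more of the budget than planned.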

Interpersonal Skills:

  • Preparing a communications plan outlining who needs to be apprised of what information or involved in which decisions, how often and by what method
  • Using appropriate verbal and nonverbal communication skills to assure that the evaluation not only gets done, but good relationships are maintained throughout
  • Assuming leadership in guiding the evaluation to its completion
  • Resolving the enormous number of conflicts that can arise both within the evaluation team and between the evaluators and the stakeholders

I think that this framing can provide practical guidance for what new evaluators need to know to effectively manage an evaluation and guidance for how veteran evaluators can organize their knowledge for practical sharing. I’d be interested in comments as to the comprehensiveness and appropriateness of this list…am I missing something?

Blog: Gauging Workplace Readiness Among Cyberforensics Program Graduates

Posted on March 29, 2017 in Blog

Principal Consultant, Preferred Program Evaluations

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this blog, I share my experience leading a multi-year external evaluation that provided useful insights about how to best strengthen the work readiness components of an ATE project.

The Advanced Cyberforensics Education Consortium (ACE) is a National Science Foundation-funded Advanced Technological Education center whose goal is to design and deliver an industry-driven curriculum that produces qualified and adaptive graduates equipped to work in the field of cyberforensics and secure our nation’s electronic infrastructure. The initiative is being led by Daytona State College of Florida and three other “state lead” partner institutions in Georgia, South Carolina, and North Carolina. The targeted geographic audience of ACE is community and state colleges in the southeastern region of the United States.

The number of cyberforensics and network security program offerings among ACE’s four state lead institutions increased nearly fivefold between the initiative’s first and fourth year.  One of ACE’s objectives is to align the academic program core with employers’ needs and ensure the curriculum remains current with emerging trends, applications, and cyberforensics platforms.  In an effort to determine the extent to which this was occurring across partner institutions, I, ACE’s external evaluator, sought feedback directly from the project’s industry partners.

A Dialogue with Industry Representatives

Based on a series of stakeholder interviews conducted with industry partners, I learned that program graduates were viewed favorably for their content knowledge and professionalism.  The interviewees noted that the graduates they hired added value to their organizations and that they would consider hiring additional graduates from the same academic programs.  In contrast, I also received feedback via interviews that students were falling short in the desired fundamental set of soft skills.

An electronic survey for industry leaders affiliated with ACE state lead institutions was designed to gauge their experience working with graduates of the respective cyberforensics programs and solicit suggestions for enhancing the programs’ ability to generate graduates who have the requisite skills to succeed in the workplace.  The first iteration of the survey read too much like a performance review.  To address this limitation, the question line was modified to inquire more specifically about the graduates’ knowledge, skills, and abilities related to employability in the field of cyberforensics.

ACE’s P.I. and I wanted to discover how the programs could be tailored to ensure a smoother transition from higher education to industry and how to best acclimate graduates to the workplace.  Additionally, we sought to determine the ways in which the coursework is accountable and to what extent the graduates’ skillset is transferable.

What We Learned from Industry Partners

On the whole, new hires were academically prepared to complete assigned tasks, possessed intellectual curiosity, and displayed leadership qualities. A few recommendations were specific to collaboration between the institution and the business community. One suggestion was to invite some of the college’s key faculty and staff to the businesses to learn more about day-to-day operations and how they could be integrated with classroom instruction. Another industry representative encouraged institutions to engage more readily with the IT business community to generate student internships and co-ops. Survey respondents also suggested promoting professional membership in IT organizations to give graduates a well-rounded point of view as business technologists.

ACE’s P.I. and I came to understand that recent graduates – regardless of age – have room for improvement when it comes to communicating and following complex directions with little oversight.  Employers were of the opinion that graduates could have benefited from more emphasis on attention to detail, critical thinking, and best practices.  Another recommendation centered on the inclusion of a “systems level” class or “big picture integrator” that would allow students to explore how all of the technology pieces fit together cohesively.  Lastly, to remain responsive to industry trends, the partners requested additional hands-on coursework related to telephony and cloud-based security.

Blog: 3 Inconvenient Truths about ATE Evaluation

Posted on October 14, 2016 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Many evaluations fall short of their potential to provide useful, timely, and accurate feedback to projects because project leaders or evaluators (or both) have unrealistic expectations. In this blog, I expose three inconvenient truths about ATE evaluation. Dealing with these truths head-on will help project leaders avoid delays and misunderstandings.

1. Your evaluator does not have all the answers.

Even for highly experienced evaluators, every evaluation is new and has to be tailored to the project’s particular context. Do not expect your evaluator to produce an ideal evaluation plan on Day 1, be able to pull the perfect data collection instrument off his or her shelf, or know just the right strings to pull to get data from your institutional research office. Your evaluator is an expert on evaluation, not your project or your institution.

As an evaluator, when I ask clients for input on an aspect of their evaluation, the last thing I want to hear is “Whatever you think, you’re the expert.” Work with your evaluator to refine your evaluation plan to ensure it fits your project, your environment, and your information needs. Question elements that don’t seem right to you and provide constructive feedback. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects (Chapter 4) has detailed information about how project leaders can bring their expertise to the evaluation process.

2. There is no one right answer to the question, “What does NSF want from evaluation?”

This is the question I get the most as the director of the evaluation support center for the National Science Foundation’s Advanced Technological Education (ATE) program. The truth is, NSF is not prescriptive about what an ATE evaluation should look like, and different program officers have different expectations. So, if you’ve been looking for the final word on what NSF wants from an ATE evaluation, you can end your search because you won’t find it.

However, NSF does request common types of information from all projects via their annual reports and the annual ATE survey. To make sure you are not caught off guard, preview the Research.gov reporting template and the most recent ATE annual survey questions. If you are doing research, get familiar with the Common Guidelines for Education Development and Research.

If you’re still concerned about meeting expectations, talk to your NSF program officer.

3. Project staff need to put in time and effort.

Evaluation matters often get put on a project’s back burner so more urgent issues can be addressed. (Yes, even an evaluation support center is susceptible to no-time-for-evaluation-itis.) But if you put off dealing with evaluation matters until you feel like you have time for them, you will miss key opportunities to collect data and use the information to make improvements to your project.

To make sure your project’s evaluation gets the attention it needs:

  • Set a recurring conference call or meeting with your evaluator—at least once a month.
  • Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation.
  • Assign one person on your project team to be the point-person for evaluation.
  • Commit to using your evaluation results in a timely way—if you have a recurring project activity, make sure you gather feedback from those involved and use it to improve the next event.

 

Blog: Good Communication Is Everything!

Posted on February 3, 2016 in Blog

Evaluator, South Carolina Advanced Technological Education Resource Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I am new to the field of evaluation, and the most important thing that I learned in my first nine months is that effective communication is critical to the success of the evaluation of a project. Whether primarily virtual or face-to-face, knowing the communication preferences of your client is important. Knowing the client’s schedule is also important. For example, if you are working with faculty, having a copy of their teaching and office hours schedule for each semester can help.

While having long lead times to get to know the principal investigator and project team is desirable and can promote strong relationship building in advance of implementing evaluation strategies, that isn’t always possible. With my first project, contracts were finalized with the client and evaluators only days before a major project event. There was little time to prepare and no opportunity to get to know the principal investigator or grant team before launching into evaluation activities. In preparation, I had an evaluation plan, a copy of the proposal as submitted, and other project-related documents. Also, I was working with a veteran evaluator who knew the PI and had experience evaluating another project for the client. Nonetheless, there were surprises that caught both the veteran evaluator and me off guard. As the two evaluators worked with the project team to home in on the data needed to make the evaluation stronger, we discovered that the goals, objectives, and some of the activities had been changed during the project’s negotiations with NSF prior to funding. As evaluators, we realized we were working from a different playbook than the PI and other team members! The memory of this discovery still sends chills down my back!

A mismatch regarding communication styles and anticipated response times can also get an evaluation off to a rocky start. If not addressed, unmet expectations can lead to disappointment and animosity. In this case, face-to-face interaction was key to keeping the evaluation moving forward. Even when a project is clearly doing exciting and impactful work, it isn’t always possible to collect all of the data called for in the evaluation plan. I’ve learned firsthand that the tug-of-war that exists between an evaluator’s desire and preparation to conduct a rigorous evaluation and the need to be flexible and to work within the constraints of a particular situation isn’t always comfortable.

Lessons learned

From this experience, I learned some important points that I think will be helpful to new evaluators.

  • Establishing a trusting relationship can be as important as conducting the evaluation. Find out early if you and the principal investigator are compatible and can work together. The PI and evaluator should get to know each other and establish some common expectations at the earliest possible date.
  • Determine how you will communicate and ensure a common understanding of what constitutes a reasonable response time for emails, telephone calls, or requests for information from either party. Individual priorities differ and thus need to be understood by both parties.
  • Be sure to ask at the outset if there have been changes to the project’s goals and objectives since the proposal was submitted. Adjust the evaluation plan accordingly.
  • Determine the data that can be and will be collected and who will be responsible for providing what information. In some situations, it helps to secure permission to work directly with an institutional research office or internal evaluator for a project to collect data.
  • When there are differences of opinion or misunderstandings, confront them head on. If the relationship continues to be contentious in any way, changing evaluators may be the best solution.

I hope that some of my comments will help other newcomers to realize that the yellow brick road does have some potential potholes and road closures.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Evaluator’s Perspective

Posted on December 16, 2015 by Manu Platt and Ayesha Boyce in Blog

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

In this second part of the conversation, a principal investigator (client) interviews the independent evaluator to unearth key points within our professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the principal investigator (PI)/client interviews the evaluator, and key takeaways are suggested for evaluation clients (see our prior post in which the tables are turned).

Understanding of Evaluation

PI (Manu): What were your initial thoughts about evaluation before we began working together?

Evaluator (Ayesha): “I thought evaluation was this amazing field where you had the ability to positively impact programs. I assumed that everyone else, including my clients, would believe evaluation was just as exciting and awesome as I did.”

Key takeaway: Many evaluators are passionate about their work and ultimately want to provide valid and useful feedback to clients.

Evaluation Reports

PI: What were your initial thoughts when you submitted the evaluation reports to me and the rest of the leadership team?

Evaluator: “I thought you (stakeholders) were all going to rush to read them. I had spent a lot of time writing them.”

PI: Then you found out I wasn’t reading them.

Evaluator: “Yes! Initially I was frustrated, but I realized that maybe because you hadn’t been exposed to evaluation, that I should set up a meeting to sit down and go over the reports with you. I also decided to write brief evaluation memos that had just the highlights.”

Key takeaway: As a client, you may need to explicitly ask for the type of evaluation reporting that will be useful to you. You may need to let the evaluator know that it is not always feasible for you to read and digest long evaluation reports.

Ah ha moment!

PI: When did you have your “Ah ha! – I know how to make this evaluation useful” moment?

Evaluator: “I had two. The first was when I began to go over the qualitative formative feedback with you. You seemed really excited and interested in the data and recommendations.

“The second was when I began comparing your program to other similar programs I was evaluating. I saw that it was incredibly useful to you to see what their pitfalls and successful strategies were.”

Key takeaway: As a client, you should check in with the evaluator and explicitly state the type of data you find most useful. Don’t assume that the evaluator will know. Additionally, ask if the evaluator has evaluated similar programs and if she or he can give you some strengths and challenges those programs faced.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Principal Investigator’s Perspective

Posted on December 10, 2015 by Ayesha Boyce and Manu Platt in Blog

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

In this blog post, an independent evaluator and principal investigator (client) interview each other to unearth key points in their professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the evaluator interviews the client, and key takeaways are suggested for evaluators (watch for our follow-up post in which the tables are turned).

Understanding of Evaluation

Evaluator (Ayesha): What were your initial thoughts about evaluation before we began working together?
PI (Manu): “Before this I had no idea about evaluation, never thought about it. I had probably been involved in some before as a participant or subject but never really thought about it.”

Key takeaway: Clients have different experiences with evaluation, which can make it harder for them to initially appreciate the power of evaluation.

Evaluation Reports

Evaluator: What were your initial thoughts about the evaluation reports provided to you?
PI: “So for the first year, I really didn’t look at them. And then you would ask, “Did you read the evaluation report?” and I responded, “uuuuhhh…. No.”

Key takeaway: Don’t assume that your client is reading your evaluation reports. It might be necessary to check in with them to ensure utilization.

Evaluator: Then I pushed you to read them thoroughly and what happened?
PI: “Well, I heard the way you put it and thought, “Oh I should probably read it.” I found out that it was part of your job and not just your Ph.D. project and it became more important. Then when I read it, it was interesting! Part of the thing I noticed – you know we’re three institutions partnering – was what people thought about the other institutions. I was hearing from some of the faculty at the other institutions about the program. I love the qualitative data even more nowadays. That’s the part that I care about the most.”

Key takeaway: Check with your client to see what type of data and what structure of reporting they find most useful. Sometimes a final summative report isn’t enough.

Ah ha moment!

Evaluator: When did you have your “Ah ha! – the evaluation is useful” moment?
PI: “I had two. I realized as diversity director that I was the one who was supposed to stand up and comment on evaluation findings to the National Science Foundation representatives during the project’s site visit. I would have to explain the implementation, satisfaction rate, and effectiveness of our program. I would be standing there alone trying to explain why there was unhappiness here, or why the students weren’t going into graduate school at these institutions.

“The second was, as you’ve grown as an evaluator and worked with more and more programs, you would also give us comparisons to other programs. You would say things like, “Oh other similar programs have had these issues and they’ve done these things. I see that they’re different from you in these aspects, but this is something you can consider.” Really, the formative feedback has been so important.”

Key takeaway: You may need to talk to your client about how they plan to use your evaluation results, especially when it comes to being accountable to the funder. Also, if you evaluate similar programs it can be important to share triumphs and challenges across programs (without compromising the confidentiality of the programs; share feedback without naming exact programs). 

Blog: The Shared Task of Evaluation

Posted on November 18, 2015 in Blog

Independent Educational Program Evaluator

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation was an important strand at the recent ATE meeting in Washington, DC. As I reflected on my own practice as an external evaluator and listened to the comments of my peers, I was impressed once again with how dependent evaluation is on a shared effort by project stakeholders. Ironically, the more external an evaluator is to a project, the more important it is to collaborate closely with PIs, program staff, and participating institutions. Many assessment and data collection activities that are technically part of the outside evaluation are logistically and financially dependent on the internal workings of the project.

This has implications for the scope of work for evaluation and for the evaluation budget. A task might appear in the project proposal as, “survey all participants,” and it would likely be part of the evaluator’s scope of work. But in practice, tasks such as deciding what to ask on the survey, reaching the participants, and following up with nonresponders are likely to require work by the PIs or their assistants.

Occasionally you hear certain percentages cited as appropriate levels of effort for evaluation. Whatever portion of a project’s overall effort evaluation represents, my approach is to think of that portion as the sum of my efforts and those of my clients. This has several advantages:

  • During planning, it immediately highlights data that might be difficult to collect. It is much easier to come up with a solution or an alternative in advance and avoid a big gap in the evidence record.
  • It makes clear who is responsible for what activities and avoids embarrassing confrontations along the lines of, “I thought you were going to do that.”
  • It keeps innocents on the project and evaluation staffs from being stuck with long (and possibly uncompensated) hours trying to carry out tasks outside their expected job descriptions.
  • It allows for more accurate budgeting. If I know that a particular study involves substantial clerical support for pulling records from school databases, I can reduce my external evaluation fee, while at the same time warning the PI to anticipate those internal evaluation costs.
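One concrete way to make that shared effort visible is to list each data collection task with the hours expected from the external evaluator and from project staff, then total each side’s share. The short Python sketch below is a hypothetical illustration (the tasks and hour figures are invented for the example), not a prescribed method.

```python
# Hypothetical split of evaluation effort between the external evaluator and
# internal project staff, by data collection task.
tasks = [
    # (task, external evaluator hours, internal project staff hours)
    ("design participant survey",                            8,  2),
    ("distribute survey and follow up with nonresponders",   1,  6),
    ("pull records from school databases",                   0, 10),
    ("analyze results and write report",                    12,  1),
]

external = sum(ext for _, ext, _ in tasks)
internal = sum(int_hrs for _, _, int_hrs in tasks)
total = external + internal

print(f"External evaluator: {external} hours ({external / total:.0%} of effort)")
print(f"Project staff:      {internal} hours ({internal / total:.0%} of effort)")
```

Seeing the split in one place during planning helps both sides budget realistically for their portion of the work.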

The simplest way to assure that these dependencies are identified is to consider them during the initial logic modelling of the project. If an input is professional development, and its output is instructors who use the professional development, and the evidence for the output is use of project resources, who will have to be involved in collecting that evidence? Even if the evaluator proposes to visit every instructor and watch them in practice, it is likely that those visits will have to be coordinated by someone close to the instructional calendar and daily schedule. Specifying and fairly sharing those tasks produces more data, better data, and happier working relationships.