How can we make sure evaluation findings are used to improve projects? This is a question on the minds of evaluators, project staff, and funders alike. The Expectations to Change (E2C) process is one answer. E2C is a six-step process through which evaluation stakeholders are guided from establishing performance standards (i.e., “expectations”) to formulating action steps toward desired change. The process can be completed in one or more working sessions with those evaluation stakeholders best positioned to put the findings to use. E2C is designed as a process of self-evaluation for projects, and the role of the evaluator is that of facilitator, teacher, and technical consultant. The six steps of the E2C process are summarized in the table below. While the specific activities used to carry out each step should be tailored to the setting, the suggested activities are based on various implementations of the process to date.
E2C Process Overview

| Step | Purpose | Suggested Activities |
|------|---------|----------------------|
| 1. Set Expectations | Establish standards to serve as a frame of reference for determining whether the findings are “good” or “bad” | Instruction, worksheets, and a consensus-building process |
| 2. Review Findings | Examine the findings, compare them to established expectations, and form an initial reaction; celebrate successes | Instruction, individual processing, and round-robin group discussion |
| 3. Identify Key Findings | Identify the findings that fall below expectations and require immediate attention | Ranking process and facilitated group discussion |
| 4. Interpret Key Findings | Generate interpretations of what the key findings mean | Brainstorming activity such as “Rotating Flip Charts” |
| 5. Make Recommendations | Generate recommendations for change based on interpretations of the findings | Brainstorming activity such as “Rotating Flip Charts” |
| 6. Plan for Change | Formulate an action plan for implementing recommendations | Planning activities that enlist all stakeholders and result in concrete next steps, such as a sticky wall and small-group work |
To find out if the E2C process does in fact encourage projects to use evaluation for improvement, we asked a group of staff and administrators from a nonprofit human service organization to participate in an online survey one year after their E2C workshop. The findings revealed an increase in staff knowledge and awareness of clients’ experiences receiving services, as well as specific changes to the way services were delivered. The findings also showed that participation in the E2C workshop fostered the service providers’ appreciation for, increased their knowledge of, and enhanced their ability to engage in evaluation activities.
Based on these findings and our experiences with the process to date, we conclude that the E2C process facilitates self-evaluation for the purpose of project improvement: it gives program stakeholders the opportunity to systematically compare their evaluation results to agreed-upon performance standards, celebrate successes, and address weaknesses.
E2C Process Handout
E2C was co-created with Nkiru Nnawulezi, M.A., and Lela Vandenberg, Ph.D., Michigan State University. For more information, contact Adrienne Adams at firstname.lastname@example.org.
The most common purposes, or intended uses of evaluations, are often described by the terms formative evaluation and summative evaluation.
Formative evaluation focuses on evaluation for project improvement, in contrast with summative evaluation which uses evaluation results to make decisions about project adoption, expansion, contraction, continuation, or cancellation.
Since formative evaluation is all about project improvement, it needs to occur while there is still time to implement change. So the earlier a formative evaluation can begin in an ATE project cycle, the better. Formative evaluation is also a recurring activity. Because it informs ongoing change, those who will be involved in implementing that change (project leaders and staff) are the ones most interested in its results.
E. Jane Davidson notes in her book, Evaluation Methodology Basics, that there are two main areas in which formative evaluation is especially useful. Adapted for the ATE context those areas are:
- To help a new project “find its feet” by improving project plans early in the award cycle. For example, collecting early evidence of project relevance from faculty and students allows changes to be made before a project component is fully rolled out.
- To help more established projects improve their services, become more efficient with their grant dollars, or reach a larger audience. For projects seeking renewed funding, formative evaluation can identify areas for improvement (even in long-standing activities) so the project can better respond to changing needs.
This was a question submitted anonymously to EvaluATE by an ATE principal investigator (PI), so I do not know the specific nature of the recommendation in question. Therefore, my response isn’t about the substance of whatever this recommendation may have been about, but on the interpersonal and political dynamics of the situation.
Let’s put the various players’ roles into perspective:
As PI, you are ultimately responsible for your project—delivering what you outlined in your grant proposal/negotiations and making decisions about how best to conduct the project based on your experience, expertise, and input from various advisors. You are in the position of authority when it comes to how your project is implemented and which recommendations, from which sources, to implement in order to ensure the success of your project.
Your NSF program officer (PO) monitors your project, primarily based on information you provide in your annual report, submitted to him or her via research.gov. Your PO may provide extremely valuable guidance and advice, but the PO’s role is to comment on your project as described in the report. You are not obligated to accept the advice. However, the PO does approve the report, based on his or her assessment of whether the project is sufficiently meeting the expectations of the grant. If you choose not to accept your program officer’s recommendations—which is completely acceptable—you should provide a clear rationale for your decision, respectfully and diplomatically, addressing each of the issues raised. Document this response, for example in your annual report and/or in a response to the evaluation report.
Your evaluator is a consultant you hired to provide a service to your project in exchange for compensation. You are not obligated to accept this person’s recommendations, either. However, you should give your evaluator’s recommendations—especially those based on evidence—careful consideration and explain why you believe they are or are not appropriate for your project. An evaluator should never “ding” your project for not implementing the evaluation recommendations.
If you are really not sure who is right and neither person’s position (the PO’s recommendation or the evaluator’s disagreement with it) especially resonates with you and your understanding of what your project needs, you should seek additional information. If you have an advisory panel, this is exactly the type of tricky situation they can help with. If you don’t, you might consult an experienced person at your institution or another ATE project or center PI. Whichever way you go, you should be able to provide a clear rationale for your position and communicate it to both parties. This is not a popularity contest between your evaluator and your program officer. This is about making the right decisions for your project.
Who are the stakeholders in your project evaluation? How should they be engaged in the evaluation? An evaluation stakeholder is anyone who is involved in or affected by a project or its evaluation, from the student experiencing a new curriculum to an NSF program officer monitoring a project’s progress. Engagement can be anything from serving as an information source for the evaluation to participating in data interpretation and recommendation development. With such broad definitions, it can be difficult to figure out the right mix of whom to involve in evaluation activities and how.
We have created a new resource to support reflection and decision making around this issue. The Identifying Stakeholders and their Role in an Evaluation worksheet presents a series of prompts to help PIs and evaluators move from thinking generically about stakeholder engagement to identifying specific individuals and the type of involvement best suited to them.
Involving stakeholders in a project’s evaluation has many benefits. For example, when stakeholders are engaged in various aspects of an evaluation, it usually increases the evaluation’s relevance and usefulness to the project. When key stakeholders demonstrate support for the evaluation, it may enhance cooperation with data collection. Stakeholders’ knowledge of a project’s context and content typically exceeds that of an external evaluator; that knowledge can be tapped for myriad purposes throughout an evaluation. But stakeholder engagement is not a one-size-fits-all activity. It’s not necessary—and rarely feasible—to involve all stakeholders to the same degree in an evaluation. Maybe some just need to be kept abreast of evaluation activities, while others should take a more active role in decision making. The worksheet is intended to help you figure out what stakeholder engagement should look like in your project.
Click on the link to download the Identifying Stakeholders and their Role in an Evaluation worksheet.
The PI of an ATE center or project is responsible for maintaining a strong flow of communication with the evaluator. This begins even before the project is funded and continues as a dynamic exchange throughout the funding cycle. There are easy ways that the PI and evaluator can add value to a project; simply asking for help is one that is often overlooked.
A recent example demonstrates how an ATE center used the evaluator’s expertise to get specific feedback on the use of clearinghouse materials. The co-PI asked the evaluator for assistance, and the resulting survey allowed the evaluator to gather additional information about the usage of curriculum and instructional materials, and the center’s PIs to gain valuable input about the use of its existing materials.
Second, it is important to actually use the information gained from the evaluation data. This is a natural, built-in opportunity for the PI and the team to let impact data drive the future direction of the center or project. Using data to make decisions provides an opportunity to test assumptions and to learn whether current practices and products are working.
Third, the evaluation develops evidence that can be used to obtain further funding, advance technical education, and contribute to the field of evaluation. Through regular communication and collaboration, the project, the PI, and the evaluator all gain value and can more effectively contribute to the design of current and future projects. Together, the PI and the evaluator can learn about impact, trends, and key successes that are appropriate for scaling. Thus evaluation becomes more than reporting; it becomes a tool for strategic planning.
The Bio-Link evaluator, Candiya Mann, not only provides a written document that can be used for reporting and planning but also works with me to expand my connections with other projects and people who share an interest in using data to drive actions and achieve broader impact. Removing isolation contributes new ideas for metrics and can actually make evaluation fun.
Learn more about Bio-Link at www.bio-link.org.