Are you interested in creating a project fact sheet for your project but don’t know where to start? This document is a guide to help you create your own project fact sheet.
I often use a quick set of questions that Dr. Gerhard Salinger developed in response to the question, “How do you develop an excellent proposal?” Question 4 is especially relevant to the issue of project/center evaluation:
- What is the need that will be addressed?
- How do you specifically plan to address this need?
- Does your project team have the necessary expertise to carry out your plan?
- How will you know if you succeed?
- How will you tell other people about the results and outcomes?
Question 4 addresses the evaluation activities of a project or center, and I hope you consider it essential to conducting an effective and successful project. Formative assessment guides you and lets you know whether your strategy is working; it gives you the information to shift strategies if needed. Summative assessment then provides you and others with information on whether the overall project goals and objectives were met. Evaluation adds the concept of value to your project. For example, evaluation activities might tell you how participants perceived the value of a workshop, and follow-on evaluation activities might tell you how many faculty used what they learned in a course. A final step might be to evaluate the impact on student learning in the course after the change was made.
As a program officer, I can quickly scan the project facts (e.g., how many of this or that), but I tend to spend much more time on the evaluation data because it provides the value component of your project activities. Let's go back to the faculty professional development workshops. Program officers definitely want to know whether the workshops were held and how many people attended, but it is essential to provide information on the value of the workshops. It's great to know that faculty "liked" the workshop, but the impact on their teaching practices, and on student learning as a result of those changes, matters more. Your annual reports (yes, we do read them carefully) can include the entire evaluation report as an attachment, but it would be very helpful if you, the PI, provided an overview of what you see as your project's added value within the body of the report.
There are several reasons evaluation information is important to NSF program officers. First, each federal dollar that you expend carrying out your project is one that the taxpayers expect both you and the NSF to be accountable for. Second, within the NSF, program portfolios are scrutinized to determine programmatic impact and effectiveness. Third, the ATE program is congressionally mandated and program data and evaluation are often used to respond to congressional questions. Put more concisely, NSF wants to know if the investment in your project/center was a wise one and if value was generated from this investment.
Each year, ATE PIs are asked what type of reports their evaluators provide them with and how they use the information. The majority of ATE PIs receive both oral and written reports from their evaluators.
PIs who receive reports in both oral and written form report higher rates of evaluation use, as shown in the accompanying figure.
You can find more at evalu-ate.org/annual_survey/
How can we make sure evaluation findings are used to improve projects? This is a question on the minds of evaluators, project staff, and funders alike. The Expectations to Change (E2C) process is one answer. E2C is a six-step process through which evaluation stakeholders are guided from establishing performance standards (i.e., “expectations”) to formulating action steps toward desired change. The process can be completed in one or more working sessions with those evaluation stakeholders best positioned to put the findings to use. E2C is designed as a process of self-evaluation for projects, and the role of the evaluator is that of facilitator, teacher, and technical consultant. The six steps of the E2C process are summarized in the table below. While the specific activities used to carry out each step should be tailored to the setting, the suggested activities are based on various implementations of the process to date.
E2C Process Overview
| Step | Purpose | Suggested Activities |
|------|---------|----------------------|
| 1. Set Expectations | Establish standards to serve as a frame of reference for determining whether the findings are “good” or “bad” | Instruction, worksheets, and a consensus-building process |
| 2. Review Findings | Examine the findings, compare them to the established expectations, and form an initial reaction; celebrate successes | Instruction, individual processing, and round-robin group discussion |
| 3. Identify Key Findings | Identify the findings that fall below expectations and require immediate attention | Ranking process and facilitated group discussion |
| 4. Interpret Key Findings | Generate interpretations of what the key findings mean | Brainstorming activity such as “Rotating Flip Charts” |
| 5. Make Recommendations | Generate recommendations for change based on interpretations of the findings | Brainstorming activity such as “Rotating Flip Charts” |
| 6. Plan for Change | Formulate an action plan for implementing the recommendations | Planning activities that enlist all stakeholders and result in concrete next steps, such as a sticky wall and small-group work |
To find out whether the E2C process does in fact encourage projects to use evaluation for improvement, we asked a group of staff and administrators from a nonprofit human-service organization to participate in an online survey one year after their E2C workshop. The findings revealed an increase in staff knowledge and awareness of clients' experiences receiving services, as well as specific changes to the way services were delivered. The findings also showed that participation in the E2C workshop fostered the service providers' appreciation for, increased their knowledge of, and enhanced their ability to engage in evaluation activities.
Based on these findings and our experiences with the process to date, we believe the E2C process facilitates self-evaluation for the purpose of project improvement by providing program stakeholders with the opportunity to systematically compare their evaluation results to agreed-upon performance standards, celebrate successes, and address weaknesses.
E2C was co-created with Nkiru Nnawulezi, M.A., and Lela Vandenberg, Ph.D., Michigan State University. For more information, contact Adrienne Adams at email@example.com.
All evaluators want their evaluations to be useful and used. Evaluation clients need evaluation to bring value to their work to make the investment worthwhile. What does evaluation use look like in your context? It should be more than accountability reporting. Here are common types of evaluation use as defined in the evaluation literature:
Instrumental Use is using evaluation for decision-making purposes. These decisions are most commonly focused on improvement, such as changing marketing strategies or modifying curriculum. They can also be more summative in nature, such as deciding to continue, expand, or reinvent a project.
Process Use happens when involvement in an evaluation leads to learning or different ways of thinking or working.
Conceptual Use is evaluation use for knowledge. For example, a college dean might use an evaluation of her academic programs to further understand an issue related to another aspect of STEM education. This evaluation influences her thinking, but does not trigger any specific action.
Symbolic Use is the use of evaluation findings to advance an existing agenda. Examples include using evaluation to market an ATE program or to support an application for further funding.