It is generally considered best practice to identify your intended external evaluator by name in an ATE proposal and to work with them to write the evaluation section. In some cases, college procurement policies may be at odds with this long-standing practice (e.g., see Jacqueline Rearick’s blog post on this topic at http://bit.ly/rearick). If you have to plan your evaluation without an external evaluator’s involvement, here are some tips for DIY (do-it-yourself) evaluation planning:
Develop a project logic model that specifies your project’s activities, outputs (products), and outcomes. Yes, you can do this! Logic model development often falls to an evaluator, but it is really just project planning, and a logic model provides a great foundation for framing your evaluation plan. Try out our ATE Logic Model Template (http://bit.ly/ate-logic).
Specify the focus of the evaluation by formulating evaluation questions. These should be clearly tied to what is in the logic model. Here are some generic evaluation questions: How well did the project reach and engage its intended audience? How satisfied are participants with the project’s activities and products? To what extent did the project bring about changes in participants’ knowledge, skills, attitudes, and/or behaviors? How well did the project meet the needs it was designed to address? How sustainable is the project? Ask questions about both the project’s implementation and its outcomes, and avoid questions that can be answered with a yes/no or a single number.
Describe the data collection plan. Identify the data and data sources that will be used to answer each evaluation question. Keep in mind that most evaluation questions require multiple sources of evidence to answer adequately, and using both qualitative and quantitative data will strengthen your evidence base. Use our Data Collection Planning Matrix to work out the details of your plan (see the Data Collection Planning Matrix on p. 3).
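If it helps to think through the matrix concretely, the question-to-evidence mapping can be sketched as a simple data structure. The evaluation questions and data sources below are hypothetical examples, not prescribed content:

```python
# A minimal sketch of a data collection planning matrix: each evaluation
# question maps to the sources of evidence that will answer it.
# Questions and sources here are illustrative placeholders.
planning_matrix = {
    "How satisfied are participants with the project's activities?": [
        "post-workshop survey (quantitative ratings)",
        "focus group interviews (qualitative)",
    ],
    "To what extent did participants' skills change?": [
        "pre/post skills assessment",
        "instructor observations",
    ],
}

# Flag any question backed by only one source of evidence, since most
# questions need multiple sources to be answered adequately.
single_source = [
    question
    for question, sources in planning_matrix.items()
    if len(sources) < 2
]
```

Checking for single-source questions is a quick way to spot weak points in the plan before reviewers do.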
Describe the analytical and interpretive procedures that will be used to make sense of the evaluation data. For DIY evaluation plans, keep it simple: most project evaluations (not including research projects) rely mainly on basic descriptive statistics (e.g., percentages, means, aggregate counts) for analysis. As appropriate, compare data over time, by site, by audience type, and/or against performance targets to aid interpretation.
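The descriptive analysis described above can be this simple in practice. The sketch below uses made-up satisfaction ratings (1–5 scale) from two hypothetical sites and a hypothetical performance target:

```python
# A minimal sketch of descriptive analysis for DIY evaluation data.
# Ratings, site names, and the target are illustrative assumptions.
from statistics import mean

ratings_by_site = {
    "Site A": [4, 5, 3, 4, 5],
    "Site B": [3, 4, 4, 2, 3],
}
target = 4.0  # hypothetical performance target (mean rating)

results = {}
for site, ratings in ratings_by_site.items():
    results[site] = {
        # Basic descriptives: the mean rating and the percentage of
        # participants rating 4 or higher ("satisfied").
        "mean": mean(ratings),
        "pct_satisfied": 100 * sum(r >= 4 for r in ratings) / len(ratings),
        # Compare against the performance target to aid interpretation.
        "meets_target": mean(ratings) >= target,
    }
```

The same pattern extends naturally to comparisons over time or by audience type: group the data, compute the same descriptives, and compare.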
Identify the main evaluation deliverables. These are the products of the evaluation effort specifically, not of the overall project. Typical deliverables include a detailed evaluation plan (i.e., an expanded version of the proposal’s plan, developed after the project is funded), data collection instruments, and evaluation reports. NSF also wants to see how the project will use the evaluation’s findings, conclusions, and recommendations to inform and improve ongoing project work.
Include references to the evaluation literature. At a minimum, consult and reference the NSF User-Friendly Handbook for Project Evaluation (http://bit.ly/nsf-evalguide) and the Program Evaluation Standards (http://bit.ly/jc-pes).
Include a line item in your budget for evaluation. The average allocation among ATE projects for evaluation is 7 percent (see Survey Says on p. 1).
Finally, if you’re including a DIY evaluation plan in your proposal, cite the policy that prohibits you from identifying and working with a particular evaluator at the proposal stage. Make it absolutely clear to reviewers why you have not engaged an external evaluator and what steps you will take to procure one once an award is made.