As an evaluator, I am often asked to work on evaluation plans for National Science Foundation proposals. Because I believe evaluators and clients should work together, I start the conversation by asking about project goals and outcomes and then suggest that we develop a project logic model together. Developing a logic model helps create a unified vision for the project and promotes common understanding. Logic models come in many types and formats, and while no single model is “best,” a logic model usually has the following key elements: inputs, activities, outputs, outcomes, and impacts.
Reasons to develop logic models:
- **Elevator speech:** A logic model is a visual “elevator speech” that gives proposal reviewers a quick overview of the project.
- **It is logical!** It aligns the resources, activities, deliverables (outputs), and short- and medium-term outcomes with impacts (long-term outcomes). Clients have often told me that this helped them organize their proposals.
- **Focus:** I love logic models because they help me, the evaluator, focus my work on critical program elements. When the project team (client) and the evaluator develop the logic model collaboratively, they share an understanding of how the project will work and what it is designed to achieve.
- **Frame the evaluation plan:** Now comes the bonus! A logic model forms the basis of an outcomes-based evaluation plan. I start the plan by working with my client to develop indicators for each outcome in the logic model. Indicators are the criteria used to measure the extent to which projected outcomes are being achieved. Effective indicators align directly with outcomes and are clear and measurable. Measurable does not always mean quantifiable, however: indicators can be qualitative and descriptive, such as “Youth will describe that they ….” Note that this example states how you will determine whether the outcome has been met (youth self-report). You will likely have more than one indicator for each outcome. An indicator answers questions like these: How will you know it when you see it? What does it look like when an outcome is met? What is the evidence?
- **Guide the evaluation questions:** After the indicators are developed, we decide on the guiding evaluation questions (what we will be evaluating), and I get to work on the rest of the evaluation plan: an overall design, then methods, measures, sampling, analysis, reporting, and dissemination (potential topics for future blog posts). Once the project is funded, we refine the evaluation plan, develop a project/evaluation timeline, and determine the ongoing evaluation management and communication. Then we are ready for action.
Resources:

1. W.K. Kellogg Foundation Logic Model Development Guide
2. W.K. Kellogg Foundation Evaluation Handbook (also available in Spanish)
3. EvaluATE’s Logic Model Template for ATE Projects and Centers