Newsletter - Summer 2015

Newsletter: An Evaluative Approach to Proposal Development

Posted on July 1, 2015

Director of Research, The Evaluation Center at Western Michigan University

A student came into my office to ask me a question. Soon after she launched into her query, I stopped her and said I wasn’t the right person to help because she was asking about a statistical method that I wasn’t up-to-date on. She said, “Oh, you’re a qualitative person?” And I answered, “Not really.” She left looking puzzled. The exchange left me pondering the vexing question, “What am I?” (Now imagine these words echoing off my office walls in a spooky voice for a couple of minutes.) After a few uncomfortable moments, I proudly concluded, “I am a critical thinker!”

Yes, evaluators are trained specialists with an arsenal of tools, strategies, and approaches for data collection, analysis, and reporting. But critical thinking—evaluative thinking—is really what drives good evaluation. In fact, the very definition of critical thinking as “the mental process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and evaluating information to reach an answer or conclusion”1 describes the evaluation process to a T. Applying your critical, evaluative thinking skills in developing your funding proposal will go a long way toward ensuring your submission is competitive.

Make sure all the pieces of your proposal fit together like a snug puzzle. Your proposal needs both a clear statement of the need for your project and a description of the intended outcomes—make sure these match up. If you struggle with the outcome measurement aspect of your evaluation plan, go back to the rationale for your project. If you can observe a need or problem in your context, you should be able to observe the improvements as well. Show the linkages between the need you intend to address, your activities and products, and your expected outcomes.

Be logical. Develop a logic model to portray how your project will translate its resources into outcomes that address a need in your context. Sometimes simply putting things in a graphic format can reveal shortcomings in a project’s logical foundation (for example, when important outcomes can’t be traced back to any activity). The narrative description of your project’s goals, objectives, deliverables, and activities should match the logic model.

Be skeptical. Project planning and logic model development typically happen from an optimistic point of view. (“If we build it, they will come.”) While crafting your work plan, step back from time to time and ask yourself and your colleagues, what obstacles might we face? What could really mess things up? Where are the opportunities for failure? And perhaps most importantly, is this really the best solution to the need we’re trying to address? Identify your plan’s weaknesses and build in safeguards against those threats. I’m all for an optimistic outlook, but proposal reviewers won’t be wearing rose-colored glasses when they critique your proposal and compare it with others written by smart people with great ideas, just like you. Be your own worst critic and your proposal will be stronger for it.

Evaluative thinking doesn’t replace specialized training in evaluation. But even the best evaluator and most rigorous evaluation plan cannot compensate for a disheveled, poorly crafted project plan. Give your proposal a competitive edge by applying your critical thinking skills and infusing an evaluative perspective throughout your project description.

1 dictionary.com

Newsletter: Survey Says Summer 2015

Posted on July 1, 2015

Doctoral Associate, EvaluATE, Western Michigan University

On average, ATE grantees spend 7 percent of their budgets on evaluation. Smaller projects spend smaller proportions of their awards on evaluation than larger projects do. In the figure below, grants are split into quartiles by the size of their annual budgets, and the average budget allocation for evaluation is shown for each quartile.

[Figure: 2015-Summer-Survey. Average percentage of budget allocated to evaluation, by annual budget quartile.]

For more ATE survey findings, visit www.evalu-ate.org/annual_survey.

Newsletter: Evaluation Plan

Posted on July 1, 2015

An evaluation plan is “a written document describing the overall approach or design that will be used to guide an evaluation. It includes what will be done, how it will be done, who will do it, when it will be done, and why the evaluation is being conducted.”1 Two versions of the evaluation plan are needed: a brief, mostly conceptual overview for use in the proposal and an expanded plan that guides the evaluation once you are funded.

Both versions should describe the evaluation’s scope and focus, data collection plan, and deliverables. The main purpose of the proposal plan is to show reviewers that you have a clear plan, that the plan is appropriate for the project, and that you have the capacity to conduct the evaluation. The expanded plan, which should be the first deliverable you receive from your evaluator after your project starts, serves as a guide for implementing and managing the evaluation. As such, it should include concrete details about methods, analyses, deliverables, and timelines. It should reflect any changes to the project negotiated with NSF during the award process and be updated as necessary throughout the project’s lifespan.

The Evaluation Design Checklist (http://bit.ly/eval-design) and the Evaluation Contracts Checklist (http://bit.ly/eval-contracts) identify numerous issues that both PIs and evaluators should think through when developing evaluation plans and contracts.

1 EPA Program Evaluation Glossary (http://bit.ly/epa-glossary)

For more evaluation terminology, get the Evaluation Glossary App from the App Store or Google Play.

Newsletter: What should I do if my college’s procurement office won’t let me name an evaluator in my proposal?

Posted on July 1, 2015

Director of Research, The Evaluation Center at Western Michigan University

DIY Evaluation Planning

It is generally considered best practice to identify your intended external evaluator by name in an ATE proposal and to work with him or her to write the evaluation section. In some cases, college procurement policies may be at odds with this long-standing practice (e.g., see Jacqueline Rearick’s blog post on this topic at http://bit.ly/rearick). If you have to proceed with evaluation planning without an external evaluator’s involvement, here are some tips for DIY (do-it-yourself) evaluation planning:

Develop a project logic model that specifies your project’s activities, outputs (products), and outcomes. Yes, you can do this! The task of logic model development often falls to an evaluator, but it is really just project planning, and the resulting model provides a great foundation for framing your evaluation plan. Try out our ATE Logic Model Template (http://bit.ly/ate-logic).

Specify the focus of the evaluation by formulating evaluation questions. These should be clearly tied to what is in the logic model. Here are some generic evaluation questions: How well did the project reach and engage its intended audience? How satisfied are participants with the project’s activities and products? To what extent did the project bring about changes in participants’ knowledge, skills, attitudes, and/or behaviors? How well did the project meet the needs it was designed to address? How sustainable is the project? Ask questions about both the project’s implementation and its outcomes, and avoid questions that can be answered with a simple yes or no or a single number.

Describe the data collection plan. Identify the data and data sources that will be used to answer each of the evaluation questions. Keep in mind that most evaluation questions will need multiple sources of evidence to be answered adequately, and using both qualitative and quantitative data will strengthen your evidence base. Use our Data Collection Planning Matrix to work out the details of your plan (see the Data Collection Planning Matrix article in this issue).

Describe the analytical and interpretive procedures to be used for making sense of the evaluation data. For DIY evaluation plans, keep it simple. Most project evaluations (not including research projects) rely mainly on basic descriptive statistics (e.g., percentages, means, aggregate numbers) for analysis. As appropriate, compare data over time, by site, by audience type, and/or against performance targets to aid in interpretation (a brief illustrative sketch follows at the end of these tips).

Identify the main evaluation deliverables. These are the things the evaluation effort specifically (not the overall project) will produce. Typical deliverables include a detailed evaluation plan (i.e., an expanded version of the plan included in the proposal that is developed after the project is funded), data collection instruments, and evaluation reports. NSF also wants to see how the project will use the evaluation findings, conclusions, and recommendations to inform and improve ongoing project work.

Include references to the evaluation literature. At a minimum, consult and reference the NSF User Friendly Handbook for Project Evaluation (http://bit.ly/nsf-evalguide) and the Program Evaluation Standards (http://bit.ly/jc-pes).

Include a line item in your budget for evaluation. The average allocation for evaluation among ATE projects is 7 percent (see Survey Says in this issue).

Finally, if you’re including a DIY evaluation plan in your proposal, cite the policy that prevents you from identifying and working with a particular evaluator at the proposal stage. Make it absolutely clear to reviewers why you have not engaged an external evaluator and what steps you will take to procure one once an award is made.
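To make the “keep it simple” analysis step above more concrete, here is a minimal sketch in Python using made-up survey responses. The data, the variable names, and the 80 percent performance target are all hypothetical, invented only to illustrate the kind of descriptive statistics and comparisons a DIY plan might name.

# Illustration only: basic descriptive statistics of the kind a DIY
# evaluation plan might specify. All data and the target are hypothetical.
from statistics import mean

# Hypothetical post-workshop survey responses
responses = [
    {"site": "Campus A", "satisfaction": 4, "knowledge_gain": True},
    {"site": "Campus A", "satisfaction": 5, "knowledge_gain": True},
    {"site": "Campus B", "satisfaction": 3, "knowledge_gain": False},
    {"site": "Campus B", "satisfaction": 4, "knowledge_gain": True},
]

# A mean and a percentage: the aggregate numbers most project evaluations rely on
avg_satisfaction = mean(r["satisfaction"] for r in responses)
pct_gain = 100 * sum(r["knowledge_gain"] for r in responses) / len(responses)

# Compare against a (hypothetical) performance target to aid interpretation
target = 80  # percent of participants expected to report knowledge gains
print(f"Average satisfaction (1-5): {avg_satisfaction:.1f}")
print(f"Reporting knowledge gains: {pct_gain:.0f}% (target: {target}%)")

The same tallies could just as easily be produced in a spreadsheet; the point is that the analysis section of a DIY plan can simply name a few statistics and comparisons like these rather than promise elaborate methods.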

Newsletter: Data Collection Planning Matrix

Posted on July 1, 2015

The part of your proposal’s evaluation plan that reviewers will probably scrutinize most closely is the data collection plan. Given that the evaluation section of a proposal is typically just 1-2 pages, you have minimal space to communicate a clear plan for gathering evidence of your project’s quality and impact. An efficient way to convey this information is in a matrix format. To help with this task, we’ve created a Data Collection Planning Matrix, available at bit.ly/data-matrix.

This tool prompts the user to specify the evaluation questions that will serve as the foundation for the evaluation; what indicators1 will be used to answer each evaluation question; how data for each indicator will be collected, from what sources, by whom, and when; and how the data will be analyzed. (The document includes definitions for each of these components to support shared understandings among members of the proposal development team.) Including details about data collection in your proposal shows reviewers that you have been thoughtful and strategic in determining how you will build a body of evidence about the effectiveness and quality of your NSF-funded work. The value of putting this information in a matrix format is that it ensures you have a clear plan for gathering data that will enable you to fully address all the evaluation questions and, conversely, that all the data you plan to collect will serve a specific purpose.
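As a rough illustration of how these components fit together, the sketch below represents a single hypothetical matrix row as structured data in Python. The field names paraphrase the components described above, and the entries are invented examples, not content from the actual template.

# Illustration only: one hypothetical row of a data collection planning matrix,
# expressed as structured data. Field names paraphrase the components described
# in the newsletter; the values are invented examples.
from dataclasses import dataclass

@dataclass
class MatrixRow:
    evaluation_question: str   # what the evaluation needs to answer
    indicator: str             # observable evidence used to answer it
    collection_method: str     # how the data will be collected
    data_source: str           # from what sources
    responsible: str           # by whom
    timing: str                # when
    analysis: str              # how the data will be analyzed

example_row = MatrixRow(
    evaluation_question="To what extent did participants gain technical skills?",
    indicator="Change in scores on a skills assessment",
    collection_method="Pre/post assessment",
    data_source="Workshop participants",
    responsible="Project staff, reviewed by the evaluator",
    timing="First and last workshop sessions",
    analysis="Compare mean pre/post scores against a performance target",
)
print(example_row.evaluation_question)

In the proposal itself the matrix is simply a table; a structured sketch like this is only a quick way to check that every evaluation question has an indicator, method, source, responsible party, timing, and analysis attached to it.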

A good rule of thumb is to develop at least one overarching evaluation question for each main element of a project logic model (i.e., activities, outputs, and short-, mid-, and long-term outcomes). Although logic models are not required for ATE program proposals, they are an efficient way to convey how your project’s activities and products will lead to intended outcomes. Whether you use a logic model or not, the evaluation’s data collection plan should align clearly with your project’s activities and goals. If you are interested in developing a logic model for your project, see our ATE Logic Model Template at bit.ly/ate-logic.

If you have questions about the data collection planning matrix or the logic model template, or suggestions for improving them, let us know by emailing info@evalu-ate.org.

1 For more on indicators and how to select ones that will serve your evaluation well, see Goldie MacDonald’s checklist, Criteria for Selection of High-Performing Indicators, available at bit.ly/indicator-eval.

Newsletter: Project Spotlight: E-MATE

Posted on July 1, 2015

Professor and chair of engineering and technology at Brookdale Community College, E-MATE

A conversation with Mike Qaissaunee, E-MATE’s principal investigator

Q: How did you work with your evaluator during proposal development?

A: As PI and an experienced evaluator, I wrote the initial plan and selected a longtime colleague to act as external evaluator. The proposal was funded with the understanding that we would select a new evaluator, as panelists felt the initial evaluator was too close to me (the PI) and would have difficulty being objective. We selected a new evaluator with significant experience with NSF, ATE, and community colleges. Through a number of calls and meetings, we discussed the proposal, detailed our goals and objectives, answered a number of really good questions, and identified the key things we hoped to learn. Our new evaluator was able to build on my original evaluation plan, developing a rich evaluation framework and logic model.

Q: What advice do you have for communicating an evaluation plan in a proposal?

A: As proposals are fairly short, it’s important to keep the evaluation plan brief and specific to the project, rather than boilerplate. If possible, communicate information in a table and/or graphic. Evaluation metrics and tasks can also be included in tables detailing timelines, activities, and goals and objectives.

Q: How did you integrate evaluation results from a prior project into your proposal?

A: I’ve found that the most powerful approach to including evaluation results in a proposal is a judicious mix of qualitative and quantitative data. Quantitative data demonstrates past success and capacity for future work, while qualitative results bring the proposal to life and engage readers. Evaluation results can also be used to highlight areas with limited success and new areas for investigation. I don’t shy away from addressing such evaluation data, as it demonstrates that the project team is learning and adapting.