Lori Wingate

Executive Director, The Evaluation Center at Western Michigan University

Lori has a Ph.D. in evaluation and more than 20 years of experience in the field of program evaluation. She directs EvaluATE and leads a variety of evaluation projects at WMU focused on STEM education, health, and higher education initiatives. Dr. Wingate has led numerous webinars and workshops on evaluation in a variety of contexts, including CDC University and the American Evaluation Association Summer Evaluation Institute. She is an associate member of the graduate faculty at WMU.


Blog: Beyond Reporting: Getting More Value out of Your Evaluation*

Posted on April 15, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If you’ve been a part of the ATE community for any time at all, you probably already know that ATE projects are required to have their work formally evaluated. NSF program officers want the projects they oversee to include evaluation results in their annual reports.

What may be less well known is that they also want to hear how projects are making use of their evaluations to learn from and improve their NSF-funded work. Did your evaluation results show that an activity you thought would help you reach your project goals turned out to be a flop? That may be disappointing, but it’s also extremely valuable information.   

There is more to using evaluation results than including findings in your annual reports to NSF or even following your evaluators’ recommendations. Project team members should take time to delve into the evaluation data on their own. For example: 

  • Read every comment in your qualitative data. Although you should avoid getting caught up in the less favorable remarks, they can be a valuable source of information about ways you might improve your work.

  • Take time to consider the remarks that surprise you—they may reveal a blind spot that needs to be investigated.  
  • Don’t forget to pat yourself on the back for the stuff you’re already getting right.  

It’s important to find out whether a project is effective overall, but it can also be very revealing to disaggregate data by participant characteristics such as gender, age, discipline, enrollment status, or other factors. If you find out that some groups are getting more out of their experience with the project than others, you have an opportunity to adjust what you’re doing to better meet your intended audience’s needs. 
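If your participant data live in a spreadsheet, disaggregation doesn’t require anything fancy. The sketch below is purely illustrative: it assumes a hypothetical file of survey responses with made-up column names (gender, enrollment_status, outcome_score); adapt it to whatever your evaluation actually collects.

```python
# Illustrative sketch only: assumes a hypothetical participant-level file with
# columns named "gender", "enrollment_status", and "outcome_score".
import pandas as pd

responses = pd.read_csv("participant_survey.csv")  # hypothetical file name

# Overall result
print("Overall mean outcome:", round(responses["outcome_score"].mean(), 2))

# Disaggregated results: look for groups getting more (or less) out of the project
for characteristic in ["gender", "enrollment_status"]:
    summary = (
        responses.groupby(characteristic)["outcome_score"]
        .agg(["count", "mean"])
        .round(2)
    )
    print(f"\nOutcome by {characteristic}:\n{summary}")
```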

The single most important thing you can do to maximize an evaluation’s potential to bring value to your project is to make time to understand and use the results. That means:  

  • Meet with your evaluator to discuss the results.

  • Review results with your project colleagues and advisors. 
  • Make decisions about how to move forward based on the evaluation results.
  • Record those decisions, along with what happens after you take action. That way, you can include this information in your annual reports to NSF. 

ATE grantees are awarded about $66 million annually by the federal government. We have an ethical obligation to be self-critical, use all available information sources to assess progress and opportunities for improvement, and use project evaluations to help us achieve excellence in all aspects of our work.  

 

*This blog is based on an article from an EvaluATE newsletter published in October 2014. 

Evaluator Biographical Sketch Template for National Science Foundation (NSF) Proposals

Posted on April 7, 2020

This template was created by EvaluATE (evalu-ate.org). It is based on the National Science Foundation’s guidelines for preparing biographical sketches for senior project personnel. The information about what evaluators should include in the Products and Synergistic Activities sections is EvaluATE’s suggestion, not an NSF requirement. The biographical sketch must not exceed two pages.


Evaluator’s Name

PROFESSIONAL PREPARATION

(List academic degrees and any pertinent certificates.)

Undergraduate Institution Location Major Degree Year
Graduate Institution Location Major Degree Year
Postdoctoral Institution Location Area Years
Certificate-Granting Institution Location Area Certificate Year

APPOINTMENTS

(List employment history in reverse chronological order.)

Dates Job Title Employer

PRODUCTS

(List up to ten products that demonstrate your experience and competence in evaluation and knowledge of the proposed project’s discipline. Examples may include publications, reports, and evaluation tools. All products must be citable and accessible. Include full reference information, including URL, if available.)

SYNERGISTIC ACTIVITIES

(In paragraph form, list up to five examples that demonstrate your expertise in evaluation, especially as it pertains to the proposal. Examples may include ongoing or completed evaluations; development or adaptation of evaluation tools; leadership roles in the evaluation field; and invited lectures, presentations, or workshops on evaluation. If you have prior experience working in the proposal’s discipline, describe that as well.)

Downloads

Template: Evaluator Biographical Sketch

ATE Evaluation Primer

Posted on April 6, 2020

Updated September 2013.

Introduction

Evaluation is the systematic determination of the merit, worth, or significance of something. (The “something” of interest here is an ATE project or center but, for simplicity, we’ll just use the term “project” in the rest of this document). “Merit” is the inherent quality of a project, basically how “good” it is. “Worth” refers to how well the project meets a need in its present context in relation to its cost and value, basically how “worthwhile” it is. “Significance” refers to how important or groundbreaking a project is. Systematic evaluations are guided by questions and involve the collection of data and analysis of these data to reach conclusions about a project’s performance on one or more of these dimensions.

ATE evaluations should be keyed to grantees’ information needs and NSF accountability requirements. For maximum utility, ATE evaluations should describe and assess project processes (content and implementation) and outcomes (what happened as a result of the project) in such a way that project personnel can use the information to improve their work and be accountable for their grant funding.

What qualifies someone to conduct an ATE evaluation?

Evaluators have diverse academic and professional backgrounds, and few actually have a degree in “evaluation” per se. Although not a precise measure, the presence of the terms “evaluation” or “assessment” in a resume’s descriptions of academic preparation or work history is a useful indicator. Where such indicators are absent, more investigation is needed to ensure the person has the requisite knowledge and skills to be a competent evaluator.

Previous evaluation work should be readily evident in an experienced evaluator’s resume. Subject matter experts and research methods experts have valuable knowledge and skills that may complement an evaluator’s competencies. However, if they do not also have practical evaluation experience, their evaluation skills may be short of what is required for a comprehensive, sound, and useful project evaluation.

It is always a good idea to ask for references and work samples before contracting with an evaluator to ensure there is a good match between the needs of the project and what the evaluator can deliver.

Where can ATE grantees find qualified evaluators?

A good place to start is the American Evaluation Association’s Directory of Evaluators, available at www.eval.org. It is searchable by keyword and geographic location. AEA also has a list of graduate programs in evaluation, which also is a means for locating university faculty and graduate students with evaluation expertise. Grantees can check with local universities to find out if they have research centers or institutes that engage in evaluation. Some university websites have listings of areas of expertise for individual faculty members.

EvaluATE maintains a directory of evaluators with experience in STEM education and community college contexts, available from evalu-ate.org/community/evaluator_directory/. EvaluATE does not recommend specific individuals or firms for evaluation work.

Is it OK for grantees to use internal evaluators?

Yes (with caveats), to supplement but not replace your external evaluator. The project must allocate some funds to support an external evaluation (as noted in the ATE program solicitation). Internal evaluation and external evaluation may be considered as existing on a continuum. At the internal extreme, evaluative activities are carried out by project staff who draw their salaries from the personnel portion of the grant budget and are supervised by other project personnel. At the other extreme, someone outside the host institution with no prior affiliation with the funded project is hired to conduct the evaluation under a separate contract or subcontract. However, there is much “gray area” between these extremes, such as when someone from within the host institution, but not on the project’s staff, is hired to conduct the evaluation. Whether an evaluator is considered internal or external, the involved parties should address potential conflicts of interest directly and take steps to minimize their influence on the evaluation.

That said, there are ways to use internal evaluation to maximize the use of evaluation resources while maintaining the credibility of the evaluation. For example, an external evaluation consultant could be hired to guide staff in the development of the initial evaluation design, with regular check-ins to assess how the evaluation is going and give advice to the internal evaluator as needed. This evaluation coach role can be especially helpful in designing data collection strategies and instruments, developing criteria for assessing project success, and ensuring that data are properly analyzed and reported. This arrangement is appropriate for projects that have personnel available to spend time on evaluation tasks, but who have minimal experience with evaluation.

Another option is to hire an external metaevaluator—someone who evaluates the internal evaluation. A person in this type of role would play a less direct role in the design of the evaluation, but would provide feedback on the quality of the evaluation’s design, implementation, instruments, and reports. This arrangement may be appropriate for projects that have strong internal capacity for evaluation. If an evaluation is to be carried out mostly internally, it would be especially important to get this metaevaluator’s “stamp of approval” on the plan before it is implemented.

Because of perceptions that conflicts of interest are inherent in internal evaluation, internal evaluation activities should be especially transparent, particularly with regard to how data were collected, analyzed, and interpreted, to enhance the credibility of the evaluation.

How much of a grant’s budget should be devoted to evaluation?

The general guideline is that 7 to 10 percent of a project’s direct costs should be allocated for evaluation. Current expenditures on evaluation among ATE grantees average around 8 percent. Prospective evaluation clients often want to know simply how much an evaluation costs or how much an evaluator should be paid for his or her time. However, these costs depend on the scale of the evaluation and the experience and expertise of the evaluator. The costs should be tied to specific deliverables and activities appropriate to the scope and goals of your project.
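As a rough worked example (the dollar figures below are hypothetical, not a rule), the guideline translates into a simple range calculation that can then be refined against specific evaluation deliverables and activities:

```python
# Hypothetical example of applying the 7-10 percent guideline
direct_costs = 600_000  # total direct costs over the grant period (made-up figure)

low_estimate = direct_costs * 0.07   # 7 percent -> $42,000
high_estimate = direct_costs * 0.10  # 10 percent -> $60,000

print(f"Suggested evaluation budget range: ${low_estimate:,.0f} to ${high_estimate:,.0f}")
```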

What do ATE grantees and NSF program officers expect to see in grant-level evaluation reports?

At a minimum, an evaluation report should describe what need the project is addressing and how the project is addressing that need, what the project accomplished, how “good” the results are, and how the conclusions were reached (what and how data were collected, how they were analyzed, what criteria were used to interpret the results). Evaluation clients often appreciate the inclusion of actionable recommendations, but recommendations should be based on the evaluation results, not only on the evaluator’s expert opinion. Grantees should expect more than a detailed description of their project activities and a regurgitation of data they routinely collect. Evaluations should yield information that adds to the project staff’s understanding of what they are doing, how well they are doing it, and what could be done to improve the project’s effectiveness.

Why do grantees have to complete the annual survey conducted by Western Michigan University in addition to submitting annual reports and having grant-level evaluations?

The annual survey data are used by NSF in its reports to Congress and other federal agencies to justify the program and its continuation or expansion. Although there is some overlap between the information required for the annual survey and ATE annual reports, the survey is much more specific about ATE activities and outcomes. Moreover, there is no way to aggregate information from annual reports submitted via Research.gov into a report about the overall ATE program. The annual survey takes place each year in February and March so that grantees can use the information they submit to the survey in their annual reports, which are due in April. The survey questions also may be used to guide data collection for the project evaluation.

What other evaluation tasks are required for NSF ATE grants?

All ATE grantees are expected to complete an annual survey conducted by EvaluATE, which takes place in February and March. All grantees must submit an annual project report to NSF via Research.gov, and ATE grantees are asked to upload their annual evaluation reports with that submission. For ATE centers, an additional evaluative report is developed annually by the center’s National Visiting Committee (NVC) and submitted to the NSF program officer. The NVC report provides an industry-focused perspective on the success of the project and often makes recommendations on how to better accomplish its goals.

Where can I learn more about evaluation?

The following resources are especially helpful for orienting evaluation clients and consumers to what they can and should expect from professional evaluation services; they provide a practical, nontechnical orientation to evaluation and matters related to the professional conduct of  evaluators:

NSF 2010 User-Friendly Handbook for Project Evaluation
American Evaluation Association’s Guiding Principles
Program Evaluation Standards
Competencies for Canadian Evaluation Practice

Download

Evaluation Primer

 

Checklist: Results from Prior NSF Support

Posted on April 6, 2020

If a PI or co-PI for an NSF proposal has received NSF funding in the past five years, information on the results of that funding must be included in the proposal, whether it relates to the current proposal or not. This section of the proposal is called Results from Prior NSF Support; details about what should be included are provided in the NSF Grant Proposal Guide. The following is a synopsis of NSF’s requirements and EvaluATE’s suggestions for this section of an ATE proposal.

REQUIREMENTS

  • Limit this section to five pages.
  • Make it the first section of your proposal. If the proposal is for the renewal of an ATE center, it may be uploaded as a supplementary document rather than presented in the 15-page project description.
  • Describe research and development products and how they have been made available to others.
  • Clearly indicate the prior project’s
    • Title
    • NSF award number
    • Period of support
  • Present results using these exact, distinct headings:
    • Intellectual Merit
    • Broader Impacts
  • Provide complete bibliographic citations for all publications developed with NSF support, either in the narrative or in the separate references document. If there were no publications, state “No publications were produced under this award.”

SUGGESTIONS

  • Provide a brief, factual account of what the project did and created and who was engaged. A list of activities or deliverables is not sufficient evidence of intellectual merit or broader impacts, but it is important for reviewers to understand the nature and scope of your prior work.
  • Present as much hard evidence as possible in describing the project’s intellectual merit and broader impacts.
  • Be forthright about what didn’t work and lessons learned.
  • Describe how the current proposal builds on the prior project’s results.
  • Describe which aspects of previously funded work are being sustained without NSF support.

Downloads

NSF Prior Support Checklist (Fillable PDF)

Checklist: Communication Plan for ATE Principal Investigators and Evaluators

Posted on March 31, 2020 in Checklist

Creating a clear communication plan at the beginning of an evaluation can help project personnel and evaluators avoid confusion, misunderstandings, or uncertainty. The communication plan should be an agreement between the project’s principal investigator and the evaluator, and it should be followed by members of their respective teams. This checklist highlights the decisions that need to be made when developing a clear communication plan.

  • Designate one primary contact person from the project staff and one from the evaluation team. Clearly identify who should be contacted regarding questions, changes, or general updates about the evaluation. The project staff person should be someone who has authority to make decisions or approve small changes that might occur during the evaluation, such as the principal investigator or project manager.
  • Set up recurring meetings to discuss evaluation matters. Decide on the meeting frequency and platform for the project staff and evaluation team to discuss updates on the evaluation. These regular meetings should occur throughout the life of a project.
    • Frequency — At minimum, plan to meet monthly. Increase the frequency as needed to maintain momentum and meet key deadlines.
    • Platform — Real-time interaction via phone calls, web meetings, or in-person meetings will help ensure those involved give adequate attention to the matters being discussed. Do not rely on email or other asynchronous communication platforms.
    • Agenda — Tailor the agendas to reflect the aspects of the evaluation that need attention. In general, the evaluator should provide a status update, identify challenges, and explain what the project staff can do to facilitate the evaluation. The project staff should share important changes or challenges in the project, such as delays in timelines or project staff turnover. Conversations should close with clear action items and deadlines.
  • Agree on a process for reviewing and finalizing data collection instruments and procedures, and evaluation reports. Determine the project staff’s role in providing input on instruments (such as questionnaires or interview protocols), the mechanisms by which data will be collected, and reports. Establish a turnaround time for feedback, to avoid delays in implementing the evaluation.
  • Clarify who is responsible for disseminating reports. As a rule of thumb, responsibility and authority for the distribution of evaluation reports lie with the project’s principal investigator. Make it clear whether the evaluator may use the reports for their own purposes and under what conditions.

Downloads

Communication Checklist (PDF)

 

Blog: Strategies and Sources for Interpreting Evaluation Findings to Reach Conclusions

Posted on March 18, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Imagine: You’re an evaluator who has compiled lots of data about an ATE project. You’re preparing to present the results to stakeholders. You have many beautiful charts and compelling stories to share.  

You’re confident you’ll be able to answer the stakeholders’ questions about data collection and analysis. But you get queasy at the prospect of questions like: What does this mean? Is this good? Has our investment been worthwhile?

It seems like the project is on track and they’re doing good work, but you know your hunch is not a sound basis for a conclusion. You know you should have planned ahead for how findings would be interpreted in order to reach conclusions, and you regret that the task got lost in the shuffle.  

What is a sound basis for interpreting findings to make an evaluative conclusion?  

Interpretation requires comparison. Consider how you make judgments in daily life: If you declare, “this pizza is just so-so,” you are comparing that pizza with other pizza you’ve had, or maybe with your imagined ideal pizza. When you judge something, you’re comparing that thing with something else, even if you’re not fully conscious of that comparison.

The same thing happens in program evaluation, and it’s essential for evaluators to be fully conscious and transparent about what they’re comparing evaluative evidence against. When evaluators don’t make their comparison points explicit, their evaluative conclusions may seem arbitrary, and stakeholders may dismiss them as unfounded.

Here are some sources and strategies for comparisons to inform interpretation. Evaluators can use these to make clear and reasoned conclusions about a project’s performance:  

Performance Targets: Review the project proposal to see if any performance targets were established (e.g., “The number of nanotechnology certificates awarded will increase by 10 percent per year”). When you compare the project’s results with those targets, keep in mind that the original targets may have been either under- or overambitious. Talk with stakeholders to see if those original targets are appropriate or if they need adjustment. Performance targets usually follow the SMART structure (specific, measurable, achievable, relevant, and time-bound).
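As a quick illustration (all numbers are made up), a “10 percent per year” target can be turned into explicit year-by-year benchmarks, here interpreted as compounding growth from a baseline, and compared with actual counts:

```python
# Hypothetical comparison of actual certificate counts with a
# "10 percent increase per year" target, interpreted as compounding growth.
baseline = 50  # certificates awarded in the year before the project (made-up)
actuals = {2021: 53, 2022: 61, 2023: 70}  # made-up actual counts

for i, (year, actual) in enumerate(sorted(actuals.items()), start=1):
    target = baseline * 1.10 ** i
    status = "met" if actual >= target else "not met"
    print(f"{year}: target {target:.0f}, actual {actual} -> target {status}")
```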

Project Goals: Goals may be more general than specific performance targets (e.g., “Meet industry demands for qualified CNC technicians”). To make lofty or vague goals more concrete, you can borrow a technique called Goal Attainment Scaling (GAS). GAS was developed to measure individuals’ progress toward desired psychosocial outcomes. The GAS resource from BetterEvaluation will give you a sense of how to use this technique to assess program goal attainment.

Project Logic Model: If the project has a logic model, map your data points onto its components to compare the project’s actual achievements with the planned activities and outcomes expressed in the model. No logic model? Work with project staff to create one using EvaluATE’s logic model template. 

Similar Programs: Look online or ask colleagues to find evaluations of projects that serve similar purposes as the one you are evaluating. Compare the results of those projects’ evaluations to your evaluation results. The comparison can inform your conclusions about relative performance.  

Historical Data: Look for historical project data that you can compare the project’s current performance against. Enrollment numbers and student demographics are common data points for STEM education programs. Find out if baseline data were included in the project’s proposal or can be reconstructed with institutional data. Be sure to capture several years of pre-project data so year-to-year fluctuations can be accounted for. See the practical guidance for this interrupted time series approach to assessing change related to an intervention on the Towards Data Science website. 
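Here is a minimal, made-up sketch of that kind of comparison: fit a linear trend to the pre-project years, project it forward, and see how far observed post-project enrollment departs from the projection. It is only an illustration of the basic idea behind an interrupted time series comparison, not a full analysis.

```python
# Illustrative only: hypothetical enrollment counts before and after a project starts.
import numpy as np

pre_years = np.array([2015, 2016, 2017, 2018, 2019])
pre_enrollment = np.array([88, 92, 90, 95, 97])    # made-up baseline data

post_years = np.array([2020, 2021, 2022])
post_enrollment = np.array([110, 118, 125])        # made-up post-project data

# Fit a linear trend to the pre-project years and project it into the project period
slope, intercept = np.polyfit(pre_years, pre_enrollment, deg=1)
projected = slope * post_years + intercept

for year, observed, expected in zip(post_years, post_enrollment, projected):
    print(f"{year}: observed {observed}, projected from baseline trend {expected:.1f}, "
          f"difference {observed - expected:+.1f}")
```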

Stakeholder Perspectives: Ask stakeholders for their opinions about the status of the project. You can work with stakeholders in person or online by holding a data party to engage them directly in interpreting findings.

 

Whatever sources or strategies you use, it’s critical that you explain your process in your evaluation reports so it is transparent to stakeholders. Clearly documenting the interpretation process will also help you replicate the steps in the future.

Blog: Three Questions to Spur Action from Your Evaluation Report

Posted on March 4, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluators are urged to make their evaluations useful. Project staff are encouraged to use their evaluations. An obvious way to support these aims is for evaluators to develop recommendations based on evidence and for project staff to follow those recommendations (if they agree with them, of course). But not all reports have recommendations, and sometimes recommendations amount to little more than “keep up the good work!” If implications for action are not immediately obvious from an evaluation report, here are three questions that project staff can ask themselves to spark thinking and decision making about how to use evaluation findings. I’ve included real-world examples based on our experience at EvaluATE.

1) Are there any unexpected findings in the report? The EvaluATE team has been surprised to learn that we are attracting a large number of grant writers and other grant professionals to our webinars. We initially assumed that principal investigators (PIs) and evaluators would be our main audience. With growing attendance among grant writers, we became aware that they are often the ones who first introduce PIs to evaluation, guiding them on what should go in the evaluation section of a proposal and how to find an evaluator. The unexpected finding that grant writers are seeking out EvaluATE for guidance made us realize that we should develop more tailored content for this important audience as we work to advance evaluation in the ATE program.

Talk with your team and your evaluator to determine if any action is needed related to your unexpected results.

2) What’s the worst/least favorable evaluation finding from your evaluation? Although it can be uncomfortable to focus on a project’s weak points, doing so is where the greatest opportunity for growth and improvement lies. Consider the probable causes of the problem and potential solutions. Can you solve the problem with your current resources? If so, make an action plan. If not, decide if the problem is important enough to address through a new initiative.

At EvaluATE, we serve both evaluators and evaluation consumers who have a wide range of interests and experience. When asked what EvaluATE needs to improve, several respondents to our external evaluation survey noted that they want webinars to be more tailored to their specific needs and skill levels. Some noted that our content was too technical, while others remarked that it was too basic. To address this issue, we decided to develop an ATE evaluation competency framework. Webinars will be keyed to specific competencies, which will help our audience decide which are appropriate for them. We couldn’t implement this research and development work with our current resources, so we wrote this activity into a new proposal.

Don’t sweep an unfavorable result or criticism under the rug. Use it as a lever for positive change.

3) What’s the most favorable finding from your evaluation? Give yourself a pat on the back, and then figure out if this finding points to an aspect of your project you should expand. If you need more information to make that decision, determine what additional evidence could be obtained in the next round of the evaluation. Help others to learn from your successes—the ATE Principal Investigators Conference is an ideal place to share aspects of your work that are especially strong, along with your lessons learned and practical advice about implementing ATE projects.

At EvaluATE, we have been astounded at the interest in and positive response to our webinars. But we don’t yet have a full understanding of the extent to which webinar attendance translates to improvements in evaluation practice. So we decided to start collecting follow-up data from webinar participants to check on use of our content. With that additional evidence in hand, we’ll be better positioned to make an informed decision about expanding or modifying our webinar series.

Don’t just feel good about your positive results—use them as leverage for increased impact.

If you’ve considered your evaluation results carefully but still aren’t able to identify a call to action, it may be time to rethink your evaluation’s focus. You may need to make adjustments to ensure it produces useful, actionable information. Evaluation plans should be fluid and responsive—it is expected that plans will evolve to address emerging needs.

Blog: How Can You Make Sure Your Evaluation Meets the Needs of Multiple Stakeholders?*

Posted on October 31, 2019 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We talk a lot about stakeholders in evaluation. These are the folks who are involved in, affected by, or simply interested in the evaluation of your project. But what these stakeholders want or need to know from the evaluation, the time they have available for the evaluation, and their level of interest are probably quite variable. The table below is a generic guide to the types of ATE evaluation stakeholders, what they might need, and how to meet those needs.

ATE Evaluation Stakeholders

Project leaders (PI, co-PIs)
  What they might need: Information that will help you improve the project as it unfolds; results you can include in your annual reports to NSF to demonstrate accountability and impact
  Tips for meeting those needs: Communicate your needs clearly to your evaluator, including when you need the information in order to make use of it.

Advisory committees or National Visiting Committees
  What they might need: Results from the evaluation that show whether the project is on track for meeting its goals, and if changes in direction or operations are warranted; summary information about the project’s strengths and weaknesses
  Tips for meeting those needs: Many advisory committee members donate their time, so they probably aren’t interested in reading lengthy reports. Provide a brief memo and/or short presentation with key findings at meetings, and invite questions about the evaluation. Be forthcoming about strengths and weaknesses.

Participants who provide data for the evaluation
  What they might need: Access to reports in which their information was used; summaries of what actions were taken based on the information they provided
  Tips for meeting those needs: The most important thing for this group is to demonstrate use of the information they provided. You can share reports, but a personal message from project leaders along the lines of “we heard you and here is what we’re doing in response” is most valuable.

NSF program officers
  What they might need: Evidence that the project is on track to meet its goals; evidence of impact (not just what was done, but what difference the work is making); evidence that the project is using evaluation results to make improvements
  Tips for meeting those needs: Focus on Intellectual Merit (the intrinsic quality of the work and potential to advance knowledge) and Broader Impacts (the tangible benefits for individuals and progress toward desired societal outcomes). If you’re not sure about what your program officer needs from your evaluation, ask for clarification.

College administrators (department chairs, deans, executives, etc.)
  What they might need: Results that demonstrate impact on students, faculty, institutional culture, infrastructure, and reputation
  Tips for meeting those needs: Make full reports available upon request, but most busy administrators probably don’t have the time to read technical reports or don’t need the fine-grained data points. Prepare memos or share presentations that focus on the information they’re most interested in.

Partners and collaborators
  What they might need: Information that helps them assess the return on the investment of their time or other resources

In case you didn’t read between the lines, the underlying message here is to provide stakeholders with the information that is most relevant to their particular “stake” in your project. A good way not to meet their needs is to only send everyone a long, detailed technical report with every data point collected. It’s good to have a full report available for those who request it, but many simply won’t have the time or level of interest needed to consume that quantity of evaluative information about your project.

Most importantly, don’t take our word about what your stakeholders might need: Ask them!

Not sure what stakeholders to involve in your evaluation or how? Check out our worksheet Identifying Stakeholders and Their Roles in an Evaluation at bit.ly/id-stake.

 

*This blog is a reprint of an article from an EvaluATE newsletter published in October 2015.

Checklist: Evaluation Plan for ATE Proposals

Posted on July 19, 2019

Updated July 2019!

This checklist provides information on what should be included in evaluation plans for proposals to the National Science Foundation’s (NSF) Advanced Technological Education (ATE) program. Grant seekers should carefully read the most recent ATE program solicitation for details about the program and proposal submission requirements.

Type: Checklist
Category: Proposal Development
Author(s): Lori Wingate

Blog: An Evaluative Approach to Proposal Development*

Posted on June 27, 2019 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A student came into my office to ask me a question. Soon after she launched into her query, I stopped her and said I wasn’t the right person to help because she was asking about a statistical method that I wasn’t up-to-date on. She said, “Oh, you’re a qualitative person?” And I answered, “Not really.” She left looking puzzled. The exchange left me pondering the vexing question, “What am I?” (Now imagine these words echoing off my office walls in a spooky voice for a couple of minutes.) After a few uncomfortable moments, I proudly concluded, “I am a critical thinker!”  

Yes, evaluators are trained specialists with an arsenal of tools, strategies, and approaches for data collection, analysis, and reporting. But critical thinking—evaluative thinking—is really what drives good evaluation. In fact, the very definition of critical thinking—“the mental process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and evaluating information to reach an answer or conclusion”2—describes the evaluation process to a T. Applying your critical, evaluative thinking skills in developing your funding proposal will go a long way toward ensuring your submission is competitive.

Make sure all the pieces of your proposal fit together like a snug puzzle. Your proposal needs both a clear statement of the need for your project and a description of the intended outcomes—make sure these match up. If you struggle with the outcome measurement aspect of your evaluation plan, go back to the rationale for your project. If you can observe a need or problem in your context, you should be able to observe the improvements as well.

Be logical. Develop a logic model to portray how your project will translate its resources into outcomes that address a need in your context. Sometimes simply putting things in a graphic format can reveal shortcomings in a project’s logical foundation (like when important outcomes can’t be tracked back to planned activities). The narrative description of your project’s goals, objectives, deliverables, and activities should match the logic model.

Be skeptical. Project planning and logic model development typically happen from an optimistic point of view. (“If we build it, they will come.”) When creating your work plan, step back from time to time and ask yourself and your colleagues, What obstacles might we face? What could really mess things up? Where are the opportunities for failure? And perhaps most important, ask, Is this really the best solution to the need we’re trying to address? Identify your plan’s weaknesses and build in safeguards against those threats. I’m all for an optimistic outlook, but proposal reviewers won’t be wearing rose-colored glasses when they critique your proposal and compare it with others written by smart people with great ideas, just like you. Be your own worst critic and your proposal will be stronger for it.

Evaluative thinking doesn’t replace specialized training in evaluation. But even the best evaluator and most rigorous evaluation plan cannot compensate for a disheveled, poorly crafted project plan. Give your proposal a competitive edge by applying your critical thinking skills and infusing an evaluative perspective throughout your project description.

* This blog is a reprint of an article from an EvaluATE newsletter published in summer 2015.

2 dictionary.com