EvaluATE - Proposal Development

Blog: Intellectual Merit and Broader Impacts: Identifying Your Project’s Achievements and Supporting Evidence

Posted on October 21, 2015 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The deadline for proposals to the National Science Foundation’s Advanced Technological Education program for this year just passed, so a blog about what to include in the Results from Prior NSF Support section in an ATE proposal may seem untimely. But if you’re involved in a project that will seek additional funding from NSF in the next year or two, there’s no better time than NOW to assess the quality and quantity of the evidence of your current project’s intellectual merit and broader impacts (NSF’s review criteria for proposed and completed work).

Kirk Knestis (Hezel Associates) and I are presenting a workshop on this topic the day this blog is being published. Here’s an overview of some of what we’re covering.

Understand the fundamentals of intellectual merit and broader impacts: In a nutshell, intellectual merit is about advancing knowledge and understanding. Broader impacts are benefits to society. If your project is mainly research, it’s likely that most of your achievements are related to intellectual merit. If you’re mostly doing development and implementation, your achievements are probably more in the area of broader impacts. But you should have evidence of both aspects of your work.

Identify your project’s intellectual merit and broader impacts: To home in on your project’s intellectual merit and broader impacts, it helps to break down these big ideas into smaller chunks. To identify your project’s intellectual merit, ask yourself: What are we doing that is generating new knowledge or improved understanding? Are we using novel research methods or investigating a novel topic to better understand an aspect of STEM education? Is our project transformative, bringing about extraordinary or revolutionary change? In terms of broader impacts, what are we doing to serve groups that have been historically underrepresented in STEM, develop a diverse workforce, create partnerships between academia and industry, enhance education infrastructure, increase economic competitiveness, or improve STEM education in general?

Identify gaps in evidence: It’s not enough to profess your achievements—you need evidence. Evidence is not the method you used to collect data (tests, surveys, observations, etc.); it’s what those data indicate (a genetic test is not evidence that someone committed a crime; the result of that test is the evidence). If you don’t have good evidence of important achievements, revise your evaluation plan and start collecting data as soon as possible. Make sure you have evidence of more than just the completion of activities. For example, if your achievement is that you developed a new certification program, then to demonstrate broader impacts you need evidence that it is a high-quality program and that students are enrolling, graduating, and getting jobs (or at least evidence from as far down the outcomes chain as is reasonable). Plotting your evidence on a logic model is a good way to determine whether you have sufficient evidence of outcomes as well as activities and outputs.

If you find gaps that will impair your ability to make a compelling case about what you’ve achieved with your current grant, update your evaluation plan accordingly. When you write your next proposal, you will be required to present evidence of your achievements under the specific headings of “Intellectual Merit” and “Broader Impacts” – if you don’t, your proposal is at risk of being returned without review.

 

To learn more, check out these resources:

NSF Grant Proposal Guide (this link goes directly to the section on Results from Prior NSF Support): http://bit.ly/nsf-results

NSF Merit Review Website: http://www.nsf.gov/bfa/dias/policy/merit_review/

NSF Important Notice #130: Transformative Research (for details about what NSF considers transformative, one dimension of intellectual merit): http://www.nsf.gov/pubs/2007/in130/in130.jsp

NSF Examples of Broader Impacts: http://www.nsf.gov/pubs/2002/nsf022/bicexamples.pdf

Perspectives on Broader Impacts: http://www.nsf.gov/od/oia/publications/Broader_Impacts.pdf

National Alliance for Broader Impacts: http://broaderimpacts.net/about/

Materials from our ATE PI conference workshop on this topic, including presentation slides, worksheets, and EvaluATE’s Results from Prior NSF Support Checklist: http://www.evalu-ate.org/library/conference/pi-conference/

Blog: EvaluATE to the Rescue!

Posted on September 16, 2015 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

8 Resources to help with ATE proposal evaluation plans

The EvaluATE team is busy preparing our proposal for a third round of funding from the National Science Foundation to continue our work as a support center serving the ATE community. So, it’s a good time to remind folks of the EvaluATE resources that may come in handy at proposal development time.

Evaluation Planning Checklist for ATE Proposals
This checklist identifies all the areas in your ATE proposal where information related to the project’s evaluation should appear, with guidance on how to present a strong evaluation plan.

Evaluation: Don’t Submit Your ATE Proposal Without It
This webinar from August 2015 focused on developing evaluation plans for ATE proposals. We reviewed the contents of the evaluation planning checklist (see above) in detail, with illustrative examples. Also check out our 2014 webinar on the same topic, featuring the perspectives of an ATE PI, evaluator, and program officer in addition to the EvaluATE team.

10 Helpful Hints and 10 Fatal Flaws: Writing Better Evaluation Sections in Your Proposals
Elizabeth Teles, former ATE program co-lead and member of EvaluATE’s National Visiting Committee, offers her advice on ways to strengthen your proposal’s evaluation plan and avoid common serious mistakes.

ATE Logic Model Template
A logic model isn’t required for ATE proposals, but it is a useful and efficient way to communicate an overview of what you intend to do and achieve with your ATE funding. This template provides a format for you to identify your project’s activities, outputs (products), and outcomes.

Data Collection Planning Matrix
An evaluation plan needs to describe what data will be collected and how, from what sources, by whom, and when, as well as how the data will be analyzed. This worksheet prompts you to record this information in table format, which may then be copied into a proposal’s project description or supplementary document.

ATE Annual Survey Findings
The ATE survey, conducted annually since 2000, provides aggregate information about ATE-funded projects and centers. The survey data may be used to demonstrate a particular need within the ATE program or describe your project’s past performance in relation to the program overall.

Checklists for the Common Guidelines for Education Research and Development
If your proposal is for targeted research or includes a research component, you should show familiarity with the Common Guidelines for Education Research and Development, published jointly by the National Science Foundation and Institute of Education Sciences. EvaluATE’s checklists serve as a quick-start guide to those guidelines.

Project Resume Checklist
If you are applying for renewal funding, a project resume is an efficient means for communicating your past productivity and capacity for future work to reviewers. The checklist explains what to include in a project resume and how. See also our May 2015 webinar on this topic for more information.

And if you haven’t seen it yet, check out the Summer 2015 issue of our newsletter, which is devoted to evaluation-related issues for ATE proposals.

Blog: Evaluation Plan Development for Grant Writing

Posted on March 25, 2015 in Blog

Dean of Grants and Federal Programs, Pensacola State College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As dean of institutional effectiveness and grants, I have varied responsibilities, but at heart I am a grant writer. I find it easy to write a needs statement based on available data; what is more challenging is developing an effective evaluation plan for a proposed project.

A lot of time and effort – taxpayer-supported – go into project evaluation, an increasingly significant component of federal grant applications, as illustrated by the following examples:

  • My college partners on two existing U.S. Department of Labor Trade Adjustment Assistance Community College and Career Training (TAACCCT) Grants – almost $2 billion nationally to expand training for the unemployed – which allow up to 10 percent of project budgets to be designated for mandatory external evaluation.
  • We have an $8.5 million U.S. Department of Health & Human Services Health Profession Opportunity Grant demonstration project. Part of that “demonstration” included mandatory participation in activities conducted by contracted external evaluators.

We recently submitted grant applications under the highly competitive U.S. Department of Education Student Support Services (SSS) Program. My college has a long-term SSS program that meets all of its objectives, so we’ll receive “extra” prior experience points. We are assured of re-funding, right? Maybe, as long as we address competitive preference priorities and score better than perfect – every point counts.

Although external evaluation is not required, a comparison of language excerpted from the last three SSS competitions makes it clear that there is now much greater emphasis on the details of the evaluation plan. The guidelines require a detailed description of what types of data will be collected and how the applicant will use the information collected in the evaluation of project activities. It is no longer sufficient to just say “project staff will collect quantitative and qualitative data and use this information for project improvement.”

Our successful evaluation plans start with a detailed logic model, which allows us to make realistic projections of what we hope will happen and plan data collection around the project’s key activities and outcomes. We use these guiding questions to help formulate the details:

  • What services will be provided?
  • What can be measured?
    • perceptions, participation, academic progress
  • What information sources will be available?
  • What types of data will be collected?
    • student records, surveys, interviews, activity-specific data
  • How will we review and analyze the data collected?
  • What will we do with the findings?
    • Specific actions

Unlike universities, most community and state colleges are not hotbeds of research and evaluation. So what can grant writers do to prepare themselves to meet the “evaluation plan” challenge?

  • Make friends with a statistician; they tend to hang out in the Mathematics or Institutional Research departments.
  • Take a graduate-level course in educational statistics. If you’re writing about something, it helps to have at least a rudimentary knowledge of the subject.
  • Find good resources. I have several textbook-like evaluation manuals, but my go-to, dog-eared guide for developing an evaluation plan is the National Science Foundation’s “2010 User-Friendly Handbook for Project Evaluation” (Logic Model information in Chapter 3).
  • An open-access list of Institutional Research (IR) links on the Association for Institutional Research (AIR; a membership organization) website provides more than 2,200 links to external IR web pages on a variety of topics related to data and decisions for higher education.
  • Community College Research Center (CCRC) resources, such as publications on prior research, can guide evaluation plan development (http://ccrc.tc.columbia.edu/). The CCRC FAQs Web page provides national data useful for benchmarking your grant program’s projected outcomes.