Having made a career of community college administration, first as a grant writer and later as a college president, I know well the power of grants in advancing a college’s mission. In the early 1990s, the NSF became one of the first grantmakers in higher education to recognize the role of community colleges in STEM undergraduate education. Ever since, two-year faculty have strived to enter the NSF world with varied success.
Unlike much of the grant funding from federal sources, success in winning NSF grants is predicated on innovation and advancing knowledge, which stands in stark contrast to a history of colleges making the case for support based on institutional need. Colleges that are repeatedly successful in winning NSF grants are those that demonstrate their strengths and their ability to deliver what the grantor wants. I contend that NSF grants will increasingly go to new or “first-time” institutions once those institutions recognize and embrace their capacity for innovation and knowledge advancement. With success in winning grants comes the responsibility to document achievements through effective evaluation.
I am encouraged by what I perceive as a stepped-up discussion among grant writers, project PIs, and program officers about evaluation and its importance. As a grant writer/developer, my main concern was to show that the activities I proposed were actually accomplished and that the anticipated courses, curricula, or other project deliverables had been implemented. Longer-term outcomes pertaining to student achievement were generally considered to be beyond a project’s scope. However, student outcomes have now become the measure for attracting public funding, and the emphasis on outcomes will only increase in this era of performance-based budgeting.
When I was a new president of an institution that had never benefitted from grant funding, I had the pleasure of rolling up my sleeves and joining the faculty in writing a proposal to the Advanced Technological Education (ATE) program. College presidents think long and hard about performance measures like graduation rates, length of enrollment until completion, and the gainful employment of graduates, yet such measures may seem distant to faculty who must focus on getting more students to pass their courses. The question arises of how to reconcile equally important interests in outcomes—at the course and program levels for faculty and at the institutional level for the president. While I was not convinced that student outcomes were beyond the scope of the grant, the faculty and I agreed that our ATE evaluation ought to be a step in a larger process.
Most evaluators would agree that longitudinal studies of student outcomes cannot fall within the typical three-year grant period. By the same token, I think the new emphasis on logic models that demonstrate the progression from project inputs and activities through short-, mid-, and long-term outcomes allows grant developers to better tailor evaluation designs to the funded work, as well as extend project planning beyond the immediate funding period. The notion of “stackable credentials” so popular with the college completion agenda should now be part of our thinking about grant development. For example, we might look to develop proposals for ATE Targeted Research that build upon more limited project evaluation results. Or perhaps the converse is the way to go: Let’s plan our ATE projects with a mind toward long-term results, supported by evaluation and research designs that ultimately get us the data we need to “make the case” for our colleges as innovators and advancers of knowledge.
The ATE program solicitation calls for the evaluation of project effectiveness. Effectiveness, as defined by the Oxford English Dictionary, is “the degree to which something is successful in producing a desired result.” Therefore, ATE evaluations should determine the extent to which projects achieved their intended results, demonstrating how the project’s activities led to observed outcomes.
To claim effectiveness requires establishing causal links between a project’s activities and observed outcomes. To establish causation, three criteria must be met: temporal precedence, covariation, and no plausible alternative explanations (see bit.ly/trochim). For example, if you claim that your project led to increased enrollment of women in engineering technology, you need to provide evidence that (1) the trend did not start until after the project was initiated; (2) individuals or campuses not involved in the project did not experience the same changes, or that the degree of change varied with the degree of involvement; and (3) nothing else going on in the project’s environment could have produced the observed increase in the number of women enrolled.
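The temporal-precedence and covariation checks can be sketched with a few lines of code. The campuses, years, and enrollment counts below are entirely hypothetical, invented only to illustrate the comparison; a real evaluation would use actual institutional data and, ideally, more rigorous methods:

```python
# Hypothetical fall enrollment counts of women in engineering technology.
# The project began in fall 2011 at the "project" campus only.
years = [2010, 2011, 2012, 2013]
project_campus = [12, 14, 22, 27]      # participated in the project
comparison_campus = [11, 12, 12, 13]   # did not participate

def change_after(counts, years, start_year):
    """Average enrollment in years at/after start_year minus the average before."""
    before = [c for y, c in zip(years, counts) if y < start_year]
    after = [c for y, c in zip(years, counts) if y >= start_year]
    return sum(after) / len(after) - sum(before) / len(before)

project_gain = change_after(project_campus, years, 2011)
comparison_gain = change_after(comparison_campus, years, 2011)

# Temporal precedence: the project campus's gain appears only after 2011.
# Covariation: the involved campus gains far more than the comparison campus.
print(f"project gain: {project_gain:.1f}, comparison gain: {comparison_gain:.1f}")
```

Even in this toy version, the third criterion cannot be handled by arithmetic: ruling out alternative explanations (a new scholarship, a demographic shift) requires knowing what else was happening in the project’s environment.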
While effectiveness is important, there is more to evaluation than measuring it. Other considerations include relevance, efficiency, impact, and sustainability (these are project evaluation criteria developed by the Organisation for Economic Co-operation and Development; to learn more, see bit.ly/oecd-dac).
The evaluation requirements and expectations expressed in the new ATE program solicitation are generally consistent with those that were in the prior version. However, there are two important changes that relate specifically to ATE centers:
First, the solicitation states that proposals for center renewals “may submit up to five pages on Results of Prior Support in the supplemental documents section and refer the reader to that section in the Project Description section.” The requirement that all proposals must begin with a subsection titled Results of Prior Support has not changed. What is new is the option—for Centers only—of describing results of prior support in a supplementary document, allowing proposers to devote more of their 15-page project descriptions to what they intend to do, rather than what they have accomplished in the past. Whether embedded in the project description or appended as a supplementary document, this section should identify the prior grant’s outcomes and impacts, supported with evidence from the evaluation. Reviewers will be looking for strong evidence that NSF made a good investment in the center and that a renewal grant is warranted given the center’s track record.
Second, the new solicitation calls for national center proposals to include evaluation plans that describe how impacts on institutions, faculty, students, and industry will be assessed. This is a more specific expectation for the evaluation than in the previous solicitation, which called for evaluations to provide evidence of impacts relating to a center’s disciplinary focus. Thus, proposals for national centers should describe the intended impacts at each of these levels (institutions, faculty, students, industry) and the evaluation plan should explain what data will be used to determine the quality and magnitude of those impacts.
Although not directly related to evaluation, other notable changes in the 2014 solicitation include the following:
- there is a new track for ATE projects called “ATE Coordination Networks”
- the Targeted Research track has been expanded
- Resource Centers have been renamed Support Centers
- all grantees are required to work with ATE Central to archive materials developed with grant funds to ensure they remain available to the public after funding ends
The archiving requirement relates directly to the data management plans that are required with all NSF proposals. To learn more about DMPs and how to develop yours, check out the article on page 3 of this newsletter.
Also, note the submission deadline is earlier this year—October 9! To learn more about developing an evaluation plan to include in your ATE proposal, join our webinars on August 20 and 26 (see page 4).
Check out the new solicitation at www.nsf.gov/ate.
NSF requires that ALL proposals include a data management plan (DMP); FastLane will not accept submissions without one. The DMP must detail “how you will conform to NSF policy on the dissemination and sharing of research results.” The term “research results” basically means any information collected or produced as a result of your program. Therefore, the DMP must detail what data you will collect and how you will collect, maintain, report, and disseminate those data, as well as other resources generated by your grant. While NSF does outline requirements for what should be included in a DMP (bit.ly/dmp-ehr), they do not tell you how to write one. There are a handful of resources that can help you write a DMP.
The University of Wisconsin Research Data Services Unit has a webpage that provides several links to resources (http://researchdata.wisc.edu/), and the University of Michigan features extensive guidance, including templates and worksheets (bit.ly/um-dmp). The University of Minnesota also offers several resources for DMP development (bit.ly/umn-dmp).
One other tool that can be helpful is the DMP Tool available at DMPTool.org. You fill out the plan as you go through the tool, and you can save plans as well. The tool provides extensive guidance on DMP development, with instructions for each part of the plan, guidance on how to fill out the sections, and helpful links. ATE Central includes guidance, resources, and an example plan in their handbook, available at atecentral.net/handbook, and also provides archive services for resources produced by ATE projects and centers (which supports sustainability). A new requirement in the 2014 ATE program solicitation is that grantees “must provide copies of [their] resources to ATE Central for archiving purposes.”
If you can demonstrate that you followed the data management plan for a prior grant, and also that you have provided access to the information and resources that your project or center has generated, then you can even use this information in your Results of Prior Support section for your next proposal.
Where are the hidden opportunities to positively influence proposal reviewers? Surprisingly, one of the best is the Results of Prior Support section. Many proposers do not go beyond simply recounting what they did in prior grants, missing the chance to “wow” the reader with examples of impact, such as Nano-Link’s Nano-Infusion Project, which has integrated nanoscale modules into multiple grade levels of K-14 education across the nation. Survey feedback shows that teachers feel empowered with tools to teach nanoscale concepts effectively. New leaders are emerging, and enthusiasm for science is visible in the videos available on the website. Because of NSF funding, additional synergistic projects have allowed activities to scale and a national presence to grow.
Any PI who has received NSF support in the past five years must include a summary of the results (up to five pages) and explain how those results support the current proposal. Because pages in this subsection count toward the 15-page total, many people worry about using too much space to describe what has already been done. These pages, however, can give the proposal punch and energy with metrics, outcomes, and stories. This is the time to quote the evaluator’s comments and tie the results to the evaluation plan; that external view provides valuable decision-making information to the reviewers. The discussion of prior support helps reviewers evaluate the proposal, allows them to make comments, and provides evidence that the new activities will add value.
According to the NSF Grant Proposal Guide, updated in 2013, the subsection must include:
- award number, amount, and period of support
- title of the project
- a summary of results described under the distinct, separate headings of Intellectual Merit and Broader Impacts
- publications acknowledging NSF support
- evidence of research products and their availability
- the relation of completed work to proposed work
The bottom line is that the beginning of the project description sets the stage for the entire proposal. Data and examples that demonstrate intellectual merit and broader impacts clearly define what has been done, leaving room for a clear description of the new directions that will require funding.