Archive: NSF

Blog: What Goes Where? Reporting Evaluation Results to NSF

Posted on April 26, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this blog, I provide advice for Advanced Technological Education (ATE) principal investigators (PIs) on how to include information from their project evaluations in their annual reports to the National Science Foundation (NSF).

Annual reports for NSF grants are due within the 90 days leading up to the award’s anniversary date. That means if your project’s initial award date was September 1, your annual reports will be due between June and August each year until the final year of the grant (at which point an outcome report is due within 90 days after the award anniversary date).

When you prepare your first annual report for NSF at Research.gov, you may be surprised to see there is no specific request for results from your project’s evaluation or a prompt to upload your evaluation report. That’s because Research.gov is the online reporting system used by all NSF grantees, whether they are researching fish populations in Wisconsin lakes or developing technician education programs. So what do you do with the evaluation report your external evaluator prepared or all the great information in it?

1. Report evidence from your evaluation in the relevant sections of your annual report.

The Research.gov system for annual reports includes seven sections: Cover, Accomplishments, Products, Participants, Impact, Changes/Problems, and Special Requirements. Findings and conclusions from your evaluation should be reported in the Accomplishments and Impact sections, as described in the table below. Sometimes evaluation findings will point to a need for changes in project implementation or even its goals. In this case, pertinent evidence should be reported in the Changes/Problems section of the annual report. Highlight the most important evaluation findings and conclusions in these report sections. Refer to the full evaluation report for additional details (see Point 2 below).

What to report from your evaluation, by NSF annual report section:

Accomplishments
  • Number of participants in various activities
  • Data related to participant engagement and satisfaction
  • Data related to the development and dissemination of products (Note: The Products section of the annual report is simply for listing products, not reporting evaluative information about them.)

Impact
  • Evidence of the nature and magnitude of changes brought about by project activities, such as changes in individual knowledge, skills, attitudes, or behaviors or larger institutional, community, or workforce conditions
  • Evidence of increased participation by members of groups historically underrepresented in STEM
  • Evidence of the project’s contributions to the development of infrastructure that supports STEM education and research, including physical resources, such as labs and instruments; institutional policies; and enhanced access to scientific information

Changes/Problems
  • Evidence of shortcomings or opportunities that point to a need for substantial changes in the project

Do you have a logic model that delineates your project’s activities, outputs, and outcomes? Is your evaluation report organized around the elements in your logic model? If so, a straightforward rule of thumb is to follow that logic model structure and report evidence related to your project activities and outputs in the Accomplishments section and evidence related to your project outcomes in the Impact section of your NSF annual report.

2. Upload your evaluation report.

Include your project’s most recent evaluation report as a supporting file in the Accomplishments or Impact section of Research.gov. If the report is longer than about 25 pages, make sure it includes a 1-3 page executive summary that highlights key results. Your NSF program officer is very interested in your evaluation results, but probably doesn’t have time to carefully read lengthy reports from all the projects he or she oversees.

Blog: 3 Inconvenient Truths about ATE Evaluation

Posted on October 14, 2016 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Many evaluations fall short of their potential to provide useful, timely, and accurate feedback to projects because project leaders or evaluators (or both) have unrealistic expectations. In this blog, I expose three inconvenient truths about ATE evaluation. Dealing with these truths head-on will help project leaders avoid delays and misunderstandings.

1. Your evaluator does not have all the answers.

Even for highly experienced evaluators, every evaluation is new and has to be tailored to the project’s particular context. Do not expect your evaluator to produce an ideal evaluation plan on Day 1, be able to pull the perfect data collection instrument off his or her shelf, or know just the right strings to pull to get data from your institutional research office. Your evaluator is an expert on evaluation, not your project or your institution.

As an evaluator, when I ask clients for input on an aspect of their evaluation, the last thing I want to hear is “Whatever you think, you’re the expert.” Work with your evaluator to refine your evaluation plan to ensure it fits your project, your environment, and your information needs. Question elements that don’t seem right to you and provide constructive feedback. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects (Chapter 4) has detailed information about how project leaders can bring their expertise to the evaluation process.

2. There is no one right answer to the question, “What does NSF want from evaluation?”

This is the question I get the most as the director of the evaluation support center for the National Science Foundation’s Advanced Technological Education (ATE) program. The truth is, NSF is not prescriptive about what an ATE evaluation should look like, and different program officers have different expectations. So, if you’ve been looking for the final word on what NSF wants from an ATE evaluation, you can end your search because you won’t find it.

However, NSF does request common types of information from all projects via their annual reports and the annual ATE survey. To make sure you are not caught off guard, preview the Research.gov reporting template and the most recent ATE annual survey questions. If you are doing research, get familiar with the Common Guidelines for Education Development and Research.

If you’re still concerned about meeting expectations, talk to your NSF program officer.

3. Project staff need to put in time and effort.

Evaluation matters often get put on a project’s back burner so more urgent issues can be addressed. (Yes, even an evaluation support center is susceptible to no-time-for-evaluation-itis.) But if you put off dealing with evaluation matters until you feel like you have time for them, you will miss key opportunities to collect data and use the information to make improvements to your project.

To make sure your project’s evaluation gets the attention it needs:

  • Set a recurring conference call or meeting with your evaluator—at least once a month.
  • Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation.
  • Assign one person on your project team to be the point-person for evaluation.
  • Commit to using your evaluation results in a timely way: if you have a recurring project activity, make sure you gather feedback from those involved and use it to improve the next event.


Newsletter: Communicating Results from Prior NSF Support

Posted on January 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

The ATE proposal deadline in early October is still many months away, but if you are submitting for new funding this year, now is the time to reflect on your project’s achievements and make sure you will be able to write a compelling account of your current or past project’s results as they relate to the NSF review criteria of Intellectual Merit and Broader Impacts. A section titled Results from Prior NSF Support is required whenever a proposal PI or co-PI has received previous grants from NSF in the past five years. A proposal may be returned without review if it does not use the specific headings of “Intellectual Merit” and “Broader Impacts” when presenting results from prior support.

Given that these specific headings are required, you should have something to say about your project’s achievements in these distinct areas. It is OK for some projects to emphasize one area over another (Intellectual Merit or Broader Impacts), but grantees should be able to demonstrate value in both areas. Descriptions of achievements should be supported with evidence. Bold statements about a proposed project’s potential broader impacts, for example, will be more convincing to reviewers if the proposer can describe tangible benefits of previously funded work.

To help with this aspect of proposal development, EvaluATE has created a Results from Prior NSF Support Checklist (see http://bit.ly/prior-check). This one-page checklist lists the NSF requirements for this section of a proposal, as well as our additional suggestions for what to include and how.

Two EvaluATE blogs include additional guidance in this area: Amy Germuth (http://bit.ly/ag-reapply) offers specific guidance regarding wording and structure, and Lori Wingate (http://bit.ly/nsf-merit) shares tips for assessing the quality and quantity of evidence of a project’s Intellectual Merit and Broader Impacts, with links to helpful resources.

The task of identifying and collecting evidence of results from prior support should not wait until proposal writing time. It should be embedded in a project’s ongoing evaluation.

Newsletter: What do you do when your evaluator disagrees with a recommendation by your program officer?

Posted on April 1, 2014 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

This question was submitted anonymously to EvaluATE by an ATE principal investigator (PI), so I do not know the specific nature of the recommendation in question. Therefore, my response focuses not on the substance of the recommendation, but on the interpersonal and political dynamics of the situation.

Let’s put the various players’ roles into perspective:
As PI, you are ultimately responsible for your project: delivering what you outlined in your grant proposal and negotiations, and making decisions about how best to conduct the project based on your experience, expertise, and input from various advisors. You hold the authority to decide how your project is implemented and which recommendations, from whichever source, to implement in order to ensure the success of your project.

Your NSF program officer (PO) monitors your project, primarily based on the information you provide in your annual report, submitted via Research.gov. Your program officer may provide extremely valuable guidance and advice, but the PO’s role is to comment on your project as described in the report. You are not obligated to accept the advice. However, the PO does approve the report, based on his or her assessment of whether the project is sufficiently meeting the expectations of the grant. If you choose not to accept your program officer’s recommendations (which is completely acceptable), you should be able to provide a clear rationale for your decision in a respectful and diplomatic way by addressing each of the issues raised. Such a response should be documented, for example in your annual report and/or in a response to the evaluation report.

Your evaluator is a consultant you hired to provide a service to your project in exchange for compensation. You are not obligated to accept this person’s recommendations, either. However, you should give your evaluator’s recommendations, especially those based on evidence, careful consideration and explain why you believe they are or are not appropriate for your project. An evaluator should never “ding” your project for not implementing the evaluation recommendations.

If you are really not sure who is right and neither person’s position (the PO’s recommendation or the evaluator’s disagreement with it) especially resonates with you and your understanding of what your project needs, you should seek additional information. If you have an advisory panel, this is exactly the type of tricky situation they can help with. If you don’t, you might consult an experienced person at your institution or another ATE project or center PI. Whichever way you go, you should be able to provide a clear rationale for your position and communicate it to both parties. This is not a popularity contest between your evaluator and your program officer. This is about making the right decisions for your project.

Newsletter: Annual NSF Report and the Annual ATE Survey

Posted on January 1, 2014 in Newsletter

Doctoral Associate, EvaluATE, Western Michigan University

A common question EvaluATE has been asked about the ATE survey (conducted annually since 2000, with an average response rate of 95 percent) is, “Why can’t you just use the information we provided in our annual report?” Although there is overlap between the information required for the annual ATE survey and the annual reports that grantees submit through Research.gov, the survey is tailored to ATE activities and outcomes. In contrast, the Research.gov reporting system is set up to accommodate a vast array of NSF-funded endeavors, from polar research expeditions to television programming to the development of technical degree programs. Also, Research.gov reports are narrative reports that are delivered to program officers in PDF format. As such, there is no way to aggregate information submitted via Research.gov into a report about the overall ATE program, which is what NSF needs to support the program’s accountability to Congress.

Although they serve distinct purposes, much of the information asked for in the ATE Survey can and should be reported in ATE grantees’ annual reports to NSF. So, EvaluATE has developed a new resource to help streamline ATE grantees’ reporting activities. We’ve extracted information from Research.gov so that PIs can see all the information required in annual reports in one place (rather than having to click through the multi-layered system or strain their eyes viewing the screenshots in Research.gov’s Project Reports Preview PDF document). The document also identifies items from the ATE Survey that are relevant to various annual report sections, so PIs can maximize the use of the data collected about their projects. We welcome your feedback on this draft resource, which you may download from evalu-ate.org/annual_survey/.

Newsletter: 20 Years of ATE Evaluation

Posted on October 1, 2013 in Newsletter

Evaluation has been required of ATE projects and centers since the program began in 1993. Many evaluations were concerned more with the numbers of students and faculty impacted than with the effectiveness of the intervention. The sophistication of evaluation expectations has increased over time. Early in the program, there was a shortage of evaluators who understood both the disciplinary content and the methods of evaluation. Through a separate grant, Arlen Gullickson at the Western Michigan University Evaluation Center provided an internship program for novice evaluators, who spent six months evaluating a component of an ATE project. Several ATE evaluators got their start in this program, and several PIs learned what evaluation could do for them and their projects.

The ATE program responded to the Government Performance and Results Act by developing a project monitoring survey that provided a snapshot of the program. The survey is still administered annually by EvaluATE. Although this monitoring system also emphasized “body counts,” over time the survey was modified with input from program officers to include questions that encouraged evaluation of project effectiveness.

For example, questions were added asking whether the project’s evaluation investigated the extent to which participants in professional development actually implemented the content correctly and the resulting impact on student learning, following the ideas of the Kirkpatrick model for evaluation. The evaluations reported in renewal proposals still concentrate on “body counts,” while proposal reviewers ask, “What happened as a result?” To develop project evaluations that could be aggregated to determine how the ATE program was meeting its goals, a workshop was held with evaluators from centers. The participants suggested that projects could be evaluated along eight dimensions: impact on students, faculty, the college, the community, industry, interaction among colleges, the region, and the nation. A review of several project and center annual reports found that all categories were addressed and that very few items could not be accommodated in this scheme.

Following the evaluation of NSF’s Math and Science Partnership program, I have encouraged project and center leaders to make a FEW claims about the effectiveness of their projects. The evaluator should provide evidence for the extent to which the claims are justified. This view is consistent with the annual report template in Research.gov, which asks for the major goals of the project. It also limits summative evaluation to a few major issues. Much of the emphasis, both here and in general, has been on summative evaluation focused on impact and effectiveness. Projects should also engage in formative evaluation to inform project improvements. This requires a short feedback cycle that is usually not possible with only external evaluation. An internal evaluator working with an external evaluator may be useful for collecting data and providing timely feedback to the project. A grant has recently been awarded to strengthen the practice and use of formative evaluation by ATE grantees. Arlen Gullickson, EvaluATE’s co-PI, is leading this work, in cooperation with EvaluATE.

Gerhard Salinger is a founding program officer of the ATE program. The ideas expressed here are his alone and may not reflect the views of the National Science Foundation.