We EvaluATE - Proposal Development

Blog: Evaluating Educational Programs for the Future STEM Workforce: STELAR Center Resources

Posted on November 8, 2018

Project Associate, STELAR Center, Education Development Center, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello EvaluATE community! My name is Sarah MacGillivray, and I am a member of the STEM Learning and Research (STELAR) Center team, which supports the National Science Foundation Innovative Technology Experiences for Students and Teachers (NSF ITEST) program. Through ITEST, NSF funds the research and development of innovative models of engaging K-12 students in authentic STEM experiences. The goals of the program include building students’ interest and capacity to participate in STEM educational opportunities and developing the skills they will need for careers in STEM. While we target slightly different audiences than the Advanced Technological Education (ATE) program, our programs share the common goal of educating the future STEM workforce, and to support this goal, I invite you to access the many evaluation resources available on our website.

The STELAR website houses an extensive set of resources collected from and used by the ITEST community. These resources include a database of nearly 150 research and evaluation instruments. Each entry features a description of the tool, a list of relevant disciplines and topics, target participants, and a link to ITEST projects that have used the instrument in their work. Whenever possible, PDFs and/or URLs to the original resource are included, though some tools require a fee or membership to the third-party site for access. The instruments can be accessed at http://stelar.edc.org/resources/instruments, and the database can be searched or filtered by keywords common to ATE and ITEST projects, e.g., “participant recruitment and retention,” “partnerships and collaboration,” “STEM career opportunities and workforce development,” “STEM content and standards,” and “teacher professional development and pedagogy,” among others.

In addition to our extensive instrument library, our website also features more than 400 publications, curricular materials, and videos. Each library can be browsed individually, or if you would like to view everything that we have on a topic, you can search all resources on the main resources page: http://stelar.edc.org/resources. We are continually adding to our resources and have recently improved our collection methods to allow projects to upload to the website directly. We expect this will result in even more frequent additions, and we encourage you to visit often or join our mailing list for updates.

STELAR also hosts a free, self-paced online course in which novice NSF proposal writers develop a full NSF proposal. While focused on ITEST, the course can be generalized to any NSF proposal. Two sessions focus on research and evaluation, breaking down the process for developing impactful evaluations. Participants learn what key elements to include in research designs, how to develop logic models, what is involved in deciding the evaluation’s design, and how to align the research design and evaluation sections. The content draws from expertise within the STELAR team and elements from NSF’s Common Guidelines for Education Research and Development. Since the course is self-paced, you can learn more about the course and register to participate at any time: https://mailchi.mp/edc.org/invitation-itest-proposal-course-2

We hope that these resources are useful in your work and invite you to share suggestions and feedback with us at stelar@edc.org. As a member of the NSF Resource Centers network, we welcome opportunities to explore cross-program collaboration, working together to connect and promote our shared goals.

Blog: Four Personal Insights from 30 Years of Evaluation

Posted on August 30, 2018

Haddix Community Chair of STEM Education, University of Nebraska Omaha

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As I complete my 30th year in evaluation, I feel blessed to have worked with so many great people. In preparation for this blog, I spent a reflective morning with some hot coffee, cereal, and wheat toast (that morning donut is no longer an option), and I looked over past evaluations. I thought about any personal insights that I might share, and I came up with four:

  1. Lessons Learned Are Key: I found it increasingly helpful over the years to think about a project evaluation as a shared learning journey, taken with the project leadership. In this context, we both want to learn things that we can share with others.
  2. Evaluator Independence from Project Implementation Is Critical: Nearly 20 years ago, a program officer read in a project annual report that I had done a workshop on problem-based learning for the project. In response, he kindly asked if I had “gone native,” which is slang for a project evaluator getting so close to the project it threatens independence. As I thought it over, he had identified something that I was becoming increasingly uncomfortable with. It became difficult to offer suggestions on implementing problem-based learning when I had offered the training. That quick, thoughtful inquiry helped me to navigate that situation. It also helped me to think about my own future evaluator independence.
  3. Be Sure to Update Plans after Funding: I always adjust a project evaluation plan after the award. Once funded, everyone really digs in, and opportunities typically surface to make the project and its evaluation even better. I have come to embrace that process. I now typically include an “evaluation plan update” phase before we initiate an evaluation, to ensure that the evaluation plan is the best it can truly be when we implement it.
  4. Fidelity Is Important: It took me 10 years in evaluation before I fully understood the “fidelity issue.” Loosely defined, fidelity is how faithfully program implementers follow the recipe of a program intervention. The first time I became concerned with fidelity, I was evaluating the implementation of 50 hours of curriculum. As I interviewed the teachers, it became clear that they were spending vastly different amounts of time on topics and activities. Like all good teachers, they had made the curriculum their own, but in many ways, the intended project intervention disappeared. This made it hard to learn much about the intervention. I evolved to include a fidelity feedback process in projects, to statistically adjust for that natural variation or to examine differing impacts based on intervention fidelity (a rough sketch of what such an adjustment might look like follows this list).
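
To illustrate what such a fidelity adjustment might look like, here is a minimal sketch with entirely hypothetical classroom data (the variable names, values, and model are illustrative, not from the project described above): a regression that includes the treatment indicator and a fidelity score, so the estimated impact can be adjusted for, or allowed to vary with, how faithfully the intervention was implemented.

```python
# Hypothetical sketch of a fidelity-adjusted impact analysis.
# Variable names and values are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

# Each row is a classroom: whether it received the intervention,
# how faithfully the curriculum was implemented (0 = not at all, 1 = fully),
# and a student outcome aggregated to the classroom level.
data = pd.DataFrame({
    "treated":  [1, 1, 1, 1, 0, 0, 0, 0],
    "fidelity": [0.9, 0.7, 0.5, 0.3, 0.0, 0.0, 0.0, 0.0],
    "outcome":  [78, 74, 70, 66, 64, 63, 65, 62],
})

# The coefficient on `treated` estimates the effect when fidelity is zero;
# the interaction term lets the estimated effect grow or shrink with fidelity.
model = smf.ols("outcome ~ treated + treated:fidelity", data=data).fit()
print(model.summary())
```

In a real evaluation, the same idea scales up: collect a fidelity measure for each site, then report impacts both adjusted for and broken out by fidelity.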

In the last 30 years, program evaluation as a field has become increasingly useful and important. Like my days of eating donuts for breakfast, the days of “superficial” evaluation are increasingly gone. They have been replaced by evaluation strategies that are collaboratively planned, engaged, and flexible, which (like my wheat toast and cereal) get evaluators and project leadership further on the shared journey. Although I do periodically miss the donuts, I never miss the superficial evaluations. Overall, I am always really glad that I now have the cereal and toast—and that I conduct strong and collaborative program evaluations.

Blog: The Life-Changing Magic of a Tidy Evaluation Plan

Posted on August 16, 2018

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Effective tidying involves only three essential actions. All you need to do is take the time to examine every item you own, decide whether or not you want to keep it, then choose where to put what you keep. Designate a place for each thing.”

―Marie Kondo, The Life-Changing Magic of Tidying Up

I’ve noticed a common problem with some proposal evaluation plans: It’s not so much that they don’t include key information; it’s that they lack order. They’re messy. When you have only about two pages of a 15-page National Science Foundation proposal to describe an evaluation, you need to be exceptionally clear and efficient. In this blog, I offer tips on how to “tidy up” your proposal’s evaluation plan to ensure it communicates key information clearly and coherently.

First of all, what does a messy evaluation plan look like? It meanders. It frames the evaluation’s focus in different ways in different places in the proposal, or even within the evaluation section itself, leaving the reviewer confused about the evaluation’s purpose. It discusses data and data collection without indicating what those data will be used to address. It employs different terms to mean the same thing in different places. It makes it hard for reviewers to discern key information from the evaluation plan and understand how that information fits together.

Three Steps to Tidy up a Messy Evaluation Plan

It’s actually pretty easy to convert a messy evaluation plan into a tidy one:

  • State the evaluation’s focus succinctly. List three to seven evaluation questions that the evaluation will address. These questions should encompass all of your planned data collection and analysis—no more, no less. Refer to these as needed later in the plan, rather than restating them differently or introducing new topics later in the plan. Do not express the evaluation’s focus in different ways in different places.
  • Link the data you plan to collect to the evaluation questions. An efficient way to do this is to present the information in a table. I like to include evaluation questions, indicators, data collection methods and sources, analysis, and interpretation in a single table to clearly show the linkages and convey that my team has carefully thought about how we will answer the evaluation questions. Bonus: Presenting information in a table saves space and makes it easy for reviewers to locate key information. (See EvaluATE’s Evaluation Data Matrix Template.)
  • Use straightforward language—consistently. Don’t assume that reviewers will share your definition of evaluation-related terms. Choose your terms carefully and do not vary how you use them throughout the proposal. For example, if you are using the terms measures, metrics, and indicators, ask yourself if you are really referring to different things. If not, stick with one term and use it consistently. If similar words are actually intended to mean different things, include brief definitions to avoid any confusion about your meaning.

Can a Tidy Evaluation Plan Really Change Your Life?

If it moves a very good proposal toward excellent, then yes! In the competitive world of grant funding, every incremental improvement counts and heightens your chances for funding, which can mean life-changing opportunities for the project leaders, evaluators, and—most importantly—individuals who will be served by the project.

Blog: Successful Practices in ATE Evaluation Planning

Posted on July 19, 2018

President, Mullins Consulting, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this essay, I identify what helps me create a strong evaluation plan when working with new Advanced Technological Education (ATE) program partners. I hope my notes add value to current and future proposal-writing conversations.

Become involved as early as possible in the proposal-planning process. With ATE projects, as with most evaluation projects, the sooner an evaluator is included in the project planning, the better. Even if the evaluator just observes the initial planning meetings, their involvement helps them become familiar with the project’s framework, the community partnerships, and the way in which project objectives are taking shape. Such involvement also helps familiarize the evaluator with the language used to frame project components and the new or established relationships expected for project implementation.

Get to know your existing and anticipated partners. Establishing or strengthening partnerships is a core component of ATE planning, as ATE projects often engage with multiple institutions through the creation of new certifications, development of new industry partnerships, and expansion of outreach efforts in public schools. The evaluator should take detailed notes on the internal and external partnerships involved with the project. Sometimes, to support my own understanding as an evaluator, it helps for me to visually map these relationships. Also, the evaluator should prepare for the unexpected. Sometimes, partners will change during the planning process as partner roles and program purposes become more clearly defined.

Integrate evaluation thinking into conversations early on. Once the team gets through the first couple of proposal drafts, it helps if the evaluator creates an evaluation plan and the team makes time to review it as a group. This will help the planning team clarify the evaluation questions to be addressed and outcomes to be measured. This review also allows the team to see how their outcomes can be clearly attached to program activities and measured through specific methods of data collection. Sometimes during this process, I speak up if a component could use further discussion (e.g., cohort size, mentoring practices). If an evaluator has been engaged from the beginning and has gotten to know the partners, they have likely built the trust necessary to add value to the discussion of the proposal’s central components.

Operate as an illuminator. A colleague I admire once suggested that evaluation be used as a flashlight, not as a hammer. This perspective of prioritizing exploration and illumination over determination of cause and effect has informed my work. Useful evaluations certainly require sound evaluation methodology, but they also require the crafting of results into compelling stories, told with data guiding the way. This requires working with others as interpretations unfold, discovering how findings can be communicated to different audiences, and listening to what stakeholders need to move their initiatives forward.

ATE programs offer participants critical opportunities to be a part of our country’s future workforce. Stakeholders are passionate about their programs. Careful, thoughtful engagement throughout the proposal-writing process builds trust while contributing to a quality proposal with a strong evaluation plan.

Blog: Evaluation Feedback Is a Gift

Posted on July 3, 2018

Chemistry Faculty, Anoka-Ramsey Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Christopher Lutz, chemistry faculty at Anoka-Ramsey Community College. When our project was initially awarded, I was a first-time National Science Foundation (NSF) principal investigator. I understood external evaluation was required for grants but saw it as an administrative hurdle in the grant process. I viewed evaluation as proof for the NSF that we did the project and as a metric for outcomes. While both of these aspects are important, I learned evaluation is also an opportunity to monitor and improve your process and grant. Working with our excellent external evaluators, we built a stronger program in our grant project. You can too, if you are open to evaluation feedback.

Our evaluation team was composed of an excellent evaluator and a technical expert. I started working with both about halfway through the proposal development process (a few months before submission) to ensure they could contribute to the project. I recommend contacting evaluators during the initial stages of proposal development and checking in several times before submission. This gives adequate time for your evaluators to develop a quality evaluation plan and gives you time to understand how to incorporate your evaluator’s advice. Our funded project yielded great successes, but we could have saved time and achieved more if we had involved our evaluators earlier in the process.

After receiving funding, we convened grant personnel and evaluators for a face-to-face meeting to avoid wasted effort at the project start. Meeting in person allowed us to quickly collaborate on a deep level. For example, our project evaluator made real-time adjustments to the evaluation plan as our academic team and technical evaluator worked to plan our project videos and training tools. Include evaluator travel funds in your budget and possibly select an evaluator who is close by. We did not designate travel funds for our Kansas-based evaluator, but his ties to Minnesota and understanding of the value of face-to-face collaboration led him to use some of his evaluation salary to travel and meet with our team.

Here are three ways we used evaluation feedback to strengthen our project:

Example 1: The first-year evaluation report showed a perceived deficiency in the project’s provision of hands-on experience with MALDI-MS instrumentation. In response, we had students make small quantities of liquid solution instead of giving pre-mixed solutions, and let them analyze more lab samples. This change required minimal time but led students to regard the project’s hands-on nature as a strength in the second-year evaluation.

Example 2: Another area for improvement was students’ lack of confidence in analyzing data. In response to this feedback, project staff created Excel data analysis tools and a new training activity for students to practice with literature data prior to analyzing their own. The subsequent year’s evaluation report indicated increased student confidence.

Example 3: Input from our technical evaluator allowed us to create videos that have been used in academic institutions in at least three US states, the UK’s Open University system, and Iceland.

Provided here are some overall tips:

  1. Work with your evaluator(s) early in the proposal process to avoid wasted effort.
  2. Build in at least one face-to-face meeting with your evaluator(s).

  3. Review evaluation data and reports with the goal of improving your project in the next year.
  4. Consider external evaluators as critical friends who are there to help improve your project. This will help move your project forward and help you have a greater impact for all.

Blog: Modifying Grant Evaluation Project Objectives

Posted on June 11, 2018
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Evelyn Brown
Director, Extension Research and Development
NC State Industry Expansion Solutions
Leressa Suber
Evaluation Coordinator
NC State Industry Expansion Solutions

In the grant evaluations we perform, our clients develop specific project objectives to drive attainment of overall grant goals. We work with principal investigators (PIs) to monitor work plan activities and project outcomes to ensure objectives are attainable, measurable, and sustainable.

However, what happens when the project team encounters obstacles to starting the activities related to project objectives? What shifts need to be made to meet grant goals?

When the team determines that the project objective cannot be achieved as initially planned, it’s important for the PI and evaluator to determine how to proceed. In the table below, we’ve highlighted three scenarios in which it may be necessary to shift, change, or eliminate a project objective. Then, if changes are made, based on the extent of the project objective modifications, the team can determine if or when the PI should notify the project funder.

Example: Shift in Project Objective

Grant Goal: Help underclassmen understand what engineers do by observing the day-to-day activities of a local engineer.
Problem: The advisory board members (engineers) in the field were unavailable.
Objective: Current: Shadow advisory board member. Change: Shadow young engineering alumni.
Result: The goal is still attainable.
PI Notify Funder? No, but provide explanation/justification in the end-of-year report.

Example: Change a Project Objective

Grant Goal: To create a method by which students at the community college will earn a credential to indicate they are prepared for employment in a specific technical field.
Problem: The state process to establish a new certificate is time consuming and can’t occur within the grant period.
Objective: Current: Complete degree in specific technical field. Change: Complete certificate in specific technical field.
Result: The goal is still attainable.
PI Notify Funder? Yes, specifically contact the funding program officer.

Example: Eliminate the Project Objective

Grant Goal: The project participant’s salary will increase as a result of completing a specific program.
Problem: Following program exit, salary data is unavailable.
Objective: Current: Compare the participant’s salary at the start of the program to salary three months after program completion. Change: Unable to maintain contact with program completers to obtain salary information.
Result: The goal cannot realistically be measured.
PI Notify Funder? Yes, specifically contact the funding program officer.

In our experience working with clients, we’ve found that the best way to minimize the need to modify project objectives is to ensure they are well written during the grant proposal phase.

Tips: How to write attainable project objectives.

1. Thoroughly think through objectives during the grant development phase.

The National Science Foundation (NSF) provides guidance to assist PIs with constructing realistic project goals and objectives; we’ve linked to NSF’s proposal writing guide below. Here are a few key considerations:

  • Are the project objectives clear?
  • Are the resources necessary to accomplish the objectives clearly identified?
  • Are there barriers to accessing the needed resources?

2. Seek evaluator assistance early in the grant proposal process.

Link to additional resources: NSF – A Guide for Proposal Writing

Blog: Utilizing Your Institutional Research Office Resources When Writing a Grant Application

Posted on March 20, 2018
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Deborah Douma
Dean, Grants and Federal Programs, Pensacola State College
Michael Johnston
Director of Institutional Research, Pensacola State College

There are a number of guiding questions that must be answered to develop a successful grant project evaluation plan. The answers to these questions also provide guidance to demonstrate need and develop ambitious, yet attainable, objectives. Data does not exist in a vacuum and can be evaluated and transformed into insight only if it is contextualized with associated activities. This is best accomplished in collaboration with the Institutional Research (IR) office. The Association for Institutional Research’s aspirational statement “highlights the need for IR to serve a broader range of decision makers.”

We emphasize the critical need to incorporate fundamental knowledge of experimental and quasi-experimental design at the beginning of any grant project. In essence, grant projects are experiments—just not necessarily performed in a laboratory. Any experiment is designed to introduce new conditions and observe their effects: here, the independent variable is the grant project and the dependent variable is the success of the target population (students, faculty). The ability to properly measure and replicate this scientific process must be established during project planning, and the IR office can be instrumental in the design of your evaluation.

Responding to a program solicitation (or RFP, RFA, etc.) provides the opportunity to establish the need for the project, measurable outcomes, and an appropriate plan for evaluation that can win over the hearts and minds of reviewers and lead to a successful grant award. Institutional researchers work with the grant office not only to measure outcomes but also to investigate and provide potential opportunities for improvement. IR staff act as data scientists and statisticians while working with grants and become intimately acquainted with the data, the collection process, the relationships between variables, and the science being investigated. While the terms statistician and data scientist are often used synonymously, data scientists do more than test hypotheses and develop forecasting models; they also identify how variables not being studied may affect outcomes. This allows IR staff to see beyond the questions being asked and not only contribute to the development of the results but also identify unexpected structures in the data. Finding alternative structure may lead to further investigation in other areas and more opportunities for other grants.

If a project’s objective is to effect positive change in student retention, it is necessary to know the starting point before any grant-funded interventions are introduced. IR can provide descriptive statistics on the student body and target population before the intervention. This historical data is used not only for trend analysis but also for validation, correcting errors in the data. Validation can be as simple as looking for differences between comparison groups and confirming potential differences are not due to error. IR can also assist with the predictive analytics necessary to establish appropriate benchmarks for measurable objectives. For example, predicting that an intervention will increase retention rates by 10-20% when a 1-2% increase would be more realistic could lead to a proposal being rejected or set the project up for failure. Your IR office can also help ensure that the appropriate quantitative statistical methods are used to analyze the data.
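
As a concrete illustration of the kind of check an IR office might run, here is a minimal sketch with hypothetical counts (the figures and variable names are illustrative, not institutional data): it compares a baseline retention rate with a post-intervention rate using a two-proportion z-test, so a claimed gain can be judged against what the data actually support.

```python
# Hypothetical sketch: is an observed change in retention larger than chance?
from statsmodels.stats.proportion import proportions_ztest

baseline_retained, baseline_cohort = 620, 1000  # cohort before the intervention
current_retained, current_cohort = 655, 1000    # cohort after the intervention

z_stat, p_value = proportions_ztest(
    count=[current_retained, baseline_retained],
    nobs=[current_cohort, baseline_cohort],
)

print(f"Baseline retention: {baseline_retained / baseline_cohort:.1%}")
print(f"Current retention:  {current_retained / current_cohort:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```

The same historical counts can also anchor a realistic benchmark for the proposal (for example, a gain of one or two percentage points rather than ten).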

Tip: Involve your IR office from the beginning, during project planning. This will contribute greatly to submitting a competitive application, the evaluation of which provides the guidance necessary for a successful project.

Vlog: Resources to Help with Evaluation Planning for ATE Proposals

Posted on September 6, 2017

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation is an important element of an ATE proposal. EvaluATE has developed several resources to help you develop your evaluation plans and integrate them into your ATE proposals. This video highlights a few of them—these and more can be accessed from the links below the video.

Additional Resources:

Blog: Three Tips for a Strong NSF Proposal Evaluation Plan

Posted on August 17, 2016

Principal Research Scientist, Education Development Center, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Leslie Goodyear and I’m an evaluator who also served as a program officer for three years at the National Science Foundation in the Division of Research on Learning, which is in the Education and Human Resources Directorate. While I was there, I oversaw evaluation activities in the Division and reviewed many, many evaluation proposals and grant proposals with evaluation sections.

In May 2016, I had the pleasure of participating in the webinar “Meeting Requirements, Exceeding Expectations: Understanding the Role of Evaluation in Federal Grants.” Hosted by Lori Wingate at EvaluATE and Ann Beheler at the Centers Collaborative for Technical Assistance, this webinar covered topics such as evaluation fundamentals; evaluation requirements and expectations; and evaluation staffing, budgeting, and utilization.

On the webinar, I shared my perspective on the role of evaluation at NSF, strengths and weaknesses of evaluation plans in proposals, and how reviewers assess Results from Prior NSF Support sections of proposals, among other topics. In this blog, I’ll give a brief overview of some important takeaways from the webinar.

First, if you’re submitting a proposal to an education or outreach program, you’ll likely need to include some form of project evaluation in your proposal. Be sure to read the program solicitation carefully to know what the specific requirements are for that program. There are no agency-wide evaluation requirements—instead, they are specified in each solicitation. Lori had a great suggestion on the webinar: search the solicitation for “eval” to make sure you find all the evaluation-related details.

Second, you’ll want to make sure that your evaluation plan is tailored to your proposed activities and outcomes. NSF reviewers and program officers can smell a “cookie cutter” evaluation plan, so make sure that you’ve talked with your evaluator while developing your proposal and that they’ve had the chance to read the goals and objectives of your proposed work before drafting the plan. You want the plan to be incorporated into the proposal so that it appears seamless.

Third, indicators of a strong evaluation plan include carefully crafted, relevant overall evaluation questions, a thoughtful project logic model, a detailed data collection plan that is coordinated with project activities, and a plan for reporting and dissemination of findings. You’ll also want to include a bio for your evaluator so that the reviewers know who’s on your team and what makes them uniquely qualified to carry out the evaluation of your project.

Additions that can make your plan “pop” include:

  • A table that maps out the evaluation questions to the data collection plans. This can save space by conveying lots of information in a table instead of in narrative.
  • Combining the evaluation and project timelines so that the reviewers can see how the evaluation will be coordinated with the project and offer timely feedback.

Some programs allow for using the Supplemental Documents section for additional evaluation information. Remember that reviewers are not required to read these supplemental docs, so be sure that the important information is still in the 15-page proposal.

For the Results of Prior NSF Support section, you want to be brief and outcome-focused. Use this space to describe what resulted from the prior work, not what you did. And be sure to be clear how that work is informing the proposed work by suggesting, for example, that these outcomes set up the questions you’re pursuing in this proposal.

Blog: Getting Ready to Reapply – Highlighting Results of Prior Support

Posted on December 2, 2015

Founder and President, EvalWorks, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello. My name is Amy A. Germuth and I own EvalWorks, LLC, an education evaluation firm in Durham, NC, which has a strong focus on evaluating STEM projects. Having conducted evaluations of ATE and multiple other NSF STEM projects since the early 2000s, I have worked with PIs to help them better respond to NSF solicitations.

For every ATE solicitation, NSF has required that proposers identify the “Results of Prior Support.” NSF requests that proposers provide the following information:

  1. The NSF award number, amount and period of support
  2. The title of the project
  3. A summary of the results of the completed work
  4. A list of publications resulting from the NSF award
  5. A brief description of available data, samples, physical collections, and other related research products not described elsewhere
  6. If the proposal is for renewal of a grant, a description of the relation of the completed work to the proposed work

This is an excellent opportunity for proposers who have been funded previously by NSF to highlight how their prior funds were used to support a positive change among the targeted group or individuals. For point 3, rather than simply stating the number of persons served, proposers should do the following:

  • State briefly the main goal(s) of the project.
  • Identify who was served, how many were served, and in what capacity.
  • Explain the impact on these persons that resulted from their participation in this project.
  • Provide what evidence was used to make the above inference.

An example may read something like this:

“As part of this project, our goal was to increase the number of women who successfully earned an associate’s degree in welding. To this end, we began a targeted recruiting campaign focusing on women who were about to complete or had recently completed other related programs, such as pipefitting and construction, and developed a brochure for new students that included positive images of women in welding. We used funding to develop the Women in Welding program and to support team-building and outreach efforts by its members. Institutional data reveal that since this project started, the number of women in the welding program has almost tripled, from 12 (2006 – 2010), of whom only 8 graduated, to 34 (2011 – 2016), of whom 17 have already graduated and 5 have only one semester left. Even if the remaining 17 were not to graduate, the 17 who already have is more than double the number of female students who graduated from the program between 2006 and 2010.”

To summarize, if you have received prior support from NSF, use this opportunity to show how the funding supported project activities that made a difference and how they inform your current proposal (if applicable). Reviewers look to this section as a way to ascertain the degree to which you have been a good steward of the funding that you received and what impacts it had. Attention to this section will provide one more measure by which reviewers will judge the ability of your proposed project to be successful.