Leslie Goodyear

Principal Research Scientist, Education Development Center, Inc.

Leslie Goodyear, PhD, is a researcher and evaluator who has significant experience leading complex evaluations of national programs and systems, particularly government-funded programs. She has conducted program and project evaluations in both formal and informal educational settings that serve youth, with a recent focus on STEM educational programs and programs that aim to broaden participation in STEM. She is the associate editor of the American Journal of Evaluation, a past board member of the American Evaluation Association (AEA), and former chair of the AEA Ethics Committee. She is also the lead editor of the book Qualitative Inquiry in Evaluation: From Theory to Practice (2014).

During her EDC tenure, Goodyear took a leave to serve as a program officer at the National Science Foundation Division of Research on Learning, where she administered national grants programs, supervised evaluation and research contracts, and developed directorate- and division-level evaluation policy.

Goodyear holds an MS and a PhD in program evaluation and planning from Cornell University.


Blog: Three Tips for a Strong NSF Proposal Evaluation Plan

Posted on August 17, 2016 by Leslie Goodyear in Blog


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Leslie Goodyear and I’m an evaluator who also served as a program officer for three years at the National Science Foundation in the Division of Research on Learning, which is in the Education and Human Resources Directorate. While I was there, I oversaw evaluation activities in the Division and reviewed many, many evaluation proposals and grant proposals with evaluation sections.

In May 2016, I had the pleasure of participating in the webinar “Meeting Requirements, Exceeding Expectations: Understanding the Role of Evaluation in Federal Grants.” Hosted by Lori Wingate at EvaluATE and Ann Beheler at the Centers Collaborative for Technical Assistance, this webinar covered topics such as evaluation fundamentals; evaluation requirements and expectations; and evaluation staffing, budgeting, and utilization.

On the webinar, I shared my perspective on the role of evaluation at NSF, strengths and weaknesses of evaluation plans in proposals, and how reviewers assess Results from Prior NSF Support sections of proposals, among other topics. In this blog, I’ll give a brief overview of some important takeaways from the webinar.

First, if you’re submitting a proposal to an education or outreach program, you’ll likely need to include some form of project evaluation in your proposal. Be sure to read the program solicitation carefully to know what the specific requirements are for that program. There are no agency-wide evaluation requirements; instead, they are specified in each solicitation. Lori had a great suggestion on the webinar: search the solicitation for “eval” to make sure you find all the evaluation-related details.

Second, you’ll want to make sure that your evaluation plan is tailored to your proposed activities and outcomes. NSF reviewers and program officers can smell a “cookie-cutter” evaluation plan, so make sure that you’ve talked with your evaluator while developing your proposal and that they’ve had the chance to read the goals and objectives of your proposed work before drafting the plan. You want the plan to be woven into the proposal so that it reads seamlessly.

Third, indicators of a strong evaluation plan include carefully crafted, relevant overall evaluation questions, a thoughtful project logic model, a detailed data collection plan that is coordinated with project activities, and a plan for reporting and dissemination of findings. You’ll also want to include a bio for your evaluator so that the reviewers know who’s on your team and what makes them uniquely qualified to carry out the evaluation of your project.

Additions that can make your plan “pop” include:

  • A table that maps the evaluation questions to the data collection plans. This can save space by conveying lots of information in a table instead of in narrative.
  • A combined evaluation and project timeline, so that the reviewers can see how the evaluation will be coordinated with the project and how it will offer timely feedback.

Some programs allow for using the Supplemental Documents section for additional evaluation information. Remember that reviewers are not required to read these supplemental docs, so be sure that the important information is still in the 15-page proposal.

For the Results from Prior NSF Support section, you want to be brief and outcome-focused. Use this space to describe what resulted from the prior work, not what you did. And be sure to make clear how that work informs the proposed work by showing, for example, that its outcomes set up the questions you’re pursuing in this proposal.

Webinar: Meeting Requirements, Exceeding Expectations: Understanding the Role of Evaluation in Federal Grants

Posted on March 22, 2016 in Webinars

Presenter(s): Ann Beheler, Leslie Goodyear, Lori Wingate
Date(s): May 25, 2016
Time: 3:00-4:00 p.m.
Recording: https://www.youtube.com/watch?v=xZdiwSizDUM&feature=youtu.be

External evaluation is a requirement of many federal grant programs. Understanding and addressing these requirements is essential for both successfully seeking grants and achieving the objectives of funded projects. In this webinar, we will review the evaluation language from a variety of federal grant programs and translate the specifications into practical steps. Topics will include finding a qualified evaluator, budgeting for evaluation, understanding evaluation design basics, reporting and using evaluation results, and integrating past evaluation results into future grant submissions.

Resources:
Slides
Additional Resource


Newsletter: The PI Guide to Working with Evaluators

Posted on January 1, 2014 by Leslie Goodyear in Newsletter


(originally published as a blog post at ltd.edc.org/strong-pievaluator-partnerships-users-guide on January 10, 2013)

Evaluation can be a daunting task for PIs. It can seem like the evaluator speaks another language, and the stakes for the project can seem very high. Evaluators face their own challenges: often working with tight budgets and timeframes, they are expected to deliver both rigor and relevance, along with evidence of project impact. With all this and more in the mix, it’s no surprise that tension can mount and miscommunication can drive animosity and stress.

As the head of evaluation for the ITEST Learning Resource Center and as an NSF program officer, I saw dysfunctional relationships between PIs and their evaluators contribute to missed deadlines, missed opportunities, and frustration on all sides. As an evaluator, I am deeply invested in building evaluators’ capacity to communicate their work and in helping program staff understand the value of evaluation and what it brings to their programs. I was concerned that these dysfunctional relationships would thwart the potential of evaluation to provide vital information for program staff to make decisions and demonstrate the value of their programs.

To help strengthen PI/evaluator collaborations, I’ve done a lot of what I called “evaluation marriage counseling” for PI/evaluator pairs. Through these “counseling sessions,” I learned that evaluation relationships are not so different from any other relationships. Expectations aren’t always made clear, communication often breaks down, and, more than anything else, all relationships need care and feeding.

As a program officer, I had the chance to help shape and create a new resource that supports PIs and evaluators in forming strong working relationships. Rick Bonney of the Cornell Lab of Ornithology and I developed a guide to working with evaluators, written by PIs, for PIs. Although it was designed for the Informal Science Education community, the lessons translate to just about any situation in which program staff are working with evaluators. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects is available at bit.ly/1l28nTt.

Newsletter: Developing a Culture of Evaluation

Posted on October 1, 2013 by Leslie Goodyear in Newsletter


As an ATE project, you and your team collect a lot of data: You complete the annual monitoring survey, you work with your evaluator to measure outcomes, you may even track your participants longitudinally in order to learn how they integrate their experiences into their lives. As overwhelming as it may seem at times to manage all the data collection logistics and report writing, these data are important to telling the story of your project and the ATE program. Developing a culture of evaluation in your project and your organization can help to meaningfully put these data to use.

Fostering a culture of evaluation in your project means that evaluation practices are not disconnected from program planning, implementation, and reporting. You’re thinking of evaluation in planning project activities and looking for ways to use data to reflect on and improve your work. During implementation, you consult your evaluator regularly so that you can hear what they’re learning from the data collection, and ensure that they know what’s new in the project. And at analysis and reporting times, you’re ensuring that the right people are thinking about how to use the evaluation findings to make improvements and demonstrate your project’s value to important stakeholder audiences. You and your team are reflecting on how the evaluation went and what can be improved. In a project that has an “evaluation culture,” evaluators are partners, collecting important information to inform decision making.

A great example of evaluators-as-partners came from an NSF PI who shared that he regularly talks with his evaluator, peer to peer, about the state of the field, not just about his particular project. He wants to know what his evaluator is learning about practice in the field from other projects, workshops, conferences, and meetings, and he uses these insights to help him reflect on his own work.

Newsletter: What makes a good evaluation section of a proposal?

Posted on July 1, 2013 by Leslie Goodyear in Newsletter


As a program officer, I read hundreds of proposals for different NSF programs and I saw many different approaches to writing a proposal evaluation section. From my vantage point, here are a few tips that may help to ensure that your evaluation section shines.

First, make sure to involve your evaluator in writing the proposal’s evaluation section. Program officers and reviewers can tell when an evaluation section was written without the consultation of an evaluator. This makes them think you aren’t integrating evaluation into your project planning.

Don’t just call an evaluator a couple of weeks before the proposal is due! A strong evaluation section comes from a thoughtful, robust, tailored evaluation plan. This takes collaboration with an evaluator! Get them on board early and talk with them often as you develop your proposal. They can help you develop measurable objectives, add insight to proposal organization, and, of course, work with you to develop an appropriate evaluation plan.

Reviewers and program officers look to see that the evaluator understands the project. This can be done with a logic model or with a paragraph that justifies the evaluation design based on the proposed project design. The evaluation section should also connect the project objectives and targeted outcomes to evaluation questions, data collection methods and analysis, and dissemination plans. This can be done in a matrix format, which helps the reader see clearly which data will answer which evaluation question and how these are connected to the objectives of the project, as in the sketch below.
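
To illustrate, a simplified version of such a matrix might look like the following. The questions, methods, and objectives here are hypothetical, included only to show the structure; your own matrix should be drawn from your project’s actual objectives and evaluation design.

Evaluation Question | Data Collection Method | Analysis | Related Objective
To what extent did participants’ technical skills improve? | Pre/post skills assessment | Comparison of pre/post scores | Objective 1: Skill development
Was the program implemented as planned? | Site observations; staff interviews | Thematic coding against the logic model | Objective 2: Program delivery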

A strong evaluation plan shows that the evaluator and the project team are in sync and working together, applies a rigorous design and reasonable data collection methods, and answers important questions that will help to demonstrate the value of the project and surface areas for improvement.

Newsletter: Setting the Stage for Useful Reporting

Posted on April 1, 2013 by Leslie Goodyear in Newsletter


In my fifteen years as an evaluator, I’ve written quite a few reports and thought a lot about what makes an evaluation report useful. In addition, I was a program officer at NSF in the Division of Research on Learning, where I was an evaluation client and strove to put evaluation findings to good use. Here are some thoughts on how you can ensure that evaluation information gets used.

Communicating early and often is the foundation for strong evaluation reporting and use. PIs should initiate these conversations about reporting with their evaluators, expressing needs and expectations about when they’d like evaluation reports, what the reports should cover, and what form they should take.

Would you like a brief report about data collection activities? Talk with your evaluator about how you’d like this to look, what you might do with the data, and how these reports will get included in the annual report. This could be just bullet points about the key findings, or it could be data tables generated from a survey.

Do you want monthly progress reports? Talk with your evaluator about a template for an easy-to-read format. This report might detail funds expended to date, highlight upcoming tasks, and offer a place to raise questions and issues that need timely management.

Would you like a report that you can share with community stakeholders? This could be a one-page list of significant findings, a three-page executive summary, a PowerPoint presentation, or even a shortened version of the full report.

PIs and evaluators can discuss what’s possible, how these choices will affect the budget, and how they will work together to ensure that the evaluation reports are targeted for maximum use.