Newsletter - Spring 2016

Newsletter: Three Questions and Examples to Spur Action from Your Evaluation Report

Posted on April 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

1) Are there any unexpected findings in the report? The EvaluATE team has been surprised to learn that we are attracting a large number of grant writers and other grant professionals to our webinars. We initially assumed that principal investigators (PIs) and evaluators would be our main audience. With growing attendance among grant writers, we became aware that they are often the ones who first introduce PIs to evaluation, guiding them on what should go in the evaluation section of a proposal and how to find an evaluator. The unexpected finding that grant writers are seeking out EvaluATE for guidance caused us to realize that we should develop more tailored content for this important audience as we work to advance evaluation in the ATE program.

Talk with your team and your evaluator to determine if any action is needed related to your unexpected results.

2) What’s the worst/least favorable finding from your evaluation? Although it can be uncomfortable to focus on a project’s weak points, this is where the greatest opportunity for growth and improvement lies. Consider the probable causes of the problem and potential solutions. Can you solve the problem with your current resources? If so, make an action plan. If not, decide if the problem is important enough to address through a new initiative.

At EvaluATE, we serve both evaluators and evaluation consumers who have a wide range of interests and experience. When asked what EvaluATE needs to improve, several respondents to our external evaluation survey noted that they want webinars to be more tailored to their specific needs and skill levels. Some noted that our content was too technical, while others remarked that it was too basic. To address this issue, we decided to develop an ATE evaluation competency framework. Webinars will be keyed to specific competencies, which will help our audience decide which webinars are appropriate for them. We couldn’t implement this research and development work with our current resources, so we wrote this activity into the renewal proposal we submitted last fall.

Don’t sweep an unfavorable result or criticism under the rug. Use it as a lever for positive change.

3) What’s the most favorable finding from your evaluation? Give yourself a pat on the back, then figure out if it points to an aspect of your project you should expand. If you need more information to make that decision, determine what additional evidence could be obtained in the next round of the evaluation. Help others to learn from your successes—the ATE Principal Investigators Conference is an ideal place to share aspects of your work that are especially strong, along with your lessons learned and practical advice about implementing ATE projects.

At EvaluATE, we have been astounded at the interest in and positive response to our webinars. But we don’t yet have a full understanding of the extent to which webinar attendance translates to improvements in evaluation practice. So, we decided to start collecting follow-up data from webinar participants to check on their use of our content. With that additional evidence in hand, we’ll be better positioned to make an informed decision about expanding or modifying our webinar series.

Don’t just feel good about your positive results—use them as leverage for increased impact.

If you’ve considered your evaluation results carefully, but still aren’t able to identify a call to action, it may be time to rethink your evaluation’s focus. You may need to make adjustments to ensure it produces useful, actionable information. Evaluation plans should be fluid and responsive—it is expected that plans will evolve to address needs as they emerge.

Newsletter: Survey Says, Spring 2016

Posted on April 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

ATE principal investigators (PIs) who received both oral and written reports from their evaluators indicated more use of their evaluation results than those who received just one type of report. Regardless of report format, more than half of PIs said they used evaluation results to make changes to project activities.

[Survey Says chart: use of evaluation results reported by ATE PIs, by type of report received]

The full report of the 2015 ATE survey findings is available at http://www.evalu-ate.org/annual_survey/, along with data snapshots and downloadable graphics.

Newsletter: Outcomes and Impacts

Posted on April 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

Outcomes are “changes or benefits resulting from activities and outputs,” including changes in knowledge, attitude, skill, behavior, practices, policies, and conditions. These changes may be at the individual, organizational, or community levels. Impacts are “the ultimate effect of the program on the problem or condition that the program or activity was supposed to do something about.”1

Some individuals and organizations use the terms outcomes and impacts interchangeably. Others, such as the Environmental Protection Agency, which authored the definitions above, use impact to refer to the highest level of outcomes. The National Science Foundation uses impact to refer to important improvements in the capacity of individuals, organizations, and our nation to engage in STEM research, teaching, and learning.

Regardless of how impacts and outcomes are defined, they are quite distinct from activities. Activities are what a project does—actions undertaken. Outcomes and impacts are the changes a project brings about. For example, offering a professional development workshop for faculty is an activity; faculty adopting new instructional practices afterward is an outcome; and improved student learning and program completion down the line is an impact.

Each of these topics has a designated section of the Research.gov reporting system. Gaining clarity about your project’s distinct activities, outcomes, and impacts before starting to write an NSF annual report will streamline the process, reduce the potential for redundancy across sections, and ensure that program officers will get more than an inventory of project activities. One way to do that is to revisit your project logic model or create one (to get started, download EvaluATE’s logic model template from http://bit.ly/ate-logic).

1. U.S. Environmental Protection Agency. (2007). Program Evaluation Glossary. http://bit.ly/epa-evalgloss

Newsletter: Where and how should I report on my evaluation in my annual report to the National Science Foundation?

Posted on April 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

Research.gov is the online reporting system used by all National Science Foundation grantees. The system is designed to accommodate reporting on all types of work supported by NSF—from research on changes in ocean chemistry to developing technician education programs. Not all NSF programs require grantees to conduct project-level evaluations, so the Research.gov system does not have a specific section for reporting evaluation results. This may leave some ATE grantees wondering where and how they are supposed to include information from their evaluations in their annual reports. There is no one right way to do this, but here is my advice:

Upload your external evaluation report as a supporting file in the Accomplishments section of the Research.gov system. If the main body of the report exceeds 25 pages, be sure it includes a one- to three-page executive summary that highlights key findings and conclusions. Although NSF program officers are very interested in project evaluation results, they simply do not have time to read lengthy, detailed reports for all the grants they oversee.

Highlight key findings from your evaluation in the Activities, Outcomes, and Impacts sections of your annual report, as appropriate. For example, if you have data on the number and type of individuals served through your grant activities and their satisfaction with that experience, include some of these findings or related conclusions as you report on your activities. If you have data on changes brought about by your grant work at the individual, organizational, or community levels, summarize that evidence in your Outcomes or Impacts sections.

The Impacts section of the annual report is for describing how projects

  • developed human resources by providing opportunities for research, teaching, and mentoring
  • improved capacity of underrepresented groups to engage in STEM research, teaching, and learning
  • provided STEM experiences to teachers, youth, and the public
  • enhanced the knowledge base of the project’s principal discipline or other disciplines
  • expanded physical (labs, instrumentation, etc.) or institutional resources to increase capacity for STEM research, teaching, and learning.

Many—not all—of these types of impacts are relevant to the projects and centers supported by the ATE program, which is focused on improving the quality and quantity of technicians in the workforce. It is appropriate to indicate “not applicable” if you don’t have results that align with these categories. If you happen to have other types of results that don’t match these categories, report them in the Outcomes section of the Research.gov reporting system.

Refer to the uploaded evaluation report for additional information. Each section in the Research.gov reporting system has an 8,000-character limit, so it’s unlikely you can include detailed evaluation results. (To put that in perspective, this article has 3,515 characters.) Instead, convey key findings or conclusions in your annual report and refer to the uploaded evaluation report for details and additional information.
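If you draft your report sections offline, a quick length check before pasting can save an editing pass. Here is a minimal sketch in Python, assuming the draft is saved as a plain-text file (the file name is hypothetical):

```python
# Minimal sketch: check a drafted Research.gov section against the
# 8,000-character limit mentioned above. The file name is hypothetical.
LIMIT = 8000

with open("accomplishments_draft.txt", encoding="utf-8") as f:
    text = f.read()

count = len(text)
if count <= LIMIT:
    print(f"{count} characters used; {LIMIT - count} remaining.")
else:
    print(f"{count} characters used; {count - LIMIT} over the limit.")
```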

Finally, if the evaluation revealed problems with the project that point to a need to change how it is being implemented, include that information in the Changes/Problems section of the report. One reason that evaluation is required for all ATE projects is to support continuous improvement. If the evaluation reveals something is not working as well as expected, it’s best to be transparent about the problem and how it is being addressed.

Newsletter: Bridging the Gap: Using Action Plans to Facilitate Evaluation Use

Posted on April 1, 2016 in Newsletter

Senior Research Associate, The Evaluation Center at Western Michigan University

NSF requires evaluation, in part, because it is an essential tool for project improvement. Yet all too often, evaluation results are not used to inform project decision making. There is a gap in the project improvement cycle between dissemination of evaluation results and decision making. One way to bridge this gap is through use of an action plan for project improvement informed by evaluation findings, conclusions, and recommendations. The United Nations Development Programme’s (UNDP) “Management Response Template” (http://bit.ly/undp-mrt) provides a format for such an action plan. The template was created to encourage greater use of evaluation by projects. UNDP’s template is designed for use in international development contexts, but could be used for any type of project, including ATE centers and projects.

As shown, the template is organized around evaluation recommendations or issues. Any important issue that emerged from an evaluation would be an appropriate focus for action planning. The form allows for documentation of evaluation-based decisions and tracking implementation of those decisions. Recording a time frame for each key action, the person responsible, and the current status encourages structure and accountability around use of evaluation results and project improvement.

[Resources table]
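To make this concrete, a single action-plan entry using the fields described above might look something like the following. This is a hypothetical illustration, not the actual UNDP form, and it borrows the webinar example from earlier in this newsletter:

  Recommendation/issue: Webinar content should be better matched to attendees’ needs and skill levels
  Key action(s): Develop an ATE evaluation competency framework and key webinars to specific competencies
  Time frame: Next program year
  Responsible: EvaluATE leadership team
  Status: Proposed in renewal; not yet started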

Newsletter: Project Spotlight: Geospatial Technology Advantage Project

Posted on April 1, 2016 in Newsletter

The Geospatial Technology Advantage: Preparing GST Technicians and GST-enabled Graduates for Southern Illinois Business and Industry, Lake Land College/Kaskaskia College

Mike Rudibaugh is principal investigator for the Geospatial Technology Advantage Project at Kaskaskia College. The project’s external evaluator is Elaine Craft of SC ATE.

How do your evaluation results come into play with regard to making decisions about your project?

External evaluation feedback has been critical in helping the grant team assess our timelines for meeting the grant’s goals. This feedback has reshaped our grant team’s view of the goals, budget, and measurable outcomes we could realistically achieve within the life cycle of the grant. The external evaluator’s logic model has been critical in helping the grant team track goals, objectives, and measurable outcomes for a successful grant.

Can you give an example of something you learned from your evaluation that led you to make a change in your project?

Observations from our external evaluator’s site visit and report led to major shifts in the grant’s overall focus. Her recommendations suggested that a revised scope of work document needed to be submitted to the grant’s program officer. Simply put, the external evaluation process revealed that the grant’s goals and objectives were overly ambitious, considering that the institution was new to the ATE program, the original PI retired after year one, and the rural region was slow to adopt a dynamic new STEM field like geospatial technology. These recommendations and actions led to the development of a revised scope of work document outlining more achievable grant goals. They supported a long-term approach to building a viable geospatial technology program through integration with existing programs on campus.

What types of reports does your evaluator provide and what type of information do you find most useful?

Our evaluator produces annual written reports and conducts one site visit each year. Each of these assists the grant team in connecting the different elements of the grant. The evaluator often sees the big picture and at times helps the college open doors and connect resources to move the project forward. The ATE grant and evaluation process provided a platform to discuss program integration with faculty directing other programs that could benefit from integrating geospatial technology. Linking these success stories into the grant’s annual report has helped us grow more on-campus champions for STEM integration using geospatial technology across the curriculum. The annual site visit has been critical in developing these partnerships. The external evaluator helps cut through long-standing curriculum partnership barriers between transfer and occupational programs on community college campuses. Focusing on positive student outcomes like program completion and job attainment has helped faculty focus on issues that benefit students.