
Newsletter: Spring 2018

Posted on June 4, 2018

This time of year, Advanced Technological Education (ATE) project evaluators are preparing evaluation reports for their projects, and principal investigators (PIs) and their project teams are preparing their annual reports for the National Science Foundation (NSF). EvaluATE has a lot of resources to take the mystery out of these activities and enhance the effectiveness and utility of your reports.

Writing Your NSF Annual Report

If you need help determining what should go in your NSF annual report and how to prepare it, see advice on Strategies for Writing Your NSF Annual Report, by Tara Sheffer, supervisor of grant projects at Columbus State Community College.

Reporting Evaluation Results in Your Annual Report to NSF

If you are working on your first annual report to NSF, you may be surprised that there isn’t a section in the online reporting system, Research.gov, explicitly for reporting information from your external evaluation. EvaluATE offers straightforward guidance about how to include your evaluation results in your annual report in What Goes Where? Reporting Evaluation Results to NSF.

Guidance for Effective Evaluation Reporting

Starting an evaluation report from scratch can feel overwhelming. Evaluators can use EvaluATE’s Checklist for Program Evaluation Report Content to determine what content is appropriate for a report and how to organize the information in a coherent way. Project teams can use it to guide conversations with their evaluators about what should go in their evaluation reports.

Get the Word out with One-Page Reports

Don’t let the great information about your project stay buried in long reports that few will read. Highlight key results in a well-designed one-page report that your stakeholders will want—and have time—to read. One-page reports can be used to highlight grant achievements to college and industry stakeholders and to attract potential partners. EvaluATE’s recent webinar and several supporting resource materials clearly explain how to create attention-grabbing one-page reports. Visit our page on One-Page Reports.

ATE Evaluation Report Repository

Did you know EvaluATE has a growing repository of evaluation reports from funded ATE projects? Browse the collection to get a sense of how ATE projects and centers are evaluating their work, what they’re learning, and how they are communicating those results in evaluation reports. Reviewing the reports will also provide insights on what other ATE projects have been funded to do. If you’re an ATE principal investigator or evaluator, let us know if you have a report that we may add to the collection.

Newsletter: Winter 2018

Posted on February 5, 2018

Evaluation Data Resolutions: Newsletter Winter 2018

Whether you’ve been collecting and analyzing evaluation data for decades or are brand new to this work, the beginning of a new year is a good time to take stock of this aspect of your work. In this issue of EvaluATE’s newsletter, Evaluation Data Resolutions, we encourage you to take a moment to reflect on your data collection practices and look for opportunities to freshen and strengthen your data.

Be CREATIVE


Surveys, interviews, and focus groups are probably the most common data collection methods across ATE project evaluations. But they may not always meet your data needs or be optimal for the people providing the information. Photolanguage, dotmocracy, and reputational monitoring are examples of nontraditional techniques for gathering information. You’ll find an inventory of 51 data collection methods on BetterEvaluation’s website. The list includes short descriptions with links to detailed guidance. It may inspire you to go beyond traditional methods and get creative and innovative with your data collection.

Be RESOURCEFUL


Developing a sound data collection instrument from scratch is time-intensive. You might be able to conserve resources by using an existing instrument that fits your context. Check out the instrument collection curated by the STEM Learning and Research Center (STELAR). STELAR supports the National Science Foundation’s Innovative Technology Experiences for Students and Teachers program, and several of the instruments are relevant to the ATE context. Examples include the STEM Semantics Survey, STEM Career Interest Questionnaire, Pre-College Annual Self-Efficacy Survey, Grit Scale, and 21st Century Skills Assessment.

Be PURPOSEFUL


In the midst of data collection and analysis, it’s easy to lose sight of the big picture – why you collected the data in the first place. Use EvaluATE’s Data Collection Planning Matrix to align your data with your evaluation questions. This template also prompts you to record your plan for analyzing and interpreting data in ways that will help you answer your evaluation questions.
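To make the idea concrete, here is a minimal sketch of such an alignment expressed as a simple data structure, with a check for evaluation questions that have no planned data source. The column names and example entries are illustrative assumptions, not the contents of EvaluATE’s template.

```python
# Hypothetical data collection planning matrix as a list of rows.
# Column names and entries are examples only, not EvaluATE's template.
planning_matrix = [
    {
        "evaluation_question": "To what extent did participants' technical skills improve?",
        "data_source": "Pre/post skills assessment",
        "collection_timing": "Start and end of each course",
        "analysis_plan": "Compare pre/post mean scores",
    },
    {
        "evaluation_question": "How satisfied are industry partners with the project?",
        "data_source": None,  # gap: no data source identified yet
        "collection_timing": None,
        "analysis_plan": None,
    },
]

# Flag evaluation questions that are not yet covered by any planned data source.
for row in planning_matrix:
    if not row["data_source"]:
        print("No data planned for:", row["evaluation_question"])
```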

Be CAREFUL


Regardless of how you obtain your data or what you plan to do with them, it’s essential to make sure they’re clean before you begin analysis. A systematic process of data cleaning involves identifying and correcting issues such as data entry mistakes, duplicate records, format inconsistencies, and other problems that detract from the accuracy of the information or impair your ability to make sense of it. Check out Aleata Hubbard’s Six Data Cleaning Checks for guidance on how to make sure your data are ready for analysis.
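As a rough illustration of these checks, the sketch below uses Python’s pandas library to look for duplicate records, missing values, format inconsistencies, and out-of-range entries in a hypothetical survey file. The file name and column names are assumptions made up for this example.

```python
# A minimal data-cleaning sketch with pandas; "survey_responses.csv" and its
# columns ("institution", "rating") are hypothetical stand-ins for your data.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

# Duplicate records (e.g., double submissions).
print("Duplicate rows:", df.duplicated().sum())

# Missing values that could distort the analysis.
print("Missing values per column:")
print(df.isna().sum())

# A common format inconsistency: stray whitespace and mixed capitalization
# in a categorical field.
df["institution"] = df["institution"].str.strip().str.title()

# Likely data entry mistakes: values outside the expected 1-5 rating scale.
out_of_range = df[~df["rating"].between(1, 5)]
print("Out-of-range ratings:", len(out_of_range))
```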


Meet EvaluATE’s friendly new ATE survey coordinator


Check out this short video to meet Lyssa Wilson Becho, your one-stop shop for questions about the annual survey of Advanced Technological Education (ATE) principal investigators, and to get the scoop on this year’s survey. In this video, Lyssa gives a quick introduction to the 2018 survey and how the findings are used throughout the ATE community.

Check out the new ATE evaluation report repository


EvaluATE is building a repository of ATE evaluation reports. Check it out to get a sense of how ATE projects and centers are evaluating their work and what they’re learning. If you’re an ATE principal investigator or evaluator, let us know if you have a report that should be added to the collection.

 

Newsletter: Fall 2017

Posted on October 18, 2017

Fall is a time when many projects funded by the National Science Foundation’s Advanced Technological Education (ATE) program are gearing up for a new year of work. So this issue of EvaluATE’s newsletter highlights resources that project personnel can use to educate themselves about their roles and responsibilities in evaluations—whether they are starting their first project or entering a new phase of work. Evaluation shouldn’t be something that is “done to” a project and its people. Project staff should be involved in evaluation planning and implementation, and communicate regularly with their evaluators to ensure the evaluation produces useful and timely results.

How to Work with an Evaluator

The Center for Advancement of Informal Science Education’s Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects describes what principal investigators (PIs) need to know when it comes to evaluation. PIs don’t necessarily have to be skilled in the technical aspects of conducting an evaluation, but they should be able to engage effectively with evaluators. Chapter 4 of the guide provides practical tips on collaborating with evaluators through all phases of a project.

Get Your Evaluation off to a Great Start

There are three simple things PIs can do to set the stage for a great evaluation:

  1. Schedule regular meetings with the project evaluator.
  2. Work with the evaluator to create a project evaluation calendar.
  3. Create a system to keep track of the project’s activities and accomplishments, as well as who is involved.

Read Lori Wingate’s new blog post to learn more about these tasks.

Communication: The Key to a Successful Evaluator-PI Relationship

Establishing communication expectations and protocols at the start of a project evaluation can help everyone avoid headaches, misunderstandings, and wasted resources down the road. Review EvaluATE’s new Communication Plan Checklist to learn about four key aspects of ATE PI-evaluator communication that should be clarified at the start of an evaluation.

Reporting Checklist

Thank you to everyone in the ATE community and beyond who took the time to review and pilot EvaluATE’s Checklist for Program Evaluation Report Content. At various stages of the checklist’s development, we received feedback from 42 individuals, including 24 who pilot-tested the checklist. All reviewers who agreed to be named are listed in the Acknowledgments section of the document.

Meet EvaluATE’s Evaluation Fellows

EvaluATE’s first cohort of evaluation fellows has been selected. You can meet them at the ATE PI Conference. Congratulations to our new fellows; we look forward to working with you this year.

  • Jennifer Bellville
  • Evelyn Brown
  • Gabrielle Gabrielli
  • Megan Mullins

Newsletter: 2017 Summer

Posted on August 7, 2017

Proposals for the National Science Foundation’s Advanced Technological Education (ATE) program are due October 5. If you are submitting a proposal, now is the time to get your evaluation plan in order. This issue of EvaluATE’s newsletter points you to several resources to help you with this task.

New Evaluation Guidelines for ATE Proposals

The National Science Foundation has issued a new solicitation for Advanced Technological Education (ATE) proposals. It includes important changes in the evaluation guidance. Check out the resources below to help you put together a winning evaluation plan for your ATE proposal.

Finding an Evaluator: Demystified

You won’t find “evaluator” in the Yellow Pages, and there is no list of NSF-vetted evaluators. Yet there are thousands of professionals who identify as evaluators. This can leave prospective ATE PIs feeling mystified and frustrated about how to locate and select an evaluator for their projects. Read EvaluATE’s new guide to Finding and Selecting an Evaluator to learn how to streamline your search for an evaluator for your ATE proposal.

Evaluators: add your information to the ATE Central Evaluator Map (evalu-ate.org/contact/evaluator-form/)

Not Allowed to Name an Evaluator in Your ATE Proposal?

Some institutions do not allow their faculty and staff to name an evaluator in a proposal prior to an award being made. If that is your situation, check out EvaluATE’s advice for DIY evaluation planning, as well as grants specialist Jacqueline Rearick’s tips for dealing with administrative red tape.

UPCOMING OPPORTUNITIES

EvaluATE will award ATE Evaluation Fellowships to four ATE evaluators to enable them to attend the 2017 ATE Principal Investigators Conference. Learn more about this opportunity and how to apply in the ATE PI Conference section of EvaluATE’s website.

The ATE Principal Investigators Conference is THE must-attend event of the year for anyone involved in the ATE program. Come to learn and network. Plus, it’s a great opportunity to showcase your lessons learned to help your ATE peers. Check out the Call for Sessions.



Newsletter: 2017 Spring

Posted on April 26, 2017


This issue of EvaluATE’s newsletter is all about reporting. It highlights a blog on how to include evaluation results in Advanced Technological Education (ATE) annual reports to NSF, resources on alternative report formats, data visualization resources, and tips for throwing a party (a data party, that is).

Reporting Project Evaluation Results in NSF Annual Reports

National Science Foundation grantees are required to submit annual reports through Research.gov. ATE principal investigators should include information from their ATE project’s or center’s evaluation. But when you look at the required sections, you will not see one that says, “Evaluation Results.” That would be too easy! Lori Wingate’s recent blog, “What Goes Where? Reporting Evaluation Results to NSF,” offers straightforward guidance for this task.

Reimagining Evaluation Reports

A detailed technical report is by far the most common means of communicating evaluation results. If you need something different to share with your project’s stakeholders, get some fresh ideas from BetterEvaluation’s overview and resources on alternative reporting media. These include newsletters, postcards, web conferences, posters, videos, cartoons, and infographics—just to name a few.  If you just need an efficient way to report project facts in a no-nonsense manner, try naked reporting – a way of communicating essential project information with minimal descriptive text.

Data Visualization Resources

Great data visualization can make a report more readable and understandable. Bad data visualization can make it confusing and seem unprofessional. Whether reporting within your organization or to external audiences, good charts can help communicate project achievements. Check out Ann Emery’s instructional videos to boost your visualization skills using Excel. To make sure you are getting it right, review your work against the Data Visualization Checklist, by Stephanie Evergreen and Ann Emery. Both of their websites include lots of other helpful information on data visualization.

Party Time!

That’s right, time for a “data party.” Sharing data with stakeholders doesn’t have to be boring. In “Have a Party to Share Evaluation Results,” an AEA365 blog by Kendra Lewis, you’ll learn about gallery walks, data placemats, and other fun and memorable ways to engage stakeholders in reviewing and making sense of evaluation data.

Evaluation Reporting Checklist

Are you writing an evaluation report? Check out EvaluATE’s Evaluation Reporting Checklist – Version 1.1 (a new version is being developed – stay tuned). This checklist is full of helpful guidance for developing comprehensive and straightforward evaluation reports. EvaluATE is currently looking for a few volunteers to pilot Version 1.2 and provide feedback. If you are interested, please send an email to kelly.robertson@wmich.edu – you’ll be among the first to review the new and improved checklist and help shape the final product. Just want to learn more about evaluation reporting?  Watch the recording of EvaluATE’s December 2016 webinar, Anatomy of a User-Friendly Evaluation Report.

 


American Evaluation Association (AEA) Summer Evaluation Institute

This year’s AEA Summer Evaluation Institute is June 4-7 in Atlanta, Georgia. EvaluATE’s director, Lori Wingate, is giving two workshops on Identifying Evaluation Questions. You can also learn about evaluation theory, survey design, logic modeling, evaluating collaborations, data visualization, and much more from a great line-up of instructors.

Did you miss our last webinar? Get the slides, recording, and handout here.

Recent EvaluATE Blogs:

 See more blogs at evalu-ate.org/blog

Newsletter: 2017 Winter

Posted on January 18, 2017


ADAPTING EVALUATION DESIGN TO DATA REALITIES

“What is your biggest challenge working as an ATE evaluator?” Twenty-three evaluators who applied for funding from EvaluATE to attend the 2016 Advanced Technological Education Principal Investigators Conference gave us their opinions on that topic. One of the most common responses was along the lines of “insufficient data.” In this issue of EvaluATE’s newsletter, we highlight resources that evaluators and project staff can turn to when plans need to be adjusted to ensure an evaluation has adequate data. (Another common theme was “communication between project and evaluation personnel,” but that’s for a future newsletter issue).

Scavenge Data

One of the biggest challenges many evaluators encounter is getting people to participate in data collection efforts, such as surveys and focus groups. In her latest contribution to EvaluATE’s blog, Lori Wingate discusses Scavenging Evaluation Data. She identifies two ways to get useful data that don’t require the cooperation of project participants.

Get Real

RealWorld Evaluation is a popular text among evaluators because the authors recognize that evaluations are often conducted under less-than-ideal circumstances with limited resources. Check out the companion website, which includes a free 20-page PDF summary of the book.

Check Timing When Changing Plans

For ATE projects, it is OK to use data collection methods that were not included in the original evaluation plan—as long as there is a good rationale. But be realistic about how much time it takes to develop new data collection instruments and protocols. For a reality check, see the Time Frame Estimates for Common Data Collection Activities in Guidelines for Working with Third-Party Evaluators.

Repurpose Existing Data

Having trouble getting data from project participants? Try using secondary data to supplement your primary evaluation data. In  Look No Further: Potential Sources of Institutional Data, institutional research professionals from the University of Washington Bothell describe several types of institutional data that can be used in project evaluations at colleges and universities.

Upcoming Webinars

Did you miss our recent webinars?

Check out the slides, handouts, and recordings from our August and December webinars.

Want to receive our newsletter via email?

Join our mailing list

Newsletter: 2016 Fall

Posted on October 19, 2016


Happy New Year!

The calendar year may be coming to a close, but a new academic year just started and many ATE program grantees recently received their award notifications from the National Science Foundation. ‘Tis the season to start up or revisit evaluation plans for the coming year. This digital-only issue of EvaluATE’s newsletter is all about helping project leaders and evaluators get the new evaluation year off on the right track.

Don’t launch (or relaunch) your evaluation before taking these steps


Mentor-Connect’s one-page checklist tells project leaders what they need to do to set the stage for a successful evaluation.

You won’t hear this from anyone else


EvaluATE’s director, Lori Wingate, shares Three Inconvenient Truths about ATE Evaluation in her latest contribution to the EvaluATE blog. You may find them unsettling, but ignorance is not bliss when it comes to these facts about evaluation.

Is your evaluation on track?


Use the Evaluation Progress Checklist to make sure your evaluation is on course. It’s on pages 26-28 in Westat’s Guidelines for Working with Third Party Evaluators, which also includes guidance for resolving problems and other tips for nonevaluators.

Myth: All evaluation stakeholders should be engaged equally


Monitor, facilitate, consult, or co-create? Use our stakeholder identification worksheet to figure out the right way to engage different types of stakeholders in your evaluation.

EvaluATE at the ATE PI Conference: October 26-29

A Practical Approach to Outcome Evaluation: Step-by-Step
WORKSHOP: Wednesday 1-4 p.m.
DEMONSTRATION: Thursday 4:45-5:15 p.m.

SHOWCASES: We will be at all three showcase sessions.

Check out the conference program.

Next Webinar


Did you miss our recent webinars?

Check out slides, handouts, and recordings


Shape the future of EvaluATE

EvaluATE has been funded for another four years! Let us know how you would like us to invest our resources to advance evaluation in the ATE program.

Complete our two-minute survey today.

Want to receive our newsletter via email?


Newsletter: Getting the Most out of Your Logic Model

Posted on July 1, 2016

Director of Research, The Evaluation Center at Western Michigan University

I recently led two workshops at the American Evaluation Association’s Summer Evaluation Institute. To get a sense of the types of projects that the participants were working on, I asked them to send me a brief project description or logic model in advance of the Institute. I received more than 50 responses, representing a diverse array of projects in the areas of health, human rights, education, and community development. While I have long advocated for logic models as a succinct way to communicate the nature and purpose of projects, it wasn’t until I received these responses that I realized how efficient logic models really are at conveying what a project does, whom it serves, and how it is intended to bring about change.

In reviewing the logic models, I was able to quickly understand the main project activities and outcomes. My workshops were on developing evaluation questions, and I was amazed how quickly I could frame evaluation questions and indicators based on what was presented in the models. It wasn’t as straightforward with the narrative project descriptions, which were much less consistent in the types of information conveyed and the degree to which the elements were linked conceptually. When participants showed me their models in the workshop, I quickly remembered their projects and could give them specific feedback based on my previous review of their models.

Think of NSF proposal reviewers who have to read numerous 15-page project descriptions. It’s not easy to keep straight all the details of a single project, let alone the details of 10 or more. In a logic model, all the key information about a project’s activities, products, and outcomes is presented in one graphic. This helps reviewers consume the project information as a “package.” For reviewers who are especially interested in the quality of the evaluation plan, a quick comparison of the evaluation plan against the model will reveal how well the plan is aligned with the project’s activities, scope, and purpose. Specifically, mentally mapping the evaluation questions and indicators onto the logic model provides a good sense of whether the evaluation will adequately address both project implementation and outcomes.

One of the main reasons for creating a logic model—other than the fact it may be required by a funding agency—is to illustrate how key project elements logically relate to one another. I have found that representing a project’s planned activities, products, and outcomes in a logic model format can reveal weaknesses in the project’s plan. For example, there may be an activity that doesn’t seem to lead anywhere or ambitious outcomes that aren’t adequately supported by activities or outputs.  It is much better if you, as a project proposer, spot those weaknesses before an NSF reviewer does. A strong logic model can then serve as a blueprint for the narrative project description—all key elements of the model should be apparent in the project description and vice versa.

I don’t think there is such a thing as the perfect logic model. The trick is to recognize when it is good enough. Check to make sure the elements are located in the appropriate sections of the model, that all main project activities (or activity areas) and outcomes are included, and that they are logically linked. Ask someone from outside your team to review it, and revise if they see problems or opportunities to increase clarity. But don’t overwork it; treat it as a living document that you can update when and if necessary.

Download the logic model template from http://bit.ly/lm-temp.

Newsletter: Theory of Change

Posted on July 1, 2016

Director of Research, The Evaluation Center at Western Michigan University

“A theory of change defines all building blocks required to bring about a given long-term goal. This set of connected building blocks—interchangeably referred to as outcomes, results, accomplishments, or preconditions—is depicted on a map known as a pathway of change/change framework, which is a graphic representation of the change process.”[1]

While this sounds a lot like a logic model, a theory of change typically includes much more detail about how and why change is expected to happen. For example, a theory of change may describe necessary conditions that must be achieved in order to reach each level of outcomes and include justifications for hypotheses. While logic models are essentially descriptive—communicating what a project will do and the outcomes it will produce—theories of change are more explanatory.  An arrow from one box in a logic model to another indicates, “if we do this, then this will happen.” In contrast, a theory of change explains what that arrow represents, i.e., the specific mechanisms by which change occurs.

Some funding programs, such as NSF’s Improving Undergraduate STEM Education program, call for proposals to include a theory of change. Developing and communicating a theory of change pushes proposers to get specific about how change will occur and include strong justification for planned actions and expected results.

To learn more, see “An Introduction to Theory of Change” in Evaluation Exchange at http://bit.ly/toc-lm, which includes links to helpful resources from the Center for Theory of Change (http://www.theoryofchange.org/).

[1] http://www.theoryofchange.org > Glossary

Newsletter: What’s the Difference Between Outputs, Outcomes, and Impacts?

Posted on July 1, 2016

Director of Research, The Evaluation Center at Western Michigan University


A common source of confusion among individuals who are learning about logic models is the difference between outputs, outcomes, and impacts. While most people generally understand that project activities are the things that a project does, the other terms may be less straightforward.

Outputs are the tangible products of project activities. I think of outputs as things whose existence can be observed directly, such as websites, videos, curricula, labs, tools, software, training materials, journal articles, and books. They tend to be the things that remain after a project ends or goes away.

Outcomes are the changes brought about through project activities and outputs/products.  Outcomes may include changes in individual knowledge, skills, attitudes, awareness, or behaviors; organizational practices; and broader social/economic conditions.  In her blog post “Outputs are for programs, outcomes are for people” (http://bit.ly/srob0314), Sheila Robinson offers this guidance: “OUTCOMES are changes in program participants or recipients (aka the target population). They can be identified by answering the question:  How will program participants change as a result of their participation in the program?” This is a great way to check to see if your logic model elements are located in the right place.  If the outcomes in your logic model include things that don’t sound like an appropriate answer to that question, then you may need to move things around.

The term impact is usually used to refer to outcomes that are especially large in scope or the ultimate outcomes a project is seeking to bring about. Sometimes the terms impacts and long-term outcomes are used interchangeably.

For example, one of EvaluATE’s main activities is webinars. Outputs of these webinars include resource materials, presentation slides, and recordings. Short-term outcomes for webinar participants are expected to include increased knowledge of evaluation. Mid-term outcomes include changes in their evaluation practice. Long-term outcomes are improved quality and utility of ATE project evaluations. The ultimate intended impact is for ATE projects to achieve better outcomes through strategic use of high-quality evaluations.
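To make those distinctions concrete, here is a rough sketch of the webinar example above organized as a simple data structure. The grouping and key names are illustrative assumptions only, not a format EvaluATE prescribes.

```python
# The EvaluATE webinar example, grouped by logic model category.
# Keys and grouping are illustrative assumptions, not a prescribed format.
webinar_logic_model = {
    "activities": ["Deliver webinars on evaluation topics"],
    "outputs": ["Resource materials", "Presentation slides", "Recordings"],
    "short_term_outcomes": ["Increased participant knowledge of evaluation"],
    "mid_term_outcomes": ["Changes in participants' evaluation practice"],
    "long_term_outcomes": ["Improved quality and utility of ATE project evaluations"],
    "impact": ["ATE projects achieve better outcomes through strategic use of high-quality evaluation"],
}

# Simple check: every category should contain at least one element,
# so that each column of the model adds new information.
for category, items in webinar_logic_model.items():
    assert items, category + " is empty"
```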

Keep in mind that not all logic models use these specific terms, and not everyone adheres to these particular definitions. That’s OK! The important thing when developing a logic model is to understand what YOU mean by these terms and to apply them consistently in your model and elsewhere. And regardless of how you define them, each column in your model should present new information, not a reiteration of something already communicated.