Newsletter: Fall 2017

Posted on October 18, 2017 in Newsletter

Fall is a time when many projects funded by the National Science Foundation’s Advanced Technological Education (ATE) program are gearing up for a new year of work. So this issue of EvaluATE’s newsletter highlights resources that project personnel can use to educate themselves about their roles and responsibilities in evaluations—whether they are starting their first project or entering a new phase of work. Evaluation shouldn’t be something that is “done to” a project and its people. Project staff should be involved in evaluation planning and implementation, and communicate regularly with their evaluators to ensure the evaluation produces useful and timely results.

How to Work with an Evaluator

The Center for Advancement of Informal Science Education’s Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects describes what principal investigators (PIs) need to know when it comes to evaluation. PIs don’t necessarily have to be skilled in the technical aspects of conducting an evaluation, but they should be able to engage effectively with evaluators. Chapter 4 of the guide provides practical tips on collaborating with evaluators through all phases of a project.

Get Your Evaluation off to a Great Start

There are three simple things PIs can do to set the stage for a great evaluation:

  1. Schedule regular meetings with the project evaluator.
  2. Work with the evaluator to create a project evaluation calendar.
  3. Create a system to keep track of the project’s activities and accomplishments, as well as who is involved (see the sketch below).

Read Lori’s new blog to learn more about these tasks.
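On the third task, the tracking system doesn’t need to be elaborate. Here is a minimal sketch in Python of a dated activity log kept as a shared CSV file; the file name, fields, and example entry are illustrative assumptions, not an EvaluATE-prescribed format.

  import csv
  import os
  from datetime import date

  LOG_FILE = "activity_log.csv"  # hypothetical shared file
  FIELDS = ["date", "activity", "accomplishment", "people_involved"]

  def log_activity(activity, accomplishment, people_involved):
      """Append one dated entry to the project activity log."""
      write_header = not os.path.exists(LOG_FILE)
      with open(LOG_FILE, "a", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=FIELDS)
          if write_header:
              writer.writeheader()
          writer.writerow({
              "date": date.today().isoformat(),
              "activity": activity,
              "accomplishment": accomplishment,
              "people_involved": "; ".join(people_involved),
          })

  # Example entry (invented):
  log_activity(
      "Advisory board meeting",
      "Reviewed year-1 results; recruited two new industry partners",
      ["PI", "Co-PI", "evaluator", "6 board members"],
  )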

Communication: The Key to a Successful Evaluator-PI Relationship

Establishing communication expectations and protocols at the start of a project evaluation can help everyone avoid headaches, misunderstandings, and wasted resources down the road. Review EvaluATE’s new Communication Plan Checklist to learn about four key aspects of ATE PI-evaluator communication that should be clarified at the start of an evaluation.

Reporting Checklist

Thank you to everyone in the ATE community and beyond who took the time to review and pilot EvaluATE’s Checklist for Program Evaluation Report Content. At various stages of the checklist’s development, we received feedback from 42 individuals, including 24 who pilot-tested the checklist. All reviewers who agreed to be named are listed in the Acknowledgments section of the document.

Meet EvaluATE’s Evaluation Fellows

EvaluATE’s first evaluation fellow cohort has been selected. You can meet them at the ATE PI Conference. Congratulations to our new fellows; we look forward to working with you this year.

  • Jennifer Bellville
  • Evelyn Brown
  • Gabrielle Gabrielli
  • Megan Mullins

Newsletter: Summer 2017

Posted on August 7, 2017 in Newsletter

Proposals for the National Science Foundation’s Advanced Technological Education (ATE) program are due October 5. If you are submitting a proposal, now is the time to get your evaluation plan in order. This issue of EvaluATE’s newsletter points you to several resources to help you with this task.

New Evaluation Guidelines for ATE Proposals

The National Science Foundation has issued a new solicitation for Advanced Technological Education (ATE) proposals. It includes important changes in the evaluation guidance. The resources below will help you put together a winning evaluation plan for your ATE proposal.

Finding an Evaluator: Demystified

You won’t find “evaluator” in the Yellow Pages. There is no list of NSF-vetted evaluators. Yet there are thousands of professionals who identify as evaluators. This can leave prospective ATE PIs feeling mystified and frustrated about how to locate and select an evaluator for their projects. Read EvaluATE’s new guide to Finding and Selecting an Evaluator to learn how to streamline your search for an evaluator for your ATE proposal.

Evaluators: add your information to the ATE Central Evaluator Map (evalu-ate.org/contact/evaluator-form/).

Not Allowed to Name an Evaluator in Your ATE Proposal?

Some institutions do not allow their faculty and staff to name an evaluator in a proposal prior to an award being made. If that is your situation, check out EvaluATE’s advice for DIY evaluation planning, as well as grants specialist Jacqueline Rearick’s tips for dealing with administrative red tape.

Upcoming Opportunities

EvaluATE will award ATE Evaluation Fellowships to four ATE evaluators to enable them to attend the 2017 ATE Principal Investigators Conference. Learn more about this opportunity and how to apply in the ATE PI Conference section of EvaluATE’s website.

The ATE Principal Investigators Conference is THE must-attend event of the year for anyone involved in the ATE program. Come to learn and network. Plus, it’s a great opportunity to showcase your lessons learned to help your ATE peers. Check out the Call for Sessions.



Newsletter: Spring 2017

Posted on April 26, 2017 in Newsletter


This issue of EvaluATE’s newsletter is all about reporting. It highlights a blog on how to include evaluation results in Advanced Technological Education (ATE) annual reports to NSF, resources on alternative report formats, data visualization resources, and tips for throwing a party (a data party, that is).

Reporting Project Evaluation Results in NSF Annual Reports

National Science Foundation grantees are required to submit annual reports through Research.gov. ATE principal investigators should include information from their ATE project’s or center’s evaluation. But when you look at the required sections, you will not see one that says, “Evaluation Results.” That would be too easy! Lori Wingate’s recent blog, “What Goes Where? Reporting Evaluation Results to NSF,” offers straightforward guidance for this task.

Reimagining Evaluation Reports

A detailed technical report is by far the most common means of communicating evaluation results. If you need something different to share with your project’s stakeholders, get some fresh ideas from BetterEvaluation’s overview and resources on alternative reporting media. These include newsletters, postcards, web conferences, posters, videos, cartoons, and infographics—just to name a few. If you just need an efficient way to report project facts in a no-nonsense manner, try naked reporting, a way of communicating essential project information with minimal descriptive text.

Data Visualization Resources

Great data visualization can make a report more readable and understandable. Bad data visualization can make a report confusing and unprofessional. Whether reporting within your organization or to external audiences, good charts can help communicate project achievements. Check out Ann Emery’s instructional videos to boost your visualization skills using Excel. To make sure you are getting it right, review your work against the Data Visualization Checklist by Stephanie Evergreen and Ann Emery. Both of their websites include lots of other helpful information on data visualization.
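To see what checklist-style choices look like in practice, here is a minimal matplotlib sketch (Python rather than Excel, with invented completion numbers) that applies a few common principles: a descriptive takeaway title, direct value labels instead of a y-axis, and minimal chart clutter. It illustrates the spirit of the checklist, not its actual content.

  import matplotlib.pyplot as plt

  cohorts = ["2014", "2015", "2016"]
  completers = [18, 27, 41]  # invented numbers for illustration

  fig, ax = plt.subplots(figsize=(5, 3))
  bars = ax.bar(cohorts, completers, color="#4472a8")
  ax.set_title("Program completers more than doubled from 2014 to 2016")
  ax.bar_label(bars)            # direct value labels (matplotlib 3.4+)
  ax.set_frame_on(False)        # remove the box around the plot area
  ax.yaxis.set_visible(False)   # direct labels replace the y-axis
  ax.tick_params(bottom=False)  # keep year labels, drop tick marks
  fig.tight_layout()
  fig.savefig("completers.png", dpi=150)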

Party Time!

That’s right, time for a “data party.” Sharing data with stakeholders doesn’t have to be boring. In “Have a Party to Share Evaluation Results,” an AEA365 blog by Kendra Lewis, you’ll learn about gallery walks, data placemats, and other fun and memorable ways to engage stakeholders in reviewing and making sense of evaluation data.

Evaluation Reporting Checklist

Are you writing an evaluation report? Check out EvaluATE’s Evaluation Reporting Checklist – Version 1.1 (a new version is being developed – stay tuned). This checklist is full of helpful guidance for developing comprehensive and straightforward evaluation reports. EvaluATE is currently looking for a few volunteers to pilot Version 1.2 and provide feedback. If you are interested, please send an email to kelly.robertson@wmich.edu – you’ll be among the first to review the new and improved checklist and help shape the final product. Just want to learn more about evaluation reporting?  Watch the recording of EvaluATE’s December 2016 webinar, Anatomy of a User-Friendly Evaluation Report.

 


American Evaluation Association (AEA) Summer Evaluation Institute

This year’s AEA Summer Evaluation Institute is June 4-7 in Atlanta, Georgia. EvaluATE’s director, Lori Wingate, is giving two workshops on Identifying Evaluation Questions. You can also learn about evaluation theory, survey design, logic modeling, evaluating collaborations, data visualization, and much more from a great line-up of instructors.

Did you miss our last webinar? Get the slides, recording, and handout here.

See more EvaluATE blogs at evalu-ate.org/blog

Newsletter: Winter 2017

Posted on January 18, 2017 in Newsletter


Adapting Evaluation Design to Data Realities

“What is your biggest challenge working as an ATE evaluator?” Twenty-three evaluators who applied for funding from EvaluATE to attend the 2016 Advanced Technological Education Principal Investigators Conference gave us their opinions on that topic. One of the most common responses was along the lines of “insufficient data.” In this issue of EvaluATE’s newsletter, we highlight resources that evaluators and project staff can turn to when plans need to be adjusted to ensure an evaluation has adequate data. (Another common theme was “communication between project and evaluation personnel,” but that’s for a future newsletter issue).

Scavenge Data

One of the biggest challenges many evaluators encounter is getting people to participate in data collection efforts, such as surveys and focus groups. In her latest contribution to EvaluATE’s blog, Lori Wingate discusses Scavenging Evaluation Data. She identifies two ways to get useful data that don’t require the cooperation of project participants.

Get Real

RealWorld Evaluation is a popular text among evaluators because the authors recognize that evaluations are often conducted under less-than-ideal circumstances with limited resources. Check out the companion website, which includes a free 20-page PDF summary of the book.

Check Timing When Changing Plans

For ATE projects, it is OK to use data collection methods that were not included in the original evaluation plan—as long as there is a good rationale. But be realistic about how much time it takes to develop new data collection instruments and protocols. For a reality check, see the Time Frame Estimates for Common Data Collection Activities in Guidelines for Working with Third-Party Evaluators.

Repurpose Existing Data

Having trouble getting data from project participants? Try using secondary data to supplement your primary evaluation data. In Look No Further: Potential Sources of Institutional Data, institutional research professionals from the University of Washington Bothell describe several types of institutional data that can be used in project evaluations at colleges and universities.
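Mechanically, supplementing primary data with institutional records is often just a join on a shared identifier. Below is a minimal pandas sketch; the file names and columns (participant_survey.csv, student_id, and so on) are invented for illustration.

  import pandas as pd

  # Primary data collected by the evaluation, plus secondary data
  # pulled from institutional records (all names are hypothetical).
  survey = pd.read_csv("participant_survey.csv")
  records = pd.read_csv("institutional_records.csv")

  # A left join keeps every survey respondent and attaches enrollment
  # and completion fields wherever student IDs match.
  combined = survey.merge(
      records[["student_id", "enrolled_term", "completed"]],
      on="student_id",
      how="left",
  )
  print(combined.head())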

Webinars

Did you miss our recent webinars? Check out the slides, handouts, and recordings from our August and December webinars.

Want to receive our newsletter via email?

Join our mailing list

Newsletter: Fall 2016

Posted on October 19, 2016 in Newsletter


Happy New Year!

The calendar year may be coming to a close, but a new academic year just started and many ATE program grantees recently received their award notifications from the National Science Foundation. ‘Tis the season to start up or revisit evaluation plans for the coming year. This digital-only issue of EvaluATE’s newsletter is all about helping project leaders and evaluators get the new evaluation year off on the right track.

Don’t launch (or relaunch) your evaluation before taking these steps


Mentor-Connect’s one-page checklist tells project leaders what they need to do to set the stage for a successful evaluation.

You won’t hear this from anyone else


EvaluATE’s director, Lori Wingate, shares Three Inconvenient Truths about ATE Evaluation in her latest contribution to the EvaluATE blog. You may find them unsettling, but ignorance is not bliss when it comes to these facts about evaluation.

Is your evaluation on track?


Use the Evaluation Progress Checklist to make sure your evaluation is on course. It’s on pages 26-28 in Westat’s Guidelines for Working with Third-Party Evaluators, which also includes guidance for resolving problems and other tips for nonevaluators.

Myth: All evaluation stakeholders should be engaged equally


Monitor, facilitate, consult, or co-create? Use our stakeholder identification worksheet to figure out the right way to engage different types of stakeholders in your evaluation.

EvaluATE at the ATE PI Conference: October 26-29

A Practical Approach to Outcome Evaluation: Step-by-Step
WORKSHOP: Wednesday 1-4 p.m.
DEMONSTRATION: Thursday 4:45-5:15 p.m.

SHOWCASES: We will be at all three showcase sessions.

Check out the conference program.

Webinars

Did you miss our recent webinars?

Check out slides, handouts, and recordings


Shape the future of EvaluATE

EvaluATE has been funded for another four years! Let us know how you would like us to invest our resources to advance evaluation in the ATE program.

Complete our two-minute survey today.

Want to receive our newsletter via email?

Join our mailing list

Newsletter: Getting the Most out of Your Logic Model

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

I recently led two workshops at the American Evaluation Association’s Summer Evaluation Institute. To get a sense of the types of projects that the participants were working on, I asked them to send me a brief project description or logic model in advance of the Institute. I received more than 50 responses, representing a diverse array of projects in the areas of health, human rights, education, and community development. While I have long advocated for logic models as a succinct way to communicate the nature and purpose of projects, it wasn’t until I received these responses that I realized how efficient logic models really are in terms of conveying what a project does, whom it serves, and how it is intended to bring about change.

In reviewing the logic models, I was able to quickly understand the main project activities and outcomes. My workshops were on developing evaluation questions, and I was amazed how quickly I could frame evaluation questions and indicators based on what was presented in the models. It wasn’t as straightforward with the narrative project descriptions, which were much less consistent in terms of the types of information conveyed and the degree to which the elements were linked conceptually. When participants showed me their models in the workshop, I quickly remembered their projects and could give them specific feedback based on my previous review of their models.

Think of NSF proposal reviewers who have to read numerous 15-page project descriptions. It’s not easy to keep all the details of a single project straight, let alone those of 10 or more proposals. In a logic model, all the key information about a project’s activities, products, and outcomes is presented in one graphic. This helps reviewers consume the project information as a “package.” For reviewers who are especially interested in the quality of the evaluation plan, a quick comparison of the evaluation plan against the model will reveal how well the plan is aligned to the project’s activities, scope, and purpose. Specifically, mentally mapping the evaluation questions and indicators onto the logic model provides a good sense of whether the evaluation will adequately address both project implementation and outcomes.
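That mapping can even be made explicit. The sketch below, with invented model elements and evaluation questions, tags each question with the logic model elements it addresses and flags any element no question covers.

  # All names here are hypothetical, for illustration only.
  model_elements = {
      "activities": ["faculty workshops", "curriculum development"],
      "outcomes": ["faculty adopt new curriculum", "student skill gains"],
  }

  evaluation_questions = {
      "Were the workshops delivered as planned?": ["faculty workshops"],
      "Did faculty adopt the new curriculum?": ["faculty adopt new curriculum"],
  }

  covered = {e for tagged in evaluation_questions.values() for e in tagged}
  for column, elements in model_elements.items():
      for element in elements:
          if element not in covered:
              print(f"No evaluation question addresses {column}: {element!r}")

  # Here the check would flag "curriculum development" and
  # "student skill gains" as unaddressed by any question.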

One of the main reasons for creating a logic model—other than the fact it may be required by a funding agency—is to illustrate how key project elements logically relate to one another. I have found that representing a project’s planned activities, products, and outcomes in a logic model format can reveal weaknesses in the project’s plan. For example, there may be an activity that doesn’t seem to lead anywhere or ambitious outcomes that aren’t adequately supported by activities or outputs.  It is much better if you, as a project proposer, spot those weaknesses before an NSF reviewer does. A strong logic model can then serve as a blueprint for the narrative project description—all key elements of the model should be apparent in the project description and vice versa.

I don’t think there is such a thing as the perfect logic model. The trick is to recognize when it is good enough. Check to make sure the elements are located in the appropriate sections of the model, that all main project activities (or activity areas) and outcomes are included, and that they are logically linked. Ask someone from outside your team to review it; revise if they see problems or opportunities to increase clarity. But don’t overwork it—treat it as a living document that you can update when and if necessary.

Download the logic model template from http://bit.ly/lm-temp.

Newsletter: Theory of Change

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

“A theory of change defines all building blocks required to bring about a given long-term goal. This set of connected building blocks—interchangeably referred to as outcomes, results, accomplishments, or preconditions—is depicted on a map known as a pathway of change/change framework, which is a graphic representation of the change process.”1

While this sounds a lot like a logic model, a theory of change typically includes much more detail about how and why change is expected to happen. For example, a theory of change may describe necessary conditions that must be achieved in order to reach each level of outcomes and include justifications for hypotheses. While logic models are essentially descriptive—communicating what a project will do and the outcomes it will produce—theories of change are more explanatory.  An arrow from one box in a logic model to another indicates, “if we do this, then this will happen.” In contrast, a theory of change explains what that arrow represents, i.e., the specific mechanisms by which change occurs.
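One way to see the difference is to imagine each arrow as a record. A logic model arrow is a bare if-then pair; a theory-of-change link also carries the mechanism, preconditions, and justification behind it. The sketch below is an invented illustration, not drawn from a real project.

  # A logic model arrow: "if we do this, then this will happen."
  logic_model_link = {
      "from": "faculty workshops",
      "to": "faculty adopt new curriculum",
  }

  # A theory-of-change link: the same arrow, plus what it actually claims.
  theory_of_change_link = {
      "from": "faculty workshops",
      "to": "faculty adopt new curriculum",
      "mechanism": "hands-on practice builds skill and confidence",
      "preconditions": [
          "departments approve release time to attend",
          "labs have the equipment the curriculum requires",
      ],
      "justification": "prior cohorts adopted at higher rates after hands-on training",
  }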

Some funding programs, such as NSF’s Improving Undergraduate STEM Education program, call for proposals to include a theory of change. Developing and communicating a theory of change pushes proposers to get specific about how change will occur and include strong justification for planned actions and expected results.

To learn more, see “An Introduction to Theory of Change” in Evaluation Exchange at http://bit.ly/toc-lm, which includes links to helpful resources from the Center for Theory of Change (http://www.theoryofchange.org/).

1. http://www.theoryofchange.org > Glossary

Newsletter: What’s the Difference Between Outputs, Outcomes, and Impacts?

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University


A common source of confusion among individuals who are learning about logic models is the difference between outputs, outcomes, and impacts. While most people generally understand that project activities are the things that a project does, the other terms may be less straightforward.

Outputs are the tangible products of project activities. I think of outputs as things whose existence can be observed directly, such as websites, videos, curricula, labs, tools, software, training materials, journal articles, and books. They tend to be the things that remain after a project ends or goes away.

Outcomes are the changes brought about through project activities and outputs/products.  Outcomes may include changes in individual knowledge, skills, attitudes, awareness, or behaviors; organizational practices; and broader social/economic conditions.  In her blog post “Outputs are for programs, outcomes are for people” (http://bit.ly/srob0314), Sheila Robinson offers this guidance: “OUTCOMES are changes in program participants or recipients (aka the target population). They can be identified by answering the question:  How will program participants change as a result of their participation in the program?” This is a great way to check to see if your logic model elements are located in the right place.  If the outcomes in your logic model include things that don’t sound like an appropriate answer to that question, then you may need to move things around.

The term impact is usually used to refer to outcomes that are especially large in scope or the ultimate outcomes a project is seeking to bring about. Sometimes the terms impacts and long-term outcomes are used interchangeably.

For example, one of EvaluATE’s main activities is webinars. Outputs of these webinars include resource materials, presentation slides, and recordings. Short-term outcomes for webinar participants are expected to include increased knowledge of evaluation. Mid-term outcomes include changes in their evaluation practice. Long-term outcomes are improved quality and utility of ATE project evaluations. The ultimate intended impact is for ATE projects to achieve better outcomes through strategic use of high-quality evaluations.

Keep in mind that not all logic models use these specific terms, and not everyone adheres to these particular definitions. That’s OK! The important thing to remember when developing a logic model is to understand what YOU mean in using these terms and to apply them consistently in your model and elsewhere. And regardless of how you define them, each column in your model should present new information, not a reiteration of something already communicated.
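Restating the webinar example as a simple data structure makes that last check easy to run. This hypothetical sketch flags any entry that merely repeats an earlier column (here, none do).

  webinar_logic_model = {
      "activities": ["webinars"],
      "outputs": ["resource materials", "presentation slides", "recordings"],
      "short-term outcomes": ["increased knowledge of evaluation"],
      "mid-term outcomes": ["changes in evaluation practice"],
      "long-term outcomes": ["improved quality and utility of ATE evaluations"],
      "impact": ["better project outcomes through strategic use of evaluation"],
  }

  # Flag any item that merely repeats something from an earlier column.
  seen = {}
  for column, items in webinar_logic_model.items():
      for item in items:
          if item in seen:
              print(f"{item!r} in {column!r} repeats the {seen[item]!r} column")
          seen.setdefault(item, column)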

Newsletter: ATE Logic Model Template

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

A logic model is a graphic depiction of how a project translates its resources into activities and outcomes. The ATE Project Logic Model Template presents the basic format for a logic model with question prompts and examples to guide users in distilling their project plans into succinct statements about planned activities and products and desired outcomes. Paying attention to the prompts and ATE-specific examples will help users avoid common logic model mistakes, like placing outputs (tangible products) under outcomes (changes in people, organizations, or conditions brought about through project activities and outputs).

The template is in PowerPoint, so you can use the existing elements and start creating your own logic model right away—just delete the instructional parts of the document and input your project’s information. We have found that when a document has several graphic elements, PowerPoint is easier to work in than Word. Alternatively, you could create a simple table in Word that mirrors the layout in the template.

Formatting tips:

  • If you find you need special paper to print the logic model and maintain its legibility, it’s too complicated. It should be readable on an 8.5” x 11” sheet of paper. If you simply have too much information to include on a single page, include general summary statements/categories, and include detailed explanations in a proposal narrative or other project planning document.
  • You may wish to add arrows to connect specific activities to specific outputs or outcomes.  However, if you find that all activities are leading to all outcomes (and that is actually how the project is intended to work), there is no need to clutter your model with arrows leading everywhere.
  • Use a consistent font and font size.
  • Align, align, align! Alignment is one of the most important design principles. When logic model elements are out of alignment, the model can look messy and unprofessional.
  • Don’t worry if your logic model doesn’t capture all the subtle nuances of your project. It should provide an overview of what a project does and is intended to accomplish and convey a clear logic as to how the pieces are connected. Your proposal narrative or project plan is where the details go.

Download the template from http://bit.ly/lm-temp.

Newsletter: Project Spotlight: Geospatial Technician Education – Unmanned Aircraft Systems & Expanding Geospatial Technician Education through Virginia’s Community Colleges

Posted on July 1, 2016 in Newsletter

Deputy Director, Virginia Space Grant Consortium

Chris Carter is the Deputy Director of the Virginia Space Grant Consortium, where he leads two ATE projects.

How do you use logic models in your ATE projects?

Our team recently received our fourth ATE award, which will support the development of academic pathways and faculty training in unmanned aircraft systems (UAS). UAS, when combined with geospatial technologies, will revolutionize spatial data collection and analysis.

Visualizing desired impacts and outcomes is an important first step to effective project management. Logic models are wonderful tools for creating a roadmap of key project components. As a principal investigator on two ATE projects, I have used logic models to conceptualize project outcomes and the change that our team desires to create. Logic models are also effective tools for articulating the inputs and resources that are leveraged to offer the activities that bring about this change.

With facilitation and guidance from our partner and external evaluator, our team developed several project logic models. We developed one overarching project logic model to conceptualize the intended outcomes and desired change of the regional project. Each community college partner also developed a logic model to capture its unique goals and theory of change while also articulating how it contributes to the larger effort. These complementary logic models allowed the team members to visualize and understand their contributions while ensuring everyone was on the same path.

Faculty partners used these logic models to inform their administrations, business partners, and employers about their work. They are great tools for sharing the vision of change and building consensus among key stakeholders.

Our ATE projects are focused on creating career pathways and building faculty competencies to prepare technicians. The geospatial and UAS workforce is a dynamic employment sector that is constantly evolving. We find logic models helpful tools for keeping the team and partners focused on the desired outputs and outcomes. The models remind us of our goals and help us understand how the components fit together. It is crucial to identify the project inputs and understand that as these evolve, project activities also need to evolve. Constantly updating a logic model and understanding the relationships between the various sections are key pieces of project management.

I encourage all ATE project leaders to work closely with their project evaluators and integrate logic models. Our external evaluator was instrumental in influencing our team to adopt these models. Project evaluators must be viewed as team members and partners from the beginning. I cannot imagine effectively managing a project without the aid of this project blueprint.