
Newsletter: 2017 Winter

Posted on January 18, 2017 in Newsletter


ADAPTING EVALUATION DESIGN TO DATA REALITIES

“What is your biggest challenge working as an ATE evaluator?” Twenty-three evaluators who applied for funding from EvaluATE to attend the 2016 Advanced Technological Education Principal Investigators Conference gave us their opinions on that topic. One of the most common responses was along the lines of “insufficient data.” In this issue of EvaluATE’s newsletter, we highlight resources that evaluators and project staff can turn to when plans need to be adjusted to ensure an evaluation has adequate data. (Another common theme was “communication between project and evaluation personnel,” but that’s for a future newsletter issue).

Scavenge Data

One of the biggest challenges many evaluators encounter is getting people to participate in data collection efforts, such as surveys and focus groups. In her latest contribution to EvaluATE’s blog, Lori Wingate discusses Scavenging Evaluation Data. She identifies two ways to get useful data that don’t require the cooperation of project participants.

Get Real

RealWorld Evaluation is a popular text among evaluators because the authors recognize that evaluations are often conducted under less-than-ideal circumstances with limited resources. Check out the companion website, which includes a free 20-page PDF summary of the book.

Check Timing When Changing Plans

For ATE projects, it is OK to use data collection methods that were not included in the original evaluation plan—as long as there is a good rationale. But be realistic about how much time it takes to develop new data collection instruments and protocols. For a reality check, see the Time Frame Estimates for Common Data Collection Activities in Guidelines for Working with Third-Party Evaluators.

Repurpose Existing Data

Having trouble getting data from project participants? Try using secondary data to supplement your primary evaluation data. In Look No Further: Potential Sources of Institutional Data, institutional research professionals from the University of Washington Bothell describe several types of institutional data that can be used in project evaluations at colleges and universities.

Upcoming Webinars

Did you miss our recent webinars?

Check out the slides, handouts, and recordings from our August and December webinars.

Want to receive our newsletter via email?


Newsletter: 2016 Fall

Posted on October 19, 2016 in Newsletter


Happy New Year!

The calendar year may be coming to a close, but a new academic year just started and many ATE program grantees recently received their award notifications from the National Science Foundation. ‘Tis the season to start up or revisit evaluation plans for the coming year. This digital-only issue of EvaluATE’s newsletter is all about helping project leaders and evaluators get the new evaluation year off on the right track.

Don’t launch (or relaunch) your evaluation before taking these steps


Mentor-Connect’s one-page checklist tells project leaders what they need to do to set the stage for a successful evaluation.

You won’t hear this from anyone else


EvaluATE’s director, Lori Wingate, shares Three Inconvenient Truths about ATE Evaluation in her latest contribution to the EvaluATE blog. You may find them unsettling, but ignorance is not bliss when it comes to these facts about evaluation.

Is your evaluation on track?


Use the Evaluation Progress Checklist to make sure your evaluation is on course. It’s on pages 26-28 in Westat’s Guidelines for Working with Third-Party Evaluators, which also includes guidance for resolving problems and other tips for nonevaluators.

Myth: All evaluation stakeholders should be engaged equally


Monitor, facilitate, consult, or co-create? Use our stakeholder identification worksheet to figure out the right way to engage different types of stakeholders in your evaluation.

EvaluATE at the ATE PI Conference: October 26-29

A Practical Approach to Outcome Evaluation: Step-by-Step
WORKSHOP: Wednesday 1-4 p.m.
DEMONSTRATION: Thursday 4:45-5:15 p.m.

SHOWCASES: We will be at all three showcase sessions.

Check out the conference program.

Next Webinar


Did you miss our recent webinars?

Check out the slides, handouts, and recordings.


Shape the future of EvaluATE

EvaluATE has been funded for another four years! Let us know how you would like us to invest our resources to advance evaluation in the ATE program.

Complete our two-minute survey today.

Want to receive our newsletter via email?


Newsletter: Getting the Most out of Your Logic Model

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

I recently led two workshops at the American Evaluation Association’s Summer Evaluation Institute. To get a sense of the types of projects the participants were working on, I asked them to send me a brief project description or logic model in advance of the Institute. I received more than 50 responses, representing a diverse array of projects in the areas of health, human rights, education, and community development. While I have long advocated for logic models as a succinct way to communicate the nature and purpose of projects, it wasn’t until I received these responses that I realized how efficient logic models really are in conveying what a project does, whom it serves, and how it is intended to bring about change.

In reviewing the logic models, I was able to quickly understand the main project activities and outcomes. My workshops were on developing evaluation questions, and I was amazed how quickly I could frame evaluation questions and indicators based on what was presented in the models. It wasn’t as straightforward with the narrative project descriptions, which were much less consistent in the types of information conveyed and the degree to which the elements were linked conceptually. When participants showed me their models in the workshop, I quickly remembered their projects and could give them specific feedback based on my previous review of their models.

Think of NSF proposal reviewers who have to read numerous 15-page project descriptions. It’s not easy to keep straight all the details of a single project, let alone those of 10 or more 15-page proposals. In a logic model, all the key information about a project’s activities, products, and outcomes is presented in one graphic. This helps reviewers consume the project information as a “package.” For reviewers who are especially interested in the quality of the evaluation plan, a quick comparison of the evaluation plan against the model will reveal how well the plan is aligned to the project’s activities, scope, and purpose. Specifically, mentally mapping the evaluation questions and indicators onto the logic model provides a good sense of whether the evaluation will adequately address both project implementation and outcomes.

One of the main reasons for creating a logic model—other than the fact it may be required by a funding agency—is to illustrate how key project elements logically relate to one another. I have found that representing a project’s planned activities, products, and outcomes in a logic model format can reveal weaknesses in the project’s plan. For example, there may be an activity that doesn’t seem to lead anywhere or ambitious outcomes that aren’t adequately supported by activities or outputs.  It is much better if you, as a project proposer, spot those weaknesses before an NSF reviewer does. A strong logic model can then serve as a blueprint for the narrative project description—all key elements of the model should be apparent in the project description and vice versa.

I don’t think there is such a thing as the perfect logic model. The trick is to recognize when it is good enough. Check to make sure the elements are located in the appropriate sections of the model, that all main project activities (or activity areas) and outcomes are included, and that they are logically linked. Ask someone from outside your team to review it; revise if they see problems or opportunities to increase clarity. But don’t overwork it—treat it as a living document that you can update when and if necessary.

Download the logic model template from http://bit.ly/lm-temp.

Newsletter: Theory of Change

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

“A theory of change defines all building blocks required to bring about a given long-term goal. This set of connected building blocks—interchangeably referred to as outcomes, results, accomplishments, or preconditions—is depicted on a map known as a pathway of change/change framework, which is a graphic representation of the change process.”1

While this sounds a lot like a logic model, a theory of change typically includes much more detail about how and why change is expected to happen. For example, a theory of change may describe necessary conditions that must be achieved in order to reach each level of outcomes and include justifications for hypotheses. While logic models are essentially descriptive—communicating what a project will do and the outcomes it will produce—theories of change are more explanatory.  An arrow from one box in a logic model to another indicates, “if we do this, then this will happen.” In contrast, a theory of change explains what that arrow represents, i.e., the specific mechanisms by which change occurs.

Some funding programs, such as NSF’s Improving Undergraduate STEM Education program, call for proposals to include a theory of change. Developing and communicating a theory of change pushes proposers to get specific about how change will occur and include strong justification for planned actions and expected results.

To learn more, see “An Introduction to Theory of Change” in Evaluation Exchange at http://bit.ly/toc-lm, which includes links to helpful resources from the Center for Theory of Change (http://www.theoryofchange.org/).

1. http://www.theoryofchange.org > Glossary

Newsletter: What’s the Difference Between Outputs, Outcomes, and Impacts?

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University


A common source of confusion among individuals who are learning about logic models is the difference between outputs, outcomes, and impacts. While most people generally understand that project activities are the things that a project does, the other terms may be less straightforward.

Outputs are the tangible products of project activities. I think of outputs as things whose existence can be observed directly, such as websites, videos, curricula, labs, tools, software, training materials, journal articles, and books. They tend to be the things that remain after a project ends or goes away.

Outcomes are the changes brought about through project activities and outputs/products.  Outcomes may include changes in individual knowledge, skills, attitudes, awareness, or behaviors; organizational practices; and broader social/economic conditions.  In her blog post “Outputs are for programs, outcomes are for people” (http://bit.ly/srob0314), Sheila Robinson offers this guidance: “OUTCOMES are changes in program participants or recipients (aka the target population). They can be identified by answering the question:  How will program participants change as a result of their participation in the program?” This is a great way to check to see if your logic model elements are located in the right place.  If the outcomes in your logic model include things that don’t sound like an appropriate answer to that question, then you may need to move things around.

The term impact is usually used to refer to outcomes that are especially large in scope or the ultimate outcomes a project is seeking to bring about. Sometimes the terms impacts and long-term outcomes are used interchangeably.

For example, one of EvaluATE’s main activities is webinars. Outputs of these webinars include resource materials, presentation slides, and recordings. Short-term outcomes for webinar participants are expected to include increased knowledge of evaluation. Mid-term outcomes include changes in their evaluation practice. Long-term outcomes are improved quality and utility of ATE project evaluations. The ultimate intended impact is for ATE projects to achieve better outcomes through strategic use of high-quality evaluations.

Keep in mind that not all logic models use these specific terms, and not everyone adheres to these particular definitions. That’s OK! The important thing when developing a logic model is to understand what YOU mean by these terms and to apply them consistently in your model and elsewhere. And regardless of how you define them, each column in your model should present new information, not a reiteration of something already communicated.

Newsletter: ATE Logic Model Template

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

A logic model is a graphic depiction of how a project translates its resources into activities and outcomes. The ATE Project Logic Model Template presents the basic format for a logic model, with question prompts and examples to guide users in distilling their project plans into succinct statements about planned activities, products, and desired outcomes. Paying attention to the prompts and ATE-specific examples will help users avoid common logic model mistakes, like placing outputs (tangible products) under outcomes (changes in people, organizations, or conditions brought about through project activities and outputs).

The template is in PowerPoint, so you can use the existing elements and start creating your own logic model right away—just delete the instructional parts of the document and input your project’s information. We have found that when a document has several graphic elements, PowerPoint is easier to work in than Word. Alternatively, you could create a simple table in Word that mirrors the layout in the template.

Formatting tips:

  • If you find you need special paper to print the logic model and maintain its legibility, it’s too complicated. It should be readable on an 8.5” x 11” sheet of paper. If you simply have too much information to fit on a single page, use general summary statements or categories in the model, and put the detailed explanations in a proposal narrative or other project planning document.
  • You may wish to add arrows to connect specific activities to specific outputs or outcomes.  However, if you find that all activities are leading to all outcomes (and that is actually how the project is intended to work), there is no need to clutter your model with arrows leading everywhere.
  • Use a consistent font and font size.
  • Align, align, align! Alignment is one of the most important design principles. When logic model elements are out of alignment, the model can look messy and unprofessional.
  • Don’t worry if your logic model doesn’t capture all the subtle nuances of your project. It should provide an overview of what a project does and is intended to accomplish and  convey a clear logic as to how the pieces are connected.  Your proposal narrative or project plan is where the details go.

Download the template from http://bit.ly/lm-temp.

Newsletter: Project Spotlight: Geospatial Technician Education – Unmanned Aircraft Systems & Expanding Geospatial Technician Education through Virginia’s Community Colleges

Posted on July 1, 2016 in Newsletter

Deputy Director, Virginia Space Grant Consortium

Chris Carter is the Deputy Director of the Virginia Space Grant Consortium, where he leads two ATE projects.

How do you use logic models in your ATE projects?

Our team recently received our fourth ATE award, which will support the development of academic pathways and faculty training in unmanned aircraft systems (UAS). UAS, when combined with geospatial technologies, will revolutionize spatial data collection and analysis.

Visualizing desired impacts and outcomes is an important first step to effective project management. Logic models are wonderful tools for creating a roadmap of key project components. As a principal investigator on two ATE projects, I have used logic models to conceptualize project outcomes and the change that our team desires to create. Logic models are also effective tools for articulating the inputs and resources that are leveraged to offer the activities that bring about this change.

With facilitation and guidance from our partner and external evaluator, our team developed several project logic models. We developed one overarching project logic model to conceptualize the intended outcomes and desired change of the regional project. Each community college partner also developed a logic model to capture its unique goals and theory of change while also articulating how it contributes to the larger effort. These complementary logic models allowed the team members to visualize and understand their contributions while ensuring everyone was on the same path.

Faculty partners used these logic models to inform their administrations, business partners, and employers about their work. They are great tools for sharing the vision of change and building consensus among key stakeholders.

Our ATE projects are focused on creating career pathways and building faculty competencies to prepare technicians. The geospatial and UAS workforce is a very dynamic employment sector that is constantly evolving. We find logic models helpful tools for keeping the team and partners focused on the desired outputs and outcomes. The models remind us of our goals and help us understand how the components fit together. It is crucial to identify the project inputs and understand that as these evolve, project activities also need to evolve. Constantly updating a logic model and understanding the relationships between the various sections are key pieces of project management.

I encourage all ATE project leaders to work closely with their project evaluators and integrate logic models. Our external evaluator was instrumental in influencing our team to adopt these models. Project evaluators must be viewed as team members and partners from the beginning. I cannot imagine effectively managing a project without the aid of this project blueprint.

Newsletter: Three Questions and Examples to Spur Action from Your Evaluation Report

Posted on April 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

1) Are there any unexpected findings in the report? The EvaluATE team has been surprised to learn that we are attracting a large number of grant writers and other grant professionals to our webinars. We initially assumed that principal investigators (PIs) and evaluators would be our main audience. With growing attendance among grant writers, we became aware that they are often the ones who first introduce PIs to evaluation, guiding them on what should go in the evaluation section of a proposal and how to find an evaluator. The unexpected finding that grant writers are seeking out EvaluATE for guidance caused us to realize that we should develop more tailored content for this important audience as we work to advance evaluation in the ATE program.

Talk with your team and your evaluator to determine if any action is needed related to your unexpected results.

2) What’s the worst/least favorable finding from your evaluation? Although it can be uncomfortable to focus on a project’s weak points, this is where the greatest opportunity for growth and improvement lies. Consider the probable causes of the problem and potential solutions. Can you solve the problem with your current resources? If so, make an action plan. If not, decide if the problem is important enough to address through a new initiative.

At EvaluATE, we serve both evaluators and evaluation consumers who have a wide range of interests and experience. When asked what EvaluATE needs to improve, several respondents to our external evaluation survey noted that they want webinars to be more tailored to their specific needs and skill levels. Some noted that our content was too technical, while others remarked that it was too basic. To address this issue, we decided to develop an ATE evaluation competency framework. Webinars will be keyed to specific competencies, which will help our audience decide which are appropriate for them. We couldn’t implement this research and development work with our current resources, so we wrote this activity into the renewal proposal we submitted last fall.

Don’t sweep an unfavorable result or criticism under the rug. Use it as a lever for positive change.

3) What’s the most favorable finding from your evaluation? Give yourself a pat on the back, then figure out if it points to an aspect of your project you should expand. If you need more information to make that decision, determine what additional evidence could be obtained in the next round of the evaluation. Help others to learn from your successes—the ATE Principal Investigators Conference is an ideal place to share aspects of your work that are especially strong, along with your lessons learned and practical advice about implementing ATE projects.

At EvaluATE, we have been astounded at the interest in and positive response to our webinars. But we don’t yet have a full understanding of the extent to which webinar attendance translates into improvements in evaluation practice. So, we decided to start collecting follow-up data from webinar participants to check on use of our content. With that additional evidence in hand, we’ll be better positioned to make an informed decision about expanding or modifying our webinar series.

Don’t just feel good about your positive results—use them as leverage for increased impact.

If you’ve considered your evaluation results carefully but still aren’t able to identify a call to action, it may be time to rethink your evaluation’s focus. You may need to make adjustments to ensure it produces useful, actionable information. Evaluation plans should be fluid and responsive—it is expected that plans will evolve to address emerging needs.

Newsletter: Survey Says Spring 2016

Posted on April 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

ATE principal investigators (PIs) who received both oral and written reports from their evaluators indicated more use of their evaluation results than those who received just one type of report. Regardless of report format, more than half of PIs said they used evaluation results to make changes to project activities.

[Chart: ATE PIs’ use of evaluation results, by type of evaluation report received]

The full report of the 2015 ATE survey findings is available at http://www.evalu-ate.org/annual_survey/, along with data snapshots and downloadable graphics.