Newsletter - Evaluation Terminology

Newsletter: Theory of Change

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

“A theory of change defines all building blocks required to bring about a given long-term goal. This set of connected building blocks—interchangeably referred to as outcomes, results, accomplishments, or preconditions—is depicted on a map known as a pathway of change/change framework, which is a graphic representation of the change process.”1

While this sounds a lot like a logic model, a theory of change typically includes much more detail about how and why change is expected to happen. For example, a theory of change may describe the conditions that must be met to reach each level of outcomes and include justifications for its hypotheses. While logic models are essentially descriptive—communicating what a project will do and the outcomes it will produce—theories of change are more explanatory. An arrow from one box in a logic model to another indicates, “if we do this, then this will happen.” In contrast, a theory of change explains what that arrow represents, i.e., the specific mechanisms by which change occurs.

Some funding programs, such as NSF’s Improving Undergraduate STEM Education program, call for proposals to include a theory of change. Developing and communicating a theory of change pushes proposers to be specific about how change will occur and to provide strong justification for planned actions and expected results.

To learn more, see “An Introduction to Theory of Change” in Evaluation Exchange at http://bit.ly/toc-lm, which includes links to helpful resources from the Center for Theory of Change (http://www.theoryofchange.org/).

1 http://www.theoryofchange.org > Glossary

Newsletter: Outcomes and Impacts

Posted on April 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

Outcomes are “changes or benefits resulting from activities and outputs,” including changes in knowledge, attitude, skill, behavior, practices, policies, and conditions. These changes may be at the individual, organizational, or community levels. Impacts are “the ultimate effect of the program on the problem or condition that the program or activity was supposed to do something about.”1

Some individuals and organizations use the terms outcomes and impacts interchangeably. Others, such as the Environmental Protection Agency, which authored the definitions above, use impact to refer to the highest level of outcomes. The National Science Foundation uses impact to refer to important improvements in the capacity of individuals, organizations, and our nation to engage in STEM research, teaching, and learning.

Regardless of how impacts and outcomes are defined, they are quite distinct from activities. Activities are what a project does—actions undertaken. Outcomes and impacts are the changes a project brings about.

Each of these topics has a designated section in the Research.gov reporting system. Gaining clarity about your project’s distinct activities, outcomes, and impacts before you start writing an NSF annual report will streamline the process, reduce redundancy across sections, and ensure that program officers get more than an inventory of project activities. One way to do that is to revisit your project logic model or create one (to get started, download EvaluATE’s logic model template from http://bit.ly/ate-logic).

1 U.S. Environmental Protection Agency. (2007). Program Evaluation Glossary. http://bit.ly/epa-evalgloss

Newsletter: Transformative

Posted on January 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

NSF identifies five questions that proposal reviewers should consider in relation to the NSF merit criteria of Intellectual Merit and Broader Impacts.1 One of these questions is, “To what extent do the proposed activities suggest and explore creative, original, or potentially transformative concepts?”

NSF defines transformative research as involving “ideas, discoveries, or tools that radically change our understanding of an important existing scientific or engineering concept or educational practice or leads to the creation of a new paradigm or field of science, engineering, or education. Such research challenges current understanding or provides pathways to new frontiers.”2

The Transformative Research section of the NSF website (www.nsf.gov/about/transformative_research/) offers additional insights on this topic. It explains that transformative research “challenges conventional wisdom; leads to unexpected insights that enable new techniques or methodologies; or redefines the boundaries of science, engineering, or education.”

Understanding what NSF means by “transformative” is important so that proposers and grantees use the term appropriately and do not accidentally overstate their project’s potential or actual achievements. While some projects may bring about important institutional transformation, that type of transformation is of a smaller scale than the “radical changes” in scientific understanding and practices associated with NSF’s definition. Claims related to “transformation” should be reserved for the truly extraordinary, revolutionary, and ground-breaking changes in understanding or practice.

1 http://bit.ly/merit-review
2 http://bit.ly/tr-def

Newsletter: Collaborative Evaluation

Posted on October 1, 2015 in Newsletter

A collaborative evaluation is one “in which there is a significant degree of collaboration or cooperation between evaluators and stakeholders in planning and/or conducting the evaluation.”1

Project leaders who are new to grant project evaluation may assume that evaluation is something that is done to them, rather than something they do with an evaluator. Although the degree of collaboration may vary, it is generally advisable for project leaders to work closely with their evaluators on the following tasks:

Define the focus of an evaluation: Be clear about what you, as a project leader, need to learn from the evaluation to help improve your work and what you need to be able to report to NSF to demonstrate accountability and impact.

Minimize barriers to data collection: Inform your evaluator about the best times and places to gather data. If the evaluator needs to collect data directly from students or faculty, an advance note from you or another respected individual from your institution can help a great deal. Help your evaluator connect with your institutional research office or other sources of organizational data.

Review data collection instruments: Your evaluator has expertise in evaluation and research methods, but you know your project’s content area and audience best. Review instruments (e.g., questionnaires, interview/focus group protocols) to ensure they make sense for your audience.

To learn more, visit the website of the American Evaluation Association’s topical interest group on collaborative, participatory, and empowerment evaluation: bit.ly/cpe-tig.

1 Cousins, J. B., Donohue, J. J., & Bloom, G. A. (1996). Collaborative evaluation in North America: Evaluators’ self-reported opinions, practices and consequences. American Journal of Evaluation, 17(3), p. 210.

Newsletter: Evaluation Plan

Posted on July 1, 2015 in Newsletter

An evaluation plan is “a written document describing the overall approach or design that will be used to guide an evaluation. It includes what will be done, how it will be done, who will do it, when it will be done, and why the evaluation is being conducted.”1 Two versions of the evaluation plan are needed: a brief, mostly conceptual overview for use in the proposal, and an expanded plan that guides the evaluation once you are funded.

Both versions should describe the evaluation’s scope and focus, data collection plan, and deliverables. The main purpose of the proposal plan is to show reviewers that you have a clear plan, that the plan is appropriate for the project, and that you have the capacity to conduct the evaluation. The expanded plan, which should be the first deliverable you receive from your evaluator after your project starts, serves as a guide for implementing and managing the evaluation. As such, it should include concrete details about methods, analyses, deliverables, and timelines. It should reflect any changes to the project negotiated with NSF during the award process and be updated as necessary throughout the project’s lifespan.

The Evaluation Design Checklist (http://bit.ly/eval-design) and the Evaluation Contracts Checklist (http://bit.ly/eval-contracts) identify numerous issues both PIs and evaluators should think through when developing evaluation plans and contracts.

1 EPA Program Evaluation Glossary (http://bit.ly/epa-glossary)

For more evaluation terminology, get the Evaluation Glossary App from the App Store or Google Play.

Newsletter: Dashboards

Posted on April 1, 2015 in Newsletter

EvaluATE Blog Editor

Dashboards are a way to present data about the “trends of an organization’s key performance indicators.”1 They are designed to give decision makers real-time information about important trends and outcomes related to key program activities. Think of a car’s dashboard: it tells you how much gas the car has, the condition of the engine, and the speed—all of which lets you pay more attention to what is going on around you. Dashboards work best by combining data from a number of sources into one document (or web page) that gives the user the “big picture” and keeps them from getting lost in the details. For example, a single dashboard could present data on event attendance, participant demographics, web analytics, and student outcomes, giving the user important information about project reach as well as potential avenues for growth.

As a project or center’s complexity increases, it’s easy to lose sight of the big picture. By using a dashboard that is designed to integrate many pieces of information about the project or center, staff and stakeholders can make well-balanced decisions and can see the results of their work in a more tangible way. Evaluators can also take periodic readings from the dashboard to inform their own work, providing formative feedback to support good decisions.
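
To make this concrete, here is a minimal sketch of the kind of integration a dashboard performs, written in Python with pandas. The data sources, column names, and figures are hypothetical placeholders, not drawn from any actual project.

```python
# Minimal sketch: pulling several project data sources into one summary table.
# All sources, column names, and numbers below are hypothetical.
import pandas as pd

# Hypothetical source 1: workshop attendance records
attendance = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3"],
    "attendees": [42, 58, 61],
})

# Hypothetical source 2: web analytics export
web = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3"],
    "page_views": [1200, 1850, 2100],
})

# Hypothetical source 3: student outcomes from institutional records
outcomes = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3"],
    "certificates_awarded": [5, 9, 14],
})

# Merge on a shared key so trends from different sources can be read side by side
dashboard = attendance.merge(web, on="quarter").merge(outcomes, on="quarter")
print(dashboard.to_string(index=False))
```

A real dashboard would typically refresh these sources automatically and add charts, but the core idea is the same: one view drawing on many sources.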

For some real-world examples, check out bit.ly/db-examples.

1 bit.ly/what-is-db

Newsletter: Secondary Data

Posted on January 1, 2015 in Newsletter

EvaluATE Blog Editor

Secondary data is data collected for another purpose, typically by a different entity, and repurposed for your own use. This is different from primary data, which you collect and analyze for your own needs. Secondary data may include, but is not limited to, data already collected by other departments at your institution, by national agencies, or even by other grants. Secondary data can be useful for planning, benchmarking, and evaluation.

Using secondary data in an evaluation could involve, for example, drawing on institutional data about student ethnicity and gender to help determine your project’s impact on graduation rates among underrepresented minorities. National education statistics can be used for benchmarking. A national survey of educational pipelines into industry can help direct your recruitment planning.
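
As a rough illustration, the sketch below pairs a project’s participant list (primary data) with institutional records (secondary data) to compare graduation rates. The file names and columns are assumptions made for the example, not a prescribed format.

```python
# Minimal sketch: combining primary data (project participant list) with
# secondary data (institutional records). File and column names are hypothetical.
import pandas as pd

# Primary data: students who took part in project activities (column: student_id)
participants = pd.read_csv("project_participants.csv")

# Secondary data: institutional records with demographics and completion status
# (columns: student_id, ethnicity, graduated, where graduated is 0 or 1)
records = pd.read_csv("institutional_records.csv")

# Flag which students in the institutional records were project participants
merged = records.merge(participants, on="student_id", how="left", indicator=True)
merged["participant"] = merged["_merge"] == "both"

# Compare graduation rates for participants vs. non-participants, by ethnicity
summary = (merged
           .groupby(["participant", "ethnicity"])["graduated"]
           .mean()
           .rename("graduation_rate"))
print(summary)
```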

The primary benefit of using secondary data is that it is often cheaper to acquire than primary data in terms of time, labor, and financial expenses, which is especially important if you are involved in a small grant with limited resources. However, secondary data sources may not provide all the information needed for your evaluation—you will still have to do some primary data collection in order to get the full picture of your project’s quality and effectiveness.

One final note: Accessing institutional data may require working closely with offices that are not part of your grant, so you must plan accordingly. It is helpful to connect your evaluator with those offices to facilitate access throughout the evaluation.

Newsletter: Critical Friend

Posted on October 1, 2014 in Newsletter

EvaluATE Blog Editor

The term critical friend describes a stance an evaluator can take in their relationship with the program or project they evaluate. Costa and Kallick (1993) provide this seminal definition: “A trusted person who asks provocative questions, provides data to be examined through another lens, and offers critique of a person’s work as a friend” (p. 50).

The relationship between a project and an evaluator who is a critical friend is one in which the evaluator has the best interests of the program at heart and the project staff trust that this is the case. The evaluator may see their role as both trusted advisor and staunch critic, pushing the program to achieve its goals in the most effective way possible while maintaining independence. The evaluator helps project staff view information in different ways while remaining sensitive to the staff’s own views and priorities. The evaluator will call attention to negative or less effective aspects of a project, but will do so in a constructive way. By pointing out potential pitfalls and flaws, the critical friend evaluator helps the project grow and improve.

To learn more…

Costa, A. L., & Kallick, B. (1993). Through the lens of a critical friend. Educational Leadership, 51(2), 49-51. http://bit.ly/crit-friend

Rallis, S. F., & Rossman, G. B. (2000). Dialogue for learning: Evaluator as critical friend. New Directions for Evaluation, 86, 81-92.

Newsletter: Effectiveness

Posted on July 1, 2014 in Newsletter

The ATE program solicitation calls for the evaluation of project effectiveness. Effectiveness, as defined by the Oxford English Dictionary, is “the degree to which something is successful in producing a desired result.” Therefore, ATE evaluations should determine the extent to which projects achieved their intended results, demonstrating how the project’s activities led to observed outcomes.

Claiming effectiveness requires establishing causal links between a project’s activities and observed outcomes. To establish causation, three criteria must be met: temporal precedence, covariation, and no plausible alternative explanations (see bit.ly/trochim). For example, if you claim that your project led to increased enrollment of women in engineering technology, you need to provide evidence that (1) the trend did not start until after the project was initiated, (2) individuals or campuses not involved in the project did not experience the same changes, or that the degree of change varied with the degree of involvement, and (3) nothing else in the project’s environment could have produced the observed increase in the number of women enrolled.
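
Here is a minimal sketch of the covariation check in that example, using made-up enrollment counts for a campus involved in the project and a hypothetical comparison campus:

```python
# Illustrative only: compare enrollment changes at an involved campus and a
# comparison campus. All numbers are made up.
enrollment = {
    "involved campus":   {"before": 18, "after": 31},
    "comparison campus": {"before": 20, "after": 21},
}

for campus, counts in enrollment.items():
    change = counts["after"] - counts["before"]
    pct = 100 * change / counts["before"]
    print(f"{campus}: {counts['before']} -> {counts['after']} ({pct:+.0f}%)")

# A larger change at the involved campus than at the comparison campus is
# consistent with covariation. Temporal precedence and the absence of plausible
# alternative explanations still have to be established separately.
```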

Effectiveness is important, but there is more to evaluation than measuring it. Other considerations include relevance, efficiency, impact, and sustainability (project evaluation criteria developed by the Organisation for Economic Co-operation and Development; to learn more, see bit.ly/oecd-dac).

Newsletter: Formative Evaluation

Posted on April 1, 2014 in Newsletter

The most common purposes, or intended uses, of evaluations are often described by the terms formative evaluation and summative evaluation.

Formative evaluation focuses on evaluation for project improvement, in contrast with summative evaluation, which uses evaluation results to make decisions about project adoption, expansion, contraction, continuation, or cancellation.

Since formative evaluation is all about project improvement, it needs to occur while there is still time to implement change. So, the earlier a formative evaluation can begin in an ATE project cycle, the better. Formative evaluation is also a recurring activity. As such, those who will be involved in implementing change (project leaders and staff) are the ones who will be the most interested in the results of a formative evaluation.

E. Jane Davidson notes in her book Evaluation Methodology Basics that there are two main areas in which formative evaluation is especially useful. Adapted for the ATE context, those areas are:

  1. To help a new project “find its feet” by improving project plans early in the award cycle. Another example for a new project is collecting early evidence of project relevance from faculty and students, allowing changes to be made before a project component is fully rolled out.
  2. To help more established projects improve their services, become more efficient with their grant dollars, or reach a larger audience. For projects seeking renewed funding, formative evaluation can help identify areas for improvement (even in long-standing activities) to better respond to changing needs.