Newsletter - Evaluation Terminology

Newsletter: From ANCOVA to Z Scores

Posted on January 1, 2014 in Newsletter

EvaluATE Blog Editor

The Evaluation Glossary App features more than 600 terms related to evaluation and assessment. Designed for both evaluators and those who work with evaluators, the app provides three ways to access the terms. The first way allows the user to browse alphabetically, like a dictionary. The second option is to view the terms by one of eight categories: 1) data analysis; 2) data collection; 3) ethics and guidelines; 4) evaluation design; 5) miscellaneous; 6) program planning; 7) reporting and utilization; and 8) types of evaluation. The categories are a great starting point for users who are less familiar with evaluation lingo. The final option is a basic search function, which can be useful to anyone who needs a quick definition for an evaluation term. Each entry provides a citation for the definition’s source and cross-references related terms in the glossary.

App author: Kylie Hutchinson of Community Solutions. Free for Android and iOS. Available wherever you purchase apps for your Android or Apple mobile device or from communitysolutions.ca/web/evaluation-glossary/.

Newsletter: ATE Sustainability

Posted on October 1, 2013 in Newsletter

Sustainability is about ensuring that at least some aspects of a project or center’s work—such as faculty positions, partnerships, or curricula—have “a life beyond ATE funding” (nsf.gov/ate). By definition, sustainability “happens” after NSF funding ends—and thus, after the project or center’s evaluation has concluded. So how can sustainability be addressed in an evaluation? There are three sources of information that can help with a prospective assessment of sustainability, whether for external evaluation purposes or to support project planning and implementation:

(1) Every ATE proposal is supposed to include a sustainability plan that describes which aspects of the grant will be sustained beyond the funding period and how. This plan should be reviewed to determine whether the project will be able to deliver on what was promised.

(2) Every proposal submitted in 2012 or later was required to include a data management plan, which should have described how the project’s data and other products would be preserved and made available to others. Like the sustainability plan, it should be reviewed against what the project has actually delivered.

(3) The Checklist for Assessing the Sustainability of ATE Projects and Centers, developed by Wayne Welch, can be used to identify a project’s strengths and weaknesses with regard to sustainability. The checklist addresses diverse dimensions of sustainability related to program content and delivery, collaboration, materials, facilities, revenue, and other issues. See bit.ly/18l2Fcb.

Newsletter: Data Management Plan (DMP)

Posted on July 1, 2013 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

DMPs are not evaluation plans, but they should address how evaluation data will be handled and possibly shared.

DMPs are required for all NSF proposals and are uploaded as a Supplementary Document in the FastLane.gov proposal submission system. They can be up to two pages long and should describe

  • the kind of data you will gather
  • how you’ll format the data and metadata (metadata is documentation describing what your primary data are)
  • what policies will govern the access and use of your data by others
  • how you’ll protect privacy, confidentiality, security, and intellectual property
  • who would be interested in accessing your data and in what ways they might use them
  • your plans for archiving the data and preserving access to them

To learn more about data management plans, check out these resources:

Newsletter: Evaluation Use

Posted on April 1, 2013 in Newsletter

All evaluators want their evaluations to be useful and used. Evaluation clients need evaluation to bring value to their work in order to make the investment worthwhile. What does evaluation use look like in your context? It should be more than accountability reporting. Here are common types of evaluation use as defined in the evaluation literature:

Instrumental Use is using evaluation for decision-making purposes. These decisions are most commonly focused on improvement, such as changing marketing strategies or modifying curriculum. They can also be more summative in nature, such as deciding to continue, expand, or reinvent a project.

Process Use happens when involvement in an evaluation leads to learning or different ways of thinking or working.

Conceptual Use is the use of evaluation to build knowledge. For example, a college dean might use an evaluation of her academic programs to better understand an issue related to another aspect of STEM education. The evaluation influences her thinking but does not trigger any specific action.

Symbolic Use is the use of evaluation findings to advance an existing agenda. Using evaluation to market an ATE program or to apply for further funding are examples.