Newsletter - Winter 2014

Newsletter: Meet EvaluATE’s Community College Liaison Panel

Posted on January 1, 2014 in Newsletter

The ATE program is community college-based, and as such EvaluATE places a priority on meeting the needs of this constituency. To help ensure the relevance and utility of its resources, EvaluATE has convened a Community College Liaison Panel (CCLP). CCLP members Michael Lesiecki, Marilyn Barger, Jane Ostrander, and Gordon Snyder are tasked with keeping the EvaluATE team tuned in to the needs and concerns of 2-year college stakeholders and with engaging the ATE community in the review and pilot testing of EvaluATE-produced materials.

These resources distill relevant elements of evaluation theory, principles, and best practices so that a user can quickly understand and apply them to a specific evaluation-related task. They are intended to help members of the ATE community enhance the quality of their evaluations.

The CCLP’s role is to coordinate a three-phase review process. In the first phase, CCLP members conduct a first-level review of an EvaluATE resource, and the EvaluATE team revises it based on their feedback. In the second phase, each of the four CCLP members reaches out to diverse members of the ATE community (PIs, grant developers, evaluators, and others) to review the material and provide confidential, structured feedback and suggestions. After another round of revisions, in the third phase the CCLP engages a further set of ATE stakeholders to try out the resource and ensure it “works” as intended in the real world. Following this pilot testing, EvaluATE finalizes the resource for wide dissemination.

The CCLP has shepherded two resources through the entire review process: the ATE Evaluation Primer and ATE Evaluation Planning Checklist. In the hopper for review in the next few months are the ATE Logic Model Template and Evaluation Planning Matrix, Evaluation Questions Checklist, ATE Evaluation Reporting Checklist, and Professional Development Feedback Survey Template. In addition, CCLP members are leading the development of a Guide to ATE Evaluation Management—by PIs for PIs.

The CCLP invites anyone interested in ATE evaluation to participate in the review process. For a few hours of your time, you’ll get a first look at new resources and a chance to try them out, and your input will help shape and strengthen the ATE evaluation community. We also welcome recommendations of tools and materials developed by others that would be of interest to the ATE community.

To get involved, email CCLP Director Mike Lesiecki at mlesiecki@gmail.com. Tell him you would like to help make EvaluATE the go-to evaluation resource for people like you.

Newsletter: From ANCOVA to Z Scores

Posted on January 1, 2014 in Newsletter

EvaluATE Blog Editor

The Evaluation Glossary App features more than 600 terms related to evaluation and assessment. Designed for both evaluators and those who work with evaluators, the app provides three ways to access the terms. The first is to browse alphabetically, like a dictionary. The second is to view the terms by one of eight categories: 1) data analysis; 2) data collection; 3) ethics and guidelines; 4) evaluation design; 5) miscellaneous; 6) program planning; 7) reporting and utilization; and 8) types of evaluation. The categories are a great starting point for users who are less familiar with evaluation lingo. The final option is a basic search function, which can be useful to anyone who needs a quick definition for an evaluation term. Each entry provides a citation for the definition’s source and cross-references related terms in the glossary.

App author: Kylie Hutchinson of Community Solutions. Free for Android and iOS. Available wherever you purchase apps for your Android or Apple mobile device or from communitysolutions.ca/web/evaluation-glossary/.

Newsletter: What evaluation models do you recommend for ATE evaluations?

Posted on January 1, 2014 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

Evaluators in any context should have working knowledge of multiple evaluation models. Models provide conceptual frameworks for determining the types of questions to be addressed, which stakeholders should be involved and how, the kinds of evidence needed, and other important considerations for an evaluation. However, evaluation practitioners rarely adhere strictly to any one model (Christie, 2003). Rather, they draw on them selectively. Below are a few popular models:

EvaluATE has previously highlighted the Kirkpatrick Model, developed by Donald Kirkpatrick for evaluating training effectiveness in business contexts. It provides a useful framework for focusing an evaluation of any type of professional development activity. It calls for evaluating a training intervention on four levels of impact (reaction, learning, behavior, and high-level results). A limitation is that it does not direct evaluators to consider whether the right audiences were reached or to assess the quality of an intervention’s content and implementation, only its effects. See bit.ly/1fkdKfh.

Etienne Wenger reconceptualized the Kirkpatrick “levels” for evaluating value creation in communities of practice. He provides useful suggestions for the types of evidence that could be gathered for evaluating community of practice impacts at multiple levels. However, the emphasis on identifying types of “value” could lead those using this approach to overlook evidence of harm and/or overestimate net benefits. See bit.ly/18x5aLc.

Three models that figure prominently in most formal evaluation training programs are Daniel Stufflebeam’s CIPP Model, Michael Scriven’s Key Evaluation Checklist, and Michael Quinn Patton’s Utilization-Focused Evaluation, described below. These authors have distilled their models into checklists (see bit.ly/1fSXu5H).

Stufflebeam’s CIPP Model is especially popular for education and human service evaluations. CIPP calls for evaluators to assess a project’s Context, Input, Process, and Products (the latter encompasses effectiveness, sustainability, and transportability). CIPP evaluations ask: What needs to be done? How should it be done? Is it being done? Did it succeed?

Scriven’s Key Evaluation Checklist calls for assessing a project’s processes, outcomes, and costs. It emphasizes the importance of identifying the needs being served by a project and determining how well those needs were met. Especially useful is the list of 21 sources of values/criteria to consider when evaluating pretty much anything.

Patton’s Utilization-Focused Evaluation calls for planning an evaluation around the information needs of “primary intended users” of the evaluation, i.e., those who are in a position to make decisions based on the evaluation results. He provides numerous practical tips for engaging stakeholders to maximize an evaluation’s utility.

This short list barely scratches the surface; for an overview of 22 different models, see Stufflebeam (2001). A firm grounding in evaluation theory will enhance any evaluator’s ability to design and conduct evaluations that are useful, feasible, ethical, and accurate (see jcsee.org).

Christie, C. A. (2003). Understanding evaluation theory and its role in guiding practice. New Directions for Evaluation, 97, 91–93.

Stufflebeam, D. (2001). Evaluation models. New Directions for Evaluation, 89, 7–98.

Newsletter: Annual NSF Report and the Annual ATE Survey

Posted on January 1, 2014 in Newsletter

Doctoral Associate, EvaluATE, Western Michigan University

A common question EvaluATE has been asked about the ATE survey (conducted annually since 2000, with an average response rate of 95 percent) is, “Why can’t you just use the information we provided in our annual report?” Although there is overlap between the information required for the annual ATE survey and the annual reports that grantees submit through Research.gov, the survey is tailored to ATE activities and outcomes. In contrast, the Research.gov reporting system is set up to accommodate a vast array of NSF-funded endeavors, from polar research expeditions to television programming to the development of technical degree programs. Also, Research.gov reports are narrative reports that are delivered to program officers in PDF format. As such, there is no way to aggregate information submitted via Research.gov into a report about the overall ATE program, which is what NSF needs to support the program’s accountability to Congress.

Although they serve distinct purposes, much of the information asked for in the ATE Survey can and should be reported in ATE grantees’ annual reports to NSF. So, EvaluATE has developed a new resource to help streamline reporting activities of ATE grantees. We’ve extracted information from Research.gov so that PIs can see all the information required in annual reports in one place (rather than having to click through the multi-layered system or strain their eyes viewing the screenshots in Research.gov’s Project Reports Preview PDF document). The document also identifies items from the ATE Survey that are relevant to various annual report sections, so PIs can maximize the use of the data collected about their projects. We welcome your feedback on this draft resource (see p. 1). You may download a draft from evalu-ate.org/annual_survey/.

Newsletter: The PI Guide to Working with Evaluators

Posted on January 1, 2014 in Newsletter

Principal Research Scientist, Education Development Center, Inc.

(originally published as a blog post at ltd.edc.org/strong-pievaluator-partnerships-users-guide on January 10, 2013)

Evaluation can be a daunting task for PIs. It can seem like the evaluator speaks another language, and the stakes for the project can seem very high. Evaluators face their own challenges: often working with tight budgets and timeframes, they are expected to deliver both rigor and relevance, along with evidence of project impact. With all this and more in the mix, it’s no surprise that tension can mount and miscommunication can drive animosity and stress.

As the head of evaluation for the ITEST Learning Resource Center and as an NSF program officer, I saw dysfunctional relationships between PIs and their evaluators contribute to missed deadlines, missed opportunities, and frustration on all sides. As an evaluator, I am deeply invested in building evaluators’ capacity to communicate their work and in helping program staff understand the value of evaluation and what it brings to their programs. I was concerned that these dysfunctional relationships would thwart the potential of evaluation to provide vital information for program staff to make decisions and demonstrate the value of their programs.

To help strengthen PI/evaluator collaborations, I’ve done a lot of what I called “evaluation marriage counseling” for PI/evaluator pairs. Through these “counseling sessions,” I learned that evaluation relationships are not so different from any other relationships. Expectations aren’t always made clear, communication often breaks down, and, more than anything else, all relationships need care and feeding.

As a program officer, I had the chance to help shape and create a new resource that supports PIs and evaluators in forming strong working relationships. Rick Bonney of the Cornell Lab of Ornithology and I developed a guide to working with evaluators, written by PIs, for PIs. Although it was designed for the Informal Science Education community, the lessons translate to just about any situation in which program staff are working with evaluators. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects is available at bit.ly/1l28nTt.