Archive: Small Project Evaluation

Blog: Evolution of Evaluation as ATE Grows Up

Posted on March 15, 2017 in Blog

Independent Consultant

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I attended a packed workshop by EvaluATE called “A Practical Approach to Outcome Evaluation” at the 2016 NSF ATE Principal Investigators Conference. Two lessons from the workshop reminded me that the most significant part of the evaluation process is the demystification of the process itself:

  • “Communicate early and often with human data sources about the importance of their cooperation.”
  • “Ensure everyone understands their responsibilities related to data collection.”

Stepping back, it made me reflect upon the evolution of evaluation in the ATE community. When I first started out in the ATE world in 1995, I was on the staff of one of the first ATE centers ever funded. Back then, being “evaluated” was perceived as quite a different experience, something akin to taking your first driver’s test or defending a dissertation—a meeting of the tester and the tested.

As the ATE community has matured, so has our approach to both evaluation and the integral communication component that goes with it. When we were a fledgling center, the meetings with our evaluator could have been a chance to take advantage of the evaluation team’s many years of experience with what works and what doesn’t. Yet at the start we didn’t realize that it was a two-way street where both parties learned from each other. Twenty years ago, evaluator-center/project relationships were neither designed nor explained in that fashion.

Today, my colleague, Dr. Sandra Mikolaski, and I are co-evaluators for NSF ATE clients who range from a small new-to-ATE grant (there weren’t any of those back in the day!) to a large center grant that provides resources to a number of other centers and projects and even has its own internal evaluation team. The experience of working with our new-to-ATE client was perhaps what forced us to be highly thoughtful about how we hope both parties view their respective roles and input. Because the “fish don’t talk about the water” (i.e., project teams are often too close to their own work to toot their own horn), evaluators can provide not only perspective and advice, but also connections to related work and to other project and center principal investigators. This perspective can have a tremendous impact on how activities are carried out and on the goals and objectives of a project.

We use EvaluATE webinars like “User-Friendly Evaluation Reports” and “Small-Scale Evaluation” as references and resources, not only for ourselves but also for our clients. These webinars help them understand that an evaluation is not meant merely to assess and critique, but to inform, amplify, modify, and benefit their work.

We have learned from being on the other side of the fence that an ongoing dialog, an ethnographic approach (on-the-ground research, participant observation, a holistic perspective), and a formative, input-based partnership with our clients make for a more fruitful process for everyone.

Webinar: Small Project Evaluation: Principles and Practices

Posted on February 10, 2016 in Webinars

Presenter(s): Charlotte Forrest, Elaine Craft, Lori Wingate, Miranda Lee, Russell Cannon
Date(s): March 23, 2016
Time: 1-2:30 p.m. EDT
Recording: https://youtu.be/WUFTMyyRgyU

An effective small project evaluation requires a clear-cut and feasible project plan, an evaluation plan that matches the project’s scope and purpose, and a project team and external evaluator who are willing and able to share responsibility for implementing the evaluation. In this webinar, we will review foundational principles of small project evaluation and discuss strategies for putting them into practice for a high-quality, economical, and useful evaluation of a small project.

Webinar participants will be able to:

  1. Create or refine a project logic model that accurately represents a project’s activities and intended outcomes as a foundation for an evaluation plan.
  2. Develop evaluation questions that are appropriate for a small project.
  3. Identify project process and outcome indicators for answering the evaluation questions.

Resources:
Slides
Handout

Checklist: Project Resume

Posted on May 6, 2015 in Resources

DRAFT: This checklist is designed to help project staff create a project resume. A project resume is a list of all key activities or accomplishments of a project. It can easily be created in a word processor and then uploaded to the project’s website. Make the resume easy to find, such as in the site’s “About” section. For a more dynamic resume, include links to supporting documents, staff biographies, or personal Web pages; this will allow users to quickly locate items referenced on the project’s resume. Tracking all activities over the life of a project will make it easier to complete annual reports, apply for future funding, and respond to information requests. For an example, see EvaluATE’s own project resume (About > EvaluATE’s Resume).

File: Click Here
Type: Checklist
Category: Reporting & Use
Author(s): Emma Perk

Blog: ATE Small Project Evaluation

Posted on February 18, 2015 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

All ATE proposals, except for planning grants, are required to specify a budget line for an independent evaluator. But the solicitation offers no guidance as to what a small-scale project evaluation should look like or what kinds of data to collect. The Common Guidelines for Education Research and Development—issued jointly by the National Science Foundation and the Institute of Education Sciences—specify that evidence of impact requires randomized controlled trials, while evidence of promise is generated by correlational and quasi-experimental studies.

The Common Guidelines aren’t well aligned to the work done by many ATE projects and centers, especially projects awarded through the “Small Grants” track. Small ATE projects are funded to do things like create new degree programs, offer summer camps, expand recruitment, provide compensatory education, and develop industry partnerships. These sorts of endeavors are quite distinct from the research and development work to which the Common Guidelines are oriented.

NSF expects small ATE projects to be grounded in research and utilize materials developed by ATE centers. Generally speaking, the charge of small projects is to do, not necessarily to innovate or prove. Therefore, the charge for small project evaluations is to gather and convey evidence about how well this work is being done and how the project contributes to the improvement of technician education. Evaluators of small projects should seek empirical evidence about the extent to which…

The project’s activities are grounded in established practices, policies, frameworks, standards, etc. If small projects are not generating their own evidence of promise or impact, then they should be leveraging the existing evidence base to select and use strategies and materials that have been shown to be effective. Look to authoritative, trusted sources such as the National Academies Press (for example, see the recent report, Reaching Students: What Research Says About Effective Instruction in Undergraduate Science and Engineering) and top-tier education research journals.

The target audience is engaged. All projects should document who is participating in the project (students, faculty, partners, advisors, etc.) and how much. A simple tracking spreadsheet can go a long way toward evaluating this aspect of a project. Showing sustained engagement by a diverse set of stakeholders is important for demonstrating the project’s perceived relevance and quality.
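
A sheet of this kind can also be summarized automatically. The sketch below is a minimal illustration only: the CSV layout, the column names ("role" and "hours"), and the file name are hypothetical, not a format prescribed by NSF or EvaluATE.

  # Minimal sketch: tally participation by stakeholder role from a
  # tracking spreadsheet exported as CSV. Column names are illustrative.
  import csv
  from collections import Counter

  def summarize_engagement(path):
      """Count participants and contact hours per stakeholder role."""
      counts, hours = Counter(), Counter()
      with open(path, newline="") as f:
          for row in csv.DictReader(f):
              role = row["role"]  # e.g., student, faculty, partner, advisor
              counts[role] += 1
              hours[role] += float(row.get("hours") or 0)
      for role in sorted(counts):
          print(f"{role}: {counts[role]} participants, {hours[role]:.1f} hours")

  # Example use: summarize_engagement("participation_log.csv")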

The project contributes to changes in knowledge, skill, attitude, or behavior among the target audience. For any project that progresses beyond development to piloting or implementation, there is presumably some change being sought among those affected. What do they know that they didn’t know before? What new/improved skills do they have? Did their attitudes change? Are they doing anything differently? Even without experimental and quasi-experimental designs, it’s possible to establish empirical and logical linkages between the project’s activities and outcomes.
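
One low-cost way to document such change is a paired pre/post comparison: measure the same participants before and after an activity and report the gain. The sketch below is hypothetical (the function name and scores are invented for illustration) and shows only the arithmetic, not a prescribed ATE method.

  # Minimal sketch: paired pre/post comparison on, say, a 5-point survey.
  from statistics import mean

  def pre_post_gain(pre_scores, post_scores):
      """Return the mean gain and the share of participants who improved."""
      gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
      improved = sum(1 for g in gains if g > 0) / len(gains)
      return mean(gains), improved

  # Made-up scores for five participants, paired by person:
  pre = [2.1, 3.0, 2.5, 3.8, 2.9]
  post = [3.4, 3.2, 3.6, 4.1, 3.5]
  gain, share = pre_post_gain(pre, post)
  print(f"Mean gain: {gain:+.2f} points; {share:.0%} of participants improved")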

The ATE program solicitation notes that some projects funded through its Small Grants track “will serve as a prototype or pilot” for a subsequent project. As such, ATE small grant recipients should ensure their evaluations generate evidence that their approaches to improving technician education are worth the next level of investment by NSF.

To learn more about…
‒ the Common Guidelines, see EvaluATE’s Evaluation and Research in the ATE Program webinar recording and materials
‒ evaluation of small projects, see EvaluATE’s Low-Cost, High-Impact Evaluation for Small Projects webinar recording and materials
‒ alternative means for establishing causation, see Jane Davidson’s Understand Causes of Outcomes and Impacts webinar recording and slides