
Blog: Creating an Evaluation Design That Allows for Flexibility

Posted on January 13, 2021 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Holly Connell, Evaluator, Kansas State University
Allison Teeter, Assistant Director, Strategic Initiatives and Development

There is no better time than now to talk about the need for flexibility in evaluation design and implementation. It is natural for long-term projects involving many partners, institutions, and objectives to experience changes as they progress. This is especially apparent in the age of the coronavirus pandemic, when many projects face decisions about how to move forward while still needing to make and demonstrate impact. An evaluation design that is too rigid leaves no room for adjustments during implementation.

This blog provides a general guide for building a flexible evaluation design.

Design the Evaluation

Develop an evaluation plan with four to six evaluation questions that align with the project’s goals and objectives but leave ample flexibility for changes throughout the project’s implementation. A sound evaluation design will guide how you conduct the evaluation activities while answering your key evaluation questions. The design will include factors such as:

  • Methods of data collection: Consider your audience and which method will work best and yield the most robust results. If a chosen method does not yield results, consider whether it should be used again later, or used at all. Ensure that no single activity is responsible for collecting data toward all or most of the evaluation questions. It is best practice to triangulate: use multiple methods of data collection to strengthen the quality of your results, and design each data collection to gather evidence toward as many evaluation questions as applicable. That way, if an evaluation activity falls through or does not pan out as anticipated, you will still have data that provide evidence for the evaluation.
  • Sample sizes: Consider at what point a sample size would be too small, or too large, for what you originally planned, and develop a backup plan for that situation. Collect data from a variety of stakeholders; changes in project implementation can affect your target audiences differently. Build this into your evaluation plan by ensuring all applicable target audiences are represented throughout your data collections.
  • Timing of data collection: Be mindful of major events in the lives of the target audience. For example, holding an online survey during exam season will likely reduce your sample size. Do not limit yourself to specific timing of an evaluation activity unless necessary. For example, if a survey can take place at any time during the summer, specify “Summer 2021” rather than “August 2021.”

Keep in mind that most evaluation projects do not go completely as planned and that various aspects of the project may undergo changes.

Being flexible with your design can yield much more meaningful and impactful results than rigidly adhering to the plan originally in place. Changes and revisions may be needed as the project evolves or due to unforeseen circumstances. Don’t hesitate to revise the evaluation plan; just make sure to document and justify the changes being made. Defining a list of potential limitations (e.g., of methods, data sources, or potential bias) while developing your initial evaluation design can help later when deciding whether to stay the course with the original plan or revise the evaluation design.

Find out more about developing evaluation plans in the Pell Institute Evaluation Toolkit.

Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Kelly Robertson, Principal Research Associate, The Evaluation Center
Lyssa Wilson Becho, Research Manager, EvaluATE

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Examples of items we like to include on our cheat sheets are shown in Figures 1-3 and include the following:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with a lot of data collection sources. The example shown in Figure 2 identifies who implements the instrument, when the instrument will be implemented, the purpose of the instrument, and the data source. It is helpful to identify who is responsible for data collection activities in the cheat sheet, so nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved. That way, if we need to update the plan, we can be sure we are working on the most recent version.

 

Figure 1. Cheat Sheet Example, Page 1.

Figure 2. Cheat Sheet Example, Page 2.

Figure 3. Cheat Sheet Example, Page 3.

 

Blog: Evaluation Feedback Is a Gift

Posted on July 3, 2018 in Blog

Christopher Lutz, Chemistry Faculty, Anoka-Ramsey Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Christopher Lutz, chemistry faculty at Anoka-Ramsey Community College. When our project was initially awarded, I was a first-time National Science Foundation (NSF) principal investigator. I understood external evaluation was required for grants but saw it as an administrative hurdle in the grant process. I viewed evaluation as proof for the NSF that we did the project and as a metric for outcomes. While both of these aspects are important, I learned evaluation is also an opportunity to monitor and improve your process and grant. Working with our excellent external evaluators, we built a stronger program in our grant project. You can too, if you are open to evaluation feedback.

Our evaluation team was composed of an excellent evaluator and a technical expert. I started working with both about halfway through the proposal development process (a few months before submission) to ensure they could contribute to the project. I recommend contacting evaluators during the initial stages of proposal development and checking in several times before submission. This gives adequate time for your evaluators to develop a quality evaluation plan and gives you time to understand how to incorporate your evaluator’s advice. Our funded project yielded great successes, but we could have saved time and achieved more if we had involved our evaluators earlier in the process.

After receiving funding, we convened grant personnel and evaluators for a face-to-face meeting to avoid wasted effort at the project start. Meeting in person allowed us to quickly collaborate on a deep level. For example, our project evaluator made real-time adjustments to the evaluation plan as our academic team and technical evaluator worked to plan our project videos and training tools. Include evaluator travel funds in your budget and possibly select an evaluator who is close by. We did not designate travel funds for our Kansas-based evaluator, but his ties to Minnesota and understanding of the value of face-to-face collaboration led him to use some of his evaluation salary to travel and meet with our team.

Here are three ways we used evaluation feedback to strengthen our project:

Example 1: The first-year evaluation report showed a perceived deficiency in the project’s provision of hands-on experience with MALDI-MS instrumentation. In response, we had students prepare small quantities of the liquid solutions themselves rather than giving them pre-mixed solutions, and we let them analyze more lab samples. This change required minimal time but led students to regard the project’s hands-on nature as a strength in the second-year evaluation.

Example 2: Another area for improvement was students’ lack of confidence in analyzing data. In response to this feedback, project staff created Excel data analysis tools and a new training activity that let students practice with literature data prior to analyzing their own. The subsequent year’s evaluation report indicated increased student confidence.

Example 3: Input from our technical evaluator allowed us to create videos that have been used in academic institutions in at least three US states, the UK’s Open University system, and Iceland.

Here are some overall tips:

  1. Work with your evaluator(s) early in the proposal process to avoid wasted effort.
  2. Build in at least one face-to-face meeting with your evaluator(s).
  3. Review evaluation data and reports with the goal of improving your project in the next year.
  4. Consider external evaluators as critical friends who are there to help improve your project.

These practices will help move your project forward and help you achieve a greater impact for all.