There is no better time than now to talk about the need for flexibility in evaluation design and implementation. It is natural for long-term projects involving many partners, institutions, and objectives to experience changes as they progress. This is especially apparent in the age of the coronavirus pandemic, where many projects are faced with decisions about how to move forward while still needing to make and demonstrate impact. An evaluation design that is too rigid does not allow for adjustments throughout the implementation process.

This blog provides a general guide for building a flexible evaluation design.

Design the Evaluation

Develop an evaluation plan with four to six evaluation questions that align with the project’s goals and objectives but allow ample flexibility for changes throughout the project’s implementation. A sound evaluation design will guide how you conduct the evaluation activities while answering your key evaluation questions. The design will include factors such as:

  • Methods of data collection: Consider your audience and which method will work best and yield the most robust results. Further, if the chosen method does not yield results, consider whether it should be used again later, or used at all. Ensure one activity is not responsible for collecting data toward all or most of the evaluation questions. It is best practice to use a triangulation approach; use multiple methods of data collection to strengthen the quality of your results. Gather evidence toward as many evaluation questions as applicable in each of your data collections. If an evaluation activity falls through or does not pan out as anticipated, you will still have data to provide evidence toward the evaluation.
  • Sample sizes: Consider at what point a sample size is too small, or too large, for what you have originally planned. Develop a backup plan for this situation. Collect data from a variety of stakeholders. Changes in project implementation can affect your target audiences differently. Build this into your evaluation plan by ensuring all applicable target audiences are represented throughout your data collections.
  • Timing of data collection: Be mindful of major events in the lives of the target audience. For example, holding an online survey during exam season will likely reduce your sample size. Do not limit yourself to specific timing of an evaluation activity unless necessary. For example, if a survey can take place at any time during the summer, specify “Summer 2021” rather than “August 2021.”

Keep in mind that most evaluation projects do not go completely as planned and that various aspects of the project may undergo changes.

Being flexible with your design can yield much more meaningful and impactful results than rigidly following the plan originally in place. Changes and revisions may be needed as the project evolves, or due to unforeseen circumstances. Don’t hesitate to revise the evaluation plan; just make sure to document and justify the changes being made. Defining a list of potential limitations (e.g., of methods, data sources, potential bias) while developing your initial evaluation design could assist later on when determining whether it is best to stay on course with the original plan or to revise the evaluation design.

Find out more about developing evaluation plans in the Pell Institute Evaluation Toolkit.

About the Authors

Holly Connell


Evaluator, Office of Educational Innovation and Evaluation (OEIE), Kansas State University

Holly Connell is a senior research and evaluation assistant for the Office of Educational Innovation and Evaluation (OEIE) at Kansas State University. She provides support for a wide range of evaluation projects and evaluation teams. She received an M.P.H. in community and behavioral health with an emphasis in health policy and a B.S. in health and human physiology from The University of Iowa. Holly has expertise in data collection and analysis, project coordination, and reporting on data outcomes. She has experience in working with diverse stakeholder groups, facilitating project implementation and evaluation, and providing written and visual summary reports.

Allison Teeter


Assistant Director, Strategic Initiatives and Development, Office of Educational Innovation and Evaluation (OEIE), Kansas State University

Allison Teeter is OEIE’s assistant director for strategic initiatives and development. She provides strategic direction for securing funding to sustain OEIE activities and leads teams that collaborate with clients to create logic models, evaluation plans, and other research and evaluation proposal materials. Allison also leads evaluation teams for a variety of projects focusing on research and extension capacity building, workforce development, and broadening participation. These projects involve collaboration across multiple institutions, states, and disciplines. Allison earned a Ph.D. in sociology with an emphasis in social research methods from Kansas State University.



EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.