We EvaluATE - Evaluation Management

Blog: Shorten the Evaluation Learning Curve: Avoid These Common Pitfalls*

Posted on September 16, 2020

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This EvaluATE blog is focused on getting started with evaluation. It’s oriented to new ATE principal investigators who are getting their projects off the ground, but I think it holds some good reminders for veteran PIs as well. To shorten the evaluation learning curve, avoid these common pitfalls:

Searching for the truth about “what NSF wants from evaluation.” NSF is not prescriptive about what an ATE evaluation should or shouldn’t look like. So, if you’ve been concerned that you’ve somehow missed the one document that spells out exactly what NSF wants from an ATE evaluation—rest assured, you haven’t overlooked anything. But there is information that NSF requests from all projects in annual reports and that you are asked to report on the annual ATE survey. So it’s worthwhile to preview the Research.gov reporting template (bit.ly/nsf_prt) and the ATE annual survey questions (bit.ly/ATEsurvey16). And if you’re doing research, be sure to review the Common Guidelines for Education Research and Development – which are pretty cut-and-dried criteria for different types of research (bit.ly/cg-checklist). Most importantly, put some time into thinking about what you, as a project leader, need to learn from the evaluation. If you’re still concerned about meeting expectations, talk to your program officer.

Thinking your evaluator has all the answers. Even for veteran evaluators, every evaluation is new and has to be tailored to context. Don’t expect your evaluator to produce a detailed, actionable evaluation plan on Day 1. He or she will need to work out the details of the plan with you. And if something doesn’t seem right to you, it’s OK to ask for something different.

 Putting off dealing with the evaluation until you are less busy. “Less busy” is a mythical place and you will probably never get there. I am both an evaluator and a client of evaluation services, and even I have been guilty of paying less attention to evaluation in favor of “more urgent” matters. Here are some tips for ensuring your project’s evaluation gets the attention it needs: (a) Set a recurring conference call or meeting with your evaluator (e.g., every two to three weeks); (b) Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation matters; (c) Give someone other than the PI responsibility for attending to the evaluation—not to replace the PI’s attention, but to ensure the PI and other project members are staying on top of the evaluation and communicating regularly with the evaluator; (d) Commit to using the evaluation results in a timely way—if you do something on a recurring basis, make sure you gather feedback from those involved and use it to improve the next activity.

Assuming you will need your first evaluation report at the end of Year 1. PIs must submit their annual reports to NSF 90 days prior to the end of the current budget period. So if your grant started on September 1, your first annual report is due around June 1. And it will take some time to prepare, so you should probably start writing in early May. You’ll want to include at least some of your evaluation results, so start working with your evaluator early to figure out what information is most important to collect right now.

Veteran PIs: What tips do you have for shortening the evaluation learning curve?  Submit a blog to EvaluATE and tell your story and lessons learned for the benefit of new PIs.

*This blog is a reprint of a 2015 newsletter article.

Blog: Quick Reference Guides Evaluators Can’t Live Without

Posted on August 5, 2020

Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

* This blog was originally published on AEA365 on May 15, 2020:
https://aea365.org/blog/quick-reference-guides-evaluators-cant-live-without-by-kelly-robertson/

My name is Kelly Robertson, and I work at The Evaluation Center at Western Michigan University and EvaluATE, the National Science Foundation–funded evaluation hub for Advanced Technological Education.

I’m a huge fan of quick reference guides. Quick reference guides are brief summaries of important content that can be used to improve practice in real time. They’re also commonly referred to as job aids or cheat sheets.

I found quick reference guides to be especially helpful when I was just learning about evaluation. For example, Thomas Guskey’s Five Critical Levels of Professional Development Evaluation helped me learn about different levels of outcomes (e.g., reaction, learning, organizational support, application of skills, and target population outcomes).

Even with 10-plus years of experience, I still turn to quick reference guides every now and then. Here are a few of my personal favorites:

My colleague Lyssa Becho is also a huge fan of quick reference guides, and together we compiled a list of over 50 evaluation-related quick reference guides. The list draws on the results from a survey we conducted as part of our work at EvaluATE. It includes quick reference guides that 45 survey respondents rated as most useful for each stage of the evaluation process.

Here are some popular quick reference guides from the list:

  • Evaluation Planning: Patton’s Evaluation Flash Cards introduce core evaluation concepts such as evaluation questions, standards, and reporting in an easily accessible format.
  • Evaluation Design: Wingate’s Evaluation Data Matrix Template helps evaluators organize information about evaluation indicators, data collection sources, analysis, and interpretation.
  • Data Collection: Wingate and Schroeter’s Evaluation Questions Checklist for Program Evaluation provides criteria to help evaluators understand what constitutes high-quality evaluation questions.
  • Data Analysis: Hutchinson’s You’re Invited to a Data Party! explains how to engage stakeholders in collective data analysis.
  • Evaluation Reporting: Evergreen and Emery’s Data Visualization Checklist is a guide for the development of high-impact data visualizations. Topics covered include text, arrangement, color, and lines.

If you find that any helpful evaluation-related quick reference guides are missing from the full collection, please contact kelly.robertson@wmich.edu.

Blog: Shift to Remote Online Work: Assets to Consider

Posted on July 22, 2020

Principal Partner, Education Design, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m the principal partner of Education Design in Boston, focusing on STEM program evaluation. I first engaged in online instruction and design in 1994 with CU-SeeMe, a very early desktop videoconferencing app (without audio… that came in 1995!). While I’m certainly no expert in online learning, I’ve observed this newly accelerated shift toward virtual learning for several decades.

During 2020 we’ve seen nearly all of our personal and professional meetings converted to online interactions. In education this has been both challenging and illuminating. For decades, many in our field have planned and designed for the benefits online and digital learning might offer, often with predictive optimism. Clearly the future we anticipated is upon us.

Here, I want to identify some of the key assets and benefits of online and remote learning. I don’t intend to diminish the value of in-person human contact, but rather to help projects thrive in the current environment.

More Embrace than Rejection of Virtual

In nearly all our STEM learning projects, I’ve noticed far more embrace than rejection of virtual learning and socializing spaces.

In one project with partner colleges located in different states, online meetings and remote professional training were part of the original design. Funded in early 2020, the work has begun seamlessly, pandemic notwithstanding, owing to the colleges’ commitment to remote sharing and learning. These partners, leaders from a previous ATE project, will now become mentors for technical college partners, and that work will most likely be done remotely as well.

While forced to change approaches and learning modes, these partners haven’t just accepted remote interactions. Rather than focus on what is missing (site visits will not occur at this time), they’re actively seeking to understand the benefits and assets of connecting remotely.

Opportunities of the Online Context

  1. Videoconferencing presents some useful benefits: facial communication enables trust and human contact. Conversations flow more easily. Chat text boxes provide a platform for comments and freeform notes, and most platforms allow recording of sessions for later review. In larger meetings, group breakout functionality helps facilitate smaller sub-sessions.
  2. Online, sharing and retaining documents and artifacts becomes part of the conversation without depending on the in-person promise to “email it later.”
  3. There is an inherent scalability to online models, whether for instructional activities, such as complete courses or teaching examples, or for materials.
  4. It’s part of tomorrow’s landscape, pandemic or not. Online working, learning, and sharing has leapt forward out of necessity. It’s highly likely that when we return to a post-virus environment, many of the online shifts that have shown value and efficiency will remain in schools and the workforce, leading toward newer hybrid models. If you’re part of the development now, you’re better positioned for those changes.

Tip

As an evaluator, my single most helpful action has been to attend more meetings and events than originally planned, engaging with the team more, building the trust necessary to collect quality data. Your Zoom face is your presence.

Less Change than You’d Think

In most projects, some recalibration has been necessary. But you might be surprised at how few changes are required to continue your project work successfully in this new context; often, a change of perspective is all it takes.

Blog: What I’ve Learned about Evaluation: Lessons from the Field

Posted on June 21, 2020

Coordinator in Educational Leadership, San Francisco State University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m completing my second year as the external evaluator of a three-year ATE project. As a first-time evaluator, I have to confess that I’ve had a lot to learn.

The first surprise was that, in spite of my best intentions, my evaluation process always seems a bit messy. A grant proposal is just that: a proposed plan. It is an idealized vision of what may come. Therefore, the evaluation plan based on that vision is also idealized. Over time, I have had to reconsider my evaluation as grant activities and circumstances evolved—what data is to be collected, how it is to be collected, or whether that data is to be collected at all.

I also thought that my evaluations would somehow reveal something startling to my project team. In reality, my evaluations have served as a mirror to them, acknowledging what they have done and mostly confirming what they already suspect to be true. In a few instances, the manner in which I’ve analyzed data has allowed the team to challenge some assumptions made along the way. In general, though, my work is less revelatory than I had expected.

Similarly, I anticipated my role as a data analyst would be more important. However, this project was designed to use iterative continuous improvement, and so the team has met frequently to analyze and consider anecdotal data and impromptu surveys. This more immediate feedback on project activities was regularly used to guide changes. So while my planned evaluation activities and formal data analysis have been important, they have made a less significant contribution than I had expected.

Instead, I’ve added the greatest value to the team by serving as a critical colleague. Benefiting from distance from the day-to-day work, I can offer a more objective, outsider’s view of the project activities. By doing so, I’m able to help a talented, innovative, and ambitious team consider their options and determine whether or not investing in certain activities promotes the goals of the grant or moves the team tangentially. This, of course, is critical for a small grant on a small budget.

Over my short time involved in this work, I see that by being brought into the project from the beginning, and encouraged to offer guidance along the way, I’ve assessed the progress made in achieving the grant goals, and I have been able to observe and document how individuals work together effectively to achieve those goals. This insight highlights another important service evaluators can offer: to tell the stories of successful teams to their stakeholders.

As evaluators, we are accountable to our project teams and also to their funders. It is in the funders’ interest to learn how teams work effectively to achieve results. I had not expected it, but I now see that it’s in the teams’ interest for the external evaluators to understand their successful collaboration and bring it to light.

Blog: Three Ways to Boost Network Reporting

Posted on April 29, 2020

Assistant Director, Collin College’s National Convergence Technology Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The National Convergence Technology Center (CTC), a national ATE center focusing on IT infrastructure technology, manages a community called the Convergence College Network (CCN). The CCN consists of 76 community colleges and four-year universities across 26 states. Faculty and administrators from the CCN meet regularly to share resources, trade know-how, and discuss common challenges.

Because so much of the CTC’s work is directed to supporting the CCN, we ask the member colleges to submit a “CCN Yearly Report” evaluation each February. The data from that “CCN Yearly Report” informs the reporting we deliver to the NSF, to our National Visiting Committee, and to the annual ATE survey. Each of those three groups needs slightly different information, so we’ve worked hard to include everything in a single evaluation tool.

We’re always trying to improve the “CCN Yearly Report” by refining the questions we ask, removing the questions we don’t need, and making any other adjustments that could improve the response rate. We want to make it easy on the respondents. Our efforts seem to be working: we received 37 reports from the 76 CCN member colleges this past February, a 49% response rate.

 We attribute this success to three strategies.  

  1. Prepare them in advance. We start talking about the February “CCN Yearly Report” due date in the summer. The CCN community gets multiple email reminders, and we often mention the report deadline at our quarterly meetings. We don’t want anyone to say they didn’t know about the report or its deadline. Part of this ongoing preparation also involves making sure everyone in the network understands the importance of the data we’re seeking. We emphasize that we need their help to accurately report grant impact to the NSF.
  2. Share the results. If we go to such lengths to make sure everyone understands the importance of the report up front, it makes sense to do the same after the results are in. We try to deliver a short overview of the results at our July quarterly meeting. Doing so underscores the importance of the survey. Beyond that, research tells us that one key to nurturing a successful community of practice like the CCN is to provide positive feedback about the value of the group (Milton, 2017). By sharing highlights of the report, we remind CCN members that they are a part of a thriving, successful group of educators.
  3. Reward participation. Grant money is a great carrot. Because the CTC so often provides partial travel reimbursement to faculty from CCN member colleges so they can attend conferences and professional development events, we can incentivize the submission of yearly reports. Colleges that want the maximum membership benefits, which include larger travel caps, must deliver a report. Half of the 37 reports we received last year were from colleges seeking those maximum benefits.

 We’re sure there are other grants with similar communities of organizations and institutions. We hope some of these strategies can help you get the data you need from your communities. 

 

References:  

 Milton, N. (2017, January 16). Why communities of practice succeed, and why they fail [Blog post].

Blog: Strategies and Sources for Interpreting Evaluation Findings to Reach Conclusions

Posted on March 18, 2020

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Imagine: You’re an evaluator who has compiled lots of data about an ATE project. You’re preparing to present the results to stakeholders. You have many beautiful charts and compelling stories to share.  

You’re confident you’ll be able to answer the stakeholders’ questions about data collection and analysis. But you get queasy at the prospect of questions like “What does this mean? Is this good? Has our investment been worthwhile?”

It seems like the project is on track and they’re doing good work, but you know your hunch is not a sound basis for a conclusion. You know you should have planned ahead for how findings would be interpreted in order to reach conclusions, and you regret that the task got lost in the shuffle.  

What is a sound basis for interpreting findings to make an evaluative conclusion?  

Interpretation requires comparison. Consider how you make judgments in daily life: If you declare, “this pizza is just so-so,” you are comparing that pizza with other pizza you’ve had, or maybe with your imagined ideal pizza. When you judge something, you’re comparing that thing with something else, even if you’re not fully conscious of that comparison.

The same thing happens in program evaluation, and it’s essential for evaluators to be fully conscious and transparent about what they’re comparing evaluative evidence against. When evaluators don’t make their comparison points explicit, their evaluative conclusions may seem arbitrary, and stakeholders may dismiss them as unfounded.

Here are some sources and strategies for comparisons to inform interpretation. Evaluators can use these to make clear and reasoned conclusions about a project’s performance:  

Performance Targets: Review the project proposal to see if any performance targets were established (e.g., “The number of nanotechnology certificates awarded will increase by 10 percent per year”). When you compare the project’s results with those targets, keep in mind that the original targets may have been either under- or overambitious. Talk with stakeholders to see if those original targets are appropriate or if they need adjustment. Performance targets usually follow the SMART structure (specific, measurable, achievable, relevant, and time-bound).

Project Goals: Goals may be more general than specific performance targets (e.g., “Meet industry demands for qualified CNC technicians”). To make lofty or vague goals more concrete, you can borrow a technique called Goal Attainment Scaling (GAS). GAS was developed to measure individuals’ progress toward desired psychosocial outcomes. The GAS resource from BetterEvaluation will give you a sense of how to use this technique to assess program goal attainment.
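For readers who want to see the arithmetic behind GAS, here is a minimal sketch in Python. It is not part of the original post or the BetterEvaluation resource; it simply applies the conventional Kiresuk-Sherman T-score formula, and the function name (gas_t_score), the goal ratings, the weights, and the 0.3 inter-correlation default are all illustrative assumptions.

    # Minimal Goal Attainment Scaling sketch (illustrative values only).
    # Each goal is rated on the usual -2 (much less than expected) to
    # +2 (much more than expected) scale; 0 means the expected outcome.
    from math import sqrt

    def gas_t_score(ratings, weights=None, rho=0.3):
        """Kiresuk-Sherman T-score: 50 means goals were met exactly as expected."""
        weights = weights or [1] * len(ratings)
        weighted_sum = sum(w * x for w, x in zip(weights, ratings))
        denom = sqrt((1 - rho) * sum(w * w for w in weights) + rho * sum(weights) ** 2)
        return 50 + 10 * weighted_sum / denom

    # Hypothetical project with three goals: one exceeded expectations (+1),
    # one was met as expected (0), and one fell slightly short (-1).
    print(round(gas_t_score([1, 0, -1], weights=[2, 1, 1]), 1))  # -> 53.3

A score above 50 suggests the set of goals was, on balance, exceeded; a score below 50 suggests it fell short.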

Project Logic Model: If the project has a logic model, map your data points onto its components to compare the project’s actual achievements with the planned activities and outcomes expressed in the model. No logic model? Work with project staff to create one using EvaluATE’s logic model template. 

Similar Programs: Look online or ask colleagues to find evaluations of projects that serve similar purposes as the one you are evaluating. Compare the results of those projects’ evaluations to your evaluation results. The comparison can inform your conclusions about relative performance.  

Historical Data: Look for historical project data that you can compare the project’s current performance against. Enrollment numbers and student demographics are common data points for STEM education programs. Find out if baseline data were included in the project’s proposal or can be reconstructed with institutional data. Be sure to capture several years of pre-project data so year-to-year fluctuations can be accounted for. See the practical guidance for this interrupted time series approach to assessing change related to an intervention on the Towards Data Science website. 
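If you want to operationalize this kind of before-and-after comparison, segmented regression is one common way to implement an interrupted time series analysis. The sketch below, in Python with pandas and statsmodels, is only an illustration and is not drawn from the original post or the Towards Data Science article; the enrollment counts and the 2017 project start year are made-up assumptions.

    # Minimal interrupted-time-series sketch (all numbers are hypothetical).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "year": list(range(2013, 2020)),
        "enrollment": [88, 92, 90, 95, 110, 118, 125],  # hypothetical yearly counts
    })
    start_year = 2017  # hypothetical project start

    df["time"] = df["year"] - df["year"].min()                  # overall time trend
    df["post"] = (df["year"] >= start_year).astype(int)         # 1 in project years
    df["time_since"] = (df["year"] - start_year).clip(lower=0)  # years since project start

    # enrollment ~ baseline trend + level change at project start + change in slope afterward
    model = smf.ols("enrollment ~ time + post + time_since", data=df).fit()
    print(model.params)  # "post" = immediate shift; "time_since" = post-project trend change

As the post notes, several years of pre-project data are needed for the baseline trend to be meaningful; with only a handful of data points, the estimates will be very noisy.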

Stakeholder Perspectives: Ask stakeholders for their opinions about the status of the project. You can work with stakeholders in person or online by holding a data party to engage them directly in interpreting findings.

 

Whatever sources or strategies you use, it’s critical that you explain your process in your evaluation reports so it is transparent to stakeholders. Clearly documenting the interpretation process will also help you replicate the steps in the future.

Blog: Untangling the Story When You’re Part of the Complexity

Posted on April 16, 2019

Evaluator, SageFox Consulting Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

I am wrestling with a wicked evaluation problem: How do I balance evaluation, research, and technical assistance work when they are so interconnected? I will discuss strategies for managing different aspects of work and the implications of evaluating something that you are simultaneously trying to change.

Background

In 2017, the National Science Foundation solicited proposals that called for researchers and practitioners to partner in conducting research that directly informs problems of practice through the Research Practice Partnership (RPP) model. I work on one project funded under this grant: Using a Researcher-Practitioner Partnership Approach to Develop a Shared Evaluation and Research Agenda for Computer Science for All (RPPforCS). RPPforCS aims to learn how projects supported under this funding are conducting research and improving practice. It also brings a community of researchers and evaluators across funded partnerships together for collective capacity building.

The Challenge

The RPPforCS work requires a dynamic approach to evaluation, and it challenges conventional boundaries between research, evaluation, and technical assistance. I am both part of the evaluation team for individual projects and part of a program-wide research project that aims to understand how projects are using an RPP model to meet their computer science and equity goals. Given the novelty of the program and research approach, the RPPforCS team also supports these projects with targeted technical assistance to improve their ability to use an RPP model (ideas that typically come out of what we’re learning across projects).

Examples in Practice

The RPPforCS team examines changes through a review of project proposals and annual reports, yearly interviews with a member of each project, and an annual community survey. Using these data collection mechanisms, we ask about the impact of the technical assistance on the functioning of the project. Being able to rigorously document how the technical assistance aspect of our research project influences their work allows us to track change introduced by the RPPforCS team separately from change stemming from the individual project.

We use the technical assistance (e.g., tools, community meetings, webinars) to help projects further their goals and as research and evaluation data collection opportunities to understand partnership dynamics. The technical assistance tools are all shared through Google Suite, allowing us to see how the teams engage with them. Teams are also able to use these tools to improve their partnership practice (e.g., using our Health Assessment Tool to establish shared goals with partners). Structured table discussions at our community meetings allow us to understand more about specific elements of partnership that are demonstrated within a given project. We share all of our findings with the community on a frequent basis to foreground the research effort, while still providing necessary support to individual projects. 

Hot Tips

  • Rigorous documentation: The best way I have found to account for our external impact is rigorous documentation. This may sound like a basic approach to evaluation, but it is the easiest way to track change over time and track change that you have introduced (as opposed to organic change coming from within the project).
  • Multi-use activities: Turn your technical assistance into a data collection opportunity. It both builds capacity within a project and allows you to access information for your own evaluation and research goals.

Blog: The Business of Evaluation: Liability Insurance

Posted on January 11, 2019

Luka Partners LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Bottom line: you may need liability insurance, and you have to pay for it.

The proposal has been funded, you are the named evaluator, you have created a detailed scope of work, and the educational institution has sent you a Professional Services Contract to sign (and read!).

This contract will contain many provisions, one of which is a requirement to carry insurance. I remember the first time I read it: “The contractor shall maintain commercial general liability insurance against any claims that might incur in carrying out this agreement. Minimum coverage shall be $1,000,000.”

I thought, well, this probably doesn’t pertain to me, but then I read further: “Upon request, the contractor is required to provide a Certificate of Insurance.” That got my attention.

You might find what happened next interesting. I called the legal offices at the community college. My first question was, “Can we just strike that from the contract?” No, the college was required by law to include it. Then she explained, “Mike, that sort of liability thing is mostly for contractors coming to do physical work on our campus, in case there was an injury, a brick falling on the head of a student, things like that.” She lowered her voice. “I can tell you we are never going to ask you to show that certificate to us.”

However, sometimes, you will be asked to maintain and provide, on request, professional liability insurance, also called errors and omissions insurance (E&O insurance) or indemnity insurance. This protects your business if you are sued for negligently performing your services, even if you haven’t made a mistake. (OK, I admit, this doesn’t seem likely in our business of evaluation.)

Then the moment of truth came. A decent-sized contract arrived from a major university I shall not name located in Tempe, Arizona, with a mascot that is a devil with a pitchfork. It said if you want a purchase order from us, sign the contract and attach your Certificate of Insurance.

I was between the devil and a hard place. Somewhat naively, I called my local insurance agent (i.e., for home and car.) He actually had never heard of professional liability insurance and promised to get back to me. He didn’t.

I turned to Google, the fount of all things. (Full disclosure, I am not advocating for a particular company—just telling you what I did.) I explored one company that came up high in the search results. Within about an hour, I was satisfied that it was what I needed, had a quote, and typed in my credit card number. In the next hour, I had my policy online and printed out the one-page Certificate of Insurance with the university’s name as “additional insured.” Done.

I would like to clarify one point. I did not choose general liability insurance because my work poses no risk of physical damage to property or people. In the business of evaluation, that is not a risk.

I now have a $2 million professional liability insurance policy that costs $700 per year. As I add clients, if they require it, I can create a one-page certificate naming them as additional insured, at no extra cost.

Liability insurance, that’s one of the costs of doing business.

Blog: How Evaluators Can Use InformalScience.org

Posted on December 13, 2018

Evaluation and Research Manager, Science Museum of Minnesota and Independent Evaluation Consultant

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m excited to talk to you about the Center for Advancement of Informal Science Education (CAISE) and the support they offer evaluators of informal science education (ISE) experiences. CAISE is a National Science Foundation (NSF) funded resource center for NSF’s Advancing Informal STEM Learning program. Through InformalScience.org, CAISE provides a wide range of resources valuable to the EvaluATE community.

Defining Informal Science Education

ISE is lifelong learning in science, technology, engineering, and math (STEM) that takes place across a multitude of designed settings and experiences outside of the formal classroom. The video below is a great introduction to the field.

Outcomes of ISE experiences have some similarities to those of formal education. However, ISE activities tend to focus less on content knowledge and more on other types of outcomes, such as interest, attitudes, engagement, skills, behavior, or identity. CAISE’s Evaluation and Measurement Task Force investigates the outcome areas of STEM identity, interest, and engagement to provide evaluators and experience designers with guidance on how to define and measure these outcomes. Check out the results of their work on the topic of STEM identity (results for interest and engagement are coming soon).

Resources You Can Use

InformalScience.org has a variety of resources that I think you’ll find useful for your evaluation practice.

  1. In the section “Design Evaluation,” you can learn more about evaluation in the ISE field through professional organizations, journals, and projects researching ISE evaluation. The “Evaluation Tools and Instruments” page in this section lists sites with tools for measuring outcomes of ISE projects, and there is also a section about reporting and dissemination. I provide a walk-through of CAISE’s evaluation pages in this blog post: How to Use InformalScience.org for Evaluation.
  2. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects has been extremely useful for me in introducing ISE evaluation to evaluators new to the field.
  3. In the “News & Views” section are several evaluation-related blogs, including a series on working with an institutional review board and another one on conducting culturally responsive evaluations.
  4. If you are not affiliated with an academic institution, you can access peer-reviewed articles in some of your favorite academic journals by becoming a member of InformalScience.org. Click here to join; it’s free! Once you’re logged in, select “Discover Research” in the menu bar and scroll down to “Access Peer-Reviewed Literature (EBSCO).” Journals of interest include Science Education and Cultural Studies of Science Education. If you are already a member of InformalScience.org, you can immediately begin searching the EBSCO Education Source database.

My favorite part of InformalScience.org is the repository of evaluation reports—1,020 reports and growing—which is the largest collection of reports in the evaluation field. Evaluators can use this rich collection to inform their practice and learn about a wide variety of designs, methods, and measures used in evaluating ISE projects. Even if you don’t evaluate ISE experiences, I encourage you to take a minute to search the reports and see what you can find. And if you conduct ISE evaluations, consider sharing your own reports on InformalScience.org.

Do you have any questions about CAISE or InformalScience.org? Contact Melissa Ballard, communications and community manager, at mballard@informalscience.org.

Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Kelly Robertson, Principal Research Associate, The Evaluation Center
Lyssa Wilson Becho, Research Manager, EvaluATE

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Examples of the items we like to include on our cheat sheets are shown in Figures 1-3 and described below:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with a lot of data collection sources. The example shown in Figure 2 identifies who implements the instrument, when the instrument will be implemented, the purpose of the instrument, and the data source. It is helpful to identify who is responsible for data collection activities in the cheat sheet, so nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved. That way, if we need to update the plan, we can be sure we are working on the most recent version.

 

Figure 1. Cheat Sheet Example Page 1.

Figure 2. Cheat Sheet Example Page 2.

Figure 3. Cheat Sheet Example Page 2.