We EvaluATE - Evaluation Management

Blog: What I’ve Learned about Evaluation: Lessons from the Field

Posted on June 21, 2020 in Blog

Coordinator in Educational Leadership, San Francisco State University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m completing my second year as the external evaluator of a three-year ATE project. As a first-time evaluator, I have to confess that I’ve had a lot to learn.

The first surprise was that, in spite of my best intentions, my evaluation process seems always a bit messy. A grant proposal is just that: a proposed plan. It is an idealized vision of what may come. Therefore, the evaluation plan based on that vision is also idealized. Over time, I have had to reconsider my evaluation as grant activities and circumstances evolved—what data is to be collected, how it is to be collected, or whether that data is to be collected at all.

I also thought that my evaluations would somehow reveal something startling to my project team. In reality, my evaluations have served as a mirror to them, acknowledging what they have done and mostly confirming what they already suspect to be true. In a few instances, the manner in which I’ve analyzed data has allowed the team to challenge some assumptions made along the way. In general, though, my work is less revelatory than I had expected.

Similarly, I anticipated my role as a data analyst would be more important. However, this project was designed to use iterative continuous improvement, and so the team has met frequently to analyze and consider anecdotal data and impromptu surveys. This more immediate feedback on project activities was regularly used to guide changes. So while my planned evaluation activities and formal data analysis have been important, they have been a less significant contribution than I had expected.

Instead, I’ve added the greatest value to the team by serving as a critical colleague. Benefiting from distance from the day-to-day work, I can offer a more objective, outsider’s view of the project activities. By doing so, I’m able to help a talented, innovative, and ambitious team consider their options and determine whether or not investing in certain activities promotes the goals of the grant or moves the team tangentially. This, of course, is critical for a small grant on a small budget.

Over my short time involved in this work, I see that by being brought into the project from the beginning, and encouraged to offer guidance along the way, I’ve assessed the progress made in achieving the grant goals, and I have been able to observe and document how individuals work together effectively to achieve those goals. This insight highlights another important service evaluators can offer: to tell the stories of successful teams to their stakeholders.

As evaluators, we are accountable to our project teams and also to their funders. It is in the funders’ interest to learn how teams work effectively to achieve results. I had not expected it, but I now see that it’s in the teams’ interest for the external evaluators to understand their successful collaboration and bring it to light.

Blog: Three Ways to Boost Network Reporting

Posted on April 29, 2020 in Blog

Assistant Director, Collin College’s National Convergence Technology Center

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The National Convergence Technology Center (CTC), a national ATE center focusing on IT infrastructure technology, manages a community called the Convergence College Network (CCN). The CCN consists of 76 community colleges and four-year universities across 26 states. Faculty and administrators from the CCN meet regularly to share resources, trade know-how, and discuss common challenges.

Because so much of the CTC’s work is directed to supporting the CCN, we ask the member colleges to submit a “CCN Yearly Report” evaluation each February. The data from that “CCN Yearly Report” informs the reporting we deliver to the NSF, to our National Visiting Committee, and to the annual ATE survey. Each of those three groups needs slightly different information, so we’ve worked hard to include everything in a single evaluation tool.

We’re always trying to improve that “CCN Yearly Report” by sharpening the questions we ask, removing the questions we don’t need, and making any other adjustments that could improve the response rate. We want to make it easy on the respondents. Our efforts seem to be working: we received 37 reports from the 76 CCN member colleges this past February, a 49% response rate.

 We attribute this success to three strategies.  

  1. Prepare them in advance. We start talking about the February “CCN Yearly Report” due date in the summer. The CCN community gets multiple email reminders, and we often mention the report deadline at our quarterly meetings. We don’t want anyone to say they didn’t know about the report or its deadline. Part of this ongoing preparation also involves making sure everyone in the network understands the importance of the data we’re seeking. We emphasize that we need their help to accurately report grant impact to the NSF.
  2. Share the results. If we go to such lengths to make sure everyone understands the importance of the report up front, it makes sense to do the same after the results are in. We try to deliver a short overview of the results at our July quarterly meeting. Doing so underscores the importance of the survey. Beyond that, research tells us that one key to nurturing a successful community of practice like the CCN is to provide positive feedback about the value of the group (Milton, 2017). By sharing highlights of the report, we remind CCN members that they are part of a thriving, successful group of educators.
  3. Reward participation. Grant money is a great carrot. Because the CTC so often provides partial travel reimbursement to faculty from CCN member colleges so they can attend conferences and professional development events, we can incentivize the submission of yearly reports. Colleges that want the maximum membership benefits, which include larger travel caps, must deliver a report. Half of the 37 reports we received last year were from colleges seeking those maximum benefits.

 We’re sure there are other grants with similar communities of organizations and institutions. We hope some of these strategies can help you get the data you need from your communities. 

 

References:  

 Milton, N. (2017, January 16). Why communities of practice succeed, and why they fail [Blog post].

Blog: Strategies and Sources for Interpreting Evaluation Findings to Reach Conclusions

Posted on March 18, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Imagine: You’re an evaluator who has compiled lots of data about an ATE project. You’re preparing to present the results to stakeholders. You have many beautiful charts and compelling stories to share.  

You’re confident you’ll be able to answer the stakeholders’ questions about data collection and analysis. But you get queasy at the prospect of questions like “What does this mean? Is this good? Has our investment been worthwhile?”

It seems like the project is on track and they’re doing good work, but you know your hunch is not a sound basis for a conclusion. You know you should have planned ahead for how findings would be interpreted in order to reach conclusions, and you regret that the task got lost in the shuffle.  

What is a sound basis for interpreting findings to make an evaluative conclusion?  

Interpretation requires comparison. Consider how you make judgments in daily life: If you declare, “this pizza is just so-so,” you are comparing that pizza with other pizza you’ve had, or maybe with your imagined ideal pizza. When you judge something, you’re comparing that thing with something else, even if you’re not fully conscious of that comparison.

The same thing happens in program evaluation, and it’s essential for evaluators to be fully conscious and transparent about what they’re comparing evaluative evidence against. When evaluators don’t make their comparison points explicit, their evaluative conclusions may seem arbitrary, and stakeholders may dismiss them as unfounded.

Here are some sources and strategies for comparisons to inform interpretation. Evaluators can use these to make clear and reasoned conclusions about a project’s performance:  

Performance Targets: Review the project proposal to see if any performance targets were established (e.g., “The number of nanotechnology certificates awarded will increase by 10 percent per year”). When you compare the project’s results with those targets, keep in mind that the original targets may have been either under- or overambitious. Talk with stakeholders to see if those original targets are appropriate or if they need adjustment. Performance targets usually follow the SMART structure (specific, measurable, achievable, relevant, and time-bound).
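To make this kind of comparison concrete, here is a minimal Python sketch that checks yearly results against a compounding 10-percent-per-year target. The baseline and the yearly certificate counts are invented for illustration only.

    # Hypothetical example: compare certificates awarded against a
    # "10 percent increase per year" performance target.
    baseline = 40                            # certificates awarded in the baseline year (invented)
    actual = {2018: 42, 2019: 50, 2020: 53}  # invented actual counts by project year

    target = baseline
    for year, count in sorted(actual.items()):
        target *= 1.10                       # each year's target is 10% above the prior year's target
        status = "met" if count >= target else "not met"
        print(f"{year}: actual={count}, target={target:.1f} -> target {status}")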

Project Goals: Goals may be more general than specific performance targets (e.g., “Meet industry demands for qualified CNC technicians”). To make lofty or vague goals more concrete, you can borrow a technique called Goal Attainment Scaling (GAS). GAS was developed to measure individuals’ progress toward desired psychosocial outcomes. The GAS resource from BetterEvaluation will give you a sense of how to use this technique to assess program goal attainment.
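For readers who want to see the mechanics, here is a simplified Python sketch of Goal Attainment Scaling. The goals, ratings, and weights are hypothetical, and the summary T-score shown is one common formulation (the Kiresuk-Sherman score with the conventional 0.3 constant), not the only way to roll up GAS ratings.

    import math

    # Each goal is rated on the conventional five-point GAS scale:
    # -2 (much less than expected) ... 0 (expected level) ... +2 (much more than expected).
    # Goals, ratings, and weights below are hypothetical.
    scores = {
        "Meet industry demand for qualified CNC technicians": (+1, 2),  # (rating, weight)
        "Expand employer participation on the advisory board": (0, 1),
        "Increase enrollment in the machining program":        (-1, 1),
    }

    # Kiresuk-Sherman summary T-score; 50 means attainment exactly at expected levels overall.
    rho = 0.3
    numerator = 10 * sum(rating * weight for rating, weight in scores.values())
    denominator = math.sqrt((1 - rho) * sum(w ** 2 for _, w in scores.values())
                            + rho * sum(w for _, w in scores.values()) ** 2)
    print(f"GAS T-score: {50 + numerator / denominator:.1f}")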

Project Logic Model: If the project has a logic model, map your data points onto its components to compare the project’s actual achievements with the planned activities and outcomes expressed in the model. No logic model? Work with project staff to create one using EvaluATE’s logic model template. 

Similar Programs: Look online or ask colleagues to find evaluations of projects that serve similar purposes as the one you are evaluating. Compare the results of those projects’ evaluations to your evaluation results. The comparison can inform your conclusions about relative performance.  

Historical Data: Look for historical project data that you can compare the project’s current performance against. Enrollment numbers and student demographics are common data points for STEM education programs. Find out if baseline data were included in the project’s proposal or can be reconstructed with institutional data. Be sure to capture several years of pre-project data so year-to-year fluctuations can be accounted for. See the practical guidance for this interrupted time series approach to assessing change related to an intervention on the Towards Data Science website. 
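As a rough illustration of the interrupted time series idea, here is a minimal Python sketch of a segmented regression on annual enrollment counts. The numbers and the 2018 project start year are invented, and statsmodels is just one convenient way to fit the model.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Invented annual enrollment counts: several pre-project years plus the project years,
    # so normal year-to-year fluctuation is visible alongside any post-project change.
    df = pd.DataFrame({
        "year":       [2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020],
        "enrollment": [  58,   61,   57,   63,   60,   72,   78,   85],
    })
    start_year = 2018                                           # first project year (hypothetical)
    df["time"] = df["year"] - df["year"].min()                  # overall time trend
    df["post"] = (df["year"] >= start_year).astype(int)         # level shift when the project starts
    df["time_since"] = (df["year"] - start_year).clip(lower=0)  # change in slope after the start

    # Segmented regression: pre-project trend, plus a level shift and a slope change afterward.
    model = smf.ols("enrollment ~ time + post + time_since", data=df).fit()
    print(model.summary())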

Stakeholder Perspectives: Ask stakeholders for their opinions about the status of the project. You can work with stakeholders in person or online by holding a data party to engage them directly in interpreting findings.

 

Whatever sources or strategies you use, it’s critical that you explain your process in your evaluation reports so it is transparent to stakeholders. Clearly documenting the interpretation process will also help you replicate the steps in the future.

Blog: Untangling the Story When You’re Part of the Complexity

Posted on April 16, 2019 in Blog

Evaluator, SageFox Consulting Group

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

I am wrestling with a wicked evaluation problem: How do I balance evaluation, research, and technical assistance work when they are so interconnected? I will discuss strategies for managing different aspects of work and the implications of evaluating something that you are simultaneously trying to change.

Background

In 2017, the National Science Foundation solicited proposals that called for researchers and practitioners to partner in conducting research that directly informs problems of practice through the Research Practice Partnership (RPP) model. I work on one project funded under this grant: Using a Researcher-Practitioner Partnership Approach to Develop a Shared Evaluation and Research Agenda for Computer Science for All (RPPforCS). RPPforCS aims to learn how projects supported under this funding are conducting research and improving practice. It also brings a community of researchers and evaluators across funded partnerships together for collective capacity building.

The Challenge

The RPPforCS work requires a dynamic approach to evaluation, and it challenges conventional boundaries between research, evaluation, and technical assistance. I am both part of the evaluation team for individual projects and part of a program-wide research project that aims to understand how projects are using an RPP model to meet their computer science and equity goals. Given the novelty of the program and research approach, the RPPforCS team also supports these projects with targeted technical assistance to improve their ability to use an RPP model (ideas that typically come out of what we’re learning across projects).

Examples in Practice

The RPPforCS team examines changes through a review of project proposals and annual reports, yearly interviews with a member of each project, and an annual community survey. Using these data collection mechanisms, we ask about the impact of the technical assistance on the functioning of each project. Rigorously documenting how the technical assistance aspect of our research project influences their work allows us to track change effected by the RPPforCS team separately from change stemming from the individual project.

We use the technical assistance (e.g., tools, community meetings, webinars) to help projects further their goals and as research and evaluation data collection opportunities to understand partnership dynamics. The technical assistance tools are all shared through Google Suite, allowing us to see how the teams engage with them. Teams are also able to use these tools to improve their partnership practice (e.g., using our Health Assessment Tool to establish shared goals with partners). Structured table discussions at our community meetings allow us to understand more about specific elements of partnership that are demonstrated within a given project. We share all of our findings with the community on a frequent basis to foreground the research effort, while still providing necessary support to individual projects. 

Hot Tips

  • Rigorous documentation: The best way I have found to account for our external impact is rigorous documentation. This may sound like a basic approach to evaluation, but it is the easiest way to track change over time and to distinguish change that you have introduced from organic change coming from within the project.
  • Multi-use activities: Turn your technical assistance into a data collection opportunity. It both builds capacity within a project and allows you to access information for your own evaluation and research goals.

Blog: The Business of Evaluation: Liability Insurance

Posted on January 11, 2019 in Blog

Luka Partners LLC

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Bottom line: you may need liability insurance, and you have to pay for it.

The proposal has been funded, you are the named evaluator, you have created a detailed scope of work, and the educational institution has sent you a Professional Services Contract to sign (and read!).

This contract will contain many provisions, one of which is a requirement to carry insurance. I remember the first time I read it: “The contractor shall maintain commercial general liability insurance against any claims that might be incurred in carrying out this agreement. Minimum coverage shall be $1,000,000.”

I thought, well, this probably doesn’t pertain to me, but then I read further: “Upon request, the contractor is required to provide a Certificate of Insurance.” That got my attention.

You might find what happened next interesting. I called the legal offices at the community college. My first question was, “Can we just strike that from the contract?” No, the college was required by law to include it. Then she explained, “Mike, that sort of liability thing is mostly for contractors coming to do physical work on our campus, in case there was an injury, a brick falling on the head of a student, things like that.” She lowered her voice. “I can tell you we are never going to ask you to show that certificate to us.”

However, sometimes, you will be asked to maintain and provide, on request, professional liability insurance, also called errors and omissions insurance (E&O insurance) or indemnity insurance. This protects your business if you are sued for negligently performing your services, even if you haven’t made a mistake. (OK, I admit, this doesn’t seem likely in our business of evaluation.)

Then the moment of truth came. A decent-sized contract arrived from a major university I shall not name located in Tempe, Arizona, with a mascot that is a devil with a pitchfork. It said if you want a purchase order from us, sign the contract and attach your Certificate of Insurance.

I was between the devil and a hard place. Somewhat naively, I called my local insurance agent (i.e., the one for home and car). He actually had never heard of professional liability insurance and promised to get back to me. He didn’t.

I turned to Google, the fount of all things. (Full disclosure, I am not advocating for a particular company—just telling you what I did.) I explored one company that came up high in the search results. Within about an hour, I was satisfied that it was what I needed, had a quote, and typed in my credit card number. In the next hour, I had my policy online and printed out the one-page Certificate of Insurance with the university’s name as “additional insured.” Done.

I would like to clarify one point. I did not choose general liability insurance, because my operations pose no risk of physical damage to property or people. In the business of evaluation, that is not a risk.

I now have a $2 million professional liability insurance policy that costs $700 per year. As I add clients, if they require it, I can create a one-page certificate naming them as additional insured, at no extra cost.

Liability insurance, that’s one of the costs of doing business.

Blog: How Evaluators Can Use InformalScience.org

Posted on December 13, 2018 in Blog

Evaluation and Research Manager, Science Museum of Minnesota and Independent Evaluation Consultant

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m excited to talk to you about the Center for Advancement of Informal Science Education (CAISE) and the support they offer evaluators of informal science education (ISE) experiences. CAISE is a National Science Foundation (NSF) funded resource center for NSF’s Advancing Informal STEM Learning program. Through InformalScience.org, CAISE provides a wide range of resources valuable to the EvaluATE community.

Defining Informal Science Education

ISE is lifelong learning in science, technology, engineering, and math (STEM) that takes place across a multitude of designed settings and experiences outside of the formal classroom. The video below is a great introduction to the field.

Outcomes of ISE experiences have some similarities to those of formal education. However, ISE activities tend to focus less on content knowledge and more on other types of outcomes, such as interest, attitudes, engagement, skills, behavior, or identity. CAISE’s Evaluation and Measurement Task Force investigates the outcome areas of STEM identity, interest, and engagement to provide evaluators and experience designers with guidance on how to define and measure these outcomes. Check out the results of their work on the topic of STEM identity (results for interest and engagement are coming soon).

Resources You Can Use

InformalScience.org has a variety of resources that I think you’ll find useful for your evaluation practice.

  1. In the section “Design Evaluation,” you can learn more about evaluation in the ISE field through professional organizations, journals, and projects researching ISE evaluation. The “Evaluation Tools and Instruments” page in this section lists sites with tools for measuring outcomes of ISE projects, and there is also a section about reporting and dissemination. I provide a walk-through of CAISE’s evaluation pages in this blog post: How to Use InformalScience.org for Evaluation.
  2. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects has been extremely useful for me in introducing ISE evaluation to evaluators new to the field.
  3. In the “News & Views” section are several evaluation-related blogs, including a series on working with an institutional review board and another one on conducting culturally responsive evaluations.
  4. If you are not affiliated with an academic institution, you can access peer-reviewed articles in some of your favorite academic journals by becoming a member of InformalScience.org; joining is free. Once you’re logged in, select “Discover Research” in the menu bar and scroll down to “Access Peer-Reviewed Literature (EBSCO).” Journals of interest include Science Education and Cultural Studies of Science Education. If you are already a member of InformalScience.org, you can immediately begin searching the EBSCO Education Source database.

My favorite part of InformalScience.org is the repository of evaluation reports—1,020 reports and growing—which is the largest collection of reports in the evaluation field. Evaluators can use this rich collection to inform their practice and learn about a wide variety of designs, methods, and measures used in evaluating ISE projects. Even if you don’t evaluate ISE experiences, I encourage you to take a minute to search the reports and see what you can find. And if you conduct ISE evaluations, consider sharing your own reports on InformalScience.org.

Do you have any questions about CAISE or InformalScience.org? Contact Melissa Ballard, communications and community manager, at mballard@informalscience.org.

Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018 in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Kelly Robertson, Principal Research Associate, The Evaluation Center
Lyssa Wilson Becho, Research Manager, EvaluATE

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Examples of items we like to include on our cheat sheets are shown in Figures 1-3 and include the following:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with a lot of data collection sources. The example shown in Figure 2 identifies who implements the instrument, when the instrument will be implemented, the purpose of the instrument, and the data source. It is helpful to identify who is responsible for data collection activities in the cheat sheet, so nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved. That way, if we need to update the plan, we can be sure we are working on the most recent version.

 

Figure 1. Cheat Sheet Example, Page 1.

Figure 2. Cheat Sheet Example, Page 2.

Figure 3. Cheat Sheet Example, Page 3.

 

Blog: Getting Your New ATE Project’s Evaluation off to a Great Start

Posted on October 17, 2017 in Blog

Executive Director, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

New ATE project principal investigators (PIs): When you worked with your evaluator to develop an evaluation plan for your project proposal, you were probably focused on the big picture—how to gather credible and meaningful evidence about the quality and impact of your work. To ensure your evaluation achieves its aims, take these three steps now to make sure your project provides the human resources, time, and information needed for a successful evaluation:

  1. Schedule regular meetings with your evaluator. Regular meetings help ensure that your project’s evaluation receives adequate attention. These exchanges should be in real time—via phone call, web meetings, or face-to-face—not just email. See EvaluATE’s new Communication Plan Checklist for ATE PIs and Evaluators for a list of other communication issues to discuss with your evaluator at the start of a project.
  2. Work with your evaluator to create a project evaluation calendar. This calendar should span the life of your project and include the following:
  • Due dates for National Science Foundation (NSF) annual reports: You should include your evaluation reports or at least information from the evaluation in these reports. Work backward from their due dates to determine when evaluation reports should be completed. To find out when your annual report is due, go to Research.gov, enter your NSF login information, select “Awards & Reporting,” then “Project Reports.”
  • Advisory committee meeting dates: You may want your evaluator to attend these meetings to learn more about your project and to communicate directly with committee members.
  • Project events: Activities such as workshops and outreach events present valuable opportunities to collect data directly from the individuals involved in the project. Make sure your evaluator is aware of them.
  • Due dates for new proposal submissions: If submitting to NSF again, you will need to include evidence of your current project’s intellectual merit and broader impacts. Working with your evaluator now will ensure you have compelling evidence to support a future submission.
  3. Keep track of what you’re doing and who is involved. Don’t leave these tasks to your evaluator or wait until the last minute. Taking an active—and proactive—role in documenting the project’s work will save you time and result in more accurate information. Your evaluator can then use that information when preparing their reports. Moreover, you will find it immensely useful to have good documentation at your fingertips when preparing your annual NSF report.
  • Maintain a record of project activities and products—such as conference presentations, trainings, outreach events, competitions, publications—as they are completed. Check out EvaluATE’s project vita as an example.
  • Create a participant database (or spreadsheet): Everyone who engages with your project should be listed. Record their contact information, role in the project, and pertinent demographic characteristics (such as whether a student is a first-generation college student, a veteran, or part of a group that has been historically underrepresented in STEM). You will probably find several uses for this database, such as for follow-up with participants for evaluation purposes, for outreach, and as evidence of your project’s broader impacts.
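If a starting point helps, here is a minimal Python sketch of such a participant spreadsheet. The column names and the sample row are hypothetical; adapt them to your project’s reporting and evaluation needs.

    import csv

    # Hypothetical starter layout for a participant database.
    fields = ["name", "email", "role", "first_generation", "veteran",
              "underrepresented_in_stem", "activities_attended", "date_added"]

    with open("participants.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerow({
            "name": "Jane Doe", "email": "jane@example.edu", "role": "student",
            "first_generation": "yes", "veteran": "no",
            "underrepresented_in_stem": "yes",
            "activities_attended": "Fall 2018 CNC workshop", "date_added": "2018-10-01",
        })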

An ounce of prevention is worth a pound of cure: Investing time up front to make sure your evaluation is on solid footing will save headaches down the road.

Blog: Sustaining Private Evaluation Practices: Overcoming Challenges by Collaborating within Our ATE Community of Practice

Posted on September 27, 2017 in Blog

President, Impact Allies

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

My name is Ben Reid. I am the founder of Impact Allies, a private evaluation firm. The focus of this post is on the business, rather than the technical, aspects of evaluation. My purpose is to present a challenge to sustaining a private evaluation practice and best serving clients, and to propose an opportunity to overcome that challenge by collaborating within our community of practice.

Challenge

Evaluators often act as one-person shows. It is important to give a principal investigator (PI) and project team a single point of contact, and for that evaluator of record to have thorough knowledge of the project and its partners. However, the many different jobs required by an evaluation contract cross too many specialties and personality types for one person, acting alone, to serve a client best.

Opportunity

The first opportunity is to become more professionally aware of our strengths and weaknesses. What are your skills? And equally important, where are you skill-deficient (don’t know how to do it) and where are you performance-deficient (have the skill but aren’t suited for it—because of anxiety, frustration, no enthusiasm, etc.)?

The second opportunity is to build relationships within our community of practice. Get to know other evaluators, where their strengths are unique and whom they use for ancillary services (their book of contractors). (The upcoming NSF ATE PI conference is a great place to do this).

Example

My Strengths: Any evaluator can satisfactorily perform the basics – EvaluATE certainly has done a tremendous job of educating and training us. In this field, I am unique in my strengths of external communications, opportunity identification and assessment, strategic and creative thinking, and partnership development. Those skills and a background in education, marketing and branding, and project management, have helped me contribute broadly, which has proven useful time and again when working with small teams. Knowing clients well and having an entrepreneurial mindset allows me to do what is encouraged in NSF’s 2010 User-Friendly Handbook for Project Evaluation: “Certain evaluation activities can help meet multiple purposes, if used judiciously” (p. 119).

My Weaknesses: However, an area where I could use some outside support is graphic design and data visualization. This work, because it succinctly tells the story and successes of a project, is very important when communicating to multiple stakeholders, in published works, or for promotional purposes. Where I once performed these tasks (with much time and frustration and at a level which isn’t noteworthy), I now contract with an expert—and my clients are thereby better served.

Takeaway

“Focus on the user and all else will follow” is the number one philosophy of Google, the company that has given us so much and in turn done so well for itself. Let us also focus on our clients, serving their needs by building our businesses where we are skilled and enthusiastic, and collaborating (partnering, outsourcing, or referring) within our community of practice where another professional can do a better job for our clients.

Blog: Evaluation’s Role in Helping Clients Avoid GroupThink

Posted on July 10, 2017 in Blog

Senior Evaluator, SmartStart Evaluation & Research

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In December of 2016, I presented a poster on a STEM-C education project at the Restore America’s Estuaries National Summit, co-hosted by The Coastal Society. Having a social science background, I assumed I’d be “out of my depth” amid restoration science topics. However, a documentary on estuarine restoration projects along New Jersey’s Hidden Coast inspired me with insights on the importance of evaluation in helping projects achieve effective outcomes. The film highlights the vital importance of horseshoe crabs as a keystone species beset by myriad threats: Their sustainability as a renewable resource was overestimated and their ecological importance undervalued until serious repercussions became impossible to ignore. Teams of biologists, ecologists, military veterans, communication specialists, and concerned local residents came together to help restore their habitat and raise awareness to help preserve this vital species.

This documentary was not the only project presented at the conference in which diverse teams of scientists, volunteers, educators, and others came together to work toward a shared goal. I began to reflect on how similar the composition of these groups, and their need for successful collaboration, was to that of the teams on many projects I evaluate. Time and again it was revealed that these well-intended interdisciplinary team members often initially struggled to communicate effectively due to different expectations, priorities, and perspectives. Often presenters spoke about ways these challenges had been overcome, most frequently through extensive communication and open exchanges of ideas. However, these presentations represented only the successful projects, promoting their outcomes as inspiration and guidance for others. How often might a lack of open communication lead projects down a different path? When does this occur? And how can an evaluator help the leaders foresee and avoid potential pitfalls?

Often, the route to undesired and unsuccessful outcomes lies in lack of effective communication, which is a common symptom of GroupThink. Imagine the leadership team on any project you evaluate:

  • Are they a highly cohesive group?
  • Do they need to make important decisions, often under deadlines or other pressure?
  • Do members prefer consensus to conflict?

These are ideal conditions for GroupThink, in which team members disregard information that does not fit with their shared beliefs, and dissenting ideas or opinions are unwelcome. Partners’ desire for harmony can lead them to ignore early warning signs of threats to achieving their goals and to make poor decisions.

How do we, as evaluators, help them avoid GroupThink?

  • Examine perceived sustainability objectively: Horseshoe crabs are an ancient species, once so plentiful they covered Atlantic beaches during spawning, each laying 100,000 or more eggs. Because they were perceived as a sustainable species, their usefulness as bait and fertilizer led to overharvesting. Similarly, project leaders may have misconceptions about resources or little knowledge of other factors influencing their capacity to maintain project activities. By using validated measures, such as Washington University’s Program Sustainability Assessment Tool (PSAT), evaluators can raise awareness among project leaders of the factors contributing to sustainability and facilitate planning sessions to identify adaptation strategies and increase chances of success.
  • Investigate unintended consequences of a project’s activities: Horseshoe crabs’ copper-based blood is crucial to the pharmaceutical industry. However, they cannot successfully be raised in captivity. Instead, they are captured, drained of about 30 percent of their blood, and returned to the ocean. While survival rates are 70 percent or more, researchers are becoming concerned that the trauma may affect breeding and other behaviors. Evaluators can help project leaders delve into the cause-and-effect relationships underlying problems by employing techniques such as the Five Whys to identify root causes and by developing logic models to clarify relationships between resources, activities, outputs, and outcomes.
  • Anticipate unintended chains of events: Horseshoe crabs’ eggs are the primary source of protein for migrating birds. The declining population of horseshoe crabs has put the survival of at least three bird species at risk. As evaluators, we have many options (e.g., key informant interviews, risk assessments, negative program theory) for identifying aspects of program activities with potentially negative impacts and making recommendations to mitigate the harm.

A horseshoe crab in a bottle sits on my desk to remind me not to be reticent about offering constructive criticism in order to help project leaders avoid GroupThink.