There is a dearth of research on evaluation practice, particularly research that practitioners can use to improve their own work, according to Nick Smith in "Using Action Design Research to Research and Develop Evaluation Practice," forthcoming in New Directions for Evaluation.1,2

Dr. Smith describes action design research as a "strategy for developing and testing alternative evaluation practices within a case-based, practical reasoning view of evaluation practice." The approach is grounded in the understanding that evaluation is not a "generalizable intervention to be evaluated, but a collection of performances to be investigated" (p. 5). Importantly, action design research is conducted in real time, in authentic evaluation contexts. Its purpose is not only to better understand evaluation practices but also to develop effective solutions to common challenges.

We at EvaluATE are always on the lookout for opportunities to test ideas for improving evaluation practice, as well as our own work in providing evaluation education. A chronic problem for many evaluators is low survey response rates. Since 2009, EvaluATE has presented four to six webinars per year, each concluding with a brief feedback survey. Given that these webinars are about evaluation, a logical conclusion is that participants are favorably disposed toward evaluation and will readily complete the surveys, right? Not really. Our response rates for these surveys range from 34 to 96 percent, with an average of 60 percent. I believe we should consistently be in the 90 to 100 percent range.

So in the spirit of action design research on evaluation, I decided to try a little experiment. At our last webinar, before presenting any content, I showed a slide with the following statement beside an empty checkbox: "I agree to complete the <5-minute feedback survey at the end of this webinar." I noted the importance of evaluation for improving our center's work and for our accountability to the National Science Foundation. We couldn't tell exactly how many people checked the box, but it's clear that several did (play the video clip below). I was optimistic that asking for this public (albeit anonymous) commitment at the start of the webinar would boost response rates substantially.

The result: 72 percent completed the survey. Pretty good, but well short of my standard for excellence. It was our eighth-highest response rate ever and the highest for the past year, but four of the five webinar surveys in 2013-14 had response rates between 65 and 73 percent. As is so often the case in research, the initial results are inconclusive, and we will have to investigate further: How are webinar response rates affected by audience composition, perceptions of the webinar's quality, or asking for participation multiple times? As Nick Smith pointed out in his review of a draft of this blog: "What you are really after is not just a high response rate, but a greater understanding of what affects webinar evaluation response rates. That kind of insight turns your efforts from local problem solving to generalizable knowledge – from Action Design Problem Solving to Action Design Research."

I am sharing this experience not because I found the sure-fire way to get people to respond to webinar evaluation surveys. Rather, I am sharing it as a lesson learned and to invite you to conduct your own action design research on evaluation and tell us about it here on the EvaluATE blog.

1 Disclosure: Nick Smith is the chairperson of EvaluATE’s National Visiting Committee, an advisory panel that reports to the National Science Foundation.

2 Smith, N. L. (in press). Using action design research to research and develop evaluation practice. In P. R. Brandon (Ed.), Recent developments in research on evaluation. New Directions for Evaluation.

About the Author

Lori Wingate


Executive Director, The Evaluation Center, Western Michigan University

Lori has a Ph.D. in evaluation and more than 20 years of experience in the field of program evaluation. She is co-principal investigator of EvaluATE and leads a variety of evaluation projects at WMU focused on STEM education, health, and higher education initiatives. Dr. Wingate has led numerous webinars and workshops on evaluation in a variety of contexts, including CDC University and the American Evaluation Association Summer Evaluation Institute. She is an associate member of the graduate faculty at WMU. She, along with Dr. Kelly Robertson, led the development of The Evaluation Center's online training program, Valeo (valeoeval.com).



EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.