There is a dearth of research on evaluation practice, particularly of the sort that practitioners can use to improve their own work. So argues Nick Smith in “Using Action Design Research to Research and Develop Evaluation Practice,” forthcoming in New Directions for Evaluation.1,2
Action design research is described by Dr. Smith as a “strategy for developing and testing alternative evaluation practices within a case-based, practical reasoning view of evaluation practice.” This approach is grounded in the understanding that evaluation is not a “generalizable intervention to be evaluated, but a collection of performances to be investigated” (p. 5). Importantly, action design research is conducted in real time, in authentic evaluation contexts. Its purpose is not only to better understand evaluation practices, but to develop effective solutions to common challenges.
We at EvaluATE are always on the lookout for opportunities to test out ideas for improving evaluation practice, as well as our own work in providing evaluation education. A chronic problem for many evaluators is low response rates. Since 2009, EvaluATE has presented four to six webinars per year, each concluding with a brief feedback survey. Given that these webinars are about evaluation, a logical conclusion is that participants are favorably disposed toward evaluation and will readily complete the surveys, right? Not really. Our response rates for these surveys range from 34 to 96 percent, with an average of 60 percent. I believe we should consistently be in the 90 to 100 percent range.
So in the spirit of action design research on evaluation, I decided to try a little experiment. At our last webinar, before presenting any content, I showed a slide with the following statement beside an empty checkbox: “I agree to complete the <5-minute feedback survey at the end of this webinar.” I noted the importance of evaluation for improving our center’s work and for our accountability to the National Science Foundation. We couldn’t tell exactly how many people checked the box, but it’s clear that several did (play the video clip below). I was optimistic that asking for this public (albeit anonymous) commitment at the start of the webinar would boost response rates substantially.
The result: 72 percent completed the survey. Pretty good, but well short of my standard for excellence. It was our eighth-highest response rate ever and the highest of the past year, but four of the five webinar surveys in 2013-14 had response rates between 65 and 73 percent. As so often in research, the initial results are inconclusive, and we will have to investigate further: How are webinar response rates affected by audience composition, perceptions of the webinar’s quality, or asking for participation multiple times? As Nick Smith pointed out in his review of a draft of this blog: “What you are really after is not just a high response rate, but a greater understanding of what affects webinar evaluation response rates. That kind of insight turns your efforts from local problem solving to generalizable knowledge – from Action Design Problem Solving to Action Design Research.”
I am sharing this experience not because I found the sure-fire way to get people to respond to webinar evaluation surveys. Rather, I am sharing it as a lesson learned and to invite you to conduct your own action design research on evaluation and tell us about it here on the EvaluATE blog.
1 Disclosure: Nick Smith is the chairperson of EvaluATE’s National Visiting Committee, an advisory panel that reports to the National Science Foundation.
2 Smith, N. L. (in press). Using action design research to research and develop evaluation practice. In P. R. Brandon (Ed.), Recent developments in research on evaluation. New Directions for Evaluation.