Archive: evaluation questions

Blog: Researching Evaluation Practice while Practicing Evaluation

Posted on November 10, 2015 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

There is a dearth of research on evaluation practice, particularly of the sort that practitioners can use to improve their own work (according to Nick Smith in a forthcoming edition of New Directions for Evaluation, “Using Action Design Research to Research and Develop Evaluation Practice”1,2).

Action design research is described by Dr. Smith as a “strategy for developing and testing alternative evaluation practices within a case-based, practical reasoning view of evaluation practice.” This approach is grounded in the understanding that evaluation is not a “generalizable intervention to be evaluated, but a collection of performances to be investigated” (p. 5). Importantly, action design research is conducted in real time, in authentic evaluation contexts. Its purpose is not only to better understand evaluation practices, but to develop effective solutions to common challenges.

We at EvaluATE are always on the lookout for opportunities to test ideas for improving evaluation practice, as well as our own work in providing evaluation education. One chronic problem for many evaluators is low survey response rates. Since 2009, EvaluATE has presented 4 to 6 webinars per year, each concluding with a brief feedback survey. Given that these webinars are about evaluation, a logical assumption is that participants are favorably disposed toward evaluation and will readily complete the surveys, right? Not really. Our response rates for these surveys range from 34 to 96 percent, with an average of 60 percent. I believe we should consistently be in the 90 to 100 percent range.

So in the spirit of action design research on evaluation, I decided to try a little experiment. At our last webinar, before presenting any content, I showed a slide with the following statement beside an empty checkbox: “I agree to complete the <5-minute feedback survey at the end of this webinar.” I noted the importance of evaluation for improving our center’s work and for our accountability to the National Science Foundation.  We couldn’t tell exactly how many people checked the box, but it’s clear that several did (play the video clip below).  I was optimistic that asking for this public (albeit anonymous) commitment at the start of the webinar would boost response rates substantially.

The result: 72 percent completed the survey. Pretty good, but well short of my standard for excellence. It was our eighth-highest response rate ever and the highest for the past year, but four of the five webinar surveys in 2013-14 had response rates between 65 and 73 percent. As is so often the case in research, the initial results are inconclusive, and we will have to investigate further: How are webinar response rates affected by audience composition, perceptions of the webinar’s quality, or asking for participation multiple times? As Nick Smith pointed out in his review of a draft of this blog: “What you are really after is not just a high response rate, but a greater understanding of what affects webinar evaluation response rates. That kind of insight turns your efforts from local problem solving to generalizable knowledge – from Action Design Problem Solving to Action Design Research.”

I am sharing this experience not because I found the sure-fire way to get people to respond to webinar evaluation surveys. Rather, I am sharing it as a lesson learned and to invite you to conduct your own action design research on evaluation and tell us about it here on the EvaluATE blog.

1 Disclosure: Nick Smith is the chairperson of EvaluATE’s National Visiting Committee, an advisory panel that reports to the National Science Foundation.

2 Smith, N. L. (in press). Using action design research to research and develop evaluation practice. In P. R. Brandon (Ed.), Recent developments in research on evaluation. New Directions for Evaluation.

Blog: Finding Opportunity in Unintended Outcomes

Posted on April 15, 2015 in Blog

Research and Evaluation Consultant, Steven Budd Consulting

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Working with underage students carries an increased responsibility for their supervision. Concerns that were never envisioned when the project was designed may arise during the implementation of activities. These unintended consequences may be revealed during an evaluation, presenting an opportunity for PIs and evaluators to both learn and intervene.

One project I’m evaluating includes a website designed for young teens, featuring videos from ATETV and other sources. The site encourages our teen viewers to share information about the site with their peers and to explore links to videos hosted on other popular sites like YouTube. The overarching goal is to attract kids to STEM and technician careers by piquing their interest with engaging and accurate science content. What we didn’t anticipate was the volume of links to pseudoscience, science denial, and strongly political content they would encounter. The question for the PI and Co-PIs became, “How do we engage our young participants in a conversation about good versus not-so-good science and how to think critically about what they see?”

As the internal project evaluator, I first began a conversation with the project PI and senior personnel around the question of responsibility. What is the responsibility of the PIs to engage our underage participants in a conversation about critical thinking and learning, so they can discriminate between questionable and solid content? Such content is readily accessible to young teens as they surf the Web, so a more important question became how the project team might capture this reality and capitalize on it. In this sense, was a teaching moment at hand?

As evaluators on NSF-funded projects, we know that evaluator engagement is critical right from the start. Formative review becomes especially important when even well-designed and well-thought-out activities take unanticipated turns. Our project incorporates a model of internal evaluation, which enables project personnel to gather data and provide real-time assessment of activity outcomes. We then present the data, with comment, to our external evaluator. The evaluation team works with the project leadership to identify concerns as they arise and strategize a response. That response might include refining activities and how they are implemented or creating entirely new activities that address a concern directly.

After thinking it through, the project leadership chose to open a discussion about critical thinking and science content with the project’s teen advisory group. Our response was to:

  • Initiate more frequent “check-ins” with our teen advisers and have more structured conversations around science content and what they think.
  • Sample other teen viewers as they join their peers in the project’s discussion groups and social media postings.
  • Seek to better understand how teens engage with Internet-based content and how they make sense of what they see.
  • Seek new approaches to activities that engage young teens in building their science literacy and critical thinking.

Tips to consider

  • Adjust your evaluation questions to better understand the actual experience of your project’s participants, and then look for the teaching opportunities in response to what you hear.
  • Vigilant evaluation may reveal the first signs of unintended impacts.