Archive: survey

Blog: Using Think-Alouds to Test the Validity of Survey Questions

Posted on February 7, 2019 in Blog

Research Associate, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Those who have spent time creating and analyzing surveys know that surveys are complex instruments that can yield misleading results when not well designed. A great way to test your survey questions is to conduct a think-aloud (sometimes referred to as a cognitive interview). A type of validity testing, a think-aloud asks potential respondents to read through a survey and discuss out loud how they interpret the questions and how they would arrive at their responses. This approach can help identify questions that are confusing or misleading to respondents, questions that take too much time and effort to answer, and questions that don’t seem to be collecting the information you originally intended to capture.

Distorted survey results generally stem from four problem areas associated with the cognitive tasks of responding to a survey question: failure to comprehend, failure to recall, problems summarizing, and problems reporting answers. First, respondents must be able to understand the question. Confusing sentence structure or unfamiliar terminology can doom a survey question from the start.

Second, respondents must be able to access or recall the answer. Problems arise when questions ask for specific details from the distant past or for information the respondent simply does not know.

Third, respondents sometimes remember information in a different form than the survey asks for. For example, respondents might remember what they learned in a program but be unable to attribute that learning to a specific course. This mismatch can lead respondents to answer incorrectly or not at all.

Finally, respondents must translate the answer constructed in their heads to fit the survey’s response options. Confusing or vague answer formats can lead to unclear interpretation of responses. It is helpful to keep these four problem areas in mind when conducting think-alouds.

Here are some tips when conducting a think-aloud to test surveys:

    • Make sure the participant knows the purpose of the activity is to evaluate the survey, not just to respond to it. I have found it works best when participants read the questions aloud.
    • If a participant seems stuck on a particular question, it can help to probe with one of these questions:
      • What do you think this question is asking you?
      • How do you think you would answer this question?
      • Is this question confusing?
      • What does this word/concept mean to you?
      • Is there a different way you would prefer to respond?
    • Remember to give the participant space to think and respond. It can be difficult to hold space for silence, but it is particularly important when asking for thoughtful answers.
    • Ask the participant reflective questions at the end of the survey. For example:
      • Looking back, does anything seem confusing?
      • Is there something in particular you hoped was going to be asked but wasn’t?
      • Is there anything else you feel I should know to truly understand this topic?
    • Perform think-alouds and revisions as an iterative process. This lets you test the changes you make and confirm they resolve the problems you identified; a simple way to keep your probes consistent across sessions is sketched below.
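
If you run these sessions often, it can help to keep the probes and end-of-session reflections in one reusable script so every participant hears consistent prompts. Here is a minimal sketch in Python; the structure and names are my own illustration, not a prescribed protocol:

    # Illustrative facilitation guide for think-aloud sessions.
    # Adapt the probes and reflections to your own survey and protocol.

    PROBES = [
        "What do you think this question is asking you?",
        "How do you think you would answer this question?",
        "Is this question confusing?",
        "What does this word/concept mean to you?",
        "Is there a different way you would prefer to respond?",
    ]

    REFLECTIONS = [
        "Looking back, does anything seem confusing?",
        "Is there something you hoped was going to be asked but wasn't?",
        "Is there anything else I should know to truly understand this topic?",
    ]

    def facilitation_guide(survey_items):
        """Print per-item probes plus end-of-session reflection questions."""
        for item in survey_items:
            print(f"\nItem: {item}")
            for probe in PROBES:
                print(f"  - {probe}")
        print("\nEnd-of-session reflections:")
        for question in REFLECTIONS:
            print(f"  - {question}")

    facilitation_guide(["Q1. How satisfied are you with the program?"])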

Webinar: Basic Principles of Survey Question Development

Posted on January 30, 2019 in Webinars

Presenter(s): Lori Wingate, Lyssa Wilson Becho, Mike Lesiecki
Date(s): February 20, 2019
Time: 1:00-2:00 p.m. Eastern
Recording: https://youtu.be/64nXDeRm-9c

Surveys are a valuable source of evaluation data. Obtaining quality data relies heavily on well-crafted survey items that align with the overall purpose of the evaluation. In this webinar, participants will learn fundamental principles of survey question construction to enhance the validity and utility of survey data. We will discuss the importance of considering data analysis during survey construction and ways to test your survey questions. Participants will receive an overview of survey do’s and don’ts to help apply fundamental principles of survey question development in their own work.

Resources:
Slides
Handout

Newsletter: Survey Says Winter 2016

Posted on January 1, 2016 in Newsletter

On the 2015 ATE survey, 65 of 230 principal investigators (28%) reported spending some portion of their annual budgets on research. Six of these projects were funded as targeted research. Among the other 59 projects, expenditures on research ranged from 1% to 65%, with a median of 14%. With just six targeted research projects and less than a third of all ATE grantees engaging in research, there is immense opportunity within the ATE program to expand research on technician education.
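
As a quick check of the arithmetic above, 65 of 230 works out to the reported 28%. A short Python sketch (the expenditure shares below are hypothetical stand-ins, since the newsletter does not include the raw data):

    from statistics import median

    # Share of principal investigators reporting research spending.
    print(f"{65 / 230:.0%}")  # -> 28%

    # The 14% median summarizes expenditure shares across the other 59
    # projects; these values are made-up placeholders, not survey data.
    shares = [1, 5, 10, 14, 20, 40, 65]
    print(f"median share: {median(shares)}%")  # -> median share: 14%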

 


The full report of 2015 ATE survey findings, along with data snapshots and downloadable graphics, is available from www.evalu-ate.org/annual_survey/.

Report: An Exploratory Test of a Model for Enhancing the Sustainability of NSF’s Advanced Technological Education (ATE) Program

Posted on February 25, 2015 in Resources

The purpose of this research is to examine the effectiveness of a model that purports to improve the sustainability of ATE projects and centers. According to Lawrenz, Keiser, & Lavoie (2003), several models for sustainability have been proposed in the organizational change literature. However, for the most part, the models are advocacy statements based on author experience rather than on empirical studies. These authors concluded there was little research directly related to sustainability.

File: Click Here
Type: Report
Category: ATE Research & Evaluation
Author(s): Wayne Welch

Blog: A Rose Isn’t as Sweet by Any Other Name: Lessons on Subject Lines for Web Surveys

Posted on February 25, 2015 in Blog

Principal Consultant, The Rucks Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Survey developers typically spend a great deal of time on the content of questionnaires. We struggle with which items to include, how to word each question, and whether an item should be closed-ended or open-ended; the list of considerations goes on. After all that effort, we generally spend far less time on a small element that is incredibly important to web surveys: the subject line.

I have come to appreciate the extent to which the subject line acts as a “frame” for a survey. In simple terms, a frame is how a concept is categorized. Framing is the difference between calling an unwanted situation a challenge versus a problem. A significant body of literature suggests that the nature of a frame produces particular types of behavior. For instance, my firm recently disseminated a questionnaire to gather feedback on the services EvaluATE provides. As shown in the chart below, we initially received about 100 responses. With that questionnaire invitation, we used the subject line EvaluATE Services Survey. Based on past experience, we would have expected the next dissemination to garner about 50 responses, but we got closer to 90. So what happened? We had started experimenting with the subject line.

[Chart: responses received across questionnaire disseminations]

EvaluATE’s Director, Lori Wingate, sent out a reminder email with the subject line, What do you think of EvaluATE? When we sent out the actual questionnaire, we used the subject line, Tell us what you think. For the next two iterations of dissemination, we had slightly higher than expected response rates.

For the third dissemination, Lori conducted an experiment. She sent out reminder notices but manipulated the subject lines. There were seven different subject lines in total, each sent to about 100 different individuals. The actual questionnaire disseminated had a constant subject line of Would you share your thoughts today? As you see below, the greatest response rate occurred when the subject line of the reminder was How is EvaluATE doing?, while the lowest response rate was when Just a few days was used.

[Chart: response rates by reminder subject line]

These results aren’t completely surprising. In the 2012 presidential election, the Obama campaign devoted much effort to identifying subject lines that produced the highest response rates and found that a “gap in information” was the most effective. By that logic, one might expect Just a few days to perform well, since it also presents a gap in information; instead, it garnered the lowest response rate. The reason is unclear. One possibility is that the mismatch between the urgency implied by the subject line and the actual importance of the email to respondents made them feel tricked, so they opted not to complete the survey.
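
For readers who want to run a similar experiment, a chi-square test of independence is one common way to check whether response-rate differences across subject lines are larger than chance. The counts below are hypothetical placeholders, since the post reports its results only in the charts:

    from scipy.stats import chi2_contingency

    # (responded, did not respond) per reminder subject line; each line
    # went to roughly 100 people. Counts are illustrative, not the
    # actual experimental data.
    results = {
        "How is EvaluATE doing?": (38, 62),
        "What do you think of EvaluATE?": (31, 69),
        "Just a few days": (19, 81),
    }

    table = [list(counts) for counts in results.values()]
    chi2, p, dof, _ = chi2_contingency(table)

    for line, (yes, no) in results.items():
        print(f"{line!r}: {yes / (yes + no):.0%} response rate")
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

With only about 100 recipients per condition, fairly large differences are needed to reach statistical significance, which is worth keeping in mind when interpreting a single wave of results.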

Taken together, these findings tell us that a “rose by any other name would not smell as sweet”: what something is called does make a difference. So when designing your next web survey, make sure crafting the subject line is part of the design process.

Report: Findings from a survey of ATE projects and centers

Posted on October 8, 2014 in Resources

The purpose of this survey was to better understand the nature of ATE projects and to begin to assess the effectiveness of those grants. The survey was web-based; respondents logged in with individual usernames and passwords to provide the requested information.

File: Click Here
Type: Report
Category: ATE Research & Evaluation
Author(s): EvaluATE