
Blog: Increase Online Survey Response Rates with These Four Tips

Posted on April 3, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Molly Henschel, Elizabeth Peery, and Anne Cosby

 

Greetings! We are Molly Henschel, Beth Peery, and Anne Cosby with Magnolia Consulting. We often use online surveys in our Advanced Technological Education (ATE) projects. Online surveys are an efficient data collection method for answering evaluation questions and providing valuable information to ATE project teams. However, low response rates threaten the credibility and usefulness of survey findings. At Magnolia Consulting, we use proven strategies to increase response rates, which, in turn, helps ensure survey results are representative of the population. We offer the following four strategies to promote high response rates:

1. Ensure the survey is easy to complete. Keep certain factors in mind as you create your survey. For example, is the survey clear and easy to read? Is it free of jargon? Is it concise? You do not want respondents to lose interest in completing a survey because it is difficult to read or too lengthy. To help respondents finish the survey, consider:

      • collaborating with the ATE project team to develop survey questions that are straightforward, clear, and relevant;
      • distributing survey questions across several pages to decrease cognitive load and minimize the need for scrolling;
      • including a progress bar; and
      • ensuring your survey is compatible with both computers and mobile devices.

Once the survey is finalized, coordinate with program staff to send the survey during ATE-related events, when the respondents have protected time to complete the survey.

2. Send a prenotification. Prior to sending the online survey, send a prenotification to all respondents, informing them of the upcoming survey. A prenotification establishes survey trustworthiness, boosts survey anticipation, and reduces the possibility that a potential respondent will disregard the survey. The prenotification can be sent by email, but research shows that using a mixed-mode strategy (i.e., email and postcard) can have positive effects on response rates (Dillman, Smyth, & Christian, 2014; Kaplowitz, Lupi, Couper, & Thorp, 2012). We also found that asking the ATE principal investigator (PI) or co-principal investigators (co-PIs) to send the prenotification helps yield higher response rates.

3. Use an engaging and informative survey invitation. The initial survey invitation is an opportunity to grab your respondents’ attention. First, use a short and engaging subject line that will encourage respondents to open your email. In addition, follow best practices to ensure your email is not diverted into a recipient’s spam folder. Next, make sure the body of your email provides respondents with relevant survey information, including:

      • a clear survey purpose;
      • a statement on the importance of their participation;
      • a realistic survey completion time;
      • a deadline for survey completion;
      • information on any incentives or stipend requirements (if your budget allows);
      • a statement about survey confidentiality;
      • a show of appreciation for time and effort; and
      • contact information for any questions about the survey.

4. Follow up with nonresponders. Track survey response rates on a regular basis (a simple tracking sketch follows the list below). To address low response rates:

      • continue to follow up with nonresponders, sending at least two reminders;
      • investigate potential reasons the survey has not been completed and offer any assistance (e.g., emailing a paper copy) to make survey completion less burdensome;
      • contact nonresponders via a different mode (e.g., phone); or
      • enlist the help of the ATE PI and co-PI to personally follow up with nonresponders. In our experience, the relationship between the ATE PI or co-PI and the respondents can be helpful in collecting those final surveys.
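For evaluators who track invitations in a simple spreadsheet export, the sketch below shows one way to compute the running response rate and pull the list of nonresponders for the next reminder. It is a minimal illustration in base R; the file name and column names ("email" and "completed") are hypothetical, and your survey platform's export will differ.

    # Minimal sketch: track the response rate and flag nonresponders for reminders.
    # Assumes a hypothetical CSV export with columns "email" and "completed" (TRUE/FALSE).
    invites <- read.csv("survey_tracking.csv", stringsAsFactors = FALSE)

    response_rate <- mean(invites$completed)             # proportion of invitees who finished
    cat(sprintf("Current response rate: %.0f%%\n", 100 * response_rate))

    # Nonresponders to contact in the next reminder (by email or a different mode, e.g., phone)
    nonresponders <- invites$email[!invites$completed]
    write.csv(data.frame(email = nonresponders), "reminder_list.csv", row.names = FALSE)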

 

Resources:

Nulty, D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314.

References:

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, mail, and mixed-mode surveys: The tailored design method (4th ed.). New York: Wiley.

Kaplowitz, M. D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The effect of invitation design on web survey response rates. Social Science Computer Review, 30, 339–349.

Blog: From Instruments to Analysis: EvalFest’s Outreach Training Offerings

Posted on February 26, 2019 in Blog

President, Karen Peterman Consulting, Co.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Looking for a quick way to train field researchers? How about quick tips on data management or a reminder about what a p-value is? The new EvalFest website hosts brief training videos and related resources to support evaluators and practitioners. EvalFest is a community of practice, funded by the National Science Foundation, that was designed to explore what we could learn about science festivals by using shared measures. The videos on the website were created to fit the needs of our 25 science festival partners from across the United States. Even though they were created within the context of science festival evaluation, the videos and website have been framed generally to support anyone who is evaluating outreach events.

Here’s what you should know:

  1. The resources are free!
  2. The resources have been vetted by our partners, advisors, and/or other leaders in the STEM evaluation community.
  3. You can download PDF and video content directly from the site.

Here’s what we have to offer:

  • Instruments — The site includes 10 instruments, some of which include validation evidence. The instruments gather data from event attendees, potential attendees who may or may not have attended your outreach event, event exhibitors and partners, and scientists who conduct outreach. Two observation protocols are also available, including a mystery shopper protocol and a timing and tracking protocol.
  • Data Collection Tools — EvalFest partners often need to train staff or field researchers to collect data during events, so this section includes eight videos that our partners have used to provide consistent training to their research teams. Field researchers typically watch the videos on their own and then attend a “just in time” hands-on training to learn the specifics about the event and to practice using the evaluation instruments before collecting data. Topics include approaching attendees to do surveys during an event, informed consent, and online survey platforms, such as QuickTapSurvey and SurveyMonkey.
  • Data Management Videos — Five short videos are available to help you clean and organize your data and begin to explore it in Excel. These videos use the kinds of data that are typically generated by outreach surveys, and they show step-by-step how to do things like filter your data, recode your data, and create pivot tables.
  • Data Analysis Videos — Available in this section are 18 videos and 18 how-to guides that provide quick explanations of things like the p-value, exploratory data analysis, the chi-square test, the independent-samples t-test, and analysis of variance. The conceptual videos describe how each statistical test works in nonstatistical terms. The how-to resources are then provided in both video and written format, and walk users through conducting each analysis in Excel, SPSS, and R. (A brief example in R follows this list.)
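To give a flavor of what the data management and analysis resources cover, here is a small example in base R with invented attendee data and variable names: it recodes a Likert-style rating, builds a simple crosstab (the R analogue of a pivot table), and runs a chi-square test. Treat it as a sketch, not as EvalFest's own material.

    # Hypothetical attendee data: age group and a 5-point satisfaction rating
    attendees <- data.frame(
      age_group    = c("Adult", "Youth", "Adult", "Youth", "Adult", "Youth", "Adult", "Youth"),
      satisfaction = c(5, 4, 3, 5, 4, 2, 5, 4)
    )

    # Recode the 5-point rating into two categories
    attendees$satisfied <- ifelse(attendees$satisfaction >= 4, "Satisfied", "Not satisfied")

    # Crosstab (pivot-table-style summary) of age group by satisfaction category
    tab <- table(attendees$age_group, attendees$satisfied)
    print(tab)

    # Chi-square test of independence (with this few rows, the test is illustration only)
    chisq.test(tab)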

Our website tagline is “A Celebration of Evaluation.” It is our hope that the resources on the site help support STEM practitioners and evaluators in conducting high-quality evaluation work for many years to come. We will continue to add resources throughout 2019. So please check out the website, let us know what you think, and feel free to suggest resources that you’d like us to create next!

Blog: Using Think-Alouds to Test the Validity of Survey Questions

Posted on February 7, 2019 in Blog

Research Associate, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Those who have spent time creating and analyzing surveys know that surveys are complex instruments that can yield misleading results when not well designed. A great way to test your survey questions is to conduct a think-aloud (sometimes referred to as a cognitive interview). A type of validity testing, a think-aloud asks potential respondents to read through a survey and discuss out loud how they interpret the questions and how they would arrive at their responses. This approach can help identify questions that are confusing or misleading to respondents, questions that take too much time and effort to answer, and questions that don’t seem to be collecting the information you originally intended to capture.

Distorted survey results generally stem from four problem areas associated with the cognitive tasks of responding to a survey question: failure to comprehend, failure to recall, problems summarizing, and problems reporting answers. First, respondents must be able to understand the question. Confusing sentence structure or unfamiliar terminology can doom a survey question from the start.

Second, respondents must be able to access or recall the answer. Problems in this area can happen with questions that ask for specific details from far in the past or questions to which the respondent simply does not know the answer.

Third, sometimes respondents remember things in different ways from how the survey is asking for them. For example, respondents might remember what they learned in a program but are unable to assign these different learnings to a specific course. This might lead respondents to answer incorrectly or not at all.

Finally, respondents must translate the answer constructed in their heads to fit the survey response options. Confusing or vague answer formats can lead to unclear interpretation of responses. It is helpful to think of these four problem areas when conducting think-alouds.

Here are some tips when conducting a think-aloud to test surveys:

    • Make sure the participant knows the purpose of the activity is to have them evaluate the survey and not just respond to the survey. I have found that it works best when participants read the questions aloud.
    • If a participant seems to get stuck on a particular question, it might be helpful to probe them with one of these questions:
      • What do you think this question is asking you?
      • How do you think you would answer this question?
      • Is this question confusing?
      • What does this word/concept mean to you?
      • Is there a different way you would prefer to respond?
    • Remember to give the participant space to think and respond. It can be difficult to hold space for silence, but it is particularly important when asking for thoughtful answers.
    • Ask the participant reflective questions at the end of the survey. For example:
      • Looking back, does anything seem confusing?
      • Is there something in particular you hoped was going to be asked but wasn’t?
      • Is there anything else you feel I should know to truly understand this topic?
    • Perform think-alouds and revisions in an iterative process. This will allow you to test the changes you make and confirm they resolved the problems you identified.

Blog: PhotoVoice: A Method of Inquiry in Program Evaluation

Posted on January 25, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Ouen Hunter, Emma Perk, and Michael Harnar

Hello, EvaluATE! We are Ouen Hunter (doctoral student in the Interdisciplinary Ph.D. in Evaluation program, IDPE), Emma Perk (co-PI of EvaluATE at The Evaluation Center), and Michael Harnar (assistant professor in the IDPE), all from Western Michigan University. We recently used PhotoVoice in our evaluation of a Michigan-based Upward Bound (UB) program (a college preparation program focused on 14- to 19-year-old youth living in low-income families in which neither parent has a bachelor’s degree).

PhotoVoice is a method of inquiry that engages participants in creating photographs and short captions in response to specific prompts. The photos and captions provide contextually grounded insights that might otherwise be unreachable by those not living that experience. We opted to use PhotoVoice because the photos and narratives could provide insights into participants’ perspectives that cannot be captured using closed-ended questionnaires.

We created two prompts, in the form of questions, and introduced PhotoVoice in person with the UB student participants (see the instructional handout below). Students used their cell phones to take one photo per prompt. For confidentiality reasons, we also asked the students to avoid taking pictures of human faces. Students were asked to write a two- to three-sentence caption for each photo. The caption was to include a short description of the photo, what was happening in the photo, and the reason for taking the photo.


Figure 1: PhotoVoice Handout

PhotoVoice participation was part of the UB summer programming and overseen by the UB staff. Participants had two weeks to complete the tasks. After receiving the photographs and captions, we analyzed them using MAXQDA 2018. We coded the pictures and the narratives using an inductive thematic approach.

After the preliminary analysis, we went back to our student participants to see if our themes resonated with them. Each photo and caption was printed on a large sheet of paper (see figure 2 below) and posted on the wall. During a gallery walk, students were asked to review each photo and caption combination and to indicate whether they agreed or disagreed with our theme selections (see figure 3). We gave participants stickers and asked them to place the stickers in either the “agree” or “disagree” section at the bottom of each poster. After the gallery walk, we discussed the participants’ ratings to understand their photos and write-ups better.

Figure 2: Gallery walk layout (photo and caption on large pieces of paper)

Figure 3: Participants browsing the photographs

Using the participants’ insights, we finalized the analysis, created a webpage, and developed a two-page report for the program staff. To learn more about our reporting process, see our next blog. Below is a diagram of the activities that we completed during the evaluation.

Figure 4: Activities conducted in the Upward Bound evaluation

The PhotoVoice activity provided us with rich insights that we would not have received from the survey that was previously used. The UB student participants enjoyed learning about and being a part of the evaluation process. The program staff valued the reports and insights the method provided. The exclusion of faces in the photographs enabled us to avoid having to obtain parental permission to release the photos for use in the evaluation and by UB staff. Having the students use cell phone cameras kept costs low. Overall, the evaluation activity went over well with the group, and we plan to continue using PhotoVoice in the future.

Blog: Using Mixed-Mode Survey Administration to Increase Response

Posted on September 26, 2018 in Blog

Program Evaluator, Cold Spring Harbor Laboratory, DNA Learning Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Why aren’t people responding?”

This is the perpetual question asked by anyone doing survey research, and it’s one that I am no stranger to myself. There are common strategies to combat low survey participation, but what happens when they fail?

Last year, I was co-principal investigator on a small Advanced Technological Education (ATE) grant to conduct a nationwide survey of high school biology teachers. This was a follow-up to a 1998 survey done as part of an earlier ATE grant my institution had received. In 1998, the survey was done entirely by mail and had a 35 percent response rate. In 2018, we administered an updated version of this survey to nearly 13,000 teachers. However, this time, there was one big difference: we used email.

After a series of four messages over two months (pre-notice, invitation, and two reminders), an incentivized survey, and intentional targeting of high school biology teachers, our response rate was only 10 percent. We anticipated that teachers would be busy and that a 15-minute survey might be too much for many of them to deal with at school. However, there appeared to be a bigger problem: nearly two-thirds of our messages were never opened and perhaps never even seen.

To boost our numbers, we decided to return to what had worked previously: the mail. Rather than send more emails, we mailed an invitation to individuals who had not completed the survey, followed by postcard reminders. Individuals were reminded of the incentive and directed to a web address where they could complete the survey online. The end result was a 14 percent response rate.
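In rough numbers, the arithmetic behind those percentages looks like the sketch below; the counts are approximate, since the survey went to “nearly 13,000” teachers.

    invited <- 13000                    # approximate number of teachers invited
    email_only_rate <- 0.10             # response rate after the email-only phase
    final_rate <- 0.14                  # response rate after the mail follow-up

    completes_email <- invited * email_only_rate   # about 1,300 completed surveys
    completes_final <- invited * final_rate        # about 1,820 completed surveys
    completes_final - completes_email              # about 520 responses added by the mail push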

I noticed that, particularly when emailing teachers at their school-provided email addresses, many messages never reached the intended recipients. Although a mail-only design is unlikely for most projects today, an alternative is to heed the advice of Millar and Dillman (2011): use a mixed-mode, web-then-mail contact strategy so that spam filters don’t prevent participants from ever seeing the survey. Asking the following questions can help guide your method-of-contact decisions and help you avoid troubleshooting a low response rate mid-survey.

  1. Have I had low response rates from a similar population before?
  2. Do I have the ability to contact individuals via multiple methods?
  3. Is using the mail cost- or time-prohibitive for this particular project?
  4. What is the sample size necessary for my sample to reasonably represent the target population? (One common approximation is sketched after this list.)
  5. Have I already made successful contact with these individuals over email?
  6. Does the survey tool I’m using (SurveyMonkey, Qualtrics, etc.) tend to be snagged by spam filters if I use its built-in invitation management features?
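On question 4, one common approach (though by no means the only one) is the standard sample size formula for estimating a proportion, with a finite population correction. The sketch below applies it to a hypothetical population of about 13,000 teachers at a 5 percent margin of error and 95 percent confidence; treat all of the inputs as assumptions to adjust for your own project.

    # Sample size for estimating a proportion, with a finite population correction
    N <- 13000    # hypothetical population size
    p <- 0.5      # assumed proportion (0.5 is the most conservative choice)
    e <- 0.05     # desired margin of error
    z <- 1.96     # z-value for 95% confidence

    n0 <- z^2 * p * (1 - p) / e^2    # infinite-population sample size (about 384)
    n  <- n0 / (1 + (n0 - 1) / N)    # finite population correction (about 374)
    ceiling(n)

Keep in mind that this is the number of completed responses needed, so the number of invitations has to be scaled up by the response rate you realistically expect.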

These are just some of the considerations that may help you avoid major spam filter issues in your forthcoming project. Spam filters may not be the only reason for a low response rate, but anything that can be done to mitigate their impact is a step toward a better response rate for your surveys.


Reference

Millar, M., & Dillman, D. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75, 249–269.

Blog: Using Rubrics to Demonstrate Educator Mastery in Professional Development

Posted on September 18, 2018 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Nena Bloom
Evaluation Coordinator
Center for Science Teaching and Learning, Northern Arizona University
Lori Rubino-Hare
Professional Development Coordinator
Center for Science Teaching and Learning, Northern Arizona University

We are Nena Bloom and Lori Rubino-Hare, the internal evaluator and principal investigator, respectively, of the Advanced Technological Education project Geospatial Connections Promoting Advancement to Careers and Higher Education (GEOCACHE). GEOCACHE is a professional development (PD) project that aims to enable educators to incorporate geospatial technology (GST) into their classes, to ultimately promote careers using these technologies. Below, we share how we collaborated on creating a rubric for the project’s evaluation.

One important outcome of effective PD is mastery of new knowledge and skills (Guskey, 2000; Haslam, 2010). GEOCACHE defines “mastery” as participants’ effective application of the new knowledge and skills in educator-created lesson plans.

GEOCACHE helps educators teach their content through Project Based Instruction (PBI) that integrates GST. In PBI, students collaborate and critically examine data to solve a problem or answer a question. Educators were provided 55 hours of PD, during which they experienced model lessons integrated with GST content. Educators then created lesson plans tied to the curricular goals of their courses, infusing opportunities for students to learn appropriate subject matter through the exploration of spatial data. “High-quality GST integration” was defined as opportunities for learners to collaboratively use GST to analyze and/or communicate patterns in data to describe phenomena, answer spatial questions, or propose solutions to problems.

We analyzed the educator-created lesson plans using a rubric to determine if GEOCACHE PD supported participants’ ability to effectively apply the new knowledge and skills within lessons. We believe this is a more objective indicator of the effectiveness of PD than solely using self-report measures. Rubrics, widespread methods of assessing student performance, also provide meaningful information for program evaluation (Davidson, 2004; Oakden, 2013). A rubric illustrates a clear standard and set of criteria for identifying different levels of performance quality. The objective is to understand the average skill level of participants in the program on the particular dimensions of interest. Davidson (2004) proposes that rubrics are useful in evaluation because they help make judgments transparent. In program evaluation, scores for each criterion are aggregated across all participants.
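As a concrete illustration of that aggregation step, the short sketch below averages rubric scores by criterion across participants. The criterion names and scores are hypothetical; the point is simply that each criterion is summarized across all scored lesson plans to show the group’s average skill level.

    # Hypothetical rubric scores: one row per lesson plan per criterion (1-4 scale)
    scores <- data.frame(
      participant = c("P1", "P1", "P2", "P2", "P3", "P3"),
      criterion   = c("GST use", "PBI design", "GST use", "PBI design", "GST use", "PBI design"),
      score       = c(3, 4, 2, 3, 4, 4)
    )

    # Average skill level on each criterion, aggregated across all participants
    aggregate(score ~ criterion, data = scores, FUN = mean)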

Practices we used to develop and utilize the rubric included the following:

  • We developed the rubric collaboratively with the program team to create a shared understanding of performance expectations.
  • We focused on aligning the criteria and expectations of the rubric with the goal of the lesson plan (i.e., to use GST to support learning goals through PBI approaches).
  • Because good rubrics existed but were not entirely aligned with our project goal, we chose to adapt existing technology integration rubrics (Britten & Cassady, 2005; Harris, Grandgenett, & Hofer, 2010) and PBI rubrics (Buck Institute for Education, 2017) to include GST use, rather than start from scratch.
  • We checked that the criteria at each level were clearly defined, to ensure that scoring would be accurate and consistent.
  • We pilot tested the rubric with several units, using several scorers, and revised accordingly (a simple scorer-agreement check is sketched below this list).
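One quick way to gauge whether scoring is consistent during such a pilot is to compute exact percent agreement between two scorers, as in the hypothetical sketch below; more formal statistics, such as Cohen’s kappa, are also commonly used.

    # Hypothetical pilot scores from two raters on the same six lesson plans (1-4 scale)
    scorer_a <- c(3, 4, 2, 3, 4, 1)
    scorer_b <- c(3, 4, 3, 3, 4, 1)

    # Exact percent agreement across the pilot set
    mean(scorer_a == scorer_b)   # 5 of 6 scores match, i.e., about 83% agreement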

This authentic assessment of educator learning informed the evaluation. It provided information about the knowledge and skills educators were able to master and how the PD might be improved.


References and resources

Britten, J. S., & Cassady, J. C. (2005). The Technology Integration Assessment Instrument: Understanding planned use of technology by classroom teachers. Computers in the Schools, 22(3), 49–61.

Buck Institute for Education. (2017). Project design rubric. Retrieved from http://www.bie.org/object/document/project_design_rubric

Davidson, E. J. (2004). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Corwin Press.

Harris, J., Grandgenett, N., & Hofer, M. (2010). Testing a TPACK-based technology integration assessment instrument. In C. D. Maddux, D. Gibson, & B. Dodge (Eds.), Research highlights in technology and teacher education 2010 (pp. 323–331). Chesapeake, VA: Society for Information Technology and Teacher Education.

Haslam, M. B. (2010). Teacher professional development evaluation guide. Oxford, OH: National Staff Development Council.

Oakden, J. (2013). Evaluation rubrics: How to ensure transparent and clear assessment that respects diverse lines of evidence. Melbourne, Australia: BetterEvaluation.

Blog: Measure What Matters: Time for Higher Education to Revisit This Important Lesson

Posted on May 23, 2018 in Blog

Senior Partner, Cosgrove & Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If one accepts Peter Drucker’s premise that “what gets measured, gets managed,” then two things are apparent: measurement is valuable, but measuring the wrong thing has consequences. Data collection efforts focusing on the wrong metrics lead to mismanagement and failure to recognize potential opportunities. Focusing on the right measures matters. For example, in Moneyball, Michael Lewis describes how the Oakland Athletics improved their won-loss record by revising player evaluation metrics to more fully understand players’ potential to score runs.

The higher education arena has equally high stakes concerning evaluation. A growing number of states (more than 30 in 2017)[1] have adopted performance funding systems to allocate higher education funding. Such systems focus on increasing the number of degree completers and have been fueled by calls for increased accountability. The logic of performance funding seems clear: Tie funding to the achievement of performance metrics, and colleges will improve their performance. However, research suggests we might want to re-examine this logic. In “Why Performance-Based College Funding Doesn’t Work,” Nicholas Hillman found little to no evidence to support the connection between performance funding and improved educational outcomes.

Why are more states jumping on the performance-funding train? States are under political pressure, facing calls for increased accountability amid limited taxpayer dollars. But do the chosen performance metrics capture the full impact of education? Do the metrics result in more efficient allocation of state funding? The jury may still be out on these questions, but Hillman’s evidence suggests the answer is no.

The disconnect between performance funding and improved outcomes may widen even more when one considers open-enrollment colleges or colleges that serve a high percentage of adult, nontraditional, or low-income students. For example, when a student transfers from a community college (without a two-year degree) to a four-year college, should that behavior count against the community college’s degree completion metric? Might that student have been well-served by their time at the lower-cost college? When community colleges provide higher education access to adult students who enroll on a part-time basis, should they be penalized for not graduating such students within the arbitrary three-year time period? Might those students and that community have been well-served by access to higher education?

To ensure more equitable and appropriate use of performance metrics, colleges and states would be well-served to revisit current performance metrics and more clearly define appropriate metrics and data collection strategies. Most importantly, states and colleges should connect the analysis of performance metrics to clear and funded pathways for improvement. Stepping back to remember that the goal of performance measurement is to help build capacity and improve performance will place both parties in a better position to support and evaluate higher education performance in a more meaningful and equitable manner.

[1] Jones, T., & Jones, S. (2017, November 6). Can equity be bought? A look at outcomes-based funding in higher ed [Blog post].

Blog: Gauging the Impact of Professional Development Activities on Students

Posted on January 17, 2018 in Blog

Executive Director of Emerging Technology Grants, Collin College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Many Advanced Technological Education (ATE) grants hold professional development events for faculty. As the lead for several ATE grants, I have been concerned that while data obtained from faculty surveys immediately after these events are useful, they do not gauge the impact of the training on students. The National Convergence Technology Center’s (CTC) approach, described below, uses longitudinal survey data from the faculty attendees to begin to provide evidence on student impact. I believe the approach is applicable to any discipline.

The CTC provides information technology faculty with a free intensive professional development event titled Working Connections Faculty Development Institute. The institute is held twice per year—five days in the summer and two and a half days in the winter. The summer institute helps faculty members develop the skills needed to create a new course or perform major updates on an existing course; the shorter winter institute provides enough training to update a course. Over the years, more than 1,700 faculty have enrolled in the training. From the beginning, we have gathered attendee feedback via two surveys at the end of the event. One survey focuses on the specific topic track, asking about the extent to which attendees feel that the track’s three learning outcomes were mastered, as well as about the instructor’s pacing, classroom handling, etc. The other survey asks questions about the overall event, including attendees’ reactions to the focused lunch programs and how many new courses have been created or enhanced as a result of past attendance.

The CTC educates faculty members as a vehicle for educating students. To learn how the training impacts students and programs, we also send out longitudinal surveys at 6, 18, 30, 42, and 54 months after each summer Working Connections training. These surveys ask faculty members to report on what they did with what they learned at each training, including how many students they educated as a result of what they learned. Faculty are also asked to report how many certificates and degrees were created or enhanced. Each Working Connections cohort receives a separate survey invitation (i.e., someone who attended two Working Connections will get two separate invitations) that includes a link to the survey as well as a roster to help attendees remember which track they took that year. Participation is voluntary, but over the years, we have consistently and strongly emphasized the importance of getting this longitudinal data so that we can provide some evidence of student impact to the National Science Foundation. Our response rate from surveys sent in January 2016 was 59%.
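To make that cadence concrete, the sketch below generates the 6-, 18-, 30-, 42-, and 54-month follow-up dates for a single cohort; the institute date is hypothetical.

    # Follow-up survey dates for one hypothetical summer cohort
    institute_date   <- as.Date("2016-07-15")   # hypothetical end date of a summer institute
    follow_up_months <- c(6, 18, 30, 42, 54)

    # Dates 0 through 54 months out, then pick the five follow-up points
    monthly <- seq(institute_date, by = "month", length.out = 55)
    monthly[follow_up_months + 1]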

Responses from surveys from 2008 to 2016 indicate the following:

  • Number of students who were taught the subjects trained: 88,591
  • Number of new/enhanced courses: 241
  • Number of sections taught: 4,899
  • Number of new/enhanced certificates and degrees: 310

While these data still do not tell us how individual students benefited from what the attendees learned, they provide evidence that is one step closer to student impact than faculty feedback collected immediately after each training. We are considering what else we can do to further unpack the impact on students, but the Family Educational Rights and Privacy Act’s (FERPA) limitations prevent the CTC from contacting affected students directly without their permission.

Tip: A longitudinal survey effort must be intentional and consistent. Further, it is extremely important to consistently promote the need for attendees to fill out the surveys, both during the professional development events and in emails preceding the annual survey invitations. It is all too easy for attendees to simply delete the longitudinal survey if they do not see the point of completing it.

Blog: Tips and Tricks When Writing Interview Questions

Posted on January 2, 2018 in Blog

Senior Research Analyst, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Developing a well-constructed interview protocol is by no means an easy task. To give us some ideas on how to formulate well-designed interview questions, Michael Patton (2015) dedicates an entire chapter of his book, Qualitative Research & Evaluation Methods, to question formulation. As with any skill, the key to improving your craft is practice. That’s why I wanted to share a few ideas from Patton and contribute some of my own thoughts to help improve how you formulate interview questions.

One approach I find useful is to consider the category of question you are asking. With qualitative research, the categories of questions can sometimes seem infinite. However, Patton provides a few overarching categories, which can help frame your thinking, allowing you to ask questions with more precision and be intentional with what you are asking. Patton (2015, p. 444) suggests general categories and provides a few question examples, which are presented below. So, when trying to formulate a question, consider the type you are interested in asking:

  • Experience and behavior questions: If I had been in the program with you, what would I have seen you doing?
  • Opinion and value questions: What would you like to see happen?
  • Feeling questions: How do you feel about that?
  • Knowledge questions: Who is eligible for this program?
  • Sensory questions: What does the counselor ask you when you meet with her? What does she actually say? (Questions that describe stimuli)
  • Background and demographic questions: How old are you?

Once the category is known and you start writing or editing questions, some additional strategies are to double-check that you are writing truly open-ended questions and to avoid jargon. For instance, don’t assume that your interviewee knows the acronyms you’re using. As evaluators, sometimes we know the program better than the informants! That is why it is so important to write questions with clarity. Everyone wins when you take the time to be intentional and design clear questions: you get better data, and you won’t confuse your interviewee.

Another interesting point from Patton is to make sure you are asking a singular question. Think about when you’re conducting quantitative research and writing an item for a questionnaire—a red flag might be if it’s double-barreled (i.e., asking more than one question simultaneously). For example, a poorly framed questionnaire item about experiences in a mentorship program might read: To what extent do you agree with the statement, “I enjoyed this program and would do it again.” You simply wouldn’t put that item in a questionnaire, since a person might enjoy the program, but wouldn’t necessarily do it again. Although you have more latitude during an interview, it’s always best to write your questions with precision. It’s also a good chance for you to flex some skills when conducting the interview, knowing when to probe effectively if you need to shift the conversation or dive deeper based on what you hear.

It is important to keep in mind that there is no single right way to formulate interview questions. However, by having multiple tools in your tool kit, you can lean on different strategies as appropriate, allowing you to develop stronger and more rigorous qualitative studies.

Reference:

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice. Thousand Oaks, CA: SAGE.