
Blog: Data Cleaning Tips in R*

Posted on July 8, 2020 by David Keyes in Blog

Founder, R for the Rest of Us

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I recently came across a set of data cleaning tips in Excel from EvaluATE, which provides support for people looking to improve their evaluation practice.


Screenshot of the Excel Data Cleaning Tips

As I looked through the tips, I realized that I could show how to do each of the five tips listed in the document in R. Many people come to R from Excel, so having a set of Excel-to-R equivalents (also see this post on a similar topic) is helpful.

The tips are not intended to be comprehensive, but they do show some common things that people do when cleaning messy data. I did a live stream recently where I took each tip listed in the document and showed its R equivalent.

As I mention at the end of the video, while you can certainly do data cleaning in Excel, switching to R enables you to make your work reproducible. Say you have some surveys that need cleaning today. You write your code and save it. Then, when you get 10 new surveys next week, you can simply rerun your code, saving you countless points and clicks in Excel.

You can watch the full video at the very bottom or go through each tip using the videos immediately below; a brief code sketch of the five tips follows the list. I hope it’s helpful in giving an overview of data cleaning in R!

Tip #1: Identify all cells that contain a specific word or (short) phrase in a column with open-ended text

Tip #2: Identify and remove duplicate data

Tip #3: Identify the outliers within a data set

Tip #4: Separate data from a single column into two or more columns

Tip #5: Categorize data in a column, such as class assignments or subject groups
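
If you want a feel for the code before watching the videos, here is a minimal sketch of one way each tip might be handled with the tidyverse. The data frame survey_data and its columns (comments, respondent_id, score, full_name) are hypothetical placeholders for your own data, not the datasets used in the videos.

library(tidyverse)

# Placeholder import: swap in your own exported survey file
survey_data <- read_csv("responses.csv")

# Tip 1: find rows whose open-ended text contains a specific word or phrase
survey_data %>% filter(str_detect(comments, "funding"))

# Tip 2: identify duplicate respondents, then drop exact duplicate rows
survey_data %>% group_by(respondent_id) %>% filter(n() > 1)
survey_data %>% distinct()

# Tip 3: flag outliers in a numeric column (here, more than 2 SDs from the mean)
survey_data %>% filter(abs(score - mean(score)) > 2 * sd(score))

# Tip 4: separate one column into two
survey_data %>% separate(full_name, into = c("first_name", "last_name"), sep = " ")

# Tip 5: categorize a numeric column into groups
survey_data %>% mutate(level = case_when(score >= 90 ~ "high",
                                         score >= 70 ~ "medium",
                                         TRUE        ~ "low"))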

Full Video

*This is a repost of David Keyes’ blog post Data Cleaning Tips in R.

Blog: Improving the Quality of Evaluation Data from Participants

Posted on June 10, 2020 in Blog

Professor of Educational Research and Evaluation, Tennessee Tech University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Improving the Quality of Evaluation Data from Participants

I have had experience evaluating a number of ATE projects, all of them collaborations among several four-year and two-year community colleges. One of the projects’ overarching goals is to provide training to college instructors as well as elementary-, middle-, and high-school teachers, to advance their skills in additive manufacturing and/or smart manufacturing.

The training is delivered through train-the-trainer studios (TTSs), which provide novel hands-on learning experiences to program participants. As with any program, the evaluation of such projects needs to be informed by rich data to capture participants’ entire experience, including the knowledge they gain.

Here’s one lesson I’ve learned from evaluating these projects: Participants’ perception of their value in the project contributes crucially to the quality of data collected.

As the evaluator, one can make participants feel that the data they are being asked to provide (regarding technical knowledge gained, their application of it, and perceptions about all aspects of the training) will be beneficial to the overall program and to them directly or indirectly.

If they feel that their importance is minimal, and that the information they provide will not matter, they will provide the barest amount of information (regardless of the method of data collection employed). If they understand the importance of their participation, they’re more likely to provide rich data.

How can you make them feel valued?

Establish good rapport with each of the participants, particularly if the groups are of a reasonable size. Make sure to interact informally with each participant throughout the training workshop(s). Inquire about their professional work, and ask them about supports that they might need when they return to their workplace.

The responses to the open-ended questions on most of my workshop evaluations have been very rich and detailed, much more so than those from participants to whom I administered the survey remotely, without ever meeting. Program participants want to connect to a real person, not a remote evaluator. In the event that in-person connections are not possible, explore other innovative ways of establishing rapport with individual participants, before and during the program.

How can you improve the quality of data they will provide?

  • Sell the evaluation. Make it clear how the evaluation findings will be used and how the results will benefit the participants and their constituents specifically, directly or indirectly.

  • Share success stories. During the training workshops that I have been evaluating, I’ve shared some previous success stories with participants in order to show them what they are capable of accomplishing as well.

The time and energy you spend building these connections with participants will result in high-quality evaluation data, ultimately helping the program serve participants better.

Blog: Building Capacity for High-Quality Data Collection

Posted on May 13, 2020 in Blog

Director of Evaluation, Thomas P. Miller & Associates, LLC 

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As I, like everyone else, am adjusting to working at home and practicing social distancing, I have been thinking about how to conduct my evaluation projects remotely. One thing that’s struck me as I’ve been retooling evaluation plans and data collection timelines is the need for even more evaluation capacity building around high-quality data collection for our clients. We will continue to rely on our clients to collect program data, and now that they’re working remotely too, a refresher on how to collect data well feels timely.  

Below are some tips and tricks for increasing your clients’ capacity to collect their own high-quality data for use in evaluation and informed decision making.

Identify who will need to collect the data.  

Especially with multiple-site programs or programs with multiple collectors, identifying who will be responsible for data collection and ensuring that all data collectors use the same tools is key to collecting consistent data across the program.

Determine what is going to be collected.  

Examine your tool. Consider the length of the tool, the types of data being requested, and the language used in the tool itself. When creating a tool that will be used by others, be certain that your tool will yield the data that you need and will make sense to those who will be using it. Test the tool with a small group of your data collectors, if possible, before full deployment.  

Make sure data collectors know why the data is being collected.  

When those collecting data understand how the data will be used, they’re more likely to be invested in the process and more likely to collect and report their data carefully. When you emphasize the crucial role that stakeholders play in collecting data, they see the value in the time they are spending using your tools. 

Train data collectors on how to use your data collection tools.  

Walking data collectors through the step-by-step process of using your data collection tool, even if the tool is a basic intake form, will ensure that all collectors use the tool in the same way. It will also ensure they have had a chance to walk through the best way to use the tool before they actually need to implement it. Provide written instructions, too, so that they can refer to them in the future.  

Determine an appropriate schedule for when data will be reported.  

To ensure that your data reporting schedule is not overly burdensome, consider the time commitment that the data collection may entail, as well as what else the collectors have on their plates.  

Conduct regular quality checks of the data collected.

Checking the data regularly allows you to employ a quality control process and promptly identify when data collectors are having issues. Catching these errors quickly will allow for easier course correction.  

Blog: Understanding Data Literacy

Posted on February 19, 2020 in Blog

Dean of Institutional Effectiveness, Coastline College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In today’s data-filled society, institutions abound in data but lack data literacy: the ability to transform data into usable information and then use that knowledge to drive actionable change.

Data literacy is a foundational driver in understanding institutional capacity to gather, consume, and utilize various data to build insight and inform actions. Institutions can use a variety of strategies to determine the maturity of their data utilization culture. The following list provides a set of methods that can be used to better understand your organization’s level of data literacy:

  • Conduct a survey that provides insight into areas of awareness, access, application, and action associated with data utilization. For example, Coastline College uses a data utilization maturity index tool, the EDUCAUSE benchmark survey, and annual utilization statistics to get this information. The survey can be conducted in person or electronically, based on the access and comfort employees or stakeholders have with technology. The goal of this strategy is to gain surface-level insight into the maturity of your organizational data culture.
  • Lead focus groups with a variety of stakeholders (e.g., faculty members, project directors) to gather rich insight into ideas about and challenges associated with data. The goal of this approach is to glean a deeper understanding of the associated “whys” found in broader assessments (e.g., observations, institutional surveys, operational data mining).
  • Compare your organizational infrastructure and operations to similar institutions that have been identified as having successful data utilization. The goal of this strategy is to help visualize and understand what a data culture is, how your organization compares to others, and how your organization can adapt or differentiate its data strategy (or adopt another one). A few resources I would recommend include Harvard Business Review’s Analytics topic library, EDUCAUSE’s Analytics library, What Works Clearinghouse, McKinsey & Company’s data culture article, and Tableau’s article on data culture.
  • Host open discussions with stakeholders (e.g., faculty members, project directors, administrators) about the benefits, disadvantages, optimism, and fears related to data. This method can build awareness, interest, and insight to support your data planning. The goal of this approach is to effectively prepare and address any challenges prior to your data plan investment and implementation.

Based on the insight collected, organizational leadership can develop an implementation plan to adopt and adapt tools, operations, and trainings to build awareness, access, application, and action associated with data utilization.

Avoid the following pitfalls:

  • Investing in a technology prior to engaging stakeholders and understanding the organizational data culture. In these instances, the technology will help but will not be the catalyst or foundation to build the data culture. The “build it and they will come” theory is not applicable in today’s data society. Institutions must first determine what they are seeking to achieve. Clay Christensen’s Jobs to Be Done Theory is a resource that may bring clarity to this matter.
  • Assuming individuals have a clear understanding of the technical aspects of data. This assumption could lead to misuse or limited use of your data. To address this issue, institutions need to conduct an assessment to understand the realities in which they are operating.
  • Hiring for a single position to lead the effort of building a data culture. In this instance, a title does not validate the effort or ensure that an institution has a data-informed strategy and infrastructure. To alleviate this challenge, institutions must invest in teams and continuous trainings. For example, Coastline College has an online data coaching course, in-person hands-on data labs, and open discussion forums and study sessions to learn about data access and utilization.

As institutions better understand and foster their data cultures, the work of evaluators can be tailored and utilized to meet project stakeholders (e.g., project directors, faculty members, supporters, and advisory boards) where they are. By understanding institutional data capacity, evaluators can support continuous improvement and scaling through the provision of meaningful and palatable evaluations, presentations, and reports.

Blog: How I Came to Learn R, and Why You Should Too!

Posted on February 5, 2020 by David Keyes in Blog

Founder, R for the Rest of Us

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


A few years ago, I left my job on the research team at the Oregon Community Foundation and started working as an independent evaluation consultant. No longer constrained by the data analysis software choices made by others, I was free to use whatever tool I wanted. As an independent consultant, I couldn’t afford proprietary software such as SPSS, so I used Excel. But the limits of Excel quickly became apparent, and I went in search of other options.

I had heard of R, but it was sort of a black box in my mind. I knew it was a tool for data analysis and visualization, but I had no idea how to use it. I had never coded before, and the prospect of learning was daunting. But my desire to find a new tool was strong enough that I decided to take up the challenge of learning R.

My journey to successfully using R was rocky and circuitous. I would start many projects in R before finding I couldn’t do something, and I would have to slink back to Excel. Eventually, though, it clicked, and I finally felt comfortable using R for all of my work.

The more I used R, the more I came to appreciate its power.

  1. The code that had caused me such trouble when I was learning became second nature. And I could reuse code in multiple projects, so my workflow became more efficient.
  2. The data visualizations I made in R were far better and more varied than anything I had produced in Excel.
  3. The most fundamental shift in my work, though, has come from using RMarkdown. This tool enables me to go from data import to final report in R, avoiding the dance across, say, SPSS (for analyzing data), Excel (for visualizing data), and Word (for reporting). And when I receive new data, I can simply rerun my code, automatically generating my report (see the sketch just after this list).
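
To make that concrete, here is a minimal sketch of what such an RMarkdown file can look like. The file name surveys.csv and the satisfaction column are placeholders, not files from an actual project.

---
title: "Participant Survey Report"
output: word_document
---

```{r setup, include=FALSE}
# Load packages and import the raw data (placeholder file name)
library(tidyverse)
surveys <- read_csv("surveys.csv")
```

We received `r nrow(surveys)` survey responses so far.

```{r satisfaction-plot, echo=FALSE}
# A chart regenerated from the data each time the report is knit
ggplot(surveys, aes(x = satisfaction)) +
  geom_bar()
```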

In 2019, I started R for the Rest of Us to help evaluators and others learn to embrace the power of R. Through online courses, workshops, coaching, and custom training for organizations, I’ve helped many people transition to R.

I’m delighted to share some videos here that show you a bit more about what R is and why you might consider learning it. You’ll learn about what importing data into R looks like and how you can use a few lines of code to analyze your data, and you’ll see how you can do this all in RMarkdown. The videos should give you a good sense of what working in R looks like and help you decide if it makes sense for you to learn it.

I always tell people considering R that it is challenging to learn. But I also tell them that the time and energy you invest in learning R is very much worth it in the end. Learning R will not only improve the quality of your data analysis, data visualization, and workflow, but also ensure that you have access to this powerful tool forever—because, oh, did I mention that R is free? Learning R is an investment in your current self and your future self. What could be better than that?

R Video Series

Blog: Increasing Response Rates*

Posted on January 9, 2020 in Blog

Founder and President, EvalWorks, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Higher response rates yield larger samples and reduce the risk of nonresponse bias. Research on ways to increase response rates for mail and Internet surveys suggests that the following steps will improve the odds that participants will complete and return your survey, whether it is administered by Internet or mail.

Make the survey as salient as possible to potential respondents.
Relevance can be tested with a small group of people similar to your respondents.

If possible, use Likert-type questions, versus open-ended questions, to increase response rates. 
Generally, the shorter the survey appears to respondents, the better.

Limit the number of questions of a sensitive nature, when possible.
Additionally, if possible, make the survey anonymous, as opposed to confidential.

Include prenotification and follow-ups to survey respondents.
Personalizing these contacts will also increase response rates. In addition, surveys conducted by noncommercial institutions (e.g., colleges) obtain higher response rates than those conducted by commercial institutions.

Provide additional copies of or links to the survey.
This can be done as part of follow-up with potential respondents.

Provide incentives. 
Incentives included in the initial mailing produce higher return rates than those contingent upon survey return, with twice the increase when monetary (versus nonmonetary) incentives are included up-front.

Consider these additional strategies for mail surveys:
Sending surveys by recorded delivery, using colored paper, and providing addressed, stamped return envelopes.

Consider the following when conducting an Internet survey:
Include a visual indicator of how much of the survey respondents have completed—or, alternatively, how much they have left to complete.

Although there are no hard-and-fast rules for what constitutes an appropriate response rate, many government agencies require response rates of 80 percent or higher before they are willing to report results. If you have conducted a survey and still have a low response rate, it is important to make additional efforts or use a different survey mode to reach non-respondents; however, you should ensure that they do not respond differently than initial respondents and that the survey mode itself does not introduce bias.

 

*This blog is a reprint of an article from an EvaluATE newsletter published in spring 2010.

Blog: LinkedIn for Alumni Tracking

Posted on June 13, 2019 by Benjamin Reid and Kevin Cooper in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

LinkedIn for Alumni Tracking

Benjamin Reid, President of Impact Allies
Kevin Cooper, PI of RCNET and Dean of Advanced Technology at IRSC

Post-program outcomes for students are obviously huge indicators of success and primary metrics for measuring medium and long-term outcomes and impacts. EvaluATE’s 2019 revised Advanced Technological Education (ATE) Annual Survey states, “ATE program stakeholders would like to know more about post-program outcomes for students.” It lists the types of data sought:

    • Job placement
    • Salary
    • Employer satisfaction
    • Pursuit of additional STEM education
    • Acquisition of industry certifications or licenses

The survey also asks for the sources used to collect this data, giving the following choices:

    • Institutional research office
    • Survey of former students
    • Local economic data
    • Personal outreach to former students
    • State longitudinal data systems
    • Other (describe)

This blog introduces an “Other” data source: LinkedIn Alumni Tool (LAT).

LAT is data rich and free, yet underutilized. Each alum’s professional information is readily available (i.e., no permissions process for the researcher) and personally updated. The information is also remarkably accurate, because the profile’s open visibility and network effects help ensure honesty. These factors make LAT a great tool for quick health checks and an alternative to contacting each person and requesting the same information.

Even better, LinkedIn is a single tool that is useful for evaluators, principal investigators, instructors, and students. For example, a couple of years ago, Kevin, Principal Investigator for the Regional Center for Nuclear Education and Training (RCNET), and I (RCNET’s evaluator) realized that our respective work was leading us to use the same tool — LinkedIn — and that we should co-develop our strategies for connecting and communicating with students and alumni on this medium. Kevin uses it to help RCNET’s partner colleges communicate opportunities (jobs, internships, scholarships, continued education) and develop soft skills (professional presentation, networking, awareness of industry news). I use it to glean information about students’ educational and professional experiences leading up to and during their programs and to track their paths and outcomes after graduation. LinkedIn is also a user-centric tool for students that — rather than ceasing to be useful after graduation — actually becomes more useful.

When I conducted a longitudinal study of RCNET’s graduates across the country over the preceding eight years, I used LinkedIn for two purposes: triangulation and connecting with alumni via another channel, because after college many students change their email addresses and telephone numbers. More than 30 percent of the alumni who responded were reached via LinkedIn, as their contact information on file with the colleges had since changed.

Using LAT, I viewed their current and former employers, job positions, promotions, locations, skills, and further education (and there were insignificant differences between what alumni reported on the survey and interview and what was on their LinkedIn profiles). That is, three of the five post-program outcomes for students of interest to ATE program stakeholders (plus a lot more) can be seen for many alumni via LinkedIn.

Visit https://university.linkedin.com/higher-ed-professionals for short videos about how to use the LinkedIn Alumni Tool and many others. Many of the videos take an institutional perspective, but here is a tip on how to pinpoint program-specific students and alumni. Find your college’s page, click Alumni, and type your program’s name in the search bar. This will filter the results only to the people in your program. It’s that simple.

 

Blog: Increase Online Survey Response Rates with These Four Tips

Posted on April 3, 2019 by Molly Henschel, Elizabeth Peery, and Anne Cosby in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Molly Henschel, Researcher and Evaluator, Magnolia Consulting, LLC
Elizabeth Peery, Researcher and Evaluator, Magnolia Consulting, LLC
Anne Cosby, Researcher and Evaluation Associate, Magnolia Consulting, LLC

 

Greetings! We are Molly Henschel, Beth Perry, and Anne Cosby with Magnolia Consulting. We often use online surveys in our Advanced Technological Education (ATE) projects. Online surveys are an efficient data collection method for answering evaluation questions and providing valuable information to ATE project teams. Low response rates threaten the credibility and usefulness of survey findings. At Magnolia Consulting, we use proven strategies to increase response rates, which, in turn, ensures survey results are representative of the population. We offer the following four strategies to promote high response rates:

1. Ensure the survey is easy to complete. Keep certain factors in mind as you create your survey. For example, is the survey clear and easy to read? Is it free of jargon? Is it concise? You do not want respondents to lose interest in completing a survey because it is difficult to read or too lengthy. To help respondents finish the survey, consider:

      • collaborating with the ATE project team to develop survey questions that are straightforward, clear, and relevant;
      • distributing survey questions across several pages to decrease cognitive load and minimize the need for scrolling;
      • including a progress bar; and
      • ensuring your survey is compatible with both computers and mobile devices.

Once the survey is finalized, coordinate with program staff to send the survey during ATE-related events, when the respondents have protected time to complete the survey.

2. Send a prenotification. Prior to sending the online survey, send a prenotification to all respondents, informing them of the upcoming survey. A prenotification establishes survey trustworthiness, boosts survey anticipation, and reduces the possibility that a potential respondent will disregard the survey. The prenotification can be sent by email, but research shows that using a mixed-mode strategy (i.e., email and postcard) can have positive effects on response rates (Dillman, Smyth, & Christian, 2014; Kaplowitz, Lupi, Couper, & Thorp, 2012). We also found that asking the ATE principal investigator (PI) or co-investigators (co-PIs) to send the prenotification helps yield higher response rates.

3. Use an engaging and informative survey invitation. The initial survey invitation is an opportunity to grab your respondents’ attention. First, use a short and engaging subject line that will encourage respondents to open your email. In addition, follow best practices to ensure your email is not diverted into a recipient’s spam folder. Next, make sure the body of your email provides respondents with relevant survey information, including:

      • a clear survey purpose;
      • a statement on the importance of their participation;
      • realistic survey completion time;
      • a deadline for survey completion;
      • information on any stipend requirements or incentives (if your budget allows);
      • a statement about survey confidentiality;
      • a show of appreciation for time and effort; and
      • contact information for any questions about the survey.

4.  Follow up with nonresponders. Track survey response rates on a regular basis. To address low response rates:

      • continue to follow up with nonresponders, sending at least two reminders;
      • investigate potential reasons the survey has not been completed and offer any assistance (e.g., emailing a paper copy) to make survey completion less burdensome;
      • contact nonresponders via a different mode (e.g., phone); or
      • enlist the help of the ATE PI and co-PI to personally follow up with nonresponders. In our experience, the relationship between the ATE PI or co-PI and the respondents can be helpful in collecting those final surveys.

 

Resources:

Nulty, D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314.

References:

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, mail, and mixed-mode surveys: The tailored design method (4th ed.). New York: Wiley.

Kaplowitz, M. D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The effect of invitation design on web survey response rates. Social Science Computer Review, 30, 339–349.

Blog: From Instruments to Analysis: EvalFest’s Outreach Training Offerings

Posted on February 26, 2019 by Karen Peterman in Blog

President, Karen Peterman Consulting, Co.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Looking for a quick way to train field researchers? How about quick tips on data management or a reminder about what a p-value is? The new EvalFest website hosts brief training videos and related resources to support evaluators and practitioners. EvalFest is a community of practice, funded by the National Science Foundation, that was designed to explore what we could learn about science festivals by using shared measures. The videos on the website were created to fit the needs of our 25 science festival partners from across the United States. Even though they were created within the context of science festival evaluation, the videos and website have been framed generally to support anyone who is evaluating outreach events.

Here’s what you should know:

  1. The resources are free!
  2. The resources have been vetted by our partners, advisors, and/or other leaders in the STEM evaluation community.
  3. You can download PDF and video content directly from the site.

Here’s what we have to offer:

  • Instruments — The site includes 10 instruments, some of which include validation evidence. The instruments gather data from event attendees, potential attendees who may or may not have attended your outreach event, event exhibitors and partners, and scientists who conduct outreach. Two observation protocols are also available, including a mystery shopper protocol and a timing and tracking protocol.
  • Data Collection Tools — EvalFest partners often need to train staff or field researchers to collect data during events, so this section includes eight videos that our partners have used to provide consistent training to their research teams. Field researchers typically watch the videos on their own and then attend a “just in time” hands-on training to learn the specifics about the event and to practice using the evaluation instruments before collecting data. Topics include approaching attendees to do surveys during an event, informed consent, and online survey platforms, such as QuickTapSurvey and SurveyMonkey.
  • Data Management Videos — Five short videos are available to help clean and organize your data and to help begin to explore it in Excel. These videos include the kinds of data that are typically generated by outreach surveys, and they show step-by-step how to do things like filter your data, recode your data, and create pivot tables.
  • Data Analysis Videos — Available in this section are 18 videos and 18 how-to guides that provide quick explanations of things like the p-value, exploratory data analysis, the chi-square test, independent-samples t-test, and analysis of variance. The conceptual videos describe how each statistical test works in nonstatistical terms. The how-to resources are then provided in both video and written format, and walk users through conducting each analysis in Excel, SPSS, and R (a brief R sketch of these analyses appears just below).
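
As a taste of what those guides cover, here is a minimal R sketch of the three named tests, using a small simulated data set. The events data frame and its columns are hypothetical stand-ins; the EvalFest guides themselves walk through the same analyses in Excel and SPSS as well.

# Simulate a small, hypothetical set of event survey responses
set.seed(1)
events <- data.frame(
  returning       = sample(c("yes", "no"), 200, replace = TRUE),
  would_recommend = sample(c("yes", "no"), 200, replace = TRUE),
  rating          = rnorm(200, mean = 4, sd = 0.5),
  group           = sample(c("attendee", "exhibitor"), 200, replace = TRUE),
  site            = sample(c("Site A", "Site B", "Site C"), 200, replace = TRUE)
)

# Chi-square test of independence between two categorical responses
chisq.test(table(events$returning, events$would_recommend))

# Independent-samples t-test comparing mean ratings across two groups
t.test(rating ~ group, data = events)

# One-way analysis of variance comparing mean ratings across sites
summary(aov(rating ~ site, data = events))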

Our website tagline is “A Celebration of Evaluation.” It is our hope that the resources on the site help support STEM practitioners and evaluators in conducting high-quality evaluation work for many years to come. We will continue to add resources throughout 2019. So please check out the website, let us know what you think, and feel free to suggest resources that you’d like us to create next!

Blog: Using Think-Alouds to Test the Validity of Survey Questions

Posted on February 7, 2019 in Blog

Research Associate, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Those who have spent time creating and analyzing surveys know that surveys are complex instruments that can yield misleading results when not well designed. A great way to test your survey questions is to conduct a think-aloud (sometimes referred to as a cognitive interview). A type of validity testing, a think-aloud asks potential respondents to read through a survey and discuss out loud how they interpret the questions and how they would arrive at their responses. This approach can help identify questions that are confusing or misleading to respondents, questions that take too much time and effort to answer, and questions that don’t seem to be collecting the information you originally intended to capture.

Distorted survey results generally stem from four problem areas associated with the cognitive tasks of responding to a survey question: failure to comprehend, failure to recall, problems summarizing, and problems reporting answers. First, respondents must be able to understand the question. Confusing sentence structure or unfamiliar terminology can doom a survey question from the start.

Second, respondents must be able to access or recall the answer. Problems in this area can happen when questions ask for specific details from far in the past or when the respondent simply does not know the answer.

Third, sometimes respondents remember things in different ways from how the survey is asking for them. For example, respondents might remember what they learned in a program but are unable to assign these different learnings to a specific course. This might lead respondents to answer incorrectly or not at all.

Finally, respondents must translate the answer constructed in their heads to fit the survey response options. Confusing or vague answer formats can lead to unclear interpretation of responses. It is helpful to think of these four problem areas when conducting think-alouds.

Here are some tips for conducting a think-aloud to test surveys:

    • Make sure the participant knows the purpose of the activity is to have them evaluate the survey and not just respond to the survey. I have found that it works best when participants read the questions aloud.
    • If a participant seems to get stuck on a particular question, it might be helpful to probe them with one of these questions:
      • What do you think this question is asking you?
      • How do you think you would answer this question?
      • Is this question confusing?
      • What does this word/concept mean to you?
      • Is there a different way you would prefer to respond?
    • Remember to give the participant space to think and respond. It can be difficult to hold space for silence, but it is particularly important when asking for thoughtful answers.
    • Ask the participant reflective questions at the end of the survey. For example:
      • Looking back, does anything seem confusing?
      • Is there something in particular you hoped  was going to be asked but wasn’t?
      • Is there anything else you feel I should know to truly understand this topic?
    • Perform think-alouds and revisions in an iterative process. This will allow you to test out changes you make to ensure they addressed the initial question.