
Blog: Tips for Building and Strengthening Stakeholder Relationships

Posted on November 23, 2020 by Valerie Marshall in Blog

Project Manager, EvaluATE at The Evaluation Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! I am Valerie Marshall, and I work on a range of projects at The Evaluation Center, including EvaluATE, where I serve as the administrator and analyst for the annual ATE Survey.

A cornerstone of evaluation is working with stakeholders. Stakeholders are individuals or groups who are part of an evaluation or are otherwise interested in its findings. They may be internal or external to the program being evaluated.

Stakeholders’ interests and involvement in evaluation activities may vary. But they are a key ingredient to evaluation success. They can provide critical insight into project activities and evaluation questions, serve as the gatekeepers to other stakeholders or data, and help determine if evaluation findings and recommendations are implemented.

Given their importance, identifying ways to build and nurture relationships with stakeholders is pivotal.

So the question is: how can you build relationships with evaluation stakeholders?

Below is a list of tips based on my own research and evaluation experience. This list is by no means exhaustive. If you are an ATE PI or evaluator, please join EvaluATE’s Slack community to continue the conversation and share some of your own tips!

Tip 1: Be intentional and adaptive about how you communicate. Not all stakeholders will prefer the same mode of communication. And how stakeholders want to communicate can change over the course of a project’s lifecycle. In my experience, using communication styles and tools that align with stakeholders’ needs and preferences often results in greater engagement. So, ask stakeholders how they would like to communicate at various points throughout your work together.

Tip 2: Build rapport. ATE evaluator and fellow blogger George Chitiyo previously noted that building rapport with stakeholders can make them feel valued and, in turn, help lead to quality data. Rapport is defined as a friendly relationship that makes communication easier (Merriam-Webster). Chatting during “down time” in a videoconference, sharing helpful resources, and mentioning a lighthearted story are great ways to begin fostering a friendly relationship.

Tip 3: Support and maintain transparency. Communicate with stakeholders about what is being done, when, and why. This not only reduces confusion but also facilitates trust. Trust is pivotal to building productive, healthy relationships with stakeholders. Providing project staff with a timeline of research or evaluation activities, giving regular progress updates, and meeting with stakeholders one-on-one or in small groups to answer questions or address concerns are all helpful ways to generate transparency.

Tip 4: Identify roles and responsibilities. When stakeholders know what is expected of them and how they can and cannot contribute to different aspects of a research or evaluation project, they can engage in a more meaningful way. The clarity generated from the process of outlining the roles and responsibilities of both stakeholders and research and evaluation staff can help reduce misunderstandings. At the beginning of a project, and as new staff and stakeholders join the project, make sure to review roles and expectations with everyone.

Blog: Strategies for Communicating in Virtual Settings

Posted on October 21, 2020 by Ouen Hunter and Jeffrey Hillman in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Ouen Hunter, Doctoral Student, The Evaluation Center
Jeffrey Hillman, Doctoral Student, The Evaluation Center

We are Ouen and Jeffrey, the authors of the recently published resource “Effective Communication Strategies for Interviews and Focus Groups.” Thank you to everyone who provided feedback. During the review, we noticed a need to address strategies for conducting online interviews and focus groups.

Your interview environment can promote sharing of stories or deter it. Here are some observations we have found helpful for improving communication in virtual settings:

1. Keep your video on, but do not require this of your interviewees. People feel more at ease sharing their stories if they can see the person receiving their information.

2. Keep your background clear of clutter! If this is not an option, test out a neutral virtual background or use a high-quality photo of an uncluttered space of your choice. For example, your office space as a picture background provides a personalized yet professional touch to your virtual setting. Be warned that virtual backgrounds can cut certain body parts out! Test the background, and plan your outfits accordingly (don’t wear green!).

3. Exaggerate your nonverbal expressions a little to ensure that you are not interrupting the people sharing their stories. Additionally, typical verbal cues of attentiveness can cause delays and skips in a virtual setting. Show your attentiveness by nodding a few times purposefully for affirmations instead of saying “Yes” or “Agreed.” Move your body every now and then to assure people that you are listening and have not lost your internet connection.

4. If you have books in the background, turn the spines of the books away. The titles of the books can be distracting and can communicate unintended messages to the interviewees. More importantly, certain book titles can be trauma triggers. If you want to include decorations, use plants. Additionally, you can place your camera facing the corner of a room to provide visual depth.

5. Be in a quiet room free of other people or pets. Noise and movement can distract your participants from concentrating on the interview.

6. Be sure you have good lighting. People depend on your facial expressions for communication. Face a window (do not have the window behind you), or use lamps or selfie rings if you need additional light.

7. On video calls, most people naturally tend to look at the person’s image. So, it’s important to arrange your camera at the proper angle to see the participants on your screen.

On a laptop, place the laptop camera or separate webcam at eye level; this can be accomplished by using a stand or even a stack of books. Tilt the camera down approximately 30 degrees, and keep it about an arm’s length away from you. Experiment with the angle to ensure a more natural appearance.

If you use a monitor with a webcam, place the webcam at eye level, tilted down approximately 30 degrees and about an arm’s length away from you. If needed, you can use a small tripod.

Whatever your arrangement, keeping the participant’s picture on the screen close to the camera will remind you where to look.

8. If possible, use a separate webcam, microphone, and headset. A pre-installed webcam generally has a lower resolution than a separate webcam.

Using a separate microphone will provide clearer speech, and a separate set of headphones will help you hear better. Compare a recording made with a laptop’s built-in microphone to one made with a separate condenser microphone to hear the difference.

Be sure to place the microphone out of view so it does not block your face. Using a plug-in headset instead of a Bluetooth headset will ensure you do not run out of battery.


HOT TIP: Try out the following office setup for your next online interview or focus group!

We would love to hear from you regarding tips that we could not cover in this blog!

Ouen Hunter: Ouen.C.Hunter@wmich.edu
Jeffrey Hillman: Jeffrey.A.Hillman@wmich.edu

Blog: Examining STEM Participation Through an Equity Lens: Measurement and Recommendations

Posted on October 14, 2020 in Blog

Director of Evaluation Services, Higher Ed Insight

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Hey there—my name is Tashera, and I’ve served as an external evaluator for dozens of STEM interventions and innovations. I’ve learned that a primary indicator of program success is recruitment of learners to participate in project activities.

Given that this metric is foundational to most evaluations, measurement of this outcome is rarely thought to be a challenge. A simple count of learners enrolled in programming provides information about participants rather easily.

However, this tells a very limited story. As many of us know, a major priority of STEM initiatives is to broaden participation to be more representative of diverse populations—particularly among groups historically marginalized. As such, we must move beyond reporting quantitative metrics as collectives and instead shift towards disaggregation by student demographics.

This critical analytical approach lets us identify where potential disparities exist. And it can help transform evaluation from a passive system of assessment into a mechanism that helps programs reach more equitable outcomes.
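As a concrete illustration (not drawn from any particular project), a few lines of R can produce that disaggregated view; the data frame, column names, and values below are invented:

    library(dplyr)

    # Hypothetical participant records; in practice these would come from program enrollment data
    enrollment <- tibble(
      student_id     = 1:6,
      race_ethnicity = c("Black", "Latina/o", "White", "Black", "White", "Latina/o"),
      gender         = c("Woman", "Man", "Woman", "Man", "Woman", "Woman")
    )

    # A single head count hides disparities; disaggregating makes them visible
    enrollment %>%
      count(race_ethnicity, gender, name = "participants") %>%
      mutate(share = participants / sum(participants))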

Moreover, program implementation efforts must be deliberate. Activities must be intentionally designed to reach and support populations disproportionately underrepresented within STEM. We can aid this process in our role as evaluators. I would even go so far as to argue that it is our responsibility—as stipulated by AEA’s Guiding Principles for Evaluators—to do so.

During assessment, make it a practice to examine whether program efforts are equitable, inclusive, and accessible. If you find that clients are experiencing challenges relating to locating or recruiting diverse students, the following recommendations can be provided during formative feedback:

  1. Go to the target population: “Traditional” marketing and outreach strategies that have been used time and time again won’t attract the diverse learners you are seeking—otherwise, there wouldn’t be such a critical call for broadened STEM participation today. You can, however, successfully reach these students if you go where they are.

a. Looking for Black, Latino, or female students to partake in your innovative engineering or IT program? Try reaching out to professional campus-based STEM organizations (e.g., National Society of Black Engineers, Black and Latinx Information Science and Technology Support, Women in Science and Engineering).

b. Numerous organizations on college campuses serve the students you are seeking to engage.

          • Locate culture-based organizations: the National Pan-Hellenic Council, National Association of Latino Fraternal Organizations, National Black Student Union, or Latino Student Council.
          • Leverage programs that support priority student groups (e.g., first-generation, low-income, students with disabilities): Higher Education Opportunity Program, Student Support Services, or Office for Students with Disabilities.

2. Cultural responsiveness must be embedded throughout the program’s design.

a. Make sure that implementation approaches—including recruitment—and program materials (e.g., curriculum, marketing and outreach) are culturally responsive, interventions are culturally relevant, and staff are culturally sensitive.

b. Ensure staff diversity at all levels of leadership (e.g., program directors and staff, faculty, mentors).

Students are more likely to participate and persist when they feel they belong, which at minimum means seeing themselves represented across a program’s spectrum.

As an evaluation community, we cannot allow the onus of equitable STEM opportunity to be placed solely on programs or clients. A lens of equity must also be deeply embedded throughout our evaluation approach, including during analyses and recommendations. It is this shift in paradigm—a model of shared accountability—that allows for equitable outcomes to be realized.

 

Blog: Bending Our Evaluation and Research Studies to Reflect COVID-19

Posted on September 30, 2020 in Blog

CEO and President, CSEdResearch.org

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting education research and evaluation during the season of COVID-19 may make you feel like you are the lone violinist playing tunes on the deck of a sinking ship. You desperately want to continue your research, which is important and meaningful to you and to others. You know your research contributes to important advances in the large mural of academic achievement among student learners. Yet reality has derailed many of your careful plans.

 If you are able to continue your research and evaluation in some capacity, attempting to shift in a meaningful way can be confusing. And if you are able to continue collecting data, understanding how COVID-19 affects your data presents another layer of challenges.

In a recent discussion with other K–12 computer science evaluators and researchers, I learned that some were rapidly developing scales to better understand how COVID-19 has impacted academic achievement. In their generous spirit of sharing, these collaborators have shared scales and items they are using, including two complete surveys, here:

  • COVID-19 Impact Survey from Panorama Education. This survey considers the many ways (e.g., well-being, internet access, engagement, student support) in which the shift to distance, hybrid, or in-person learning during this pandemic may be impacting students, families, and teachers/staff.
  • Parent Survey from Evaluation by Design. This survey is designed to measure environment, school support, computer availability and learning, and other concerns from the perspective of parents.

These surveys are designed to measure critical aspects within schools that are being impacted by COVID-19. They can provide us with information needed to better understand potential changes in our data over the next few years.

One of the models I’ve been using lately is the CAPE Framework for Assessing Equity in Computer Science Education, recently developed by Carol Fletcher and Jayce Warner at the University of Texas at Austin. This framework measures capacity, access, participation, and experiences (CAPE) in K–12 computer science education.

Figure 1. Image from https://www.tacc.utexas.edu/epic/research. Used with permission. From Fletcher, C. L., & Warner, J. R. (2019). Summary of the CAPE Framework for Assessing Equity in Computer Science Education.

 

Although this framework was developed for use in “good times,” we can use it to assess current conditions by asking how COVID-19 has impacted each of the critical components of CAPE needed to bring high-quality computer science learning experiences to underserved students. For example, if computer science is classified as an elective course at a high school, and all electives are cut for the 2020–21 academic year, this will have a significant impact on access for those students.

The jury is still out on how COVID-19 will impact students this year, particularly minoritized and low-socio-economic-status students, and how its lingering effects will change education. In the meantime, if you’ve created measures to understand COVID-19’s impact, consider sharing those with others. It may not be as meaningful as sending a raft to a violinist on a sinking ship, but it may make someone else’s research goals a bit more attainable.

(NOTE: If you’d also like your instruments/scales related to COVID-19 shared in our resource center, please feel free to email them to me.)

Blog: Data Cleaning Tips in R*

Posted on July 8, 2020 by David Keyes in Blog

Founder, R for the Rest of Us

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I recently came across a set of data cleaning tips in Excel from EvaluATE, which provides support for people looking to improve their evaluation practice.


Screenshot of the Excel Data Cleaning Tips

As I looked through the tips, I realized that I could show how to do each of the five tips listed in the document in R. Many people come to R from Excel, so having a set of R-to-Excel equivalents (also see this post on a similar topic) is helpful.

The tips are not intended to be comprehensive, but they do show some common things that people do when cleaning messy data. I did a live stream recently where I took each tip listed in the document and showed its R equivalent.

As I mention at the end of the video, while you can certainly do data cleaning in Excel, switching to R enables you to make your work reproducible. Say you have some surveys that need cleaning today. You write your code and save it. Then, when you get 10 new surveys next week, you can simply rerun your code, saving you countless points and clicks in Excel.

You can watch the full video at the very bottom or go through each tip using the videos immediately below. I hope it’s helpful in giving an overview of data cleaning in R!

Tip #1: Identify all cells that contain a specific word or (short) phrase in a column with open-ended text
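(The video shows David Keyes’ own walkthrough. As a rough sketch, here is one way to do this with dplyr and stringr, using a made-up data frame and search term:)

    library(dplyr)
    library(stringr)

    # Made-up open-ended responses
    survey <- tibble(
      id      = 1:3,
      comment = c("Great hands-on lab", "Scheduling was a barrier", "No barriers for me")
    )

    # Keep only rows whose comment mentions "barrier" (case-insensitive)
    survey %>%
      filter(str_detect(comment, regex("barrier", ignore_case = TRUE)))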

Tip #2: Identify and remove duplicate data
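(A sketch with invented records rather than the exact code from the video: count the duplicate rows, then keep one copy of each.)

    library(dplyr)

    # Invented records containing one accidental duplicate
    responses <- tibble(
      id    = c(1, 2, 2, 3),
      score = c(10, 8, 8, 9)
    )

    sum(duplicated(responses))              # how many duplicate rows?
    responses_clean <- distinct(responses)  # keep one copy of each row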

Tip #3: Identify the outliers within a data set
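(One common rule of thumb, sketched here with made-up data, is to flag values that fall more than 1.5 times the interquartile range beyond the quartiles:)

    library(dplyr)

    hours <- tibble(id = 1:6, hours_logged = c(2, 3, 2.5, 3.5, 40, 3))

    q   <- quantile(hours$hours_logged, c(0.25, 0.75))
    iqr <- q[2] - q[1]

    # Flag values far outside the middle 50 percent of the data
    hours %>%
      mutate(outlier = hours_logged < q[1] - 1.5 * iqr |
                       hours_logged > q[2] + 1.5 * iqr)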

Tip #4: Separate data from a single column into two or more columns
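(A minimal sketch with tidyr, assuming a hypothetical column that holds two values at once:)

    library(dplyr)
    library(tidyr)

    roster <- tibble(full_name = c("Ada Lovelace", "Grace Hopper"))

    # Split one column into two at the space
    roster %>%
      separate(full_name, into = c("first_name", "last_name"), sep = " ")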

Tip #5: Categorize data in a column, such as class assignments or subject groups
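(A sketch using dplyr’s case_when() to turn invented scores into labeled groups:)

    library(dplyr)

    grades <- tibble(score = c(95, 82, 67, 88))

    # Recode a numeric column into labeled categories
    grades %>%
      mutate(group = case_when(
        score >= 90 ~ "High",
        score >= 70 ~ "Middle",
        TRUE        ~ "Low"
      ))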

Full Video

*This is a repost of David Keyes’ blog post Data Cleaning Tips in R.

Blog: Improving the Quality of Evaluation Data from Participants

Posted on June 10, 2020 in Blog

Professor of Educational Research and Evaluation, Tennessee Tech University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


I have had experience evaluating a number of ATE projects, all of them collaborative projects among several four-year colleges and two-year community colleges. One of the projects’ overarching goals is to provide training to college instructors as well as elementary-, middle-, and high-school teachers, to advance their skills in additive manufacturing and/or smart manufacturing.

The training is done via the use of train-the-trainer studios (TTS). TTSs provide novel hands-on learning experiences to program participants. As with any program, the evaluation of such projects needs to be informed by rich data to capture participants’ entire experience, including the knowledge they gain.

Here’s one lesson I’ve learned from evaluating these projects: Participants’ perception of their value in the project contributes crucially to the quality of data collected.

As the evaluator, one can make participants feel that the data they are being asked to provide (regarding technical knowledge gained, their application of it, and perceptions about all aspects of the training) will be beneficial to the overall program and to them directly or indirectly.

If they feel that their importance is minimal, and that the information they provide will not matter, they will provide the barest amount of information (regardless of the method of data collection employed). If they understand the importance of their participation, they’re more likely to provide rich data.

How can you make them feel valued?

Establish good rapport with each of the participants, particularly if the groups are of a reasonable size. Make sure to interact informally with each participant throughout the training workshop(s). Inquire about their professional work, and ask them about supports that they might need when they return to their workplace.

The responses to the open-ended questions on most of my workshop evaluations have been very rich and detailed, much more so than those from participants to whom I administered the survey remotely, without ever meeting. Program participants want to connect to a real person, not a remote evaluator. In the event that in-person connections are not possible, explore other innovative ways of establishing rapport with individual participants, before and during the program.

How can you improve the quality of data they will provide?

 Sell the evaluation. Make it clear how the evaluation findings will be used and how the results will benefit the participants and their constituents specifically, directly or indirectly.

 Share success stories. During the training workshops that I have been evaluating, I’ve shared some previous success stories with participants in order to show them what they are capable of accomplishing as well.

The time and energy you spend building these connections with participants will result in high-quality evaluation data, ultimately helping the program serve participants better.

Blog: Building Capacity for High-Quality Data Collection

Posted on May 13, 2020 in Blog

Director of Evaluation, Thomas P. Miller & Associates, LLC 

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As I, like everyone else, am adjusting to working at home and practicing social distancing, I have been thinking about how to conduct my evaluation projects remotely. One thing that’s struck me as I’ve been retooling evaluation plans and data collection timelines is the need for even more evaluation capacity building around high-quality data collection for our clients. We will continue to rely on our clients to collect program data, and now that they’re working remotely too, a refresher on how to collect data well feels timely.  

Below are some tips and tricks for increasing your clients’ capacity to collect their own high-quality data for use in evaluation and informed decision making.

Identify who will need to collect the data.  

Especially with multiple-site programs or programs with multiple collectors, identifying who will be responsible for data collection and ensuring that all data collectors use the same tools is key to collecting similar data across the program.  

Determine what is going to be collected.  

Examine your tool. Consider the length of the tool, the types of data being requested, and the language used in the tool itself. When creating a tool that will be used by others, be certain that your tool will yield the data that you need and will make sense to those who will be using it. Test the tool with a small group of your data collectors, if possible, before full deployment.  

Make sure data collectors know why the data is being collected.  

When those collecting data understand how the data will be used, they’re more likely to be invested in the process and more likely to collect and report their data carefully. When you emphasize the crucial role that stakeholders play in collecting data, they see the value in the time they are spending using your tools. 

Train data collectors on how to use your data collection tools.  

Walking data collectors through the step-by-step process of using your data collection tool, even if the tool is a basic intake form, will ensure that all collectors use the tool in the same way. It will also ensure they have had a chance to walk through the best way to use the tool before they actually need to implement it. Provide written instructions, too, so that they can refer to them in the future.  

Determine an appropriate schedule for when data will be reported.  

To ensure that your data reporting schedule is not overly burdensome, consider the time commitment that the data collection may entail, as well as what else the collectors have on their plates.  

 Conduct regular quality checks of what data is collected.  

Checking the data regularly allows you to employ a quality control process and promptly identify when data collectors are having issues. Catching these errors quickly will allow for easier course correction.  

Blog: Understanding Data Literacy

Posted on February 19, 2020 in Blog

Dean of Institutional Effectiveness, Coastline College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In today’s data-filled society, institutions abound with data but lack data literacy: the ability to transform data into usable information and then use that knowledge to facilitate actionable change.

Data literacy is a foundational driver in understanding institutional capacity to gather, consume, and utilize various data to build insight and inform actions. Institutions can use a variety of strategies to determine the maturity of their data utilization culture. The following list provides a set of methods that can be used to better understand your organization’s level of data literacy:

  • Conduct a survey that provides insight into areas of awareness, access, application, and action associated with data utilization. For example, Coastline College uses a data utilization maturity index tool, the EDUCAUSE benchmark survey, and annual utilization statistics to get this information. The survey can be conducted in person or electronically, based on the access and comfort employees or stakeholders have with technology. The goal of this strategy is to gain surface-level insight into the maturity of your organizational data culture.
  • Lead focus groups with a variety of stakeholders (e.g., faculty members, project directors) to gather rich insight into ideas about and challenges associated with data. The goal of this approach is to glean a deeper understanding of the associated “whys” found in broader assessments (e.g., observations, institutional surveys, operational data mining).
  • Compare your organizational infrastructure and operations to similar institutions that have been identified as having successful data utilization. The goal of this strategy is to help visualize and understand what a data culture is, how your organization compares to others, and how your organization can adapt or differentiate its data strategy (or adopt another one). A few resources I would recommend include Harvard Business Review’s Analytics topic library, EDUCAUSE’s Analytics library, What Works Clearinghouse, McKinsey & Company’s data culture article, and Tableau’s article on data culture.
  • Host open discussions with stakeholders (e.g., faculty members, project directors, administrators) about the benefits, disadvantages, optimism, and fears related to data. This method can build awareness, interest, and insight to support your data planning. The goal of this approach is to effectively prepare and address any challenges prior to your data plan investment and implementation.

Based on the insight collected, organizational leadership can develop an implementation plan to adopt and adapt tools, operations, and trainings to build awareness, access, application, and action associated with data utilization.

Avoid the following pitfalls:

  • Investing in a technology prior to engaging stakeholders and understanding the organizational data culture. In these instances, the technology will help but will not be the catalyst or foundation to build the data culture. The “build it and they will come” theory is not applicable in today’s data society. Institutions must first determine what they are seeking to achieve. Clay Christensen’s Jobs to Be Done Theory is a resource that may bring clarity to this matter.
  • Assuming individuals have a clear understanding of the technical aspects of data. This assumption could lead to misuse or limited use of your data. To address this issue, institutions need to conduct an assessment to understand the realities in which they are operating.
  • Hiring for a single position to lead the effort of building a data culture. In this instance, a title does not validate the effort or ensure that an institution has a data-informed strategy and infrastructure. To alleviate this challenge, institutions must invest in teams and continuous trainings. For example, Coastline College has an online data coaching course, in-person hands-on data labs, and open discussion forums and study sessions to learn about data access and utilization.

As institutions better understand and foster their data cultures, the work of evaluators can be tailored and utilized to meet project stakeholders (e.g., project directors, faculty members, supporters, and advisory boards) where they are. By understanding institutional data capacity, evaluators can support continuous improvement and scaling through the provision of meaningful and palatable evaluations, presentations, and reports.

Blog: How I Came to Learn R, and Why You Should Too!

Posted on February 5, 2020 by David Keyes in Blog

Founder, R for the Rest of Us

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


A few years ago, I left my job on the research team at the Oregon Community Foundation and started working as an independent evaluation consultant. No longer constrained by the data analysis software choices made by others, I was free to use whatever tool I wanted. As an independent consultant, I couldn’t afford proprietary software such as SPSS, so I used Excel. But the limits of Excel quickly became apparent, and I went in search of other options.

I had heard of R, but it was sort of a black box in my mind. I knew it was a tool for data analysis and visualization, but I had no idea how to use it. I had never coded before, and the prospect of learning was daunting. But my desire to find a new tool was strong enough that I decided to take up the challenge of learning R.

My journey to successfully using R was rocky and circuitous. I would start many projects in R before finding I couldn’t do something, and I would have to slink back to Excel. Eventually, though, it clicked, and I finally felt comfortable using R for all of my work.

The more I used R, the more I came to appreciate its power.

  1. The code that had caused me such trouble when I was learning became second nature. And I could reuse code in multiple projects, so my workflow became more efficient.
  2. The data visualizations I made in R were far better and more varied than anything I had produced in Excel.
  3. The most fundamental shift in my work, though, has come from using RMarkdown. This tool enables me to go from data import to final report in R, avoiding the dance across, say, SPSS (for analyzing data), Excel (for visualizing data), and Word (for reporting). And when I receive new data, I can simply rerun my code, automatically generating my report.
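To make that concrete, here is a minimal sketch of what such an RMarkdown document might look like; the file name, column names, and output format are invented for illustration:

    ---
    title: "Annual Survey Report"
    output: word_document
    ---

    ```{r}
    library(tidyverse)

    # Hypothetical data export; swapping in next year's file and re-knitting regenerates the report
    survey <- read_csv("survey_2020.csv")

    survey %>%
      group_by(program) %>%
      summarize(mean_rating = mean(rating, na.rm = TRUE))
    ```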

In 2019, I started R for the Rest of Us to help evaluators and others learn to embrace the power of R. Through online courses, workshops, coaching, and custom training for organizations, I’ve helped many people transition to R.

I’m delighted to share some videos here that show you a bit more about what R is and why you might consider learning it. You’ll learn about what importing data into R looks like and how you can use a few lines of code to analyze your data, and you’ll see how you can do this all in RMarkdown. The videos should give you a good sense of what working in R looks like and help you decide if it makes sense for you to learn it.

I always tell people considering R that it is challenging to learn. But I also tell them that the time and energy you invest in learning R is very much worth it in the end. Learning R will not only improve the quality of your data analysis, data visualization, and workflow, but also ensure that you have access to this powerful tool forever—because, oh, did I mention that R is free? Learning R is an investment in your current self and your future self. What could be better than that?

R Video Series

Blog: Increasing Response Rates*

Posted on January 9, 2020 in Blog

Founder and President, EvalWorks, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Higher response rates result in greater sample sizes and reduce bias. Research on ways to increase response rates for mail and Internet surveys suggests that the following steps will improve the odds that participants will complete and return your survey, whether it is by Internet or mail.

Make the survey as salient as possible to potential respondents.
Relevance can be tested with a small group of people similar to your respondents.

If possible, use Likert-type questions, versus open-ended questions, to increase response rates. 
Generally, the shorter the survey appears to respondents, the better.

Limit the number of questions of a sensitive nature, when possible.
Additionally, if possible, make the survey anonymous, as opposed to confidential.

Include prenotification and follow-ups to survey respondents.
Personalizing these contacts will also increase response rates. In addition, surveys conducted by noncommercial institutions (e.g., colleges) obtain higher response rates than those conducted by commercial institutions.

Provide additional copies of or links to the survey.
This can be done as part of follow-up with potential respondents.

Provide incentives. 
Incentives included in the initial mailing produce higher return rates than those contingent upon survey return, with twice the increase when monetary (versus nonmonetary) incentives are included up-front.

Consider these additional strategies for mail surveys:
These include sending surveys via recorded delivery, using colored paper, and providing addressed, stamped return envelopes.

Consider the following when conducting an Internet survey:
Provide a visual indicator of how much of the survey respondents have completed or, alternatively, how much they have left to complete.

Although there are no hard-and-fast rules for what constitutes an appropriate response rate, many government agencies require response rates of 80 percent or higher before they are willing to report results. If you have conducted a survey and still have a low response rate, make additional efforts or use a different survey mode to reach non-respondents; however, it is important to ensure that they do not respond differently than initial respondents and that the survey mode itself did not introduce bias.

 

*This blog is a reprint of an article from an EvaluATE newsletter published in spring 2010.