
Blog: Using Mixed-Mode Survey Administration to Increase Response

Posted on September 26, 2018 in Blog

Program Evaluator, Cold Spring Harbor Laboratory, DNA Learning Center


“Why aren’t people responding?”

This is the perpetual question asked by anyone doing survey research, and it’s one that I am no stranger to myself. There are common strategies to combat low survey participation, but what happens when they fail?

Last year, I was co-principal investigator on a small Advanced Technological Education (ATE) grant to conduct a nationwide survey of high school biology teachers. This was a follow-up to a 1998 survey done as part of an earlier ATE grant my institution had received. In 1998, the survey was done entirely by mail and had a 35 percent response rate. In 2018, we administered an updated version of this survey to nearly 13,000 teachers. However, this time, there was one big difference: we used email.

After a series of four messages over two months (pre-notice, invitation, and two reminders), an incentivized survey, and intentional targeting of high school biology teachers, our response rate was only 10 percent. We anticipated that teachers would be busy and that a 15-minute survey might be too much for many of them to deal with at school. However, there appeared to be a bigger problem: nearly two-thirds of our messages were never opened and perhaps never even seen.

To boost our numbers, we decided to return to what had worked previously: the mail. Rather than send more emails, we mailed an invitation to individuals who had not completed the survey, followed by postcard reminders. Individuals were reminded of the incentive and directed to a web address where they could complete the survey online. The end result was a 14 percent response rate.

I noticed that, particularly when emailing teachers at their school-provided addresses, many messages never reached the intended recipients. Although returning to a mail-only design is unlikely, an alternative is to heed the advice of Millar and Dillman (2011): use a mixed-mode, web-then-mail contact strategy so that spam filters don’t prevent would-be participants from ever seeing the survey. Asking the following questions can help guide your method-of-contact decisions and help you avoid troubleshooting a low response rate mid-survey.

  1. Have I had low response rates from a similar population before?
  2. Do I have the ability to contact individuals via multiple methods?
  3. Is using the mail cost- or time-prohibitive for this particular project?
  4. What sample size do I need to reasonably represent the target population? (A rough calculation is sketched just after this list.)
  5. Have I already made successful contact with these individuals over email?
  6. Does the survey tool I’m using (Survey Monkey, Qualtrics, etc.) tend to be snagged by spam filters if I use its built-in invitation management features?
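
For question 4, a quick back-of-the-envelope calculation can help. The sketch below uses Cochran’s formula for estimating a proportion, with a finite population correction; the margin of error and confidence level are illustrative assumptions, while the population size and response rate echo the figures mentioned above.

    import math

    def required_sample_size(population, margin_of_error=0.05, confidence_z=1.96, p=0.5):
        """Cochran's formula for a proportion, with finite population correction."""
        n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    # Illustrative numbers: ~13,000 teachers, 5% margin of error, 95% confidence.
    needed = required_sample_size(13_000)
    print(needed)            # about 374 completed surveys
    print(needed / 0.10)     # ~3,740 invitations if a 10% response rate is expected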

These are just some of the considerations that may help you avoid major spam filter issues in your forthcoming project. Spam filters may not be the only reason for a low response rate, but anything that can be done to mitigate their impact is a step toward a better response rate for your surveys.


Reference

Millar, M., & Dillman, D. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75, 249–269.

Blog: Using Rubrics to Demonstrate Educator Mastery in Professional Development

Posted on September 18, 2018 in Blog

Nena Bloom
Evaluation Coordinator
Center for Science Teaching and Learning, Northern Arizona University
Lori Rubino-Hare
Professional Development Coordinator
Center for Science Teaching and Learning, Northern Arizona University

We are Nena Bloom and Lori Rubino-Hare, the internal evaluator and principal investigator, respectively, of the Advanced Technological Education project Geospatial Connections Promoting Advancement to Careers and Higher Education (GEOCACHE). GEOCACHE is a professional development (PD) project that aims to enable educators to incorporate geospatial technology (GST) into their classes, to ultimately promote careers using these technologies. Below, we share how we collaborated on creating a rubric for the project’s evaluation.

One important outcome of effective PD is participants’ mastery of new knowledge and skills (Guskey, 2000; Haslam, 2010). GEOCACHE defines “mastery” as participants’ effective application of the new knowledge and skills in educator-created lesson plans.

GEOCACHE helps educators teach their content through Project Based Instruction (PBI) that integrates GST. In PBI, students collaborate and critically examine data to solve a problem or answer a question. Educators were provided 55 hours of PD, during which they experienced model lessons integrated with GST content. Educators then created lesson plans tied to the curricular goals of their courses, infusing opportunities for students to learn appropriate subject matter through the exploration of spatial data. “High-quality GST integration” was defined as opportunities for learners to collaboratively use GST to analyze and/or communicate patterns in data to describe phenomena, answer spatial questions, or propose solutions to problems.

We analyzed the educator-created lesson plans using a rubric to determine if GEOCACHE PD supported participants’ ability to effectively apply the new knowledge and skills within lessons. We believe this is a more objective indicator of the effectiveness of PD than self-report measures alone. Rubrics, widely used to assess student performance, also provide meaningful information for program evaluation (Davidson, 2004; Oakden, 2013). A rubric illustrates a clear standard and set of criteria for identifying different levels of performance quality. The objective is to understand the average skill level of participants in the program on the particular dimensions of interest. Davidson (2004) proposes that rubrics are useful in evaluation because they help make judgments transparent. In program evaluation, scores for each criterion are aggregated across all participants.
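
As a simple illustration of that aggregation step, the sketch below averages rubric scores for each criterion across all participant lesson plans; the criterion names and file layout are hypothetical placeholders, not GEOCACHE’s actual rubric or data.

    import pandas as pd

    # Hypothetical layout: one row per educator-created lesson plan,
    # one column per rubric criterion, each cell a 0-3 performance-level score.
    scores = pd.read_csv("lesson_plan_scores.csv")

    criteria = ["gst_use", "pbi_design", "data_analysis", "communication"]
    summary = scores[criteria].agg(["mean", "std", "count"]).round(2)
    print(summary)  # average skill level of the cohort on each dimension of interest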

Practices we used to develop and utilize the rubric included the following:

  • We developed the rubric collaboratively with the program team to create a shared understanding of performance expectations.
  • We focused on aligning the criteria and expectations of the rubric with the goal of the lesson plan (i.e., to use GST to support learning goals through PBI approaches).
  • Because good rubrics existed but were not entirely aligned with our project goal, we chose to adapt existing technology integration rubrics (Britten & Cassady, 2005; Harris, Grandgenett, & Hofer, 2010) and PBI rubrics (Buck Institute for Education, 2017) to include GST use, rather than start from scratch.
  • We checked that the criteria at each level were clearly defined, to ensure that scoring would be accurate and consistent.
  • We pilot tested the rubric with several units, using several scorers, and revised accordingly.

This authentic assessment of educator learning informed the evaluation. It provided information about the knowledge and skills educators were able to master and how the PD might be improved.


References and resources

Britten, J. S., & Cassady, J. C. (2005). The Technology Integration Assessment Instrument: Understanding planned use of technology by classroom teachers. Computers in the Schools, 22(3), 49-61.

Buck Institute for Education. (2017). Project design rubric. Retrieved from http://www.bie.org/object/document/project_design_rubric

Davidson, E. J. (2004). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Corwin Press.

Harris, J., Grandgenett, N., & Hofer, M. (2010). Testing a TPACK-based technology integration assessment instrument. In C. D. Maddux, D. Gibson, & B. Dodge (Eds.), Research highlights in technology and teacher education 2010 (pp. 323-331). Chesapeake, VA: Society for Information Technology and Teacher Education.

Haslam, M. B. (2010). Teacher professional development evaluation guide. Oxford, OH: National Staff Development Council.

Oakden, J. (2013). Evaluation rubrics: How to ensure transparent and clear assessment that respects diverse lines of evidence. Melbourne, Australia: BetterEvaluation.

Blog: Measure What Matters: Time for Higher Education to Revisit This Important Lesson

Posted on May 23, 2018 in Blog

Senior Partner, Cosgrove & Associates


If one accepts Peter Drucker’s premise that “what gets measured, gets managed,” then two things are apparent: measurement is valuable, but measuring the wrong thing has consequences. Data collection efforts focusing on the wrong metrics lead to mismanagement and failure to recognize potential opportunities. Focusing on the right measures matters. For example, in Moneyball, Michael Lewis describes how the Oakland Athletics improved their won-loss record by revising player evaluation metrics to more fully understand players’ potential to score runs.

The higher education arena has equally high stakes concerning evaluation. A growing number of states (more than 30 in 2017)[1] have adopted performance funding systems to allocate higher education funding. Such systems focus on increasing the number of degree completers and have been fueled by calls for increased accountability. The logic of performance funding seems clear: Tie funding to the achievement of performance metrics, and colleges will improve their performance. However, research suggests we might want to re-examine this logic.  In “Why Performance-Based College Funding Doesn’t Work,” Nicholas Hillman found little to no evidence to support the connection between performance funding and improved educational outcomes.

Why are more states jumping on the performance-funding train? States are under political pressure, with calls for increased accountability and limited taxpayer dollars. But do the chosen performance metrics capture the full impact of education? Do the metrics result in more efficient allocation of state funding? The jury may still be out on these questions, but Hillman’s evidence suggests the answer is no.

The disconnect between performance funding and improved outcomes may widen even more when one considers open-enrollment colleges or colleges that serve a high percentage of adult, nontraditional, or low-income students. For example, when a student transfers from a community college (without a two-year degree) to a four-year college, should that behavior count against the community college’s degree completion metric? Might that student have been well-served by their time at the lower-cost college? When community colleges provide higher education access to adult students who enroll on a part-time basis, should they be penalized for not graduating such students within the arbitrary three-year time period? Might those students and that community have been well-served by access to higher education?

To ensure more equitable and appropriate use of performance metrics, colleges and states would be well-served to revisit current metrics and more clearly define appropriate measures and data collection strategies. Most importantly, states and colleges should connect the analysis of performance metrics to clear and funded pathways for improvement. Stepping back to remember that the goal of performance measurement is to help build capacity and improve performance will place both parties in a better position to support and evaluate higher education performance in a more meaningful and equitable manner.

[1] Jones, T., & Jones, S. (2017, November 6). Can equity be bought? A look at outcomes-based funding in higher ed [Blog post].

Blog: Gauging the Impact of Professional Development Activities on Students

Posted on January 17, 2018 in Blog

Executive Director of Emerging Technology Grants, Collin College


Many Advanced Technological Education (ATE) grants hold professional development events for faculty. As the lead for several ATE grants, I have been concerned that while data obtained from faculty surveys immediately after these events are useful, they do not gauge the impact of the training on students. The National Convergence Technology Center’s (CTC) approach, described below, uses longitudinal survey data from the faculty attendees to begin to provide evidence on student impact. I believe the approach is applicable to any discipline.

The CTC provides information technology faculty with a free intensive professional development event titled Working Connections Faculty Development Institute. The institute is held twice per year—five days in the summer and two and a half days in the winter. The summer institute gives faculty members enough training to create a new course or perform major updates on an existing course; the winter institute covers enough to update a course. Over the years, more than 1,700 faculty have enrolled in the training. From the beginning, we have gathered attendee feedback via two surveys at the end of each event. One survey focuses on the specific topic track, asking about the extent to which attendees feel the track’s three learning outcomes were mastered, as well as about the instructor’s pacing, classroom handling, etc. The other survey asks questions about the overall event, including attendees’ reactions to the focused lunch programs and how many new courses have been created or enhanced as a result of past attendance.

The CTC educates faculty members as a vehicle for educating students. To learn how the training impacts students and programs, we also send out longitudinal surveys at 6, 18, 30, 42, and 54 months after each summer Working Connections training. These surveys ask faculty members to report on what they did with what they learned at each training, including how many students they educated as a result of what they learned. Faculty are also asked to report how many certificates and degrees were created or enhanced. Each Working Connections cohort receives a separate survey invitation (i.e., someone who attended two Working Connections will get two separate invitations) that includes a link to the survey as well as a roster to help attendees remember which track they took that year. Participation is voluntary, but over the years, we have consistently and strongly emphasized the importance of getting this longitudinal data so that we can provide some evidence of student impact to the National Science Foundation. Our response rate from surveys sent in January 2016 was 59%.

Responses to surveys administered from 2008 to 2016 indicate the following:

  • Number of students who were taught the subjects trained: 88,591
  • Number of new/enhanced courses: 241
  • Number of sections taught: 4,899
  • Number of new/enhanced certificates and degrees: 310
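
Totals like these are compiled from the individual longitudinal survey responses. Below is a minimal sketch of that tally, assuming one row per faculty response with self-reported counts; the file and column names are hypothetical placeholders, not the CTC’s actual data layout.

    import pandas as pd

    # Hypothetical layout: one row per longitudinal survey response,
    # with faculty-reported counts for each impact measure.
    responses = pd.read_csv("working_connections_followups.csv")

    metrics = ["students_taught", "courses_new_or_enhanced",
               "sections_taught", "certs_degrees_new_or_enhanced"]

    print(responses[metrics].sum())                         # overall totals, as in the list above
    print(responses.groupby("cohort_year")[metrics].sum())  # per-cohort breakdown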

While these data still do not tell us how students themselves experienced what the attendees learned, they do provide evidence that is one step closer to gauging student impact than faculty feedback collected immediately after each training. We are considering what else we can do to further unpack the impact on students, but the Family Educational Rights and Privacy Act (FERPA) prevents the CTC from contacting affected students directly without their permission.

Tip: A longitudinal survey effort must be intentional and consistent. It is also extremely important to consistently promote the need for attendees to complete the surveys, both during the professional development events and in emails preceding the annual survey invitations. It is all too easy for attendees to simply delete the longitudinal survey if they do not see the point of filling it out.

Blog: Tips and Tricks When Writing Interview Questions

Posted on January 2, 2018 in Blog

Senior Research Analyst, Hezel Associates

Developing a well-constructed interview protocol is by no means an easy task. To give us some ideas on how to formulate well-designed interview questions, Michael Patton (2015) dedicates an entire chapter of his book, Qualitative Research & Evaluation Methods, to question formulation. As with any skill, the key to improving your craft is practice. That’s why I wanted to share a few ideas from Patton and contribute some of my own thoughts to help improve how you formulate interview questions.

One approach I find useful is to consider the category of question you are asking. With qualitative research, the categories of questions can sometimes seem infinite. However, Patton provides a few overarching categories, which can help frame your thinking, allowing you to ask questions with more precision and be intentional with what you are asking. Patton (2015, p. 444) suggests general categories and provides a few question examples, which are presented below. So, when trying to formulate a question, consider the type you are interested in asking:

  • Experience and behavior questions: If I had been in the program with you, what would I have seen you doing?
  • Opinion and value questions: What would you like to see happen?
  • Feeling questions: How do you feel about that?
  • Knowledge questions: Who is eligible for this program?
  • Sensory questions: What does the counselor ask you when you meet with her? What does she actually say? (Questions that describe stimuli)
  • Background and demographic questions: How old are you?

Once the category is known and you start writing or editing questions, some additional strategies are to double-check that you are writing truly open-ended questions and to avoid jargon. For instance, don’t assume that your interviewee knows the acronyms you’re using. As evaluators, sometimes we know the program better than the informants! That is why it is so important to write questions clearly. Everyone wins when you take the time to be intentional—you get better data and you won’t confuse your interviewee.

Another interesting point from Patton is to make sure you are asking a singular question. Think about when you’re conducting quantitative research and writing an item for a questionnaire—a red flag is a double-barreled item (i.e., one that asks more than one question simultaneously). For example, a poorly framed questionnaire item about experiences in a mentorship program might read: To what extent do you agree with the statement, “I enjoyed this program and would do it again.” You simply wouldn’t put that item in a questionnaire, since a person might enjoy the program but not necessarily want to do it again. Although you have more latitude during an interview, it’s always best to write your questions with precision. It’s also a good chance for you to flex some skills when conducting the interview, knowing when to probe effectively if you need to shift the conversation or dive deeper based on what you hear.

It is important to keep in mind there is no right way to formulate interview questions. However, by having multiple tools in your tool kit, you can lean on different strategies as appropriate, allowing you to develop stronger and more rigorous qualitative studies.

Reference:

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice. Thousand Oaks, CA: SAGE.

Blog: Thinking Critically about Critical Thinking Assessment

Posted on October 31, 2017 in Blog
Vera Beletzan
Senior Special Advisor Essential Skills
Humber College
Paula Gouveia
Dean, School of Liberal Arts and Sciences
Humber College

Humber College, as part of a learning outcomes assessment consortium funded by the Higher Education Quality Council of Ontario (HEQCO), has developed an assessment tool to measure student gains in critical thinking (CT) as expressed through written communication (WC).

In Phase 1 of this project, a cross-disciplinary team of faculty and staff researched and developed a tool to assess students’ CT skills through written coursework. The tool was tested for usability by a variety of faculty and in a variety of learning contexts. Based on this pilot, we revised the tool to focus on two CT dimensions: comprehension and integration of writer’s ideas, within which are six variables: interpretation, analysis, evaluation, inference, explanation, and self-regulation.

In Phase 2, our key questions were:

  1. What is the validity and reliability of the assessment tool?
  2. Where do students experience greater levels of CT skill achievement?
  3. Are students making gains in learning CT skills over time?
  4. What is the usability and scalability of the tool?

To answer the first question, we examined the inter-rater reliability of the tool and compared CTWC assessment scores with students’ final grades. We conducted a cross-sectional analysis by comparing diverse CT and WC learning experiences in different contexts, namely our mandatory semester I and II cross-college writing courses, where CTWC skills are taught explicitly and reinforced as course learning outcomes; vocationally oriented courses in police foundations, where the skills are implicitly embedded as deemed essential by industry; and a critical thinking course in our general arts and sciences programs, where CT is taught as content knowledge.
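
As a hedged illustration of those two checks, the sketch below computes Cohen’s kappa between two scorers and the correlation between rubric scores and final grades; the file and column names are hypothetical, and the project’s actual analysis may have differed.

    import pandas as pd
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical layout: one row per scored written assignment.
    scores = pd.read_csv("ctwc_scores.csv")  # columns: rater1, rater2, final_grade

    # Inter-rater reliability: agreement between the two scorers beyond chance.
    kappa = cohen_kappa_score(scores["rater1"], scores["rater2"])

    # Validity check: do rubric scores track students' final grades?
    r, p = pearsonr(scores[["rater1", "rater2"]].mean(axis=1), scores["final_grade"])

    print(f"Cohen's kappa: {kappa:.2f}; score-grade correlation: r = {r:.2f} (p = {p:.3f})")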

We also performed a longitudinal analysis by assessing CTWC gains in a cohort of students across two semesters in their mandatory writing courses.

Overall, our tests showed positive results for reliability and validity. Our cross-sectional analysis showed the greatest CT gains in courses where the skill is explicitly taught. Our longitudinal analysis showed only modest gains, indicating that a two-semester span is insufficient for significant improvement to occur.

In terms of usability, faculty agreed that the revised tool was straightforward and easy to apply. However, there was less agreement on the tool’s meaningfulness to students, indicating that further research needs to include student feedback.

Lessons learned:

  • Build faculty buy-in at the outset and recognize workload issues
  • Ensure project team members are qualified
  • For scalability, align project with other institutional priorities

Recommendations:

  • Teach CT explicitly and consistently, as a skill, and over time
  • Strategically position courses where CT is taught explicitly throughout a program for maximum reinforcement
  • Assess and provide feedback on students’ skills at regular intervals
  • Implement faculty training to build a common understanding of the importance of essential skills and their assessment
  • For the tool to be meaningful, students must understand which skills are being assessed and why

Our project will inform Humber’s new Essential Skills Strategy, which includes the development of an institutional learning outcomes framework and assessment process.

A detailed report, including our assessment tool, will be available through HEQCO in the near future. For further information, please contact the authors: vera.beletzan@humber.ca  or paula.gouveia@humber.ca

Blog: Using Mutual Interviewing to Gather Student Feedback

Posted on September 18, 2017 in Blog

Owner, Applied Inference


It’s hard to do a focus group with 12 or more students! With so many people and so little time, you know it’s going to be hard to hear from the quieter students, but that may be who you need to hear from the most! Or, maybe some of the questions are sensitive – the students with the most to lose may be the least likely to speak up. What can you do? Mutual interviewing. Here is how you do it.

This method works great for non-English speakers, as long as all participants speak the same language and your co-facilitator and one person in each group are bilingual.

Warnings: The setup is time-consuming, it is very hard to explain, pandemonium ensues, people start out confused, they are often afraid they’ll do it wrong, and it is noisy (a big room is best!).

Promises: Participants are engaged, it accommodates much larger groups than a traditional focus group, everyone participates and no one can dominate, responses are anonymous except to the one individual who heard the response, it builds community and enthusiasm, and it is an empowering process.

Preparation (the researcher):

  1. Create interview guides for four distinct topics related to a project, such as program strengths, program weaknesses, suggestions for improvement, future plans, how you learn best, or personal challenges that might interfere. The challenge is to identify the most important topics that are far enough apart that people do not give the same answer to each question.
  2. Create detailed interview guides, perhaps including probe questions the group members can use to help their interviewees get traction on the question. Each group member interviewer will need four copies of his or her group’s interview guide (one for each interviewee, plus one to fill out himself/herself).
  3. Prepare nametags with a group and ID number (to ensure confidentiality) (e.g., 3-1 would be Group 3, member 1). Make sure the groups are equal in size – with between 3 and 6 members per group. This method allows you to conduct the focus group activity with up to 24 people! The nametags help member-interviewers and member-interviewees from the designated groups find each other during the rounds.
  4. Create a brief demographic survey, and pre-fill it with Group and ID numbers to match the nametags. (The ID links the survey with interview responses gathered during the session, and this helps avoid double counting responses during analysis. You can also disaggregate interview responses by demographics.)
  5. Set up four tables for the groups and label them clearly with group number.
  6. Provide good food and have it ready to welcome the participants as they walk in the door.

During the Session:

At the time of the focus group, help people to arrange themselves into 4 equal-sized groups of between 3 and 6 members. Assign one of the topics to each group. Group members will research their topic by interviewing someone from each of the other groups (and being interviewed in return by their counterparts). After completing all rounds of interviews, they come back to their own group and answer the questions themselves. Then they discuss what they heard with each other, and report it out. The report-out gives other people the opportunity to add thoughts or clarify their views.

  1. As participants arrive, give them a nametag with Group and ID number and have them fill out a brief demographic survey (which you have pre-filled with their group/id number). The group number will indicate the table where they should sit.
  2. Explain the participant roles: Group Leader, Interviewer, and Interviewee.
    1. One person in each group is recruited to serve as a group leader and note taker during the session. This person will also participate in a debrief afterward to provide insights or tidbits gleaned during their group’s discussion.
    2. The group leader will brief their group members on their topic and review the interview guide.
    3. Interviewer/Interviewee: During each round, each member is paired with one person from another group, and they take turns interviewing each other about their group’s topic.
  3. Give members 5 minutes to review the interview guide and answer the questions on their own. They can also discuss with others in their group.
  4. After 5 minutes, pair each member up with a partner from another group for the interview (i.e., Group 1 and Group 2, Group 3 and Group 4). The members should mark a fresh interview sheet with their interviewee’s Group-ID number and then they take turns interviewing each other. Make sure they take notes during the interview. Give the first interviewer 5 minutes, then switch roles, and repeat the process.
  5. Rotate and pair each member with someone from a different group (Group 1 and 3, Group 2 and 4) and repeat the interviews using a fresh interview sheet, marked with the new interviewee’s Group-ID number. Again, each member will interview the other for five minutes.
  6. Finally, rotate again and pair up members from Groups 1 and 4 and Groups 2 and 3 for the final round. Mark the third clean interview sheet with each interviewee’s Group-ID number and interview each other for five minutes.
  7. Once all pairings are finished, members return to their original groups. Each member takes 5 minutes to complete or revise their own interview form, possibly enriched by the perspectives of 3 other people.
  8. The Group Leader facilitates a 15-minute discussion, during which participants compare notes and prepare a flip chart to report out their findings. The Group Leader should take notes during the discussion. (Tip: Sometimes it’s helpful to provide guiding questions for the report-out.)
  9. Each group then has about five minutes to report the compiled findings. (Tip: During the reports, have some questions prepared to further spark conversation).

After the Session:

  1. Hold a brief (10-15 minute) meeting with the Group Leaders and have them talk about the process, insights, or tidbits that did not make it to the flip chart and provide any additional feedback to the researcher.

Results of the process:

You will now have demographic data from the surveys, notes from the individual interview sheets, the group leaders’ combined notes, and the flip charts of combined responses to each question.

To learn more about mutual interviewing, see pages 51-55 of Research Toolkit for Program Evaluation and Needs Assessments Summary of Best Practices.

Blog: Not Just an Anecdote: Systematic Analysis of Qualitative Evaluation Data

Posted on August 30, 2017 in Blog

President and Founder, Creative Research & Evaluation LLC (CR&E)


As a Ph.D.-trained anthropologist, I spent many years learning how to shape individual stories and detailed observations into larger patterns that help us understand social and cultural aspects of human life. Thus, I was taken aback when I realized that program staff or program officers often initially think of qualitative evaluation as “just anecdotal.” Even people who want “stories” in their evaluation reports can be surprised at what is revealed through a systematic analysis of qualitative data.

Here are a few tips that can help lead to credible findings using qualitative data.  Examples are drawn from my experience evaluating ATE programs.

  • Organize your materials so that you can report which experiences are shared among program participants and what perceptions are unusual or unique. This may sound simple, but it takes forethought and time to provide a clear picture of the overall range and variation of participant perceptions. For example, in analyzing two focus group discussions held with the first cohort of students in an ATE program, I looked at each transcript separately to identify the program successes and challenges raised in each focus group. Comparing major themes raised by each group, I was confident when I reported that students in the program felt well prepared, although somewhat nervous about upcoming internships. On the other hand, although there were multiple joking comments about unsatisfactory classroom dynamics, I knew these were all made by one person and not taken seriously by other participants because I had assigned each participant a label and I used these labels in the focus group transcripts.
  • Use several qualitative data sources to provide strength to a complex conclusion. In technical terms, this is called “triangulation.” Two common methods of triangulation are comparing information collected from people with different roles in a program and comparing what people say with what they are observed doing. In some cases, data sources converge and in some cases they diverge. In collecting early information about an ATE program, I learned how important this program is to industry stakeholders. In this situation, there was such a need for entry-level technicians that stakeholders, students, and program staff all mentioned ways that immediate job openings might have a short-term priority over continuing immediately into advanced levels in the same program.
  • Think about qualitative and quantitative data together in relation to each other. Student records and participant perceptions show different things and can inform each other. For example, instructors from industry may report a cohort of students as being highly motivated and uniformly successful at the same time that institutional records show a small number of less successful students. Both pieces of the picture are important here for assessing a project’s success; one shows a high level of industry enthusiasm, while the other can provide exact percentages about participant success.

Additional Resources

The following two sources are updated classics in the fields of qualitative research and evaluation.

Miles, M. B., Huberman, A. M., & Saldana, J. (2014). Qualitative data analysis: A methods sourcebook. Thousand Oaks, CA: Sage.

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice: The definitive text of qualitative inquiry frameworks and options (4th ed.). Thousand Oaks, CA: Sage.

Blog: Scavenging Evaluation Data

Posted on January 17, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University


But little Mouse, you are not alone,
In proving foresight may be vain:
The best laid schemes of mice and men
Go often askew,
And leave us nothing but grief and pain,
For promised joy!

From To a Mouse, by Robert Burns (1785), modern English version

Research and evaluation textbooks are filled with elegant designs for studies that will illuminate our understanding of social phenomena and programs. But as any evaluator will tell you, the real world is fraught with all manner of hazards and imperfect conditions that wreak havoc on design, bringing grief and pain, rather than the promised joy of a well-executed evaluation.

Probably the biggest hindrance to executing planned designs is that evaluation is just not the most important thing to most people. (GASP!) They are reluctant to give two minutes for a short survey, let alone an hour for a focus group. Your email imploring them to participate in your data collection effort is one of hundreds of requests for their time and attention that they are bombarded with daily.

So, do all the things the textbooks tell you to do. Take the time to develop a sound evaluation design and do your best to follow it. Establish expectations early with project participants and other stakeholders about the importance of their cooperation. Use known best practices to enhance participation and response rates.

In addition: Be a data scavenger. Here are two ways to get data for an evaluation that do not require hunting down project participants and convincing them to give you information.

1. Document what the project is doing.

I have seen a lot of evaluation reports in which evaluators painstakingly recount a project’s activities as a tedious story rather than a straightforward account. This task typically requires the evaluator to ask many questions of project staff, pore through documents, and track down materials. It is much more efficient for project staff to keep a record of their own activities. For example, see EvaluATE’s resume. It is a no-nonsense record of our funding, activities, dissemination, scholarship, personnel, and contributors. In and of itself, our resume does most of the work of the accountability aspect of our evaluation (i.e., Did we do what we promised?). In addition, the resume can be used to address questions like these:

  • Is the project advancing knowledge, as evidenced by peer-reviewed publications and presentations?
  • Is the project’s productivity adequate in relation to its resources (funding and personnel)?
  • To what extent is the project leveraging the expertise of the ATE community?

2. Track participation.

If your project holds large events, use a sign-in sheet to get attendance numbers. If you hold webinars, you almost certainly have records with information about registrants and attendees. If you hold smaller events, pass around a sign-in sheet asking for basic information like name, institution, email address, and job title (or major if it’s a student group). If the project has developed a course, get enrollment information from the registrar.  Most importantly: Don’t put these records in a drawer. Compile them in a spreadsheet and analyze the heck out of them. Here are example data points that we glean from EvaluATE’s participation records:

  • Number of attendees
  • Number of attendees from various types of organizations (such as two- and four-year colleges, nonprofits, government agencies, and international organizations)
  • Number and percentage of attendees who return for subsequent events
  • Geographic distribution of attendees
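
As one illustration of analyzing such records, the sketch below computes several of the data points above from a compiled participation spreadsheet; the file and column names are hypothetical stand-ins for whatever your sign-in sheets and registration systems capture.

    import pandas as pd

    # Hypothetical columns: event, date, name, email, institution, org_type, state
    attendance = pd.read_csv("participation_records.csv")

    # Number of attendees per event
    print(attendance.groupby("event")["email"].nunique())

    # Attendees by organization type (counting each person once)
    print(attendance.drop_duplicates("email")["org_type"].value_counts())

    # Percentage of attendees who return for subsequent events
    events_per_person = attendance.groupby("email")["event"].nunique()
    print(f"Returning attendees: {(events_per_person > 1).mean():.1%}")

    # Geographic distribution of attendees
    print(attendance.drop_duplicates("email")["state"].value_counts())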

Project documentation and participation data will be most helpful for process evaluation and accountability. You will still need cooperation from participants for outcome evaluation—and you should engage them early to garner their interest and support for evaluation efforts. Still, you may be surprised by how much valuable information you can get from these two sources—documentation of activities and participation records—with minimal effort.

Get creative about other data you can scavenge, such as institutional data that colleges already collect; website data, such as Google Analytics; and citation analytics for published articles.