Blog: Not Just an Anecdote: Systematic Analysis of Qualitative Evaluation Data

Posted on August 30, 2017 in Blog

President and Founder, Creative Research & Evaluation LLC (CR&E)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As a Ph.D.-trained anthropologist, I spent many years learning how to shape individual stories and detailed observations into larger patterns that help us understand the social and cultural aspects of human life. I was therefore taken aback when I realized that program staff and program officers often think of qualitative evaluation as "just anecdotal." Even people who want "stories" in their evaluation reports can be surprised by what a systematic analysis of qualitative data reveals.

Here are a few tips that can help you reach credible findings with qualitative data. The examples are drawn from my experience evaluating ATE programs.

  • Organize your materials so that you can report which experiences are shared among program participants and which perceptions are unusual or unique. This may sound simple, but it takes forethought and time to provide a clear picture of the overall range and variation of participant perceptions. For example, in analyzing two focus group discussions held with the first cohort of students in an ATE program, I looked at each transcript separately to identify the program successes and challenges raised in each focus group. Comparing the major themes raised by each group, I was confident when I reported that students in the program felt well prepared, although somewhat nervous about upcoming internships. On the other hand, although there were multiple joking comments about unsatisfactory classroom dynamics, I knew these were all made by one person and not taken seriously by the other participants, because I had assigned each participant a label and used those labels throughout the focus group transcripts.
  • Use several qualitative data sources to strengthen a complex conclusion. In technical terms, this is called "triangulation." Two common methods of triangulation are comparing information collected from people with different roles in a program, and comparing what people say with what they are observed doing. In some cases data sources converge, and in some cases they diverge. In collecting early information about an ATE program, I learned how important the program is to industry stakeholders. There was such a need for entry-level technicians that stakeholders, students, and program staff all mentioned ways that immediate job openings might take short-term priority over continuing directly into advanced levels of the same program.
  • Think about qualitative and quantitative data in relation to each other. Student records and participant perceptions show different things and can inform each other. For example, instructors from industry may report a cohort of students as highly motivated and uniformly successful at the same time that institutional records show a small number of less successful students. Both pieces of the picture are important for assessing a project's success: one shows a high level of industry enthusiasm, while the other provides exact percentages of participant success.
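The first tip above — labeling each participant so you can tell a widely shared theme from one person's repeated remark — can be sketched in code. This is a minimal, hypothetical sketch: it assumes transcripts have already been coded into (participant label, theme) pairs, and the labels and theme names shown are invented for illustration.

```python
from collections import defaultdict

def theme_support(coded_segments):
    """Map each theme to the set of participant labels who raised it.

    coded_segments: (participant_label, theme) pairs produced by coding
    one focus-group transcript. Labels here are hypothetical.
    """
    support = defaultdict(set)
    for label, theme in coded_segments:
        support[theme].add(label)
    return support

# Hypothetical coded excerpts from two focus groups
group1 = [("P1", "well prepared"), ("P2", "well prepared"),
          ("P3", "nervous about internship"),
          ("P2", "classroom dynamics"), ("P2", "classroom dynamics")]
group2 = [("P4", "well prepared"), ("P5", "well prepared"),
          ("P6", "nervous about internship")]

s1, s2 = theme_support(group1), theme_support(group2)

# Themes raised in both groups are stronger candidates for reporting
shared = set(s1) & set(s2)

# A theme raised by only one labeled participant is flagged,
# not generalized — even if that person mentioned it repeatedly
singletons = {theme for theme, who in s1.items() if len(who) == 1}
```

Here "classroom dynamics" lands in `singletons` despite being mentioned twice, because both mentions carry the same participant label — exactly the distinction the labels make possible.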

Additional Resources

The following two sources are updated classics in the fields of qualitative research and evaluation.

Miles, M. B., Huberman, A. M., & Saldana, J. (2014). Qualitative data analysis: A methods sourcebook. Thousand Oaks, CA: Sage.

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice: The definitive text of qualitative inquiry frameworks and options (4th ed.). Thousand Oaks, CA: Sage.

Blog: Gender Evaluation Strategies: Improving Female Recruitment and Retention in ATE Projects

Posted on January 14, 2015 in Blog

Executive Director, CalWomen Tech ScaleUP, IWITTS

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

How can ATE project staff, and STEM educators in general, tell whether the strategies they are implementing to increase diversity are reaching the targeted students, and whether those students actually find those strategies helpful?

I’m very passionate about using evaluation and data to support the National Science Foundation’s (NSF’s) goal of broadening impacts in STEM education. In IWITTS’ CalWomenTech Project, we provided technical assistance to seven community colleges in California between 2006 and 2011 to help them recruit and retain female students in technology programs where women were underrepresented. Six of the seven CalWomenTech colleges saw increases in female enrollment in targeted introductory technology courses, and four colleges substantially increased both female and male completion rates (six colleges increased male retention). So how could the CalWomenTech colleges tell, during the project, whether the strategies they were implementing were helping female technology students?

The short answer is: The CalWomenTech colleges knew because 1) the project was measuring increases in female (and male) enrollment and completion numbers in as close to real time as possible; and 2) they asked the female students in the targeted classes if they had experienced project strategies, found those strategies helpful, and wanted to experience strategies they hadn’t encountered.

What I want to focus on here is how the CalWomenTech Project was able to use the findings from those qualitative surveys. The external evaluators for the CalWomenTech Project developed an anonymous “Survey of Female Technology Course Students” that was distributed among the colleges. The survey covered the classroom retention strategies that instructors had been trained on as part of the project, recruitment strategies, and respondent demographics. The first time we administered the survey, 60 female students responded (out of 121 surveyed) across the seven CalWomenTech colleges. Each college was also provided with the female survey data filtered for its own students.

Fifty percent or more of the 60 respondents reported exposure to more than half of the retention strategies listed in the survey. One of the most important outcomes of the survey was that the CalWomenTech colleges were able to use the results to choose which strategies to focus on. Instructors who saw the results during a site visit or a monthly conference call devised ways to incorporate the strategies female students requested into their classrooms. For example, after seeing how many female students wanted to try a leadership role in class, one STEM instructor planned to assign leadership roles in group projects randomly, so that men would not take the leadership role more often than women.
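The two uses of the survey described above — overall exposure rates across sites, and the same data filtered for one college — can be sketched as a small calculation. This is a hypothetical sketch: the row format, strategy names, and college names are invented, not the project's actual instrument.

```python
# Hypothetical survey rows: (college, {strategy_name: reported_exposure})
rows = [
    ("College A", {"role models": True,  "hands-on intro": True}),
    ("College A", {"role models": False, "hands-on intro": True}),
    ("College B", {"role models": True,  "hands-on intro": False}),
]

def exposure_rates(rows, college=None):
    """Percent of respondents reporting exposure to each strategy.

    With college=None this gives the cross-site picture; passing a
    college name mimics the per-site filtered data each college received.
    """
    subset = [answers for c, answers in rows
              if college is None or c == college]
    n = len(subset)
    strategies = {s for answers in subset for s in answers}
    return {s: 100.0 * sum(a.get(s, False) for a in subset) / n
            for s in strategies}

overall = exposure_rates(rows)                 # across all colleges
site_a = exposure_rates(rows, "College A")     # one site's filtered view
```

In practice the same comparison — "how many respondents overall, and how many at my college, experienced this strategy?" — is what let instructors see which requested strategies to prioritize.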

To hear about more evaluation lessons learned, watch the webinar “How well are we serving our female students in STEM?” or read more about the CalWomenTech survey of female technology students.

Human Subjects Alert: If you administer a survey like this to a specific group of students and there are only a few students in the program, the survey is not truly anonymous. Be very careful about how the responses are shared and with whom, since this kind of survey collects confidential information that could harm respondents.