Blog: Alumni Tracking: The Ultimate Source for Evaluating Completer Outcomes

Posted on May 15, 2019 in Blog

 

Faye R. Jones, Senior Research Associate, Florida State
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State

When examining student programs, evaluators can use many student outcomes (e.g., enrollments, completions, and completion rates) as appropriate measures of success. However, to properly assess whether programs and interventions are having their intended impact, evaluators should consider performance metrics that capture data on individuals after they have completed degree programs or certifications, also known as “completer” outcomes.

For example, if a program’s goal is to increase the number of graduating STEM majors, then whether students can get STEM jobs after completing the program is very important to know. Similarly, if the purpose of offering high school students professional CTE certifications is to help them get jobs after graduation, it’s important to know if this indeed happened. Completer outcomes allow evaluators to assess whether interventions are having their intended effect, such as increasing the number of minorities entering academia or attracting more women to STEM professions. Programs aren’t just effective when participants have successfully entered and completed them; they are effective when graduates have a broad impact on society.

Tracking of completer outcomes is typical, as many college and university leaders are held accountable for student performance while students are enrolled and after students graduate. Educational policymakers are asking leaders to look beyond completion to outcomes that represent actual success and impact. As a result, alumni tracking has become an important tool in determining the success of interventions and programs. Unfortunately, while the solution sounds simple, the implementation is not.

Tracking alumni (i.e., past program completers) can be an enormous undertaking, and many institutions do not have a dedicated person to do the job. Alumni also move, switch jobs, and change their names. Some experience survey fatigue after several survey requests. The following are practical tips from an article we co-authored explaining how we tracked alumni data for a five-year project that aimed to recruit, retain, and employ computing and technology majors (Jones, Mardis, McClure, & Randeree, 2017):

    • Recommend to principal investigators (PIs) that they extend outcome evaluations to include completer outcomes, so as to capture graduation and alumni data and downstream program impact.
    • Baseline alumni tracking details should be obtained prior to student completion, but not captured again until six months to one year after graduation, to provide ample transition time for the graduate.
    • Programs with a systematic plan for capturing outcomes are likely to have higher alumni response rates.
    • Surveys are a great tool for obtaining alumni tracking information, and social media (e.g., LinkedIn) can be used to stay in contact with students for survey and interview requests. Suggest that PIs implement a social media strategy while students are participating in the program, so that contact only needs to be maintained after completion.
    • Data points might include student employment status, advanced educational opportunities (e.g., graduate school enrollment), position title, geographic location, and salary. For richer data, we recommend adding a qualitative component to the survey (or selecting a sample of alumni to participate in interviews). A minimal sketch of how such a record might be structured follows this list.
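
As a concrete illustration of the data points above, here is a minimal sketch of a per-completer tracking record in Python. The field names, ID scheme, and follow-up window helper are illustrative assumptions, not fields taken from the Jones et al. (2017) questionnaire.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional, Tuple

@dataclass
class AlumniRecord:
    """One tracking record per completer, per collection wave."""
    alum_id: str                              # internal identifier, kept separate from survey responses
    graduation_date: date
    wave: str                                 # "baseline", "6-month", or "12-month"
    employment_status: Optional[str] = None   # e.g., "employed in field", "seeking employment"
    position_title: Optional[str] = None
    graduate_enrollment: Optional[bool] = None
    geographic_location: Optional[str] = None
    salary_range: Optional[str] = None        # ranges tend to draw fewer refusals than exact figures
    linkedin_url: Optional[str] = None        # supports the social media contact strategy above

def follow_up_window(graduation_date: date) -> Tuple[date, date]:
    """Six months to one year after graduation, per the timing tip above."""
    return graduation_date + timedelta(days=182), graduation_date + timedelta(days=365)

# Example: a baseline record captured before completion.
record = AlumniRecord(alum_id="A-0042", graduation_date=date(2019, 5, 4), wave="baseline",
                      linkedin_url="https://www.linkedin.com/in/example")
print(follow_up_window(record.graduation_date))
```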

The article also includes a sample questionnaire in the reference section.

A comprehensive review of completer outcomes requires that evaluators examine both the alumni tracking procedures and analysis of the resulting data.

Once evaluators have helped PIs implement a sound alumni tracking strategy, institutions should advance to alumni backtracking! We will provide more information on that topic in a future post.

* This work was partially funded by NSF ATE 1304382. For more details, go to https://technicianpathways.cci.fsu.edu/

References:

Jones, F. R., Mardis, M. A., McClure, C. M., & Randeree, E. (2017). Alumni tracking: Promising practices for collecting, analyzing, and reporting employment data. Journal of Higher Education Management, 32(1), 167–185. https://mardis.cci.fsu.edu/01.RefereedJournalArticles/1.9jonesmardisetal.pdf

Blog: A Call to Action: Advancing Technician Education through Evidence-Based Decision-Making

Posted on May 1, 2019 in Blog

 

Faye R. Jones, Senior Research Associate, Florida State
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State


Evaluators contribute to developing the Advanced Technological Education (ATE) community’s awareness and understanding of theories, concepts, and practices that can advance technician education at the discrete project level as well as at the ATE program level. Regardless of focus, project teams explore, develop, implement, and test interventions designed to lead to successful outcomes in line with ATE’s goals. At the program level, all ATE community members, including program officers, benefit from the reviewing and compiling of project outcomes to build an evidence base to better prepare the technical workforce.

Evidence-based decision-making is one way to ensure that project outcomes lead to quality and systematic program outcomes. As indicated in Figure 1, good decision-making depends on three domains of evidence within an environmental and organizational context: contextual; experiential (i.e., resources, including practitioner expertise); and the best available research evidence (Satterfield et al., 2009).

Figure 1. Domains that influence evidence-based decision-making (Satterfield et al., 2009)

As Figure 1 suggests, at the project level, as National Science Foundation (NSF) ATE principal investigators (PIs) work, evaluators can assist PIs in making project design and implementation decisions based on the best available research evidence, considering participant, environmental, and organizational dimensions. For example, researchers and evaluators work together to compile the best research evidence about specific populations (e.g., underrepresented minorities) with whom interventions can thrive. Then, they establish mutually beneficial researcher-practitioner partnerships to make decisions based on their practical expertise and current experiences in the field.

At the NSF ATE program level, program officers often review and qualitatively categorize project outcomes provided by project teams, including their evaluators, as shown in Figure 2.

 

Figure 2. Quality of Evidence Pyramid (Paynter, 2009)

As Figure 2 suggests, aggregated project outcomes tell a story about what the ATE community has learned and needs to know about advancing technician education. At the highest levels of evidence, program officers strive to obtain strong evidence that can lead to best practice guidelines and manuals grounded by quantitative studies and trials, and enhanced by rich and in-depth qualitative studies and clinical experiences. Evaluators can meet PIs’ and program officers’ evidence needs with project-level formative and summative feedback (such as outcomes and impact evaluations) and program-level data, such as outcome estimates from multiple studies (i.e., meta-analyses of project outcome studies). Through these complementary sources of evidence, evaluators facilitate the sharing of the most promising interventions and best practices.
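
To make the program-level point concrete, the sketch below shows the kind of calculation behind pooling "outcome estimates from multiple studies": a fixed-effect, inverse-variance weighted average of project-level effect estimates. The project names, effect sizes, and standard errors are invented for illustration; a real meta-analysis of ATE project outcomes would also require careful effect-size extraction and checks for heterogeneity.

```python
import math

# Hypothetical project-level outcome estimates (e.g., standardized mean differences)
# and their standard errors; values are invented for illustration only.
project_estimates = {
    "Project A": (0.30, 0.12),
    "Project B": (0.45, 0.20),
    "Project C": (0.15, 0.10),
}

# Fixed-effect inverse-variance weighting: more precise estimates get more weight.
weights = {name: 1 / se**2 for name, (_, se) in project_estimates.items()}
pooled = sum(w * project_estimates[name][0] for name, w in weights.items()) / sum(weights.values())
pooled_se = math.sqrt(1 / sum(weights.values()))

print(f"Pooled estimate: {pooled:.2f} (SE {pooled_se:.2f})")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")
```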

In this call to action, we charge PIs and evaluators with working closely together to ensure that project outcomes are clearly identified and supported by evidence that benefits the ATE community’s knowledge base. Evaluators’ roles include guiding leaders to 1) identify new or promising strategies for making evidence-based decisions; 2) use or transform current data for making informed decisions; and when needed, 3) document how assessment and evaluation strengthen evidence gathering and decision-making.

References:

Paynter, R. A. (2009). Evidence-based research in the applied social sciences. Reference Services Review, 37(4), 435–450. doi:10.1108/00907320911007038

Satterfield, J., Spring, B., Brownson, R., Mullen, E., Newhouse, R., Walker, B., & Whitlock, E. (2009). Toward a transdisciplinary model of evidence-based practice. The Milbank Quarterly, 86, 368–390.

Blog: Untangling the Story When You’re Part of the Complexity

Posted on April 16, 2019 in Blog

Evaluator, SageFox Consulting Group


 

I am wrestling with a wicked evaluation problem: How do I balance evaluation, research, and technical assistance work when they are so interconnected? I will discuss strategies for managing different aspects of work and the implications of evaluating something that you are simultaneously trying to change.

Background

In 2017, the National Science Foundation solicited proposals that called for researchers and practitioners to partner in conducting research that directly informs problems of practice through the Research-Practice Partnership (RPP) model. I work on one project funded under this solicitation: Using a Researcher-Practitioner Partnership Approach to Develop a Shared Evaluation and Research Agenda for Computer Science for All (RPPforCS). RPPforCS aims to learn how projects supported under this funding are conducting research and improving practice. It also brings together a community of researchers and evaluators from across the funded partnerships for collective capacity building.

The Challenge

The RPPforCS work requires a dynamic approach to evaluation, and it challenges conventional boundaries between research, evaluation, and technical assistance. I am both part of the evaluation team for individual projects and part of a program-wide research project that aims to understand how projects are using an RPP model to meet their computer science and equity goals. Given the novelty of the program and research approach, the RPPforCS team also supports these projects with targeted technical assistance to improve their ability to use an RPP model (ideas that typically come out of what we’re learning across projects).

Examples in Practice

The RPPforCS team examines changes through a review of project proposals and annual reports, yearly interviews with a member of each project, and an annual community survey. Using these data collection mechanisms, we ask about the impact of the technical assistance on the functioning of the project. Being able to rigorously document how the technical assistance aspect of our research project influences the projects' work allows us to track change effected by the RPPforCS team separately from change arising within the individual projects.

We use the technical assistance (e.g., tools, community meetings, webinars) to help projects further their goals and as research and evaluation data collection opportunities to understand partnership dynamics. The technical assistance tools are all shared through Google Suite, allowing us to see how the teams engage with them. Teams are also able to use these tools to improve their partnership practice (e.g., using our Health Assessment Tool to establish shared goals with partners). Structured table discussions at our community meetings allow us to understand more about specific elements of partnership that are demonstrated within a given project. We share all of our findings with the community on a frequent basis to foreground the research effort, while still providing necessary support to individual projects. 

Hot Tips

  • Rigorous documentation: The best way I have found to account for our external impact is rigorous documentation. This may sound like a basic approach to evaluation, but it is the easiest way to track change over time and to distinguish change you have introduced from organic change coming from within the project (one possible way to structure such a log is sketched after these tips).
  • Multi-use activities: Turn your technical assistance into a data collection opportunity. It both builds capacity within a project and allows you to access information for your own evaluation and research goals.
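
One possible way to operationalize the documentation tip is a structured change log that records, for each observed change, whether it followed an RPPforCS technical assistance activity or arose organically within the project. The Python sketch below is a hypothetical illustration; the field names and entries are assumptions, not the RPPforCS team's actual instrument.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ChangeLogEntry:
    """One documented change in a project's partnership practice."""
    project_id: str
    observed_on: date
    description: str
    source: str        # "TA-introduced" (e.g., followed a webinar or tool) or "organic"
    evidence: str      # where it was observed: annual report, interview, community survey

log: List[ChangeLogEntry] = [
    ChangeLogEntry("RPP-07", date(2019, 3, 1),
                   "Partners set shared goals using the Health Assessment Tool",
                   source="TA-introduced", evidence="yearly interview"),
    ChangeLogEntry("RPP-07", date(2019, 4, 2),
                   "District partner added a new co-design meeting",
                   source="organic", evidence="annual report"),
]

# Separate change we introduced from change that arose within the project.
ta_driven = [entry for entry in log if entry.source == "TA-introduced"]
print(f"{len(ta_driven)} of {len(log)} documented changes followed technical assistance")
```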

Blog: Increase Online Survey Response Rates with These Four Tips

Posted on April 3, 2019 in Blog
Molly Henschel, Researcher and Evaluator, Magnolia Consulting, LLC
Elizabeth Peery, Researcher and Evaluator, Magnolia Consulting, LLC
Anne Cosby, Researcher and Evaluation Associate, Magnolia Consulting, LLC

 

Greetings! We are Molly Henschel, Beth Perry, and Anne Cosby with Magnolia Consulting. We often use online surveys in our Advanced Technological Education (ATE) projects. Online surveys are an efficient data collection method for answering evaluation questions and providing valuable information to ATE project teams. However, low response rates threaten the credibility and usefulness of survey findings. At Magnolia Consulting, we use proven strategies to increase response rates, which, in turn, helps ensure that survey results are representative of the population. We offer the following four strategies to promote high response rates:

1. Ensure the survey is easy to complete. Keep certain factors in mind as you create your survey. For example, is the survey clear and easy to read? Is it free of jargon? Is it concise? You do not want respondents to lose interest in completing a survey because it is difficult to read or too lengthy. To help respondents finish the survey, consider:

      • collaborating with the ATE project team to develop survey questions that are straightforward, clear, and relevant;
      • distributing survey questions across several pages to decrease cognitive load and minimize the need for scrolling;
      • including a progress bar; and
      • ensuring your survey is compatible with both computers and mobile devices.

Once the survey is finalized, coordinate with program staff to send the survey during ATE-related events, when the respondents have protected time to complete the survey.

2. Send a prenotification. Prior to sending the online survey, send a prenotification to all respondents, informing them of the upcoming survey. A prenotification establishes survey trustworthiness, boosts survey anticipation, and reduces the possibility that a potential respondent will disregard the survey. The prenotification can be sent by email, but research shows that using a mixed-mode strategy (i.e., email and postcard) can have positive effects on response rates (Dillman, Smyth, & Christian, 2014; Kaplowitz, Lupi, Couper, & Thorp, 2012). We have also found that asking the ATE principal investigator (PI) or co-principal investigators (co-PIs) to send the prenotification helps yield higher response rates.

3. Use an engaging and informative survey invitation. The initial survey invitation is an opportunity to grab your respondents’ attention. First, use a short and engaging subject line that will encourage respondents to open your email. In addition, follow best practices to ensure your email is not diverted into a recipient’s spam folder. Next, make sure the body of your email provides respondents with relevant survey information, including:

      • a clear survey purpose;
      • a statement on the importance of their participation;
      • realistic survey completion time;
      • a deadline for survey completion;
      • information on any stipend requirements or incentives (if your budget allows);
      • a statement about survey confidentiality;
      • a show of appreciation for time and effort; and
      • contact information for any questions about the survey.

4. Follow up with nonresponders. Track survey response rates on a regular basis (a simple way to compute them and list nonresponders is sketched after this list). To address low response rates:

      • continue to follow up with nonresponders, sending at least two reminders;
      • investigate potential reasons the survey has not been completed and offer any assistance (e.g., emailing a paper copy) to make survey completion less burdensome;
      • contact nonresponders via a different mode (e.g., phone); or
      • enlist the help of the ATE PI and co-PI to personally follow up with nonresponders. In our experience, the relationship between the ATE PI or co-PI and the respondents can be helpful in collecting those final surveys.
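
As a simple illustration of tracking response rates and flagging nonresponders for reminders, here is a minimal Python sketch. The roster and email addresses are invented, and most survey platforms can export equivalent completion data directly.

```python
# Hypothetical roster of invited respondents and their completion status.
invited = {
    "ada@example.edu": True,
    "ben@example.edu": False,
    "cam@example.edu": True,
    "dee@example.edu": False,
}

# Overall response rate and the list of people who still need a reminder.
response_rate = sum(invited.values()) / len(invited)
nonresponders = [email for email, completed in invited.items() if not completed]

print(f"Response rate: {response_rate:.0%}")
print(f"Send reminder #2 to: {', '.join(nonresponders)}")
```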

 

Resources:

Nulty, D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314.

References:

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, mail, and mixed-mode surveys: The tailored design method (4th ed.). New York: Wiley.

Kaplowitz, M. D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The effect of invitation design on web survey response rates. Social Science Computer Review, 30, 339–349.

Blog: Repackaging Evaluation Reports for Maximum Impact

Posted on March 20, 2019 in Blog
Emma Perk, Managing Director, EvaluATE
Lyssa Wilson Becho, Research Manager, EvaluATE

Evaluation reports take a lot of time to produce and are packed full of valuable information. To get the most out of your reports, think about “repackaging” your traditional report into smaller pieces.

Repackaging involves breaking up a long-form evaluation report into digestible pieces to target different audiences and their specific information needs. The goals of repackaging are to increase stakeholders’ engagement with evaluation findings, increase their understanding, and expand their use.

Let’s think about how we communicate data to various readers. Bill Shander from Beehive Media created the 4×4 Model for Knowledge Content, which illustrates different levels at which data can be communicated. We have adapted this model for use within the evaluation field. As you can see below, there are four levels, and each has a different type of deliverable associated with it. We are going to walk through these four levels and how an evaluation report can be broken up into digestible pieces for targeted audiences.

Figure 1. The four levels of delivering evaluative findings (image adapted from Shander’s 4×4 Model for Knowledge Content).

The first level, the Water Cooler, is for quick, easily digestible data pieces. The idea is to use a single piece of data from your report to intrigue viewers into wanting to learn more. Examples include a headline in a newspaper, a postcard, or a social media post. In a social media post, you should include a graphic (photo or graph), a catchy title, and a link to the next communication level's document. This information should be succinct and exciting. Use this level to catch the attention of readers who might not otherwise be invested in your project.

Figure 2. Example of social media post at the Water Cooler level.

The Café level allows you to highlight three to five key pieces of data that you really want to share. A Café level deliverable is great for busy stakeholders who need to know detailed information but don’t have time to read a full report. Examples include one-page reports, a short PowerPoint deck, and short briefs. Make sure to include a link to your full evaluation report to encourage the reader to move on to the next communication level.

Figure 3. One-page report at the Café level.

The Research Library is the level at which we find the traditional evaluation report. Deliverables at this level require the reader to have an interest in the topic and to spend a substantial amount of time to digest the information.

Figure 4. Full evaluation report at the Research Library level.

The Lab is the most intensive and involved level of data communication. Here, readers have a chance to interact with the data. This level goes beyond a static report and allows stakeholders to personalize the data for their interests. For those who have the knowledge and expertise in creating dashboards and interactive data, providing data at the Lab level is a great way to engage with your audience and allow the reader to manipulate the data to their needs.

Figure 5. Data dashboard example from Tableau Public Gallery

We hope this blog has sparked some interest in the different ways an evaluation report can be repackaged. Different audiences have different information needs and different amounts of time to spend reviewing reports. We encourage both project staff and evaluators to consider who their intended audience is and what would be the best level to communicate their findings. Then use these ideas to create content specific for that audience.

Blog: Evaluation Reporting with Adobe Spark

Posted on March 8, 2019 in Blog

 

Ouen Hunter, Doctoral Student, The Evaluation Center
Emma Perk, Managing Director, EvaluATE
Michael Harnar, Assistant Professor, Interdisciplinary Ph.D. in Evaluation, The Evaluation Center

This blog was originally published on AEA365 on December 28, 2018: https://aea365.org/blog/evaluation-reporting-with-adobe-spark-by-ouen-hunter-and-emma-perk/

Hi! We are Ouen Hunter (student at the Interdisciplinary Ph.D. in Evaluation Program, IDPE), Emma Perk (project manager at The Evaluation Center), and Michael Harnar (assistant professor at the IDPE) from Western Michigan University. Recently, we used PhotoVoice in our evaluation of an Upward Bound program and wanted to share how we reported our PhotoVoice findings using the cost-free version of Adobe Spark.

Adobe Spark offers templates to make webpages, videos, flyers, reports, and more. It also hosts your product online for free. While there is a paid version of Adobe Spark, everything we discuss in this blog can be done using the free version. The software is very straightforward, and we were able to get our report online within an hour. We chose to create a webpage to increase accessibility for a large audience.

The free version of Adobe Spark has a lot of features, but it can be difficult to customize the layout. Therefore, we created our layouts in PowerPoint and then uploaded them to Spark. This enabled us to customize the font, alignment, and illustrations. Follow these instructions to create a similar webpage:

  • Create a slide deck in PowerPoint, using one slide for each photo and its accompanying participant text. The first slide serves as a template for the rest.
  • After creating the slides, you have a few options for saving the photos for upload.
    1. Use a snipping tool (Windows’ snipping or Mac’s screenshot function) to take a picture of each slide and save it as a PNG file.
    2. Save each as a picture in PowerPoint by selecting the image and the speech bubble, right clicking, and saving as a picture.
    3. Export as a PNG in PowerPoint. Go to File > Export then select PNG under the File Format drop-down menu. This will save all the slides as individual image files.
  • Create a webpage in Adobe Spark.
          1. Once on the site, you will be prompted to start a new account (unless you’re a returning user). This will allow your projects to be stored and give you access to create in the software.
          2. You have the option to change the theme to match your program or branding by selecting the Theme button.
          3. Once you have selected your theme, you are ready to add a title and upload the photos you created from PowerPoint. To upload the photos, press the plus icon. 
          4. Then select Photo. 
          5. Select Upload Photo. Add all photos and confirm the arrangement.
          6. After finalizing, remember to post the page online and click Share to give out the link. 

Though we used Adobe Spark to share our PhotoVoice results, there are many applications for using Spark. We encourage you to check out Adobe Spark to see how you can use it to share your evaluation results.

Hot Tips and Features:

  • Adobe Spark adjusts automatically for handheld devices.
  • Adobe Spark also automatically adjusts lines for you. No need to use a virtual ruler.
  • There are themes available with the free subscription, making it easy to design the webpage.
  • Select multiple photos during your upload. Adobe Spark will automatically separate each file for you.

*Disclaimer: Adobe Spark didn’t pay us anything for this blog. We wanted to share this amazing find with the evaluation community!

Blog: From Instruments to Analysis: EvalFest’s Outreach Training Offerings

Posted on February 26, 2019 in Blog

President, Karen Peterman Consulting, Co.


Looking for a quick way to train field researchers? How about quick tips on data management or a reminder about what a p-value is? The new EvalFest website hosts brief training videos and related resources to support evaluators and practitioners. EvalFest is a community of practice, funded by the National Science Foundation, that was designed to explore what we could learn about science festivals by using shared measures. The videos on the website were created to fit the needs of our 25 science festival partners from across the United States. Even though they were created within the context of science festival evaluation, the videos and website have been framed generally to support anyone who is evaluating outreach events.

Here’s what you should know:

  1. The resources are free!
  2. The resources have been vetted by our partners, advisors, and/or other leaders in the STEM evaluation community.
  3. You can download PDF and video content directly from the site.

Here’s what we have to offer:

  • Instruments — The site includes 10 instruments, some of which include validation evidence. The instruments gather data from event attendees, potential attendees who may or may not have attended your outreach event, event exhibitors and partners, and scientists who conduct outreach. Two observation protocols are also available, including a mystery shopper protocol and a timing and tracking protocol.
  • Data Collection Tools — EvalFest partners often need to train staff or field researchers to collect data during events, so this section includes eight videos that our partners have used to provide consistent training to their research teams. Field researchers typically watch the videos on their own and then attend a “just in time” hands-on training to learn the specifics about the event and to practice using the evaluation instruments before collecting data. Topics include approaching attendees to do surveys during an event, informed consent, and online survey platforms, such as QuickTapSurvey and SurveyMonkey.
  • Data Management Videos — Five short videos are available to help you clean and organize your data and begin to explore it in Excel. These videos use the kinds of data typically generated by outreach surveys, and they show, step by step, how to do things like filter your data, recode your data, and create pivot tables.
  • Data Analysis Videos — Available in this section are 18 videos and 18 how-to guides that provide quick explanations of things like the p-value, exploratory data analysis, the chi-square test, the independent-samples t-test, and analysis of variance. The conceptual videos describe how each statistical test works in nonstatistical terms. The how-to resources are then provided in both video and written format and walk users through conducting each analysis in Excel, SPSS, and R (an analogous sketch in Python appears after this list).
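
The EvalFest guides themselves walk through these analyses in Excel, SPSS, and R. As an analogous illustration only, the Python sketch below performs two of the named tasks, building a pivot table (cross-tabulation) and running a chi-square test of independence, on invented outreach-survey-style data.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Invented outreach-survey-style data, for illustration only.
df = pd.DataFrame({
    "attendee_type": ["adult", "adult", "youth", "youth", "adult", "youth"],
    "first_time":    ["yes",   "no",    "yes",   "yes",   "no",    "no"],
})

# Pivot table (cross-tabulation) of attendee type by first-time attendance.
table = pd.crosstab(df["attendee_type"], df["first_time"])
print(table)

# Chi-square test of independence on the same table.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```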

Our website tagline is “A Celebration of Evaluation.” It is our hope that the resources on the site help support STEM practitioners and evaluators in conducting high-quality evaluation work for many years to come. We will continue to add resources throughout 2019. So please check out the website, let us know what you think, and feel free to suggest resources that you’d like us to create next!

Blog: Using Think-Alouds to Test the Validity of Survey Questions

Posted on February 7, 2019 in Blog

Research Associate, Western Michigan University


Those who have spent time creating and analyzing surveys know that surveys are complex instruments that can yield misleading results when not well designed. A great way to test your survey questions is to conduct a think-aloud (sometimes referred to as a cognitive interview). A type of validity testing, a think-aloud asks potential respondents to read through a survey and discuss out loud how they interpret the questions and how they would arrive at their responses. This approach can help identify questions that are confusing or misleading to respondents, questions that take too much time and effort to answer, and questions that don’t seem to be collecting the information you originally intended to capture.

Distorted survey results generally stem from four problem areas associated with the cognitive tasks of responding to a survey question: failure to comprehend, failure to recall, problems summarizing, and problems reporting answers. First, respondents must be able to understand the question. Confusing sentence structure or unfamiliar terminology can doom a survey question from the start.

Second, respondents must be able to have access to or recall the answer. Problems in this area can happen when questions ask for specific details from far in the past or questions to which the respondent just does not know the answer.

Third, sometimes respondents remember things in different ways from how the survey is asking for them. For example, respondents might remember what they learned in a program but are unable to assign these different learnings to a specific course. This might lead respondents to answer incorrectly or not at all.

Finally, respondents must translate the answer constructed in their heads to fit the survey response options. Confusing or vague answer formats can lead to unclear interpretation of responses. It is helpful to think of these four problem areas when conducting think-alouds.

Here are some tips when conducting a think-aloud to test surveys:

    • Make sure the participant knows the purpose of the activity is to have them evaluate the survey and not just respond to the survey. I have found that it works best when participants read the questions aloud.
    • If a participant seems to get stuck on a particular question, it might be helpful to probe them with one of these questions:
      • What do you think this question is asking you?
      • How do you think you would answer this question?
      • Is this question confusing?
      • What does this word/concept mean to you?
      • Is there a different way you would prefer to respond?
    • Remember to give the participant space to think and respond. It can be difficult to hold space for silence, but it is particularly important when asking for thoughtful answers.
    • Ask the participant reflective questions at the end of the survey. For example:
      • Looking back, does anything seem confusing?
      • Is there something in particular you hoped  was going to be asked but wasn’t?
      • Is there anything else you feel I should know to truly understand this topic?
    • Perform think-alouds and revisions in an iterative process. This will allow you to test out changes you make to ensure they addressed the initial question.

Blog: PhotoVoice: A Method of Inquiry in Program Evaluation

Posted on January 25, 2019 in Blog

 

Ouen Hunter, Doctoral Student, The Evaluation Center
Emma Perk, Managing Director, EvaluATE
Michael Harnar, Assistant Professor, Interdisciplinary Ph.D. in Evaluation, The Evaluation Center

Hello, EvaluATE! We are Ouen Hunter (student at the Interdisciplinary Ph.D. in Evaluation, IDPE), Emma Perk (co-PI of EvaluATE at The Evaluation Center), and Michael Harnar (assistant professor at the IDPE) from Western Michigan University. We recently used PhotoVoice in our evaluation of a Michigan-based Upward Bound (UB) program (a college preparation program focused on 14- to 19-year-old youth living in low-income families in which neither parent has a bachelor’s degree).

PhotoVoice is a method of inquiry that engages participants in creating photographs and short captions in response to specific prompts. The photos and captions provide contextually grounded insights that might otherwise be unreachable by those not living that experience. We opted to use PhotoVoice because the photos and narratives could provide insights into participants’ perspectives that cannot be captured using close-ended questionnaires.

We created two prompts, in the form of questions, and introduced PhotoVoice in person with the UB student participants (see the instructional handout below). Students used their cell phones to take one photo per prompt. For confidentiality reasons, we also asked the students to avoid taking pictures of human faces. Students were asked to write a two- to three-sentence caption for each photo. The caption was to include a short description of the photo, what was happening in the photo, and the reason for taking the photo.


Figure 1: PhotoVoice Handout

PhotoVoice participation was part of the UB summer programming and overseen by the UB staff. Participants had two weeks to complete the tasks. After receiving the photographs and captions, we analyzed them using MAXQDA 2018. We coded the pictures and the narratives using an inductive thematic approach.

After the preliminary analysis, we went back to our student participants to see if our themes resonated with them. Each photo and caption was printed on a large sheet of paper (see Figure 2 below) and posted on the wall. During a gallery walk, students were asked to review each photo and caption combination and to indicate whether they agreed or disagreed with our theme selections (see Figure 3). We gave participants stickers and asked them to place the stickers in either the "agree" or "disagree" section at the bottom of each poster. After the gallery walk, we discussed the participants' ratings to better understand their photos and write-ups.

Figure 2: Gallery walk layout (photo and caption on large pieces of paper)

Figure 3: Participants browsing the photographs

Using the participants’ insights, we finalized the analysis, created a webpage, and developed a two-page report for the program staff. To learn more about our reporting process, see our next blog. Below is a diagram of the activities that we completed during the evaluation.

Figure 4: Activities conducted in the Upward Bound evaluation

The PhotoVoice activity provided us with rich insights that we would not have received from the survey that was previously used. The UB student participants enjoyed learning about and being a part of the evaluation process. The program staff valued the reports and insights the method provided. The exclusion of faces in the photographs enabled us to avoid having to obtain parental permission to release the photos for use in the evaluation and by UB staff. Having the students use cell phone cameras kept costs low. Overall, the evaluation activity went over well with the group, and we plan to continue using PhotoVoice in the future.

Blog: The Business of Evaluation: Liability Insurance

Posted on January 11, 2019 in Blog

Luka Partners LLC


Bottom line: you may need liability insurance, and you have to pay for it.

The proposal has been funded, you are the named evaluator, you have created a detailed scope of work, and the educational institution has sent you a Professional Services Contract to sign (and read!).

This contract will contain many provisions, one of which is having insurance. I remember the first time I read it: The contractor shall maintain commercial general liability insurance against any claims that might incur in carrying out this agreement. Minimum coverage shall be $1,000,000.

I thought, well, this probably doesn’t pertain to me, but then I read further: Upon request, the contractor is required to provide a Certificate of Insurance. That got my attention.

You might find what happened next interesting. I called the legal office at the community college. My first question was, Can we just strike that from the contract? No; they were required by law to include it. Then she explained, "Mike, that sort of liability thing is mostly for contractors coming to do physical work on our campus, in case there was an injury, a brick falling on the head of a student, things like that." She lowered her voice. "I can tell you we are never going to ask you to show that certificate to us."

However, sometimes, you will be asked to maintain and provide, on request, professional liability insurance, also called errors and omissions insurance (E&O insurance) or indemnity insurance. This protects your business if you are sued for negligently performing your services, even if you haven’t made a mistake. (OK, I admit, this doesn’t seem likely in our business of evaluation.)

Then the moment of truth came. A decent-sized contract arrived from a major university I shall not name located in Tempe, Arizona, with a mascot that is a devil with a pitchfork. It said if you want a purchase order from us, sign the contract and attach your Certificate of Insurance.

I was between the devil and a hard place. Somewhat naively, I called my local insurance agent (the one for home and car). He actually had never heard of professional liability insurance and promised to get back to me. He didn't.

I turned to Google, the fount of all things. (Full disclosure, I am not advocating for a particular company—just telling you what I did.) I explored one company that came up high in the search results. Within about an hour, I was satisfied that it was what I needed, had a quote, and typed in my credit card number. In the next hour, I had my policy online and printed out the one-page Certificate of Insurance with the university’s name as “additional insured.” Done.

I would like to clarify one point. I did not choose general liability insurance, because my operations pose no risk of physical damage to property or injury to people. In the business of evaluation, that is not a risk.

I now have a $2 million professional liability insurance policy that costs $700 per year. As I add clients, if they require it, I can create a one-page certificate naming them as additional insured, at no extra cost.

Liability insurance, that’s one of the costs of doing business.