Archive: evaluation

Blog: Alumni Tracking: The Ultimate Source for Evaluating Completer Outcomes

Posted on May 15, 2019 by Faye R. Jones and Marcia A. Mardis in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

When examining student programs, evaluators can use many student outcomes (e.g., enrollments, completions, and completion rates) as appropriate measures of success. However, to properly assess whether programs and interventions are having their intended impact, evaluators should consider performance metrics that capture data on individuals after they have completed degree programs or certifications, also known as “completer” outcomes.

For example, if a program’s goal is to increase the number of graduating STEM majors, then whether students can get STEM jobs after completing the program is very important to know. Similarly, if the purpose of offering high school students professional CTE certifications is to help them get jobs after graduation, it’s important to know if this indeed happened. Completer outcomes allow evaluators to assess whether interventions are having their intended effect, such as increasing the number of minorities entering academia or attracting more women to STEM professions. Programs aren’t just effective when participants have successfully entered and completed them; they are effective when graduates have a broad impact on society.

Tracking of completer outcomes is typical, as many college and university leaders are held accountable for student performance while students are enrolled and after students graduate. Educational policymakers are asking leaders to look beyond completion to outcomes that represent actual success and impact. As a result, alumni tracking has become an important tool in determining the success of interventions and programs. Unfortunately, while the solution sounds simple, the implementation is not.

Tracking alumni (i.e., past program completers) can be an enormous undertaking, and many institutions do not have a dedicated person for the job. Alumni move, switch jobs, and change their names, and some experience survey fatigue after repeated survey requests. The following are practical tips from an article we co-authored describing how we tracked alumni data for a five-year project that aimed to recruit, retain, and employ computing and technology majors (Jones, Mardis, McClure, & Randeree, 2017):

    • Recommend to principal investigators (PIs) that they extend outcome evaluations to include completer outcomes in an effort to capture graduation and alumni data, and downstream program impact.
    • Baseline alumni tracking details should be obtained prior to student completion, but not captured again until six months to one year after graduation, to provide ample transition time for the graduate.
    • Programs with a systematic plan for capturing outcomes are likely to have higher alumni response rates.
    • Surveys are a great tool for obtaining alumni tracking information, while social media (e.g., LinkedIn) can be used to stay in contact with alumni for survey and interview requests. Suggest that PIs implement a social media strategy while students are participating in the program, so that contact need only be maintained after completion.
    • Data points might include student employment status, advanced educational opportunities (e.g., graduate school enrollment), position title, geographic location, and salary. For richer data, we recommend adding a qualitative component to the survey (or selecting a sample of alumni to participate in interviews).
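For teams that track these data points in a spreadsheet or small script rather than a dedicated system, a minimal record structure can help. The sketch below is illustrative only; the field names are our own, not from the article:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative record for one alumnus; field names are hypothetical,
# chosen to mirror the data points suggested in the tips above.
@dataclass
class AlumniRecord:
    name: str
    graduation_year: int
    employed: Optional[bool] = None   # stays None until a follow-up survey returns
    position_title: Optional[str] = None
    in_grad_school: bool = False
    location: Optional[str] = None
    salary: Optional[float] = None

def response_rate(records: list) -> float:
    """Share of alumni whose follow-up survey came back (employment status known)."""
    responded = [r for r in records if r.employed is not None]
    return len(responded) / len(records) if records else 0.0

records = [
    AlumniRecord("A. Smith", 2016, employed=True, position_title="Network Technician"),
    AlumniRecord("B. Jones", 2016, employed=False, in_grad_school=True),
    AlumniRecord("C. Lee", 2017),  # no survey response yet
]
print(f"Response rate: {response_rate(records):.0%}")  # prints "Response rate: 67%"
```

Even this simple structure makes it easy to see, at a glance, who has not yet responded and when a follow-up reminder is due.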

The article also includes a sample questionnaire in the reference section.

A comprehensive review of completer outcomes requires that evaluators examine both the alumni tracking procedures and analysis of the resulting data.

Once evaluators have helped PIs implement a sound alumni tracking strategy, institutions should advance to alumni backtracking! We will provide more information on that topic in a future post.

* This work was partially funded by NSF ATE 1304382. For more details, go to https://technicianpathways.cci.fsu.edu/

References:

Jones, F. R., Mardis, M. A., McClure, C. M., & Randeree, E. (2017). Alumni tracking: Promising practices for collecting, analyzing, and reporting employment data. Journal of Higher Education Management, 32(1), 167–185. https://mardis.cci.fsu.edu/01.RefereedJournalArticles/1.9jonesmardisetal.pdf

Blog: A Call to Action: Advancing Technician Education through Evidence-Based Decision-Making

Posted on May 1, 2019 by Faye R. Jones and Marcia A. Mardis in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Evaluators contribute to developing the Advanced Technological Education (ATE) community’s awareness and understanding of theories, concepts, and practices that can advance technician education at the discrete project level as well as at the ATE program level. Regardless of focus, project teams explore, develop, implement, and test interventions designed to lead to successful outcomes in line with ATE’s goals. At the program level, all ATE community members, including program officers, benefit from the reviewing and compiling of project outcomes to build an evidence base to better prepare the technical workforce.

Evidence-based decision-making is one way to ensure that project outcomes lead to quality, systematic program outcomes. As indicated in Figure 1, good decision-making depends on three domains of evidence within an environmental and organizational context: contextual evidence; experiential evidence (i.e., resources, including practitioner expertise); and the best available research evidence (Satterfield et al., 2009).

Figure 1. Domains that influence evidence-based decision-making (Satterfield et al., 2009) [Click to enlarge]

As Figure 1 suggests, at the project level, evaluators can assist National Science Foundation (NSF) ATE principal investigators (PIs) in making project design and implementation decisions based on the best available research evidence, considering participant, environmental, and organizational dimensions. For example, researchers and evaluators work together to compile the best research evidence about specific populations (e.g., underrepresented minorities) in which interventions can thrive. Then, they establish mutually beneficial researcher-practitioner partnerships to make decisions based on their practical expertise and current experiences in the field.

At the NSF ATE program level, program officers often review and qualitatively categorize project outcomes provided by project teams, including their evaluators, as shown in Figure 2.


Figure 2. Quality of Evidence Pyramid (Paynter, 2009) [Click to enlarge]

As Figure 2 suggests, aggregated project outcomes tell a story about what the ATE community has learned and needs to know about advancing technician education. At the highest levels of evidence, program officers strive to obtain strong evidence that can lead to best practice guidelines and manuals grounded by quantitative studies and trials, and enhanced by rich and in-depth qualitative studies and clinical experiences. Evaluators can meet PIs’ and program officers’ evidence needs with project-level formative and summative feedback (such as outcomes and impact evaluations) and program-level data, such as outcome estimates from multiple studies (i.e., meta-analyses of project outcome studies). Through these complementary sources of evidence, evaluators facilitate the sharing of the most promising interventions and best practices.

In this call to action, we charge PIs and evaluators with working closely together to ensure that project outcomes are clearly identified and supported by evidence that benefits the ATE community’s knowledge base. Evaluators’ roles include guiding leaders to 1) identify new or promising strategies for making evidence-based decisions; 2) use or transform current data for making informed decisions; and, when needed, 3) document how assessment and evaluation strengthen evidence gathering and decision-making.

References:

Paynter, R. A. (2009). Evidence-based research in the applied social sciences. Reference Services Review, 37(4), 435–450. doi:10.1108/00907320911007038

Satterfield, J., Spring, B., Brownson, R., Mullen, E., Newhouse, R., Walker, B., & Whitlock, E. (2009). Toward a transdisciplinary model of evidence-based practice. The Milbank Quarterly, 86, 368–390.

Blog: Repackaging Evaluation Reports for Maximum Impact

Posted on March 20, 2019 by Emma Perk and Lyssa Wilson Becho in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation reports take a lot of time to produce and are packed full of valuable information. To get the most out of your reports, think about “repackaging” your traditional report into smaller pieces.

Repackaging involves breaking up a long-form evaluation report into digestible pieces to target different audiences and their specific information needs. The goals of repackaging are to increase stakeholders’ engagement with evaluation findings, increase their understanding, and expand their use.

Let’s think about how we communicate data to various readers. Bill Shander from Beehive Media created the 4×4 Model for Knowledge Content, which illustrates different levels at which data can be communicated. We have adapted this model for use within the evaluation field. As you can see below, there are four levels, and each has a different type of deliverable associated with it. We are going to walk through these four levels and how an evaluation report can be broken up into digestible pieces for targeted audiences.

Figure 1. The four levels of delivering evaluative findings (image adapted from Shander’s 4×4 Model for Knowledge Content).

The first level, the Water Cooler, is for quick, easily digestible data pieces. The idea is to use a single piece of data from your report to intrigue viewers into wanting to learn more. Examples include a newspaper headline, a postcard, or a social media post. In a social media post, include a graphic (photo or graph), a catchy title, and a link to the next communication level’s document. This information should be succinct and exciting. Use this level to catch the attention of readers who might not otherwise be invested in your project.

Figure 2. Example of social media post at the Water Cooler level.

The Café level allows you to highlight three to five key pieces of data that you really want to share. A Café level deliverable is great for busy stakeholders who need to know detailed information but don’t have time to read a full report. Examples include one-page reports, a short PowerPoint deck, and short briefs. Make sure to include a link to your full evaluation report to encourage the reader to move on to the next communication level.

Figure 3. One-page report at the Café level.

The Research Library is the level at which we find the traditional evaluation report. Deliverables at this level require the reader to have an interest in the topic and to spend a substantial amount of time to digest the information.

Figure 4. Full evaluation report at the Research Library level.

The Lab is the most intensive and involved level of data communication. Here, readers have a chance to interact with the data. This level goes beyond a static report and allows stakeholders to personalize the data for their interests. For those who have the knowledge and expertise in creating dashboards and interactive data, providing data at the Lab level is a great way to engage with your audience and allow the reader to manipulate the data to their needs.

Figure 5. Data dashboard example from Tableau Public Gallery (click image to interact with the data).

We hope this blog has sparked some interest in the different ways an evaluation report can be repackaged. Different audiences have different information needs and different amounts of time to spend reviewing reports. We encourage both project staff and evaluators to consider who their intended audience is and what would be the best level to communicate their findings. Then use these ideas to create content specific for that audience.

Blog: Evaluation Reporting with Adobe Spark

Posted on March 8, 2019 by Ouen Hunter, Emma Perk, and Michael Harnar in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This blog was originally published on AEA365 on December 28, 2018: https://aea365.org/blog/evaluation-reporting-with-adobe-spark-by-ouen-hunter-and-emma-perk/

Hi! We are Ouen Hunter (student at the Interdisciplinary Ph.D. in Evaluation Program, IDPE), Emma Perk (project manager at The Evaluation Center), and Michael Harnar (assistant professor at the IDPE) from Western Michigan University. Recently, we used PhotoVoice in our evaluation of an Upward Bound program and wanted to share how we reported our PhotoVoice findings using the cost-free version of Adobe Spark.

Adobe Spark offers templates to make webpages, videos, flyers, reports, and more. It also hosts your product online for free. While there is a paid version of Adobe Spark, everything we discuss in this blog can be done using the free version. The software is very straightforward, and we were able to get our report online within an hour. We chose to create a webpage to increase accessibility for a large audience.

The free version of Adobe Spark has a lot of features, but it can be difficult to customize the layout. Therefore, we created our layouts in PowerPoint and then uploaded them to Spark. This enabled us to customize the font, alignment, and illustrations. Follow these instructions to create a similar webpage:

  • Create a slide deck in PowerPoint. Use one slide per photo and text from the participant. The first slide serves as a template for the rest.
  • After creating the slides, you have a few options for saving the photos for upload:
    1. Use a snipping tool (Windows’ Snipping Tool or Mac’s screenshot function) to take a picture of each slide and save it as a PNG file.
    2. Save each as a picture in PowerPoint by selecting the image and the speech bubble, right-clicking, and saving as a picture.
    3. Export as a PNG in PowerPoint. Go to File > Export, then select PNG under the File Format drop-down menu. This will save all the slides as individual image files.
  • Create a webpage in Adobe Spark:
    1. Once on the site, you will be prompted to start a new account (unless you’re a returning user). This will allow your projects to be stored and give you access to create in the software.
    2. You have the option to change the theme to match your program or branding by selecting the Theme button.
    3. Once you have selected your theme, you are ready to add a title and upload the photos you created in PowerPoint. To upload the photos, press the plus icon.
    4. Then select Photo.
    5. Select Upload Photo. Add all photos and confirm the arrangement.
    6. After finalizing, remember to post the page online and click Share to give out the link.

Though we used Adobe Spark to share our PhotoVoice results, there are many applications for using Spark. We encourage you to check out Adobe Spark to see how you can use it to share your evaluation results.

Hot Tips and Features:

  • Adobe Spark adjusts automatically for handheld devices.
  • Adobe Spark also automatically adjusts lines for you. No need to use a virtual ruler.
  • There are themes available with the free subscription, making it easy to design the webpage.
  • Select multiple photos during your upload. Adobe Spark will automatically separate each file for you.

*Disclaimer: Adobe Spark didn’t pay us anything for this blog. We wanted to share this amazing find with the evaluation community!

Blog: PhotoVoice: A Method of Inquiry in Program Evaluation

Posted on January 25, 2019 by Ouen Hunter, Emma Perk, and Michael Harnar in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello, EvaluATE! We are Ouen Hunter (student at the Interdisciplinary Ph.D. in Evaluation, IDPE), Emma Perk (co-PI of EvaluATE at The Evaluation Center), and Michael Harnar (assistant professor at the IDPE) from Western Michigan University. We recently used PhotoVoice in our evaluation of a Michigan-based Upward Bound (UB) program (a college preparation program focused on 14- to 19-year-old youth living in low-income families in which neither parent has a bachelor’s degree).

PhotoVoice is a method of inquiry that engages participants in creating photographs and short captions in response to specific prompts. The photos and captions provide contextually grounded insights that might otherwise be unreachable by those not living that experience. We opted to use PhotoVoice because the photos and narratives could provide insights into participants’ perspectives that cannot be captured using close-ended questionnaires.

We created two prompts, in the form of questions, and introduced PhotoVoice in person with the UB student participants (see the instructional handout below). Students used their cell phones to take one photo per prompt. For confidentiality reasons, we also asked the students to avoid taking pictures of human faces. Students were asked to write a two- to three-sentence caption for each photo. The caption was to include a short description of the photo, what was happening in the photo, and the reason for taking the photo.

PhotoVoice handout

Figure 1: PhotoVoice Handout

PhotoVoice participation was part of the UB summer programming and overseen by the UB staff. Participants had two weeks to complete the tasks. After receiving the photographs and captions, we analyzed them using MAXQDA 2018. We coded the pictures and the narratives using an inductive thematic approach.

After the preliminary analysis, we went back to our student participants to see if our themes resonated with them. Each photo and caption was printed on a large sheet of paper (see Figure 2 below) and posted on the wall. During a gallery walk, students were asked to review each photo-and-caption combination and to indicate whether they agreed or disagreed with our theme selections (see Figure 3). We gave participants stickers and asked them to place them in either the “agree” or “disagree” section at the bottom of each poster. After the gallery walk, we discussed the participants’ ratings to better understand their photos and write-ups.

Figure 2: Gallery walk layout (photo and caption on large pieces of paper)

Figure 3: Participants browsing the photographs
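The sticker counts from a gallery walk like this reduce to simple arithmetic. Here is a hypothetical sketch in Python; the theme names and counts are invented for illustration, not taken from our evaluation:

```python
# Hypothetical sticker counts from a gallery walk: for each theme,
# (agree, disagree) totals. All names and numbers are invented.
votes = {
    "Belonging": (14, 2),
    "Family support": (11, 5),
    "College anxiety": (9, 7),
}

def agreement_rate(agree: int, disagree: int) -> float:
    """Fraction of stickers placed in the 'agree' section of a poster."""
    total = agree + disagree
    return agree / total if total else 0.0

for theme, (a, d) in votes.items():
    print(f"{theme}: {agreement_rate(a, d):.0%} agreement")
```

Themes with low agreement rates are the ones worth revisiting in the follow-up discussion with participants.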

Using the participants’ insights, we finalized the analysis, created a webpage, and developed a two-page report for the program staff. To learn more about our reporting process, see our next blog. Below is a diagram of the activities that we completed during the evaluation.

Figure 4: Activities conducted in the Upward Bound evaluation

The PhotoVoice activity provided us with rich insights that we would not have received from the survey that was previously used. The UB student participants enjoyed learning about and being a part of the evaluation process. The program staff valued the reports and insights the method provided. The exclusion of faces in the photographs enabled us to avoid having to obtain parental permission to release the photos for use in the evaluation and by UB staff. Having the students use cell phone cameras kept costs low. Overall, the evaluation activity went over well with the group, and we plan to continue using PhotoVoice in the future.

Blog: The Business of Evaluation: Liability Insurance

Posted on January 11, 2019 in Blog

Luka Partners LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Bottom line: you may need liability insurance, and you have to pay for it.

The proposal has been funded, you are the named evaluator, you have created a detailed scope of work, and the educational institution has sent you a Professional Services Contract to sign (and read!).

This contract will contain many provisions, one of which is having insurance. I remember the first time I read it: “The contractor shall maintain commercial general liability insurance against any claims that might be incurred in carrying out this agreement. Minimum coverage shall be $1,000,000.”

I thought, well, this probably doesn’t pertain to me, but then I read further: “Upon request, the contractor is required to provide a Certificate of Insurance.” That got my attention.

You might find what happened next interesting. I called the legal office at the community college. My first question was, “Can we just strike that from the contract?” No, she said; they were required by law to have it. Then she explained, “Mike, that sort of liability thing is mostly for contractors coming to do physical work on our campus, in case there was an injury, a brick falling on the head of a student, things like that.” She lowered her voice. “I can tell you we are never going to ask you to show that certificate to us.”

However, sometimes, you will be asked to maintain and provide, on request, professional liability insurance, also called errors and omissions insurance (E&O insurance) or indemnity insurance. This protects your business if you are sued for negligently performing your services, even if you haven’t made a mistake. (OK, I admit, this doesn’t seem likely in our business of evaluation.)

Then the moment of truth came. A decent-sized contract arrived from a major university I shall not name located in Tempe, Arizona, with a mascot that is a devil with a pitchfork. It said if you want a purchase order from us, sign the contract and attach your Certificate of Insurance.

I was between the devil and a hard place. Somewhat naively, I called my local insurance agent (i.e., the one for my home and car). He had actually never heard of professional liability insurance and promised to get back to me. He didn’t.

I turned to Google, the fount of all things. (Full disclosure, I am not advocating for a particular company—just telling you what I did.) I explored one company that came up high in the search results. Within about an hour, I was satisfied that it was what I needed, had a quote, and typed in my credit card number. In the next hour, I had my policy online and printed out the one-page Certificate of Insurance with the university’s name as “additional insured.” Done.

I would like to clarify one point. I did not choose general liability insurance because my operations pose no risk of physical damage to property or people. In the business of evaluation, that is not a risk.

I now have a $2 million professional liability insurance policy that costs $700 per year. As I add clients, if they require it, I can create a one-page certificate naming them as additional insured, at no extra cost.

Liability insurance, that’s one of the costs of doing business.

Blog: How Evaluators Can Use InformalScience.org

Posted on December 13, 2018 in Blog

Evaluation and Research Manager, Science Museum of Minnesota and Independent Evaluation Consultant

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m excited to talk to you about the Center for Advancement of Informal Science Education (CAISE) and the support they offer evaluators of informal science education (ISE) experiences. CAISE is a National Science Foundation (NSF) funded resource center for NSF’s Advancing Informal STEM Learning program. Through InformalScience.org, CAISE provides a wide range of resources valuable to the EvaluATE community.

Defining Informal Science Education

ISE is lifelong learning in science, technology, engineering, and math (STEM) that takes place across a multitude of designed settings and experiences outside of the formal classroom. The video below is a great introduction to the field.

Outcomes of ISE experiences have some similarities to those of formal education. However, ISE activities tend to focus less on content knowledge and more on other types of outcomes, such as interest, attitudes, engagement, skills, behavior, or identity. CAISE’s Evaluation and Measurement Task Force investigates the outcome areas of STEM identity, interest, and engagement to provide evaluators and experience designers with guidance on how to define and measure these outcomes. Check out the results of their work on the topic of STEM identity (results for interest and engagement are coming soon).

Resources You Can Use

InformalScience.org has a variety of resources that I think you’ll find useful for your evaluation practice.

  1. In the section “Design Evaluation,” you can learn more about evaluation in the ISE field through professional organizations, journals, and projects researching ISE evaluation. The “Evaluation Tools and Instruments” page in this section lists sites with tools for measuring outcomes of ISE projects, and there is also a section about reporting and dissemination. I provide a walk-through of CAISE’s evaluation pages in this blog post: How to Use InformalScience.org for Evaluation.
  2. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects has been extremely useful for me in introducing ISE evaluation to evaluators new to the field.
  3. In the “News & Views” section are several evaluation-related blogs, including a series on working with an institutional review board and another one on conducting culturally responsive evaluations.
  4. If you are not affiliated with an academic institution, you can access peer-reviewed articles in some of your favorite academic journals by becoming a member of InformalScience.org; joining is free! Once you’re logged in, select “Discover Research” in the menu bar and scroll down to “Access Peer-Reviewed Literature (EBSCO).” Journals of interest include Science Education and Cultural Studies of Science Education. If you are already a member of InformalScience.org, you can immediately begin searching the EBSCO Education Source database.

My favorite part of InformalScience.org is the repository of evaluation reports—1,020 reports and growing—which is the largest collection of reports in the evaluation field. Evaluators can use this rich collection to inform their practice and learn about a wide variety of designs, methods, and measures used in evaluating ISE projects. Even if you don’t evaluate ISE experiences, I encourage you to take a minute to search the reports and see what you can find. And if you conduct ISE evaluations, consider sharing your own reports on InformalScience.org.

Do you have any questions about CAISE or InformalScience.org? Contact Melissa Ballard, communications and community manager, at mballard@informalscience.org.

Blog: Evaluating Educational Programs for the Future STEM Workforce: STELAR Center Resources

Posted on November 8, 2018 in Blog

Project Associate, STELAR Center, Education Development Center, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello EvaluATE community! My name is Sarah MacGillivray, and I am a member of the STEM Learning and Research (STELAR) Center team, which supports the National Science Foundation Innovative Technology Experiences for Students and Teachers (NSF ITEST) program. Through ITEST, NSF funds the research and development of innovative models of engaging K-12 students in authentic STEM experiences. The goals of the program include building students’ interest and capacity to participate in STEM educational opportunities and developing the skills they will need for careers in STEM. While we target slightly different audiences than the Advanced Technological Education (ATE) program, our programs share the common goal of educating the future STEM workforce. To support this goal, I invite you to access the many evaluation resources available on our website.

The STELAR website houses an extensive set of resources collected from and used by the ITEST community. These resources include a database of nearly 150 research and evaluation instruments. Each entry features a description of the tool, a list of relevant disciplines and topics, target participants, and a link to ITEST projects that have used the instrument in their work. Whenever possible, PDFs and/or URLs to the original resource are included, though some tools require a fee or membership to the third-party site for access. The instruments can be accessed at http://stelar.edc.org/resources/instruments, and the database can be searched or filtered by keywords common to ATE and ITEST projects, e.g., “participant recruitment and retention,” “partnerships and collaboration,” “STEM career opportunities and workforce development,” “STEM content and standards,” and “teacher professional development and pedagogy,” among others.

In addition to our extensive instrument library, our website also features more than 400 publications, curricular materials, and videos. Each library can be browsed individually, or if you would like to view everything that we have on a topic, you can search all resources on the main resources page: http://stelar.edc.org/resources. We are continually adding to our resources and have recently improved our collection methods to allow projects to upload to the website directly. We expect this will result in even more frequent additions, and we encourage you to visit often or join our mailing list for updates.

STELAR also hosts a free, self-paced online course in which novice NSF proposal writers develop a full NSF proposal. While focused on ITEST, the course can be generalized to any NSF proposal. Two sessions focus on research and evaluation, breaking down the process for developing impactful evaluations. Participants learn what key elements to include in research designs, how to develop logic models, what is involved in deciding the evaluation’s design, and how to align the research design and evaluation sections. The content draws from expertise within the STELAR team and elements from NSF’s Common Guidelines for Education Research and Development. Since the course is self-paced, you can learn more and register to participate at any time: https://mailchi.mp/edc.org/invitation-itest-proposal-course-2

We hope that these resources are useful in your work and invite you to share suggestions and feedback with us at stelar@edc.org. As a member of the NSF Resource Centers network, we welcome opportunities to explore cross-program collaboration, working together to connect and promote our shared goals.

Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018 by Kelly Robertson and Lyssa Wilson Becho in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Examples of items we like to include on our cheat sheets are shown in Figures 1-3 and include the following:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with many data sources. The example shown in Figure 2 identifies who implements each instrument, when it will be implemented, its purpose, and the data source. Identifying in the cheat sheet who is responsible for each data collection activity helps ensure nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved. That way, if we need to update the plan, we can be sure we are working on the most recent version.

 

Figure 1. Cheat Sheet Example Page 1.

Figure 2. Cheat Sheet Example Page 2.

Figure 3. Cheat Sheet Example Page 3.

 

Blog: Four Personal Insights from 30 Years of Evaluation

Posted on August 30, 2018, in Blog

Haddix Community Chair of STEM Education, University of Nebraska Omaha

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As I complete my 30th year in evaluation, I feel blessed to have worked with so many great people. In preparation for this blog, I spent a reflective morning with some hot coffee, cereal, and wheat toast (that morning donut is no longer an option), and I looked over past evaluations. I thought about any personal insights that I might share, and I came up with four:

  1. Lessons Learned Are Key: I found it increasingly helpful over the years to think about a project evaluation as a shared learning journey, taken with the project leadership. In this context, we both want to learn things that we can share with others.
  2. Evaluator Independence from Project Implementation Is Critical: Nearly 20 years ago, a program officer read in a project annual report that I had conducted a workshop on problem-based learning for the project. In response, he kindly asked if I had “gone native,” slang for an evaluator getting so close to a project that independence is threatened. As I thought it over, I realized he had identified something I was becoming increasingly uncomfortable with: it was difficult to offer suggestions on implementing problem-based learning when I had delivered the training myself. That quick, thoughtful inquiry helped me navigate the situation and made me think about safeguarding my independence as an evaluator in the future.
  3. Be Sure to Update Plans after Funding: I always adjust a project evaluation plan after the award. Once funded, everyone really digs in, and opportunities typically surface to make the project and its evaluation even better. I have come to embrace that process. I now typically include an “evaluation plan update” phase before we initiate an evaluation, to ensure the plan is the best it can be when we implement it.
  4. Fidelity Is Important: It took me 10 years in evaluation to fully understand the “fidelity issue.” Loosely defined, fidelity is how faithfully program implementers follow the recipe of a program intervention. The first time I became concerned with fidelity, I was evaluating the implementation of 50 hours of curriculum. As I interviewed the teachers, it became clear that they were spending vastly different amounts of time on topics and activities. Like all good teachers, they had made the curriculum their own, but in the process the intended intervention largely disappeared, making it hard to learn much about it. I have since come to include a fidelity feedback process in projects, both to statistically adjust for that natural variation and to examine whether impacts differ with intervention fidelity.

In the last 30 years, program evaluation as a field has become increasingly useful and important. Like my days of eating donuts for breakfast, the days of “superficial” evaluation are increasingly gone. They have been replaced by evaluation strategies that are collaboratively planned, engaged, and flexible, which (like my wheat toast and cereal) get evaluators and project leadership further along the shared journey. Although I do periodically miss the donuts, I never miss the superficial evaluations. Overall, I am glad that I now have the cereal and toast, and that I conduct strong and collaborative program evaluations.