Blog: Completing a National Science Foundation Freedom of Information Act Request

Posted on July 15, 2019

Principal Consultant, The Rucks Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Completing a Form

You have probably heard of FOIA (Freedom of Information Act) requests, most likely in the context of journalism. Journalists often submit FOIA requests to obtain information that is not otherwise publicly available but is key to an investigative reporting project.

There may be times when you as an evaluator are evaluating or researching a topic and your work could be enhanced with information that requires submitting a FOIA request. For instance, while working as EvaluATE’s external evaluator, The Rucks Group needed to complete a FOIA request to learn how evaluation plans in ATE proposals have changed over time. We were also interested in documenting how EvaluATE may have influenced those changes. Toward that goal, we sought to review a random sample of ATE proposals funded between 2004 and 2017. However, despite much effort over an 18-month period, we still needed to obtain nearly three dozen proposals. We turned to a FOIA request primarily because these projects were older and we were unable to reach either the principal investigators or the appropriate person at the institution. So we submitted a FOIA request to the National Science Foundation (NSF) for the outstanding proposals.

For me, this was a new and, at first, mentally daunting task. Now, having gone through the process, I realize I need not have been nervous, because completing a FOIA request is actually quite simple. These are the elements you need to provide:

  1. Nature of request: We provided a detailed description of the proposals we needed and what we needed from each proposal. We also provided the rationale for the request, but I do not believe a rationale is required.
  2. Delivery method: Identify the method through which you prefer to receive the materials. We chose to receive digital copies via a secure digital system.
  3. Budget: Completing the task could require special fees, so you will need to indicate how much you are willing to pay for the request. Receiving paper copies through the US Postal Service can be more costly than receiving digital copies.

It may take a while for the FOIA request to be filled. We submitted the request in fall 2018 and received the materials in spring 2019. The delay may have been due in part to the 35-day government shutdown and a possibly lengthy process for Principal Investigator approval.

The NSF FOIA office was great to work with, and we appreciated staffers’ communications with us to keep us updated.

Because access is granted only for a particular time, pay attention to when you are notified via email that the materials have been released to you. In other words, do not let this notice sit in your inbox.

One caveat: When you submit a FOIA request, you may be encouraged to acquire the materials through other means. For example, submitting a FOIA request directly to colleges or state agencies may be an option for you.

While FOIA requests should be made judiciously, they are useful tools that, under the right circumstances, could enhance your evaluation efforts. They take time, but thanks to the law backing the public’s right to know, your FOIA requests will be honored.

To learn more, visit https://www.nsf.gov/policies/foia.jsp

Keywords: FOIA request, freedom of information act

Blog: An Evaluative Approach to Proposal Development*

Posted on June 27, 2019

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A student came into my office to ask me a question. Soon after she launched into her query, I stopped her and said I wasn’t the right person to help because she was asking about a statistical method that I wasn’t up-to-date on. She said, “Oh, you’re a qualitative person?” And I answered, “Not really.” She left looking puzzled. The exchange left me pondering the vexing question, “What am I?” (Now imagine these words echoing off my office walls in a spooky voice for a couple of minutes.) After a few uncomfortable moments, I proudly concluded, “I am a critical thinker!”  

Yes, evaluators are trained specialists with an arsenal of tools, strategies, and approaches for data collection, analysis, and reporting. But critical thinking—evaluative thinking—is really what drives good evaluation. In fact, the very definition of critical thinking—“the mental process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and evaluating information to reach an answer or conclusion”2—describes the evaluation process to a T. Applying your critical, evaluative thinking skills in developing your funding proposal will go a long way toward ensuring your submission is competitive.

Make sure all the pieces of your proposal fit together like a snug puzzle. Your proposal needs both a clear statement of the need for your project and a description of the intended outcomes—make sure these match up. If you struggle with the outcome measurement aspect of your evaluation plan, go back to the rationale for your project. If you can observe a need or problem in your context, you should be able to observe the improvements as well.

Be logical. Develop a logic model to portray how your project will translate its resources into outcomes that address a need in your context. Sometimes simply putting things in a graphic format can reveal shortcomings in a project’s logical foundation (like when important outcomes can’t be tracked back to planned activities). The narrative description of your project’s goals, objectives, deliverables, and activities should match the logic model.

Be skeptical. Project planning and logic model development typically happen from an optimistic point of view. (“If we build it, they will come.”) When creating your work plan, step back from time to time and ask yourself and your colleagues, What obstacles might we face? What could really mess things up? Where are the opportunities for failure? And perhaps most important, ask, Is this really the best solution to the need we’re trying to address? Identify your plan’s weaknesses and build in safeguards against those threats. I’m all for an optimistic outlook, but proposal reviewers won’t be wearing rose-colored glasses when they critique your proposal and compare it with others written by smart people with great ideas, just like you. Be your own worst critic and your proposal will be stronger for it.

Evaluative thinking doesn’t replace specialized training in evaluation. But even the best evaluator and most rigorous evaluation plan cannot compensate for a disheveled, poorly crafted project plan. Give your proposal a competitive edge by applying your critical thinking skills and infusing an evaluative perspective throughout your project description.

* This blog is a reprint of an article from an EvaluATE newsletter published in summer 2015.

2 dictionary.com

Blog: LinkedIn for Alumni Tracking

Posted on June 13, 2019
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Benjamin Reid, President of Impact Allies
Kevin Cooper, PI of RCNET and Dean of Advanced Technology at IRSC

Post-program outcomes for students are major indicators of success and primary metrics for measuring medium- and long-term outcomes and impacts. EvaluATE’s 2019 revised Advanced Technological Education (ATE) Annual Survey states, “ATE program stakeholders would like to know more about post-program outcomes for students.” It lists the types of data sought:

    • Job placement
    • Salary
    • Employer satisfaction
    • Pursuit of additional STEM education
    • Acquisition of industry certifications or licenses

The survey also asks for the sources used to collect this data, giving the following choices:

    • Institutional research office
    • Survey of former students
    • Local economic data
    • Personal outreach to former students
    • State longitudinal data systems
    • Other (describe)

This blog introduces an “Other” data source: LinkedIn Alumni Tool (LAT).

LAT is data rich and free, yet underutilized. Each alum’s professional information is readily available (i.e., there is no permissions process for the researcher) and personally updated. The information is also remarkably accurate, because open visibility and network effects help ensure honesty. These factors make LAT a great tool for quick health checks and an alternative to contacting each person to request the same information.

Even better, LinkedIn is a single tool that is useful for evaluators, principal investigators, instructors, and students. For example, a couple of years ago, Kevin, Principal Investigator for the Regional Center for Nuclear Education and Training (RCNET), and I (RCNET’s evaluator) realized that our respective work was leading us to use the same tool — LinkedIn — and that we should co-develop our strategies for connecting and communicating with students and alumni on this medium. Kevin uses it to help RCNET’s partner colleges communicate opportunities (jobs, internships, scholarships, continued education) and develop soft skills (professional presentation, networking, awareness of industry news). I use it to glean information about students’ educational and professional experiences leading up to and during their programs and to track their paths and outcomes after graduation. LinkedIn is also a user-centric tool for students that — rather than ceasing to be useful after graduation — actually becomes more useful.

When I conducted a longitudinal study of RCNET’s graduates across the country over the preceding eight years, I used LinkedIn for two purposes: triangulation and connecting with alumni via another channel, because after college many students change their email addresses and telephone numbers. More than 30 percent of the alumni who responded were reached via LinkedIn, as their contact information on file with the colleges had since changed.

Using LAT, I viewed their current and former employers, job positions, promotions, locations, skills, and further education (and the differences between what alumni reported in the survey and interviews and what was on their LinkedIn profiles were insignificant). That is, three of the five post-program outcomes of interest to ATE program stakeholders (plus a lot more) can be seen for many alumni via LinkedIn.
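
For readers who compile survey responses and LinkedIn profile details into spreadsheets, here is a minimal sketch of what that triangulation step can look like. This is not RCNET’s actual workflow; the file names and column names (alum_id, employer, job_title) are hypothetical.

    import pandas as pd

    # Hypothetical files: one row per alum, matched on a shared identifier.
    survey = pd.read_csv("alumni_survey.csv")        # columns: alum_id, employer, job_title
    linkedin = pd.read_csv("linkedin_profiles.csv")  # columns: alum_id, employer, job_title

    merged = survey.merge(linkedin, on="alum_id", suffixes=("_survey", "_linkedin"))

    # Flag disagreements between the two sources for manual review.
    for field in ["employer", "job_title"]:
        merged[field + "_match"] = (
            merged[field + "_survey"].str.strip().str.lower()
            == merged[field + "_linkedin"].str.strip().str.lower()
        )

    print(merged.filter(like="_match").mean())  # share of matching records per field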

Visit https://university.linkedin.com/higher-ed-professionals for short videos about how to use the LinkedIn Alumni Tool and many others. Many of the videos take an institutional perspective, but here is a tip on how to pinpoint program-specific students and alumni. Find your college’s page, click Alumni, and type your program’s name in the search bar. This will filter the results only to the people in your program. It’s that simple.

 

Blog: Grant Evaluation: What Every PI Should Know and Do*

Posted on June 3, 2019

Luka Partners LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A number of years ago, the typical Advanced Technological Education (ATE) Principal Investigator (PI) deemed evaluation a necessary evil. As a PI, I recall struggling even to find an evaluator who appeared to have reasonable credentials. I viewed evaluation as something you had to have in a proposal to get funded.

Having transitioned from the PI role to being an evaluator myself, I now appreciate how evaluation can add value to a project. I also know a lot more about how to find an evaluator and negotiate the terms of the evaluation contract.

Today, PIs typically identify evaluators through networking and sometimes use evaluator directories, such as the one maintained by EvaluATE at ATE Central. You can call colleagues and ask them to identify someone they trust and can recommend with confidence. If you don’t know anyone yet, start your networking by contacting an ATE center PI using the map at atecentral.net. Do this at least three months before the proposal submission date (i.e., now). When you approach an evaluator, ask for a résumé, references, and a work sample or two. Review their qualifications to be sure the proposal’s reviewers will perceive them as a credentialed evaluator.

Here is an important question many PIs ask: “Once you have identified the evaluator, can you expect them to write the evaluation section of your proposal for free?” The answer is (usually) yes. Just remember: Naming an individual in your proposal and engaging that person in proposal development reflects your commitment to enter into a contract with them if your proposal is funded. (An important caveat: Many community colleges’ procurement rules require a competition or bid process for evaluation services. That may affect your ability to commit to the evaluator should the proposal be funded. Have a frank discussion about this.)

Although there is a limit to what evaluators can or should do for free at the proposal stage, you should expect more than a boilerplate evaluation plan (provided you’ve allowed enough time for a thoughtful one). You want someone who will look at your goals and objectives and describe, in 1 to 1.25 pages, the approach for this project’s evaluation. This will serve you better than modifying their “standard language” yourself, if they offer it. Once the proposal is funded, their first deliverable will be the complete evaluation plan; you generally won’t need that level of detail at the proposal stage.

Now that you have a handshake agreement with your selected evaluator, make it clear you need the draft evaluation section by a certain deadline — say, a month before the proposal due date. You do not have to discuss detailed contractual terms prior to the proposal being funded, but you do have to establish the evaluation budget and the evaluator’s daily rate, for your budget and budget justification. Establishing this rate requires a frank discussion about fees.

Communication in this process is key. Check out EvaluATE’s webinar, “Getting Everyone on the Same Page,” for practical strategies for evaluator-stakeholder communication.

Once your proposal has been funded, you get to hammer out a real statement of work with your evaluator and set up a contract for the project. Then the real work begins.

*This blog is a reprint of an article from an EvaluATE newsletter published in summer 2012.

Keywords: evaluators, find evaluator, proposal, evaluation, evaluation proposal

Blog: Alumni Tracking: The Ultimate Source for Evaluating Completer Outcomes

Posted on May 15, 2019
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Faye R. Jones, Senior Research Associate, Florida State University
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State University

When examining student programs, evaluators can use many student outcomes (e.g., enrollments, completions, and completion rates) as appropriate measures of success. However, to properly assess whether programs and interventions are having their intended impact, evaluators should consider performance metrics that capture data on individuals after they have completed degree programs or certifications, also known as “completer” outcomes.

For example, if a program’s goal is to increase the number of graduating STEM majors, then whether students can get STEM jobs after completing the program is very important to know. Similarly, if the purpose of offering high school students professional CTE certifications is to help them get jobs after graduation, it’s important to know if this indeed happened. Completer outcomes allow evaluators to assess whether interventions are having their intended effect, such as increasing the number of minorities entering academia or attracting more women to STEM professions. Programs aren’t just effective when participants have successfully entered and completed them; they are effective when graduates have a broad impact on society.

Tracking of completer outcomes is typical, as many college and university leaders are held accountable for student performance while students are enrolled and after students graduate. Educational policymakers are asking leaders to look beyond completion to outcomes that represent actual success and impact. As a result, alumni tracking has become an important tool in determining the success of interventions and programs. Unfortunately, while the solution sounds simple, the implementation is not.

Tracking alumni (i.e., past program completers) can be an enormous undertaking, and many institutions do not have a dedicated person to do the job. Alumni also move, switch jobs, and change their names. Some experience survey fatigue after several survey requests. The following are practical tips from an article we co-authored explaining how we tracked alumni data for a five-year project that aimed to recruit, retain, and employ computing and technology majors (Jones, Mardis, McClure, & Randeree, 2017):

    • Recommend to principal investigators (PIs) that they extend outcome evaluations to include completer outcomes in an effort to capture graduation and alumni data, and downstream program impact.
    • Baseline alumni tracking details should be obtained prior to student completion, but not captured again until six months to one year after graduation, to provide ample transition time for the graduate.
    • Programs with a systematic plan for capturing outcomes are likely to have higher alumni response rates.
    • Surveys are a great tool for obtaining alumni tracking information, while social media (e.g., LinkedIn) can be used to stay in contact with students for survey and interview requests. Suggest that PIs implement a social media strategy while students are participating in the program, so that the contact need only be continued after completion.
    • Data points might include student employment status, advanced educational opportunities (e.g., graduate school enrollment), position title, geographic location, and salary (one way to structure these data points is sketched after this list). For richer data, we recommend adding a qualitative component to the survey (or selecting a sample of alumni to participate in interviews).
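
To make the data points above concrete, here is a minimal sketch of one way to structure an alumni tracking record. It is not the schema from our study; the field names and example values are illustrative only.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class AlumniRecord:
        """One alum's completer-outcome data; all fields are illustrative."""
        alum_id: str
        completion_term: str                        # e.g., "Spring 2017"
        employment_status: Optional[str] = None     # employed, seeking, continuing education
        position_title: Optional[str] = None
        geographic_location: Optional[str] = None
        salary_range: Optional[str] = None          # ranges are often easier to collect than exact figures
        further_education: List[str] = field(default_factory=list)  # e.g., graduate programs
        interview_notes: Optional[str] = None       # qualitative component

    record = AlumniRecord(
        alum_id="A-1042",
        completion_term="Spring 2017",
        employment_status="employed",
        position_title="Network Support Technician",
    )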

The article also includes a sample questionnaire in the reference section.

A comprehensive review of completer outcomes requires that evaluators examine both the alumni tracking procedures and analysis of the resulting data.

Once evaluators have helped PIs implement a sound alumni tracking strategy, institutions should advance to alumni backtracking! We will provide more information on that topic in a future post.

* This work was partially funded by NSF ATE 1304382. For more details, go to https://technicianpathways.cci.fsu.edu/

References:

Jones, F. R., Mardis, M. A., McClure, C. M., & Randeree, E. (2017). Alumni tracking: Promising practices for collecting, analyzing, and reporting employment data. Journal of Higher Education Management, 32(1), 167–185. https://mardis.cci.fsu.edu/01.RefereedJournalArticles/1.9jonesmardisetal.pdf

Blog: A Call to Action: Advancing Technician Education through Evidence-Based Decision-Making

Posted on May 1, 2019
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Faye R. Jones, Senior Research Associate, Florida State University
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State University


Evaluators contribute to developing the Advanced Technological Education (ATE) community’s awareness and understanding of theories, concepts, and practices that can advance technician education at the discrete project level as well as at the ATE program level. Regardless of focus, project teams explore, develop, implement, and test interventions designed to lead to successful outcomes in line with ATE’s goals. At the program level, all ATE community members, including program officers, benefit from the reviewing and compiling of project outcomes to build an evidence base to better prepare the technical workforce.

Evidence-based decision-making is one way to ensure that project outcomes add up to quality, systematic program outcomes. As indicated in Figure 1, good decision-making depends on three domains of evidence within an environmental and organizational context: contextual; experiential (i.e., resources, including practitioner expertise); and the best available research evidence (Satterfield et al., 2009).

Figure 1. Domains that influence evidence-based decision-making (Satterfield et al., 2009)

As Figure 1 suggests, at the project level, as National Science Foundation (NSF) ATE principal investigators (PIs) work, evaluators can assist PIs in making project design and implementation decisions based on the best available research evidence, considering participant, environmental, and organizational dimensions. For example, researchers and evaluators work together to compile the best research evidence about specific populations (e.g., underrepresented minorities) in which interventions can thrive. Then, they establish mutually beneficial researcher-practitioner partnerships to make decisions based on their practical expertise and current experiences in the field.

At the NSF ATE program level, program officers often review and qualitatively categorize project outcomes provided by project teams, including their evaluators, as shown in Figure 2.

 

Figure 2. Quality of Evidence Pyramid (Paynter, 2009)

As Figure 2 suggests, aggregated project outcomes tell a story about what the ATE community has learned and needs to know about advancing technician education. At the highest levels of evidence, program officers strive to obtain strong evidence that can lead to best practice guidelines and manuals grounded by quantitative studies and trials, and enhanced by rich and in-depth qualitative studies and clinical experiences. Evaluators can meet PIs’ and program officers’ evidence needs with project-level formative and summative feedback (such as outcomes and impact evaluations) and program-level data, such as outcome estimates from multiple studies (i.e., meta-analyses of project outcome studies). Through these complementary sources of evidence, evaluators facilitate the sharing of the most promising interventions and best practices.
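
To make the program-level point concrete, here is a minimal sketch of a fixed-effect (inverse-variance) pooled estimate, one common way to combine outcome estimates from multiple studies. The effect sizes and standard errors are hypothetical; a real synthesis of ATE project outcomes would also require comparable measures and checks for heterogeneity across projects.

    import math

    # Hypothetical (effect size, standard error) pairs from individual project studies.
    studies = [(0.32, 0.10), (0.18, 0.08), (0.45, 0.15)]

    weights = [1 / se ** 2 for _, se in studies]      # inverse-variance weights
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    print(f"Pooled effect: {pooled:.2f} "
          f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")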

In this call to action, we charge PIs and evaluators with working closely together to ensure that project outcomes are clearly identified and supported by evidence that benefits the ATE community’s knowledge base. Evaluators’ roles include guiding leaders to 1) identify new or promising strategies for making evidence-based decisions; 2) use or transform current data for making informed decisions; and when needed, 3) document how assessment and evaluation strengthen evidence gathering and decision-making.

References:

Paynter, R. A. (2009). Evidence-based research in the applied social sciences. Reference Services Review, 37(4), 435–450. doi:10.1108/00907320911007038

Satterfield, J., Spring, B., Brownson, R., Mullen, E., Newhouse, R., Walker, B., & Whitlock, E. (2009). Toward a transdisciplinary model of evidence-based practice. The Milbank Quarterly, 86, 368–390.

Blog: Untangling the Story When You’re Part of the Complexity

Posted on April 16, 2019

Evaluator, SageFox Consulting Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

I am wrestling with a wicked evaluation problem: How do I balance evaluation, research, and technical assistance work when they are so interconnected? I will discuss strategies for managing different aspects of work and the implications of evaluating something that you are simultaneously trying to change.

Background

In 2017, the National Science Foundation solicited proposals that called for researchers and practitioners to partner in conducting research that directly informs problems of practice through the Research Practice Partnership (RPP) model. I work on one project funded under this solicitation: Using a Researcher-Practitioner Partnership Approach to Develop a Shared Evaluation and Research Agenda for Computer Science for All (RPPforCS). RPPforCS aims to learn how projects supported under this funding are conducting research and improving practice. It also brings a community of researchers and evaluators across funded partnerships together for collective capacity building.

The Challenge

The RPPforCS work requires a dynamic approach to evaluation, and it challenges conventional boundaries between research, evaluation, and technical assistance. I am both part of the evaluation team for individual projects and part of a program-wide research project that aims to understand how projects are using an RPP model to meet their computer science and equity goals. Given the novelty of the program and research approach, the RPPforCS team also supports these projects with targeted technical assistance to improve their ability to use an RPP model (ideas that typically come out of what we’re learning across projects).

Examples in Practice

The RPPforCS team examines changes through a review of project proposals and annual reports, yearly interviews with a member of each project, and an annual community survey. Using these data collection mechanisms, we ask about the impact of the technical assistance on the functioning of the project. Being able to rigorously document how the technical assistance aspect of our research project influences their work allows us to track change introduced by the RPPforCS team separately from change stemming from the individual project.

We use the technical assistance (e.g., tools, community meetings, webinars) to help projects further their goals and as research and evaluation data collection opportunities to understand partnership dynamics. The technical assistance tools are all shared through Google Suite, allowing us to see how the teams engage with them. Teams are also able to use these tools to improve their partnership practice (e.g., using our Health Assessment Tool to establish shared goals with partners). Structured table discussions at our community meetings allow us to understand more about specific elements of partnership that are demonstrated within a given project. We share all of our findings with the community on a frequent basis to foreground the research effort, while still providing necessary support to individual projects. 

Hot Tips

  • Rigorous documentation: The best way I have found to account for our external impact is rigorous documentation. This may sound like a basic approach to evaluation, but it is the easiest way to track change over time and to track change that you have introduced (as opposed to organic change coming from within the project). A minimal example of such a log appears after these tips.
  • Multi-use activities: Turn your technical assistance into a data collection opportunity. It both builds capacity within a project and allows you to access information for your own evaluation and research goals.
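
As a minimal example of the documentation log mentioned above (not the RPPforCS team’s actual instrument; the fields and sample entry are illustrative), the key design choice is to record, for every observed change, whether it was introduced by the research/technical assistance team or arose organically within the project:

    import csv
    import os
    from datetime import date

    LOG_FIELDS = ["date", "project", "change_observed", "source", "evidence"]

    def log_change(path, project, change_observed, source, evidence):
        """Append one documented change; source is 'TA-introduced' or 'organic'."""
        is_new = not os.path.exists(path) or os.path.getsize(path) == 0
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
            if is_new:
                writer.writeheader()
            writer.writerow({
                "date": date.today().isoformat(),
                "project": project,
                "change_observed": change_observed,
                "source": source,
                "evidence": evidence,
            })

    log_change("change_log.csv", "Project A",
               "Partnership adopted a shared goals document",
               "TA-introduced", "Notes from Health Assessment Tool session")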

Blog: Increase Online Survey Response Rates with These Four Tips

Posted on April 3, 2019
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Molly Henschel, Researcher and Evaluator, Magnolia Consulting, LLC
Elizabeth Peery, Researcher and Evaluator, Magnolia Consulting, LLC
Anne Cosby, Researcher and Evaluation Associate, Magnolia Consulting, LLC

 

Greetings! We are Molly Henschel, Beth Peery, and Anne Cosby with Magnolia Consulting. We often use online surveys in our Advanced Technological Education (ATE) projects. Online surveys are an efficient data collection method for answering evaluation questions and providing valuable information to ATE project teams. However, low response rates threaten the credibility and usefulness of survey findings. At Magnolia Consulting, we use proven strategies to increase response rates, which, in turn, helps ensure survey results are representative of the population. We offer the following four strategies to promote high response rates:

1. Ensure the survey is easy to complete. Keep certain factors in mind as you create your survey. For example, is the survey clear and easy to read? Is it free of jargon? Is it concise? You do not want respondents to lose interest in completing a survey because it is difficult to read or too lengthy. To help respondents finish the survey, consider:

      • collaborating with the ATE project team to develop survey questions that are straightforward, clear, and relevant;
      • distributing survey questions across several pages to decrease cognitive load and minimize the need for scrolling;
      • including a progress bar; and
      • ensuring your survey is compatible with both computers and mobile devices.

Once the survey is finalized, coordinate with program staff to send the survey during ATE-related events, when the respondents have protected time to complete the survey.

2. Send a prenotification. Prior to sending the online survey, send a prenotification to all respondents, informing them of the upcoming survey. A prenotification establishes survey trustworthiness, boosts survey anticipation, and reduces the possibility that a potential respondent will disregard the survey. The prenotification can be sent by email, but research shows that using a mixed-mode strategy (i.e., email and postcard) can have positive effects on response rates (Dillman, Smyth, & Christian, 2014; Kaplowitz, Lupi, Couper, & Thorp, 2012). We also found that asking the ATE principal investigator (PI) or co-investigators (co-PIs) to send the prenotification helps yield higher response rates.

3. Use an engaging and informative survey invitation. The initial survey invitation is an opportunity to grab your respondents’ attention. First, use a short and engaging subject line that will encourage respondents to open your email. In addition, follow best practices to ensure your email is not diverted into a recipient’s spam folder. Next, make sure the body of your email provides respondents with relevant survey information, including:

      • a clear survey purpose;
      • a statement on the importance of their participation;
      • realistic survey completion time;
      • a deadline for survey completion;
      • information on any stipends or incentives (if your budget allows);
      • a statement about survey confidentiality;
      • a show of appreciation for time and effort; and
      • contact information for any questions about the survey.

4. Follow up with nonresponders. Track survey response rates on a regular basis (a simple tracking sketch follows this list). To address low response rates:

      • continue to follow up with nonresponders, sending at least two reminders;
      • investigate potential reasons the survey has not been completed and offer any assistance (e.g., emailing a paper copy) to make survey completion less burdensome;
      • contact nonresponders via a different mode (e.g., phone); or
      • enlist the help of the ATE PI and co-PI to personally follow up with nonresponders. In our experience, the relationship between the ATE PI or co-PI and the respondents can be helpful in collecting those final surveys.
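
As promised above, here is a minimal sketch of how response-rate tracking and a reminder list might be pulled together when invitations and responses live in simple spreadsheets. The file names and column names are hypothetical, and many survey platforms report this for you automatically.

    import pandas as pd

    # Hypothetical files: everyone invited to the survey, and everyone who responded.
    invited = pd.read_csv("survey_invitees.csv")     # columns: name, email
    responses = pd.read_csv("survey_responses.csv")  # columns: email, submitted_at

    response_rate = responses["email"].nunique() / invited["email"].nunique()
    print(f"Response rate: {response_rate:.0%}")

    # Nonresponders to target with reminders, another contact mode, or PI/co-PI outreach.
    nonresponders = invited[~invited["email"].isin(responses["email"])]
    print(nonresponders[["name", "email"]])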

 

Resources:

Nulty, D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314.

References:

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, mail, and mixed-mode surveys: The tailored design method (4th ed.). New York: Wiley.

Kaplowitz, M. D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The effect of invitation design on web survey response rates. Social Science Computer Review, 30, 339–349.

Blog: Repackaging Evaluation Reports for Maximum Impact

Posted on March 20, 2019
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Emma Perk, Managing Director, EvaluATE
Lyssa Wilson Becho, Research Manager, EvaluATE

Evaluation reports take a lot of time to produce and are packed full of valuable information. To get the most out of your reports, think about “repackaging” your traditional report into smaller pieces.

Repackaging involves breaking up a long-form evaluation report into digestible pieces to target different audiences and their specific information needs. The goals of repackaging are to increase stakeholders’ engagement with evaluation findings, increase their understanding, and expand their use.

Let’s think about how we communicate data to various readers. Bill Shander from Beehive Media created the 4×4 Model for Knowledge Content, which illustrates different levels at which data can be communicated. We have adapted this model for use within the evaluation field. As you can see below, there are four levels, and each has a different type of deliverable associated with it. We are going to walk through these four levels and how an evaluation report can be broken up into digestible pieces for targeted audiences.

Figure 1. The four levels of delivering evaluative findings (image adapted from Shander’s 4×4 Model for Knowledge Content).

The first level, the Water Cooler, is for quick, easily digestible data pieces. The idea is to use a single piece of data from your report to intrigue viewers into wanting to learn more. Examples include a newspaper headline, a postcard, or a social media post. In a social media post, you should include a graphic (photo or graph), a catchy title, and a link to the next communication level’s document. This information should be succinct and exciting. Use this level to catch the attention of readers who might not otherwise be invested in your project.

Figure 2. Example of social media post at the Water Cooler level.

The Café level allows you to highlight three to five key pieces of data that you really want to share. A Café level deliverable is great for busy stakeholders who need to know detailed information but don’t have time to read a full report. Examples include one-page reports, a short PowerPoint deck, and short briefs. Make sure to include a link to your full evaluation report to encourage the reader to move on to the next communication level.

Figure 3. One-page report at the Café level.

The Research Library is the level at which we find the traditional evaluation report. Deliverables at this level require the reader to have an interest in the topic and to spend a substantial amount of time to digest the information.

Figure 4. Full evaluation report at the Research Library level.

The Lab is the most intensive and involved level of data communication. Here, readers have a chance to interact with the data. This level goes beyond a static report and allows stakeholders to personalize the data for their interests. For those who have the knowledge and expertise in creating dashboards and interactive data, providing data at the Lab level is a great way to engage with your audience and allow the reader to manipulate the data to their needs.

Figure 5. Data dashboard example from Tableau Public Gallery.

We hope this blog has sparked some interest in the different ways an evaluation report can be repackaged. Different audiences have different information needs and different amounts of time to spend reviewing reports. We encourage both project staff and evaluators to consider who their intended audience is and what would be the best level to communicate their findings. Then use these ideas to create content specific for that audience.

Blog: Evaluation Reporting with Adobe Spark

Posted on March 8, 2019
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Ouen Hunter, Doctoral Student, The Evaluation Center
Emma Perk, Managing Director, EvaluATE
Michael Harnar, Assistant Professor, Interdisciplinary Ph.D. in Evaluation, The Evaluation Center

This blog was originally published on AEA365 on December 28, 2018: https://aea365.org/blog/evaluation-reporting-with-adobe-spark-by-ouen-hunter-and-emma-perk/

Hi! We are Ouen Hunter (student at the Interdisciplinary Ph.D. in Evaluation Program, IDPE), Emma Perk (project manager at The Evaluation Center), and Michael Harnar (assistant professor at the IDPE) from Western Michigan University. Recently, we used PhotoVoice in our evaluation of an Upward Bound program and wanted to share how we reported our PhotoVoice findings using the cost-free version of Adobe Spark.

Adobe Spark offers templates to make webpages, videos, flyers, reports, and more. It also hosts your product online for free. While there is a paid version of Adobe Spark, everything we discuss in this blog can be done using the free version. The software is very straightforward, and we were able to get our report online within an hour. We chose to create a webpage to increase accessibility for a large audience.

The free version of Adobe Spark has a lot of features, but it can be difficult to customize the layout. Therefore, we created our layouts in PowerPoint then uploaded them to Spark. This enabled us to customize the font, alignment, and illustrations. Follow these instructions to create a similar webpage:

  • Create a slide deck in PowerPoint. Use one slide per photo and text from the participant. The first slide serves as a template for the rest.
  • After creating the slides, you have a few options for saving the photos for upload.
    1. Use a snipping tool (Windows’ snipping or Mac’s screenshot function) to take a picture of each slide and save it as a PNG file.
    2. Save each as a picture in PowerPoint by selecting the image and the speech bubble, right clicking, and saving as a picture.
    3. Export as a PNG in PowerPoint. Go to File > Export then select PNG under the File Format drop-down menu. This will save all the slides as individual image files.
  • Create a webpage in Adobe Spark.
    1. Once on the site, you will be prompted to start a new account (unless you’re a returning user). This will allow your projects to be stored and give you access to create in the software.
    2. You have the option to change the theme to match your program or branding by selecting the Theme button.
    3. Once you have selected your theme, you are ready to add a title and upload the photos you created from PowerPoint. To upload the photos, press the plus icon.
    4. Then select Photo.
    5. Select Upload Photo. Add all photos and confirm the arrangement.
    6. After finalizing, remember to post the page online and click Share to give out the link.

Though we used Adobe Spark to share our PhotoVoice results, there are many applications for using Spark. We encourage you to check out Adobe Spark to see how you can use it to share your evaluation results.

Hot Tips and Features:

  • Adobe Spark adjusts automatically for handheld devices.
  • Adobe Spark also automatically adjusts lines for you. No need to use a virtual ruler.
  • There are themes available with the free subscription, making it easy to design the webpage.
  • Select multiple photos during your upload. Adobe Spark will automatically separate each file for you.

*Disclaimer: Adobe Spark didn’t pay us anything for this blog. We wanted to share this amazing find with the evaluation community!