Archive: program evaluation

Blog: Utilizing Social Media Analytics to Demonstrate Program Impact

Posted on November 26, 2019, in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
LeAnn Brosius, Evaluator
Kansas State University
Office of Educational Innovation and Evaluation
Adam Cless, Evaluation Assistant
Kansas State University
Office of Educational Innovation and Evaluation

The use of social media within programs has grown rapidly over the past decade and has become a popular way for programs to reach and engage their stakeholders and to inform outreach efforts. Consequently, organizations are using analytics from social media platforms as one way to measure impact. These data can help programs understand how program objectives, progress, and outcomes are disseminated and used (e.g., through discussions, viewing of content, following program social media pages). Social media allows programs to:

  • Reach broad and diverse audiences
  • Promote open communication and collaboration
  • Gain instantaneous feedback
  • Predict future impacts – “Forecasting based on social media has already proven surprisingly effective in diverse areas including predicting stock prices, election results and movie box-office returns.” (Priem, 2014)

Programs, as well as funding agencies, are now recognizing social media as a way to measure a program’s impact across social networks and dissemination efforts, increase visibility of a program, demonstrate broader impacts on audiences, and complement other impact measures. Nevertheless, the question remains…

Should a social media analysis be conducted?

Knowing whether and when to conduct a social media analysis is an important consideration. Just because a social media analysis can be conducted doesn’t mean one should be. Therefore, before beginning one, it is important to take the time to determine a few things:

  1. What specific goals will be addressed through social media?
  2. How will these goals be measured using social media?
  3. Which platforms will be most valuable/useful in reaching the targeted audience?

So, why is an initial assessment important before conducting a social media analysis?

Metrics available for social media are extensive, and not all are useful for determining the impact of a program’s social media efforts. As Sterne (2010) explains, social media metrics need meaning: “measuring for measurement’s sake is a fool’s errand”; “without context, your measurements are meaningless”; and “without specific business goals, your metrics are meaningless.” Therefore, it is important to consider specific program objectives and which metrics (key performance indicators [KPIs]) are central to assessing the progress and success of those objectives.

It is also worth recognizing that popular social media platforms are always changing, that categorizing the various platforms is difficult, and that metrics vary from one platform to another.

To give a program’s social media analysis more meaning, it may be helpful to use a framework that provides a structure for aligning social media metrics to the program’s objectives and helps demonstrate progress and success toward those objectives.

One framework in the literature, developed by Neiger et al. (2012), was used to classify and measure various social media metrics and platforms used in health care. This framework looked at social media’s potential to engage, communicate, and disseminate critical information to stakeholders, as well as to promote programs and expand audience reach. In it, Neiger et al. presented four KPI categories (insight, exposure, reach, and engagement) for analyzing the social media metrics used in health promotion, aligned to 39 metrics. This framework is a great place to start, but keep in mind that it may not be an exact fit with a program’s objectives. Below is an example of aligning the Neiger et al. framework to a different program. The table shows the social media metrics analyzed for the program, the KPI each metric measured, and the alignment of the metrics and KPIs to the program’s outreach goals. In this example, the program’s goals aligned to only three of the four KPIs from the Neiger et al. framework. In addition, different metrics and platforms that were more representative of this program’s social media efforts were evaluated. For example, this program used phone apps to disseminate program information, so app use was added as a metric.
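
To make this kind of alignment concrete, here is a minimal sketch in Python. It is not the authors’ actual analysis: the metric names and counts are invented for illustration, and it mirrors the example above by rolling hypothetical platform metrics up into three of Neiger et al.’s four KPI categories.

```python
# Minimal sketch (not the authors' analysis): rolling hypothetical platform
# metrics up into Neiger et al. (2012) KPI categories. The example program
# aligned to three of the four categories, so "insight" is omitted here.

KPI_ALIGNMENT = {
    "exposure":   ["impressions", "video_views", "app_downloads"],
    "reach":      ["followers", "page_likes", "subscribers"],
    "engagement": ["shares", "comments", "retweets", "link_clicks"],
}

# Hypothetical metric totals compiled across a program's platforms.
collected = {
    "impressions": 48210, "video_views": 3150, "app_downloads": 620,
    "followers": 1840, "page_likes": 1210, "subscribers": 430,
    "shares": 310, "comments": 95, "retweets": 270, "link_clicks": 880,
}

# Roll metrics up by KPI so results are reported against outreach goals
# rather than as raw platform counts.
for kpi, metrics in KPI_ALIGNMENT.items():
    total = sum(collected.get(m, 0) for m in metrics)
    print(f"{kpi:>10}: {total:>6}  ({', '.join(metrics)})")
```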

What are effective ways to share the results from a social media analysis?

After compiling and cleaning data from the social media platforms a program uses, it is important to consider the program’s goals and audience in order to format a report and/or visual that will best communicate the results. The results from the program example above were shared using a visual that illustrated the program’s progress toward its dissemination efforts and the metric evidence from each social media platform it used to reach its audience (a data-shaping sketch follows the list below). The visual highlighted the following information from the social media analysis:

  • The extent to which the program’s content was viewed
  • Evidence of the program’s dissemination efforts
  • Stakeholders’ engagement with and preferences for program content posted on various social media platforms
  • Potential areas of focus for the program’s future social media efforts
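
As a rough illustration of the data-shaping step behind such a visual, the sketch below aggregates cleaned, per-platform records into a summary file a charting tool could consume. The platforms, metrics, and values are hypothetical, and CSV output is just one plausible format.

```python
# Minimal sketch: shaping cleaned per-platform records into a summary file
# a charting tool could consume. Platforms, metrics, and values are
# hypothetical, and the CSV output is just one plausible format.
import csv

rows = [
    {"platform": "Facebook", "kpi": "engagement", "metric": "shares",      "value": 310},
    {"platform": "Facebook", "kpi": "exposure",   "metric": "impressions", "value": 48210},
    {"platform": "Twitter",  "kpi": "engagement", "metric": "retweets",    "value": 270},
    {"platform": "YouTube",  "kpi": "exposure",   "metric": "video_views", "value": 3150},
]

# Aggregate by platform and KPI so the visual can show, per platform,
# how much metric evidence supports each outreach goal.
summary = {}
for r in rows:
    key = (r["platform"], r["kpi"])
    summary[key] = summary.get(key, 0) + r["value"]

with open("social_media_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["platform", "kpi", "total"])
    for (platform, kpi), total in sorted(summary.items()):
        writer.writerow([platform, kpi, total])
```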


What are some of the limitations of a social media analysis?

The usefulness of social media as a means to measure program impacts can be restricted by several factors. It is important to be mindful of these limitations and present them alongside findings from the analysis. A few limiting aspects of social media analytics to keep in mind:

  • They do not define program impact
  • They may not measure program impact
  • There are many different platforms
  • There are a vast number of metrics (with multiple definitions between platforms)
  • The audience is mostly invisible/not traceable

What are the next steps for evaluators using social media analytics to demonstrate program impacts?

  • Develop a framework aligned to the program’s intended goals
  • Determine the social media platforms and metrics that most accurately demonstrate progress toward the program’s goals and reach target audiences
  • Establish growth rates for each metric to demonstrate progress and impact (see the sketch after this list)
  • Involve key stakeholders throughout the process
  • Continue to revise and revisit regularly
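
For the growth-rate step, here is a minimal sketch using hypothetical quarterly totals; the metrics, values, and reporting periods are assumptions, not data from the program above.

```python
# Minimal sketch: period-over-period growth rates per metric, using
# hypothetical quarterly totals (not data from the program above).

quarterly = {
    "followers":   [1500, 1640, 1840],
    "video_views": [2400, 2900, 3150],
    "shares":      [220, 260, 310],
}

def growth_rate(prev: float, curr: float) -> float:
    """Percent change from one reporting period to the next."""
    return (curr - prev) / prev * 100

for metric, totals in quarterly.items():
    rates = [growth_rate(a, b) for a, b in zip(totals, totals[1:])]
    print(f"{metric}: " + ", ".join(f"{r:+.1f}%" for r in rates))
```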

Editor’s Note: This blog is based on a presentation the authors gave at the 2018 American Evaluation Association (AEA) Annual Conference in Cleveland, OH.

References

Priem, J. (2014). Altmetrics. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional indicators of scholarly impact (pp. 263-288). Cambridge, MA: The MIT Press.

Sterne, J. (2010). Social media metrics: How to measure and optimize your marketing investment. Hoboken, NJ: John Wiley & Sons, Inc.

Neiger, B. L., Thackeray, R., Van Wagenen, S. A., Hanson, C. L., West, J. H., Barnes, M. D., & Fagen, M. C. (2012). Use of social media in health promotion: Purposes, key performance indicators, and evaluation metrics. Health Promotion Practice, 13(2), 159-164.

Evaluation Process

Posted on March 14, 2018, in Resources

Highlights the four main steps of an ATE evaluation and provides detailed activities for each step. This example is an excerpt from the Evaluation Basics for Non-evaluators webinar. Access slides, recording, handout, and additional resources at bit.ly/mar18-webinar.

File: Click Here
Type: Doc
Category: Getting Started
Author(s): Emma Perk, Lori Wingate

Blog: How Real-time Evaluation Can Increase the Utility of Evaluation Findings

Posted on July 21, 2016, in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Elizabeth Peery
Lead Research Associate
Magnolia Consulting, LLC

Stephanie B. Wilkerson
President
Magnolia Consulting, LLC

Evaluations are most useful when evaluators make relevant findings available to project partners at key decision-making moments. One approach to increasing the utility of evaluation findings is to collect real-time data and provide immediate feedback at crucial moments, fostering progress monitoring during service delivery. Based on our experience evaluating multiple five-day professional learning institutes for an ATE project, we discovered the benefits of providing real-time evaluation feedback and the vital elements that contributed to the success of this approach.

What did we do?

With project partners we co-developed online daily surveys that aligned with the learning objectives for each day’s training session. Daily surveys measured the effectiveness and appropriateness of each session’s instructional delivery, exercises and hands-on activities, materials and resources, content delivery format, and session length. Participants also rated their level of understanding of the session content and preparedness to use the information. They could submit questions, offer suggestions for improvement, and share what they liked most and least. Based on the survey data that evaluators provided to project partners after each session, partners could monitor what was and wasn’t working and identify where participants needed reinforcement, clarification, or re-teaching. Project partners could make immediate changes and modifications to the remaining training sessions to address any identified issues or shortcomings before participants completed the training.
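
As a minimal sketch of what that same-night turnaround might look like (the survey items, 1-5 scale, and flagging threshold here are assumptions, not the project’s actual instrument), a short script can summarize each day’s survey export and flag content that may need re-teaching:

```python
# Minimal sketch of the same-night turnaround: summarize one day's survey
# export and flag items that may need re-teaching. The survey items,
# 1-5 scale, and threshold are assumptions, not the project's instrument.
from statistics import mean

responses = [  # hypothetical participant ratings
    {"understanding": 4, "preparedness": 3, "session_length": 4},
    {"understanding": 2, "preparedness": 2, "session_length": 4},
    {"understanding": 5, "preparedness": 4, "session_length": 3},
]

REVISIT_THRESHOLD = 3.0  # below this mean, consider clarifying or re-teaching

for item in ("understanding", "preparedness", "session_length"):
    avg = mean(r[item] for r in responses)
    flag = "  <-- revisit tomorrow" if avg < REVISIT_THRESHOLD else ""
    print(f"{item}: mean {avg:.2f} (n={len(responses)}){flag}")
```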

Why was it successful?

Through the process, we recognized several elements that made the daily surveys useful in immediately improving the professional learning sessions:

  • Invested partners: The project partners recognized the value of the immediate feedback and its potential to greatly improve the trainings. Thus, they made a concentrated effort to use the information to make mid-training modifications.
  • Evaluator availability: Evaluators had to be available to pull the data after hours from the online survey software program and deliver it to project partners immediately.
  • Survey length and consistency: The daily surveys took less than 10 minutes to complete. While tailored to the content of each day, the surveys had a consistent question format that made them easier to complete.
  • Online format: The online format allowed for a streamlined and user-friendly survey. Additionally, it made retrieving a usable data summary much easier and timelier for the evaluators.
  • Time for administration: Time was carved out of the training sessions to allow for the surveys to be administered. This resulted in higher response rates and more predictable timing of data collection.

If real-time evaluation data will provide useful information for making improvements or decisions about professional learning trainings, it is worthwhile to seek resources and opportunities to collect and report these data in a timely manner.

Here are some additional resources regarding real-time evaluation:

Blog: Articulating Intended Outcomes Using Logic Models: The Roles Evaluators Play

Posted on July 6, 2016, in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Stephanie B. Wilkerson
President
Magnolia Consulting, LLC

Elizabeth Peery
Lead Research Associate
Magnolia Consulting, LLC

Articulating project outcomes is easier said than done. A well-articulated outcome is one that is feasible to achieve within the project period, measurable, appropriate for the phase of project development, and in alignment with the project’s theory of change. A project’s theory of change represents causal relationships – IF we do these activities, THEN these intended outcomes will result. Understandably, project staff often frame outcomes as what they intend to do, develop, or provide, rather than what will happen as a result of those project activities. Using logic models to situate intended outcomes within a project’s theory of change helps to illustrate how project activities will result in intended outcomes.
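
To make the IF-THEN structure concrete, here is a minimal sketch of a logic model as a simple data structure. All entries are invented for illustration and are not drawn from the project described below.

```python
# Minimal sketch: a logic model as a simple data structure that makes the
# IF-THEN chain explicit. All entries are invented for illustration and
# are not drawn from the project described in this post.
from dataclasses import dataclass

@dataclass
class LogicModel:
    activities: list[str]  # IF we do these activities...
    outputs: list[str]     # ...producing these direct products...
    outcomes: list[str]    # ...THEN these measurable results follow.

model = LogicModel(
    activities=["Deliver a five-day summer institute for instructors"],
    outputs=["30 instructors trained", "Shared repository of lab lessons"],
    outcomes=["Instructors adopt the new lab curriculum within one year"],
)

# Reading the theory of change off the structure:
print("IF:", "; ".join(model.activities))
print("THEN:", "; ".join(model.outcomes))
```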

Since 2008, my team and I have served as the external evaluator for two ATE project cycles with the same client. As the project has evolved over time, so too have its intended outcomes. Our experience using logic models for program planning and evaluation has illuminated four critical roles we as evaluators have played in partnership with project staff:

  1. Educator. Once funded, we spent time educating the project partners on the purpose and development of a theory of change and intended outcomes using logic models. In this role, our goal was to build understanding of and buy-in for the need to have logic models with well-articulated outcomes to guide project implementation.
  2. Facilitator. Next, we facilitated the development of an overarching project logic model with project partners. The process of defining the project’s theory of change and intended outcomes was important in creating a shared agreement and vision for project implementation and evaluation. Even if the team includes a logic model in the proposal, refining it during project launch is still an important process for engaging project partners. We then collaborated with individual project partners to build a “family” of logic models to capture the unique and complementary contributions of each partner while ensuring that the work of all partners was aligned with the project’s intended outcomes. We repeated this process during the second project cycle.
  3. Methodologist. The family of logic models became the key source for refining the evaluation questions and developing data collection methods that aligned with intended outcomes. The logic model thus became an organizing framework for the evaluation. Therefore, the data collection instruments, analyses, and reporting yielded relevant evaluation information related to intended outcomes.
  4. Critical Friend. As evaluators, our role as a critical friend is to make evidence-based recommendations for improving project activities to achieve intended outcomes. Sometimes evaluation findings don’t support the project’s theory of change, and as critical friends, we play an important role in challenging project staff to identify any assumptions they might have made about project activities leading to intended outcomes. This process helped to inform the development of tenable and appropriate outcomes for the next funding cycle.

Resources:

There are several resources for articulating outcomes using logic models. Some of the most widely known include the following:

Worksheet: Logic Model Template for ATE Projects & Centers: https://www.evalu-ate.org/resources/lm-template/

Education Logic Model (ELM) Application Tool for Developing Logic Models: http://relpacific.mcrel.org/resources/elm-app/

University of Wisconsin-Extension’s Logic Model Resources: http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html

W.K. Kellogg Foundation Logic Model Development Guide: https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide