The use of social media within programs has grown dramatically over the past decade and has become a popular way for programs to reach their stakeholders and inform engagement efforts. Consequently, organizations are turning to data analytics from social media platforms as a way to measure impact. These data can help programs understand how program objectives, progress, and outcomes are disseminated and used (e.g., through discussions, viewing of content, or following program social media pages). Social media allows programs to:

  • Reach broad and diverse audiences
  • Promote open communication and collaboration
  • Gain instantaneous feedback
  • Predict future impacts – “Forecasting based on social media has already proven surprisingly effective in diverse areas including predicting stock prices, election results and movie box-office returns” (Priem, 2014)

Programs and funding agencies now recognize social media as a way to measure a program’s impact across social networks and dissemination efforts, increase a program’s visibility, demonstrate broader impacts on audiences, and complement other impact measures. Nevertheless, the question remains:


Should a social media analysis be conducted?

Knowing whether, and when, to conduct a social media analysis is an important first consideration. Just because a social media analysis can be conducted does not mean one should be. Before beginning one, it is important to take the time to determine a few things (one lightweight way to record these answers is sketched after the list below):

  1. What specific program goals will be addressed through social media?
  2. How will these goals be measured using social media?
  3. Which platforms will be most valuable/useful in reaching the targeted audience?
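Since the post doesn’t prescribe a format for this assessment, here is one lightweight, hypothetical way to record the answers before any data collection begins; the goal, metric, and platform names below are illustrative assumptions only, not recommendations.

```python
# Hypothetical record of the three assessment questions above.
# Goal, metric, and platform names are illustrative placeholders.
assessment = {
    "goals": ["Increase awareness of program events among regional educators"],
    "measures": {
        # How each goal will be measured via social media
        "Increase awareness of program events among regional educators": [
            "post impressions",
            "link clicks on event announcements",
        ],
    },
    "platforms": ["Facebook", "Twitter"],  # where the target audience is active
}

# If a goal has no measurable social media indicator, an analysis
# may not be warranted for that goal.
for goal in assessment["goals"]:
    if not assessment["measures"].get(goal):
        print(f"No social media measure identified for: {goal}")
```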


So, why is an initial assessment important before conducting a social media analysis?

Metrics available for social media are extensive, and not all are useful for determining the impact of a program’s social media efforts. As Sterne (2010) explains, social media metrics must carry meaning: “measuring for measurement’s sake is a fool’s errand”; “without context, your measurements are meaningless”; and “without specific business goals, your metrics are meaningless.” Therefore, it is important to consider specific program objectives and which metrics (key performance indicators [KPIs]) are central to assessing the progress and success of those objectives.

It is also worth recognizing that popular social media platforms are constantly changing, that categorizing the various platforms is difficult, and that the metrics used by different platforms vary.

To bring more meaning to a program’s social media analyses, it may be helpful to use a framework that provides a structure for aligning social media metrics to the program’s objectives and assists in demonstrating progress and success toward those objectives.

One framework from the literature, developed by Neiger et al. (2012), was used to classify and measure the social media metrics and platforms utilized in health care. This framework examined social media’s potential to engage, communicate, and disseminate critical information to stakeholders, as well as to promote programs and expand audience reach. In it, Neiger et al. presented four KPI categories (insight, exposure, reach, and engagement) for analyzing the social media metrics used in health promotion, aligned to 39 metrics. This framework is a good place to start, but keep in mind that it may not be an exact fit for a program’s objectives. Below is an example of the Neiger et al. framework aligned to a different program. This table shows the social media metrics analyzed for the program, the KPI each metric measured, and the alignment of the metrics and KPIs to the program’s outreach goals. In this example, the program’s goals aligned to only three of the four KPIs from the Neiger et al. framework, and different metrics and other platforms were evaluated that were more representative of this program’s social media efforts. For example, the program used phone apps to disseminate program information, so app use was added as a social media metric.
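As a rough, hypothetical illustration of such an alignment, the sketch below maps three of the four Neiger et al. (2012) KPI categories to example metrics and program goals; the specific metrics and goals are placeholders (the phone app entry echoes the example above), not the program’s actual table.

```python
# Hypothetical alignment of metrics to the KPI categories from
# Neiger et al. (2012): insight, exposure, reach, and engagement.
# Mirroring the example program, only three categories are used here.
kpi_alignment = {
    "exposure": ["page views", "video views", "phone app downloads"],
    "reach": ["followers", "unique visitors"],
    "engagement": ["likes", "shares", "comments"],
}

# Placeholder outreach goals, each mapped to the KPIs that evidence it.
program_goals = {
    "Disseminate program findings broadly": ["exposure", "reach"],
    "Foster stakeholder discussion of content": ["engagement"],
}

# List the concrete metrics to track for each goal.
for goal, kpis in program_goals.items():
    metrics = [m for kpi in kpis for m in kpi_alignment[kpi]]
    print(f"{goal}: {', '.join(metrics)}")
```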

What are effective ways to share the results from a social media analysis?

After compiling and cleaning the data from the social media platforms a program uses, it is important to consider the program’s goals and audience in order to format a report and/or visual that will best communicate the results. The results from the program example above were shared using a visual that illustrated the program’s progress toward its dissemination efforts and the metric evidence from each social media platform used to reach its audience. This visual representation highlights the following information from the social media analysis (a minimal plotting sketch follows the list):

  • The extent to which the program’s content was viewed
  • Evidence of the program’s dissemination efforts
  • Stakeholders’ engagement with, and preferences for, program content posted on various social media platforms
  • Potential areas of focus for the program’s future social media efforts
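As one hypothetical way to build such a visual, the sketch below plots fabricated per-platform counts with pandas and matplotlib; the platforms, metrics, and numbers are placeholders, not the example program’s data.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Fabricated illustrative counts for one reporting period.
data = pd.DataFrame(
    {
        "platform": ["Facebook", "Twitter", "YouTube", "Phone app"],
        "views": [4200, 1800, 950, 600],      # exposure: content viewed
        "engagements": [310, 240, 85, 150],   # likes, shares, comments
    }
).set_index("platform")

# Grouped bar chart: one pair of bars per platform.
ax = data.plot.bar(rot=0)
ax.set_ylabel("Count for reporting period")
ax.set_title("Dissemination and engagement by platform")
plt.tight_layout()
plt.savefig("social_media_summary.png")
```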

What are some of the limitations of a social media analysis?

Several factors can limit the use of social media as an effective means of measuring program impacts. It is important to be mindful of these limitations and to present them alongside findings from the analysis. A few limiting aspects of social media analytics to keep in mind:

  • They do not define program impact
  • They may not measure program impact
  • There are many different platforms
  • There are a vast number of metrics (with multiple definitions between platforms)
  • The audience is mostly invisible/not traceable


What are the next steps for evaluators using social media analytics to demonstrate program impacts?

  • Develop a framework aligned to the intended program’s goals
  • Determine the social media platforms and metrics that most accurately demonstrate progress toward the program’s goals and reach target audiences
  • Establish growth rates for each metric to demonstrate progress and impact (a small calculation sketch follows this list)
  • Involve key stakeholders throughout the process
  • Continue to revise and revisit regularly
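For the growth-rate step above, a minimal sketch with invented counts and a simple period-over-period calculation might look like this:

```python
# Invented counts per reporting period (e.g., quarterly snapshots).
history = {
    "followers": [120, 150, 210],
    "page views": [900, 1100, 1600],
}

for metric, counts in history.items():
    # Period-over-period growth rate between the last two observations.
    previous, current = counts[-2], counts[-1]
    rate = (current - previous) / previous
    print(f"{metric}: {rate:.0%} growth over the last period")
```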

Editor’s Note: This blog is based on a presentation the authors gave at the 2018 American Evaluation Association (AEA) Annual Conference in Cleveland, OH.

References

Neiger, B. L., Thackeray, R., Van Wagenen, S. A., Hanson, C. L., West, J. H., Barnes, M. D., & Fagen, M. C. (2012). Use of social media in health promotion: Purposes, key performance indicators, and evaluation metrics. Health Promotion Practice, 13(2), 159-164.

Priem, J. (2014). Altmetrics. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional indicators of scholarly impact (pp. 263-288). Cambridge, MA: The MIT Press.

Sterne, J. (2010). Social media metrics: How to measure and optimize your marketing investment. Hoboken, NJ: John Wiley & Sons, Inc.

About the Authors

LeAnn Brosius


Evaluator, Kansas State University Office of Educational Innovation and Evaluation

The Office of Educational Innovation and Evaluation (OEIE) conducts program evaluations of both large and small-scale projects for a broad range of clientele in educational institutions, governmental agencies, and foundations. Established in 2000 and affiliated with Kansas State University’s College of Education, OEIE has a wide depth of evaluation experience and expertise, including projects within the fields of agriculture, engineering, public health, education, and workforce development. Through years of experience, our office has gained expertise in evaluation design, qualitative and quantitative methodologies, data collection, analysis and reporting, as well as the incorporation of technology into evaluation.

Adam Cless


Evaluation Assistant, Kansas State University Office of Educational Innovation and Evaluation

