Blog: Beyond Reporting: Getting More Value out of Your Evaluation*

Posted on April 15, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If you’ve been a part of the ATE community for any time at all, you probably already know that ATE projects are required to have their work formally evaluated. NSF program officers want the projects they oversee to include evaluation results in their annual reports.

What may be less well known is that they also want to hear how projects are making use of their evaluations to learn from and improve their NSF-funded work. Did your evaluation results show that an activity you thought would help you reach your project goals turned out to be a flop? That may be disappointing, but it’s also extremely valuable information.   

There is more to using evaluation results than including findings in your annual reports to NSF or even following your evaluators’ recommendations. Project team members should take time to delve into the evaluation data on their own. For example: 

  • Read every comment in your qualitative data. Although you should avoid getting caught up in the less favorable remarks, they can be a valuable source of information about ways you might improve your work.
  • Take time to consider the remarks that surprise you—they may reveal a blind spot that needs to be investigated.
  • Don’t forget to pat yourself on the back for the stuff you’re already getting right.

It’s important to find out whether a project is effective overall, but it can also be very revealing to disaggregate data by participant characteristics such as gender, age, discipline, enrollment status, or other factors. If you find out that some groups are getting more out of their experience with the project than others, you have an opportunity to adjust what you’re doing to better meet your intended audience’s needs. 
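Disaggregation like this is straightforward to sketch in code. Below is a minimal, hypothetical example using pandas; the column names and ratings are invented for illustration, not drawn from any actual ATE project dataset:

```python
import pandas as pd

# Hypothetical participant-level evaluation data (invented values).
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M"],
    "enrollment_status": ["full-time", "full-time", "part-time",
                          "part-time", "full-time", "part-time"],
    "satisfaction": [4.5, 3.8, 3.2, 3.0, 4.7, 2.9],  # 1-5 scale
})

# The overall average can hide subgroup differences...
overall = df["satisfaction"].mean()

# ...which disaggregating by participant characteristics reveals.
by_group = (df.groupby(["gender", "enrollment_status"])["satisfaction"]
              .agg(["mean", "count"]))
print(by_group)
```

A gap between subgroups in a table like this is exactly the kind of signal that suggests adjusting activities to better serve the groups getting less out of the project.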

The single most important thing you can do to maximize an evaluation’s potential to bring value to your project is to make time to understand and use the results. That means:  

  • Meet with your evaluator to discuss the results.
  • Review results with your project colleagues and advisors.
  • Make decisions about how to move forward based on the evaluation results.
  • Record those decisions, along with what happens after you take action. That way, you can include this information in your annual reports to NSF.

ATE grantees are awarded about $66 million annually by the federal government. We have an ethical obligation to be self-critical, use all available information sources to assess progress and opportunities for improvement, and use project evaluations to help us achieve excellence in all aspects of our work.  

 

*This blog is based on an article from an EvaluATE newsletter published in October 2014. 

Blog: Backtracking Alumni: Using Institutional Research and Reflective Inquiry to Improve Organizational Learning

Posted on April 2, 2020 in Blog

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Faye R. Jones, Senior Research Associate, Florida State University
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State University

In a recent blog post, we shared practical tips for developing an alumni tracking program to assess students’ employment outcomes. Alumni tracking is an effective tool for assessing the quality of educational programs and helping determine whether programs have the intended impact.

In this post, we share the Backtracking technique, an advanced approach that supplements alumni tracking data with students’ institutionally archived records. Backtracking assumes that institutions and programs already gather student outcomes information (e.g., employment, salary, and advanced educational data) from alumni on a periodic basis (e.g., annually or every three years).

The technique uses institutional research (IR) archives to match students’ employment outcomes to academic and demographic variables (e.g., academic GPA, courses taken, grades, major, additional certifications, internships, gender, race/ethnicity). By pairing student outcomes data with academic and demographic variables, we can contextualize student pathways and explore the whole pathway, not just a moment in time.
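In practice, the matching step amounts to joining the two data layers on a shared student identifier. The frames, IDs, and columns below are hypothetical, not an actual IR schema:

```python
import pandas as pd

# Hypothetical IR archive records (invented values, illustrative columns).
ir_records = pd.DataFrame({
    "student_id": [101, 102, 103],
    "gpa": [3.4, 2.8, 3.9],
    "major": ["Engineering Tech", "IT", "Biotech"],
    "internship": [True, False, True],
})

# Hypothetical self-reported alumni survey outcomes.
alumni_survey = pd.DataFrame({
    "student_id": [101, 103],
    "employed_in_field": [True, True],
    "salary": [52000, 61000],
})

# Backtracking pairs self-reported outcomes with archived academic
# variables; a left join keeps graduates who did not answer the survey.
matched = ir_records.merge(alumni_survey, on="student_id", how="left")
print(matched)
```

The left join matters: rows with missing survey data identify the alumni you still need to reach, while the matched rows can be analyzed against GPA, major, internships, and demographics.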

Figure 1 shows an example of the Backtracking technique for two-year Associate of Arts (AA) and Associate of Science (AS) programs.

Figure 1. Backtracking Technique for AA/AS Programs 

Figure 1 illustrates three data collection layers. Layer 1, Institutional Research College Data, provides student completion data, academic history, and contact information. Advanced and transfer-degree data are also available through the National Student Clearinghouse, which can reveal the major that a former student (or graduate) entered after completing the AA/AS degree. Layer 2, Alumni Transfer Employment Data, includes student employment and advanced-degree information self-reported in alumni surveys.

Layer 3, Pathway Explanatory Data, embeds a qualitative component within the Backtracking technique in order to let alumni explain their undergraduate experiences. This layer helps us understand what happened during and after college. Most importantly, it lets us identify the critical junctures that students faced and the facilitators and hindrances that allowed students to overcome (or that caused) setbacks during these difficult periods.

To provide alumni with the best opportunities to share their experiences, we use IR archives to formulate questions based on key facts about students’ experiences. For example, if IR records show that a student transferred from college A to university B, we may ask the student about that specific experience. For a student who failed Calculus 1 once but passed it on the second try, we may ask what allowed that success.

Although individual student pathways are useful, we can also stratify these data by race and gender (or other factors) and then aggregate them to better understand student groups. We demonstrate how we aggregate the pathways in this short video.

The Backtracking technique requires skilled personnel with technical knowledge in IR and data collection and analysis, or an Academic IR professional (who possesses both IR and research skills). Investing in such skill and knowledge is worthwhile:

    • Institutional research is powerful when used for formative and internal improvement and for generation of new knowledge 
    • Findings about former students using the Backtracking technique can provide useful information to improve program and institutional services (e.g., advising, formal practices, informal learning opportunities, etc.) 
    • Looking back at what worked or failed for past students can inform current practices and serve as a source of institutional learning 

References: 

Jones, F. R., & Mardis, M. A. (2019, May 15). Alumni tracking: The ultimate source for evaluating completer outcomes [Blog post]. Retrieved from https://www.evalu-ate.org/blog/jones2-may19/

Blog: Strategies and Sources for Interpreting Evaluation Findings to Reach Conclusions

Posted on March 18, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Imagine: You’re an evaluator who has compiled lots of data about an ATE project. You’re preparing to present the results to stakeholders. You have many beautiful charts and compelling stories to share.  

You’re confident you’ll be able to answer the stakeholders’ questions about data collection and analysis. But you get queasy at the prospect of questions like “What does this mean? Is this good? Has our investment been worthwhile?”

It seems like the project is on track and they’re doing good work, but you know your hunch is not a sound basis for a conclusion. You know you should have planned ahead for how findings would be interpreted in order to reach conclusions, and you regret that the task got lost in the shuffle.  

What is a sound basis for interpreting findings to make an evaluative conclusion?  

Interpretation requires comparison. Consider how you make judgments in daily life: If you declare, “this pizza is just so-so,” you are comparing that pizza with other pizza you’ve had, or maybe with your imagined ideal pizza. When you judge something, you’re comparing that thing with something else, even if you’re not fully conscious of that comparison.

The same thing happens in program evaluation, and it’s essential for evaluators to be fully conscious and transparent about what they’re comparing evaluative evidence against. When evaluators don’t make their comparison points explicit, their evaluative conclusions may seem arbitrary and stakeholders may dismiss them as unfounded.

Here are some sources and strategies for comparisons to inform interpretation. Evaluators can use these to make clear and reasoned conclusions about a project’s performance:  

Performance Targets: Review the project proposal to see if any performance targets were established (e.g., “The number of nanotechnology certificates awarded will increase by 10 percent per year”). When you compare the project’s results with those targets, keep in mind that the original targets may have been either under- or overambitious. Talk with stakeholders to see if those original targets are appropriate or if they need adjustment. Performance targets usually follow the SMART structure (specific, measurable, achievable, relevant, and time-bound).

Project Goals: Goals may be more general than specific performance targets (e.g., “Meet industry demands for qualified CNC technicians”). To make lofty or vague goals more concrete, you can borrow a technique called Goal Attainment Scaling (GAS). GAS was developed to measure individuals’ progress toward desired psychosocial outcomes. The GAS resource from BetterEvaluation will give you a sense of how to use this technique to assess program goal attainment.

Project Logic Model: If the project has a logic model, map your data points onto its components to compare the project’s actual achievements with the planned activities and outcomes expressed in the model. No logic model? Work with project staff to create one using EvaluATE’s logic model template. 

Similar Programs: Look online or ask colleagues to find evaluations of projects that serve similar purposes as the one you are evaluating. Compare the results of those projects’ evaluations to your evaluation results. The comparison can inform your conclusions about relative performance.  

Historical Data: Look for historical project data that you can compare the project’s current performance against. Enrollment numbers and student demographics are common data points for STEM education programs. Find out if baseline data were included in the project’s proposal or can be reconstructed with institutional data. Be sure to capture several years of pre-project data so year-to-year fluctuations can be accounted for. See the practical guidance for this interrupted time series approach to assessing change related to an intervention on the Towards Data Science website. 
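As a rough illustration of the interrupted time series idea, a segmented regression can estimate the pre-project trend and the level shift after the project began. The enrollment numbers and start year below are invented; a real analysis would also test a slope-change term and check statistical significance:

```python
import numpy as np

# Hypothetical annual enrollment counts: five pre-project years, three post.
years = np.arange(2012, 2020)
enroll = np.array([80, 82, 85, 83, 86, 95, 99, 104])
post = (years >= 2017).astype(int)  # assume the project began in 2017
t = years - years[0]

# Segmented regression: common linear trend plus a level shift
# (step change) after the intervention year.
X = np.column_stack([np.ones_like(t), t, post])
coef, *_ = np.linalg.lstsq(X, enroll, rcond=None)
intercept, trend, level_shift = coef
print(f"baseline trend: {trend:.2f}/yr, post-project level shift: {level_shift:.2f}")
```

With several years of baseline data, the estimated trend separates ordinary year-to-year growth from the jump attributable to the project period, which is exactly why capturing pre-project data matters.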

Stakeholder Perspectives: Ask stakeholders for their opinions about the status of the project. You can work with stakeholders in person or online by holding a data party to engage them directly in interpreting findings.

 

Whatever sources or strategies you use, it’s critical that you explain your process in your evaluation reports so it is transparent to stakeholders. Clearly documenting the interpretation process will also help you replicate the steps in the future.

Blog: Three Questions to Spur Action from Your Evaluation Report

Posted on March 4, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluators are urged to make their evaluations useful. Project staff are encouraged to use their evaluations. An obvious way to support these aims is for evaluators to develop recommendations based on evidence and for project staff to follow those recommendations (if they agree with them, of course). But not all reports have recommendations, and sometimes recommendations are just “keep up the good work!” If implications for action are not immediately obvious from an evaluation report, here are three questions that project staff can ask themselves to spark thinking and decision making about how to use evaluation findings. I’ve included real-world examples based on our experience at EvaluATE.

1) Are there any unexpected findings in the report? The EvaluATE team has been surprised to learn that we are attracting a large number of grant writers and other grant professionals to our webinars. We initially assumed that principal investigators (PIs) and evaluators would be our main audience. With growing attendance among grant writers, we became aware that they are often the ones who first introduce PIs to evaluation, guiding them on what should go in the evaluation section of a proposal and how to find an evaluator. The unexpected finding that grant writers are seeking out EvaluATE for guidance made us realize that we should develop more tailored content for this important audience as we work to advance evaluation in the ATE program.

Talk with your team and your evaluator to determine if any action is needed related to your unexpected results.

2) What’s the worst/least favorable evaluation finding from your evaluation? Although it can be uncomfortable to focus on a project’s weak points, doing so is where the greatest opportunity for growth and improvement lies. Consider the probable causes of the problem and potential solutions. Can you solve the problem with your current resources? If so, make an action plan. If not, decide if the problem is important enough to address through a new initiative.

At EvaluATE, we serve both evaluators and evaluation consumers who have a wide range of interests and experience. When asked what EvaluATE needs to improve, several respondents to our external evaluation survey noted that they want webinars to be more tailored to their specific needs and skill levels. Some noted that our content was too technical, while others remarked that it was too basic. To address this issue, we decided to develop an ATE evaluation competency framework. Webinars will be keyed to specific competencies, which will help our audience decide which are appropriate for them. We couldn’t implement this research and development work with our current resources, so we wrote this activity into a new proposal.

Don’t sweep an unfavorable result or criticism under the rug. Use it as a lever for positive change.

3) What’s the most favorable finding from your evaluation? Give yourself a pat on the back, and then figure out if this finding points to an aspect of your project you should expand. If you need more information to make that decision, determine what additional evidence could be obtained in the next round of the evaluation. Help others to learn from your successes—the ATE Principal Investigators Conference is an ideal place to share aspects of your work that are especially strong, along with your lessons learned and practical advice about implementing ATE projects.

At EvaluATE, we have been astounded at the interest in and positive response to our webinars. But we don’t yet have a full understanding of the extent to which webinar attendance translates to improvements in evaluation practice. So we decided to start collecting follow-up data from webinar participants to check on use of our content. With that additional evidence in hand, we’ll be better positioned to make an informed decision about expanding or modifying our webinar series.

Don’t just feel good about your positive results—use them as leverage for increased impact.

If you’ve considered your evaluation results carefully but still aren’t able to identify a call to action, it may be time to rethink your evaluation’s focus. You may need to make adjustments to ensure it produces useful, actionable information. Evaluation plans should be fluid and responsive—it is expected that plans will evolve to address emerging needs.

Blog: Understanding Data Literacy

Posted on February 19, 2020 in Blog

Dean of Institutional Effectiveness, Coastline College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In today’s data-filled society, institutions are awash in data but lack data literacy: the ability to transform data into usable information and to use that knowledge to facilitate actionable change.

Data literacy is a foundational driver in understanding institutional capacity to gather, consume, and utilize various data to build insight and inform actions. Institutions can use a variety of strategies to determine the maturity of their data utilization culture. The following list provides a set of methods that can be used to better understand your organization’s level of data literacy:

  • Conduct a survey that provides insight into areas of awareness, access, application, and action associated with data utilization. For example, Coastline College uses a data utilization maturity index tool, the EDUCAUSE benchmark survey, and annual utilization statistics to get this information. The survey can be conducted in person or electronically, based on the access and comfort employees or stakeholders have with technology. The goal of this strategy is to gain surface-level insight into the maturity of your organizational data culture.
  • Lead focus groups with a variety of stakeholders (e.g., faculty members, project directors) to gather rich insight into ideas about and challenges associated with data. The goal of this approach is to glean a deeper understanding of the associated “whys” found in broader assessments (e.g., observations, institutional surveys, operational data mining).
  • Compare your organizational infrastructure and operations to similar institutions that have been identified as having successful data utilization. The goal of this strategy is to help visualize and understand what a data culture is, how your organization compares to others, and how your organization can adapt or differentiate its data strategy (or adopt another one). A few resources I would recommend include Harvard Business Review’s Analytics topic library, EDUCAUSE’s Analytics library, What Works Clearinghouse, McKinsey & Company’s data culture article, and Tableau’s article on data culture.
  • Host open discussions with stakeholders (e.g., faculty members, project directors, administrators) about the benefits, disadvantages, optimism, and fears related to data. This method can build awareness, interest, and insight to support your data planning. The goal of this approach is to effectively prepare and address any challenges prior to your data plan investment and implementation.

Based on the insight collected, organizational leadership can develop an implementation plan to adopt and adapt tools, operations, and trainings to build awareness, access, application, and action associated with data utilization.

Avoid the following pitfalls:

  • Investing in a technology prior to engaging stakeholders and understanding the organizational data culture. In these instances, the technology will help but will not be the catalyst or foundation to build the data culture. The “build it and they will come” theory is not applicable in today’s data society. Institutions must first determine what they are seeking to achieve. Clay Christensen’s Jobs to Be Done Theory is a resource that may bring clarity to this matter.
  • Assuming individuals have a clear understanding of the technical aspects of data. This assumption could lead to misuse or limited use of your data. To address this issue, institutions need to conduct an assessment to understand the realities in which they are operating.
  • Hiring for a single position to lead the effort of building a data culture. In this instance, a title does not validate the effort or ensure that an institution has a data-informed strategy and infrastructure. To alleviate this challenge, institutions must invest in teams and continuous trainings. For example, Coastline College has an online data coaching course, in-person hands-on data labs, and open discussion forums and study sessions to learn about data access and utilization.

As institutions better understand and foster their data cultures, the work of evaluators can be tailored and utilized to meet project stakeholders (e.g., project directors, faculty members, supporters, and advisory boards) where they are. By understanding institutional data capacity, evaluators can support continuous improvement and scaling through the provision of meaningful and palatable evaluations, presentations, and reports.

Blog: How I Came to Learn R, and Why You Should Too!

Posted on February 5, 2020 in Blog

Founder, R for the Rest of Us

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


A few years ago, I left my job on the research team at the Oregon Community Foundation and started working as an independent evaluation consultant. No longer constrained by the data analysis software choices made by others, I was free to use whatever tool I wanted. As an independent consultant, I couldn’t afford proprietary software such as SPSS, so I used Excel. But the limits of Excel quickly became apparent, and I went in search of other options.

I had heard of R, but it was sort of a black box in my mind. I knew it was a tool for data analysis and visualization, but I had no idea how to use it. I had never coded before, and the prospect of learning was daunting. But my desire to find a new tool was strong enough that I decided to take up the challenge of learning R.

My journey to successfully using R was rocky and circuitous. I would start many projects in R before finding I couldn’t do something, and I would have to slink back to Excel. Eventually, though, it clicked, and I finally felt comfortable using R for all of my work.

The more I used R, the more I came to appreciate its power.

  1. The code that had caused me such trouble when I was learning became second nature. And I could reuse code in multiple projects, so my workflow became more efficient.
  2. The data visualizations I made in R were far better and more varied than anything I had produced in Excel.
  3. The most fundamental shift in my work, though, has come from using RMarkdown. This tool enables me to go from data import to final report in R, avoiding the dance across, say, SPSS (for analyzing data), Excel (for visualizing data), and Word (for reporting). And when I receive new data, I can simply rerun my code, automatically generating my report.

In 2019, I started R for the Rest of Us to help evaluators and others learn to embrace the power of R. Through online courses, workshops, coaching, and custom training for organizations, I’ve helped many people transition to R.

I’m delighted to share some videos here that show you a bit more about what R is and why you might consider learning it. You’ll learn about what importing data into R looks like and how you can use a few lines of code to analyze your data, and you’ll see how you can do this all in RMarkdown. The videos should give you a good sense of what working in R looks like and help you decide if it makes sense for you to learn it.

I always tell people considering R that it is challenging to learn. But I also tell them that the time and energy you invest in learning R is very much worth it in the end. Learning R will not only improve the quality of your data analysis, data visualization, and workflow, but also ensure that you have access to this powerful tool forever—because, oh, did I mention that R is free? Learning R is an investment in your current self and your future self. What could be better than that?

R Video Series

Blog: How Your Editor Is a Lot Like an Evaluator

Posted on January 22, 2020 in Blog

Editor and Project Manager, Dragonfly Editorial

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Cynthia Williams, editor and project manager at Dragonfly Editorial and owner of Style Sheets Editorial Services. Having worked with lots of program evaluation and research consultant clients, I’ve seen how they help programs evaluate the quality of their offerings. In this blog, I’d like to show you how a good editor can act like an evaluator — for your publications.

We conduct needs assessments. Context is everything, and we want to make sure we’re sufficiently supporting your team. So if you tell us to focus on something specific in your reports — or not to mind, say, the capitalization of “Program Officer,” because that’s how the client likes it — we pay attention. Similarly, if the client wants a more muted tone to avoid too much bluster in the reporting of results, we’ll scan for that too. If your organization has a style guide, our editing is also informed by those requirements. By communicating with you about the right level of edit, we can avoid editing too lightly or too heavily.

We’re responsive to context. Further on context, we make sure to edit according to audience. If you’re reaching out to other experts in your field, we don’t query excessive jargon and terms of art — you’re talking to peers who know this stuff. But if you’re translating your research to lay readers (who may be educated but not versed in your area of expertise), we’ll add a comment if we come across phrasing or terms that make us, mostly editing generalists, do a double take. The thinking is, if we have to read that sentence more than once, so will the reader of your report.

Editors also bring industry standards to the table. Just as evaluators have the American Evaluation Association’s Guiding Principles For Evaluators, copy editors have an arsenal of guiding principles. We refer to style guides, such as The Chicago Manual of Style and the Publication Manual of the American Psychological Association. We employ usage manuals, such as Garner’s Modern English Usage and Merriam-Webster’s Dictionary of English Usage — and, of course, online dictionaries, encyclopedias, and grammar guides.

We use mixed methods. In addition to the above references, editors also use a more qualitative tool — that is, the editor’s ear. This practice is honed over years of reading enough similar materials to know industry norms and being versed in editing for readability and plain language.

Like evaluation, editing is a participatory process, a conversation between your organization and your eagle-eyed publication caretaker. The best results require open communication about each manuscript’s needs and audience, and flexibility from all parties to reach a high-quality final product.

Blog: Increasing Response Rates*

Posted on January 9, 2020 in Blog

Founder and President, EvalWorks, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Higher response rates yield larger samples and reduce the risk of nonresponse bias. Research on ways to increase response rates for mail and Internet surveys suggests that the following steps will improve the odds that participants will complete and return your survey, whether it is by Internet or mail.

Make the survey as salient as possible to potential respondents.
Relevance can be tested with a small group of people similar to your respondents.

If possible, use Likert-type questions, versus open-ended questions, to increase response rates. 
Generally, the shorter the survey appears to respondents, the better.

Limit the number of questions of a sensitive nature, when possible.
Additionally, if possible, make the survey anonymous, as opposed to confidential.

Include prenotification and follow-ups to survey respondents.
Personalizing these contacts will also increase response rates. In addition, surveys conducted by noncommercial institutions (e.g., colleges) obtain higher response rates than those conducted by commercial institutions.

Provide additional copies of or links to the survey.
This can be done as part of follow-up with potential respondents.

Provide incentives. 
Incentives included in the initial mailing produce higher return rates than those contingent upon survey return, with twice the increase when monetary (versus nonmonetary) incentives are included up-front.

Consider these additional strategies for mail surveys:
Sending surveys using recorded delivery, using colored paper for mail surveys, and providing addressed, stamped return envelopes.

Consider the following when conducting an Internet survey:
Include a visual indicator of how much of the survey respondents have completed—or, alternately, how much of the survey they have left to complete.

Although there are no hard-and-fast rules for what constitutes an appropriate response rate, many government agencies require response rates of 80 percent or higher before they are willing to report results. If you have conducted a survey and still have a low response rate, it is important to make additional efforts or use a different survey mode to reach non-respondents; however, it is also important to ensure that they do not respond differently than initial respondents and that the survey mode itself did not produce bias.
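One common way to check whether non-respondents might differ from respondents is a wave analysis, which treats late responders (those who answered only after a reminder) as a rough proxy for non-respondents. The numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical survey results (invented values).
n_sampled = 20                                  # people who received the survey
early = np.array([4.2, 4.5, 3.9, 4.1, 4.4])    # ratings returned before the reminder
late = np.array([3.1, 3.4, 2.9, 3.3])          # ratings returned after the reminder

response_rate = (len(early) + len(late)) / n_sampled
print(f"response rate: {response_rate:.0%}")

# A large gap between early and late responders suggests that
# non-response bias may be affecting the overall results.
gap = early.mean() - late.mean()
print(f"early vs. late mean difference: {gap:.2f}")
```

A gap near zero is reassuring; a large one, as in this invented example, would argue for more follow-up with non-respondents before reporting overall results.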

 

*This blog is a reprint of an article from an EvaluATE newsletter published in spring 2010.

Blog: Utilization-focused Evaluation

Posted on December 11, 2019 in Blog

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
John Cosgrove
Senior Partner, Cosgrove & Associates
Maggie Cosgrove
Senior Partner, Cosgrove & Associates

 

As seasoned evaluators committed to utilization-focused evaluation, we partner with clients to create questions and data analysis connected to continuous improvement. We stress developmental evaluation[1] to help link implementation and outcome evaluation. Sounds good, right? Well, not so fast.

Confession time. At times a client’s attention to data wanes as a project progresses, and efforts to actively engage clients to use data for continuous improvement do not generate the desired enthusiasm. Although interest in data re-emerges as the project concludes, that enthusiasm seems more related to answering “How did we do?” rather than exploring “What did we learn?” This phenomenon, depicted in the U-shaped curve in Figure 1, suggests that when data may have great potential to impact continuous improvement (“the Messy Middle”), clients may be less curious about their data.           

To address this issue, we revisit Stufflebeam’s guiding principle: the purpose of evaluation is to improve, not prove.[2] Generally, clients have good intentions to use data for improvement and are interested in such endeavors. However, as Bryk points out in his work with networked improvement communities (NIC),[3] sometimes practitioners need help learning to improve. Borrowing from NIC concepts,[4] we developed the Thought Partner Group (TPG) and incorporated it into our evaluation. This group’s purpose is to assist with data interpretation, sharing, and usage. To achieve these goals, we invite practitioners or stakeholders who are working across the project and who have a passion for the project, an interest in learning, and an eagerness to explore data. We ask this group to go beyond passive data conversations and address questions such as:

  • What issues are getting in the way of progress and what can be done to address them?
  • What data and actions are needed to support sustaining or scaling?
  • What gaps exist in the evaluation?

The TPG’s focus on improvement and data analysis breathes life into the evaluation and improvement processes. Group members are carefully selected for their deep understanding of local context and their willingness to support the transfer of knowledge gained during the evaluation. Evaluation data have a story to tell, and the TPG helps clients give voice to their data.

Although not a silver bullet, the TPG has helped improve our clients’ use of evaluation data and has helped them get better at getting better. The TPG model supports the evaluation process and mirrors Engelbart’s C-level activity[5] by helping shed light on the evaluator’s and the client’s understanding of the Messy Middle.

 

 


[1] Patton, M. Q. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.
[2] Stufflebeam, D. L. (1971). The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education.
[3] Bryk, A., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Cambridge, MA: Harvard Education Publishing.
[4] Bryk A. S., Gomez, L. M., & Grunow A. (2010). Getting ideas into action: Building networked improvement communities in education. Stanford, CA: Carnegie Foundation for the Advancement of Teaching. Also see McKay, S. (2017, February 23). Quality improvement approaches: The networked improvement model. [blog].
[5] Engelbart, D. C. (2003, September). Improving our ability to improve: A call for investment in a new future. IBM Co-Evolution Symposium.

Blog: Utilizing Social Media Analytics to Demonstrate Program Impact

Posted on November 26, 2019 in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
LeAnn Brosius, Evaluator
Kansas State University
Office of Educational Innovation and Evaluation
Adam Cless, Evaluation Assistant
Kansas State University
Office of Educational Innovation and Evaluation

The use of social media within programs has grown rapidly over the past decade and has become a popular way for programs to reach and engage their stakeholders. Consequently, organizations are utilizing data analytics from social media platforms as a way to measure impact. These data can help programs understand how program objectives, progress, and outcomes are disseminated and used (e.g., through discussions, viewing of content, and following of program social media pages). Social media allows programs to:

  • Reach broad and diverse audiences
  • Promote open communication and collaboration
  • Gain instantaneous feedback
  • Predict future impacts – “Forecasting based on social media has already proven surprisingly effective in diverse areas including predicting stock prices, election results and movie box-office returns.” (Priem, 2014)

Programs, as well as funding agencies, are now recognizing social media as a way to measure a program’s impact across social networks and dissemination efforts, increase visibility of a program, demonstrate broader impacts on audiences, and complement other impact measures. Nevertheless, the question remains…

Should a social media analysis be conducted?

Knowing when and whether to conduct a social media analysis is an important consideration. Just because a social media analysis can be conducted doesn’t mean one should be. Therefore, before beginning one, it is important to take the time to determine a few things:

  1. What specific goals will be addressed through social media?
  2. How will progress toward these goals be measured using social media?
  3. Which platforms will be most useful for reaching the target audience?

So, why is an initial assessment important before conducting a social media analysis?

The metrics available for social media are extensive, and not all are useful for determining the impact of a program’s social media efforts. As Sterne (2010) explains, social media metrics need to carry meaning: “measuring for measurement’s sake is a fool’s errand”; “without context, your measurements are meaningless”; and “without specific business goals, your metrics are meaningless.” Therefore, it is important to identify specific program objectives and the metrics (key performance indicators [KPIs]) that are central to assessing progress toward and success of those objectives.

Additionally, it is worth recognizing that popular social media platforms change constantly, categorizing the various platforms is difficult, and the metrics used by different platforms vary.

To add meaning to a program’s social media analyses, it may be helpful to use a framework that aligns social media metrics to the program’s objectives and helps demonstrate progress and success toward those objectives.

Neiger et al. (2012) developed one such framework to classify and measure the social media metrics and platforms used in health care. The framework considers social media’s potential to engage, communicate, and disseminate critical information to stakeholders, as well as to promote programs and expand audience reach. In it, Neiger et al. present four KPI categories (insight, exposure, reach, and engagement) for analyzing the social media metrics used in health promotion, aligned to 39 metrics. This framework is a great place to start, but keep in mind that it may not be an exact fit with a given program’s objectives.

Below is an example of aligning the Neiger et al. framework to a different program. The table shows the social media metrics analyzed for the program, the KPIs those metrics measured, and the alignment of the metrics and KPIs to the program’s outreach goals. In this example, the program’s goals aligned to only three of the four KPIs from the Neiger et al. framework. The program also evaluated different metrics and platforms that were more representative of its own social media efforts; for example, it used phone apps to disseminate program information, so app metrics were added to the analysis.
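An alignment like the one described above can be expressed as a simple data structure. This is only a minimal sketch: the KPI category names come from Neiger et al. (2012), but the goal labels and specific metrics below are hypothetical placeholders, not the program’s actual table.

```python
# Hypothetical alignment of social media metrics to KPI categories from
# Neiger et al. (2012). Goal and metric names are illustrative only.
kpi_alignment = {
    "exposure": {
        "goal": "Disseminate program information broadly",
        "metrics": ["page views", "video views", "app downloads"],
    },
    "reach": {
        "goal": "Grow the program's audience",
        "metrics": ["followers", "unique visitors"],
    },
    "engagement": {
        "goal": "Promote discussion of program content",
        "metrics": ["likes", "shares", "comments"],
    },
}

def metrics_for(kpi: str) -> list:
    """Return the metrics aligned to a given KPI category (empty if none)."""
    return kpi_alignment.get(kpi, {}).get("metrics", [])

print(metrics_for("engagement"))  # ['likes', 'shares', 'comments']
```

Writing the alignment down this way makes it easy to spot KPI categories (here, "insight") that a program’s goals do not map to, mirroring the three-of-four alignment in the example above.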

What are effective ways to share the results from a social media analysis?

After compiling and cleaning the data from the social media platforms a program uses, consider the program’s goals and audience in order to choose the report format and/or visual that will best communicate the results. The results from the program example above were shared as a visual illustrating the program’s progress toward its dissemination goals, along with the metric evidence from each social media platform used to reach its audience. This visual representation highlights the following information from the social media analysis:

  • The extent to which the program’s content was viewed
  • Evidence of the program’s dissemination efforts
  • Stakeholders’ engagement with, and preferences for, program content posted on various social media platforms
  • Potential areas of focus for the program’s future social media efforts


What are some of the limitations of a social media analysis?

The use and application of social media as an effective means to measure program impacts can be restricted by several factors. It is important to be mindful of what these limitations are and present them with findings from the analysis. A few limiting aspects of social media analytics to keep in mind:

  • They do not define program impact
  • They may not measure program impact
  • There are many different platforms
  • There are a vast number of metrics (with multiple definitions between platforms)
  • The audience is mostly invisible/not traceable

What are the next steps for evaluators using social media analytics to demonstrate program impacts?

  • Develop a framework aligned to the intended program’s goals
  • Determine the social media platforms and metrics that most accurately demonstrate progress toward the program’s goals and reach target audiences
  • Establish growth rates for each metric to demonstrate progress and impact
  • Involve key stakeholders throughout the process
  • Continue to revise and revisit regularly
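The growth-rate step above can be operationalized with simple period-over-period arithmetic. This sketch assumes per-period counts have already been exported from each platform; the metric names and numbers are made up for illustration.

```python
# Minimal sketch: period-over-period growth rate for each metric,
# assuming counts have already been exported from each platform.
def growth_rate(previous: float, current: float) -> float:
    """Return fractional growth from one reporting period to the next."""
    if previous == 0:
        raise ValueError("previous period count must be nonzero")
    return (current - previous) / previous

# Illustrative (made-up) counts for two reporting periods: (prev, curr).
metrics = {"followers": (400, 500), "page views": (1200, 1500)}

for name, (prev, curr) in metrics.items():
    print(f"{name}: {growth_rate(prev, curr):+.0%}")
```

Tracking these rates over several reporting periods, rather than raw totals alone, gives stakeholders a clearer picture of whether dissemination efforts are accelerating or stalling.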

Editor’s Note: This blog is based on a presentation the authors gave at the 2018 American Evaluation Association (AEA) Annual Conference in Cleveland, OH.

References

Priem, J. (2014). Altmetrics. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional indicators of scholarly impact (pp. 263-288). Cambridge, MA: The MIT Press.

Sterne, J. (2010). Social media metrics: How to measure and optimize your marketing investment. Hoboken, NJ: John Wiley & Sons, Inc.

Neiger, B. L., Thackeray, R., Van Wagenen, S. A., Hanson, C. L., West, J. H., Barnes, M. D., & Fagen, M. C. (2012). Use of social media in health promotion: Purposes, key performance indicators, and evaluation metrics. Health Promotion Practice, 13(2), 159-164.