Blog: Utilizing Social Media Analytics to Demonstrate Program Impact

Posted on November 26, 2019 in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
LeAnn Brosius, Evaluator
Kansas State University
Office of Educational Innovation and Evaluation
Adam Cless, Evaluation Assistant
Kansas State University
Office of Educational Innovation and Evaluation

The use of social media within programs has grown exponentially over the past decade and has become a popular way for programs to reach, engage, and inform their stakeholders. Consequently, organizations are utilizing data analytics from social media platforms as a way to measure impact. These data can help programs understand how program objectives, progress, and outcomes are disseminated and used (e.g., through discussions, viewing of content, following program social media pages). Social media allows programs to:

  • Reach broad and diverse audiences
  • Promote open communication and collaboration
  • Gain instantaneous feedback
  • Predict future impacts – “Forecasting based on social media has already proven surprisingly effective in diverse areas including predicting stock prices, election results and movie box-office returns.” (Priem, 2014)

Programs, as well as funding agencies, are now recognizing social media as a way to measure a program’s impact across social networks and dissemination efforts, increase visibility of a program, demonstrate broader impacts on audiences, and complement other impact measures. Nevertheless, the question remains…

Should a social media analysis be conducted?

Knowing whether and when to conduct a social media analysis is important. Just because a social media analysis can be conducted doesn’t mean one should be. Therefore, before beginning one, it is important to take the time to determine a few things:

  1. What specific program goals will social media help address?
  2. How will these goals be measured using social media?
  3. Which platforms will be most valuable/useful in reaching the targeted audience?

So, why is an initial assessment important before conducting a social media analysis?

Metrics available for social media are extensive, and not all are useful for determining the impact of a program’s social media efforts. As Sterne (2010) explains, social media metrics need to carry meaning: “measuring for measurement’s sake is a fool’s errand”; “without context, your measurements are meaningless”; and “without specific business goals, your metrics are meaningless.” Therefore, it is important to consider specific program objectives and which metrics (key performance indicators [KPIs]) are central to assessing the progress and success of those objectives.

It is also worth recognizing that popular social media platforms are constantly changing, that categorizing the various platforms is difficult, and that metrics vary from one platform to another.

To bring more meaning to a program’s social media analyses, it may be helpful to use a framework that provides a structure for aligning social media metrics to the program’s objectives and helps demonstrate progress and success toward those objectives.

One framework in the literature, developed by Neiger et al. (2012), was used to classify and measure the social media metrics and platforms utilized in health care. This framework looked at the use of social media for its potential to engage, communicate, and disseminate critical information to stakeholders, as well as to promote programs and expand audience reach. In it, Neiger et al. presented four KPI categories (insight, exposure, reach, and engagement) for analyzing the social media metrics used in health promotion, aligned to 39 metrics. This framework is a great place to start, but keep in mind that it may not be an exact fit with a program’s objectives. Below is an example of aligning the Neiger et al. framework to a different program. The table shows the social media metrics analyzed for the program, the KPIs those metrics measured, and the alignment of the metrics and KPIs to the program’s outreach goals. In this example, the program’s goals aligned to only three of the four KPIs from the Neiger et al. framework. Additionally, different metrics and other platforms that were more representative of this program’s social media efforts were evaluated. For example, this program incorporated the use of phone apps to disseminate program information, so app use was added as a social media metric.
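To make this kind of alignment concrete, here is a minimal sketch of one way such a metric-to-KPI mapping could be recorded for later analysis. The KPI categories are those of Neiger et al. (2012); the platforms, metrics, and goal labels are illustrative placeholders, not the actual alignment from the program example.

```python
# KPI categories from Neiger et al. (2012): insight, exposure, reach, engagement.
# Platforms, metrics, and goal labels below are illustrative placeholders.
alignment = [
    {"platform": "Facebook", "metric": "page followers",
     "kpi": "reach", "goal": "expand the audience for program updates"},
    {"platform": "Twitter", "metric": "retweets of program posts",
     "kpi": "engagement", "goal": "promote discussion of program findings"},
    {"platform": "YouTube", "metric": "video views",
     "kpi": "exposure", "goal": "disseminate training content"},
]

# Print the alignment as a simple three-column listing.
for row in alignment:
    print(f"{row['platform']:<10} {row['metric']:<26} "
          f"KPI: {row['kpi']:<11} Goal: {row['goal']}")
```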

What are effective ways to share the results from a social media analysis?

After compiling and cleaning data from the social media platforms a program uses, it is important to consider the program’s goals and audience in order to format a report and/or visual that will best communicate the results of the analysis. The results from the program example above were shared using a visual in order to illustrate the program’s progress toward its dissemination efforts and the metric evidence from each social media platform used to reach its audience. This visual representation highlights the following information from the social media analysis:

  • The extent to which the program’s content was viewed
  • Evidence of the program’s dissemination efforts
  • Stakeholders’ engagement with, and preferences for, program content posted on various social media platforms
  • Potential areas of focus for the program’s future social media efforts


What are some of the limitations of a social media analysis?

The usefulness of social media analytics as a means to measure program impact is limited by several factors. It is important to be mindful of these limitations and present them alongside findings from the analysis. A few limiting aspects of social media analytics to keep in mind:

  • They do not define program impact
  • They may not measure program impact
  • There are many different platforms
  • There are a vast number of metrics (with multiple definitions between platforms)
  • The audience is mostly invisible/not traceable

What are the next steps for evaluators using social media analytics to demonstrate program impacts?

  • Develop a framework aligned to the intended program’s goals
  • Determine the social media platforms and metrics that most accurately demonstrate progress toward the program’s goals and reach target audiences
  • Establish growth rates for each metric to demonstrate progress and impact (a minimal computation sketch follows this list)
  • Involve key stakeholders throughout the process
  • Continue to revise and revisit regularly
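
As a minimal sketch of the growth-rate step above, assuming metric counts have been exported by reporting period (the metric names and numbers are illustrative):

```python
# Hypothetical metric counts by reporting period (names and numbers illustrative).
metrics = {
    "page followers": [120, 150, 210],
    "video views": [1000, 1800, 2500],
    "app downloads": [75, 90, 140],
}

def growth_rates(series):
    """Period-over-period percent growth for a list of counts."""
    return [
        (later - earlier) / earlier * 100
        for earlier, later in zip(series, series[1:])
        if earlier > 0  # skip periods with no baseline count
    ]

for name, counts in metrics.items():
    rates = ", ".join(f"{r:+.1f}%" for r in growth_rates(counts))
    print(f"{name}: {rates}")
```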

Editor’s Note: This blog is based on a presentation the authors gave at the 2018 American Evaluation Association (AEA) Annual Conference in Cleveland, OH.

References

Neiger, B. L., Thackeray, R., Van Wagenen, S. A., Hanson, C. L., West, J. H., Barnes, M. D., & Fagen, M. C. (2012). Use of social media in health promotion: Purposes, key performance indicators, and evaluation metrics. Health Promotion Practice, 13(2), 159-164.

Priem, J. (2014). Altmetrics. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional indicators of scholarly impact (pp. 263-288). Cambridge, MA: The MIT Press.

Sterne, J. (2010). Social media metrics: How to measure and optimize your marketing investment. Hoboken, NJ: John Wiley & Sons, Inc.

Blog: Contracting for Evaluator Services

Posted on November 13, 2019 in Blog

CREATE Energy Center Principal Investigator, Madison Area Technical College

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Contracting for Evaluator Services

You are excited to be working on a new grant proposal. You have a well-defined project objective, a solid plan to address the challenge at hand, a well-assembled team to execute the project, and a means for measuring your project’s outcomes. The missing ingredient is an evaluation plan for your project, and that means that you will need to retain the services of an evaluator!

New principal investigators often have limited prior experience with project evaluation, and identifying and contracting with an evaluator can raise many questions. Fortunately, there are resources to help and recommended practices to make these processes easier.

The first tip is to explore the grant agency requirements and your institution’s procurement policies regarding evaluation services. Federal agencies such as the National Science Foundation (NSF) may accept a general evaluation plan written by the principal investigator, with agreement that an evaluator will be named later, or they may require the use of an external evaluator who is named in the proposal. Federal requirements can differ even within a single agency and may change from one year to the next. So it is important to be certain of the current program requirements.

Additionally, some institutions may require that project evaluation be conducted by independent third parties not affiliated with the college. Furthermore, depending on the size of the proposed project, and the scope of the evaluation plan, many colleges may have procurement policies that require a competitive request for quotes or bids for evaluator contracts. There may also be requirements that a request for bids must be publicly posted, and there may be rules dictating the minimum number of bids that must be received. Adhering to your school’s procurement policy may take several months to complete, so it is highly advisable to begin the search for an evaluator as early as possible.

The American Evaluation Association has a helpful website that includes a Find an Evaluator page, which can be used to search for evaluators by location. AEA members can also post a request for evaluator services to solicit bids. The EvaluATE website lists information specific to the NSF Advanced Technological Education (ATE) program and maintains a List of Current ATE Evaluators that may serve as a good starting point for identifying prospective evaluators.

When soliciting bids, it is advisable to create a detailed request that provides a summary of the project, describes the services you are seeking, and specifies the information you would like applicants to provide. At a minimum, you will want to request a copy of the evaluator’s CV and biosketch, and a description of their prior evaluation work.

If your institution requires you to entertain multiple bids, it is a good idea to develop a rubric that you can use to judge the bids you receive. In most cases, you will not want to restrict yourself to accepting the lowest bid. Instead, it is in the best interest of your project to make a selection based both on the experience and qualifications of the prospective evaluator and on the perceived value of the services they can provide. In our experience, hourly rates for evaluator services can vary by as much as 400%, so a sufficiently large pool of bids can help ensure that quoted rates are reasonable.

Blog: How Can You Make Sure Your Evaluation Meets the Needs of Multiple Stakeholders?*

Posted on October 31, 2019 in Blog

Director of Research, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We talk a lot about stakeholders in evaluation. These are the folks who are involved in, affected by, or simply interested in the evaluation of your project. But what these stakeholders want or need to know from the evaluation, the time they have available for the evaluation, and their level of interest are probably quite variable. The table below is a generic guide to the types of ATE evaluation stakeholders, what they might need, and how to meet those needs.

ATE Evaluation Stakeholders

Project leaders (PI, co-PIs)
What they might need: Information that will help you improve the project as it unfolds; results you can include in your annual reports to NSF to demonstrate accountability and impact.
Tips for meeting those needs: Communicate your needs clearly to your evaluator, including when you need the information in order to make use of it.

Advisory committees or National Visiting Committees
What they might need: Results from the evaluation that show whether the project is on track for meeting its goals, and if changes in direction or operations are warranted; summary information about the project’s strengths and weaknesses.
Tips for meeting those needs: Many advisory committee members donate their time, so they probably aren’t interested in reading lengthy reports. Provide a brief memo and/or short presentation with key findings at meetings, and invite questions about the evaluation. Be forthcoming about strengths and weaknesses.

Participants who provide data for the evaluation
What they might need: Access to reports in which their information was used; summaries of what actions were taken based on the information they provided.
Tips for meeting those needs: The most important thing for this group is to demonstrate use of the information they provided. You can share reports, but a personal message from project leaders along the lines of “we heard you and here is what we’re doing in response” is most valuable.

NSF program officers
What they might need: Evidence that the project is on track to meet its goals; evidence of impact (not just what was done, but what difference the work is making); evidence that the project is using evaluation results to make improvements.
Tips for meeting those needs: Focus on Intellectual Merit (the intrinsic quality of the work and potential to advance knowledge) and Broader Impacts (the tangible benefits for individuals and progress toward desired societal outcomes). If you’re not sure about what your program officer needs from your evaluation, ask for clarification.

College administrators (department chairs, deans, executives, etc.)
What they might need: Results that demonstrate impact on students, faculty, institutional culture, infrastructure, and reputation.
Tips for meeting those needs: Make full reports available upon request, but most busy administrators probably don’t have the time to read technical reports or don’t need the fine-grained data points. Prepare memos or share presentations that focus on the information they’re most interested in.

Partners and collaborators
What they might need: Information that helps them assess the return on the investment of their time or other resources.

In case you didn’t read between the lines, the underlying message here is to provide stakeholders with the information that is most relevant to their particular “stake” in your project. A good way not to meet their needs is to only send everyone a long, detailed technical report with every data point collected. It’s good to have a full report available for those who request it, but many simply won’t have the time or level of interest needed to consume that quantity of evaluative information about your project.

Most importantly, don’t take our word about what your stakeholders might need: Ask them!

Not sure what stakeholders to involve in your evaluation or how? Check out our worksheet Identifying Stakeholders and Their Roles in an Evaluation at bit.ly/id-stake.

 

*This blog is a reprint of an article from an EvaluATE newsletter published in October 2015.

Blog: Evaluating Professional Development Projects*

Posted on October 16, 2019 in Blog

Founder and President, The Allison Group

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Terryll Bailey
Founder and President
The Allison Group
Lori Wingate
Director of Research
The Evaluation Center at Western Michigan University

A good prompt to start thinking about how to approach the evaluation of an Advanced Technological Education (ATE) professional development (PD) project is the ATE program solicitation. Regarding PD grants, the solicitation states that “projects should be designed to enhance the educators’ disciplinary capabilities, teaching skills, and understanding of current technologies and practices, and employability skills.” It further recommends that the “evaluation should demonstrate use in the classrooms and sustainable changes in practice of participating faculty and teachers leading to more qualified technicians for the industry. Changes in student learning outcomes as well as students’ perceptions of technical careers should be assessed” (National Science Foundation, p. 5).

ATE grants span multiple years; sustainable, lasting systemic change, however, is the long-term goal. It is important to consider the potential for systemic change as the project begins and to build in realistic indicators that the project activities are influencing the system. The following are some tips to consider when evaluating PD projects.

  1. Evaluate the design and process of PD interventions, as well as the outcomes. This is especially helpful for formative evaluation, which provides feedback for improving interventions while they’re underway. It’s also critical for illuminating the strengths and weaknesses of a PD effort to aid in understanding why certain outcomes were or were not achieved. Learning Forward’s Standards for Professional Learning and the Southern Regional Education Board’s Standards for Online Professional Development are good sources of information about what high-quality PD looks like. Fellow instructors or program deans with content knowledge can be helpful collaborators and internal evaluators, providing feedback on the quality of the content, instruction, and materials.
  2. Don’t reinvent the wheel with your evaluation design. PD is one of a relatively few areas where there are well-established frameworks for evaluation. Donald Kirkpatrick was the guru of PD evaluation and the originator of the “Four Levels” approach. Thomas Guskey adapted the Kirkpatrick model specifically for education contexts and defined five levels of professional learning evaluation. Jack and Patti Phillips bring a return-on-investment perspective to this work. Check out their materials for great ideas for framing your PD evaluation and for guidance in determining which data and data sources to employ. Joellen Killion brings these models together in her book Assessing Impact, which offers six levels to consider: reaction, learning, organizational support, application, impact on students, and return on investment.
  3. Once you embrace the “levels” approach to PD evaluation, project stakeholders can work collaboratively to define the intended outcomes for each level and the evaluation data collection methods and sources. One way to focus this work is to recall the National Science Foundation’s interest in impacting (1) educators’ disciplinary capabilities, teaching skills, understanding of current technologies and practices, and employability skills, and (2) students’ learning outcomes and perceptions of technical careers.
  4. If a professional learning community (e.g., community of practice, virtual learning community) is involved, pay special attention to capturing the nature of the interactions and associated learning among participants. In this type of PD initiative, assessing process is crucial. To learn more about evaluating professional communities, see Etienne and Beverly Wenger-Trayner’s overview of communities of practice.

Online PD has its own set of challenges for evaluation, but tools and frameworks are available to evaluate it successfully. Back-end analytics are available via various online venues, and with that technology, evaluation may actually be easier, because records are kept automatically.

ADDITIONAL RESOURCES

The Evaluation Exchange’s special issue on professional development (see especially the article by Spicer et al. about online professional development).

Example professional development follow-up survey developed by the ATE project, Destination Problem-Based Learning

The Student Assessment of Their Learning Gains Instrument for use by college instructors to “gather learning-focused feedback from students.”

* This blog is based on a handout from an EvaluATE workshop at the 2011 ATE Principal Investigators Conference.

Blog: Kirkpatrick Model for ATE Evaluation

Posted on October 2, 2019 in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Jim Kirkpatrick, Senior Consultant, Kirkpatrick Partners
Wendy Kayser Kirkpatrick, President, Kirkpatrick Partners

The Kirkpatrick Model is an evaluation framework organized around four levels of impact: reaction, learning, behavior, and results. It was developed more than 50 years ago by Jim’s father, Dr. Don Kirkpatrick, specifically for evaluating training initiatives in business settings. For decades, it has been widely believed that the four levels are applicable only to evaluating the effectiveness of corporate training programs. However, we and hundreds of global “four-level ambassadors” — including Lori Wingate and her colleagues at EvaluATE — have successfully applied Kirkpatrick outside of the typical “training” box. The Kirkpatrick Model has broad appeal because of its practical, results-oriented approach.

The Kirkpatrick Model provides the foundation for evaluating almost any kind of social, business, health, or education intervention. The process starts with identifying what success will look like and driving through with a well-coordinated, targeted plan of support, accountability, and measurement. It is a framework for demonstrating ultimate value through a compelling chain of evidence.

[Image: Kirkpatrick Model visual showing the four levels]

Whether your Advanced Technological Education (ATE) grant focuses on enhancing a curricular program, providing professional development to faculty, developing educational materials, or serving as a resource and dissemination center, the four levels are relevant.

At the most basic level (Level 1: Reaction), you need to know what your participants think of your work and your products. If they don’t value what you’re providing, you have little chance of producing higher-level results.

Next, it’s important to determine how and to what extent participants’ knowledge, skills, attitudes, confidence, and/or commitment changed because of the resources and follow-up support you provided (Level 2: Learning). Many evaluations, unfortunately, don’t go beyond Level 2. But it’s a big mistake to assume that if learning takes place, behaviors change and results happen. It’s critical to determine the extent to which people are doing things differently because of their new knowledge, skill, etc. (Level 3: Behavior).

Finally, you need to be able to answer the question “So what?” In the ATE context, that means determining how your work has impacted the landscape of advanced technological education and workforce development (Level 4: Results).

The four levels are the foundation of the model, but there is much more to it. We hope you’ll take the time to examine and reflect on how this approach can bring value to your initiative and its evaluation. To learn more about Kirkpatrick, visit our website, kirkpatrickpartners.com, where you’ll find a wealth of free resources, as well as information on our certificate and certification programs.

Want to learn more about this topic? View EvaluATE’s webinar ATE Evaluation: Measuring Reaction, Learning, Behavior, and Results.

 

Blog: The 1:3:25 Format for More Reader-Friendly Evaluation Reports

Posted on September 17, 2019 in Blog

Senior Research Associate, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m part of the EvaluATE team. I also lead evaluations as part of my work at Western Michigan University’s Evaluation Center, so I have written my fair share of evaluation reports over the years. I want to share a resource I’ve found to be game-changing for report writing: the Canadian Health Services Research Foundation’s 1:3:25 reader-friendly report format. Even though I don’t follow the format exactly, what I’ve taken away from the model has significantly improved the quality of my evaluation reports.

The 1:3:25 format for report writing consists of a one-page summary of main messages, a three-page executive summary, and a 25-page report body. Here’s a brief summary of each component:

1 Page for Main Messages: The main-messages page should contain an easy-to-scan bulleted list of information people can use to make decisions based on what was learned from the evaluation. This is not a summary of findings, but rather a compilation of key conclusions and recommendations that have implications for decision making. Think of the main-messages page as the go-to piece of the report for answering questions about what’s next.

3-Page Executive Summary: The purpose of the three-page executive summary is to provide an overview of the evaluation and help busy readers decide if your report will be useful to them. The executive summary should read more like a news article than an academic abstract. Information readers find most interesting should go first (i.e., conclusions and findings) and the less interesting information should go at the end (i.e., methods and background).

25-Page Report Body: The 25-page report body should contain information on the background of the project and its evaluation, and the evaluation methods, findings, conclusions, and recommendations. The order in which these sections are presented should correspond with the audience’s level of interest and familiarity with the project. Information that doesn’t fit in the 25-page report body can be placed in the appendices. Details that are critical for understanding the report should go in the report body; information that’s not critical for understanding the report should go in the appendices.

What I’ve found to be game-changing is having a specified page count to shoot for. With this information, I’ve gone from knowing my reports needed to be shorter to actually writing shorter reports. While I don’t always keep the report body to 25 pages, the practice of trying to keep it as close to 25 pages as possible has helped me shorten the length of my reports. At first, I was worried the shorter length would compromise the quality of the reports. Now, I feel as if I can have the best of both worlds: a report that is both reader friendly and transparent. The difference is that, now, many of the additional details are located in the appendices.

For more details, check out the Canadian Health Services Research Foundation’s guide on the 1:3:25 format.

Keywords: 1:3:25, reporting, evaluation report, evaluation reporting

Blog: What Grant Writers Need to Know About Evaluation

Posted on September 4, 2019 in Blog

District Director of Grants and Educational Services, Coast Community College District

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Fellow grant writers: Do you ever stop and ask yourselves, “Why do we write grants?” Do you actually enjoy herding cats, pulling teeth, and the inevitable stress of a looming proposal deadline? I hope not. Then what is the driver? We shouldn’t write a grant just to get funded or to earn prestige for our colleges. Those benefits may be motivators, but we should write to get funding and support to positively impact our students, faculty, and the institutions involved. And we should be able to evaluate those results in useful and meaningful ways so that we can identify how to improve and demonstrate the project’s value.

Evaluation isn’t just about satisfying a promise or meeting a requirement to gather and report data. It’s about gathering meaningful data that can be utilized to determine the effectiveness of an activity and the impact of a project. When developing a grant proposal, one often starts with the goals, then thinks of the objectives, and then plans the activities, hoping that in the end, the evaluation data will prove that the goals were met and the project was a success. That requires a lot of “hope.”

I find it more promising to begin with the end in mind from an evaluation perspective: What is the positive change that we hope to achieve and how will it be evidenced? What does success mean? How can we tell if we have been successful? When will we know? And how can we get participants to provide the information we will need for the evaluation?

The role of a grant writer is too often like that of a quilt maker, delegating sections of the proposal’s development to different members of the institution, with the evaluation section often outsourced to a third-party evaluator. Each party submits their content, then the grant writer scrambles to patch it all together.

Instead of quilt making, the process should be more like the construction of a tapestry. Instead of chunks of material stitched together in independent sections, each thread is carefully woven in a thoughtful way to create a larger, more cohesive overall design. It is important that the entire proposal development team works together to fully understand each aspect of the proposal. In this way, they can collaboratively develop a coherent plan to obtain the desired outcomes. The project work plan, budget, and evaluation components should not be designed or executed independently—they occur simultaneously and are dependent upon each other. Thus, they should tie together in a thoughtful manner.

I encourage you to think like an evaluator as you develop your proposals. Prepare yourself and challenge your team to be able to justify the value of each goal, objective, and activity and be able to explain how that value will be measured. If at all possible, involve your external or internal evaluator early on in proposal development. The better the evaluator understands your overall concept and activities, the better they can tailor the evaluation plan to derive the desired results. A strong work plan and evaluation plan will help proposal reviewers connect the dots and see the potential of your proposal. These elements will also serve as road maps to success for your project implementation team.

 

For questions or further information please reach out to the author, Lara Smith.

Blog: 5 Tips for Evaluating Multisite Projects*

Posted on August 21, 2019 in Blog

Senior Research Manager, Social & Economic Sciences Research Center at Washington State University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting evaluations for multisite projects can present unique challenges and opportunities. For example, evaluators must be careful to ensure that consistent data are captured across sites, which can be challenging. However, having results for multiple sites can lead to stronger conclusions about an intervention’s impact. The following are helpful tips for evaluating multisite projects.

1. Investigate the consistency of project implementation. Just because the same guidelines have been provided to each site does not mean that they have been implemented the same way! Variations in implementation can create difficulties in collecting the data and interpreting the evaluation results.

2. Standardize data collection tools across sites. This will minimize confusion and result in a single dataset with information on all sites (see the sketch after this list). On the downside, this may mean limiting the data to a subset of information that is available across all sites.

3. Help the project managers at each site understand the evaluation plan. Provide a clear, comprehensive overview of the evaluation plan that includes the expectations of the managers. Simplify their roles as much as possible.

4. Be sensitive in reporting side-by-side results of the sites. Consult with project stakeholders to determine if it is appropriate or helpful to include side-by-side comparisons of the performance of the various sites.

5. Analyze to what extent differences in outcomes are due to variations in project implementation. Variation in results across sites may provide clues to factors that may facilitate or impede the achievement of certain outcomes.

6. Report the evaluation results back to the site managers in whatever form would be the most useful to them. This is an excellent opportunity to recruit the site managers as supporters of evaluation, especially if they see that the evaluation results can be used to aid their participant recruitment and fundraising efforts.
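
Below is a minimal sketch of the standardization step in tip 2, assuming each site exports a CSV with a shared set of core columns; the file names and column names are hypothetical.

```python
import pandas as pd

# Hypothetical per-site exports; file names and column names are illustrative.
site_files = {"site_a": "site_a.csv", "site_b": "site_b.csv"}
core_columns = ["participant_id", "pre_score", "post_score"]

frames = []
for site, path in site_files.items():
    df = pd.read_csv(path)
    df = df[core_columns]   # keep only the fields all sites collect
    df["site"] = site       # tag each record with its site
    frames.append(df)

# One dataset covering all sites, ready for cross-site analysis.
combined = pd.concat(frames, ignore_index=True)
print(combined.groupby("site")[["pre_score", "post_score"]].mean())
```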

 

* This blog is a reprint of a conference handout from an EvaluATE workshop at the 2011 ATE PI Conference.

 

FOR MORE INFORMATION

Smith-Moncrieffe, D. (2009, October). Planning multi-site evaluations of model and promising programs. Paper presented at the Canadian Evaluation Society Conference, Ontario, CA.

Lawrenz, F., & Huffman, D. (2003). How can multi-site evaluations be participatory? American Journal of Evaluation, 24(4), 471–482.

Blog: SWOT Analysis: What Is It? How Can It Be Useful?

Posted on August 6, 2019 in Blog

Doctoral Candidate, University of North Carolina at Greensboro

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! My name is Cherie Avent, and I am a graduate student at the University of North Carolina at Greensboro. As a member of an external evaluation team, I recently helped facilitate a SWOT analysis for program managers of a National Science Foundation project to aid them in understanding their strengths, areas of improvement, and potential issues impacting the overall success of the project. In this blog, I will share what a SWOT analysis is, how it can benefit evaluations, and how to conduct one.

What is a SWOT Analysis?

The acronym “SWOT” stands for strengths, weaknesses, opportunities, and threats. A SWOT analysis examines the current performance and the potential future of a program or project. Strengths and weaknesses are controllable factors internal to a program, while opportunities and threats are uncontrollable external factors potentially impacting the circumstances of the project (Chermack & Kasshanna, 2007). More specifically, a SWOT analysis is used to achieve more effective decision making, assessing how strengths can be utilized for new opportunities and how weaknesses can hinder programmatic progress or highlight threats (Helms & Nixon, 2010). The goal is to take advantage of strengths, address weaknesses, maximize opportunities, and limit the impact of threats (Chermack & Kasshanna, 2007).

How can a SWOT be useful?

As evaluators, we can facilitate SWOT analyses with program managers to assist them in 1) understanding which current project actions are working well or need improving, 2) identifying opportunities to leverage, 3) limiting areas of challenge, and 4) refining decision making for the overall success of the program. Many of the projects we serve involve various objectives and actions for achieving the overarching program goal. A SWOT analysis therefore provides an opportunity for program managers to assess why specific strategies or plans work and others do not.

How does one conduct a SWOT analysis?

There are multiple ways to conduct a SWOT analysis. Here are a few steps we found useful (Chermack & Kasshanna, 2007):

  1. Define the objective of the SWOT analysis with participants. What do program managers or participants want to gain by conducting the SWOT analysis?
  2. Provide an explanation of SWOT analysis procedures to participants.
  3. Using the two-by-two matrix below, ask each participant to consider and write strengths, weaknesses, opportunities, and threats of the project. Included are questions they may think about for each area.

[Image: SWOT analysis two-by-two matrix with prompt questions for each quadrant]

  4. Combine the individual worksheets into a single chart or spreadsheet (a minimal aggregation sketch follows this list). You can use a Google document or a large wall chart so everyone can participate.
  5. Engage participants in a dialogue about their responses for each category, discussing why they chose those responses and how they see the descriptions impacting the project. Differing perspectives will likely emerge. Ask participants how weaknesses can become strengths and how opportunities can become threats.
  6. Lastly, develop an action plan for moving forward. It should consist of concrete and achievable steps program managers can take concerning the programmatic goals.
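
As one way to handle step 4, here is a minimal sketch that combines individual worksheets into a single tally; the worksheet entries are illustrative, not from an actual SWOT session.

```python
from collections import defaultdict

# Hypothetical worksheet entries from two participants (all items illustrative).
worksheets = [
    {"strengths": ["experienced staff"], "weaknesses": ["limited funding"],
     "opportunities": ["new industry partner"], "threats": ["enrollment decline"]},
    {"strengths": ["experienced staff", "strong curriculum"], "weaknesses": [],
     "opportunities": ["new industry partner"], "threats": []},
]

# Tally how many participants mentioned each item, per SWOT category.
combined = defaultdict(lambda: defaultdict(int))
for sheet in worksheets:
    for category, items in sheet.items():
        for item in items:
            combined[category][item] += 1

for category in ("strengths", "weaknesses", "opportunities", "threats"):
    print(category.upper())
    for item, count in sorted(combined[category].items(), key=lambda kv: -kv[1]):
        print(f"  {item} (mentioned by {count})")
```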

 

References:

Chermack, T. J., & Kasshanna, B. K. (2007). The use and misuse of SWOT analysis and implications for HRD professionals. Human Resource Development International, 10(4), 383–399. doi:10.1080/13678860701718760

Helms, M. M., & Nixon, J. (2010). Exploring SWOT analysis—where are we now? A review of academic research from the last decade. Journal of Strategy and Management, 3(3), 215–251. doi:10.1108/17554251011064837

Keywords: evaluators, programmatic performance, SWOT analysis

Blog: 11 Important Things to Know About Evaluating Curriculum Development Projects*

Posted on July 24, 2019 in Blog

Professor of Instructional Technology, Bloomsburg University of Pennsylvania

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Curriculum development projects are designed to create new content or present content to students in a new format with new activities or approaches. The following are important things to know about evaluating curriculum development projects.

1. Understand the underlying model, pedagogy, and process used to develop the curriculum. There are several curriculum development models, including the DACUM model (Developing a Curriculum), the Backward Design Method, and the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model of instructional design. Whatever approach is used, make sure you understand its methodology and underlying philosophy so that these can help guide the evaluation.

2. Establish a baseline. If possible, establish what student performance was before the curriculum was available, to assess the level of change or increased learning created as a result of the new curriculum. This could involve data on student grades or performance from the year before the new curriculum is introduced, or data on job performance or another indicator.

3. Clearly identify the outcomes expected of the curriculum. What should students know or be able to do when they have completed the curriculum? Take the time to understand the desired outcomes and how the curriculum content, activities, and approach support those outcomes. The outcomes should be directly linked to the project goals and objectives. Look for possible disconnects or gaps.

4. Employ a pre/post test design. One method to establish that learning has occurred is to measure student knowledge of a subject before and after the curriculum is introduced (a minimal analysis sketch follows this list). If you are comparing two curricula, you may want to consider using one group as a control group that would not use the new curriculum and comparing the performance of the two groups in a pre/post test design.

5. Employ content analysis techniques. Content analysis is the process of analyzing documents (student guides, instructor guides, online content, videos, and other materials) to determine the type of content, the frequency of content, internal coherence (consistency among the different elements of the curriculum), and external coherence (whether the interpretation in the curriculum fits the theories accepted within and outside the discipline).

6. Participate in the activities. One effective method for helping evaluators understand the impact of activities and exercises is to participate in them. This helps determine the quality of the instructions, the level of engagement, and the learning outcomes that result from the activities.

7. Ensure assessment items match instructional objectives. Assessment of student progress is typically measured through written tests. To ensure written tests assess the student’s grasp of the course objectives and curriculum, match the assessment items to the instructional objectives. Create a chart mapping objectives to assessment items to ensure all the objectives are assessed and all assessment items are pertinent to the curriculum.

8. Review the guidance and instruction provided to teachers/facilitators in guides. Determine if the materials are properly matched across the instructor guide, student manual, slides, and in-class activities. Determine if the instructions are clear and complete and the activities are feasible.

9. Interview students, faculty, and, possibly, workforce representatives. Faculty can provide insights into the usefulness and effectiveness of the materials, and students can provide input on level of engagement, learning effort, and overall impression of the curriculum. If the curriculum is tied to a technician profession, involve industry representatives in reviewing and examining the curriculum. This should be done as part of the development process, but if it is not, consider having a representative review the curriculum for alignment with industry expectations.

10. Use Kirkpatrick’s four levels of evaluation. A highly effective model for evaluating curriculum is the Kirkpatrick Model. The levels in the model measure initial learner reactions, knowledge gained from the instruction, behavioral changes that might result from the instruction, and overall impact on the organization, field, or students.

11. Pilot the instruction. Conduct pilot sessions as part of the formative evaluation to ensure that the instruction functions as designed. After the pilot, collect end-of-day reaction sheets/tools and trainer observations of learners. Having an end-of-program product—such as an action-planning tool to implement changes around curriculum focus issue(s)—is also useful.
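
As a minimal sketch of the pre/post design in item 4, assuming scores from the same students before and after the new curriculum (the data are illustrative), a paired t-test can indicate whether the change is statistically significant:

```python
from scipy import stats

# Hypothetical pre/post scores for the same eight students (illustrative data).
pre  = [62, 55, 70, 48, 66, 59, 73, 51]
post = [71, 60, 78, 55, 72, 65, 80, 58]

# Paired t-test: is the average change after the new curriculum significant?
t_stat, p_value = stats.ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"Mean gain: {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```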

RESOURCES

For a detailed discussion of content analysis, see chapter 9 of Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Boston: Pearson.

DACUM Job Analysis Process: https://s3.amazonaws.com/static.nicic.gov/Library/010699.pdf

Backward Design Method: https://educationaltechnology.net/wp-content/uploads/2016/01/backward-design.pdf

ADDIE Model: http://www.nwlink.com/~donclark/history_isd/addie.html

Kirkpatrick Model: http://www.nwlink.com/~donclark/hrd/isd/kirkpatrick.html

 

* This blog is a reprint of a conference handout from an EvaluATE workshop at the 2011 Advanced Technological Education PI Conference.