Faye R. Jones

Senior Research Associate, Florida State University

Faye R. Jones is a senior research associate at Florida State University’s College of Communication & Information. Her research interests include STEM student outcomes and the exploration of student pathways through Institutional Research (IR) platforms.


Blog: Alumni Tracking: The Ultimate Source for Evaluating Completer Outcomes

Posted on May 15, 2019 by Faye R. Jones and Marcia A. Mardis in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

When examining student programs, evaluators can use many student outcomes (e.g., enrollments, completions, and completion rates) as appropriate measures of success. However, to properly assess whether programs and interventions are having their intended impact, evaluators should consider performance metrics that capture data on individuals after they have completed degree programs or certifications, also known as “completer” outcomes.

For example, if a program’s goal is to increase the number of graduating STEM majors, then whether students can get STEM jobs after completing the program is very important to know. Similarly, if the purpose of offering high school students professional CTE certifications is to help them get jobs after graduation, it’s important to know if this indeed happened. Completer outcomes allow evaluators to assess whether interventions are having their intended effect, such as increasing the number of minorities entering academia or attracting more women to STEM professions. Programs aren’t just effective when participants have successfully entered and completed them; they are effective when graduates have a broad impact on society.

Tracking completer outcomes is now commonplace, as many college and university leaders are held accountable for student performance both while students are enrolled and after they graduate. Educational policymakers are asking leaders to look beyond completion to outcomes that represent actual success and impact. As a result, alumni tracking has become an important tool for determining the success of interventions and programs. Unfortunately, while the solution sounds simple, the implementation is not.

Tracking alumni (i.e., past program completers) can be an enormous undertaking, and many institutions do not have a dedicated person for the job. Alumni also move, switch jobs, and change their names, and some experience survey fatigue after repeated survey requests. The following are practical tips from an article we co-authored describing how we tracked alumni data for a five-year project that aimed to recruit, retain, and employ computing and technology majors (Jones, Mardis, McClure, & Randeree, 2017):

    • Recommend that principal investigators (PIs) extend outcome evaluations to include completer outcomes, so that graduation and alumni data, as well as downstream program impact, are captured.
    • Obtain baseline alumni tracking details before students complete the program, but do not capture them again until six months to one year after graduation, to give graduates ample transition time.
    • Programs with a systematic plan for capturing outcomes are likely to have higher alumni response rates.
    • Surveys are a great tool for obtaining alumni tracking information, while social media (e.g., LinkedIn) can be used to stay in contact with students for survey and interview requests. Suggest that PIs implement a social media strategy while students are still participating in the program, so that contact need only be maintained, rather than established, after completion.
    • Data points might include student employment status, advanced educational opportunities (e.g., graduate school enrollment), position title, geographic location, and salary. For richer data, we recommend adding a qualitative component to the survey (or selecting a sample of alumni to participate in interviews); a minimal sketch of such a record follows this list.
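
To make these data points concrete, below is a minimal sketch of what a tracking record and a follow-up check might look like in Python. The AlumniRecord structure and the due_for_followup helper are illustrative assumptions, not a schema from our article; they simply combine the data points listed above with the baseline and six-month-to-one-year follow-up timing described earlier.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AlumniRecord:
    """One completer's tracking record (illustrative fields, not a prescribed schema)."""
    student_id: str
    graduation_date: date
    baseline_email: str                        # baseline contact detail, captured before completion
    linkedin_url: Optional[str] = None         # social media contact for survey/interview requests
    followup_date: Optional[date] = None       # set once the post-graduation survey is completed
    employment_status: Optional[str] = None    # e.g., "employed full-time"
    position_title: Optional[str] = None
    graduate_school: Optional[str] = None      # advanced educational opportunities, if any
    geographic_location: Optional[str] = None
    salary_usd: Optional[int] = None
    interview_notes: Optional[str] = None      # qualitative component, if sampled for interviews

def due_for_followup(record: AlumniRecord, today: date, months: int = 6) -> bool:
    """True if at least `months` months have passed since graduation
    and no follow-up survey has been recorded yet."""
    elapsed_days = (today - record.graduation_date).days
    return record.followup_date is None and elapsed_days >= months * 30
```

A simple check like due_for_followup can drive the systematic capture plan noted above, flagging graduates who are ready for their six-month or one-year survey.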

The article also includes a sample questionnaire in the reference section.

A comprehensive review of completer outcomes requires that evaluators examine both the alumni tracking procedures and analysis of the resulting data.

Once evaluators have helped PIs implement a sound alumni tracking strategy, institutions should advance to alumni backtracking! We will provide more information on that topic in a future post.

* This work was partially funded by NSF ATE 1304382. For more details, go to https://technicianpathways.cci.fsu.edu/

References:

Jones, F. R., Mardis, M. A., McClure, C. M., & Randeree, E. (2017). Alumni tracking: Promising practices for collecting, analyzing, and reporting employment data. Journal of Higher Education Management, 32(1), 167–185. https://mardis.cci.fsu.edu/01.RefereedJournalArticles/1.9jonesmardisetal.pdf

Blog: A Call to Action: Advancing Technician Education through Evidence-Based Decision-Making

Posted on May 1, 2019 by Faye R. Jones and Marcia A. Mardis in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Evaluators contribute to developing the Advanced Technological Education (ATE) community’s awareness and understanding of theories, concepts, and practices that can advance technician education at the discrete project level as well as at the ATE program level. Regardless of focus, project teams explore, develop, implement, and test interventions designed to lead to successful outcomes in line with ATE’s goals. At the program level, all ATE community members, including program officers, benefit from reviewing and compiling project outcomes to build an evidence base for better preparing the technical workforce.

Evidence-based decision-making is one way to ensure that project outcomes add up to high-quality, systematic program outcomes. As indicated in Figure 1, good decision-making depends on three domains of evidence situated within an environmental and organizational context: contextual evidence (i.e., population characteristics, needs, and values); experiential evidence (i.e., resources, including practitioner expertise); and the best available research evidence (Satterfield et al., 2009).

Figure 1. Domains that influence evidence-based decision-making (Satterfield et al., 2009)

As Figure 1 suggests, at the project level, as National Science Foundation (NSF) ATE principal investigators (PIs) work, evaluators can assist them in making project design and implementation decisions based on the best available research evidence, considering participant, environmental, and organizational dimensions. For example, researchers and evaluators work together to compile the best research evidence about specific populations (e.g., underrepresented minorities) in which interventions can thrive. Then, they establish mutually beneficial researcher-practitioner partnerships to make decisions based on their practical expertise and current experiences in the field.

At the NSF ATE program level, program officers often review and qualitatively categorize project outcomes provided by project teams, including their evaluators, as shown in Figure 2.


Figure 2. Quality of Evidence Pyramid (Paynter, 2009)

As Figure 2 suggests, aggregated project outcomes tell a story about what the ATE community has learned and still needs to know about advancing technician education. At the highest levels of the pyramid, program officers strive for strong evidence that can lead to best-practice guidelines and manuals, grounded in quantitative studies and trials and enriched by in-depth qualitative studies and clinical experience. Evaluators can meet PIs’ and program officers’ evidence needs with project-level formative and summative feedback (such as outcome and impact evaluations) and program-level data, such as outcome estimates pooled across multiple studies (i.e., meta-analyses of project outcome studies). Through these complementary sources of evidence, evaluators facilitate the sharing of the most promising interventions and best practices.
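
To illustrate the program-level synthesis described above, here is a minimal sketch of inverse-variance (fixed-effect) pooling, a standard way to combine outcome estimates from multiple studies into a single pooled estimate; the per-project numbers in the example are hypothetical, not ATE data.

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect (inverse-variance) pooled estimate across studies.

    effects:     per-study effect estimates (e.g., change in completion rate)
    std_errors:  the corresponding standard errors
    Returns (pooled_estimate, pooled_standard_error).
    """
    weights = [1.0 / se ** 2 for se in std_errors]             # w_i = 1 / SE_i^2
    total_weight = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_weight
    pooled_se = math.sqrt(1.0 / total_weight)
    return pooled, pooled_se

# Three hypothetical project evaluations: more precise studies get more weight.
estimate, se = pooled_effect([0.12, 0.08, 0.15], [0.05, 0.04, 0.06])
print(f"pooled effect = {estimate:.3f}, 95% CI = +/- {1.96 * se:.3f}")
```

Because each study is weighted by the reciprocal of its squared standard error, the most precise project evaluations contribute the most to the pooled estimate.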

In this call to action, we charge PIs and evaluators with working closely together to ensure that project outcomes are clearly identified and supported by evidence that benefits the ATE community’s knowledge base. Evaluators’ roles include guiding leaders to 1) identify new or promising strategies for making evidence-based decisions; 2) use or transform current data for making informed decisions; and, when needed, 3) document how assessment and evaluation strengthen evidence gathering and decision-making.

References:

Paynter, R. A. (2009). Evidence-based research in the applied social sciences. Reference Services Review, 37(4), 435–450. doi:10.1108/00907320911007038

Satterfield, J. M., Spring, B., Brownson, R. C., Mullen, E. J., Newhouse, R. P., Walker, B. B., & Whitlock, E. P. (2009). Toward a transdisciplinary model of evidence-based practice. The Milbank Quarterly, 87(2), 368–390.