Archive: evaluation

Blog: Bending Our Evaluation and Research Studies to Reflect COVID-19

Posted on September 30, 2020 by  in Blog ()

CEO and President, CSEdResearch.org

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting education research and evaluation during the season of COVID-19 may make you feel like you are the lone violinist playing tunes on the deck of a sinking ship. You desperately want to continue your research, which is important and meaningful to you and to others. You know your research contributes to important advances in the large mural of academic achievement among student learners. Yet reality has derailed many of your careful plans.

 If you are able to continue your research and evaluation in some capacity, attempting to shift in a meaningful way can be confusing. And if you are able to continue collecting data, understanding how COVID-19 affects your data presents another layer of challenges.

In a recent discussion with other K–12 computer science evaluators and researchers, I learned that some were rapidly developing scales to better understand how COVID-19 has impacted academic achievement. In their generous spirit of sharing, these collaborators have shared scales and items they are using, including two complete surveys, here:

  • COVID-19 Impact Survey from Panorama Education. This survey considers the many ways (e.g., well-being, internet access, engagement, student support) in which the shift to distance, hybrid, or in-person learning during this pandemic may be impacting students, families, and teachers/staff.
  • Parent Survey from Evaluation by Design. This survey is designed to measure environment, school support, computer availability and learning, and other concerns from the perspective of parents.

These surveys are designed to measure critical aspects within schools that are being impacted by COVID-19. They can provide us with information needed to better understand potential changes in our data over the next few years.

One of the models I’ve been using lately is the CAPE Framework for Assessing Equity in Computer Science Education, recently developed by Carol Fletcher and Jayce Warner at the University of Texas at Austin. This framework measures capacity, access, participation, and experiences (CAPE) in K–12 computer science education.

Figure 1. Image from https://www.tacc.utexas.edu/epic/research. Used with permission. From Fletcher, C.L. and Warner, J. R., (2019). Summary of the CAPE Framework for Assessing Equity in Computer Science Education.

 

Although this framework was developed for use in “good times,” we can use it to assess current conditions by asking how COVID-19 has impacted each of the critical components of CAPE needed to bring high-quality computer science learning experiences to underserved students. For example, if computer science is classified as an elective course at a high school, and all electives are cut for the 2020–21 academic year, this will have a significant impact on access for those students.

The jury is still out on how COVID-19 will impact students this year, particularly minoritized and low-socio-economic-status students, and how its lingering effects will change education. In the meantime, if you’ve created measures to understand COVID-19’s impact, consider sharing those with others. It may not be as meaningful as sending a raft to a violinist on a sinking ship, but it may make someone else’s research goals a bit more attainable.

(NOTE: If you’d also like your instruments/scales related to COVID-19 shared in our resource center, please feel free to email them to me.)

Blog: Quick Reference Guides Evaluators Can’t Live Without

Posted on August 5, 2020 by  in Blog ()

Senior Research Associate, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

* This blog was originally published on AEA365 on May 15, 2020:
https://aea365.org/blog/quick-reference-guides-evaluators-cant-live-without-by-kelly-robertson/

My name is Kelly Robertson, and I work at The Evaluation Center at Western Michigan University and EvaluATE, the National Science Foundation–funded evaluation hub for Advanced Technological Education.

I’m a huge fan of quick reference guides. Quick reference guides are brief summaries of important content that can be used to improve practice in real time. They’re also commonly referred to as job aids or cheat sheets.

I found quick reference guides to be especially helpful when I was just learning about evaluation. For example, Thomas Guskey’s Five Critical Levels of Professional Development Evaluation helped me learn about different levels of outcomes (e.g., reaction, learning, organizational support, application of skills, and target population outcomes).

Even with 10-plus years of experience, I still turn to quick reference guides every now and then. Here are a few of my personal favorites:

My colleague Lyssa Becho is also a huge fan of quick reference guides, and together we compiled a list of over 50 evaluation-related quick reference guides. The list draws on the results from a survey we conducted as part of our work at EvaluATE. It includes quick reference guides that 45 survey respondents rated as most useful for each stage of the evaluation process.

Here are some popular quick reference guides from the list:

  • Evaluation Planning: Patton’s Evaluation Flash Cards introduce core evaluation concepts such as evaluation questions, standards, and reporting in an easily accessible format.
  • Evaluation Design: Wingate’s Evaluation Data Matrix Template helps evaluators organize information about evaluation indicators, data collection sources, analysis, and interpretation.
  • Data Collection: Wingate and Schroeter’s Evaluation Questions Checklist for Program Evaluation provides criteria to help evaluators understand what constitutes high-quality evaluation questions.
  • Data Analysis: Hutchinson’s You’re Invited to a Data Party! explains how to engage stakeholders in collective data analysis.
  • Evaluation Reporting: Evergreen and Emery’s Data Visualization Checklist is a guide for the development of high-impact data visualizations. Topics covered include text, arrangement, color, and lines.

If you find that any helpful evaluation-related quick reference guides are missing from the full collection, please contact kelly.robertson@wmich.edu.

Blog: Three Ways to Boost Network Reporting

Posted on April 29, 2020 by  in Blog ()

Assistant Director, Collin College’s National Convergence Technology Center

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The National Convergence Technology Center (CTC), a national ATE center focusing on IT infrastructure technology, manages a community called the Convergence College Network (CCN). The CCN consists of 76 community colleges and four-year universities across 26 states. Faculty and administrators from the CCN meet regularly to share resources, trade know-how, and discuss common challenges.

Because so much of the CTC’s work is directed toward supporting the CCN, we ask the member colleges to submit a “CCN Yearly Report” evaluation each February. The data from that report informs the reporting we deliver to the NSF, to our National Visiting Committee, and to the annual ATE survey. Each of those three groups needs slightly different information, so we’ve worked hard to include everything in a single evaluation tool.

We’re always trying to improve that “CCN Yearly Report” by refining the questions we ask, removing the questions we don’t need, and making any other adjustments that could improve the response rate. We want to make it easy on the respondents. Our efforts seem to be working: we received 37 reports from the 76 CCN member colleges this past February, a 49% response rate.

 We attribute this success to three strategies.  

  1. Prepare them in advance. We start talking about the February “CCN Yearly Report” due date in the summer. The CCN community gets multiple email reminders, and we often mention the report deadline at our quarterly meetings. We don’t want anyone to say they didn’t know about the report or its deadline. Part of this ongoing preparation also involves making sure everyone in the network understands the importance of the data we’re seeking. We emphasize that we need their help to accurately report grant impact to the NSF.
  2. Share the results. If we go to such lengths to make sure everyone understands the importance of the report up front, it makes sense to do the same after the results are in. We try to deliver a short overview of the results at our July quarterly meeting. Doing so underscores the importance of the survey. Beyond that, research tells us that one key to nurturing a successful community of practice like the CCN is to provide positive feedback about the value of the group (Milton, 2017). By sharing highlights of the report, we remind CCN members that they are part of a thriving, successful group of educators.
  3. Reward participation. Grant money is a great carrot. Because the CTC so often provides partial travel reimbursement to faculty from CCN member colleges so they can attend conferences and professional development events, we can incentivize the submission of yearly reports. Colleges that want the maximum membership benefits, which include larger travel caps, must deliver a report. Half of the 37 reports we received last year were from colleges seeking those maximum benefits.

 We’re sure there are other grants with similar communities of organizations and institutions. We hope some of these strategies can help you get the data you need from your communities. 

 

References:  

 Milton, N. (2017, January 16). Why communities of practice succeed, and why they fail [Blog post].

Blog: Backtracking Alumni: Using Institutional Research and Reflective Inquiry to Improve Organizational Learning

Posted on April 2, 2020 by , in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Faye R. Jones, Senior Research Associate, Florida State
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State

In a recent blog post, we shared practical tips for developing an alumni tracking program to assess students’ employment outcomes. Alumni tracking is an effective tool for assessing the quality of educational programs and helping determine whether programs have the intended impact.

In this post, we share the Backtracking technique, an advanced approach that supplements alumni tracking data with students’ institutionally archived records. Backtracking assumes that institutions and programs already gather student outcomes information (e.g., employment, salary, and advanced educational data) from alumni on a periodic basis (e.g., annually or every three years).

The technique uses institutional research (IR) archives to match students’ employment outcomes to academic and demographic variables (e.g., academic GPA, courses taken, grades, major, additional certifications, internships, gender, race/ethnicity). By pairing student outcomes data with academic and demographic variables, we can contextualize student pathways and explore the whole pathway, not just a moment in time.

Figure 1 shows an example of the Backtracking technique for two-year Associate of Arts (AA) and Associate of Science (AS) programs.

Figure 1. Backtracking Technique for AA/AS Programs 

Figure 1 illustrates three data collection layers. Layer 1, Institutional Research College Data, provides student completion data, academic history, and contact information. Advanced and transfer-degree data are also available through the National Student Clearinghouse, which can reveal the major that a former student (or graduate) entered after completing the AA/AS degree. Layer 2, Alumni Transfer Employment Data, includes student employment and advanced-degree information self-reported in alumni surveys.

Layer 3, Pathway Explanatory Data, embeds a qualitative component within the Backtracking technique in order to let alumni explain their undergraduate experiences. This layer helps us understand what happened during and after college. Most importantly, it lets us identify the critical junctures that students faced and the facilitators and hindrances that allowed students to overcome setbacks (or that caused them) during these difficult periods.

To provide alumni with the best opportunities to share their experiences, we use IR archives to formulate questions based on key facts about students’ experiences. For example, if IR records show that a student transferred from college A to university B, we may ask the student about that specific experience. For a student who failed Calculus 1 once but passed it on the second try, we may ask what allowed that success.

Although individual student pathways are useful, we can also stratify these data by race and gender (or other factors) and then aggregate them to better understand student groups. We demonstrate how we aggregate the pathways in this short video.
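
To make the pairing and stratifying steps concrete, here is a minimal sketch of how Layer 1 (IR records) and Layer 2 (alumni survey) data might be joined and aggregated. The column names, example values, and use of pandas are our own assumptions for illustration, not the project's actual data model or pipeline.

```python
import pandas as pd

# Layer 1: hypothetical institutional research (IR) records, one row per completer
ir_records = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "gpa": [3.4, 2.8, 3.9, 3.1],
    "major": ["IT", "IT", "Networking", "IT"],
    "gender": ["F", "M", "F", "M"],
    "race_ethnicity": ["Black", "Hispanic", "White", "Black"],
})

# Layer 2: hypothetical self-reported alumni survey responses
alumni_survey = pd.DataFrame({
    "student_id": [101, 102, 104],
    "employed_in_field": [True, False, True],
    "salary": [52000, 38000, 47000],
})

# Pair outcomes with academic/demographic variables via the shared student ID
# (inner join keeps only alumni who responded to the survey)
pathways = ir_records.merge(alumni_survey, on="student_id", how="inner")

# Stratify by race/ethnicity and gender, then aggregate an employment outcome
summary = (
    pathways.groupby(["race_ethnicity", "gender"])["employed_in_field"]
    .mean()
    .rename("share_employed_in_field")
)
print(summary)
```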

The Backtracking technique requires skilled personnel with technical knowledge in IR and in data collection and analysis, or an Academic IR (who possesses both IR and research skills). Investing in such skill and knowledge is worthwhile:

    • Institutional research is powerful when used for formative and internal improvement and for the generation of new knowledge.
    • Findings about former students generated with the Backtracking technique can provide useful information to improve program and institutional services (e.g., advising, formal practices, informal learning opportunities).
    • Looking back at what worked or failed for past students can inform current practices and serve as a source of institutional learning.

References: 

Jones, F. R., & Mardis, M. A. (2019, May 15). Alumni tracking: The ultimate source for evaluating completer outcomes [Blog post]. Retrieved from https://www.evalu-ate.org/blog/jones2-may19/

Blog: Contracting for Evaluator Services

Posted on November 13, 2019 by  in Blog ()

CREATE Energy Center Principal investigator, Madison Area Technical College

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Contracting for Evaluator Services

You are excited to be working on a new grant proposal. You have a well-defined project objective, a solid plan to address the challenge at hand, a well-assembled team to execute the project, and a means for measuring your project’s outcomes. The missing ingredient is an evaluation plan for your project, and that means that you will need to retain the services of an evaluator!

New principal investigators often have limited prior experience with project evaluation, and identifying and contracting with an evaluator can be a question mark for many. Fortunately, there are resources to help and recommended practices to make these processes easier.

The first tip is to explore the grant agency requirements and your institution’s procurement policies regarding evaluation services. Federal agencies such as the National Science Foundation (NSF) may accept a general evaluation plan written by the principal investigator, with agreement that an evaluator will be named later, or they may require the use of an external evaluator who is named in the proposal. Federal requirements can differ even within a single agency and may change from one year to the next. So it is important to be certain of the current program requirements.

Additionally, some institutions may require that project evaluation be conducted by independent third parties not affiliated with the college. Furthermore, depending on the size of the proposed project, and the scope of the evaluation plan, many colleges may have procurement policies that require a competitive request for quotes or bids for evaluator contracts. There may also be requirements that a request for bids must be publicly posted, and there may be rules dictating the minimum number of bids that must be received. Adhering to your school’s procurement policy may take several months to complete, so it is highly advisable to begin the search for an evaluator as early as possible.

The American Evaluation Association has a helpful website that includes a Find an Evaluator page, which can be used to search for evaluators by location. AEA members can also post a request for evaluator services to solicit bids. The EvaluATE website lists information specific to the NSF Advanced Technological Education (ATE) program and maintains a List of Current ATE Evaluators that may serve as a good starting point for identifying prospective evaluators.

When soliciting bids, it is advisable to create a detailed request that provides a summary of the project and a description of the services you are seeking, and that specifies the information you would like applicants to provide. At a minimum, you will want to request a copy of the evaluator’s CV and biosketch, and a description of their prior evaluation work.

If your institution requires you to entertain multiple bids, it is a good idea to develop a rubric that you can use to judge the bids you receive. In most cases, you will not want to restrict yourself to accepting the lowest bid. Instead, it is in the best interest of your project to make a selection based both on the experience and qualifications of the prospective evaluator and on the perceived value of the services they can provide. In our experience, hourly rates for evaluator services can vary by as much as 400%, so receiving a sufficiently large pool of bids can help ensure that quoted rates are reasonable.

Blog: How Can You Make Sure Your Evaluation Meets the Needs of Multiple Stakeholders?*

Posted on October 31, 2019 by  in Blog ()

Executive Director, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We talk a lot about stakeholders in evaluation. These are the folks who are involved in, affected by, or simply interested in the evaluation of your project. But what these stakeholders want or need to know from the evaluation, the time they have available for the evaluation, and their level of interest are probably quite variable. The table below is a generic guide to the types of ATE evaluation stakeholders, what they might need, and how to meet those needs.

ATE Evaluation Stakeholders

Project leaders (PI, co-PIs)
What they might need: Information that will help you improve the project as it unfolds; results you can include in your annual reports to NSF to demonstrate accountability and impact.
Tips for meeting those needs: Communicate your needs clearly to your evaluator, including when you need the information in order to make use of it.

Advisory committees or National Visiting Committees
What they might need: Results from the evaluation that show whether the project is on track for meeting its goals and whether changes in direction or operations are warranted; summary information about the project’s strengths and weaknesses.
Tips for meeting those needs: Many advisory committee members donate their time, so they probably aren’t interested in reading lengthy reports. Provide a brief memo and/or short presentation with key findings at meetings, and invite questions about the evaluation. Be forthcoming about strengths and weaknesses.

Participants who provide data for the evaluation
What they might need: Access to reports in which their information was used; summaries of what actions were taken based on the information they provided.
Tips for meeting those needs: The most important thing for this group is to demonstrate use of the information they provided. You can share reports, but a personal message from project leaders along the lines of “we heard you and here is what we’re doing in response” is most valuable.

NSF program officers
What they might need: Evidence that the project is on track to meet its goals; evidence of impact (not just what was done, but what difference the work is making); evidence that the project is using evaluation results to make improvements.
Tips for meeting those needs: Focus on Intellectual Merit (the intrinsic quality of the work and potential to advance knowledge) and Broader Impacts (the tangible benefits for individuals and progress toward desired societal outcomes). If you’re not sure about what your program officer needs from your evaluation, ask for clarification.

College administrators (department chairs, deans, executives, etc.)
What they might need: Results that demonstrate impact on students, faculty, institutional culture, infrastructure, and reputation.
Tips for meeting those needs: Make full reports available upon request, but most busy administrators probably don’t have the time to read technical reports or don’t need the fine-grained data points. Prepare memos or share presentations that focus on the information they’re most interested in.

Partners and collaborators
What they might need: Information that helps them assess the return on the investment of their time or other resources.

In case you didn’t read between the lines, the underlying message here is to provide stakeholders with the information that is most relevant to their particular “stake” in your project. A good way not to meet their needs is to only send everyone a long, detailed technical report with every data point collected. It’s good to have a full report available for those who request it, but many simply won’t have the time or level of interest needed to consume that quantity of evaluative information about your project.

Most importantly, don’t take our word about what your stakeholders might need: Ask them!

Not sure what stakeholders to involve in your evaluation or how? Check out our worksheet Identifying Stakeholders and Their Roles in an Evaluation at bit.ly/id-stake.

 

*This blog is a reprint of an article from an EvaluATE newsletter published in October 2015.

Blog: What Grant Writers Need to Know About Evaluation

Posted on September 4, 2019 by  in Blog ()

District Director of Grants and Educational Services, Coast Community College District

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Fellow grant writers: Do you ever stop and ask yourselves, “Why do we write grants?” Do you actually enjoy herding cats, pulling teeth, and the inevitable stress of a looming proposal deadline? I hope not. Then what is the driver? We shouldn’t write a grant just to get funded or to earn prestige for our colleges. Those benefits may be motivators, but we should write to get funding and support to positively impact our students, faculty, and the institutions involved. And we should be able to evaluate those results in useful and meaningful ways so that we can identify how to improve and demonstrate the project’s value.

Evaluation isn’t just about satisfying a promise or meeting a requirement to gather and report data. It’s about gathering meaningful data that can be utilized to determine the effectiveness of an activity and the impact of a project. When developing a grant proposal, one often starts with the goals, then thinks of the objectives, and then plans the activities, hoping that in the end, the evaluation data will prove that the goals were met and the project was a success. That requires a lot of “hope.”

I find it more promising to begin with the end in mind from an evaluation perspective: What is the positive change that we hope to achieve and how will it be evidenced? What does success mean? How can we tell if we have been successful? When will we know? And how can we get participants to provide the information we will need for the evaluation?

The role of a grant writer is too often like that of a quilt maker, delegating sections of the proposal’s development to different members of the institution, with the evaluation section often outsourced to a third-party evaluator. Each party submits their content, then the grant writer scrambles to patch it all together.

Instead of quilt making, the process should be more like the construction of a tapestry. Instead of chunks of material stitched together in independent sections, each thread is carefully woven in a thoughtful way to create a larger, more cohesive overall design. It is important that the entire proposal development team works together to fully understand each aspect of the proposal. In this way, they can collaboratively develop a coherent plan to obtain the desired outcomes. The project work plan, budget, and evaluation components should not be designed or executed independently—they occur simultaneously and are dependent upon each other. Thus, they should tie together in a thoughtful manner.

I encourage you to think like an evaluator as you develop your proposals. Prepare yourself and challenge your team to be able to justify the value of each goal, objective, and activity and be able to explain how that value will be measured. If at all possible, involve your external or internal evaluator early on in proposal development. The better the evaluator understands your overall concept and activities, the better they can tailor the evaluation plan to derive the desired results. A strong work plan and evaluation plan will help proposal reviewers connect the dots and see the potential of your proposal. These elements will also serve as road maps to success for your project implementation team.

 

For questions or further information please reach out to the author, Lara Smith.

Blog: 11 Important Things to Know About Evaluating Curriculum Development Projects*

Posted on July 24, 2019 by  in Blog ()

Professor of Instructional Technology, Bloomsburg University of Pennsylvania

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Curriculum development projects are designed to create new content or present content to students in a new format with new activities or approaches. The following are important things to know about evaluating curriculum development projects.

1.     Understand the underlying model, pedagogy, and process used to develop the curriculum. There are several curriculum development models, including the DACUM model (Developing a Curriculum), the Backward Design Method, and the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model of instructional design. Whatever approach is used, make sure you understand its methodology and underlying philosophy so that these can help guide the evaluation.

2.     Establish a baseline. If possible, establish what student performance was before the curriculum was available, to assess the level of change or increased learning created as a result of the new curriculum. This could involve data on student grades or performance from the year before the new curriculum is introduced or data on job performance or another indicator.

3.     Clearly identify the outcomes expected of the curriculum. What should students know or be able to do when they have completed the curriculum? Take the time to understand the desired outcomes and how the curriculum content, activities, and approach support those outcomes. The outcomes should be directly linked to the project goals and objectives. Look for possible disconnects or gaps.

4.     Employ a pre/post test design. One method to establish that learning has occurred is to measure student knowledge of a subject before and after the curriculum is introduced. If you are comparing two curricula, you may want to consider using one group as a control group that would not use the new curriculum and comparing the performance of the two groups in a pre/post test design. (A sketch of how such data might be analyzed appears after this list.)

5.     Employ content analysis techniques. Content analysis is the process of analyzing documents (student guides, instructor guides, online content, videos, and other materials) to determine the type and frequency of content, internal coherence (consistency among different elements of the curriculum), and external coherence (whether interpretations in the curriculum fit the theories accepted within and outside the discipline).

6.     Participate in the activities. One effective method for helping evaluators understand the impact of activities and exercises is to participate in them. This helps determine the quality of the instructions, the level of engagement, and the learning outcomes that result from the activities.

7.     Ensure assessment items match instructional objectives. Assessment of student progress is typically measured through written tests. To ensure written tests assess the student’s grasp of the course objectives and curriculum, match the assessment items to the instructional objectives. Create a chart to match objectives to assessment items to ensure all the objectives are assessed and that all assessment items are pertinent to the curriculum.

8.     Review guidance and instruction provided to teachers/facilitators in guides. Determine if the materials are properly matched across the instructor guide, student manual, slides, and in-class activities. Determine if the instructions are clear and complete and that the activities are feasible.

9.     Interview students, faculty, and, possibly, workforce representatives. Faculty can provide insights into the usefulness and effectiveness of the materials, and students can provide input on level of engagement, learning effort, and overall impression of the curriculum. If the curriculum is tied to a technician profession, involve industry representatives in reviewing and examining the curriculum. This should be done as part of the development process, but if it is not, consider having a representative review the curriculum for alignment with industry expectations.

10.  Use Kirkpatrick’s four levels of evaluation. A highly effective model for evaluation of curriculum is called the Kirkpatrick Model. The levels in the model measure initial learner reactions, knowledge gained from the instruction, behavioral changes that might result from the instruction, and overall impact on the organization, field, or students.

11.  Pilot the instruction. Conduct pilot sessions as part of the formative evaluation to ensure that the instruction functions as designed. After the pilot, collect end-of-day reaction sheets/tools and trainer observations of learners. Having an end-of-program product—such as an action-planning tool to implement changes around curriculum focus issue(s)—is also useful.
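
As a rough illustration of the pre/post and control-group comparison described in item 4, the following sketch uses SciPy to test pre-to-post gains within the new-curriculum group and to compare gains between the two groups. The scores and group names are hypothetical, and the statistical choices (paired and independent t-tests) are one reasonable option under simple assumptions, not a prescribed analysis.

```python
from scipy import stats

# Hypothetical pre/post scores for students using the new curriculum
new_pre = [55, 60, 48, 70, 62, 58]
new_post = [72, 78, 65, 85, 80, 74]

# Hypothetical pre/post scores for a comparison (control) group
ctrl_pre = [57, 61, 50, 68, 60, 59]
ctrl_post = [60, 66, 55, 71, 64, 63]

# Did the new-curriculum group improve from pre to post? (paired t-test)
t_paired, p_paired = stats.ttest_rel(new_post, new_pre)
print(f"New curriculum pre/post: t = {t_paired:.2f}, p = {p_paired:.3f}")

# Were gains larger with the new curriculum than in the control group?
new_gains = [post - pre for pre, post in zip(new_pre, new_post)]
ctrl_gains = [post - pre for pre, post in zip(ctrl_pre, ctrl_post)]
t_ind, p_ind = stats.ttest_ind(new_gains, ctrl_gains, equal_var=False)
print(f"Gain comparison: t = {t_ind:.2f}, p = {p_ind:.3f}")
```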

RESOURCES

For detailed discussion of content analysis, see chapter 9 of Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Boston: Pearson.

DACUM Job Analysis Process: https://s3.amazonaws.com/static.nicic.gov/Library/010699.pdf

Backward Design Method: https://educationaltechnology.net/wp-content/uploads/2016/01/backward-design.pdf

ADDIE Model: http://www.nwlink.com/~donclark/history_isd/addie.html

Kirkpatrick Model: http://www.nwlink.com/~donclark/hrd/isd/kirkpatrick.html

 

* This blog is a reprint of a conference handout from an EvaluATE workshop at the 2011 Advanced Technological Education PI Conference.

Blog: An Evaluative Approach to Proposal Development*

Posted on June 27, 2019 by  in Blog - ()

Executive Director, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A student came into my office to ask me a question. Soon after she launched into her query, I stopped her and said I wasn’t the right person to help because she was asking about a statistical method that I wasn’t up-to-date on. She said, “Oh, you’re a qualitative person?” And I answered, “Not really.” She left looking puzzled. The exchange left me pondering the vexing question, “What am I?” (Now imagine these words echoing off my office walls in a spooky voice for a couple of minutes.) After a few uncomfortable moments, I proudly concluded, “I am a critical thinker!”  

Yes, evaluators are trained specialists with an arsenal of tools, strategies, and approaches for data collection, analysis, and reporting. But critical thinking—evaluative thinking—is really what drives good evaluation. In fact, the very definition of critical thinking—“the mental process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and evaluating information to reach an answer or conclusion”2—describes the evaluation process to a T. Applying your critical, evaluative thinking skills in developing your funding proposal will go a long way toward ensuring your submission is competitive.

Make sure all the pieces of your proposal fit together like a snug puzzle. Your proposal needs both a clear statement of the need for your project and a description of the intended outcomes—make sure these match up. If you struggle with the outcome measurement aspect of your evaluation plan, go back to the rationale for your project. If you can observe a need or problem in your context, you should be able to observe the improvements as well.

Be logical. Develop a logic model to portray how your project will translate its resources into outcomes that address a need in your context. Sometimes simply putting things in a graphic format can reveal shortcomings in a project’s logical foundation (like when important outcomes can’t be tracked back to planned activities). The narrative description of your project’s goals, objectives, deliverables, and activities should match the logic model.

Be skeptical. Project planning and logic model development typically happen from an optimistic point of view. (“If we build it, they will come.”) When creating your work plan, step back from time to time and ask yourself and your colleagues, What obstacles might we face? What could really mess things up? Where are the opportunities for failure? And perhaps most important, ask, Is this really the best solution to the need we’re trying to address? Identify your plan’s weaknesses and build in safeguards against those threats. I’m all for an optimistic outlook, but proposal reviewers won’t be wearing rose-colored glasses when they critique your proposal and compare it with others written by smart people with great ideas, just like you. Be your own worst critic and your proposal will be stronger for it.

Evaluative thinking doesn’t replace specialized training in evaluation. But even the best evaluator and most rigorous evaluation plan cannot compensate for a disheveled, poorly crafted project plan. Give your proposal a competitive edge by applying your critical thinking skills and infusing an evaluative perspective throughout your project description.

* This blog is a reprint of an article from an EvaluATE newsletter published in summer 2015.

2 dictionary.com

Blog: Alumni Tracking: The Ultimate Source for Evaluating Completer Outcomes

Posted on May 15, 2019 by , in Blog ()
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Faye R. Jones, Senior Research Associate, Florida State
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State

When examining student programs, evaluators can use many student outcomes (e.g., enrollments, completions, and completion rates) as appropriate measures of success. However, to properly assess whether programs and interventions are having their intended impact, evaluators should consider performance metrics that capture data on individuals after they have completed degree programs or certifications, also known as “completer” outcomes.

For example, if a program’s goal is to increase the number of graduating STEM majors, then whether students can get STEM jobs after completing the program is very important to know. Similarly, if the purpose of offering high school students professional CTE certifications is to help them get jobs after graduation, it’s important to know if this indeed happened. Completer outcomes allow evaluators to assess whether interventions are having their intended effect, such as increasing the number of minorities entering academia or attracting more women to STEM professions. Programs aren’t just effective when participants have successfully entered and completed them; they are effective when graduates have a broad impact on society.

Tracking of completer outcomes is typical, as many college and university leaders are held accountable for student performance while students are enrolled and after students graduate. Educational policymakers are asking leaders to look beyond completion to outcomes that represent actual success and impact. As a result, alumni tracking has become an important tool in determining the success of interventions and programs. Unfortunately, while the solution sounds simple, the implementation is not.

Tracking alumni (i.e., past program completers) can be an enormous undertaking, and many institutions do not have a dedicated person to do the job. Alumni also move, switch jobs, and change their names. Some experience survey fatigue after several survey requests. The following are practical tips from an article we co-authored that explains how we tracked alumni data for a five-year project aimed at recruiting, retaining, and employing computing and technology majors (Jones, Mardis, McClure, & Randeree, 2017):

    • Recommend to principal investigators (PIs) that they extend outcome evaluations to include completer outcomes in an effort to capture graduation and alumni data, and downstream program impact.
    • Baseline alumni tracking details should be obtained prior to student completion, but not captured again until six months to one year after graduation, to provide ample transition time for the graduate.
    • Programs with a systematic plan for capturing outcomes are likely to have higher alumni response rates.
    • Surveys are a great tool for obtaining alumni tracking information, while social media (e.g., LinkedIn) can be used to stay in contact with students for survey and interview requests. Suggest that PIs implement a social media strategy while students are still participating in the program, so that contact only needs to be maintained after completion.
    • Data points might include student employment status, advanced educational opportunities (e.g., graduate school enrollment), position title, geographic location, and salary. For richer data, we recommend adding a qualitative component to the survey (or selecting a sample of alumni to participate in interviews). (A minimal sketch of such a record appears after this list.)
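
To illustrate the kind of record these data points imply, here is a minimal sketch of one alumni-tracking entry written as a Python dataclass. The field names and example values are hypothetical, intended only to show how the suggested data points, plus an open-ended qualitative note, might be captured consistently across alumni.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlumniRecord:
    """One alumni-tracking entry, collected 6-12 months after completion (hypothetical schema)."""
    student_id: str
    employment_status: str                    # e.g., "employed in field", "seeking", "continuing education"
    position_title: Optional[str] = None
    geographic_location: Optional[str] = None
    salary: Optional[int] = None
    graduate_school_enrolled: bool = False
    qualitative_note: Optional[str] = None    # open-ended survey response or interview summary

# Example entry
record = AlumniRecord(
    student_id="2017-0458",
    employment_status="employed in field",
    position_title="Network Support Technician",
    geographic_location="Tallahassee, FL",
    salary=46000,
    graduate_school_enrolled=False,
    qualitative_note="Internship in final semester led directly to the current position.",
)
print(record)
```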

The article also includes a sample questionnaire in the reference section.

A comprehensive review of completer outcomes requires that evaluators examine both the alumni tracking procedures and analysis of the resulting data.

Once evaluators have helped PIs implement a sound alumni tracking strategy, institutions should advance to alumni backtracking! We will provide more information on that topic in a future post.

* This work was partially funded by NSF ATE 1304382. For more details, go to https://technicianpathways.cci.fsu.edu/

References:

Jones, F. R., Mardis, M. A., McClure, C. M., & Randeree, E. (2017). Alumni tracking: Promising practices for collecting, analyzing, and reporting employment data. Journal of Higher Education Management, 32(1), 167–185. https://mardis.cci.fsu.edu/01.RefereedJournalArticles/1.9jonesmardisetal.pdf