Blog: The 1:3:25 Format for More Reader-Friendly Evaluation Reports

Posted on September 17, 2019 in Blog

Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m part of the EvaluATE team. I also lead evaluations as part of my work at Western Michigan University’s Evaluation Center, so I have written my fair share of evaluation reports over the years. I want to share a resource I’ve found to be game-changing for report writing: the Canadian Health Services Research Foundation’s 1:3:25 reader-friendly report format. Even though I don’t follow the format exactly, what I’ve taken away from the model has significantly improved the quality of my evaluation reports.

The 1:3:25 format for report writing consists of a one-page summary of main messages, a three-page executive summary, and a 25-page report body. Here’s a brief summary of each component:

1 Page for Main Messages: The main-messages page should contain an easy-to-scan bulleted list of information people can use to make decisions based on what was learned from the evaluation. This is not a summary of findings, but rather a compilation of key conclusions and recommendations that have implications for decision making. Think of the main-messages page as the go-to piece of the report for answering questions about what’s next.

3-Page Executive Summary: The purpose of the three-page executive summary is to provide an overview of the evaluation and help busy readers decide if your report will be useful to them. The executive summary should read more like a news article than an academic abstract. Information readers find most interesting should go first (i.e., conclusions and findings) and the less interesting information should go at the end (i.e., methods and background).

25-Page Report Body: The 25-page report body should contain information on the background of the project and its evaluation, along with the evaluation methods, findings, conclusions, and recommendations. The order in which these sections are presented should correspond with the audience’s level of interest in and familiarity with the project. Details that are critical for understanding the report belong in the body; information that doesn’t fit in the 25 pages or isn’t critical for understanding can be placed in the appendices.

What I’ve found to be game-changing is having a specified page count to shoot for. With this information, I’ve gone from knowing my reports needed to be shorter to actually writing shorter reports. While I don’t always keep the report body to 25 pages, the practice of trying to keep it as close to 25 pages as possible has helped me shorten the length of my reports. At first, I was worried the shorter length would compromise the quality of the reports. Now, I feel as if I can have the best of both worlds: a report that is both reader friendly and transparent. The difference is that, now, many of the additional details are located in the appendices.

For more details, check out the Canadian Health Services Research Foundation’s guide on the 1:3:25 format.

Keywords: 1:3:25, reporting, evaluation report, evaluation reporting

Blog: What Grant Writers Need to Know About Evaluation

Posted on September 4, 2019 in Blog

District Director of Grants and Educational Services, Coast Community College District

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Fellow grant writers: Do you ever stop and ask yourselves, “Why do we write grants?” Do you actually enjoy herding cats, pulling teeth, and the inevitable stress of a looming proposal deadline? I hope not. Then what is the driver? We shouldn’t write a grant just to get funded or to earn prestige for our colleges. Those benefits may be motivators, but we should write to get funding and support to positively impact our students, faculty, and the institutions involved. And we should be able to evaluate those results in useful and meaningful ways so that we can identify how to improve and demonstrate the project’s value.

Evaluation isn’t just about satisfying a promise or meeting a requirement to gather and report data. It’s about gathering meaningful data that can be utilized to determine the effectiveness of an activity and the impact of a project. When developing a grant proposal, one often starts with the goals, then thinks of the objectives, and then plans the activities, hoping that in the end, the evaluation data will prove that the goals were met and the project was a success. That requires a lot of “hope.”

I find it more promising to begin with the end in mind from an evaluation perspective: What is the positive change that we hope to achieve and how will it be evidenced? What does success mean? How can we tell if we have been successful? When will we know? And how can we get participants to provide the information we will need for the evaluation?

The role of a grant writer is too often like that of a quilt maker, delegating sections of the proposal’s development to different members of the institution, with the evaluation section often outsourced to a third-party evaluator. Each party submits their content, then the grant writer scrambles to patch it all together.

Instead of quilt making, the process should be more like the construction of a tapestry. Instead of chunks of material stitched together in independent sections, each thread is carefully woven in a thoughtful way to create a larger, more cohesive overall design. It is important that the entire proposal development team works together to fully understand each aspect of the proposal. In this way, they can collaboratively develop a coherent plan to obtain the desired outcomes. The project work plan, budget, and evaluation components should not be designed or executed independently; they occur simultaneously and are dependent upon each other. Thus, they should tie together in a thoughtful manner.

I encourage you to think like an evaluator as you develop your proposals. Prepare yourself and challenge your team to be able to justify the value of each goal, objective, and activity and to explain how that value will be measured. If at all possible, involve your external or internal evaluator early on in proposal development. The better the evaluator understands your overall concept and activities, the better they can tailor the evaluation plan to capture the desired results. A strong work plan and evaluation plan will help proposal reviewers connect the dots and see the potential of your proposal. These elements will also serve as road maps to success for your project implementation team.

 

For questions or further information, please reach out to the author, Lara Smith.

Blog: 6 Tips for Evaluating Multisite Projects*

Posted on August 21, 2019 in Blog

Senior Research Manager, Social & Economic Sciences Research Center at Washington State University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting evaluations for multisite projects can present unique challenges and opportunities. For example, evaluators must be careful to ensure that consistent data are captured across sites, which can be challenging. However, having results for multiple sites can lead to stronger conclusions about an intervention’s impact. The following are helpful tips for evaluating multisite projects.

 1.      Investigate the consistency of project implementation. Just because the same guidelines have been provided to each site does not mean that they have been implemented the same way! Variations in implementation can create difficulties in collecting the data and interpreting the evaluation results.

2.      Standardize data collection tools across sites. This will minimize confusion and result in a single dataset with information on all sites (see the sketch after this list). On the downside, it may mean limiting the data to the subset of information that is available across all sites.

3.      Help the project managers at each site understand the evaluation plan. Provide a clear, comprehensive overview of the evaluation plan that includes the expectations of the managers. Simplify their roles as much as possible.

4.      Be sensitive in reporting side-by-side results of the sites. Consult with project stakeholders to determine if it is appropriate or helpful to include side-by-side comparisons of the performance of the various sites.

5.      Analyze to what extent differences in outcomes are due to variations in project implementation. Variation in results across sites may provide clues to factors that facilitate or impede the achievement of certain outcomes.

6.      Report the evaluation results back to the site managers in whatever form would be the most useful to them. This is an excellent opportunity to recruit the site managers as supporters of evaluation, especially if they see that the evaluation results can be used to aid their participant recruitment and fundraising efforts.
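For evaluators who assemble the combined dataset themselves, here is a minimal sketch (not part of the original handout) of what tip 2 can look like in practice, assuming Python with pandas and hypothetical file and column names. It stacks standardized per-site files into one dataset, tags each record with its site, and keeps only the columns every site shares.

import pandas as pd

# Hypothetical per-site files; assumes every site used the same standardized instrument,
# so the files largely share column names.
site_files = {
    "site_a": "site_a_responses.csv",
    "site_b": "site_b_responses.csv",
    "site_c": "site_c_responses.csv",
}

frames = []
for site, path in site_files.items():
    df = pd.read_csv(path)
    df["site"] = site  # tag each record with its site for later side-by-side analysis
    frames.append(df)

# Keep only the columns available at every site (the trade-off noted in tip 2),
# plus the site tag, then stack everything into a single dataset.
shared_cols = sorted(set.intersection(*(set(f.columns) for f in frames)) - {"site"})
combined = pd.concat([f[shared_cols + ["site"]] for f in frames], ignore_index=True)
combined.to_csv("all_sites_combined.csv", index=False)

The same idea works in a statistical package or a spreadsheet; the point is that a shared template across sites makes a single combined dataset possible.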

 

* This blog is a reprint of a conference handout from an EvaluATE workshop at the 2011 ATE PI Conference.

 

FOR MORE INFORMATION

Smith-Moncrieffe, D. (2009, October). Planning multi-site evaluations of model and promising programs. Paper presented at the Canadian Evaluation Society Conference, Ontario, Canada.

Lawrenz, F., & Huffman, D. (2003). How can multi-site evaluations be participatory? American Journal of Evaluation, 24(4), 471–482.

Blog: SWOT Analysis: What Is It? How Can It Be Useful?

Posted on August 6, 2019 in Blog

Doctoral Candidate, University of North Carolina at Greensboro

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! My name is Cherie Avent, and I am a graduate student at the University of North Carolina at Greensboro. As a member of an external evaluation team, I recently helped facilitate a SWOT analysis for program managers of a National Science Foundation project to aid them in understanding their strengths, areas of improvement, and potential issues impacting the overall success of the project. In this blog, I will share what a SWOT analysis is, how it can benefit evaluations, and how to conduct one.

What is a SWOT Analysis?

The acronym “SWOT” stands for strengths, weaknesses, opportunities, and threats. A SWOT analysis examines the current performance and the potential future of a program or project. Strengths and weaknesses are controllable factors internal to a program, while opportunities and threats are uncontrollable external factors potentially impacting the circumstances of the project (Chermack & Kasshanna, 2007). More specifically, a SWOT analysis is used to achieve more effective decision making, assessing how strengths can be utilized for new opportunities and how weaknesses can hinder programmatic progress or highlight threats (Helms & Nixon, 2010). The goal is to take advantage of strengths, address weaknesses, maximize opportunities, and limit the impact of threats (Chermack & Kasshanna, 2007).

How can a SWOT analysis be useful?

As evaluators, we can facilitate SWOT analyses with program managers to assist them in (1) understanding which current project actions are working well or need improvement, (2) identifying opportunities to leverage, (3) limiting areas of challenge, and (4) refining decision making for the overall success of the program. Many of the projects we serve involve various objectives and actions for achieving the overarching program goal. Therefore, a SWOT analysis provides an opportunity for program managers to assess why specific strategies or plans work and others do not.

How does one conduct a SWOT analysis?

There are multiple ways to conduct a SWOT analysis. Here are a few steps we found useful (Chermack & Kasshanna, 2007):

  1. Define the objective of the SWOT analysis with participants. What do program managers or participants want to gain by conducting the SWOT analysis?
  2. Provide an explanation of SWOT analysis procedures to participants.
  3. Using the two-by-two matrix below, ask each participant to consider and write strengths, weaknesses, opportunities, and threats of the project. Included are questions they may think about for each area.

[SWOT analysis matrix: a two-by-two grid with quadrants for strengths, weaknesses, opportunities, and threats, each with prompt questions for participants to consider.]

  4. Combine the individual worksheets into a single chart or spreadsheet (see the sketch after this list). You can use a Google document or a large wall chart so everyone can participate.
  5. Engage participants in a dialogue about their responses for each category, discussing why they chose those responses and how they see the descriptions impacting the project. Differing perspectives will likely emerge. Ask participants how weaknesses can become strengths and how opportunities can become threats.
  6. Lastly, develop an action plan for moving forward. It should consist of concrete and achievable steps program managers can take concerning the programmatic goals.
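If a programmatic record is easier to work with than a wall chart, here is a minimal sketch of step 4 in Python; the participants and entries are made up for illustration. It combines individual worksheets into one chart per SWOT category while preserving who contributed each item, which feeds the discussion in step 5.

from collections import defaultdict

# Hypothetical individual worksheets; participant labels and entries are made up.
worksheets = [
    {"participant": "PI", "strengths": ["experienced instructors"],
     "weaknesses": ["low enrollment"], "opportunities": ["new industry partner"],
     "threats": ["end of grant funding"]},
    {"participant": "Co-PI", "strengths": ["strong advisory board"],
     "weaknesses": ["low enrollment"], "opportunities": ["dual-enrollment pathway"],
     "threats": ["competing programs"]},
]

# Combine all entries into a single chart: one list per SWOT category,
# tracking who contributed each item so differing perspectives can be discussed.
combined = defaultdict(list)
for ws in worksheets:
    for category in ("strengths", "weaknesses", "opportunities", "threats"):
        for item in ws[category]:
            combined[category].append((ws["participant"], item))

for category, items in combined.items():
    print(category.upper())
    for contributor, item in items:
        print(f"  - {item} ({contributor})")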

 

References:

Chermack, T. J., & Kasshanna, B. K. (2007). The use and misuse of SWOT analysis and implications for HRD professionals. Human Resource Development International, 10(4), 383–399. doi:10.1080/13678860701718760

Helms, M. M., & Nixon, J. (2010). Exploring SWOT analysis—where are we now? A review of academic research from the last decade. Journal of Strategy and Management, 3(3), 215–251. doi:10.1108/17554251011064837

Keywords: evaluators, programmatic performance, SWOT analysis

Blog: 11 Important Things to Know About Evaluating Curriculum Development Projects*

Posted on July 24, 2019 in Blog

Professor of Instructional Technology, Bloomsburg University of Pennsylvania

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Curriculum development projects are designed to create new content or present content to students in a new format with new activities or approaches. The following are important things to know about evaluating curriculum development projects.

1.     Understand the underlying model, pedagogy, and process used to develop the curriculum. There are several curriculum development models, including the DACUM model (Developing a Curriculum), the Backward Design Method, and the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model of instructional design. Whatever approach is used, make sure you understand its methodology and underlying philosophy so that these can help guide the evaluation.

2.     Establish a baseline. If possible, determine what student performance was before the curriculum was available, so you can assess the change or increased learning that results from the new curriculum. This could involve student grades or other performance data from the year before the new curriculum was introduced, or data on job performance or another relevant indicator.

3.     Clearly identify the outcomes expected of the curriculum. What should students know or be able to do when they have completed the curriculum? Take the time to understand the desired outcomes and how the curriculum content, activities, and approach support those outcomes. The outcomes should be directly linked to the project goals and objectives. Look for possible disconnects or gaps.

4.     Employ a pre/post test design. One method to establish that learning has occurred is to measure student knowledge of a subject before and after the curriculum is introduced. If you are comparing two curricula, consider using one group as a control that does not use the new curriculum and comparing the performance of the two groups in a pre/post test design.

5.     Employ content analysis techniques. Content analysis is the process of analyzing documents (student guides, instructor guides, online content, videos, and other materials) to determine the type and frequency of content, internal coherence (consistency among the different elements of the curriculum), and external coherence (whether interpretations in the curriculum fit the theories accepted inside and outside the discipline).

6.     Participate in the activities. One effective method for helping evaluators understand the impact of activities and exercises is to participate in them. This helps determine the quality of the instructions, the level of engagement, and the learning outcomes that result from the activities.

7.     Ensure assessment items match instructional objectives. Student progress is typically measured through written tests. To ensure written tests assess students’ grasp of the course objectives and curriculum, match the assessment items to the instructional objectives. Create a chart that maps objectives to assessment items to confirm that every objective is assessed and that every assessment item is pertinent to the curriculum (see the sketch after this list).

8.     Review guidance and instruction provided to teachers/facilitators in guides. Determine whether the materials are properly matched across the instructor guide, student manual, slides, and in-class activities, and whether the instructions are clear and complete and the activities feasible.

9.     Interview students, faculty, and, possibly, workforce representatives. Faculty can provide insights into the usefulness and effectiveness of the materials, and students can provide input on level of engagement, learning effort, and overall impression of the curriculum. If the curriculum is tied to a technician profession, involve industry representatives in reviewing and examining the curriculum. This should be done as part of the development process, but if it is not, consider having a representative review the curriculum for alignment with industry expectations.

10.  Use Kirkpatrick’s four levels of evaluation. A highly effective model for evaluation of curriculum is called the Kirkpatrick Model. The levels in the model measure initial learner reactions, knowledge gained from the instruction, behavioral changes that might result from the instruction, and overall impact on the organization, field, or students.

11.  Pilot the instruction. Conduct pilot sessions as part of the formative evaluation to ensure that the instruction functions as designed. After the pilot, collect end-of-day reaction sheets/tools and trainer observations of learners. Having an end-of-program product—such as an action-planning tool to implement changes around curriculum focus issue(s)—is also useful.
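As a rough illustration of the objectives-to-assessment chart described in tip 7 (not part of the original handout), here is a minimal Python sketch with hypothetical objective and item identifiers. It flags objectives that no item assesses and items that map to no objective.

# Hypothetical objectives and test items; identifiers are made up for illustration.
objectives = ["OBJ-1", "OBJ-2", "OBJ-3", "OBJ-4"]

# Which objective(s) each assessment item is intended to measure.
item_map = {
    "Item 1": ["OBJ-1"],
    "Item 2": ["OBJ-2", "OBJ-3"],
    "Item 3": ["OBJ-3"],
    "Item 4": [],  # an item mapped to no objective is a red flag
}

# Build the objectives-by-items coverage chart.
coverage = {obj: [item for item, objs in item_map.items() if obj in objs] for obj in objectives}

unassessed = [obj for obj, items in coverage.items() if not items]
orphan_items = [item for item, objs in item_map.items() if not objs]

for obj, items in coverage.items():
    print(f"{obj}: {', '.join(items) if items else 'NOT ASSESSED'}")
print("Objectives with no assessment items:", unassessed)
print("Assessment items tied to no objective:", orphan_items)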

RESOURCES

For detailed discussion of content analysis, see chapter 9 of Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Boston: Pearson.

DACUM Job Analysis Process: https://s3.amazonaws.com/static.nicic.gov/Library/010699.pdf

Backward Design Method: https://educationaltechnology.net/wp-content/uploads/2016/01/backward-design.pdf

ADDIE Model: http://www.nwlink.com/~donclark/history_isd/addie.html

Kirkpatrick Model: http://www.nwlink.com/~donclark/hrd/isd/kirkpatrick.html

 

* This blog is a reprint of a conference handout from an EvaluATE workshop at the 2011 Advanced Technological Education PI Conference.

Blog: Completing a National Science Foundation Freedom of Information Act Request

Posted on July 15, 2019 in Blog

Principal Consultant, The Rucks Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


You have probably heard of FOIA (Freedom of Information Act) requests, most likely in the context of journalism. Journalists often submit FOIA requests to obtain information that is not otherwise publicly available but is key to an investigative reporting project.

There may be times when you, as an evaluator, are evaluating or researching a topic and your work could be enhanced with information that requires submitting a FOIA request. For instance, while working as EvaluATE’s external evaluator, The Rucks Group needed to complete a FOIA request to learn how evaluation plans in ATE proposals have changed over time, and we were interested in documenting how EvaluATE may have influenced those changes. Toward that goal, we set out to review a random sample of ATE proposals funded between 2004 and 2017. However, despite much effort over an 18-month period, we still needed to obtain nearly three dozen proposals. We had to get these proposals via a FOIA request primarily because the projects were older and we were unable to reach either the principal investigators or the appropriate person at the institution. So we submitted a FOIA request to the National Science Foundation (NSF) for the outstanding proposals.

For me, this was a new and, at first, mentally daunting task. Now, having gone through the process, I realize there was no need to be nervous, because completing a FOIA request is actually quite simple. These are the elements that one needs to provide:

  1. Nature of request: We provided a detailed description of the proposals we needed and what we needed from each proposal. We also provided the rationale for the request, but I do not believe a rationale is required.
  2. Delivery method: Identify the method through which you prefer to receive the materials. We chose to receive digital copies via a secure digital system.
  3. Budget: Completing the task could require special fees, so you will need to indicate how much you are willing to pay for the request. Receiving paper copies through the US Postal Service can be more costly than receiving digital copies.

It may take a while for the FOIA request to be filled. We submitted the request in fall 2018 and received the materials in spring 2019. The delay may have been due in part to the 35-day government shutdown and a possibly lengthy process for Principal Investigator approval.

The NSF FOIA office was great to work with, and we appreciated staffers’ communications with us to keep us updated.

Because access is granted only for a particular time, pay attention to when you are notified via email that the materials have been released to you. In other words, do not let this notice sit in your inbox.

One caveat: When you submit the FOIA request, you may be encouraged to acquire the materials through other means; for example, submitting a FOIA request to colleges or state agencies may be an option for you.

While FOIA requests should be made judiciously, they are useful tools that, under the right circumstances, could enhance your evaluation efforts. They take time, but thanks to the law backing the public’s right to know, your FOIA requests will be honored.

To learn more, visit https://www.nsf.gov/policies/foia.jsp

Keywords: FOIA request, freedom of information act

Blog: An Evaluative Approach to Proposal Development*

Posted on June 27, 2019 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A student came into my office to ask me a question. Soon after she launched into her query, I stopped her and said I wasn’t the right person to help because she was asking about a statistical method that I wasn’t up-to-date on. She said, “Oh, you’re a qualitative person?” And I answered, “Not really.” She left looking puzzled. The exchange left me pondering the vexing question, “What am I?” (Now imagine these words echoing off my office walls in a spooky voice for a couple of minutes.) After a few uncomfortable moments, I proudly concluded, “I am a critical thinker!”  

Yes, evaluators are trained specialists with an arsenal of tools, strategies, and approaches for data collection, analysis, and reporting. But critical thinking—evaluative thinking—is really what drives good evaluation. In fact, the very definition of critical thinking—“the mental process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and evaluating information to reach an answer or conclusion”2—describes the evaluation process to a T. Applying your critical, evaluative thinking skills in developing your funding proposal will go a long way toward ensuring your submission is competitive.

Make sure all the pieces of your proposal fit together like a snug puzzle. Your proposal needs both a clear statement of the need for your project and a description of the intended outcomes—make sure these match up. If you struggle with the outcome measurement aspect of your evaluation plan, go back to the rationale for your project. If you can observe a need or problem in your context, you should be able to observe the improvements as well.

Be logical. Develop a logic model to portray how your project will translate its resources into outcomes that address a need in your context. Sometimes simply putting things in a graphic format can reveal shortcomings in a project’s logical foundation (like when important outcomes can’t be tracked back to planned activities). The narrative description of your project’s goals, objectives, deliverables, and activities should match the logic model.

Be skeptical. Project planning and logic model development typically happen from an optimistic point of view. (“If we build it, they will come.”) When creating your work plan, step back from time to time and ask yourself and your colleagues, What obstacles might we face? What could really mess things up? Where are the opportunities for failure? And perhaps most important, ask, Is this really the best solution to the need we’re trying to address? Identify your plan’s weaknesses and build in safeguards against those threats. I’m all for an optimistic outlook, but proposal reviewers won’t be wearing rose-colored glasses when they critique your proposal and compare it with others written by smart people with great ideas, just like you. Be your own worst critic and your proposal will be stronger for it.

Evaluative thinking doesn’t replace specialized training in evaluation. But even the best evaluator and most rigorous evaluation plan cannot compensate for a disheveled, poorly crafted project plan. Give your proposal a competitive edge by applying your critical thinking skills and infusing an evaluative perspective throughout your project description.

* This blog is a reprint of an article from an EvaluATE newsletter published in summer 2015.

2 dictionary.com

Blog: LinkedIn for Alumni Tracking

Posted on June 13, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Benjamin Reid, President of Impact Allies
Kevin Cooper, PI of RCNET and Dean of Advanced Technology at IRSC

Post-program outcomes for students are key indicators of success and primary metrics for measuring medium- and long-term outcomes and impacts. EvaluATE’s 2019 revised Advanced Technological Education (ATE) Annual Survey states, “ATE program stakeholders would like to know more about post-program outcomes for students.” It lists the types of data sought:

    • Job placement
    • Salary
    • Employer satisfaction
    • Pursuit of additional STEM education
    • Acquisition of industry certifications or licenses

The survey also asks for the sources used to collect this data, giving the following choices:

    • Institutional research office
    • Survey of former students
    • Local economic data
    • Personal outreach to former students
    • State longitudinal data systems
    • Other (describe)

This blog introduces an “Other” data source: LinkedIn Alumni Tool (LAT).

LAT is data rich and free, yet underutilized. Each alum’s professional information is readily available (i.e., no permissions process for the researcher) and personally updated. The information is also remarkably accurate, because open visibility and network effects help ensure honesty. These factors make LAT a great tool for quick health checks and an alternative to contacting each person and requesting this same information.

Even better, LinkedIn is a single tool that is useful for evaluators, principal investigators, instructors, and students. For example, a couple of years ago, Kevin, Principal Investigator for the Regional Center for Nuclear Education and Training (RCNET), and I (RCNET’s evaluator) realized that our respective work was leading us to use the same tool, LinkedIn, and that we should co-develop our strategies for connecting and communicating with students and alumni on this medium. Kevin uses it to help RCNET’s partner colleges communicate opportunities (jobs, internships, scholarships, continued education) and develop soft skills (professional presentation, networking, awareness of industry news). I use it to glean information about students’ educational and professional experiences leading up to and during their programs and to track their paths and outcomes after graduation. LinkedIn is also a user-centric tool for students that, rather than ceasing to be useful after graduation, actually becomes more useful.

When I conducted a longitudinal study of RCNET’s graduates across the country over the preceding eight years, I used LinkedIn for two purposes: triangulation and connecting with alumni via another channel, because after college many students change their email addresses and telephone numbers. More than 30 percent of the alumni who responded were reached via LinkedIn, as their contact information on file with the colleges had since changed.

Using LAT, I viewed their current and former employers, job positions, promotions, locations, skills, and further education, and found only negligible differences between what alumni reported in the survey and interviews and what appeared on their LinkedIn profiles. That is, three of the five post-program outcomes of interest to ATE program stakeholders (plus a lot more) can be seen for many alumni via LinkedIn.

Visit https://university.linkedin.com/higher-ed-professionals for short videos about how to use the LinkedIn Alumni Tool and many others. Many of the videos take an institutional perspective, but here is a tip on how to pinpoint program-specific students and alumni. Find your college’s page, click Alumni, and type your program’s name in the search bar. This will filter the results only to the people in your program. It’s that simple.

 

Blog: Grant Evaluation: What Every PI Should Know and Do*

Posted on June 3, 2019 in Blog

Luka Partners LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A number of years ago, the typical Advanced Technological Education (ATE) Principal Investigator (PI) deemed evaluation a necessary evil. As a PI, I recall struggling even to find an evaluator who appeared to have reasonable credentials. I viewed evaluation as something you had to have in a proposal to get funded.

Having transitioned from the PI role to being an evaluator myself, I now appreciate how evaluation can add value to a project. I also know a lot more about how to find an evaluator and negotiate the terms of the evaluation contract.

Today, PIs typically identify evaluators through networking and sometimes use evaluator directories, such as the one maintained by EvaluATE at ATE Central. You can call colleagues and ask them to identify someone they trust and can recommend with confidence. If you don’t know anyone yet, start your networking by contacting an ATE center PI using the map at atecentral.net. Do this at least three months before the proposal submission date (i.e., now). When you approach an evaluator, ask for a résumé, references, and a work sample or two. Review their qualifications to be sure the proposal’s reviewers will perceive them as a credentialed evaluator.

Next, here is an important question many PIs ask: “Once you have identified the evaluator, can you expect them to write the evaluation section of your proposal for free?” The answer is (usually) yes. Just remember: Naming an individual in your proposal and engaging that person in proposal development reflects your commitment to enter into a contract with them if your proposal is funded. (An important caveat: Many community colleges’ procurement rules require a competition or bid process for evaluation services. That may affect your ability to commit to the evaluator should the proposal be funded. Have a frank discussion about this.)

Although there is a limit to what evaluators can or should do for free at the proposal stage, you should expect more than a boilerplate evaluation plan (provided you’ve allowed enough time for a thoughtful one). You want someone who will look at your goals and objectives and describe, in 1 to 1.25 pages, the approach for this project’s evaluation. This will serve you better than modifying their “standard language” yourself, if they offer it. Once the proposal is funded, their first deliverable will be the complete evaluation plan; you generally won’t need that level of detail at the proposal stage.

Now that you have a handshake agreement with your selected evaluator, make it clear you need the draft evaluation section by a certain deadline — say, a month before the proposal due date. You do not have to discuss detailed contractual terms prior to the proposal being funded, but you do have to establish the evaluation budget and the evaluator’s daily rate, for your budget and budget justification. Establishing this rate requires a frank discussion about fees.

Communication in this process is key. Check out EvaluATE’s webinar, “Getting Everyone on the Same Page,” for practical strategies for evaluator-stakeholder communication.

Once your proposal has been funded, you get to hammer out a real statement of work with your evaluator and set up a contract for the project. Then the real work begins.

*This blog is a reprint of an article from an EvaluATE newsletter published in summer 2012.

Keywords: evaluators, find evaluator, proposal, evaluation, evaluation proposal

Blog: Alumni Tracking: The Ultimate Source for Evaluating Completer Outcomes

Posted on May 15, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Faye R. Jones, Senior Research Associate, Florida State
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State

When examining student programs, evaluators can use many student outcomes (e.g., enrollments, completions, and completion rates) as appropriate measures of success. However, to properly assess whether programs and interventions are having their intended impact, evaluators should consider performance metrics that capture data on individuals after they have completed degree programs or certifications, also known as “completer” outcomes.

For example, if a program’s goal is to increase the number of graduating STEM majors, then whether students can get STEM jobs after completing the program is very important to know. Similarly, if the purpose of offering high school students professional CTE certifications is to help them get jobs after graduation, it’s important to know if this indeed happened. Completer outcomes allow evaluators to assess whether interventions are having their intended effect, such as increasing the number of minorities entering academia or attracting more women to STEM professions. Programs aren’t just effective when participants have successfully entered and completed them; they are effective when graduates have a broad impact on society.

Tracking of completer outcomes is typical, as many college and university leaders are held accountable for student performance while students are enrolled and after students graduate. Educational policymakers are asking leaders to look beyond completion to outcomes that represent actual success and impact. As a result, alumni tracking has become an important tool in determining the success of interventions and programs. Unfortunately, while the solution sounds simple, the implementation is not.

Tracking alumni (i.e., past program completers) can be an enormous undertaking, and many institutions do not have a dedicated person to do the job. Alumni also move, switch jobs, and change their names. Some experience survey fatigue after several survey requests. The following are practical tips from an article we co-authored explaining how we tracked alumni data for a five-year project that aimed to recruit, retain, and employ computing and technology majors (Jones, Mardis, McClure, & Randeree, 2017):

    • Recommend to principal investigators (PIs) that they extend outcome evaluations to include completer outcomes in an effort to capture graduation and alumni data, and downstream program impact.
    • Baseline alumni tracking details should be obtained prior to student completion, but not captured again until six months to one year after graduation, to provide ample transition time for the graduate.
    • Programs with a systematic plan for capturing outcomes are likely to have higher alumni response rates.
    • Surveys are a great tool for obtaining alumni tracking information, while social media (e.g., LinkedIn) can be used to stay in contact with students for survey and interview requests. Suggest that PIs implement a social media strategy while students are participating in the program, so that the contact need only be continued after completion.
    • Data points might include student employment status, advanced educational opportunities (e.g., graduate school enrollment), position title, geographic location, and salary. For richer data, we recommend adding a qualitative component to the survey (or selecting a sample of alumni to participate in interviews).
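To make these data points concrete, here is a minimal, hypothetical sketch of an alumni tracking record in Python; the field names are illustrative only and are not taken from the authors’ questionnaire.

from dataclasses import dataclass, asdict
from typing import Optional
import csv

@dataclass
class AlumniRecord:
    # Illustrative fields covering the data points suggested above.
    student_id: str
    completion_term: str
    employment_status: Optional[str] = None   # e.g., "employed in field"
    position_title: Optional[str] = None
    employer: Optional[str] = None
    location: Optional[str] = None
    salary_range: Optional[str] = None        # ranges are often easier to collect than exact figures
    further_education: Optional[str] = None   # e.g., "enrolled in graduate program"
    notes: Optional[str] = None               # space for the qualitative component

# Example: write a couple of records to a simple tracking file.
records = [
    AlumniRecord("A-001", "Spring 2018", employment_status="employed in field",
                 position_title="network technician", location="FL"),
    AlumniRecord("A-002", "Fall 2018", further_education="enrolled in graduate program"),
]
with open("alumni_tracking.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(records[0])))
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)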

The article also includes a sample questionnaire in the reference section.

A comprehensive review of completer outcomes requires that evaluators examine both the alumni tracking procedures and analysis of the resulting data.

Once evaluators have helped PIs implement a sound alumni tracking strategy, institutions should advance to alumni backtracking! We will provide more information on that topic in a future post.

* This work was partially funded by NSF ATE 1304382. For more details, go to https://technicianpathways.cci.fsu.edu/

References:

Jones, F. R., Mardis, M. A., McClure, C. M., & Randeree, E. (2017). Alumni tracking: Promising practices for collecting, analyzing, and reporting employment data. Journal of Higher Education Management, 32(1), 167–185. https://mardis.cci.fsu.edu/01.RefereedJournalArticles/1.9jonesmardisetal.pdf