Blog: Data Cleaning Tips in R*

Posted on July 8, 2020 by David Keyes in Blog

Founder, R for the Rest of Us

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I recently came across a set of data cleaning tips in Excel from EvaluATE, which provides support for people looking to improve their evaluation practice.


Screenshot of the Excel Data Cleaning Tips

As I looked through the tips, I realized that I could show how to do each of the five tips listed in the document in R. Many people come to R from Excel, so having a set of Excel-to-R equivalents (also see this post on a similar topic) is helpful.

The tips are not intended to be comprehensive, but they do show some common things that people do when cleaning messy data. I did a live stream recently where I took each tip listed in the document and showed its R equivalent.

As I mention at the end of the video, while you can certainly do data cleaning in Excel, switching to R enables you to make your work reproducible. Say you have some surveys that need cleaning today. You write your code and save it. Then, when you get 10 new surveys next week, you can simply rerun your code, saving you countless points and clicks in Excel.

You can watch the full video at the very bottom or go through each tip by using the videos immediately below. I hope it’s helpful in giving an overview of data cleaning in R!

Tip #1: Identify all cells that contain a specific word or (short) phrase in a column with open-ended text
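
If you prefer text to video, here is a minimal dplyr/stringr sketch of this tip. The surveys data frame, comments column, and search word are placeholders, not taken from the original video:

    # Keep only the rows whose open-ended comments mention a given word.
    library(dplyr)
    library(stringr)

    flagged <- surveys %>%
      filter(str_detect(str_to_lower(comments), "barrier"))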

Tip #2: Identify and remove duplicate data
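
A quick sketch of finding and dropping duplicates, again assuming a hypothetical surveys data frame:

    # How many exact duplicate rows are there?
    library(dplyr)
    sum(duplicated(surveys))

    # Keep only unique rows; distinct(surveys, id, .keep_all = TRUE)
    # would de-duplicate on an ID column instead.
    surveys_clean <- distinct(surveys)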

Tip #3: Identify the outliers within a data set
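
One common approach, sketched below for a hypothetical numeric score column, flags values outside the 1.5 x IQR fences that a boxplot uses:

    # Flag values far outside the interquartile range.
    library(dplyr)

    bounds <- quantile(surveys$score, c(0.25, 0.75), na.rm = TRUE)
    iqr    <- bounds[2] - bounds[1]

    outliers <- surveys %>%
      filter(score < bounds[1] - 1.5 * iqr | score > bounds[2] + 1.5 * iqr)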

Tip #4: Separate data from a single column into two or more columns
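
In R this is typically done with tidyr::separate(); the sketch below assumes a hypothetical name column stored as "last, first":

    # Split one column into two at the comma.
    library(tidyr)

    surveys <- separate(surveys, name,
                        into = c("last_name", "first_name"),
                        sep = ", ")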

Tip #5: Categorize data in a column, such as class assignments or subject groups
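
dplyr::case_when() handles this kind of recoding; the score column and cut points below are invented for illustration:

    # Recode a numeric score into labeled groups.
    library(dplyr)

    surveys <- surveys %>%
      mutate(score_group = case_when(
        score >= 90 ~ "high",
        score >= 70 ~ "medium",
        TRUE        ~ "low"
      ))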

Full Video

*This is a repost of David Keyes’ blog post Data Cleaning Tips in R.

Blog: What I’ve Learned about Evaluation: Lessons from the Field

Posted on June 21, 2020 in Blog

Coordinator in Educational Leadership, San Francisco State University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m completing my second year as the external evaluator of a three-year ATE project. As a first-time evaluator, I have to confess that I’ve had a lot to learn.

The first surprise was that, in spite of my best intentions, my evaluation process always seems a bit messy. A grant proposal is just that: a proposed plan. It is an idealized vision of what may come. Therefore, the evaluation plan based on that vision is also idealized. Over time, I have had to reconsider my evaluation as grant activities and circumstances evolved—what data is to be collected, how it is to be collected, or whether that data is to be collected at all.

I also thought that my evaluations would somehow reveal something startling to my project team. In reality, my evaluations have served as a mirror to them, acknowledging what they have done and mostly confirming what they already suspect to be true. In a few instances, the manner in which I’ve analyzed data has allowed the team to challenge some assumptions made along the way. In general, though, my work is less revelatory than I had expected.

Similarly, I anticipated that my role as a data analyst would be more important. However, this project was designed to use iterative continuous improvement, and so the team has met frequently to analyze and consider anecdotal data and impromptu surveys. This more immediate feedback on project activities was regularly used to guide changes. So while my planned evaluation activities and formal data analysis have been important, they have been a less significant contribution than I had expected.

Instead, I’ve added the greatest value to the team by serving as a critical colleague. Benefiting from distance from the day-to-day work, I can offer a more objective, outsider’s view of the project activities. By doing so, I’m able to help a talented, innovative, and ambitious team consider their options and determine whether or not investing in certain activities promotes the goals of the grant or moves the team tangentially. This, of course, is critical for a small grant on a small budget.

Over my short time involved in this work, I see that by being brought into the project from the beginning, and encouraged to offer guidance along the way, I’ve assessed the progress made in achieving the grant goals, and I have been able to observe and document how individuals work together effectively to achieve those goals. This insight highlights another important service evaluators can offer: to tell the stories of successful teams to their stakeholders.

As evaluators, we are accountable to our project teams and also to their funders. It is in the funders’ interest to learn how teams work effectively to achieve results. I had not expected it, but I now see that it’s in the teams’ interest for the external evaluators to understand their successful collaboration and bring it to light.

Blog: Improving the Quality of Evaluation Data from Participants

Posted on June 10, 2020 in Blog

Professor of Educational Research and Evaluation, Tennessee Tech University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


I have had experience evaluating a number of ATE projects, all of them collaborative projects among several four-year and two-year community colleges. One of the projects’ overarching goals is to provide training to college instructors as well as elementary-, middle-, and high-school teachers, to advance their skills in additive manufacturing and/or smart manufacturing.

The training is done via the use of train-the-trainer studios (TTS). TTSs provide novel hands-on learning experiences to program participants. As with any program, the evaluation of such projects needs to be informed by rich data to capture participants’ entire experience, including the knowledge they gain.

Here’s one lesson I’ve learned from evaluating these projects: Participants’ perception of their value in the project contributes crucially to the quality of data collected.

As the evaluator, one can make participants feel that the data they are being asked to provide (regarding technical knowledge gained, their application of it, and perceptions about all aspects of the training) will be beneficial to the overall program and to them directly or indirectly.

If they feel that their importance is minimal, and that the information they provide will not matter, they will provide the barest amount of information (regardless of the method of data collection employed). If they understand the importance of their participation, they’re more likely to provide rich data.

How can you make them feel valued?

Establish good rapport with each of the participants, particularly if the groups are of a reasonable size. Make sure to interact informally with each participant throughout the training workshops. Inquire about their professional work, and ask them about supports they might need when they return to their workplace.

The responses to the open-ended questions on most of my workshop evaluations have been very rich and detailed, much more so than those from participants to whom I administered the survey remotely, without ever meeting. Program participants want to connect to a real person, not a remote evaluator. In the event that in-person connections are not possible, explore other innovative ways of establishing rapport with individual participants, before and during the program.

How can you improve the quality of data they will provide?

  • Sell the evaluation. Make it clear how the evaluation findings will be used and how the results will benefit the participants and their constituents specifically, directly or indirectly.

  • Share success stories. During the training workshops that I have been evaluating, I’ve shared some previous success stories with participants in order to show them what they are capable of accomplishing as well.

The time and energy you spend building these connections with participants will result in high-quality evaluation data, ultimately helping the program serve participants better.

Blog: Evaluating Critical Thinking Skills

Posted on May 27, 2020 in Blog

Professor of Sociology and Co-Director of the Center for Assessment and Improvement of Learning, Tennessee Technological University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Like many of you, I wear multiple professional hats. Critical thinking skills are at the nexus of all my roles. The importance of improving critical thinking transcends disciplines, even though the contexts and applications vary. As a sociologist, I see how the deficit of critical thinking skills has a negative impact on society. As an evaluator, I find that these skills are frequent targets for NSF projects across disciplines.

Identifying important skills and implementing strategies to improve them is only one part of a grant proposal. An equally challenging issue is finding appropriate assessments.

Through the years, I have learned some useful tips for selecting an instrument that best complements your evaluation needs. You should select an assessment that:

    1. Aligns with the skills that are important to your project.
    2. Is transparent about both the questions and the rubric for assessing those questions.
    3. Provides insight and/or training as to how you can improve the skills.
    4. Provides flexible comparison groups for you (e.g., pre and post, or national user norms).
    5. Provides reports that are easy to read and use.
    6. Demonstrates validity, reliability, and cultural fairness.

As we struggled to find assessment options that could meet these standards ourselves, we developed and refined the Critical-thinking Assessment Test (CAT). If you are seeking to improve students’ critical thinking skills, you may want to consider this instrument.

The Critical-thinking Assessment Test (CAT)

This NSF-funded instrument is the product of 20 years’ extensive development, testing, and refinement with faculty and students from over 350 institutions and over 40 NSF projects. One innovation of this assessment is its integration of short-answer essay questions based on real-world situations. It provides quantitative and qualitative data on the skills that faculty believe are most important for their students to have 10 years after graduating.

Skills Assessed by the CAT:

Evaluating Information

    • Separate factual information from inferences.
    • Interpret numerical relationships in graphs.
    • Understand the limitations of correlational data.
    • Evaluate evidence and identify inappropriate conclusions.

Creative Thinking

    • Identify alternative interpretations for data and observations.
    • Identify new information that might support or contradict a hypothesis.
    • Explain how new information can change a problem.

Learning and Problem Solving

    • Separate relevant from irrelevant information.
    • Integrate information to solve problems.
    • Learn and apply new information.
    • Use mathematical skills to solve real-world problems.

Communication

    • Communicate ideas effectively.

Our team truly enjoys working with evaluators and PIs to help them assess these skills and provide evidence of their success. Some NSF projects and courses have made gains in critical thinking equivalent to those gained in an entire four-year college experience. Our partner institutions have experienced positive outcomes, growth, and learning from working with the CAT.

You can find more information about the CAT here, or you can contact me with any questions you have.

 

References:

Haynes, A., Lisic, E., Goltz, M., Stein, B., & Harris, K. (2016). Moving beyond assessment to improving students’ critical thinking skills: A model for implementing change. Journal of the Scholarship of Teaching and Learning, 16(4), 44–61.

Stein, B., & Haynes, A. (2011). Engaging faculty in the assessment and improvement of students’ critical thinking using the CAT. Change: The Magazine of Higher Learning, 43, 44–49.

Blog: Building Capacity for High-Quality Data Collection

Posted on May 13, 2020 in Blog

Director of Evaluation, Thomas P. Miller & Associates, LLC 

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As I, like everyone else, am adjusting to working at home and practicing social distancing, I have been thinking about how to conduct my evaluation projects remotely. One thing that’s struck me as I’ve been retooling evaluation plans and data collection timelines is the need for even more evaluation capacity building around high-quality data collection for our clients. We will continue to rely on our clients to collect program data, and now that they’re working remotely too, a refresher on how to collect data well feels timely.  

Below are some tips and tricks for increasing your clients’ capacity to collect their own high-quality data for use in evaluation and informed decision making.

Identify who will need to collect the data.  

Especially with multiple-site programs or programs with multiple collectors, identifying who will be responsible for data collection and ensuring that all data collectors use the same tools is key to collecting consistent data across the program.

Determine what is going to be collected.  

Examine your tool. Consider the length of the tool, the types of data being requested, and the language used in the tool itself. When creating a tool that will be used by others, be certain that your tool will yield the data that you need and will make sense to those who will be using it. Test the tool with a small group of your data collectors, if possible, before full deployment.  

Make sure data collectors know why the data is being collected.  

When those collecting data understand how the data will be used, they’re more likely to be invested in the process and more likely to collect and report their data carefully. When you emphasize the crucial role that stakeholders play in collecting data, they see the value in the time they are spending using your tools. 

Train data collectors on how to use your data collection tools.  

Walking data collectors through the step-by-step process of using your data collection tool, even if the tool is a basic intake form, will ensure that all collectors use the tool in the same way. It will also ensure they have had a chance to walk through the best way to use the tool before they actually need to implement it. Provide written instructions, too, so that they can refer to them in the future.  

Determine an appropriate schedule for when data will be reported.  

To ensure that your data reporting schedule is not overly burdensome, consider the time commitment that the data collection may entail, as well as what else the collectors have on their plates.  

Conduct regular quality checks of the data collected.

Checking the data regularly allows you to employ a quality control process and promptly identify when data collectors are having issues. Catching these errors quickly will allow for easier course correction.  

Blog: Three Ways to Boost Network Reporting

Posted on April 29, 2020 in Blog

Assistant Director, Collin College’s National Convergence Technology Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The National Convergence Technology Center (CTC), a national ATE center focusing on IT infrastructure technology, manages a community called the Convergence College Network (CCN). The CCN consists of 76 community colleges and four-year universities across 26 states. Faculty and administrators from the CCN meet regularly to share resources, trade know-how, and discuss common challenges.

Because so much of the CTC’s work is directed to supporting the CCN, we ask the member colleges to submit a “CCN Yearly Report” evaluation each February. The data from that “CCN Yearly Report” informs the reporting we deliver to the NSF, to our National Visiting Committee, and to the annual ATE survey. Each of those three groups needs slightly different information, so we’ve worked hard to include everything in a single evaluation tool.

We’re always trying to improve that “CCN Yearly Report” by improving the questions we ask, removing the questions we don’t need, and making any other adjustments that could improve the response rate. We want to make it easy on the respondents. Our efforts seem to be working: we received 37 reports from the 76 CCN member colleges this past February, a 49% response rate.

 We attribute this success to three strategies.  

  1. Prepare them in advance. We start talking about the February “CCN Yearly Report” due date in the summer. The CCN community gets multiple email reminders, and we often mention the report deadline at our quarterly meetings. We don’t want anyone to say they didn’t know about the report or its deadline. Part of this ongoing preparation also involves making sure everyone in the network understands the importance of the data we’re seeking. We emphasize that we need their help to accurately report grant impact to the NSF.
  2. Share the results. If we go to such lengths to make sure everyone understands the importance of the report up front, it makes sense to do the same after the results are in. We try to deliver a short overview of the results at our July quarterly meeting. Doing so underscores the importance of the survey. Beyond that, research tells us that one key to nurturing a successful community of practice like the CCN is to provide positive feedback about the value of the group. By sharing highlights of the report, we remind CCN members that they are a part of a thriving, successful group of educators.
  3. Reward participation. Grant money is a great carrot. Because the CTC so often provides partial travel reimbursement to faculty from CCN member colleges so they can attend conferences and professional development events, we can incentivize the submission of yearly reports. Colleges that want the maximum membership benefits, which include larger travel caps, must deliver a report. Half of the 37 reports we received last year were from colleges seeking those maximum benefits.

 We’re sure there are other grants with similar communities of organizations and institutions. We hope some of these strategies can help you get the data you need from your communities. 

 

References:  

 Milton, N. (2017, January 16). Why communities of practice succeed, and why they fail [Blog post].

Blog: Beyond Reporting: Getting More Value out of Your Evaluation*

Posted on April 15, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If you’ve been a part of the ATE community for any time at all, you probably already know that ATE projects are required to have their work formally evaluated. NSF program officers want the projects they oversee to include evaluation results in their annual reports.

What may be less well known is that they also want to hear how projects are making use of their evaluations to learn from and improve their NSF-funded work. Did your evaluation results show that an activity you thought would help you reach your project goals turned out to be a flop? That may be disappointing, but it’s also extremely valuable information.   

There is more to using evaluation results than including findings in your annual reports to NSF or even following your evaluators’ recommendations. Project team members should take time to delve into the evaluation data on their own. For example: 

Read every comment in your qualitative data. Although you should avoid getting caught up in the less favorable remarks, they can be a valuable source of information about ways you might improve your work.  

  • Take time to consider the remarks that surprise you—they may reveal a blind spot that needs to be investigated.  
  • Don’t forget to pat yourself on the back for the stuff you’re already getting right.  

It’s important to find out whether a project is effective overall, but it can also be very revealing to disaggregate data by participant characteristics such as gender, age, discipline, enrollment status, or other factors. If you find out that some groups are getting more out of their experience with the project than others, you have an opportunity to adjust what you’re doing to better meet your intended audience’s needs. 
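
As a rough illustration, not drawn from the original post, this kind of disaggregation is only a few lines of R once the data are in a data frame; the participants table and its gender, discipline, and satisfaction columns below are hypothetical:

    # Summarize an outcome separately for each participant group.
    library(dplyr)

    participants %>%
      group_by(gender, discipline) %>%
      summarise(n = n(),
                mean_satisfaction = mean(satisfaction, na.rm = TRUE),
                .groups = "drop")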

The single most important thing you can do to maximize an evaluation’s potential to bring value to your project is to make time to understand and use the results. That means:  

  • Meet with your evaluator to discuss the results.  
  • Review results with your project colleagues and advisors. 
  • Make decisions about how to move forward based on the evaluation results.
  • Record those decisions, along with what happens after you take action. That way, you can include this information in your annual reports to NSF. 

ATE grantees are awarded about $66 million annually by the federal government. We have an ethical obligation to be self-critical, use all available information sources to assess progress and opportunities for improvement, and use project evaluations to help us achieve excellence in all aspects of our work.  

 

*This blog is based on an article from an EvaluATE newsletter published in October 2014. 

Blog: Backtracking Alumni: Using Institutional Research and Reflective Inquiry to Improve Organizational Learning

Posted on April 2, 2020 by Faye R. Jones and Marcia A. Mardis in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Faye R. Jones, Senior Research Associate, Florida State
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State

In a recent blog post, we shared practical tips for developing an alumni tracking program to assess students’ employment outcomes. Alumni tracking is an effective tool for assessing the quality of educational programs and helping determine whether programs have the intended impact.

In this post, we share the Backtracking technique, an advanced approach that supplements alumni tracking data with students’ institutionally archived records. Backtracking assumes that institutions and programs already gather student outcomes information (e.g., employment, salary, and advanced educational data) from alumni on a periodic basis (e.g., annually or every three years).

The technique uses institutional research (IR) archives to match students’ employment outcomes to academic and demographic variables (e.g., academic GPA, courses taken, grades, major, additional certifications, internships, gender, race/ethnicity). By pairing student outcomes data with academic and demographic variables, we can contextualize student pathways and explore the whole pathway, not just a moment in time.
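
The matching step itself is a straightforward join once both files share a student identifier. The sketch below is only illustrative; the ir_records and alumni_survey data frames and the student_id column are hypothetical names, not part of the authors’ description:

    # Attach IR variables (GPA, major, demographics) to self-reported outcomes.
    library(dplyr)

    backtracked <- alumni_survey %>%
      left_join(ir_records, by = "student_id")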

Figure 1 shows an example of the Backtracking technique for two-year Associate of Arts (AA) and Associate of Science (AS) programs.

Figure 1. Backtracking Technique for AA/AS Programs 

Figure 1 illustrates three data collection layers. Layer 1, Institutional Research College Data, provides student completion data, academic history, and contact information. Advanced and transfer-degree data are also available through the National Student Clearinghouse, which can reveal the major that a former student (or graduate) entered after completing the AA/AS degree. Layer 2, Alumni Transfer Employment Data, includes student employment and advanced-degree information self-reported in alumni surveys.

Layer 3, Pathway Explanatory Data, embeds a qualitative component within the Backtracking technique in order to let alumni explain their undergraduate experiences. This layer helps us understand what happened during and after college. Most importantly, it lets us identify the critical junctures that students faced and the facilitators and hindrances that allowed students to overcome (or that caused) setbacks during these difficult periods.

To provide alumni with the best opportunities to share their experiences, we use IR archives to formulate questions based on key facts about students’ experiences. For example, if IR records show that a student transferred from college A to university B, we may ask the student about that specific experience. For a student who failed Calculus 1 once but passed it on the second try, we may ask what allowed that success.

Although individual student pathways are useful, we can also stratify these data by race and gender (or other factors) and then aggregate them to better understand student groups. We demonstrate how we aggregate the pathways in this short video.

The Backtracking technique requires skilled personnel with technical knowledge in IR and data collection and analysis, or an Academic IR professional (who possesses both IR and research skills). Investing in such skill and knowledge is worthwhile:

    • Institutional research is powerful when used for formative and internal improvement and for generation of new knowledge.
    • Findings about former students using the Backtracking technique can provide useful information to improve program and institutional services (e.g., advising, formal practices, informal learning opportunities, etc.).
    • Looking back at what worked or failed for past students can inform current practices and serve as a source of institutional learning.

References: 

Jones, F. R., & Mardis, M. A. (2019, May 15). Alumni tracking: The ultimate source for evaluating completer outcomes [Blog post]. Retrieved from https://www.evalu-ate.org/blog/jones2-may19/

Blog: Strategies and Sources for Interpreting Evaluation Findings to Reach Conclusions

Posted on March 18, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Imagine: You’re an evaluator who has compiled lots of data about an ATE project. You’re preparing to present the results to stakeholders. You have many beautiful charts and compelling stories to share.  

You’re confident you’ll be able to answer the stakeholders’ questions about data collection and analysis. But you get queasy at the prospect of questions like: What does this mean? Is this good? Has our investment been worthwhile?

It seems like the project is on track and they’re doing good work, but you know your hunch is not a sound basis for a conclusion. You know you should have planned ahead for how findings would be interpreted in order to reach conclusions, and you regret that the task got lost in the shuffle.  

What is a sound basis for interpreting findings to make an evaluative conclusion?  

Interpretation requires comparison. Consider how you make judgments in daily life: If you declare, “this pizza is just so-so,” you are comparing that pizza with other pizza you’ve had, or maybe with your imagined ideal pizza. When you judge something, you’re comparing that thing with something else, even if you’re not fully conscious of that comparison.

The same thing happens in program evaluation, and it’s essential for evaluators to be fully conscious and transparent about what they’re comparing evaluative evidence against. When evaluators don’t make their comparison points explicit, their evaluative conclusions may seem arbitrary, and stakeholders may dismiss them as unfounded.

Here are some sources and strategies for comparisons to inform interpretation. Evaluators can use these to make clear and reasoned conclusions about a project’s performance:  

Performance Targets: Review the project proposal to see if any performance targets were established (e.g., “The number of nanotechnology certificates awarded will increase by 10 percent per year”). When you compare the project’s results with those targets, keep in mind that the original targets may have been either under- or overambitious. Talk with stakeholders to see if those original targets are appropriate or if they need adjustment. Performance targets usually follow the SMART structure.

Project Goals: Goals may be more general than specific performance targets (e.g., “Meet industry demands for qualified CNC technicians”). To make lofty or vague goals more concrete, you can borrow a technique called Goal Attainment Scaling (GAS). GAS was developed to measure individuals’ progress toward desired psychosocial outcomes. The GAS resource from BetterEvaluation will give you a sense of how to use this technique to assess program goal attainment.

Project Logic Model: If the project has a logic model, map your data points onto its components to compare the project’s actual achievements with the planned activities and outcomes expressed in the model. No logic model? Work with project staff to create one using EvaluATE’s logic model template. 

Similar Programs: Look online or ask colleagues to find evaluations of projects that serve similar purposes as the one you are evaluating. Compare the results of those projects’ evaluations to your evaluation results. The comparison can inform your conclusions about relative performance.  

Historical Data: Look for historical project data that you can compare the project’s current performance against. Enrollment numbers and student demographics are common data points for STEM education programs. Find out if baseline data were included in the project’s proposal or can be reconstructed with institutional data. Be sure to capture several years of pre-project data so year-to-year fluctuations can be accounted for. See the practical guidance for this interrupted time series approach to assessing change related to an intervention on the Towards Data Science website. 
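
As a rough, hypothetical sketch of the interrupted time series idea mentioned above, a simple regression with a post-project indicator can estimate whether the level or trend of an outcome shifted after the project began. The enrollment data frame, students count, and 2018 start year below are invented for illustration:

    # Did the enrollment trend shift after the project started?
    enrollment$post_project <- as.integer(enrollment$year >= 2018)
    fit <- lm(students ~ year * post_project, data = enrollment)
    summary(fit)  # the post_project terms estimate the change in level and trend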

Stakeholder Perspectives: Ask stakeholders for their opinions about the status of the project. You can work with stakeholders in person or online by holding a data party to engage them directly in interpreting findings.

 

Whatever sources or strategies you use, it’s critical that you explain your process in your evaluation reports so it is transparent to stakeholders. Clearly documenting the interpretation process will also help you replicate the steps in the future.

Blog: Three Questions to Spur Action from Your Evaluation Report

Posted on March 4, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluators are urged to make their evaluations useful. Project staff are encouraged to use their evaluations. An obvious way to support these aims is for evaluators to develop recommendations based on evidence and for project staff to follow those recommendations (if they agree with them, of course). But not all reports have recommendations, and sometimes recommendations are just “keep up the good work!” If implications for action are not immediately obvious from an evaluation report, here are three questions that project staff can ask themselves to spark thinking and decision making about how to use evaluation findings. I’ve included real-world examples based on our experience at EvaluATE.

1) Are there any unexpected findings in the report? The EvaluATE team has been surprised to learn that we are attracting a large number of grant writers and other grant professionals to our webinars. We initially assumed that principal investigators (PIs) and evaluators would be our main audience. With growing attendance among grant writers, we became aware that they are often the ones who first introduce PIs to evaluation, guiding them on what should go in the evaluation section of a proposal and how to find an evaluator. The unexpected finding that grant writers are seeking out EvaluATE for guidance made us realize that we should develop more tailored content for this important audience as we work to advance evaluation in the ATE program.

Talk with your team and your evaluator to determine if any action is needed related to your unexpected results.

2) What’s the worst/least favorable evaluation finding from your evaluation? Although it can be uncomfortable to focus on a project’s weak points, doing so is where the greatest opportunity for growth and improvement lies. Consider the probable causes of the problem and potential solutions. Can you solve the problem with your current resources? If so, make an action plan. If not, decide if the problem is important enough to address through a new initiative.

At EvaluATE, we serve both evaluators and evaluation consumers who have a wide range of interests and experience. When asked what EvaluATE needs to improve, several respondents to our external evaluation survey noted that they want webinars to be more tailored to their specific needs and skill levels. Some noted that our content was too technical, while others remarked that it was too basic. To address this issue, we decided to develop an ATE evaluation competency framework. Webinars will be keyed to specific competencies, which will help our audience decide which are appropriate for them. We couldn’t implement this research and development work with our current resources, so we wrote this activity into a new proposal.

Don’t sweep an unfavorable result or criticism under the rug. Use it as a lever for positive change.

3) What’s the most favorable finding from your evaluation? Give yourself a pat on the back, and then figure out if this finding points to an aspect of your project you should expand. If you need more information to make that decision, determine what additional evidence could be obtained in the next round of the evaluation. Help others to learn from your successes—the ATE Principal Investigators Conference is an ideal place to share aspects of your work that are especially strong, along with your lessons learned and practical advice about implementing ATE projects.

At EvaluATE, we have been astounded at the interest in and positive response to our webinars. But we don’t yet have a full understanding of the extent to which webinar attendance translates to improvements in evaluation practice. So we decided to start collecting follow-up data from webinar participants to check on use of our content. With that additional evidence in hand, we’ll be better positioned to make an informed decision about expanding or modifying our webinar series.

Don’t just feel good about your positive results—use them as leverage for increased impact.

If you’ve considered your evaluation results carefully but still aren’t able to identify a call to action, it may be time to rethink your evaluation’s focus. You may need to make adjustments to ensure it produces useful, actionable information. Evaluation plans should be fluid and responsive—it is expected that plans will evolve to address emerging needs.