At EvaluATE, evaluation is a shared responsibility. We have a wonderful external evaluator, Dr. Lana Rucks, whom we meet with in person a few times a year and speak with by phone about every other month. Dr. Rucks is responsible for determining our center’s mid- and long-term impact on the individuals who engage with us and on the ATE projects they influence. We supplement her external evaluation with surveys of workshop and webinar participants to obtain their immediate feedback on our activities. In addition, we carefully track the extent to which we are reaching our intended audiences. But for our team, evaluation is not just about the formal activities of data collection and analysis. It’s how we do our work on a daily basis. Here are some examples:
- Everyone gives and gets constructive criticism. Every presentation, webinar, newsletter article, or other product we create gets reviewed by the whole team. This improves our final products, whether it means catching embarrassing typos, completely revamping a presentation to improve its relevance, or going back to the drawing board. We all have thick skins and understand that criticism is not personal; it’s essential to high-quality work.
- We are willing to admit when something’s not working or when we’ve bitten off more than we can chew. We all realize it’s better to scrap an idea early and refocus rather than push it to completion with mediocre results.
- We look backward when moving forward. For example, when we begin developing a new webinar, we review the feedback from the previous one to determine what our audiences perceived as its strengths and weaknesses. Perhaps the most painful yet valuable exercise is watching the recording of a prior webinar together, stopping to note what really worked and what didn’t—from the details of audio quality to the level of audience participation.
- We engage our advisors. Their external perspective on our work is invaluable: they ask us tough questions and make us check our assumptions.
- We use data every day. Whether determining which social media strategies are most effective or identifying which subgroups within our ATE constituency need more attention, we use the data we have in hand to inform decisions about our operations and priorities.
- We use our mission as a compass to plot our path forward. We are faced with myriad opportunities in the work that we do as a resource center. We consider options in terms of their potential to advance our mission. That keeps us focused and ensures that resources are expended on mission-critical efforts.
Integrating these and other evaluative activities and perspectives into our daily work gives us better results, as is apparent in our formal evaluation results. Importantly, we share a belief that excellence is never achieved—it is something we continually strive for. What we did yesterday may have been pretty good, but we believe we can do better tomorrow.
As you plan your evaluation for this year, consider things you can do with your team to critique and improve your work on an ongoing basis.
All ATE projects are required to have an evaluation of their work, and results are expected to be included in annual reports to NSF. But if that’s all a project is using its evaluation for, it’s probably not bringing a lot of value to the grant work. In our webinar, The Nuts and Bolts of ATE Evaluation Reporting, we presented ways evaluation results can be used beyond reporting to NSF. In this article, we share ways to use evaluation results for project improvement. For more details on other uses, check out the segment of the webinar at http://bit.ly/webinar-clip.
Using evaluation results to help improve your project requires more than just accepting the evaluator’s recommendations. Project team members should take time to delve into the evaluation data on their own. For example, read every comment in your qualitative data. Although you should avoid getting caught up in the less favorable remarks, they can be a valuable source of information about ways you might improve your work. Take time to consider the remarks that surprise you—they may reveal a blind spot that needs to be investigated. But don’t forget to pat yourself on the back for the stuff you’re already getting right.
Although it’s important to find out if a project is effective overall, it can be very revealing to disaggregate by participant characteristics, such as by gender, age, discipline, enrollment status, or other factors. If you find out that some groups are getting more out of their experience with the project than others, you have an opportunity to adjust what you’re doing to better meet your intended audience’s needs.
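To make the disaggregation step concrete, here is a minimal sketch in Python. The survey data, field names, and rating scale are all hypothetical, invented for illustration only; the point is simply that an acceptable overall average can hide a meaningful gap between subgroups.

```python
from statistics import mean

# Hypothetical workshop feedback on a 1-5 satisfaction scale.
# Field names ("enrollment", "satisfaction") are illustrative only.
responses = [
    {"enrollment": "full-time", "satisfaction": 4.5},
    {"enrollment": "part-time", "satisfaction": 3.0},
    {"enrollment": "full-time", "satisfaction": 4.8},
    {"enrollment": "part-time", "satisfaction": 3.2},
    {"enrollment": "full-time", "satisfaction": 4.6},
    {"enrollment": "part-time", "satisfaction": 2.9},
]

# The overall average looks acceptable on its own...
overall = mean(r["satisfaction"] for r in responses)

# ...but grouping scores by enrollment status before averaging
# reveals that part-time participants are getting much less out
# of the experience than full-time participants.
groups = {}
for r in responses:
    groups.setdefault(r["enrollment"], []).append(r["satisfaction"])
by_group = {status: mean(scores) for status, scores in groups.items()}

print(f"Overall: {overall:.2f}")
for status, avg in sorted(by_group.items()):
    print(f"{status}: {avg:.2f}")
```

The same grouping logic applies to any participant characteristic you collect, such as gender, age, or discipline.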
The single most important thing you can do to maximize an evaluation’s potential to bring value to your project is to make time to meet with your evaluator, review results with your project colleagues and advisors, and make decisions about how to move forward based on findings. ATE grantees are awarded about $60 million annually by the federal government. We have an ethical obligation to be self-critical, use all available information sources to assess progress and opportunities for improvement, and utilize project evaluations to help us achieve excellence in all aspects of our work.
The term critical friend describes a stance an evaluator can take in their relationship with the program or project they evaluate. Costa and Kallick (1993) provide this seminal definition: “A trusted person who asks provocative questions, provides data to be examined through another lens, and offers critique of a person’s work as a friend” (p. 50).
The relationship between a project and an evaluator who is a critical friend is one where the evaluator has the best interests of the program at heart and the project staff trusts that this is the case. The evaluator may see their role as being both a trusted advisor and a staunch critic. They push the program to achieve its goals in the most effective way possible while maintaining independence. The evaluator helps the project staff view information in different ways, while remaining sensitive to the project staff’s own views and priorities. The evaluator will call attention to negative or less effective aspects of a project, but will do so in a constructive way. By pointing out potential pitfalls and flaws in the project, the critical friend evaluator can help the project grow and improve.
To learn more…
Costa, A. L., & Kallick, B. (1993). Through the lens of a critical friend. Educational Leadership, 51(2), 49-51. http://bit.ly/crit-friend
Rallis, S. F., & Rossman, G. B. (2000). Dialogue for learning: Evaluator as critical friend. New Directions for Evaluation, 86, 81-92.
For a PI of an ATE project or center, it is clear that working with evaluators provides key information for the success of the project. Gathering and synthesizing that information contributes to the creation of knowledge. Knowledge can be viewed as a valuable asset for the project and others. Some knowledge can be easily obtained from text or graphics, but other knowledge comes from experience. Tools exist to help with the management of such knowledge. In the Synergy: Research Practice Transformation (SynergyRPT) project, we used tools such as logic models, innovation attributes, and value creation worksheets to learn about and practice knowledge creation (see http://synergyrpt.org/resources-2). Recently, knowledge management software has been developed that can help organize information for projects. Some useful organizing tools include Trello, Dropbox, SharePoint, BrainKeeper, and IntelligenceBank.
One example of effective evaluation management is the Southwest Center for Microsystems Education, where the external and internal evaluators use a value creation framework as part of continuous improvement, as described in their evaluation handbook by David Hata and James Hyder (see http://bit.ly/scme-eval). This approach has proved useful and helps build understanding within the ATE community. Development of common definitions of terms has been essential to communication among interested parties. The transfer of successful assessment, knowledge creation, and evaluation outcomes continues to provide broader impact for ATE projects.
Certainly the most important part of the evaluator’s and PI’s jobs is to promote a culture of sharing. Without the human desire to share knowledge and build true communities of practice, evaluation knowledge yields little more than minor tweaks to a project. Along with the desire to share comes support for strong evaluation plans and ways to disseminate findings. With that mindset, both PIs and evaluators can work with their networks to build trust and create communities of practice that are committed to sustainability and scale. In that way, lessons learned from the evaluation are not lost over time.