Archive: collaboration

Blog: Sustaining Private Evaluation Practices: Overcoming Challenges by Collaborating within Our ATE Community of Practice

Posted on September 27, 2017 by Ben Reid in Blog

President, Impact Allies

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

My name is Ben Reid. I am the founder of Impact Allies, a private evaluation firm. The focus of this post is on the business, rather than the technical, aspects of evaluation. My purpose is to present a challenge to sustaining a private evaluation practice and serving clients well, and to propose an opportunity to overcome that challenge by collaborating within our community of practice.

Challenge

Evaluators often act as one-person shows. It is important to give a principal investigator (PI) and project team a single point of contact, and for that evaluator of record to have thorough knowledge of the project and its partners. However, the many different jobs required by an evaluation contract simply cross too many specialties and personality types for one person to serve a client well on every front.

Opportunity

The first opportunity is to become more professionally aware of our strengths and weaknesses. What are your skills? And, equally important, where are you skill-deficient (you don’t know how to do it) and where are you performance-deficient (you have the skill but aren’t suited for the work because of anxiety, frustration, or lack of enthusiasm)?

The second opportunity is to build relationships within our community of practice. Get to know other evaluators: what their unique strengths are and whom they use for ancillary services (their book of contractors). The upcoming NSF ATE PI conference is a great place to do this.

Example

My Strengths: Any evaluator can satisfactorily perform the basics – EvaluATE certainly has done a tremendous job of educating and training us. In this field, I am unique in my strengths of external communications, opportunity identification and assessment, strategic and creative thinking, and partnership development. Those skills, together with a background in education, marketing and branding, and project management, have helped me contribute broadly, which has proven useful time and again when working with small teams. Knowing clients well and having an entrepreneurial mindset allow me to do what is encouraged in NSF’s 2010 User-Friendly Handbook for Project Evaluation: “Certain evaluation activities can help meet multiple purposes, if used judiciously” (p. 119).

My Weaknesses: An area where I could use some outside support is graphic design and data visualization. Because it succinctly tells the story and successes of a project, this work is very important when communicating with multiple stakeholders, in published works, or for promotional purposes. Where I once performed these tasks myself (with much time and frustration, and at a level that wasn’t noteworthy), I now contract with an expert, and my clients are thereby better served.

Takeaway

“Focus on the user and all else will follow” is the number one philosophy of Google, a company that has given us so much and in turn done so well for itself. Let us also focus on our clients: serving their needs by building our businesses where we are skilled and enthusiastic, and collaborating (partnering, outsourcing, or referring) within our community of practice where another professional can do a better job for them.

Blog: The Shared Task of Evaluation

Posted on November 18, 2015 in Blog

Independent Educational Program Evaluator

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation was an important strand at the recent ATE meeting in Washington, DC. As I reflected on my own practice as an external evaluator and listened to the comments of my peers, I was impressed once again with how dependent evaluation is on a shared effort by project stakeholders. Ironically, the more external an evaluator is to a project, the more important it is to collaborate closely with PIs, program staff, and participating institutions. Many assessment and data collection activities that are technically part of the outside evaluation are logistically and financially dependent on the internal workings of the project.

This has implications for the scope of work for evaluation and for the evaluation budget. A task might appear in the project proposal as “survey all participants,” and it would likely be part of the evaluator’s scope of work. But in practice, tasks such as deciding what to ask on the survey, reaching the participants, and following up with nonresponders are likely to require work by the PIs or their assistants.

Occasionally you hear certain percentages cited as appropriate levels of effort for evaluation. Whatever overall portion evaluation represents in a project, my approach is to think of that portion as the sum of my efforts and those of my clients. This has several advantages:

  • During planning, it immediately highlights data that might be difficult to collect. It is much easier to come up with a solution or an alternative in advance and avoid a big gap in the evidence record.
  • It makes clear who is responsible for what activities and avoids embarrassing confrontations along the lines of, “I thought you were going to do that.”
  • It keeps innocents on the project and evaluation staffs from being stuck with long (and possibly uncompensated) hours trying to carry out tasks outside their expected job descriptions.
  • It allows for more accurate budgeting. If I know that a particular study involves substantial clerical support for pulling records from school databases, I can reduce my external evaluation fee, while at the same time warning the PI to anticipate those internal evaluation costs (see the illustration after this list).
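
To make the budgeting arithmetic concrete, here is a hypothetical illustration (the figures are invented for this example, not drawn from any actual project): suppose evaluation is budgeted at 8 percent of a $900,000 project, for $72,000 of total evaluation effort. If roughly $12,000 of that effort is clerical work, such as pulling records, that is best handled by project staff, then the external evaluation fee can be quoted at $60,000, with the remaining $12,000 flagged for the PI as internal evaluation costs to anticipate.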

The simplest way to ensure that these dependencies are identified is to consider them during the initial logic modelling of the project. If an input is professional development, its output is instructors who use the professional development, and the evidence for that output is instructors’ use of project resources, then who will have to be involved in collecting that evidence? Even if the evaluator proposes to visit every instructor and watch them in practice, those visits will likely have to be coordinated by someone close to the instructional calendar and daily schedule. Specifying and fairly sharing those tasks produces more data, better data, and happier working relationships.

Blog: Building Effective Partnerships to Conduct Targeted Research on Student Pathways

Posted on March 4, 2015 by Will Tyson in Blog

Associate Professor of Sociology, University of South Florida

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

My name is Will Tyson, associate professor of sociology at the University of South Florida. I am also principal investigator of PathTech (“Successful Academic and Employment Pathways in Advanced Technologies” [NSF #1104214]), an NSF ATE targeted research project aimed at better understanding pathways into technician education and into the workforce. In this post, I describe effective models through which ATE projects and centers can develop targeted research partnerships with STEM education researchers.

Personnel from 2-year and 4-year institutions bring different expertise to the table, but there is great potential for mutually beneficial partnerships built around the desire to learn more about student pathways and student outcomes. Within ATE, centers and projects are typically led by educators and practitioners with expertise in program development, curricular development, and professional development within their areas of technical expertise and technician education. Targeted research projects in technician education are led by STEM education researchers with backgrounds in social science and education who are interested in learning more about student pathways and outcomes while placing those pathways and outcomes in a broader social context. What we do is very different, but our goals are the same.

When I discuss my research with ATE grantees and other stakeholders in K-12 education, community colleges, and local industry, I get the same revealing responses: “NSF always wants to know about student outcomes, but we don’t really know how to do the research” and “We didn’t know there were people like you out there who did this research.” On the other hand, experienced NSF grantees who conduct research in K-12 education and/or four-year universities often know little about the “T” in STEM at community colleges and the work being done through ATE Centers and Projects. Developing ways to bridge knowledge gaps between practitioners and researchers is necessary to increase our understanding of the processes of technician education.

PathTech is a partnership between social science and education researchers at the University of South Florida and the Florida Advanced Technological Education Center (FLATE), an NSF-ATE regional center of excellence. Such a partnership is both mandated by the ATE program solicitation and necessary to conduct high-impact research that can effectively be put into practice. This collaboration is an essential element of the PathTech research model, along with the proactive and enthusiastic participation of our community college, high school, and industry partners.

Through this multifaceted, interdisciplinary collaboration, we have been able to create a regional-scale model that allows research objectives to develop organically, driven by the experiences and needs of college personnel as well as by theory and scholarship. This is the foundation on which knowledge is constructed and produced: through interaction with those experiencing technician educational and occupational pathways as administrators, teachers, students, employers, and policymakers. Most importantly, this collaboration allows us to develop a mechanism for real-time dissemination of emerging findings and developing knowledge, so that all parties benefit from the research.

Newsletter: The Power of Evaluator and PI Collaboration

Posted on April 1, 2014 in Newsletter

The PI of an ATE center or project has the responsibility of maintaining a strong flow of communication with the evaluator. This begins even before the project is funded and continues in a dynamic interchange throughout the funding cycle. There are easy ways that the PI and evaluator can add value to a project; simply asking for help is one that is sometimes overlooked.

A recent example demonstrates how an ATE center used the expertise of its evaluator to get specific feedback on the use of clearinghouse materials. The co-PI asked the evaluator for assistance, and a very nice survey was created that allowed the evaluator to gather additional information about curriculum and instructional materials usage and the center’s PIs to gain valuable input about the use of its existing materials.

Second, it is important to actually use the information gained from the evaluation data. What a natural and built-in opportunity for the PI and the team to take advantage of impact data to drive the future direction of the center or project. Using data to make decisions provides an opportunity to test assumptions and to learn whether current practices and products are working.

Third, the evaluation develops evidence that can be used to obtain further funding, advance technical education, and contribute to the field of evaluation. Through regular communication and collaboration, the project, the PI, and the evaluator all gain value and can more effectively contribute to the design of current and future projects. Together, the PI and the evaluator can learn about impact, trends, and key successes that are appropriate for scaling. Thus evaluation is more than reporting; it becomes a tool for strategic planning.

The Bio-Link evaluator, Candiya Mann, not only provides a written document that can be used for reporting and planning but also works with me to expand my connections with other projects and people that have similar interests in using data to drive actions and achieve broader impact. Removing isolation contributes new ideas for metrics and can actually make evaluation fun.

Learn more about Bio-Link at www.bio-link.org.

Newsletter: The PI Guide to Working with Evaluators

Posted on January 1, 2014 in Newsletter

Principal Research Scientist, Education Development Center, Inc.

(originally published as a blog post at ltd.edc.org/strong-pievaluator-partnerships-users-guide on January 10, 2013)

Evaluation can be a daunting task for PIs. It can seem like the evaluator speaks another language, and the stakes for the project can seem very high. Evaluators face their own challenges: often working with tight budgets and timeframes, they are expected to deliver both rigor and relevance, along with evidence of project impact. With all this and more in the mix, it’s no surprise that tension can mount and miscommunication can lead to animosity and stress.

As the head of evaluation for the ITEST Learning Resource Center and as an NSF program officer, I saw dysfunctional relationships between PIs and their evaluators contribute to missed deadlines, missed opportunities, and frustration on all sides. As an evaluator, I am deeply invested in building evaluators’ capacity to communicate their work and in helping program staff understand the value of evaluation and what it brings to their programs. I was concerned that these dysfunctional relationships would thwart the potential of evaluation to provide vital information that helps program staff make decisions and demonstrate the value of their programs.

To help strengthen PI/evaluator collaborations, I’ve done a lot of what I called “evaluation marriage counseling” for PI/evaluator pairs. Through these “counseling sessions,” I learned that evaluation relationships are not so different from any other relationships. Expectations aren’t always made clear, communication often breaks down, and, more than anything else, all relationships need care and feeding.

As a program officer, I had the chance to help shape and create a new resource that supports PIs and evaluators in forming strong working relationships. Rick Bonney of the Cornell Lab of Ornithology and I developed a guide to working with evaluators, written by PIs, for PIs. Although it was designed for the Informal Science Education community, the lessons translate to just about any situation in which program staff are working with evaluators. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects is available at bit.ly/1l28nTt.