We EvaluATE - Evaluation Use

Blog: How Can You Make Sure Your Evaluation Meets the Needs of Multiple Stakeholders?*

Posted on October 31, 2019 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We talk a lot about stakeholders in evaluation. These are the folks who are involved in, affected by, or simply interested in the evaluation of your project. But what these stakeholders want or need to know from the evaluation, the time they have available for the evaluation, and their level of interest are probably quite variable. The table below is a generic guide to the types of ATE evaluation stakeholders, what they might need, and how to meet those needs.

ATE Evaluation Stakeholders

Project leaders (PI, co-PIs)
What they might need: Information that will help you improve the project as it unfolds, and results you can include in your annual reports to NSF to demonstrate accountability and impact.
Tips for meeting those needs: Communicate your needs clearly to your evaluator, including when you need the information in order to make use of it.

Advisory committees or National Visiting Committees
What they might need: Results from the evaluation that show whether the project is on track to meet its goals and whether changes in direction or operations are warranted, plus summary information about the project’s strengths and weaknesses.
Tips for meeting those needs: Many advisory committee members donate their time, so they probably aren’t interested in reading lengthy reports. Provide a brief memo and/or short presentation with key findings at meetings, and invite questions about the evaluation. Be forthcoming about strengths and weaknesses.

Participants who provide data for the evaluation
What they might need: Access to reports in which their information was used, and summaries of what actions were taken based on the information they provided.
Tips for meeting those needs: The most important thing for this group is to demonstrate use of the information they provided. You can share reports, but a personal message from project leaders along the lines of “we heard you, and here is what we’re doing in response” is most valuable.

NSF program officers
What they might need: Evidence that the project is on track to meet its goals, evidence of impact (not just what was done, but what difference the work is making), and evidence that the project is using evaluation results to make improvements.
Tips for meeting those needs: Focus on Intellectual Merit (the intrinsic quality of the work and its potential to advance knowledge) and Broader Impacts (the tangible benefits for individuals and progress toward desired societal outcomes). If you’re not sure what your program officer needs from your evaluation, ask for clarification.

College administrators (department chairs, deans, executives, etc.)
What they might need: Results that demonstrate impact on students, faculty, institutional culture, infrastructure, and reputation.
Tips for meeting those needs: Make full reports available upon request, but most busy administrators probably don’t have the time to read technical reports or don’t need the fine-grained data points. Prepare memos or share presentations that focus on the information they’re most interested in.

Partners and collaborators
What they might need: Information that helps them assess the return on the investment of their time or other resources.

In case you didn’t read between the lines, the underlying message here is to provide stakeholders with the information that is most relevant to their particular “stake” in your project. A sure way to miss their needs is to send everyone the same long, detailed technical report containing every data point collected. It’s good to have a full report available for those who request it, but many simply won’t have the time or level of interest needed to consume that quantity of evaluative information about your project.

Most importantly, don’t take our word about what your stakeholders might need: Ask them!

Not sure what stakeholders to involve in your evaluation or how? Check out our worksheet Identifying Stakeholders and Their Roles in an Evaluation at bit.ly/id-stake.

 

*This blog is a reprint of an article from an EvaluATE newsletter published in October 2015.

Blog: Completing a National Science Foundation Freedom of Information Act Request

Posted on July 15, 2019 in Blog

Principal Consultant, The Rucks Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


You have probably heard of FOIA (Freedom of Information Act) requests, most likely in the context of journalism. Journalists often submit a FOIA request to obtain information that is not otherwise publicly available but is key to an investigative reporting project.

There may be times when you, as an evaluator, are evaluating or researching a topic and your work could be enhanced by information that requires submitting a FOIA request. For instance, while working as EvaluATE’s external evaluator, The Rucks Group needed to complete a FOIA request to learn how evaluation plans in ATE proposals have changed over time; we were also interested in documenting how EvaluATE may have influenced those changes. Toward that goal, we set out to review a random sample of ATE proposals funded between 2004 and 2017. However, despite much effort over an 18-month period, we still needed to obtain nearly three dozen proposals. We had to get these proposals via a FOIA request primarily because the projects were older and we were unable to reach either the principal investigators or the appropriate person at the institution. So we submitted a FOIA request to the National Science Foundation (NSF) for the outstanding proposals.

For me, this was a new and, at first, mentally daunting task. Now, having gone through the process, I realize I need not have been nervous, because completing a FOIA request is actually quite simple. These are the elements one needs to provide:

  1. Nature of request: We provided a detailed description of the proposals we needed and what we needed from each proposal. We also provided the rationale for the request, but I do not believe a rationale is required.
  2. Delivery method: Identify the method through which you prefer to receive the materials. We chose to receive digital copies via a secure digital system.
  3. Budget: Completing the task could require special fees, so you will need to indicate how much you are willing to pay for the request. Receiving paper copies through the US Postal Service can be more costly than receiving digital copies.

It may take a while for the FOIA request to be filled. We submitted the request in fall 2018 and received the materials in spring 2019. The delay may have been due in part to the 35-day government shutdown and a possibly lengthy process for Principal Investigator approval.

The NSF FOIA office was great to work with, and we appreciated staffers’ communications with us to keep us updated.

Because access is granted only for a particular time, pay attention to when you are notified via email that the materials have been released to you. In other words, do not let this notice sit in your inbox.

One caveat: When you submit a FOIA request, you may be encouraged to acquire the materials through other means; for example, submitting a records request to colleges or state agencies may be an option for you.

While FOIA requests should be made judiciously, they are useful tools that, under the right circumstances, could enhance your evaluation efforts. They take time, but thanks to the law backing the public’s right to know, your FOIA requests will be honored.

To learn more, visit https://www.nsf.gov/policies/foia.jsp

Keywords: FOIA request, freedom of information act

Blog: Grant Evaluation: What Every PI Should Know and Do*

Posted on June 3, 2019 in Blog

Luka Partners LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A number of years ago, the typical Advanced Technological Education (ATE) Principal Investigator (PI) deemed evaluation a necessary evil. As a PI, I recall struggling even to find an evaluator who appeared to have reasonable credentials. I viewed evaluation as something you had to have in a proposal to get funded.

Having transitioned from the PI role to being an evaluator myself, I now appreciate how evaluation can add value to a project. I also know a lot more about how to find an evaluator and negotiate the terms of the evaluation contract.

Today, PIs typically identify evaluators through networking and sometimes use evaluator directories, such as the one maintained by EvaluATE at ATE Central. You can call colleagues and ask them to identify someone they trust and can recommend with confidence. If you don’t know anyone yet, start your networking by contacting an ATE center PI using the map at atecentral.net. Do this at least three months before the proposal submission date (i.e., now). When you approach an evaluator, ask for a résumé, references, and a work sample or two. Review their qualifications to be sure the proposal’s reviewers will perceive them as a credentialed evaluator.

Second, here is an important question many PIs ask: “Once you have identified the evaluator, can you expect them to write the evaluation section of your proposal for free?” The answer is (usually) yes. Just remember: Naming an individual in your proposal and engaging that person in proposal development reflects your commitment to enter into a contract with them if your proposal is funded. (An important caveat: Many community colleges’ procurement rules require a competition or bid process for evaluation services. That may affect your ability to commit to the evaluator should the proposal be funded. Have a frank discussion about this.)

Although there is a limit to what evaluators can or should do for free at the proposal stage, you should expect more than a boilerplate evaluation plan (provided you’ve allowed enough time for a thoughtful one). You want someone who will look at your goals and objectives and describe, in 1 to 1.25 pages, the approach for this project’s evaluation. This will serve you better than taking their “standard language,” if they offer it, and modifying it yourself. Once the proposal is funded, their first deliverable will be the complete evaluation plan; you generally won’t need that level of detail at the proposal stage.

Now that you have a handshake agreement with your selected evaluator, make it clear you need the draft evaluation section by a certain deadline — say, a month before the proposal due date. You do not have to discuss detailed contractual terms prior to the proposal being funded, but you do have to establish the evaluation budget and the evaluator’s daily rate, for your budget and budget justification. Establishing this rate requires a frank discussion about fees.

Communication in this process is key. Check out EvaluATE’s webinar, “Getting Everyone on the Same Page,” for practical strategies for evaluator-stakeholder communication.

Once your proposal has been funded, you get to hammer out a real statement of work with your evaluator and set up a contract for the project. Then the real work begins.

*This blog is a reprint of an article from an EvaluATE newsletter published in summer 2012.

Keywords: evaluators, find evaluator, proposal, evaluation, evaluation proposal

Blog: Alumni Tracking: The Ultimate Source for Evaluating Completer Outcomes

Posted on May 15, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Faye R. Jones, Senior Research Associate, Florida State University
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State University

When examining student programs, evaluators can use many student outcomes (e.g., enrollments, completions, and completion rates) as appropriate measures of success. However, to properly assess whether programs and interventions are having their intended impact, evaluators should consider performance metrics that capture data on individuals after they have completed degree programs or certifications, also known as “completer” outcomes.

For example, if a program’s goal is to increase the number of graduating STEM majors, then whether students can get STEM jobs after completing the program is very important to know. Similarly, if the purpose of offering high school students professional CTE certifications is to help them get jobs after graduation, it’s important to know if this indeed happened. Completer outcomes allow evaluators to assess whether interventions are having their intended effect, such as increasing the number of minorities entering academia or attracting more women to STEM professions. Programs aren’t just effective when participants have successfully entered and completed them; they are effective when graduates have a broad impact on society.

Tracking of completer outcomes is typical, as many college and university leaders are held accountable for student performance while students are enrolled and after students graduate. Educational policymakers are asking leaders to look beyond completion to outcomes that represent actual success and impact. As a result, alumni tracking has become an important tool in determining the success of interventions and programs. Unfortunately, while the solution sounds simple, the implementation is not.

Tracking alumni (i.e., past program completers) can be an enormous undertaking, and many institutions do not have a dedicated person to do the job. Alumni also move, switch jobs, and change their names. Some experience survey fatigue after several survey requests. The following are practical tips from an article we co-authored about how we tracked alumni data for a five-year project that aimed to recruit, retain, and employ computing and technology majors (Jones, Mardis, McClure, & Randeree, 2017):

    • Recommend to principal investigators (PIs) that they extend outcome evaluations to include completer outcomes in an effort to capture graduation and alumni data, and downstream program impact.
    • Baseline alumni tracking details should be obtained prior to student completion, but not captured again until six months to one year after graduation, to provide ample transition time for the graduate.
    • Programs with a systematic plan for capturing outcomes are likely to have higher alumni response rates.
    • Surveys are a great tool for obtaining alumni tracking information, and social media (e.g., LinkedIn) can be used to stay in contact with students for survey and interview requests. Suggest that PIs implement a social media strategy while students are participating in the program, so that the contact need only be continued after completion.
    • Data points might include student employment status, advanced educational opportunities (e.g., graduate school enrollment), position title, geographic location, and salary. For richer data, we recommend adding a qualitative component to the survey (or selecting a sample of alumni to participate in interviews).

The article also includes a sample questionnaire in the reference section.

A comprehensive review of completer outcomes requires that evaluators examine both the alumni tracking procedures and analysis of the resulting data.

Once evaluators have helped PIs implement a sound alumni tracking strategy, institutions should advance to alumni backtracking! We will provide more information on that topic in a future post.

* This work was partially funded by NSF ATE 1304382. For more details, go to https://technicianpathways.cci.fsu.edu/

References:

Jones, F. R., Mardis, M. A., McClure, C. M., & Randeree, E. (2017). Alumni tracking: Promising practices for collecting, analyzing, and reporting employment data. Journal of Higher Education Management, 32(1), 167–185. https://mardis.cci.fsu.edu/01.RefereedJournalArticles/1.9jonesmardisetal.pdf

Blog: A Call to Action: Advancing Technician Education through Evidence-Based Decision-Making

Posted on May 1, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Faye R. Jones, Senior Research Associate, Florida State University
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State University


Evaluators contribute to developing the Advanced Technological Education (ATE) community’s awareness and understanding of theories, concepts, and practices that can advance technician education at the discrete project level as well as at the ATE program level. Regardless of focus, project teams explore, develop, implement, and test interventions designed to lead to successful outcomes in line with ATE’s goals. At the program level, all ATE community members, including program officers, benefit from the reviewing and compiling of project outcomes to build an evidence base to better prepare the technical workforce.

Evidence-based decision-making is one way to ensure that project outcomes lead to quality and systematic program outcomes. As indicated in Figure 1, good decision-making depends on three domains of evidence within an environmental or organizational context: contextual, experiential (i.e., resources, including practitioner expertise), and the best available research evidence (Satterfield et al., 2009).

Figure 1. Domains that influence evidence-based decision-making (Satterfield et al., 2009)

As Figure 1 suggests, at the project level, as National Science Foundation (NSF) ATE principal investigators (PIs) work, evaluators can assist PIs in making project design and implementation decisions based on the best available research evidence, considering participant, environmental, and organizational dimensions. For example, researchers and evaluators work together to compile the best research evidence about specific populations (e.g., underrepresented minorities) in which interventions can thrive. Then, they establish mutually beneficial researcher-practitioner partnerships to make decisions based on their practical expertise and current experiences in the field.

At the NSF ATE program level, program officers often review and qualitatively categorize project outcomes provided by project teams, including their evaluators, as shown in Figure 2.

 

Figure 2. Quality of Evidence Pyramid (Paynter, 2009)

As Figure 2 suggests, aggregated project outcomes tell a story about what the ATE community has learned and needs to know about advancing technician education. At the highest levels of evidence, program officers strive to obtain strong evidence that can lead to best practice guidelines and manuals grounded by quantitative studies and trials, and enhanced by rich and in-depth qualitative studies and clinical experiences. Evaluators can meet PIs’ and program officers’ evidence needs with project-level formative and summative feedback (such as outcomes and impact evaluations) and program-level data, such as outcome estimates from multiple studies (i.e., meta-analyses of project outcome studies). Through these complementary sources of evidence, evaluators facilitate the sharing of the most promising interventions and best practices.

In this call to action, we charge PIs and evaluators with working closely together to ensure that project outcomes are clearly identified and supported by evidence that benefits the ATE community’s knowledge base. Evaluators’ roles include guiding leaders to 1) identify new or promising strategies for making evidence-based decisions; 2) use or transform current data for making informed decisions; and when needed, 3) document how assessment and evaluation strengthen evidence gathering and decision-making.

References:

Paynter, R. A. (2009). Evidence-based research in the applied social sciences. Reference Services Review, 37(4), 435–450. doi:10.1108/00907320911007038

Satterfield, J., Spring, B., Brownson, R., Mullen, E., Newhouse, R., Walker, B., & Whitlock, E. (2009). Toward a transdisciplinary model of evidence-based practice. The Milbank Quarterly, 86, 368–390.

Blog: Becoming a Sustainability Sleuth: Leaving and Looking for Clues of Long-Term Impact

Posted on August 1, 2018 in Blog

Director, SageFox Consulting Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! I’m Rebecca from SageFox Consulting Group, and I’d like to start a conversation about measuring sustainability. Many of us work on ambitious projects whose long-term impacts cannot be achieved within the grant period and require grant activities to be sustained beyond it. Projects are often tasked with providing evidence of sustainability but are not given funding to assess sustainability and impact after the grant ends. In five, 10, or 15 years, if someone were to pick up your final report, would they be able to use it to get a baseline understanding of what occurred during the grant, and would they know where to look for evidence of impact and sustainability? Below are some suggestions for documenting “clues” of sustainability:

Relationships are one way projects are sustained. You may want to consider documenting evidence of the depth of relationships: are they person-dependent, or have they become true partnerships between entities? Evidence of the depth of a relationship is often revealed when a key supporter leaves their position but the relationship continues. You might also try to distinguish a person from a role. For example, one project I worked on lost the support of a key contact (due to a reorganization) at a federal agency that hosted student interns during the summer. There was enough goodwill and experience, however, that continued efforts from the project leadership resulted in more requests for interns than there were students available to fill them.

Documenting how and why the innovation evolves can provide evidence of sustainability. Often the adopter, user, or customer finds their own value in relation to their unique context. Understanding how and why someone adapts the product or process gives great insight into what elements may carry on and in what contexts. For example, you might ask users, “What modifications were needed for your context and why?”

In one of my projects, we began with a set of training modules for students, but we found that an online test preparation module for a certification was also valuable. Through a relationship with the testing agency, a revenue stream was developed that also allowed the project to continue classroom work with students.

Institutionalization (adoption of key products or processes by an institution)—often through a dedicated line item in a budget for a previously grant-funded student support position—reflects sustainability. For example, when a grant-funded program found a permanent home at the university by expanding its student-focused training in entrepreneurship to faculty members, it aligned itself with the mission of the department. Asking “What components of this program are critical for the host institution?” is one way to uncover institutionalization opportunities.

Revenue generation is another indicator of customer demand for the product or process. Many projects are reluctant to commercialize their innovations, but commercialization can be part of a sustainability plan. There are even National Science Foundation (NSF) programs to help plan for commercialization (e.g., NSF Innovation Corps), and seed money to get started is also available (e.g., NSF Small Business Innovation Research).

Looking for clues of sustainability often requires a qualitative approach to evaluation through capturing the story from the leadership team and participants. It also involves being on the lookout for unanticipated outcomes in addition to the deliberate avenues a project takes to ensure the longevity of the work.

Blog: Evaluating for Sustainability: How Can Evaluators Help?

Posted on February 17, 2016 in Blog

Research Analyst, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Developing a functional strategy to sustain crucial program components is often overlooked by project staff implementing ATE-funded initiatives. At the same time, evaluators may neglect the opportunity to provide value to decision makers regarding program components most vital to sustain. In this blog, I suggest a few strategies to avoid both of these traps, established through my work at Hezel Associates, specifically with colleague Sarah Singer.

Defining sustainability is a difficult task in its own right, often eliciting a plethora of interpretations that could be deemed “correct.” However, the most recent NSF ATE program solicitation specifically asks grantees to produce a “realistic vision for sustainability” and defines the term as meaning “a project or center has developed a product or service that the host institution, its partners, and its target audience want continued.” Two phrases jump out of this definition: realistic vision and what stakeholders want continued. NSF’s definition, and these terms in particular, frame my tips for evaluating for sustainability for an ATE project while addressing three common challenges.

Challenge 1: The project staff doesn’t know what components to sustain.

I use a logic model to address this problem. Returning to the definition of sustainability provided by the NSF ATE program, it’s possible to replace “product” with “outputs” and “service” with “activities” (taking some liberties here) to put things in terms common to typical logic models. This produces a visual tool useful for an open discussion with project staff regarding the products or services they want continued and which ones are realistic to continue. The exercise can identify program elements to assess for sustainability potential, while unearthing less obvious components not described in the logic model.

Challenge 2: Resources are not available to evaluate for sustainability.

Embedding data collection for sustainability into the evaluation increases efficiency. First, I create a specific evaluation question (or questions) focusing on sustainability, using what stakeholders want continued and what is realistic as a framework to generate additional questions. For example, “What are the effective program components that stakeholders want to see continued post-grant-funding?” and “What inputs and strategies are needed to sustain desirable program components identified by program stakeholders?” Second, I utilize the components isolated in the aforementioned logic model discussion to inform qualitative instrument design. I explore those components’ utility through interviews with stakeholders, eliciting informants’ ideas for how to sustain them. Information collected from interviews allows me to refine potentially sustainable components based on stakeholder interest, possibly using the findings to create questionnaire items for further refinement. I’ve found that resources are not an issue if evaluating for sustainability is planned accordingly.

Challenge 3: High-level decision makers are responsible for sustaining project outcomes or activities and they don’t have the right information to make a decision.

This is a key reason why evaluating for sustainability throughout the entire project is crucial. Ultimately, decision makers to whom project staff report determine which program components are continued beyond the NSF funding period. A report consisting of three years of sustainability-oriented data, detailing what stakeholders want continued while addressing what is realistic, allows project staff to make a compelling case to decision makers for sustaining essential program elements. Evaluating for sustainability supports project staff with solid data, enabling comparisons between more and less desirable components that can easily be presented to decision makers. For example, findings focusing on sustainability might help a project manager reallocate funds to support crucial components (perhaps sacrificing others); change staffing; replace personnel with technology (or vice versa); or engage partners to provide resources.

The end result could be realistic, data-supported strategies to sustain the program components that stakeholders want continued.

Blog: Show Me a Story: Using Data Visualization to Communicate Evaluation Findings

Posted on January 13, 2016 in Blog

Senior Research Associate, Education Development Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


It’s all too easy for our evaluation reports to become a lifeless pile of numbers that gather dust on a shelf. As evaluators and PIs, we want to tell our stories and we want those stories to be heard. Data visualizations (like graphs and infographics) can be powerful ways to share evaluation findings, quickly communicate key themes, and ultimately have more impact.

Communicating evaluation findings visually can also help your stakeholders become better data analysts themselves. I’ve found that when stakeholders see a graph showing survey results, they are much more likely to spend time examining the findings, asking questions, and thinking about what the results might mean for the project than if the same information is presented in a traditional table of numbers.

Here are a few tips to get you started with data visualization:

  • Start with the data story. Pick one key finding that you want to communicate to a specific group of stakeholders. What is the key message you want those stakeholders to walk away with?
  • Put the mouse down! When you’re ready to develop a data viz, start by sketching various ways of showing the story you want to tell on a piece of paper.
  • Use Stephanie Evergreen’s and Ann Emery’s checklist to help you plan and critique your data visualization: http://stephanieevergreen.com/dataviz-checklist/.
  • Once you’ve drafted your data viz, run it by one or two colleagues to get their feedback.
  • Some PIs, funders, and other stakeholders still want to see tables with all the numbers. We typically include tables with the complete survey results in an appendix.
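
If you build your charts in code, here is a minimal sketch of the first tip in practice, using Python with matplotlib. The survey items, numbers, and file name are entirely hypothetical, illustrative assumptions rather than data from any real evaluation; the point is simply to sort the bars, strip the clutter, and state the key message in the title.

import matplotlib.pyplot as plt

# Hypothetical survey results: percent of respondents agreeing with each statement.
items = ["Course content was relevant", "Labs matched industry tools",
         "Instructors were accessible", "Career advising was useful"]
percent_agree = [92, 85, 78, 54]  # illustrative values only

# Sort so the longest bar sits at the top and the story reads at a glance.
pairs = sorted(zip(percent_agree, items))
percent_agree = [p for p, _ in pairs]
items = [i for _, i in pairs]

fig, ax = plt.subplots(figsize=(7, 3))
ax.barh(items, percent_agree, color="#4C72B0")

# Lead with the takeaway instead of a generic "Survey Results" title.
ax.set_title("Participants rate course content highly; career advising lags")
ax.set_xlabel("Percent agreeing")
ax.set_xlim(0, 100)

# Remove chart junk so the message stands out.
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)

fig.tight_layout()
fig.savefig("survey_story.png", dpi=200)

The same pattern works in Excel, R, or whatever tool you already use: decide on the one-sentence story first, then make every element of the chart serve it.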


Finally, don’t expect to hit a home run your first time at bat. (I certainly didn’t!) You will get better as you become more familiar with the software you use to produce your data visualizations and as you solicit and receive feedback from your audience. Keep showing those stories!


Blog: Strategic Knowledge Mapping: A New Tool for Visualizing and Using Evaluation Findings in STEM

Posted on January 6, 2016 in Blog

Director of Research and Evaluation, Meaningful Evidence, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A challenge to designing effective STEM programs is that they address very large, complex goals, such as increasing the numbers of underrepresented students in advanced technology fields.

To design the best possible programs to address such a large, complex goal, we need a correspondingly large, complex understanding of the big picture. It’s like when medical researchers seek to develop a new cure: they need a deep understanding of how medications interact with the body and with other medications, and how they will affect patients based on age and medical history.

A new method, Integrative Propositional Analysis (IPA), lets us visualize and assess information gained from evaluations. (For details, see our white papers.) At the 2015 American Evaluation Association conference, we demonstrated how to use the method to integrate findings from the PAC-Involved (Physics, Astronomy, Cosmology) evaluation into a strategic knowledge map. (View the interactive map.)

A strategic knowledge map supports program design and evaluation in many ways.

Measures understanding gained.
The map is an alternative logic model format that provides broader and deeper understanding than usual logic model approaches. Unlike other modeling techniques, IPA lets us quantitatively assess information gained. Results showed that the new map incorporating findings from the PAC-Involved evaluation had much greater breadth and depth than the original logic model. This indicates increased understanding of the program, its operating environment, how they work together, and options for action.

Graphic 1

Shows what parts of our program model (map) are better understood.
In the figure below, the yellow shadow around the concept “Attendance/attrition challenges” indicates that this concept is better understood. We better understand something when it has multiple causal arrows pointing to it—like when we have a map that shows multiple roads leading to each destination.

Graphic 2

Shows what parts of the map are most evidence supported.
We have more confidence in causal links that are supported by data from multiple sources. The thick arrow below shows a relationship that many sources of evaluation data supported. All five evaluation data sources (project team interviews, the student focus group, a review of student reflective journals, observation, and student surveys) provided evidence that more experiments, demos, and hands-on activities caused students to be more engaged in PAC-Involved.

Graphic 3

Shows the invisible.
The map also helps us to “see the invisible.” If something does not have arrows pointing to it, we know that there is “something” that should be added to the map. This indicates that more research is needed to fill those “blank spots on the map” and improve our model.

Graphic 4

Supports collaboration.
The integrated map can support collaboration among the project team. We can zoom in to look at what parts are relevant for action.

Graphic 5

Supports strategic planning.
The integrated map also supports strategic planning. Solid arrows leading to our goals indicate things that help. Dotted lines show the challenges.

Graphic 6

Clarifies short-term and long-term outcomes.
We can create customized map views to show concepts of interest, such as outcomes for students and connections between the outcomes.

Graphic 7

We encourage you to add a Strategic Knowledge Map to your next evaluation. The evaluation team, project staff, students, and stakeholders will benefit tremendously.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Evaluator’s Perspective

Posted on December 16, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Manu Platt, Associate Professor
Ayesha Boyce, Assistant Professor, Department of Educational Research Methodology, University of North Carolina at Greensboro

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

In this second part of the conversation, a principal investigator (client) interviews the independent evaluator to unearth key points within our professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the principal investigator (PI)/client interviews the evaluator, and key takeaways are suggested for evaluation clients (see our prior post, in which the tables are turned).

Understanding of Evaluation

PI (Manu): What were your initial thoughts about evaluation before we began working together?

Evaluator (Ayesha): “I thought evaluation was this amazing field where you had the ability to positively impact programs. I assumed that everyone else, including my clients, would believe evaluation was just as exciting and awesome as I did.”

Key takeaway: Many evaluators are passionate about their work and ultimately want to provide valid and useful feedback to clients.

Evaluation Reports

PI: What were your initial thoughts when you submitted the evaluation reports to me and the rest of the leadership team?

Evaluator: “I thought you (stakeholders) were all going to rush to read them. I had spent a lot of time writing them.”

PI: Then you found out I wasn’t reading them.

Evaluator: “Yes! Initially I was frustrated, but I realized that maybe because you hadn’t been exposed to evaluation, that I should set up a meeting to sit down and go over the reports with you. I also decided to write brief evaluation memos that had just the highlights.”

Key takeaway: As a client, you may need to explicitly ask for the type of evaluation reporting that will be useful to you. You may need to let the evaluator know that it is not always feasible for you to read and digest long evaluation reports.

Ah ha moment!

PI: When did you have your “Ah ha! – I know how to make this evaluation useful” moment?

Evaluator: “I had two. The first was when I began to go over the qualitative formative feedback with you. You seemed really excited and interested in the data and recommendations.

“The second was when I began comparing your program to other similar programs I was evaluating. I saw that it was incredibly useful to you to see what their pitfalls and successful strategies were.”

Key takeaway: As a client, you should check in with the evaluator and explicitly state the type of data you find most useful. Don’t assume that the evaluator will know. Additionally, ask if the evaluator has evaluated similar programs and if she or he can give you some strengths and challenges those programs faced.