EvaluATE - Evaluation Use

Blog: Evaluating for Sustainability: How Can Evaluators Help?

Posted on February 17, 2016 in Blog

Research Analyst, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Project staff implementing ATE-funded initiatives often overlook developing a functional strategy to sustain crucial program components. At the same time, evaluators may miss the opportunity to provide value to decision makers regarding the program components most vital to sustain. In this blog, I suggest a few strategies for avoiding both of these traps, developed through my work at Hezel Associates, specifically with my colleague Sarah Singer.

Defining sustainability is a difficult task in its own right, often eliciting a plethora of interpretations that could be deemed “correct.” However, the most recent NSF ATE program solicitation specifically asks grantees to produce a “realistic vision for sustainability” and defines the term as meaning “a project or center has developed a product or service that the host institution, its partners, and its target audience want continued.” Two phrases jump out of this definition: realistic vision and what stakeholders want continued. NSF’s definition, and these terms in particular, frame my tips for evaluating for sustainability for an ATE project while addressing three common challenges.

Challenge 1: The project staff doesn’t know what components to sustain.

I use a logic model to address this problem. Returning to the definition of sustainability provided by the NSF ATE program, it’s possible to replace “product” with “outputs” and “service” with “activities” (taking some liberties here) to put things in terms common to typical logic models. This produces a visual tool useful for an open discussion with project staff about which products or services they want continued and which ones are realistic to continue. The exercise can identify program elements to assess for sustainability potential, while unearthing less obvious components not described in the logic model.

Challenge 2: Resources are not available to evaluate for sustainability.

Embedding data collection for sustainability into the evaluation increases efficiency. First, I create a specific evaluation question (or questions) focusing on sustainability, using what stakeholders want continued and what is realistic as a framework to generate additional questions. For example, “What are the effective program components that stakeholders want to see continued post-grant-funding?” and “What inputs and strategies are needed to sustain desirable program components identified by program stakeholders?” Second, I utilize the components isolated in the aforementioned logic model discussion to inform qualitative instrument design. I explore those components’ utility through interviews with stakeholders, eliciting informants’ ideas for how to sustain them. Information collected from interviews allows me to refine potentially sustainable components based on stakeholder interest, possibly using the findings to create questionnaire items for further refinement. I’ve found that resources are not an issue if evaluating for sustainability is planned accordingly.

Challenge 3: High-level decision makers are responsible for sustaining project outcomes or activities and they don’t have the right information to make a decision.

This is a key reason why evaluating for sustainability throughout the entire project is crucial. Ultimately, decision makers to whom project staff report determine which program components are continued beyond the NSF funding period. A report consisting of three years of sustainability-oriented data, detailing what stakeholders want continued while addressing what is realistic, allows project staff to make a compelling case to decision makers for sustaining essential program elements. Evaluating for sustainability supports project staff with solid data, enabling comparisons between more and less desirable components that can easily be presented to decision makers. For example, findings focusing on sustainability might help a project manager reallocate funds to support crucial components (perhaps sacrificing others), change staffing, replace personnel with technology (or vice versa), or engage partners to provide resources.

The end result could be realistic, data-supported strategies to sustain the program components that stakeholders want continued.

Blog: Show Me a Story: Using Data Visualization to Communicate Evaluation Findings

Posted on January 13, 2016 in Blog

Senior Research Associate, Education Development Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Graphic 1

It’s all too easy for our evaluation reports to become a lifeless pile of numbers that gather dust on a shelf. As evaluators and PIs, we want to tell our stories and we want those stories to be heard. Data visualizations (like graphs and infographics) can be powerful ways to share evaluation findings, quickly communicate key themes, and ultimately have more impact.

Communicating evaluation findings visually can also help your stakeholders become better data analysts themselves. I’ve found that when stakeholders see a graph showing survey results, they are much more likely to spend time examining the findings, asking questions, and thinking about what the results might mean for the project than if the same information is presented in a traditional table of numbers.

Here are a few tips to get you started with data visualization:

  • Start with the data story. Pick one key finding that you want to communicate to a specific group of stakeholders. What is the key message you want those stakeholders to walk away with?
  • Put the mouse down! When you’re ready to develop a data viz, start by sketching, on a piece of paper, various ways of showing the story you want to tell.
  • Use Stephanie Evergreen’s and Ann Emery’s checklist to help you plan and critique your data visualization: http://stephanieevergreen.com/dataviz-checklist/.
  • Once you’ve drafted your data viz, run it by one or two colleagues to get their feedback.
Some PIs, funders, and other stakeholders still want to see tables with all the numbers. We typically include tables with the complete survey results in an appendix.
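As a concrete illustration of the graph-versus-table point above, here is a minimal sketch in Python with matplotlib. The survey items and percentages are made up; the point is simply that a labeled, de-cluttered bar chart can stand in for a table of survey results.

import matplotlib.pyplot as plt

# Hypothetical survey results: percentage of respondents agreeing with each item
items = [
    "The workshop met my needs",
    "I plan to apply what I learned",
    "I would recommend it to a colleague",
]
percent_agree = [62, 78, 91]

fig, ax = plt.subplots(figsize=(7, 2.5))
ax.barh(items, percent_agree, color="#3182bd")

# Label each bar directly so readers don't have to trace values back to an axis
for y, value in enumerate(percent_agree):
    ax.text(value + 1, y, f"{value}%", va="center")

# Remove chart junk: frame, gridlines, and x-axis ticks
ax.set_xlim(0, 100)
ax.set_xticks([])
for spine in ax.spines.values():
    spine.set_visible(False)

# Put the data story in the title, not buried in a caption
ax.set_title("Most participants would recommend the workshop", loc="left")

fig.tight_layout()
fig.savefig("survey_results.png", dpi=200)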

Some of my favorite data viz resources:

For more design inspiration, check out:

Finally, don’t expect to hit a home run your first time at bat. (I certainly didn’t!) You will get better as you become more familiar with the software you use to produce your data visualizations and as you solicit and receive feedback from your audience. Keep showing those stories!

Graphic 2

Blog: Strategic Knowledge Mapping: A New Tool for Visualizing and Using Evaluation Findings in STEM

Posted on January 6, 2016 in Blog

Director of Research and Evaluation, Meaningful Evidence, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A challenge to designing effective STEM programs is that they address very large, complex goals, such as increasing the numbers of underrepresented students in advanced technology fields.

To design the best possible programs to address such a large, complex goal, we need an equally large, complex understanding, one that comes from looking at the big picture. It’s like when medical researchers develop a new cure: they need a deep understanding of how a medication interacts with the body and with other medications, and of how it will affect a patient given that patient’s age and medical history.

A new method, Integrative Propositional Analysis (IPA), lets us visualize and assess information gained from evaluations. (For details, see our white papers.) At the 2015 American Evaluation Association conference, we demonstrated how to use the method to integrate findings from the PAC-Involved (Physics, Astronomy, Cosmology) evaluation into a strategic knowledge map. (View the interactive map.)

A strategic knowledge map supports program design and evaluation in many ways.

Measures understanding gained.
The map is an alternative logic model format that provides broader and deeper understanding than usual logic model approaches. Unlike other modeling techniques, IPA lets us quantitatively assess information gained. Results showed that the new map incorporating findings from the PAC-Involved evaluation had much greater breadth and depth than the original logic model. This indicates increased understanding of the program, its operating environment, how they work together, and options for action.

Graphic 1

Shows what parts of our program model (map) are better understood.
In the figure below, the yellow shadow around the concept “Attendance/attrition challenges” indicates that this concept is better understood. We better understand something when it has multiple causal arrows pointing to it—like when we have a map that shows multiple roads leading to each destination.

Graphic 2

Shows what parts of the map are best supported by evidence.
We have more confidence in causal links that are supported by data from multiple sources. The thick arrow below shows a relationship supported by many sources of evaluation data. All five evaluation data sources (project team interviews, a student focus group, a review of student reflective journals, observation, and student surveys) provided evidence that more experiments/demos/hands-on activities caused students to be more engaged in PAC-Involved.

Graphic 3

Shows the invisible.
The map also helps us to “see the invisible.” If a concept has no arrows pointing to it, we know that “something” that explains it is missing from the map. This indicates that more research is needed to fill those “blank spots on the map” and improve our model.

Graphic 4
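For readers who want to see the arrow-counting logic spelled out, here is a rough sketch in Python. It uses a made-up miniature map and simple counts; the authors’ actual IPA breadth and depth formulas are described in their white papers and are not reproduced here.

from collections import defaultdict

# Hypothetical miniature causal map: (cause, effect, number of evidence sources)
links = [
    ("Hands-on experiments/demos", "Student engagement", 5),
    ("Student engagement", "Attendance/attrition challenges", 2),
    ("Scheduling conflicts", "Attendance/attrition challenges", 1),
    ("Student engagement", "Interest in STEM careers", 1),
]

concepts = {c for link in links for c in link[:2]}
incoming = defaultdict(list)
for cause, effect, sources in links:
    incoming[effect].append((cause, sources))

# Breadth: how many concepts the map covers
print(f"Concepts in map: {len(concepts)}")

# Concepts with 2+ incoming arrows are better understood (multiple "roads" lead to them)
better_understood = [c for c in concepts if len(incoming[c]) >= 2]
print("Better-understood concepts:", better_understood)

# Concepts with no incoming arrows are "blank spots" needing more research
blank_spots = [c for c in concepts if len(incoming[c]) == 0]
print("Blank spots on the map:", blank_spots)

# Links backed by the most evaluation data sources deserve the thickest arrows
best_evidenced = max(links, key=lambda link: link[2])
print("Best-evidenced link:", best_evidenced)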

Supports collaboration.
The integrated map can support collaboration among the project team. We can zoom in to look at what parts are relevant for action.

Graphic 5

Supports strategic planning.
The integrated map also supports strategic planning. Solid arrows leading to our goals indicate things that help. Dotted lines show the challenges.

Graphic 6

Clarifies short-term and long-term outcomes.
We can create customized map views to show concepts of interest, such as outcomes for students and connections between the outcomes.

Graphic 7

We encourage you to add a strategic knowledge map to your next evaluation. The evaluation team, project staff, students, and other stakeholders can all benefit.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Evaluator’s Perspective

Posted on December 16, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Manu Platt and Ayesha Boyce

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

In this second part of the conversation, a principal investigator (the client) interviews the independent evaluator to unearth key points in our professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the principal investigator (PI)/client interviews the evaluator, and key takeaways are suggested for evaluation clients (see our prior post, in which the tables are turned).

Understanding of Evaluation

PI (Manu): What were your initial thoughts about evaluation before we began working together?

Evaluator (Ayesha): “I thought evaluation was this amazing field where you had the ability to positively impact programs. I assumed that everyone else, including my clients, would believe evaluation was just as exciting and awesome as I did.”

Key takeaway: Many evaluators are passionate about their work and ultimately want to provide valid and useful feedback to clients.

Evaluation Reports

PI: What were your initial thoughts when you submitted the evaluation reports to me and the rest of the leadership team?

Evaluator: “I thought you (stakeholders) were all going to rush to read them. I had spent a lot of time writing them.”

PI: Then you found out I wasn’t reading them.

Evaluator: “Yes! Initially I was frustrated, but I realized that, maybe because you hadn’t been exposed to evaluation before, I should set up a meeting to sit down and go over the reports with you. I also decided to write brief evaluation memos that had just the highlights.”

Key takeaway: As a client, you may need to explicitly ask for the type of evaluation reporting that will be useful to you. You may need to let the evaluator know that it is not always feasible for you to read and digest long evaluation reports.

Ah ha moment!

PI: When did you have your “Ah ha! – I know how to make this evaluation useful” moment?

Evaluator: “I had two. The first was when I began to go over the qualitative formative feedback with you. You seemed really excited and interested in the data and recommendations.

“The second was when I began comparing your program to other similar programs I was evaluating. I saw that it was incredibly useful to you to see what their pitfalls and successful strategies were.”

Key takeaway: As a client, you should check in with the evaluator and explicitly state the type of data you find most useful. Don’t assume that the evaluator will know. Additionally, ask if the evaluator has evaluated similar programs and if she or he can give you some strengths and challenges those programs faced.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Principal Investigator’s Perspective

Posted on December 10, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Ayesha Boyce and Manu Platt

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

In this blog post, an independent evaluator and a principal investigator (the client) interview each other to unearth key points in their professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the evaluator interviews the client, and key takeaways are suggested for evaluators (watch for our follow-up post, in which the tables are turned).

Understanding of Evaluation

Evaluator (Ayesha): What were your initial thoughts about evaluation before we began working together?
PI (Manu): “Before this I had no idea about evaluation, never thought about it. I had probably been involved in some before as a participant or subject but never really thought about it.”

Key takeaway: Clients have different experiences with evaluation, which can make it harder for them to initially appreciate the power of evaluation.

Evaluation Reports

Evaluator: What were your initial thoughts about the evaluation reports provided to you?
PI: “So for the first year, I really didn’t look at them. And then you would ask, ‘Did you read the evaluation report?’ and I responded, ‘uuuuhhh… no.’”

Key takeaway: Don’t assume that your client is reading your evaluation reports. It might be necessary to check in with them to ensure utilization.

Evaluator: Then I pushed you to read them thoroughly, and what happened?
PI: “Well, I heard the way you put it and thought, ‘Oh, I should probably read it.’ I found out that it was part of your job and not just your Ph.D. project, and it became more important. Then when I read it, it was interesting! Part of the thing I noticed – you know, we’re three institutions partnering – was what people thought about the other institutions. I was hearing from some of the faculty at the other institutions about the program. I love the qualitative data even more nowadays. That’s the part that I care about the most.”

Key takeaway: Check with your client to see what type of data and what structure of reporting they find most useful. Sometimes a final summative report isn’t enough.

Ah ha moment!

Evaluator: When did you have your “Ah ha! – the evaluation is useful” moment?
PI: “I had two. I realized as diversity director that I was the one who was supposed to stand up and comment on evaluation findings to the National Science Foundation representatives during the project’s site visit. I would have to explain the implementation, satisfaction rate, and effectiveness of our program. I would be standing there alone trying to explain why there was unhappiness here, or why the students weren’t going into graduate school at these institutions.

“The second was, as you’ve grown as an evaluator and worked with more and more programs, you would also give us comparisons to other programs. You would say things like, ‘Oh, other similar programs have had these issues and they’ve done these things. I see that they’re different from you in these aspects, but this is something you can consider.’ Really, the formative feedback has been so important.”

Key takeaway: You may need to talk to your client about how they plan to use your evaluation results, especially when it comes to being accountable to the funder. Also, if you evaluate similar programs, it can be valuable to share triumphs and challenges across programs (without compromising confidentiality; share feedback without naming specific programs).

Blog: Evaluation’s Role in Retention and Cultural Diversity in STEM

Posted on October 28, 2015 in Blog

Research Associate, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Recently, I attended the Building Pathways and Partnerships in STEM for a Global Network conference, hosted by the State University of New York (SUNY) system. It focused on innovative practices in STEM higher education, centered on increasing retention, completion, and cultural diversity.

As an evaluator, it was enlightening to hear about new practices being used by higher education faculty and staff to encourage students, particularly students in groups traditionally underrepresented in STEM, to stay enrolled and get their degrees. These included:

  • Research opportunities! Students should be exposed to real research if they are going to engage in STEM. This is important not only for four-year degree students, but also for community college students, whether they plan to continue their education or move into the workforce.
  • Internships (PAID!) are crucial for gaining practical experience before entering the workforce.
  • Partnerships, partnerships, partnerships. Internships and research opportunities are most useful if they are with organizations outside of the school. This means considerable outreach and relationship-building.
  • One-on-one peer mentoring. Systems in which upper-level students work directly with new students to help them get through tough classes or labs have been shown to keep students enrolled, not only in STEM programs but in college in general.

The main takeaway from this conference is that the SUNY system is being more creative in engaging students in STEM. They are making a concerted effort to help underrepresented students. This trend is not limited to NY—many colleges and universities are focusing on these issues.

What does all this mean for evaluation? Evidence is more important than ever to sort out what types of new practices work and for whom. Evaluation designs and methods need to be just as innovative as the programs they are reviewing. As evaluators, we need to channel program designers’ creativity and apply our knowledge in useful ways. Examples include:

  • Being flexible. Many methods are brand new, or new to the institution or department, so implementers may tweak them along the way. This means we need to pay attention to how we assess outcomes, perhaps taking guidance from Patton’s Developmental Evaluation work.
  • Considering cultural viewpoints. We should always be mindful of the diversity of perspectives and backgrounds when developing instruments and data collection methods. This is especially important when programs are meant to improve underrepresented groups’ outcomes. Think about how individuals will be able to access an instrument (online, paper) and pay attention to language when writing questionnaire items. The American Evaluation Association provides useful resources for this: http://aea365.org/blog/faheemah-mustafaa-on-pursuing-racial-equity-in-evaluation-practice/
  • Thinking beyond immediate outcomes. What do students accomplish in the long term? Do they go on to get higher degrees? Do they get jobs that fit their expectations? If you can’t measure these due to budget or timeline constraints, help institutions design ways to do so themselves. It can help them continue to identify program strengths and weaknesses.

Keep these in mind, and your evaluation can provide valuable information for programs geared to make a real difference.

Blog: Changing Focus Mid-Project

Posted on September 30, 2015 in Blog

Physics Instructor, Spokane Falls Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Along with co-PIs Michelle Moore and Max Josquin, I am a recent recipient of an NSF ATE grant aimed at increasing female enrollment and retention in my college’s Information Systems (IS) program. Our year one activities included creating a daylong Information Technology (IT) camp for incoming eighth and ninth grade young women.

LogoCamp

In our original plan, we had set aside money for five IS college students to help us for eight hours during the summer camp. We decided to meet weekly with the students during the months leading up to our event to stay on task and schedule.

1st surprise: Nine students showed up to the initial meeting, and eight of those remained with us for the project’s duration.

2nd surprise: Instead of waiting for our guidance, the students went off and did their own research and then presented a day-long curriculum that would teach hardware, software, and networking by installing and configuring the popular game Minecraft on Raspberry Pi microcomputers.

MineCraft

3rd surprise: When asked to think about marketing, the students showed us a logo and a flyer that they had already designed. They wanted T-shirts with the new logo for each of the campers. And they wanted each camper to be able to take home their Raspberry Pi.

ConfiguringRaspberryPi

At this point, it was very clear to my colleagues and me that we should take a step back and let the students run the show. We helped them create lesson plans to achieve the outcomes they wanted, but they took ownership of everything else. We had to set up registration and advertising, but on the day of the camp, the students were the ones in the classroom teaching the campers. My colleagues and I were the gofers who collected permission slips, got snacks ready, and picked up pizza for lunch.

Perhaps our biggest surprise came when our external evaluator, Terryll Bailey, showed us the IS college student survey results:

“96.8% of the volunteers indicated that participating as a Student Instructor increased their confidence in teamwork and leadership in the following areas:

  • Taking a leadership role.
  • Driving a project to completion.
  • Expressing your point of view while taking into account the complexities of a situation.
  • Synthesizing others’ points of view with your own ideas.
  • Coming up with creative ideas that take into account the complexities of the situation.
  • Helping a team move forward by articulating the merits of alternative ideas or proposals.
  • Engaging team members in ways that acknowledge their contributions by building on or synthesizing the contributions of others.
  • Providing assistance or encouragement to team members.

All eight (100%) indicated that their confidence increased in providing assistance or encouragement to team members.”

For year two of our grant, we’re moving resources around in order to pay more students for more hours. We are partnering with community centers and middle schools to use our IS college students as mentors. We hope to formalize this such that our students can receive internship credits, which are required for their degree.

Our lessons learned during this first year of the grant include being open to change and being willing to relinquish control. We are also happy that we decided to work with an external evaluator, even though our grant is a small grant for institutions new to ATE. Because of the questions our evaluator asked, we have the data to justify moving resources around in our budget.

If you want to know more about how Terryll and I collaborated on the evaluation plan and project proposal, check out this webinar, in which we discuss how to find the right external evaluator for your project: Your ATE Proposal: Got Evaluation?

You may contact the author of this blog entry at: asa.bradley@sfcc.spokane.edu

Blog: Creation, Dissemination, and Accessibility of ATE-Funded Resources

Posted on July 15, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Kendra Bouda, Metadata and Information Specialist, Internet Scout Research Group, University of Wisconsin-Madison
Rachael Bower, Director/PI, Internet Scout Research Group, University of Wisconsin-Madison

As most ATE community members are aware, the National Science Foundation requires that all grant applicants provide a one- to two-page data management plan describing how the grantee’s proposal will meet NSF guidelines on the dissemination of grant-funded work. In 2014, NSF added a new requirement to the ATE solicitation mandating that newly funded grantees archive their deliverables with ATE Central.

We were curious to find out more about the materials created within the ATE community. So, when EvaluATE approached us about including questions related to data management planning and archiving in their annual survey of ATE grantees, we jumped at the chance. We had an interest in discovering not only what resources have been created, but also how those resources are disseminated to larger audiences. Additionally, we hoped to discover whether grantees are actively making their materials web accessible to users with disabilities—a practice that ensures access by the broadest possible audience.

The survey responses highlight that the most widely created materials include (not surprisingly) curriculum and professional development materials, with newsletters and journal articles bringing up the rear. Other materials created by the ATE community include videos, white papers and reports, data sets, and webinars.

However, although grantees are creating a lot of valuable resources, they may not be sharing them widely and, in some cases, may be unsure of how best to make them available after funding ends. The graphs below illustrate the availability of these materials, both currently and after grant funding ends.

Bouda Chart

Data from the annual survey shows that 65 percent of respondents are aware of accessibility standards—specifically Section 508 of the Rehabilitation Act; however, 35 percent are not. Forty-eight percent of respondents indicated that some or most of their materials are accessible, while another 22 percent reported that all materials generated by their project or center adhere to accessibility standards. Happily, only 1 percent of respondents reported that their materials do not adhere to standards; however, 29 percent are unsure whether their materials adhere to those standards or not.

For more information about accessibility, visit the official Section 508 site, the World Wide Web Consortium’s (W3C) Accessibility section, or the Web Content Accessibility Guidelines 2.0 area of W3C.

Many of us struggle with issues related to sustaining our resources, which is part of the reason we are all asked by NSF to create a data management plan. To help PIs plan for long-term access, ATE Central offers an assortment of free services. Specifically, ATE Central supports data management planning efforts, provides sustainability training, and archives materials created by ATE projects and centers, ensuring access to these materials beyond the life of the project or center that created them.

For more about ATE Central, check out our suite of tools, services, and publications or visit our website. If you have questions or comments, contact us at info@atecentral.net.

Blog: Examining the Recruitment and Retention of Underrepresented Minority Students in the ATE Program

Posted on June 10, 2015 in Blog

Doctoral Associate, EvaluATE, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One of the objectives NSF has for its funded programs is to broaden participation in STEM. Broadening participation entails increasing the number of underrepresented minorities (URMs) in STEM programs of study, employment, and research. NSF defines URMs as “women, persons with disabilities, and three racial/ethnic groups—blacks, Hispanics, and American Indians” (NSF, 2013, p. 2). Lori Wingate and I recently wrote a paper examining the strategies used by ATE grantees for recruiting and retaining URM students and how effective they perceive these strategies to be. Each year, the annual ATE survey collects demographic data on student enrollment in ATE programming. We noticed that when compared with national data, ATE programs were doing well when it came to enrolling URM students, especially African-American students. So we decided to investigate what strategies ATE programs were using to recruit and to retain these students. This study was based on data from the 2013 survey of ATE principal investigators.

Our survey asked about 10 different strategies. The strategies were organized into a framework consisting of three parts: motivation and access, social and academic support, and affordability,[1] as presented in the figure below. The percentages and associated bars represent the proportion of grantees who reported using a particular strategy. The gray lines and orange dots represent the rank of perceived impact, where 1 is the highest reported impact and 10 is the lowest.

Corey Chart

Overall, we found that ATE projects and centers were using strategies related to motivation and access more than those related to either social/academic support or affordability. These types of strategies are also more focused on recruiting students as opposed to retaining them. It was interesting that there was a greater emphasis on recruitment strategies, particularly because many of these strategies ranked low in terms of perceived impact. In fact, when we compared the overall perceptions of effectiveness to the actual use of particular strategies, we found that many of the strategies perceived to have the highest impact on improving the participation of URM students in ATE programs were being used the least.

Although they are based on the observations of a wide range of practitioners engaged in the daily work of technological education, perceptions of impact are just that: perceptions. The findings must be interpreted with caution. These data raise the question of whether ATE grantees are using the most effective strategies available to them for increasing the participation of URM students.

With the improving economy, enrollment at community colleges is down, putting programs with low enrollment at risk of being discontinued. This makes it ever more important not only to continue to enhance the recruitment of students to ATE programs, but also to use effective and cost-efficient strategies to retain them from year to year.

[1] Hrabowski, F. A., et al. (2011). Expanding underrepresented minority participation: America’s science and technology talent at the crossroads. Washington, DC: National Academies Press. (Available from the National Academies Press; a free account is required to access the report.)

Blog: Adapting Based on Feedback

Posted on May 13, 2015 in Blog

Director, South Carolina Advanced Technological Education Center of Excellence, Florence-Darlington Technical College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The Formative Assessment Systems for ATE project (FAS4ATE) focuses on assessment practices that serve the ongoing evaluation needs of projects and centers. Determining these information needs and organizing data collection activities is a complex and demanding task, and we’ve used logic models as a way to map them out. Over the next five weeks, we offer a series of blog posts that provide examples and suggestions of how you can make formative assessment part of your ATE efforts. – Arlen Gullickson, PI, FAS4ATE

Week 4 – Why making changes based on evidence is important

At the Mentor-Connect: Leadership Development and Outreach for ATE project (www.Mentor-Connect.org), formative feedback guides the activities we provide and the resources we develop. It is the compass that keeps us heading in the direction of greatest impact. I’ll share three examples of how feedback at different stages of the project’s life cycle helped us adapt: the first came from an outside source; the other two were based on our internal feedback processes.

Craft LM1 Pic

The initial Mentor-Connect technical assistance workshop for each cohort focuses on developing grant writing skills for the NSF ATE program. The workshop was originally designed to serve teams of two STEM faculty members from participant colleges; however, we were approached by grant writers from those colleges who also wanted to attend. On a self-pay basis, we welcomed these additional participants. Post-workshop surveys and conversations with grant writers at the event indicated that during the workshop we should offer a special breakout session just for grant writers so that issues specific to their role in the grant development and submission process could be addressed. This breakout session was added and is now integral to our annual workshop.

Craft LM2 Pic

Second, feedback from our mentors about our activities caused us to change the frequency of our face-to-face workshops. Mentors reported that the nine-month time lag between the project’s January face-to-face workshop with mentors and the college team’s submission of a proposal the following October made it hard to maintain momentum. Mentors yearned for more face-to-face time with their mentees and vice versa. As a result, a second face-to-face workshop was added the following July. Evaluation feedback from this second gathering of mentors and mentees was resoundingly positive. This second workshop is now incorporated as a permanent part of Mentor-Connect’s annual programming.

Craft LM3 pic

Finally, one of our project outputs helps us keep our project on track. We use a brief reporting form that indicates a team’s progress along a grant development timeline. Mentors and their mentees independently complete and submit the same form. When both responses indicate “ahead of schedule,” “on time,” or even “behind schedule,” this consensus is an indicator of good communication between the mentor and his or her college team; they are on the same page. If we observe a disconnect between the mentee’s and mentor’s progress reports, this provides an early alert to the Mentor-Connect team that an intervention may be needed with that mentee/mentor team. Most interventions prompted by this feedback process have been effective in getting the overall proposal back on track for success.
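As a rough illustration of this early-alert logic (this is not Mentor-Connect’s actual form or tooling; the statuses and function below are hypothetical), the check boils down to comparing the two independently submitted statuses and flagging any disconnect:

from typing import Optional

# Ordered progress statuses from the (hypothetical) reporting form
STATUSES = ["behind schedule", "on time", "ahead of schedule"]

def check_team(mentor_status: str, mentee_status: str) -> Optional[str]:
    """Return an alert message if mentor and mentee reports disagree, else None."""
    if mentor_status == mentee_status:
        # Consensus (even "behind schedule") signals good mentor/mentee communication
        return None
    gap = abs(STATUSES.index(mentor_status) - STATUSES.index(mentee_status))
    return (f"Disconnect (gap of {gap}): mentor reports '{mentor_status}', "
            f"mentee reports '{mentee_status}' -- consider an intervention.")

# Example: one team in agreement, one showing a disconnect
print(check_team("on time", "on time"))                   # None -> no action needed
print(check_team("ahead of schedule", "behind schedule"))  # alert for follow-up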

With NSF ATE projects, PIs have the latitude and are expected to make adjustments to improve project outcomes. After all, it is a grant and not a contract. NSF expects you to behave like a scientist and adjust based on evidence. So, don’t be glued to your original plan! Change can be a good thing. The key is to listen to those who provide feedback, study your evaluation data, and adjust accordingly.