Ayesha Boyce

Assistant Professor, Department of Educational Research Methodology, University of North Carolina at Greensboro

Dr. Ayesha Boyce received her Ph.D. in Educational Psychology with a specialization in Evaluation from the University of Illinois Urbana-Champaign. She is an assistant professor at the University of North Carolina at Greensboro. Her research focuses on addressing issues related to diversity, equity, access, climate, and cultural responsiveness while judging the quality of implementation, effectiveness, impact, and institutionalization of educational programs, especially those that are multi-site and/or STEM-focused. Dr. Boyce has evaluated many programs funded by the National Science Foundation, the National Institutes of Health, Title VI, and others. She is the Chair of the American Evaluation Association STEM Topical Interest Group (TIG).


Blog: Attending to culture, diversity, and equity in STEM program evaluation (Part 2)

Posted on May 9, 2018 by Ayesha Boyce in Blog


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In my previous post, I gave an overview of two strategies you can use to inform yourself about the theoretical aspects of engaging with culture, diversity, and equity in evaluation. I now present two practical strategies, which I believe should follow the theoretical ones.

Strategy three: Engage with related sensitive topics informally

To begin to feel comfortable with these topics, engage with them during interactions with your evaluation team members, clients, and other stakeholders. Evaluators should acknowledge differing stakeholder opinions while also attempting to help stakeholders surface their own values, prejudices, and subjectivities (Greene, Boyce, & Ahn, 2011).

To do this, bring up issues of race, power, inequity, diversity, and culture for dialogue in meetings, emails, and conversations (Boyce, 2017). Call out and discuss microaggressions (Sue, 2010) and practice acts of microvalidation (Packard, Gagnon, LaBelle, Jeffers, & Lynn, 2011). For example, when meeting with clients, you might ask them to discuss how they plan to ensure not just diversity but also inclusivity within their program. You can also ask them to chart program goals through a logic model and to consider whether underrepresented participants might experience the program differently than majority participants. Finally, ask clients whether they have considered cultural sensitivity training for program managers and/or participants.

Strategy four: Attend to issues of culture, equity, and diversity formally

Numerous scholars have addressed the implications of cultural responsiveness in practice (Frierson, Hood, Hughes, & Thomas, 2010; Hood, Hopson, & Kirkhart, 2015), with some encouraging contemplation of threats to, as well as evidence for, multicultural validity by examining relational, consequential, theoretical, experiential, and methodological justificatory perspectives (Kirkhart, 2005, 2010). I believe the ultimate goal is to attend to culture and context in all formal aspects of research and evaluation. It is especially important to take a strengths-based, anti-deficit approach (Chun & Evans, 2009) and to focus on intersectionality in research (Collins, 2000).

To do this, you can begin with the framing of the program goals. Many programs aim to give underrepresented minorities in STEM the skills to survive in the field; this perspective assumes that something is inherently wrong with these students. Instead, think about rewording evaluation questions to examine the culture of the department or program and to explore why more members of underrepresented groups do not thrive (at least at rates on par with their share of the population). Further, evaluators can include these topics in evaluation questions, develop culturally commensurate data collection instruments, and stay sensitive to these issues during data collection, analysis, and reporting. Challenge yourself to treat this attention not merely as the inclusion of symbolic and politically correct buzzwords (Boyce & Chouinard, 2017), but as a true infusion of these aspects into your practice. For example, I always include an evaluation question about diversity, equity, and culture in my evaluation plans.

These two blog posts are really just the tip of the iceberg. I hope you find these strategies useful as you begin to engage with culture, equity, and diversity in your work. As I previously noted, I have included citations throughout so that you can read more about these important concepts. In a recently published article, my colleague Jill Anne Chouinard and I discuss how we trained evaluators to work through these strategies in a Culturally Responsive Approaches to Research and Evaluation course (Boyce & Chouinard, 2017).


Blog: Attending to culture, diversity, and equity in STEM program evaluation (Part 1)

Posted on May 1, 2018 by Ayesha Boyce in Blog


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The conversation, both practical and theoretical, surrounding culture, diversity, and equity in evaluation has grown in recent years. As many STEM education programs aim to broaden the participation of women, ethnic minority groups, and persons with disabilities, attention to culture, diversity, and equity is paramount. In two blog posts, I will provide a brief overview of four strategies for engaging meaningfully and respectfully with these important topics. This first post focuses on strategies that will help you learn more about these issues; they are theoretical rather than directly tied to evaluation practice. I should note that I have purposely included a number of citations so that you may read further about these topics.

Strategy one: Recognize social inquiry is a cultural product

Social science knowledge of minority populations, constructed with narrow worldviews, has demeaned characteristics, distorted interpretations of conditions and potential, and remained limited in its capacity to inform efforts to improve the life chances of historically disadvantaged populations (Ladson-Billings, 2000). Begin by educating yourself about the roles that communicentric bias—the tendency to make one’s own community, often the majority class, the center of the conceptual frame that constrains all thought (Gordon, Miller, & Rollock, 1990)—and individual, institutional, societal, and civilizational racism play in education and the social sciences (Scheurich & Young, 2002). Seek to understand the culture, context, historical perspectives, power, oppressions, and privilege at work in each new setting (Greene, 2005; Pon, 2009).

To do this, you can read and discuss books, articles, and chapters related to epistemologies—theories of knowledge—of difference, racialized discourses, and critiques about the nature of social inquiry. Some excellent examples include Stamped from the Beginning by Ibram X. Kendi, The Shape of the River by William G. Bowen and Derek Bok, and Race Matters by Cornel West. Each of these books is illuminating and a must-read as you begin or continue your journey to better understand race and privilege in America. Perhaps start a book club so that you can process these ideas with colleagues and friends.

Strategy two: Locate your own values, prejudices, and identities

The lens through which we view the world influences all evaluation processes, from design to implementation and interpretation (Milner, 2007; Symonette, 2015). In order to think critically about issues of culture, power, equity, class, race, and diversity, evaluators should understand their own personal and cultural values (Symonette, 2004). As Peshkin (1988) has noted, the practice of locating oneself can result in a better understanding of one’s own subjectivities. In my own work, I always attempt to acknowledge the role that my education, gender, class, and ethnicity play.

To do this, you can reflect on your own educational background, personal identities, experiences, values, prejudices, predispositions, beliefs, and intuitions. Focus on your own social identity, the identities of others, whether you belong to any groups with power and privilege, and how your background and identities shape your beliefs, your role as an evaluator, and your experiences. To unearth some of your more deeply held values, you might consider participating in a privilege walk exercise and reflecting on your responses to current events.

These two strategies are just the beginning. In my second blog post, I will focus on engaging with these topics informally and formally within your evaluation practice.


Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Evaluator’s Perspective

Posted on December 16, 2015 by Manu Platt and Ayesha Boyce in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

In this second part of the conversation, the principal investigator (PI)/client interviews the independent evaluator to unearth key points within our professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. Key takeaways are suggested for evaluation clients (see our prior post, in which the tables are turned).

Understanding of Evaluation

PI (Manu): What were your initial thoughts about evaluation before we began working together?

Evaluator (Ayesha): “I thought evaluation was this amazing field where you had the ability to positively impact programs. I assumed that everyone else, including my clients, would believe evaluation was just as exciting and awesome as I did.”

Key takeaway: Many evaluators are passionate about their work and ultimately want to provide valid and useful feedback to clients.

Evaluation Reports

PI: What were your initial thoughts when you submitted the evaluation reports to me and the rest of the leadership team?

Evaluator: “I thought you (stakeholders) were all going to rush to read them. I had spent a lot of time writing them.”

PI: Then you found out I wasn’t reading them.

Evaluator: “Yes! Initially I was frustrated, but I realized that, because you hadn’t been exposed to evaluation, I should set up a meeting to sit down and go over the reports with you. I also decided to write brief evaluation memos with just the highlights.”

Key takeaway: As a client, you may need to explicitly ask for the type of evaluation reporting that will be useful to you. You may need to let the evaluator know that it is not always feasible for you to read and digest long evaluation reports.

Ah ha moment!

PI: When did you have your “Ah ha! – I know how to make this evaluation useful” moment?

Evaluator: “I had two. The first was when I began to go over the qualitative formative feedback with you. You seemed really excited and interested in the data and recommendations.

“The second was when I began comparing your program to other similar programs I was evaluating. I saw that it was incredibly useful to you to see what their pitfalls and successful strategies were.”

Key takeaway: As a client, you should check in with the evaluator and explicitly state the type of data you find most useful. Don’t assume that the evaluator will know. Additionally, ask whether the evaluator has evaluated similar programs and whether she or he can share some of the strengths and challenges those programs faced.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Principal Investigator’s Perspective

Posted on December 10, 2015 by Ayesha Boyce and Manu Platt in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

In this blog post, an independent evaluator and a principal investigator (client) interview each other to unearth key points in their professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the evaluator interviews the client, and key takeaways are suggested for evaluators (watch for our follow-up post, in which the tables are turned).

Understanding of Evaluation

Evaluator (Ayesha): What were your initial thoughts about evaluation before we began working together?
PI (Manu): “Before this I had no idea about evaluation, never thought about it. I had probably been involved in some before as a participant or subject but never really thought about it.”

Key takeaway: Clients come to a project with varying experience of evaluation, which can make it harder for them to appreciate its power at first.

Evaluation Reports

Evaluator: What were your initial thoughts about the evaluation reports provided to you?
PI: “So for the first year, I really didn’t look at them. And then you would ask, ‘Did you read the evaluation report?’ and I responded, ‘uuuuhhh… no.’”

Key takeaway: Don’t assume that your client is reading your evaluation reports. It might be necessary to check in with them to ensure utilization.

Evaluator: Then I pushed you to read them thoroughly, and what happened?
PI: “Well, I heard the way you put it and thought, ‘Oh, I should probably read it.’ I found out that it was part of your job and not just your Ph.D. project, and it became more important. Then when I read it, it was interesting! Part of the thing I noticed – you know, we’re three institutions partnering – was what people thought about the other institutions. I was hearing from some of the faculty at the other institutions about the program. I love the qualitative data even more nowadays. That’s the part that I care about the most.”

Key takeaway: Check with your client to see what type of data and what structure of reporting they find most useful. Sometimes a final summative report isn’t enough.

Ah ha moment!

Evaluator: When did you have your “Ah ha! – the evaluation is useful” moment?
PI: “I had two. I realized as diversity director that I was the one who was supposed to stand up and comment on evaluation findings to the National Science Foundation representatives during the project’s site visit. I would have to explain the implementation, satisfaction rate, and effectiveness of our program. I would be standing there alone trying to explain why there was unhappiness here, or why the students weren’t going into graduate school at these institutions.

“The second was, as you’ve grown as an evaluator and worked with more and more programs, you would also give us comparisons to other programs. You would say things like, ‘Oh, other similar programs have had these issues and they’ve done these things. I see that they’re different from you in these aspects, but this is something you can consider.’ Really, the formative feedback has been so important.”

Key takeaway: You may need to talk to your client about how they plan to use your evaluation results, especially when it comes to being accountable to the funder. Also, if you evaluate similar programs, it can be valuable to share triumphs and challenges across them, without naming specific programs, so that each program’s confidentiality is preserved.