EvaluATE - General Issues

Blog: Evaluation Training and Professional Development

Posted on October 7, 2015 by Cheryl Endres in Blog

Doctoral Associate, EvaluATE

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello ATE Community!

My name is Cheryl Endres, and I am the new blog editor and doctoral associate for EvaluATE. I am a doctoral student in the Interdisciplinary Ph.D. in Evaluation program at Western Michigan University. To help me learn more about ATE and identify blog topics, we at EvaluATE took a closer look at some results from the survey conducted by EvaluATE’s external evaluator. As you can see from the chart below, the majority of ATE evaluators have gained their evaluation knowledge on the job, through self-study, and via nonacademic professional development. Knowing this gives us some idea about additional resources for building your evaluation “toolkit.”

[Chart: How ATE evaluators acquired their evaluation knowledge]

It may be difficult for practicing evaluators to take time for formal, graduate-level coursework. Fortunately, there are abundant opportunities just a click away on the Internet! As the evaluation field continues to expand, so do the options, both online and in person, for building your evaluation knowledge base. Since wading through the array of choices can be daunting, we have compiled a short list to get you started:

  • The EvaluATE webinars (evalu-ate.org/category/webinars/) are a great place to start for information specific to evaluation in the ATE context.
  • The American Evaluation Association’s “Learn” tab provides information about the Coffee Break Webinar series, eStudies, and the Summer Evaluation Institute. There are also links to online and in-person events around the country (and the world) and to university programs, some of which offer certificate programs in evaluation in addition to degree programs (master’s or doctoral level). The AEA annual conference in November is also a great option, offering an array of preconference workshops: eval.org
  • The Canadian Evaluation Society offers free webinars to members. The site includes archived webinars as well: evaluationcanada.ca/professional-learning
  • The Evaluators’ Institute at George Washington University offers in-person institutes in Washington, D.C., in February and July, as well as four different certificates in evaluation. Check out the schedules at tei.gwu.edu
  • EvalPartners has a number of free e-learning programs: mymande.org/elearning

These should get you started. If you find other good sources, please email me at cheryl.endres@wmich.edu.

Blog: Evaluation and Planning: Partners Through the Journey

Posted on August 19, 2015 in Blog

Director, Collaborative for Evaluation and Assessment Capacity, University of Pittsburgh

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation goes hand in hand with good planning—and, in turn, with good implementation. To plan well, you need to know your priority needs (a good needs assessment is critical and often the backbone of planning and efficient use of resources!), and to implement well, you need to know about both process and outcomes. In our complex world, it’s not usually enough to simply claim an outcome—the first question after that is usually, “How did you accomplish that?” Evaluations that are integrated with both planning and implementation can better address those questions and support a strong organizational learning agenda.

Often in grant-funded work, evaluators are asked to come in quite late in the process—to evaluate a program or intervention already in action, after funding and programming have begun. While this form of evaluation is possible and can be important, we find it better to be consulted on the front end of planning and grant writing. Our expertise is often helpful to clients in connecting their specific needs with the resources they seek, through the processes most likely to lead to their desired outcomes. Evaluation can become the “connecting tissue” between resources and outcomes, needs and processes, and activities and outcomes. Evaluation and planning are iterative partners, continuing to inform each other throughout the life of a project.

We often use tools such as logic modeling and the development of a theory of action or change to identify and articulate the most important elements of the equation. By identifying these components for program planners and articulating the theory of action, evaluation planning also assists in illustrating good project planning!

Evaluation provides the iterative planning and reflection process that is the hallmark of good programming and of effective and efficient use of resources. Consider viewing evaluation more holistically—and resist the narrower definition of evaluation as something that comes only at the end of planning and implementation efforts!

By including the requirement for integrated evaluation in their requests for proposals (RFPs), grant funders can help project staff write better proposals and, once projects are funded, help assure better planning toward achieving goals. Foundations such as the W. K. Kellogg Foundation and the Robert Wood Johnson Foundation, as well as a number of more local funders (for example, the Heinz Endowments, the Grable Foundation, and the FISA Foundation in our own region of southwestern Pennsylvania), have come to recognize these needs. The Common Guidelines for Education Research and Development, published in 2013 and used by the U.S. Department of Education and the National Science Foundation to advise research efforts, identifies the need for gathering and making meaning from evidence in all aspects of change endeavors, including evaluation.

In this 2015 International Year of Evaluation, let’s further examine how we use evaluation to inform all aspects of our work, with evaluation, planning, and implementation as a seamless partnership!

Blog: Evidence and Evaluation in STEM Education: The Federal Perspective

Posted on August 12, 2015 in Blog

Evaluation Manager, NASA Office of Education

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If you have been awarded federal grants over many years, you probably have seen the increasing emphasis on evaluation and evidence. As a federal evaluator working at NASA, I have seen firsthand the government-wide initiative to increase use of evidence to improve social programs. Federal agencies have been strongly encouraged by the administration to better integrate evidence and rigorous evaluation into their budget, management, operational, and policy decisions by:

(1) making better use of already-collected data within government agencies; (2) promoting the use of high-quality, low-cost evaluations and rapid, iterative experimentation; (3) adopting more evidence-based structures for grant programs; and (4) building agency evaluation capacity and developing tools to better communicate what works. (https://www.whitehouse.gov/omb/evidence)

Federal STEM education programs have also been affected by this increasing focus on evidence and evaluation. Read, for example, the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan (2013),1 prepared by the Committee on STEM Education of the National Science and Technology Council.2 The plan provides an overview of the importance of STEM education to American society, describes the current state of federal STEM education efforts, and discusses five priority STEM education investment areas where a coordinated federal strategy is under development. It also presents methods to build and share evidence. Finally, the plan lays out several strategic objectives for improving the exploration and sharing of evidence-based practices, including supporting syntheses of existing research that can inform federal investments in the priority areas, improving and aligning evaluation and research expertise and strategies across federal agencies, and streamlining processes for interagency collaboration (e.g., Memoranda of Understanding, Interagency Agreements).

Another key federal document influencing evaluation in STEM agencies is the Common Guidelines for Education Research and Development (2013),3 jointly prepared by the U.S. Department of Education’s Institute of Education Sciences and the National Science Foundation. This document describes the two agencies’ shared understanding of the roles various types of research play in generating evidence about strategies and interventions for increasing student learning. These research types range from studies that generate fundamental understandings related to education and learning (“Foundational Research”) to studies that assess the impact of an intervention on an education-related outcome, including efficacy research, effectiveness research, and scale-up research. The Common Guidelines provide the two agencies and the broader education research community with a common vocabulary for describing the critical features of these study types.

Both documents have shaped, and will continue to shape, federal STEM programs and their evaluations. Reading them will help federal grantees gain a richer understanding of the larger federal context that is influencing reporting and evaluation requirements for grant awards.

1 A copy of the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan can be obtained here: https://www.whitehouse.gov/sites/default/files/microsites/ostp/stem_stratplan_2013.pdf

2 For more information on the Committee on Science, Technology, Engineering, and Math Education, visit https://www.whitehouse.gov/administration/eop/ostp/nstc/committees/costem.

3 A copy of the Common Guidelines for Education Research and Development can be obtained here: http://ies.ed.gov/pdf/CommonGuidelines.pdf

Blog: Creation, Dissemination, and Accessibility of ATE-Funded Resources

Posted on July 15, 2015 by Kendra Bouda and Rachael Bower in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Kendra Bouda, Metadata and Information Specialist, Internet Scout Research Group, University of Wisconsin-Madison
Rachael Bower, Director/PI, Internet Scout Research Group, University of Wisconsin-Madison

As most ATE community members are aware, the National Science Foundation requires that all grant applicants provide a one- to two-page data management plan describing how the grantee’s proposal will meet NSF guidelines on the dissemination of grant-funded work. In 2014, NSF added a new requirement to the ATE solicitation mandating that newly funded grantees archive their deliverables with ATE Central.

We were curious to find out more about the materials created within the ATE community. So, when EvaluATE approached us about including questions related to data management planning and archiving in their annual survey of ATE grantees, we jumped at the chance. We had an interest in discovering not only what resources have been created, but also how those resources are disseminated to larger audiences. Additionally, we hoped to discover whether grantees are actively making their materials web accessible to users with disabilities—a practice that ensures access by the broadest possible audience.

The survey responses highlight that the most widely created materials include (not surprisingly) curriculum and professional development materials, with newsletters and journal articles bringing up the rear. Other materials created by the ATE community include videos, white papers and reports, data sets, and webinars.

However, although grantees are creating a lot of valuable resources, they may not be sharing them widely and, in some cases, may be unsure of how best to make them available after funding ends. The graphs below illustrate the availability of these materials, both currently and after grant funding ends.

[Charts: Availability of grant-created materials, currently and after grant funding ends]

Data from the annual survey shows that 65 percent of respondents are aware of accessibility standards—specifically Section 508 of the Rehabilitation Act; however, 35 percent are not. Forty-eight percent of respondents indicated that some or most of their materials are accessible, while another 22 percent reported that all materials generated by their project or center adhere to accessibility standards. Happily, only 1 percent of respondents reported that their materials do not adhere to standards; however, 29 percent are unsure whether their materials adhere to those standards or not.

For more information about accessibility, visit the official Section 508 site, the World Wide Web Consortium’s (W3C) Accessibility section or the Web Content Accessibility Guidelines 2.0 area of W3C.

Many of us struggle with issues related to sustaining our resources, which is part of the reason we are all asked by NSF to create a data management plan. To help PIs plan for long-term access, ATE Central offers an assortment of free services. Specifically, ATE Central supports data management planning efforts, provides sustainability training, and archives materials created by ATE projects and centers, ensuring access to these materials beyond the life of the project or center that created them.

For more about ATE Central, check out our suite of tools, services, and publications or visit our website. If you have questions or comments, contact us at info@atecentral.net.

Blog: LGBT-Inclusive Language in Data Collection

Posted on May 27, 2015 in Blog

Coordinator of LGBT Student Services, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In order to obtain relevant and reliable demographic information regarding lesbian, gay, bisexual and transgender (LGBT) individuals, it is important to understand the complexities of sex, sexual orientation, gender identity and gender expression. In many English-speaking communities, the concepts of sex, gender and sexual orientation are frequently conflated in ways that can marginalize those who do not fit the dominant heterosexual, men are men/women are women narrative. Thus, when asking about an individual’s sex, gender, or sexual orientation on a survey, it is important that survey developers have a clear understanding of these terms and what it really means to ask for specific demographic information to ensure that the information collected is valid.

In Western culture, sex assignment is usually determined by a physician around the time of birth, based on the appearance of external genitalia. There is an assumption that the development of external genitalia aligns with an expected chromosomal makeup and release of hormones. Sex categories are commonly identified exclusively as either female or male, but a growing number of communities, cultures, and countries are advocating for expanded recognition of additional sex categories, including intersex.

[Example survey question: sex assigned at birth]

Gender identity, while frequently conflated with sex assigned at birth, describes the way a person sees themselves in terms of gender categories, such as man, woman, agender, third-gender, and other gender identifiers. Gender expression describes the ways a person expresses their gender to other people through roles, mannerisms, clothing, interactions, hairstyles, and other perceivable means. A survey seeking to better understand how respondents interact with the outside world may ask about both gender identity and gender expression.

The normative alignment of sex assigned at birth and gender identity, such as a person assigned female at birth who identifies as a woman, is described by the term cisgender. Transgender is broadly defined as an identity in which a person’s sex assigned at birth and gender identity do not align. It’s important to recognize that those who identify as transgender may or may not identify with binary male/female or man/woman categories. Surveys seeking reliable data on transgender populations should ask descriptive and precise questions about transgender identity, sex assigned at birth, and gender identity, and should give respondents the option to provide their own identity terms.

[Example survey question: gender identity]
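Since the original example images are not reproduced here, below is a minimal sketch of what such a two-step demographic item set might look like, expressed as Python data structures. The wording and response options are illustrative assumptions, not a validated instrument.

```python
# Hypothetical two-step demographic items, as described above.
# Wording and options are illustrative only, not a validated instrument.
QUESTIONS = [
    {
        "id": "sex_assigned_at_birth",
        "text": "What sex were you assigned at birth?",
        "options": ["Female", "Male", "Intersex"],
        "write_in": "Prefer to self-describe",  # respondents supply their own term
    },
    {
        "id": "gender_identity",
        "text": "What is your current gender identity?",
        "options": ["Woman", "Man", "Agender", "Transgender"],
        "write_in": "Prefer to self-describe",
    },
]

# Print each question with its full response set.
for q in QUESTIONS:
    print(q["text"], "|", ", ".join(q["options"] + [q["write_in"]]))
```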

Finally, sexual orientation describes a person’s emotional and/or physical attraction to other people. Common sexual orientation terms include straight, gay, lesbian, bisexual, asexual, and many others. Sexual orientation and sexual behavior are not the same, however: if we are seeking to address health disparities that result from same-sex sexual behavior, it is more relevant to ask about sexual behavior than about sexual orientation.

[Example survey questions: sexual orientation and sexual behavior]

It’s important that research tools reflect an understanding of the complexity and meaning of each of these categories in order to collect demographic information that actually answers the intended question. An important rule of thumb: ask only what you really need to know.

Blog: Finding Opportunity in Unintended Outcomes

Posted on April 15, 2015 by Steven Budd in Blog

Research and Evaluation Consultant, Steven Budd Consulting

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Working with underage students carries an increased responsibility for their supervision. Concerns may arise during implementation that were never envisioned when the project was designed. These unintended consequences may be revealed during an evaluation, presenting an opportunity for PIs and evaluators both to learn and to intervene.

One project I’m evaluating includes a website designed for young teens, and features videos from ATETV and other sources. The site encourages our teen viewers to share information about the site with their peers and to explore links to videos hosted on other popular sites like YouTube. The overarching goal is to attract kids to STEM and technician careers by piquing their interest with engaging and accurate science content. What we didn’t anticipate was the volume of links to pseudoscience, science denial, and strong political agendas they would encounter. The question for the PI and Co-PIs became, “How do we engage our young participants in a conversation about good versus not-so-good science and how to think critically about what they see?”

As the internal project evaluator, I first began a conversation with the project PI and senior personnel around the question of responsibility. What is the responsibility of the PIs to engage our underage participants in a conversation about critical thinking and learning, so they can discriminate between questionable and solid content? Such content is readily accessible to young teens as they surf the Web, so a more important question was how the project team might capture this reality and capitalize on it. In this sense, was a teaching moment at hand?

As evaluators on NSF-funded projects, we know that evaluator engagement is critical right from the start. Formative review becomes especially important when even well-designed, well-thought-out activities take unanticipated turns. Our project incorporates a model of internal evaluation, which enables project personnel to gather data and provide real-time assessment of activity outcomes. We then present the data, with comment, to our external evaluator. The evaluation team works with project leadership to identify concerns as they arise and to strategize a response, which might mean refining activities and how they are implemented or creating entirely new activities that address a concern directly.

After thinking it through, the project leadership chose to open a discussion about critical thinking and science content with the project’s teen advisory group. Our response was to:

  • Initiate more frequent “check-ins” with our teen advisers and have more structured conversations around science content and what they think.
  • Sample other teen viewers as they join their peers in the project’s discussion groups and social media postings.
  • Seek to better understand how teens engage with Internet-based content and how they make sense of what they see.
  • Seek new approaches to activities that engage young teens in building their science literacy and critical thinking.

Tips to consider

  • Adjust your evaluation questions to better understand the actual experience of your project’s participants, and then look for the teaching opportunities in response to what you hear.
  • Vigilant evaluation may reveal the first signs of unintended impacts.

Blog: Evaluation Procurement: Regulations, Rules and Red Tape… Oh My!

Posted on April 8, 2015 by Jacqueline Rearick in Blog

Grants Specialist, Virginia Western Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Jacqueline Rearick, and I am a grants specialist at Virginia Western Community College, where I support our NSF/ATE projects and subawards, among other grants. I’m also an evaluation advocate and can get a bit overzealous about logic models, outcomes, surveys, and assessments. Recently, our grants office had to work through the procurement process to secure evaluation services for our ATE project. Although we referenced an external evaluator in the project design, our state’s procurement policies and procedures trumped the grant proposal and became the focus of a steep learning curve for all involved.

Because the grants office and the procurement office have different priorities, they can appear to be in direct opposition to one another. Grant proposals that require evaluation services, like ATE’s, work best when the evaluator is part of the process and can assist with developing the plan and then execute the evaluation. However, your institution’s procurement regulations could require a bid process, which may or may not result in securing the evaluator who helped you write the initial evaluation plan.

Hot Tip: Invite the procurement office to the table early

Securing evaluation services for your ATE project is important; so is following internal procurement rules. Touch base with your procurement office early in the evaluation development process. Are there local or state regulations that will require a bid process? If your ATE evaluator assists with the writing of your evaluation section in the proposal, will you be able to use that same evaluator if the grant is funded? Have an honest conversation with your evaluator about the procurement process.

Hot Tip: Levels of procurement, when the rules change

While working through the procurement process, we discovered that state rules change when the procurement of goods or services reaches different funding levels. What was a simple evaluation procurement for our first small ATE grant ($200k) turned into a much larger-scale procurement for our second ATE project grant ($900k), based on our state guidelines. Check with your institution to determine the thresholds and required guidelines for consultant services at various funding levels.

Lesson Learned: All’s well that ends well

The process of securing evaluation services through procurement is designed to let the PI review all competitors and obtain quality evaluation services at a reasonable price. The evaluator who helped write the evaluation section of our proposal was encouraged to bid on the project. Even better, this evaluator is now set up as a vendor in our state system and will be available to other colleges in the state as they seek quality ATE evaluation services.

Blog: Gratitude for our ATE Community of Practitioners

Posted on March 11, 2015 in Blog

Educational Consultant, Lamoka Educational Consulting

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Recently a friend posted a blog whose theme was gratitude. Reading his blog caused me to reflect on gratitude in my own professional life.

I have been evaluating ATE projects since 2002, and I have been on a learning curve from the beginning – though I must say that the curve is not nearly as steep as it once was. I have learned much from the ATE Community, especially those evaluators with whom I have collaborated. I realized early on that we evaluators have different styles and strengths, and I sought out those who I hoped would complement my own strengths. Here are some examples of lessons learned from my evaluator colleagues:

  • Peter is a global thinker who taught me how to formulate the “big picture evaluation questions” that Jane Davidson describes in her lovely guide Actionable Evaluation Basics.
  • John’s attention to detail – and to reminding me to pay attention to the logic model – has served me well over the years. I remember first learning about logic model development in one of EvaluATE’s webinars: http://www.evalu-ate.org/wp-content/uploads/formidable/Slides_2010-EvalTools.pdf
  • Dave has consistently utilized outside-the-box thinking and encouraged a number of us to employ a “value creation” framework in our evaluation work. He first introduced me to Wenger’s work a couple of years ago: http://wenger-trayner.com/documents/Wenger_Trayner_DeLaat_Value_creation.pdf
  • And Candiya, with her extensive experience in education and workforce research, has taught me a great deal about evaluation through conversations and presentations (see http://www.evalu-ate.org/wp-content/uploads/2014/10/Report_2011_MATE_Highlights.pdf for a nice summary of some of Candiya’s work).
  • Other evaluators in the ATE Community – Terryll, Stephanie, and the two Amy G’s – have influenced my ways of thinking, and have provided encouragement and thoughtful advice.

I am especially indebted to the leadership team at EvaluATE, and to Lori and Arlen in particular. Their guidance and support through the years have been invaluable, and I have benefited from EvaluATE’s wealth of resources. Arlen’s recent blog “Strengthening Post Hoc Professional Development Evaluations” is printed out and occupies a prominent spot on my makeshift bulletin board. Lori’s thoughtful attention to detail in designing and developing the EvaluATE webinars continues to serve the community well. I always learn something new when I tune in.

ATE is a community rich in knowledge and experience. I encourage you to make opportunities to learn from the members of this community. Find ways to collaborate. If you are an evaluator new to ATE, or if you are thinking about becoming an ATE evaluator – start by spending time with the evalu-ATE.org website. If you have questions, ask for help. Start with me if you wish. I certainly don’t have all the answers, but I’m pretty good at finding people who do. I am grateful for the mentors and colleagues in the ATE Community who have shared their wisdom. Thank you.

 

Blog: Air Travel: Getting Down to the Nitty-Gritty

Posted on January 28, 2015 in Blog

Managing Director, EvaluATE

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Have you ever been surprised, when you get home from a work trip, by how much it cost? There are a lot of hidden costs associated with travel, and I would like to share some tips to help you eliminate that post-trip sticker shock.

These tips are for domestic air travel only; keep an eye out for my next blog on foreign travel!

Pre-Travel. Prior to travel, create a budget. Making a budget for your trip allows you to see exactly what your expenses will be. (Download our travel budget template.) Below are details about the various expense categories (please note this is not an exhaustive list).

Tip: Check your organization’s travel policies prior to booking and traveling.

Flight. Gather your estimated flight cost from your desired carrier. I recommend adding $100 to the total to help cover flight changes prior to booking. If you use a booking service such as AAA, check whether there is a booking fee (normally $10-$20) and factor it in.

Tip: If you find a lower fare than what AAA is offering, let them know; they may match the lower price.

Hotel. You can access prices through the hotel websites, but make sure tax is included in your calculations.

Tip: Always use your actual travel dates when price checking; hotel rates can vary by both day and week.

Food. I suggest using government per diem rates to calculate food costs. The GSA Per Diem Rates page lets you enter the city you are traveling to and provides the cost per day. The per diem rate includes costs for meals, lodging, and incidentals; for this purpose, just use the meal rate.

Tip: Some institutions only allow 75% of per diem for first and last day of travel—you may want to check on this.

Miscellaneous. The major categories have been addressed, so what else might be missing from the budget?

  • Checked Baggage. This varies by airline but can be $25 per bag each way (check with your airline for charges).
  • Ground Transportation. Will you be using a taxi, rental car, or shuttle? Estimate all of these costs and add them to your budget. You can search online for estimated charges for all transportation.
  • Airport Parking. Are you parking your vehicle at the airport? Fees can range from $8-$20 per day, depending on location and duration.
  • Mileage/Gas. Are you driving to the airport or renting a car? Make sure to include a budget for mileage or gas. Check with your organization regarding mileage reimbursement rates.
  • Internet. If you plan on using the internet at your hotel, there may be an associated fee. I have seen these vary from a flat fee to a per-day charge. Check with the hotel and factor in any fees.

Tip: Do you have an external evaluator or an advisory committee? Make sure your organization’s travel policy is correctly reflected in their contracts; otherwise, this could become an issue when they process travel reimbursements.

Once your budget is finalized, I suggest adding a buffer of $100-$200 to the final budgeted amount. This helps cover any unforeseen incidentals. It’s always better to overbudget than to underbudget. Happy traveling!

Example Travel Budget to Orlando, FL

Item                       Cost       Buffer     Total
Flight                     $350.00    $100.00    $450.00
Hotel                      $600.00               $600.00
Food                       $150.00               $150.00
Bags (both ways)           $50.00                $50.00
Ground Transportation      $50.00                $50.00
Parking                    $35.00                $35.00
Mileage/Gas                $28.00                $28.00
Internet                   $10.00                $10.00
Buffer                     $100.00               $100.00
Total Estimated Budget                           $1,473.00
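If you prefer to script this arithmetic, here is a minimal Python sketch of the same calculation. The figures mirror the hypothetical example above; the $60 daily meal rate over a three-day trip and the 75% first/last-day rule are illustrative assumptions, so substitute your own carrier, hotel, and GSA numbers.

```python
# A minimal travel-budget sketch using the hypothetical figures above.
# All rates are illustrative placeholders, not actual GSA or airline prices.

def meal_total(daily_rate: float, travel_days: int,
               partial_first_last: bool = True) -> float:
    """Meal costs per diem; optionally 75% of the rate on first and last day."""
    if partial_first_last and travel_days >= 2:
        return daily_rate * (travel_days - 2) + 2 * 0.75 * daily_rate
    return daily_rate * travel_days

line_items = {
    "Flight (incl. $100 change buffer)": 350.00 + 100.00,
    "Hotel (tax included)": 600.00,
    "Food": meal_total(daily_rate=60.00, travel_days=3),  # $150.00
    "Checked bags (both ways)": 50.00,
    "Ground transportation": 50.00,
    "Airport parking": 35.00,
    "Mileage/gas": 28.00,
    "Hotel internet": 10.00,
    "Final buffer": 100.00,
}

# Print each line item, then the grand total ($1,473.00 here).
for item, cost in line_items.items():
    print(f"{item:<36}${cost:>9,.2f}")
print(f"{'Total estimated budget':<36}${sum(line_items.values()):>9,.2f}")
```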

Blog: Gender Evaluation Strategies: Improving Female Recruitment and Retention in ATE Projects

Posted on January 14, 2015 in Blog

Executive Director, CalWomen Tech ScaleUP, IWITTS

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

How can ATE project staff, and STEM educators in general, tell whether the strategies they are implementing to increase diversity are having an impact on the targeted students, and whether those students actually find those strategies helpful?

I’m very passionate about using evaluation and data to support the National Science Foundation’s (NSF’s) goal of broadening impacts in STEM education. In IWITTS’ CalWomenTech Project, we provided technical assistance to seven community colleges in California between 2006 and 2011 to help them recruit and retain female students into technology programs where they were underrepresented. Six of seven CalWomenTech colleges had increases in female enrollment in targeted introductory technology courses and four colleges increased both female and male completion rates substantially (six colleges increased male retention). So how could the CalWomenTech colleges tell during the project if the strategies they were implementing were helping female technology students?

The short answer is: The CalWomenTech colleges knew because 1) the project was measuring increases in female (and male) enrollment and completion numbers in as close to real time as possible; and 2) they asked the female students in the targeted classes if they had experienced project strategies, found those strategies helpful, and wanted to experience strategies they hadn’t encountered.

What I want to focus on here is how the CalWomenTech Project used the findings from those qualitative surveys. The project’s external evaluators developed an anonymous “Survey of Female Technology Course Students” that was distributed among the colleges. The survey combined questions about the classroom retention strategies instructors had been trained on as part of the project, recruitment strategies, and population demographics. The first time we administered the survey, 60 female students responded (out of 121 surveyed) across the seven CalWomenTech colleges. Each college was also provided with the female survey data filtered for its specific campus.

Fifty percent or more of the 60 survey respondents reported exposure to over half the retention strategies listed in the survey. One of the most important outcomes of the survey was that the CalWomenTech colleges were able to use the results to choose which strategies to focus on. Instructors exposed to the results during a site visit or monthly conference call came up with ways to start incorporating the strategies female students requested in their classrooms. For example, after seeing how many female students wanted to try a leadership role in class, one STEM instructor planned to randomly assign leadership roles in group projects, to avoid men taking the leadership role more often than women.
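For evaluators who want to do something similar, a minimal sketch of this kind of per-strategy tabulation might look like the following. The strategy names and responses are invented for illustration; the actual CalWomenTech instrument and data are not reproduced here.

```python
# Hypothetical tabulation of strategy-exposure results from a student survey.
# Strategy names and response sets are invented for illustration only.
from collections import Counter

# One set of reported strategies per respondent.
responses = [
    {"role models in class", "group leadership roles", "hands-on labs"},
    {"hands-on labs", "peer study groups"},
    {"role models in class", "group leadership roles"},
    # ... one entry per survey respondent
]

# Count how many respondents reported experiencing each strategy.
exposure = Counter()
for strategies in responses:
    exposure.update(strategies)

n = len(responses)
for strategy, count in exposure.most_common():
    print(f"{strategy}: {count}/{n} respondents ({count / n:.0%}) reported exposure")
```

Results like these can be filtered by college or class, which is how per-site feedback such as the leadership-role example above can surface.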

To hear about more evaluation lessons learned, watch the webinar “How well are we serving our female students in STEM?” or read more about the CalWomenTech survey of female technology students here.

Human Subjects Alert: If you are administering a survey such as this to a specific group of students and there are only a few in the program, then it’s not anonymous. It’s important to be very careful about how the responses are shared and with whom, since this kind of survey includes confidential information that could harm respondents.