Archive: STEM Evaluation

Blog: Strategic Knowledge Mapping: A New Tool for Visualizing and Using Evaluation Findings in STEM

Posted on January 6, 2016

Director of Research and Evaluation, Meaningful Evidence, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One challenge in designing effective STEM programs is that they address very large, complex goals, such as increasing the number of underrepresented students in advanced technology fields.

To design the best possible programs to address such a large, complex goal, we need a correspondingly large, complex understanding that comes from looking at the big picture. It’s like when medical researchers develop a new treatment: they need a deep understanding of how a medication interacts with the body and with other medications, and how it will affect each patient given their age and medical history.

A new method, Integrative Propositional Analysis (IPA), lets us visualize and assess information gained from evaluations. (For details, see our white papers.) At the 2015 American Evaluation Association conference, we demonstrated how to use the method to integrate findings from the PAC-Involved (Physics, Astronomy, Cosmology) evaluation into a strategic knowledge map. (View the interactive map.)

A strategic knowledge map supports program design and evaluation in many ways.

Measures understanding gained.
The map is an alternative logic model format that provides broader and deeper understanding than usual logic model approaches. Unlike other modeling techniques, IPA lets us quantitatively assess information gained. Results showed that the new map incorporating findings from the PAC-Involved evaluation had much greater breadth and depth than the original logic model. This indicates increased understanding of the program, its operating environment, how they work together, and options for action.

Graphic 1
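To make “breadth and depth” concrete, here is a minimal sketch in Python of how such measures could be computed from a map stored as simple cause-and-effect pairs. The concept names and the exact formulas are illustrative assumptions, not the official IPA scoring; see the white papers for the method itself.

```python
from collections import defaultdict

# Illustrative map: (cause, effect) pairs. These are stand-in concepts,
# not the actual PAC-Involved map.
links = [
    ("Hands-on activities", "Student engagement"),
    ("Mentor contact", "Student engagement"),
    ("Student engagement", "Persistence in STEM"),
    ("Attendance/attrition challenges", "Persistence in STEM"),
]

def breadth(links):
    """Breadth: how many distinct concepts the map contains."""
    return len({concept for pair in links for concept in pair})

def depth(links):
    """Depth (roughly): share of concepts explained by two or more causal arrows."""
    incoming = defaultdict(int)
    concepts = set()
    for cause, effect in links:
        concepts.update((cause, effect))
        incoming[effect] += 1
    well_explained = [c for c in concepts if incoming[c] >= 2]
    return len(well_explained) / len(concepts)

print(breadth(links))          # 5 concepts
print(round(depth(links), 2))  # 0.4 -- 2 of the 5 concepts have 2+ incoming arrows
```

Comparing the two numbers before and after an evaluation, as was done for the PAC-Involved map, gives a rough quantitative sense of how much understanding the findings added.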

Shows what parts of our program model (map) are better understood.
In the figure below, the yellow shadow around the concept “Attendance/attrition challenges” indicates that this concept is better understood. We better understand something when it has multiple causal arrows pointing to it—like when we have a map that shows multiple roads leading to each destination.

Graphic 2

Shows what parts of the map are most evidence supported.
We have more confidence in causal links that are supported by data from multiple sources. The thick arrow below shows a relationship supported by many sources of evaluation data. All five evaluation data sources (project team interviews, a student focus group, a review of student reflective journals, observation, and student surveys) provided evidence that more experiments, demos, and hands-on activities led students to be more engaged in PAC-Involved.

Graphic 3
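As one way to picture how the arrow widths could be derived, the sketch below tallies how many evaluation data sources support each causal link; a link backed by more independent sources would be drawn with a thicker arrow. The findings list is toy data, not the actual evaluation record.

```python
from collections import defaultdict

# Each finding pairs a causal link with the data source that supports it (toy data).
findings = [
    ("Hands-on activities", "Student engagement", "project team interviews"),
    ("Hands-on activities", "Student engagement", "student focus group"),
    ("Hands-on activities", "Student engagement", "reflective journals"),
    ("Hands-on activities", "Student engagement", "observation"),
    ("Hands-on activities", "Student engagement", "student surveys"),
    ("Mentor contact", "Student engagement", "student focus group"),
]

support = defaultdict(set)
for cause, effect, source in findings:
    support[(cause, effect)].add(source)

for (cause, effect), sources in sorted(support.items()):
    # More independent sources -> more confidence -> a thicker arrow on the map.
    print(f"{cause} -> {effect}: {len(sources)} supporting source(s)")
```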

Shows the invisible.
The map also helps us to “see the invisible.” If a concept has no arrows pointing to it, we know that something that explains it is missing from the map. This indicates that more research is needed to fill those “blank spots on the map” and improve our model.

Graphic 4
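Under the same toy representation, the “blank spots” can be found mechanically: any concept with no causal arrows pointing to it is a candidate for further research. A minimal sketch:

```python
def unexplained_concepts(links):
    """Concepts that nothing in the map points to -- the 'blank spots'."""
    causes = {cause for cause, _ in links}
    effects = {effect for _, effect in links}
    return sorted((causes | effects) - effects)

# Illustrative links only; the real map comes from the evaluation findings.
links = [
    ("Hands-on activities", "Student engagement"),
    ("Mentor contact", "Student engagement"),
    ("Student engagement", "Persistence in STEM"),
]
print(unexplained_concepts(links))
# ['Hands-on activities', 'Mentor contact'] -- nothing in the map yet explains these
```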

Supports collaboration.
The integrated map can support collaboration within the project team. We can zoom in on the parts that are most relevant for action.

Graphic 5

Supports strategic planning.
The integrated map also supports strategic planning. Solid arrows leading to our goals indicate things that help. Dotted lines show the challenges.

Graphic 6
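To show how the helping and hindering factors might be pulled out programmatically, the sketch below tags each link with a sign (+1 for a solid “helps” arrow, -1 for a dotted “challenge” arrow) and lists them for a chosen goal. The links and signs here are made up for illustration.

```python
# Each link carries a polarity: +1 helps (solid arrow), -1 is a challenge (dotted arrow).
links = [
    ("Hands-on activities", "Student engagement", +1),
    ("Mentor contact", "Student engagement", +1),
    ("Attendance/attrition challenges", "Student engagement", -1),
]

goal = "Student engagement"
helps = [cause for cause, effect, sign in links if effect == goal and sign > 0]
challenges = [cause for cause, effect, sign in links if effect == goal and sign < 0]
print("Helps:", helps)
print("Challenges:", challenges)
```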

Clarifies short-term and long-term outcomes.
We can create customized map views to show concepts of interest, such as outcomes for students and connections between the outcomes.

Graphic 7
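One way to build such a custom view, still using the same toy representation, is to keep only the links that connect a chosen set of concepts, here student outcomes. The outcome names are hypothetical.

```python
# Hypothetical student-outcome concepts of interest.
outcomes = {"Student engagement", "STEM identity", "Persistence in STEM"}

links = [
    ("Hands-on activities", "Student engagement"),
    ("Student engagement", "STEM identity"),
    ("STEM identity", "Persistence in STEM"),
    ("Budget constraints", "Hands-on activities"),
]

# Keep only links where both ends are outcomes -- a focused view of
# short- and long-term outcomes and the connections between them.
outcome_view = [(c, e) for c, e in links if c in outcomes and e in outcomes]
print(outcome_view)
# [('Student engagement', 'STEM identity'), ('STEM identity', 'Persistence in STEM')]
```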

We encourage you to add a Strategic Knowledge Map to your next evaluation. The evaluation team, project staff, students, and stakeholders will benefit tremendously.

Blog: Evidence and Evaluation in STEM Education: The Federal Perspective

Posted on August 12, 2015

Evaluation Manager, NASA Office of Education

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If you have been awarded federal grants over many years, you probably have seen the increasing emphasis on evaluation and evidence. As a federal evaluator working at NASA, I have seen firsthand the government-wide initiative to increase use of evidence to improve social programs. Federal agencies have been strongly encouraged by the administration to better integrate evidence and rigorous evaluation into their budget, management, operational, and policy decisions by:

(1) making better use of already-collected data within government agencies; (2) promoting the use of high-quality, low-cost evaluations and rapid, iterative experimentation; (3) adopting more evidence-based structures for grant programs; and (4) building agency evaluation capacity and developing tools to better communicate what works. (https://www.whitehouse.gov/omb/evidence)

Federal STEM education programs have also been affected by this increasing focus on evidence and evaluation. Read, for example, the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan (2013),1 which was prepared by the Committee on STEM Education of the National Science and Technology Council.2 This strategic plan provides an overview of the importance of STEM education to American society and describes the current state of federal STEM education efforts. It discusses five priority STEM education investment areas where a coordinated federal strategy is currently under development. The plan also presents methods to build and share evidence. Finally, the plan lays out several strategic objectives for improving the exploration and sharing of evidence-based practices, including supporting syntheses of existing research that can inform federal investments in the STEM education priority areas, improving and aligning evaluation and research expertise and strategies across federal agencies, and streamlining processes for interagency collaboration (e.g., Memoranda of Understanding, Interagency Agreements).

Another key federal document that is influencing evaluation in STEM agencies is the Common Guidelines for Education Research and Development (2013),3 jointly prepared by the U.S. Department of Education’s Institute of Education Sciences and the National Science Foundation. This document describes the two agencies’ shared understandings of the roles of various types of research in generating evidence about strategies and interventions for increasing student learning. These research types range from studies that generate fundamental understandings related to education and learning (“Foundational Research”) to studies that assess the impact of an intervention on an education-related outcome, including efficacy research, effectiveness research, and scale-up research. The Common Guidelines provide the two agencies and the broader education research community with a common vocabulary to describe the critical features of these study types.

Both documents have shaped, and will continue to shape, federal STEM programs and their evaluations. Reading them will help federal grantees gain a richer understanding of the larger federal context that is influencing reporting and evaluation requirements for grant awards.

1 A copy of the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan can be obtained here: https://www.whitehouse.gov/sites/default/files/microsites/ostp/stem_stratplan_2013.pdf

2 For more information on the Committee on Science, Technology, Engineering, and Math Education, visit https://www.whitehouse.gov/administration/eop/ostp/nstc/committees/costem.

3 A copy of the Common Guidelines for Education Research and Development can be obtained here: http://ies.ed.gov/pdf/CommonGuidelines.pdf

Blog: Gender Evaluation Strategies: Improving Female Recruitment and Retention in ATE Projects

Posted on January 14, 2015

Executive Director, CalWomen Tech ScaleUP, IWITTS

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

How can ATE project staff, and STEM educators in general, tell whether the strategies they are implementing to increase diversity are reaching the targeted students, and whether those students actually find those strategies helpful?

I’m very passionate about using evaluation and data to support the National Science Foundation’s (NSF’s) goal of broadening impacts in STEM education. In IWITTS’ CalWomenTech Project, we provided technical assistance to seven community colleges in California between 2006 and 2011 to help them recruit female students into technology programs where they were underrepresented and retain them. Six of the seven CalWomenTech colleges increased female enrollment in the targeted introductory technology courses, and four colleges substantially increased both female and male completion rates (six colleges increased male retention). So how could the CalWomenTech colleges tell during the project whether the strategies they were implementing were helping female technology students?

The short answer is: The CalWomenTech colleges knew because 1) the project was measuring increases in female (and male) enrollment and completion numbers in as close to real time as possible; and 2) they asked the female students in the targeted classes if they had experienced project strategies, found those strategies helpful, and wanted to experience strategies they hadn’t encountered.

What I want to focus on here is how the CalWomenTech Project was able to use the findings from those qualitative surveys. The external evaluators for the CalWomenTech Project developed an anonymous “Survey of Female Technology Course Students” that was distributed at the colleges. The survey covered the classroom retention strategies that instructors had been trained on as part of the project, recruitment strategies, and population demographics. The first time we administered the survey, 60 female students responded (out of 121 surveyed) across the seven CalWomenTech colleges. Each college was also provided with the female survey data filtered for its specific college.

Fifty percent or more of the 60 survey respondents reported exposure to over half the retention strategies listed in the survey. One of the most important outcomes of the survey was that the CalWomenTech colleges were able to use the results to choose which strategies to focus on. Instructors who saw the results during a site visit or a monthly conference call came up with ways to start incorporating the strategies female students requested into their classrooms. For example, after seeing how many female students wanted to try out a leadership role in class, one STEM instructor planned to assign leadership roles in group projects randomly so that men would not take the leadership role more often than women.
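For teams that want to run a similar tally, here is a minimal sketch of how per-strategy exposure and helpfulness rates could be computed from survey responses. The strategy names and response format are assumptions for illustration, not the actual CalWomenTech instrument.

```python
# Toy responses: for each student, strategy -> (exposure, helpfulness or None).
responses = [
    {"Role models in class": ("exposed", "helpful"),
     "Leadership roles in group work": ("not exposed", None)},
    {"Role models in class": ("exposed", "not helpful"),
     "Leadership roles in group work": ("exposed", "helpful")},
    {"Role models in class": ("not exposed", None),
     "Leadership roles in group work": ("exposed", "helpful")},
]

strategies = sorted({s for r in responses for s in r})
for strategy in strategies:
    answers = [r[strategy] for r in responses if strategy in r]
    exposed = [a for a in answers if a[0] == "exposed"]
    helpful = [a for a in exposed if a[1] == "helpful"]
    pct_exposed = 100 * len(exposed) / len(answers)
    pct_helpful = 100 * len(helpful) / len(exposed) if exposed else 0
    print(f"{strategy}: {pct_exposed:.0f}% exposed; "
          f"{pct_helpful:.0f}% of those exposed found it helpful")
```

Reports like this, broken out by college, are what let each site see which strategies its own students had experienced and which ones they still wanted to try.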

To hear about more evaluation lessons learned, watch the webinar “How well are we serving our female students in STEM?” or read more about the CalWomenTech survey of female technology students here.

Human Subjects Alert: If you are administering a survey such as this to a specific group of students and there are only a few in the program, then it’s not anonymous. It’s important to be very careful about how the responses are shared and with whom, since this kind of survey includes confidential information that could harm respondents.