Archive: STEM

Blog: Logic Models for Curriculum Evaluation

Posted on June 7, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Rachel Tripathy, Research Associate, WestEd
Linlin Li, Senior Research Associate, WestEd

At the STEM Program at WestEd, we are in the third year of an evaluation of an innovative, hands-on STEM curriculum. Learning by Making is a two-year high school STEM course that integrates computer programming and engineering design practices with topics in earth/environmental science and biology. Experts in the areas of physics, biology, environmental science, and computer engineering at Sonoma State University (SSU) developed the curriculum by integrating computer software with custom-designed experiment set-ups and electronics to create inquiry-based lessons. Throughout this project-based course, students apply mathematics, computational thinking, and the Next Generation Science Standards (NGSS) Scientific and Engineering Design Practices to ask questions about the world around them, and seek the answers. Learning by Making is currently being implemented in rural California schools, with a specific effort being made to enroll girls and students from minority backgrounds, who are currently underrepresented in STEM fields. You can listen to students and teachers discussing the Learning by Making curriculum here.

Using a Logic Model to Drive Evaluation Design

We derived our evaluation design from the project’s logic model. A logic model is a structured description of how a specific program achieves an intended learning outcome. The purpose of the logic model is to precisely describe the mechanisms behind the program’s effects. Our approach to the Learning by Making logic model is a variant on the five-column logic format that describes the inputs, activities, outputs, outcomes, and impacts of a program (W.K. Kellogg Foundation, 2014).

Learning by Making Logic Model


Logic models are read as a series of conditionals. If the inputs exist, then the activities can occur. If the activities do occur, then the outputs should occur, and so on. Our evaluation of the Learning by Making curriculum centers on the connections indicated by the orange arrows connecting outputs to outcomes in the logic model above. These connections break down into two primary areas for evaluation: 1) teacher professional development, and 2) classroom implementation of Learning by Making. The questions that correspond to the orange arrows above can be summarized as:

  • Are the professional development (PD) opportunities and resources for the teachers increasing teacher competence in delivering a computational thinking-based STEM curriculum? Does Learning by Making PD increase teachers’ use of computational thinking and project-based instruction in the classroom?
  • Does the classroom implementation of Learning by Making increase teachers’ use of computational thinking and project-based instruction in the classroom? Does classroom implementation promote computational thinking and project-based learning? Do students show an increased interest in STEM subjects?

Without effective teacher PD or classroom implementation, the logic model “breaks,” making it unlikely that the desired outcomes will be observed. To answer our questions about outcomes related to teacher PD, we used comprehensive teacher surveys, observations, bi-monthly teacher logs, and focus groups. To answer our questions about outcomes related to classroom implementation, we used student surveys and assessments, classroom observations, teacher interviews, and student focus groups. SSU used our findings to revise both the teacher PD resources and the curriculum itself to better situate these two components to produce the outcomes intended. By deriving our evaluation design from a clear and targeted logic model, we succeeded in providing actionable feedback to SSU aimed at keeping Learning by Making on track to achieve its goals.
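
For readers who like to think in code, the conditional reading of a logic model can be expressed as a small sketch. The Python below is purely illustrative and is not part of our evaluation toolkit; the stage descriptions are paraphrased placeholders, not the full Learning by Making logic model.

    # Illustrative only: a logic model read as a chain of conditionals.
    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        realized: bool  # did the evaluation find evidence that this stage occurred?

    def first_break(chain):
        """Return the first stage that did not occur, or None if the chain is intact."""
        for stage in chain:
            if not stage.realized:
                return stage
        return None

    # Hypothetical chain mirroring the five-column format
    chain = [
        Stage("Inputs: curriculum, PD resources, SSU expertise", True),
        Stage("Activities: teacher PD, classroom implementation", True),
        Stage("Outputs: teachers trained, lessons delivered", False),
        Stage("Outcomes: computational thinking, STEM interest", False),
        Stage("Impacts: broadened participation in STEM", False),
    ]

    broken = first_break(chain)
    if broken:
        print(f"The logic model 'breaks' at: {broken.name}")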

Blog: Evaluating Network Growth through Social Network Analysis

Posted on May 11, 2017 in Blog

Tracie Evans Reding, Doctoral Student, College of Education, University of Nebraska at Omaha

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One of the most impactful takeaways from the ATE Principal Investigators Conference I attended in October 2016 was the growing use of Business and Industry Leadership Team (BILT) partnerships to develop and implement new STEM curricula throughout the country. The need for such cross-sector partnerships has become apparent and is reinforced through specific National Science Foundation (NSF) grant programs.

The need for empirical data about networks and collaborations is increasing within the evaluation realm, and social network surveys are one method of quickly and easily gathering that data. Social network surveys come in a variety of forms. The social network survey I have used is in a roster format: each participant in the program is listed, and each individual completes the survey by selecting the option that best describes their relationship with every other participant. The options vary in degree from not knowing that person at one extreme to having formally collaborated with that person at the other. In the past, analyzing data from these types of surveys through social network analysis required a large amount of programming knowledge. Thanks to recent technological advancements, newer social network analysis programs make this work far more accessible to non-programmers.

I have worked on an NSF-funded project at the University of Nebraska at Omaha whose goal is to provide professional development and facilitate the growth of a network for middle school teachers so they can create computer science lessons and implement them in their current curriculum (visit the SPARCS website). One of the methods for evaluating the facilitation of the network is a social network analysis questionnaire. This method has proved very helpful in determining the extent to which the professional relationships of the cohort members have evolved over the course of their year-long experience in the program.
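
For evaluators who prefer scripting to spreadsheets, the same kind of roster data can also be analyzed with open-source tools. The sketch below is a minimal illustration in Python using the networkx library, with made-up responses and an assumed 0-3 response scale; it is not the NodeXL workflow I describe next.

    # Illustrative only: roster-style responses as (respondent, colleague, strength),
    # where strength is an assumed ordinal scale: 0 = don't know, 1 = know,
    # 2 = collaborated informally, 3 = collaborated formally.
    import networkx as nx

    pre  = [("A", "B", 1), ("A", "C", 0), ("B", "C", 1), ("B", "D", 0)]
    post = [("A", "B", 3), ("A", "C", 2), ("B", "C", 3), ("B", "D", 1)]

    def build_network(responses, min_strength=2):
        """Keep only ties at or above a chosen collaboration threshold."""
        G = nx.Graph()
        G.add_nodes_from({person for r in responses for person in r[:2]})
        for source, target, strength in responses:
            if strength >= min_strength:
                G.add_edge(source, target, weight=strength)
        return G

    for label, responses in (("pre", pre), ("post", post)):
        G = build_network(responses)
        print(label, "ties:", G.number_of_edges(),
              "density:", round(nx.density(G), 2),
              "degree centrality:", nx.degree_centrality(G))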

The social network analysis program I have been using is known as NodeXL and is an Excel add-in. It is very user-friendly and can easily be used to generate quantitative data on network development. I was able to take the data gathered from the social network analysis, conduct research, and present my article, “Identification of the Emergent Leaders within a CSE Professional Development Program,” at an international conference in Germany. While the article is not focused on evaluation, it does review the survey instrument itself.  You may access the article through this link (although I think your organization must have access to ACM):  Tracie Evans Reding WiPSCE Article. The article is also posted on my Academia.edu page.

Another National Science Foundation funding strand that emphasizes networks is Inclusion across the Nation of Communities of Learners of Underrepresented Discoverers in Engineering and Science (INCLUDES). The long-term goal of NSF INCLUDES is to “support innovative models, networks, partnerships, technical capabilities and research that will enable the U.S. science and engineering workforce to thrive by ensuring that traditionally underrepresented and underserved groups are represented in percentages comparable to their representation in the U.S. population.” Noted in the synopsis for this funding opportunity is the importance of “efforts to create networked relationships among organizations whose goals include developing talent from all sectors of society to build the STEM workforce.” The increased funding available for cross-sector collaborations makes it imperative that evaluators be able to empirically measure these collaborations. While the notion of “networks” is not a new one, the availability of resources such as NodeXL will make the evaluation of these networks much easier.

 

Full Citation for Article:

Evans Reding, T., Dorn, B., Grandgenett, N., Siy, H., Youn, J., Zhu, Q., & Engelmann, C. (2016). Identification of the Emergent Teacher Leaders within a CSE Professional Development Program. Proceedings of the 11th Workshop in Primary and Secondary Computing Education. Münster, Germany: ACM.

Blog: Evaluation’s Role in Retention and Cultural Diversity in STEM

Posted on October 28, 2015 in Blog

Research Associate, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Recently, I attended the Building Pathways and Partnerships in STEM for a Global Network conference, hosted by the State University of New York (SUNY) system. The conference focused on innovative practices in STEM higher education aimed at increasing retention, completion, and cultural diversity.

As an evaluator, I found it enlightening to hear about new practices being used by higher education faculty and staff to encourage students, particularly students from groups traditionally underrepresented in STEM, to stay enrolled and earn their degrees. These included:

  • Research opportunities! Students should be exposed to real research if they are going to engage in STEM. This is important not only for students in four-year degree programs but also for community college students, whether they plan to continue their education or move into the workforce.
  • Internships (PAID!) are crucial for gaining practical experience before entering the workforce.
  • Partnerships, partnerships, partnerships. Internships and research opportunities are most useful if they are with organizations outside of the school. This means considerable outreach and relationship-building.
  • One-on-one peer mentoring. Systems in which upper-level students work directly with new students to help them get through tough classes or labs have been shown to keep students enrolled not only in STEM programs, but in college in general.

The main takeaway from this conference is that the SUNY system is becoming more creative in engaging students in STEM and is making a concerted effort to help underrepresented students. This trend is not limited to New York; many colleges and universities are focusing on these issues.

What does all this mean for evaluation? Evidence is more important than ever to sort out what types of new practices work and for whom. Evaluation designs and methods need to be just as innovative as the programs they are reviewing. As evaluators, we need to channel program designers’ creativity and apply our knowledge in useful ways. Examples include:

  • Being flexible. Many methods are brand new or new to the institution or department, so implementers may tweak them along the way. This means we need to pay attention to how we assess outcomes, perhaps taking guidance from Patton’s Developmental Evaluation work.
  • Considering cultural viewpoints. We should always be mindful of the diversity of perspectives and backgrounds when developing instruments and data collection methods. This is especially important when programs are meant to improve underrepresented groups’ outcomes. Think about how individuals will be able to access an instrument (online, paper) and pay attention to language when writing questionnaire items. The American Evaluation Association provides useful resources for this: http://aea365.org/blog/faheemah-mustafaa-on-pursuing-racial-equity-in-evaluation-practice/
  • Thinking beyond immediate outcomes. What do students accomplish in the long term? Do they go on to earn higher degrees? Do they get jobs that fit their expectations? If you can’t measure these due to budget or timeline constraints, help institutions design ways to do this themselves, so they can continue to identify program strengths and weaknesses.

Keep these in mind, and your evaluation can provide valuable information for programs geared to make a real difference.

Blog: Changing Focus Mid-Project

Posted on September 30, 2015 in Blog

Asa Bradley, Physics Instructor, Spokane Falls Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Along with co-PIs Michelle Moore and Max Josquin, I am a recent recipient of an NSF ATE grant aimed at increasing female enrollment and retention in my college’s Information Systems (IS) program. Our year one activities included creating a daylong Information Technology (IT) camp for incoming eighth and ninth grade young women.


In our original plan, we had set aside money for five IS college students to help us for eight hours during the summer camp. We decided to meet weekly with the students during the months leading up to our event to stay on task and on schedule.

1st surprise: Nine students showed up to the initial meeting, and eight of those remained with us for the project’s duration.

2nd surprise: Instead of waiting for our guidance, the students went off and did their own research and then presented a day-long curriculum that would teach hardware, software, and networking by installing and configuring the popular game Minecraft on Raspberry Pi microcomputers.


3rd surprise: When asked to think about marketing, the students showed us a logo and a flyer that they had already designed. They wanted T-shirts with the new logo for each of the campers. And they wanted each camper to be able to take home their Raspberry Pi.


At this point, it was very clear to my colleagues and me that we should take a step back and let the students run the show. We helped them create lesson plans to achieve the outcomes they wanted, but they took ownership of everything else. We had to set up registration and advertising, but on the day of the camp, the students were the ones in the classroom teaching the campers. My colleagues and I were the gofers who collected permission slips, got snacks ready, and picked up pizza for lunch.
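
We haven't described the lessons themselves here, and the snippet below is only an illustration of the kind of activity this setup makes possible, not our students' actual curriculum: the mcpi Python library bundled with Minecraft Pi Edition lets campers script the game world running on their Raspberry Pi.

    # Illustration only (not the camp's lesson code): scripting Minecraft Pi Edition
    # from Python on the Raspberry Pi using the bundled mcpi library.
    from mcpi.minecraft import Minecraft

    mc = Minecraft.create()        # connect to the Minecraft Pi game running on this Pi
    mc.postToChat("Hello from Python!")

    pos = mc.player.getTilePos()   # the player's current block coordinates
    STONE = 1
    # Build a 3x3 stone platform directly under the player
    for dx in range(-1, 2):
        for dz in range(-1, 2):
            mc.setBlock(pos.x + dx, pos.y - 1, pos.z + dz, STONE)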

Perhaps our biggest surprise came when our external evaluator, Terryll Bailey, showed us the IS college student survey results:

“96.8% of the volunteers indicated that participating as a Student Instructor increased their confidence in teamwork and leadership in the following areas:

  • Taking a leadership role.
  • Drive a project to completion.
  • Express your point of view taking into account the complexities of a situation.
  • Synthesize others’ points of view with your ideas.
  • Ability to come up with creative ideas that take into account the complexities of the situation.
  •  Help a team move forward by articulating the merits of alternative ideas or proposals.
  • Engage team members in ways that acknowledge their contributions by building on or synthesizing the contributions of others.
  • Provide assistance or encouragement to team members.

All eight (100%) indicated that their confidence increased in providing assistance or encouragement to team members.”

For year two of our grant, we’re moving resources around in order to pay more students for more hours. We are partnering with community centers and middle schools to use our IS college students as mentors. We hope to formalize this such that our students can receive internship credits, which are required for their degree.

Our lessons learned during this first year of the grant include being open to change and being willing to relinquish control. We are also happy that we decided to work with an external evaluator, even though our grant is a small grant for institutions new to ATE. Because of the questions our evaluator asked, we have the data to justify moving resources around in our budget.

If you want to know more about how Terryll and I collaborated on the evaluation plan and project proposal, check out this webinar in which we discuss how to find the right external evaluator for your project: Your ATE Proposal: Got Evaluation?.

You may contact the author of this blog entry at: asa.bradley@sfcc.spokane.edu

Blog: Evidence and Evaluation in STEM Education: The Federal Perspective

Posted on August 12, 2015 in Blog

Evaluation Manager, NASA Office of Education

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If you have been awarded federal grants over many years, you probably have seen the increasing emphasis on evaluation and evidence. As a federal evaluator working at NASA, I have seen firsthand the government-wide initiative to increase use of evidence to improve social programs. Federal agencies have been strongly encouraged by the administration to better integrate evidence and rigorous evaluation into their budget, management, operational, and policy decisions by:

(1) making better use of already-collected data within government agencies; (2) promoting the use of high-quality, low-cost evaluations and rapid, iterative experimentation; (3) adopting more evidence-based structures for grant programs; and (4) building agency evaluation capacity and developing tools to better communicate what works. (https://www.whitehouse.gov/omb/evidence)

Federal STEM education programs have also been affected by this increasing focus on evidence and evaluation. Read, for example, the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan (2013),1 which was prepared by the Committee on STEM Education of the National Science and Technology Council.2 This strategic plan provides an overview of the importance of STEM education to American society and describes the current state of federal STEM education efforts. It discusses five priority STEM education investment areas in which a coordinated federal strategy is under development. The plan also presents methods to build and share evidence. Finally, it lays out several strategic objectives for improving the exploration and sharing of evidence-based practices, including supporting syntheses of existing research that can inform federal investments in the STEM education priority areas, improving and aligning evaluation and research expertise and strategies across federal agencies, and streamlining processes for interagency collaboration (e.g., Memoranda of Understanding, Interagency Agreements).

Another key federal document that is influencing evaluation in STEM agencies is the Common Guidelines for Education Research and Development (2013),3 jointly prepared by the U.S. Department of Education’s Institute of Education Sciences and the National Science Foundation. This document describes the two agencies’ shared understandings of the roles of various types of research in generating evidence about strategies and interventions for increasing student learning. These research types range from studies that generate fundamental understandings related to education and learning (“Foundational Research”) to studies that assess the impact of an intervention on an education-related outcome, including efficacy research, effectiveness research, and scale-up research. The Common Guidelines provide the two agencies and the broader education research community with a common vocabulary to describe the critical features of these study types.

Both documents have shaped, and will continue to shape, federal STEM programs and their evaluations. Reading them will help federal grantees gain a richer understanding of the larger federal context that is influencing reporting and evaluation requirements for grant awards.

1 A copy of the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan can be obtained here: https://www.whitehouse.gov/sites/default/files/microsites/ostp/stem_stratplan_2013.pdf

2 For more information on the Committee on Science, Technology, Engineering, and Math Education, visit https://www.whitehouse.gov/administration/eop/ostp/nstc/committees/costem.

3 A copy of the Common Guidelines for Education Research and Development can be obtained here: http://ies.ed.gov/pdf/CommonGuidelines.pdf