Rachel Becker-Klein

Senior Research Associate, PEER Associates

Rachel Becker-Klein, Ph.D., is a Senior Research Associate at PEER Associates with over a decade of experience as an evaluator. Her interest in systems thinking, rooted in a Ph.D. in Community Psychology from New York University (2003), has led her to bring a holistic approach to evaluation and assessment. Embedded assessment tools, which measure participant skills, knowledge, and behavior, are an important part of her work as an evaluator, and she has developed such tools for several STEM education programs in both formal and informal educational settings.


Blog: Part 2: Using Embedded Assessment to Understand Science Skills

Posted on January 31, 2018, in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Rachel Becker-Klein, Senior Research Associate, PEER Associates
Karen Peterman, President, Karen Peterman Consulting
Cathlyn Stylinski, Senior Agent, University of Maryland Center for Environmental Science

In our last EvaluATE blog, we defined embedded assessments (EAs) and described the benefits and challenges of using EAs to measure and understand science skills. Since then, our team has been testing the development and use of EAs for three citizen science projects through our National Science Foundation (NSF) project, Embedded Assessment for Citizen Science. Below we describe our journey and findings, including the creation and testing of an EA development model.

Our project first developed and tested a process model for creating EAs that are both reliable and valid (Peterman, Becker-Klein, Stylinski, & Grack-Nelson, 2017). Stage 1 involved articulating program goals and determining what evidence would document progress toward those goals. In Stage 2, we collected both content validity evidence (the extent to which a measure related to the identified goal) and response process validity evidence (how understandable the task was to participants). Stage 3 involved field-testing the EA. The exploratory process, with its stages and associated products, is depicted in the figure below.

We applied our EA development approach to three citizen science case study sites and successfully created an EA for each. For instance, for Nature’s Notebook (an online monitoring program in which naturalists record observations of plants and animals to generate long-term datasets), we worked with program staff to create an EA of the skill of paying close attention. The EA was designed for use in the program’s in-person workshops, where participants practice observation skills by collecting data about flora and fauna at the training site. Participants completed a Journal and Observation Worksheet as part of their training; the EA process standardized the worksheet and added a rubric for assessing how well participants’ responses reflected their ability to pay close attention to the flora and fauna around them.

Embedded Assessment Development Process

Lessons Learned:

  • The EA development process had the flexibility to accommodate the needs of each case study to generate EAs that included a range of methods and scientific inquiry skills.
  • Both the SMART goals and Measure Design Template (see Stage 1 in the figure above) proved useful for guiding the articulation of project goals and activities and for identifying meaningful ways to document evidence of inquiry learning.
  • The response process validity component (from Stage 2) resulted in key changes to each EA, such as changes to the assessment itself (e.g., streamlining the activities) as well as the scoring procedures.

Opportunities for using EAs:

  • Modifying existing activities. All three case studies had existing project activities that we could build on to create an EA. Working closely with program staff, we modified these activities to increase their rigor and standardization.
  • Formative use of EAs. Since a true EA is indistinguishable from the program itself, the process of developing and using an EA often resulted in strengthened project activities.

Challenges of using EAs:

  • Fine line between EA and program activities. If an EA is truly indistinguishable from the project activity itself, it can be difficult for project leaders and evaluators to determine where the program ends and the assessment begins. This ambiguity can create tension in cases where volunteers are not performing scientific inquiry skills as expected, making it difficult to disentangle whether the results were due to shortcomings of the program or a failing of the EA designed to evaluate the program.
  • Group versus individual assessments. Another set of challenges for administering EAs relates to the group-based implementation of many informal science projects. A group score may not reflect the skills of every individual in the group, making the results biased and difficult to interpret.

Though the results of this study are promising, we are at the earliest stages of understanding how to capture authentic evidence to document learning related to science skills. The use of a common EA development process, with common products, has the potential to generate new research to address the challenges of using EAs to measure inquiry learning in the context of citizen science projects and beyond. We will continue to explore these issues in our new NSF grant, Streamlining Embedded Assessment for Citizen Science (DRL #1713424).

Acknowledgments:

We would like to thank our case study partners: LoriAnne Barnett from Nature’s Notebook; Chris Goforth, Tanessa Schulte, and Julie Hall from Dragonfly Detectives; and Erick Anderson from the Young Scientists Club. This work was supported by the National Science Foundation under grant number DRL#1422099.

Resource:

Peterman, K., Becker-Klein, R., Stylinski, C., & Grack-Nelson, A. (2017). Exploring embedded assessment to document scientific inquiry skills within citizen science. In C. Herodotou, M. Sharples, & E. Scanlon (Eds.), Citizen inquiry: A fusion of citizen science and inquiry learning (pp. 63-82). New York, NY: Routledge.

Blog: Using Embedded Assessment to Understand Science Skills

Posted on August 5, 2015, in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Cathlyn Stylinski, Senior Agent, University of Maryland Center for Environmental Science
Karen Peterman, President, Karen Peterman Consulting
Rachel Becker-Klein, Senior Research Associate, PEER Associates

As our field explores the impact of informal (and formal) science programs on learning and skill development, it is imperative that we integrate research and evaluation methods into the fabric of the programs being studied. Embedded assessments (EAs) are “opportunities to assess participant progress and performance that are integrated into instructional materials and are virtually indistinguishable from day-to-day [program] activities” (Wilson & Sloane, 2000, p. 182). As such, EAs allow learners to demonstrate their science competencies through tasks that are integrated seamlessly into the learning experience itself.

Since they require that participants demonstrate their skills, rather than simply rate their confidence in using them, EAs offer an innovative way to understand and advance the evidence base for knowledge about the impacts of informal science programs. EAs can take on many forms and can be used in a variety of settings. The essential defining feature is that these assessments document and measure participant learning as a natural component of the program implementation and often as participants apply or demonstrate what they are learning.

Related concepts that you may have heard of:

  • Performance assessments: EA methods can include performance assessments, in which participants do something to demonstrate their knowledge and skills (e.g., scientific observation).
  • Authentic assessments: Authentic assessments are assessments of skills where the learning tasks mirror real-life problem-solving situations (e.g., the specific data collection techniques used in a project) and could be embedded into project activities. (Rural School and Community Trust, 2001; Wilson & Sloane, 2000)

You can use EAs to measure participants’ abilities alongside more traditional research and evaluation measures and also to measure skills across time. So, along with surveys of content knowledge and confidence in a skill area, you might consider adding experiential and hands-on ways of assessing participant skills. For instance, if you were interested in assessing participants’ skills in observation, you might already be asking them to make some observations as a part of your program activities. You could then develop and use a rubric to assess the depth of that observation.
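To make the rubric idea concrete, here is a minimal sketch of how observation ratings could be tallied. The criteria, levels, and function names are invented for illustration and are not from the EA4CS project; a real EA rubric would be developed and validated with program staff.

```python
# Hypothetical sketch: scoring the depth of a participant's written
# observation with a simple rubric. Criteria and levels (0-2) are
# invented for illustration only.

RUBRIC = {
    "detail":   {0: "no descriptive detail", 1: "some detail", 2: "rich, specific detail"},
    "accuracy": {0: "inaccurate", 1: "partially accurate", 2: "accurate"},
    "evidence": {0: "no reference to what was seen", 1: "general reference", 2: "cites specific features"},
}

def score_observation(ratings):
    """Sum per-criterion ratings (assigned by a trained rater who has
    read the participant's worksheet) into a total rubric score."""
    for criterion, level in ratings.items():
        if criterion not in RUBRIC or level not in RUBRIC[criterion]:
            raise ValueError(f"invalid rating: {criterion}={level}")
    return sum(ratings.values())

# Example: one rater's judgments of a single participant's journal entry
ratings = {"detail": 2, "accuracy": 1, "evidence": 2}
print(score_observation(ratings))  # total of 5 out of a possible 6
```

In practice, the interesting design work lies in defining the criteria and level descriptions (and checking that independent raters apply them consistently), not in the tallying itself.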

Although EA offers many benefits, the method also poses some significant challenges that have prevented widespread adoption to date. For the application of EA to be successful, there are two significant challenges to address: (1) the need for a standard EA development process that includes reliability and validity testing and (2) the need for professional development related to EA.

With these benefits and challenges in mind, we encourage project leaders, evaluators, and researchers to help us to push the envelope by:

  • Thinking critically about the inquiry skills fostered by their informal science projects and ensuring that those skills are measured as part of the evaluation and research plans.
  • Considering whether projects include practices that could be used as an EA of skill development and, if so, taking advantage of those systems for evaluation and research purposes.
  • Developing authentic methods that address the complexities of measuring skill development.
  • Sharing these experiences broadly with the community in an effort to highlight the valuable role that such projects can play in engaging the public with science.

We are currently working on a National Science Foundation grant (Embedded Assessment for Citizen Science – EA4CS) that is investigating the effectiveness of embedded assessment as a method to capture participant gains in science and other skills. We are conducting a needs assessment and working on creating embedded assessments at each of three different case study sites. Look for updates on our progress and additional blogs over the next year or so.

Rural School and Community Trust (2001). Assessing Student Work. Available from http://www.ruraledu.org/user_uploads/file/Assessing_Student_Work.pdf

Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181-208. Available from http://dx.doi.org/10.1207/S15324818AME1302_4