Blog: What Goes Where? Reporting Evaluation Results to NSF

Posted on April 26, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this blog, I provide advice for Advanced Technological Education (ATE) principal investigators (PIs) on how to include information from their project evaluations in their annual reports to the National Science Foundation (NSF).

Annual reports for NSF grants are due within the 90 days leading up to the award’s anniversary date. That means if your project’s initial award date was September 1, your annual reports will be due between June and August each year until the final year of the grant (at which point an outcome report is due within 90 days after the award anniversary date).

When you prepare your first annual report for NSF at Research.gov, you may be surprised to see there is no specific request for results from your project’s evaluation or a prompt to upload your evaluation report. That’s because Research.gov is the online reporting system used by all NSF grantees, whether they are researching fish populations in Wisconsin lakes or developing technician education programs.  So what do you do with the evaluation report your external evaluator prepared or all the great information in it?

1. Report evidence from your evaluation in the relevant sections of your annual report.

The Research.gov system for annual reports includes seven sections: Cover, Accomplishments, Products, Participants, Impact, Changes/Problems, and Special Requirements. Findings and conclusions from your evaluation should be reported in the Accomplishments and Impact sections, as described in the table below. Sometimes evaluation findings will point to a need for changes in project implementation or even its goals. In this case, pertinent evidence should be reported in the Changes/Problems section of the annual report. Highlight the most important evaluation findings and conclusions in these report sections. Refer to the full evaluation report for additional details (see Point 2 below).

What to report from your evaluation, by NSF annual report section:
Accomplishments
  • Number of participants in various activities
  • Data related to participant engagement and satisfaction
  • Data related to the development and dissemination of products (Note: The Products section of the annual report is simply for listing products, not reporting evaluative information about them.)
Impacts
  • Evidence of the nature and magnitude of changes brought about by project activities, such as changes in individual knowledge, skills, attitudes, or behaviors or larger institutional, community, or workforce conditions
  • Evidence of increased participation by members of groups historically underrepresented in STEM
  • Evidence of the project’s contributions to the development of infrastructure that supports STEM education and research, including physical resources, such as labs and instruments; institutional policies; and enhanced access to scientific information
Changes/Problems
  • Evidence of shortcomings or opportunities that point to a need for substantial changes in the project

Do you have a logic model that delineates your project’s activities, outputs, and outcomes? Is your evaluation report organized around the elements in your logic model? If so, a straightforward rule of thumb is to follow that logic model structure and report evidence related to your project activities and outputs in the Accomplishments section and evidence related to your project outcomes in the Impacts section of your NSF annual report.

2. Upload your evaluation report.

Include your project’s most recent evaluation report as a supporting file in the Accomplishments or Impact section of Research.gov. If the report is longer than about 25 pages, make sure it includes a 1-3 page executive summary that highlights key results. Your NSF program officer is very interested in your evaluation results, but probably doesn’t have time to carefully read lengthy reports from all the projects he or she oversees.

Blog: Evaluation Management Skill Set

Posted on April 12, 2017 in Blog

CEO, SPEC Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We, as evaluators, all know that managing an evaluation is quite different from managing a scientific research project. Sure, we need to exercise due diligence in completing the basic inquiry tasks: deciding on study questions/hypotheses; figuring out the strongest design, sampling plan, data collection methods, and analysis strategies; and interpreting and reporting results. But evaluation’s purposes extend well beyond proving or disproving a research hypothesis. Evaluators must also focus on how the evaluation will lead to enlightenment and what role it plays in supporting decision making. Evaluations can leave in place important processes that extend beyond the study itself, such as data collection systems and an organizational culture that places greater emphasis on data-informed decision making. Evaluations also exist within local and organizational political contexts, which matter less in academic and scientific research.

Very little has been written in the evaluation literature about evaluation management. Compton and Baizerman are the most prolific authors on the subject, having edited two issues of New Directions for Evaluation on the topic. They approach evaluation management from a theoretical perspective, discussing issues like the basic competencies of evaluation managers within different organizational contexts (2009) and the role of evaluation managers in advice giving (2012).

I would like to describe good evaluation management in terms of the actual tasks an evaluation manager must excel at: what evaluation managers must actually be able to do. For this, I looked to the field of project management. There is a large body of literature about project management, and there are whole organizations, like the Project Management Institute, dedicated to the topic. Overlaying evaluation management onto the core skills of a project manager, here is the skill set I see as needed to effectively manage an evaluation:

Technical Skills:

  • Writing an evaluation plan (including but not limited to descriptions of basic inquiry tasks)
  • Creating evaluation timelines
  • Writing contracts between the evaluation manager and various members of the evaluation team (if they are subcontractors), and with the client organization
  • Completing the application for human subjects institutional review board (HSIRB) approval, if needed

Financial Skills:

  • Creating evaluation budgets, including accurately estimating hours each person will need to devote to each task
  • Generating or justifying billing rates of each member of the evaluation team
  • Tracking expenditures to assure that the evaluation is completed within the agreed-upon budget

Interpersonal Skills:

  • Preparing a communications plan outlining who needs to be apprised of what information or involved in which decisions, how often and by what method
  • Using appropriate verbal and nonverbal communication skills to assure that the evaluation not only gets done, but good relationships are maintained throughout
  • Assuming leadership in guiding the evaluation to its completion
  • Resolving the enormous number of conflicts that can arise both within the evaluation team and between the evaluators and the stakeholders

I think that this framing can provide practical guidance for what new evaluators need to know to effectively manage an evaluation and guidance for how veteran evaluators can organize their knowledge for practical sharing. I’d be interested in comments as to the comprehensiveness and appropriateness of this list…am I missing something?

Blog: Gauging Workplace Readiness Among Cyberforensics Program Graduates

Posted on March 29, 2017 in Blog

Principal Consultant, Preferred Program Evaluations

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this blog, I share my experience leading a multi-year external evaluation that provided useful insights about how to best strengthen the work readiness components of an ATE project.

The Advanced Cyberforensics Education Consortium (ACE) is a National Science Foundation-funded Advanced Technological Education center whose goal is to design and deliver an industry-driven curriculum that produces qualified and adaptive graduates equipped to work in the field of cyberforensics and secure our nation’s electronic infrastructure. The initiative is being led by Daytona State College of Florida and three other “state lead” partner institutions in Georgia, South Carolina, and North Carolina. The targeted geographic audience of ACE is community and state colleges in the southeastern region of the United States.

The number of cyberforensics and network security program offerings among ACE’s four state lead institutions increased nearly fivefold between the initiative’s first and fourth year.  One of ACE’s objectives is to align the academic program core with employers’ needs and ensure the curriculum remains current with emerging trends, applications, and cyberforensics platforms.  In an effort to determine the extent to which this was occurring across partner institutions, I, ACE’s external evaluator, sought feedback directly from the project’s industry partners.

A Dialogue with Industry Representatives

Based on a series of stakeholder interviews conducted with industry partners, I learned that program graduates were viewed favorably for their content knowledge and professionalism. The interviewees noted that the graduates they hired added value to their organizations and that they would consider hiring additional graduates from the same academic programs. However, interviewees also reported that graduates were falling short on a desired set of fundamental soft skills.

An electronic survey of industry leaders affiliated with ACE state lead institutions was designed to gauge their experience working with graduates of the respective cyberforensics programs and to solicit suggestions for enhancing the programs’ ability to produce graduates with the requisite skills to succeed in the workplace. The first iteration of the survey read too much like a performance review. To address this limitation, the line of questioning was modified to ask more specifically about graduates’ knowledge, skills, and abilities related to employability in the field of cyberforensics.

ACE’s P.I. and I wanted to discover how the programs could be tailored to ensure a smoother transition from higher education to industry and how to best acclimate graduates to the workplace. Additionally, we sought to determine the ways in which the coursework is accountable to industry needs and the extent to which graduates’ skill sets are transferable.

What We Learned from Industry Partners

On the whole, new hires were academically prepared to complete assigned tasks, possessed intellectual curiosity, and displayed leadership qualities. A few recommendations were specific to collaboration between the institution and the business community. One suggestion was to invite some of the college’s key faculty and staff to the businesses to learn more about day-to-day operations and how they could be integrated with classroom instruction. Another industry representative encouraged institutions to engage more readily with the IT business community to generate student internships and co-ops. Survey respondents also suggested promoting professional membership in IT organizations to give graduates a well-rounded point of view as business technologists.

ACE’s P.I. and I came to understand that recent graduates – regardless of age – have room for improvement when it comes to communicating and following complex directions with little oversight.  Employers were of the opinion that graduates could have benefited from more emphasis on attention to detail, critical thinking, and best practices.  Another recommendation centered on the inclusion of a “systems level” class or “big picture integrator” that would allow students to explore how all of the technology pieces fit together cohesively.  Lastly, to remain responsive to industry trends, the partners requested additional hands-on coursework related to telephony and cloud-based security.

Blog: Evolution of Evaluation as ATE Grows Up

Posted on March 15, 2017 in Blog

Independent Consultant

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I attended a packed workshop by EvaluATE called “A Practical Approach to Outcome Evaluation” at the 2016 NSF ATE Principal Investigators Conference. Two lessons from the workshop reminded me that the most significant part of the evaluation process is the demystification of the process itself:

  • “Communicate early and often with human data sources about the importance of their cooperation.”
  • “Ensure everyone understands their responsibilities related to data collection.”

Stepping back, it made me reflect upon the evolution of evaluation in the ATE community. When I first started out in the ATE world in 1995, I was on the staff of one of the first ATE centers ever funded. Back then, being “evaluated” was perceived as quite a different experience, something akin to taking your first driver’s test or defending a dissertation—a meeting of the tester and the tested.

As the ATE community has matured, so has our approach to both evaluation and the integral communication component that goes with it. When we were a fledgling center, the meetings with our evaluator could have been a chance to take advantage of the evaluation team’s many years of experience of what works and what doesn’t. Yet, at the start we didn’t realize that it was a two-way street where both parties learned from each other. Twenty years ago, evaluator-center/project relationships were neither designed nor explained in that fashion.

Today, my colleague, Dr. Sandra Mikolaski, and I are co-evaluators for NSF ATE clients who range from a small new-to-ATE grant (there weren’t any of those back in the day!) to a large center grant that provides resources to a number of other centers and projects and even has its own internal evaluation team. The experience of working with our new-to-ATE client was perhaps what forced us to be highly thoughtful about how we hope both parties view their respective roles and input. Because the “fish don’t talk about the water” (i.e., project teams are often too close to their own work to honk their own horn), evaluators can provide not only perspective and advice, but also connections to related work and other project and center principal investigators. This perspective can have a tremendous impact on how activities are carried out and on the goals and objectives of a project.

We use EvaluATE webinars like “User-Friendly Evaluation Reports” and “Small-Scale Evaluation” as references and resources not only for ourselves but also for our clients. These webinars help them understand that an evaluation is not meant to assess and critique, but to inform, amplify, modify, and benefit.

We have learned from being on the other side of the fence that an ongoing dialog, an ethnographic approach (on-the-ground research, participant observation, a holistic approach), and a formative, input-based partnership with our client make for a more fruitful process for everyone.

Blog: Designing a Purposeful Mixed Methods Evaluation

Posted on March 1, 2017 in Blog

Doctoral Associate, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A mixed methods evaluation involves collecting, analyzing, and integrating data from both quantitative and qualitative sources. Sometimes I find that while I plan evaluations with mixed methods, I do not think purposefully about how or why I am choosing and ordering these methods. Intentionally planning a mixed methods design can help strengthen evaluation practices and the evaluative conclusions reached.

Here are three common mixed methods designs, each with its own purpose. Use these designs when you need to (1) see the whole picture, (2) dive deeper into your data, or (3) know what questions to ask.

1. When You Need to See the Whole Picture
First, the convergent parallel design allows evaluators to view the same aspect of a project from multiple perspectives, creating a more complete understanding. In this design, quantitative and qualitative data are collected simultaneously and then brought together in the analysis or interpretation stage.

For example, in an evaluation of a project whose goal is to attract underrepresented minorities into STEM careers, a convergent parallel design might include surveys asking students Likert-scale questions about their future career plans, as well as focus groups asking about their career motivations and aspirations. These data collection activities would occur at the same time. The two sets of data would then come together to inform a final conclusion.

2. When You Need to Dive Deeper into Data

The explanatory sequential design uses qualitative data to further explore quantitative results. Quantitative data is collected and analyzed first. These results are then used to shape instruments and questions for the qualitative phase. Qualitative data is then collected and analyzed in a second phase.

For example, instead of conducting both a survey and focus groups at the same time, the survey would be conducted and its results analyzed before the focus group protocol is created. The focus group questions can then be designed to enrich understanding of the quantitative results. For instance, while the quantitative data might tell evaluators how many Hispanic students are interested in pursuing engineering, the qualitative data could follow up on students’ motivations behind these responses.

3. When You Need to Know What to Ask

The exploratory sequential design allows an evaluator to investigate a situation more closely before building a measurement tool, giving guidance to what questions to ask, what variables to track, or what outcomes to measure. It begins with qualitative data collection and analysis to investigate unknown aspects of a project. These results are then used to inform quantitative data collection.

If an exploratory sequential design was used to evaluate our hypothetical project, focus groups would first be conducted to explore themes in students’ thinking about STEM careers. After analysis of this data, conclusions would be used to construct a quantitative instrument to measure the prevalence of these discovered themes in the larger student body. The focus group data could also be used to create more meaningful and direct survey questions or response sets.

Intentionally choosing a design that matches the purpose of your evaluation will help strengthen your evaluative conclusions. Studying different designs can also generate ideas for new ways to approach future evaluations.

For further information on these designs and more about mixed methods in evaluation, check out these resources:

Creswell, J. W. (2013). What is Mixed Methods Research? (video)

Frechtling, J., & Sharp, L. (Eds.). (1997). User-Friendly Handbook for Mixed Method Evaluations. National Science Foundation.

Watkins, D., & Gioia, D. (2015). Mixed methods research. Pocket guides to social work research methods series. New York, NY: Oxford University Press.

Blog: Sustaining Career Pathways System Development Efforts

Posted on February 15, 2017 in Blog

Debbie Mills, Director, National Career Pathways Network
Steven Klein, Director, RTI International

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Career pathways are complex systems that leverage education, workforce development, and social service supports to help people obtain the skills they need to find employment and advance in their careers. Coordinating people, services, and resources across multiple state agencies and training providers can be a complicated, confusing, and at times, frustrating process. Changes to longstanding organizational norms can feel threatening, which may lead some to question or actively resist proposed reforms.

To ensure lasting success, sustainability and evaluation efforts should be integrated into career pathways system development and implementation at the outset, so that new programmatic connections are robust and positioned for longevity.

To support states and local communities in evaluating and planning for sustainability, RTI International created A Tool for Sustaining Career Pathways Efforts.

This innovative paper draws upon change management theory and lessons learned from a multi-year, federally-funded initiative to support five states in integrating career and technical education into their career pathways. Hyperlinks embedded within the paper allow readers to access and download state resources developed to help evaluate and sustain career pathways efforts. A Career Pathways Sustainability Checklist, included at the end of the report, can be used to assess your state’s or local community’s progress toward building a foundation for the long-term success of its career pathways system development efforts.

The paper identifies three factors that contribute to sustainability in career pathways systems:

1. Craft a Compelling Vision and Build Support for Change

Lasting system transformation begins with lowering organizational resistance to change. This requires that stakeholders build consensus around a common vision and set of goals for the change process, establish new management structures to facilitate cross-agency communications, obtain endorsements from high-level leaders willing to champion the initiative, and publicize project work through appropriate communication channels.

2. Engage Partners and Stakeholders in the Change Process

Relationships play a critical role in maintaining systems over time. Sustaining change requires actively engaging a broad range of partners in an ongoing dialogue to share information about project work, progress, and outcomes, making course corrections when needed. Employer involvement also is essential to ensure that education and training services are aligned with labor market demand.

3. Adopt New Behaviors, Practices, and Processes

Once initial objectives are achieved, system designers will want to lock down new processes and connections to prevent systems from reverting to their original form. This can be accomplished by formalizing new partner roles and expectations, creating an infrastructure for ensuring ongoing communication, formulating accountability systems to track systemic outcomes, and securing new long-term resources and making more effective use of existing funding.

For additional information contact the authors:

Steve Klein; sklein@rti.org
Debbie Mills; fdmills1@comcast.net

Blog: Declutter Your Reports: The Checklist for Straightforward Evaluation Reports

Posted on February 1, 2017 in Blog

Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation reports have a reputation for being long, overly complicated, and impractical. The recent buzz about fresh starts and tidying up for the new year got me thinking about the similarities between these infamous evaluation reports and the disastrously cluttered homes featured on reality makeover shows. The towering piles of stuff overflowing from these homes remind me of the technical language and details that clutter up so many evaluation reports. Informational clutter, like physical clutter, can turn a report into a difficult-to-navigate obstacle course and render its contents virtually unusable. If you are looking for ideas on how to organize and declutter your reports, check out the Checklist for Straightforward Evaluation Reports that Lori Wingate and I developed. The checklist provides guidance on how to produce comprehensive evaluation reports that are concise, easy to understand, and easy to navigate. Main features of the checklist include:

  • Quick reference sheet: A one-page summary of content to include in an evaluation report and tips for presenting content in a straightforward manner.
  • Detailed checklist: A list and description of possible content to include in each report section.
  • Straightforward reporting tips: General and section-specific suggestions on how to present content in a straightforward manner.
  • Recommended resources: List of resources that expand on information presented in the checklist.

Evaluators, evaluation clients, or other stakeholders can use the checklist to set reporting expectations, such as what content to include and how to present information.

Straightforward Reporting Tips

Here are some tips, inspired by the checklist, on how to tidy up your reports:

  • Use short sentences: Each sentence should communicate one idea. Sentences should contain no more than 25 words. Downsize your words to only the essentials, just like you might downsize your closet.
  • Use headings: Use concise and descriptive headings and subheadings to clearly label and distinguish report sections. Use report headings, like labels on boxes, to make it easier to locate items in the future.
  • Organize results by evaluation questions: Organize the evaluation results section by evaluation question with separate subheadings for findings and conclusions under each evaluation question. Just like most people don’t put decorations for various holidays in one box, don’t put findings for various evaluation questions in one findings section.
  • Present takeaway messages: Label each figure with a numbered title and a separate takeaway message. Similarly, use callouts to grab readers’ attention and highlight takeaway messages. For example, use a callout in the results section to summarize the conclusion in one sentence under the evaluation question.
  • Minimize report body length: Reduce page length as much as possible without compromising quality. One way to do this is to place details that enhance understanding—but are not critical for basic understanding—in the appendices. Only information that is critical for readers’ understanding of the evaluation process and results should be included in the report body. Think of the appendices like a storage area such as a basement, attic, or shed where you keep items you need but don’t use all the time.

If you’d like to provide feedback, you can write your comments in an email or return a review form to info@evalu-ate.org. We are especially interested in getting feedback from individuals who have used the checklist as they develop evaluation reports.

Blog: Scavenging Evaluation Data

Posted on January 17, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

But little Mouse, you are not alone,
In proving foresight may be vain:
The best laid schemes of mice and men
Go often askew,
And leave us nothing but grief and pain,
For promised joy!

From To a Mouse, by Robert Burns (1785), modern English version

Research and evaluation textbooks are filled with elegant designs for studies that will illuminate our understanding of social phenomena and programs. But as any evaluator will tell you, the real world is fraught with all manner of hazards and imperfect conditions that wreak havoc on design, bringing grief and pain, rather than the promised joy of a well-executed evaluation.

Probably the biggest hindrance to executing planned designs is that evaluation is just not the most important thing to most people. (GASP!) They are reluctant to give two minutes for a short survey, let alone an hour for a focus group. Your email imploring them to participate in your data collection effort is one of hundreds of requests for their time and attention that they are bombarded with daily.

So, do all the things the textbooks tell you to do. Take the time to develop a sound evaluation design and do your best to follow it. Establish expectations early with project participants and other stakeholders about the importance of their cooperation. Use known best practices to enhance participation and response rates.

In addition: Be a data scavenger. Here are two ways to get data for an evaluation that do not require hunting down project participants and convincing them to give you information.

1. Document what the project is doing.

I have seen a lot of evaluation reports in which evaluators painstakingly recount a project’s activities as a tedious story rather than a straightforward account. This task typically requires the evaluator to ask many questions of project staff, pore through documents, and track down materials. It is much more efficient for project staff to keep a record of their own activities. For example, see EvaluATE’s resume. It is a no-nonsense record of our funding, activities, dissemination, scholarship, personnel, and contributors. In and of itself, our resume does most of the work of the accountability aspect of our evaluation (i.e., Did we do what we promised?). In addition, the resume can be used to address questions like these:

  • Is the project advancing knowledge, as evidenced by peer-reviewed publications and presentations?
  • Is the project’s productivity adequate in relation to its resources (funding and personnel)?
  • To what extent is the project leveraging the expertise of the ATE community?

2. Track participation.

If your project holds large events, use a sign-in sheet to get attendance numbers. If you hold webinars, you almost certainly have records with information about registrants and attendees. If you hold smaller events, pass around a sign-in sheet asking for basic information like name, institution, email address, and job title (or major if it’s a student group). If the project has developed a course, get enrollment information from the registrar. Most importantly: Don’t put these records in a drawer. Compile them in a spreadsheet and analyze the heck out of them (a simple analysis sketch follows the list below). Here are example data points that we glean from EvaluATE’s participation records:

  • Number of attendees
  • Number of attendees from various types of organizations (such as two- and four-year colleges, nonprofits, government agencies, and international organizations)
  • Number and percentage of attendees who return for subsequent events
  • Geographic distribution of attendees
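
If those records live in a single spreadsheet, pulling out data points like these takes only a few lines of code. Below is a minimal sketch in Python with pandas; the file name and column names (event, email, org_type, state) are hypothetical stand-ins for whatever your sign-in sheets and registration reports actually capture.

```python
import pandas as pd

# Hypothetical participation log compiled from sign-in sheets and webinar
# registration reports: one row per person per event.
records = pd.read_csv("participation_records.csv")

# Number of attendees (unique individuals across all events)
total_attendees = records["email"].nunique()

# Attendees by organization type (two-year college, four-year college, nonprofit, etc.)
by_org_type = records.groupby("org_type")["email"].nunique()

# Number and percentage of attendees who returned for more than one event
events_per_person = records.groupby("email")["event"].nunique()
returning = int((events_per_person > 1).sum())
returning_pct = 100 * returning / total_attendees

# Geographic distribution of attendees
by_state = records.groupby("state")["email"].nunique().sort_values(ascending=False)

print(f"{total_attendees} attendees; {returning} ({returning_pct:.0f}%) attended more than one event")
print(by_org_type, by_state, sep="\n")
```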

Project documentation and participation data will be most helpful for process evaluation and accountability. You will still need cooperation from participants for outcome evaluation—and you should engage them early to garner their interest and support for evaluation efforts. Still, you may be surprised by how much valuable information you can get from these two sources—documentation of activities and participation records—with minimal effort.

Get creative about other data you can scavenge, such as institutional data that colleges already collect; website data, such as Google Analytics; and citation analytics for published articles.

Blog: Research Goes to School (RGS) Model

Posted on January 10, 2017 in Blog

Project Coordinator, Discovery Learning Research Center, Purdue University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Data regarding pathways to STEM careers indicate that a critical transition point exists between high school and college. Many students who are initially interested in STEM disciplines and could be successful in these fields either do not continue to higher education or choose non-STEM majors in college. In part, this is because these students do not see what role they could play in STEM careers. For this reason, the STEM curriculum needs to reflect its applicability to today’s big challenges and connect those challenges to students on a personal level.

We proposed a project that infused high school STEM curricula with cross-cutting topics related to the hot research areas that scientists are working on today.  We began by focusing on sustainable energy concepts and then shifted to nanoscience and technology.

Pre-service and in-service teachers came to a large Midwestern research university for two weeks of intensive professional development in problem-based learning (PBL) pedagogy.  Along with PBL training, participants also connected with researchers in the grand challenge areas of sustainable energy (in project years 1-3) and nanoscience and technology (years 4-5).

We proposed a two-tiered approach:

1. Develop a model for education consisting of two parts:

  • Initiate a professional development program that engaged pre-service and in-service high school teachers around research activities in grand challenge programs.
  • Support these teachers to transform their curricula and classroom practice by incorporating concepts of the grand challenge programs.

2. Establish a systemic approach for integrating research and education activities.

The results provided a framework for creating professional development that brings researchers and STEM teachers together and culminates in the integration of grand challenge concepts into education curricula.

Through developmental evaluation over a multi-year process, core practices for an effective program began to emerge:

  • Researchers must identify the basic scientific concepts their work entails. For example, biofuels researchers work with the energy and carbon cycles; nanotechnology researchers must thoroughly understand size-dependent properties, forces, self-assembly, size and scale, and surface area-to-volume ratio.
  • Once identified, these concepts must be mapped to teachers’ state teaching standards and Next Generation Science Standards (NGSS), making them relevant for teachers.
  • Professional development must be planned for researchers to help them share their research at an appropriate level for use by high school teachers in their classrooms.
  • Professional development must be planned for teachers to help them integrate the research content into their teaching and learning standards in meaningful ways.
  • The professional development for teachers must include illustrative activities that demonstrate scientific concepts and be mapped to state and NGSS teaching standards.

The iterative and rapid feedback processes of developmental evaluation allowed the program to evolve. Feedback from data provided the impetus for change, while debriefing sessions provided insight into the program and its core practices. To test whether the core practices identified with the biofuels topic in years 1-3 held up, we used a dissimilar topic, nanotechnology, in years 4-5. We saw greater integration of research and education activities in teachers’ curricula as the core practices became more fully developed through iteration, even with a new topic. The core practices held regardless of topic, and practitioners became better at delivery with more repetitions in years 4 and 5.

 

Blog: Evaluating Creativity in the Context of STEAM Education

Posted on December 16, 2016 in Blog

Shelly Engelman, Senior Researcher, The Findings Group, LLC
Morgan Miller, Research Associate, The Findings Group, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

At The Findings Group, we are assessing a National Science Foundation Discovery Research K-12 project that gives students an opportunity to learn about computing in the context of music through EarSketch. As with other STEAM (Science, Technology, Engineering, Arts, Math) approaches, EarSketch aims to motivate and engage students in computing through a creative, cross-disciplinary approach. Our challenge with this project was threefold: 1) defining creativity within the context of STEAM education, 2) measuring creativity, and 3) demonstrating how creativity gives rise to more engagement in computing.

The 4Ps of Creativity

To understand creativity, we turned first to the literature. According to previous research, creativity has been discussed from four perspectives, known as the 4Ps of creativity: Process, Person, Press/Place, and Product. For our study, we focused on creativity from the perspectives of the Person and the Place. Person refers to the traits, tendencies, and characteristics of the individual who creates something or engages in a creative endeavor. Place refers to the environmental factors that encourage creativity.

Measuring Creativity – Person

Building on previous work by Carroll (2009) and colleagues, we developed a self-report Creativity – Person measure that taps into six aspects of personal expressiveness within computing. These aspects include:

  • Expressiveness: Conveying one’s personal view through computing
  • Exploration:  Investigating ideas in computing
  • Immersion/Flow: Feeling absorbed by the computing activity
  • Originality: Generating unique and personally novel ideas in computing

After a series of pilot tests with high school students, the final Creativity – Person scale consisted of 10 items and yielded excellent reliability (Cronbach’s alpha = .90 to .93); it also correlated positively with other psychosocial measures such as computing confidence, enjoyment, and identity and belongingness.
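
For readers less familiar with the statistic, Cronbach’s alpha reflects how consistently a set of items measures the same construct: it is computed from the number of items and the ratio of the summed item variances to the variance of the total scale score. Here is a minimal sketch of that computation in Python with pandas; the data file and item names are hypothetical, and this is not the project’s actual analysis code.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (one respondent per row,
    one Likert-type item per column)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage with 10 items named person_1 ... person_10:
# responses = pd.read_csv("creativity_person_items.csv")
# print(round(cronbach_alpha(responses[[f"person_{i}" for i in range(1, 11)]]), 2))
```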

Measuring Creativity—Place

Assessing creativity at the environmental level proved to be more of a challenge! In building the Creativity – Place scale, we turned our attention to previous work by Shaffer and Resnick (1999), who assert that learning environments or materials that are “thickly authentic” (personally relevant and situated in the real world) promote engagement in learning. Using this as our operational definition of a creative environment, we designed a self-report scale that taps into four identifiable components of a thickly authentic learning environment:

  • Personal: Learning that is personally meaningful for the learner
  • Real World: Learning that relates to the real world outside of school
  • Disciplinary: Learning that provides an opportunity to think in the modes of a particular discipline
  • Assessment: Learning where the means of assessment reflect the learning process.

Our Creativity – Place scale consisted of 8 items and also yielded excellent reliability (Cronbach’s alpha=.91).

Predictive Validity

Once we had our two self-report questionnaires in hand (the Creativity – Person and Creativity – Place scales), we collected data from high school students who used EarSketch as part of their computing course. Our main findings were:

  • Students showed significant increases from pre to post in personal expressiveness in computing (Creativity – Person), and
  • A creative learning environment (Creativity – Place) predicted students’ engagement in computing and intent to persist. That is, through a series of multiple regression analyses, we found that a creative learning environment, fueled by a meaningful and personally relevant curriculum, drives improvements in students’ attitudes and intent to persist in computing (a simplified analysis sketch follows this list).
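
For illustration, analyses of this general type can be run with standard statistical tooling. The sketch below, in Python with scipy and statsmodels, pairs a pre/post comparison with a simple regression model; the file, variable names, and model specification are assumptions for the example, not the authors’ actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical matched pre/post survey file; all column names are illustrative.
df = pd.read_csv("earsketch_survey.csv")

# Pre-to-post change in Creativity - Person scores (paired t-test)
t_stat, p_value = stats.ttest_rel(df["person_post"], df["person_pre"])
print(f"Creativity - Person pre/post: t = {t_stat:.2f}, p = {p_value:.3f}")

# Does the creative learning environment (Creativity - Place) predict intent
# to persist in computing, controlling for baseline attitudes?
model = smf.ols("intent_to_persist ~ place_score + person_pre", data=df).fit()
print(model.summary())
```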

Moving forward, we plan on expanding our work by examining other facets of creativity (e.g., Creativity – Product) through the development of creativity rubrics to assess algorithmic music compositions.

References

Carroll, E. A., Latulipe, C., Fung, R., & Terry, M. (2009). Creativity factor evaluation: Towards a standardized survey metric for creativity support. In C&C ’09: Proceedings of the Seventh ACM Conference on Creativity and Cognition (pp. 127-136). New York, NY: Association for Computing Machinery.

Engelman, S., Magerko, M., McKlin, T., Miller, M., Douglas, E., & Freeman, J. (in press). Creativity in authentic STEAM education with EarSketch. In SIGCSE ’17: Proceedings of the 48th ACM Technical Symposium on Computer Science Education. Seattle, WA: Association for Computing Machinery.

Shaffer, D. W., & Resnick, M. (1999). “Thick” authenticity: New media and authentic learning. Journal of Interactive Learning Research, 10(2), 195-215.