Evaluator, Researcher, Both?

Posted on June 21, 2017 in Blog

Professor, College of William & Mary

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Having served as both a project evaluator and a project researcher, I know how critical it is to have conversations about roles at the outset of funded projects. Early and open conversations can help avoid confusion, prevent missed opportunities to collect critical data, and highlight where the responsibilities of each project team role differ. Because the lines between evaluator and researcher have blurred over time, project teams, evaluators, and researchers need to create new definitions for project roles, understand the scope of responsibility for each role, and build data systems that allow information to be shared across roles.

Evaluation serves a central role in funded research projects. The lines between the role of the evaluator and that of the researcher can blur, however, because many researchers also conduct evaluations. Scriven (2003/2004) saw evaluation as a means to determine “the merit, worth, or value of things” (para. 1), whereas social science research is “restricted to empirical (rather than evaluative) research, and bases its conclusion only on factual results—that is, observed, measured, or calculated data” (para. 2). Consider, too, how Powell (2006) posited that “Evaluation research can be defined as a type of study that uses standard social research methods for evaluative purposes” (p. 102). It is easy to see how confusion arises.

Taking a step back can shed light on the differences between these roles and the ways they are now being redefined. The researcher brings a different perspective to a project: a goal of research is the production of knowledge, whereas the role of the external evaluator is to provide an “independent” assessment of the project and its outcomes. Typically, an evaluator is seen as a judge of a project’s merits, which assumes that a “right” outcome exists. Yet the values held by the evaluator, the project team, and the stakeholders are inherent in evaluation, as context influences the process and shapes who decides where to focus attention, why, and how feedback is used (Skolits, Morrow, & Burr, 2009). Knowing how the project team intends to use evaluation results to improve project outcomes requires a shared understanding of the evaluator’s role (Langfeldt & Kyvik, 2011).

Evaluators seek to understand what information is important to collect and review and how best to use the findings to relate outcomes to stakeholders (Levin-Rozalis, 2003). Researchers, by contrast, dive deep into a particular issue or topic with the goal of producing new ways of understanding it. In a perfect world, the roles of evaluators and researchers are distinct and separate. But given that funded projects are required to produce outcomes that inform the field, evaluators also discover new knowledge. The resulting swirl of roles means that evaluators publish project results that inform the field, researchers leverage their evaluator roles to publish scholarly work, and both borrow strategies from each other to conduct their work.

The blurring of roles requires project leaders to provide clarity about evaluator and researcher team functions. The following questions can help in this process:

  • How will the evaluator and researcher share data?
  • What are the expectations for publication from the project?
  • What kinds of formative evaluation might occur that ultimately changes the project trajectory? How do these changes influence the research portion of the project?
  • How will the project team develop shared meaning around terms, roles, scope of work, and authority?

Knowing how the evaluator and researcher will work together provides an opportunity to leverage expertise in ways that move beyond the simple additive effect of the two roles. Sharing information is only possible when roles are coordinated, which requires advance planning. It is important to move beyond siloed roles and toward more collaborative models of evaluation and research within projects. Collaboration requires more time and attention to sharing information and defining roles, but the time spent coordinating these joint efforts is worth it, given the contributions to both the project and the field.


References

Levin-Rozalis, M. (2003). Evaluation and research: Differences and similarities. The Canadian Journal of Program Evaluation, 18(2), 1-31.

Powell, R. R. (2006).  Evaluation research:  An overview.  Library Trends, 55(1), 102-120.

Scriven, M. (2003/2004).  Michael Scriven on the differences between evaluation and social science research.  The Evaluation Exchange, 9(4).

Logic Models for Curriculum Evaluation

Posted on June 7, 2017 in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Rachel Tripathy, Research Associate, WestEd
Linlin Li, Senior Research Associate, WestEd

At the STEM Program at WestEd, we are in the third year of an evaluation of an innovative, hands-on STEM curriculum. Learning by Making is a two-year high school STEM course that integrates computer programming and engineering design practices with topics in earth/environmental science and biology. Experts in the areas of physics, biology, environmental science, and computer engineering at Sonoma State University (SSU) developed the curriculum by integrating computer software with custom-designed experiment set-ups and electronics to create inquiry-based lessons. Throughout this project-based course, students apply mathematics, computational thinking, and the Next Generation Science Standards (NGSS) Scientific and Engineering Design Practices to ask questions about the world around them, and seek the answers. Learning by Making is currently being implemented in rural California schools, with a specific effort being made to enroll girls and students from minority backgrounds, who are currently underrepresented in STEM fields. You can listen to students and teachers discussing the Learning by Making curriculum here.

Using a Logic Model to Drive Evaluation Design

We derived our evaluation design from the project’s logic model. A logic model is a structured description of how a specific program achieves an intended learning outcome. The purpose of the logic model is to precisely describe the mechanisms behind the program’s effects. Our approach to the Learning by Making logic model is a variant on the five-column logic format that describes the inputs, activities, outputs, outcomes, and impacts of a program (W.K. Kellogg Foundation, 2014).

Learning by Making Logic Model


Logic models are read as a series of conditionals. If the inputs exist, then the activities can occur. If the activities do occur, then the outputs should occur, and so on. Our evaluation of the Learning by Making curriculum centers on the connections indicated by the orange arrows linking outputs to outcomes in the logic model above. These connections break down into two primary areas for evaluation: 1) teacher professional development, and 2) classroom implementation of Learning by Making. The questions that correspond to the orange arrows above can be summarized as:

  • Are the professional development (PD) opportunities and resources for the teachers increasing teacher competence in delivering a computational thinking-based STEM curriculum? Does Learning by Making PD increase teachers’ use of computational thinking and project-based instruction in the classroom?
  • Does the classroom implementation of Learning by Making increase teachers’ use of computational thinking and project-based instruction in the classroom? Does classroom implementation promote computational thinking and project-based learning? Do students show an increased interest in STEM subjects?

Without effective teacher PD or classroom implementation, the logic model “breaks,” making it unlikely that the desired outcomes will be observed. To answer our questions about outcomes related to teacher PD, we used comprehensive teacher surveys, observations, bi-monthly teacher logs, and focus groups. To answer our questions about outcomes related to classroom implementation, we used student surveys and assessments, classroom observations, teacher interviews, and student focus groups. SSU used our findings to revise both the teacher PD resources and the curriculum itself to better situate these two components to produce the outcomes intended. By deriving our evaluation design from a clear and targeted logic model, we succeeded in providing actionable feedback to SSU aimed at keeping Learning by Making on track to achieve its goals.
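For readers who find it helpful to see the conditional chain spelled out explicitly, below is a minimal, hypothetical sketch in Python. It is not drawn from the WestEd evaluation: the stage statuses and evidence notes are invented for illustration, and the stages simply follow the five-column format described above. The function walks the if-then chain and reports the first stage whose conditions are unmet, which is the point where the model “breaks.”

    # Illustrative only: a logic model as an ordered series of conditionals.
    # Stage names follow the five-column format (inputs -> activities ->
    # outputs -> outcomes -> impacts); the statuses and evidence notes
    # below are invented for this sketch.
    logic_model = {
        "inputs":     {"met": True,  "evidence": "curriculum, PD funding, SSU expertise"},
        "activities": {"met": True,  "evidence": "teacher PD delivered, lessons taught"},
        "outputs":    {"met": True,  "evidence": "PD attendance, classroom implementation logs"},
        "outcomes":   {"met": False, "evidence": "student STEM interest not yet demonstrated"},
        "impacts":    {"met": False, "evidence": "long-term outcomes not yet measurable"},
    }

    def first_break(model):
        """Return the first stage whose conditions are unmet -- the point
        where the logic model 'breaks' -- or None if the chain holds."""
        for stage, status in model.items():
            if not status["met"]:
                return stage
        return None

    print("Logic model breaks at:", first_break(logic_model))  # -> outcomes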

Evaluating New Technology

Posted on May 23, 2017 in Blog

Professor and Senior Associate Dean, Rochester Institute of Technology

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As a STEM practitioner and evaluator, I have had many opportunities to assess new and existing courses, workshops, and programs. But some requests still challenge me, especially requests to evaluate new technology. The problem lies in clarifying the role the new technology plays and in focusing the evaluation on the proper questions.

Well, ok, you ask, “what are the roles I need to focus on?” In a nutshell, new technologies rear their heads in two ways:

(1) As content to be learned in the instructional program and,

(2) As a delivery mechanism for the instruction.

These are often at odds with each other, and sometimes overlap in unusual ways. For example, a course on “getting along at work” could be delivered via an iPad. A client could suggest that we should “evaluate the iPads, too.” In this context, an evaluation of the iPad should be limited to its contribution to achieving the program outcomes. Among other questions: Did it function in a way that students enjoyed (or didn’t hate) and in a way that contributed to (or didn’t interfere with) learning? In a self-paced program, the iPad might be the primary vehicle for content delivery. However, using FaceTime or Skype via an iPad only requires the system to be a communication device – it provides little more than a replacement for other technologies. In both cases, evaluation questions would center on the impact of the iPad on the learning process. Note that this is no more “critical” a question than “Did the students enjoy (or not hate) the snacks provided to them?” Interesting, but only as a supporting process.

Alternatively, a classroom program could be devoted to “learning the iPad.” In this case, the iPad has become “subject matter” that is to be learned through the process of human classroom interaction. In this case, how much they learned about the iPad is the whole point of the program! Ironically, a student could learn things about the iPad (through pictures, simulations, or through watching demonstrations) without actually using an iPad! But remember, it is not only an enabling contributor to the program – it can be the object of study.

So, the evaluation of new technology means that the evaluator must determine which aspect of new technology is being evaluated: technology as a process for delivering instruction, or as a subject of study. And a specific, somewhat circular case exists as well: Learning about an iPad through training delivered on an iPad. In this case, we would try to generate evaluation questions that allow us to address iPads both as delivery tools and iPads as skills to be learned.

While this may now seem straightforward as you read about it, remember that it is not straightforward to clients who are making an evaluation request. It might help to print this blog (or save a link) to help make clear these different, but sometimes interacting, uses of technology.

Evaluating Network Growth through Social Network Analysis

Posted on May 11, 2017 in Blog

Doctoral Student, College of Education, University of Nebraska at Omaha

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One of the most impactful learning experiences from the ATE Principal Investigators Conference I attended in October 2016 was discovering the growing use of Business and Industry Leadership Team (BILT) partnerships in developing and implementing new STEM curricula throughout the country. The need for cross-sector partnerships has become apparent and has been reinforced through specific National Science Foundation (NSF) grants.

The need for empirical data about networks and collaborations is increasing within the evaluation realm, and social network surveys are one method of quickly and easily gathering such data. Social network surveys come in a variety of forms. The one I have used is in a roster format: each program participant is listed, and each individual completes the survey by selecting the option that best describes their relationship with every other participant. The options range from not knowing the person at one extreme to having formally collaborated with that person at the other. In the past, data from these surveys were analyzed through social network analysis, which required considerable programming knowledge. Thanks to recent technological advancements, newer social network analysis programs make analyzing these data much more user-friendly for non-programmers. I have worked on an NSF-funded project at the University of Nebraska at Omaha whose goal is to provide professional development and facilitate the growth of a network of middle school teachers so they can create computer science lessons and implement them in their current curricula (visit the SPARCS website). One of the methods for evaluating the facilitation of the network is a social network analysis questionnaire. This method has proved very helpful in determining the extent to which the professional relationships of the cohort members have evolved over the course of their year-long experience in the program.

The social network analysis program I have been using is known as NodeXL and is an Excel add-in. It is very user-friendly and can easily be used to generate quantitative data on network development. I was able to take the data gathered from the social network analysis, conduct research, and present my article, “Identification of the Emergent Leaders within a CSE Professional Development Program,” at an international conference in Germany. While the article is not focused on evaluation, it does review the survey instrument itself.  You may access the article through this link (although I think your organization must have access to ACM):  Tracie Evans Reding WiPSCE Article. The article is also posted on my Academia.edu page.
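For evaluators who would rather script this kind of analysis than use a spreadsheet add-in, here is a minimal, hypothetical sketch in Python using the networkx package (the analysis described above was done in NodeXL, and the teachers, tie strengths, and response scale below are invented for illustration). It converts roster-style responses into a directed, weighted network and computes a few simple measures.

    # Illustrative only: roster-survey responses treated as weighted ties.
    # 0 = does not know, 1 = knows, 2 = has discussed teaching with,
    # 3 = has formally collaborated with (a made-up scale for this sketch)
    import networkx as nx

    roster_responses = [
        ("Teacher A", "Teacher B", 3),
        ("Teacher A", "Teacher C", 1),
        ("Teacher B", "Teacher C", 2),
        ("Teacher C", "Teacher D", 0),
        ("Teacher D", "Teacher A", 2),
    ]

    G = nx.DiGraph()
    for respondent, colleague, strength in roster_responses:
        if strength > 0:  # keep only ties where some relationship exists
            G.add_edge(respondent, colleague, weight=strength)

    # Whole-network and node-level measures often used to describe growth
    print("Density:", round(nx.density(G), 2))
    print("Reciprocity:", round(nx.reciprocity(G), 2))
    print("In-degree centrality:", nx.in_degree_centrality(G))

Comparing measures like these between a beginning-of-year survey and an end-of-year survey gives a simple quantitative picture of how much a cohort's professional relationships have grown.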

Another funding strand emphasizing networks through the National Science Foundation is known as Inclusion across the Nation of Communities of Learners of Underrepresented Discoverers in Engineering and Science (INCLUDES). The long-term goal of NSF INCLUDES is to “support innovative models, networks, partnerships, technical capabilities and research that will enable the U.S. science and engineering workforce to thrive by ensuring that traditionally underrepresented and underserved groups are represented in percentages comparable to their representation in the U.S. population.” Noted in the synopsis for this funding opportunity is the importance of “efforts to create networked relationships among organizations whose goals include developing talent from all sectors of society to build the STEM workforce.” The increased funding available for cross-sector collaborations makes it imperative that evaluators are able to empirically measure these collaborations. While the notion of “networks” is not a new one, the availability of resources such as NodeXL will make the evaluation of these networks much easier.

 

Full Citation for Article:

Evans Reding, T., Dorn, B., Grandgenett, N., Siy, H., Youn, J., Zhu, Q., Engelmann, C. (2016).  Identification of the Emergent Teacher Leaders within a CSE Professional Development Program.  Proceedings for the 11th Workshop in Primary and Secondary Computing Education.  Munster, Germany:  ACM.

What Goes Where? Reporting Evaluation Results to NSF

Posted on April 26, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this blog, I provide advice for Advanced Technological Education (ATE) principal investigators (PIs) on how to include information from their project evaluations in their annual reports to the National Science Foundation (NSF).

Annual reports for NSF grants are due within the 90-day period leading up to the award’s anniversary date. That means if your project’s initial award date was September 1, your annual reports will be due between June and August each year until the final year of the grant (at which point a project outcomes report is due within 90 days after the award ends).

When you prepare your first annual report for NSF at Research.gov, you may be surprised to see there is no specific request for results from your project’s evaluation or a prompt to upload your evaluation report. That’s because Research.gov is the online reporting system used by all NSF grantees, whether they are researching fish populations in Wisconsin lakes or developing technician education programs.  So what do you do with the evaluation report your external evaluator prepared or all the great information in it?

1. Report evidence from your evaluation in the relevant sections of your annual report.

The Research.gov system for annual reports includes seven sections: Cover, Accomplishments, Products, Participants, Impact, Changes/Problems, and Special Requirements. Findings and conclusions from your evaluation should be reported in the Accomplishments and Impact sections, as described in the table below. Sometimes evaluation findings will point to a need for changes in project implementation or even its goals. In this case, pertinent evidence should be reported in the Changes/Problems section of the annual report. Highlight the most important evaluation findings and conclusions in these report sections. Refer to the full evaluation report for additional details (see Point 2 below).

What to report from your evaluation, by NSF annual report section:

Accomplishments
  • Number of participants in various activities
  • Data related to participant engagement and satisfaction
  • Data related to the development and dissemination of products (Note: The Products section of the annual report is simply for listing products, not reporting evaluative information about them.)

Impacts
  • Evidence of the nature and magnitude of changes brought about by project activities, such as changes in individual knowledge, skills, attitudes, or behaviors or larger institutional, community, or workforce conditions
  • Evidence of increased participation by members of groups historically underrepresented in STEM
  • Evidence of the project’s contributions to the development of infrastructure that supports STEM education and research, including physical resources, such as labs and instruments; institutional policies; and enhanced access to scientific information

Changes/Problems
  • Evidence of shortcomings or opportunities that point to a need for substantial changes in the project

Do you have a logic model that delineates your project’s activities, outputs, and outcomes? Is your evaluation report organized around the elements in your logic model? If so, a straightforward rule of thumb is to follow that logic model structure and report evidence related to your project activities and outputs in the Accomplishments section and evidence related to your project outcomes in the Impacts section of your NSF annual report.

2. Upload your evaluation report.

Include your project’s most recent evaluation report as a supporting file in the Accomplishments or Impact section of Research.gov. If the report is longer than about 25 pages, make sure it includes a 1-3 page executive summary that highlights key results. Your NSF program officer is very interested in your evaluation results, but probably doesn’t have time to carefully read lengthy reports from all the projects he or she oversees.

Blog: Evaluation Management Skill Set

Posted on April 12, 2017 in Blog

CEO, SPEC Associates

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We evaluators all know that managing an evaluation is quite different from managing a scientific research project. Sure, we need to exercise due diligence in completing the basic inquiry tasks: deciding on study questions/hypotheses; figuring out the strongest design, sampling plan, data collection methods, and analysis strategies; and interpreting/reporting results. But evaluation’s purposes extend well beyond proving or disproving a research hypothesis. Evaluators must also focus on how the evaluation will lead to enlightenment and what role it plays in supporting decision making. Evaluations can leave in place important processes that extend beyond the study itself, like data collection systems and a changed organizational culture that places greater emphasis on data-informed decision making. Evaluations also exist within local and organizational political contexts, which are of less importance to academic and scientific research.

Very little has been written in the evaluation literature about evaluation management. Compton and Baizerman are the most prolific authors on the topic, having edited two issues of New Directions for Evaluation devoted to it. They approach evaluation management from a theoretical perspective, discussing issues like the basic competencies of evaluation managers within different organizational contexts (2009) and the role of evaluation managers in advice giving (2012).

I would like to describe good evaluation management in terms of the actual tasks that an evaluation manager must excel in—what evaluation managers must actually be able to do. For this, I looked to the field of project management. There is a large body of literature about project management, and whole organizations, like the Project Management Institute, are dedicated to the topic. Overlaying evaluation management onto the core skills of a project manager, here is the skill set I see as needed to effectively manage an evaluation:

Technical Skills:

  • Writing an evaluation plan (including but not limited to descriptions of basic inquiry tasks)
  • Creating evaluation timelines
  • Writing contracts between the evaluation manager and various members of the evaluation team (if they are subcontractors), and with the client organization
  • Completing the application for human subjects institutional review board (HSIRB) approval, if needed

Financial Skills:

  • Creating evaluation budgets, including accurately estimating hours each person will need to devote to each task
  • Generating or justifying billing rates of each member of the evaluation team
  • Tracking expenditures to assure that the evaluation is completed within the agreed-upon budget

Interpersonal Skills:

  • Preparing a communications plan outlining who needs to be apprised of what information or involved in which decisions, how often and by what method
  • Using appropriate verbal and nonverbal communication skills to assure that the evaluation not only gets done, but good relationships are maintained throughout
  • Assuming leadership in guiding the evaluation to its completion
  • Resolving the enormous number of conflicts that can arise both within the evaluation team and between the evaluators and the stakeholders

I think that this framing can provide practical guidance for what new evaluators need to know to effectively manage an evaluation and guidance for how veteran evaluators can organize their knowledge for practical sharing. I’d be interested in comments as to the comprehensiveness and appropriateness of this list…am I missing something?

Blog: Gauging Workplace Readiness Among Cyberforensics Program Graduates

Posted on March 29, 2017 in Blog

Principal Consultant, Preferred Program Evaluations

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this blog, I share my experience leading a multi-year external evaluation that provided useful insights about how to best strengthen the work readiness components of an ATE project.

The Advanced Cyberforensics Education Consortium (ACE) is a National Science Foundation-funded Advanced Technological Education center whose goal is to design and deliver an industry-driven curriculum that produces qualified and adaptive graduates equipped to work in the field of cyberforensics and secure our nation’s electronic infrastructure. The initiative is being led by Daytona State College of Florida and three other “state lead” partner institutions in Georgia, South Carolina, and North Carolina. The targeted geographic audience of ACE is community and state colleges in the southeastern region of the United States.

The number of cyberforensics and network security program offerings among ACE’s four state lead institutions increased nearly fivefold between the initiative’s first and fourth year.  One of ACE’s objectives is to align the academic program core with employers’ needs and ensure the curriculum remains current with emerging trends, applications, and cyberforensics platforms.  In an effort to determine the extent to which this was occurring across partner institutions, I, ACE’s external evaluator, sought feedback directly from the project’s industry partners.

A Dialogue with Industry Representatives

Based on a series of stakeholder interviews conducted with industry partners, I learned that program graduates were viewed favorably for their content knowledge and professionalism. The interviewees noted that the graduates they hired added value to their organizations and that they would consider hiring additional graduates from the same academic programs. In contrast, I also received feedback via these interviews that students were falling short on the fundamental soft skills employers desired.

An electronic survey of industry leaders affiliated with ACE state lead institutions was designed to gauge their experience working with graduates of the respective cyberforensics programs and to solicit suggestions for enhancing the programs’ ability to produce graduates who have the requisite skills to succeed in the workplace. The first iteration of the survey read too much like a performance review. To address this limitation, the line of questioning was modified to ask more specifically about the graduates’ knowledge, skills, and abilities related to employability in the field of cyberforensics.

ACE’s P.I. and I wanted to discover how the programs could be tailored to ensure a smoother transition from higher education to industry and how to best acclimate graduates to the workplace.  Additionally, we sought to determine the ways in which the coursework is accountable and to what extent the graduates’ skillset is transferable.

What We Learned from Industry Partners

On the whole, new hires were academically prepared to complete assigned tasks, possessed intellectual curiosity, and displayed leadership qualities. A few recommendations were specific to collaboration between the institution and the business community. One suggestion was to invite some of the college’s key faculty and staff to the businesses to learn more about day-to-day operations and how they could be integrated with classroom instruction. Another industry representative encouraged institutions to engage more readily with the IT business community to generate student internships and co-ops. Survey respondents also suggested promoting professional membership in IT organizations to give graduates a well-rounded point of view as business technologists.

ACE’s P.I. and I came to understand that recent graduates – regardless of age – have room for improvement when it comes to communicating and following complex directions with little oversight.  Employers were of the opinion that graduates could have benefited from more emphasis on attention to detail, critical thinking, and best practices.  Another recommendation centered on the inclusion of a “systems level” class or “big picture integrator” that would allow students to explore how all of the technology pieces fit together cohesively.  Lastly, to remain responsive to industry trends, the partners requested additional hands-on coursework related to telephony and cloud-based security.

Blog: Evolution of Evaluation as ATE Grows Up

Posted on March 15, 2017 in Blog

Independent Consultant

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I attended a packed workshop by EvaluATE called “A Practical Approach to Outcome Evaluation” at the 2016 NSF ATE Principal Investigators Conference. Two lessons from the workshop reminded me that the most significant part of the evaluation process is the demystification of the process itself:

  • “Communicate early and often with human data sources about the importance of their cooperation.”
  • “Ensure everyone understands their responsibilities related to data collection.”

Stepping back, it made me reflect upon the evolution of evaluation in the ATE community. When I first started out in the ATE world in 1995, I was on the staff of one of the first ATE centers ever funded. Back then, being “evaluated” was perceived as quite a different experience, something akin to taking your first driver’s test or defending a dissertation—a meeting of the tester and the tested.

As the ATE community has matured, so has our approach to both evaluation and the integral communication component that goes with it. When we were a fledgling center, the meetings with our evaluator could have been a chance to take advantage of the evaluation team’s many years of experience with what works and what doesn’t. Yet at the start, we didn’t realize that it was a two-way street where both parties learned from each other. Twenty years ago, evaluator-center/project relationships were neither designed nor explained in that fashion.

Today, my colleague, Dr. Sandra Mikolaski, and I are co-evaluators for NSF ATE clients who range from a small new-to-ATE grant (there weren’t any of those back in the day!) to a large center grant that provides resources to a number of other centers and projects and even has its own internal evaluation team. The experience of working with our new-to-ATE client was perhaps what forced us to be highly thoughtful about how we hope both parties view their respective roles and input. Because the “fish don’t talk about the water” (i.e., project teams are often too close to their own work to toot their own horn), evaluators can provide not only perspective and advice, but also connections to related work and to other project and center principal investigators. This perspective can have a tremendous impact on how activities are carried out and on the goals and objectives of a project.

We use EvaluATE webinars like “User-Friendly Evaluation Reports” and “Small-Scale Evaluation” as references and resources not only for ourselves but also for our clients. These webinars help them understand that an evaluation is not meant to assess and critique, but to inform, amplify, modify, and benefit.

We have learned from being on the other side of the fence that an ongoing dialog, an ethnographic approach (on-the-ground research, participant observation, a holistic perspective), and a formative, input-based partnership with our clients make for a more fruitful process for everyone.

Blog: Designing a Purposeful Mixed Methods Evaluation

Posted on March 1, 2017 in Blog

Doctoral Associate, Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

A mixed methods evaluation involves collecting, analyzing, and integrating data from both quantitative and qualitative sources. Sometimes I find that while I plan evaluations with mixed methods, I do not think purposefully about how or why I am choosing and ordering these methods. Intentionally planning a mixed methods design can help strengthen evaluation practices and the evaluative conclusions reached.

Here are three common mixed methods designs, each with its own purpose. Use these designs when you need to (1) see the whole picture, (2) dive deeper into your data, or (3) know what questions to ask.

1. When You Need to See the Whole Picture

First, the convergent parallel design allows evaluators to view the same aspect of a project from multiple perspectives, creating a more complete understanding. In this design, quantitative and qualitative data are collected simultaneously and then brought together in the analysis or interpretation stage.

For example, in an evaluation of a project whose goal is to attract underrepresented minorities into STEM careers, a convergent parallel design might include surveys of students asking Likert questions about their future career plans, as well as focus groups to ask questions about their career motivations and aspirations. These data collection activities would occur at the same time. The two sets of data would then come together to inform a final conclusion.

2. When You Need to Dive Deeper into Data

The explanatory sequential design uses qualitative data to further explore quantitative results. Quantitative data is collected and analyzed first. These results are then used to shape instruments and questions for the qualitative phase. Qualitative data is then collected and analyzed in a second phase.

For example, instead of conducting both a survey and focus groups at the same time, the survey would be conducted and its results analyzed before the focus group protocol is created. The focus group questions can then be designed to enrich understanding of the quantitative results. For instance, while the quantitative data might tell evaluators how many Hispanic students are interested in pursuing engineering, the qualitative data could follow up on the motivations behind these responses.

3. When You Need to Know What to Ask

The exploratory sequential design allows an evaluator to investigate a situation more closely before building a measurement tool, giving guidance to what questions to ask, what variables to track, or what outcomes to measure. It begins with qualitative data collection and analysis to investigate unknown aspects of a project. These results are then used to inform quantitative data collection.

If an exploratory sequential design was used to evaluate our hypothetical project, focus groups would first be conducted to explore themes in students’ thinking about STEM careers. After analysis of this data, conclusions would be used to construct a quantitative instrument to measure the prevalence of these discovered themes in the larger student body. The focus group data could also be used to create more meaningful and direct survey questions or response sets.

Intentionally choosing a design that matches the purpose of your evaluation will help strengthen evaluative conclusions. Studying different designs can also generate ideas of different ways to approach different evaluations.

For further information on these designs and more about mixed methods in evaluation, check out these resources:

Creswell, J. W. (2013). What is Mixed Methods Research? (video)

Frechtling, J., and Sharp, L. (Eds.). (1997). User-Friendly Handbook for Mixed Method Evaluations. National Science Foundation.

Watkins, D., & Gioia, D. (2015). Mixed methods research. Pocket guides to social work research methods series. New York, NY: Oxford University Press.

Blog: Sustaining Career Pathways System Development Efforts

Posted on February 15, 2017 in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Debbie Mills, Director, National Career Pathways Network
Steven Klein, Director, RTI International

Career pathways are complex systems that leverage education, workforce development, and social service supports to help people obtain the skills they need to find employment and advance in their careers. Coordinating people, services, and resources across multiple state agencies and training providers can be a complicated, confusing, and at times, frustrating process. Changes to longstanding organizational norms can feel threatening, which may lead some to question or actively resist proposed reforms.

To ensure lasting success, sustainability and evaluation efforts should be integrated into career pathways system development and implementation from the outset, so that new programmatic connections are robust and positioned for longevity.

To support states and local communities in evaluating and planning for sustainability, RTI International created A Tool for Sustaining Career Pathways Efforts.

This innovative paper draws upon change management theory and lessons learned from a multi-year, federally-funded initiative to support five states in integrating career and technical education into their career pathways. Hyperlinks embedded within the paper allow readers to access and download state resources developed to help evaluate and sustain career pathways efforts. A Career Pathways Sustainability Checklist, included at the end of the report, can be used to assess your state’s or local community’s progress toward building a foundation for the long-term success of its career pathways system development efforts.

The paper identifies three factors that contribute to the sustainability of career pathways systems.

1. Craft a Compelling Vision and Build Support for Change

Lasting system transformation begins with lowering organizational resistance to change. This requires that stakeholders build consensus around a common vision and set of goals for the change process, establish new management structures to facilitate cross-agency communications, obtain endorsements from high-level leaders willing to champion the initiative, and publicize project work through appropriate communication channels.

2. Engage Partners and Stakeholders in the Change Process

Relationships play a critical role in maintaining systems over time. Sustaining change requires actively engaging a broad range of partners in an ongoing dialogue to share information about project work, progress, and outcomes, making course corrections when needed. Employer involvement also is essential to ensure that education and training services are aligned with labor market demand.

3. Adopt New Behaviors, Practices, and Processes

Once initial objectives are achieved, system designers will want to lock down new processes and connections to prevent systems from reverting to their original form. This can be accomplished by formalizing new partner roles and expectations, creating an infrastructure for ensuring ongoing communication, formulating accountability systems to track systemic outcomes, and securing new long-term resources and making more effective use of existing funding.

For additional information contact the authors:

Steve Klein; sklein@rti.org
Debbie Mills; fdmills1@comcast.net