Blog: Reporting Anticipated, Questionable, and Unintended Project Outcomes

Posted on August 16, 2017 in Blog

Education Administrator, Independent

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Project evaluators are aware that evaluation aims to support learning and improvement. Through a series of planned interactions, event observations, and document reviews, the evaluator is charged with reporting to the project leadership team and ultimately the project’s funding agency, informing audiences of the project’s merit. This is not to suggest that reporting should only aim to identify positive impacts and outcomes of the project. Equally, there is substantive value in informing audiences of unintended and unattained project outcomes.

Evaluation reporting should discuss all aspects of the project’s outcomes, whether anticipated, questionable, or unintended. When examining project outcomes, the evaluator analyzes the information obtained and guides project leadership through reflective thinking exercises to define the project’s significance and summarize why its outcomes matter.

Let’s be clear: outcomes are not to be regarded as something negative. In fact, in the projects I have evaluated over the years, outcomes have frequently served as an introspective platform informing future curriculum decisions and directions within the institution receiving the funding. For example, the outcomes of one STEM project focused on renewable energy technicians provided the institution with information that prompted the development of subsequent proposals and projects targeting engineering pathways.

Discussion and reporting of project outcomes also encapsulate lessons learned and afford the evaluator the opportunity to ask questions such as:

  • Did the project increase the presence of the target group in identified STEM programs?
  • What initiatives will be sustained after funding ends to maintain an increased presence of the target group in STEM programs?
  • Did project activities contribute to the retention/completion rates of the target group in identified STEM programs?
  • Which activities seemed to have the greatest/least impact on retention/completion rates?
  • On reflection, are there activities that could have more significantly contributed to retention/completion rates that were not implemented as part of the project?
  • To what extent did the project supply regional industries with a more diverse STEM workforce?
  • What effect will this have on regional industries after project funding ends?
  • Were partners identified in the proposal realistic contributors to the funded project? Did they ensure a successful implementation enabling the attainment of anticipated outcomes?
  • What was learned about the characteristics of “good” and “bad” partners?
  • What are characteristics to look for and avoid to maximize productivity with future work?

Factors influencing outcomes include, but are not limited to:

  • Institutional changes, e.g., leadership;
  • Partner constraints or changes; and
  • Project/budgetary limitations.

It is not unusual for a proposed project to be somewhat grandiose in identifying intended outcomes. Yet when implementation gets underway, intended activities may be compromised by external challenges. For example, when equipment is needed to support various aspects of a project, procurement and production channels may delay equipment acquisition, adversely affecting project leadership’s ability to launch planned components of the project.

As a tip, it is worthwhile for those seeking funding to pose the outcome questions at the front-end of the project – when the proposal is being developed. Doing this will assist them in conceptualizing the intellectual merit and impact of the proposed project.

Resources and Links:

Developing an Effective Evaluation Report: Setting the Course for Effective Program Evaluation. Atlanta, Georgia: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, Division of Nutrition, Physical Activity and Obesity, 2013.

Blog: Integrating Perspectives for a Quality Evaluation Design

Posted on August 2, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
John Dorris

Director of Evaluation and Assessment, NC State Industry Expansion Solutions

Dominick Stephenson

Assistant Director of Research Development and Evaluation, NC State Industry Expansion Solutions

Designing a rigorous and informative evaluation depends on communication with program staff to understand planned activities and how those activities relate to the program sponsor’s objectives and the evaluation questions that reflect those objectives (see white paper related to communication). At NC State Industry Expansion Solutions, we have worked long enough on evaluation projects to know that such communication is not always easy because program staff and the program sponsor often look at the program from two different perspectives: The program staff focus on work plan activities (WPAs), while the program sponsor may be more focused on the evaluation questions (EQs). So, to help facilitate communication at the beginning of the evaluation project and assist in the design and implementation, we developed a simple matrix technique to link the WPAs and the EQs (see below).

[Figure: matrix linking work plan activities (WPAs) to evaluation questions (EQs)]

For each of the WPAs, we link one or more EQs and indicate what types of data collection events will take place during the evaluation. During project planning and management, the crosswalk of WPAs and EQs will be used to plan out qualitative and quantitative data collection events.
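To make the crosswalk idea concrete, here is a minimal sketch of how such a matrix could be represented as a simple data structure. The WPA and EQ labels and the data collection events below are hypothetical illustrations, not taken from an actual project.

    # Hypothetical work plan activities (WPAs) mapped to the evaluation
    # questions (EQs) they inform and the data collection events planned for each.
    crosswalk = {
        "WPA 1: Recruit and enroll participants": {
            "EQs": ["EQ 1: Was the program implemented as planned?"],
            "data_collection": ["enrollment records review", "staff interviews"],
        },
        "WPA 2: Deliver technician training modules": {
            "EQs": [
                "EQ 1: Was the program implemented as planned?",
                "EQ 2: Did participants gain the intended knowledge and skills?",
            ],
            "data_collection": ["pre/post assessments", "classroom observations"],
        },
    }

    # During planning, the matrix can be inverted to list every data collection
    # event that will speak to a given evaluation question.
    events_by_eq = {}
    for wpa, links in crosswalk.items():
        for eq in links["EQs"]:
            events_by_eq.setdefault(eq, set()).update(links["data_collection"])

    for eq, events in events_by_eq.items():
        print(eq, "->", ", ".join(sorted(events)))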

[Figure: crosswalk of WPAs, EQs, and planned data collection events]

The above framework may be most helpful for the formative assessment (process questions and activities). However, it can also enrich the participant outcomes analysis in the summative evaluation in the following ways:

  • Understanding how the program has been implemented will help determine fidelity to the program as planned, which in turn helps determine the degree to which participant outcomes can be attributed to the program design.
  • Details on program implementation gathered during the formative assessment, when combined with the evaluation of participant outcomes, can suggest hypotheses regarding factors that would lead to program success (positive participant outcomes) if the program is continued or replicated.
  • Details regarding the data collection process gathered during the formative assessment will help assess the quality and limitations of the participant outcome data, and the reliability of any conclusions based on those data.

So, for us this matrix approach is a quality-check on our evaluation design that also helps during implementation. Maybe you will find it helpful, too.

Blog: Evaluation’s Role in Helping Clients Avoid GroupThink

Posted on July 10, 2017 in Blog

Senior Evaluator, SmartStart Evaluation & Research

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In December of 2016, I presented a poster on a STEM-C education project at the Restore America’s Estuaries National Summit, co-hosted by The Coastal Society. Having a social science background, I assumed I’d be “out of my depth” amid restoration science topics. However, a documentary on estuarine restoration projects along New Jersey’s Hidden Coast inspired me with insights on the importance of evaluation in helping projects achieve effective outcomes. The film highlights the vital importance of horseshoe crabs as a keystone species beset by myriad threats: Their sustainability as a renewable resource was overestimated and their ecological importance undervalued until serious repercussions became impossible to ignore. Teams of biologists, ecologists, military veterans, communication specialists, and concerned local residents came together to help restore their habitat and raise awareness to help preserve this vital species.

This documentary was not the only project presented at the conference in which diverse teams of scientists, volunteers, educators, and others came together to work toward a shared goal. I began to reflect on how similar the composition of these groups, and their need for successful collaboration, was to that of the teams on many projects I evaluate. Time and again, presenters revealed that well-intended interdisciplinary team members often struggled at first to communicate effectively because of differing expectations, priorities, and perspectives. Often they described how these challenges had been overcome, most frequently through extensive communication and an open exchange of ideas. However, these were only the successful projects, promoting their outcomes as inspiration and guidance for others. How often might a lack of open communication lead projects down a different path? When does this occur? And how can an evaluator help project leaders foresee and avoid potential pitfalls?

Often, the route to undesired and unsuccessful outcomes lies in a lack of effective communication, a common symptom of GroupThink. Imagine the leadership team on any project you evaluate:

  • Are they a highly cohesive group?
  • Do they need to make important decisions, often under deadlines or other pressure?
  • Do members prefer consensus to conflict?

These are ideal conditions for GroupThink, in which team members disregard information that does not fit with their shared beliefs and dissenting ideas or opinions are unwelcome. Partners’ desire for harmony can lead them to ignore early warning signs of threats to achieving their goals and to make poor decisions.

How do we, as evaluators, help them avoid GroupThink?

  • Examine perceived sustainability objectively: Horseshoe crabs are an ancient species, once so plentiful they covered Atlantic beaches during spawning, each laying 100,000 or more eggs. Because the species was perceived as sustainable, its usefulness as bait and fertilizer led to overharvesting. Similarly, project leaders may have misconceptions about resources or little knowledge of other factors influencing their capacity to maintain project activities. By using validated measures, such as Washington University’s Program Sustainability Assessment Tool (PSAT), evaluators can raise awareness among project leaders of the factors contributing to sustainability and facilitate planning sessions to identify adaptation strategies and increase the chances of success.
  • Investigate unintended consequences of a project’s activities: Horseshoe crabs’ copper-based blood is crucial to the pharmaceutical industry. However, they cannot be successfully raised in captivity. Instead, they are captured, drained of about 30 percent of their blood, and returned to the ocean. While survival rates are 70 percent or more, researchers are becoming concerned that the trauma may affect breeding and other behaviors. Evaluators can help project leaders delve into the cause-and-effect relationships underlying problems by employing techniques such as the Five Whys to identify root causes and by developing logic models to clarify relationships between resources, activities, outputs, and outcomes.
  • Anticipate unintended chains of events: Horseshoe crab eggs are the primary source of protein for migrating birds. The declining horseshoe crab population has put the survival of at least three bird species at risk. As evaluators, we have many options (e.g., key informant interviews, risk assessments, negative program theory) for identifying aspects of program activities with potentially negative impacts and making recommendations to mitigate the harm.

A Horseshoe Crab-in-a-bottle sits on my desk to remind me not to be reticent about offering constructive criticism in order to help project leaders avoid GroupThink.

Blog: Evaluator, Researcher, Both?

Posted on June 21, 2017 in Blog

Professor, College of William & Mary

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Having served as both a project evaluator and a project researcher, I know how critical it is to have conversations about roles at the outset of funded projects. Early and open conversations can help avoid confusion, prevent missed opportunities to collect critical data, and highlight where differences exist for each project team role. Because the strict differences between evaluator and researcher have blurred over time, project teams, evaluators, and researchers need to create new definitions for project roles, understand the scope of responsibility for each role, and build data systems that allow information to be shared across roles.

Evaluation serves a central role in funded research projects. The lines between the role of the evaluator and that of the researcher can blur, however, because many researchers also conduct evaluations. Scriven (2003/2004) saw the role of evaluation as a means to determine “the merit, worth, or value of things” (para. 1), whereas social science research is “restricted to empirical (rather than evaluative) research, and bases its conclusion only on factual results—that is, observed, measured, or calculated data” (para. 2). Consider, too, how Powell (2006) posited, “Evaluation research can be defined as a type of study that uses standard social research methods for evaluative purposes” (p. 102). It is easy to see how confusion arises.

Taking a step back can shed light on the differences in these roles and the ways they are now being redefined. The researcher brings a different project perspective, as a goal of research is the production of knowledge, whereas the role of the external evaluator is to provide an “independent” assessment of the project and its outcomes. Typically, an evaluator is seen as a judge of a project’s merits, which assumes that a “right” outcome exists. Yet inherent in the role of evaluation are the values held by the evaluator, the project team, and the stakeholders, as context influences the process and who decides where to focus attention, why, and how feedback is used (Skolits, Morrow, & Burr, 2009). Knowing how the project team intends to use evaluation results to help improve project outcomes requires a shared understanding of the evaluator’s role (Langfeldt & Kyvik, 2011).

Evaluators seek to understand what information is important to collect and review and how best to use the findings to relate outcomes to stakeholders (Levin-Rozalis, 2003). Researchers, by contrast, dive deep into a particular issue or topic with the goal of producing new ways of understanding it. In a perfect world, the roles of evaluator and researcher are distinct and separate. But given requirements for funded projects to produce outcomes that inform the field, new knowledge is also discovered by evaluators. The resulting swirl of roles means evaluators publish project results that inform the field, researchers leverage their evaluator roles to publish scholarly work, and both borrow strategies from each other to conduct their work.

The blurring of roles requires project leaders to provide clarity about evaluator and researcher team functions. The following questions can help in this process:

  • How will the evaluator and researcher share data?
  • What are the expectations for publication from the project?
  • What kinds of formative evaluation might occur that ultimately change the project trajectory? How do these changes influence the research portion of the project?
  • How will the project team develop a shared understanding of terms, roles, scope of work, and authority?

Knowing how the evaluator and researcher will work together provides an opportunity to leverage expertise in ways that move beyond the simple additive effect of the two roles. Opportunities to share information are only possible when roles are coordinated, which requires advance planning. It is important to move beyond siloed roles and toward more collaborative models of evaluation and research within projects. Collaboration requires more time and attention to sharing information and defining roles, but the time spent coordinating these joint efforts is worth it, given the contributions to both the project and the field.


References

Levin-Rozalis, M. (2003). Evaluation and research: Differences and similarities. The Canadian Journal of Program Evaluation, 18(2), 1-31.

Powell, R. R. (2006).  Evaluation research:  An overview.  Library Trends, 55(1), 102-120.

Scriven, M. (2003/2004).  Michael Scriven on the differences between evaluation and social science research.  The Evaluation Exchange, 9(4).

Blog: Logic Models for Curriculum Evaluation

Posted on June 7, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Rachel Tripathy, Research Associate, WestEd
Linlin Li, Senior Research Associate, WestEd

At the STEM Program at WestEd, we are in the third year of an evaluation of an innovative, hands-on STEM curriculum. Learning by Making is a two-year high school STEM course that integrates computer programming and engineering design practices with topics in earth/environmental science and biology. Experts in the areas of physics, biology, environmental science, and computer engineering at Sonoma State University (SSU) developed the curriculum by integrating computer software with custom-designed experiment set-ups and electronics to create inquiry-based lessons. Throughout this project-based course, students apply mathematics, computational thinking, and the Next Generation Science Standards (NGSS) Scientific and Engineering Design Practices to ask questions about the world around them, and seek the answers. Learning by Making is currently being implemented in rural California schools, with a specific effort being made to enroll girls and students from minority backgrounds, who are currently underrepresented in STEM fields. You can listen to students and teachers discussing the Learning by Making curriculum here.

Using a Logic Model to Drive Evaluation Design

We derived our evaluation design from the project’s logic model. A logic model is a structured description of how a specific program achieves an intended learning outcome. The purpose of the logic model is to precisely describe the mechanisms behind the program’s effects. Our approach to the Learning by Making logic model is a variant on the five-column logic format that describes the inputs, activities, outputs, outcomes, and impacts of a program (W.K. Kellogg Foundation, 2014).

Learning by Making Logic Model


Logic models are read as a series of conditionals. If the inputs exist, then the activities can occur. If the activities do occur, then the outputs should occur, and so on. Our evaluation of the Learning by Making curriculum centers on the connections indicated by the orange arrows linking outputs to outcomes in the logic model above. These connections break down into two primary areas for evaluation: 1) teacher professional development, and 2) classroom implementation of Learning by Making. The questions corresponding to the orange arrows can be summarized as:

  • Are the professional development (PD) opportunities and resources for the teachers increasing teacher competence in delivering a computational thinking-based STEM curriculum? Does Learning by Making PD increase teachers’ use of computational thinking and project-based instruction in the classroom?
  • Does the classroom implementation of Learning by Making increase teachers’ use of computational thinking and project-based instruction in the classroom? Does classroom implementation promote computational thinking and project-based learning? Do students show an increased interest in STEM subjects?

Without effective teacher PD or classroom implementation, the logic model “breaks,” making it unlikely that the desired outcomes will be observed. To answer our questions about outcomes related to teacher PD, we used comprehensive teacher surveys, observations, bi-monthly teacher logs, and focus groups. To answer our questions about outcomes related to classroom implementation, we used student surveys and assessments, classroom observations, teacher interviews, and student focus groups. SSU used our findings to revise both the teacher PD resources and the curriculum itself, better positioning these two components to produce the intended outcomes. By deriving our evaluation design from a clear and targeted logic model, we succeeded in providing actionable feedback to SSU aimed at keeping Learning by Making on track to achieve its goals.
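Returning to the idea that a logic model reads as a series of conditionals, the way a model “breaks” can be illustrated in a few lines of code. This is only an illustrative sketch with made-up stage statuses, not part of the WestEd evaluation itself.

    # Reading a logic model as a chain of conditionals: if any stage is not in
    # place, the chain "breaks" there, and later stages cannot reasonably be
    # attributed to the program. (Stage statuses below are hypothetical.)
    stages = ["inputs", "activities", "outputs", "outcomes", "impacts"]
    in_place = {
        "inputs": True,
        "activities": True,
        "outputs": False,   # e.g., teacher PD did not build the intended competence
        "outcomes": False,
        "impacts": False,
    }

    for stage in stages:
        if not in_place[stage]:
            print("Logic model breaks at:", stage)
            break
    else:
        print("Evidence supports the full chain from inputs to impacts.")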

Blog: Evaluating New Technology

Posted on May 23, 2017 in Blog

Professor and Senior Associate Dean, Rochester Institute of Technology

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As a STEM practitioner and evaluator, I have had many opportunities to assess new and existing courses, workshops, and programs. But there are often requests that still challenge me, especially evaluating new technology. The problem lies in clarifying the role of new technology, and focusing the evaluation on the proper questions.

Well, OK, you ask, “What are the roles I need to focus on?” In a nutshell, new technologies rear their heads in two ways:

(1) As content to be learned in the instructional program and,

(2) As a delivery mechanism for the instruction.

These are often at odds with each other, and sometimes overlap in unusual ways. For example, a course on “getting along at work” could be delivered via an iPad. A client could suggest that we should “evaluate the iPads, too.” In this context, an evaluation of the iPad should be limited to its contribution to achieving the program outcomes. Among other questions: Did it function in a way that students enjoyed (or didn’t hate) and in a way that contributed to (or didn’t interfere with) learning? In a self-paced program, the iPad might be the primary vehicle for content delivery. However, using FaceTime or Skype via an iPad only requires the system to be a communication device – it will provide little more than a replacement for other technologies. In both cases, evaluation questions would center on the impact of the iPad on the learning process. Note that this is no more a “critical” question than “Did the students enjoy (or not hate) the snacks provided to them?” Interesting, but only as a supporting process.

Alternatively, a classroom program could be devoted to “learning the iPad.” Here, the iPad has become subject matter to be learned through the process of human classroom interaction, and how much students learn about the iPad is the whole point of the program! Ironically, a student could learn things about the iPad (through pictures, simulations, or watching demonstrations) without actually using an iPad! But remember: the iPad is not only an enabling contributor to a program – it can also be the object of study.

So, the evaluation of new technology means that the evaluator must determine which aspect of new technology is being evaluated: technology as a process for delivering instruction, or as a subject of study. And a specific, somewhat circular case exists as well: Learning about an iPad through training delivered on an iPad. In this case, we would try to generate evaluation questions that allow us to address iPads both as delivery tools and iPads as skills to be learned.

While this may now seem straightforward as you read about it, remember that it is not straightforward to clients who are making an evaluation request. It might help to print this blog (or save a link) to help make clear these different, but sometimes interacting, uses of technology.

Blog: Evaluating Network Growth through Social Network Analysis

Posted on May 11, 2017 in Blog

Doctoral Student, College of Education, University of Nebraska at Omaha

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One of the most impactful learning experiences from the ATE Principal Investigators Conference I attended in October 2016 was discovering the growing use of Business and Industry Leadership Team (BILT) partnerships in developing and implementing new STEM curricula throughout the country.  The need for cross-sector partnerships has become apparent and is reinforced through specific National Science Foundation (NSF) grants.

The need for empirical data about networks and collaborations is increasing within the evaluation realm, and social network surveys are one method of quickly and easily gathering those data. Social network surveys come in a variety of forms. The social network survey I have used is in a roster format: each participant in the program is listed, and each individual completes the survey by selecting the option that best describes their relationship with every other participant. The options range from not knowing the person at one extreme to having formally collaborated with that person at the other. In the past, data from these surveys were analyzed through social network analysis, which required a large amount of programming knowledge.  Thanks to recent technological advancements, there are new social network analysis programs that make analyzing these data more user-friendly for non-programmers. I have worked on an NSF-funded project at the University of Nebraska at Omaha whose goal is to provide professional development and facilitate the growth of a network for middle school teachers so they can create computer science lessons and implement them in their current curriculum (visit the SPARCS website).  One of the methods for evaluating the facilitation of the network is a social network analysis questionnaire. This method has proved very helpful in determining the extent to which the professional relationships of the cohort members have evolved over the course of their year-long experience in the program.

The social network analysis program I have been using is known as NodeXL and is an Excel add-in. It is very user-friendly and can easily be used to generate quantitative data on network development. I was able to take the data gathered from the social network analysis, conduct research, and present my article, “Identification of the Emergent Leaders within a CSE Professional Development Program,” at an international conference in Germany. While the article is not focused on evaluation, it does review the survey instrument itself.  You may access the article through this link (although I think your organization must have access to ACM):  Tracie Evans Reding WiPSCE Article. The article is also posted on my Academia.edu page.
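For evaluators who prefer to work outside Excel, the same roster-style survey data can be analyzed with open-source tools. Below is a minimal sketch using Python and the networkx library; the respondent names and the 0-3 relationship scale are hypothetical, and this is offered as an alternative to NodeXL rather than a description of how NodeXL itself works.

    import networkx as nx

    # Each row: (respondent, colleague, relationship rating from the roster survey,
    # where 0 = does not know the person and 3 = has formally collaborated).
    responses = [
        ("Teacher A", "Teacher B", 3),
        ("Teacher A", "Teacher C", 1),
        ("Teacher B", "Teacher C", 2),
        ("Teacher C", "Teacher A", 0),  # ratings of 0 are omitted from the network
    ]

    G = nx.DiGraph()
    G.add_weighted_edges_from((a, b, w) for a, b, w in responses if w > 0)

    # Simple indicators of network growth an evaluator might compare year to year.
    print("Network density:", nx.density(G))
    print("In-degree centrality:", nx.in_degree_centrality(G))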

Another NSF funding strand emphasizing networks is Inclusion across the Nation of Communities of Learners of Underrepresented Discoverers in Engineering and Science (INCLUDES). The long-term goal of NSF INCLUDES is to “support innovative models, networks, partnerships, technical capabilities and research that will enable the U.S. science and engineering workforce to thrive by ensuring that traditionally underrepresented and underserved groups are represented in percentages comparable to their representation in the U.S. population.” Noted in the synopsis for this funding opportunity is the importance of “efforts to create networked relationships among organizations whose goals include developing talent from all sectors of society to build the STEM workforce.” The increased funding available for cross-sector collaborations makes it imperative that evaluators be able to empirically measure these collaborations. While the notion of “networks” is not a new one, the availability of resources such as NodeXL will make the evaluation of these networks much easier.

 

Full Citation for Article:

Evans Reding, T., Dorn, B., Grandgenett, N., Siy, H., Youn, J., Zhu, Q., Engelmann, C. (2016).  Identification of the Emergent Teacher Leaders within a CSE Professional Development Program.  Proceedings for the 11th Workshop in Primary and Secondary Computing Education.  Munster, Germany:  ACM.

Blog: What Goes Where? Reporting Evaluation Results to NSF

Posted on April 26, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this blog, I provide advice for Advanced Technological Education (ATE) principal investigators (PIs) on how to include information from their project evaluations in their annual reports to the National Science Foundation (NSF).

Annual reports for NSF grants are due within 90 days of the award’s anniversary date. That means if your project’s initial award date was September 1, your annual reports will be due between June and August each year until the final year of the grant (at which point an outcome report is due within 90 days after the award anniversary date).

When you prepare your first annual report for NSF at Research.gov, you may be surprised to see there is no specific request for results from your project’s evaluation or a prompt to upload your evaluation report. That’s because Research.gov is the online reporting system used by all NSF grantees, whether they are researching fish populations in Wisconsin lakes or developing technician education programs.  So what do you do with the evaluation report your external evaluator prepared or all the great information in it?

1. Report evidence from your evaluation in the relevant sections of your annual report.

The Research.gov system for annual reports includes seven sections: Cover, Accomplishments, Products, Participants, Impact, Changes/Problems, and Special Requirements. Findings and conclusions from your evaluation should be reported in the Accomplishments and Impact sections, as described in the table below. Sometimes evaluation findings will point to a need for changes in project implementation or even its goals. In this case, pertinent evidence should be reported in the Changes/Problems section of the annual report. Highlight the most important evaluation findings and conclusions in these report sections. Refer to the full evaluation report for additional details (see Point 2 below).

What to report from your evaluation, by NSF annual report section:
Accomplishments
  • Number of participants in various activities
  • Data related to participant engagement and satisfaction
  • Data related to the development and dissemination of products (Note: The Products section of the annual report is simply for listing products, not reporting evaluative information about them.)
Impacts
  • Evidence of the nature and magnitude of changes brought about by project activities, such as changes in individual knowledge, skills, attitudes, or behaviors or larger institutional, community, or workforce conditions
  • Evidence of increased participation by members of groups historically underrepresented in STEM
  • Evidence of the project’s contributions to the development of infrastructure that supports STEM education and research, including physical resources, such as labs and instruments; institutional policies; and enhanced access to scientific information
Changes/Problems
  • Evidence of shortcomings or opportunities that point to a need for substantial changes in the project

Do you have a logic model that delineates your project’s activities, outputs, and outcomes? Is your evaluation report organized around the elements in your logic model? If so, a straightforward rule of thumb is to follow that logic model structure and report evidence related to your project activities and outputs in the Accomplishments section and evidence related to your project outcomes in the Impacts section of your NSF annual report.
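As an illustration only (the findings below are invented), that rule of thumb amounts to a simple mapping from logic model elements to report sections:

    # Hypothetical mapping from logic model elements to NSF annual report sections.
    SECTION_BY_ELEMENT = {
        "activity": "Accomplishments",
        "output": "Accomplishments",
        "outcome": "Impacts",
    }

    # Invented evaluation findings tagged with the logic model element they address.
    findings = [
        ("activity", "140 students participated in the summer workshops"),
        ("output", "Two new course modules were developed and piloted"),
        ("outcome", "Participants reported increased interest in technician careers"),
    ]

    for element, finding in findings:
        print(SECTION_BY_ELEMENT[element] + ": " + finding)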

2. Upload your evaluation report.

Include your project’s most recent evaluation report as a supporting file in the Accomplishments or Impact section of Research.gov. If the report is longer than about 25 pages, make sure it includes a 1-3 page executive summary that highlights key results. Your NSF program officer is very interested in your evaluation results, but probably doesn’t have time to carefully read lengthy reports from all the projects he or she oversees.

Blog: Evaluation Management Skill Set

Posted on April 12, 2017 in Blog

CEO, SPEC Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We, as evaluators, all know that managing an evaluation is quite different from managing a scientific research project. Sure, we need to exercise due diligence in completing the basic inquiry tasks: deciding study questions/hypotheses; figuring out the strongest design, sampling plan, data collection methods, and analysis strategies; and interpreting/reporting results. But evaluation’s purposes extend well beyond proving or disproving a research hypothesis. Evaluators must also focus on how the evaluation will lead to enlightenment and what role it plays in supporting decision making. Evaluations can leave in place important processes that extend beyond the study itself, like data collection systems and a changed organizational culture that places greater emphasis on data-informed decision making. Evaluations also exist within local and organizational political contexts, which are of less importance to academic and scientific research.

Very little has been written in the evaluation literature about evaluation management. Compton and Baizerman are the most prolific authors on the subject, having edited two issues of New Directions for Evaluation on the topic. They approach evaluation management from a theoretical perspective, discussing issues like the basic competencies of evaluation managers within different organizational contexts (2009) and the role of evaluation managers in advice giving (2012).

I would like to describe good evaluation management in terms of the actual tasks that an evaluation manager must excel in—what evaluation managers must be able to actually do. For this, I looked to the field of project management. There is a large body of literature about project management, and whole organizations, like the Project Management Institute, dedicated to the topic. Overlaying evaluation management onto the core skills of a project manager, here is the skill set I see as needed to effectively manage an evaluation:

Technical Skills:

  • Writing an evaluation plan (including but not limited to descriptions of basic inquiry tasks)
  • Creating evaluation timelines
  • Writing contracts between the evaluation manager and various members of the evaluation team (if they are subcontractors), and with the client organization
  • Completing the application for human subjects institutional review board (HSIRB) approval, if needed

Financial Skills:

  • Creating evaluation budgets, including accurately estimating hours each person will need to devote to each task
  • Generating or justifying billing rates of each member of the evaluation team
  • Tracking expenditures to assure that the evaluation is completed within the agreed-upon budget

Interpersonal Skills:

  • Preparing a communications plan outlining who needs to be apprised of what information or involved in which decisions, how often and by what method
  • Using appropriate verbal and nonverbal communication skills to assure that the evaluation not only gets done, but good relationships are maintained throughout
  • Assuming leadership in guiding the evaluation to its completion
  • Resolving the enormous number of conflicts that can arise both within the evaluation team and between the evaluators and the stakeholders

I think that this framing can provide practical guidance for what new evaluators need to know to effectively manage an evaluation and guidance for how veteran evaluators can organize their knowledge for practical sharing. I’d be interested in comments as to the comprehensiveness and appropriateness of this list…am I missing something?

Blog: Gauging Workplace Readiness Among Cyberforensics Program Graduates

Posted on March 29, 2017 in Blog

Principal Consultant, Preferred Program Evaluations

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this blog, I share my experience leading a multi-year external evaluation that provided useful insights about how to best strengthen the work readiness components of an ATE project.

The Advanced Cyberforensics Education Consortium (ACE) is a National Science Foundation-funded Advanced Technological Education center whose goal is to design and deliver an industry-driven curriculum that produces qualified and adaptive graduates equipped to work in the field of cyberforensics and secure our nation’s electronic infrastructure.  The initiative is being led by Daytona State College of Florida and three other “state lead” partner institutions in Georgia, South Carolina, and North Carolina.  The targeted geographic audience of ACE is community and state colleges in the southeastern region of the United States.

The number of cyberforensics and network security program offerings among ACE’s four state lead institutions increased nearly fivefold between the initiative’s first and fourth year.  One of ACE’s objectives is to align the academic program core with employers’ needs and ensure the curriculum remains current with emerging trends, applications, and cyberforensics platforms.  In an effort to determine the extent to which this was occurring across partner institutions, I, ACE’s external evaluator, sought feedback directly from the project’s industry partners.

A Dialogue with Industry Representatives

Based on a series of stakeholder interviews conducted with industry partners, I learned that program graduates were viewed favorably for their content knowledge and professionalism.  The interviewees noted that the graduates they hired added value to their organizations and that they would consider hiring additional graduates from the same academic programs.  In contrast, I also heard in interviews that graduates were falling short on the desired fundamental soft skills.

An electronic survey for industry leaders affiliated with ACE state lead institutions was designed to gauge their experience working with graduates of the respective cyberforensics programs and to solicit suggestions for enhancing the programs’ ability to produce graduates with the requisite skills to succeed in the workplace.  The first iteration of the survey read too much like a performance review.  To address this limitation, the line of questioning was modified to inquire more specifically about the graduates’ knowledge, skills, and abilities related to employability in the field of cyberforensics.

ACE’s P.I. and I wanted to discover how the programs could be tailored to ensure a smoother transition from higher education to industry and how best to acclimate graduates to the workplace.  Additionally, we sought to determine the ways in which the coursework is accountable to employers’ needs and the extent to which graduates’ skill sets are transferable.

What We Learned from Industry Partners

On the whole, new hires were academically prepared to complete assigned tasks, possessed intellectual curiosity, and displayed leadership qualities.  A few recommendations were specific to collaboration between the institutions and the business community.  One suggestion was to invite some of the college’s key faculty and staff to the businesses to learn more about day-to-day operations and how they could be integrated with classroom instruction.  Another industry representative encouraged institutions to engage more readily with the IT business community to generate student internships and co-ops.  Survey respondents also suggested promoting professional membership in IT organizations to give graduates a well-rounded point of view as business technologists.

ACE’s P.I. and I came to understand that recent graduates – regardless of age – have room for improvement when it comes to communicating and following complex directions with little oversight.  Employers were of the opinion that graduates could have benefited from more emphasis on attention to detail, critical thinking, and best practices.  Another recommendation centered on the inclusion of a “systems level” class or “big picture integrator” that would allow students to explore how all of the technology pieces fit together cohesively.  Lastly, to remain responsive to industry trends, the partners requested additional hands-on coursework related to telephony and cloud-based security.