We EvaluATE - Evaluation Management

Blog: Untangling the Story When You’re Part of the Complexity

Posted on April 16, 2019

Evaluator, SageFox Consulting Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

I am wrestling with a wicked evaluation problem: How do I balance evaluation, research, and technical assistance work when they are so interconnected? I will discuss strategies for managing different aspects of work and the implications of evaluating something that you are simultaneously trying to change.

Background

In 2017, the National Science Foundation solicited proposals that called for researchers and practitioners to partner in conducting research that directly informs problems of practice through the Research Practice Partnership (RPP) model. I work on one project funded under this grant: Using a Researcher-Practitioner Partnership Approach to Develop a Shared Evaluation and Research Agenda for Computer Science for All (RPPforCS). RPPforCS aims to learn how projects supported under this funding are conducting research and improving practice. It also brings a community of researchers and evaluators across funded partnerships together for collective capacity building.

The Challenge

The RPPforCS work requires a dynamic approach to evaluation, and it challenges conventional boundaries between research, evaluation, and technical assistance. I am both part of the evaluation team for individual projects and part of a program-wide research project that aims to understand how projects are using an RPP model to meet their computer science and equity goals. Given the novelty of the program and research approach, the RPPforCS team also supports these projects with targeted technical assistance to improve their ability to use an RPP model (ideas that typically come out of what we’re learning across projects).

Examples in Practice

The RPPforCS team examines changes through a review of project proposals and annual reports, yearly interviews with a member of each project, and an annual community survey. Using these data collection mechanisms, we ask about the impact of the technical assistance on the functioning of each project. Being able to rigorously document how the technical assistance aspect of our research project influences their work allows us to track change effected by the RPPforCS team separately from change stemming from the individual project.

We use the technical assistance (e.g., tools, community meetings, webinars) to help projects further their goals and as research and evaluation data collection opportunities to understand partnership dynamics. The technical assistance tools are all shared through Google Suite, allowing us to see how the teams engage with them. Teams are also able to use these tools to improve their partnership practice (e.g., using our Health Assessment Tool to establish shared goals with partners). Structured table discussions at our community meetings allow us to understand more about specific elements of partnership that are demonstrated within a given project. We share all of our findings with the community on a frequent basis to foreground the research effort, while still providing necessary support to individual projects. 

Hot Tips

  • Rigorous documentation: The best way I have found to account for our external impact is rigorous documentation. This may sound like a basic approach to evaluation, but it is the easiest way to track change over time and to distinguish change you have introduced from organic change coming from within the project.
  • Multi-use activities: Turn your technical assistance into a data collection opportunity. It both builds capacity within a project and gives you access to information for your own evaluation and research goals.

Blog: The Business of Evaluation: Liability Insurance

Posted on January 11, 2019

Luka Partners LLC


Bottom line: you may need liability insurance, and you have to pay for it.

The proposal has been funded, you are the named evaluator, you have created a detailed scope of work, and the educational institution has sent you a Professional Services Contract to sign (and read!).

This contract will contain many provisions, one of which is having insurance. I remember the first time I read it: The contractor shall maintain commercial general liability insurance against any claims that might incur in carrying out this agreement. Minimum coverage shall be $1,000,000.

I thought, well, this probably doesn’t pertain to me, but then I read further: Upon request, the contractor is required to provide a Certificate of Insurance. That got my attention.

You might find what happened next interesting. I called the legal office at the community college. My first question was, "Can we just strike that from the contract?" No; they were required by law to include it. Then she explained, "Mike, that sort of liability thing is mostly for contractors coming to do physical work on our campus, in case there was an injury, brick falling on the head of a student, things like that." She lowered her voice. "I can tell you we are never going to ask you to show that certificate to us."

However, sometimes, you will be asked to maintain and provide, on request, professional liability insurance, also called errors and omissions insurance (E&O insurance) or indemnity insurance. This protects your business if you are sued for negligently performing your services, even if you haven’t made a mistake. (OK, I admit, this doesn’t seem likely in our business of evaluation.)

Then the moment of truth came. A decent-sized contract arrived from a major university I shall not name located in Tempe, Arizona, with a mascot that is a devil with a pitchfork. It said if you want a purchase order from us, sign the contract and attach your Certificate of Insurance.

I was between the devil and a hard place. Somewhat naively, I called my local insurance agent (i.e., for home and car). He had actually never heard of professional liability insurance and promised to get back to me. He didn't.

I turned to Google, the fount of all things. (Full disclosure, I am not advocating for a particular company—just telling you what I did.) I explored one company that came up high in the search results. Within about an hour, I was satisfied that it was what I needed, had a quote, and typed in my credit card number. In the next hour, I had my policy online and printed out the one-page Certificate of Insurance with the university’s name as “additional insured.” Done.

I would like to clarify one point. I did not choose general liability insurance, because my operations pose no risk of physical damage to property or people. In the business of evaluation, that is not a risk.

I now have a $2 million professional liability insurance policy that costs $700 per year. As I add clients, if they require it, I can create a one-page certificate naming them as additional insured, at no extra cost.

Liability insurance, that’s one of the costs of doing business.

Blog: How Evaluators Can Use InformalScience.org

Posted on December 13, 2018

Evaluation and Research Manager, Science Museum of Minnesota and Independent Evaluation Consultant


I’m excited to talk to you about the Center for Advancement of Informal Science Education (CAISE) and the support they offer evaluators of informal science education (ISE) experiences. CAISE is a National Science Foundation (NSF) funded resource center for NSF’s Advancing Informal STEM Learning program. Through InformalScience.org, CAISE provides a wide range of resources valuable to the EvaluATE community.

Defining Informal Science Education

ISE is lifelong learning in science, technology, engineering, and math (STEM) that takes place across a multitude of designed settings and experiences outside of the formal classroom. The video below is a great introduction to the field.

Outcomes of ISE experiences have some similarities to those of formal education. However, ISE activities tend to focus less on content knowledge and more on other types of outcomes, such as interest, attitudes, engagement, skills, behavior, or identity. CAISE’s Evaluation and Measurement Task Force investigates the outcome areas of STEM identity, interest, and engagement to provide evaluators and experience designers with guidance on how to define and measure these outcomes. Check out the results of their work on the topic of STEM identity (results for interest and engagement are coming soon).

Resources You Can Use

InformalScience.org has a variety of resources that I think you’ll find useful for your evaluation practice.

  1. In the section “Design Evaluation,” you can learn more about evaluation in the ISE field through professional organizations, journals, and projects researching ISE evaluation. The “Evaluation Tools and Instruments” page in this section lists sites with tools for measuring outcomes of ISE projects, and there is also a section about reporting and dissemination. I provide a walk-through of CAISE’s evaluation pages in this blog post: How to Use InformalScience.org for Evaluation.
  2. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects has been extremely useful for me in introducing ISE evaluation to evaluators new to the field.
  3. In the “News & Views” section are several evaluation-related blogs, including a series on working with an institutional review board and another one on conducting culturally responsive evaluations.
  4. If you are not affiliated with an academic institution, you can access peer-reviewed articles in some of your favorite academic journals by becoming a member of InformalScience.org. Joining is free! Once you're logged in, select "Discover Research" in the menu bar and scroll down to "Access Peer-Reviewed Literature (EBSCO)." Journals of interest include Science Education and Cultural Studies of Science Education. If you are already a member of InformalScience.org, you can immediately begin searching the EBSCO Education Source database.

My favorite part of InformalScience.org is the repository of evaluation reports—1,020 reports and growing—which is the largest collection of reports in the evaluation field. Evaluators can use this rich collection to inform their practice and learn about a wide variety of designs, methods, and measures used in evaluating ISE projects. Even if you don’t evaluate ISE experiences, I encourage you to take a minute to search the reports and see what you can find. And if you conduct ISE evaluations, consider sharing your own reports on InformalScience.org.

Do you have any questions about CAISE or InformalScience.org? Contact Melissa Ballard, communications and community manager, at mballard@informalscience.org.

Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018

Kelly Robertson, Principal Research Associate, The Evaluation Center
Lyssa Wilson Becho, Research Manager, EvaluATE

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Examples of items we like to include on our cheat sheets are shown in Figures 1-3 and include the following:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with a lot of data collection sources. The example shown in Figure 2 identifies who implements the instrument, when the instrument will be implemented, the purpose of the instrument, and the data source. It is helpful to identify who is responsible for data collection activities in the cheat sheet, so nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved. That way, if we need to update the plan, we can be sure we are working on the most recent version.

 

Figure 1. Cheat Sheet Example Page 1.

Figure 2. Cheat Sheet Example Page 2.

Figure 3. Cheat Sheet Example Page 3.

 

Blog: Getting Your New ATE Project’s Evaluation off to a Great Start

Posted on October 17, 2017

Director of Research, The Evaluation Center at Western Michigan University


New ATE project principal investigators (PIs): When you worked with your evaluator to develop an evaluation plan for your project proposal, you were probably focused on the big picture—how to gather credible and meaningful evidence about the quality and impact of your work. To ensure your evaluation achieves its aims, take these steps now to make sure your project provides the human resources, time, and information needed for a successful evaluation:

  1. Schedule regular meetings with your evaluator. Regular meetings help ensure that your project’s evaluation receives adequate attention. These exchanges should be in real time—via phone call, web meetings, or face-to-face—not just email. See EvaluATE’s new Communication Plan Checklist for ATE PIs and Evaluators for a list of other communication issues to discuss with your evaluator at the start of a project.
  2. Work with your evaluator to create a project evaluation calendar. This calendar should span the life of your project and include the following:
  • Due dates for National Science Foundation (NSF) annual reports: You should include your evaluation reports or at least information from the evaluation in these reports. Work backward from their due dates to determine when evaluation reports should be completed. To find out when your annual report is due, go to Research.gov, enter your NSF login information, select “Awards & Reporting,” then “Project Reports.”
  • Advisory committee meeting dates: You may want your evaluator to attend these meetings to learn more about your project and to communicate directly with committee members.
  • Project events: Activities such as workshops and outreach events present valuable opportunities to collect data directly from the individuals involved in the project. Make sure your evaluator is aware of them.
  • Due dates for new proposal submissions: If submitting to NSF again, you will need to include evidence of your current project’s intellectual merit and broader impacts. Working with your evaluator now will ensure you have compelling evidence to support a future submission.
  3. Keep track of what you’re doing and who is involved. Don’t leave these tasks to your evaluator or wait until the last minute. Taking an active—and proactive—role in documenting the project’s work will save you time and result in more accurate information. Your evaluator can then use that information when preparing their reports. Moreover, you will find it immensely useful to have good documentation at your fingertips when preparing your annual NSF report.
  • Maintain a record of project activities and products—such as conference presentations, trainings, outreach events, competitions, publications—as they are completed. Check out EvaluATE’s project vita as an example.
  • Create a participant database (or spreadsheet): Everyone who engages with your project should be listed. Record their contact information, role in the project, and pertinent demographic characteristics (such as whether a student is a first-generation college student, a veteran, or part of a group that has been historically underrepresented in STEM). You will probably find several uses for this database, such as for follow-up with participants for evaluation purposes, for outreach, and as evidence of your project’s broader impacts.
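A participant database like the one described above can start as a plain spreadsheet. As a minimal sketch (the column names and sample records here are my own illustration, not a format prescribed by EvaluATE), the same structure in code might look like:

```python
import csv
import io

# Hypothetical columns for a participant database; adapt to your project's needs.
FIELDS = ["name", "email", "role", "first_generation", "veteran",
          "underrepresented_in_stem"]

participants = [
    {"name": "A. Student", "email": "a@example.edu", "role": "student",
     "first_generation": "yes", "veteran": "no",
     "underrepresented_in_stem": "yes"},
    {"name": "B. Faculty", "email": "b@example.edu", "role": "instructor",
     "first_generation": "", "veteran": "",
     "underrepresented_in_stem": ""},
]

# Write to CSV so the list can double as a follow-up contact sheet
# and as evidence of the project's broader impacts.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(participants)

# Simple filter for evaluation follow-up: all student participants.
students = [p for p in participants if p["role"] == "student"]
```

Even this small structure supports the three uses the bullet mentions: filtering for evaluation follow-up, exporting contact lists for outreach, and tallying demographics for broader-impacts reporting.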

An ounce of prevention is worth a pound of cure: Investing time up front to make sure your evaluation is on solid footing will save headaches down the road.

Blog: Sustaining Private Evaluation Practices: Overcoming Challenges by Collaborating within Our ATE Community of Practice

Posted on September 27, 2017

President, Impact Allies


My name is Ben Reid. I am the founder of Impact Allies, a private evaluation firm. This post focuses on the business, rather than the technical, aspects of evaluation. My purpose is to present a challenge to sustaining a private evaluation practice and best serving clients, and to propose a way to overcome that challenge by collaborating within our community of practice.

Challenge

Evaluators often act as one-person shows. It is important to give a principal investigator (PI) and project team a single point of contact, and for that evaluator of record to have thorough knowledge of the project and its partners. However, the many different jobs required by an evaluation contract cross too many specialties and personality types for one person to serve a client best.

Opportunity

The first opportunity is to become more professionally aware of our strengths and weaknesses. What are your skills? And equally important, where are you skill-deficient (don’t know how to do it) and where are you performance-deficient (have the skill but aren’t suited for it, because of anxiety, frustration, lack of enthusiasm, etc.)?

The second opportunity is to build relationships within our community of practice. Get to know other evaluators, where their unique strengths lie, and whom they use for ancillary services (their book of contractors). (The upcoming NSF ATE PI conference is a great place to do this.)

Example

My Strengths: Any evaluator can satisfactorily perform the basics – EvaluATE certainly has done a tremendous job of educating and training us. In this field, I am unique in my strengths of external communications, opportunity identification and assessment, strategic and creative thinking, and partnership development. Those skills, along with a background in education, marketing and branding, and project management, have helped me contribute broadly, which has proven useful time and again when working with small teams. Knowing clients well and having an entrepreneurial mindset allows me to do what is encouraged in NSF’s 2010 User-Friendly Handbook for Project Evaluation: “Certain evaluation activities can help meet multiple purposes, if used judiciously” (p. 119).

My Weaknesses: However, an area where I could use some outside support is graphic design and data visualization. This work, because it succinctly tells the story and successes of a project, is very important when communicating to multiple stakeholders, in published works, or for promotional purposes. Where I once performed these tasks (with much time and frustration and at a level which isn’t noteworthy), I now contract with an expert—and my clients are thereby better served.

Takeaway

“Focus on the user and all else will follow,” is the number one philosophy of Google, the company that has given us so much and in turn done so well for itself. Let us also focus on our clients, serving their needs by building our businesses where we are skilled and enthusiastic and collaborating (partnering, outsourcing, or referring) within our community of practice where another professional can do a better job for our clients.

Blog: Evaluation’s Role in Helping Clients Avoid GroupThink

Posted on July 10, 2017

Senior Evaluator, SmartStart Evaluation & Research


In December of 2016, I presented a poster on a STEM-C education project at the Restore America’s Estuaries National Summit, co-hosted by The Coastal Society. Having a social science background, I assumed I’d be “out of my depth” amid restoration science topics. However, a documentary on estuarine restoration projects along New Jersey’s Hidden Coast inspired me with insights on the importance of evaluation in helping projects achieve effective outcomes. The film highlights the vital importance of horseshoe crabs as a keystone species beset by myriad threats: Their sustainability as a renewable resource was overestimated and their ecological importance undervalued until serious repercussions became impossible to ignore. Teams of biologists, ecologists, military veterans, communication specialists, and concerned local residents came together to help restore their habitat and raise awareness to help preserve this vital species.

This documentary was not the only project presented at the conference in which diverse teams of scientists, volunteers, educators, and others came together to work toward a shared goal. I began to reflect on how similar the composition of these groups, and their need for successful collaboration, was to that of the teams on many projects I evaluate. Time and again, presenters revealed that well-intended interdisciplinary team members often struggled at first to communicate effectively because of different expectations, priorities, and perspectives. Often presenters described how these challenges had been overcome, most frequently through extensive communication and open exchanges of ideas. But these were only the successful projects, promoting their outcomes as inspiration and guidance for others. How often might a lack of open communication lead projects down a different path? When does this occur? And how can an evaluator help leaders foresee and avoid potential pitfalls?

Often, the route to undesired and unsuccessful outcomes lies in lack of effective communication, which is a common symptom of GroupThink. Imagine the leadership team on any project you evaluate:

  • Are they a highly cohesive group?
  • Do they need to make important decisions, often under deadlines or other pressure?
  • Do members prefer consensus to conflict?

These are ideal conditions for GroupThink, in which team members disregard information that does not fit their shared beliefs, and dissenting ideas or opinions are unwelcome. Partners’ desire for harmony can lead them to ignore early warning signs of threats to their goals and to make poor decisions.

How do we, as evaluators, help them avoid GroupThink?

  • Examine perceived sustainability objectively: Horseshoe crabs are an ancient species, once so plentiful they covered Atlantic beaches during spawning, each laying 100,000 or more eggs. Because the species was perceived as endlessly sustainable, its usefulness as bait and fertilizer led to overharvesting. Similarly, project leaders may have misconceptions about resources or little knowledge of other factors influencing their capacity to maintain activities. By using validated measures, such as Washington University’s Program Sustainability Assessment Tool (PSAT), evaluators can raise project leaders’ awareness of the factors contributing to sustainability and facilitate planning sessions to identify adaptation strategies and increase chances of success.
  • Investigate unintended consequences of a project’s activities: Horseshoe crabs’ copper-based blood is crucial to the pharmaceutical industry. However, they cannot successfully be raised in captivity. Instead, they are captured, drained of about 30 percent of their blood, and returned to the ocean. While survival rates are 70 percent or more, researchers are becoming concerned that the trauma may affect breeding and other behaviors. Evaluators can help project leaders delve into the cause-and-effect relationships underlying problems by employing techniques such as the Five Whys to identify root causes and by developing logic models to clarify relationships between resources, activities, outputs, and outcomes.
  • Anticipate unintended chains of events: Horseshoe crabs’ eggs are the primary source of protein for migrating birds. The declining population of horseshoe crabs has put at least three species of birds’ survival at risk. As evaluators, we have many options (e.g., key informant interviews, risk assessments, negative program theory) to identify aspects of program activities with potentially negative impacts and make recommendations to mitigate the harm.

A horseshoe-crab-in-a-bottle sits on my desk to remind me not to be reticent about offering constructive criticism in order to help project leaders avoid GroupThink.

Blog: Evaluation Management Skill Set

Posted on April 12, 2017

CEO, SPEC Associates


We evaluators all know that managing an evaluation is quite different from managing a scientific research project. Sure, we need to exercise due diligence in completing the basic inquiry tasks: deciding study questions and hypotheses; figuring out the strongest design, sampling plan, data collection methods, and analysis strategies; and interpreting and reporting results. But evaluation’s purposes extend well beyond proving or disproving a research hypothesis. Evaluators must also focus on how the evaluation will lead to enlightenment and what role it plays in supporting decision making. Evaluations can leave in place important processes that extend beyond the study itself, like data collection systems and a changed organizational culture that places greater emphasis on data-informed decision making. Evaluations also exist within local and organizational political contexts, which are of less importance to academic and scientific research.

Very little has been written in the evaluation literature about evaluation management. Compton and Baizerman are the most prolific authors on the subject, having edited two issues of New Directions for Evaluation on the topic. They approach evaluation management from a theoretical perspective, discussing issues like the basic competencies of evaluation managers within different organizational contexts (2009) and the role of evaluation managers in advice giving (2012).

I would like to describe good evaluation management in terms of the actual tasks that an evaluation manager must excel in—what evaluation managers must be able to actually do. For this, I looked to the field of project management. There is a large body of literature about project management, and whole organizations, like the Project Management Institute, dedicated to the topic. Overlaying evaluation management onto the core skills of a project manager, here is the skill set I see as needed to effectively manage an evaluation:

Technical Skills:

  • Writing an evaluation plan (including but not limited to descriptions of basic inquiry tasks)
  • Creating evaluation timelines
  • Writing contracts between the evaluation manager and various members of the evaluation team (if they are subcontractors), and with the client organization
  • Completing the application for human subjects institutional review board (HSIRB) approval, if needed

Financial Skills:

  • Creating evaluation budgets, including accurately estimating hours each person will need to devote to each task
  • Generating or justifying billing rates of each member of the evaluation team
  • Tracking expenditures to assure that the evaluation is completed within the agreed-upon budget
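The first and third financial bullets are largely arithmetic: hours times rates, summed across tasks, then compared against spending. As a minimal sketch (the roles, rates, hours, and invoice amounts below are hypothetical, not drawn from any real evaluation budget), the estimate-then-track cycle might look like:

```python
# Hypothetical evaluation budget: estimated hours per task per team member,
# multiplied by billing rates, then compared against actual expenditures.

RATES = {"lead": 150.0, "associate": 95.0}  # hypothetical hourly billing rates

ESTIMATED_HOURS = {
    "evaluation plan": {"lead": 20, "associate": 10},
    "data collection": {"lead": 15, "associate": 60},
    "reporting":       {"lead": 25, "associate": 30},
}

def task_budget(hours_by_person):
    """Cost of one task: sum of hours x rate for each team member."""
    return sum(RATES[person] * hours for person, hours in hours_by_person.items())

# Total agreed-upon budget, built from per-task estimates.
total_budget = sum(task_budget(h) for h in ESTIMATED_HOURS.values())

# Track expenditures as invoices come in, to stay within budget.
expenditures = [4200.0, 3100.0]
remaining = total_budget - sum(expenditures)
```

Keeping the estimate and the running expenditures in one place makes it easy to spot, task by task, when actual hours start to outrun what was budgeted.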

Interpersonal Skills:

  • Preparing a communications plan outlining who needs to be apprised of what information or involved in which decisions, how often and by what method
  • Using appropriate verbal and nonverbal communication skills to assure that the evaluation not only gets done, but good relationships are maintained throughout
  • Assuming leadership in guiding the evaluation to its completion
  • Resolving the enormous number of conflicts that can arise both within the evaluation team and between the evaluators and the stakeholders

I think that this framing can provide practical guidance for what new evaluators need to know to effectively manage an evaluation, and guidance for how veteran evaluators can organize their knowledge for practical sharing. I’d be interested in comments on the comprehensiveness and appropriateness of this list: am I missing something?

Blog: Gauging Workplace Readiness Among Cyberforensics Program Graduates

Posted on March 29, 2017

Principal Consultant, Preferred Program Evaluations


In this blog, I share my experience leading a multi-year external evaluation that provided useful insights about how to best strengthen the work readiness components of an ATE project.

The Advanced Cyberforensics Education Consortium (ACE) is a National Science Foundation-funded Advanced Technological Education center whose goal is to design and deliver an industry-driven curriculum that produces qualified and adaptive graduates equipped to work in the field of cyberforensics and secure our nation’s electronic infrastructure. The initiative is led by Daytona State College of Florida and three other “state lead” partner institutions in Georgia, South Carolina, and North Carolina. The targeted geographic audience of ACE is community and state colleges in the southeastern United States.

The number of cyberforensics and network security program offerings among ACE’s four state lead institutions increased nearly fivefold between the initiative’s first and fourth year.  One of ACE’s objectives is to align the academic program core with employers’ needs and ensure the curriculum remains current with emerging trends, applications, and cyberforensics platforms.  In an effort to determine the extent to which this was occurring across partner institutions, I, ACE’s external evaluator, sought feedback directly from the project’s industry partners.

A Dialogue with Industry Representatives

Based on a series of stakeholder interviews conducted with industry partners, I learned that program graduates were viewed favorably for their content knowledge and professionalism.  The interviewees noted that the graduates they hired added value to their organizations and that they would consider hiring additional graduates from the same academic programs.  However, interviewees also reported that graduates were falling short on a fundamental set of desired soft skills.

I designed an electronic survey for industry leaders affiliated with ACE state lead institutions to gauge their experience working with graduates of the respective cyberforensics programs and to solicit suggestions for enhancing the programs’ ability to produce graduates with the requisite skills to succeed in the workplace.  The first iteration of the survey read too much like a performance review.  To address this limitation, I revised the line of questioning to ask more specifically about the graduates’ knowledge, skills, and abilities related to employability in the field of cyberforensics.

ACE’s P.I. and I wanted to discover how the programs could be tailored to ensure a smoother transition from higher education to industry and how to best acclimate graduates to the workplace.  Additionally, we sought to determine how well the coursework answered to employers’ needs and to what extent the graduates’ skillset was transferable.

What We Learned from Industry Partners

On the whole, new hires were academically prepared to complete assigned tasks, possessed intellectual curiosity, and displayed leadership qualities.  A few recommendations were specific to collaboration between the institution and the business community.  One suggestion was to invite some of the college’s key faculty and staff to the businesses to learn more about day-to-day operations and how those operations could be integrated with classroom instruction.  Another industry representative encouraged institutions to engage more readily with the IT business community to generate student internships and co-ops.  Survey respondents also suggested promoting professional membership in IT organizations to give graduates a well-rounded point of view as business technologists.

ACE’s P.I. and I came to understand that recent graduates – regardless of age – have room for improvement when it comes to communicating and following complex directions with little oversight.  Employers were of the opinion that graduates could have benefited from more emphasis on attention to detail, critical thinking, and best practices.  Another recommendation centered on the inclusion of a “systems level” class or “big picture integrator” that would allow students to explore how all of the technology pieces fit together cohesively.  Lastly, to remain responsive to industry trends, the partners requested additional hands-on coursework related to telephony and cloud-based security.

Blog: 3 Inconvenient Truths about ATE Evaluation

Posted on October 14, 2016 by  in Blog ()

Director of Research, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Many evaluations fall short of their potential to provide useful, timely, and accurate feedback to projects because project leaders or evaluators (or both) have unrealistic expectations. In this blog, I expose three inconvenient truths about ATE evaluation. Dealing with these truths head-on will help project leaders avoid delays and misunderstandings.

1. Your evaluator does not have all the answers.

Even for highly experienced evaluators, every evaluation is new and has to be tailored to the project’s particular context. Do not expect your evaluator to produce an ideal evaluation plan on Day 1, be able to pull the perfect data collection instrument off his or her shelf, or know just the right strings to pull to get data from your institutional research office. Your evaluator is an expert on evaluation, not your project or your institution.

As an evaluator, when I ask clients for input on an aspect of their evaluation, the last thing I want to hear is “Whatever you think, you’re the expert.” Work with your evaluator to refine your evaluation plan to ensure it fits your project, your environment, and your information needs. Question elements that don’t seem right to you and provide constructive feedback. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects (Chapter 4) has detailed information about how project leaders can bring their expertise to the evaluation process.

2. There is no one right answer to the question, “What does NSF want from evaluation?”

This is the question I get the most as the director of the evaluation support center for the National Science Foundation’s Advanced Technological Education (ATE) program. The truth is, NSF is not prescriptive about what an ATE evaluation should look like, and different program officers have different expectations. So, if you’ve been looking for the final word on what NSF wants from an ATE evaluation, you can end your search because you won’t find it.

However, NSF does request common types of information from all projects via their annual reports and the annual ATE survey. To make sure you are not caught off guard, preview the Research.gov reporting template and the most recent ATE annual survey questions. If you are doing research, get familiar with the Common Guidelines for Education Research and Development.

If you’re still concerned about meeting expectations, talk to your NSF program officer.

3. Project staff need to put in time and effort.

Evaluation matters often get put on a project’s back burner so more urgent issues can be addressed.  (Yes, even an evaluation support center is susceptible to no-time-for-evaluation-itis.)  But if you put off dealing with evaluation matters until you feel like you have time for them, you will miss key opportunities to collect data and use the information to improve your project.

To make sure your project’s evaluation gets the attention it needs:

  • Set a recurring conference call or meeting with your evaluator—at least once a month.
  • Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation.
  • Assign one person on your project team to be the point-person for evaluation.
  • Commit to using your evaluation results in a timely way—if you have a recurring project activity, make sure you gather feedback from those involved and use it to improve the next event.