Benchmarking is a process for comparing your organization’s activities and achievements with those of other organizations. In the business world, benchmarking emphasizes measuring one’s performance against organizations “known to be leaders in one or more aspects of their operations” (asq.org). In education contexts, benchmarking typically means comparing an institution’s performance with that of its peer institutions. This may be done using data from the National Community College Benchmark Project (nccbp.org/benchmarks) and the National Survey of Student Engagement (nsse.iub.edu),1 which has data specific to community colleges as well as four-year institutions. In short, benchmarking can be used to assess organizational performance against what is typical or exceptional, depending on your needs.
The ATE survey, conducted annually since 2000, provides aggregate information about ATE-funded projects and centers. The survey data may be used for comparing your individual project or center against the program as a whole. Such a comparison could be used to make a case for addressing a continuing need within the ATE program or to demonstrate your grant’s performance in relation to the program overall. For example, one concern throughout the ATE program and NSF is the participation of women and underrepresented minorities. Based on the 2014 survey of ATE grantees, we know that
- 42 percent of students served by the ATE program are from minority groups that are underrepresented in STEM; in comparison, individuals from these minority groups make up 31 percent of the U.S. population.
- 25 percent of students in ATE are women, compared with 51 percent of the population; only in biotechnology does the percentage of women reflect that of the U.S. population.
These and other demographic data may be used to help your project or center assess how it is doing with regard to broadening participation, in comparison with the ATE program as a whole or within your discipline. Similarly, information about ATE project and center practices may offer insights into grant operations. Results from the 2014 ATE survey indicate that
- 85 percent of projects and centers collaborated with business and industry; of those, 63 percent obtained information about workforce needs from their collaborators.
- 90 percent of ATE grantees have engaged an evaluator; most evaluators (84%) are external to both the institution and the grant.
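The kind of comparison described above can be sketched in a few lines of code. The program-level and national percentages below are the ones quoted in this article (2014 ATE survey); the grant-level figure is hypothetical, and the function name and structure are illustrative, not part of any EvaluATE tool.

```python
# Sketch: compare a grant's demographic percentages against ATE
# program-wide and U.S. population benchmarks quoted in this article.
# Positive gaps mean the grant exceeds the benchmark.

ATE_PROGRAM = {
    "underrepresented_minorities": 42,  # % of ATE students (2014 survey)
    "women": 25,                        # % of ATE students (2014 survey)
}

US_POPULATION = {
    "underrepresented_minorities": 31,  # % of U.S. population
    "women": 51,                        # % of U.S. population
}

def benchmark(grant_pct, group):
    """Return percentage-point gaps between a grant's figure and the
    ATE program and U.S. population benchmarks for the given group."""
    return {
        "vs_ate_program": grant_pct - ATE_PROGRAM[group],
        "vs_us_population": grant_pct - US_POPULATION[group],
    }

# Hypothetical grant whose students are 30 percent women:
gaps = benchmark(30, "women")
print(gaps)  # {'vs_ate_program': 5, 'vs_us_population': -21}
```

Read this way, the hypothetical grant is 5 points above the ATE program average for women but 21 points below the U.S. population share, which is the sort of gap a project might flag when planning broadening-participation activities.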
Check out our ATE survey fact sheets and data snapshots to identify data points that you can use to assess your performance against other ATE projects and centers: evalu-ate.org/annual_survey. If you would like a tailored snapshot report to assist your project or center with benchmarking against the ATE program, email firstname.lastname@example.org. To see a demonstration of how to compare grant-level, program-level, and national-level data, go to evalu-ate.org/resources/video-data1.
Keep in mind that the ATE program should not be used as a proxy for all technician education in the U.S. See Corey Smith’s article on page 3 for a list of other sources of secondary data that may be of use for planning, evaluation, and benchmarking.
1Both entities restrict data access to institutional members.