Newsletter - Winter 2015

Newsletter: Have You Overlooked Data That Might Strengthen Your Project Evaluation Reports or Grant Proposals?

Posted on January 1, 2015 in Newsletter

Many institutions of higher education collect very useful quantitative data as part of their regular operational and reporting processes. Why not put it to good use for your project evaluations or grant proposals? An office of institutional research, which often participates in the reporting process, can serve as a guide for data definitions and can often assist in creating one-time reports on this data and/or provide training to access and use existing reports.

Course Offerings: How many classes are offered in statistics? How frequently are they offered? Getting a sense of course enrollment numbers over time can illustrate need in a grant narrative. If a project involves the creation of new curricular elements, pre- and post-intervention enrollment numbers can serve as an outcome measure in an evaluation.

Student Transcripts: Is there a disproportionate number of veterans taking Spanish? How do they fare? Where do students major and minor? These data can serve as proxies for interest in different majors, identify gateway courses that might need support, uncover course-taking patterns, and/or reveal relationships to GPA or full-/part-time status. Many of these can become outcomes or benchmarks in an evaluation, as well as context for a narrative.

Student Demographic and Admissions Data: Who are our students? How do they shape the institutional narrative?  Examine academic origin (high school, community college); incoming characteristics such as GPA, SAT, or ACT scores; race/ethnicity; veteran status; age; gender; Pell-grant eligibility status; underrepresented minority status; resident/nonresident status; and on-/off-campus housing. Student populations can be broken down into treatment cohorts for an evaluation of groups shown by research to benefit most from the intervention.
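One way to operationalize that last point is to split a student-level extract into cohorts by demographic flags. A minimal sketch, assuming hypothetical field names ("pell_eligible", "first_gen") that your institutional research office would replace with its own definitions:

```python
# Sketch: splitting a student extract into cohorts by demographic fields,
# e.g. to track an intervention's effect on the groups research suggests
# will benefit most. Field names here are hypothetical.
from collections import defaultdict

students = [
    {"id": 1, "pell_eligible": True,  "first_gen": True},
    {"id": 2, "pell_eligible": False, "first_gen": True},
    {"id": 3, "pell_eligible": True,  "first_gen": False},
]

cohorts = defaultdict(list)
for s in students:
    key = ("pell" if s["pell_eligible"] else "non_pell",
           "first_gen" if s["first_gen"] else "cont_gen")
    cohorts[key].append(s["id"])

print(dict(cohorts))
```

The same grouping keys can then be reused when tabulating outcomes, so treatment and comparison cohorts stay consistent across the evaluation.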

Faculty Demographic Information: Who are our faculty?  Examining full-time/part-time status, race/ethnicity, and gender can yield interesting observations. How do faculty demographics match students’ demographics? What is the student/faculty ratio? This information can enhance narrative descriptions of how students are served.

Financial Aid Data: How do we support our students fiscally? Information about cost of attendance vs tuition, net cost vs. “sticker cost,” percentage of students graduating with loans and average loan burden can be important to describe. It can also be a way of dividing students when evaluating outcomes, and can be an outcome measure in itself for grants intended to affect financial aid or financial literacy.

Student Outcomes: What does persistence look like at your institution? What are the one-year retention rates; four-, five-, and six-year graduation rates; and numbers of graduates by CIP (Classification of Instructional Programs) code? These are almost always the standard benchmarks for interventions intended to affect retention and completion.
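Those benchmarks are simple proportions over a defined cohort. A minimal sketch of the arithmetic, using invented flag names ("retained_year2", "grad_within_6") in place of your institution's actual cohort definitions:

```python
# Sketch: computing standard persistence benchmarks from a student-level
# cohort extract. The boolean field names are hypothetical stand-ins for
# your institutional research office's own retention/graduation flags.

def rate(records, flag):
    """Percentage of cohort records with the given boolean flag set."""
    if not records:
        return 0.0
    return 100.0 * sum(1 for r in records if r[flag]) / len(records)

cohort_2008 = [
    {"id": 1, "retained_year2": True,  "grad_within_6": True},
    {"id": 2, "retained_year2": True,  "grad_within_6": False},
    {"id": 3, "retained_year2": False, "grad_within_6": False},
    {"id": 4, "retained_year2": True,  "grad_within_6": True},
]

print(f"One-year retention:  {rate(cohort_2008, 'retained_year2'):.1f}%")
print(f"Six-year graduation: {rate(cohort_2008, 'grad_within_6'):.1f}%")
```

The important (and subtle) part is the cohort definition itself: which students count in the denominator is exactly the kind of question an institutional research office can settle for you.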

To further your case and provide context, comparison data for most of these are available in IPEDS (Integrated Postsecondary Education Data System) and may be tracked by federal surveys like the Beginning Postsecondary Students Longitudinal Study and the National Postsecondary Student Aid Study, all of which are potential sources for external benchmarking. Of course, collecting these types of data can be addictive as you discover new ways to enliven your narrative and empower your evaluation with the help of institutional research. Happy hunting!

To learn more about institutional data from Carolyn and Russ, read their contribution to EvaluATE’s blog at

Newsletter: Secondary Data

Posted on January 1, 2015 in Newsletter

EvaluATE Blog Editor

Secondary data is data that is repurposed from its original use, typically collected by a different entity. This is different from primary data, which is data you collect and analyze for your own needs. Secondary data may include, but is not limited to, data already collected by other departments at your institution, by national agencies, or even by other grants. Secondary data can be useful for planning, benchmarking, and evaluation.

Using secondary data in evaluation could involve using institutional data about student ethnicity and gender to help determine your project’s impact on underrepresented minority graduation rates. National education statistics can be used for benchmarking purposes. A national survey of educational pipelines into industry can help you direct your recruitment planning.

The primary benefit of using secondary data is that it is often cheaper to acquire than primary data in terms of time, labor, and financial expenses, which is especially important if you are involved in a small grant with limited resources. However, secondary data sources may not provide all the information needed for your evaluation—you will still have to do some primary data collection in order to get the full picture of your project’s quality and effectiveness.

One final note: Accessing institutional data may require working closely with offices that are not part of your grant, so you must plan accordingly. It is helpful to connect your evaluator with those offices to facilitate access throughout the evaluation.

Newsletter: Can the ATE Survey Data be Used for Benchmarking?

Posted on January 1, 2015 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

Benchmarking is a process for comparing your organization’s activities and achievements with those of other organizations. In the business world, benchmarking emphasizes measuring one’s performance against organizations “known to be leaders in one or more aspects of their operations.” In education contexts, benchmarking tends to be more about comparing an institution’s performance with that of its peer institutions. This may be done by using data from the National Community College Benchmark Project and the National Survey of Student Engagement,1 which have data specific to community colleges as well as four-year institutions. In short, benchmarking can be used to assess organizational performance against what is typical or exceptional, depending on your needs.

The ATE survey, conducted annually since 2000, provides aggregate information about ATE-funded projects and centers. The survey data may be used for comparing your individual project or center against the program as a whole. Such a comparison could be used to make a case for addressing a continuing need within the ATE program or to demonstrate your grant’s performance in relation to the program overall. For example, one concern throughout the ATE program and NSF is the participation of women and underrepresented minorities. Based on the 2014 survey of ATE grantees, we know that

  • 42 percent of students served by the ATE program are from minority groups that are underrepresented in STEM; in comparison, individuals from these minority groups make up 31 percent of the U.S. population.
  • 25 percent of students in ATE are women, compared with 51 percent of the population; only in biotechnology does the percentage of women reflect that of the U.S. population.
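A project-level comparison against these figures is just a table of reference values and a difference calculation. A minimal sketch, where the project percentages are made up for illustration and the reference values are the 2014 survey figures quoted above:

```python
# Sketch: benchmarking a project's demographic figures against the ATE
# program and national reference values. The "project" numbers below are
# invented; the reference values come from the 2014 ATE survey.

benchmarks = {
    "underrepresented_minority": {"ate_program": 42.0, "us_population": 31.0},
    "women":                     {"ate_program": 25.0, "us_population": 51.0},
}

project = {"underrepresented_minority": 38.0, "women": 30.0}  # hypothetical

for group, pct in project.items():
    ref = benchmarks[group]
    print(f"{group}: project {pct:.0f}% | ATE {ref['ate_program']:.0f}% | "
          f"U.S. {ref['us_population']:.0f}% | "
          f"gap vs. ATE: {pct - ref['ate_program']:+.0f} pts")
```

Even this simple layout makes it easy to see at a glance where a project leads or trails the program as a whole.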

These and other demographic data may be used to help your project or center assess how it’s doing with regard to broadening participation in comparison with the ATE program as a whole or within your discipline. Similarly, information about ATE project and center practices may help you gain insights into grant operations. Results from the 2014 ATE survey indicate that

  • 85 percent of projects and centers collaborated with business and industry; of those, 63 percent obtained information about workforce needs from their collaborators.
  • 90 percent of ATE grantees have engaged an evaluator; most evaluators (84%) are external to both the institution and the grant.

Check out our ATE survey fact sheets and data snapshots to identify data points that you can use to assess your performance against other ATE projects and centers: If you would like a tailored snapshot report to assist your project or center with benchmarking against the ATE program, email To see a demonstration of how to compare grant-level, program-level, and national-level data, go to

Keep in mind that the ATE program should not be used as a proxy for all technician education in the U.S. See Corey Smith’s article on page 3 for a list of other sources of secondary data that may be of use for planning, evaluation, and benchmarking.

1Both entities restrict data access to institutional members.

Newsletter: Secondary Data Resources

Posted on January 1, 2015 in Newsletter

Doctoral Associate, EvaluATE, Western Michigan University

It’s easier than ever to access national-level data that may be useful to ATE projects and centers for planning, benchmarking, or evaluation. A few of these resources are listed below:

The National Center for Education Statistics’ Digest of Education Statistics provides information about U.S. students and education institutions. The IPEDS system is a tool that focuses on postsecondary students and institutions. The NCES website also contains a variety of data related to education from K-12 schools and 2- and 4-year colleges.

American FactFinder provides access to all variables collected by the U.S. Census and, on a more regular basis, the American Community Survey.

The Bureau of Labor Statistics provides access to data on employment, earnings, and industry activity.

The National Student Clearinghouse offers a subscription-based service for tracking students and their enrollment history. Your institution may already subscribe to this service.

College institutional research offices often collect information about students, including demographics, enrollment, course completions, and grades. Building a relationship with this office at your institution could help you access a range of individual student data.

Shameless self-promotion aside, EvaluATE administers an annual survey to all ATE grants, collecting data that could be used for benchmarking against the ATE community or just within your discipline.

Newsletter: How Do You Know if Your Program is Meeting Industry Needs?

Posted on January 1, 2015 in Newsletter

Since the core of the ATE program is preparing a technical workforce, it is critical to match the skills students are developing with the needs of local industry. Most ATE projects and centers work with local industry and have advisory groups. But additional resources are available for evaluating how well a curricular program is meeting industry needs, on both the demand (industry) and supply (education) sides.

Burning Glass’s Labor/Insight™ interactive software can be used to generate real-time demand (jobs) data. The beauty of Burning Glass is that the user can find the job titles for advertised positions. This helps ATE projects find the demand for specific jobs and then match program supply information to get a comparison. The Standard Occupational Classification (SOC) system is used by federal statistical agencies to classify workers into one of 840 occupational categories. The North American Industry Classification System (NAICS) is the standard used by federal statistical agencies to classify U.S. business establishments. Because the SOC and NAICS codes do not always identify the job titles used in job announcements, it is advisable to use a combination of data sources. By using Burning Glass information in conjunction with the federal codes, you can make a more accurate determination of the demand for technicians. Institutional data, statewide community or technical college data, and student follow-up can help determine the supply side.
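Once demand and supply are both keyed to SOC codes, the comparison itself is a straightforward join. A minimal sketch with invented counts (in practice, demand would come from a source like Labor/Insight and supply from institutional or state completions data):

```python
# Sketch: a rough demand-vs-supply comparison by SOC code. All counts are
# invented for illustration; the SOC codes are real but the mapping of
# programs to codes is an assumption your own crosswalk would supply.

demand_by_soc = {           # advertised openings, trailing 12 months
    "17-3023": 140,         # electrical/electronic engineering technicians
    "15-1232": 210,         # computer user support specialists
}
supply_by_soc = {           # program completers mapped to the same SOC
    "17-3023": 55,
    "15-1232": 90,
}

for soc, openings in demand_by_soc.items():
    completers = supply_by_soc.get(soc, 0)
    print(f"SOC {soc}: demand {openings}, supply {completers}, "
          f"gap {openings - completers}")
```

A persistent positive gap for a code your program feeds is exactly the kind of evidence of need that strengthens a grant narrative.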

Environmental scans can focus on labor market needs that warrant a community college response. A recent publication addressing the comparison of demand and supply is the Life Sciences & Biotechnology Middle Skills Workforce in California (October 2014) report, available from