
Blog: Scavenging Evaluation Data

Posted on January 17, 2017

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

But little Mouse, you are not alone,
In proving foresight may be vain:
The best laid schemes of mice and men
Go often askew,
And leave us nothing but grief and pain,
For promised joy!

From To a Mouse, by Robert Burns (1785), modern English version

Research and evaluation textbooks are filled with elegant designs for studies that will illuminate our understanding of social phenomena and programs. But as any evaluator will tell you, the real world is fraught with all manner of hazards and imperfect conditions that wreak havoc on design, bringing grief and pain, rather than the promised joy of a well-executed evaluation.

Probably the biggest hindrance to executing planned designs is that evaluation is just not the most important thing to most people. (GASP!) They are reluctant to give two minutes for a short survey, let alone an hour for a focus group. Your email imploring them to participate in your data collection effort is one of hundreds of requests for their time and attention that they are bombarded with daily.

So, do all the things the textbooks tell you to do. Take the time to develop a sound evaluation design and do your best to follow it. Establish expectations early with project participants and other stakeholders about the importance of their cooperation. Use known best practices to enhance participation and response rates.

In addition: Be a data scavenger. Here are two ways to get data for an evaluation that do not require hunting down project participants and convincing them to give you information.

1. Document what the project is doing.

I have seen a lot of evaluation reports in which evaluators painstakingly recount a project’s activities as a tedious story rather than a straightforward account. This task typically requires the evaluator to ask many questions of project staff, pore through documents, and track down materials. It is much more efficient for project staff to keep a record of their own activities. For example, see EvaluATE’s resume. It is a no-nonsense record of our funding, activities, dissemination, scholarship, personnel, and contributors. In and of itself, our resume does most of the work of the accountability aspect of our evaluation (i.e., Did we do what we promised?). In addition, the resume can be used to address questions like these:

  • Is the project advancing knowledge, as evidenced by peer-reviewed publications and presentations?
  • Is the project’s productivity adequate in relation to its resources (funding and personnel)?
  • To what extent is the project leveraging the expertise of the ATE community?

2. Track participation.

If your project holds large events, use a sign-in sheet to get attendance numbers. If you hold webinars, you almost certainly have records with information about registrants and attendees. If you hold smaller events, pass around a sign-in sheet asking for basic information like name, institution, email address, and job title (or major if it’s a student group). If the project has developed a course, get enrollment information from the registrar. Most importantly: Don’t put these records in a drawer. Compile them in a spreadsheet and analyze the heck out of them (a short analysis sketch follows the list below). Here are example data points that we glean from EvaluATE’s participation records:

  • Number of attendees
  • Number of attendees from various types of organizations (such as two- and four-year colleges, nonprofits, government agencies, and international organizations)
  • Number and percentage of attendees who return for subsequent events
  • Geographic distribution of attendees
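
If you keep those compiled records in a spreadsheet or CSV file, even a short script can produce counts like the ones above. Below is a minimal sketch in Python using pandas; the file name (participation_records.csv) and the columns (event, email, org_type, state) are hypothetical stand-ins for whatever fields your own sign-in sheets and registration lists capture.

  import pandas as pd

  # Each row is one person's attendance at one event.
  records = pd.read_csv("participation_records.csv")

  # Total attendance across all events.
  total_attendance = len(records)

  # Attendance by organization type (e.g., two-year college, nonprofit).
  by_org_type = records["org_type"].value_counts()

  # Return attendees: people (identified by email) who appear at two or more events.
  events_per_person = records.groupby("email")["event"].nunique()
  returners = int((events_per_person >= 2).sum())
  pct_returners = returners / events_per_person.size * 100

  # Geographic distribution of attendees.
  by_state = records["state"].value_counts()

  print(f"Total attendance: {total_attendance}")
  print(f"Return attendees: {returners} ({pct_returners:.0f}%)")
  print(by_org_type, by_state, sep="\n\n")

The one design choice that matters here is collecting a consistent identifier (in this sketch, email address) at every event; that is what makes it possible to count return attendees across events rather than just totals per event.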

Project documentation and participation data will be most helpful for process evaluation and accountability. You will still need cooperation from participants for outcome evaluation—and you should engage them early to garner their interest and support for evaluation efforts. Still, you may be surprised by how much valuable information you can get from these two sources—documentation of activities and participation records—with minimal effort.

Get creative about other data you can scavenge, such as institutional data that colleges already collect; website data, such as Google Analytics; and citation analytics for published articles.

Blog: Look No Further! Potential Sources of Institutional Data

Posted on February 4, 2015
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Carolyn Brennan
Assistant Vice Chancellor for Research
University of Washington Bothell
Russell Cannon
Director of Institutional Research
University of Washington Bothell

This blog entry is a follow-up to our article in EvaluATE’s Winter 2015 newsletter on the use of institutional data for evaluation and grant proposals. In that article, we highlight data collected by most higher education institutions. Here we discuss additional sources that may be available on your campus.

  • Longitudinal Data Systems: As part of new federal regulations, states must track students longitudinally through and beyond the education pipeline. The implementation of these systems, the extent of the data stored, and the availability of the data vary greatly by state, with Texas, Florida, and Washington leading the pack.
  • Support Services: Check the college’s catalog of specific support service offerings and their target population; such listings are often created as resources for students as part of campus student success efforts. This information can help shape a grant proposal narrative and the services themselves may be potential spaces for embedding and assessing interventions.
  • Surveys: Many institutions administer surveys that track student beliefs, behaviors, and self-reported actions and characteristics. These include national surveys (which allow external benchmarking but less customization) and internal surveys (which allow more customization but only internal benchmarking). Data from such surveys can help shape grant proposals and evaluations in more qualitative ways.

Caution: All surveys may suffer from low response rates, response bias, and the subjectivity of responses; they should be used only when more objective data are not available or to augment those “harder” data.

  • National Student Clearinghouse (NSC) data: Although schools are required to track data on student success at their own institutions, many are increasingly using tools like the National Student Clearinghouse to track where students transfer, whether they eventually graduate, and whether they go on to professional and graduate school. NSC is nearly always the most accurate source of data on graduate school attainment and can add nuance by reframing some “drop-outs” as transfers who eventually graduate.
  • Data on student behavior: While self-reported data on student behavior can be obtained through surveys, many institutions have adopted card-swipe systems and tracking mechanisms that monitor student activity on learning management systems. These tools provide hard data on certain elements of student behavior, such as participation in extracurricular activities, time spent with study groups or learning resources, and behaviors such as coming late to class.
  • Campus-level assessment: Some institutions use standardized national tools like ACT’s Collegiate Assessment of Academic Proficiency or the Council for Aid to Education’s Collegiate Learning Assessment. They are sometimes administered to all students; more often they are administered to a sample, sometimes on a voluntary basis (which may result in bias). At the urging of internal efforts or external accreditors, some institutions have developed in-house instruments (rubric-graded analysis of actual student work). While these may not be as “accurate” or “reliable” as nationally developed instruments, they are often better proxies of faculty and campus engagement.
  • Program-level assessment: Many programs may have program-specific capstones or shared assessments that can be used as part of the evaluation process.

These are all potential sources of data that can improve the assessment and description of interventions designed to support the mission of higher education institutions: increased student retention, enhanced academic performance, and improved graduation rates. We’d like to hear whether you’ve used any of these sources, or others, successfully toward these aims.