Russell Cannon

Director of Institutional Research, University of Washington Bothell

Russell Cannon is the Director of Institutional Research at the University of Washington Bothell, where his office oversees strategic analysis, reporting, and institutional assessment. His work focuses on applied mixed-methods research and strategic planning to promote access, student success, and sustainable institutional development. He is also a member of the University of Wisconsin-Madison HOPE Lab and writes on education policy and data at his blog, www.aroundlearning.com, and on Twitter @aroundlearning.


Webinar: Small Project Evaluation: Principles and Practices

Posted on February 10, 2016 in Webinars

Presenter(s): Charlotte Forrest, Elaine Craft, Lori Wingate, Miranda Lee, Russell Cannon
Date(s): March 23, 2016
Time: 1-2:30 p.m. EDT
Recording: https://youtu.be/WUFTMyyRgyU

An effective small project evaluation requires a clear-cut and feasible project plan, an evaluation plan that matches the project’s scope and purpose, and a project team and external evaluator who are willing and able to share responsibility for implementing the evaluation. In this webinar, we will review foundational principles of small project evaluation and discuss strategies for putting them into practice for a high-quality, economical, and useful evaluation of a small project.

Webinar participants will be able to

  1. Create or refine a project logic model that accurately represents a project’s activities and intended outcomes as a foundation for an evaluation plan.
  2. Develop evaluation questions that are appropriate for a small project.
  3. Identify project process and outcome indicators for answering the evaluation questions.

Resources:
Slides
Handout

Blog: Look No Further! Potential Sources of Institutional Data

Posted on February 4, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Carolyn Brennan
Assistant Vice Chancellor for Research
University of Washington Bothell
Russell Cannon
Director of Institutional Research
University of Washington Bothell

This blog entry is a follow-up to our article in EvaluATE’s Winter 2015 newsletter on the use of institutional data for evaluation and grant proposals. In that article, we highlight data collected by most higher education institutions. Here we discuss additional sources that may be available on your campus.

  • Longitudinal Data Systems: As part of recent federal initiatives, states must track students longitudinally through and beyond the education pipeline. The implementation of these systems, the extent of the data stored, and the availability of the data vary greatly by state, with Texas, Florida, and Washington leading the pack.
  • Support Services: Check the college’s catalog of specific support service offerings and their target population; such listings are often created as resources for students as part of campus student success efforts. This information can help shape a grant proposal narrative and the services themselves may be potential spaces for embedding and assessing interventions.
  • Surveys: Many institutions administer surveys that track students' beliefs, behaviors, and self-reported characteristics. These include national surveys (which allow external benchmarking but less customization) and internal surveys (which allow more customization but only internal benchmarking). Data from such surveys can help shape grant proposals and evaluations in more qualitative ways. Frequently used survey types include both national and institution-specific instruments.

Caution: All surveys may suffer from low response rates, nonresponse bias, and the subjectivity of self-reported answers; they should be used only when "harder" data are not available or to augment those data.

  • National Student Clearinghouse (NSC) data: Although schools are required to track student success at their own institutions, many increasingly use tools like the National Student Clearinghouse to track where students transfer, whether they eventually graduate, and whether they go on to professional and graduate school. NSC is nearly always the most accurate source of data on graduate school attainment and can add nuance by reframing some "drop-outs" as transfers who eventually graduate (see the sketch after this list).
  • Data on student behavior: While self-reported behavior data can be obtained through surveys, many institutions have adopted card-swipe systems and tracking mechanisms on learning management systems. These provide hard data on certain elements of student behavior, such as participation in extracurricular activities, time spent with study groups or learning resources, or coming late to class.
  • Campus-level assessment: Some institutions use standardized national tools like ACT's Collegiate Assessment of Academic Proficiency or the Council for Aid to Education's Collegiate Learning Assessment. These are sometimes administered to all students; more often they are administered to a sample, sometimes on a voluntary basis (which may introduce bias). Prompted by internal efforts or external accreditors, some institutions have developed in-house instruments, such as rubric-graded analysis of actual student work. While these may not be as "accurate" or "reliable" as nationally developed instruments, they are often better proxies of faculty and campus engagement.
  • Program-level assessment: Many programs have program-specific capstones or shared assessments that can be used as part of the evaluation process.
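
To make the NSC point above concrete, here is a minimal sketch in Python/pandas of reclassifying apparent drop-outs once Clearinghouse matches are in hand. All column names and values are hypothetical; actual NSC exports are laid out differently.

```python
import pandas as pd

# Hypothetical entering cohort flagged by the home institution.
cohort = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "left_without_degree": [True, True, False, True],
})

# Hypothetical NSC-style match results (real exports differ in layout).
nsc = pd.DataFrame({
    "student_id": [1, 3, 4],
    "graduated_elsewhere": [True, False, True],
})

merged = cohort.merge(nsc, on="student_id", how="left")
merged["graduated_elsewhere"] = merged["graduated_elsewhere"].fillna(False).astype(bool)

# Reframe outcomes: leavers who later earned a degree elsewhere are
# transfers who graduated, not losses to higher education.
merged["outcome"] = "retained or graduated here"
leavers = merged["left_without_degree"]
merged.loc[leavers, "outcome"] = "drop-out"
merged.loc[leavers & merged["graduated_elsewhere"], "outcome"] = "transferred and graduated"

print(merged[["student_id", "outcome"]])
```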

These are all potential sources of data that can improve the assessment and description of interventions designed to support the mission of higher education institutions: increased student retention, enhanced academic performance, and improved graduation rates. We'd like to hear whether you've used these or other sources successfully toward these aims.

Newsletter: Have You Overlooked Data That Might Strengthen Your Project Evaluation Reports or Grant Proposals?

Posted on January 1, 2015 in Newsletter

Many institutions of higher education collect very useful quantitative data as part of their regular operational and reporting processes. Why not put them to good use for your project evaluations or grant proposals? An office of institutional research, which often participates in the reporting process, can serve as a guide for data definitions and can often assist in creating one-time reports on these data and/or provide training on accessing and using existing reports.

Course Offerings: How many classes are offered in statistics? How frequently are they offered? Getting a sense of course enrollment numbers over time can illustrate need in a grant narrative. If a project involves the creation of new curricular elements, pre- and post-intervention enrollment numbers can serve as an outcome measure in an evaluation.
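
As a rough illustration of tracking enrollment over time, here is a minimal sketch in Python/pandas; the course names, terms, and counts are entirely hypothetical.

```python
import pandas as pd

# Hypothetical section-level enrollment records.
enrollments = pd.DataFrame({
    "term":     ["F13", "F13", "F14", "F14", "F15", "F15"],
    "course":   ["STAT101", "STAT201", "STAT101", "STAT201", "STAT101", "STAT201"],
    "enrolled": [42, 18, 55, 22, 78, 35],
})

# Enrollment by term: a simple trend line for a need statement.
by_term = enrollments.groupby("term")["enrolled"].sum()
print(by_term)

# Pre/post comparison around a hypothetical Fall 2014 curriculum change.
pre, post = by_term["F13"], by_term["F15"]
print(f"Change in statistics enrollment: {(post - pre) / pre:.0%}")
```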

Student Transcripts: Is there a disproportionate number of veterans taking Spanish? How do they fare? Where do students major and minor? These data can serve as proxies for interest in different majors, identify gateway courses that might need support, and uncover course-taking patterns or relationships to GPA and full-/part-time status. Many of these can become outcomes or benchmarks in an evaluation, as well as context for a narrative.
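
For example, gateway courses in need of support are often flagged by their DFW rate (the share of D, F, and W grades). A minimal sketch, with entirely hypothetical transcript data:

```python
import pandas as pd

# Hypothetical transcript extract.
transcripts = pd.DataFrame({
    "course": ["MATH120"] * 5 + ["CHEM101"] * 5,
    "grade":  ["A", "D", "F", "W", "B", "A", "B", "B", "C", "A"],
})

# DFW rate per course: a common proxy for gateway courses needing support.
transcripts["dfw"] = transcripts["grade"].isin(["D", "F", "W"])
dfw_rates = transcripts.groupby("course")["dfw"].mean().sort_values(ascending=False)
print(dfw_rates)  # MATH120: 0.60, CHEM101: 0.00
```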

Student Demographic and Admissions Data: Who are our students? How do they shape the institutional narrative?  Examine academic origin (high school, community college); incoming characteristics such as GPA, SAT, or ACT scores; race/ethnicity; veteran status; age; gender; Pell-grant eligibility status; underrepresented minority status; resident/nonresident status; and on-/off-campus housing. Student populations can be broken down into treatment cohorts for an evaluation of groups shown by research to benefit most from the intervention.
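
A minimal sketch of breaking a population into treatment cohorts, assuming hypothetical flags such as Pell eligibility and first-generation status (both column names are illustrative):

```python
import pandas as pd

# Hypothetical demographic/admissions extract.
students = pd.DataFrame({
    "student_id": range(1, 7),
    "pell_eligible":    [True, False, True, True, False, False],
    "first_generation": [True, True, False, True, False, False],
})

# Flag a cohort the literature suggests benefits most from the intervention.
students["cohort"] = "comparison"
target = students["pell_eligible"] & students["first_generation"]
students.loc[target, "cohort"] = "priority"

print(students.groupby("cohort").size())  # comparison: 4, priority: 2
```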

Faculty Demographic Information: Who are our faculty?  Examining full-time/part-time status, race/ethnicity, and gender can yield interesting observations. How do faculty demographics match students’ demographics? What is the student/faculty ratio? This information can enhance narrative descriptions of how students are served.

Financial Aid Data: How do we support our students fiscally? Information about cost of attendance vs. tuition, net cost vs. "sticker cost," the percentage of students graduating with loans, and average loan burden can be important to describe. It can also be a way of segmenting students when evaluating outcomes, and can serve as an outcome measure in itself for grants intended to affect financial aid or financial literacy.
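
As a toy illustration of the net cost vs. "sticker cost" distinction (all figures invented):

```python
# Hypothetical figures; real values come from your financial aid office.
cost_of_attendance = 25_000  # tuition, fees, room and board, books
avg_grant_aid = 9_500        # grants and scholarships only (not loans)

net_price = cost_of_attendance - avg_grant_aid
print(f"Sticker cost: ${cost_of_attendance:,}; average net price: ${net_price:,}")
```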

Student Outcomes: What does persistence look like at your institution? What are the one-year retention rates; four-, five-, and six-year graduation rates; and number of graduates by CIP (Classification of Instructional Programs) code? These are almost always the standard benchmarks for interventions intended to affect retention and completion.
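
These benchmark rates are straightforward to compute from an entering-cohort file. A minimal sketch with hypothetical fields:

```python
import pandas as pd

# Hypothetical entering-cohort extract.
cohort = pd.DataFrame({
    "student_id": range(1, 11),
    "enrolled_year2": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],               # returned in year 2
    "years_to_degree": [4, 6, None, None, 5, 6, None, 4, 6, None],  # None = no degree yet
})

n = len(cohort)
retention_1yr = cohort["enrolled_year2"].mean()
grad_4yr = (cohort["years_to_degree"] <= 4).sum() / n
grad_6yr = (cohort["years_to_degree"] <= 6).sum() / n

print(f"One-year retention:   {retention_1yr:.0%}")  # 80%
print(f"Four-year graduation: {grad_4yr:.0%}")       # 20%
print(f"Six-year graduation:  {grad_6yr:.0%}")       # 60%
```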

To further your case and provide context, comparison data for most of these are available in IPEDS (the Integrated Postsecondary Education Data System) and are tracked by federal studies like the Beginning Postsecondary Students (BPS) Longitudinal Study and the National Postsecondary Student Aid Study (NPSAS), all of which are potential sources for external benchmarking. Of course, collecting these types of data can be addictive as you discover new ways to enliven your narrative and empower your evaluation with the help of institutional research. Happy hunting!

To learn more about institutional data from Carolyn and Russ, read their contribution to EvaluATE’s blog at evalu-ate.org/blog/brennancannon-feb15.