Kerrie Anna Douglas

Visiting Assistant Professor, Director of Evaluation – INSPIRE & nanoHUB-U, School of Engineering Education, Purdue University

Dr. Douglas is a Visiting Assistant Professor in the Purdue School of Engineering Education and Director of Evaluation for INSPIRE, the Research Institute for Pre-College Engineering and for nanoHUB-U. She received her B.A. in Psychology, M.S. Ed. in School Counseling, and her Ph.D. in Educational Psychology, with an emphasis on Research Methods and Measurement from Purdue University. Her research focuses on methods of assessment and evaluation in engineering education. Most recently, her research has related to methods of evaluation in an open online learning environment.


Blog: Surveying Learners in Open Online Technical Courses

Posted on July 22, 2015 by Kerrie Douglas in Blog


Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I lead evaluation for nanoHUB-U, one of the educational opportunities provided through nanoHUB.org. The concept of nanoHUB-U is to provide free access to interdisciplinary, highly technical topics of current significance in STEM research, particularly those related to nanotechnology. In fact, many recent nanoHUB-U course topics are so new that the information is not yet available in published textbooks. My job is to lead the effort to determine the merit of offering the courses and to provide usable information for improving future courses. Sounds great, right? So what’s the problem?

Open online technical courses are similar to a broader group of learning opportunities, frequently referred to as MOOCs (massive open online courses). However, technical courses are not intended for a massive audience. How many people on the globe really want to learn about nanophotonic modeling? One of the major challenges for evaluation in open contexts is that anyone in the world with an Internet connection can access the course, whether or not they intend to “complete” it, have the language proficiency to understand the instructor, or want only to reference materials rather than engage with all aspects of the course. In short: we know little about who is coming into the course and why.

To reach evaluative conclusions, evaluators must begin by understanding stakeholders’ reasons for offering the course and by developing a deep understanding of the learners. Demographic questions must go well beyond the usual race/ethnicity and gender identity. In this blog, I focus on the survey aspects of open online technical course evaluation.

Practical Tips:

  1. Design the survey collaboratively with the course instructors. Instructors are experts in the technical content and know what background knowledge is necessary to succeed in the course (e.g., Have you ever taken a differential equations course?).
  2. Design questions that target learners’ motivations, goals, and intentions for the course. Some examples include: How much time per week do you intend to spend on this course? How much of the course do you intend to complete? How concerned are you with grade outcomes? What do you hope to gain from this experience? Are you currently employed full-time or part-time, a full-time or part-time student, or unemployed?
  3. Embed the pre-survey in the first week’s course material. While not technically a true “pretest,” we have found that this technique results in a significantly higher response rate than having the instructor email a link to the survey.
  4. Capture the outcomes of the group the course was designed for. The opinions of thousands may not be in alignment with who the stakeholders intended the course to serve. Design questions with a logic that targets the intended learner group by using deeper, open-ended questions (e.g., If this information has not been provided, where would you have learned this type of material?)
  5. Embed the post-survey in the last week’s course material. Again, in our experience this approach has generated a much higher response rate than emailing course participants a link (even with multiple reminders). Most likely, those who take the post-survey are the learners who participated in most aspects of the course.
  6. Use the survey results to identify groups of learners within the course. It is especially useful to compare learners’ stated intentions with their actual behavior in the course, as well as their pre- and post-survey responses. When interpreting results, examine responses by learner group rather than reporting overall course averages.
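As a minimal sketch of the group-level comparison described in tip 6, the snippet below averages an observed behavior (fraction of the course completed) within each stated-intention group from a pre-survey. The field names, intention categories, and data are all made up for illustration; the actual nanoHUB-U instruments and metrics may differ.

```python
from collections import defaultdict

# Hypothetical pre-survey responses joined with observed course behavior.
# "intent" and "completed_fraction" are illustrative field names only.
responses = [
    {"intent": "complete the full course", "completed_fraction": 0.9},
    {"intent": "complete the full course", "completed_fraction": 0.7},
    {"intent": "browse reference material", "completed_fraction": 0.2},
    {"intent": "browse reference material", "completed_fraction": 0.1},
    {"intent": "watch selected lectures", "completed_fraction": 0.4},
]

def completion_by_intent(rows):
    """Average observed completion within each stated-intention group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["intent"]].append(row["completed_fraction"])
    return {intent: sum(vals) / len(vals) for intent, vals in groups.items()}

print(completion_by_intent(responses))
```

Reporting these per-group averages, rather than a single course-wide completion rate, keeps learners who only ever intended to browse from dragging down the apparent outcomes of the group the course was designed for.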

Surveys are one aspect of the evaluation design for nanoHUB-U. Evaluation in an open educational context requires much more contextualization, and traditional educational evaluation metrics should be adapted to provide the most useful, action-oriented information possible.