I lead evaluation for nanoHUB-U, one of the educational opportunities provided through nanoHUB.org. The concept of nanoHUB-U is to provide free access to interdisciplinary, highly technical topics of current significance in STEM research, particularly those related to nanotechnology. In fact, many recent nanoHUB-U course topics are so new that the information is not yet available in published textbooks. My job is to lead the effort to determine the merit of offering the courses and to provide usable information for improving future courses. Sounds great, right? So what’s the problem?

Open online technical courses are similar to a broader group of learning opportunities frequently referred to as MOOCs (massive open online courses). However, technical courses are not intended for a massive audience. How many people on the globe really want to learn about nanophotonic modeling? One of the major challenges for evaluation in open contexts is that anyone in the world with an Internet connection can access the course, whether or not they intend to “complete” it, have the language proficiency to understand the instructor, or simply want to reference the materials. In short: we know little about who is coming into the course and why.

To reach evaluative conclusions, evaluators must begin by understanding stakeholders’ reasons for offering the course and by developing a deep understanding of the learners. Demographic questions must go well beyond the usual race/ethnicity and gender identity. In this blog post, I focus on the survey aspects of open online technical course evaluation.

Practical Tips:

  1. Design the survey collaboratively with the course instructors. Instructors are experts in the technical content and will know what background knowledge is necessary for success in the course (e.g., Have you ever taken a differential equations course?).
  2. Design questions that target learners’ motivations, goals, and intentions for the course. Some examples include: How much time per week do you intend to spend on this course? How much of the course do you intend to complete? How concerned are you with grade outcomes? What do you hope to gain from this experience? Are you currently working full-time or part-time, a full-time or part-time student, or unemployed?
  3. Embed the pre-survey in the first week’s course material. While not technically a true “pretest,” we have found that this technique results in a significantly higher response rate than having the instructor email a link to the survey.
  4. Capture the outcomes of the group the course was designed for. The opinions of thousands may not align with whom the stakeholders intended the course to serve. Use survey logic that targets the intended learner group, and include deeper, open-ended questions (e.g., If this information had not been provided, where would you have learned this type of material?).
  5. Embed the post-survey in the last week’s course material. Again, in our experience this approach has generated a much higher response rate than emailing course participants a link (even with multiple reminders). Most likely, those who take the post-survey are the learners who participated in most aspects of the course.
  6. Use the survey results to identify groups of learners within the course. It is very useful to compare learners’ stated intentions with their actual behavior in the course, as well as their pre- and post-survey responses. When interpreting the results, examine responses by learner group rather than summing up overall course averages (a minimal analysis sketch follows this list).
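To make tip 6 concrete, here is a minimal sketch of what such a grouped comparison might look like in Python with pandas. Everything in it is illustrative: the file name, the column names (learner_id, intended_completion_pct, pct_completed, post_satisfaction), the group labels, and the cut points are hypothetical placeholders, not the actual nanoHUB-U data or analysis.

```python
import pandas as pd

# Load merged pre-survey, activity-log, and post-survey data,
# keyed by an anonymized learner ID (file and columns are hypothetical).
df = pd.read_csv("course_survey_data.csv")

# Bucket learners by their stated intention from the pre-survey,
# rather than treating the enrollment as one homogeneous group.
bins = [0, 25, 75, 100]
labels = ["browsers", "samplers", "intended completers"]
df["intent_group"] = pd.cut(
    df["intended_completion_pct"], bins=bins, labels=labels, include_lowest=True
)

# Compare stated intention with actual behavior and post-course
# outcomes per group, instead of one course-wide average.
summary = df.groupby("intent_group", observed=True).agg(
    n=("learner_id", "count"),
    mean_pct_completed=("pct_completed", "mean"),
    mean_post_satisfaction=("post_satisfaction", "mean"),
)
print(summary)
```

Grouping this way tends to surface patterns that a single course-wide average would hide; for example, learners who only intended to browse may report high satisfaction despite low completion.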

Surveys are one aspect of the evaluation design for nanoHUB-U. Evaluation in an open educational context requires much more contextualization, and traditional educational evaluation metrics should be adapted to provide the most useful, action-oriented information possible.

About the Author

Kerrie Douglas


Assistant Professor, Purdue University

Dr. Kerrie Douglas is an assistant professor of engineering education at Purdue University. She studies how to make inferences about student learning in online environments and how to use assessment to improve the quality of the learning experience in online courses.



EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.