Karl Kapp

Professor of Instructional Technology, Bloomsburg University of Pennsylvania

Karl Kapp, Ed.D., is a professor of instructional technology at Bloomsburg University of Pennsylvania. He has served as an external evaluator on several National Science Foundation grants and is currently a researcher on a National Institutes of Health grant investigating methods to help childcare workers detect child abuse.


Blog: 11 Important Things to Know About Evaluating Curriculum Development Projects*

Posted on July 24, 2019 in Blog


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Curriculum development projects are designed to create new content or present content to students in a new format with new activities or approaches. The following are important things to know about evaluating curriculum development projects.

1.     Understand the underlying model, pedagogy, and process used to develop the curriculum. There are several curriculum development models, including the DACUM model (Developing a Curriculum), the Backward Design Method, and the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model of instructional design. Whatever approach is used, make sure you understand its methodology and underlying philosophy so that these can help guide the evaluation.

2.     Establish a baseline. If possible, establish what student performance was before the curriculum was available, to assess the level of change or increased learning created as a result of the new curriculum. This could involve data on student grades or performance from the year before the new curriculum is introduced or data on job performance or another indicator.

3.     Clearly identify the outcomes expected of the curriculum. What should students know or be able to do when they have completed the curriculum? Take the time to understand the desired outcomes and how the curriculum content, activities, and approach support those outcomes. The outcomes should be directly linked to the project goals and objectives. Look for possible disconnects or gaps.

4.     Employ a pre/post test design. One way to establish that learning has occurred is to measure student knowledge of a subject before and after the curriculum is introduced. If you are comparing two curricula, consider using one group as a control group that does not use the new curriculum and comparing the performance of the two groups in a pre/post test design.
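The gain-score comparison described above can be sketched in a few lines. This is a minimal illustration, not part of the original workshop handout; the scores and group sizes are hypothetical placeholders.

```python
# Sketch of a pre/post comparison with a control group.
# All scores below are hypothetical, for illustration only.
from statistics import mean

def gain_scores(pre, post):
    """Per-student gain: post-test score minus pre-test score."""
    return [b - a for a, b in zip(pre, post)]

# Treatment group used the new curriculum; control group did not.
treatment_pre  = [55, 60, 48, 62, 58]
treatment_post = [78, 81, 70, 85, 74]
control_pre    = [54, 61, 50, 60, 57]
control_post   = [62, 66, 58, 68, 61]

t_gain = gain_scores(treatment_pre, treatment_post)
c_gain = gain_scores(control_pre, control_post)

print(f"Mean gain, new curriculum: {mean(t_gain):.1f}")
print(f"Mean gain, control group:  {mean(c_gain):.1f}")
```

With real data, the evaluator would follow a comparison like this with an appropriate significance test and effect-size estimate rather than relying on raw mean differences.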

5.     Employ content analysis techniques. Content analysis is the process of analyzing documents (student guides, instructor guides, online content, videos, and other materials) to determine the type and frequency of content, internal coherence (consistency among the different elements of the curriculum), and external coherence (whether interpretations in the curriculum fit theories accepted within and beyond the discipline).
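The frequency-of-content part of this analysis is often automated. Below is a minimal sketch of counting coded terms in a document; the sample text and category terms are hypothetical, and a real study would use a validated coding scheme rather than raw keyword counts.

```python
# Minimal frequency count for content analysis.
# The document text and coded terms are hypothetical examples.
import re
from collections import Counter

def term_frequencies(text, terms):
    """Count how often each coded term appears in a document."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    return {t: counts[t] for t in terms}

student_guide = ("Safety procedures are reviewed before each lab. "
                 "Lab safety is assessed weekly.")
coded_terms = ["safety", "lab", "assessment"]

print(term_frequencies(student_guide, coded_terms))
# {'safety': 2, 'lab': 2, 'assessment': 0}
```

A count of zero for a coded term (as with "assessment" above) can flag a possible gap in coverage that the evaluator should examine by hand.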

6.     Participate in the activities. One effective method for helping evaluators understand the impact of activities and exercises is to participate in them. This helps determine the quality of the instructions, the level of engagement, and the learning outcomes that result from the activities.

7.     Ensure assessment items match instructional objectives. Student progress is typically measured through written tests. To ensure those tests assess students' grasp of the course objectives and curriculum, match each assessment item to an instructional objective. Create a chart mapping objectives to assessment items to verify that all objectives are assessed and that every assessment item is pertinent to the curriculum.
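The objective-to-item chart amounts to a simple coverage check, sketched below. The objective IDs and item mappings are illustrative placeholders, not from the handout.

```python
# Sketch of an objective-to-assessment-item coverage check.
# Objective IDs and item mappings are hypothetical examples.
objectives = {"OBJ-1", "OBJ-2", "OBJ-3"}

# Which instructional objective each test item is meant to assess.
item_to_objective = {
    "Q1": "OBJ-1",
    "Q2": "OBJ-1",
    "Q3": "OBJ-3",
    "Q4": "OBJ-9",   # refers to no stated objective
}

covered = set(item_to_objective.values())
unassessed = objectives - covered   # objectives with no test items
orphan_items = {q for q, o in item_to_objective.items()
                if o not in objectives}

print("Objectives with no assessment items:", sorted(unassessed))
print("Items not tied to any objective:", sorted(orphan_items))
```

Both output lists should be empty for a well-aligned test: every objective is assessed, and every item traces back to an objective.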

8.     Review guidance and instruction provided to teachers/facilitators in guides. Determine if the materials are properly matched across the instructor guide, student manual, slides, and in-class activities. Determine if the instructions are clear and complete and that the activities are feasible.

9.     Interview students, faculty, and, possibly, workforce representatives. Faculty can provide insights into the usefulness and effectiveness of the materials, and students can provide input on level of engagement, learning effort, and overall impression of the curriculum. If the curriculum is tied to a technician profession, involve industry representatives in reviewing and examining the curriculum. This should be done as part of the development process, but if it is not, consider having a representative review the curriculum for alignment with industry expectations.

10.  Use Kirkpatrick’s four levels of evaluation. A widely used model for curriculum evaluation is the Kirkpatrick Model. Its four levels measure initial learner reactions, knowledge gained from the instruction, behavioral changes that might result from the instruction, and overall impact on the organization, field, or students.

11.  Pilot the instruction. Conduct pilot sessions as part of the formative evaluation to ensure that the instruction functions as designed. After the pilot, collect end-of-day reaction sheets/tools and trainer observations of learners. Having an end-of-program product—such as an action-planning tool to implement changes around curriculum focus issue(s)—is also useful.

RESOURCES

For a detailed discussion of content analysis, see chapter 9 of Gall, M. D., Gall, J. P., & Borg, W. R. (2007). Educational research: An introduction (8th ed.). Boston: Pearson.

DACUM Job Analysis Process: https://s3.amazonaws.com/static.nicic.gov/Library/010699.pdf

Backward Design Method: https://educationaltechnology.net/wp-content/uploads/2016/01/backward-design.pdf

ADDIE Model: http://www.nwlink.com/~donclark/history_isd/addie.html

Kirkpatrick Model: http://www.nwlink.com/~donclark/hrd/isd/kirkpatrick.html

 

* This blog is a reprint of a conference handout from an EvaluATE workshop at the 2011 Advanced Technological Education PI Conference.

Webinar: E-valuation: Assessing Webinars, Social Media, and Website Usage

Posted on November 16, 2011 in Webinars

Presenter(s): Jason Burkhardt, Karl Kapp, Kurt Wilson, Stephanie Evergreen
Date(s): November 16, 2011
Recording: https://vimeo.com/32331229

ATE grantees are using the Web for outreach, instruction, professional development, dissemination, and more. As the Web becomes more central to the activities and deliverables of ATE grants, evaluation strategies need to keep pace. In this webinar featuring Karl Kapp, ATE evaluator and noted expert on e-learning, we’ll share recent research on webinar, social media, and website evaluation practices.

Resources:
Slide PDF
Handout PDF

Webinar: Making Evaluation Integral to your ATE Proposal

Posted on July 21, 2010 in Webinars

Presenter(s): Gordon Snyder, Karl Kapp, Linnea Fletcher, Lori Wingate, Peggie Weeks, Stephanie Evergreen
Date(s): July 21, 2010
Recording: https://vimeo.com/13577194

In this free, 90-minute webinar, participants will learn how to make evaluation a strong component of their ATE proposals. Staff from the ATE Evaluation Resource Center will provide guidance about how to focus an ATE evaluation, develop a plan for data collection and analysis, describe the evaluation in a proposal, and work with an evaluator.

The webinar will feature NSF-ATE program officer Linnea Fletcher, who will provide NSF’s perspective on these topics. Gordon Snyder and Karl Kapp, a veteran ATE PI-evaluator team, will also join the webinar to talk about their successful experiences working together on funded ATE proposals.

Resources:
Slide PDF
Handout PDF