While evaluating the sustainability of NSF’s Advanced Technological Education (ATE) program, I introduced a new method for creating evaluation surveys. I call it a Peer-Generated Likert Scale because it uses actual statements from the population of interest as the basis for the survey items. Listed below are the steps one would follow to develop a peer-generated Likert-type survey, using a generic example of a summer institute in the widget production industry.

1. Describe the subject and purpose of the evaluation.
In this step, you want to develop a sense of the scope of your evaluation activity, the relevant content, and the relevant subjects. For example:

“This is a six-day faculty development program designed for middle and high school teachers, college faculty, administrators, and others to learn about the widget industry. The purpose of the evaluation is to obtain information about the success of the program.”

2. Define the domain of content to be measured by the survey.
This would require a review of the curriculum materials, conversations with the instructors, and perhaps a couple of classroom observations. Let us suppose the following are some of the elements of the domain to be addressed by a survey:

a. perceived learning about the widget industry
b. attitudes toward the institute
c. judgments about the quality of instruction
d. backgrounds of participants
e. institute organization and administration
f. facilities
g. etc.

3. Collect statements from the participants about the activity related to those domains.
Participants in the educational activity are given the opportunity to reflect anonymously on their experiences. They are given prompts such as:

a. Please list three strengths of the summer institute.
b. Please list three limitations of the institute.

4. Review the statements, select potential survey items, and pilot the survey.
The evaluation team then reviews these statements and selects those that match the elements of the domain. The selected statements are put into a Likert-type format ranging from Strongly Agree through Agree, Uncertain, and Disagree to Strongly Disagree. Plan on a response time of about 30 seconds per item; most surveys will consist of 20–30 items, so completing one should take roughly 10–15 minutes.
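As a rough sketch of how this selection and planning step might be organized, the snippet below (Python, with invented statements, domain labels, and counts) tags hypothetical participant statements with the domain elements from step 2, keeps those that match the elements the survey must cover, and applies the 30-seconds-per-item planning figure:

# Hypothetical candidate statements collected in step 3, each tagged with the
# domain element (step 2) it appears to address.
candidate_statements = [
    ("The hands-on widget lab sessions were the best part of the week.", "quality of instruction"),
    ("I now understand how widgets move from design to production.", "perceived learning"),
    ("There was too much lecture and not enough hands-on experiences.", "quality of instruction"),
    ("The dorm rooms were too far from the classrooms.", "facilities"),
]

# Five-point response format applied to every selected item.
SCALE = ["Strongly Agree", "Agree", "Uncertain", "Disagree", "Strongly Disagree"]
SECONDS_PER_ITEM = 30  # planning figure from step 4

# Keep only statements that match the domain elements the survey must cover.
target_elements = {"perceived learning", "quality of instruction", "facilities"}
items = [text for text, element in candidate_statements if element in target_elements]

minutes = len(items) * SECONDS_PER_ITEM / 60
print(f"{len(items)} items rated on the scale {SCALE}; estimated completion time: {minutes:.1f} minutes")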

5. Collect data and interpret the results.
The most effective way to report the results of this type of survey is to show the percentage of respondents agreeing or strongly agreeing with the positively stated items (“This was one of the most effective workshops that I have ever taken.”) and the percentage disagreeing or strongly disagreeing with the negatively stated items (“There was too much lecture and not enough hands-on experiences.”).
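As a minimal illustration of this reporting convention, the sketch below (Python, with invented item labels and response data) tallies the percentage of favorable responses per item: Strongly Agree/Agree for positively worded items and Strongly Disagree/Disagree for negatively worded ones.

from collections import Counter

# Invented responses on the five-point scale; real data would come from the
# survey piloted in step 4.
responses = {
    ("positive", "This was one of the most effective workshops that I have ever taken."):
        ["Strongly Agree", "Agree", "Agree", "Uncertain", "Disagree"],
    ("negative", "There was too much lecture and not enough hands-on experiences."):
        ["Disagree", "Strongly Disagree", "Agree", "Disagree", "Uncertain"],
}

# A favorable response is agreement with a positive item
# or disagreement with a negative item.
FAVORABLE = {
    "positive": {"Strongly Agree", "Agree"},
    "negative": {"Strongly Disagree", "Disagree"},
}

for (wording, item), answers in responses.items():
    counts = Counter(answers)
    favorable = sum(counts[category] for category in FAVORABLE[wording])
    percent = 100 * favorable / len(answers)
    print(f"{percent:5.1f}% favorable | {item}")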

The survey I developed for my ATE research contained 23 such items, and I estimated it would take about 15 minutes to complete. Although I was evaluating ATE sustainability, ATE team leaders could use the same process to evaluate their own programs or individual products and activities. Further details on the procedure can be found in Welch, W. W. (2011), A study of the impact of the Advanced Technological Education program, available from the University of Colorado’s DECA Project.

About the Author

Wayne Welch


Professor

My name is Wayne Welch, and I am a retired professor from the University of Minnesota. My special interests are program evaluation and STEM education. I have worked with the ATE program in several ways: I chaired the advisory panel for the ATE evaluation project at Western Michigan University from 1998 to 2006; along with Bob Reineke, I wrote the Handbook for National Research Committees; and I have had two Targeted Research Grants (2008–2014) to study the impact and sustainability of the ATE program.

EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.