Wayne Welch


My name is Wayne Welch, and I am a retired professor from the University of Minnesota. My special interests are program evaluation and STEM education. I have worked with the ATE program in several ways: I chaired the advisory panel for the ATE evaluation project at Western Michigan University from 1998 to 2006; along with Bob Reineke, I wrote the Handbook for National Research Committees; and I held two Targeted Research Grants (2008–2014) to study the impact and sustainability of the ATE program.

Resource: NVC Handbook

Posted on November 28, 2017 in NVC

The purpose of this handbook is to help those who are responsible for organizing, planning, or conducting NVC meetings. It is mainly intended for principal investigators (PIs), but other audiences include center staff, committee chairs, committee members, and others with responsibilities related to NVCs. The NVC Handbook is not intended to establish policy, nor does it necessarily apply to other NSF programs.

File: Click Here
Type: Doc
Category: Resources
Author(s): Emma Perk, Lori Wingate, Wayne Welch

Report: An Exploratory Test of a Model for Enhancing the Sustainability of NSF’s Advanced Technological Education (ATE) Program

Posted on February 25, 2015

The purpose of this research is to examine the effectiveness of a model that purports to improve the sustainability of ATE projects and centers. According to Lawrenz, Keiser, & Lavoie (2003), several models for sustainability have been proposed in the organizational change literature. However, for the most part, the models are advocacy statements based on author experience rather than on empirical studies. These authors concluded there was little research directly related to sustainability.

File: Click Here
Type: Report
Category: ATE Research & Evaluation
Author(s): Wayne Welch

Blog: Evaluating ATE Efforts Using Peer-Generated Surveys

Posted on December 17, 2014 in Blog


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

During the course of evaluating the sustainability of NSF’s Advanced Technological Education program, I introduced a new method for creating evaluation surveys. I call it a Peer-Generated Likert Scale because it uses actual statements of the population of interest as the basis for the items on the survey. Listed below are the steps one would follow to develop a peer-generated Likert-type survey, using a generic example of a summer institute in the widget production industry.

1. Describe the subject of the evaluation and the purpose of the evaluation.
In this step, you want to develop a sense of the scope of your evaluation activity, the relevant content, and the relevant subjects. For example:

“This is a six-day faculty development program designed for middle and high school teachers, college faculty, administrators, and others to learn about the widget industry. The purpose of the evaluation is to obtain information about the success of the program.”

2. Define the domain of content to be measured by the survey.
This would require a review of the curriculum materials, conversations with the instructors, and perhaps a couple of classroom observations. Let us suppose the following are some of the elements of the domain to be addressed by a survey:

a. perceived learning about the widget industry
b. attitudes toward the institute
c. judgments about the quality of instruction
d. backgrounds of participants
e. institute organization and administration
f. facilities
g. etc.

3. Collect statements from the participants about the activity related to those domains.
Participants who are involved in the educational activity are given the opportunity to reflect anonymously upon their experiences. They are given prompts, such as:

a. Please list three strengths of the summer institute.
b. Please list three limitations of the institute.

4. Review the statements, select potential survey items, and pilot the survey.
These statements are then reviewed by the evaluation team and selected according to their match with the elements of the domain. They are put in a Likert-type format ranging from Strongly Agree, Agree, Uncertain, and Disagree to Strongly Disagree. Plan on a response time of about 30 seconds per item. Most surveys will consist of 20–30 items.

5. Collect data and interpret the results.
The most effective way to report the results of this type of survey is to show the percent agreeing or strongly agreeing with the positively stated items ("This was one of the most effective workshops that I have ever taken.") and the percent disagreeing with the negatively stated items ("There was too much lecture and not enough hands-on experiences.").
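The reporting rule in this step can be sketched in a few lines of code. This is a minimal illustration, not part of the original procedure: the item wordings, response codes (SA/A/U/D/SD), and answer data below are all hypothetical, and "favorable" simply means agreement with a positive item or disagreement with a negative one.

```python
# Hypothetical responses to two survey items, one positively stated
# and one negatively stated. "positive" flags the item's direction.
responses = {
    "This was one of the most effective workshops I have ever taken.": {
        "positive": True,
        "answers": ["SA", "A", "A", "U", "D", "SA", "A", "A"],
    },
    "There was too much lecture and not enough hands-on experiences.": {
        "positive": False,
        "answers": ["D", "SD", "U", "D", "A", "SD", "D", "D"],
    },
}

def percent_favorable(item):
    """Percent agreeing with a positive item, or disagreeing with a negative one."""
    favorable = {"SA", "A"} if item["positive"] else {"SD", "D"}
    hits = sum(1 for a in item["answers"] if a in favorable)
    return round(100 * hits / len(item["answers"]), 1)

for text, item in responses.items():
    print(f"{percent_favorable(item):5.1f}%  {text}")
```

Reverse-scoring the negative items this way lets all results be read on one scale, so every percentage reported means "percent responding favorably."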

The survey I developed for my ATE research contained 23 such items, and I estimated it would take about 15 minutes to complete. Although I was evaluating ATE sustainability, ATE team leaders could use the process to evaluate their program or individual products and activities. Further details on the procedure can be found in Welch, W. W. (2011), A study of the impact of the advanced technological education program, available from the University of Colorado's DECA Project.

Report: The ATE program: Issues for consideration, a monograph

Posted on October 8, 2014

This report addresses nine issues of interest to ATE program stakeholders: collaboration, dissemination, materials development, professional development, program improvement, advisory committees, evaluation, recruitment and retention, and sustainability.

File: Click Here
Type: Report
Category: ATE Research & Evaluation
Author(s): Arlen Gullickson, Frances Lawrenz, Gloria Rogers, Gloria Tressler, Karen Powe, Lester Reed, Norman Gold, Thomas Owens, Wayne Welch