Newsletter - Fall 2013

Newsletter: 20 Years of ATE Evaluation

Posted on October 1, 2013 in Newsletter

Evaluation has been required of ATE projects and centers since the program began in 1993. Many evaluations were concerned more with the numbers of students and faculty impacted than with the effectiveness of the intervention. The sophistication of evaluation expectations has been increasing over time. Early in the program, there was a shortage of evaluators who understood both the disciplinary content and the methods of evaluation. Through a separate grant, Arlen Gullickson at the Western Michigan University Evaluation Center provided an internship program in which novice evaluators spent six months evaluating a component of an ATE project. Several ATE evaluators got their start in this program, and several PIs learned what evaluation could do for them and their projects.

The ATE program responded to the Government Performance and Results Act by developing a project monitoring survey that provided a snapshot of the program. The survey is still administered annually by EvaluATE (see p. 3). Although this monitoring system also emphasized “body counts,” over time the survey was modified with input from program officers to include questions that encouraged evaluation of project effectiveness.

For example, questions asked whether the project’s evaluation investigated the extent to which participants in professional development actually implemented the content correctly and what the resulting impact on student learning was, following the ideas of the Kirkpatrick model for evaluation. The evaluations reported in renewal proposals still concentrate on “body counts,” yet proposal reviewers ask, “What happened as a result?” To develop project evaluations that could be aggregated to determine how the ATE program was meeting its goals, a workshop was held with evaluators from centers. The participants suggested that projects could be evaluated along eight dimensions: impact on students, faculty, the college, the community, industry, interaction among colleges, the region, and the nation. A review of several project and center annual reports found that all categories were addressed and that very few items could not be accommodated in this scheme.

Following the evaluation in NSF’s Math and Science Partnership program, I have encouraged project and center leaders to make a FEW claims about the effectiveness of their projects. The evaluator should provide evidence for the extent to which the claims are justified. This view is consistent with the annual report template in Research.gov, which asks for the major goals of the project. It also limits summative evaluation to a few major issues. Much of the emphasis, both here and in general, has been on summative evaluation focused on impact and effectiveness. Projects should also be engaged in formative evaluation to inform project improvements. This requires a short feedback cycle that is usually not possible with only external evaluation. An internal evaluator working with an external evaluator may be useful for collecting data and providing timely feedback to the project. A grant has recently been awarded to strengthen the practice and use of formative evaluation by ATE grantees. Arlen Gullickson, EvaluATE’s co-PI, is leading this work in cooperation with EvaluATE.

Gerhard Salinger is a founding program officer of the ATE program. The ideas expressed here are his alone and may not reflect the views of the National Science Foundation.

Newsletter: ATE Sustainability

Posted on October 1, 2013 in Newsletter

Sustainability is about ensuring that at least some aspects of a project or center’s work—such as faculty positions, partnerships, or curricula—have “a life beyond ATE funding” (nsf.gov/ate). By definition, sustainability “happens” after NSF funding ends—and thus, after the project or center’s evaluation has concluded. So how can sustainability be addressed in an evaluation? There are three sources of information that can help with a prospective assessment of sustainability, whether for external evaluation purposes or to support project planning and implementation:

(1) Every ATE proposal is supposed to include a sustainability plan that describes what aspects of the grant will be sustained beyond the funding period and how.

(2) Every proposal submitted in 2012 or later required a data management plan. This plan should have described how the project’s data and other products would be preserved and made available to others. Both the sustainability and data management plans should be reviewed to determine if the project will be able to deliver on what was promised.

(3) Developed by Wayne Welch, the Checklist for Assessing the Sustainability of ATE Projects and Centers can be used to determine a project’s strengths and weaknesses in regard to sustainability. The checklist addresses diverse dimensions of sustainability related to program content and delivery, collaboration, materials, facilities, revenue, and other issues. See bit.ly/18l2Fcb.

Newsletter: Do you have a model for writing up survey results?

Posted on October 1, 2013 in Newsletter

There is not a one-size-fits-all template for writing up survey results—it depends on the survey’s scale, the reason it was conducted, and who needs to use the results. But here are some guidelines: First, determine if you really need a full, narrative report. If the survey was relatively short and the results are mainly for internal project use, you may not need a detailed report. For example, at EvaluATE, we conduct a brief survey at the end of each of our webinars. The report is simply a summary of the results. The only additional analysis beyond what is automatically generated by our web-based survey system (hostedsurvey.com) is our categorization of the open-ended responses by topic, so we can gain a better sense of the overall perceptions of the strengths and weaknesses of the webinar. We do not write any narrative for this basic survey results summary (see an example: 2013 Webinar Evaluation).

If the survey results need to go to an external audience and/or it was a larger-scale survey whose results warrant an in-depth look, then you should probably develop a formal report. This survey report should include details about the survey’s purpose, administration mode (e.g., online vs. paper-and-pencil), sample (who was asked to complete the survey), and response rate (what percentage of survey recipients completed the survey). If analytic techniques beyond basic descriptive statistics were used to describe the results, then there should also be a section explaining what analyses were performed and why. An important decision is how much of the quantitative and qualitative results to present in the body of the report. If there were numerous items on the survey, it may not be practical to report all the results for every item—and doing so makes for tedious reading. In this case, you can combine results to highlight trends and anomalies in the data. The detailed results can go in an appendix. Don’t be tempted to simply quantify qualitative results—it is better to describe the overall themes from the qualitative data and include a few representative examples in the body of the report. The responses to open-ended questions may be included in an appendix for readers who want to dig into the details.
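To make these reporting basics concrete, here is a minimal sketch in Python (standard library only) that computes a response rate and simple descriptive statistics of the kind a formal report would summarize. The counts, item names, and ratings below are hypothetical placeholders, not actual ATE or EvaluATE survey data.

    # Minimal sketch: response rate and descriptive statistics for a
    # hypothetical survey. All names and numbers are illustrative only.
    from statistics import mean, median

    invited = 120    # hypothetical number of people asked to complete the survey
    completed = 87   # hypothetical number of completed responses

    response_rate = completed / invited
    print(f"Response rate: {response_rate:.0%} ({completed} of {invited})")

    # Hypothetical responses to two 5-point rating items
    items = {
        "Overall satisfaction": [5, 4, 4, 3, 5, 4, 2, 5, 4, 4],
        "Usefulness of content": [4, 4, 5, 3, 4, 5, 4, 3, 4, 5],
    }

    for item, ratings in items.items():
        print(f"{item}: n={len(ratings)}, mean={mean(ratings):.2f}, "
              f"median={median(ratings)}, range={min(ratings)}-{max(ratings)}")

In practice, these figures would come from your survey system’s export rather than values typed in by hand; the point is simply that the response rate and basic descriptives belong in the report body, with item-level detail reserved for an appendix.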

A survey may be one of multiple data sources for a larger evaluation. In this case, the survey results should be included in the mixed-methods report. This type of evaluation report is best organized by evaluation question (that is, the overarching questions about a project’s progress, quality, and impact) rather than by data source or collection method. Ideally, determining which survey data points relate to which of the larger evaluation questions was part of the survey development process. If not, then that needs to happen as part of the report writing process. The details about the survey methodology and analysis should be included in the report, either in the Methods section or in a technical appendix.

For yet another example of how to present survey results, see the data snapshots based on the annual ATE survey. These one-page documents are visual depictions of results from a limited number of closely related survey items. See p. 3 to learn more.

Newsletter: The ATE Survey: Telling the ATE Story Since 2000

Posted on October 1, 2013 in Newsletter

For the past 14 of the ATE program’s 20 years, an annual survey has documented the activities and accomplishments of ATE projects and centers. With more than 90 percent of grantees participating each year, more than 1,600 individual surveys have been completed, yielding in excess of one million data points. These data provide a unique view of the overall program’s productivity and achievements.

In addition to annual fact sheets that summarize survey results, EvaluATE has produced a series of data snapshots that provide a more detailed view of certain aspects of the program. The snapshot on women in ATE, for example, underscores the ongoing challenge that advanced technology programs face in attracting women, with men comprising more than 60 percent of the students in 14 of the 17 ATE disciplines for which we have data. On the other hand, the snapshot of underrepresented minority (URM) students in ATE shows that most ATE disciplines have been successful in attracting URM students to their programs. Thirty-seven percent of ATE students are members of racial/ethnic groups known to be underrepresented in STEM, compared with 33 percent in the general population. Check out these snapshots, along with others on business and industry collaboration, URM recruitment and retention practices, and grant-level evaluation practices, on our annual survey page.

Data from the survey can be used by individual projects and centers to support their research and evaluation efforts. If you have a specific interest in a topic addressed by the survey, we can provide you with summarized results or deidentified raw data for independent analysis. Email your request or question to Corey Smith.

Newsletter: Developing a Culture of Evaluation

Posted on October 1, 2013 in Newsletter

Principal Research Scientist, Education Development Center, Inc.

As an ATE project, you and your team collect a lot of data: You complete the annual monitoring survey, you work with your evaluator to measure outcomes, you may even track your participants longitudinally in order to learn how they integrate their experiences into their lives. As overwhelming as it may seem at times to manage all the data collection logistics and report writing, these data are important to telling the story of your project and the ATE program. Developing a culture of evaluation in your project and your organization can help to meaningfully put these data to use.

Fostering a culture of evaluation in your project means that evaluation practices are not disconnected from program planning, implementation, and reporting. You’re thinking about evaluation when planning project activities and looking for ways to use data to reflect on and improve your work. During implementation, you consult your evaluator regularly so that you can hear what they’re learning from the data collection and ensure that they know what’s new in the project. And at analysis and reporting times, you’re ensuring that the right people are thinking about how to use the evaluation findings to make improvements and demonstrate your project’s value to important stakeholder audiences. You and your team are reflecting on how the evaluation went and what can be improved. In a project that has an “evaluation culture,” evaluators are partners, collecting important information to inform decision making.

A great example of evaluators-as-partners came from an NSF PI who shared that he regularly talks with his evaluator, peer to peer, about the state of the field, not just about his particular project. He wants to know what his evaluator is learning about practice in the field from other projects, workshops, conferences, and meetings, and he uses these insights to help him reflect on his own work.