Newsletter - Real Questions | Real Answers

Newsletter: Do you have a model for writing up survey results?

Posted on October 1, 2013

There is not a one-size-fits-all template for writing up survey results: the right approach depends on the survey's scale, the reason it was conducted, and who needs to use the results. But here are some guidelines. First, determine whether you really need a full, narrative report. If the survey was relatively short and the results are mainly for internal project use, you may not need a detailed report. For example, at EvaluATE, we conduct a brief survey at the end of each of our webinars. The report is simply a summary of the results. The only analysis beyond what is automatically generated by our Web-based survey system (hostedsurvey.com) is our categorization of the open-ended responses by topic, which gives us a better sense of overall perceptions of the webinar's strengths and weaknesses. We do not write any narrative for this basic survey results summary (see an example: 2013 Webinar Evaluation).
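As a concrete illustration of that extra categorization step, the short Python sketch below tallies hand-coded open-ended responses by topic so the most common strengths and weaknesses surface quickly. The feedback text and topic labels are hypothetical, not drawn from our actual data.

```python
from collections import Counter

# Hypothetical hand-coded webinar feedback: (response, assigned topic)
coded_responses = [
    ("Presenter was clear and well organized", "presentation quality"),
    ("Audio cut out several times", "technical issues"),
    ("Examples were very practical", "content relevance"),
    ("Audio was hard to hear", "technical issues"),
]

# Tally how many responses fall under each topic
topic_counts = Counter(topic for _, topic in coded_responses)

for topic, count in topic_counts.most_common():
    print(f"{topic}: {count}")
```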

If the survey results need to go to an external audience and/or the survey was large enough that its results warrant an in-depth look, then you should probably develop a formal report. This survey report should include details about the survey's purpose, administration mode (e.g., online vs. paper-and-pencil), sample (who was asked to complete the survey), and response rate (what percentage of survey recipients completed the survey). If analytic techniques beyond basic descriptive statistics were used to describe the results, there should also be a section explaining what analyses were performed and why. An important decision is how much of the quantitative and qualitative results to present in the body of the report. If the survey had numerous items, it may not be practical to report the results for every item, and doing so makes for tedious reading. In this case, you can combine results to highlight trends and anomalies in the data and place the detailed results in an appendix. Don't be tempted simply to quantify qualitative results; it is better to describe the overall themes from the qualitative data and include a few representative examples in the body of the report. The responses to open-ended questions may be included in an appendix for readers who want to dig into the details.
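For the response-rate figure mentioned above, the calculation is just the number of completed surveys divided by the number of people invited. A minimal Python sketch, with made-up numbers:

```python
def response_rate(completed: int, invited: int) -> float:
    """Percentage of survey recipients who completed the survey."""
    return 100 * completed / invited

# Hypothetical figures: 500 invitations sent, 180 surveys completed
print(f"Response rate: {response_rate(180, 500):.1f}%")  # Response rate: 36.0%
```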

A survey may be one of multiple data sources for a larger evaluation. In this case, the survey results should be included in the mixed-methods report. This type of evaluation report is best organized by evaluation question (that is, the overarching questions about a project's progress, quality, and impact) rather than by data source or collection method. Ideally, determining which survey data points relate to which of the larger evaluation questions was part of the survey development process. If not, then that mapping needs to happen as part of the report-writing process. The details about the survey methodology and analysis should be included in the report, either in the Methods section or in a technical appendix.
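One simple way to make that mapping explicit is a crosswalk from each overarching evaluation question to the survey items that inform it, so the report can be drafted question by question rather than method by method. The sketch below is illustrative only; the question wording and item IDs are hypothetical.

```python
# Hypothetical crosswalk from evaluation questions to survey item IDs
evaluation_crosswalk = {
    "How well is the project progressing toward its goals?": ["Q1", "Q2"],
    "To what extent has the project improved student skills?": ["Q3", "Q7", "Q12"],
    "What impact has the project had on partner institutions?": ["Q9", "Q10", "Q11"],
}

for question, items in evaluation_crosswalk.items():
    print(f"{question} -> survey items: {', '.join(items)}")
```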

For yet another example of how to present survey results, see the data snapshots based on the annual ATE survey. These one-page documents are visual depictions of results from a limited number of closely related survey items. See p. 3 to learn more.

Newsletter: Evaluator Qualifications: Which is more important, evaluation or subject-matter expertise?

Posted on July 1, 2013

When shopping for an external evaluator, a common question is whether it is better to hire someone who really understands your project's content area or someone with technical expertise in evaluation, research, and measurement.

In an ideal world, an evaluator or evaluation team would have high levels of both types of expertise. Since most evaluators hail from a non-evaluation content area, it may be possible for your project to find an evaluator who is both an expert in your content area and a skilled evaluator. But such combinations are relatively rare. So, back to the original question: If you do have to make a decision regarding expertise, which way should you lean? My answer is, evaluation expertise.

Most professional evaluators have experience working in many different content areas. For example, here at The Evaluation Center at Western Michigan University, where EvaluATE is based, in the past year alone we have evaluated the proposal review process for the Swiss National Science Foundation, a project focused on performative design, a graduate-level optical science program, institutional reform at a small university, a community literacy initiative, substance abuse treatment, a career and technical education research center, a preschool music education program, and a set of International Labour Organisation evaluations—among other things. Situational Practice, which is “attending to the unique interests, issues, and contextual circumstances in which evaluation skills are being applied” is a core competency for professional evaluators (Canadian Evaluation Society, 2010). Regardless of expertise, any evaluator should take the time to learn about the content and context of your project, asking questions and doing background research to fill in any significant knowledge gaps. Additionally, if you have an internal evaluation component to your project, you most likely already have subject-matter expertise embedded into your evaluation activities.

While it might feel more natural to lean toward someone with a high level of subject-matter expertise who "speaks your language," hiring an evaluator for subject-matter knowledge rather than evaluation expertise carries a risk: without a strong foundation in education and social science research methods, such an evaluator may rely on personal opinions and experiences to formulate judgments about a project, rather than on systematic inquiry and interpretation. If you find that the perspectives and guidance of subject-matter experts bring value to your work, a better role for them might be that of an advisor.

Regardless of the type of evaluator you select, a key to a successful evaluation is regular and open communication between project stakeholders and the evaluation team. Your project is unique, so even if your evaluator has been involved with similar efforts, he or she will still benefit from learning about your specific context.

For more information about evaluator competencies see http://bit.ly/10v3dc3.

Newsletter: What does the switch from FastLane to Research.gov mean for ATE annual reporting?

Posted on April 1, 2013

Director of Research, The Evaluation Center at Western Michigan University

Along with the change from FastLane to Research.gov come a few changes in the reporting categories. Additionally, a public Project Outcomes Report is now required, in addition to the Final Report, within 90 days after a grant expires.

ANNUAL REPORTS

The annual report categories have largely remained the same, but there are a few noteworthy changes. The Participants section has not changed: this is where you identify the individuals, organizations, and other collaborators that have contributed to your grant work. A new Accomplishments section replaces the old "Activities and Findings" component, and this is where most of the new requirements are found. In addition to identifying the project's major goals, PIs must provide information for at least one of the following categories: major activities; specific objectives; significant results (including findings, developments, or conclusions); and key outcomes or other achievements. I recommend reporting your evaluation results under "key outcomes."

The Products section of the report is very similar to what was included in the FastLane system, but now, in addition to publications, websites, and other products, there are separate areas to identify (a) technologies or techniques and (b) inventions, patent applications, and/or other licenses. The former "Conference Proceedings" section is now subsumed in this category as well.

A new Impact section replaces what was formerly called "Contributions." In addition to contributions to the principal discipline, other disciplines, human resources, infrastructure resources, and other aspects of public welfare (now labeled "beyond science and technology"), PIs are now asked to report on technology transfer and to identify significant problems or changes in the project.

PROJECT OUTCOMES REPORTS

Project Outcomes Reports are 200- to 800-word summaries of projects and their outcomes. In particular, PIs should highlight results that speak to NSF's intellectual merit and broader impacts review criteria. For intellectual merit, you should describe how the project has advanced knowledge and understanding of technician education and/or how the project has been especially creative, original, or transformative. Here you can draw on how you described your disciplinary contributions in the Impact section of your annual report. As for broader impacts, this is your opportunity to describe your impact on students; what you have done to broaden participation of underrepresented groups; how you have enhanced infrastructure for technological education and research through facilities, instrumentation, networks, and partnerships; and/or other ways your grant-funded work has benefited society.

To learn more about Research.gov, annual reporting requirements, and project outcomes reports, go to 1.usa.gov/16NJqXr.

To view examples of ATE Project Outcomes Reports, go to 1.usa.gov/13j3Us7, then click on the box for “Show Only Awards with Project Outcomes Reports” and in the box for “Program,” type “Advanced Technological Education.”