Newsletter - Spring 2013

Newsletter: Evaluation that Seriously Gets to the Point – and Conveys It Brilliantly

Posted on April 1, 2013

Evaluation, much as we love it, has a reputation among nonevaluators for being overly technical and academic, lost in the details, hard work to wade through, and in the end, not particularly useful. Why is this? Many evaluators were originally trained in the social sciences. There we added numerous useful frameworks and methodologies to our toolkits. But, along the way, we were also inculcated with several approaches, habits, and ways of communicating that are absolutely killing our ability to deliver the value we could be adding. Here are the worst of them:

  1. Writing question laundry lists – asking long lists of evaluation questions that are far too narrow and detailed (often at the indicator level)
  2. Leaping to measurement – diving into identifying intended outcomes and designing data collection instruments without a clear sense of who or what the evaluation is for
  3. Going SMART but unintelligent – focusing on what’s most easily measurable rather than making intelligent choices to go after what’s most important (SMART = specific, measurable, achievable, relevant, and time-based)
  4. Rorschach inkblotting – assuming that measures, metrics, indicators, and stories are the answers; they are not!
  5. Shirking valuing – treating evaluation as an opinion-gathering exercise rather than actually taking responsibility for drawing evaluative conclusions based on needs, aspirations, and other relevant values
  6. Getting lost in the details – leaving the reader wading through data instead of clearly and succinctly delivering the answers they need
  7. Burying the lead – losing the most important key messages by loading way too many “key points” into the executive summaries, not to mention the report itself, or using truly awful data visualization techniques
  8. Speaking in tongues – using academic and technical language that just makes no sense to normal people

Thankfully, hope is at hand! Breakthrough thinking and approaches are all around us, but many evaluators just aren’t aware of them. Some have been around for decades. Here’s a challenge for 2013. Seek out and get really serious about infusing the following into your evaluation work:

  • Evaluation-Specific Methodology (ESM) – the methodologies that are distinctive to evaluation, i.e., the ones that go directly after values. Examples include needs and values assessment; merit determination methodologies; importance weighting methodologies; evaluative synthesis methodologies; and value-for-money analysis (a small illustrative sketch follows this list)
  • Actionable Evaluation – a pragmatic, utilization-focused framework for evaluation that asks high-level, explicitly evaluative questions, and delivers direct answers to them using ESM
  • Data Visualization & Effective Reporting – the best of the best of dataviz, reporting, and communication to deliver insights that are not just understandable but unforgettable
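
For readers who like to see the mechanics, here is a minimal sketch of what an importance-weighted evaluative synthesis can look like. The criteria, merit ratings, weights, and the simple weighted-average rule are all hypothetical placeholders, not a prescribed ESM procedure; in real evaluations they would be grounded in stakeholder needs, values, and explicit merit definitions.

```python
# Hypothetical example of an importance-weighted evaluative synthesis.
# The criteria, 1-5 merit ratings, and weights below are illustrative only;
# in practice they would come from needs and values assessment with
# stakeholders, not from code.

criteria = {
    # criterion: (rating on a 1-5 merit scale, importance weight)
    "reach of professional development": (4, 0.5),
    "quality of curriculum materials":   (3, 0.3),
    "cost per participant":              (2, 0.2),
}

# A simple synthesis rule: the importance-weighted average of the ratings.
overall = sum(rating * weight for rating, weight in criteria.values())
print(f"Overall merit rating: {overall:.1f} out of 5")  # -> 3.3

# A common refinement is a "minimum bar" check: flag any criterion rated
# below the bar so it can cap the overall conclusion during synthesis.
MINIMUM_BAR = 2
if any(rating < MINIMUM_BAR for rating, _ in criteria.values()):
    print("At least one criterion falls below the minimum bar.")
```

Even a toy example like this puts the reasoning from criteria to conclusion on the table where it can be examined and challenged, which echoes the kind of transparency evaluation-specific methodology aims for.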

Newsletter: Evaluation Use

Posted on April 1, 2013

All evaluators want their evaluations to be useful and used. Evaluation clients need evaluation to bring value to their work to make the investment worthwhile. What does evaluation use look like in your context? It should be more than accountability reporting. Here are common types of evaluation use as defined in the evaluation literature:

Instrumental Use is using evaluation for decision-making purposes. These decisions are most commonly focused on improvement, such as changing marketing strategies or modifying curriculum. Or, they can be more summative in nature, such as deciding to continue, expand, or reinvent a project.

Process Use happens when involvement in an evaluation leads to learning or different ways of thinking or working.

Conceptual Use is evaluation use for knowledge. For example, a college dean might use an evaluation of her academic programs to further understand an issue related to another aspect of STEM education. This evaluation influences her thinking, but does not trigger any specific action.

Symbolic Use is use of evaluation findings to forward an existing agenda. Examples include using evaluation to market an ATE program or to apply for further funding.

Newsletter: What does the switch from FastLane to Research.gov mean for ATE annual reporting?

Posted on April 1, 2013

Director of Research, The Evaluation Center at Western Michigan University

Along with the change from FastLane to Research.gov, there are a few changes in the reporting categories. Also, a public Project Outcomes Report is now required, in addition to the Final Report, within 90 days after a grant expires.

ANNUAL REPORTS

The annual report categories have largely remained the same, but there are a few noteworthy changes. The Participants section has not changed; this is where you identify the individuals, organizations, and other collaborators that have contributed to your grant work.

A new Accomplishments section replaces the old “Activities and Findings” component, and this is where most of the new requirements are found. In addition to identifying the project’s major goals, PIs must provide information for at least one of the following categories: major activities; specific objectives; significant results (including findings, developments, or conclusions); and key outcomes or other achievements. I recommend reporting your evaluation results under “key outcomes.”

The Products section is very similar to what was included in the FastLane system, but now, in addition to publications, websites, and other products, there are separate areas to identify (a) technologies or techniques and (b) inventions, patent applications, and/or other licenses. The former “Conference Proceedings” section is subsumed in this category as well.

A new Impact section replaces what was formerly called “Contributions.” In addition to contributions to the principal discipline, other disciplines, human resources, infrastructure resources, and other aspects of public welfare (now labeled “beyond science and technology”), PIs are now asked to report on technology transfer and to identify significant problems or changes in the project.

PROJECT OUTCOMES REPORTS

Project Outcomes Reports are 200- to 800-word summaries of projects and their outcomes. In particular, PIs should report results that address NSF’s intellectual merit and broader impacts review criteria. For intellectual merit, you should address how the project has advanced knowledge and understanding around technician education and/or how the project has been especially creative, original, or transformative. Here you can refer to how you have described your disciplinary contributions in the Impact section of your annual report. As for broader impacts, this is your opportunity to describe your impact on students; what you have done to broaden participation of underrepresented groups; how you have enhanced infrastructure for technological education and research through facilities, instrumentation, networks, and partnerships; and/or other ways your grant-funded work has benefitted society.

To learn more about Research.gov, annual reporting requirements, and project outcomes reports, go to 1.usa.gov/16NJqXr.

To view examples of ATE Project Outcomes Reports, go to 1.usa.gov/13j3Us7, then click on the box for “Show Only Awards with Project Outcomes Reports” and in the box for “Program,” type “Advanced Technological Education.”

Newsletter: Connecting the Dots between Data and Conclusions

Posted on April 1, 2013

Director of Research, The Evaluation Center at Western Michigan University

Data don’t speak for themselves. But the social and educational research traditions within which many evaluators have been trained offer little in the way of tools for translating data into meaningful, evaluative conclusions in transparent and justifiable ways (see Jane Davidson’s article). However, we can draw on what educators already do when they develop and use rubrics for grading student writing, presentations, and other assessment tasks.

Rubrics can be used in similar ways to aid the interpretation of project evaluation results. Rubrics can be developed for individual indicators, such as the number of women in a degree program or the percentage of participants expressing satisfaction with a professional development workshop. Or, a holistic rubric can be created to assess larger aspects of a project that are impractical to parse into distinct data points.

Rubrics are a means of increasing transparency about how conclusions are generated from data. For example, if a project claimed that it would increase enrollment of students from underrepresented minority (URM) groups, an important variable would be the percentage increase in URM enrollment. The evaluator could engage project stakeholders in developing a rubric to interpret the data for this variable, in consultation with secondary sources such as the research literature and/or national data. When the results are in, the evaluator can refer to the rubric to determine the degree to which the project was successful on this dimension.

To learn more about how to connect the dots between data and conclusions, see the recording, handout, and slides from EvaluATE’s March webinar at evalu-ate.org/events/march_2013/.
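
To make the rubric idea concrete, here is a minimal sketch of how a rubric for the URM enrollment example might be applied once the data are in. The rating labels and percentage thresholds are entirely hypothetical; in practice they would be negotiated with project stakeholders and benchmarked against the research literature or national data, as described above.

```python
# Hypothetical rubric for interpreting percentage increase in URM enrollment.
# Thresholds and labels are illustrative; a real rubric would be developed
# with project stakeholders and grounded in secondary sources.

RUBRIC = [
    (15.0, "Excellent: increase of 15% or more"),
    (10.0, "Good: increase of 10% to <15%"),
    (5.0,  "Adequate: increase of 5% to <10%"),
    (0.0,  "Poor: increase below 5%"),
]

def rate_urm_increase(percent_increase: float) -> str:
    """Return the rubric level for an observed percentage increase."""
    for threshold, label in RUBRIC:
        if percent_increase >= threshold:
            return label
    return "Poor: enrollment did not increase"

# Example: the project observed an 8% increase in URM enrollment.
print(rate_urm_increase(8.0))  # -> "Adequate: increase of 5% to <10%"
```

The value of writing the rubric down, whether as a table or a few lines like these, is that anyone can see exactly how an observed result maps to an evaluative conclusion.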

Newsletter: Setting the Stage for Useful Reporting

Posted on April 1, 2013

Principal Research Scientist, Education Development Center, Inc.

In my fifteen years as an evaluator, I’ve written quite a few reports and thought a lot about what makes an evaluation report useful. In addition, I was a program officer at NSF in the Division of Research on Learning, where I was an evaluation client and strove to put evaluation findings to good use. Here are some thoughts on how you can ensure that evaluation information gets used.

Communicating early and often is the foundation for strong evaluation reporting and use. PIs should initiate these conversations about reporting with their evaluators, expressing needs and expectations about when they’d like evaluation reports, about what, and in what form.

Would you like a brief report about data collection activities? Talk with your evaluator about how you’d like this to look, what you might do with the data, and how these reports will get included in the annual report. This could be just bullet points about the key findings, or it could be data tables generated from a survey.

Do you want monthly progress reports? Talk with your evaluator about a template for an easy-to-read format. This report might detail funds expended to date, highlight upcoming tasks, and offer a place to raise questions and issues that need timely management.

Would you like a report that you can share with community stakeholders? This could be a one-page list of significant findings, a three-page executive summary, a PowerPoint presentation, or even a shortened version of the full report.

PIs and evaluators can talk about what’s possible, how these choices will affect the budget, and how they will work together to ensure that evaluation reports are targeted for maximum use.