Evaluation, much as we love it, has a reputation among nonevaluators for being overly technical and academic, lost in the details, hard work to wade through, and in the end, not particularly useful. Why is this? Many evaluators were originally trained in the social sciences. There we added numerous useful frameworks and methodologies into our toolkits. But, along the way, we were inculcated with several approaches, habits, and ways of communicating that are absolutely killing our ability to deliver the value we could be adding. Here are the worst of them:

  1. Writing question laundry lists – asking long lists of evaluation questions that are far too narrow and detailed (often at the indicator level)
  2. Leaping to measurement – diving into identifying intended outcomes and designing data collection instruments without a clear sense of who or what the evaluation is for
  3. Going SMART but unintelligent – focusing on what’s most easily measurable rather than making intelligent choices to go after what’s most important (SMART = specific, measurable, achievable, relevant, and time-based)
  4. Rorschach inkblotting – assuming that measures, metrics, indicators, and stories are the answers; they are not!
  5. Shirking valuing – treating evaluation as an opinion-gathering exercise rather than actually taking responsibility for drawing evaluative conclusions based on needs, aspirations, and other relevant values
  6. Getting lost in the details – leaving the reader wading through data instead of clearly and succinctly delivering the answers they need
  7. Burying the lead – losing the most important key messages by loading way too many “key points” into the executive summaries, not to mention the report itself, or using truly awful data visualization techniques
  8. Speaking in tongues – using academic and technical language that just makes no sense to normal people

Thankfully, hope is at hand! Breakthrough thinking and approaches are all around us, but many evaluators just aren’t aware of them. Some have been around for decades. Here’s a challenge for 2013. Seek out and get really serious about infusing the following into your evaluation work:

  • Evaluation-Specific Methodology (ESM) – the methodologies that are distinctive to evaluation, i.e., the ones that go directly after values. Examples include needs and values assessment; merit determination methodologies; importance weighting methodologies; evaluative synthesis methodologies; and value-for-money analysis
  • Actionable Evaluation – a pragmatic, utilization-focused framework for evaluation that asks high-level explicitly evaluative questions, and delivers direct answers to them using ESM
  • Data Visualization & Effective Reporting – the best of the best of dataviz, reporting, and communication to deliver insights that are not just understandable but unforgettable

About the Authors

Jane Davidson


Jane Davidson is the author of Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (Sage, 2005) and the e-book Actionable Evaluation Basics: Getting Succinct Answers to the Most Important Questions (Real Evaluation, 2012). She is owner/director of Real Evaluation (realevaluation.com). You can learn more about evaluation-specific methodologies on the blog she writes with Patricia Rogers at genuineevaluation.com.

Creative Commons

Except where noted, all content on this website is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

EvaluATE is supported by the National Science Foundation under grant number 1841783. Any opinions, findings, and conclusions or recommendations expressed on this site are those of the authors and do not necessarily reflect the views of the National Science Foundation.