Talbot Bielefeldt

Independent Educational Program Evaluator

Talbot Bielefeldt (talbot@clearwatereval.com) is an independent educational program evaluator based in Eugene, Oregon. His clients include school districts, universities, and nonprofit organizations. Much of his work focuses on National Science Foundation, U.S. Department of Education, and corporate initiatives to enhance STEM education and training.

Blog: The Shared Task of Evaluation

Posted on November 18, 2015 in Blog


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation was an important strand at the recent ATE meeting in Washington, DC. As I reflected on my own practice as an external evaluator and listened to the comments of my peers, I was impressed once again with how dependent evaluation is on a shared effort by project stakeholders. Ironically, the more external an evaluator is to a project, the more important it is to collaborate closely with PIs, program staff, and participating institutions. Many assessment and data collection activities that are technically part of the outside evaluation are logistically and financially dependent on the internal workings of the project.

This has implications for the scope of work for evaluation and for the evaluation budget. A task might appear in the project proposal as "survey all participants," and it would likely be part of the evaluator’s scope of work. But in practice, tasks such as deciding what to ask on the survey, reaching the participants, and following up with nonresponders are likely to require work by the PIs or their assistants.

You occasionally hear rules of thumb about what percentage of a project budget should go to evaluation. Whatever that overall portion is, my approach is to treat it as the sum of my efforts and those of my clients. This has several advantages:

  • During planning, it immediately highlights data that might be difficult to collect. It is much easier to come up with a solution or an alternative in advance and avoid a big gap in the evidence record.
  • It makes clear who is responsible for what activities and avoids embarrassing confrontations along the lines of, “I thought you were going to do that.”
  • It keeps innocents on the project and evaluation staffs from being stuck with long (and possibly uncompensated) hours trying to carry out tasks outside their expected job descriptions.
  • It allows for more accurate budgeting. If I know that a particular study involves substantial clerical support for pulling records from school databases, I can reduce my external evaluation fee, while at the same time warning the PI to anticipate those internal evaluation costs.

The simplest way to ensure that these dependencies are identified is to consider them during the initial logic modeling of the project. If the input is professional development, the output is instructors who apply that professional development, and the evidence for the output is their use of project resources, who will have to be involved in collecting that evidence? Even if the evaluator proposes to visit every instructor and watch them in practice, those visits will likely have to be coordinated by someone close to the instructional calendar and daily schedule. Specifying and fairly sharing those tasks produces more data, better data, and happier working relationships.