Kirk Knestis

CEO, Hezel Associates

Dr. Kirk Knestis is CEO of Hezel Associates, a Syracuse, NY, research, evaluation, and planning firm with particular expertise studying STEM and workforce development innovations and programs. Dr. Knestis leads Hezel Associates’ team of nine researchers, managing a portfolio that includes contributions to more than a dozen NSF projects across seven programs. He came to manage the firm having earned a Ph.D. in education policy and evaluation, and has experience as a small business owner, STEM and career technology classroom teacher, higher education faculty member, and researcher affiliated with university and nonprofit agencies.


Blog: Documenting Evaluations to Meet Changing Client Needs: Why an “Evaluation Plan” Isn’t Enough

Posted on April 11, 2018 by Kirk Knestis in Blog

CEO, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

No plan of action survives first contact with the enemy – Helmuth von Moltke (paraphrased)

Evaluations are complicated examinations of complex phenomena. It is optimistic to assume that the details of an evaluation won’t change, particularly for a multiyear project. So how can evaluators deal with the inevitable changes? I propose that purposeful documentation of evaluations can help. In this blog, I focus on the distinctions among three types of documents—the contract, scope of work, and study protocol—each serving a specific purpose.

  • The contract codifies legal commitments between the evaluator and client. Contracts inevitably outline the price of the work, period of the agreement, and specifics like payment terms. They are hard to change after execution, and institutional clients often insist on using their own terms. Given this, while it is possible to revise a contract, it is impractical to use the contract to manage and document changes in the evaluation. I advocate including operational details in a separate “scope of work” (SOW) document, which can be external or appended to the contract.
  • The scope of work translates the contract into an operational business relationship, listing the responsibilities of both evaluator and client, tasks, deliverables, and timeline in detail sufficient for effective management of quality and cost. Because the scope of an evaluation will almost certainly change (timelines seem to be the first casualty), it is necessary to establish a process for documenting “change orders”—recording each revision to the SOW, who proposed it (either party may), and who accepted it—to avoid conflict. If a change does not affect the price of the work, it may be possible to manage and record it without revisiting the contract. I encourage evaluators to maintain “working copies” of the SOW, with changes, dates, and details of approval communications from clients. At Hezel Associates, our practice is to share iterations of the SOW with the client whenever the work changes, with version dates documenting the evaluation-as-implemented so everyone has the same picture of the work (a simple sketch of such a change log follows this list).
[Image: Working Scope of Work]

  • The study protocol then goes further, defining the technical aspects of the research central to the work being performed. A complex evaluation project might require more than one protocol (e.g., for formative feedback and for impact analysis), each similar in concept to the Methods section of a thesis or dissertation. A protocol details the questions to be answered, the study design, data needs, populations, data collection strategies and instrumentation, and plans for analyses and reporting. It frames processes to establish and maintain appropriate levels of study rigor, builds consensus among team members, and translates evaluation questions into data needs and instrumentation to assure collection of required data before it is too late. These technical aspects are central to the quality of the work but likely to be mostly opaque to the client. I argue that changes to them should still be formally documented in the protocol, but I suggest maintaining such technical information as internal documents for the evaluation team—unless a given change affects the SOW, at which point the scope must be formally revised as well.
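
To make the change-order idea concrete, here is a minimal sketch of how a versioned working SOW and its change log might be represented. The field names and the example entry are illustrative assumptions for this post, not a Hezel Associates template:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ChangeOrder:
    """One documented revision to the scope of work (SOW)."""
    described: str               # what in the SOW is changing
    proposed_by: str             # either party may propose a change
    approved_by: str             # who accepted it, and how (e.g., email)
    approved_on: date
    affects_price: bool = False  # if True, the contract must also be revisited

@dataclass
class WorkingSOW:
    """A 'working copy' of the SOW, versioned by date as changes accumulate."""
    version_date: date
    tasks: List[str]
    deliverables: List[str]
    change_log: List[ChangeOrder] = field(default_factory=list)

    def record_change(self, change: ChangeOrder) -> None:
        """Log a change and advance the version date so both parties see the same picture."""
        self.change_log.append(change)
        self.version_date = change.approved_on

# Illustrative use: a timeline slip proposed by the client and approved by email
sow = WorkingSOW(
    version_date=date(2018, 1, 15),
    tasks=["Develop survey instruments", "Collect baseline data"],
    deliverables=["Annual evaluation report"],
)
sow.record_change(ChangeOrder(
    described="Baseline data collection moved from March to May",
    proposed_by="Client PI",
    approved_by="Evaluator (email, 2018-02-02)",
    approved_on=date(2018, 2, 2),
))
```

The point is not the tooling; an equivalent log can live in a spreadsheet or a dated Word file. What matters is that every change records what changed, who proposed it, who approved it and how, and whether the contract price is implicated.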

Each of these types of documentation serves an entirely different function as part of what might be called an “evaluation plan,” and all are important to a successful, high-quality project. Any part may be combined with others in a single file, transmitted to the client as part of a “kit,” maintained separately, or perhaps not shared with the client at all. Regardless, our experience has been that effective documentation will help avoid confusion after marching onto the evaluation field of battle.

Blog: Addressing Challenges in Evaluating ATE Projects Targeting Outcomes for Educators

Posted on November 21, 2017 by Kirk Knestis in Blog

CEO, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Kirk Knestis—CEO of Hezel Associates, former career and technology educator, and professional development provider—here to share some strategies for addressing challenges unique to evaluating Advanced Technological Education (ATE) projects that target outcomes for teachers and college faculty.

In addition to funding projects that directly train future technicians, the National Science Foundation (NSF) ATE program funds initiatives to improve the abilities of grade 7-12 teachers and college faculty—the expectation being that improving their practice will directly benefit technical education. ATE tracks focusing on professional development (PD), capacity building for faculty, and technological education teacher preparation all rely implicitly on theories of action (typically illustrated by a logic model) that presume outcomes for educators will translate into outcomes for student technicians. This assumption can present challenges to evaluators trying to understand how such efforts are working. Reference this generic logic model for discussion purposes:

Setting aside project activities acting directly on students, any strategy aimed at educators (e.g., PD workshops, faculty mentoring, or preservice teacher training) must leave them fully equipped with dispositions, knowledge, and skills necessary to implement effective instruction with students. Educators must then turn those outcomes into actions to realize similar types of outcomes for their learners. Students’ action outcomes (e.g., entering, persisting in, and completing training programs) depend, in turn, on them having the dispositions, knowledge, and skills educators are charged with furthering. If educators fail to learn what they should, or do not activate those abilities, students are less likely to succeed. So what are the implications—challenges and possible solutions—of this for NSF ATE evaluations?

  • EDUCATOR OUTCOMES ARE OFTEN NOT WELL EXPLICATED. Work with program designers to force them to define the new dispositions, understandings, and abilities that technical educators require to be effective. Facilitate discussion about all three outcome categories to lessen the chance of missing something. Press until outcomes are defined in terms of persistent changes educators will take away from project activities, not what they will do during them.
  • EDUCATORS ARE DIFFICULT TO TEST. To truly understand whether an ATE project is making a difference in instruction, it is necessary to assess whether those precursor outcomes for educators are realized. Dispositions (attitudes) are easy to assess with self-report questionnaires, but measuring real knowledge and skills requires proper assessments—ideally, performance assessments. Work with project staff to “bake” assessments into project strategies so they are more authentic and less intrusive. Strive for more than self-report measures of increased abilities.
  • INSTRUCTIONAL PRACTICES ARE DIFFICULT AND EXPENSIVE TO ASSESS. The only way to truly evaluate instruction is to see it, assessing pedagogy, content, and quality with rubrics or checklists. Consider replacing expensive on-site visits with the collection of digital videos or real-time, web-based telepresence.

With clear definitions of outcomes and collaboration with ATE project designers, evaluators can assess whether technician educators are gaining the necessary dispositions, knowledge, and skills, and whether they are putting those abilities into practice with students. Assessing students is the next challenge, but until we can determine whether educator outcomes are being achieved, we cannot honestly say that educator-improvement efforts made any difference.

Blog: Common Guidelines Offer New Opportunity to Frame Education Research and Evaluation

Posted on December 3, 2014 by Kirk Knestis in Blog

CEO, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hezel Associates is a research and evaluation firm with a long history of studying education innovations. As CEO, my colleagues and I work with developers of programs and innovations intended to realize lasting outcomes for STEM education stakeholders. Our NSF-funded work has included evaluations of two ATE grants, along with ten projects across eight other programs over the past five years.

Those projects vary substantially, but one constant is what I call “The NSF Conundrum”: too-common, problematic inconsistencies in how RESEARCH to “advance knowledge” in STEM education (the Intellectual Merit review criterion) is distinguished from EVALUATION of Foundation-funded activities. In Hezel Associates’ internal lexicon, PIs should collect and analyze data for “research” aims defined for the project. The evaluator, typically external, is responsible for a “program evaluation” examining the PI’s work—including the research. Both require data collection and analysis, but for different uses.

I make this distinction not to suggest that it is universal, but to illuminate instances where individuals—PIs, program officers, evaluators, and review panelists—make important decisions based on different assumptions and principles. A particularly egregious example occurred for a Hezel Associates client proposing an ITEST Strategies project in early 2014. One panelist, supporting a “POOR” rating, wrote…

“The authors provide a general description of evaluation questions, data needs, instruments, but with no statistical analysis for the data once collected. The authors are advised to develop a comprehensive evaluation plan that would provide credible results from the comparison of the two programs…”

…apparently missing that such a plan was detailed as half of the “research and development” (R&D) effort proposed earlier in the document—including random assignment, validated instruments, and inferential statistics examining group differences (ANOVA) with tests of assumptions and post hoc corrections. The proposed “research” would compare the two program offerings, while the “evaluation” would assess the rigor and implementation of the collaborative R&D activities. This panelist’s personal definitions of terms predisposed him to look in the wrong place for crucial proposal content.
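
For readers curious what that overlooked analysis plan involves in practice, here is a generic sketch of a group comparison with tests of assumptions and a post hoc correction, written in Python with SciPy and statsmodels. The scores, group labels, and sample sizes are made-up illustrations, not details from the client’s proposal:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)

# Hypothetical post-program outcome scores for two (or more) program variants
groups = {
    "program_a": rng.normal(loc=72, scale=8, size=40),
    "program_b": rng.normal(loc=76, scale=8, size=40),
}

# Tests of assumptions: normality within each group, homogeneity of variance across groups
for name, scores in groups.items():
    w, p = stats.shapiro(scores)
    print(f"Shapiro-Wilk ({name}): W={w:.3f}, p={p:.3f}")
levene_stat, levene_p = stats.levene(*groups.values())
print(f"Levene's test: W={levene_stat:.3f}, p={levene_p:.3f}")

# One-way ANOVA examining group differences
f_stat, anova_p = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.3f}, p={anova_p:.3f}")

# Post hoc pairwise comparisons with Tukey HSD correction
all_scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(all_scores, labels))
```

With only two groups the ANOVA reduces to a t-test and the post hoc step is trivial, but the same pattern extends directly to additional program variants.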

Help is here, however, in the form of The Common Guidelines for Education Research and Development (http://www.nsf.gov/pubs/2013/nsf13126/nsf13126.pdf). Released in 2013 by NSF and the U.S. Department of Education, this report aims to “enhance the efficiency and effectiveness of both agencies’ STEM education research and development programs.” The Guidelines enumerate six “types” of research, differentiating purposes and defining standards of rigor for studies ranging from foundational research to effectiveness tests of innovations implemented “at scale.” The 2014 ATE program solicitation specifically invokes the Guidelines for the Targeted Research on Technician Education track, although all activities described as appropriate for projects in any of the ATE program tracks could arguably be situated in that framework. And the new Guidelines have the potential to largely resolve the conundrum.

So consider this a plea for NSF staff, PIs, evaluators, and other stakeholders to adopt the R&D orientation and framework defined by the Guidelines and to agree on key definitions. The resulting conception of research as a development function, and the opportunity it creates to clarify the evaluator’s role, should help us all better support the Foundation’s goals.

Webinar: Evaluation and Research in the ATE Program

Posted on November 10, 2014 by Jason Burkhardt, Kirk Knestis, Lori Wingate, and Will Tyson in Webinars

Presenter(s): Jason Burkhardt, Kirk Knestis, Lori Wingate, Will Tyson
Date(s): December 10, 2014
Time: 1:00 – 2:30 PM EST
Recording: http://youtu.be/QoIZMreQ60I?t=12s

The Common Guidelines for Education Research and Development (http://bit.ly/nsf-ies_guide) define the National Science Foundation’s and the Department of Education’s shared understanding and expectations regarding the types of research and development projects the two agencies fund. Issued in 2013, the guidelines represent a major step toward clarifying and unifying the agencies’ policies on research and development, particularly the different types of R&D projects and the nature of evidence needed for each. In this webinar, we’ll provide an orientation to these relatively new guidelines; clarify the distinctions among research, development, and evaluation; and explore targeted research within NSF’s Advanced Technological Education program.

Presenters:
Lori Wingate, Director of EvaluATE
Kirk Knestis, CEO of Hezel Associates
Will Tyson, Associate Professor of Sociology at the University of South Florida and PI for the ATE-funded research project, PathTech

Resources:
Slide PDF
Overview of the Common Guidelines for Education Research and Development
Checklists for the Common Guidelines for Education Research and Development
Edith Gummer’s Presentation on the Common Guidelines at the 2014 ATE PI Conference
PathTech Guide
Evaluation of NSF ATE Program Research and Development