Jason Burkhardt

EvaluATE Blog Editor

Jason is currently a project manager at the Evaluation Center at Western Michigan University. He is also a PhD student in the Interdisciplinary PhD in evaluation program. He enjoys music, art, and the finer things in life.

Webinar: Naked Reporting: Shedding the Narrative

Posted on April 2, 2015 in Webinars

Presenter(s): Emma Perk, Jason Burkhardt, Lori Wingate
Date(s): May 13, 2015
Time: 1:00-2:00 p.m. EDT
Recording: https://youtu.be/ED4H4tM7Llw

When packaged effectively, the basic facts about a project can go a long way toward communicating its achievements and capacity. In this webinar, participants will learn step-by-step strategies for developing documents that convey essential project information with minimal narrative text. Project resumes and fact sheets are not intended to replace full evaluation reports; however, these straightforward, readable presentations of basic project information are useful complements to traditional reports. Join us to learn how to create and use consumer-friendly documents to communicate projects’ achievements and evaluation results.

Slides PDF
Project Resume Checklist
Creating a Project Fact Sheet
Article on Project Resumes
EvaluATE Webinar Fact Sheet - 2015

Newsletter: Dashboards

Posted on April 1, 2015 in Newsletter


Dashboards are a way to present data about the “trends of an organization’s key performance indicators.”1 They are designed to give decision makers real-time information about important trends and outcomes related to key program activities. Think of a car’s dashboard: it tells you how much gas the car has, the condition of the engine, and the speed—all of which let you pay more attention to what is going on around you. Dashboards work best by combining data from a number of sources into one document (or web page) that gives the user the “big picture” and keeps them from getting lost in the details. For example, a single dashboard could present data on event attendance, participant demographics, web analytics, and student outcomes, giving the user important information about project reach as well as potential avenues for growth.

As a project or center’s complexity increases, it’s easy to lose sight of the big picture. By using a dashboard that is designed to integrate many pieces of information about the project or center, staff and stakeholders can make well-balanced decisions and can see the results of their work in a more tangible way. Evaluators can also take periodic readings from the dashboard to inform their own work, providing formative feedback to support good decisions.
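As a toy illustration (not from the newsletter), here is a minimal Python sketch of the idea: metrics from several sources are merged into one big-picture summary, to which derived, at-a-glance indicators can be added. All source names and figures below are invented.

```python
# Hypothetical data sources a project dashboard might draw on,
# e.g., a registration system, a web analytics export, and a
# student records office. All names and numbers are invented.
event_attendance = {"workshops_held": 6, "total_attendees": 142}
web_analytics = {"monthly_visits": 3150, "avg_minutes_on_site": 4.2}
student_outcomes = {"enrolled": 88, "completed": 61}

def build_dashboard(*sources):
    """Merge metric dicts into a single big-picture summary."""
    summary = {}
    for source in sources:
        summary.update(source)
    # Derived, at-a-glance indicators belong on the dashboard too.
    summary["completion_rate"] = round(
        summary["completed"] / summary["enrolled"], 2
    )
    return summary

dashboard = build_dashboard(event_attendance, web_analytics, student_outcomes)
for metric, value in dashboard.items():
    print(f"{metric}: {value}")
```

A real dashboard would render these figures as gauges or charts; the point of the sketch is only the consolidation step, where many sources feed one summary view.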

For some real-world examples, check out bit.ly/db-examples

1 bit.ly/what-is-db

Blog: Indicators and the Difficulty With Them

Posted on January 21, 2015 in Blog


Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluators working in education contexts are often required to use externally created criteria and standards, such as GPA targets, graduation rates, and similar metrics, when evaluating program success. These standardized goals create a problem that program directors and their evaluators should watch for: goal displacement, which occurs when chasing a target indicator comes at the expense of other parts of a larger mission (Cronin and Sugimoto, 2014). Bernard Marr provided an example of goal displacement in a recent blog post (https://www.linkedin.com/today/post/article/20140324073422-64875646-caution-when-kpis-turn-to-poison?trk=mp-author-card).

“Another classic example comes from a Russian nail factory. When the government centrally planned the economy it created targets of output for the factory, measured in weight. The result was that the factory produced a small number of very heavy nails. Obviously, people in Russia didn’t just need massively big nails so the target was changed to the amount of nails the factory had to produce. As a consequence, the nail factory produced a massive amount of only tiny nails.”

The lesson here is that indicators are not truth; they are pointers to truth. As such, it is bad practice to rely on a single indicator in assessment and evaluation. In the Russian nail factory example, suppose what you were really trying to measure was the factory’s success in meeting the country’s need for nails. Even though the factory met its targets for the weight or quantity indicators, it failed at its ultimate goal: meeting the need for the right kind of nails.

I was moved to write about this issue when thinking about a real-world evaluation of an education program that has to meet federally mandated performance indicators, such as the percentage of students who meet a certain GPA. The program works with students who tend toward low academic performance and who have few role models for success. To fully understand the program’s value, it was important to look not only at the number of students who met the federal target, but also at how students with different initial GPAs and different levels of parental support performed over time. This trend data told the real story: even students who were not meeting the uniform federal target were still improving. Students with less educated role models typically started with lower GPAs and raised them over time in the program, while students with more educated role models tended to start off better but did not improve as much. This means that through mentoring, the program was having an immense impact on the neediest students (low initial performers), whether or not they met the full federal standard. Although the program still needs to improve to reach the federal standards, we now know an important leverage point that can help the students improve even further: increased mentoring to compensate for a lack of educated role models in their personal lives. By looking past the single indicator, we found what was really important to the program’s success.
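The contrast between a single pass/fail target and trend data can be sketched in a few lines of Python. The GPA figures below are invented for illustration and are not the program’s actual data; the point is that the two indicators tell different stories about the same students.

```python
# Hypothetical illustration of goal displacement: a single
# pass/fail indicator can hide the improvement that trend
# data reveals. All GPA figures are invented.

TARGET_GPA = 3.0

students = [
    {"start_gpa": 2.0, "end_gpa": 2.8},  # big improvement, misses target
    {"start_gpa": 2.2, "end_gpa": 2.9},  # big improvement, misses target
    {"start_gpa": 3.1, "end_gpa": 3.1},  # meets target, no growth
]

# Single indicator: share of students meeting the target.
met_target = sum(s["end_gpa"] >= TARGET_GPA for s in students) / len(students)

# Trend indicator: average GPA change over the program.
avg_change = sum(s["end_gpa"] - s["start_gpa"] for s in students) / len(students)

print(f"met target: {met_target:.0%}")
print(f"avg GPA change: {avg_change:+.2f}")
```

Judged by the target alone, only a third of these students "succeed"; judged by the trend, the group improved by half a grade point on average, with the largest gains among the lowest starters.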

Wouters, P. (2014). The citation: From culture to infrastructure. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional indicators of scholarly impact (p. 47). MIT Press.

Newsletter: Secondary Data

Posted on January 1, 2015 in Newsletter


Secondary data is data repurposed from its original use, typically collected by a different entity. This is different from primary data, which you collect and analyze for your own needs. Secondary data may include, but is not limited to, data already collected by other departments at your institution, by national agencies, or even by other grants. Secondary data can be useful for planning, benchmarking, and evaluation.

Using secondary data in evaluation could involve using institutional data about student ethnicity and gender to help determine your project’s impact on underrepresented minority graduation rates. National education statistics can be used for benchmarking purposes. A national survey of educational pipelines into industry can help you direct your recruitment planning.
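As a hypothetical sketch of the first example, the snippet below joins invented primary data (an evaluation team’s survey responses) with invented secondary data (demographics already held by a registrar’s office) by student ID, so the combined records can answer questions neither source answers alone. All IDs, fields, and values are made up.

```python
# Primary data: collected by the evaluation team (e.g., a survey).
primary = {
    "S001": {"satisfaction": 4},
    "S002": {"satisfaction": 5},
    "S003": {"satisfaction": 3},
}

# Secondary data: demographics already held by the registrar's office.
secondary = {
    "S001": {"gender": "F", "first_generation": True},
    "S002": {"gender": "M", "first_generation": False},
    "S003": {"gender": "F", "first_generation": True},
}

def merge_records(primary, secondary):
    """Attach secondary fields to each primary record by student ID."""
    merged = {}
    for student_id, record in primary.items():
        merged[student_id] = {**record, **secondary.get(student_id, {})}
    return merged

merged = merge_records(primary, secondary)

# Example question the combined data can answer: average
# satisfaction among first-generation students.
first_gen = [r["satisfaction"] for r in merged.values() if r.get("first_generation")]
print(sum(first_gen) / len(first_gen))
```

In practice this join happens in a spreadsheet or statistics package rather than code, but the prerequisite is the same: a shared identifier linking your primary records to the institution’s secondary records.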

The primary benefit of using secondary data is that it is often cheaper to acquire than primary data in terms of time, labor, and financial expenses, which is especially important if you are involved in a small grant with limited resources. However, secondary data sources may not provide all the information needed for your evaluation—you will still have to do some primary data collection in order to get the full picture of your project’s quality and effectiveness.

One final note: Accessing institutional data may require working closely with offices that are not part of your grant, so you must plan accordingly. It is helpful to connect your evaluator with those offices to facilitate access throughout the evaluation.

Webinar: High-Impact, Low-Cost Evaluation for Small Projects

Posted on December 8, 2014 in Webinars

Presenter(s): Dennis Faber, Elaine Craft, Jason Burkhardt, Lori Wingate, Mentor-Connect
Date(s): February 18, 2015
Time: 1:00 PM EST
Recording: http://youtu.be/1JPVHEOAEYg

“Small Grants for Institutions New to the ATE Program” is a funding track specifically designed for community colleges that have not had an ATE award within the past 10 years. Like all ATE awards, these projects—up to $200,000 over three years—are required to have an external evaluation that matches the scope of the project. In this webinar, EvaluATE and Mentor-Connect are teaming up to provide guidance on evaluation to current and prospective recipients of small ATE awards—or anyone tasked with producing a meaningful and useful evaluation for a small-scale project. We’ll discuss how to design an evaluation that will generate the evidence needed to support claims of project success and set the stage for larger-scale projects in the future.

Slide PDF
Handout PDF
Recording: Maximizing Evaluation Impact & Minimizing Evaluation

Blog: Holiday Break

Posted on November 26, 2014 in Blog



Hello from the EvaluATE team,

We normally publish our blog posts on Wednesday afternoons, but with tomorrow’s holiday, we are taking a brief publication break. We hope that all of our readers get a well-deserved rest, and that you and your families have a safe and enjoyable weekend. Our next blog post will be published 12/3/14.

Webinar: Evaluation and Research in the ATE Program

Posted on November 10, 2014 in Webinars

Presenter(s): Jason Burkhardt, Kirk Knestis, Lori Wingate, Will Tyson
Date(s): December 10, 2014
Time: 1:00 – 2:30 PM EST
Recording: http://youtu.be/QoIZMreQ60I?t=12s

The Common Guidelines for Education Research and Development (http://bit.ly/nsf-ies_guide) define the National Science Foundation’s and Department of Education’s shared understanding and expectations regarding types of research and development projects funded by these agencies. Issued in 2013, these guidelines represent a major step toward clarifying and unifying the NSF’s and Department of Education’s policies regarding research and development, particularly with regard to different types of research and development projects and the nature of evidence needed for each type. In this webinar, we’ll provide an orientation to these relatively new guidelines; clarify the distinctions between research, development, and evaluation; and learn about targeted research within NSF’s Advanced Technological Education program.

Lori Wingate, Director of EvaluATE
Kirk Knestis, CEO of Hezel Associates
Will Tyson, Associate Professor of Sociology at the University of South Florida and PI for the ATE-funded research project, PathTech

Slide PDF
Overview of the Common Guidelines for Education Research and Development
Checklists for the Common Guidelines for Education Research and Development
Edith Gummer’s Presentation on the Common Guidelines at the 2014 ATE PI Conference
PathTech Guide
Evaluation of NSF ATE Program Research and Development