Archive: data-based decisions

Blog: Indicators and the Difficulty With Them

Posted on January 21, 2015 in Blog

EvaluATE Blog Editor

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluators working in education contexts are often required to use externally created criteria and standards, such as GPA targets, graduation rates, and similar metrics, when evaluating program success. These standardized goals create a problem that program directors and their evaluators should watch for: goal displacement, which occurs when one chases a target indicator at the expense of other parts of a larger mission (Cronin & Sugimoto, 2014). An example of goal displacement was provided in a recent blog post by Bernard Marr (https://www.linkedin.com/today/post/article/20140324073422-64875646-caution-when-kpis-turn-to-poison?trk=mp-author-card):

“Another classic example comes from a Russian nail factory. When the government centrally planned the economy it created targets of output for the factory, measured in weight. The result was that the factory produced a small number of very heavy nails. Obviously, people in Russia didn’t just need massively big nails so the target was changed to the amount of nails the factory had to produce. As a consequence, the nail factory produced a massive amount of only tiny nails.”

The lesson here is that indicators are not truth; they are pointers to truth. As such, it is bad practice to rely on a single indicator in assessment and evaluation. In the Russian nail factory example, suppose what you were really trying to measure was the factory’s success in meeting the country’s need for nails. Clearly, even though the factory was able to meet the targets for the weight or quantity indicators, it failed at its ultimate goal: meeting the need for the right kind of nails.

I was moved to write about this issue while thinking about a real-world evaluation of an education program that must meet federally mandated performance indicators, such as the percentage of students who meet a certain GPA. The program works with students who tend toward low academic performance and who have few role models for success. To fully understand the program’s value, it was important to look not only at the number of students who met the federal target, but also at how students with different initial GPAs and different levels of parental support performed over time.

This trend data showed the real story: even students who were not meeting the uniform federal target were still improving. Typically, the students with less educated role models started with lower GPAs and raised those GPAs over time in the program, while students with more educated role models tended to start off better but did not improve as much. This means that, through mentoring, the program was having an immense impact on the neediest students (low initial performers), whether or not they met the full federal standard. Although the program still needs to make improvements to reach the federal standards, we now know an important leverage point that can help students improve even further: increased mentoring to compensate for a lack of educated role models in their personal lives. Thus we were able to look past the indicator alone and find what was really driving the program’s success.
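To make the idea concrete, here is a minimal sketch, in Python with pandas, of the difference between reporting a single threshold indicator and examining trends by subgroup. The column names, the 2.5 GPA threshold, and the toy records are illustrative assumptions, not the program’s actual data or analysis.

    # Minimal sketch only; column names, the GPA threshold, and the toy
    # records below are illustrative assumptions.
    import pandas as pd

    FEDERAL_GPA_TARGET = 2.5  # hypothetical threshold

    def single_indicator(df: pd.DataFrame) -> float:
        """Share of students meeting the target in their latest term --
        the one number a single-indicator evaluation would report."""
        latest = df.sort_values("term").groupby("student_id").tail(1)
        return (latest["gpa"] >= FEDERAL_GPA_TARGET).mean()

    def trend_by_subgroup(df: pd.DataFrame) -> pd.DataFrame:
        """Mean GPA per term, split by parental education, so improvement
        is visible even for students still below the federal target."""
        return (df.groupby(["parental_education", "term"])["gpa"]
                  .mean()
                  .unstack("term"))

    records = pd.DataFrame({
        "student_id": [1, 1, 1, 2, 2, 2],
        "term": [1, 2, 3, 1, 2, 3],
        "gpa": [1.8, 2.1, 2.4, 3.0, 3.0, 3.1],
        "parental_education": ["no degree"] * 3 + ["degree"] * 3,
    })
    print(single_indicator(records))   # 0.5 -- only half "meet the target"
    print(trend_by_subgroup(records))  # but the "no degree" group is climbing

The single indicator hides exactly the pattern the trend view reveals: the students who start furthest behind are the ones improving the most.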

Wouters, P. (2014). The citation: From culture to infrastructure. In B. Cronin & C. R. Sugimoto (Eds.), Beyond bibliometrics: Harnessing multidimensional indicators of scholarly impact (p. 47). MIT Press.

Blog: Evaluating Impact: How I Moved From Pipeline to Interstate

Posted on December 10, 2014 in Blog

CKO of Sener Knowledge, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

John Sener of Sener Knowledge LLC, an external evaluator for more than a dozen ATE and other NSF grants over the past eight years, discusses how to move beyond the “pipeline” metaphor for describing technician education and adopt the more useful “interstate” metaphor instead.

Evaluate an NSF ATE center or project long enough, and eventually you’ll hear the word pipeline—as in, “closing gaps in the education pipeline,” or “increasing the cybersecurity workforce pipeline.” The pipeline metaphor describes the ATE technician education process reasonably well, up to a point:

A) Students proceed through an NSF-funded ATE program “pipeline” in well-defined cohorts on a relatively uniform timetable.

B) The pipeline feeds program graduates to employers, often a relatively limited number of local or regional employers in the related field.

C) Evaluators aim to document that B is the result of A.

However, I have long been dissatisfied with the pipeline metaphor because it traps its users into limited thinking, making it easy to overlook important sources of project impact. As noted in my book, The Seven Futures of American Education, many students exhibit characteristics that reflect a greater degree of choice than the pipeline metaphor implies: “stopouts” who drop in and out of school, “swirlers” who attend multiple institutions, and “stay-longers” who exceed the prescribed time for program completion. I’ve found that an interstate highway is a much more useful metaphor for understanding learners who:

  • Choose among multiple entry and exit points rather than following a prescribed path;
  • Travel at different speeds and on different schedules through their education programs;
  • Sometimes seek alternate routes and multiple destinations during their educational journeys.

Here are some ways I use the interstate metaphor to find sources of project impact.

Look in both directions. Pipelines move their cargo from Point A to Point B, while interstates support two-way traffic, so look in both directions for indications of project impact, for instance:

  • Career changers with bachelor’s or graduate degrees returning to community colleges for additional training or certification
  • Four-year students mentoring two-year students to prepare for student competitions
  • Community college students mentoring high school students or serving as judges for high school student competitions
  • Business and industry practitioners getting involved on community college curriculum advisory boards or forming three-way partnerships with faculty and students to enhance the learning/assessment experience or create knowledge collaboratively

Expand the realm of acceptable outcomes. ATE projects have significant impact beyond program completion or employment placement. Students sometimes find jobs before completing a program, and intermediate outcomes, such as certificate or multiple-course completion, may also indicate progress, especially for students returning to college to enhance their existing professional prospects.

Look in other places for impact. Extracurricular activities, such as student competitions, clubs, or organizations, are one good place to look. I sometimes think of such places as “toll booths”—activities where impact can be measured more easily as students pass through them.

Blog: Figures at Your Fingertips

Posted on October 28, 2014 in Blog

Co-Principal Investigator, Op-Tec Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In formative evaluation, programs or projects are typically assessed during their development or early implementation to provide information about how best to revise and modify for improvement. (www.austinisd.org)

I’m Gordon Snyder, and I’m currently the principal investigator of the National Center for Information and Communications Technologies (ICT Center). My experience as a new ATE center PI back in July of 2001 did not get off to a very smooth start. With the retirement of the founding PI after three years, I moved up from a co-PI position and was faced with making a number of decisions and changes in a short period of time. We were right in the middle of the “dot com bust,” and the information and communications technology field was in free fall. I knew our decisions needed to be data-driven, smart, focused, quick, and correct if we were going to continue to be a resource for faculty and students.

As a center co-PI during the boom times between 1998 and 2000, I focused on curriculum development and on helping faculty learn and teach new technology in their classrooms and labs. I honestly neither understood nor paid much attention to the work our evaluator was doing – that was something the PI liked to handle, and I was perfectly fine with that.

In my new role as PI, things changed. One of the first things I did was read the evaluation reports from the past two years. I found a lot of flowery, complimentary language and little else in those reports – I recall using the term “pile of fluff” along with a few others that I won’t repeat here. I found nothing substantial that was going to help me make any decisions.

In August of 2001, I received our year 3 annual evaluation report, and this one was even more “fluffy.” Lesson learned: within a month I dismissed that evaluator, replacing that individual with someone more in tune with what we needed. Things were much better with the new evaluator, but I still found it difficult to make intelligent, data-based decisions. I did not have the information I needed. There had to be a better way.

Fast forward to today: ATE PIs need even more access to valid, reliable, useful evaluative data for decision making. This data needs to be available in real time, or close to it, throughout the funding cycle, more frequently than the typical annual evaluation report provides. However, most PIs simply do not have the time, resources, or expertise required to systematize the collection and use of this kind of information.

Logic models are one approach that is catching on for keeping track of information and using it to make formative, data-driven decisions. I’ve been working on the FAS4ATE project (Formative Assessment for ATE) with Western Michigan University, which will ultimately develop logic model-based online tools to streamline data collection and to scope and plan evaluation activities more effectively, covering both formative and summative processes. We’re early in the development process. Here’s a short video demonstrating our prototype of one of the tools.
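For readers who want to see the general idea in concrete form, here is a minimal sketch of how a logic model component and its indicators might be represented for formative tracking. This is an illustration only, not the FAS4ATE tools; the class names, fields, and example values are assumptions made for the sketch.

    # Illustration only -- not the FAS4ATE tools. Names, fields, and
    # example values are assumptions for the sake of the sketch.
    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        name: str
        target: float
        actual: float | None = None  # updated as evidence arrives during the year

        def on_track(self) -> bool | None:
            return None if self.actual is None else self.actual >= self.target

    @dataclass
    class LogicModelComponent:
        """One column of a logic model (inputs, activities, outputs, outcomes)."""
        label: str
        indicators: list[Indicator] = field(default_factory=list)

        def status(self) -> dict[str, bool | None]:
            return {i.name: i.on_track() for i in self.indicators}

    # Check progress at any point in the funding cycle, not just at the annual report
    outputs = LogicModelComponent("Outputs", [
        Indicator("faculty workshops held", target=6, actual=4),
        Indicator("students completing certificate", target=40),  # no data yet
    ])
    print(outputs.status())
    # {'faculty workshops held': False, 'students completing certificate': None}

The point of structuring the information this way is that the same record supports quick formative check-ins throughout the year and the summative roll-up at the end.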

Logic models are a great way to keep up with project work and to make data-based decisions more quickly and confidently. If you’d like to learn more about this formative assessment project, contact me at gordonfsnyder@gmail.com.