Steven Budd

Research and Evaluation Consultant, Steven Budd Consulting

Dr. Steven Budd is a former community college president and a past president of the Council for Resource Development (CRD). His career spans more than thirty years in all aspects of community college leadership, including institutional development, enrollment management, public relations, marketing, and government relations. Dr. Budd has developed and implemented workforce development projects under the U.S. Departments of Labor, Commerce, and Education. He also served as the Principal Investigator for CRD’s NSF-funded faculty professional development program and has since pursued a career in project evaluation and research. He holds an MBA and Ed.D. from the University of Massachusetts, Amherst.


Blog: Finding Opportunity in Unintended Outcomes

Posted on April 15, 2015 by Steven Budd in Blog

Research and Evaluation Consultant, Steven Budd Consulting

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Working with underage students carries an increased responsibility for their supervision. Concerns may arise during implementation that were never envisioned when the project was designed. These unintended consequences may be revealed during an evaluation, presenting an opportunity for PIs and evaluators to both learn and intervene.

One project I’m evaluating includes a website designed for young teens that features videos from ATETV and other sources. The site encourages our teen viewers to share information about the site with their peers and to explore links to videos hosted on other popular sites like YouTube. The overarching goal is to attract kids to STEM and technician careers by piquing their interest with engaging and accurate science content. What we didn’t anticipate was the volume of links to pseudoscience, science denial, and strong political agendas they would encounter. The question for the PI and Co-PIs became, “How do we engage our young participants in a conversation about good versus not-so-good science and how to think critically about what they see?”

As the internal project evaluator, I first began a conversation with the project PI and senior personnel around the question of responsibility. What is the responsibility of the PIs to engage our underage participants in a conversation about critical thinking and learning, so they can discriminate between questionable and solid content? Such content is readily accessible to young teens as they surf the Web, so the more important question was how the project team might acknowledge this reality and capitalize on it. In this sense, was a teaching moment at hand?

As evaluators on NSF-funded projects, we know that evaluator engagement is critical right from the start. Formative review becomes especially important when even well-designed and well-thought-out activities take unanticipated turns. Our project incorporates a model of internal evaluation, which enables project personnel to gather data and provide real-time assessment of activity outcomes. We then present the data, with comment, to our external evaluator. The evaluation team works with the project leadership to identify concerns as they arise and strategize a response. That response might include refining activities and how they are implemented, or creating entirely new activities that address a concern directly.

After thinking it through, the project leadership chose to open a discussion about critical thinking and science content with the project’s teen advisory group. Our response was to:

  • Initiate more frequent “check-ins” with our teen advisers and have more structured conversations around science content and what they think.
  • Sample other teen viewers as they join their peers in the project’s discussion groups and social media postings.
  • Seek to better understand how teens engage with Internet-based content and how they make sense of what they see.
  • Seek new approaches to activities that engage young teens in building their science literacy and critical thinking.

Tips to consider

  • Adjust your evaluation questions to better understand the actual experience of your project’s participants, and then look for the teaching opportunities in response to what you hear.
  • Vigilant evaluation may reveal the first signs of unintended impacts.

Newsletter: Lessons Learned about Building Evaluation into ATE Proposals: A Grant Writer’s Perspective

Posted on July 1, 2014 by Steven Budd in Newsletter

Research and Evaluation Consultant, Steven Budd Consulting

Having made a career of community college administration, first as a grant writer and later as a college president, I know well the power of grants in advancing a college’s mission. In the early 1990s, NSF became one of the first grantmakers in higher education to recognize the role of community colleges in STEM undergraduate education. Ever since, two-year faculty have striven to enter the NSF world with varying success.

Unlike much federal grant funding, NSF awards are predicated on innovation and advancing knowledge, which stands in stark contrast to a history of colleges making the case for support based on institutional need. Colleges that are repeatedly successful in winning NSF grants are those that demonstrate their strengths and their ability to deliver what the grantor wants. I contend that NSF grants will increasingly go to new or “first-time” institutions once they recognize and embrace their capacity for innovation and knowledge advancement. With success in winning grants comes the responsibility to document achievements through effective evaluation.

I am encouraged by what I perceive as a stepped-up discussion among grant writers, project PIs, and program officers about evaluation and its importance. As a grant writer/developer, I was mainly concerned with showing that the activities I proposed were actually accomplished and that the anticipated courses, curricula, or other project deliverables had been implemented. Longer-term outcomes pertaining to student achievement were generally considered to be beyond a project’s scope. However, student outcomes have now become the measure for attracting public funding, and the emphasis on outcomes will only increase in this era of performance-based budgeting.

When I was a new president of an institution that had never benefited from grant funding, I had the pleasure of rolling up my sleeves and joining the faculty in writing a proposal to the Advanced Technological Education (ATE) program. College presidents think long and hard about performance measures like graduation rates, length of enrollment until completion, and the gainful employment of graduates, yet such measures may seem distant to faculty who must focus on getting more students to pass their courses. The question arises as to how to reconcile equally important interests in outcomes—at the course and program levels for faculty and at the institutional level for the president. While I was not convinced that student outcomes were beyond the scope of the grant, the faculty and I agreed that our ATE evaluation ought to be a step in a larger process.

Most evaluators would agree that longitudinal studies of student outcomes cannot be completed within the typical three-year grant period. At the same time, I think the new emphasis on logic models that demonstrate the progression from project inputs and activities through short-, mid-, and long-term outcomes allows grant developers to better tailor evaluation designs to the funded work, as well as extend project planning beyond the immediate funding period. The notion of “stackable credentials,” so popular with the college completion agenda, should now be part of our thinking about grant development. For example, we might look to develop proposals for ATE Targeted Research that build upon more limited project evaluation results. Or perhaps the converse is the way to go: Let’s plan our ATE projects with a mind toward long-term results, supported by evaluation and research designs that ultimately get us the data we need to “make the case” for our colleges as innovators and advancers of knowledge.