
Newsletter: Meet EvaluATE’s Community College Liaison Panel

Posted on January 1, 2014 in Newsletter

The ATE program is community college-based, and as such EvaluATE places a priority on meeting the needs of this constituency. To help ensure the relevancy and utility of its resources, EvaluATE has convened a Community College Liaison Panel (CCLP). CCLP members Michael Lesiecki, Marilyn Barger, Jane Ostrander, and Gordon Snyder are tasked with keeping the EvaluATE team tuned into the needs and concerns of 2-year college stakeholders and engaging the ATE community in the review and pilot testing of EvaluATE-produced materials.

These resources distill relevant elements of evaluation theory, principles, and best practices so that users can quickly understand and apply them to specific evaluation-related tasks. They are intended to help members of the ATE community enhance the quality of their evaluations.

The CCLP’s role is to coordinate a three-phase review process. CCLP members conduct a first-level review of an EvaluATE resource. The EvaluATE team revises it based on the CCLP’s feedback, then each of the four CCLP members reaches out to diverse members of the ATE community—PIs, grant developers, evaluators, and others—to review the material and provide confidential, structured feedback and suggestions. After another round of revisions, the CCLP engages another set of ATE stakeholders to actually try out the resource to ensure it “works” as intended in the real world. Following this pilot testing, EvaluATE finalizes the resource for wide dissemination.

The CCLP has shepherded two resources through the entire review process: the ATE Evaluation Primer and ATE Evaluation Planning Checklist. In the hopper for review in the next few months are the ATE Logic Model Template and Evaluation Planning Matrix, Evaluation Questions Checklist, ATE Evaluation Reporting Checklist, and Professional Development Feedback Survey Template. In addition, CCLP members are leading the development of a Guide to ATE Evaluation Management—by PIs for PIs.

The CCLP invites anyone interested in ATE evaluation to participate in the review process. For a few hours of your time, you’ll get a first look at new resources and a chance to try them out, and your input will help shape and strengthen the ATE evaluation community. We also welcome recommendations of tools and materials developed by others that would be of interest to the ATE community.

To get involved, email CCLP Director Mike Lesiecki at mlesiecki@gmail.com. Tell him you would like to help make EvaluATE the go-to evaluation resource for people like you.

Newsletter: 20 Years of ATE Evaluation

Posted on October 1, 2013 by Gerhard Salinger in Newsletter

Evaluation has been required of ATE projects and centers since the program began in 1993. Many evaluations were concerned more with the numbers of students and faculty impacted than with the effectiveness of the intervention. The sophistication of evaluation expectations has increased over time. Early in the program, there was a shortage of evaluators who understood both the disciplinary content and the methods of evaluation. Through a separate grant, Arlen Gullickson at the Western Michigan University Evaluation Center provided an internship program in which novice evaluators spent six months evaluating a component of an ATE project. Several ATE evaluators got their start in this program, and several PIs learned what evaluation could do for them and their projects.

The ATE program responded to the Government Performance and Results Act by developing a project monitoring survey that provided a snapshot of the program. The survey is still administered annually by EvaluATE. Although this monitoring system also emphasized “body counts,” the survey was modified over time, with input from program officers, to include questions that encouraged evaluation of project effectiveness.

For example, questions asked whether the project’s evaluation investigated the extent to which participants in professional development actually implemented the content correctly and the resulting impact on student learning, following the ideas of the Kirkpatrick model of evaluation. Yet the evaluations reported in renewal proposals still concentrate on “body counts,” and proposal reviewers ask, “What happened as a result?” To develop project evaluations that could be aggregated to determine how well the ATE program was meeting its goals, a workshop was held with evaluators from centers. The participants suggested that projects could be evaluated along eight dimensions: impact on students, faculty, the college, the community, industry, interaction among colleges, the region, and the nation. A review of several project and center annual reports found that all categories were addressed and that very few items could not be accommodated in this scheme.

Following the evaluation in NSF’s Math and Science Partnership program, I have encouraged project and center leaders to make a FEW claims about the effectiveness of their projects. The evaluator should provide evidence for the extent to which those claims are justified. This view is consistent with the annual report template in Research.gov, which asks for the major goals of the project. It also limits summative evaluation to a few major issues. Much of the emphasis, both here and in general, has been on summative evaluation focused on impact and effectiveness. Projects should also engage in formative evaluation to inform project improvements. This requires a short feedback cycle that is usually not possible with only external evaluation. An internal evaluator working with an external evaluator may be useful for collecting data and providing timely feedback to the project. A grant has recently been awarded to strengthen the practice and use of formative evaluation by ATE grantees. Arlen Gullickson, EvaluATE’s co-PI, is leading this work in cooperation with EvaluATE.

Gerhard Salinger is a founding program officer of the ATE program. The ideas expressed here are his alone and may not reflect the views of the National Science Foundation.

Newsletter: What grant writers need to know about evaluation

Posted on July 1, 2013 in Newsletter

Coordinator of Grants Development and Management

Fellow grant writers: Do you ever stop and ask yourselves, “Why do we write grants?” Do you actually enjoy herding cats, pulling teeth, or the inevitable stress of a looming proposal deadline? I hope not. Then what is the driver? We shouldn’t write a grant simply to get funded or to earn prestige for our colleges. Those benefits may be motivators, but we should write to get positive results for the faculty, students, and institutions involved. And we should be able to evaluate those results in useful and meaningful ways so that we can identify ways to improve and demonstrate the project’s value.

Evaluation isn’t just about satisfying a promise or meeting a requirement to gather and report data; it’s about gathering meaningful data that can be used to determine the effectiveness of an activity and the impact of a project. When developing a grant proposal, one often starts with the goals, then thinks of the objectives, and then plans the activities, hoping that in the end the evaluation data will prove that the goals were met and the project was a success. That is putting a lot of faith in “hope.” I find it more promising to begin with the end in mind from an evaluation perspective: What is the positive change that we hope to achieve, and how will it be evidenced? What does success mean? How can we tell? When will we know? And how can we get participants to provide the information we will need for the evaluation?

The role of a grant writer is too often like that of a quilt maker—sections of the grant solicitation are delegated to different members of the institution, with the evaluation section often outsourced to a third-party evaluator. Each party submits its content, and the grant writer scrambles to patch it all together. Now instead of a quilt, consider the construction of a tapestry. Instead of chunks of material stitched together in independent sections, each thread is carefully woven in a thoughtful way to create a larger, more cohesive overall design. It is important that the entire development team work together to fully understand each aspect of the proposal and to collaboratively develop a coherent plan for achieving the desired outcomes. The project workplan, budget, and evaluation components should not be designed or executed independently—they should be developed simultaneously and are dependent upon each other.

I encourage you to think like an evaluator as you develop your proposals. Prepare yourself and challenge your team to be able to justify the value of each goal, objective, and activity and be able to explain how that value will be measured. If at all possible, involve your external or internal evaluator early on in the proposal development. The better the evaluator understands your overall concept and activities, the better he or she can tailor the evaluation plan to derive the desired results. A strong workplan and evaluation plan will help proposal reviewers connect the dots and see the potential of your proposal. It will also serve as a roadmap to success for your project implementation team.

Newsletter: Evaluation that Seriously Gets to the Point – and Conveys It Brilliantly

Posted on April 1, 2013 in Newsletter

Evaluation, much as we love it, has a reputation among nonevaluators for being overly technical and academic, lost in the details, hard work to wade through, and, in the end, not particularly useful. Why is this? Many evaluators were originally trained in the social sciences. There we added numerous useful frameworks and methodologies to our toolkits. But along the way, we were inculcated with several approaches, habits, and ways of communicating that are absolutely killing our ability to deliver the value we could be adding. Here are the worst of them:

  1. Writing question laundry lists – asking long lists of evaluation questions that are far too narrow and detailed (often at the indicator level)
  2. Leaping to measurement – diving into identifying intended outcomes and designing data collection instruments without a clear sense of who or what the evaluation is for
  3. Going SMART but unintelligent – focusing on what’s most easily measurable rather than making intelligent choices to go after what’s most important (SMART = specific, measurable, achievable, relevant, and time-based)
  4. Rorschach inkblotting – assuming that measures, metrics, indicators, and stories are the answers; they are not!
  5. Shirking valuing – treating evaluation as an opinion-gathering exercise rather than actually taking responsibility for drawing evaluative conclusions based on needs, aspirations, and other relevant values
  6. Getting lost in the details – leaving the reader wading through data instead of clearly and succinctly delivering the answers they need
  7. Burying the lead – losing the most important key messages by loading way too many “key points” into the executive summaries, not to mention the report itself, or using truly awful data visualization techniques
  8. Speaking in tongues – using academic and technical language that just makes no sense to normal people

Thankfully, hope is at hand! Breakthrough thinking and approaches are all around us, but many evaluators just aren’t aware of them. Some have been there for decades. Here’s a challenge for 2013: Seek out and get really serious about infusing the following into your evaluation work:

  • Evaluation-Specific Methodology (ESM) – the methodologies that are distinctive to evaluation, i.e., the ones that go directly after values. Examples include needs and values assessment; merit determination methodologies; importance weighting methodologies; evaluative synthesis methodologies; and value-for-money analysis
  • Actionable Evaluation – a pragmatic, utilization-focused framework for evaluation that asks high-level, explicitly evaluative questions and delivers direct answers to them using ESM
  • Data Visualization & Effective Reporting – the best of the best of dataviz, reporting, and communication to deliver insights that are not just understandable but unforgettable