
Newsletter: Getting the Most out of Your Logic Model

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

I recently led two workshops at the American Evaluation Association’s Summer Evaluation Institute. To get a sense of the types of projects that the participants were working on, I asked them to send me a brief project description or logic model in advance of the Institute. I received more than 50 responses, representing a diverse array of projects in the areas of health, human rights, education, and community development. While I have long advocated for logic models as a succinct way to communicate the nature and purpose of projects, it wasn’t until I received these responses that I realized how efficient logic models really are at conveying what a project does, whom it serves, and how it is intended to bring about change.

In reviewing the logic models, I was able to quickly understand the main project activities and outcomes. My workshops were on developing evaluation questions, and I was amazed at how quickly I could frame evaluation questions and indicators based on what was presented in the models. It wasn’t as straightforward with the narrative project descriptions, which were much less consistent in the types of information conveyed and the degree to which the elements were linked conceptually. When participants showed me their models in the workshop, I quickly remembered their projects and could give them specific feedback based on my previous review.

Think of NSF proposal reviewers who have to read numerous 15-page project descriptions. It’s not easy to keep straight all the details of a single project, let alone those of 10 or more 15-page proposals. In a logic model, all the key information about a project’s activities, products, and outcomes is presented in one graphic. This helps reviewers consume the project information as a “package.” For reviewers who are especially interested in the quality of the evaluation plan, a quick comparison of the evaluation plan against the model will reveal how well the plan is aligned with the project’s activities, scope, and purpose. Specifically, mentally mapping the evaluation questions and indicators onto the logic model provides a good sense of whether the evaluation will adequately address both project implementation and outcomes.

One of the main reasons for creating a logic model—other than the fact that it may be required by a funding agency—is to illustrate how key project elements logically relate to one another. I have found that representing a project’s planned activities, products, and outcomes in a logic model format can reveal weaknesses in the project’s plan. For example, there may be an activity that doesn’t seem to lead anywhere or ambitious outcomes that aren’t adequately supported by activities or outputs. It is much better if you, as a project proposer, spot those weaknesses before an NSF reviewer does. A strong logic model can then serve as a blueprint for the narrative project description—all key elements of the model should be apparent in the project description and vice versa.
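
If it helps to make this kind of consistency check concrete, here is a minimal sketch in Python of one way to represent a logic model’s linkages and flag activities that lead nowhere or outcomes that nothing supports. The project elements, names, and dictionary-based structure below are hypothetical illustrations, not part of any particular template or required format.

```python
# A minimal sketch (hypothetical project content) of checking a logic model
# for the two weaknesses described above: activities that do not lead
# anywhere and outcomes that nothing in the model supports.

# Each element is mapped to the elements it is expected to lead to.
links = {
    "faculty workshops":          ["workshop curriculum"],         # activity -> output
    "industry advisory meetings": [],                              # activity with no link
    "workshop curriculum":        ["improved teaching practice"],  # output -> outcome
    "improved teaching practice": ["higher program completion"],   # outcome -> outcome
}

activities = ["faculty workshops", "industry advisory meetings"]
outcomes = ["improved teaching practice", "higher program completion",
            "more industry partnerships"]

# Flag activities that do not lead anywhere.
for activity in activities:
    if not links.get(activity):
        print(f"Activity '{activity}' does not lead to any output or outcome.")

# Flag outcomes that nothing upstream supports.
supported = {target for targets in links.values() for target in targets}
for outcome in outcomes:
    if outcome not in supported:
        print(f"Outcome '{outcome}' is not supported by any activity or output.")
```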

I don’t think there is such a thing as the perfect logic model. The trick is to recognize when it is good enough. Check to make sure that the elements are located in the appropriate sections of the model, that all main project activities (or activity areas) and outcomes are included, and that they are logically linked. Ask someone from outside your team to review it; revise if they see problems or opportunities to increase clarity. But don’t overwork it—treat it as a living document that you can update when and if necessary.

Download the logic model template from http://bit.ly/lm-temp.

Newsletter: Three Questions and Examples to Spur Action from Your Evaluation Report

Posted on April 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

1) Are there any unexpected findings in the report? The EvaluATE team has been surprised to learn that we are attracting a large number of grant writers and other grant professionals to our webinars. We initially assumed that principal investigators (PIs) and evaluators would be our main audience. With growing attendance among grant writers, we became aware that they are often the ones who first introduce PIs to evaluation, guiding them on what should go in the evaluation section of a proposal and how to find an evaluator. The unexpected finding that grant writers are seeking out EvaluATE for guidance caused us to realize that we should develop more tailored content for this important audience as we work to advance evaluation in the ATE program.

Talk with your team and your evaluator to determine if any action is needed related to your unexpected results.

2) What’s the worst/least favorable evaluation finding from your evaluation? Although it can be uncomfortable to focus on a project’s weak points, this is where the greatest opportunity for growth and improvement lies. Consider the probable causes of the problem and potential solutions. Can you solve the problem with your current resources? If so, make an action plan. If not, decide if the problem is important enough to address through a new initiative.

At EvaluATE, we serve both evaluators and evaluation consumers who have a wide range of interests and experience. When asked what EvaluATE needs to improve, several respondents to our external evaluation survey noted that they want webinars to be more tailored to their specific needs and skill levels. Some noted that our content was too technical, while others remarked that it was too basic. To address this issue, we decided to develop an ATE evaluation competency framework. Webinars will be keyed to specific competencies, which will help our audience decide which are appropriate for them. We couldn’t implement this research and development work with our current resources, so we wrote this activity into the renewal proposal we submitted last fall.

Don’t sweep an unfavorable result or criticism under the rug. Use it as a lever for positive change.

3) What’s the most favorable finding from your evaluation? Give yourself a pat on the back, then figure out if it points to an aspect of your project you should expand. If you need more information to make that decision, determine what additional evidence could be obtained in the next round of the evaluation. Help others to learn from your successes—the ATE Principal Investigators Conference is an ideal place to share aspects of your work that are especially strong, along with your lessons learned and practical advice about implementing ATE projects.

At EvaluATE, we have been astounded at the interest in and positive response to our webinars. But we don’t yet have a full understanding of the extent to which webinar attendance translates into improvements in evaluation practice. So, we decided to start collecting follow-up data from webinar participants to check on their use of our content. With that additional evidence in hand, we’ll be better positioned to make an informed decision about expanding or modifying our webinar series.

Don’t just feel good about your positive results—use them as leverage for increased impact.

If you’ve considered your evaluation results carefully but still aren’t able to identify a call to action, it may be time to rethink your evaluation’s focus. You may need to make adjustments to ensure it produces useful, actionable information. Evaluation plans should be fluid and responsive—it is expected that they will evolve to address emerging needs.

Newsletter: Revisiting Intellectual Merit and Broader Impact

Posted on January 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

If you have ever written a proposal to the National Science Foundation (NSF) or participated in a proposal review panel for NSF, you probably instantly recognize the terms Intellectual Merit and Broader Impacts as NSF’s merit review criteria. Proposals are rated and funding decisions are made based on how well they address these criteria. Therefore, proposers must describe the potential of their proposed work to advance knowledge and understanding (Intellectual Merit) and benefit society (Broader Impacts).

Like cramming for an exam and then forgetting 90 percent of what you memorized, it’s all too easy for principal investigators to lose sight of Intellectual Merit and Broader Impacts after proposal submission. But there are two important reasons to maintain focus on Intellectual Merit and Broader Impacts after an award is made and throughout project implementation.

First, the goals and activities expressed in a proposal are commitments about how a particular project will advance knowledge (Intellectual Merit) and bring tangible benefits to individuals, institutions, communities, and/or our nation (Broader Impacts). Simply put, PIs have an ethical obligation to follow through on these commitments to the best of their abilities.

Second, when funded PIs seek subsequent grants from NSF, they must describe the results of their prior NSF funding in terms of Intellectual Merit and Broader Impacts. In other words, proposers must explain how they used their NSF funding to actually advance knowledge and understanding and benefit society. PIs who have evidence of their accomplishments in these areas and can convey it succinctly will be well positioned to seek additional funding. To ensure that evidence of both Intellectual Merit and Broader Impacts is being captured, PIs should revisit project evaluation plans with their evaluators, crosschecking the proposal’s claims about potential Intellectual Merit and Broader Impacts against the evaluation questions and data collection plan to make sure compelling evidence will be gathered.

Last October, I conducted a workshop on this topic at the ATE Principal Investigators Conference with my colleague Kirk Knestis, an evaluator from Hezel Associates. Dr. Celeste Carter, ATE program co-lead, spoke about how to frame results of prior NSF support in proposals. She noted that a common misstep she has seen is that proposers speak to results from prior support by simply reiterating what they said they were going to do in their funded proposals, rather than describing the actual outcomes of the grant. Project summaries (the one-page descriptions, required as part of all NSF proposals, that address a proposed project’s Intellectual Merit and Broader Impacts) are necessarily written in a prospective, future-oriented manner because the work hasn’t been initiated yet. In contrast, the Results of Prior NSF Support sections are about completed work and therefore are written in the past tense and should include evidence of accomplishments. Describing achievements and presenting evidence of their quality and impact shows reviewers that the proposer is a responsible steward of federal funds, can deliver on promises, and is building on prior success.

Take time now, well before it is time to submit a new proposal or a Project Outcomes Report, to make sure you haven’t lost sight of the Intellectual Merit and Broader Impacts aspects of your grant and how you promised to contribute to these national priorities.

Newsletter: Shorten the Evaluation Learning Curve: Avoid These Common Pitfalls

Posted on October 1, 2015 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

This EvaluATE newsletter issue is focused on getting started with evaluation. It’s oriented to new ATE principal investigators who are getting their projects off the ground, but I think it holds some good reminders for veteran PIs as well. To shorten the evaluation learning curve, avoid these common pitfalls:

Searching for the truth about “what NSF wants from evaluation.” NSF is not prescriptive about what an ATE evaluation should or shouldn’t look like. So, if you’ve been concerned that you’ve somehow missed the one document that spells out exactly what NSF wants from an ATE evaluation—rest assured, you haven’t overlooked anything. But there is information that NSF requests from all projects in annual reports and that you are asked to report on the annual ATE survey. So it’s worthwhile to preview the Research.gov reporting template (bit.ly/nsf_prt) and the ATE annual survey questions (bit.ly/ATEsurvey16). And if you’re doing research, be sure to review the Common Guidelines for Education Research and Development – which are pretty cut-and-dried criteria for different types of research (bit.ly/cg-checklist). Most importantly, put some time into thinking about what you, as a project leader, need to learn from the evaluation. If you’re still concerned about meeting expectations, talk to your program officer.

Thinking your evaluator has all the answers. Even for veteran evaluators, every evaluation is new and has to be tailored to context. Don’t expect your evaluator to produce a detailed, actionable evaluation plan on Day 1. He or she will need to work out the details of the plan with you. And if something doesn’t seem right to you, it’s OK to ask for something different.

Putting off dealing with the evaluation until you are less busy. “Less busy” is a mythical place and you will probably never get there. I am both an evaluator and a client of evaluation services, and even I have been guilty of paying less attention to evaluation in favor of “more urgent” matters. Here are some tips for ensuring your project’s evaluation gets the attention it needs: (a) Set a recurring conference call or meeting with your evaluator (e.g., every two to three weeks); (b) Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation matters; (c) Give someone other than the PI responsibility for attending to the evaluation—not to replace the PI’s attention, but to ensure the PI and other project members are staying on top of the evaluation and communicating regularly with the evaluator; (d) Commit to using the evaluation results in a timely way—if you do something on a recurring basis, make sure you gather feedback from those involved and use it to improve the next activity.

Assuming you will need your first evaluation report at the end of Year 1. PIs must submit their annual reports to NSF within the 90 days prior to the end of the current budget period. So if your grant started on September 1, your first annual report is due between the beginning of June and the end of August. And it will take some time to prepare, so you should probably start writing a month or so before you plan to submit it. You’ll want to include at least some of your evaluation results, so start working with your evaluator now to figure out what information is most important to collect for your Year 1 report.
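
For those who like to see the date arithmetic spelled out, here is a small sketch of that 90-day reporting window. The September 1 start date is just the example from above, and the specific year is an arbitrary assumption for illustration.

```python
# Sketch of the annual-report window described above: reports may be
# submitted within the 90 days before the end of the current budget period.
# Dates are illustrative (a Year 1 budget period of Sept. 1 - Aug. 31).
from datetime import date, timedelta

budget_period_end = date(2016, 8, 31)
window_opens = budget_period_end - timedelta(days=90)

print(f"Year 1 report window: {window_opens} through {budget_period_end}")
# -> 2016-06-02 through 2016-08-31, i.e., early June to the end of August
```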

Veteran PIs: What tips do you have for shortening the evaluation learning curve?  Submit a blog to EvaluATE and tell your story and lessons learned for the benefit of new PIs: evalu-ate.org/category/blog/.

Newsletter: An Evaluative Approach to Proposal Development

Posted on July 1, 2015 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

A student came into my office to ask me a question. Soon after she launched into her query, I stopped her and said I wasn’t the right person to help because she was asking about a statistical method that I wasn’t up-to-date on. She said, “Oh, you’re a qualitative person?” And I answered, “Not really.” She left looking puzzled. The exchange left me pondering the vexing question, “What am I?” (Now imagine these words echoing off my office walls in a spooky voice for a couple of minutes.) After a few uncomfortable moments, I proudly concluded, “I am a critical thinker!”

Yes, evaluators are trained specialists with an arsenal of tools, strategies, and approaches for data collection, analysis, and reporting. But critical thinking—evaluative thinking—is really what drives good evaluation. In fact, the very definition of critical thinking as “the mental process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and evaluating information to reach an answer or conclusion”1 describes the evaluation process to a T. Applying your critical, evaluative thinking skills in developing your funding proposal will go a long way toward ensuring your submission is competitive.

Make sure all the pieces of your proposal fit together like a snug puzzle. Your proposal needs both a clear statement of the need for your project and a description of the intended outcomes—make sure these match up. If you struggle with the outcome measurement aspect of your evaluation plan, go back to the rationale for your project. If you can observe a need or problem in your context, you should be able to observe the improvements as well. Show linkages between the need you intend to address, your activities and products, and expected outcomes.

Be logical. Develop a logic model to portray how your project will translate its resources into outcomes that address a need in your context. Sometimes simply putting things in a graphic format can reveal shortcomings in a project’s logical foundation (like when important outcomes can’t be tracked back to activities). The narrative description of your project’s goals, objectives, deliverables, and activities should match the logic model.

Be skeptical. Project planning and logic model development typically happen from an optimistic point of view. (“If we build it, they will come.”) While crafting your work plan, step back from time to time and ask yourself and your colleagues, what obstacles might we face? What could really mess things up? Where are the opportunities for failure? And perhaps most importantly, is this really the best solution to the need we’re trying to address? Identify your plan’s weaknesses and build in safeguards against those threats. I’m all for an optimistic outlook, but proposal reviewers won’t be wearing rose-colored glasses when they critique your proposal and compare it with others written by smart people with great ideas, just like you. Be your own worst critic and your proposal will be stronger for it.

Evaluative thinking doesn’t replace specialized training in evaluation. But even the best evaluator and most rigorous evaluation plan cannot compensate for a disheveled, poorly crafted project plan. Give your proposal a competitive edge by applying your critical thinking skills and infusing an evaluative perspective throughout your project description.

1 dictionary.com

Newsletter: Why Does the NSF Worry about Project/Center Evaluation?

Posted on April 1, 2015 in Newsletter

Lead Program Director, ATE, National Science Foundation

I often use a quick set of questions that Dr. Gerhard Salinger developed in response to the question, “How do you develop an excellent proposal?” Question 4 is especially relevant to the issue of project/center evaluation:

  1. What is the need that will be addressed?
  2. How do you specifically plan to address this need?
  3. Does your project team have the necessary expertise to carry out your plan?
  4. How will you know if you succeed?
  5. How will you tell other people about the results and outcomes?

Question 4 addresses the evaluation activities of a project or center, and I hope you consider evaluation essential to conducting an effective and successful project. Formative assessment guides you and lets you know if your strategy is working; it gives you the information to shift strategies if needed. A summative assessment then provides you and others with information on progress toward the overall project goals and objectives. Evaluation adds the concept of value to your project. For example, the evaluation activities might provide you with information on the participants’ perceived value of the workshop, and follow-on evaluation activities might provide you with information as to how many faculty used what they learned in a course. A final step might be to evaluate the impact on student learning in the revised course.

As a program officer, I can quickly scan the project facts (e.g., how many of this or that), but I tend to spend much more time on the evaluation data because it provides the value component of your project activities. Let’s go back to the faculty professional development workshops. Program officers definitely want to know if the workshops were held and how many people attended, but it is essential to provide information on the value of the workshops. It’s great to know that faculty “liked” the workshop, but of greater importance is the impact on their teaching practices and on the student learning that occurred as a result of those changes. Your annual reports (yes, we do read them carefully) can include the entire evaluation report as an attachment, but it would be really helpful if you, the PI, provided an overview of what you see as your project’s added value within the body of the report.

There are several reasons evaluation information is important to NSF program officers. First, each federal dollar that you expend carrying out your project is one that the taxpayers expect both you and the NSF to be accountable for. Second, within the NSF, program portfolios are scrutinized to determine programmatic impact and effectiveness. Third, the ATE program is congressionally mandated and program data and evaluation are often used to respond to congressional questions. Put more concisely, NSF wants to know if the investment in your project/center was a wise one and if value was generated from this investment.

Newsletter: Have You Overlooked Data That Might Strengthen Your Project Evaluation Reports or Grant Proposals?

Posted on January 1, 2015 in Newsletter

Many institutions of higher education collect very useful quantitative data as part of their regular operational and reporting processes. Why not put it to good use for your project evaluations or grant proposals? An office of institutional research, which often participates in the reporting process, can serve as a guide for data definitions and can often assist in creating one-time reports on this data and/or provide training to access and use existing reports.

Course Offerings: How many classes are offered in statistics? How frequently are they offered? Getting a sense of course enrollment numbers over time can illustrate need in a grant narrative. If a project involves the creation of new curricular elements, pre- and post-intervention enrollment numbers can serve as an outcome measure in an evaluation.

Student Transcripts: Is there a disproportionate number of veterans taking Spanish? How do they fare? Where do students major and minor? These data can serve as proxies for interest in different majors, identify gateway courses that might need support, and uncover course-taking patterns and/or relationships to GPA or full-/part-time status. Many of these can become outcomes or benchmarks in an evaluation, as well as context for a narrative.

Student Demographic and Admissions Data: Who are our students? How do they shape the institutional narrative?  Examine academic origin (high school, community college); incoming characteristics such as GPA, SAT, or ACT scores; race/ethnicity; veteran status; age; gender; Pell-grant eligibility status; underrepresented minority status; resident/nonresident status; and on-/off-campus housing. Student populations can be broken down into treatment cohorts for an evaluation of groups shown by research to benefit most from the intervention.

Faculty Demographic Information: Who are our faculty?  Examining full-time/part-time status, race/ethnicity, and gender can yield interesting observations. How do faculty demographics match students’ demographics? What is the student/faculty ratio? This information can enhance narrative descriptions of how students are served.

Financial Aid Data: How do we support our students fiscally? Information about cost of attendance vs. tuition, net cost vs. “sticker cost,” the percentage of students graduating with loans, and average loan burden can be important to describe. It can also be a way of dividing students when evaluating outcomes, and it can be an outcome measure in itself for grants intended to affect financial aid or financial literacy.

Student Outcomes: What does persistence look like at your institution? What are the one-year retention rates; four-, five-, and six-year graduation rates; and numbers of graduates by CIP (Classification of Instructional Programs) code? These are almost always the standard benchmarks for interventions intended to affect retention and completion.
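
As a rough illustration of how simple these benchmark calculations can be once the institutional data are in hand, here is a small sketch of a one-year retention rate. The student records and field names are hypothetical stand-ins for what an institutional research office would actually provide.

```python
# Illustrative sketch: one-year retention rate for a first-time cohort.
# The records and field names below are hypothetical.
first_time_cohort = [
    {"id": 101, "entered_fall": 2014, "enrolled_next_fall": True},
    {"id": 102, "entered_fall": 2014, "enrolled_next_fall": False},
    {"id": 103, "entered_fall": 2014, "enrolled_next_fall": True},
    {"id": 104, "entered_fall": 2014, "enrolled_next_fall": True},
]

retained = sum(1 for student in first_time_cohort if student["enrolled_next_fall"])
retention_rate = retained / len(first_time_cohort)
print(f"One-year retention rate: {retention_rate:.0%}")  # 75% for this toy cohort
```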

To further your case and provide context, comparison data for most of these are available in IPEDS (Integrated Postsecondary Education Data System) and may be tracked by federal surveys like the Beginning Postsecondary Students Longitudinal Study and the National Postsecondary Student Aid Study, all of which are potential sources for external benchmarking. Of course, collecting these types of data can be addictive as you discover new ways to enliven your narrative and empower your evaluation with the help of institutional research. Happy hunting!

To learn more about institutional data from Carolyn and Russ, read their contribution to EvaluATE’s blog at evalu-ate.org/blog/brennancannon-feb15.

Newsletter: Everyday Evaluation

Posted on October 1, 2014 in Newsletter

At EvaluATE, evaluation is a shared responsibility. We have a wonderful external evaluator, Dr. Lana Rucks, with whom we meet in person a few times a year and talk by phone about every other month. Dr. Rucks is responsible for determining our center’s mid- and long-term impact on the individuals who engage with us and on the ATE projects they influence. We supplement her external evaluation with surveys of workshop and webinar participants to obtain their immediate feedback on our activities. In addition, we carefully track the extent to which we are reaching our intended audiences. But for our team, evaluation is not just about the formal activities related to data collection and analysis. It’s how we do our work on a daily basis. Here are some examples:

  • Everyone gives and gets constructive criticism. Every presentation, webinar, newsletter article, or other product we create gets reviewed by the whole team. This improves our final products, whether it means catching embarrassing typos, completely revamping a presentation to improve its relevance, or going back to the drawing board. We all have thick skins and understand that criticism is not personal; it’s essential to high-quality work.
  • We are willing to admit when something’s not working or when we’ve bitten off more than we can chew. We all realize it’s better to scrap an idea early and refocus rather than push it to completion with mediocre results.
  • We look backward when moving forward. For example, when we begin developing a new webinar, we review the feedback from the previous one to determine what our audiences perceived as its strengths and weaknesses. Perhaps the most painful yet most valuable exercise is watching the recording of a prior webinar together, stopping to note what really worked and what didn’t, from the details of audio quality to the level of audience participation.
  • We engage our advisors. Getting an external perspective on our work is invaluable. They ask us tough questions and cause us to check our assumptions.
  • We use data every day. Whether determining which social media strategies are most effective or identifying which subgroups within our ATE constituency need more attention, we use the data we have in hand to inform decisions about our operations and priorities.
  • We use our mission as a compass to plot our path forward. We are faced with myriad opportunities in the work that we do as a resource center. We consider options in terms of their potential to advance our mission. That keeps us focused and ensures that resources are expended on mission-critical efforts.

Integrating these and other evaluative activities and perspectives into our daily work gives us better results, as is apparent in our formal evaluation findings. Importantly, we share a belief that excellence is never achieved—it is something we continually strive for. What we did yesterday may have been pretty good, but we believe we can do better tomorrow.

As you plan your evaluation for this year, consider things you can do with your team to critique and improve your work on an ongoing basis.

Newsletter: Lessons Learned about Building Evaluation into ATE Proposals: A Grant Writer’s Perspective

Posted on July 1, 2014 in Newsletter

Research and Evaluation Consultant, Steven Budd Consulting

Having made a career of community college administration, first as a grant writer and later as a college president, I know well the power of grants in advancing a college’s mission. In the early 1990s, NSF was one of the first grantmakers in higher education to recognize the role of community colleges in undergraduate STEM education. Ever since, two-year college faculty have strived to enter the NSF world with varying degrees of success.

Unlike much of the grant funding from federal sources, success in winning NSF grants is predicated on innovation and advancing knowledge, which stands in stark contrast to a history of colleges making the case for support based on institutional need. Colleges that are repeatedly successful in winning NSF grants are those that demonstrate their strengths and their ability to deliver what the grantor wants. I contend that NSF grants will increasingly go to new or “first-time” institutions once they recognize and embrace their capacity for innovation and knowledge advancement. With success in winning grants comes the responsibility to document achievements through effective evaluation.

I am encouraged by what I perceive as a stepped-up discussion among grant writers, project PIs, and program officers about evaluation and its importance. As a grant writer/developer, my main concern was to show that the activities I proposed were actually accomplished and that the anticipated courses, curricula, or other project deliverables had been implemented. Longer-term outcomes pertaining to student achievement were generally considered to be beyond a project’s scope. However, student outcomes have now become the measure for attracting public funding, and the emphasis on outcomes will only increase in this era of performance-based budgeting.

When I was a new president of an institution that had never benefited from grant funding, I had the pleasure of rolling up my sleeves and joining the faculty in writing a proposal to the Advanced Technological Education (ATE) program. College presidents think long and hard about performance measures like graduation rates, length of enrollment until completion, and the gainful employment of graduates, yet such measures may seem distant to faculty who must focus on getting more students to pass their courses. The question arises as to how to reconcile these equally important interests in outcomes—at the course and program levels for faculty and at the institutional level for the president. While I was not convinced that student outcomes were beyond the scope of the grant, the faculty and I agreed that our ATE evaluation ought to be a step in a larger process.

Most evaluators would agree that longitudinal studies of student outcomes cannot fall within the typical three-year grant period. By the same token, I think the new emphasis on logic models that demonstrate the progression from project inputs and activities through short-, mid-, and long-term outcomes allows grant developers to better tailor evaluation designs to the funded work, as well as extend project planning beyond the immediate funding period. The notion of “stackable credentials” so popular with the college completion agenda should now be part of our thinking about grant development. For example, we might look to develop proposals for ATE Targeted Research that build upon more limited project evaluation results. Or perhaps the converse is the way to go: Let’s plan our ATE projects with a mind toward long-term results, supported by evaluation and research designs that ultimately get us the data we need to “make the case” for our colleges as innovators and advancers of knowledge.

Newsletter: Expectations to Change (E2C): A Process to Promote the Use of Evaluation for Project Improvement

Posted on April 1, 2014 in Newsletter

How can we make sure evaluation findings are used to improve projects? This is a question on the minds of evaluators, project staff, and funders alike. The Expectations to Change (E2C) process is one answer. E2C is a six-step process through which evaluation stakeholders are guided from establishing performance standards (i.e., “expectations”) to formulating action steps toward desired change. The process can be completed in one or more working sessions with those evaluation stakeholders best positioned to put the findings to use. E2C is designed as a process of self-evaluation for projects, and the role of the evaluator is that of facilitator, teacher, and technical consultant. The six steps of the E2C process are summarized in the table below. While the specific activities used to carry out each step should be tailored to the setting, the suggested activities are based on various implementations of the process to date.

E2C Process Overview

  1. Set Expectations
     Objective: Establish standards to serve as a frame of reference for determining whether the findings are “good” or “bad.”
     Suggested activities: Instruction, worksheets, and a consensus-building process.
  2. Review Findings
     Objective: Examine the findings, compare them to established expectations, and form an initial reaction; celebrate successes.
     Suggested activities: Instruction, individual processing, and round-robin group discussion.
  3. Identify Key Findings
     Objective: Identify the findings that fall below expectations and require immediate attention.
     Suggested activities: Ranking process and facilitated group discussion.
  4. Interpret Key Findings
     Objective: Generate interpretations of what the key findings mean.
     Suggested activities: Brainstorming activity such as “Rotating Flip Charts.”
  5. Make Recommendations
     Objective: Generate recommendations for change based on interpretations of the findings.
     Suggested activities: Brainstorming activity such as “Rotating Flip Charts.”
  6. Plan for Change
     Objective: Formulate an action plan for implementing recommendations.
     Suggested activities: Planning activities that enlist all of the stakeholders and result in concrete next steps, such as a sticky wall and small group work.

To find out if the E2C process does in fact encourage projects to use evaluation for improvement, we asked a group of staff and administrators from a nonprofit, human service organization to participate in an online survey one year after their E2C workshop. The findings revealed an increase in staff knowledge and awareness of clients’ experiences receiving services, as well as specific changes to the way services were delivered. The findings also showed that participation in the E2C workshop fostered the service providers’ appreciation for, increased their knowledge of, and enhanced their ability to engage in evaluation activities.

Based on these findings and our experiences with the process to date, we believe the E2C process facilitates self-evaluation for the purpose of project improvement by giving program stakeholders the opportunity to systematically compare their evaluation results to agreed-upon performance standards, celebrate successes, and address weaknesses.

E2C Process Handout

E2C was co-created with Nkiru Nnawulezi, M.A., and Lela Vandenberg, Ph.D., Michigan State University. For more information, contact Adrienne Adams at adamsadr@msu.edu.