Blog: Becoming a Sustainability Sleuth: Leaving and Looking for Clues of Long-Term Impact

Posted on August 1, 2018 in Blog

Director, SageFox Consulting Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! I’m Rebecca from SageFox Consulting Group, and I’d like to start a conversation about measuring sustainability. Many of us work on ambitious projects whose long-term impacts cannot be achieved within the grant period and depend on grant activities being sustained afterward. Projects are often tasked with providing evidence of sustainability but are not given funding to assess sustainability and impact after the grant ends. In five, ten, or fifteen years, if someone were to pick up your final report, would they be able to use it to get a baseline understanding of what occurred during the grant, and would they know where to look for evidence of impact and sustainability? Below are some suggestions for documenting “clues” of sustainability:

Relationships are one way projects are sustained. You may want to consider documenting evidence of the depth of those relationships: are they person-dependent, or have they become true partnerships between entities? The depth of a relationship is often revealed when a key supporter leaves their position but the relationship continues. You might also try to distinguish a person from a role. For example, one project I worked on lost the support of a key contact (due to a reorganization) at a federal agency that hosted student interns during the summer. There was enough goodwill and experience, however, that continued efforts from the project leadership resulted in more requests for interns than there were students available to fill them.

Documenting how and why the innovation evolves can provide evidence of sustainability. Often the adopter, user, or customer finds their own value in relation to their unique context. Understanding how and why someone adapts the product or process gives great insight into which elements may live on and in what contexts. For example, you might ask users, “What modifications were needed for your context, and why?”

In one of my projects, we began with a set of training modules for students, but we found that an online test preparation module for a certification was also valuable. Through a relationship with the testing agency, a revenue stream was developed that also allowed the project to continue classroom work with students.

Institutionalization (adoption of key products or processes by an institution)—often through a dedicated line item in a budget for a previously grant-funded student support position—reflects sustainability. For example, when a grant-funded program found a permanent home at the university by expanding its student-focused training in entrepreneurship to faculty members, it aligned itself with the mission of the department. Asking “What components of this program are critical for the host institution?” is one way to uncover institutionalization opportunities.

Revenue generation is another indicator of customer demand for the product or process. Many projects are reluctant to commercialize their innovations, but commercialization can be part of a sustainability plan. There are even National Science Foundation (NSF) programs to help plan for commercialization (e.g., NSF Innovation Corps), and seed money to get started is also available (e.g., NSF Small Business Innovation Research).

Looking for clues of sustainability often requires a qualitative approach to evaluation through capturing the story from the leadership team and participants. It also involves being on the lookout for unanticipated outcomes in addition to the deliberate avenues a project takes to ensure the longevity of the work.

Blog: Successful Practices in ATE Evaluation Planning

Posted on July 19, 2018 in Blog

President, Mullins Consulting, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this essay, I identify what helps me create a strong evaluation plan when working with new Advanced Technological Education (ATE) program partners. I hope my notes add value to current and future proposal-writing conversations.

Become involved as early as possible in the proposal-planning process. With ATE projects, as with most evaluation projects, the sooner an evaluator is included in the project planning, the better. Even if the evaluator just observes the initial planning meetings, their involvement helps them become familiar with the project’s framework, the community partnerships, and the way in which project objectives are taking shape. Such involvement also helps familiarize the evaluator with the language used to frame project components and the new or established relationships expected for project implementation.

Get to know your existing and anticipated partners. Establishing or strengthening partnerships is a core component of ATE planning, as ATE projects often engage with multiple institutions through the creation of new certifications, development of new industry partnerships, and expansion of outreach efforts in public schools. The evaluator should take detailed notes on the internal and external partnerships involved with the project. Sometimes, to support my own understanding as an evaluator, it helps me to visually map these relationships. Also, the evaluator should prepare for the unexpected: partners may change during the planning process as partner roles and program purposes become more clearly defined.

Integrate evaluation thinking into conversations early on. Once the team gets through the first couple of proposal drafts, it helps if the evaluator creates an evaluation plan and the team makes time to review it as a group. This will help the planning team clarify the evaluation questions to be addressed and outcomes to be measured. This review also allows the team to see how their outcomes can be clearly attached to program activities and measured through specific methods of data collection. Sometimes during this process, I speak up if a component could use further discussion (e.g., cohort size, mentoring practices). If an evaluator has been engaged from the beginning and has gotten to know the partners, they have likely built the trust necessary to add value to the discussion of the proposal’s central components.

Operate as an illuminator. A colleague I admire once suggested that evaluation be used as a flashlight, not as a hammer. This perspective of prioritizing exploration and illumination over determination of cause and effect has informed my work. Useful evaluations certainly require sound evaluation methodology, but they also require the crafting of results into compelling stories, told with data guiding the way. This requires working with others as interpretations unfold, discovering how findings can be communicated to different audiences, and listening to what stakeholders need to move their initiatives forward.

ATE programs offer participants critical opportunities to be a part of our country’s future workforce. Stakeholders are passionate about their programs. Careful, thoughtful engagement throughout the proposal-writing process builds trust while contributing to a quality proposal with a strong evaluation plan.

Blog: Evaluation Feedback Is a Gift

Posted on July 3, 2018 in Blog

Chemistry Faculty, Anoka-Ramsey Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Christopher Lutz, chemistry faculty at Anoka-Ramsey Community College. When our project was initially awarded, I was a first-time National Science Foundation (NSF) principal investigator. I understood external evaluation was required for grants but saw it as an administrative hurdle in the grant process. I viewed evaluation as proof for the NSF that we did the project and as a metric for outcomes. While both of these aspects are important, I learned evaluation is also an opportunity to monitor and improve your process and grant. Working with our excellent external evaluators, we built a stronger program in our grant project. You can too, if you are open to evaluation feedback.

Our evaluation team was composed of an excellent evaluator and a technical expert. I started working with both about halfway through the proposal development process (a few months before submission) to ensure they could contribute to the project. I recommend contacting evaluators during the initial stages of proposal development and checking in several times before submission. This gives adequate time for your evaluators to develop a quality evaluation plan and gives you time to understand how to incorporate your evaluator’s advice. Our funded project yielded great successes, but we could have saved time and achieved more if we had involved our evaluators earlier in the process.

After receiving funding, we convened grant personnel and evaluators for a face-to-face meeting to avoid wasted effort at the project start. Meeting in person allowed us to quickly collaborate on a deep level. For example, our project evaluator made real-time adjustments to the evaluation plan as our academic team and technical evaluator worked to plan our project videos and training tools. Include evaluator travel funds in your budget and possibly select an evaluator who is close by. We did not designate travel funds for our Kansas-based evaluator, but his ties to Minnesota and understanding of the value of face-to-face collaboration led him to use some of his evaluation salary to travel and meet with our team.

Here are three ways we used evaluation feedback to strengthen our project:

Example 1: The first-year evaluation report showed a perceived deficiency in the project’s provision of hands-on experience with MALDI-MS instrumentation. In response, we had students prepare small quantities of the solutions themselves instead of giving them pre-mixed solutions, and we let them analyze more lab samples. This change required minimal time but led students to regard the project’s hands-on nature as a strength in the second-year evaluation.

Example 2: Another area for improvement was students’ lack of confidence in analyzing data. In response to this feedback, project staff created Excel data analysis tools and a new training activity in which students practice with literature data prior to analyzing their own. The subsequent year’s evaluation report indicated increased student confidence.

Example 3: Input from our technical evaluator allowed us to create videos that have been used in academic institutions in at least three US states, the UK’s Open University system, and Iceland.

Provided here are some overall tips:

  1. Work with your evaluator(s) early in the proposal process to avoid wasted effort.
  2. Build in at least one face-to-face meeting with your evaluator(s).
  3. Review evaluation data and reports with the goal of improving your project in the next year.
  4. Consider external evaluators as critical friends who are there to help improve your project.

This will help move your project forward and help you have a greater impact for all.

Blog: Creating Interactive Documents

Posted on June 20, 2018 in Blog

Executive Director, Healthy Climate Alliance

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In 2016, I was an intern at the Evaluation Office (EVAL) of the International Labour Organization, where the constant question was, “How do we get people to read the reports that we spend so much time and energy on?” I had been looking for a new project that would be useful to my colleagues in EVAL, and a bolt of inspiration hit me: what if I could use the key points and information from one of the dense reports to make an interactive summary report? That project led me to the general concept of interactive documents, which can be used for reports, timelines, logic models, and more.

I recommend building interactive documents in PowerPoint and then exporting them as PDFs. I use Adobe Acrobat Pro to add clickable areas to the PDF that will lead readers to a particular section of the PDF or to a webpage. Interactive documents are not intended to be read from beginning to end. It should be easy for readers to navigate directly from the front page to the content that interests them, and back to the front page.
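
If you prefer a scriptable alternative to Acrobat Pro (or need to rebuild links after regenerating the PDF), the same kind of clickable areas can be added programmatically. The sketch below uses the PyMuPDF library; the file names, page numbers, and rectangle coordinates are placeholders I made up for illustration, not part of the workflow described above.

```python
import fitz  # PyMuPDF; a scriptable alternative to adding links in Acrobat Pro

doc = fitz.open("interactive_report.pdf")  # hypothetical file exported from PowerPoint
launch_page = doc[0]

# Clickable area on the launch page that jumps to a content page (coordinates are placeholders).
launch_page.insert_link({
    "kind": fitz.LINK_GOTO,
    "from": fitz.Rect(72, 200, 260, 230),  # rectangle over one launch-page item
    "page": 3,                             # zero-based index of the target page
})

# "Back to launch page" button on that content page.
doc[3].insert_link({
    "kind": fitz.LINK_GOTO,
    "from": fitz.Rect(500, 760, 580, 780),
    "page": 0,
})

# External link, e.g., to the full-length report the summary is built from.
launch_page.insert_link({
    "kind": fitz.LINK_URI,
    "from": fitz.Rect(72, 300, 260, 330),
    "uri": "https://example.org/full-report",
})

doc.save("interactive_report_linked.pdf")
```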

While building my interactive documents in PowerPoint, I follow Nancy Duarte’s Slidedocs principles to create visual documents that are intended to be read rather than presented. She suggests providing content that is clear and concise, using small chunks of text, and interspersing visuals. I use multiple narrow columns of text, with visuals on each page.

Interactive documents include a “launch page,” which gives a map-like overview of the whole document.

The launch page (see figure) allows readers to absorb the structure and main points of the document and to decide where they want to “zoom in” for more detail. I try to follow the wise advice of Edward Tufte: “Don’t ‘know your audience.’ Know your content and trust your audience.” He argues that we shouldn’t try to distill key points and simplify our data to make it easier for audiences to absorb. Readers will each have their own agendas and priorities, and we should make it as easy as possible for them to access the data that is most useful to them.

The launch page of an interactive document should have links all over it; every item of content on the launch page should lead readers to more detailed information on that topic. Every subsequent page should be extremely focused on one topic. If there is too much content within one topic, you can create another launch page focused on that particular topic (e.g., the “Inputs” section of the logic model).

The content pages should have buttons (i.e., links) that allow readers to navigate back to the main launch page or forward to the following page. If there’s a more detailed document that you’re building from, you may also want to link to that document on every page.

Try it out! Remember to keep your interactive document concise and navigable.

Blog: Modifying Grant Evaluation Project Objectives

Posted on June 11, 2018 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Evelyn Brown
Director, Extension Research and Development
NC State Industry Expansion Solutions
Leressa Suber
Evaluation Coordinator
NC State Industry Expansion Solutions

In the grant evaluations we perform, our clients develop specific project objectives to drive attainment of overall grant goals. We work with principal investigators (PIs) to monitor work plan activities and project outcomes to ensure objectives are attainable, measurable, and sustainable.

However, what happens when the project team encounters obstacles to starting the activities related to project objectives? What shifts need to be made to meet grant goals?

When the team determines that a project objective cannot be achieved as initially planned, it’s important for the PI and evaluator to determine how to proceed. Below, we’ve highlighted three scenarios in which it may be necessary to shift, change, or eliminate a project objective. If changes are made, the team can then determine, based on the extent of the modifications, whether and when the PI should notify the project funder.

Example: Shift in Project Objective

Grant goal: Help underclassmen understand what engineers do by observing the day-to-day activities of a local engineer.
Problem: The advisory board members (engineers) in the field were unavailable.
Objective: Current: shadow an advisory board member. Change: shadow young engineering alumni.
Result: The goal is still attainable.
Notify funder? No, but provide an explanation/justification in the end-of-year report.

Example: Change a Project Objective

Grant goal: Create a method by which students at the community college will earn a credential indicating they are prepared for employment in a specific technical field.
Problem: The state process to establish a new certificate is time consuming and can’t occur within the grant period.
Objective: Current: complete a degree in the specific technical field. Change: complete a certificate in the specific technical field.
Result: The goal is still attainable.
Notify funder? Yes; specifically, contact the funding program officer.

Example: Eliminate the Project Objective

Grant goal: Participants’ salaries will increase as a result of completing the program.
Problem: Following program exit, salary data is unavailable.
Objective: Current: compare each participant’s salary at the start of the program to their salary three months after program completion. Change: the objective was eliminated because the team could not maintain contact with program completers to obtain salary information.
Result: The goal cannot realistically be measured.
Notify funder? Yes; specifically, contact the funding program officer.

In our experience working with clients, we’ve found that the best way to minimize the need to modify project objectives is to ensure they are well written during the grant proposal phase.

Tips: How to write attainable project objectives.

1. Thoroughly think through objectives during the grant development phase.

The National Science Foundation (NSF) provides guidance to assist PIs with constructing realistic project goals and objectives; we’ve linked to the NSF’s proposal development guide below. In the meantime, here are a few key considerations:

  • Are the project objectives clear?
  • Are the resources necessary to accomplish the objectives clearly identified?
  • Are there barriers to accessing the needed resources?

2. Seek evaluator assistance early in the grant proposal process.

Link to additional resources: NSF – A Guide for Proposal Writing

Blog: Measure What Matters: Time for Higher Education to Revisit This Important Lesson

Posted on May 23, 2018 in Blog

Senior Partner, Cosgrove & Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If one accepts Peter Drucker’s premise that “what gets measured, gets managed,” then two things are apparent: measurement is valuable, but measuring the wrong thing has consequences. Data collection efforts focusing on the wrong metrics lead to mismanagement and failure to recognize potential opportunities. Focusing on the right measures matters. For example, in Moneyball, Michael Lewis describes how the Oakland Athletics improved their won-loss record by revising player evaluation metrics to more fully understand players’ potential to score runs.

The higher education arena has equally high stakes concerning evaluation. A growing number of states (more than 30 in 2017)[1] have adopted performance funding systems to allocate higher education funding. Such systems focus on increasing the number of degree completers and have been fueled by calls for increased accountability. The logic of performance funding seems clear: Tie funding to the achievement of performance metrics, and colleges will improve their performance. However, research suggests we might want to re-examine this logic.  In “Why Performance-Based College Funding Doesn’t Work,” Nicholas Hillman found little to no evidence to support the connection between performance funding and improved educational outcomes.

Why are more states jumping on the performance-funding train? States are under political pressure, facing calls for increased accountability with limited taxpayer dollars. But do the chosen performance metrics capture the full impact of education? Do the metrics result in more efficient allocation of state funding? The jury may still be out on these questions, but Hillman’s evidence suggests the answer is no.

The disconnect between performance funding and improved outcomes may widen even more when one considers open-enrollment colleges or colleges that serve a high percentage of adult, nontraditional, or low-income students. For example, when a student transfers from a community college (without a two-year degree) to a four-year college, should that behavior count against the community college’s degree completion metric? Might that student have been well-served by their time at the lower-cost college? When community colleges provide higher education access to adult students who enroll on a part-time basis, should they be penalized for not graduating such students within the arbitrary three-year time period? Might those students and that community have been well-served by access to higher education?

To ensure more equitable and appropriate use of performance metrics, colleges and states would be well served to revisit current performance metrics and more clearly define appropriate measures and data collection strategies. Most importantly, states and colleges should connect the analysis of performance metrics to clear and funded pathways for improvement. Stepping back to remember that the goal of performance measurement is to help build capacity and improve performance will place both parties in a better position to support and evaluate higher education performance in a more meaningful and equitable manner.

[1] Jones, T., & Jones, S. (2017, November 6). Can equity be bought? A look at outcomes-based funding in higher ed [Blog post].

Blog: Attending to culture, diversity, and equity in STEM program evaluation (Part 2)

Posted on May 9, 2018 in Blog

Assistant Professor, Department of Educational Research Methodology, University of North Carolina and Greensboro

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In my previous post, I gave an overview of two strategies you can use to inform yourself about the theoretical aspect of engagement with culture, diversity, and equity in evaluation. I now present two practical strategies, which I believe should follow the theoretical strategies presented in my previous post.

Strategy three: Engage with related sensitive topics informally

To begin to feel comfortable with these topics, engage with these issues during interactions with your evaluation team members, clients, or other stakeholders. Evaluators should acknowledge differing stakeholder opinions, while also attempting to assist stakeholders in surfacing their own values, prejudices, and subjectivities (Greene, Boyce, & Ahn, 2011).

To do this, bring up issues of race, power, inequity, diversity, and culture for dialogue in meetings, emails, and conversations (Boyce, 2017). Call out and discuss micro-aggressions (Sue, 2010) and practice acts of micro-validation (Packard, Gagnon, LaBelle, Jeffers, & Lynn, 2011). For example, when meeting with clients, you might ask them to discuss how they plan to ensure not just diversity but inclusivity within their program. You can also ask them to chart out program goals through a logic model and to consider whether underrepresented participants might experience the program differently than majority participants. Ask clients if they have considered cultural sensitivity training for program managers and/or participants.

Strategy four: Attend to issues of culture, equity, and diversity formally

Numerous scholars have addressed the implications of cultural responsiveness in practice (Frierson, Hood, Hughes, & Thomas, 2010; Hood, Hopson, & Kirkhart, 2015), with some encouraging contemplation of threats to, as well as evidence for, multicultural validity by examining relational, consequential, theoretical, experiential, and methodological justificatory perspectives (Kirkhart, 2005, 2010). I believe the ultimate goal is to be able to attend to culture and context in all formal aspects of research and evaluation. It is especially important to take a strengths-based, anti-deficit approach (Chun & Evans, 2009) and to attend to intersectionality in research (Collins, 2000).

To do this, you can begin with the framing of the program goals. Many programs aim to give underrepresented minorities in STEM the skills to survive in the field. This perspective assumes that something is inherently wrong with these students. Instead, think about rewording evaluation questions to examine the culture of the department or program and to explore why more members of underrepresented groups (at least enough to reach parity with their share of the population) don’t thrive there. Further, evaluators can attempt to include these topics in evaluation questions, develop culturally commensurate data collection instruments, and be sensitive to these issues during data collection, analysis, and reporting. Challenge yourself to think of this attention as more than the inclusion of symbolic and politically correct buzzwords (Boyce & Chouinard, 2017), but as a true infusion of these aspects into your practice. For example, I always include an evaluation question about diversity, equity, and culture in my evaluation plans.

These two blog posts are really just the tip of the iceberg. I hope you find these strategies useful as you begin to engage with culture, equity, and diversity in your work. As I previously noted, I have included citations throughout so that you can read more about these important concepts. In a recently published article, my colleague Jill Anne Chouinard and I discuss how we trained evaluators to work through these strategies in a Culturally Responsive Approaches to Research and Evaluation course (Boyce & Chouinard, 2017).


Blog: Attending to culture, diversity, and equity in STEM program evaluation (Part 1)

Posted on May 1, 2018 in Blog

Assistant Professor, Department of Educational Research Methodology, University of North Carolina and Greensboro

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The conversation, both practical and theoretical, surrounding culture, diversity, and equity in evaluation has grown in recent years. As many STEM education programs aim to broaden the participation of women, ethnic minority groups, and persons with disabilities, attention to culture, diversity, and equity is paramount. In two blog posts, I will provide a brief overview of four strategies for engaging meaningfully and respectfully with these important topics. In this first post, I focus on strategies that are helpful for learning more about these issues but that are theoretical and not directly tied to evaluation practice. I should note that I have purposely included a number of citations so that you may read further about these topics.

Strategy one: Recognize social inquiry is a cultural product

Social science knowledge of minority populations, constructed with narrow worldviews, has demeaned characteristics, distorted interpretations of conditions and potential, and remained limited in its capacity to inform efforts to improve the life chances of historically disadvantaged populations (Ladson-Billings, 2000). Begin by educating yourself about the role that communicentric bias—the tendency to make one’s own community, often the majority class, the center of the conceptual frames that constrain all thought (Gordon, Miller, & Rollock, 1990)—and individual, institutional, societal, and civilizational racism play in education and the social sciences (Scheurich & Young, 2002). Seek to understand the culture, context, historical perspectives, power, oppressions, and privilege in each new context (Greene, 2005; Pon, 2009).

To do this, you can read and discuss books, articles, and chapters related to epistemologies— theories of knowledge—of difference, racialized discourses, and critiques about the nature of social inquiry. Some excellent examples include Stamped from the Beginning by Ibram X. Kendi, The Shape of the River by William G. Bowen and Derek Bok, and Race Matters by Cornel West. Each of these books is illuminating and a must-read as you begin or continue your journey to better understand race and privilege in America. Perhaps start a book club so that you can process these ideas with colleagues and friends.

Strategy two: Locate your own values, prejudices, and identities

The lens through which we view the world influences all evaluation processes, from design to implementation and interpretation (Milner, 2007; Symonette, 2015). In order to think critically about issues of culture, power, equity, class, race, and diversity, evaluators should understand their own personal and cultural values (Symonette, 2004). As Peshkin (1988) has noted, the practice of locating oneself can result in a better understanding of one’s own subjectivities. In my own work, I always attempt to acknowledge the role my education, gender, class, and ethnicity will play.

To do this, you can reflect on your own educational background, personal identities, experiences, values, prejudices, predispositions, beliefs, and intuition. Focus on your own social identity, the identities of others, whether you belong to any groups with power and privilege, and how your educational background and identities shape your beliefs, role as an evaluator, and experiences. To unearth some of the more underlying values, you might consider participating in a privilege walk exercise and reflecting on your responses to current events.

These two strategies are just the beginning. In my second blog post, I will focus on engaging with these topics informally and formally within your evaluation practice.


Blog: Documenting Evaluations to Meet Changing Client Needs: Why an “Evaluation Plan” Isn’t Enough

Posted on April 11, 2018 in Blog

CEO, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

No plan of action survives first contact with the enemy – Helmuth von Moltke (paraphrased)

Evaluations are complicated examinations of complex phenomena. It is optimistic to assume that the details of an evaluation won’t change, particularly for a multiyear project. So how can evaluators deal with the inevitable changes? I propose that purposeful documentation of evaluations can help. In this blog, I focus on the distinctions among three types of documents—the contract, scope of work, and study protocol—each serving a specific purpose.

  • The contract codifies legal commitments between the evaluator and client. Contracts inevitably outline the price of the work, period of the agreement, and specifics like payment terms. They are hard to change after execution, and institutional clients often insist on using their own terms. Given this, while it is possible to revise a contract, it is impractical to use the contract to manage and document changes in the evaluation. I advocate including operational details in a separate “scope of work” (SOW) document, which can be external or appended to the contract.
  • The scope of work translates the contract into an operational business relationship, listing the responsibilities of both the evaluator and client, tasks, deliverables, and timeline in detail sufficient for effective management of quality and cost. Because the scope of an evaluation will almost certainly change (timelines seem to be the first casualty), it is necessary to establish a process to document “change orders”—detailing the revision to the SOW, who proposed it (either party may), and who accepted it—to avoid conflict. If a change to the scope does not affect the price of the work, it may be possible to manage and record changes without having to revisit the contract. I encourage evaluators to maintain “working copies” of the SOW, with changes, dates, and details of approval communications from clients. At Hezel Associates, our practice is to share iterations of the SOW with the client when the work changes, with version dates, to document the evaluation-as-implemented so everyone has the same picture of the work. (A minimal sketch of such a change-order record appears after this list.)
Figure: Working copy of a scope of work.

  • The study protocol then goes further, defining technical aspects of the research central to the work being performed. A complex evaluation project might require more than one protocol (e.g., for formative feedback and impact analysis), each being similar in concept to the Methods section of a thesis or dissertation. A protocol details questions to be answered, the study design, data needs, populations, data collection strategies and instrumentation, and plans for analyses and reporting. A protocol frames processes to establish and maintain appropriate levels of study rigor, builds consensus among team members, and translates evaluation questions into data needs and instrumentation to assure collection of required data before it is too late. Technical aspects of the evaluation are central to the quality of the work but likely to be mostly opaque to the client. I argue that it is crucial that such changes be formally documented in the protocol, but I suggest maintaining such technical information as internal documents for the evaluation team—unless a given change impacts the SOW, at which point the scope must be formally revised as well.
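
To make the change-order idea concrete, here is a minimal sketch of how such a record might be structured. The class and field names are illustrative assumptions, not a standard format or the documents Hezel Associates actually uses.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ChangeOrder:
    """One documented revision to the scope of work (SOW); fields are hypothetical."""
    number: int
    approved_on: date
    proposed_by: str             # either party may propose a change
    accepted_by: str
    description: str             # which task, deliverable, or timeline changed, and how
    affects_price: bool = False  # True would also trigger a contract conversation

@dataclass
class ScopeOfWork:
    """Working copy of the SOW, re-versioned as changes are approved."""
    version: str
    tasks: List[str]
    change_orders: List[ChangeOrder] = field(default_factory=list)

    def record_change(self, order: ChangeOrder) -> None:
        self.change_orders.append(order)
        self.version = f"{self.version}+co{order.number}"

# Example: the client asks to push a survey back one quarter.
sow = ScopeOfWork(version="2018-04-11", tasks=["Baseline survey", "Interim report"])
sow.record_change(ChangeOrder(
    number=1,
    approved_on=date(2018, 6, 1),
    proposed_by="client",
    accepted_by="evaluator",
    description="Baseline survey moved from Q2 to Q3; interim report date unchanged",
))
```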

Each of these types of documentation serves an entirely different function as part of what might be called an “evaluation plan,” and all are important to a successful, high-quality project. Any part may be combined with others in a single file, transmitted to the client as part of a “kit,” maintained separately, or perhaps not shared with the client at all. Regardless, our experience has been that effective documentation will help avoid confusion after marching onto the evaluation field of battle.

Blog: Summarizing Project Milestones

Posted on March 28, 2018 in Blog

Evaluation Specialist, Thomas P. Miller & Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

With any initiative, it can be valuable to document and describe the implementation to understand what occurred and what shifts or changes were made to the original design (e.g., fidelity to the model). This understanding helps when replicating, scaling, or seeking future funding for the initiative.

Documentation can be done by the evaluator and be shared with the grantee (as a way to validate an evaluator’s understanding of the project). Alternatively, project staff can document progress and share this with the evaluator as a way to keep the evaluation team up to date (which is especially helpful on small-budget evaluation projects).

The documentation of progress can be extremely detailed or high level (e.g., a snapshot of the initiative’s development). When tracking implementation milestones, consider:

  1. What is the goal of the document?
  2. Who is the audience?
  3. What are the most effective ways to display and group the data?

For example, if you are interested in understanding a snapshot of milestones and modifications of the original project design, you might use a structure like the one below:

Image 1: Snapshot of milestones and modifications, organized by quarter.

If you are especially interested in highlighting the effect of delays on project implementation and the cause, you may adjust the visual to include directional arrows and shading:

Image 2: Snapshot with directional arrows and shading to highlight delays and their causes.

In these examples, we organized the snapshot by quarterly progress, but you can group milestones by month or even include a timeline of the events. Similarly, in Image 2 we categorized progress in buckets (e.g., curriculum, staffing) based on key areas of the grant’s goals and activities. These categories should change to align with the unique focus of each initiative. For example, if professional development is a considerable part of the grant, then perhaps placing that into a separate category (instead of combining it with staffing) would be best.
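
If your milestone log lives in a spreadsheet, a short script can produce this kind of quarter-by-category snapshot automatically. The sketch below uses pandas with made-up milestones and category names; it is one possible way to build the grouping described above, not the framework shown in the images.

```python
import pandas as pd

# Hypothetical milestone log; in practice this might be exported from a shared tracking sheet.
milestones = pd.DataFrame([
    {"date": "2018-01-15", "category": "Curriculum", "milestone": "Course outline approved"},
    {"date": "2018-02-20", "category": "Staffing",   "milestone": "Lab technician hired"},
    {"date": "2018-04-10", "category": "Curriculum", "milestone": "Pilot module delivered"},
    {"date": "2018-05-02", "category": "Equipment",  "milestone": "Instrument installation delayed (adjustment)"},
])

milestones["date"] = pd.to_datetime(milestones["date"])
milestones["quarter"] = milestones["date"].dt.to_period("Q").astype(str)

# One cell per quarter/category, with that period's milestones joined into a short narrative.
snapshot = (
    milestones.groupby(["quarter", "category"])["milestone"]
    .agg("; ".join)
    .unstack(fill_value="")
)
print(snapshot)
```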

Another important consideration is the target audience. We have used this framework when communicating with project staff and leadership to show, at a high level, what is taking place within the project. This diagramming has also been valuable for sharing knowledge across our evaluation staff members, leading to discussions around fidelity to the model and any shifts or changes that may need to occur within the evaluation design, based on project implementation. Some of your stakeholders, such as project funders, may want more information than just the snapshot. In these cases, you may consider adding additional detail to the snapshot visual, or starting your report with the snapshot and then providing an additional narrative around each bucket and/or time period covered within the visual.

Also, the framework itself can be modified. If, for example, you are more concerned about showing the cause and effect instead of adjustments, you may group everything together as “milestones” instead of having separate categories for “adjustments” and “additional milestones.”

For our evaluation team, this approach has been a helpful way to consolidate, disseminate, and discuss initiative milestones with key stakeholder groups such as initiative staff, evaluators, college leadership, and funders. We hope this will be valuable to you as well.