Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Items we like to include on our cheat sheets are shown in Figures 1-3 and include the following:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with a lot of data collection sources. The example shown in Figure 2 identifies who implements the instrument, when the instrument will be implemented, the purpose of the instrument, and the data source. It is helpful to identify who is responsible for data collection activities in the cheat sheet, so nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.
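
If you prefer to keep such a progress table in a script or spreadsheet rather than a word processor, a few lines of Python can produce the same overview. This is a minimal sketch with made-up deliverables, dates, and statuses (not the format of our actual cheat sheets), assuming the pandas library is available:

```python
import pandas as pd

# Hypothetical deliverables for illustration only
progress = pd.DataFrame([
    {"deliverable": "Student survey summary", "due": "2019-03-01",
     "evaluation questions": "EQ2, EQ3", "status": "Complete"},
    {"deliverable": "Annual report (Year 1)", "due": "2019-06-15",
     "evaluation questions": "EQ1, EQ2", "status": "Drafting"},
    {"deliverable": "Site-visit memo", "due": "2019-11-30",
     "evaluation questions": "EQ1", "status": "Not started"},
])

# A simple status overview, ordered by due date, for an evaluation team meeting
print(progress.sort_values("due").to_string(index=False))
```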

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved. That way, if we need to update the plan, we can be sure we are working on the most recent version.

 

Figure 1. Cheat Sheet Example Page 1.

Figure 2. Cheat Sheet Example Page 2.

Figure 3. Cheat Sheet Example Page 3.

 

Blog: Using Mixed-Mode Survey Administration to Increase Response

Posted on September 26, 2018 in Blog

Program Evaluator, Cold Spring Harbor Laboratory, DNA Learning Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Why aren’t people responding?”

This is the perpetual question asked by anyone doing survey research, and it’s one I am no stranger to. There are common strategies to combat low survey participation, but what happens when they fail?

Last year, I was co-principal investigator on a small Advanced Technological Education (ATE) grant to conduct a nationwide survey of high school biology teachers. This was a follow-up to a 1998 survey done as part of an earlier ATE grant my institution had received. In 1998, the survey was done entirely by mail and had a 35 percent response rate. In 2018, we administered an updated version of this survey to nearly 13,000 teachers. However, this time, there was one big difference: we used email.

After a series of four messages over two months (pre-notice, invitation, and two reminders), an incentivized survey, and intentional targeting of high school biology teachers, our response rate was only 10 percent. We anticipated that teachers would be busy and that a 15-minute survey might be too much for many of them to deal with at school. However, there appeared to be a bigger problem: nearly two-thirds of our messages were never opened and perhaps never even seen.

To boost our numbers, we decided to return to what had worked previously: the mail. Rather than send more emails, we mailed an invitation to individuals who had not completed the survey, followed by postcard reminders. Individuals were reminded of the incentive and directed to a web address where they could complete the survey online. The end result was a 14 percent response rate.

I noticed that, particularly when emailing teachers at their school-provided email addresses, many messages never reached the intended recipients. A mail-only design may no longer be practical, but an alternative is to heed the advice of Millar and Dillman (2011): use a mixed-mode, web-then-mail contact strategy so that spam filters don’t keep participants from being part of your survey. Asking the following questions can help guide your method-of-contact decisions and help you avoid troubleshooting a low response rate mid-survey.

  1. Have I had low response rates from a similar population before?
  2. Do I have the ability to contact individuals via multiple methods?
  3. Is using the mail cost- or time-prohibitive for this particular project?
  4. What is the sample size necessary for my sample to reasonably represent the target population? (See the calculation sketch after this list.)
  5. Have I already made successful contact with these individuals over email?
  6. Does the survey tool I’m using (SurveyMonkey, Qualtrics, etc.) tend to be snagged by spam filters if I use its built-in invitation management features?
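
On question 4, a rough target is easy to compute before you commit to a contact strategy. Below is a minimal sketch using the standard formula for estimating a proportion, with a finite population correction; the numbers are illustrative, and the right target ultimately depends on the analyses you plan:

```python
import math

def required_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Completed responses needed to estimate a proportion, with finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / population))   # adjust for the finite frame

# With a frame of roughly 13,000 teachers (as in the survey described above):
print(required_sample_size(13_000))  # about 374 completed surveys
```

Even if a 10 percent response rate clears that bar numerically, representativeness still depends on who responds, not just how many.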

These are just some of the considerations that may help you avoid major spam filter issues in your forthcoming project. Spam filters may not be the only reason for a low response rate, but anything that can be done to mitigate their impact is a step toward a better response rate for your surveys.


Reference

Millar, M., & Dillman, D. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly 75, 249–269.

Blog: Using Rubrics to Demonstrate Educator Mastery in Professional Development

Posted on September 18, 2018 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Nena Bloom
Evaluation Coordinator
Center for Science Teaching and Learning, Northern Arizona University
Lori Rubino-Hare
Professional Development Coordinator
Center for Science Teaching and Learning, Northern Arizona University

We are Nena Bloom and Lori Rubino-Hare, the internal evaluator and principal investigator, respectively, of the Advanced Technological Education project Geospatial Connections Promoting Advancement to Careers and Higher Education (GEOCACHE). GEOCACHE is a professional development (PD) project that aims to enable educators to incorporate geospatial technology (GST) into their classes, to ultimately promote careers using these technologies. Below, we share how we collaborated on creating a rubric for the project’s evaluation.

One important outcome of effective PD is the ability to master new knowledge and skills (Guskey, 2000; Haslam, 2010). GEOCACHE identifies “mastery” as participants’ effective application of the new knowledge and skills in educator-created lesson plans.

GEOCACHE helps educators teach their content through Project Based Instruction (PBI) that integrates GST. In PBI, students collaborate and critically examine data to solve a problem or answer a question. Educators were provided 55 hours of PD, during which they experienced model lessons integrated with GST content. Educators then created lesson plans tied to the curricular goals of their courses, infusing opportunities for students to learn appropriate subject matter through the exploration of spatial data. “High-quality GST integration” was defined as opportunities for learners to collaboratively use GST to analyze and/or communicate patterns in data to describe phenomena, answer spatial questions, or propose solutions to problems.

We analyzed the educator-created lesson plans using a rubric to determine if GEOCACHE PD supported participants’ ability to effectively apply the new knowledge and skills within lessons. We believe this is a more objective indicator of the effectiveness of PD than solely using self-report measures. Rubrics, widespread methods of assessing student performance, also provide meaningful information for program evaluation (Davidson, 2004; Oakden, 2013). A rubric illustrates a clear standard and set of criteria for identifying different levels of performance quality. The objective is to understand the average skill level of participants in the program on the particular dimensions of interest. Davidson (2004) proposes that rubrics are useful in evaluation because they help make judgments transparent. In program evaluation, scores for each criterion are aggregated across all participants.
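
As a simple illustration of that aggregation step, long-format rubric scores can be summarized per criterion and compared against a mastery threshold. The criteria, scale, and scores below are hypothetical and are not drawn from GEOCACHE’s rubric:

```python
import pandas as pd

# Hypothetical scores: one row per participant x criterion (1 = beginning ... 4 = exemplary)
scores = pd.DataFrame({
    "participant": ["T01", "T01", "T02", "T02", "T03", "T03"],
    "criterion":   ["GST integration", "PBI design"] * 3,
    "score":       [3, 2, 4, 3, 2, 2],
})

# Average skill level on each dimension of interest, aggregated across participants
print(scores.groupby("criterion")["score"].agg(["mean", "count"]))

# Share of participants at or above a "mastery" threshold (e.g., 3) on every criterion
mastery_rate = (scores.groupby("participant")["score"].min() >= 3).mean()
print(f"{mastery_rate:.0%} of participants reached the threshold on all criteria")
```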

Practices we used to develop and utilize the rubric included the following:

  • We developed the rubric collaboratively with the program team to create a shared understanding of performance expectations.
  • We focused on aligning the criteria and expectations of the rubric with the goal of the lesson plan (i.e., to use GST to support learning goals through PBI approaches).
  • Because good rubrics existed but were not entirely aligned with our project goal, we chose to adapt existing technology integration rubrics (Britten & Cassady, 2005; Harris, Grandgenett, & Hofer, 2010) and PBI rubrics (Buck Institute for Education, 2017) to include GST use, rather than start from scratch.
  • We checked that the criteria at each level were clearly defined, to ensure that scoring would be accurate and consistent.
  • We pilot tested the rubric with several units, using several scorers, and revised accordingly.

This authentic assessment of educator learning informed the evaluation. It provided information about the knowledge and skills educators were able to master and how the PD might be improved.


References and resources

Britten, J. S., & Cassady, J. C. (2005). The Technology Integration Assessment Instrument: Understanding planned use of technology by classroom teachers. Computers in the Schools, 22(3), 49-61.

Buck Institute for Education. (2017). Project design rubric. Retrieved from http://www.bie.org/object/document/project_design_rubric

Davidson, E. J. (2004). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Corwin Press.

Harris, J., Grandgenett, N., & Hofer, M. (2010). Testing a TPACK-based technology integration assessment instrument. In C. D. Maddux, D. Gibson, & B. Dodge (Eds.), Research highlights in technology and teacher education 2010 (pp. 323-331). Chesapeake, VA: Society for Information Technology and Teacher Education.

Haslam, M. B. (2010). Teacher professional development evaluation guide. Oxford, OH: National Staff Development Council.

Oakden, J. (2013). Evaluation rubrics: How to ensure transparent and clear assessment that respects diverse lines of evidence. Melbourne, Australia: BetterEvaluation.

Blog: Four Personal Insights from 30 Years of Evaluation

Posted on August 30, 2018 in Blog

Haddix Community Chair of STEM Education, University of Nebraska Omaha

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As I complete my 30th year in evaluation, I feel blessed to have worked with so many great people. In preparation for this blog, I spent a reflective morning with some hot coffee, cereal, and wheat toast (that morning donut is no longer an option), and I looked over past evaluations. I thought about any personal insights that I might share, and I came up with four:

  1. Lessons Learned Are Key: I found it increasingly helpful over the years to think about a project evaluation as a shared learning journey, taken with the project leadership. In this context, we both want to learn things that we can share with others.
  2. Evaluator Independence from Project Implementation Is Critical: Nearly 20 years ago, a program officer read in a project annual report that I had done a workshop on problem-based learning for the project. In response, he kindly asked if I had “gone native,” which is slang for a project evaluator getting so close to the project it threatens independence. As I thought it over, he had identified something that I was becoming increasingly uncomfortable with. It became difficult to offer suggestions on implementing problem-based learning when I had offered the training. That quick, thoughtful inquiry helped me to navigate that situation. It also helped me to think about my own future evaluator independence.
  3. Be Sure to Update Plans after Funding: I always adjust a project evaluation plan after the award. Once funded, everyone really digs in, and opportunities typically surface to make the project and its evaluation even better. I have come to embrace that process. I now typically include an “evaluation plan update” phase before we initiate an evaluation, to ensure that the evaluation plan is the best it can truly be when we implement it.
  4. Fidelity Is Important: It took me 10 years in evaluation before I fully understood the “fidelity issue.” Fidelity, loosely defined, is how faithful program implementers are to the recipe of a program intervention. The first time I became concerned with fidelity, I was evaluating the implementation of 50 hours of curriculum. As I interviewed the teachers, it became clear that they were spending vastly different amounts of time on topics and activities. Like all good teachers, they had made the curriculum their own, but in many ways, the intended project intervention disappeared. This made it hard to learn much about the intervention. Over time, I came to include a fidelity feedback process in my projects, so I could statistically adjust for that natural variation or examine differing impacts based on intervention fidelity.
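
For readers who want to see what that last step can look like in practice, here is a minimal sketch of one common approach: entering a fidelity score into a regression model, either as a covariate to adjust for implementation variation or interacted with a treatment indicator to examine differing impacts. The data are simulated for illustration and do not come from the project described above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate one row per classroom: a pretest, a fidelity score (share of the
# intended curriculum hours actually delivered), and an outcome measure.
rng = np.random.default_rng(0)
n = 60
fidelity = rng.uniform(0.3, 1.0, n)
pretest = rng.normal(50, 10, n)
outcome = 0.5 * pretest + 15 * fidelity + rng.normal(0, 5, n)
df = pd.DataFrame({"fidelity": fidelity, "pretest": pretest, "outcome": outcome})

# Adjust for natural variation in implementation by entering fidelity as a covariate
model = smf.ols("outcome ~ pretest + fidelity", data=df).fit()
print(model.params)

# With a comparison group (treated = 0/1), an interaction such as
# "outcome ~ pretest + treated * fidelity" is one way to examine differing
# impacts at different levels of fidelity.
```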

In the last 30 years, program evaluation as a field has become increasingly useful and important. Like my days of eating donuts for breakfast, the days of “superficial” evaluation are increasingly gone. They have been replaced by evaluation strategies that are collaboratively planned, engaged, and flexible, which (like my wheat toast and cereal) get evaluators and project leadership further on the shared journey. Although I do periodically miss the donuts, I never miss the superficial evaluations. Overall, I am glad that I now have the cereal and toast—and that I conduct strong and collaborative program evaluations.

Blog: The Life-Changing Magic of a Tidy Evaluation Plan

Posted on August 16, 2018 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Effective tidying involves only three essential actions. All you need to do is take the time to examine every item you own, decide whether or not you want to keep it, then choose where to put what you keep. Designate a place for each thing.”

―Marie Kondo, The Life-Changing Magic of Tidying Up

I’ve noticed a common problem with some proposal evaluation plans: It’s not so much that they don’t include key information; it’s that they lack order. They’re messy. When you have only about two pages of a 15-page National Science Foundation proposal to describe an evaluation, you need to be exceptionally clear and efficient. In this blog, I offer tips on how to “tidy up” your proposal’s evaluation plan to ensure it communicates key information clearly and coherently.

First of all, what does a messy evaluation plan look like? It meanders. It frames the evaluation’s focus in different ways in different places in the proposal, or even within the evaluation section itself, leaving the reviewer confused about the evaluation’s purpose. It discusses data and data collection without indicating what those data will be used to address. It employs different terms to mean the same thing in different places. It makes it hard for reviewers to discern key information from the evaluation plan and understand how that information fits together.

Three Steps to Tidy up a Messy Evaluation Plan

It’s actually pretty easy to convert a messy evaluation plan into a tidy one:

  • State the evaluation’s focus succinctly. List three to seven evaluation questions that the evaluation will address. These questions should encompass all of your planned data collection and analysis—no more, no less. Refer to these questions as needed later in the plan rather than restating them differently or introducing new topics. Do not express the evaluation’s focus in different ways in different places.
  • Link the data you plan to collect to the evaluation questions. An efficient way to do this is to present the information in a table. I like to include evaluation questions, indicators, data collection methods and sources, analysis, and interpretation in a single table to clearly show the linkages and convey that my team has carefully thought about how we will answer the evaluation questions. Bonus: Presenting information in a table saves space and makes it easy for reviewers to locate key information. (See EvaluATE’s Evaluation Data Matrix Template; a small sketch of such a matrix also appears after this list.)
  • Use straightforward language—consistently. Don’t assume that reviewers will share your definition of evaluation-related terms. Choose your terms carefully and do not vary how you use them throughout the proposal. For example, if you are using the terms measures, metrics, and indicators, ask yourself if you are really referring to different things. If not, stick with one term and use it consistently. If similar words are actually intended to mean different things, include brief definitions to avoid any confusion about your meaning.
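
As promised in the second bullet, here is a small sketch of such a matrix. The questions, indicators, and methods are placeholders only; the point is the one-row-per-question structure, which can live in a spreadsheet just as easily as in code:

```python
import csv

# Placeholder rows: one row per evaluation question, linking it to indicators,
# data collection methods/sources, analysis, and interpretation.
matrix = [
    {"evaluation question": "EQ1: To what extent do participants gain technical skills?",
     "indicators": "Pre/post skills assessment scores",
     "data collection (methods/sources)": "Skills assessment; instructor ratings",
     "analysis": "Comparison of pre/post scores",
     "interpretation": "Gains of one level or more judged meaningful"},
    {"evaluation question": "EQ2: How well does the program meet employer needs?",
     "indicators": "Employer satisfaction; internship placements",
     "data collection (methods/sources)": "Employer interviews; program records",
     "analysis": "Thematic coding; descriptive counts",
     "interpretation": "Reviewed with the project's advisory board"},
]

with open("evaluation_data_matrix.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(matrix[0].keys()))
    writer.writeheader()
    writer.writerows(matrix)
```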

Can a Tidy Evaluation Plan Really Change Your Life?

If it moves a very good proposal toward excellent, then yes! In the competitive world of grant funding, every incremental improvement counts and heightens your chances for funding, which can mean life-changing opportunities for the project leaders, evaluators, and—most importantly—individuals who will be served by the project.

Blog: Becoming a Sustainability Sleuth: Leaving and Looking for Clues of Long-Term Impact

Posted on August 1, 2018 in Blog

Director, SageFox Consulting Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! I’m Rebecca from SageFox Consulting Group, and I’d like to start a conversation about measuring sustainability. Many of us work on ambitious projects whose long-term impacts cannot be achieved within the grant period and require grant activities to be sustained beyond it. Projects are often tasked with providing evidence of sustainability but are not given funding to assess sustainability and impact after the grant ends. In five, 10, or 15 years, if someone were to pick up your final report, would they be able to use it to get a baseline understanding of what occurred during the grant, and would they know where to look for evidence of impact and sustainability? Below are some suggestions for documenting “clues” of sustainability:

Relationships are one way projects are sustained. You may want to consider documenting evidence of the depth of relationships: are they person-dependent, or have they grown into true partnerships between the entities? The depth of a relationship is often revealed when a key supporter leaves their position but the relationship continues. You might also try to distinguish a person from a role. For example, one project I worked on lost the support of a key contact (due to a reorganization) at a federal agency that hosted student interns during the summer. There was enough goodwill and experience, however, that continued efforts from the project leadership resulted in more requests for interns than there were students available to fill them.

Documenting how and why the innovation evolves can provide evidence of sustainability. Often the adopter, user, or customer finds their own value in relation to their unique context. Understanding how and why someone adapts the product or process gives great insight into which elements may live on and in what contexts. For example, you might ask users, “What modifications were needed for your context and why?”

In one of my projects, we began with a set of training modules for students, but we found that an online test preparation module for a certification was also valuable. Through a relationship with the testing agency, a revenue stream was developed that also allowed the project to continue classroom work with students.

Institutionalization (adoption of key products or processes by an institution)—often through a dedicated line item in a budget for a previously grant-funded student support position—reflects sustainability. For example, when a grant-funded program found a permanent home at the university by expanding its student-focused training in entrepreneurship to faculty members, it aligned itself with the mission of the department. Asking “What components of this program are critical for the host institution?” is one way to uncover institutionalization opportunities.

Revenue generation is another indicator of customer demand for the product or process. Many projects are reluctant to commercialize their innovations, but commercialization can be part of a sustainability plan. There are even National Science Foundation (NSF) programs to help plan for commercialization (e.g., NSF Innovation Corps), and seed money to get started is also available (e.g., NSF Small Business Innovation Research).

Looking for clues of sustainability often requires a qualitative approach to evaluation through capturing the story from the leadership team and participants. It also involves being on the lookout for unanticipated outcomes in addition to the deliberate avenues a project takes to ensure the longevity of the work.

Blog: Successful Practices in ATE Evaluation Planning

Posted on July 19, 2018 in Blog

President, Mullins Consulting, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this essay, I identify what helps me create a strong evaluation plan when working with new Advanced Technological Education (ATE) program partners. I hope my notes add value to current and future proposal-writing conversations.

Become involved as early as possible in the proposal-planning process. With ATE projects, as with most evaluation projects, the sooner an evaluator is included in the project planning, the better. Even if the evaluator just observes the initial planning meetings, their involvement helps them become familiar with the project’s framework, the community partnerships, and the way in which project objectives are taking shape. Such involvement also helps familiarize the evaluator with the language used to frame project components and the new or established relationships expected for project implementation.

Get to know your existing and anticipated partners. Establishing or strengthening partnerships is a core component of ATE planning, as ATE projects often engage with multiple institutions through the creation of new certifications, development of new industry partnerships, and expansion of outreach efforts in public schools. The evaluator should take detailed notes on the internal and external partnerships involved with the project. Sometimes, to support my own understanding as an evaluator, it helps for me to visually map these relationships. Also, the evaluator should prepare for the unexpected. Sometimes, partners will change during the planning process as partner roles and program purposes become more clearly defined.

Integrate evaluation thinking into conversations early on. Once the team gets through the first couple of proposal drafts, it helps if the evaluator creates an evaluation plan and the team makes time to review it as a group. This will help the planning team clarify the evaluation questions to be addressed and outcomes to be measured. This review also allows the team to see how their outcomes can be clearly attached to program activities and measured through specific methods of data collection. Sometimes during this process, I speak up if a component could use further discussion (e.g., cohort size, mentoring practices). If an evaluator has been engaged from the beginning and has gotten to know the partners, they have likely built the trust necessary to add value to the discussion of the proposal’s central components.

Operate as an illuminator. A colleague I admire once suggested that evaluation be used as a flashlight, not as a hammer. This perspective of prioritizing exploration and illumination over determination of cause and effect has informed my work. Useful evaluations certainly require sound evaluation methodology, but they also require the crafting of results into compelling stories, told with data guiding the way. This requires working with others as interpretations unfold, discovering how findings can be communicated to different audiences, and listening to what stakeholders need to move their initiatives forward.

ATE programs offer participants critical opportunities to be a part of our country’s future workforce. Stakeholders are passionate about their programs. Careful, thoughtful engagement throughout the proposal-writing process builds trust while contributing to a quality proposal with a strong evaluation plan.

Blog: Evaluation Feedback Is a Gift

Posted on July 3, 2018 in Blog

Chemistry Faculty, Anoka-Ramsey Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Christopher Lutz, chemistry faculty at Anoka-Ramsey Community College. When our project was initially awarded, I was a first-time National Science Foundation (NSF) principal investigator. I understood external evaluation was required for grants but saw it as an administrative hurdle in the grant process. I viewed evaluation as proof for the NSF that we did the project and as a metric for outcomes. While both of these aspects are important, I learned evaluation is also an opportunity to monitor and improve your process and grant. Working with our excellent external evaluators, we built a stronger program in our grant project. You can too, if you are open to evaluation feedback.

Our evaluation team was composed of an excellent evaluator and a technical expert. I started working with both about halfway through the proposal development process (a few months before submission) to ensure they could contribute to the project. I recommend contacting evaluators during the initial stages of proposal development and checking in several times before submission. This gives adequate time for your evaluators to develop a quality evaluation plan and gives you time to understand how to incorporate your evaluator’s advice. Our funded project yielded great successes, but we could have saved time and achieved more if we had involved our evaluators earlier in the process.

After receiving funding, we convened grant personnel and evaluators for a face-to-face meeting to avoid wasted effort at the project start. Meeting in person allowed us to quickly collaborate on a deep level. For example, our project evaluator made real-time adjustments to the evaluation plan as our academic team and technical evaluator worked to plan our project videos and training tools. Include evaluator travel funds in your budget and possibly select an evaluator who is close by. We did not designate travel funds for our Kansas-based evaluator, but his ties to Minnesota and understanding of the value of face-to-face collaboration led him to use some of his evaluation salary to travel and meet with our team.

Here are three ways we used evaluation feedback to strengthen our project:

Example 1: The first-year evaluation report showed a perceived deficiency in the project’s provision of hands-on experience with MALDI-MS instrumentation. In response, we had students prepare small quantities of liquid solution themselves instead of giving them pre-mixed solutions, and we let them analyze more lab samples. This change required minimal time but led students to regard the project’s hands-on nature as a strength in the second-year evaluation.

Example 2: Another area for improvement was students’ lack of confidence in analyzing data. In response to this feedback, project staff created Excel data analysis tools and a new training activity in which students practiced with literature data prior to analyzing their own. The subsequent year’s evaluation report indicated increased student confidence.

Example 3: Input from our technical evaluator allowed us to create videos that have been used in academic institutions in at least three US states, the UK’s Open University system, and Iceland.

Provided here are some overall tips:

  1. Work with your evaluator(s) early in the proposal process to avoid wasted effort.
  2. Build in at least one face-to-face meeting with your evaluator(s).

  3. Review evaluation data and reports with the goal of improving your project in the next year.
  4. Consider external evaluators as critical friends who are there to help improve your project.

This will help move your project forward and help you have a greater impact for all.

Blog: Creating Interactive Documents

Posted on June 20, 2018 in Blog

Executive Director, Healthy Climate Alliance

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In 2016, I was an intern at the Evaluation Office (EVAL) of the International Labour Organization, where the constant question was, “How do we get people to read the reports that we spend so much time and energy on?” I had been looking for a new project that would be useful to my colleagues in EVAL, and a bolt of inspiration hit me: what if I could use the key points and information from one of the dense reports to make an interactive summary report? That project led me to the general concept of interactive documents, which can be used for reports, timelines, logic models, and more.

I recommend building interactive documents in PowerPoint and then exporting them as PDFs. I use Adobe Acrobat Pro to add clickable areas to the PDF that will lead readers to a particular section of the PDF or to a webpage. Interactive documents are not intended to be read from beginning to end. It should be easy for readers to navigate directly from the front page to the content that interests them, and back to the front page.

While building my interactive documents in PowerPoint, I follow Nancy Duarte’s Slidedocs principles to create visual documents that are intended to be read rather than presented. She suggests providing content that is clear and concise, using small chunks of text, and interspersing visuals. I use multiple narrow columns of text, with visuals on each page.

Interactive documents include a “launch page,” which gives a map-like overview of the whole document.

The launch page (see figure) allows readers to absorb the structure and main points of the document and to decide where they want to “zoom in” for more detail. I try to follow the wise advice of Edward Tufte: “Don’t ‘know your audience.’ Know your content and trust your audience.” He argues that we shouldn’t try to distill key points and simplify our data to make it easier for audiences to absorb. Readers will each have their own agendas and priorities, and we should make it as easy as possible for them to access the data that is most useful to them.

The launch page of an interactive document should have links all over it; every item of content on the launch page should lead readers to more detailed information on that topic. Every subsequent page should be extremely focused on one topic. If there is too much content within one topic, you can create another launch page focused on that particular topic (e.g., the “Inputs” section of the logic model).

The content pages should have buttons (i.e., links) that allow readers to navigate back to the main launch page or forward to the following page. If there’s a more detailed document that you’re building from, you may also want to link to that document on every page.
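
If you would rather script the linking step than add every clickable area by hand in Acrobat Pro, PDF libraries can create the same internal links. Here is a minimal sketch using the Python library pypdf; the file name, page numbers, and rectangle coordinates are placeholders you would adjust to your own layout:

```python
from pypdf import PdfReader, PdfWriter
from pypdf.annotations import Link

reader = PdfReader("summary.pdf")      # PDF exported from PowerPoint
writer = PdfWriter()
for page in reader.pages:
    writer.add_page(page)

# A launch-page item (page 0) that jumps to its detail page (page 3)
writer.add_annotation(
    page_number=0,
    annotation=Link(rect=(72, 600, 300, 640), target_page_index=3),
)

# A "back to launch page" button on the detail page
writer.add_annotation(
    page_number=3,
    annotation=Link(rect=(500, 20, 580, 50), target_page_index=0),
)

with open("summary_interactive.pdf", "wb") as f:
    writer.write(f)
```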

Try it out! Remember to keep your interactive document concise and navigable.

Blog: Modifying Grant Evaluation Project Objectives

Posted on June 11, 2018 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Evelyn Brown
Director, Extension Research and Development
NC State Industry Expansion Solutions
Leressa Suber
Evaluation Coordinator
NC State Industry Expansion Solutions

In the grant evaluations we perform, our clients develop specific project objectives to drive attainment of overall grant goals. We work with principal investigators (PIs) to monitor work plan activities and project outcomes to ensure objectives are attainable, measurable, and sustainable.

However, what happens when the project team encounters obstacles to starting the activities related to project objectives? What shifts need to be made to meet grant goals?

When the team determines that a project objective cannot be achieved as initially planned, it’s important for the PI and evaluator to determine how to proceed. In the examples below, we’ve highlighted three scenarios in which it may be necessary to shift, change, or eliminate a project objective. Then, based on the extent of the modifications, the team can determine whether and when the PI should notify the project funder.

Example: Shift in Project Objective

Grant goal: Help underclassmen understand what engineers do by observing the day-to-day activities of a local engineer.
Problem: The advisory board members (engineers) in the field were unavailable.
Current objective: Shadow advisory board member.
Changed objective: Shadow young engineering alumni.
Result: The goal is still attainable.
Should the PI notify the funder? No, but provide explanation/justification in the end-of-year report.

Example: Change a Project Objective

Grant goal: To create a method by which students at the community college will earn a credential to indicate they are prepared for employment in a specific technical field.
Problem: The state process to establish a new certificate is time consuming and can’t occur within the grant period.
Current objective: Complete degree in specific technical field.
Changed objective: Complete certificate in specific technical field.
Result: The goal is still attainable.
Should the PI notify the funder? Yes; specifically, contact the funding program officer.

Example: Eliminate the Project Objective

Grant goal: The project participant’s salary will increase as a result of completing a specific program.
Problem: Following program exit, salary data is unavailable.
Current objective: Compare participant’s salary at start of program to salary three months after program completion.
Change: Eliminate the objective; unable to maintain contact with program completers to obtain salary information.
Result: The goal cannot realistically be measured.
Should the PI notify the funder? Yes; specifically, contact the funding program officer.

In our experience working with clients, we’ve found that the best way to minimize the need to modify project objectives is to ensure they are well written during the grant proposal phase.

Tips: How to write attainable project objectives.

1. Thoroughly think through objectives during the grant development phase.

The National Science Foundation (NSF) provides guidance to assist PIs with constructing realistic project goals and objectives; we’ve linked to the NSF’s proposal-writing guide below. In the meantime, here are a few key considerations:

  • Are the project objectives clear?
  • Are the resources necessary to accomplish the objectives clearly identified?
  • Are there barriers to accessing the needed resources?

2. Seek evaluator assistance early in the grant proposal process.

Link to additional resources: NSF – A Guide for Proposal Writing