Archive: communication

Blog: The Life-Changing Magic of a Tidy Evaluation Plan

Posted on August 16, 2018 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Effective tidying involves only three essential actions. All you need to do is take the time to examine every item you own, decide whether or not you want to keep it, then choose where to put what you keep. Designate a place for each thing.”

―Marie Kondo, The Life-Changing Magic of Tidying Up

I’ve noticed a common problem with some proposal evaluation plans: It’s not so much that they don’t include key information; it’s that they lack order. They’re messy. When you have only about two pages of a 15-page National Science Foundation proposal to describe an evaluation, you need to be exceptionally clear and efficient. In this blog, I offer tips on how to “tidy up” your proposal’s evaluation plan to ensure it communicates key information clearly and coherently.

First of all, what does a messy evaluation plan look like? It meanders. It frames the evaluation’s focus in different ways in different places in the proposal, or even within the evaluation section itself, leaving the reviewer confused about the evaluation’s purpose. It discusses data and data collection without indicating what those data will be used to address. It employs different terms to mean the same thing in different places. It makes it hard for reviewers to discern key information from the evaluation plan and understand how that information fits together.

Three Steps to Tidy Up a Messy Evaluation Plan

It’s actually pretty easy to convert a messy evaluation plan into a tidy one:

  • State the evaluation’s focus succinctly. List three to seven questions the evaluation will address. These questions should encompass all of your planned data collection and analysis—no more, no less. Refer back to these questions as needed, rather than restating them differently or introducing new topics later in the plan. Do not frame the evaluation’s focus differently in different places.
  • Link the data you plan to collect to the evaluation questions. An efficient way to do this is to present the information in a table. I like to include evaluation questions, indicators, data collection methods and sources, analysis, and interpretation in a single table to clearly show the linkages and convey that my team has carefully thought about how we will answer the evaluation questions. Bonus: Presenting information in a table saves space and makes it easy for reviewers to locate key information. (See EvaluATE’s Evaluation Data Matrix Template.)
  • Use straightforward language—consistently. Don’t assume that reviewers will share your definition of evaluation-related terms. Choose your terms carefully and do not vary how you use them throughout the proposal. For example, if you are using the terms measures, metrics, and indicators, ask yourself if you are really referring to different things. If not, stick with one term and use it consistently. If similar words are actually intended to mean different things, include brief definitions to avoid any confusion about your meaning.

Can a Tidy Evaluation Plan Really Change Your Life?

If it moves a very good proposal toward excellent, then yes! In the competitive world of grant funding, every incremental improvement counts and heightens your chances for funding, which can mean life-changing opportunities for the project leaders, evaluators, and—most importantly—individuals who will be served by the project.

Blog: Evaluation Feedback Is a Gift

Posted on July 3, 2018 in Blog

Chemistry Faculty, Anoka-Ramsey Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Christopher Lutz, chemistry faculty at Anoka-Ramsey Community College. When our project was initially awarded, I was a first-time National Science Foundation (NSF) principal investigator. I understood external evaluation was required for grants but saw it as an administrative hurdle in the grant process. I viewed evaluation as proof for the NSF that we did the project and as a metric for outcomes. While both of these aspects are important, I learned evaluation is also an opportunity to monitor and improve your process and grant. Working with our excellent external evaluators, we built a stronger program in our grant project. You can too, if you are open to evaluation feedback.

Our evaluation team was composed of an excellent evaluator and a technical expert. I started working with both about halfway through the proposal development process (a few months before submission) to ensure they could contribute to the project. I recommend contacting evaluators during the initial stages of proposal development and checking in several times before submission. This gives adequate time for your evaluators to develop a quality evaluation plan and gives you time to understand how to incorporate your evaluator’s advice. Our funded project yielded great successes, but we could have saved time and achieved more if we had involved our evaluators earlier in the process.

After receiving funding, we convened grant personnel and evaluators for a face-to-face meeting to avoid wasted effort at the project start. Meeting in person allowed us to quickly collaborate on a deep level. For example, our project evaluator made real-time adjustments to the evaluation plan as our academic team and technical evaluator worked to plan our project videos and training tools. Include evaluator travel funds in your budget, or consider selecting an evaluator who is located nearby. We did not designate travel funds for our Kansas-based evaluator, but his ties to Minnesota and understanding of the value of face-to-face collaboration led him to use some of his evaluation salary to travel and meet with our team.

Here are three ways we used evaluation feedback to strengthen our project:

Example 1: The first-year evaluation report showed a perceived deficiency in the project’s provision of hands-on experience with MALDI-MS instrumentation. In response, we had students prepare small quantities of solution themselves instead of giving them pre-mixed solutions, and we let them analyze more lab samples. This change required minimal time but led students to regard the project’s hands-on nature as a strength in the second-year evaluation.

Example 2: Another area for improvement was students’ lack of confidence in analyzing data. In response to this feedback, project staff created Excel data analysis tools and a new training activity for students to practice with literature data prior to analyzing their own. The subsequent year’s evaluation report indicated increased student confidence.

Example 3: Input from our technical evaluator allowed us to create videos that have been used in academic institutions in at least three US states, the UK’s Open University system, and Iceland.

Here are some overall tips:

  1. Work with your evaluator(s) early in the proposal process to avoid wasted effort.
  2. Build in at least one face-to-face meeting with your evaluator(s).
  3. Review evaluation data and reports with the goal of improving your project in the next year.
  4. Consider external evaluators as critical friends who are there to help improve your project. This will help move your project forward and help you have a greater impact for all.

Blog: Summarizing Project Milestones

Posted on March 28, 2018 in Blog

Evaluation Specialist, Thomas P. Miller & Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

With any initiative, it can be valuable to document and describe the implementation to understand what occurred and what shifts or changes were made to the original design (e.g., fidelity to the model). This understanding helps when replicating, scaling, or seeking future funding for the initiative.

Documentation can be done by the evaluator and be shared with the grantee (as a way to validate an evaluator’s understanding of the project). Alternatively, project staff can document progress and share this with the evaluator as a way to keep the evaluation team up to date (which is especially helpful on small-budget evaluation projects).

The documentation of progress can be extremely detailed or high level (e.g., a snapshot of the initiative’s development). When tracking implementation milestones, consider:

  1. What is the goal of the document?
  2. Who is the audience?
  3. What are the most effective ways to display and group the data?

For example, if you are interested in understanding a snapshot of milestones and modifications of the original project design, you might use a structure like the one below:

[Image 1: example snapshot of project milestones and modifications, organized by quarter]

If you are especially interested in highlighting how delays affected project implementation and what caused them, you may adjust the visual to include directional arrows and shading:

[Image 2: milestone snapshot with directional arrows and shading, with progress grouped into buckets such as curriculum and staffing]

In these examples, we organized the snapshot by quarterly progress, but you can group milestones by month or even include a timeline of the events. Similarly, in Image 2 we categorized progress in buckets (e.g., curriculum, staffing) based on key areas of the grant’s goals and activities. These categories should change to align with the unique focus of each initiative. For example, if professional development is a considerable part of the grant, then perhaps placing that into a separate category (instead of combining it with staffing) would be best.

Another important consideration is the target audience. We have used this framework when communicating with project staff and leadership to show, at a high level, what is taking place within the project. This diagramming has also been valuable for sharing knowledge across our evaluation staff members, leading to discussions around fidelity to the model and any shifts or changes that may need to occur within the evaluation design, based on project implementation. Some of your stakeholders, such as project funders, may want more information than just the snapshot. In these cases, you may consider adding additional detail to the snapshot visual, or starting your report with the snapshot and then providing an additional narrative around each bucket and/or time period covered within the visual.

Also, the framework itself can be modified. If, for example, you are more concerned about showing the cause and effect instead of adjustments, you may group everything together as “milestones” instead of having separate categories for “adjustments” and “additional milestones.”

For our evaluation team, this approach has been a helpful way to consolidate, disseminate, and discuss initiative milestones with key stakeholder groups such as initiative staff, evaluators, college leadership, and funders. We hope this will be valuable to you as well.

Blog: Getting Your New ATE Project’s Evaluation off to a Great Start

Posted on October 17, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

New ATE project principal investigators (PIs): When you worked with your evaluator to develop an evaluation plan for your project proposal, you were probably focused on the big picture—how to gather credible and meaningful evidence about the quality and impact of your work. To ensure your evaluation achieves its aims, take these steps now to make sure your project provides the human resources, time, and information needed for a successful evaluation:

  1. Schedule regular meetings with your evaluator. Regular meetings help ensure that your project’s evaluation receives adequate attention. These exchanges should be in real time—via phone call, web meetings, or face-to-face—not just email. See EvaluATE’s new Communication Plan Checklist for ATE PIs and Evaluators for a list of other communication issues to discuss with your evaluator at the start of a project.
  2. Work with your evaluator to create a project evaluation calendar. This calendar should span the life of your project and include the following:
  • Due dates for National Science Foundation (NSF) annual reports: You should include your evaluation reports or at least information from the evaluation in these reports. Work backward from their due dates to determine when evaluation reports should be completed. To find out when your annual report is due, go to Research.gov, enter your NSF login information, select “Awards & Reporting,” then “Project Reports.”
  • Advisory committee meeting dates: You may want your evaluator to attend these meetings to learn more about your project and to communicate directly with committee members.
  • Project events: Activities such as workshops and outreach events present valuable opportunities to collect data directly from the individuals involved in the project. Make sure your evaluator is aware of them.
  • Due dates for new proposal submissions: If submitting to NSF again, you will need to include evidence of your current project’s intellectual merit and broader impacts. Working with your evaluator now will ensure you have compelling evidence to support a future submission.
  3. Keep track of what you’re doing and who is involved. Don’t leave these tasks to your evaluator or wait until the last minute. Taking an active—and proactive—role in documenting the project’s work will save you time and result in more accurate information. Your evaluator can then use that information when preparing their reports. Moreover, you will find it immensely useful to have good documentation at your fingertips when preparing your annual NSF report.
  • Maintain a record of project activities and products—such as conference presentations, trainings, outreach events, competitions, publications—as they are completed. Check out EvaluATE’s project vita as an example.
  • Create a participant database (or spreadsheet): Everyone who engages with your project should be listed. Record their contact information, role in the project, and pertinent demographic characteristics (such as whether a student is a first-generation college student, a veteran, or part of a group that has been historically underrepresented in STEM). You will probably find several uses for this database, such as for follow-up with participants for evaluation purposes, for outreach, and as evidence of your project’s broader impacts.
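
To make the participant database idea above concrete, here is a minimal sketch of one way a project team might start such a roster as a CSV file. This is illustrative only: the file name, column names, and example values are assumptions, not an official EvaluATE or NSF template, so adapt them to your own project and your institution's data policies.

    import csv

    # Illustrative sketch: start a simple participant roster as a CSV file.
    # Column names and the example row are hypothetical, not an official template.
    FIELDS = [
        "name",
        "email",
        "project_role",              # e.g., student, faculty, industry partner
        "first_generation_student",  # demographic fields relevant to broader impacts
        "veteran",
        "underrepresented_in_stem",
        "activities_attended",       # e.g., "summer workshop; fall outreach event"
    ]

    with open("participants.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerow({
            "name": "Jane Doe",
            "email": "jdoe@example.edu",
            "project_role": "student",
            "first_generation_student": "yes",
            "veteran": "no",
            "underrepresented_in_stem": "yes",
            "activities_attended": "summer workshop",
        })

A plain spreadsheet maintained by project staff works just as well; the point is to agree on the columns early so the same record can serve evaluation follow-up, outreach, and broader-impacts reporting.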

An ounce of prevention is worth a pound of cure: Investing time up front to make sure your evaluation is on solid footing will save headaches down the road.

Checklist: Communication Plan for ATE Principal Investigators and Evaluators

Posted on October 17, 2017 in Resources

Creating a clear communication plan at the beginning of an evaluation can help project personnel and evaluators avoid confusion, misunderstandings, or uncertainty. The communication plan should be an agreement between the project’s principal investigator and the evaluator, and it should be followed by members of their respective teams. This checklist highlights the decisions that need to be made when developing a clear communication plan.

Type: Checklist
Category: Checklist, Evaluation Design
Author(s): Lori Wingate, Lyssa Becho

Blog: Best Practices for Two-Year Colleges to Create Competitive Evaluation Plans

Posted on September 28, 2016 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Kelly Ball
Jeff Grebinoski

Northeast Wisconsin Technical College’s (NWTC) Grants Office works closely with its Institutional Research Office to create ad hoc evaluation teams in order to meet the standards of evidence required in funders’ calls for proposals. Faculty members at two-year colleges often make up the project teams that are responsible for National Science Foundation (NSF) grant project implementation. However, they often need assistance navigating the terms and concepts traditionally found in scientific research and social science methodology.

Federal funding agencies are now requiring more evaluative rigor in their grant proposals than simply documenting deliverables. For example, the NSF’s Scholarships in Science, Technology, Engineering, and Mathematics (S-STEM) program saw dramatic changes in 2015: The program solicitation increased the amount of non-scholarship budget from 15% of the scholarship amount to 40% of the total project budget to increase supports for students and to investigate the effectiveness of those supports.

Technical colleges, in particular, face a unique challenge as solicitations change: These colleges traditionally have faculty members from business, health, and trades industries. Continuous improvement is a familiar concept to these professionals; however, they tend to have varying levels of expertise evaluating education interventions.

The following are a few best practices we have developed for assisting project teams in grant proposal development and project implementation at NWTC.

  • Where possible, work with an external evaluator at the planning stage. External evaluators can provide expertise that principal investigators and project teams might lack, as they are well-versed in current evaluation methods, trends, and techniques.
  • As they develop their projects, teams should meet with their Institutional Research Office to better understand data gathering and research capacity. Some data needed for evaluation plans might be readily available, whereas other data might require advance planning to develop a tracking system. Conversations about what the data will be used for and what questions the team wants to answer will help ensure that the correct data can be gathered.
  • After a grant is awarded, have an early conversation with all internal and external evaluation parties to clarify data roles and responsibilities. Agreeing on reporting deadlines and identifying who will collect the data and conduct further analysis will help avoid delays.
  • Create a “data dictionary” for more complicated projects and variables to ensure that everyone is on the same page about what terms mean. For example, “student persistence” can be defined term-to-term or year-to-year, and all parties need to understand which data must be tracked.

With some planning and the right working relationships in place, two-year colleges can maintain their federal funding competitiveness even as agencies increase evaluation requirements.

Blog: Good Communication Is Everything!

Posted on February 3, 2016 in Blog

Evaluator, South Carolina Advanced Technological Education Resource Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I am new to the field of evaluation, and the most important thing that I learned in my first nine months is that effective communication is critical to the success of the evaluation of a project. Whether primarily virtual or face-to-face, knowing the communication preferences of your client is important. Knowing the client’s schedule is also important. For example, if you are working with faculty, having a copy of their teaching and office hours schedule for each semester can help.

While having long lead times to get to know the principal investigator and project team is desirable and can promote strong relationship building in advance of implementing evaluation strategies, that isn’t always possible. With my first project, contracts were finalized with the client and evaluators only days before a major project event. There was little time to prepare and no opportunity to get to know the principal investigator or grant team before launching into evaluation activities. In preparation, I had an evaluation plan, a copy of the proposal as submitted, and other project-related documents. Also, I was working with a veteran evaluator who knew the PI and had experience evaluating another project for the client. Nonetheless, there were surprises that caught both the veteran evaluator and me off guard. As the two evaluators worked with the project team to home in on the data needed to make the evaluation stronger, we discovered that the goals, objectives, and some of the activities had been changed during the project’s negotiations with NSF prior to funding. As evaluators, we discovered that we were working from a different playbook than the PI and other team members! The memory of this discovery still sends chills down my back!

A mismatch regarding communication styles and anticipated response times can also get an evaluation off to a rocky start. If not addressed, unmet expectations can lead to disappointment and animosity. In this case, face-to-face interaction was key to keeping the evaluation moving forward. Even when a project is clearly doing exciting and impactful work, it isn’t always possible to collect all of the data called for in the evaluation plan. I’ve learned firsthand that the tug-of-war that exists between an evaluator’s desire and preparation to conduct a rigorous evaluation and the need to be flexible and to work within the constraints of a particular situation isn’t always comfortable.

Lessons learned

From this experience, I learned some important points that I think will be helpful to new evaluators.

  • Establishing a trusting relationship can be as important as conducting the evaluation. Find out early if you and the principal investigator are compatible and can work together. The PI and evaluator should get to know each other and establish some common expectations at the earliest possible date.
  • Determine how you will communicate and ensure a common understanding of what constitutes a reasonable response time for emails, telephone calls, or requests for information from either party. Individual priorities differ and thus need to be understood by both parties.
  • Be sure to ask at the outset whether there have been changes to the goals and objectives for the project since the proposal was submitted. Adjust the evaluation plan accordingly.
  • Determine the data that can be and will be collected and who will be responsible for providing what information. In some situations, it helps to secure permission to work directly with the project’s institutional research office or internal evaluator to collect data.
  • When there are differences of opinion or misunderstandings, confront them head on. If the relationship continues to be contentious in any way, changing evaluators may be the best solution.

I hope that some of my comments will help other newcomers to realize that the yellow brick road does have some potential potholes and road closures.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Evaluator’s Perspective

Posted on December 16, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Manu Platt
Ayesha Boyce

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

In this second part of the conversation, a principal investigator (client) interviews the independent evaluator to unearth key points within our professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the principal investigator (PI)/client interviews the evaluator and key takeaways are suggested for evaluation clients (see our prior post in which the tables are turned).

Understanding of Evaluation

PI (Manu): What were your initial thoughts about evaluation before we began working together?

Evaluator (Ayesha): “I thought evaluation was this amazing field where you had the ability to positively impact programs. I assumed that everyone else, including my clients, would believe evaluation was just as exciting and awesome as I did.”

Key takeaway: Many evaluators are passionate about their work and ultimately want to provide valid and useful feedback to clients.

Evaluation Reports

PI: What were your initial thoughts when you submitted the evaluation reports to me and the rest of the leadership team?

Evaluator: “I thought you (stakeholders) were all going to rush to read them. I had spent a lot of time writing them.”

PI: Then you found out I wasn’t reading them.

Evaluator: “Yes! Initially I was frustrated, but I realized that maybe because you hadn’t been exposed to evaluation, that I should set up a meeting to sit down and go over the reports with you. I also decided to write brief evaluation memos that had just the highlights.”

Key takeaway: As a client, you may need to explicitly ask for the type of evaluation reporting that will be useful to you. You may need to let the evaluator know that it is not always feasible for you to read and digest long evaluation reports.

Ah ha moment!

PI: When did you have your “Ah ha! – I know how to make this evaluation useful” moment?

Evaluator: “I had two. The first was when I began to go over the qualitative formative feedback with you. You seemed really excited and interested in the data and recommendations.”

“The second was when I began comparing your program to other similar programs I was evaluating. I saw that it was incredibly useful to you to see what their pitfalls and successful strategies were.”

Key takeaway: As a client, you should check in with the evaluator and explicitly state the type of data you find most useful. Don’t assume that the evaluator will know. Additionally, ask if the evaluator has evaluated similar programs and if she or he can give you some strengths and challenges those programs faced.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Principal Investigator’s Perspective

Posted on December 10, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Ayesha Boyce
Manu Platt

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

In this blog post, an independent evaluator and principal investigator (client) interview each other to unearth key points in their professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the evaluator interviews the client and key takeaways are suggested for evaluators (watch for our follow-up post in which the tables are turned).

Understanding of Evaluation

Evaluator (Ayesha): What were your initial thoughts about evaluation before we began working together?
PI (Manu): “Before this I had no idea about evaluation, never thought about it. I had probably been involved in some before as a participant or subject but never really thought about it.”

Key takeaway: Clients have different experiences with evaluation, which can make it harder for them to initially appreciate the power of evaluation.

Evaluation Reports

Evaluator: What were your initial thoughts about the evaluation reports provided to you?
PI: “So for the first year, I really didn’t look at them. And then you would ask, “Did you read the evaluation report?” and I responded, “uuuuhhh…. No.”

Key takeaway: Don’t assume that your client is reading your evaluation reports. It might be necessary to check in with them to ensure utilization.

Evaluator: Then I pushed you to read them thoroughly and what happened?
PI: “Well, I heard the way you put it and thought, “Oh I should probably read it.” I found out that it was part of your job and not just your Ph.D. project and it became more important. Then when I read it, it was interesting! Part of the thing I noticed – you know we’re three institutions partnering – was what people thought about the other institutions. I was hearing from some of the faculty at the other institutions about the program. I love the qualitative data even more nowadays. That’s the part that I care about the most.”

Key takeaway: Check with your client to see what type of data and what structure of reporting they find most useful. Sometimes a final summative report isn’t enough.

Ah ha moment!

Evaluator: When did you have your “Ah ha! – the evaluation is useful” moment?
PI: “I had two. I realized as diversity director that I was the one who was supposed to stand up and comment on evaluation findings to the National Science Foundation representatives during the project’s site visit. I would have to explain the implementation, satisfaction rate, and effectiveness of our program. I would be standing there alone trying to explain why there was unhappiness here, or why the students weren’t going into graduate school at these institutions.

“The second was, as you’ve grown as an evaluator and worked with more and more programs, you would also give us comparisons to other programs. You would say things like, “Oh other similar programs have had these issues and they’ve done these things. I see that they’re different from you in these aspects, but this is something you can consider.” Really, the formative feedback has been so important.”

Key takeaway: You may need to talk to your client about how they plan to use your evaluation results, especially when it comes to being accountable to the funder. Also, if you evaluate similar programs it can be important to share triumphs and challenges across programs (without compromising the confidentiality of the programs; share feedback without naming exact programs).