Archive: Evaluation Management

Blog: Evaluation Management Skill Set

Posted on April 12, 2017 in Blog

CEO, SPEC Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As evaluators, we all know that managing an evaluation is quite different from managing a scientific research project. Sure, we need to exercise due diligence in completing the basic inquiry tasks: deciding study questions/hypotheses; figuring out the strongest design, sampling plan, data collection methods, and analysis strategies; and interpreting/reporting results. But evaluation’s purposes extend well beyond proving or disproving a research hypothesis. Evaluators must also focus on how the evaluation will lead to enlightenment and what role it plays in supporting decision making. Evaluations can leave in place important processes that extend beyond the study itself, like data collection systems and a changed organizational culture that places greater emphasis on data-informed decision making. Evaluations also exist within local and organizational political contexts, which matter far less to academic and scientific research.

Very little has been written in the evaluation literature about evaluation management. Compton and Baizerman, who edited two issues of New Directions for Evaluation on the topic, are the most prolific authors. They approach evaluation management from a theoretical perspective, discussing issues like the basic competencies of evaluation managers within different organizational contexts (2009) and the role of evaluation managers in advice giving (2012).

I would like to describe good evaluation management in terms of the actual tasks that an evaluation manager must excel in—what evaluation managers must be able to actually do. For this, I looked to the field of project management. There is a large body of literature about project management, and whole organizations, like the Project Management Institute, dedicated to the topic. Overlaying evaluation management onto the core skills of a project manager, here is the skill set I see as needed to effectively manage an evaluation:

Technical Skills:

  • Writing an evaluation plan (including but not limited to descriptions of basic inquiry tasks)
  • Creating evaluation timelines
  • Writing contracts between the evaluation manager and various members of the evaluation team (if they are subcontractors), and with the client organization
  • Completing the application for human subjects institutional review board (HSIRB) approval, if needed

Financial Skills:

  • Creating evaluation budgets, including accurately estimating hours each person will need to devote to each task (see the budget sketch after these lists)
  • Generating or justifying billing rates of each member of the evaluation team
  • Tracking expenditures to assure that the evaluation is completed within the agreed-upon budget

Interpersonal Skills:

  • Preparing a communications plan outlining who needs to be apprised of what information or involved in which decisions, how often and by what method
  • Using appropriate verbal and nonverbal communication skills to assure that the evaluation not only gets done, but good relationships are maintained throughout
  • Assuming leadership in guiding the evaluation to its completion
  • Resolving the enormous number of conflicts that can arise both within the evaluation team and between the evaluators and the stakeholders

I think that this framing can provide practical guidance for what new evaluators need to know to effectively manage an evaluation and guidance for how veteran evaluators can organize their knowledge for practical sharing. I’d be interested in comments as to the comprehensiveness and appropriateness of this list…am I missing something?

Blog: Good Communication Is Everything!

Posted on February 3, 2016 in Blog

Evaluator, South Carolina Advanced Technological Education Resource Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I am new to the field of evaluation, and the most important thing I learned in my first nine months is that effective communication is critical to the success of a project’s evaluation. Whether your interactions are primarily virtual or face-to-face, knowing your client’s communication preferences is important. So is knowing the client’s schedule. For example, if you are working with faculty, having a copy of their teaching and office-hours schedule for each semester can help.

While long lead times to get to know the principal investigator and project team are desirable and can promote strong relationship building before evaluation strategies are implemented, that isn’t always possible. With my first project, contracts were finalized with the client and evaluators only days before a major project event. There was little time to prepare and no opportunity to get to know the principal investigator or grant team before launching into evaluation activities. In preparation, I had an evaluation plan, a copy of the proposal as submitted, and other project-related documents. Also, I was working with a veteran evaluator who knew the PI and had experience evaluating another project for the client. Nonetheless, there were surprises that caught both the veteran evaluator and me off guard. As the two of us worked with the project team to home in on the data needed to make the evaluation stronger, we discovered that the goals, objectives, and some of the activities had been changed during the project’s negotiations with NSF prior to funding. As evaluators, we discovered that we were working from a different playbook than the PI and other team members! The memory of this discovery still sends chills down my back!

A mismatch regarding communication styles and anticipated response times can also get an evaluation off to a rocky start. If not addressed, unmet expectations can lead to disappointment and animosity. In this case, face-to-face interaction was key to keeping the evaluation moving forward. Even when a project is clearly doing exciting and impactful work, it isn’t always possible to collect all of the data called for in the evaluation plan. I’ve learned firsthand that the tug-of-war that exists between an evaluator’s desire and preparation to conduct a rigorous evaluation and the need to be flexible and to work within the constraints of a particular situation isn’t always comfortable.

Lessons learned

From this experience, I learned some important points that I think will be helpful to new evaluators.

  • Establishing a trusting relationship can be as important as conducting the evaluation. Find out early if you and the principal investigator are compatible and can work together. The PI and evaluator should get to know each other and establish some common expectations at the earliest possible date.
  • Determine how you will communicate and ensure a common understanding of what constitutes a reasonable response time for emails, telephone calls, or requests for information from either party. Individual priorities differ and thus need to be understood by both parties.
  • Be sure that you ask at the onset if there have been changes to the goals and objectives for the project since the proposal was submitted. Adjust the evaluation plan accordingly.
  • Determine the data that can be and will be collected and who will be responsible for providing what information. In some situations, it helps to secure permission to work directly with an institutional research office or internal evaluator for a project to collect data.
  • When there are differences of opinion or misunderstandings, confront them head on. If the relationship continues to be contentious in any way, changing evaluators may be the best solution.

I hope that some of my comments will help other newcomers to realize that the yellow brick road does have some potential potholes and road closures.

Newsletter: Creating an Evaluation Scope of Work

Posted on October 1, 2015 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

One of the most common requests we get at EvaluATE is for examples of independent contractor agreements and scope of work statements for external evaluators. First, let’s be clear about the difference between these two types of documents.

An independent contractor agreement is typically 90 percent boilerplate language required by your institution. Here at Western Michigan University, contracts are run through one of several offices (Business Services, Research and Sponsored Programs, Grants and Contracts, or Purchasing), depending on the type of contract and the nature of the work or service. We can’t tell you the name of the office at your institution, but there definitely is one, and it probably has boilerplate contract forms that you will need to use.

A scope of work statement should be attached to and referenced by the independent contractor agreement (or other type of contract). But unlike the contract, it should not be written in legalese, but in plain language understandable to all parties involved. The key issues to cover in a scope of work statement include the following:

Evaluation questions (or objectives): Including information about the purpose of the evaluation is a good reminder to those involved about why the evaluation is being done. It may serve as a useful reference down the road if the evaluation starts to experience scope creep (or shrinkage).

Main tasks and deliverables (with timelines or deadlines): This information should make clear what services and products the evaluator will provide. Common examples include a detailed evaluation plan (what was included in your proposal probably doesn’t have enough detail), data collection instruments, reports, and presentations.

It’s critical to include timelines (generally when things will occur) and deadlines (when they must be finished) in this statement.

Conditions for payment: You most likely specified a dollar amount for the evaluation in your grant proposal, but you probably do not plan on paying it in a lump sum at the beginning or end of the evaluation, or even yearly. Specify in what increments payments should be made and what conditions must be met for payment. Rather than tying payment(s) to certain dates, consider making payment(s) contingent on the completion of certain tasks or deliverables.
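
As an illustration of payments tied to deliverables rather than dates, here is a small schedule sketched in Python; the contract total, milestones, and percentages are invented for the example, not drawn from any actual agreement.

```python
# Hypothetical payment schedule contingent on deliverables, not dates.
contract_total = 30_000.00  # invented figure for illustration

milestones = [
    ("Detailed evaluation plan delivered",      0.20),
    ("Year 1 data collection complete",         0.30),
    ("Annual evaluation report delivered",      0.30),
    ("Final report and presentation delivered", 0.20),
]

# Sanity check: the shares should account for the full contract amount.
assert abs(sum(share for _, share in milestones) - 1.0) < 1e-9

for deliverable, share in milestones:
    print(f"{deliverable}: ${contract_total * share:,.2f}")
```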

Be sure to come to agreement on these terms in collaboration with your evaluator. This is an opportunity to launch your working relationship from a place of open communication and shared expectations.

Blog: Five Questions All Evaluators Should Ask Their Clients

Posted on July 8, 2015 in Blog

Senior Research Analyst, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One of the things that I love about program evaluation is the diversity of models and methods that you must think about to analyze a program. But even before you get to the point of developing and solidifying your evaluation design, there’s a lot of legwork to do up front. In successful evaluations, that legwork starts with asking the right questions. Here are a few questions you can use to get a conversation rolling with your client and gain confidence that your evaluation is moving in the right direction.

1. What do you hope to achieve with this program?

A common challenge for all organizations is goal setting, and in an evaluation setting, having clear and measurable goals is absolutely essential. Too often goals are defined, but may not actually be matched to participant or organizational needs. As evaluators, we should pay close attention to these distinctions, as they enable us to help clients improve the implementation of their programs and guide them towards their anticipated outcomes.

2. What’s the history of this program?

New program or old, you’re going to need to know the background of the initiative. That background will lead you to understand the funding, core stakeholders, requirements, and any other information needed to evaluate the program. You might learn interesting stories about why the program has struggled, which can help you design your evaluation and create research questions. It’s also a great way to get to know a client, learn about their past pain points, and really understand their objectives for the evaluation.

3. What kind of data do you plan on collecting or do you have access to?

Every program evaluator has faced the challenge of getting the data they need to conduct an evaluation. You need to know early on what kind of data the evaluation will require and what the client can actually provide. Don’t wait to have those conversations with your clients. If you put them off until you are ready to conduct your analyses, it may very well be too late.

4. What challenges do you foresee with program implementation?

Program designs might change as challenges to implementation and delivery arise. But if you can spot some red flags early on, you might be able to help your client navigate implementation challenges and avoid roadblocks. The key is being flexible, working with your client to understand and anticipate implementation issues, and addressing them in advance.

5. What excites you about this program?

This question allows you to get to know the client a bit more, understand their interests, and build a relationship. I love this question because it reinforces the idea of the evaluator as a partner in the program. By acting as a partner, you can provide your clients with the right kind of evaluation and strengthen the relationship along the way.

Program evaluation presents some very challenging and complex questions for evaluators. Starting with these five questions will allow you to focus the evaluation and set your client and the evaluation team up for success.


Blog: Evaluation Procurement: Regulations, Rules and Red Tape… Oh My!

Posted on April 8, 2015 in Blog

Grants Specialist, Virginia Western Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Jacqueline Rearick, and I am a Grants Specialist at Virginia Western Community College where I support our NSF/ATE projects and sub-awards, among other grants. I’m also an evaluation advocate and can get a bit overzealous about logic models, outcomes, surveys, and assessments. Recently, our grants office had to work through the process of procurement to secure evaluation services for our ATE project. Although we referenced an external evaluator in the project design, the policies and procedures of our individual state procurement regulations trumped the grant proposal and became the focus of a steep learning curve for all involved.

Because the grants office and the procurement office have different priorities, they may appear to be in direct opposition to one another. Grant proposals that require evaluation services, like ATE proposals, work best when the evaluator is part of the process and can assist with developing the plan and then execute the evaluation. Procurement regulations at your institution could require a bid process, which may or may not result in securing the evaluator who helped you write the initial evaluation plan.

Hot Tip: Invite the procurement office to the table early

Securing evaluation services for your ATE project is important; so is following internal procurement rules. Touch base with your procurement office early in the evaluation development process. Are there local or state regulations that will require a bid process? If your ATE evaluator assists with the writing of your evaluation section in the proposal, will you be able to use that same evaluator if the grant is funded? Have an honest conversation with your evaluator about the procurement process.

Hot Tip: Levels of procurement: know when the rules change

While working through the procurement process, we discovered that state rules change when the procurement of goods or services reaches different funding levels. What was a simple evaluation procurement for our first small ATE grant ($200k) turned into a much larger-scale procurement for our second ATE project grant ($900k), based on our state guidelines. Check with your institution to determine the thresholds and required guidelines for consultant services at various funding levels.

Lesson Learned: All’s well that ends well

The process of securing evaluation services through procurement is designed to allow the PI to review all competitors and secure quality evaluation services at a reasonable price. The evaluator who helped write the evaluation section of our proposal was encouraged to bid on the project. What’s even better, this evaluator is now set up as a vendor in our state system and will be available to other colleges in the state as they seek quality ATE evaluation services.

Blog: Evaluation Plan Development for Grant Writing

Posted on March 25, 2015 in Blog

Dean of Grants and Federal Programs, Pensacola State College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As dean of institutional effectiveness and grants, I have varied responsibilities, but at heart, I am a grant writer. I find it easy to write a needs statement based on available data; more challenging is developing an effective evaluation plan for a proposed project.

A lot of time and effort – taxpayer supported – go into project evaluation, an increasingly significant component of federal grant applications, as illustrated by the following examples:

  • My college partners on two existing U.S. Department of Labor Trade Adjustment Assistance Community College and Career Training (TAACCCT) Grants – almost $2 billion nationally to expand training for the unemployed – which allow up to 10 percent of project budgets to be designated for mandatory external evaluation.
  • We have an $8.5 million U.S. Department of Health & Human Services Health Profession Opportunity Grant demonstration project. Part of that “demonstration” included mandatory participation in activities conducted by contracted external evaluators.

We recently submitted grant applications under the highly competitive U.S. Department of Education Student Support Services (SSS) Program. My college has a long-term SSS program that meets all of its objectives, so we’ll receive “extra” prior-experience points. We are assured re-funding, right? Maybe, but only if we address competitive preference priorities and score better than perfect – every point counts.

Although external evaluation is not required, a comparison of language excerpted from the last three SSS competitions makes clear that there is now much greater emphasis on the details of the evaluation plan. The guidelines require a detailed description of what types of data will be collected and how the applicant will use the information collected in the evaluation of project activities. It is no longer sufficient to say “project staff will collect quantitative and qualitative data and use this information for project improvement.”

Our successful evaluation plans start with a detailed logic model, which allows us to make realistic projections of what we hope will happen and to plan data collection around the project’s key activities and outcomes. We use these guiding questions to help formulate the details (a small worked example follows the list):

  • What services will be provided?
  • What can be measured?
    • perceptions, participation, academic progress
  • What information sources will be available?
  • What types of data will be collected?
    • student records, surveys, interviews, activity-specific data
  • How will we review and analyze the data collected?
  • What will we do with the findings?
    • Specific actions
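
To picture what answers to these questions might look like, here is a hypothetical worked example for a single project activity, sketched as a Python dictionary; none of the specifics are drawn from an actual plan.

```python
# The guiding questions answered for one hypothetical activity (peer tutoring).
tutoring = {
    "services provided":    "Weekly peer tutoring for program participants",
    "what can be measured": ["perceptions", "participation", "academic progress"],
    "information sources":  ["student records", "session sign-in sheets",
                             "end-of-term surveys"],
    "analysis":             "Compare term GPA and persistence of tutored "
                            "vs. non-tutored students",
    "use of findings":      "Adjust tutoring schedule or recruiting "
                            "if participation lags",
}

for question, answer in tutoring.items():
    print(f"{question}: {answer}")
```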

Unlike universities, most community and state colleges are not hotbeds of research and evaluation. So what can grant writers do to prepare themselves to meet the “evaluation plan” challenge?

  • Make friends with a statistician; they tend to hang out in the Mathematics or Institutional Research departments.
  • Take a graduate-level course in educational statistics. If you’re writing about something, it helps to have at least a rudimentary knowledge of the subject.
  • Find good resources. I have several textbook-like evaluation manuals, but my go-to, dog-eared guide for developing an evaluation plan is the National Science Foundation’s “2010 User-Friendly Handbook for Project Evaluation” (Logic Model information in Chapter 3).
  • Find the open-access list of Institutional Research (IR) Links on the website of the Association for Institutional Research (AIR; a membership organization), which provides more than 2,200 links to external IR web pages on a variety of topics related to data and decisions for higher education.
  • Community College Research Center (CCRC) resources, such as publications on prior research, can guide evaluation plan development (http://ccrc.tc.columbia.edu/). The CCRC FAQs Web page provides national data useful for benchmarking your grant program’s projected outcomes.

Blog: Mistakes Made and Lessons Learned, Part I – Working with Your Evaluator

Posted on March 18, 2015 in Blog

Director, Experiential Learning Center, Truckee Meadows Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

When I assumed the PI-ship of the Scenario-Based Learning Project in 2006, I had worked closely with the prior PI as the project’s instructional designer, knew I enjoyed the ATE community, shared their vision of an innovative 21st century technician workforce, and had management and teaching experience. What more was there to know? A lot, as it turned out. It quickly became apparent to me that the world of ATE projects and centers was a different place than any I had worked in before.

When I took over as PI on the second grant proposal, the former PI suggested we use the evaluators from a previous non-ATE grant she had led. Big mistake. Those evaluators reported to my National Visiting Committee (NVC) during our initial committee meeting that they didn’t have any results to report because they did not plan to collect data until the end of the project year. My NVC was not happy. I was not happy. My stakeholders were not happy.

How did this happen? In my naivety, I didn’t even discuss the evaluation with the evaluators beyond an initial outline of a plan involving questions to be answered by the evaluation—I thought the evaluators knew what they were doing because they were evaluators. I didn’t understand the complex nature of the profession of evaluation. Since then I have joined the American Evaluation Association, attended their annual conference, and regularly attend EvaluATE’s webinars. I made a mistake, learned from it, and the project improved.

I quickly learned that some evaluators and funders are all about the summative report. The project said it would do A; here is the data to show it did or did not do A. End of report. In contrast, the ATE program is interested in how we are doing as we progress through our work. Formative reports from the evaluator serve as a check-in on where you are in your work plan and outcomes. Your evaluator needs to be a critical friend: an advisor who keeps a distance and is critical where needed yet still supportive with ideas, solutions, and contacts.

Choose an evaluator with ATE experience and expertise in collecting and analyzing the kind of data you will need. Early in your discussions with your evaluator and project team, confirm who will collect the data, how, from whom, and when. If you decide to collect the data yourselves and have the evaluator do the analysis (which saves money but requires time), confirm that your evaluator is willing to mentor your data collection team as needed.

You might need interim reports/summaries from your evaluator for meetings with stakeholders, your NVC, advisory boards, the ATE annual survey, and your annual and final reports. It is a good idea to align your data collection with your reporting needs to best use your resources.

Learn more about the process of evaluation every chance you get. Choose an evaluator with the expertise appropriate to your project or center. Think of your evaluation as a resource and your evaluator as an ally to help you and your team to create the best project or center possible.

Blog: Air Travel: Getting Down to the Nitty-Gritty

Posted on January 28, 2015 in Blog

Managing Director, EvaluATE

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Have you ever gotten home from a work trip and been surprised by how much it cost? There are a lot of hidden costs associated with travel, and I would like to share some tips to help you eliminate your post-travel sticker shock.

These tips are for domestic air travel only; keep an eye out for my next blog on foreign travel!

Pre-Travel. Prior to travel, create a budget. Making a budget for your trip allows you to see exactly what your expenses will be. (Download our travel budget template.) Below are details about various expenses (please note this is not an exhaustive list).

Tip: Check your organization’s travel policies prior to booking and traveling.

Flight. Gather your estimated flight cost from your desired carrier. I recommend adding $100 to the estimated cost to help cover any flight changes. If you use a booking service such as AAA, check to see if there is a booking fee (normally $10-$20) and factor it in.

Tip: If you find a lower fare than what AAA is offering, let them know; they may match the lower price.

Hotel. You can access prices through the hotel websites, but make sure tax is included in your calculations.

Tip: Always use your correct travel dates when price checking; hotel rates can vary by both day and week.

Food. I suggest using government per diem rates to calculate food costs. The GSA Per Diem Rates page lets you enter the city you are traveling to and provides the cost per day. The per diem rate includes costs for meals, lodging, and incidentals; for this purpose, just use the meal rate.

Tip: Some institutions only allow 75% of per diem for first and last day of travel—you may want to check on this.

Miscellaneous. The major categories have been addressed, so what else might be missing from the budget?

  • Checked Baggage. This varies by airline but can be $25 per bag each way (check with your airline for charges).
  • Ground Transportation. Will you be using a taxi, a rental car, or other ground transportation? Estimate all of these costs and add them to your budget. You can search online to get estimated charges for all transportation.
  • Airport Parking. Are you parking your vehicle at the airport? Fees can range from $8-$20 per day, depending on location and duration.
  • Mileage/Gas. Are you driving to the airport or renting a car? Make sure to include a budget for mileage or gas. Check with your organization regarding mileage reimbursement rates.
  • Internet. If you plan on using the internet at your hotel, there may be an associated fee. I have seen these vary from a flat fee to a per-day charge. Check with the hotel and factor in any fees.

Tip: Do you have an external evaluator or an advisory committee? Make sure your organization’s travel policy is correctly reflected in their contracts; otherwise, it could become an issue when their travel is processed.

Once your budget is finalized, I suggest adding a buffer of $100-$200 to the final budgeted amount. This helps cover any unforeseen incidentals. It’s always better to overbudget than to underbudget. Happy traveling!

Example Travel Budget to Orlando, FL

Item                      Cost       Buffer     Total
Flight                    $350.00    $100.00    $450.00
Hotel                     $600.00               $600.00
Food                      $150.00               $150.00
Bags (both ways)          $50.00                $50.00
Ground Transportation     $50.00                $50.00
Parking                   $35.00                $35.00
Mileage/Gas               $28.00                $28.00
Internet                  $10.00                $10.00
Buffer                               $100.00    $100.00
Total Estimated Budget                          $1,473.00
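
For anyone who prefers to script the arithmetic, here is a minimal Python sketch reproducing the roll-up in the table above, plus the 75 percent first/last-day meal rule mentioned earlier; the $50 daily meal rate is a hypothetical figure, not a GSA rate.

```python
# Roll-up of the Orlando example above.
line_items = {
    "Flight": 350.00, "Flight buffer": 100.00, "Hotel": 600.00, "Food": 150.00,
    "Bags (both ways)": 50.00, "Ground transportation": 50.00,
    "Parking": 35.00, "Mileage/Gas": 28.00, "Internet": 10.00,
}
buffer = 100.00  # final cushion for unforeseen incidentals
print(f"Total estimated budget: ${sum(line_items.values()) + buffer:,.2f}")  # $1,473.00

# If your institution allows only 75% of the meal per diem on the first and
# last days of travel, a 3-day trip at a hypothetical $50/day meal rate is:
meal_rate, days = 50.00, 3
meals = meal_rate * (days - 2) + 0.75 * meal_rate * 2  # full middle days, 75% ends
print(f"Meals: ${meals:,.2f}")  # $125.00
```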

Blog: This is How “We” Do Data: Collect, Unravel, Summarize!

Posted on November 19, 2014 in Blog

Executive Director, FLATE – Florida Advanced Technological Education Center of Excellence

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

At this time of year (early spring), we at FLATE stop to review, tally, analyze, and aggregate our collected data from the previous calendar year. We look for new comparative data sets from other organizations to see how we are doing in specific activities, or groups of activities related to FLATE’s outreach, curriculum reform and professional development efforts. We look for ways to refine our data collection by identifying unclear survey questions or gaps in information we retrieve from various surveys. Here’s how we do it and how it all comes together:

We start with a thorough review and scrub of the raw data that have been collected and recorded. Although we regularly review data and trends during the year to prepare for our year-end summaries, we take a final look to be sure all the data we use are “good data.” Once this somewhat tedious stage is complete, the treasure hunt for “hidden data nuggets” begins.

Many times we are simply looking for the summary data we have to report and for trends (hopefully positive ones) in the impact of our activities, resources, and events. Beyond reporting, this kind of information tells us whether we are still reaching the same kinds of participants and the same numbers of stakeholders across different kinds of events, resources, and activities. Reviewing this information carefully helps us target specific “missed” audiences at various events in the coming year.

After we complete reviewing, cleaning, organizing, and summarizing our annual reporting data, we continue to probe and assess what else we can learn from it. In various stages of data review and summarizing, we often find ourselves asking: “I wonder if…?”; “how could I know if this is connected to that?” or “wouldn’t it be great if we also could know…?” These are questions we revisit by bringing data together from different sources and looking at the data from different perspectives. The exercise becomes a game of puzzling together different results, trying to reveal more impact.

We move from the observation stage of “wows,” “oh my’s,” and “ah-ha’s!” to filtering which ideas or questions will give us the most “bang for our buck,” as well as help us better answer the questions that NSF and our valued stakeholders ask. The cycle of continuous improvement underlies this whole process. How can we do what we do better by being more strategic in our activities, events, and resources? Can we ask better survey questions that will reveal more and better information? Can we totally change our approach to get more impactful data?

Here is an example using our websites and blogs data: FLATE collects monthly data using Google Analytics for its websites and newsletter and reviews this information quarterly with staff and the leadership team. The objective is to compare website and newsletter activity and growth (as measured by visits) against benchmarks and evaluate usage trends. The process of collecting and taking a look at this information led to a further question and action item: In addition to increasing use of these FLATE products, we now wish to increase the percentage of returning visitors, providing evidence that our community of practice not only uses, but relies on FLATE as an information resource.
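
A quarterly review like the one described might look something like the following sketch; the monthly visit counts and the 5 percent growth benchmark are hypothetical placeholders, not FLATE’s actual Google Analytics figures.

```python
# Hypothetical quarterly check of visit growth and returning-visitor share.
monthly = [  # (month, total visits, returning visits)
    ("Jan", 4200, 1100),
    ("Feb", 4550, 1300),
    ("Mar", 4900, 1500),
]
benchmark_growth = 0.05  # assumed month-over-month growth target

for (_, prev_visits, _), (month, visits, returning) in zip(monthly, monthly[1:]):
    growth = (visits - prev_visits) / prev_visits
    status = "meets benchmark" if growth >= benchmark_growth else "below benchmark"
    print(f"{month}: {visits} visits ({growth:+.1%}, {status}); "
          f"returning visitors: {returning / visits:.1%}")
```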

Blog: Managing Your Evaluator

Posted on November 12, 2014 in Blog

Director, South Carolina Advanced Technological Education Center of Excellence, Florence-Darlington Technical College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I am Elaine Craft, Director of the SC ATE Center of Excellence since 1995 and President/CEO of SCATE Inc. since 2005. My dual roles mean that I am both a grantee and an evaluator. I’ve seen the ups, the downs, the good, and the bad on both sides of evaluation.

Managing your evaluator begins even before you contract for this service, as the contract sets parameters for the work ahead. It is your responsibility to see that your evaluator and the evaluation are serving your project well. Keep in mind that you will need to include much of the information the evaluator will be generating in the “Results of Prior Support” section of your next NSF ATE proposal!

It is helpful if your evaluator not only knows the essentials of project evaluation, but also understands the NSF ATE program and the two-year college environment. If you have an evaluator who hasn’t “walked a mile in your community college moccasins,” you will need to devote time to helping him or her understand your environment and the students you serve. There may also be terminology that is specific to community colleges, your institution, or your discipline that needs to be explained.

Everyone is busy, so scheduling should be a top priority. Share a copy of your institution’s calendar and discuss good times and bad times for certain activities. For example, the timing of student surveys is particularly sensitive to the academic calendar. Also, your evaluator may want to attend special project events such as advisory board meetings, professional development events, or summer camps. These dates should be scheduled with your evaluator as early as possible, as the evaluator is likely to have other clients and commitments that must be taken into consideration.

Make sure that you have a clear understanding with your evaluator about when reports are due. You should ask to receive your annual evaluation report before your annual report to the NSF is due. You will want to have time to review the report and work with your evaluator to correct any errors of fact before it is finalized and presented to the NSF or others.

Don’t settle for fewer evaluation services than you have contracted for, but also avoid adding things that were not in the original contract. The evaluator may be amenable to some modifications in the scope of work, but keeping your project and evaluation aligned with the original plan will help avoid mission creep for both the project and the evaluator.

Last, speak up! Your evaluator can’t adjust to better meet your expectations if you don’t articulate areas that are especially great and/or areas of concern. If both grantee and evaluator are on the same page and communicate often around the topics above, evaluation becomes a win-win for both. If your evaluator is not proactive in contacting you, you need to be proactive to keep communications flowing.

Tip: “Good advice is not often served in our favorite flavor.” Tim Fargo

An evaluator’s role is to see you better than you can see yourself. Let your evaluator know that you appreciate both accolades and guidance for improvement.