Newsletter - Summer 2013

Newsletter: What grant writers need to know about evaluation

Posted on July 1, 2013

Coordinator of Grants Development and Management

Fellow grant writers: Do you ever stop and ask yourselves, “Why do we write grants?” Do you actually enjoy herding cats, pulling teeth, or the inevitable stress of a looming proposal deadline? I hope not. So what is the driver? We shouldn’t write a grant simply to get funded or to earn prestige for our colleges. Those benefits may be motivators, but we should write to get positive results for the faculty, students, and institutions involved. And we should be able to evaluate those results in useful and meaningful ways so that we can identify improvements and demonstrate the project’s value.

Evaluation isn’t just about satisfying a promise or meeting a requirement to gather and report data; it’s about gathering meaningful data that can be used to determine the effectiveness of an activity and the impact of a project. When developing a grant proposal, one often starts with the goals, then defines the objectives, then plans the activities, hoping that in the end the evaluation data will prove that the goals were met and the project was a success. That is putting a lot of faith in “hope.” I find it more promising to begin with the end in mind from an evaluation perspective: What is the positive change we hope to achieve, and how will it be evidenced? What does success mean? How can we tell? When will we know? And how can we get participants to provide the information we will need for the evaluation?

The role of a grant writer is too often like that of a quilt maker—sections of the grant solicitation are delegated to different members of the institution, with the evaluation section often outsourced to a third-party evaluator. Each party submits its content, and the grant writer scrambles to patch it all together. Now instead of a quilt, consider the construction of a tapestry. Instead of chunks of material stitched together in independent sections, each thread is carefully woven to create a larger, more cohesive design. The entire development team should work together to fully understand each aspect of the proposal and collaboratively develop a coherent plan for achieving the desired outcomes. The project workplan, budget, and evaluation components should not be designed or executed independently; they develop simultaneously and depend on one another.

I encourage you to think like an evaluator as you develop your proposals. Prepare yourself and challenge your team to justify the value of each goal, objective, and activity and to explain how that value will be measured. If at all possible, involve your external or internal evaluator early in proposal development. The better the evaluator understands your overall concept and activities, the better he or she can tailor the evaluation plan to yield the evidence you need. A strong workplan and evaluation plan will help proposal reviewers connect the dots and see the potential of your proposal. It will also serve as a roadmap to success for your project implementation team.

Newsletter: Data Management Plan (DMP)

Posted on July 1, 2013

Director of Research, The Evaluation Center at Western Michigan University

DMPs are not evaluation plans, but they should address how evaluation data will be handled and possibly shared.

DMPs are required for all NSF proposals and are uploaded as a Supplementary Document in NSF’s FastLane proposal submission system. They can be up to two pages long and should describe:

  • the kind of data you will gather
  • how you’ll format the data and metadata (metadata are the documentation that describes your primary data; see the sketch after this list)
  • what policies will govern the access and use of your data by others
  • how you’ll protect privacy, confidentiality, security, and intellectual property
  • who might be interested in accessing your data and how they might use them
  • your plans for archiving the data and preserving access to them
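
For a concrete illustration of the metadata point, the short Python sketch below documents a fictional evaluation survey dataset as a simple data dictionary. This is only one possible approach; the file name, variables, and access policy are invented for illustration, and NSF does not prescribe any particular format or tool.

    # A hypothetical data-dictionary entry for a fictional evaluation dataset.
    # Field names and values are invented for illustration, not an NSF requirement.
    metadata = {
        "dataset": "participant_survey_2013.csv",
        "description": "Post-workshop survey of project participants",
        "collected": "2013-06-15",
        "variables": {
            "respondent_id": "anonymized participant identifier (integer)",
            "institution_type": "2-year college, 4-year college, or other",
            "satisfaction": "overall satisfaction, 1 (low) to 5 (high)",
        },
        "access": "de-identified data shared on request after project close",
    }

    # Print the variable documentation in human-readable form.
    for name, definition in metadata["variables"].items():
        print("{}: {}".format(name, definition))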

To learn more about data management plans, check out these resources:

Newsletter: Evaluator Qualifications: Which is more important, evaluation or subject-matter expertise?

Posted on July 1, 2013

When shopping for an external evaluator, a common question is whether it is better to hire someone who deeply understands your project’s content area or someone with technical expertise in evaluation, research, and measurement.

In an ideal world, an evaluator or evaluation team would have high levels of both types of expertise. Since most evaluators come to evaluation from another content area, you may be able to find an evaluator who is both an expert in your content area and a skilled evaluator. But such combinations are relatively rare. So, back to the original question: if you do have to choose, which way should you lean? My answer: evaluation expertise.

Most professional evaluators have experience working in many different content areas. For example, here at The Evaluation Center at Western Michigan University, where EvaluATE is based, in the past year alone we have evaluated the proposal review process for the Swiss National Science Foundation, a project focused on performative design, a graduate-level optical science program, institutional reform at a small university, a community literacy initiative, substance abuse treatment, a career and technical education research center, a preschool music education program, and a set of International Labour Organisation evaluations—among other things. Situational practice, defined as “attending to the unique interests, issues, and contextual circumstances in which evaluation skills are being applied,” is a core competency for professional evaluators (Canadian Evaluation Society, 2010). Regardless of expertise, any evaluator should take the time to learn about the content and context of your project, asking questions and doing background research to fill in any significant knowledge gaps. Additionally, if your project has an internal evaluation component, you most likely already have subject-matter expertise embedded in your evaluation activities.

While it might feel more natural to lean toward someone with a high level of subject-matter expertise who “speaks your language,” there is a risk in hiring an evaluator for subject-matter knowledge rather than evaluation expertise: without a strong foundation in education and social science research methods, such an evaluator may base judgments about a project on personal opinion and experience rather than on systematic inquiry and interpretation. If the perspectives and guidance of subject-matter experts bring value to your work, a better role for them may be that of advisor.

Regardless of the type of evaluator you select, a key to a successful evaluation is regular and open communication between project stakeholders and the evaluation team. Even if your evaluator has been involved with similar efforts, he or she will still benefit from learning about your project’s unique context.

For more information about evaluator competencies see http://bit.ly/10v3dc3.

Newsletter: Evaluation Planning Checklist for ATE Proposals

Posted on July 1, 2013

and other resources to assist in proposal development and evaluation planning

To help ATE proposers navigate the intersection of proposal development and evaluation planning, EvaluATE developed the Evaluation Planning Checklist for ATE Proposals. There is more to addressing evaluation in your proposal than including a section on evaluation: information pertinent to your evaluation should also be evident in your project summary, references, results of prior NSF support (the first part of the project description, for those who have received NSF funding before), budget and budget justification, and supplementary documents. Organized by proposal component, the checklist details what you need to know and do to integrate evaluation throughout your proposal. The checklist was originally released last fall and has since been revised based on feedback from members of the ATE community.

We also recommend you read the advice of Elizabeth Teles, former ATE program co-lead and member of EvaluATE’s National Visiting Committee. You can access Dr. Teles’s 10 Helpful Hints and 10 Fatal Flaws: Writing Better Evaluation Sections in Your Proposals.

Another resource proposers may find useful is EvaluATE’s Logic Model Template. Preformatted with editable text boxes, this one-page document is designed so that you can quickly and easily modify it to suit your own needs. Logic models are useful for project development, evaluation planning, and monitoring progress.
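
For readers new to logic models: a logic model traces the chain from resources to long-term results. The following compressed example is purely hypothetical, intended only to show the shape of that chain:

  Inputs: NSF funding; faculty time
  Activities: summer workshops for technician-program instructors
  Outputs: 40 instructors trained; 12 course modules revised
  Outcomes: instructors adopt the updated curricula in their courses
  Impact: improved student preparation and job placement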

Newsletter: What makes a good evaluation section of a proposal?

Posted on July 1, 2013

Principal Research Scientist, Education Development Center, Inc.

As a program officer, I read hundreds of proposals for different NSF programs and saw many different approaches to writing a proposal evaluation section. From that vantage point, here are a few tips that may help ensure your evaluation section shines.

First, make sure to involve your evaluator in writing the proposal’s evaluation section. Program officers and reviewers can tell when an evaluation section was written without consulting an evaluator, and it suggests that evaluation is not integrated into your project planning.

Don’t just call an evaluator a couple of weeks before the proposal is due! A strong evaluation section comes from a thoughtful, robust, tailored evaluation plan, and that takes collaboration with an evaluator. Get them on board early and talk with them often as you develop your proposal. They can help you develop measurable objectives, add insight to the proposal’s organization, and, of course, work with you to develop an appropriate evaluation plan.

Reviewers and program officers look to see that the evaluator understands the project. This can be shown with a logic model or with a paragraph that justifies the evaluation design in terms of the proposed project design. The evaluation section should also connect the project objectives and targeted outcomes to evaluation questions, data collection methods and analysis, and dissemination plans. A matrix format works well here because it helps the reader see clearly which data will answer which evaluation question and how both connect to the project’s objectives.
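
For illustration only, one row of such a matrix might look like the following; the objective, methods, and timing are hypothetical:

  Objective: increase persistence in the technician program by 10 percent
  Evaluation question: Did persistence improve for participating students?
  Data collection: institutional enrollment records; annual student focus groups
  Analysis: year-over-year comparison of persistence rates; thematic coding of focus-group notes
  Reporting: annual evaluation report; debrief sessions with project staff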

A strong evaluation plan shows that the evaluator and the project team are in sync and working together, applies a rigorous design and reasonable data collection methods, and answers important questions that will help demonstrate the value of the project and surface areas for improvement.