Archive: evaluation

Blog: Building ATE Social Capital Through Evaluation Activities

Posted on February 24, 2021 in Blog

President, Mullins Consulting, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Social networks have value. Social capital refers to the value of social networks, or whom people know, and the inclinations that arise from these networks to do things for each other. Thus, people benefit from the trust, reciprocity, information, and cooperation of these social networks” (Robert D. Putnam, Harvard Kennedy School of Government, 2018).

Within the context of “new-to-ATE” grants, many novice PIs have low social capital compared to more experienced PIs. New PIs are often not familiar with the norms of NSF grant proposal writing, reporting, and other communication; other PIs and collaborators in the community; and other elements that empower more experienced PIs. While proposal-writing mentoring programs are available, not all ATE applicants are granted this opportunity, and this mentoring typically ends once a program is funded.

The evaluator is in a unique position to strengthen social capital by offering new PIs access to their client pool of ATE grantees to facilitate networking and the sharing of information. Connections can be made through the evaluator, new knowledge shared, and relationships cultivated. Increasing access to networks and information can lead to stronger program implementation strategies as well as increased PI confidence in the process.

Here are three tips on when and how an evaluator can connect clients to each other.

1.     The First Six Months. My evaluation team continually discusses how the ATE programs we are evaluating might logically connect (e.g., discipline/area, program components). When a challenge arises, we see what connections can be made so that novice PIs have someone to use as a resource in navigating the challenge. Most experienced PIs are willing to share their experiences in order to help others.

2.     National ATE PI Conference. As a lead evaluator, I find time at the ATE conference to introduce clients to one another over coffee or before or after sessions being attended by my clients. I preface these face-to-face meetings with inquiries beforehand to make sure clients are interested in meeting and have available time. Most report it helpful to meet others in similar fields and get a chance to talk to each other about their programs.

3.     Year One Reporting Time. I have found that, traditionally, novice ATE PIs are very anxious about writing their first annual report to the NSF. To address this challenge, I established a meeting of new and more experienced PIs to discuss year one reporting. In the meeting, a seasoned PI presents how they approached first-year reporting and answers questions alongside a former NSF program officer who provides further guidance. The positive feedback from this meeting has been tremendous.

Connecting new PIs with more experienced PIs facilitates the growth of social capital, resulting in better collaborative inquiry, stronger networks, persistence with project implementation, and subsequent reporting of impact.

 

Reference:

Harvard Kennedy School of Government. (2018). Social Capital Primer. http://robertdputnam.com/bowling-alone/social-capital-primer/

 

Blog: Creating an Evaluation Design That Allows for Flexibility

Posted on January 13, 2021 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Holly Connell, Evaluator, Kansas State University
Allison Teeter, Assistant Director, Strategic Initiatives and Development

There is no better time than now to talk about the need for flexibility in evaluation design and implementation. It is natural for long-term projects involving many partners, institutions, and objectives to experience changes as they progress. This is especially apparent in the age of the coronavirus pandemic, where many projects are faced with decisions about how to move forward, while still needing to make and demonstrate impact. Having an evaluation design that is too rigid does not allow for adjustments throughout the implementation process.

This blog provides a general guide for building a flexible evaluation design.

Design the Evaluation

Develop an evaluation plan with four to six evaluation questions that align with the project’s goals and objectives but leave ample flexibility for changes throughout the project’s implementation. A sound evaluation design will guide how you conduct the evaluation activities while answering your key evaluation questions. The design will include factors such as:

  • Methods of data collection: Consider your audience and which method will work best and yield the most robust results. If a chosen method does not yield results, consider whether it should be used again later, or at all. Ensure that no single activity is responsible for collecting data for all or most of the evaluation questions. It is best practice to use a triangulation approach: use multiple methods of data collection to strengthen the quality of your results, and gather evidence toward as many evaluation questions as applicable in each data collection (see the illustrative sketch after this list). If an evaluation activity falls through or does not pan out as anticipated, you will still have data to provide evidence for the evaluation.
  • Sample sizes: Consider at what point a sample size is too small (or too large) for what you originally planned. Develop a backup plan for this situation. Collect data from a variety of stakeholders. Changes in project implementation can affect your target audiences differently. Build this into your evaluation plan by ensuring all applicable target audiences are represented throughout your data collections.
  • Timing of data collection: Be mindful of major events in the lives of the target audience. For example, holding an online survey during exam season will likely reduce your sample size. Do not limit yourself to specific timing of an evaluation activity unless necessary. For example, if a survey can take place at any time during the summer, specify “Summer 2021” rather than “August 2021.”
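
To make the triangulation point concrete, here is a minimal sketch (not from the original post) of an evaluation data matrix in which each evaluation question is covered by more than one data collection method. The evaluation questions, methods, and column names are hypothetical placeholders.

```python
# Illustrative sketch only; evaluation questions and methods are hypothetical.
import pandas as pd

methods = ["Student survey", "Faculty interviews", "Institutional records"]
coverage = {
    "EQ1: Is the project reaching its intended audience?":   [True,  False, True],
    "EQ2: Are participants gaining the intended skills?":     [True,  True,  False],
    "EQ3: How well are project partnerships functioning?":    [False, True,  True],
    "EQ4: What early outcomes can be linked to the project?": [True,  True,  True],
}

# Rows are evaluation questions; columns are data collection methods.
matrix = pd.DataFrame.from_dict(coverage, orient="index", columns=methods)

# Flag any question that depends on a single method (i.e., is not triangulated).
matrix["Triangulated"] = matrix[methods].sum(axis=1) >= 2
print(matrix)
```

A quick check like this during planning makes it easier to spot evaluation questions that would be left without evidence if a single data collection activity falls through.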

Keep in mind that most evaluation projects do not go completely as planned and that various aspects of the project may undergo changes.

Being flexible with your design can yield much more meaningful and impactful results than rigidly sticking to the plan originally in place. Changes and revisions may be needed as the project evolves or due to unforeseen circumstances. Don’t hesitate to revise the evaluation plan; just make sure to document and justify the changes being made. Defining a list of potential limitations (e.g., of methods, data sources, potential bias) while developing your initial evaluation design can help later on when determining whether to stay on course with the original plan or to revise the evaluation design.

Find out more about developing evaluation plans in the Pell Institute Evaluation Toolkit.

Webinar: Evaluation Crash Course for Non-evaluators

Posted on January 12, 2021 in Webinars

Presenter(s): Emma Leeburg, Lyssa Wilson Becho
Date(s): February 24, 2021
Time: 1 p.m. to 2 p.m. Eastern
Recording: https://youtu.be/kOWKhsHwQLg

Do you have questions about evaluation? Like, what is it? Why is it required for projects funded by the National Science Foundation? How much does it cost? Who can do it? What does an evaluation look like? How can evaluation help me and my project?

We will answer all these questions and more in this webinar. This session is for those who are new to evaluation or want a refresher on the basics. The examples in this webinar are especially tailored for two-year college faculty and grants specialists who are planning on submitting proposals to NSF’s Advanced Technological Education (ATE) program. However, anyone who is interested in learning more about program evaluation is welcome to attend.

Resources:
Handout
Slides

Blog: Bending Our Evaluation and Research Studies to Reflect COVID-19

Posted on September 30, 2020 in Blog

CEO and President, CSEdResearch.org

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting education research and evaluation during the season of COVID-19 may make you feel like you are the lone violinist playing tunes on the deck of a sinking ship. You desperately want to continue your research, which is important and meaningful to you and to others. You know your research contributes to important advances in the large mural of academic achievement among student learners. Yet reality has derailed many of your careful plans.

 If you are able to continue your research and evaluation in some capacity, attempting to shift in a meaningful way can be confusing. And if you are able to continue collecting data, understanding how COVID-19 affects your data presents another layer of challenges.

In a recent discussion with other K–12 computer science evaluators and researchers, I learned that some were rapidly developing scales to better understand how COVID-19 has impacted academic achievement. In their generous spirit of sharing, these collaborators have shared scales and items they are using, including two complete surveys, here:

  • COVID-19 Impact Survey from Panorama Education. This survey considers the many ways (e.g., well-being, internet access, engagement, student support) in which the shift to distance, hybrid, or in-person learning during this pandemic may be impacting students, families, and teachers/staff.
  • Parent Survey from Evaluation by Design. This survey is designed to measure environment, school support, computer availability and learning, and other concerns from the perspective of parents.

These surveys are designed to measure critical aspects within schools that are being impacted by COVID-19. They can provide us with information needed to better understand potential changes in our data over the next few years.

One of the models I’ve been using lately is the CAPE Framework for Assessing Equity in Computer Science Education, recently developed by Carol Fletcher and Jayce Warner at the University of Texas at Austin. This framework measures capacity, access, participation, and experiences (CAPE) in K–12 computer science education.

Figure 1. Image from https://www.tacc.utexas.edu/epic/research. Used with permission. From Fletcher, C. L., & Warner, J. R. (2019). Summary of the CAPE Framework for Assessing Equity in Computer Science Education.

 

Although this framework was developed for use in “good times,” we can use it to assess current conditions by asking how COVID-19 has impacted each of the critical components of CAPE needed to bring high-quality computer science learning experiences to underserved students. For example, if computer science is classified as an elective course at a high school, and all electives are cut for the 2020–21 academic year, this will have a significant impact on access for those students.

The jury is still out on how COVID-19 will impact students this year, particularly minoritized and low-socio-economic-status students, and how its lingering effects will change education. In the meantime, if you’ve created measures to understand COVID-19’s impact, consider sharing those with others. It may not be as meaningful as sending a raft to a violinist on a sinking ship, but it may make someone else’s research goals a bit more attainable.

(NOTE: If you’d also like your instruments/scales related to COVID-19 shared in our resource center, please feel free to email them to me.)

Blog: Quick Reference Guides Evaluators Can’t Live Without

Posted on August 5, 2020 in Blog

Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

* This blog was originally published on AEA365 on May 15, 2020:
https://aea365.org/blog/quick-reference-guides-evaluators-cant-live-without-by-kelly-robertson/

My name is Kelly Robertson, and I work at The Evaluation Center at Western Michigan University and EvaluATE, the National Science Foundation–funded evaluation hub for Advanced Technological Education.

I’m a huge fan of quick reference guides. Quick reference guides are brief summaries of important content that can be used to improve practice in real time. They’re also commonly referred to as job aids or cheat sheets.

I found quick reference guides to be especially helpful when I was just learning about evaluation. For example, Thomas Guskey’s Five Critical Levels of Professional Development Evaluation helped me learn about different levels of outcomes (e.g., reaction, learning, organizational support, application of skills, and target population outcomes).

Even with 10-plus years of experience, I still turn to quick reference guides every now and then. Here are a few of my personal favorites:

My colleague Lyssa Becho is also a huge fan of quick reference guides, and together we compiled a list of over 50 evaluation-related quick reference guides. The list draws on the results from a survey we conducted as part of our work at EvaluATE. It includes quick reference guides that 45 survey respondents rated as most useful for each stage of the evaluation process.

Here are some popular quick reference guides from the list:

  • Evaluation Planning: Patton’s Evaluation Flash Cards introduce core evaluation concepts such as evaluation questions, standards, and reporting in an easily accessible format.
  • Evaluation Design: Wingate’s Evaluation Data Matrix Template helps evaluators organize information about evaluation indicators, data collection sources, analysis, and interpretation.
  • Data Collection: Wingate and Schroeter’s Evaluation Questions Checklist for Program Evaluation provides criteria to help evaluators understand what constitutes high-quality evaluation questions.
  • Data Analysis: Hutchinson’s You’re Invited to a Data Party! explains how to engage stakeholders in collective data analysis.
  • Evaluation Reporting: Evergreen and Emery’s Data Visualization Checklist is a guide for the development of high-impact data visualizations. Topics covered include text, arrangement, color, and lines.

If you find that any helpful evaluation-related quick reference guides are missing from the full collection, please contact kelly.robertson@wmich.edu.

Blog: Three Ways to Boost Network Reporting

Posted on April 29, 2020 in Blog

Assistant Director, Collin College’s National Convergence Technology Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The National Convergence Technology Center (CTC), a national ATE center focusing on IT infrastructure technology, manages a community called the Convergence College Network (CCN). The CCN consists of 76 community colleges and four-year universities across 26 states. Faculty and administrators from the CCN meet regularly to share resources, trade know-how, and discuss common challenges.

Because so much of the CTC’s work is directed to supporting the CCN, we ask the member colleges to submit a “CCN Yearly Report” evaluation each February. The data from that “CCN Yearly Report” informs the reporting we deliver to the NSF, to our National Visiting Committee, and to the annual ATE survey. Each of those three groups needs slightly different information, so we’ve worked hard to include everything in a single evaluation tool.

We’re always trying to improve the “CCN Yearly Report” by refining the questions we ask, removing the questions we don’t need, and making any other adjustments that could boost the response rate. We want to make it easy on the respondents. Our efforts seem to be working: we received 37 reports from the 76 CCN member colleges this past February, a 49% response rate.

 We attribute this success to three strategies.  

  1. Prepare them in advance. We start talking about the February “CCN Yearly Report” due date in the summer. The CCN community gets multiple email reminders, and we often mention the report deadline at our quarterly meetings. We don’t want anyone to say they didn’t know about the report or its deadline. Part of this ongoing preparation also involves making sure everyone in the network understands the importance of the data we’re seeking. We emphasize that we need their help to accurately report grant impact to the NSF.
  2. Share the results. If we go to such lengths to make sure everyone understands the importance of the report up front, it makes sense to do the same after the results are in. We try to deliver a short overview of the results at our July quarterly meeting. Doing so underscores the importance of the survey. Beyond that, research tells us that one key to nurturing a successful community of practice like the CCN is to provide positive feedback about the value of the group (Milton, 2017). By sharing highlights of the report, we remind CCN members that they are a part of a thriving, successful group of educators.
  3. Reward participation. Grant money is a great carrot. Because the CTC so often provides partial travel reimbursement to faculty from CCN member colleges so they can attend conferences and professional development events, we can incentivize the submission of yearly reports. Colleges that want the maximum membership benefits, which include larger travel caps, must deliver a report. Half of the 37 reports we received last year were from colleges seeking those maximum benefits.

 We’re sure there are other grants with similar communities of organizations and institutions. We hope some of these strategies can help you get the data you need from your communities. 

 

References:  

 Milton, N. (2017, January 16). Why communities of practice succeed, and why they fail [Blog post].

Blog: Backtracking Alumni: Using Institutional Research and Reflective Inquiry to Improve Organizational Learning

Posted on April 2, 2020 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Faye R. Jones, Senior Research Associate, Florida State
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State

In a recent blog post, we shared practical tips for developing an alumni tracking program to assess students’ employment outcomes. Alumni tracking is an effective tool for assessing the quality of educational programs and helping determine whether programs have the intended impact.

In this post, we share the Backtracking technique, an advanced approach that supplements alumni tracking data with students’ institutionally archived records. Backtracking assumes that institutions and programs already gather student outcomes information (e.g., employment, salary, and advanced educational data) from alumni on a periodic basis (e.g., annually or every three years).

The technique uses institutional research (IR) archives to match students’ employment outcomes to academic and demographic variables (e.g., academic GPA, courses taken, grades, major, additional certifications, internships, gender, race/ethnicity). By pairing student outcomes data with academic and demographic variables, we can contextualize student pathways and explore the whole pathway, not just a moment in time.

Figure 1 shows an example of the Backtracking technique for two-year Associate of Arts (AA) and Associate of Science (AS) programs.

Figure 1. Backtracking Technique for AA/AS Programs 

Figure 1 illustrates three data collection layers. Layer 1, Institutional Research College Data, provides student completion data, academic history, and contact information. Advanced and transfer-degree data are also available through the National Student Clearinghouse, which can reveal the major that a former student (or graduate) entered after completing the AA/AS degree. Layer 2, Alumni Transfer Employment Data, includes student employment and advanced-degree information self-reported in alumni surveys.

Layer 3, Pathway Explanatory Data, embeds a qualitative component within the Backtracking technique in order to let alumni explain their undergraduate experiences. This layer helps us understand what happened during and after college. Most importantly, it lets us identify the critical junctures that students faced and the facilitators and hindrances that allowed students to overcome (or that caused) setbacks during these difficult periods.

To provide alumni with the best opportunities to share their experiences, we use IR archives to formulate questions based on key facts about students’ experiences. For example, if IR records show that a student transferred from college A to university B, we may ask the student about that specific experience. For a student who failed Calculus 1 once but passed it on the second try, we may ask what allowed that success.

Although individual student pathways are useful, we can also stratify these data by race and gender (or other factors) and then aggregate them to better understand student groups. We demonstrate how we aggregate the pathways in this short video.
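
As a rough illustration of this pairing and stratification step, the sketch below (not part of the original post) merges hypothetical Layer 1 IR records with Layer 2 alumni survey responses on a student identifier and then aggregates employment outcomes by demographic group. All file names and column names are assumptions made for the example.

```python
# Hypothetical example of pairing IR archives (Layer 1) with alumni survey data
# (Layer 2) and stratifying outcomes; file and column names are invented.
import pandas as pd

# Layer 1: institutional research archive (completion, academics, demographics)
ir_records = pd.read_csv("ir_college_data.csv")    # student_id, gpa, major, gender, race_ethnicity
# Layer 2: self-reported alumni outcomes (employment, salary, transfer major)
alumni_survey = pd.read_csv("alumni_survey.csv")   # student_id, employed, salary, transfer_major

# Pair student outcomes with academic and demographic variables
pathways = ir_records.merge(alumni_survey, on="student_id", how="inner")

# Stratify by demographic group and aggregate employment outcomes
summary = (
    pathways.groupby(["gender", "race_ethnicity"], dropna=False)
    .agg(
        n=("student_id", "count"),
        employment_rate=("employed", "mean"),
        median_salary=("salary", "median"),
    )
    .reset_index()
)
print(summary)
```

Individual rows of the merged data retain the pathway detail that informs the Layer 3 interviews, while the aggregated summary supports the group-level comparisons described above.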

The Backtracking technique requires skilled personnel with technical knowledge in IR and in data collection and analysis, or an Academic IR (who possesses both IR and research skills). Investing in such skill and knowledge is worthwhile:

    • Institutional research is powerful when used for formative and internal improvement and for generation of new knowledge 
    • Findings about former students using the Backtracking technique can provide useful information to improve program and institutional services (e.g., advising, formal practices, informal learning opportunities, etc.) 
    • Looking back at what worked or failed for past students can inform current practices and serve as a source of institutional learning 

References: 

Jones, F. R., & Mardis, M. A. (2019, May 15). Alumni tracking: The ultimate source for evaluating completer outcomes [Blog post]. Retrieved from https://www.evalu-ate.org/blog/jones2-may19/

Blog: Contracting for Evaluator Services

Posted on November 13, 2019 in Blog

CREATE Energy Center Principal Investigator, Madison Area Technical College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

You are excited to be working on a new grant proposal. You have a well-defined project objective, a solid plan to address the challenge at hand, a well-assembled team to execute the project, and a means for measuring your project’s outcomes. The missing ingredient is an evaluation plan for your project, and that means that you will need to retain the services of an evaluator!

New principal investigators often have limited prior experience with project evaluation, and  identifying and contracting with an evaluator can be a question mark for many. Fortunately, there are resources to help and recommended practices to make these processes easier.

The first tip is to explore the grant agency requirements and your institution’s procurement policies regarding evaluation services. Federal agencies such as the National Science Foundation (NSF) may accept a general evaluation plan written by the principal investigator, with agreement that an evaluator will be named later, or they may require the use of an external evaluator who is named in the proposal. Federal requirements can differ even within a single agency and may change from one year to the next. So it is important to be certain of the current program requirements.

Additionally, some institutions may require that project evaluation be conducted by independent third parties not affiliated with the college. Furthermore, depending on the size of the proposed project, and the scope of the evaluation plan, many colleges may have procurement policies that require a competitive request for quotes or bids for evaluator contracts. There may also be requirements that a request for bids must be publicly posted, and there may be rules dictating the minimum number of bids that must be received. Adhering to your school’s procurement policy may take several months to complete, so it is highly advisable to begin the search for an evaluator as early as possible.

The American Evaluation Association has a helpful website that includes a Find an Evaluator page, which can be used to search for evaluators by location. AEA members can also post a request for evaluator services to solicit bids. The EvaluATE website lists information specific to the NSF Advanced Technological Education (ATE) program and maintains a List of Current ATE Evaluators that may serve as a good starting point for identifying prospective evaluators.

When soliciting bids, it is advisable to create a detailed request that provides a summary of the project, a description of the services you are seeking, and specifies the information that you would like applicants to provide. At a minimum, you will want to request a copy of the evaluator’s CV and biosketch, and a description of their prior evaluation work.

If your institution requires you to entertain multiple bids, it is a good idea to develop a rubric that you can use to judge the bids that you receive. In most cases, you will not want to restrict yourself to accepting the lowest bid that is submitted. Instead, it is in the best interest of your project to make a selection based on both the experience and qualifications of the prospective evaluator and the perceived value of the services they can provide. In our experience, hourly rates for evaluator services can vary by as much as 400%, so receiving a sufficiently large pool of bids helps ensure that quoted rates are reasonable.
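
As one illustration (not drawn from the original post) of what such a rubric might look like, the sketch below scores each bid on weighted criteria rather than on price alone. The criteria, weights, and ratings are hypothetical placeholders to adapt to your institution's procurement requirements.

```python
# Hypothetical weighted rubric for comparing evaluator bids; criteria, weights,
# and the 1-5 ratings are placeholders, not recommendations.
WEIGHTS = {
    "relevant_experience": 0.35,
    "qualifications": 0.25,
    "quality_of_proposed_services": 0.25,
    "cost_value": 0.15,
}

bids = {
    "Evaluator A": {"relevant_experience": 5, "qualifications": 4,
                    "quality_of_proposed_services": 4, "cost_value": 3},
    "Evaluator B": {"relevant_experience": 3, "qualifications": 4,
                    "quality_of_proposed_services": 3, "cost_value": 5},
}

def weighted_score(ratings):
    """Combine criterion ratings into a single weighted score."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

# Rank bids from highest to lowest weighted score.
for name, ratings in sorted(bids.items(), key=lambda item: weighted_score(item[1]), reverse=True):
    print(f"{name}: {weighted_score(ratings):.2f}")
```

Scoring bids against a written rubric also documents why a selection was made, which is useful when the lowest bid is not the one chosen.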

Blog: How Can You Make Sure Your Evaluation Meets the Needs of Multiple Stakeholders?*

Posted on October 31, 2019 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We talk a lot about stakeholders in evaluation. These are the folks who are involved in, affected by, or simply interested in the evaluation of your project. But what these stakeholders want or need to know from the evaluation, the time they have available for the evaluation, and their level of interest are probably quite variable. The table below is a generic guide to the types of ATE evaluation stakeholders, what they might need, and how to meet those needs.

ATE Evaluation Stakeholders

Stakeholder group: Project leaders (PI, co-PIs)
What they might need:
  • Information that will help you improve the project as it unfolds
  • Results you can include in your annual reports to NSF to demonstrate accountability and impact
Tips for meeting those needs: Communicate your needs clearly to your evaluator, including when you need the information in order to make use of it.

Stakeholder group: Advisory committees or National Visiting Committees
What they might need:
  • Results from the evaluation that show whether the project is on track for meeting its goals, and if changes in direction or operations are warranted
  • Summary information about the project’s strengths and weaknesses
Tips for meeting those needs: Many advisory committee members donate their time, so they probably aren’t interested in reading lengthy reports. Provide a brief memo and/or short presentation with key findings at meetings, and invite questions about the evaluation. Be forthcoming about strengths and weaknesses.

Stakeholder group: Participants who provide data for the evaluation
What they might need:
  • Access to reports in which their information was used
  • Summaries of what actions were taken based on the information they needed to provide
Tips for meeting those needs: The most important thing for this group is to demonstrate use of the information they provided. You can share reports, but a personal message from project leaders along the lines of “we heard you and here is what we’re doing in response” is most valuable.

Stakeholder group: NSF program officers
What they might need:
  • Evidence that the project is on track to meet its goals
  • Evidence of impact (not just what was done, but what difference the work is making)
  • Evidence that the project is using evaluation results to make improvements
Tips for meeting those needs: Focus on Intellectual Merit (the intrinsic quality of the work and potential to advance knowledge) and Broader Impacts (the tangible benefits for individuals and progress toward desired societal outcomes). If you’re not sure about what your program officer needs from your evaluation, ask for clarification.

Stakeholder group: College administrators (department chairs, deans, executives, etc.)
What they might need:
  • Results that demonstrate impact on students, faculty, institutional culture, infrastructure, and reputation
Tips for meeting those needs: Make full reports available upon request, but most busy administrators probably don’t have the time to read technical reports or don’t need the fine-grained data points. Prepare memos or share presentations that focus on the information they’re most interested in.

Stakeholder group: Partners and collaborators
What they might need:
  • Information that helps them assess the return on the investment of their time or other resources

In case you didn’t read between the lines, the underlying message here is to provide stakeholders with the information that is most relevant to their particular “stake” in your project. A good way not to meet their needs is to only send everyone a long, detailed technical report with every data point collected. It’s good to have a full report available for those who request it, but many simply won’t have the time or level of interest needed to consume that quantity of evaluative information about your project.

Most importantly, don’t take our word about what your stakeholders might need: Ask them!

Not sure what stakeholders to involve in your evaluation or how? Check out our worksheet Identifying Stakeholders and Their Roles in an Evaluation at bit.ly/id-stake.

 

*This blog is a reprint of an article from an EvaluATE newsletter published in October 2015.

Blog: What Grant Writers Need to Know About Evaluation

Posted on September 4, 2019 in Blog

District Director of Grants and Educational Services, Coast Community College District

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Fellow grant writers: Do you ever stop and ask yourselves, “Why do we write grants?” Do you actually enjoy herding cats, pulling teeth, and the inevitable stress of a looming proposal deadline? I hope not. Then what is the driver? We shouldn’t write a grant just to get funded or to earn prestige for our colleges. Those benefits may be motivators, but we should write to get funding and support to positively impact our students, faculty, and the institutions involved. And we should be able to evaluate those results in useful and meaningful ways so that we can identify how to improve and demonstrate the project’s value.

Evaluation isn’t just about satisfying a promise or meeting a requirement to gather and report data. It’s about gathering meaningful data that can be utilized to determine the effectiveness of an activity and the impact of a project. When developing a grant proposal, one often starts with the goals, then thinks of the objectives, and then plans the activities, hoping that in the end, the evaluation data will prove that the goals were met and the project was a success. That requires a lot of “hope.”

I find it more promising to begin with the end in mind from an evaluation perspective: What is the positive change that we hope to achieve and how will it be evidenced? What does success mean? How can we tell if we have been successful? When will we know? And how can we get participants to provide the information we will need for the evaluation?

The role of a grant writer is too often like that of a quilt maker, delegating sections of the proposal’s development to different members of the institution, with the evaluation section often outsourced to a third-party evaluator. Each party submits their content, then the grant writer scrambles to patch it all together.

Instead of quilt making, the process should be more like the construction of a tapestry. Instead of chunks of material stitched together in independent sections, each thread is carefully woven in a thoughtful way to create a larger, more cohesive overall design. It is important that the entire proposal development team works together to fully understand each aspect of the proposal. In this way, they can collaboratively develop a coherent plan to obtain the desired outcomes. The project work plan, budget, and evaluation components should not be designed or executed independently; they occur simultaneously and are dependent upon each other. Thus, they should tie together in a thoughtful manner.

I encourage you to think like an evaluator as you develop your proposals. Prepare yourself and challenge your team to be able to justify the value of each goal, objective, and activity and be able to explain how that value will be measured. If at all possible, involve your external or internal evaluator early on in proposal development. The better the evaluator understands your overall concept and activities, the better they can tailor the evaluation plan to derive the desired results. A strong work plan and evaluation plan will help proposal reviewers connect the dots and see the potential of your proposal. These elements will also serve as road maps to success for your project implementation team.

 

For questions or further information, please reach out to the author, Lara Smith.