Vlog: Checklist for Program Evaluation Report Content

Posted on December 6, 2017 in Blog

Senior Research Associate, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This video provides an overview of EvaluATE’s Checklist for Program Evaluation Report Content, and three reasons why this checklist is useful to evaluators and clients.

Blog: Addressing Challenges in Evaluating ATE Projects Targeting Outcomes for Educators

Posted on November 21, 2017 in Blog

CEO, Hezel Associates

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Kirk Knestis here—CEO of Hezel Associates and a former career and technology educator and professional development provider—to share some strategies for addressing challenges unique to evaluating Advanced Technological Education (ATE) projects that target outcomes for teachers and college faculty.

In addition to funding projects that directly train future technicians, the National Science Foundation (NSF) ATE program funds initiatives to improve the abilities of grade 7-12 teachers and college faculty—the expectation being that improving their practice will directly benefit technical education. ATE tracks focusing on professional development (PD), capacity building for faculty, and technological education teacher preparation all rely implicitly on theories of action (typically illustrated by a logic model) that presume outcomes for educators will translate into outcomes for student technicians. This assumption can present challenges to evaluators trying to understand how such efforts are working. For discussion purposes, picture a generic logic model in which project strategies produce educator outcomes, those outcomes drive educator actions, and those actions in turn produce student outcomes.

Setting aside project activities acting directly on students, any strategy aimed at educators (e.g., PD workshops, faculty mentoring, or preservice teacher training) must leave them fully equipped with dispositions, knowledge, and skills necessary to implement effective instruction with students. Educators must then turn those outcomes into actions to realize similar types of outcomes for their learners. Students’ action outcomes (e.g., entering, persisting in, and completing training programs) depend, in turn, on them having the dispositions, knowledge, and skills educators are charged with furthering. If educators fail to learn what they should, or do not activate those abilities, students are less likely to succeed. So what are the implications—challenges and possible solutions—of this for NSF ATE evaluations?

  • EDUCATOR OUTCOMES ARE OFTEN NOT WELL EXPLICATED. Work with program designers to force them to define the new dispositions, understandings, and abilities that technical educators require to be effective. Facilitate discussion about all three outcome categories to lessen the chance of missing something. Press until outcomes are defined in terms of persistent changes educators will take away from project activities, not what they will do during them.
  • EDUCATORS ARE DIFFICULT TO TEST. To truly understand whether an ATE project is making a difference in instruction, it is necessary to assess whether the precursor outcomes for educators are realized. Dispositions (attitudes) are easy to assess with self-report questionnaires, but measuring real knowledge and skills requires proper assessments—ideally, performance assessments. Work with project staff to “bake” assessments into project strategies so they are more authentic and less intrusive. Strive for more than self-report measures of increased abilities.
  • INSTRUCTIONAL PRACTICES ARE DIFFICULT AND EXPENSIVE TO ASSESS. The only way to truly evaluate instruction is to see it, assessing pedagogy, content, and quality with rubrics or checklists. Consider replacing expensive on-site visits with the collection of digital videos or real-time, web-based telepresence.

With clear definitions of outcomes and collaboration with ATE project designers, evaluators can assess whether the educators who train technicians are gaining the necessary dispositions, knowledge, and skills, and whether they are putting those abilities into practice with students. Assessing students is the next challenge, but until we can determine whether educator outcomes are being achieved, we cannot honestly say that educator-improvement efforts made any difference.

Blog: Partnering with Clients to Avoid Drive-by Evaluation

Posted on November 14, 2017 in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
   
 John Cosgrove

Senior Partner, Cosgrove & Associates

 Maggie Cosgrove

Senior Partner, Cosgrove & Associates

If a prospective client says, “We need an evaluation, and we will send you the dataset for evaluation,” our advice is that this type of “drive-by evaluation” may not be in their best interest.

As calls for program accountability and data-driven decision making increase, so does demand for evaluation. Given this context, evaluation services are being offered in a variety of modes. Before choosing an evaluator, we recommend the client pause to consider what they would like to learn about their efforts and how evaluation can add value to such learning. This perspective requires one to move beyond data analysis and reporting of required performance measures to examining what is occurring inside the program.

By engaging our clients in conversations about what they would like to learn, we are able to begin a collaborative and discovery-oriented evaluation. Our goal is to partner with our clients to identify and understand strengths, challenges, and emerging opportunities related to program/project implementation and outcomes. This process helps clients understand not only which strategies worked but why they worked, and it lays the foundation for sustainability and scaling.

These initial conversations can be a bit of a dance, as clients often focus on funder-required accountability and performance measures. This is when it is critically important to elucidate the differences between evaluation and auditing or inspecting. Ann-Murray Brown examines this question and provides guidance as to why evaluation is more than just keeping score in Evaluation, Inspection, Audit: Is There a Difference? As we often remind clients, “we are not the evaluation police.”

During our work with clients to clarify logic models, we encourage them to think of their logic model in terms of storytelling. We pose commonsense questions such as: When you implement a certain strategy, what changes do you expect to occur? Why do you think those changes will take place? What do you need to learn to support current and future strategy development?

Once our client has clearly outlined their “story,” we move quickly to connect data collection to client-identified questions and, as soon as possible, we engage stakeholders in interpreting and using their data. We incorporate Veena Pankaj and Ann Emery’s (2016) data placemat process to engage clients in data interpretation.  By working with clients to fully understand their key project questions, focus on what they want to learn, and engage in meaningful data interpretation, we steer clear of the potholes associated with drive-by evaluations.

Pankaj, V. & Emery, A. (2016). Data placemats: A facilitative technique designed to enhance stakeholder understanding of data. In R. S. Fierro, A. Schwartz, & D. H. Smart (Eds.), Evaluation and Facilitation. New Directions for Evaluation, 149, 81-93.

Blog: Thinking Critically about Critical Thinking Assessment

Posted on October 31, 2017 in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Vera Beletzan
Senior Special Advisor Essential Skills
Humber College
Paula Gouveia
Dean, School of Liberal Arts and Sciences
Humber College

Humber College, as part of a learning outcomes assessment consortium funded by the Higher Education Quality Council of Ontario (HEQCO), has developed an assessment tool to measure student gains in critical thinking (CT) as expressed through written communication (WC).

In Phase 1 of this project, a cross-disciplinary team of faculty and staff researched and developed a tool to assess students’ CT skills through written coursework. The tool was tested for usability by a variety of faculty and in a variety of learning contexts. Based on this pilot, we revised the tool to focus on two CT dimensions: comprehension and integration of writer’s ideas, within which are six variables: interpretation, analysis, evaluation, inference, explanation, and self-regulation.

In Phase 2, our key questions were:

  1. What is the validity and reliability of the assessment tool?
  2. Where do students experience greater levels of CT skill achievement?
  3. Are students making gains in learning CT skills over time?
  4. What is the usability and scalability of the tool?

To answer the first question, we examined the inter-rater reliability of the tool and compared CTWC assessment scores with students’ final grades. We conducted a cross-sectional analysis by comparing diverse CT and WC learning experiences in different contexts, namely our mandatory semester I and II cross-college writing courses, where CTWC skills are taught explicitly and reinforced as course learning outcomes; vocationally oriented courses in police foundations, where the skills are implicitly embedded as deemed essential by industry; and a critical thinking course in our general arts and sciences programs, where CT is taught as content knowledge.
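For readers who want to run a similar check, here is a minimal sketch of the kind of inter-rater reliability and validity analysis described above. The rater scores, final grades, rubric scale, and choice of statistics (weighted kappa and a rank correlation) are illustrative assumptions, not details from the Humber study.

```python
# Minimal sketch: inter-rater reliability and a validity check against final grades.
# All numbers below are illustrative placeholders, not data from the Humber study.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Two raters scoring the same ten written samples on a 1-4 rubric scale
rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]

# Weighted kappa is a common choice for ordinal rubric scores
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")

# Compare averaged rubric scores with students' final course grades
avg_scores = [(a + b) / 2 for a, b in zip(rater_a, rater_b)]
final_grades = [78, 65, 88, 74, 52, 61, 80, 91, 63, 76]
rho, p = spearmanr(avg_scores, final_grades)
print(f"Spearman correlation with final grades: rho={rho:.2f}, p={p:.3f}")
```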

We also performed a longitudinal analysis by assessing CTWC gains in a cohort of students across two semesters in their mandatory writing courses.

Overall, our tests showed positive results for reliability and validity. Our cross-sectional analysis showed the greatest CT gains in courses where the skill is explicitly taught. Our longitudinal analysis showed only modest gains, indicating that a two-semester span is insufficient for significant improvement to occur.

In terms of usability, faculty agreed that the revised tool was straightforward and easy to apply. However, there was less agreement on the tool’s meaningfulness to students, indicating that further research needs to include student feedback.

Lessons learned:

  • Build faculty buy-in at the outset and recognize workload issues
  • Ensure project team members are qualified
  • For scalability, align project with other institutional priorities

Recommendations:

  • Teach CT explicitly and consistently, as a skill, and over time
  • Strategically position courses where CT is taught explicitly throughout a program for maximum reinforcement
  • Assess and provide feedback on students’ skills at regular intervals
  • Implement faculty training to build a common understanding of the importance of essential skills and their assessment
  • For the tool to be meaningful, students must understand which skills are being assessed and why

Our project will inform Humber’s new Essential Skills Strategy, which includes the development of an institutional learning outcomes framework and assessment process.

A detailed report, including our assessment tool, will be available through HEQCO in the near future. For further information, please contact the authors: vera.beletzan@humber.ca  or paula.gouveia@humber.ca

Blog: Getting Your New ATE Project’s Evaluation off to a Great Start

Posted on October 17, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

New ATE project principal investigators (PIs): When you worked with your evaluator to develop an evaluation plan for your project proposal, you were probably focused on the big picture—how to gather credible and meaningful evidence about the quality and impact of your work. To ensure your evaluation achieves its aims, take these four steps now to make sure your project provides the human resources, time, and information needed for a successful evaluation:

  1. Schedule regular meetings with your evaluator. Regular meetings help ensure that your project’s evaluation receives adequate attention. These exchanges should be in real time—via phone call, web meetings, or face-to-face—not just email. See EvaluATE’s new Communication Plan Checklist for ATE PIs and Evaluators for a list of other communication issues to discuss with your evaluator at the start of a project.
  2. Work with your evaluator to create a project evaluation calendar. This calendar should span the life of your project and include the following:
  • Due dates for National Science Foundation (NSF) annual reports: You should include your evaluation reports or at least information from the evaluation in these reports. Work backward from their due dates to determine when evaluation reports should be completed. To find out when your annual report is due, go to Research.gov, enter your NSF login information, select “Awards & Reporting,” then “Project Reports.”
  • Advisory committee meeting dates: You may want your evaluator to attend these meetings to learn more about your project and to communicate directly with committee members.
  • Project events: Activities such as workshops and outreach events present valuable opportunities to collect data directly from the individuals involved in the project. Make sure your evaluator is aware of them.
  • Due dates for new proposal submissions: If submitting to NSF again, you will need to include evidence of your current project’s intellectual merit and broader impacts. Working with your evaluator now will ensure you have compelling evidence to support a future submission.
  3. Keep track of what you’re doing and who is involved. Don’t leave these tasks to your evaluator or wait until the last minute. Taking an active—and proactive—role in documenting the project’s work will save you time and result in more accurate information. Your evaluator can then use that information when preparing their reports. Moreover, you will find it immensely useful to have good documentation at your fingertips when preparing your annual NSF report.
  • Maintain a record of project activities and products—such as conference presentations, trainings, outreach events, competitions, publications—as they are completed. Check out EvaluATE’s project vita as an example.
  • Create a participant database (or spreadsheet): Everyone who engages with your project should be listed. Record their contact information, role in the project, and pertinent demographic characteristics (such as whether a student is a first-generation college student, a veteran, or part of a group that has been historically underrepresented in STEM). You will probably find several uses for this database, such as for follow-up with participants for evaluation purposes, for outreach, and as evidence of your project’s broader impacts.
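As a starting point for the participant database described above, here is a minimal sketch of a spreadsheet layout written out as a CSV file. The column names, file name, and example row are illustrative assumptions rather than an NSF requirement; adjust them to the roles and demographic characteristics that matter for your project.

```python
# Minimal sketch of a participant-tracking spreadsheet for an ATE project.
# Column names and the example row are illustrative, not prescribed by NSF.
import csv

COLUMNS = [
    "participant_id",
    "name",
    "email",
    "project_role",              # e.g., student, faculty, industry partner
    "first_generation",          # demographic fields: adjust to your project
    "veteran",
    "underrepresented_in_stem",
    "first_activity_date",
    "activities_attended",       # running list, separated by semicolons
]

rows = [
    {
        "participant_id": "P001",
        "name": "Jane Doe",
        "email": "jane.doe@example.edu",
        "project_role": "student",
        "first_generation": "yes",
        "veteran": "no",
        "underrepresented_in_stem": "yes",
        "first_activity_date": "2017-10-12",
        "activities_attended": "orientation; fall workshop",
    },
]

with open("participants.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```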

An ounce of prevention is worth a pound of cure: Investing time up front to make sure your evaluation is on solid footing will save headaches down the road.

Blog: Sustaining Private Evaluation Practices: Overcoming Challenges by Collaborating within Our ATE Community of Practice

Posted on September 27, 2017 in Blog

President, Impact Allies

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

My name is Ben Reid. I am the founder of Impact Allies, a private evaluation firm. The focus of this post is on the business, rather than the technical, aspects of evaluation. My purpose is to present a challenge to sustaining a private evaluation practice and serving clients well, and to propose an opportunity to overcome that challenge by collaborating within our community of practice.

Challenge

Often evaluators act as one-person shows. It is important to give a principal investigator (PI) and project team a single point of contact and for that evaluator of record to have thorough knowledge of the project and its partners. However, the many jobs required of an evaluation contract simply cross too many specialties and personality types for one person to serve a client well.

Opportunity

The first opportunity is to become more professionally aware of our strengths and weaknesses. What are your skills? And equally important, where are you skill-deficient (you don’t know how to do it) and where are you performance-deficient (you have the skill but aren’t suited to the work because of anxiety, frustration, lack of enthusiasm, etc.)?

The second opportunity is to build relationships within our community of practice. Get to know other evaluators: where their strengths are unique and whom they use for ancillary services (their book of contractors). The upcoming NSF ATE PI conference is a great place to do this.

Example

My Strengths: Any evaluator can satisfactorily perform the basics – EvaluATE certainly has done a tremendous job of educating and training us. In this field, I am unique in my strengths of external communications, opportunity identification and assessment, strategic and creative thinking, and partnership development. Those skills, along with a background in education, marketing and branding, and project management, have helped me contribute broadly, which has proven useful time and again when working with small teams. Knowing clients well and having an entrepreneurial mindset allow me to do what is encouraged in NSF’s 2010 User-Friendly Handbook for Project Evaluation: “Certain evaluation activities can help meet multiple purposes, if used judiciously” (p. 119).

My Weaknesses: However, an area where I could use some outside support is graphic design and data visualization. This work, because it succinctly tells the story and successes of a project, is very important when communicating to multiple stakeholders, in published works, or for promotional purposes. Where I once performed these tasks (with much time and frustration and at a level which isn’t noteworthy), I now contract with an expert—and my clients are thereby better served.

Takeaway

“Focus on the user and all else will follow” is the number one philosophy of Google, the company that has given us so much and in turn done so well for itself. Let us also focus on our clients, serving their needs by building our businesses where we are skilled and enthusiastic and collaborating (partnering, outsourcing, or referring) within our community of practice where another professional can do a better job for them.

Blog: Using Mutual Interviewing to Gather Student Feedback

Posted on September 18, 2017 in Blog

Owner, Applied Inference

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

It’s hard to do a focus group with 12 or more students! With so many people and so little time, you know it’s going to be hard to hear from the quieter students, but that may be who you need to hear from the most! Or, maybe some of the questions are sensitive – the students with the most to lose may be the least likely to speak up. What can you do? Mutual interviewing. Here is how you do it.

This method works great for non-English speakers, as long as all participants speak the same language and your co-facilitator and one person in each group are bilingual.

Warnings: The setup is time-consuming, it is very hard to explain, pandemonium ensues, people start out confused, they are often afraid they’ll do it wrong, and it is noisy (a big room is best!).

Promises: Participants are engaged, it accommodates much larger groups than a traditional focus group, everyone participates and no one can dominate, responses are anonymous except to the one individual who heard the response, it builds community and enthusiasm, and it is an empowering process.

Preparation (the researcher):

  1. Create interview guides for four distinct topics related to a project, such as program strengths, program weaknesses, suggestions for improvement, future plans, how you learn best, or personal challenges that might interfere. The challenge is to identify the most important topics that are far enough apart that people do not give the same answer to each question.
  2. Create detailed interview guides, perhaps including probe questions the group members can use to help their interviewees get traction on the question. Each group member interviewer will need four copies of his or her group’s interview guide (one for each interviewee, plus one to fill out himself/herself).
  3. Prepare nametags with a group and ID number (to ensure confidentiality) (e.g., 3-1 would be Group 3, member 1). Make sure the groups are equal in size – with between 3 and 6 members per group. This method allows you to conduct the focus group activity with up to 24 people! The nametags help member-interviewers and member-interviewees from the designated groups find each other during the rounds.
  4. Create a brief demographic survey, and pre-fill it with Group and ID numbers to match the nametags. (The ID links the survey with interview responses gathered during the session, and this helps avoid double counting responses during analysis. You can also disaggregate interview responses by demographics.) A sketch for generating the labels and the pre-filled survey appears after this list.
  5. Set up four tables for the groups and label them clearly with group number.
  6. Provide good food and have it ready to welcome the participants as they walk in the door.
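To make steps 3 and 4 concrete, here is a minimal sketch that generates the Group-ID nametag labels and a demographic survey file pre-filled with the same IDs. The group sizes, survey fields, and file name are assumptions for illustration; swap in whatever demographics you actually collect.

```python
# Minimal sketch for steps 3 and 4: generate Group-ID nametag labels and a
# demographic survey sheet pre-filled with the same IDs.
# Four groups of five members are assumed here for illustration.
import csv

N_GROUPS = 4
MEMBERS_PER_GROUP = 5  # keep groups equal in size, between 3 and 6 members

labels = [f"{g}-{m}" for g in range(1, N_GROUPS + 1)
          for m in range(1, MEMBERS_PER_GROUP + 1)]
print("Nametag labels:", ", ".join(labels))

# Pre-fill the survey so responses link back to interview sheets by Group-ID
with open("demographic_survey_prefill.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["group", "member_id", "age_range", "gender", "program"])
    for label in labels:
        group, member = label.split("-")
        writer.writerow([group, member, "", "", ""])
```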

During the Session:

At the time of the focus group, help people to arrange themselves into 4 equal-sized groups of between 3 and 6 members. Assign one of the topics to each group. Group members will research their topic by interviewing someone from each of the other groups (and being interviewed in return by their counterparts). After completing all rounds of interviews, they come back to their own group and answer the questions themselves. Then they discuss what they heard with each other, and report it out. The report-out gives other people the opportunity to add thoughts or clarify their views.

  1. As participants arrive, give them a nametag with Group and ID number and have them fill out a brief demographic survey (which you have pre-filled with their group/id number). The group number will indicate the table where they should sit.
  2. Explain the participant roles: Group Leader, Interviewer, and Interviewee.
    1. One person in each group is recruited to serve as a group leader and note taker during the session. This person will also participate in a debrief afterward to provide insights or tidbits gleaned during their group’s discussion.
    2. The group leader will brief their group members on their topic and review the interview guide.
    3. Interviewer/Interviewee: During each round, each member is paired with one person from another group, and they take turns interviewing each other about their group’s topic.
  3. Give members 5 minutes to review the interview guide and answer the questions on their own. They can also discuss with others in their group.
  4. After 5 minutes, pair each member up with a partner from another group for the interview (i.e., Group 1 with Group 2, Group 3 with Group 4); the full three-round pairing schedule is sketched after this list. The members should mark a fresh interview sheet with their interviewee’s Group-ID number, and then they take turns interviewing each other. Make sure they take notes during the interview. Give the first interviewer 5 minutes, then switch roles, and repeat the process.
  5. Rotate and pair each member with someone from a different group (Group 1 and 3, Group 2 and 4) and repeat the interviews using a fresh interview sheet, marked with the new interviewee’s Group-ID number. Again, each member will interview the other for five minutes.
  6. Finally, rotate again and pair up members from Groups 1 and 4 and Groups 2 and 3 for the final round. Mark the third clean interview sheet with each interviewee’s Group-ID number and interview each other for five minutes.
  7. Once all pairings are finished, members return to their original groups. Each member takes 5 minutes to complete or revise their own interview form, possibly enriched by the perspectives of 3 other people.
  8. The Group Leader facilitates a 15-minute discussion, during which participants compare notes and prepare a flip chart to report out their findings. The Group Leader should take notes during the discussion. (Tip: Sometimes it’s helpful to provide guiding questions for the report-out.)
  9. Each group then has about five minutes to report the compiled findings. (Tip: During the reports, have some questions prepared to further spark conversation).
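The rotation in steps 4 through 6 is a simple round-robin of the four groups. Here is a minimal sketch that prints the full pairing schedule, assuming equal-sized groups and the Group-ID labels from the preparation steps; pairing member k with member k is an illustrative convention, since any within-group matching works as long as everyone has a partner each round.

```python
# Minimal sketch of the three-round pairing schedule from steps 4-6:
# round 1 pairs Groups 1-2 and 3-4, round 2 pairs 1-3 and 2-4, round 3 pairs 1-4 and 2-3.
# Matching member k with member k is an illustrative assumption.

ROUNDS = [((1, 2), (3, 4)),   # round 1
          ((1, 3), (2, 4)),   # round 2
          ((1, 4), (2, 3))]   # round 3

MEMBERS_PER_GROUP = 5  # adjust to your actual (equal) group size, 3 to 6

for round_no, group_pairs in enumerate(ROUNDS, start=1):
    print(f"Round {round_no}:")
    for g1, g2 in group_pairs:
        for m in range(1, MEMBERS_PER_GROUP + 1):
            print(f"  {g1}-{m} interviews {g2}-{m}, then they switch roles")
```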

After the Session:

  1. Hold a brief (10-15 minute) meeting with the Group Leaders and have them talk about the process, insights, or tidbits that did not make it to the flip chart and provide any additional feedback to the researcher.

Results of the process:

You will now have demographic data from the surveys, notes from the individual interview sheets, the group leaders’ combined notes, and the flip charts of combined responses to each question.

To learn more about mutual interviewing, see pages 51-55 of Research Toolkit for Program Evaluation and Needs Assessments Summary of Best Practices.

Vlog: Resources to Help with Evaluation Planning for ATE Proposals

Posted on September 6, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation is an important element of an ATE proposal. EvaluATE has developed several resources to help you develop your evaluation plan and integrate it into your ATE proposal. This video highlights a few of them—these and more can be accessed from the links below the video.

Additional Resources:

Blog: Not Just an Anecdote: Systematic Analysis of Qualitative Evaluation Data

Posted on August 30, 2017 in Blog

President and Founder, Creative Research & Evaluation LLC (CR&E)

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As a Ph.D.-trained anthropologist, I spent many years learning how to shape individual stories and detailed observations into larger patterns that help us understand social and cultural aspects of human life. Thus, I was taken aback when I realized that program staff or program officers often initially think of qualitative evaluation as “just anecdotal.” Even people who want “stories” in their evaluation reports can be surprised at what is revealed through a systematic analysis of qualitative data.

Here are a few tips that can help lead to credible findings using qualitative data.  Examples are drawn from my experience evaluating ATE programs.

  • Organize your materials so that you can report which experiences are shared among program participants and which perceptions are unusual or unique. This may sound simple, but it takes forethought and time to provide a clear picture of the overall range and variation of participant perceptions. For example, in analyzing two focus group discussions held with the first cohort of students in an ATE program, I looked at each transcript separately to identify the program successes and challenges raised in each focus group. Comparing major themes raised by each group, I was confident when I reported that students in the program felt well prepared, although somewhat nervous about upcoming internships. On the other hand, although there were multiple joking comments about unsatisfactory classroom dynamics, I knew these were all made by one person and not taken seriously by other participants because I had assigned each participant a label and used those labels in the focus group transcripts (a simple tallying sketch follows this list).
  • Use several qualitative data sources to provide strength to a complex conclusion. In technical terms, this is called “triangulation.” Two common methods of triangulation are comparing information collected from people with different roles in a program and comparing what people say with what they are observed doing. In some cases, data sources converge and in some cases they diverge. In collecting early information about an ATE program, I learned how important this program is to industry stakeholders. In this situation, there was such a need for entry-level technicians that stakeholders, students, and program staff all mentioned ways that immediate job openings might have a short-term priority over continuing immediately into advanced levels in the same program.
  • Think about qualitative and quantitative data in relation to each other. Student records and participant perceptions show different things and can inform each other. For example, instructors from industry may report a cohort of students as being highly motivated and uniformly successful at the same time that institutional records show a small number of less successful students. Both pieces of the picture are important here for assessing a project’s success; one shows a high level of industry enthusiasm, while the other can provide exact percentages about participant success.
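The participant labels mentioned in the first tip lend themselves to a quick tally that shows whether a theme is shared across participants or comes from a single voice. Here is a minimal sketch, assuming comments have already been coded with a theme and a speaker label; the themes and labels are illustrative, not drawn from an actual transcript.

```python
# Minimal sketch: tally coded focus group comments by theme and by speaker label
# to see whether a theme is shared or raised repeatedly by one person.
# Themes and speaker labels below are illustrative placeholders.
from collections import defaultdict

coded_comments = [
    ("felt prepared for internship", "S1"),
    ("felt prepared for internship", "S3"),
    ("felt prepared for internship", "S5"),
    ("nervous about internship", "S2"),
    ("nervous about internship", "S4"),
    ("unsatisfactory classroom dynamics", "S6"),
    ("unsatisfactory classroom dynamics", "S6"),
    ("unsatisfactory classroom dynamics", "S6"),
]

speakers_by_theme = defaultdict(set)
mentions_by_theme = defaultdict(int)
for theme, speaker in coded_comments:
    speakers_by_theme[theme].add(speaker)
    mentions_by_theme[theme] += 1

for theme, mentions in mentions_by_theme.items():
    print(f"{theme}: {mentions} mentions from {len(speakers_by_theme[theme])} participant(s)")
```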

Additional Resources

The following two sources are updated classics in the fields of qualitative research and evaluation.

Miles, M. B., Huberman, A. M., & Saldana, J. (2014). Qualitative data analysis: A methods sourcebook. Thousand Oaks, CA: Sage.

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice: The definitive text of qualitative inquiry frameworks and options (4th ed.). Thousand Oaks, CA: Sage.

Blog: Reporting Anticipated, Questionable, and Unintended Project Outcomes

Posted on August 16, 2017 in Blog

Education Administrator, Independent

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Project evaluators are aware that evaluation aims to support learning and improvement. Through a series of planned interactions, event observations, and document reviews, the evaluator is charged with reporting to the project leadership team and ultimately the project’s funding agency, informing audiences of the project’s merit. This is not to suggest that reporting should only aim to identify positive impacts and outcomes of the project. Equally, there is substantive value in informing audiences of unintended and unattained project outcomes.

Evaluation reporting should discuss the project’s outcomes, whether anticipated, questionable, or unintended. When examining project outcomes, the evaluator analyzes the information obtained and guides project leadership through reflective thinking exercises to define the significance of the project and summarize why the outcomes matter.

Let’s be clear: outcomes are not to be regarded as something negative. In fact, with the projects that I have evaluated over the years, outcomes have frequently served as an introspective platform informing future curriculum decisions and directions internal to the institutional funding recipient. For example, the outcomes of one STEM project that focused on renewable energy technicians provided the institution with information that prompted the development of subsequent proposals and projects targeting engineering pathways.

Discussion and reporting of project outcomes also encapsulates lessons learned and affords the opportunity for the evaluator to ask questions such as:

  • Did the project increase the presence of the target group in identified STEM programs?
  • What initiatives will be sustained during post funding to maintain an increased presence of the target group in STEM programs?
  • Did project activities contribute to the retention/completion rates of the target group in identified STEM programs?
  • Which activities seemed to have the greatest/least impact on retention/completion rates?
  • On reflection, are there activities that could have more significantly contributed to retention/completion rates that were not implemented as part of the project?
  • To what extent did the project supply regional industries with a more diverse STEM workforce?
  • What effect will this have on regional industries during post project funding?
  • Were partners identified in the proposal realistic contributors to the funded project? Did they ensure a successful implementation enabling the attainment of anticipated outcomes?
  • What was learned about the characteristics of “good” and “bad” partners?
  • What are characteristics to look for and avoid to maximize productivity with future work?

Factors influencing outcomes include, but are not limited to:

  • Institutional changes, e.g., leadership;
  • Partner constraints or changes; and
  • Project/budgetary limitations.

It is not unusual for a proposed project to be somewhat grandiose in identifying intended outcomes. Yet, when project implementation gets underway, intended activities may be compromised by external challenges. For example, when equipment is needed to support various aspects of a project, procurement and production channels may contribute to delays in equipment acquisition, thus adversely affecting project leadership’s ability to launch planned components of the project.

As a tip, it is worthwhile for those seeking funding to pose the outcome questions at the front-end of the project – when the proposal is being developed. Doing this will assist them in conceptualizing the intellectual merit and impact of the proposed project.

Resources and Links:

Developing an Effective Evaluation Report: Setting the Course for Effective Program Evaluation. Atlanta, Georgia: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, Division of Nutrition, Physical Activity and Obesity, 2013.