Newsletter - Real Questions | Real Answers

Newsletter: What’s the Difference Between Outputs, Outcomes, and Impacts?

Posted on July 1, 2016

Director of Research, The Evaluation Center at Western Michigan University


A common source of confusion among individuals who are learning about logic models is the difference between outputs, outcomes, and impacts. While most people generally understand that project activities are the things that a project does, the other terms may be less straightforward.

Outputs are the tangible products of project activities. I think of outputs as things whose existence can be observed directly, such as websites, videos, curricula, labs, tools, software, training materials, journal articles, and books. They tend to be the things that remain after a project ends or goes away.

Outcomes are the changes brought about through project activities and outputs/products. Outcomes may include changes in individual knowledge, skills, attitudes, awareness, or behaviors; organizational practices; and broader social/economic conditions. In her blog post “Outputs are for programs, outcomes are for people,” Sheila Robinson offers this guidance: “OUTCOMES are changes in program participants or recipients (aka the target population). They can be identified by answering the question: How will program participants change as a result of their participation in the program?” This is a great way to check whether your logic model elements are located in the right place. If the outcomes in your logic model include things that don’t sound like an appropriate answer to that question, you may need to move things around.

The term impact is usually used to refer to outcomes that are especially large in scope or the ultimate outcomes a project is seeking to bring about. Sometimes the terms impacts and long-term outcomes are used interchangeably.

For example, one of EvaluATE’s main activities is webinars. Outputs of these webinars include resource materials, presentation slides, and recordings. Short-term outcomes for webinar participants are expected to include increased knowledge of evaluation. Mid-term outcomes include modifications or changes in their evaluation practice. Long-term outcomes are improved quality and utility of ATE project evaluations. The ultimate intended impact is for ATE projects to achieve better outcomes through strategic use of high-quality evaluations.

Keep in mind that not all logic models use these specific terms, and not everyone adheres to these particular definitions. That’s OK! The important thing when developing a logic model is to understand what YOU mean by these terms and to apply them consistently in your model and elsewhere. And regardless of how you define them, each column in your model should present new information, not a reiteration of something already communicated.

Newsletter: Where and how should I report on my evaluation in my annual report to the National Science Foundation?

Posted on April 1, 2016

Director of Research, The Evaluation Center at Western Michigan University

The online reporting system used by all National Science Foundation grantees is designed to accommodate reporting on all types of work supported by NSF—from research on changes in ocean chemistry to developing technician education programs. Not all NSF programs require grantees to conduct project-level evaluations, so the system does not have a specific section for reporting evaluation results. This may leave some ATE grantees wondering where and how they are supposed to include information from their evaluations in their annual reports. There is no one right way to do this, but here is my advice:

Upload your external evaluation report as a supporting file in the Accomplishments section of the system. If the main body of this report exceeds 25 pages, be sure that it includes a 1-3 page executive summary that highlights key findings and conclusions. Although NSF program officers are very interested in project evaluation results, they simply do not have time to read lengthy detailed reports for all the grants they oversee.

Highlight key findings from your evaluation in the Activities, Outcomes, and Impacts sections of your annual report, as appropriate. For example, if you have data on the number and type of individuals served through your grant activities and their satisfaction with that experience, include some of these findings or related conclusions as you report on your activities. If you have data on changes brought about by your grant work at the individual, organizational, or community levels, summarize that evidence in your Outcomes or Impacts sections.

The Impacts section of the annual report is for describing how projects

  • developed human resources by providing opportunities for research, teaching, and mentoring
  • improved capacity of underrepresented groups to engage in STEM research, teaching, and learning
  • provided STEM experiences to teachers, youth, and the public
  • enhanced the knowledge base of the project’s principal discipline or other disciplines
  • expanded physical (labs, instrumentation, etc.) or institutional resources to increase capacity for STEM research, teaching, and learning.

Many—not all—of these types of impacts are relevant to the projects and centers supported by the ATE program, which is focused on improving the quality and quantity of technicians in the workforce. It is appropriate to indicate “not applicable” if you don’t have results that align with these categories. If you happen to have other types of results that don’t match these categories, report them in the Outcomes section of the reporting system.

Refer to the uploaded evaluation report for additional information. Each section in the reporting system has an 8,000 character limit, so it’s unlikely you can include detailed evaluation results. (To put that in perspective, this article has 3,515 characters.) Instead, convey key findings or conclusions in your annual report and refer to the uploaded evaluation report for details and additional information.
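
As a rough illustration of working within that constraint, the sketch below counts characters in draft sections before they are pasted into the system. The section names and draft text are hypothetical, and this is not tied to NSF’s actual system in any way:

```python
# Per-section character limit in NSF's annual reporting system (per the text above).
SECTION_LIMIT = 8000

def characters_remaining(sections):
    """Return how many characters each draft section has left before hitting the limit."""
    return {name: SECTION_LIMIT - len(text) for name, text in sections.items()}

# Hypothetical draft sections, for illustration only.
drafts = {
    "Accomplishments": "We hosted 12 webinars reaching 450 participants...",
    "Impacts": "Participants reported improved evaluation practice...",
}
remaining = characters_remaining(drafts)
```

A negative value in `remaining` would signal a section that needs trimming before submission.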

Finally, if the evaluation revealed problems with the project that point to a need to change how it is being implemented, include that information in the Changes/Problems section of the report. One reason that evaluation is required for all ATE projects is to support continuous improvement. If the evaluation reveals something is not working as well as expected, it’s best to be transparent about the problem and how it is being addressed.

Newsletter: How can PIs demonstrate that their projects have “advanced knowledge”?

Posted on January 1, 2016

Director of Research, The Evaluation Center at Western Michigan University

NSF’s Intellectual Merit criterion is about advancing knowledge and understanding within a given field or across fields. Publication in peer-reviewed journals provides strong evidence of the Intellectual Merit of completed work. It is an indication that the information generated by a project is important and novel. The peer review process ensures that articles meet a journal’s standard of quality, as determined by a panel of reviewers who are subject matter experts.

In addition, publishing in an academic journal is the best way of ensuring that the new knowledge you have generated is available to others, becomes part of a shared scientific knowledge base, and is sustained over time. Websites and digital libraries tend to come and go with staff and funding changes. Journals are archived by libraries worldwide and, importantly, indexed to enable searches using standard search terms and logic. Even if a journal is discontinued, its articles remain available through libraries. Conference presentations are important dissemination vehicles, but don’t have the staying power of publishing. Some conferences publish presented papers in conference proceedings documents, which helps with long-term accessibility of information presented at these events.

The peer review process that journals employ to determine if they should publish a given manuscript is essentially an evaluative process. A small group of reviewers assesses the manuscript against criteria established for the journal. If the manuscript is accepted for publication, it met the specified quality threshold. Therefore, it is not necessary for the quality of published articles produced by ATE projects to be separately evaluated as part of the project’s external evaluation. However, it may be worthwhile to investigate the influence of published works, such as through citation analysis (i.e., determining the impact of a published article based on the number of times it has been cited).

Journals focused on two-year colleges and technical education are good outlets for ATE-related publications. Examples include Community College Enterprise, Community College Research Journal, Community College Review, Journal of Applied Research in the Community College, New Directions for Community Colleges, Career and Technical Education Research, Journal of Career and Technical Education, and Journal of Education and Work. (For more options, see the list of journals maintained by the Center on Education and Work (CEW) at the University of Wisconsin.)

NSF’s Intellectual Merit criterion is about contributing to collective knowledge. For example, if a project develops embedded math modules for inclusion in an electrical engineering e-book, students may improve their understanding of math concepts and how they relate to a technical task—and that is certainly important given the goals of the ATE program. However, if the project does not share what was learned about developing, implementing, and evaluating such modules and present evidence of their effectiveness so that others may learn from and build on those advances, the project hasn’t advanced disciplinary knowledge and understanding.

If you are interested in preparing a journal manuscript to disseminate knowledge generated by your project, first look at the types of articles being published in your field (check out CEW’s list of journals referenced above). You will get an idea of what is involved and how articles are typically structured. Publishing can become an important part of a PI’s professional development, as well as a project’s overall effort to disseminate results and advance knowledge.

Newsletter: What should I do if my college procurement office won’t let me name an evaluator in our proposal?

Posted on July 1, 2015

Director of Research, The Evaluation Center at Western Michigan University

It is generally considered best practice to identify your intended external evaluator by name in an ATE proposal and work with him or her to write the evaluation section. In some cases, college procurement policies may be at odds with this long-standing practice (e.g., see Jacqueline Rearick’s blog post on this topic). If you have to proceed with evaluation planning without the benefit of an external evaluator’s involvement, here are some tips for DIY (do-it-yourself) evaluation planning:

Develop a project logic model that specifies your project’s activities, outputs (products), and outcomes. Yes, you can do this! The task of logic model development often falls to an evaluator, although it’s really just project planning, and a logic model provides a great foundation for framing your evaluation plan. Try out our ATE Logic Model Template.

Specify the focus of the evaluation by formulating evaluation questions. These should be clearly tied to what is in the logic model. Here are some generic evaluation questions: How well did the project reach and engage its intended audience? How satisfied are participants with the project’s activities and products? To what extent did the project bring about changes in participants’ knowledge, skills, attitudes, and/or behaviors? How well did the project meet the needs it was designed to address? How sustainable is the project? Ask questions about both the project’s implementation and outcomes and avoid questions that can be answered with a yes/no or single number.

Describe the data collection plan. Identify the data and data sources that will be used to answer each of the evaluation questions. Keep in mind that most evaluation questions will need multiple sources of evidence to be answered adequately. Using both qualitative and quantitative data will strengthen your evidence base. Use our Data Collection Planning Matrix to work out the details of your plan (see the Data Collection Planning Matrix on p. 3).

Describe the analytical and interpretive procedures to be used for making sense of the evaluation data. For DIY evaluation plans, keep it simple. In fact, most project evaluations (not including research projects) rely mainly on basic descriptive statistics (e.g., percentages, means, aggregate numbers) for analysis. As appropriate, compare data over time, by site, by audience type, and/or against performance targets to aid in interpretation.
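
As a sketch of what such basic descriptive analysis might look like in practice, the example below computes a mean and the percentage of responses meeting a performance target, then compares two project years. All ratings here are invented for illustration, not real evaluation data:

```python
# Hypothetical participant ratings (1-5 scale) from two project years;
# these numbers are invented for illustration only.
ratings_year1 = [4, 5, 3, 4, 5, 4, 4]
ratings_year2 = [5, 4, 5, 5, 4, 5, 3]

def summarize(ratings, target=4):
    """Basic descriptive statistics: count, mean, and percent at or above a target."""
    mean = sum(ratings) / len(ratings)
    pct_at_target = 100 * sum(r >= target for r in ratings) / len(ratings)
    return {"n": len(ratings), "mean": round(mean, 2), "pct_at_target": round(pct_at_target, 1)}

# Compare data over time against a performance target of 4 out of 5.
year1 = summarize(ratings_year1)
year2 = summarize(ratings_year2)
```

Even simple summaries like these, compared by year, site, or audience type, usually cover the analytic needs of a project evaluation.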

Identify the main evaluation deliverables. These are the things the evaluation effort specifically (not the overall project) will produce. Typical deliverables include a detailed evaluation plan (i.e., an expanded version of the plan included in the proposal that is developed after the project is funded), data collection instruments, and evaluation reports. NSF also wants to see how the project will use the evaluation findings, conclusions, and recommendations to inform and improve ongoing project work.

Include references to the evaluation literature. At minimum, consult and reference the NSF User Friendly Handbook for Project Evaluation and the Program Evaluation Standards.

Include a line item in your budget for evaluation. The average allocation among ATE projects for evaluation is 7 percent (see Survey Says on p. 1).

Finally, if you’re including a DIY evaluation plan in your proposal, specify the policy prohibiting you from identifying and working with a particular evaluator at the proposal stage. Make it absolutely clear to reviewers why you have not engaged an external evaluator and what steps you will take to procure one once an award is made.

Newsletter: How is an NSF Project Outcomes Report Different from a Final Annual Report?

Posted on April 1, 2015

Director of Research, The Evaluation Center at Western Michigan University

All NSF projects awarded in January 2010 or later are required to submit a project outcomes report within 90 days of the grant’s expiration, along with a final annual report. Aside from length (a project outcomes report is a few paragraphs, 200-800 words, while annual reports are typically several pages long), there are three other ways a project outcomes report is distinct from a final annual report.

1. A project outcomes report is solely about outcomes. A final annual report addresses many other topics. Project outcomes reports should describe what a project developed and the changes it brought about with regard to advancing knowledge (intellectual merit) and contributing to desired social outcomes (broader impacts). The focus should be on products and results, not project implementation. Publications are important evidence of intellectual merit, and a list of publications will be generated automatically from the project’s annual reports. Other products generated with grant funds should be listed, such as data sets, software, or educational materials. If these products are available online, links may be provided.1 An accounting of grant products demonstrates a project’s productivity and intellectual merit. To address the project’s broader impacts, reports should highlight achievements in areas such as increasing participation in STEM by underrepresented minorities, improving teaching and learning, and developing the technical workforce.

2. A project outcomes report provides a “complete picture of the results” of a project.2 A final annual report covers the last year of the project only. A project outcomes report is not a progress report. It is the final word on what a project achieved and produced. PIs should think carefully about how they want their work to be portrayed to the public for decades to come and craft their reports accordingly. Dr. Joan Strassman of Washington University provides this cogent advice about crafting outcomes reports:

[A project outcomes report] is where someone … can go to see where NSF is spending its tax dollars. This document is not the plan, not the hopes, but the actual outcomes, so this potential reader can get direct information on what the researcher says she did. It pulls up along with the original funding abstracts, so see to it they coordinate as much as possible. Work hard to be clear, accurate, and compelling.

3. A project outcomes report is a public document.3 A final annual report goes to the project’s NSF program officer only. A big difference between these audiences is that a project’s program officer probably has expertise in the project’s content area and is certainly familiar with the overall aims of the program through which the project was funded. For the benefit of lay readers, project outcomes report authors should use plain language to ensure comprehension by the general public. Authors may check the report’s readability by having a colleague from outside the project’s content area review it. It’s important to include complete yet succinct documentation that is readily understandable to individuals outside the project’s content area.

1 ATE grants awarded in 2014 or later are required to archive their materials with ATE Central.

2 NSF’s guidelines regarding project outcomes reports are available on the NSF website.

3 To access ATE project outcomes reports: (1) go to NSF’s Award Search site; (2) enter “ATE” in the keyword box; (3) check the box for “Show Only Awards with Project Outcomes Reports.”

Newsletter: Can the ATE Survey Data be Used for Benchmarking?

Posted on January 1, 2015

Director of Research, The Evaluation Center at Western Michigan University

Benchmarking is a process for comparing your organization’s activities and achievements with those of other organizations. In the business world, benchmarking emphasizes measuring one’s performance against organizations “known to be leaders in one or more aspects of their operations.” In education contexts, benchmarking tends to be more about comparing an institution’s performance with that of its peer institutions. This may be done using data from sources such as the National Community College Benchmark Project and the National Survey of Student Engagement,1 which provide data on community colleges and four-year institutions, respectively. In short, benchmarking can be used to assess organizational performance against what is typical or exceptional, depending on your needs.

The ATE survey, conducted annually since 2000, provides aggregate information about ATE-funded projects and centers. The survey data may be used for comparing your individual project or center against the program as a whole. Such a comparison could be used to make a case for addressing a continuing need within the ATE program or to demonstrate your grant’s performance in relation to the program overall. For example, one concern throughout the ATE program and NSF is the participation of women and underrepresented minorities. Based on the 2014 survey of ATE grantees, we know that

  • 42 percent of students served by the ATE program are from minority groups that are underrepresented in STEM; in comparison, individuals from these minority groups make up 31 percent of the U.S. population.
  • 25 percent of students in ATE are women, compared with 51 percent of the population; only in biotechnology does the percentage of women reflect that of the U.S. population.
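
To illustrate how program-level figures like these can serve as benchmarks, the sketch below computes percentage-point gaps between a project and two reference points. The program and population percentages come from the bullets above; the project’s own figures are invented for illustration:

```python
# Program-level benchmarks quoted above (2014 ATE survey) and U.S. population figures.
ate_program = {"underrepresented_minority_pct": 42, "women_pct": 25}
us_population = {"underrepresented_minority_pct": 31, "women_pct": 51}

# A hypothetical project's own figures, invented for illustration only.
my_project = {"underrepresented_minority_pct": 35, "women_pct": 30}

def gaps(project, reference):
    """Percentage-point difference between a project and a reference, per indicator."""
    return {k: project[k] - reference[k] for k in project}

vs_program = gaps(my_project, ate_program)        # women_pct: +5 points above the ATE program
vs_population = gaps(my_project, us_population)   # women_pct: -21 points below the U.S. population
```

A positive gap against the program benchmark paired with a negative gap against the population, as in this hypothetical case, would suggest the project is ahead of its peers but still short of proportional representation.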

These and other demographic data may be used to help your project or center assess how it’s doing with regard to broadening participation in comparison with the ATE program as a whole or within your discipline. Similarly, information about ATE project and center practices may provide insights into grant operations. Results from the 2014 ATE survey indicate that

  • 85 percent of projects and centers collaborated with business and industry; of those, 63 percent obtained information about workforce needs from their collaborators.
  • 90 percent of ATE grantees have engaged an evaluator; most evaluators (84%) are external to both the institution and the grant.

Check out our ATE survey fact sheets and data snapshots to identify data points that you can use to assess your performance against other ATE projects and centers. If you would like a tailored snapshot report to assist your project or center with benchmarking against the ATE program, contact us; we can also demonstrate how to compare grant-level, program-level, and national-level data.

Keep in mind that the ATE program should not be used as a proxy for all technician education in the U.S. See Corey Smith’s article on page 3 for a list of other sources of secondary data that may be of use for planning, evaluation, and benchmarking.

1Both entities restrict data access to institutional members.

Newsletter: What is the best way to coordinate internal and external evaluation?

Posted on October 1, 2014

Director of Research, The Evaluation Center at Western Michigan University

All ATE projects are required to allocate funds for external evaluation services. So, when it comes to internal and external evaluation, the one certain thing is that you must have an external evaluator (in rare cases, alternative arrangements may be approved by a program officer). On the annual ATE survey, we distinguish between two types of external evaluators: (1) completely external to the institution and (2) external to the project, but internal to the institution (such as an institutional researcher or a faculty member from a department other than the one where the project is located). Both are considered external, as long as the Type 2 evaluator is truly independent of the project. An internal evaluator is a member of the project staff who is directly funded by the project, such as a project manager. More commonly, internal evaluation is a shared responsibility among team members.

There are many options for coordinating internal and external evaluation functions. Over the years, I have noted four basic approaches:

(1) External Evaluator as Coach: The external evaluator provides guidance and feedback to the internal project team throughout the life of the grant. This is a good approach when there is already some evaluation competence among team members. The external evaluator’s involvement enhances the credibility of the evaluation and helps the team continue to build their evaluation knowledge and skills.

(2) External Evaluator as Heavy-Lifter: The external evaluator takes the lead in planning the evaluation, designing instruments, analyzing results, and writing reports. The internal team mainly gathers data and provides it to the external evaluator for processing. In this approach, the external evaluator should provide clear-cut data collection protocols to ensure systematic collection and handling of data by the internal team before they turn the information over to the external evaluator.

(3) External Evaluator as Architect: The external evaluator designs the overall evaluation and develops data collection instruments. The project team executes the plan, with technical assistance from the external evaluator as needed—particularly at critical junctures in the evaluation such as analysis and reporting. With this approach, it is important to front-load the evaluation budget in the first year of the project to allow for intensive involvement by the external evaluator.

(4) Divide-and-Conquer: The internal team is responsible for evaluating project implementation and immediate results. The external evaluator handles the evaluation of longer-term outcomes. This is the approach that EvaluATE uses. We carefully track and analyze data related to our reach and audience engagement and are responsible for assessing immediate outcomes of our webinars and workshops (i.e., participants’ satisfaction, self-reported learning, and intent to use content). Our external evaluator is responsible for determining and assessing the impact of our work in terms of application of our content and changes in evaluation practice.

Taking on part of an evaluation internally is often seen as a means of conserving project resources, and it can have that effect. But do not make the mistake of thinking internal evaluation is cost-free. At minimum, it takes time, which is sometimes a rarer commodity than money. In short, there is no one best way to coordinate internal and external evaluation. Your approach should make sense for your project in light of available resources (including staff time and expertise) and what you need your evaluation to do for your project.

Newsletter: What’s in the New ATE Program Solicitation with Regard to Evaluation?

Posted on July 1, 2014

Director of Research, The Evaluation Center at Western Michigan University

The evaluation requirements and expectations expressed in the new ATE program solicitation are generally consistent with those that were in the prior version. However, there are two important changes that relate specifically to ATE centers:

First, the solicitation states that proposals for center renewals “may submit up to five pages on Results of Prior Support in the supplemental documents section and refer the reader to that section in the Project Description section.”  The requirement that all proposals must begin with a subsection titled Results of Prior Support has not changed. What is new is the option—for Centers only—of describing results of prior support in a supplementary document, allowing proposers to devote more of their 15-page project descriptions to what they intend to do, rather than what they have accomplished in the past. Whether embedded in the project description or appended as a supplementary document, this section should identify the prior grant’s outcomes and impacts, supported with evidence from the evaluation. Reviewers will be looking for strong evidence that NSF made a good investment in the center and that a renewal grant is warranted given the center’s track record.

Second, the new solicitation calls for national center proposals to include evaluation plans that describe how impacts on institutions, faculty, students, and industry will be assessed. This is a more specific expectation for the evaluation than in the previous solicitation, which called for evaluations to provide evidence of impacts relating to a center’s disciplinary focus. Thus, proposals for national centers should describe the intended impacts at each of these levels (institutions, faculty, students, industry) and the evaluation plan should explain what data will be used to determine the quality and magnitude of those impacts.

Although not directly related to evaluation, other notable changes in the 2014 solicitation include the following:

  • There is a new track for ATE projects called “ATE Coordination Networks.”
  • The Targeted Research track has been expanded.
  • Resource Centers have been renamed Support Centers.
  • All grantees are required to work with ATE Central to archive materials developed with grant funds to ensure they remain available to the public after funding ends.

The archiving requirement relates directly to the data management plans that are required with all NSF proposals. To learn more about DMPs and how to develop yours, check out the article on page 3 of this newsletter.

Also, note the submission deadline is earlier this year—October 9!  To learn more about developing an evaluation plan to include in your ATE proposal,  join our webinars on August 20 and 26 (see page 4).

Check out the new solicitation on the NSF website.


Newsletter: What do you do when your evaluator disagrees with a recommendation by your program officer?

Posted on April 1, 2014

Director of Research, The Evaluation Center at Western Michigan University

This was a question submitted anonymously to EvaluATE by an ATE principal investigator (PI), so I do not know the specific nature of the recommendation in question. Therefore, my response isn’t about the substance of the recommendation, but about the interpersonal and political dynamics of the situation.

Let’s put the various players’ roles into perspective:
As PI, you are ultimately responsible for your project—delivering what you outlined in your grant proposal/negotiations and making decisions about how best to conduct the project based on your experience, expertise, and input from various advisors. You are in the position of authority when it comes to how your project is implemented and which recommendations from which sources to adopt to ensure the success of your project.

Your NSF program officer (PO) monitors your project, primarily based on information you provide in your annual report. Your PO may provide extremely valuable guidance and advice, but the PO’s role is to comment on your project as described in the report. You are not obligated to accept the advice. However, the PO does approve the report, based on his or her assessment of whether the project is sufficiently meeting the expectations of the grant. If you choose not to accept your program officer’s recommendations—which is completely acceptable—you should provide a clear rationale for your decision in a respectful and diplomatic way, addressing each of the issues raised. Such a response should be documented, for example in your annual report and/or a response to the evaluation report.

Your evaluator is a consultant you hired to provide a service to your project in exchange for compensation. You are not obligated to accept this person’s recommendations, either. However, you should give your evaluator’s recommendations—especially those based on evidence—careful consideration and explain why you believe they are or are not appropriate for your project. An evaluator should never “ding” your project for not implementing the evaluation recommendations.

If you are really not sure who is right and neither person’s position (the PO’s recommendation or the evaluator’s disagreement with it) especially resonates with you and your understanding of what your project needs, you should seek additional information. If you have an advisory panel, this is exactly the type of tricky situation they can help with. If you don’t, you might consult an experienced person at your institution or another ATE project or center PI. Whichever way you go, you should be able to provide a clear rationale for your position and communicate it to both parties. This is not a popularity contest between your evaluator and your program officer. This is about making the right decisions for your project.

Newsletter: What evaluation models do you recommend for ATE evaluations?

Posted on January 1, 2014

Director of Research, The Evaluation Center at Western Michigan University

Evaluators in any context should have working knowledge of multiple evaluation models. Models provide conceptual frameworks for determining the types of questions to be addressed, which stakeholders should be involved and how, the kinds of evidence needed, and other important considerations for an evaluation. However, evaluation practitioners rarely adhere strictly to any one model (Christie, 2003). Rather, they draw on them selectively. Below are a few popular models:

EvaluATE has previously highlighted the Kirkpatrick Model, developed by Donald Kirkpatrick for evaluating training effectiveness in business contexts. It provides a useful framework for focusing an evaluation of any type of professional development activity. It calls for evaluating a training intervention on four levels of impact (reaction, learning, behavior, and high-level results). A limitation is that it does not direct evaluators to consider whether the right audiences were reached or to assess the quality of an intervention’s content and implementation—only its effects.

Etienne Wenger reconceptualized the Kirkpatrick “levels” for evaluating value creation in communities of practice. He provides useful suggestions for the types of evidence that could be gathered for evaluating community of practice impacts at multiple levels. However, the emphasis on identifying types of “value” could lead those using this approach to overlook evidence of harm and/or overestimate net benefits.

Three models that figure prominently in most formal evaluation training programs are Daniel Stufflebeam’s CIPP Model, Michael Scriven’s Key Evaluation Checklist, and Michael Quinn Patton’s Utilization-Focused Evaluation, described below. These authors have distilled their models into checklists.

Stufflebeam’s CIPP Model is especially popular for education and human service evaluations. CIPP calls for evaluators to assess a project’s Context, Input, Process, and Products (the latter encompassing effectiveness, sustainability, and transportability). CIPP evaluations ask: What needs to be done? How should it be done? Is it being done? Did it succeed?

Scriven’s Key Evaluation Checklist calls for assessing a project’s processes, outcomes, and costs. It emphasizes the importance of identifying the needs being served by a project and determining how well those needs were met. Especially useful is the list of 21 sources of values/criteria to consider when evaluating pretty much anything.

Patton’s Utilization-Focused Evaluation calls for planning an evaluation around the information needs of “primary intended users” of the evaluation, i.e., those who are in a position to make decisions based on the evaluation results. He provides numerous practical tips for engaging stakeholders to maximize an evaluation’s utility.

This short list barely scratches the surface—for an overview of 22 different models, see Stufflebeam (2001). A firm grounding in evaluation theory will enhance any evaluator’s ability to design and conduct evaluations that are useful, feasible, ethical, and accurate.

Christie, C. A. (2003). Understanding evaluation theory and its role in guiding practice. New Directions for Evaluation, 97, 91–93.

Stufflebeam, D. (2001). Evaluation models. New Directions for Evaluation, 89, 7–98.