The Value of Using a Psychosocial Framework to Evaluate STEM Outcomes Among Underrepresented Students

Posted on December 1, 2016
Drs. Dawn X. Henderson, Breonte S. Guy, and Chad Markert serve as Co-Principal Investigators of an HBCU UP Targeted Infusion Project grant. Funded by the National Science Foundation, the project aims to explore how infusing lab-bench techniques into the Exercise Physiology curriculum informs undergraduate students’ attitudes about research and science and intentions to persist in STEM-related careers.

The National Science Foundation aims to fund projects that increase retention and persistence in STEM-related careers. Developing project proposals usually involves creating a logic model and an evaluation plan. An intervention, particularly one designed to change an individual’s behavior and outcomes, relies on a combination of psychological and social factors. For example, increasing the retention and persistence of underrepresented groups in the STEM education-to-workforce pipeline depends on attitudes about science, behavior, and the ability to access resources that provide exploration of and exposure to STEM.

As faculty interested in designing interventions in STEM education, we developed a psychosocial framework to inform project design and evaluation and believe we offer an insightful strategy to investigators and evaluators. When developing a theory of change or logic model, you can create a visual map (see figure below) to identify underlying psychological and social factors and assumptions that influence program outcomes. In this post, we highlight a psychosocial framework for developing theories of change—specifically as it relates to underrepresented groups in STEM.

Visual mapping can outline the relationship between the intervention and psychological (cognitive) and social domains.

What do we mean by psychosocial framework?

Both retention and persistence rely on social factors, such as financial resources, mentoring, and other forms of social support. For example, in our work, we proposed introducing underrepresented students to lab-bench techniques in the Exercise Physiology curriculum and providing summer research enrichment opportunities through which students receive funding and mentoring. These social resources introduced students to scientific techniques they would not encounter in a traditional curriculum. Psychological factors, such as individual attitudes about science and self-efficacy, are also key contributors to STEM persistence. For instance, self-efficacy is the belief that one has the capacity to accomplish a specific task and achieve a specific outcome.

A practical exercise in developing the psychosocial framework is asking critical questions:

  • What are some social factors driving a project’s outcomes? For example, you may modify social factors by redesigning curriculum to engage students in hands-on experiences, providing mentoring or improving STEM teaching.
  • How can these social factors influence psychological factors? For example, improving STEM education can change the way students think about STEM. Outcomes then could relate to attitudes towards and beliefs about science.
  • How do psychological factors relate to persistence in STEM? For example, changing the way students think about STEM, their attitudes, and beliefs may shape their science identity and increase their likelihood to persist in STEM education (Guy, 2013).

What is the value-added?

Evaluation plans, specifically those seeking to measure changes in human behavior, hinge on a combination of psychological and social factors. The ways in which individuals think and form attitudes and behaviors, combined with their ability to access resources, influence programmatic outcomes. A psychosocial framework can be used to identify how psychological processes and social assets and resources contribute to increased participation and persistence of underrepresented groups in STEM-related fields and the workforce. More specifically, the recognition of psychological and social factors in shaping science attitudes, behaviors, and intentions to persist in STEM-related fields can generate value in project design and evaluation.

Reference
Guy, B. (2013). Persistence of African American men in science: Exploring the influence of scientist identity, mentoring, and campus climate. (Doctoral dissertation).

Useful Resource

Steve Powell’s AEA365 blog post, Theory Maker: Free web app for drawing theory of change diagrams

Course Improvement Through Evaluation: Improving Undergraduate STEM Majors’ Capacity for Delivering Inquiry-Based Mathematics and Science Lessons

Posted on November 16, 2016

Associate Professor, Graduate School of Education, University of Massachusetts Lowell

One of the goals of the University of Massachusetts (UMass) UTeach program is to produce mathematics and science teachers who not only are highly knowledgeable in their disciplines, but also can engage students through inquiry-based instruction. The Research Methods course, one of the core program courses designed to accomplish this goal, is centered on a series of inquiry-based projects.

What We Did

Specifically, the first inquiry was a simple experiment. Students were asked to look around in their kitchens, come up with a research question, and carry out an experiment to investigate the question. The second inquiry required them to conduct a research project in their own disciplinary field. The third inquiry asked students to pretend to be teachers of a middle or high school math/science course who were about to teach their students topics that involve the concept of slope and its applications. This inquiry required students to develop and administer an assessment tool. In addition, they analyzed and interpreted assessment data in order to find out their pretend-students’ prior knowledge and understanding of the concept of slope and its applications in different STEM disciplines (i.e., using assessment information for lesson planning purposes).

Our Goal

We investigated whether our course achieved the goal of enhancing enrollees’ pedagogical skills in delivering inquiry-based instruction on the mathematical and scientific concepts embedded in the inquiry projects.

What We Learned

Examinations of the quality of students’ written inquiry reports showed that students were able to do increasingly difficult work with a higher degree of competency as the course progressed.

Comparisons of students’ responses to pre- and post-course surveys that consisted of questions about a hypothetical experiment indicated that students gained skills at identifying and classifying experimental variables and sources of measurement error. However, they struggled with articulating research questions and justifying whether a question was researchable. These results were consistent with what we observed in their written reports. As the course progressed, students were more explicit in identifying variables and their relationships and were better at explaining how their research designs addressed possible measurement errors. For most students, however, articulating a researchable question was the most difficult aspect of an inquiry project.

Students’ self-reflections and focus group discussions suggested that our course modeled inquiry-based learning quite well, which was a sharp departure from the step-by-step laboratory activities they were used to as K-12 students. Students also noted that the opportunity to independently conceptualize and carry out an experiment before getting peer and instructor feedback, revising, and producing a final product created a reflective process that they had not experienced in other university course work. Finally, students appreciated the opportunity to articulate the connection between practicing inquiry skills as part of their professional requirements (i.e., as STEM majors) and using inquiry as a pedagogical tool to teach the math and science concepts to middle or high school students. They also noted that knowing how to evaluate their own students’ prior knowledge was an important skill for lesson planning down the road.

3 Inconvenient Truths about ATE Evaluation

Posted on October 14, 2016

Director of Research, The Evaluation Center at Western Michigan University

Many evaluations fall short of their potential to provide useful, timely, and accurate feedback to projects because project leaders or evaluators (or both) have unrealistic expectations. In this blog, I expose three inconvenient truths about ATE evaluation. Dealing with these truths head-on will help project leaders avoid delays and misunderstandings.

1. Your evaluator does not have all the answers.

Even for highly experienced evaluators, every evaluation is new and has to be tailored to the project’s particular context. Do not expect your evaluator to produce an ideal evaluation plan on Day 1, be able to pull the perfect data collection instrument off his or her shelf, or know just the right strings to pull to get data from your institutional research office. Your evaluator is an expert on evaluation, not your project or your institution.

As an evaluator, when I ask clients for input on an aspect of their evaluation, the last thing I want to hear is “Whatever you think, you’re the expert.” Work with your evaluator to refine your evaluation plan to ensure it fits your project, your environment, and your information needs. Question elements that don’t seem right to you and provide constructive feedback. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects (Chapter 4) has detailed information about how project leaders can bring their expertise to the evaluation process.

2. There is no one right answer to the question, “What does NSF want from evaluation?”

This is the question I get the most as the director of the evaluation support center for the National Science Foundation’s Advanced Technological Education (ATE) program. The truth is, NSF is not prescriptive about what an ATE evaluation should look like, and different program officers have different expectations. So, if you’ve been looking for the final word on what NSF wants from an ATE evaluation, you can end your search because you won’t find it.

However, NSF does request common types of information from all projects via their annual reports and the annual ATE survey. To make sure you are not caught off guard, preview the Research.gov reporting template and the most recent ATE annual survey questions. If you are doing research, get familiar with the Common Guidelines for Education Development and Research.

If you’re still concerned about meeting expectations, talk to your NSF program officer.

3. Project staff need to put in time and effort.

Evaluation matters often get put on a project’s back burner so more urgent issues can be addressed. (Yes, even an evaluation support center is susceptible to no-time-for-evaluation-itis.) But if you put off dealing with evaluation matters until you feel like you have time for them, you will miss key opportunities to collect data and use the information to make improvements to your project.

To make sure your project’s evaluation gets the attention it needs:

  • Set a recurring conference call or meeting with your evaluator—at least once a month.
  • Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation.
  • Assign one person on your project team to be the point-person for evaluation.
  • Commit to using your evaluation results in a timely way—if you have a recurring project activity, make sure you gather feedback from those involved and use it to improve the next event.

Best Practices for Two-Year Colleges to Create Competitive Evaluation Plans

Posted on September 28, 2016
Kelly Ball
Jeff Grebinoski

Northeast Wisconsin Technical College’s (NWTC) Grants Office works closely with its Institutional Research Office to create ad hoc evaluation teams in order to meet the standards of evidence required in funders’ calls for proposals. Faculty members at two-year colleges often make up the project teams responsible for National Science Foundation (NSF) grant project implementation. However, they often need assistance navigating the terms and concepts traditionally found in scientific research and social science methodology.

Federal funding agencies are now requiring more evaluative rigor in their grant proposals than simply documenting deliverables. For example, the NSF’s Scholarships in Science, Technology, Engineering, and Mathematics (S-STEM) program saw dramatic changes in 2015: The program solicitation increased the amount of non-scholarship budget from 15% of the scholarship amount to 40% of the total project budget to increase supports for students and to investigate the effectiveness of those supports.

Technical colleges, in particular, face a unique challenge as solicitations change: These colleges traditionally have faculty members from business, health, and trades industries. Continuous improvement is a familiar concept to these professionals; however, they tend to have varying levels of expertise evaluating education interventions.

The following are a few best practices we have developed for assisting project teams in grant proposal development and project implementation at NWTC.

  • Where possible, work with an external evaluator at the planning stage. External evaluators can provide expertise that principal investigators and project teams might lack, as they are well-versed in current evaluation methods, trends, and techniques.
  • As they develop their projects, teams should meet with their Institutional Research Office to better understand data gathering and research capacity. Some data needed for evaluation plans might be readily available, whereas other data might require advance planning to develop a system to track information. Conversations about what the data will be used for and what questions the team wants to answer will help ensure that the correct data can be gathered.
  • After a grant is awarded, have a conversation early with all internal and external evaluative parties about clarifying data roles and responsibilities. Agreeing to reporting deadlines and identifying who will collect the data and conduct further analysis will help avoid delays.
  • Create a “data dictionary” for more complicated projects and variables to ensure that everyone is on the same page about what terms mean. For example, “student persistence” can be defined term-to-term or year-to-year, and all parties need to understand which data will be tracked (a minimal sketch of two entries appears after this list).
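To illustrate, here is a minimal sketch of what two data dictionary entries might look like if maintained as a simple Python structure; the variable names, definitions, sources, and codes are hypothetical.

```python
# Hypothetical data dictionary entries; definitions and sources are illustrative only.
data_dictionary = {
    "student_persistence": {
        "definition": "Re-enrollment in the subsequent fall term (year-to-year)",
        "source": "Institutional Research enrollment file",
        "values": {1: "persisted", 0: "did not persist"},
    },
    "stem_credits_attempted": {
        "definition": "STEM credit hours attempted during the award year",
        "source": "Registrar transcript extract",
        "values": "non-negative integer",
    },
}

# Look up how a term is defined before requesting or analyzing the data.
print(data_dictionary["student_persistence"]["definition"])
```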

With some planning and the right working relationships in place, two-year colleges can maintain their federal funding competitiveness even as agencies increase evaluation requirements.

Possible Selves: A Way to Assess Identity and Career Aspirations

Posted on September 14, 2016

Professor of Psychology, Arkansas State University

Children are often asked the question “What do you want to be when you grow up?” Many of us evaluate programs where the developers are hoping that participating in their program will change this answer. In this post, I’d like to suggest using “possible self” measures as a means of evaluating if a program changed attendees’ sense of identity and career aspirations.

What defines the term?

Possible selves are our representations of our future. We all think about what we ideally would like to become (the hoped-for possible self), things that we realistically expect to become (the expected possible self), and things that we are afraid of becoming (the feared-for possible self).[1][2] Possible selves can change many times over the lifespan and thus can be a useful measure to examine participants’ ideas about themselves in the future.

How can it be measured?

There are various ways to measure possible selves. One of the simplest is to use an open-ended measure that asks people to describe what they think will occur in the future. For example, we presented the following (adapted from Oyserman et al., 2006[2]) to youth participants in a science enrichment camp (funded by an NSF-ITEST grant to Arkansas State University):

Probably everyone thinks about what they are going to be like in the future. We usually think about the kinds of things that are going to happen to us and the kinds of people we might become.

  1. Please list some things that you most strongly hope will be true of you in the future.
  2. Please list some things that you think will most likely be true of you in the future.

The measure was used both before and after participating in the program. We purposely did not include a feared-for possible self, given the context of a summer camp.

What is the value-added?

Using this type of open-ended measure allows for participants’ own voices to be heard. Instead of imposing preconceived notions of what participants should “want” to do, it allows participants to tell us what is most important to them. We learned a great deal about participants’ world views, and their answers helped us to fine-tune programs to better serve their needs and to be responsive to our participants. Students’ answers focused on careers, but also included hoped-for personal ideals. For instance, European-American students were significantly more likely to mention school success than African-American students. Conversely, African-American students were significantly more likely to describe hoped-for positive social/emotional futures compared to European-American students. These results allowed program developers to gain a more nuanced understanding of motivations driving participants. Although we regarded the multiple areas of focus as a strength of the measure, evaluators considering using a possible selves measure may also want to include more directed, follow-up questions.

For more information on how to assess possible selves, see Professor Daphna Oyserman’s website.

References

[1] Markus, H. R., & Nurius, P. (1986). Possible selves. American Psychologist, 41, 954–969.

[2] Oyserman, D., Bybee, D., & Terry, K. (2006). Possible selves and academic outcomes: How and when possible selves impel action. Journal of Personality and Social Psychology, 91, 188–204.

Six Data Cleaning Checks

Posted on September 1, 2016

Research Associate, WestEd’s STEM Program

Data cleaning is the process of verifying and editing data files to address issues of inconsistency and missing information. Errors in data files can appear at any stage of an evaluation, making it difficult to produce reliable data. Data cleaning is a critical step in program evaluation because clients rely on accurate results to inform decisions about their initiatives. Below are six essential steps I include in my data cleaning process to minimize issues during data analysis:

1. Compare the columns of your data file against the columns of your codebook.

Sometimes unexpected columns might appear in your data file or columns of data may be missing. Data collected from providers external to your evaluation team (e.g., school districts) might include sensitive participant information like social security numbers. Failures in software used to collect data can lead to responses not being recorded. For example, if a wireless connection is lost while a file is being downloaded, some information in that file might not appear in the downloaded copy. Unnecessary data columns should be removed before analysis and, if possible, missing data columns should be retrieved.
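Here is a minimal sketch of this check in Python with pandas, assuming the export arrives as a CSV file; the file name and codebook columns are hypothetical.

```python
import pandas as pd

# Columns defined in the codebook (illustrative names).
codebook_columns = ["participant_id", "site", "pre_score", "post_score"]

df = pd.read_csv("survey_export.csv")  # hypothetical export file

unexpected = set(df.columns) - set(codebook_columns)  # e.g., sensitive fields to remove
missing = set(codebook_columns) - set(df.columns)     # fields to retrieve from the provider

print("Unexpected columns:", sorted(unexpected))
print("Missing columns:", sorted(missing))
```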

2. Check your unique identifier column for duplicate values.

An identifier is a unique value used to label a participant and can take the form of a person’s full name or a number assigned by the evaluator. Multiple occurrences of the same identifier in a data file usually indicate an error. Duplicate identifier values can occur when participants complete an instrument more than once or when a participant identifier is mistakenly assigned to multiple records. If participants move between program sites, they might be asked to complete a survey for a second time. Administrators might record a participant’s identifier incorrectly, using a value assigned to another participant. Data collection software can malfunction and duplicate rows of records. Duplicate records should be identified and resolved.
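A minimal sketch of a duplicate-identifier check in Python with pandas; the file and column names are hypothetical.

```python
import pandas as pd

df = pd.read_csv("survey_export.csv")  # hypothetical file and column names

# Flag every record whose identifier appears more than once (keep=False marks all copies).
duplicates = df[df.duplicated(subset="participant_id", keep=False)]

# Review the duplicates side by side before deciding which records to keep.
print(duplicates.sort_values("participant_id"))
```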

3. Transform categorical data into standard values.

Non-standard data values often appear in data gathered from external data providers. For example, school districts often provide student demographic information but vary in the categorical codes they use. The following table shows a range of values I received from different districts to represent students’ ethnicities:

[Table not reproduced: examples of the varying ethnicity codes received from different districts]

To aid in reporting on participant ethnicities, I transformed these values into the race and ethnicity categories used by the National Center for Education Statistics.

When cleaning your own data, you should decide on standard values to use for categorical data, transform ambiguous data into a standard form, and store these values in a new data column.  OpenRefine is a free tool that facilitates data transformations.
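For those who prefer a scripted alternative to OpenRefine, here is a minimal sketch in Python with pandas; the district codes, column names, and category labels are illustrative, not the actual values from the table above.

```python
import pandas as pd

df = pd.read_csv("district_export.csv")  # hypothetical file and column names

# Map each district-specific code to one standard reporting category.
ethnicity_map = {
    "B": "Black or African American",
    "AFAM": "Black or African American",
    "H": "Hispanic or Latino",
    "HISP": "Hispanic or Latino",
    "W": "White",
}

# Keep the original values and store the standardized ones in a new column.
df["ethnicity_std"] = df["ethnicity_raw"].map(ethnicity_map)

# Values not covered by the map become NaN and need follow-up with the provider.
print(df.loc[df["ethnicity_std"].isna(), "ethnicity_raw"].unique())
```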

4. Check your data file for missing values.

Missing values occur when participants choose not to answer an item, are absent the day of administration, or skip an item due to survey logic. If missing values are found, apply a code to indicate the reason for the missing data point. For example, 888888 can indicate an instrument was not administered and 999999 can indicate a participant chose not to respond to an item. The use of codes can help data analysts determine how to handle the missing data. Analysts sometimes need to report on the frequency of missing data, use statistical methods to replace the missing data, or remove the missing data before analysis.
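A minimal sketch of how such missing-data codes might be applied in Python with pandas; the file, site, and item names are hypothetical, and the 888888/999999 codes follow the example in the paragraph above.

```python
import pandas as pd

df = pd.read_csv("survey_export.csv")  # hypothetical file, site, and item names

NOT_ADMINISTERED = 888888  # instrument was not administered
NO_RESPONSE = 999999       # participant chose not to respond

# Report how much data is missing before recoding anything.
print(df["item_1"].isna().sum(), "missing values in item_1")

# Example: participants at one site never received the instrument.
df.loc[df["site"] == "Site C", "item_1"] = NOT_ADMINISTERED

# Treat the remaining blanks as declined responses.
df["item_1"] = df["item_1"].fillna(NO_RESPONSE)
```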

5. Check your data file for extra or missing records.

Attrition and recruitment can occur at all stages of an evaluation. Sometimes people who are not participating in the evaluation are allowed to submit data. Check the number of records in your data file against the number of recruited participants for discrepancies. Tracking dates when participants join a project, leave a project, and complete instruments can facilitate this review.
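A minimal sketch of a roster comparison in Python with pandas; the file and column names are hypothetical.

```python
import pandas as pd

survey = pd.read_csv("survey_export.csv")           # hypothetical file names
roster = pd.read_csv("recruited_participants.csv")

submitted = set(survey["participant_id"])
recruited = set(roster["participant_id"])

print("Data from non-participants:", sorted(submitted - recruited))
print("Recruited but no data:", sorted(recruited - submitted))
```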

6. Correct erroneous or inconsistent values.

When instruments are completed on paper, participants can enter unexpected values. Online tools may be configured incorrectly and allow illegal values to be submitted. Create a list of validation criteria for each data field and compare all values against this list. de Jonge and van der Loo provide a tutorial for checking invalid data using R.
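A rough Python (pandas) equivalent of such a validation check, rather than the R approach referenced above; the fields and rules are hypothetical.

```python
import pandas as pd

df = pd.read_csv("survey_export.csv")  # hypothetical file and field names

# One validation rule per field; each rule returns True for valid values.
rules = {
    "age": lambda s: s.between(10, 20),
    "grade": lambda s: s.isin(list(range(6, 13))),
    "likert_item": lambda s: s.isin([1, 2, 3, 4, 5]),
}

for field, rule in rules.items():
    invalid = df[~rule(df[field])]
    if not invalid.empty:
        print(f"{field}: {len(invalid)} invalid value(s)")
```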

Data cleaning can be a time-consuming process. These checks can help reduce the time you spend on data cleaning and get results to your clients more quickly.

Three Tips for a Strong NSF Proposal Evaluation Plan

Posted on August 17, 2016

Principal Research Scientist, Education Development Center, Inc.

I’m Leslie Goodyear and I’m an evaluator who also served as a program officer for three years at the National Science Foundation in the Division of Research on Learning, which is in the Education and Human Resources Directorate. While I was there, I oversaw evaluation activities in the Division and reviewed many, many evaluation proposals and grant proposals with evaluation sections.

In May 2016, I had the pleasure of participating in the webinar “Meeting Requirements, Exceeding Expectations: Understanding the Role of Evaluation in Federal Grants.” Hosted by Lori Wingate at EvaluATE and Ann Beheler at the Centers Collaborative for Technical Assistance, this webinar covered topics such as evaluation fundamentals; evaluation requirements and expectations; and evaluation staffing, budgeting, and utilization.

On the webinar, I shared my perspective on the role of evaluation at NSF, strengths and weaknesses of evaluation plans in proposals, and how reviewers assess Results from Prior NSF Support sections of proposals, among other topics. In this blog, I’ll give a brief overview of some important takeaways from the webinar.

First, if you’re making a proposal to education or outreach programs, you’ll likely need to include some form of project evaluation in your proposal. Be sure to read the program solicitation carefully to know what the specific requirements are for that program. There are no agency-wide evaluation requirements—instead they are specified in each solicitation. Lori had a great suggestion on the webinar:  Search the solicitation for “eval” to make sure you find all the evaluation-related details.

Second, you’ll want to make sure that your evaluation plan is tailored to your proposed activities and outcomes. NSF reviewers and program officers can smell a “cookie cutter” evaluation plan, so make sure that you’ve talked with your evaluator while developing your proposal and that they’ve had the chance to read the goals and objectives of your proposed work before drafting the plan. You want the plan to be incorporated into the proposal so that it appears seamless.

Third, indicators of a strong evaluation plan include carefully crafted, relevant overall evaluation questions, a thoughtful project logic model, a detailed data collection plan that is coordinated with project activities, and a plan for reporting and dissemination of findings. You’ll also want to include a bio for your evaluator so that the reviewers know who’s on your team and what makes them uniquely qualified to carry out the evaluation of your project.

Additions that can make your plan “pop” include:

  • A table that maps out the evaluation questions to the data collection plans. This can save space by conveying lots of information in a table instead of in narrative.
  • Combining the evaluation and project timelines so that the reviewers can see how the evaluation will be coordinated with the project and offer timely feedback.

Some programs allow for using the Supplemental Documents section for additional evaluation information. Remember that reviewers are not required to read these supplemental docs, so be sure that the important information is still in the 15-page proposal.

For the Results of Prior NSF Support section, you want to be brief and outcome-focused. Use this space to describe what resulted from the prior work, not what you did. And be sure to be clear how that work is informing the proposed work by suggesting, for example, that these outcomes set up the questions you’re pursuing in this proposal.

National Science Foundation-funded Resources to Support Your Advanced Technological Education (ATE) Project

Posted on August 3, 2016

Doctoral Associate, EvaluATE

Did you know that other National Science Foundation programs focused on STEM education have centers that provide services to projects? EvaluATE offers evaluation-specific resources for the Advanced Technological Education program, while some of the others are broader in scope and purpose. They offer technical support, resources, and information targeted at projects within the scope of specific NSF funding programs. A brief overview of each of these centers is provided below, highlighting evaluation-related resources. Make sure to check the sites out for further information if you see something that might be of value for your project!

The Community for Advancing Discovery Research in Education (CADRE) is a network for NSF’s Discovery Research K-12 program (DR K-12). The evaluation resource on the CADRE site is a paper on evaluation options (formative and summative), which differentiates evaluation from the research and development efforts carried out as part of project implementation. There are other, more general resources, such as guidelines and tools for proposal writing and a library of reports and briefs, along with a video showcase of DR K-12 projects.

The Center for the Advancement of Informal Science Education (CAISE) has an evaluation section of its website that is searchable by type of resource (i.e., reports, assessment instruments, etc.), learning environment, and audience. For example, there are over 850 evaluation reports and 416 evaluation instruments available for review. The site hosts the Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects, which was developed as an initiative of the Visitor Studies Association and has sections such as working with an evaluator, developing an evaluation plan, creating evaluation tools and reporting.

The Math and Science Partnership Network (MSPnet) supports the Math and Science Partnership (MSP) and STEM+C (computer science) communities. MSPnet has a digital library with over 2,000 articles; a search using the term “eval” found 467 listings, dating back to 1987. There is a toolbox with materials such as assessments, evaluation protocols, and form letters. Other resources in the MSPnet library include articles and reports related to teaching and learning, professional development, and higher education.

The Center for Advancing Research and Communication (ARC) supports the NSF Research and Evaluation on Education in Science and Engineering (REESE) program through technical assistance to principal investigators. An evaluation-specific resource includes material from a workshop on implementation evaluation (also known as process evaluation).

The STEM Learning and Research Center (STELAR) provides technical support for the Innovative Technology Experiences for Students and Teachers (ITEST) program. Its website includes links to a variety of instruments, such as the Grit Scale, which can be used to assess students’ resilience for learning as part of a larger evaluation plan.

How Real-time Evaluation Can Increase the Utility of Evaluation Findings

Posted on July 21, 2016
Elizabeth Peery Stephanie B. Wilkerson

Evaluations are most useful when evaluators make relevant findings available to project partners at key decision-making moments. One approach to increasing the utility of evaluation findings is by collecting real-time data and providing immediate feedback at crucial moments to foster progress monitoring during service delivery. Based on our experience evaluating multiple five-day professional learning institutes for an ATE project, we discovered the benefits of providing real-time evaluation feedback and the vital elements that contributed to the success of this approach.

What did we do?

With project partners we co-developed online daily surveys that aligned with the learning objectives for each day’s training session. Daily surveys measured the effectiveness and appropriateness of each session’s instructional delivery, exercises and hands-on activities, materials and resources, content delivery format, and session length. Participants also rated their level of understanding of the session content and preparedness to use the information. They could submit questions, offer suggestions for improvement, and share what they liked most and least. Based on the survey data that evaluators provided to project partners after each session, partners could monitor what was and wasn’t working and identify where participants needed reinforcement, clarification, or re-teaching. Project partners could make immediate changes and modifications to the remaining training sessions to address any identified issues or shortcomings before participants completed the training.

Why was it successful?

Through the process, we recognized that there were a number of elements that made the daily surveys useful in immediately improving the professional learning sessions. These included the following:

  • Invested partners: The project partners recognized the value of the immediate feedback and its potential to greatly improve the trainings. Thus, they made a concentrated effort to use the information to make mid-training modifications.
  • Evaluator availability: Evaluators had to be available to pull the data after hours from the online survey software program and deliver it to project partners immediately.
  • Survey length and consistency: The daily surveys took less than 10 minutes to complete. While tailored to the content of each day, the surveys had a consistent question format that made them easier to complete.
  • Online format: The online format allowed for a streamlined and user-friendly survey. Additionally, it made retrieving a usable data summary much easier and timelier for the evaluators.
  • Time for administration: Time was carved out of the training sessions to allow for the surveys to be administered. This resulted in higher response rates and more predictable timing of data collection.

If real-time evaluation data will provide useful information that can help make improvements or decisions about professional learning trainings, it is worthwhile to seek resources and opportunities to collect and report this data in a timely manner.

Here are some additional resources regarding real-time evaluation:

Articulating Intended Outcomes Using Logic Models: The Roles Evaluators Play

Posted on July 6, 2016
Stephanie B. Wilkerson Elizabeth Peery

Articulating project outcomes is easier said than done. A well-articulated outcome is one that is feasible to achieve within the project period, measurable, appropriate for the phase of project development, and in alignment with the project’s theory of change. A project’s theory of change represents causal relationships – IF we do these activities, THEN these intended outcomes will result. Understandably, project staff often frame outcomes as what they intend to do, develop, or provide, rather than what will happen as a result of those project activities. Using logic models to situate intended outcomes within a project’s theory of change helps to illustrate how project activities will result in intended outcomes.

Since 2008, my team and I have served as the external evaluator for two ATE project cycles with the same client. As the project has evolved over time, so too have its intended outcomes. Our experience using logic models for program planning and evaluation has illuminated four critical roles we as evaluators have played in partnership with project staff:

  1. Educator. Once funded, we spent time educating the project partners on the purpose and development of a theory of change and intended outcomes using logic models. In this role, our goal was to build understanding of and buy-in for the need to have logic models with well-articulated outcomes to guide project implementation.
  2. Facilitator. Next, we facilitated the development of an overarching project logic model with project partners. The process of defining the project’s theory of change and intended outcomes was important in creating a shared agreement and vision for project implementation and evaluation. Even if the team includes a logic model in the proposal, refining it during project launch is still an important process for engaging project partners. We then collaborated with individual project partners to build a “family” of logic models to capture the unique and complementary contributions of each partner while ensuring that the work of all partners was aligned with the project’s intended outcomes. We repeated this process during the second project cycle.
  3. Methodologist. The family of logic models became the key source for refining the evaluation questions and developing data collection methods that aligned with intended outcomes. The logic model thus became an organizing framework for the evaluation. Therefore, the data collection instruments, analyses, and reporting yielded relevant evaluation information related to intended outcomes.
  4. Critical Friend. As evaluators, our role as a critical friend is to make evidence-based recommendations for improving project activities to achieve intended outcomes. Sometimes evaluation findings don’t support the project’s theory of change, and as critical friends, we play an important role in challenging project staff to identify any assumptions they might have made about project activities leading to intended outcomes. This process helped to inform the development of tenable and appropriate outcomes for the next funding cycle.

Resources:

There are several resources for articulating outcomes using logic models. Some of the most widely known include the following:

Worksheet: Logic Model Template for ATE Projects & Centers: http://www.evalu-ate.org/resources/lm-template/

Education Logic Model (ELM) Application Tool for Developing Logic Models: http://relpacific.mcrel.org/resources/elm-app/

University of Wisconsin-Extension’s Logic Model Resources: http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html

W.K. Kellogg Foundation Logic Model Development Guide: https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide