Archive: logic model

Blog: Logic Models for Curriculum Evaluation

Posted on June 7, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Rachel Tripathy, Research Associate, WestEd
Linlin Li, Senior Research Associate, WestEd

At the STEM Program at WestEd, we are in the third year of an evaluation of an innovative, hands-on STEM curriculum. Learning by Making is a two-year high school STEM course that integrates computer programming and engineering design practices with topics in earth/environmental science and biology. Experts in physics, biology, environmental science, and computer engineering at Sonoma State University (SSU) developed the curriculum by integrating computer software with custom-designed experiment setups and electronics to create inquiry-based lessons. Throughout this project-based course, students apply mathematics, computational thinking, and the Next Generation Science Standards (NGSS) Science and Engineering Practices to ask questions about the world around them and seek the answers. Learning by Making is currently being implemented in rural California schools, with a specific effort to enroll girls and students from minority backgrounds, who are currently underrepresented in STEM fields. You can listen to students and teachers discussing the Learning by Making curriculum here.

Using a Logic Model to Drive Evaluation Design

We derived our evaluation design from the project’s logic model. A logic model is a structured description of how a specific program achieves an intended learning outcome. The purpose of the logic model is to precisely describe the mechanisms behind the program’s effects. Our approach to the Learning by Making logic model is a variant of the five-column format that describes a program’s inputs, activities, outputs, outcomes, and impacts (W.K. Kellogg Foundation, 2014).
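
For readers who like to see structure in code, the five columns can be pictured as a simple record with one field per column. The sketch below is only illustrative; the class name and example entries are ours, not part of the Learning by Making model or the Kellogg Foundation guide.

    # A minimal, hypothetical sketch of the five-column logic model format.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LogicModel:
        inputs: List[str] = field(default_factory=list)      # resources the program starts with
        activities: List[str] = field(default_factory=list)  # what the program does with those resources
        outputs: List[str] = field(default_factory=list)     # tangible products of the activities
        outcomes: List[str] = field(default_factory=list)    # changes in people, practice, or conditions
        impacts: List[str] = field(default_factory=list)     # longer-term, broader change

    # Illustrative placeholder entries only.
    example = LogicModel(
        inputs=["curriculum materials", "teacher PD workshops"],
        activities=["deliver PD", "implement project-based lessons"],
        outputs=["teachers trained", "lessons taught"],
        outcomes=["increased use of computational thinking instruction"],
        impacts=["more students pursue STEM pathways"],
    )
    print(example.outcomes)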

Learning by Making Logic Model

Logic models are read as a series of conditionals. If the inputs exist, then the activities can occur. If the activities do occur, then the outputs should follow, and so on. Our evaluation of the Learning by Making curriculum centers on the connections indicated by the orange arrows linking outputs to outcomes in the logic model above. These connections break down into two primary areas for evaluation: 1) teacher professional development, and 2) classroom implementation of Learning by Making. The questions that correspond to the orange arrows can be summarized as:

  • Are the professional development (PD) opportunities and resources for the teachers increasing teacher competence in delivering a computational thinking-based STEM curriculum? Does Learning by Making PD increase teachers’ use of computational thinking and project-based instruction in the classroom?
  • Does the classroom implementation of Learning by Making increase teachers’ use of computational thinking and project-based instruction in the classroom? Does classroom implementation promote computational thinking and project-based learning? Do students show an increased interest in STEM subjects?

Without effective teacher PD or classroom implementation, the logic model “breaks,” making it unlikely that the desired outcomes will be observed. To answer our questions about outcomes related to teacher PD, we used comprehensive teacher surveys, observations, bi-monthly teacher logs, and focus groups. To answer our questions about outcomes related to classroom implementation, we used student surveys and assessments, classroom observations, teacher interviews, and student focus groups. SSU used our findings to revise both the teacher PD resources and the curriculum itself, better positioning these two components to produce the intended outcomes. By deriving our evaluation design from a clear and targeted logic model, we succeeded in providing actionable feedback to SSU aimed at keeping Learning by Making on track to achieve its goals.
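
One way to make the “series of conditionals” reading concrete is to walk the chain link by link and stop at the first link that fails. The short sketch below is a hypothetical illustration of that reading; the link names are ours, not the actual instruments or findings from the Learning by Making evaluation.

    # Hypothetical sketch: reading a logic model as a chain of conditionals.
    # If an earlier link does not hold, later links cannot be expected to hold.
    links = [
        ("inputs in place", True),
        ("activities occurred (e.g., teacher PD delivered)", True),
        ("outputs produced (e.g., PD practices applied in the classroom)", False),
        ("outcomes observed (e.g., increased student interest in STEM)", True),
    ]

    for name, held in links:
        if not held:
            print(f"Model breaks at: {name}; downstream outcomes are unlikely.")
            break
        print(f"OK: {name}")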

Newsletter: Getting the Most out of Your Logic Model

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

I recently led two workshops at the American Evaluation Association’s Summer Evaluation Institute. To get a sense of the types of projects that the participants were working on, I asked them to send me a brief project description or logic model in advance of the Institute. I received more than 50 responses, representing a diverse array of projects in the areas of health, human rights, education, and community development. While I have long advocated for logic models as a succinct way to communicate the nature and purpose of projects, it wasn’t until I received these responses that I realized how efficient logic models really are at conveying what a project does, whom it serves, and how it is intended to bring about change.

In reviewing the logic models, I was able to quickly understand the main project activities and outcomes. My workshops were on developing evaluation questions, and I was amazed at how quickly I could frame evaluation questions and indicators based on what was presented in the models. It wasn’t as straightforward with the narrative project descriptions, which were much less consistent in terms of the types of information conveyed and the degree to which the elements were linked conceptually. When participants showed me their models in the workshop, I quickly remembered their projects and could give them specific feedback based on my previous review of their models.

Think of NSF proposal reviewers who have to read numerous 15-page project descriptions. It’s not easy to keep straight all the details of a single project, let alone those of 10 or more 15-page proposals. In a logic model, all the key information about a project’s activities, products, and outcomes is presented in one graphic. This helps reviewers consume the project information as a “package.” For reviewers who are especially interested in the quality of the evaluation plan, a quick comparison of the evaluation plan against the model will reveal how well the plan is aligned with the project’s activities, scope, and purpose. Specifically, mentally mapping the evaluation questions and indicators onto the logic model provides a good sense of whether the evaluation will adequately address both project implementation and outcomes.

One of the main reasons for creating a logic model—other than the fact that it may be required by a funding agency—is to illustrate how key project elements logically relate to one another. I have found that representing a project’s planned activities, products, and outcomes in a logic model format can reveal weaknesses in the project’s plan. For example, there may be an activity that doesn’t seem to lead anywhere, or ambitious outcomes that aren’t adequately supported by activities or outputs. It is much better if you, as a project proposer, spot those weaknesses before an NSF reviewer does. A strong logic model can then serve as a blueprint for the narrative project description—all key elements of the model should be apparent in the project description and vice versa.

I don’t think there is such a thing as the perfect logic model. The trick is to recognize when it is good enough. Check to make sure the elements are located in the appropriate sections of the model, that all main project activities (or activity areas) and outcomes are included, and that they are logically linked. Ask someone from outside your team to review it, and revise if they see problems or opportunities to increase clarity. But don’t overwork it; treat it as a living document that you can update when and if necessary.

Download the logic model template from http://bit.ly/lm-temp.

Newsletter: ATE Logic Model Template

Posted on July 1, 2016 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

A logic model is a graphic depiction of how a project translates its resources into activities and outcomes. The ATE Project Logic Model Template presents the basic format for a logic model, with question prompts and examples to guide users in distilling their project plans into succinct statements about planned activities and products and desired outcomes. Paying attention to the prompts and ATE-specific examples will help users avoid common logic model mistakes, like placing outputs (tangible products) under outcomes (changes in people, organizations, or conditions brought about through project activities and outputs).

The template is in PowerPoint, so you can use the existing elements and start creating your own logic model right away—just delete the instructional parts of the document and enter your project’s information. We have found that when a document has several graphic elements, PowerPoint is easier to work in than Word. Alternatively, you could create a simple table in Word that mirrors the layout in the template.

Formatting tips:

  • If you find you need special paper to print the logic model and maintain its legibility, it’s too complicated. It should be readable on an 8.5” x 11” sheet of paper. If you simply have too much information to include on a single page, use general summary statements or categories, and put detailed explanations in a proposal narrative or other project planning document.
  • You may wish to add arrows to connect specific activities to specific outputs or outcomes. However, if you find that all activities lead to all outcomes (and that is actually how the project is intended to work), there is no need to clutter your model with arrows leading everywhere.
  • Use a consistent font and font size.
  • Align, align, align! Alignment is one of the most important design principles. When logic model elements are out of alignment, the model can seem messy and unprofessional.
  • Don’t worry if your logic model doesn’t capture all the subtle nuances of your project. It should provide an overview of what a project does and is intended to accomplish and convey a clear logic as to how the pieces are connected. Your proposal narrative or project plan is where the details go.

Download the template from http://bit.ly/lm-temp.

Blog: Adapting Based on Feedback

Posted on May 13, 2015 in Blog

Director, South Carolina Advanced Technological Education Center of Excellence, Florence-Darlington Technical College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The Formative Assessment Systems for ATE project (FAS4ATE) focuses on assessment practices that serve the ongoing evaluation needs of projects and centers. Determining these information needs and organizing data collection activities is a complex and demanding task, and we’ve used logic models as a way to map them out. Over the next five weeks, we offer a series of blog posts that provide examples and suggestions of how you can make formative assessment part of your ATE efforts. – Arlen Gullickson, PI, FAS4ATE

Week 4 – Why making changes based on evidence is important

At the Mentor-Connect: Leadership Development and Outreach for ATE project (www.Mentor-Connect.org), formative feedback guides the activities we provide and the resources we develop. It is the compass that keeps us heading in the direction of greatest impact. I’ll share three examples of how feedback at different stages of the project’s life cycle helped us adapt the project. The first came from an outside source; the other two came from our internal feedback processes.

The initial Mentor-Connect technical assistance workshop for each cohort focuses on developing grant writing skills for the NSF ATE program. The workshop was originally designed to serve teams of two STEM faculty members from participant colleges; however, we were approached by grant writers from those colleges who also wanted to attend. On a self-pay basis, we welcomed these additional participants. Post-workshop surveys and conversations with grant writers at the event indicated that during the workshop we should offer a special breakout session just for grant writers so that issues specific to their role in the grant development and submission process could be addressed. This breakout session was added and is now integral to our annual workshop.

Second, feedback from our mentors about our activities caused us to change the frequency of our face-to-face workshops. Mentors reported that the nine-month time lag between the project’s January face-to-face workshop with mentors and the college team’s submission of a proposal the following October made it hard to maintain momentum. Mentors yearned for more face-to-face time with their mentees and vice versa. As a result, a second face-to-face workshop was added the following July. Evaluation feedback from this second gathering of mentors and mentees was resoundingly positive. This second workshop is now incorporated as a permanent part of Mentor-Connect’s annual programming.

Finally, one of our project outputs helps us keep the project on track. We use a brief reporting form that indicates a team’s progress along a grant development timeline. Mentors and their mentees independently complete and submit the same form. When both responses indicate “ahead of schedule,” “on time,” or even “behind schedule,” this consensus is an indicator of good communication between the mentor and his or her college team: they are on the same page. If we observe a disconnect between the mentee’s and mentor’s progress reports, this provides an early alert to the Mentor-Connect team that an intervention may be needed with that mentee/mentor team. Most interventions prompted by this feedback process have been effective in getting the proposal back on track for success.
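
As a rough illustration of how such paired reports can be screened, the sketch below compares a mentor’s and a mentee’s status on the same form and flags any disconnect. The team names and status values are made up; Mentor-Connect’s actual form and process may differ.

    # Hypothetical sketch: flag mentor/mentee teams whose independent progress
    # reports disagree, as an early alert that an intervention may be needed.
    reports = {
        "Team A": {"mentor": "on time", "mentee": "on time"},
        "Team B": {"mentor": "ahead of schedule", "mentee": "behind schedule"},
    }

    for team, r in reports.items():
        if r["mentor"] == r["mentee"]:
            print(f"{team}: consensus ({r['mentor']}); communication looks good")
        else:
            print(f"{team}: disconnect (mentor says {r['mentor']}, mentee says {r['mentee']}); follow up")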

With NSF ATE projects, PIs have the latitude, and are expected, to make adjustments to improve project outcomes. After all, it is a grant and not a contract. NSF expects you to behave like a scientist and adjust based on evidence. So don’t be glued to your original plan! Change can be a good thing. The key is to listen to those who provide feedback, study your evaluation data, and adjust accordingly.

Blog: Get the Right People In!

Posted on April 29, 2015 in Blog

Director, National Convergence Technology Center, Collin College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The Formative Assessment Systems for ATE project (FAS4ATE) focuses on assessment practices that serve the ongoing evaluation needs of projects and centers. Determining these information needs and organizing data collection activities is a complex and demanding task, and we’ve used logic models as a way to map them out. Over the next five weeks, we offer a series of blog posts that provide examples and suggestions of how you can make formative assessment part of your ATE efforts. – Arlen Gullickson, PI, FAS4ATE

Week 2 – Why who you invite to your professional development makes a difference for your results

What would happen if you hosted an event and were careless about the invitation list? You’d probably get plenty of people to come, but they might not be the ones you need to make the event a success, and sheer numbers alone don’t indicate success.

At the National Convergence Technology Center, we offer professional development events called Working Connections. The purpose of these week-long institutes is to prepare community college faculty to teach new IT topics in upcoming semesters.

Who would be the “wrong” people to invite to Working Connections? Anyone BUT community college IT faculty!

When you are working on the input section of your logic model, sometimes you need to look at the outputs and outcomes sections first (i.e., what kinds of outputs and outcomes do you want, and what inputs are needed to achieve them?). In our case, we wanted to show impact from professional development of IT faculty. For example, did the faculty actually teach the courses they learned about at Working Connections? How did this training impact the way they teach? How did Working Connections sessions impact the students these professors taught? Did these new skills impact student learning?

We gather data from attendees at the completion of each Working Connections institute (overall and topic-track surveys), and then follow up with longitudinal questions at six months, 18 months, 30 months, 42 months, and 54 months after the event.
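
For readers who want to see the schedule spelled out, here is a small, hypothetical sketch that computes those follow-up dates from an event date. The event date is invented, and the code assumes the python-dateutil package; it is not part of our actual survey system.

    # Hypothetical sketch: compute follow-up survey dates at 6, 18, 30, 42,
    # and 54 months after an event.
    from datetime import date
    from dateutil.relativedelta import relativedelta

    event_date = date(2014, 7, 14)  # illustrative event date, not an actual one
    for months in (6, 18, 30, 42, 54):
        follow_up = event_date + relativedelta(months=months)
        print(f"{months:>2}-month follow-up survey: {follow_up}")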

When we first implemented the surveys, we noticed that some of the participants had not planned to teach the track they studied at Working Connections. We wondered why, looked at a variety of possibilities, and soon discovered that some of our registrants were not community college IT faculty.

We instituted a simple step in the registration process to verify the participant’s job, consisting of two items on the registration form: (1) Please provide your supervisor’s name, title, phone number, and email; and (2) What IT/convergence classes do you currently teach or supervise? (Working Connections is intended solely for IT/convergence faculty or academic administrators.)

Soon our impact data started trending upward. We also highlighted this requirement in BOLD on our event website: http://summerworkingconnections2014.mobilectc.wikispaces.net/home.

In a practical sense, we also wanted to make sure that the money we invested in the event was going toward the right target. Ensuring that the “right” people attended also ensured we were getting the best bang for the buck.

It seems like such a simple thing, but examining whom you involve in your program events can make a big difference in the success of your project.