Blog: Sustaining Private Evaluation Practices: Overcoming Challenges by Collaborating within Our ATE Community of Practice

Posted on September 27, 2017 in Blog

President, Impact Allies

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

My name is Ben Reid. I am the founder of Impact Allies, a private evaluation firm. This post focuses on the business, rather than the technical, aspects of evaluation. My purpose is to present a challenge to sustaining a private evaluation practice and serving clients well, and to propose an opportunity to overcome that challenge by collaborating within our community of practice.

Challenge

Evaluators often act as one-person shows. It is important to give a principal investigator (PI) and project team a single point of contact, and for that evaluator of record to have thorough knowledge of the project and its partners. However, the many different jobs required by an evaluation contract simply cross too many specialties and personality types for one person to serve a client best.

Opportunity

The first opportunity is to become more professionally aware of our strengths and weaknesses. What are your skills? And equally important, where are you skill-deficient (you don’t know how to do it) and where are you performance-deficient (you have the skill but aren’t suited for it—because of anxiety, frustration, lack of enthusiasm, etc.)?

The second opportunity is to build relationships within our community of practice. Get to know other evaluators, what their unique strengths are, and whom they use for ancillary services (their book of contractors). (The upcoming NSF ATE PI conference is a great place to do this.)

Example

My Strengths: Any evaluator can satisfactorily perform the basics – EvaluATE certainly has done a tremendous job of educating and training us. In this field, I am unique in my strengths of external communications, opportunity identification and assessment, strategic and creative thinking, and partnership development. Those skills, along with a background in education, marketing and branding, and project management, have helped me contribute broadly, which has proven useful time and again when working with small teams. Knowing clients well and having an entrepreneurial mindset allows me to do what is encouraged in NSF’s 2010 User-Friendly Handbook for Project Evaluation: “Certain evaluation activities can help meet multiple purposes, if used judiciously” (p. 119).

My Weaknesses: However, an area where I could use some outside support is graphic design and data visualization. This work, because it succinctly tells the story and successes of a project, is very important when communicating to multiple stakeholders, in published works, or for promotional purposes. Where I once performed these tasks (with much time and frustration and at a level which isn’t noteworthy), I now contract with an expert—and my clients are thereby better served.

Takeaway

“Focus on the user and all else will follow” is the number one philosophy of Google, a company that has given us so much and in turn done so well for itself. Let us also focus on our clients, serving their needs by building our businesses where we are skilled and enthusiastic, and collaborating (partnering, outsourcing, or referring) within our community of practice where another professional can do a better job for them.

Blog: Using Mutual Interviewing to Gather Student Feedback

Posted on September 18, 2017 in Blog

Owner, Applied Inference

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

It’s hard to do a focus group with 12 or more students! With so many people and so little time, you know it’s going to be hard to hear from the quieter students, but that may be who you need to hear from the most! Or, maybe some of the questions are sensitive – the students with the most to lose may be the least likely to speak up. What can you do? Mutual interviewing. Here is how you do it.

This method works great for non-English speakers, as long as all participants speak the same language and your co-facilitator and one person in each group are bilingual.

Warnings: The setup is time-consuming, it is very hard to explain, pandemonium ensues, people start out confused, they are often afraid they’ll do it wrong, and it is noisy (a big room is best!).

Promises: Participants are engaged, it accommodates much larger groups than a traditional focus group, everyone participates and no one can dominate, responses are anonymous except to the one individual who heard the response, it builds community and enthusiasm, and it is an empowering process.

Preparation (the researcher):

  1. Choose four distinct topics related to the project, such as program strengths, program weaknesses, suggestions for improvement, future plans, how you learn best, or personal challenges that might interfere. The challenge is to identify the most important topics that are far enough apart that people do not give the same answer to each question.
  2. Create detailed interview guides, perhaps including probe questions the group members can use to help their interviewees get traction on the question. Each group member will need four copies of his or her group’s interview guide (one for each interviewee, plus one to fill out himself/herself).
  3. Prepare nametags with a group and ID number (e.g., 3-1 is Group 3, member 1); using numbers rather than names helps ensure confidentiality. Make sure the groups are equal in size, with between 3 and 6 members per group. This method allows you to conduct the focus group activity with up to 24 people! The nametags help member-interviewers and member-interviewees from the designated groups find each other during the rounds.
  4. Create a brief demographic survey, and pre-fill it with Group and ID numbers to match the nametags. (The ID links the survey with interview responses gathered during the session, and this helps avoid double counting responses during analysis. You can also disaggregate interview responses by demographics.)
  5. Set up four tables for the groups and label them clearly with group number.
  6. Provide good food and have it ready to welcome the participants as they walk in the door.

During the Session:

At the time of the focus group, help people to arrange themselves into 4 equal-sized groups of between 3 and 6 members. Assign one of the topics to each group. Group members will research their topic by interviewing someone from each of the other groups (and being interviewed in return by their counterparts). After completing all rounds of interviews, they come back to their own group and answer the questions themselves. Then they discuss what they heard with each other, and report it out. The report-out gives other people the opportunity to add thoughts or clarify their views.

  1. As participants arrive, give them a nametag with Group and ID number and have them fill out a brief demographic survey (which you have pre-filled with their group/id number). The group number will indicate the table where they should sit.
  2. Explain the participant roles: Group Leader, Interviewer, and Interviewee.
    1. One person in each group is recruited to serve as a group leader and note taker during the session. This person will also participate in a debrief afterward to provide insights or tidbits gleaned during their group’s discussion.
    2. The group leader will brief their group members on their topic and review the interview guide.
    3. Interviewer/Interviewee: During each round, each member is paired with one person from another group, and they take turns interviewing each other about their group’s topic.
  3. Give members 5 minutes to review the interview guide and answer the questions on their own. They can also discuss with others in their group.
  4. After 5 minutes, pair each member with a partner from another group for the interview (i.e., Group 1 with Group 2, and Group 3 with Group 4). Each member should mark a fresh interview sheet with their interviewee’s Group-ID number; the pair then take turns interviewing each other. Make sure they take notes during the interview. Give the first interviewer 5 minutes, then switch roles, and repeat the process. (A small sketch of the full pairing rotation appears after these steps.)
  5. Rotate and pair each member with someone from a different group (Group 1 and 3, Group 2 and 4) and repeat the interviews using a fresh interview sheet, marked with the new interviewee’s Group-ID number. Again, each member will interview the other for five minutes.
  6. Finally, rotate again and pair up members from Groups 1 and 4 and Groups 2 and 3 for the final round. Mark the third clean interview sheet with each interviewee’s Group-ID number and interview each other for five minutes.
  7. Once all pairings are finished, members return to their original groups. Each member takes 5 minutes to complete or revise their own interview form, possibly enriched by the perspectives of 3 other people.
  8. The Group Leader facilitates a 15-minute discussion, during which participants compare notes and prepare a flip chart to report out their findings. The Group Leader should take notes during the discussion. (Tip: Sometimes it’s helpful to provide guiding questions for the report-out.)
  9. Each group then has about five minutes to report the compiled findings. (Tip: During the reports, have some questions prepared to further spark conversation.)
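The Group-ID labels and the rotation in steps 4-6 can be scripted if you want to print pairing sheets in advance. Below is a minimal Python sketch, not from the original post; it assumes four equal groups of four members (the post allows 3 to 6 per group) and simply prints who interviews whom in each round.

```python
# Minimal sketch of the mutual-interviewing rotation (hypothetical group size of 4).
GROUP_SIZE = 4
ROUNDS = [((1, 2), (3, 4)),   # Round 1: Groups 1 & 2, Groups 3 & 4
          ((1, 3), (2, 4)),   # Round 2: Groups 1 & 3, Groups 2 & 4
          ((1, 4), (2, 3))]   # Round 3: Groups 1 & 4, Groups 2 & 3

# Nametag IDs such as "3-1" = Group 3, member 1
members = {g: [f"{g}-{m}" for m in range(1, GROUP_SIZE + 1)] for g in range(1, 5)}

for round_num, pairings in enumerate(ROUNDS, start=1):
    print(f"Round {round_num}:")
    for g1, g2 in pairings:
        for a, b in zip(members[g1], members[g2]):
            print(f"  {a} interviews {b}, then they swap roles")
```

If the groups end up unequal, one member of a larger group can join a trio for a round; this is one reason keeping groups the same size, as recommended above, simplifies the logistics.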

After the Session:

  1. Hold a brief (10-15 minute) meeting with the Group Leaders and have them talk about the process, insights, or tidbits that did not make it to the flip chart and provide any additional feedback to the researcher.

Results of the process:

You will now have demographic data from the surveys, notes from the individual interview sheets, the group leaders’ combined notes, and the flip charts of combined responses to each question.

To learn more about mutual interviewing, see pages 51-55 of Research Toolkit for Program Evaluation and Needs Assessments Summary of Best Practices.

Vlog: Resources to Help with Evaluation Planning for ATE Proposals

Posted on September 6, 2017 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation is an important element of an ATE proposal. EvaluATE has developed several resources to help you develop your evaluation plans and integrate them into your ATE proposals. This video highlights a few of them—these and more can be accessed from the links below the video.

Additional Resources:

Blog: Not Just an Anecdote: Systematic Analysis of Qualitative Evaluation Data

Posted on August 30, 2017 in Blog

President and Founder, Creative Research & Evaluation LLC (CR&E)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As a Ph.D.-trained anthropologist, I spent many years learning how to shape individual stories and detailed observations into larger patterns that help us understand social and cultural aspects of human life. Thus, I was initially taken aback when I realized that program staff or program officers often initially think of qualitative evaluation as “just anecdotal.” Even people who want “stories” in their evaluation reports can be surprised at what is revealed through a systematic analysis of qualitative data.

Here are a few tips that can help lead to credible findings using qualitative data.  Examples are drawn from my experience evaluating ATE programs.

  • Organize your materials so that you can report which experiences are shared among program participants and which perceptions are unusual or unique. This may sound simple, but it takes forethought and time to provide a clear picture of the overall range and variation of participant perceptions. For example, in analyzing two focus group discussions held with the first cohort of students in an ATE program, I looked at each transcript separately to identify the program successes and challenges raised in each focus group. Comparing major themes raised by each group, I was confident when I reported that students in the program felt well prepared, although somewhat nervous about upcoming internships. On the other hand, although there were multiple joking comments about unsatisfactory classroom dynamics, I knew these were all made by one person and not taken seriously by other participants, because I had assigned each participant a label and used these labels in the focus group transcripts. (A minimal tallying sketch follows this list.)
  • Use several qualitative data sources to strengthen a complex conclusion. In technical terms, this is called “triangulation.” Two common methods of triangulation are comparing information collected from people with different roles in a program and comparing what people say with what they are observed doing. In some cases, data sources converge, and in some cases they diverge. In collecting early information about an ATE program, I learned how important the program is to industry stakeholders. There was such a need for entry-level technicians that stakeholders, students, and program staff all mentioned ways that immediate job openings might take short-term priority over continuing immediately into advanced levels of the same program.
  • Think about qualitative and quantitative data in relation to each other. Student records and participant perceptions show different things and can inform each other. For example, instructors from industry may report a cohort of students as being highly motivated and uniformly successful at the same time that institutional records show a small number of less successful students. Both pieces of the picture are important for assessing a project’s success: one shows a high level of industry enthusiasm, while the other provides exact percentages about participant success.
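As a concrete illustration of the first tip, here is a minimal Python sketch (not from the original post; the participant labels and theme codes are hypothetical) that tallies how many distinct participants raised each coded theme, which makes it easy to see whether a theme is widely shared or came from a single vocal person.

```python
# Tally coded theme mentions by participant label (hypothetical codes and labels).
from collections import defaultdict

# (participant_label, theme_code) pairs coded from a focus group transcript
coded_segments = [
    ("P1", "prepared_for_internship"), ("P2", "prepared_for_internship"),
    ("P3", "prepared_for_internship"), ("P4", "nervous_about_internship"),
    ("P2", "classroom_dynamics"), ("P2", "classroom_dynamics"),
]

mentions = defaultdict(set)  # theme -> set of participants who raised it
for participant, theme in coded_segments:
    mentions[theme].add(participant)

for theme, who in sorted(mentions.items()):
    breadth = "shared" if len(who) > 1 else f"raised only by {next(iter(who))}"
    print(f"{theme}: {len(who)} participant(s), {breadth}")
```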

Additional Resources

The following two sources are updated classics in the fields of qualitative research and evaluation.

Miles, M. B., Huberman, A. M., & Saldana, J. (2014). Qualitative data analysis: A methods sourcebook. Thousand Oaks, CA: Sage.

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice: The definitive text of qualitative inquiry frameworks and options (4th ed.). Thousand Oaks, CA: Sage.

Blog: Reporting Anticipated, Questionable, and Unintended Project Outcomes

Posted on August 16, 2017 in Blog

Education Administrator, Independent

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Project evaluators are aware that evaluation aims to support learning and improvement. Through a series of planned interactions, event observations, and document reviews, the evaluator is charged with reporting to the project leadership team and ultimately the project’s funding agency, informing audiences of the project’s merit. This is not to suggest that reporting should only aim to identify positive impacts and outcomes of the project. Equally, there is substantive value in informing audiences of unintended and unattained project outcomes.

Evaluation reporting should discuss aspects of the project’s outcomes, whether anticipated, questionable, or unintended. When examining project outcomes, the evaluator analyzes the information obtained and guides project leadership through reflective thinking exercises to define the significance of the project and summarize why its outcomes matter.

Let’s be clear: outcomes are not to be regarded as something negative. In fact, in the projects I have evaluated over the years, outcomes have frequently served as an introspective platform that informs future curriculum decisions and directions within the institution receiving the funding. For example, the outcomes of one STEM project focused on renewable energy technicians provided the institution with information that prompted the development of subsequent proposals and projects targeting engineering pathways.

Discussion and reporting of project outcomes also encapsulates lessons learned and affords the opportunity for the evaluator to ask questions such as:

  • Did the project increase the presence of the target group in identified STEM programs?
  • What initiatives will be sustained after funding ends to maintain an increased presence of the target group in STEM programs?
  • Did project activities contribute to the retention/completion rates of the target group in identified STEM programs?
  • Which activities seemed to have the greatest/least impact on retention/completion rates?
  • On reflection, are there activities that could have more significantly contributed to retention/completion rates that were not implemented as part of the project?
  • To what extent did the project supply regional industries with a more diverse STEM workforce?
  • What effect will this have on regional industries after project funding ends?
  • Were partners identified in the proposal realistic contributors to the funded project? Did they ensure a successful implementation enabling the attainment of anticipated outcomes?
  • What was learned about the characteristics of “good” and “bad” partners?
  • What are characteristics to look for and avoid to maximize productivity with future work?

Factors influencing outcomes include, but are not limited to:

  • Institutional changes, e.g., leadership;
  • Partner constraints or changes; and
  • Project/budgetary limitations.

It is not unusual for a proposed project to be somewhat grandiose in identifying intended outcomes. Yet, when project implementation gets underway, intended activities may be compromised by external challenges. For example, when equipment is needed to support various aspects of a project, procurement and production channels may contribute to delays in equipment acquisition, thus adversely affecting project leadership’s ability to launch planned components of the project.

As a tip, it is worthwhile for those seeking funding to pose the outcome questions at the front-end of the project – when the proposal is being developed. Doing this will assist them in conceptualizing the intellectual merit and impact of the proposed project.

Resources and Links:

Developing an Effective Evaluation Report: Setting the Course for Effective Program Evaluation. Atlanta, Georgia: Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health, Division of Nutrition, Physical Activity and Obesity, 2013.

Blog: Integrating Perspectives for a Quality Evaluation Design

Posted on August 2, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
John Dorris

Director of Evaluation and Assessment, NC State Industry Expansion Solutions

Dominick Stephenson

Assistant Director of Research Development and Evaluation, NC State Industry Expansion Solutions

Designing a rigorous and informative evaluation depends on communication with program staff to understand planned activities and how those activities relate to the program sponsor’s objectives and the evaluation questions that reflect those objectives (see white paper related to communication). At NC State Industry Expansion Solutions, we have worked long enough on evaluation projects to know that such communication is not always easy because program staff and the program sponsor often look at the program from two different perspectives: The program staff focus on work plan activities (WPAs), while the program sponsor may be more focused on the evaluation questions (EQs). So, to help facilitate communication at the beginning of the evaluation project and assist in the design and implementation, we developed a simple matrix technique to link the WPAs and the EQs (see below).

[Figure: Matrix linking work plan activities (WPAs) to evaluation questions (EQs)]

For each of the WPAs, we link one or more EQs and indicate what types of data collection events will take place during the evaluation. During project planning and management, the crosswalk of WPAs and EQs will be used to plan out qualitative and quantitative data collection events.

[Figure: Crosswalk of WPAs and EQs with planned data collection events]
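Since the matrices themselves appear only as figures, here is a minimal Python sketch of the same idea as a data structure. The activity names, question IDs, and data collection events are hypothetical placeholders, not the authors' actual crosswalk.

```python
# Hypothetical WPA-EQ crosswalk: each work plan activity is linked to the evaluation
# questions it informs and the data collection events planned for it.
crosswalk = {
    "WPA1: Deliver instructor training": {
        "eqs": ["EQ1", "EQ3"],
        "data_collection": ["training observation", "instructor survey"],
    },
    "WPA2: Recruit program participants": {
        "eqs": ["EQ2"],
        "data_collection": ["enrollment records", "participant intake survey"],
    },
}

# Printing the links makes gaps easy to spot: a WPA with no EQ, or an EQ with no
# planned data collection event, signals a hole in the evaluation design.
for wpa, links in crosswalk.items():
    print(f"{wpa} -> {', '.join(links['eqs'])} via {', '.join(links['data_collection'])}")
```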

The above framework may be more helpful with the formative assessment (process questions and activities). However, it can also enrich the knowledge gained by the participant outcomes analysis in the summative evaluation in the following ways:

  • Understanding how the program has been implemented will help determine fidelity to the program as planned, which will help determine the degree to which participant outcomes can be attributed to the program design.
  • Details on program implementation that are gathered during the formative assessment, when combined with evaluation of participant outcomes, can suggest hypotheses regarding factors that would lead to program success (positive participant outcomes) if the program is continued or replicated.
  • Details regarding the data collection process that are gathered during the formative assessment will help assess the quality and limitations of the participant outcome data, and the reliability of any conclusions based on that data.

So, for us this matrix approach is a quality-check on our evaluation design that also helps during implementation. Maybe you will find it helpful, too.

Blog: Evaluation’s Role in Helping Clients Avoid GroupThink

Posted on July 10, 2017 in Blog

Senior Evaluator, SmartStart Evaluation & Research

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In December of 2016, I presented a poster on a STEM-C education project at the Restore America’s Estuaries National Summit, co-hosted by The Coastal Society. Having a social science background, I assumed I’d be “out of my depth” amid restoration science topics. However, a documentary on estuarine restoration projects along New Jersey’s Hidden Coast inspired me with insights on the importance of evaluation in helping projects achieve effective outcomes. The film highlights the vital importance of horseshoe crabs as a keystone species beset by myriad threats: Their sustainability as a renewable resource was overestimated and their ecological importance undervalued until serious repercussions became impossible to ignore. Teams of biologists, ecologists, military veterans, communication specialists, and concerned local residents came together to help restore their habitat and raise awareness to help preserve this vital species.

This documentary was not the only project presented at the conference in which diverse teams of scientists, volunteers, educators, and others came together to work toward a shared goal. I began to reflect on how similar these groups, in their composition and their need for successful collaboration, were to the teams contributing to many projects I evaluate. Time and again it was revealed that well-intended interdisciplinary team members often initially struggled to communicate effectively due to different expectations, priorities, and perspectives. Often presenters spoke about ways these challenges had been overcome, most frequently through extensive communication with open exchanges of ideas. However, these presentations represented only successful projects promoting their outcomes as inspiration and guidance for others. How often might a lack of open communication lead projects down a different path? When does this occur? And how can an evaluator help project leaders foresee and avoid potential pitfalls?

Often, the route to undesired and unsuccessful outcomes lies in a lack of effective communication, which is a common symptom of GroupThink. Imagine the leadership team on any project you evaluate:

  • Are they a highly cohesive group?
  • Do they need to make important decisions, often under deadlines or other pressure?
  • Do members prefer consensus to conflict?

These are ideal conditions for GroupThink, in which team members disregard information that does not fit with their shared beliefs, and dissenting ideas or opinions are unwelcome. Partners’ desire for harmony can lead them to ignore early warning signs of threats to achieving goals and to make poor decisions.

How do we, as evaluators, help them avoid GroupThink?

  • Examine perceived sustainability objectively: Horseshoe crabs are an ancient species, once so plentiful they covered Atlantic beaches during spawning, each laying 100,000 or more eggs. Because they were perceived as a sustainable species, their usefulness as bait and fertilizer led to overharvesting. Similarly, project leaders may have misconceptions about resources or little knowledge of other factors influencing their capacity to maintain their activities. By using validated measures, such as Washington University’s Program Sustainability Assessment Tool (PSAT), evaluators can raise awareness among project leaders of the factors contributing to sustainability and facilitate planning sessions to identify adaptation strategies and increase chances of success.
  • Investigate an unintended consequence of a project’s activities: Horseshoe crabs’ copper-based blood is crucial to the pharmaceutical industry. However, they cannot successfully be raised in captivity. Instead, they are captured, drained of about 30 percent of their blood, and returned to the ocean. While survival rates are 70 percent or more, researchers are becoming concerned that the trauma may affect breeding and other behaviors. Evaluators can help project leaders delve into the cause-and-effect relationships underlying problems by employing techniques such as the Five Whys to identify root causes and by developing logic models to clarify relationships between resources, activities, outputs, and outcomes.
  • Anticipate unintended chains of events: Horseshoe crab eggs are the primary source of protein for migrating birds. The declining population of horseshoe crabs has put the survival of at least three bird species at risk. As evaluators, we have many options (e.g., key informant interviews, risk assessments, negative program theory) to identify aspects of program activities with potentially negative impacts and make recommendations to mitigate the harm.

A horseshoe-crab-in-a-bottle sits on my desk to remind me not to be reticent about offering constructive criticism in order to help project leaders avoid GroupThink.

Blog: Evaluator, Researcher, Both?

Posted on June 21, 2017 in Blog

Professor, College of William & Mary

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Having served as both a project evaluator and a project researcher, I have seen how critical it is to have conversations about roles at the outset of funded projects. Early and open conversations can help avoid confusion, prevent missed opportunities to collect critical data, and highlight where differences exist for each project team role. The blurring over time of strict differences between evaluator and researcher requires project teams, evaluators, and researchers to create new definitions for project roles, to understand the scope of responsibility for each role, and to build data systems that allow for sharing information across roles.

Evaluation serves a central role in funded research projects. The lines between the role of the evaluator and that of the researcher can blur, however, because many researchers also conduct evaluations. Scriven (2003/2004) saw the role of evaluation as a means to determine “the merit, worth, or value of things” (para. 1), whereas social science research is “restricted to empirical (rather than evaluative) research, and bases its conclusion only on factual results—that is, observed, measured, or calculated data” (para. 2). Consider, too, how Powell (2006) posited that “Evaluation research can be defined as a type of study that uses standard social research methods for evaluative purposes” (p. 102). It is easy to see how confusion arises.

Taking a step back can shed light on the differences in these roles and the ways they are now being redefined. The researcher brings a different perspective to a project, as a goal of research is the production of knowledge, whereas the role of the external evaluator is to provide an “independent” assessment of the project and its outcomes. Typically, an evaluator is seen as a judge of a project’s merits, which assumes that a “right” outcome exists. Yet inherent in the role of evaluation are the values held by the evaluator, the project team, and the stakeholders, as context influences the process and who decides where to focus attention, why, and how feedback is used (Skolits, Morrow, & Burr, 2009). Knowing how the project team intends to use evaluation results to improve project outcomes requires a shared understanding of the evaluator’s role (Langfeldt & Kyvik, 2011).

Evaluators seek to understand what information is important to collect and review and how best to use the findings to relate outcomes to stakeholders (Levin-Rozalis, 2003). Researchers instead focus on diving deep into a particular issue or topic with the goal of producing new ways of understanding it. In a perfect world, the roles of evaluators and researchers are distinct and separate. But, given requirements for funded projects to produce outcomes that inform the field, new knowledge is also discovered by evaluators. This swirl of roles results in evaluators publishing project results that inform the field, researchers leveraging their evaluator roles to publish scholarly work, and both evaluators and researchers borrowing strategies from each other to conduct their work.

The blurring of roles requires project leaders to provide clarity about evaluator and researcher team functions. The following questions can help in this process:

  • How will the evaluator and researcher share data?
  • What are the expectations for publication from the project?
  • What kinds of formative evaluation might occur that ultimately changes the project trajectory? How do these changes influence the research portion of the project?
  • How will the project team develop shared meaning of terms, roles, scope of work, and authority?

Knowing how the evaluator and researcher will work together provides an opportunity to leverage expertise in ways that move beyond the simple additive effect of the two roles. Opportunities to share information are only possible when roles are coordinated, which requires advance planning. It is important to move beyond siloed roles and toward more collaborative models of evaluation and research within projects. Collaboration requires more time and attention to sharing information and defining roles, but the time spent coordinating these joint efforts is worth it, given the contributions to both the project and the field.


References

Levin-Rozalis, M. (2003). Evaluation and research: Differences and similarities. The Canadian Journal of Program Evaluation, 18(2), 1-31.

Powell, R. R. (2006).  Evaluation research:  An overview.  Library Trends, 55(1), 102-120.

Scriven, M. (2003/2004).  Michael Scriven on the differences between evaluation and social science research.  The Evaluation Exchange, 9(4).

Blog: Logic Models for Curriculum Evaluation

Posted on June 7, 2017 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Rachel Tripathy

Research Associate, WestEd

Linlin Li

Senior Research Associate, WestEd

At the STEM Program at WestEd, we are in the third year of an evaluation of an innovative, hands-on STEM curriculum. Learning by Making is a two-year high school STEM course that integrates computer programming and engineering design practices with topics in earth/environmental science and biology. Experts in the areas of physics, biology, environmental science, and computer engineering at Sonoma State University (SSU) developed the curriculum by integrating computer software with custom-designed experiment set-ups and electronics to create inquiry-based lessons. Throughout this project-based course, students apply mathematics, computational thinking, and the Next Generation Science Standards (NGSS) Scientific and Engineering Design Practices to ask questions about the world around them, and seek the answers. Learning by Making is currently being implemented in rural California schools, with a specific effort being made to enroll girls and students from minority backgrounds, who are currently underrepresented in STEM fields. You can listen to students and teachers discussing the Learning by Making curriculum here.

Using a Logic Model to Drive Evaluation Design

We derived our evaluation design from the project’s logic model. A logic model is a structured description of how a specific program achieves an intended learning outcome. The purpose of the logic model is to precisely describe the mechanisms behind the program’s effects. Our approach to the Learning by Making logic model is a variant on the five-column logic format that describes the inputs, activities, outputs, outcomes, and impacts of a program (W.K. Kellogg Foundation, 2014).

[Figure: Learning by Making logic model]

Logic models are read as a series of conditionals. If the inputs exist, then the activities can occur. If the activities do occur, then the outputs should occur, and so on. Our evaluation of the Learning by Making curriculum centers on the connections indicated by the orange arrows connecting outputs to outcomes in the logic model above. These connections break down into two primary areas for evaluation: (1) teacher professional development and (2) classroom implementation of Learning by Making. The questions that correspond to the orange arrows can be summarized as:

  • Are the professional development (PD) opportunities and resources for the teachers increasing teacher competence in delivering a computational thinking-based STEM curriculum? Does Learning by Making PD increase teachers’ use of computational thinking and project-based instruction in the classroom?
  • Does the classroom implementation of Learning by Making increase teachers’ use of computational thinking and project-based instruction in the classroom? Does classroom implementation promote computational thinking and project-based learning? Do students show an increased interest in STEM subjects?

Without effective teacher PD or classroom implementation, the logic model “breaks,” making it unlikely that the desired outcomes will be observed. To answer our questions about outcomes related to teacher PD, we used comprehensive teacher surveys, observations, bi-monthly teacher logs, and focus groups. To answer our questions about outcomes related to classroom implementation, we used student surveys and assessments, classroom observations, teacher interviews, and student focus groups. SSU used our findings to revise both the teacher PD resources and the curriculum itself to better situate these two components to produce the outcomes intended. By deriving our evaluation design from a clear and targeted logic model, we succeeded in providing actionable feedback to SSU aimed at keeping Learning by Making on track to achieve its goals.
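To make the “series of conditionals” reading and the idea of the model “breaking” concrete, here is a minimal Python sketch. The stage descriptions and evidence flags are hypothetical stand-ins, not the actual Learning by Making logic model shown in the figure.

```python
# A five-column logic model read as a chain of conditionals: each stage is only
# expected to hold if the stage before it holds (hypothetical entries).
logic_model = [
    ("inputs",     "curriculum materials and teacher PD are available"),
    ("activities", "teachers complete PD and implement the units"),
    ("outputs",    "lessons are delivered using computational thinking practices"),
    ("outcomes",   "teacher practice and student STEM interest improve"),
    ("impacts",    "participation in STEM pathways broadens"),
]

evidence = {"inputs": True, "activities": True, "outputs": False}  # hypothetical findings

for stage, description in logic_model:
    if not evidence.get(stage, False):
        print(f"Chain breaks at '{stage}': {description} -- not yet evidenced")
        break
    print(f"'{stage}' holds: {description}")
```

Evaluation attention then concentrates on the earliest link that is not yet evidenced; in this evaluation, that focus fell on the output-to-outcome links marked by the orange arrows.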

Blog: Evaluating New Technology

Posted on May 23, 2017 in Blog

Professor and Senior Associate Dean, Rochester Institute of Technology

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As a STEM practitioner and evaluator, I have had many opportunities to assess new and existing courses, workshops, and programs. But there are often requests that still challenge me, especially evaluating new technology. The problem lies in clarifying the role of new technology, and focusing the evaluation on the proper questions.

Well, ok, you ask, “what are the roles I need to focus on?” In a nutshell, new technologies rear their heads in two ways:

(1) As content to be learned in the instructional program and,

(2) As a delivery mechanism for the instruction.

These are often at odds with each other and sometimes overlap in unusual ways. For example, a course on “getting along at work” could be delivered via an iPad. A client could suggest that we should “evaluate the iPads, too.” In this context, an evaluation of the iPad should be limited to its contribution to achieving the program outcomes. Among other questions: Did it function in a way that students enjoyed (or didn’t hate) and in a way that contributed to (or didn’t interfere with) learning? In a self-paced program, the iPad might be the primary vehicle for content delivery. However, using FaceTime or Skype via an iPad only requires the system to be a communication device – it will provide little more than a replacement for other technologies. In both cases, evaluation questions would center on the impact of the iPad on the learning process. Note that this is no more of a “critical” question than “did the students enjoy (or not hate) the snacks provided to them?” Interesting, but only as a supporting process.

Alternatively, a classroom program could be devoted to “learning the iPad.” In this case, the iPad has become “subject matter” that is to be learned through the process of human classroom interaction. In this case, how much they learned about the iPad is the whole point of the program! Ironically, a student could learn things about the iPad (through pictures, simulations, or through watching demonstrations) without actually using an iPad! But remember, it is not only an enabling contributor to the program – it can be the object of study.

So, the evaluation of new technology means that the evaluator must determine which aspect of new technology is being evaluated: technology as a process for delivering instruction, or as a subject of study. And a specific, somewhat circular case exists as well: Learning about an iPad through training delivered on an iPad. In this case, we would try to generate evaluation questions that allow us to address iPads both as delivery tools and iPads as skills to be learned.

While this may now seem straightforward as you read about it, remember that it is not straightforward to clients who are making an evaluation request. It might help to print this blog (or save a link) to help make clear these different, but sometimes interacting, uses of technology.