Blog

Three Tips for a Strong NSF Proposal Evaluation Plan

Posted on August 17, 2016 by Leslie Goodyear

Principal Research Scientist, Education Development Center, Inc.

I’m Leslie Goodyear and I’m an evaluator who also served as a program officer for three years at the National Science Foundation in the Division of Research on Learning, which is in the Education and Human Resources Directorate. While I was there, I oversaw evaluation activities in the Division and reviewed many, many evaluation proposals and grant proposals with evaluation sections.

In May 2016, I had the pleasure of participating in the webinar “Meeting Requirements, Exceeding Expectations: Understanding the Role of Evaluation in Federal Grants.” Hosted by Lori Wingate of EvaluATE and Ann Beheler of the Centers Collaborative for Technical Assistance, the webinar covered topics such as evaluation fundamentals; evaluation requirements and expectations; and evaluation staffing, budgeting, and utilization.

On the webinar, I shared my perspective on the role of evaluation at NSF, strengths and weaknesses of evaluation plans in proposals, and how reviewers assess Results from Prior NSF Support sections of proposals, among other topics. In this blog, I’ll give a brief overview of some important takeaways from the webinar.

First, if you’re submitting a proposal to an education or outreach program, you’ll likely need to include some form of project evaluation in your proposal. Be sure to read the program solicitation carefully to know the specific requirements for that program. There are no agency-wide evaluation requirements; instead, they are specified in each solicitation. Lori had a great suggestion on the webinar: search the solicitation for “eval” to make sure you find all the evaluation-related details.

Second, you’ll want to make sure that your evaluation plan is tailored to your proposed activities and outcomes. NSF reviewers and program officers can smell a “cookie cutter” evaluation plan, so make sure that you’ve talked with your evaluator while developing your proposal and that they’ve had the chance to read the goals and objectives of your proposed work before drafting the plan. You want the plan to be incorporated into the proposal so that it appears seamless.

Third, indicators of a strong evaluation plan include carefully crafted, relevant overall evaluation questions, a thoughtful project logic model, a detailed data collection plan that is coordinated with project activities, and a plan for reporting and dissemination of findings. You’ll also want to include a bio for your evaluator so that the reviewers know who’s on your team and what makes them uniquely qualified to carry out the evaluation of your project.

Additions that can make your plan “pop” include:

  • A table that maps each evaluation question to the data collection plans (e.g., one row per question, with columns for data sources, respondents, and timing). This can save space by conveying a lot of information in a table rather than in narrative.
  • Combining the evaluation and project timelines so that the reviewers can see how the evaluation will be coordinated with the project and offer timely feedback.

Some programs allow for using the Supplemental Documents section for additional evaluation information. Remember that reviewers are not required to read these supplemental docs, so be sure that the important information is still in the 15-page proposal.

For the Results from Prior NSF Support section, you want to be brief and outcome-focused. Use this space to describe what resulted from the prior work, not what you did. And be clear about how that work informs the proposed work by noting, for example, that those outcomes set up the questions you’re pursuing in this proposal.

National Science Foundation-funded Resources to Support Your Advanced Technological Education (ATE) Project

Posted on August 3, 2016

Doctoral Associate, EvaluATE

Did you know that other National Science Foundation programs focused on STEM education also have centers that provide services to their funded projects? EvaluATE offers evaluation-specific resources for the Advanced Technological Education (ATE) program, while some of the other centers are broader in scope and purpose, offering technical support, resources, and information targeted at projects within specific NSF funding programs. A brief overview of each of these centers is provided below, highlighting evaluation-related resources. Check out the sites for further information if you see something that might be of value for your project!

The Community for Advancing Discovery Research in Education (CADRE) is a network for NSF’s Discovery Research K-12 (DR K-12) program. The evaluation resource on the CADRE site is a paper on evaluation options (formative and summative) that differentiates evaluation from the research and development efforts carried out as part of project implementation. There are also more general resources, such as guidelines and tools for proposal writing and a library of reports and briefs, along with a video showcase of DR K-12 projects.

The Center for the Advancement of Informal Science Education (CAISE) has an evaluation section of its website that is searchable by type of resource (e.g., reports, assessment instruments), learning environment, and audience. For example, there are over 850 evaluation reports and 416 evaluation instruments available for review. The site hosts the Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects, developed as an initiative of the Visitor Studies Association, with sections on working with an evaluator, developing an evaluation plan, creating evaluation tools, and reporting.

The Math and Science Partnership Network (MSPnet) supports the Math and Science Partnership (MSP) and STEM+C (computer science) communities. MSPnet has a digital library with over 2,000 articles; a search using the term “eval” found 467 listings, dating back to 1987. There is a toolbox with materials such as assessments, evaluation protocols, and form letters. Other resources in the MSPnet library include articles and reports related to teaching and learning, professional development, and higher education.

The Center for Advancing Research and Communication (ARC) supports the NSF Research and Evaluation on Education in Science and Engineering (REESE) program through technical assistance to principal investigators. Evaluation-specific resources include material from a workshop on implementation evaluation (also known as process evaluation).

The STEM Learning and Research Center (STELAR) provides technical support for the Innovative Technology Experiences for Students and Teachers (ITEST) program. Its website includes links to a variety of instruments, such as the Grit Scale, which can be used to assess students’ perseverance in learning as part of a larger evaluation plan.

How Real-time Evaluation Can Increase the Utility of Evaluation Findings

Posted on July 21, 2016 by Elizabeth Peery and Stephanie B. Wilkerson

Evaluations are most useful when evaluators make relevant findings available to project partners at key decision-making moments. One approach to increasing the utility of evaluation findings is to collect real-time data and provide immediate feedback at crucial moments, fostering progress monitoring during service delivery. Based on our experience evaluating multiple five-day professional learning institutes for an ATE project, we discovered the benefits of providing real-time evaluation feedback and the vital elements that contributed to the success of this approach.

What did we do?

With project partners we co-developed online daily surveys that aligned with the learning objectives for each day’s training session. Daily surveys measured the effectiveness and appropriateness of each session’s instructional delivery, exercises and hands-on activities, materials and resources, content delivery format, and session length. Participants also rated their level of understanding of the session content and preparedness to use the information. They could submit questions, offer suggestions for improvement, and share what they liked most and least. Based on the survey data that evaluators provided to project partners after each session, partners could monitor what was and wasn’t working and identify where participants needed reinforcement, clarification, or re-teaching. Project partners could make immediate changes and modifications to the remaining training sessions to address any identified issues or shortcomings before participants completed the training.
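To make this concrete, here is a hypothetical sketch of the kind of quick end-of-day summary described above. It assumes the survey tool can export a CSV with one row per respondent and 1-5 ratings per item; the file name and column names are invented for illustration, not taken from our project.

```python
# Hypothetical end-of-day survey summary; file and column names are invented.
import pandas as pd

responses = pd.read_csv("day3_survey_export.csv")  # e.g., pulled after hours

rating_items = ["instructional_delivery", "hands_on_activities",
                "materials_resources", "content_format", "session_length",
                "understanding", "preparedness_to_use"]

# Mean rating and response count for each item, rounded for quick reading
summary = responses[rating_items].agg(["mean", "count"]).round(2)

# Flag items whose average rating suggests reinforcement or re-teaching
low_rated = summary.loc["mean"][summary.loc["mean"] < 3.5]

print(summary)
print("Items to revisit tomorrow:", ", ".join(low_rated.index))
```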

Why was it successful?

Through the process, we recognized that there were a number of elements that made the daily surveys useful in immediately improving the professional learning sessions. These included the following:

  • Invested partners: The project partners recognized the value of the immediate feedback and its potential to greatly improve the trainings. Thus, they made a concerted effort to use the information to make mid-training modifications.
  • Evaluator availability: Evaluators had to be available to pull the data after hours from the online survey software program and deliver it to project partners immediately.
  • Survey length and consistency: The daily surveys took less than 10 minutes to complete. While tailored to the content of each day, the surveys had a consistent question format that made them easier to complete.
  • Online format: The online format allowed for a streamlined and user-friendly survey. Additionally, it made retrieving a usable data summary much easier and timelier for the evaluators.
  • Time for administration: Time was carved out of the training sessions to allow for the surveys to be administered. This resulted in higher response rates and more predictable timing of data collection.

If real-time evaluation data will provide useful information that can inform improvements or decisions about professional learning, it is worthwhile to seek the resources and opportunities to collect and report those data in a timely manner.


Articulating Intended Outcomes Using Logic Models: The Roles Evaluators Play

Posted on July 6, 2016 by Stephanie B. Wilkerson and Elizabeth Peery

Articulating project outcomes is easier said than done. A well-articulated outcome is feasible to achieve within the project period, measurable, appropriate for the phase of project development, and aligned with the project’s theory of change. A project’s theory of change represents causal relationships – IF we do these activities, THEN these intended outcomes will result (for example: IF we deliver a summer institute for community college faculty, THEN faculty will integrate the new techniques into their courses). Understandably, project staff often frame outcomes as what they intend to do, develop, or provide, rather than what will happen as a result of those project activities. Using logic models to situate intended outcomes within a project’s theory of change helps to illustrate how project activities will lead to intended outcomes.

Since 2008, my team and I have served as the external evaluator for two ATE project cycles with the same client. As the project has evolved over time, so too have its intended outcomes. Our experience using logic models for program planning and evaluation has illuminated four critical roles we as evaluators have played in partnership with project staff:

  1. Educator. Once funded, we spent time educating the project partners on the purpose and development of a theory of change and intended outcomes using logic models. In this role, our goal was to build understanding of and buy-in for the need to have logic models with well-articulated outcomes to guide project implementation.
  2. Facilitator. Next, we facilitated the development of an overarching project logic model with project partners. The process of defining the project’s theory of change and intended outcomes was important in creating a shared agreement and vision for project implementation and evaluation. Even if the team includes a logic model in the proposal, refining it during project launch is still an important process for engaging project partners. We then collaborated with individual project partners to build a “family” of logic models to capture the unique and complementary contributions of each partner while ensuring that the work of all partners was aligned with the project’s intended outcomes. We repeated this process during the second project cycle.
  3. Methodologist. The family of logic models became the key source for refining the evaluation questions and developing data collection methods that aligned with intended outcomes. The logic model thus became an organizing framework for the evaluation. Therefore, the data collection instruments, analyses, and reporting yielded relevant evaluation information related to intended outcomes.
  4. Critical Friend. As evaluators, our role as a critical friend is to make evidence-based recommendations for improving project activities to achieve intended outcomes. Sometimes evaluation findings don’t support the project’s theory of change, and as critical friends, we play an important role in challenging project staff to identify any assumptions they might have made about project activities leading to intended outcomes. This process helped to inform the development of tenable and appropriate outcomes for the next funding cycle.

Resources:

There are several resources for articulating outcomes using logic models. Some of the most widely known include the following:

Worksheet: Logic Model Template for ATE Projects & Centers: http://www.evalu-ate.org/resources/lm-template/

Education Logic Model (ELM) Application Tool for Developing Logic Models: http://relpacific.mcrel.org/resources/elm-app/

University of Wisconsin-Extension’s Logic Model Resources: http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html

W.K. Kellogg Foundation Logic Model Development Guide: https://www.wkkf.org/resource-directory/resource/2006/02/wk-kellogg-foundation-logic-model-development-guide

Student Learning Assessments: Issues of Validity and Reliability

Posted on June 22, 2016

Senior Educational Researcher, SRI International

In my last post, I talked about the difference between program evaluation and student assessment. I also touched on using existing assessments if they are available and appropriate and, if not, constructing new assessments. Of course, a new assessment needs to meet test quality standards; otherwise, it will not measure what you need measured for your evaluation. Test quality has to do with validity and reliability.

When a test is valid, it means that when a student responds with a wrong answer, it would be reasonable to conclude that they did so because they did not learn what they were supposed to have learned. There are all kinds of impediments to an assessment’s validity. For example, if in a science class you are asking students a question aimed at determining if they understand the difference between igneous and sedimentary rocks, yet you know that some of them do not understand English, you wouldn’t want to ask them the question in English. In testing jargon, what you are introducing in such a situation is “construct irrelevant variance.” In this case, the variance in results may be as much due to whether they know English (the construct irrelevant part) as to whether they know the construct, which is the differences between the rock types. Hence, these results would not help you determine if your innovation is helping them learn the science better.

Reliability has to do with test design, administration, and scoring. Examples of unreliable tests are those that are too long, introducing test-taking fatigue that interferes with their ability to measure student learning consistently. Another common example of unreliability is when the scoring directions or rubric are not clear enough about how to judge the quality of an answer. This type of problem often results in inconsistent scoring, otherwise known as low interrater reliability.
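As an illustration of that last point (not drawn from the post itself), interrater reliability is often quantified with Cohen’s kappa, which measures how much two raters agree beyond what chance alone would produce. The rubric scores below are invented:

```python
# Cohen's kappa for two raters; the scores are made up for demonstration.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same set of responses."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal score distribution
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two raters apply a 1-4 rubric to ten student responses
rater_1 = [1, 2, 2, 3, 4, 4, 2, 3, 1, 2]
rater_2 = [1, 3, 2, 2, 4, 3, 1, 3, 2, 2]
print(round(cohen_kappa(rater_1, rater_2), 2))  # ~0.31, only fair agreement
```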

To summarize, a student learning assessment can be very important to your evaluation if a goal of your project is to directly impact student learning. Then you have to make some decisions about whether you can use existing assessments or develop new ones, and if you make new ones, they need to meet technical quality standards of validity and reliability. For projects not directly aiming at improving student learning, an assessment may actually be inappropriate in the evaluation because the tie between the project activities and the student learning may be too loose. In other words, the learning outcomes may be mediated by other factors that are too far beyond your control to render the learning outcomes useful for the evaluation.

Using Learning Assessments in Evaluations

Posted on June 8, 2016

Senior Educational Researcher, SRI International

If you need to evaluate your project but get confused about whether there should be a student assessment component, it’s important to understand the difference between project evaluation and student assessment. Both are about rendering judgments, and the terms are often used interchangeably. In the world of grant-funded education projects, however, they have overlapping yet quite different meanings. When you do a project evaluation, you are looking at the extent to which the project is meeting its goals and achieving its intended outcomes. When you are assessing, you are looking at student progress in meeting learning goals. The most commonly used instrument of assessment is a test, but there are other mechanisms for assessing learning as well, such as student reports, presentations, or journals.

Not all project evaluations require student assessments, and not all assessments are components of project evaluations. For example, the goal of a project may be to restructure an academic program, introduce some technology to the classroom, or help students persist through college. Of course, in the end, all projects in education aim to improve learning. Yet, by itself, an individual project may not aspire to influence learning directly, but rather through a related effort. Likewise, most assessments are not conducted as components of project evaluations; they are most frequently used to determine the academic progress of individual students.

If you are going to put a student assessment component in your evaluation, answer these questions:

  1. What amount of assessment data will you need to properly generalize from your results about how well your project is faring? For example, how many students are affected by your program? Do you need to assess them all, or can you limit your assessment administration to a representative sample? (A brief sizing sketch follows this list.)
  2. Should you administer the assessment early enough to determine if the project needs to be modified midstream? This would be called a formative assessment, as opposed to a summative assessment, which you would do at the end of a project, after you have fully implemented your innovation with the students.
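Here is a hypothetical sketch of one common way to size a representative sample: a margin-of-error calculation with a finite population correction. The program sizes below are invented, and other sampling designs may fit your evaluation better.

```python
# Hypothetical sample-size calculation; the program sizes are invented.
import math

def sample_size(population, margin_of_error=0.05, confidence_z=1.96, p=0.5):
    """Students to assess to estimate a proportion within the given margin."""
    n0 = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n)

print(sample_size(500))   # ~218 of 500 students
print(sample_size(5000))  # ~357 of 5,000 students
```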

Think also about what would be an appropriate assessment instrument. Maybe you could simply use a test that the school is already using with the students. This would make sense, for example, if your goal is to introduce new curricular innovations into a particular course that the students are already taking. An existing assessment is attractive because it has likely already been validated, meaning it was piloted and subsequently modified as needed to ensure that it truly measures what it is designed to measure.

An existing assessment instrument may not be appropriate for you, however. Perhaps your innovation introduces new learnings that those tests are not designed to measure. For example, it may facilitate students’ learning of new skills, such as using new mobile technologies to collect field data. In this situation, you would want your project’s goal statements to be clear about whether the intention of your project is to provide an improved pathway to already-taught knowledge or skills, a pathway to entirely new learnings, or both. New learnings would require a new assessment. In my next post, I’ll talk about validity and reliability issues to address when developing assessments.

Designing Cluster Randomized Trials to Evaluate Programs

Posted on May 25, 2016

Associate Professor, Education, Leadership, Research, and Technology, Western Michigan University

The push for rigorous evaluations of the impact of interventions has led to an increase in the use of randomized trials (RTs). In practice, it is often the case that interventions are delivered at the cluster level, such as a whole-school reform model or a new curriculum. In these cases, the cluster (i.e., the school) is the logical unit of random assignment, and I hereafter refer to these designs as cluster randomized trials (CRTs).

Designing a CRT is necessarily more complex than designing an RT for several reasons. First, there are two sample sizes: the number of students per school and the total number of schools. Second, the greater the variability in the outcome across schools, the more schools you will need to detect an effect of a given magnitude. The percentage of variance in the outcome that is between schools is commonly referred to as the intra-class correlation (ICC). For example, suppose I am testing an intervention where the outcome of interest is math achievement, there are 500 students per school, and a school-level covariate explains 50 percent of the between-school variation in the outcome. If the ICC is 0.20 and I want to detect an effect size difference of 0.2 standard deviations between the treatment and comparison conditions, 82 total schools (41 treatment and 41 comparison) would be needed to achieve statistical power of 0.80, the commonly accepted threshold. If instead the ICC is 0.05, the total number of schools needed would be only 24, a reduction of 58. Hence an accurate estimate of the ICC is critical in planning a CRT, as it strongly affects the number of schools needed for a study.
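A minimal sketch (not the author’s own calculation) that reproduces the school counts in this example. It assumes a two-level design with treatment assigned at the school level, a balanced split of schools, and one school-level covariate explaining half of the between-school variance:

```python
# Sketch: minimum detectable effect size (MDES) and required schools for a
# two-level CRT, assuming a balanced design and one school-level covariate.
from scipy import stats

def mdes(n_schools, n_per_school, icc, r2_between=0.0, alpha=0.05, power=0.80,
         p_treated=0.5, n_covariates=1):
    """Minimum detectable effect size (in SD units) for a 2-level CRT."""
    df = n_schools - n_covariates - 2
    multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
    # Variance of the standardized treatment-effect estimate
    var = (icc * (1 - r2_between) + (1 - icc) / n_per_school) / (
        p_treated * (1 - p_treated) * n_schools)
    return multiplier * var ** 0.5

def schools_needed(target_es, n_per_school, icc, **kwargs):
    """Smallest even number of schools whose MDES is at or below target_es."""
    j = 6  # start small enough, but with a few degrees of freedom
    while mdes(j, n_per_school, icc, **kwargs) > target_es:
        j += 2  # keep the design balanced
    return j

print(schools_needed(0.2, 500, icc=0.20, r2_between=0.5))  # 82 schools
print(schools_needed(0.2, 500, icc=0.05, r2_between=0.5))  # 24 schools
```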

The challenge is that the required sample size needs to be determined prior to the start of the study; hence, the ICC must be estimated because the actual data have not yet been collected. Recently there has been an increase in empirical studies that seek to estimate ICCs for different contexts. The findings suggest that the ICC varies depending on outcome type, the unit that forms the clusters (e.g., schools or classrooms), grade level, and other features.

Resources:
Resources have started popping up to help evaluators planning CRTs find accurate estimates of the ICC. Two widely used in education include:

  1. The Online Variance Almanac: http://stateva.ci.northwestern.edu/
  2. The Optimal Design Plus Software: http://wtgrantfoundation.org/resource/optimal-design-with-empirical-information-od*

*Note that Optimal Design Plus is a free program that calculates power for CRTs. Embedded within the program is a data repository with ICC estimates.

In the event that empirical estimates are not available for your particular outcome type, a search of the relevant literature may uncover estimates, or a pilot study may be used to generate reasonable values. Regardless of the source, accurate estimates of the ICC are critical in determining the number of clusters needed in a CRT.

Professional Development Opportunities in Evaluation – What’s Out There?

Posted on April 29, 2016

Doctoral Associate, EvaluATE

To assist the EvaluATE community in learning more about evaluation, we have compiled a list of free and low-cost online and short-term professional development opportunities. There are always new things available, so this is only a place to start!  If you run across a good resource, please let us know and we will add it to the list.

Free Online Learning

Live Webinars

EvaluATE provides webinars created specifically for projects funded through the National Science Foundation’s Advanced Technological Education program. The series includes four live events per year. Recordings, slides, and handouts of previous webinars are available. http://www.evalu-ate.org/category/webinars/

MEASURE Evaluation is a USAID-funded project with resources targeted to the field of global health monitoring and evaluation. Webinars are offered nearly every month on various topics related to impact evaluation and data collection; recordings of past webinars are also available. http://www.cpc.unc.edu/measure/resources/webinars

Archived Webinars and Videos

Better Evaluation’s archives include recordings of an eight-part webinar series on impact evaluation commissioned by UNICEF. http://betterevaluation.org/search/site/webinar

Centers for Disease Control’s National Asthma Control Program offers recordings of its four-part webinar series on evaluation basics, including an introduction to the CDC’s Framework for Program Evaluation in Public Health. http://www.cdc.gov/asthma/program_eval/evaluation_webinar.htm

EvalPartners has offered several webinars on topics related to monitoring and evaluation (M&E). They also have a series of self-paced e-learning courses. The focus of all programs is to improve competency in conducting evaluation, with an emphasis on evaluation in the community development context. http://www.mymande.org/webinars

Engineers Without Borders partners with communities to help them meet their basic human needs. They offer recordings of their live training events focused on monitoring, evaluation, and reporting. http://www.ewb-usa.org/resources?_sfm_cf-resources-type=video&_sft_ct-international-cd=impact-assessment

The University of Michigan School of Social Work has created six free interactive web-based learning modules on a range of evaluation topics. The target audience is students, researchers, and evaluators. A competency skills test is given at the end of each module, and a printable certificate of completion is available. https://sites.google.com/a/umich.edu/self-paced-learning-modules-for-evaluation-research/

Low-Cost Online Learning

The American Evaluation Association (AEA) Coffee Break Webinars are 20-minute webinars on varying topics.  At this time non-members may register for the live webinars, but you must be a member of AEA to view the archived broadcasts. There are typically one or two sessions offered each month.  http://comm.eval.org/coffee_break_webinars/coffeebreak

AEA’s eStudy program offers a series of in-depth, real-time professional development courses; these sessions are not recorded. http://comm.eval.org/coffee_break_webinars/estudy

The Canadian Evaluation Society (CES) offers webinars to members on a variety of evaluation topics. Reduced membership rates are available for members of AEA. http://evaluationcanada.ca/webinars

Face-to-Face Learning

AEA Summer Evaluation Institute is offered annually in June, with a number of workshops and conference sessions.  http://www.eval.org/p/cm/ld/fid=232

The Evaluator’s Institute offers one- to five-day courses in Washington, DC in February and July. Four levels of certificates are available to participants. http://tei.cgu.edu/

Beyond these professional development opportunities, university degree and certificate programs are listed on the AEA website under the “Learn” tab.  http://www.eval.org/p/cm/ld/fid=43

Maximizing Stakeholder Engagement by Bringing Evaluation to Life!

Posted on April 13, 2016 by Melanie Hwalek

CEO, SPEC Associates

Melanie is CEO of SPEC Associates, a nonprofit program evaluation and process improvement organization headquartered in downtown Detroit.  Melanie is also on the faculty of Michigan State University, where she teaches Evaluation Management in the M.A. in Program Evaluation program. Melanie holds a Ph.D. in Applied Social Psychology and has directed evaluations for almost 40 years both locally and nationally. Her professional passion is making evaluation an engaging, fun and learning experience for both program stakeholders and evaluators. To this end, Melanie co-created EvaluationLive!, an evaluation practice model that guides evaluators in ways to breathe life into the evaluation experience.

Why is it that sometimes meetings with evaluation stakeholders seem to generate anxiety and boredom, while other times they generate excitement, a hunger for learning and, yes, even fun!?

My colleague, Mary Williams, and I started wondering about this, and trying to define it, about eight years ago. Drawing on 60 years of collective evaluation experience, we documented and analyzed cases and conducted an in-depth literature review seeking an answer. We homed in on two things: (1) a definition of what exemplary stakeholder engagement looks and feels like, and (2) a set of factors that seem to predict when maximum stakeholder engagement exists.

To define “exemplary stakeholder engagement” we looked to the field of positive psychology and specifically to Mihaly Csikszentmihalyi’s  (2008) Flow Theory. Csikszentmihalyi defines “flow” as that highly focused mental state where time seems to stand still. Think of a musician composing a sonata. Think of a basketball player being in the “zone.” Flow theory says that this feeling of “flow” occurs when the person perceives that the task at hand is challenging and also perceives that she or he has the skill level sufficient to accomplish the task.

The EvaluationLive! model asserts that maximizing stakeholder engagement with an evaluation – having a flow-like experience during encounters between the evaluator and the stakeholders – requires certain characteristics of the evaluator/evaluation team, of the client organization, and of the relationship between them. Specifically, the evaluator/evaluation team must (1) be competent in the conduct of program evaluation; (2) have expertise in the subject matter of the evaluation; (3) have skills in the art of interpersonal, nonverbal, verbal and written communication; (4) be willing to be flexible in order to meet stakeholders’ needs typically for delivering results in time for decision making; and (5) approach the work with a non-egotistical learner attitude. The client organization must (1) be a learning organization open to hearing good, bad, and ugly news; (2) drive the questions that the evaluation will address; and (3) have a champion positioned within the organization who knows what information the organization needs when, and can put the right information in front of the right people at the right time. The relationship between the evaluator and client must be based on (1) trust, (2) a belief that both parties are equally expert in their own arenas, and (3) a sense that the evaluation will require shared responsibility on the part of the evaluator and the client organization.

Feedback from the field shows EvaluationLive!’s goalposts help evaluators develop strategies to emotionally engage clients in their evaluations. EvaluationLive! has been used to diagnose problem situations and to direct “next steps.” Evaluators are also using the model to guide how to develop new client relationships. We invite you to learn and get involved.

Summary of the EvaluationLive! Model

Color Scripting to Measure Engagement

Posted on March 30, 2016

President, iEval

Last year I went to the D23 Expo in Anaheim, California, a conference for Disney fans everywhere. I got to attend panels where I learned about past Disney secrets and upcoming Disney plans. I went purely for myself, since I love all things Disney, and I never dreamed I would learn something applicable to my evaluation practice.

In a session with John Lasseter, Andrew Stanton, Pete Docter, and others from Pixar, I learned about a technique created by Ralph Eggleston (who was there too) called color scripting. Color scripting is a type of storyboarding, but Ralph would change the main colors of each panel to reflect the emotion the animated film was supposed to convey at that moment. It helped the Pixar team understand at a quick glance what was going on emotionally in the film, and it also made it easier to create a musical score to enhance those emotions.

Then, a few weeks later, I was sitting in a large event for a client, observing from the back of the room. I started taking notes on the engagement and energy of the audience based on who was presenting. I created some metrics on the spot, including the number of people on their mobile devices, the number of people leaving the event, laughter, murmuring, applause, etc. I thought I would create a simple chart with a timeline of the event, highlighting who was presenting at different times and indicating whether engagement was high/medium/low and whether energy was high/medium/low. When analyzing the data, I quickly realized that engagement and energy were completely related: if engagement was high, energy soon followed as high. So, instead of charting two dimensions, I really only needed to chart one: engagement and energy combined (see the definitions of engagement and energy in the graphic below). That’s when it hit me – color scripting! Okay, I’m no artist like Ralph Eggleston, so I created a simple color scheme to use.

Graphic 1

When I shared this with the clients who put on the event, they could clearly see how the audience reacted to the various elements of the event, which helped them determine how to improve the event in the future. This was a quick and easy visual, made in Word, to illustrate the overall reactions of the audience.
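For evaluators who would rather script the visual than draw it, here is a hypothetical sketch of a similar color-scripted timeline (the original was built in Word; the presenters, times, and ratings below are invented):

```python
# Hypothetical color-scripted event timeline; all data below are invented.
import matplotlib.pyplot as plt

segments = [  # (segment label, start minute, length in minutes, rating)
    ("Welcome", 0, 15, "high"),
    ("Keynote", 15, 45, "medium"),
    ("Panel", 60, 40, "low"),
    ("Student demos", 100, 30, "high"),
]
colors = {"high": "#2ca02c", "medium": "#ffbf00", "low": "#d62728"}

fig, ax = plt.subplots(figsize=(8, 1.5))
for label, start, length, rating in segments:
    # One colored bar per presenter slot, shaded by engagement-and-energy
    ax.broken_barh([(start, length)], (0, 1), facecolors=colors[rating])
    ax.text(start + length / 2, 0.5, label, ha="center", va="center", fontsize=8)

ax.set_xlabel("Minutes into the event")
ax.set_yticks([])
plt.tight_layout()
plt.savefig("color_script.png")
```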

I have since applied this technique to a STEM project, color scripting how teachers in a two-week professional development workshop felt at the end of each day, based on one word they shared as they exited. Mapping participant feelings across cohorts and comparing them with what was taught each day, and how, led to thoughtful conversations with the trainers about how they want participants to feel and what they need to change to bring reality in line with intention.

Graphic 2

You never know where you’re going to learn a technique or tool that could be useful in your evaluation practice and useful to the client. Be open to learning everywhere you go.