Blog: How Evaluators Can Use InformalScience.org

Posted on December 13, 2018

Evaluation and Research Manager, Science Museum of Minnesota and Independent Evaluation Consultant

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m excited to talk to you about the Center for Advancement of Informal Science Education (CAISE) and the support they offer evaluators of informal science education (ISE) experiences. CAISE is a National Science Foundation (NSF) funded resource center for NSF’s Advancing Informal STEM Learning program. Through InformalScience.org, CAISE provides a wide range of resources valuable to the EvaluATE community.

Defining Informal Science Education

ISE is lifelong learning in science, technology, engineering, and math (STEM) that takes place across a multitude of designed settings and experiences outside of the formal classroom. The video below is a great introduction to the field.

Outcomes of ISE experiences have some similarities to those of formal education. However, ISE activities tend to focus less on content knowledge and more on other types of outcomes, such as interest, attitudes, engagement, skills, behavior, or identity. CAISE’s Evaluation and Measurement Task Force investigates the outcome areas of STEM identity, interest, and engagement to provide evaluators and experience designers with guidance on how to define and measure these outcomes. Check out the results of their work on the topic of STEM identity (results for interest and engagement are coming soon).

Resources You Can Use

InformalScience.org has a variety of resources that I think you’ll find useful for your evaluation practice.

  1. In the section “Design Evaluation,” you can learn more about evaluation in the ISE field through professional organizations, journals, and projects researching ISE evaluation. The “Evaluation Tools and Instruments” page in this section lists sites with tools for measuring outcomes of ISE projects, and there is also a section about reporting and dissemination. I provide a walk-through of CAISE’s evaluation pages in this blog post: How to Use InformalScience.org for Evaluation.
  2. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects has been extremely useful for me in introducing ISE evaluation to evaluators new to the field.
  3. In the “News & Views” section are several evaluation-related blogs, including a series on working with an institutional review board and another one on conducting culturally responsive evaluations.
  4. If you are not affiliated with an academic institution, you can access peer-reviewed articles in some of your favorite academic journals by becoming a member of InformalScience.org; it's free to join. Once you're logged in, select "Discover Research" in the menu bar and scroll down to "Access Peer-Reviewed Literature (EBSCO)." Journals of interest include Science Education and Cultural Studies of Science Education. If you are already a member of InformalScience.org, you can immediately begin searching the EBSCO Education Source database.

My favorite part of InformalScience.org is the repository of evaluation reports—1,020 reports and growing—which is the largest collection of reports in the evaluation field. Evaluators can use this rich collection to inform their practice and learn about a wide variety of designs, methods, and measures used in evaluating ISE projects. Even if you don’t evaluate ISE experiences, I encourage you to take a minute to search the reports and see what you can find. And if you conduct ISE evaluations, consider sharing your own reports on InformalScience.org.

Do you have any questions about CAISE or InformalScience.org? Contact Melissa Ballard, communications and community manager, at mballard@informalscience.org.

Blog: Building Research-Practice Collaborations for Effective STEM + Computing Education Evaluation Design

Posted on November 29, 2018

Director of Measurement, Evaluation, and Learning, Kapor Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

At the Kapor Center, our signature three-summer educational program (SMASH Academy) aims to prepare underrepresented high school students of color to pursue careers in science, technology, engineering, and mathematics (STEM) and computing through access to courses, support networks, and opportunities for social and personal development.

In the nonprofit sector, evaluations can be driven by funder requirements, which often focus on outcomes. However, by solely focusing on outcomes, teams can lose sight of the goal of STEM evaluation: to inform programming (through the creation of process evaluation tools such as observation protocols and course evaluations) to ensure youth of color are prepared for the future STEM economy.

To keep that goal in focus, the Kapor Center ensures that the evaluation method driving its work is utilization-focused evaluation. Utilization-focused evaluation begins with the premise that the success metric of an evaluation is the extent to which it is used by key stakeholders (Patton, 2008). This framework requires joint decision making between the evaluator and stakeholders to determine the purpose of the evaluation, the kind of data to be collected, the type of evaluation design to be created, and the uses of the evaluation. Using this framework shifts evaluation from a linear, top-down approach to a feedback loop involving practitioners.

Figure 1. Evaluation Cycle of SMASH Academy

The evaluation cycle at the Kapor Center, a collaboration between our research team and SMASH’s program team, is outlined below:

  1. Inquiry: This stage begins with conversations with the stakeholders (e.g., programs and leadership teams) about common understandings of short-, medium-, and long-term outcomes as well as the key strategies that drive outcomes. Delineating outcomes has been integral to working transparently toward program priorities.
  2. Instrument Development: Once groups are in agreement about the goal of the evaluation and our path to it, we develop instruments. Instrument mapping, linking each tool and question to specific outcomes, has been a good practice to open the communication channels among teams.
  3. Instrument Administration: When working with seasonal staff at the helm of evaluation administration, documentation of processes has been crucial for fidelity. Not surprisingly, with varying levels of experience among program staff, the creation of systems to standardize data collection has been key, including scoring rubrics to be used during observations and guides for survey administration.
  4. Data Analysis and Reporting: When synthesizing data, analyses and reporting need to not only tell a broad impact story but also provide concrete targets and priorities for the program. In this regard, analyses have encompassed pre-post outcome differences and reports on program experiences (a minimal sketch of a pre-post comparison appears after this list).
  5. Reflection and Integration: At the end of the program cycle, the program team reflects on the data together to inform their path forward. In such a meeting, the team engages in answering three questions: 1) What did you observe about the data? 2) What can you infer about the data and what evidence supports your inference? and 3) What are the next steps to develop and prioritize program modifications?
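
To make the pre-post comparison in step 4 concrete, here is a minimal sketch in Python of a paired pre-post analysis; the scores, scale, and sample are invented for illustration and are not SMASH Academy data.

```python
# Minimal sketch of a pre-post outcome comparison on hypothetical data.
# Each position in the arrays is one participant's matched pre and post
# score on an outcome scale (e.g., a 1-5 interest measure).
import numpy as np
from scipy import stats

pre = np.array([3.1, 2.8, 3.5, 3.0, 2.6, 3.4, 2.9, 3.2])
post = np.array([3.6, 3.0, 3.9, 3.4, 3.1, 3.5, 3.3, 3.8])

change = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test on matched scores

print(f"Mean change: {change.mean():.2f} (SD {change.std(ddof=1):.2f})")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

In practice, a report built for program use would pair a summary like this with effect sizes and subgroup breakdowns so the program team can identify concrete targets.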

Developing stronger research-practice ties has been integral to the Kapor Center's understanding of what works, for whom, and in what contexts to ensure more youth of color pursue and persist in STEM fields. Beyond the SMASH program, the practice of collective cooperation between researchers and practitioners provides an opportunity to impact strategies across the field.

 

References

Patton, M. Q. (2008). Utilization-focused evaluation. Newbury Park, CA: Sage.

 

Blog: Evaluating Educational Programs for the Future STEM Workforce: STELAR Center Resources

Posted on November 8, 2018

Project Associate, STELAR Center, Education Development Center, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello EvaluATE community! My name is Sarah MacGillivray, and I am a member of the STEM Learning and Research (STELAR) Center team, which supports the National Science Foundation Innovative Technology Experiences for Students and Teachers (NSF ITEST) program. Through ITEST, NSF funds the research and development of innovative models of engaging K-12 students in authentic STEM experiences. The goals of the program include building students’ interest and capacity to participate in STEM educational opportunities and developing the skills they will need for careers in STEM. While we target slightly different audiences than the Advanced Technological Education (ATE) program, our programs share the common goal of educating the future STEM workforce, and to support this goal, I invite you to access the many evaluation resources available on our website.

The STELAR website houses an extensive set of resources collected from and used by the ITEST community. These resources include a database of nearly 150 research and evaluation instruments. Each entry features a description of the tool, a list of relevant disciplines and topics, target participants, and a link to ITEST projects that have used the instrument in their work. Whenever possible, PDFs and/or URLs to the original resource are included, though some tools require a fee or membership to the third-party site for access. The instruments can be accessed at http://stelar.edc.org/resources/instruments, and the database can be searched or filtered by keywords common to ATE and ITEST projects, e.g., “participant recruitment and retention,” “partnerships and collaboration,” “STEM career opportunities and workforce development,” “STEM content and standards,” and “teacher professional development and pedagogy,” among others.

In addition to our extensive instrument library, our website also features more than 400 publications, curricular materials, and videos. Each library can be browsed individually, or if you would like to view everything that we have on a topic, you can search all resources on the main resources page: http://stelar.edc.org/resources. We are continually adding to our resources and have recently improved our collection methods to allow projects to upload to the website directly. We expect this will result in even more frequent additions, and we encourage you to visit often or join our mailing list for updates.

STELAR also hosts a free, self-paced online course in which novice NSF proposal writers develop a full NSF proposal. While focused on ITEST, the course can be generalized to any NSF proposal. Two sessions focus on research and evaluation, breaking down the process for developing impactful evaluations. Participants learn what key elements to include in research designs, how to develop logic models, what is involved in deciding the evaluation’s design, and how to align the research design and evaluation sections. The content draws from expertise within the STELAR team and elements from NSF’s Common Guidelines for Education Research and Development. Since the course is self-paced, you can learn more about the course and register to participate at any time: https://mailchi.mp/edc.org/invitation-itest-proposal-course-2

We hope that these resources are useful in your work and invite you to share suggestions and feedback with us at stelar@edc.org. As a member of the NSF Resource Centers network, we welcome opportunities to explore cross-program collaboration, working together to connect and promote our shared goals.

Blog: A Professional Home for Evaluators in the STEM Education and Training TIG of the American Evaluation Association

Posted on October 17, 2018
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Ann Martin

Project Manager and Evaluator, Oak Ridge Associated Universities

Erin Burr

Section Manager of Assessment and Evaluation, Oak Ridge Associated Universities

Kimberle Kelly

Project Manager and Senior Evaluator, Oak Ridge Associated Universities

The Advanced Technological Education (ATE) community is home to many researchers and evaluators, and today, we would like to invite this community to another professional home! We’re Ann Martin, Erin Burr, and Kimberle Kelly, all from Oak Ridge Associated Universities. We’re all also closely involved with the American Evaluation Association (AEA), and specifically involved with a topical interest group (TIG) within AEA dedicated to the evaluation of STEM education and workforce development initiatives. The STEM Education and Training TIG has been a welcoming professional home for evaluators in STEM-related fields since its founding in 2012.

The TIG promotes and organizes the STEM program track at the AEA annual conference each year. One of the TIG’s most substantial projects has been the development and prototyping of a repository of STEM-relevant evaluation resources, a project initiated several years ago.

In the past year, we’ve been working with the Computer Science Impact Network (CSIN), which grew out of an effort at Google and is now a community of over 50 evaluators working in the field of computer science education and training. CSIN has been an active partner with the TIG in designing the repository and has contributed resources related to their CS interests.

The TIG’s role within AEA helps us draw from and connect to the broader STEM evaluation community to include all relevant disciplines. Our goal is to consolidate the knowledge that lives in professional networks of STEM evaluators and to help other evaluators find comprehensive resources like the EvaluATE Library or the Center for the Advancement of Informal Science Education’s (CAISE) InformalScience, as well as specific tools, like our colleague Kathy Haynie’s Student Computer Science Attitude Survey. By working together across networks, we can strengthen our professional practice and increase access to high-quality STEM evaluation resources. We would especially welcome additions from the ATE evaluation network. Check out our prototype repository, and use our input form to submit resources that you’ve created or that you’ve found valuable in your own work.

If you are attending this year’s AEA conference in Cleveland (October 31 – November 3), we hope that you will come find us and attend TIG events! Join us at the STEM TIG’s networking reception on Thursday, November 1, from 6:45-7:30 p.m. (Convention Center Exhibit Level Foyer 10-15, with other TIGs in joint reception group 3). We’ll also host the TIG’s annual business meeting, which is a great opportunity to learn more about us and get involved. The TIG business meeting will take place Thursday, November 1, from 6:00 – 6:45 p.m. at the Hilton – Hope Ballroom C. Check out all of the TIG’s conference offerings on the conference website by searching the online program for the STEM Education and Training track.

During the conference, we’ll also assemble the TIG’s leadership team for 2019. While membership in AEA is required to serve as a TIG chair, program chair, or webmaster, our executive board also works on a variety of special projects—from communications to social media to the development of the resource repository—that are open to all. We’d like to extend a special invitation to the EvaluATE community to consider getting involved!

If you won’t be attending AEA or aren’t a member of the association, we would love to hear from you. Reach out to Ann Martin at ann.martin@orau.org, or sign up for our e-list, which delivers periodic information.

Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Examples of items we like to include on our cheat sheets are shown in Figures 1-3 and include the following:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with a lot of data collection sources. The example shown in Figure 2 identifies who implements the instrument, when the instrument will be implemented, the purpose of the instrument, and the data source. It is helpful to identify who is responsible for data collection activities in the cheat sheet, so nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved. That way, if we need to update the plan, we can be sure we are working on the most recent version.

 

Figure 1. Cheat Sheet Example Page 1

Figure 2. Cheat Sheet Example Page 2

Figure 3. Cheat Sheet Example Page 3

 

Blog: Using Mixed-Mode Survey Administration to Increase Response

Posted on September 26, 2018

Program Evaluator, Cold Spring Harbor Laboratory, DNA Learning Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Why aren’t people responding?”

This is the perpetual question asked by anyone doing survey research, and it’s one that I am no stranger to myself. There are common strategies to combat low survey participation, but what happens when they fail?

Last year, I was co-principal investigator on a small Advanced Technological Education (ATE) grant to conduct a nationwide survey of high school biology teachers. This was a follow-up to a 1998 survey done as part of an earlier ATE grant my institution had received. In 1998, the survey was done entirely by mail and had a 35 percent response rate. In 2018, we administered an updated version of this survey to nearly 13,000 teachers. However, this time, there was one big difference: we used email.

After a series of four messages over two months (pre-notice, invitation, and two reminders), an incentivized survey, and intentional targeting of high school biology teachers, our response rate was only 10 percent. We anticipated that teachers would be busy and that a 15-minute survey might be too much for many of them to deal with at school. However, there appeared to be a bigger problem: nearly two-thirds of our messages were never opened and perhaps never even seen.

To boost our numbers, we decided to return to what had worked previously: the mail. Rather than send more emails, we mailed an invitation to individuals who had not completed the survey, followed by postcard reminders. Individuals were reminded of the incentive and directed to a web address where they could complete the survey online. The end result was a 14 percent response rate.

I noticed that, particularly when emailing teachers at their school-provided email addresses, many messages never reached the intended recipients. Although returning to a mail-only design is unlikely, an alternative is to heed the advice of Millar and Dillman (2011): use a mixed-mode, web-then-mail contact strategy so that spam filters don't keep would-be participants from ever seeing the survey. Asking the following questions can help guide your method-of-contact decisions and help you avoid troubleshooting a low response rate mid-survey.

  1. Have I had low response rates from a similar population before?
  2. Do I have the ability to contact individuals via multiple methods?
  3. Is using the mail cost- or time-prohibitive for this particular project?
  4. What is the sample size necessary for my sample to reasonably represent the target population? (A rough way to estimate this is sketched below.)
  5. Have I already made successful contact with these individuals over email?
  6. Does the survey tool I’m using (Survey Monkey, Qualtrics, etc.) tend to be snagged by spam filters if I use its built-in invitation management features?

These are just some of the considerations that may help you avoid major spam filter issues in your forthcoming project. Spam filters may not be the only reason for a low response rate, but anything that can be done to mitigate their impact is a step toward a better response rate for your surveys.
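
On question 4, one rough way to gauge a workable sample size is the standard formula for estimating a proportion, adjusted for a finite population. The sketch below is illustrative only; it assumes a conventional 95 percent confidence level and 5 percent margin of error, and it uses the roughly 13,000-teacher frame from this project as the example population.

```python
# Rough sample-size sketch for estimating a proportion, with a finite
# population correction. Figures are illustrative, not a prescription.
import math

def required_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Cochran's formula for a proportion, adjusted for a finite population."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

needed = required_sample_size(13_000)   # the ~13,000-teacher frame described above
print(f"Completed surveys needed: {needed}")              # about 374
invitations = round(needed / 0.10)                        # roughly 3,740
print(f"Invitations at a 10% response rate: about {invitations}")
```

Working backward from the number of completed surveys needed to the number of invitations, given a realistic response rate, can signal early whether a single contact mode is likely to be enough.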


Reference

Millar, M., & Dillman, D. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75, 249–269.

Blog: Using Rubrics to Demonstrate Educator Mastery in Professional Development

Posted on September 18, 2018
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Nena Bloom
Evaluation Coordinator
Center for Science Teaching and Learning, Northern Arizona University
Lori Rubino-Hare
Professional Development Coordinator
Center for Science Teaching and Learning, Northern Arizona University

We are Nena Bloom and Lori Rubino-Hare, the internal evaluator and principal investigator, respectively, of the Advanced Technological Education project Geospatial Connections Promoting Advancement to Careers and Higher Education (GEOCACHE). GEOCACHE is a professional development (PD) project that aims to enable educators to incorporate geospatial technology (GST) into their classes, to ultimately promote careers using these technologies. Below, we share how we collaborated on creating a rubric for the project’s evaluation.

One important outcome of effective PD is participants’ mastery of new knowledge and skills (Guskey, 2000; Haslam, 2010). GEOCACHE defines “mastery” as the effective application of the new knowledge and skills in educator-created lesson plans.

GEOCACHE helps educators teach their content through Project Based Instruction (PBI) that integrates GST. In PBI, students collaborate and critically examine data to solve a problem or answer a question. Educators were provided 55 hours of PD, during which they experienced model lessons integrated with GST content. Educators then created lesson plans tied to the curricular goals of their courses, infusing opportunities for students to learn appropriate subject matter through the exploration of spatial data. “High-quality GST integration” was defined as opportunities for learners to collaboratively use GST to analyze and/or communicate patterns in data to describe phenomena, answer spatial questions, or propose solutions to problems.

We analyzed the educator-created lesson plans using a rubric to determine if GEOCACHE PD supported participants’ ability to effectively apply the new knowledge and skills within lessons. We believe this is a more objective indicator of the effectiveness of PD than solely using self-report measures. Rubrics, widespread methods of assessing student performance, also provide meaningful information for program evaluation (Davidson, 2004; Oakden, 2013). A rubric illustrates a clear standard and set of criteria for identifying different levels of performance quality. The objective is to understand the average skill level of participants in the program on the particular dimensions of interest. Davidson (2004) proposes that rubrics are useful in evaluation because they help make judgments transparent. In program evaluation, scores for each criterion are aggregated across all participants.
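
As a purely hypothetical illustration of that aggregation step, the sketch below averages rubric scores for each criterion across all scored lesson plans; the criterion names and scores are invented and are not the GEOCACHE rubric.

```python
# Hypothetical illustration of aggregating rubric scores across participants.
# Each dict holds one educator's lesson-plan scores on a 1-4 scale; the
# criterion names are invented for this example.
from statistics import mean

lesson_plan_scores = [
    {"gst_integration": 3, "pbi_design": 2, "alignment_to_goals": 4},
    {"gst_integration": 4, "pbi_design": 3, "alignment_to_goals": 3},
    {"gst_integration": 2, "pbi_design": 2, "alignment_to_goals": 3},
]

for criterion in lesson_plan_scores[0]:
    average = mean(plan[criterion] for plan in lesson_plan_scores)
    print(f"{criterion}: mean {average:.2f} across {len(lesson_plan_scores)} lesson plans")
```

Reporting the mean (or the distribution) for each criterion, rather than a single overall score, keeps the evaluation focused on the particular dimensions of interest.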

Practices we used to develop and utilize the rubric included the following:

  • We developed the rubric collaboratively with the program team to create a shared understanding of performance expectations.
  • We focused on aligning the criteria and expectations of the rubric with the goal of the lesson plan (i.e., to use GST to support learning goals through PBI approaches).
  • Because good rubrics existed but were not entirely aligned with our project goal, we chose to adapt existing technology integration rubrics (Britten & Cassady, 2005; Harris, Grandgenett, & Hofer, 2010) and PBI rubrics (Buck Institute for Education, 2017) to include GST use, rather than start from scratch.
  • We checked that the criteria at each level were clearly defined, to ensure that scoring would be accurate and consistent.
  • We pilot tested the rubric with several units, using several scorers, and revised accordingly.

This authentic assessment of educator learning informed the evaluation. It provided information about the knowledge and skills educators were able to master and how the PD might be improved.


References and resources

Britten, J. S., & Cassady, J. C. (2005). The Technology Integration Assessment Instrument: Understanding planned use of technology by classroom teachers. Computers in the Schools, 22(3), 49-61.

Buck Institute for Education. (2017). Project design rubric. Retrieved from http://www.bie.org/object/document/project_design_rubric

Davidson, E. J. (2004). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Corwin Press.

Harris, J., Grandgenett, N., & Hofer, M. (2010). Testing a TPACK-based technology integration assessment instrument. In C. D. Maddux, D. Gibson, & B. Dodge (Eds.), Research highlights in technology and teacher education 2010 (pp. 323-331). Chesapeake, VA: Society for Information Technology and Teacher Education.

Haslam, M. B. (2010). Teacher professional development evaluation guide. Oxford, OH: National Staff Development Council.

Oakden, J. (2013). Evaluation rubrics: How to ensure transparent and clear assessment that respects diverse lines of evidence. Melbourne, Australia: BetterEvaluation.

Blog: Four Personal Insights from 30 Years of Evaluation

Posted on August 30, 2018

Haddix Community Chair of STEM Education, University of Nebraska Omaha

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As I complete my 30th year in evaluation, I feel blessed to have worked with so many great people. In preparation for this blog, I spent a reflective morning with some hot coffee, cereal, and wheat toast (that morning donut is no longer an option), and I looked over past evaluations. I thought about any personal insights that I might share, and I came up with four:

  1. Lessons Learned Are Key: I found it increasingly helpful over the years to think about a project evaluation as a shared learning journey, taken with the project leadership. In this context, we both want to learn things that we can share with others.
  2. Evaluator Independence from Project Implementation Is Critical: Nearly 20 years ago, a program officer read in a project annual report that I had done a workshop on problem-based learning for the project. In response, he kindly asked if I had “gone native,” which is slang for a project evaluator getting so close to the project it threatens independence. As I thought it over, he had identified something that I was becoming increasingly uncomfortable with. It became difficult to offer suggestions on implementing problem-based learning when I had offered the training. That quick, thoughtful inquiry helped me to navigate that situation. It also helped me to think about my own future evaluator independence.
  3. Be Sure to Update Plans after Funding: I always adjust a project evaluation plan after the award. Once funded, everyone really digs in, and opportunities typically surface to make the project and its evaluation even better. I have come to embrace that process. I now typically include an “evaluation plan update” phase before we initiate an evaluation, to ensure that the evaluation plan is the best it can truly be when we implement it.
  4. Fidelity Is Important: It took me 10 years in evaluation before I fully understood the “fidelity issue.” Loosely defined, fidelity is how faithfully program implementers follow the recipe of a program intervention. The first time I became concerned with fidelity, I was evaluating the implementation of 50 hours of curriculum. As I interviewed the teachers, it became clear that they were spending vastly different amounts of time on topics and activities. Like all good teachers, they had made the curriculum their own, but in many ways, the intended project intervention disappeared. This made it hard to learn much about the intervention. I evolved to include a fidelity feedback process in projects, to statistically adjust for that natural variation or to help examine differing impacts based on intervention fidelity (a minimal sketch of such an adjustment appears below).
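
As an illustrative sketch of the statistical adjustment mentioned in point 4, the example below uses simulated data and invented variable names (with statsmodels) to enter a fidelity score as a covariate and as a moderator of the treatment effect; it is not drawn from any particular project.

```python
# Hypothetical sketch: adjusting an outcome model for implementation fidelity.
# All data are simulated; 'fidelity' stands in for an observed score such as
# the proportion of a curriculum delivered as intended.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),       # 1 = received the intervention
    "fidelity": rng.uniform(0.5, 1.0, n),   # share of curriculum delivered
    "pre_score": rng.normal(50, 10, n),
})
df["post_score"] = df["pre_score"] + 5 * df["treated"] * df["fidelity"] + rng.normal(0, 5, n)

# Fidelity enters as a covariate, and the treated:fidelity interaction
# examines whether impact differs with how faithfully the intervention ran.
model = smf.ols("post_score ~ pre_score + treated * fidelity", data=df).fit()
print(model.summary().tables[1])
```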

In the last 30 years, program evaluation as a field has become increasingly useful and important. Like my days of eating donuts for breakfast, the days of “superficial” evaluation are increasingly gone. They have been replaced by evaluation strategies that are collaboratively planned, engaged, and flexible, which (like my wheat toast and cereal) get evaluators and project leadership further along the shared journey. Although I do periodically miss the donuts, I never miss the superficial evaluations. Overall, I am always really glad that I now have the cereal and toast—and that I conduct strong and collaborative program evaluations.

Blog: The Life-Changing Magic of a Tidy Evaluation Plan

Posted on August 16, 2018

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Effective tidying involves only three essential actions. All you need to do is take the time to examine every item you own, decide whether or not you want to keep it, then choose where to put what you keep. Designate a place for each thing.”

―Marie Kondo, The Life-Changing Magic of Tidying Up

I’ve noticed a common problem with some proposal evaluation plans: It’s not so much that they don’t include key information; it’s that they lack order. They’re messy. When you have only about two pages of a 15-page National Science Foundation proposal to describe an evaluation, you need to be exceptionally clear and efficient. In this blog, I offer tips on how to “tidy up” your proposal’s evaluation plan to ensure it communicates key information clearly and coherently.

First of all, what does a messy evaluation plan look like? It meanders. It frames the evaluation’s focus in different ways in different places in the proposal, or even within the evaluation section itself, leaving the reviewer confused about the evaluation’s purpose. It discusses data and data collection without indicating what those data will be used to address. It employs different terms to mean the same thing in different places. It makes it hard for reviewers to discern key information from the evaluation plan and understand how that information fits together.

Three Steps to Tidy up a Messy Evaluation Plan

It’s actually pretty easy to convert a messy evaluation plan into a tidy one:

  • State the evaluation’s focus succinctly. List three to seven evaluation questions that the evaluation will address. These questions should encompass all of your planned data collection and analysis—no more, no less. Refer to these questions as needed later in the plan, rather than restating them differently, introducing new topics, or expressing the evaluation’s focus in different ways in different places.
  • Link the data you plan to collect to the evaluation questions. An efficient way to do this is to present the information in a table. I like to include evaluation questions, indicators, data collection methods and sources, analysis, and interpretation in a single table to clearly show the linkages and convey that my team has carefully thought about how we will answer the evaluation questions. Bonus: Presenting information in a table saves space and makes it easy for reviewers to locate key information. (See EvaluATE’s Evaluation Data Matrix Template.)
  • Use straightforward language—consistently. Don’t assume that reviewers will share your definition of evaluation-related terms. Choose your terms carefully and do not vary how you use them throughout the proposal. For example, if you are using the terms measures, metrics, and indicators, ask yourself if you are really referring to different things. If not, stick with one term and use it consistently. If similar words are actually intended to mean different things, include brief definitions to avoid any confusion about your meaning.

Can a Tidy Evaluation Plan Really Change Your Life?

If it moves a very good proposal toward excellent, then yes! In the competitive world of grant funding, every incremental improvement counts and heightens your chances for funding, which can mean life-changing opportunities for the project leaders, evaluators, and—most importantly—individuals who will be served by the project.

Blog: Becoming a Sustainability Sleuth: Leaving and Looking for Clues of Long-Term Impact

Posted on August 1, 2018

Director, SageFox Consulting Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! I’m Rebecca from SageFox Consulting Group, and I’d like to start a conversation about measuring sustainability. Many of us work on ambitious projects whose long-term impacts cannot be achieved within the grant period and depend on grant activities being sustained afterward. Projects are often tasked with providing evidence of sustainability but are not given funding to assess sustainability and impact after the grant ends. In five, 10, or 15 years, if someone were to pick up your final report, would they be able to use it to get a baseline understanding of what occurred during the grant, and would they know where to look for evidence of impact and sustainability? Below are some suggestions for documenting “clues” of sustainability:

Relationships are one way projects are sustained. You may want to consider documenting evidence of the depth of relationships: are they person-dependent, or have they become true partnerships between entities? Evidence of the depth of a relationship is often revealed when a key supporter leaves their position but the relationship continues. You might also try to distinguish a person from a role. For example, one project I worked on lost the support of a key contact (due to a reorganization) at a federal agency that hosted student interns during the summer. There was enough goodwill and experience, however, that continued efforts from the project leadership resulted in more requests for interns than there were students to fill them.

Documenting how and why the innovation evolves can provide evidence of sustainability. Often the adopter, user, or customer finds their own value in it in relation to their unique context. Understanding how and why someone adapts the product or process gives great insight into which elements may live on and in what contexts. For example, you might ask users, “What modifications were needed for your context and why?”

In one of my projects, we began with a set of training modules for students, but we found that an online test preparation module for a certification was also valuable. Through a relationship with the testing agency, a revenue stream was developed that also allowed the project to continue classroom work with students.

Institutionalization (adoption of key products or processes by an institution)—often through a dedicated line item in a budget for a previously grant-funded student support position—reflects sustainability. For example, when a grant-funded program found a permanent home at the university by expanding its student-focused training in entrepreneurship to faculty members, it aligned itself with the mission of the department. Asking “What components of this program are critical for the host institution?” is one way to uncover institutionalization opportunities.

Revenue generation is another indicator of customer demand for the product or process. Many projects are reluctant to commercialize their innovations, but commercialization can be part of a sustainability plan. There are even National Science Foundation (NSF) programs to help plan for commercialization (e.g., NSF Innovation Corps), and seed money to get started is also available (e.g., NSF Small Business Innovation Research).

Looking for clues of sustainability often requires a qualitative approach to evaluation, capturing the story from the leadership team and participants. It also involves being on the lookout for unanticipated outcomes, in addition to the deliberate avenues a project takes to ensure the longevity of the work.