Archive: evaluation

Blog: Repackaging Evaluation Reports for Maximum Impact

Posted on March 20, 2019 by Emma Perk and Lyssa Wilson Becho in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation reports take a lot of time to produce and are packed full of valuable information. To get the most out of your reports, think about “repackaging” your traditional report into smaller pieces.

Repackaging involves breaking up a long-form evaluation report into digestible pieces to target different audiences and their specific information needs. The goals of repackaging are to increase stakeholders’ engagement with evaluation findings, increase their understanding, and expand their use.

Let’s think about how we communicate data to various readers. Bill Shander from Beehive Media created the 4×4 Model for Knowledge Content, which illustrates different levels at which data can be communicated. We have adapted this model for use within the evaluation field. As you can see below, there are four levels, and each has a different type of deliverable associated with it. We are going to walk through these four levels and how an evaluation report can be broken up into digestible pieces for targeted audiences.

Figure 1. The four levels of delivering evaluative findings (image adapted from Shander’s 4×4 Model for Knowledge Content).

The first level, the Water Cooler, is for quick, easily digestible data pieces. The idea is to use a single piece of data from your report to intrigue viewers into learning more. Examples include a newspaper headline, a postcard, or a social media post. In a social media post, include a graphic (photo or graph), a catchy title, and a link to the next communication level’s document. This information should be succinct and exciting. Use this level to catch the attention of readers who might not otherwise be invested in your project.

Figure 2. Example of social media post at the Water Cooler level.

The Café level allows you to highlight three to five key pieces of data that you really want to share. A Café level deliverable is great for busy stakeholders who need to know detailed information but don’t have time to read a full report. Examples include one-page reports, a short PowerPoint deck, and short briefs. Make sure to include a link to your full evaluation report to encourage the reader to move on to the next communication level.

Figure 3. One-page report at the Café level.

The Research Library is the level at which we find the traditional evaluation report. Deliverables at this level require the reader to have an interest in the topic and to spend a substantial amount of time to digest the information.

Figure 4. Full evaluation report at the Research Library level.

The Lab is the most intensive and involved level of data communication. Here, readers have a chance to interact with the data. This level goes beyond a static report and allows stakeholders to personalize the data for their interests. For those who have the knowledge and expertise in creating dashboards and interactive data, providing data at the Lab level is a great way to engage with your audience and allow the reader to manipulate the data to their needs.

Figure 5. Data dashboard example from Tableau Public Gallery.

We hope this blog has sparked some interest in the different ways an evaluation report can be repackaged. Different audiences have different information needs and different amounts of time to spend reviewing reports. We encourage both project staff and evaluators to consider who their intended audience is and what level would best communicate their findings. Then use these ideas to create content specific to that audience.

Blog: Evaluation Reporting with Adobe Spark

Posted on March 8, 2019 by Ouen Hunter, Emma Perk, and Michael Harnar in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This blog was originally published on AEA365 on December 28, 2018: https://aea365.org/blog/evaluation-reporting-with-adobe-spark-by-ouen-hunter-and-emma-perk/

Hi! We are Ouen Hunter (student at the Interdisciplinary Ph.D. in Evaluation Program, IDPE), Emma Perk (project manager at The Evaluation Center), and Michael Harnar (assistant professor at the IDPE) from Western Michigan University. Recently, we used PhotoVoice in our evaluation of an Upward Bound program and wanted to share how we reported our PhotoVoice findings using the cost-free version of Adobe Spark.

Adobe Spark offers templates to make webpages, videos, flyers, reports, and more. It also hosts your product online for free. While there is a paid version of Adobe Spark, everything we discuss in this blog can be done using the free version. The software is very straightforward, and we were able to get our report online within an hour. We chose to create a webpage to increase accessibility for a large audience.

The free version of Adobe Spark has a lot of features, but it can be difficult to customize the layout. Therefore, we created our layouts in PowerPoint and then uploaded them to Spark. This enabled us to customize the font, alignment, and illustrations. Follow these instructions to create a similar webpage:

  • Create a slide deck in PowerPoint. Use one slide per photo and text from the participant. The first slide serves as a template for the rest. (For large photo sets, a scripted alternative is sketched after this list.)
  • After creating the slides, you have a few options for saving the photos for upload.
    1. Use a snipping tool (Windows’ Snipping Tool or Mac’s screenshot function) to take a picture of each slide and save it as a PNG file.
    2. Save each slide as a picture in PowerPoint by selecting the image and the speech bubble, right-clicking, and saving as a picture.
    3. Export as a PNG in PowerPoint. Go to File > Export then select PNG under the File Format drop-down menu. This will save all the slides as individual image files.
  • Create a webpage in Adobe Spark.
          1. Once on the site, you will be prompted to start a new account (unless you’re a returning user). This will allow your projects to be stored and give you access to create in the software.
          2. You have the option to change the theme to match your program or branding by selecting the Theme button.
          3. Once you have selected your theme, you are ready to add a title and upload the photos you created from PowerPoint. To upload the photos, press the plus icon. 
          4. Then select Photo. 
          5. Select Upload Photo. Add all photos and confirm the arrangement.
          6. After finalizing, remember to post the page online and click Share to give out the link. 
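
If you have many photos, building the deck slide by slide can be tedious. The authors describe a manual PowerPoint workflow; as a hedged alternative, the sketch below shows how the same one-photo-plus-caption layout could be generated with the python-pptx library. The file names, captions, and slide dimensions are placeholders, not from the original evaluation.

```python
# Minimal sketch (assumptions labeled): build a one-slide-per-photo deck
# with python-pptx. Photo file names and captions below are placeholders.
from pptx import Presentation
from pptx.util import Inches, Pt

photos_and_captions = [
    ("photo_01.jpg", "Participant caption for the first prompt."),
    ("photo_02.jpg", "Participant caption for the second prompt."),
]

prs = Presentation()
blank_layout = prs.slide_layouts[6]  # index 6 is the built-in blank layout

for photo_path, caption in photos_and_captions:
    slide = prs.slides.add_slide(blank_layout)
    # Photo on the left two-thirds of the slide.
    slide.shapes.add_picture(photo_path, Inches(0.5), Inches(0.5), width=Inches(6))
    # Caption in a text box on the right.
    box = slide.shapes.add_textbox(Inches(6.75), Inches(0.5), Inches(2.75), Inches(6))
    frame = box.text_frame
    frame.word_wrap = True
    frame.text = caption
    frame.paragraphs[0].font.size = Pt(18)

prs.save("photovoice_slides.pptx")
```

The resulting slides can then be exported as PNGs (File > Export in PowerPoint, as in option 3 above) and uploaded to Adobe Spark following the webpage steps.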

Though we used Adobe Spark to share our PhotoVoice results, Spark has many other applications. We encourage you to check out Adobe Spark to see how you can use it to share your evaluation results.

Hot Tips and Features:

  • Adobe Spark adjusts automatically for handheld devices.
  • Adobe Spark also aligns content automatically. No need to use a virtual ruler.
  • There are themes available with the free subscription, making it easy to design the webpage.
  • Select multiple photos during your upload. Adobe Spark will automatically separate each file for you.

*Disclaimer: Adobe Spark didn’t pay us anything for this blog. We wanted to share this amazing find with the evaluation community!

Blog: PhotoVoice: A Method of Inquiry in Program Evaluation

Posted on January 25, 2019 by Ouen Hunter, Emma Perk, and Michael Harnar in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello, EvaluATE! We are Ouen Hunter (student at the Interdisciplinary Ph.D. in Evaluation, IDPE), Emma Perk (co-PI of EvaluATE at The Evaluation Center), and Michael Harnar (assistant professor at the IDPE) from Western Michigan University. We recently used PhotoVoice in our evaluation of a Michigan-based Upward Bound (UB) program (a college preparation program focused on 14- to 19-year-old youth living in low-income families in which neither parent has a bachelor’s degree).

PhotoVoice is a method of inquiry that engages participants in creating photographs and short captions in response to specific prompts. The photos and captions provide contextually grounded insights that might otherwise be unreachable by those not living that experience. We opted to use PhotoVoice because the photos and narratives could provide insights into participants’ perspectives that cannot be captured using closed-ended questionnaires.

We created two prompts, in the form of questions, and introduced PhotoVoice in person with the UB student participants (see the instructional handout below). Students used their cell phones to take one photo per prompt. For confidentiality reasons, we also asked the students to avoid taking pictures of human faces. Students were asked to write a two- to three-sentence caption for each photo. The caption was to include a short description of the photo, what was happening in the photo, and the reason for taking the photo.


Figure 1: PhotoVoice Handout

PhotoVoice participation was part of the UB summer programming and overseen by the UB staff. Participants had two weeks to complete the tasks. After receiving the photographs and captions, we analyzed them using MAXQDA 2018. We coded the pictures and the narratives using an inductive thematic approach.

After the preliminary analysis, we then went back to our student participants to see if our themes resonated with them. Each photo and caption was printed on a large sheet of paper (see Figure 2 below) and posted on the wall. During a gallery walk, students were asked to review each photo and caption combination and to indicate whether they agreed or disagreed with our theme selections (see Figure 3). We gave participants stickers and asked them to place the stickers in either the “agree” or “disagree” section on the bottom of each poster. After the gallery walk, we discussed the participants’ ratings to understand their photos and write-ups better.
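
The blog does not describe scripting this step, but if you want to summarize gallery-walk feedback quantitatively, a minimal sketch like the following could tally agree/disagree stickers per poster. The CSV file and column names are hypothetical.

```python
# Minimal sketch: tally gallery-walk stickers per poster.
# Assumes a hypothetical CSV with one row per sticker and columns
# "poster_id" and "vote" ("agree" or "disagree").
import pandas as pd

votes = pd.read_csv("gallery_walk_votes.csv")

summary = (
    votes.groupby("poster_id")["vote"]
    .value_counts()
    .unstack(fill_value=0)  # one column per vote type
)
summary["percent_agree"] = summary["agree"] / (summary["agree"] + summary["disagree"]) * 100

# Posters with the least agreement are good candidates for discussion.
print(summary.sort_values("percent_agree"))
```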

Figure 2: Gallery walk layout (photo and caption on large pieces of paper)

Figure 3: Participants browsing the photographs

Using the participants’ insights, we finalized the analysis, created a webpage, and developed a two-page report for the program staff. To learn more about our reporting process, see our next blog. Below is a diagram of the activities that we completed during the evaluation.

Figure 4: Activities conducted in the Upward Bound evaluation

The PhotoVoice activity provided us with rich insights that we would not have received from the survey that was previously used. The UB student participants enjoyed learning about and being a part of the evaluation process. The program staff valued the reports and insights the method provided. The exclusion of faces in the photographs enabled us to avoid having to obtain parental permission to release the photos for use in the evaluation and by UB staff. Having the students use cell phone cameras kept costs low. Overall, the evaluation activity went over well with the group, and we plan to continue using PhotoVoice in the future.

Blog: The Business of Evaluation: Liability Insurance

Posted on January 11, 2019 in Blog

Luka Partners LLC

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Bottom line: you may need liability insurance, and you have to pay for it.

The proposal has been funded, you are the named evaluator, you have created a detailed scope of work, and the educational institution has sent you a Professional Services Contract to sign (and read!).

This contract will contain many provisions, one of which is having insurance. I remember the first time I read it: The contractor shall maintain commercial general liability insurance against any claims that might incur in carrying out this agreement. Minimum coverage shall be $1,000,000.

I thought, well, this probably doesn’t pertain to me, but then I read further: Upon request, the contractor is required to provide a Certificate of Insurance. That got my attention.

You might find what happened next interesting. I called the legal offices at the community college. My first question was, “Can we just strike that from the contract?” No; the college was required by law to include it. Then she explained, “Mike, that sort of liability thing is mostly for contractors coming to do physical work on our campus, in case there was an injury, a brick falling on the head of a student, things like that.” She lowered her voice. “I can tell you we are never going to ask you to show that certificate to us.”

However, sometimes, you will be asked to maintain and provide, on request, professional liability insurance, also called errors and omissions insurance (E&O insurance) or indemnity insurance. This protects your business if you are sued for negligently performing your services, even if you haven’t made a mistake. (OK, I admit, this doesn’t seem likely in our business of evaluation.)

Then the moment of truth came. A decent-sized contract arrived from a major university I shall not name located in Tempe, Arizona, with a mascot that is a devil with a pitchfork. It said if you want a purchase order from us, sign the contract and attach your Certificate of Insurance.

I was between the devil and a hard place. Somewhat naively, I called my local insurance agent (i.e., for home and car). He actually had never heard of professional liability insurance and promised to get back to me. He didn’t.

I turned to Google, the fount of all things. (Full disclosure, I am not advocating for a particular company—just telling you what I did.) I explored one company that came up high in the search results. Within about an hour, I was satisfied that it was what I needed, had a quote, and typed in my credit card number. In the next hour, I had my policy online and printed out the one-page Certificate of Insurance with the university’s name as “additional insured.” Done.

I would like to clarify one point. I did not choose general liability insurance because my operations pose no risk of physical damage to property or injury to people. In the business of evaluation, that is not a risk.

I now have a $2 million professional liability insurance policy that costs $700 per year. As I add clients, if they require it, I can create a one-page certificate naming them as additional insured, at no extra cost.

Liability insurance, that’s one of the costs of doing business.

Blog: How Evaluators Can Use InformalScience.org

Posted on December 13, 2018 in Blog

Evaluation and Research Manager, Science Museum of Minnesota and Independent Evaluation Consultant

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m excited to talk to you about the Center for Advancement of Informal Science Education (CAISE) and the support they offer evaluators of informal science education (ISE) experiences. CAISE is a National Science Foundation (NSF) funded resource center for NSF’s Advancing Informal STEM Learning program. Through InformalScience.org, CAISE provides a wide range of resources valuable to the EvaluATE community.

Defining Informal Science Education

ISE is lifelong learning in science, technology, engineering, and math (STEM) that takes place across a multitude of designed settings and experiences outside of the formal classroom. The video below is a great introduction to the field.

Outcomes of ISE experiences have some similarities to those of formal education. However, ISE activities tend to focus less on content knowledge and more on other types of outcomes, such as interest, attitudes, engagement, skills, behavior, or identity. CAISE’s Evaluation and Measurement Task Force investigates the outcome areas of STEM identity, interest, and engagement to provide evaluators and experience designers with guidance on how to define and measure these outcomes. Check out the results of their work on the topic of STEM identity (results for interest and engagement are coming soon).

Resources You Can Use

InformalScience.org has a variety of resources that I think you’ll find useful for your evaluation practice.

  1. In the section “Design Evaluation,” you can learn more about evaluation in the ISE field through professional organizations, journals, and projects researching ISE evaluation. The “Evaluation Tools and Instruments” page in this section lists sites with tools for measuring outcomes of ISE projects, and there is also a section about reporting and dissemination. I provide a walk-through of CAISE’s evaluation pages in this blog post: How to Use InformalScience.org for Evaluation.
  2. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects has been extremely useful for me in introducing ISE evaluation to evaluators new to the field.
  3. In the “News & Views” section are several evaluation-related blogs, including a series on working with an institutional review board and another one on conducting culturally responsive evaluations.
  4. If you are not affiliated with an academic institution, you can access peer-reviewed articles in some of your favorite academic journals by becoming a member of InformalScience.org. Joining is free! Once you’re logged in, select “Discover Research” in the menu bar and scroll down to “Access Peer-Reviewed Literature (EBSCO).” Journals of interest include Science Education and Cultural Studies of Science Education. If you are already a member of InformalScience.org, you can immediately begin searching the EBSCO Education Source database.

My favorite part of InformalScience.org is the repository of evaluation reports—1,020 reports and growing—which is the largest collection of reports in the evaluation field. Evaluators can use this rich collection to inform their practice and learn about a wide variety of designs, methods, and measures used in evaluating ISE projects. Even if you don’t evaluate ISE experiences, I encourage you to take a minute to search the reports and see what you can find. And if you conduct ISE evaluations, consider sharing your own reports on InformalScience.org.

Do you have any questions about CAISE or InformalScience.org? Contact Melissa Ballard, communications and community manager, at mballard@informalscience.org.

Blog: Evaluating Educational Programs for the Future STEM Workforce: STELAR Center Resources

Posted on November 8, 2018 by Sarah MacGillivray in Blog

Project Associate, STELAR Center, Education Development Center, Inc.

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello EvaluATE community! My name is Sarah MacGillivray, and I am a member of the STEM Learning and Research (STELAR) Center team, which supports the National Science Foundation Innovative Technology Experiences for Students and Teachers (NSF ITEST) program. Through ITEST, NSF funds the research and development of innovative models of engaging K-12 students in authentic STEM experiences. The goals of the program include building students’ interest and capacity to participate in STEM educational opportunities and developing the skills they will need for careers in STEM. While we target slightly different audiences than the Advanced Technological Education (ATE) program, our programs share the common goal of educating the future STEM workforce, and to support this goal, I invite you to access the many evaluation resources available on our website.

The STELAR website houses an extensive set of resources collected from and used by the ITEST community. These resources include a database of nearly 150 research and evaluation instruments. Each entry features a description of the tool, a list of relevant disciplines and topics, target participants, and a link to ITEST projects that have used the instrument in their work. Whenever possible, PDFs and/or URLs to the original resource are included, though some tools require a fee or membership to the third-party site for access. The instruments can be accessed at http://stelar.edc.org/resources/instruments, and the database can be searched or filtered by keywords common to ATE and ITEST projects, e.g., “participant recruitment and retention,” “partnerships and collaboration,” “STEM career opportunities and workforce development,” “STEM content and standards,” and “teacher professional development and pedagogy,” among others.

In addition to our extensive instrument library, our website also features more than 400 publications, curricular materials, and videos. Each library can be browsed individually, or if you would like to view everything that we have on a topic, you can search all resources on the main resources page: http://stelar.edc.org/resources. We are continually adding to our resources and have recently improved our collection methods to allow projects to upload to the website directly. We expect this will result in even more frequent additions, and we encourage you to visit often or join our mailing list for updates.

STELAR also hosts a free, self-paced online course in which novice NSF proposal writers develop a full NSF proposal. While focused on ITEST, the course can be generalized to any NSF proposal. Two sessions focus on research and evaluation, breaking down the process for developing impactful evaluations. Participants learn what key elements to include in research designs, how to develop logic models, how to choose an evaluation design, and how to align the research design and evaluation sections. The content draws on expertise within the STELAR team and elements of NSF’s Common Guidelines for Education Research and Development. Since the course is self-paced, you can learn more and register to participate at any time: https://mailchi.mp/edc.org/invitation-itest-proposal-course-2

We hope that these resources are useful in your work and invite you to share suggestions and feedback with us at stelar@edc.org. As a member of the NSF Resource Centers network, we welcome opportunities to explore cross-program collaboration, working together to connect and promote our shared goals.

Blog: Evaluation Plan Cheat Sheets: Using Evaluation Plan Summaries to Assist with Project Management

Posted on October 10, 2018 by Kelly Robertson and Lyssa Wilson Becho in Blog
Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We are Kelly Robertson and Lyssa Wilson Becho, and we work on EvaluATE as well as several other projects at The Evaluation Center at Western Michigan University. We wanted to share a trick that has helped us keep track of our evaluation activities and better communicate the details of an evaluation plan with our clients. To do this, we take the most important information from an evaluation plan and create a summary that can serve as a quick-reference guide for the evaluation management process. We call these “evaluation plan cheat sheets.”

The content of each cheat sheet is determined by the information needs of the evaluation team and clients. Cheat sheets can serve the needs of the evaluation team (for example, providing quick reminders of delivery dates) or of the client (for example, giving a reminder of when data collection activities occur). Examples of items we like to include on our cheat sheets are shown in Figures 1-3 and include the following:

  • A summary of deliverables noting which evaluation questions each deliverable will answer. In the table at the top of Figure 1, we indicate which report will answer which evaluation question. Letting our clients know which questions are addressed in each deliverable helps to set their expectations for reporting. This is particularly useful for evaluations that require multiple types of deliverables.
  • A timeline of key data collection activities and report draft due dates. On the bottom of Figure 1, we visualize a timeline with simple icons and labels. This allows the user to easily scan the entirety of the evaluation plan. We recommend including important dates for deliverables and data collection. This helps both the evaluation team and the client stay on schedule.
  • A data collection matrix. This is especially useful for evaluations with a lot of data collection sources. The example shown in Figure 2 identifies who implements the instrument, when the instrument will be implemented, the purpose of the instrument, and the data source. It is helpful to identify who is responsible for data collection activities in the cheat sheet, so nothing gets missed. If the client is responsible for collecting much of the data in the evaluation plan, we include a visual breakdown of when data should be collected (shown at the bottom of Figure 2).
  • A progress table for evaluation deliverables. Despite the availability of project management software with fancy Gantt charts, sometimes we like to go back to basics. We reference a simple table, like the one in Figure 3, during our evaluation team meetings to provide an overview of the evaluation’s status and avoid getting bogged down in the details.

Importantly, include the client and evaluator contact information in the cheat sheet for quick reference (see Figure 1). We also find it useful to include a page footer with a “modified on” date that automatically updates when the document is saved (e.g., a SaveDate field in Word). That way, if we need to update the plan, we can be sure we are working on the most recent version.
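
If you would rather generate cheat-sheet tables programmatically than maintain them by hand, a minimal sketch along these lines could assemble a deliverables-by-evaluation-question matrix and a simple progress table with pandas. The deliverable names, dates, evaluation questions, and statuses below are hypothetical, not taken from Figures 1-3.

```python
# Minimal sketch: assemble two cheat-sheet tables with pandas.
# Deliverables, dates, evaluation questions (EQ1-EQ3), and statuses are hypothetical.
import pandas as pd

# Which evaluation questions each deliverable will answer (cf. Figure 1).
deliverables = pd.DataFrame(
    {
        "deliverable": ["Interim report", "Annual report", "Final report"],
        "due_date": ["2019-01-15", "2019-06-30", "2020-06-30"],
        "EQ1": [True, True, True],
        "EQ2": [False, True, True],
        "EQ3": [False, False, True],
    }
)

# Simple progress table reviewed in team meetings (cf. Figure 3).
progress = pd.DataFrame(
    {
        "deliverable": ["Interim report", "Annual report", "Final report"],
        "status": ["Delivered", "Drafting", "Not started"],
    }
)

print(deliverables.to_string(index=False))
print()
print(progress.to_string(index=False))
```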

Figure 1. Cheat Sheet Example Page 1.

Figure 2. Cheat Sheet Example Page 2.

Figure 3. Cheat Sheet Example Page 3.

Blog: Four Personal Insights from 30 Years of Evaluation

Posted on August 30, 2018 in Blog

Haddix Community Chair of STEM Education, University of Nebraska Omaha

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As I complete my 30th year in evaluation, I feel blessed to have worked with so many great people. In preparation for this blog, I spent a reflective morning with some hot coffee, cereal, and wheat toast (that morning donut is no longer an option), and I looked over past evaluations. I thought about any personal insights that I might share, and I came up with four:

  1. Lessons Learned Are Key: I found it increasingly helpful over the years to think about a project evaluation as a shared learning journey, taken with the project leadership. In this context, we both want to learn things that we can share with others.
  2. Evaluator Independence from Project Implementation Is Critical: Nearly 20 years ago, a program officer read in a project annual report that I had done a workshop on problem-based learning for the project. In response, he kindly asked if I had “gone native,” which is slang for an evaluator getting so close to a project that it threatens their independence. As I thought it over, I realized he had identified something that I was becoming increasingly uncomfortable with. It became difficult to offer suggestions on implementing problem-based learning when I had offered the training myself. That quick, thoughtful inquiry helped me navigate that situation. It also helped me think about protecting my independence in future evaluations.
  3. Be Sure to Update Plans after Funding: I always adjust a project evaluation plan after the award. Once funded, everyone really digs in, and opportunities typically surface to make the project and its evaluation even better. I have come to embrace that process. I now typically include an “evaluation plan update” phase before we initiate an evaluation, to ensure that the evaluation plan is the best it can truly be when we implement it.
  4. Fidelity Is Important: It took me 10 years in evaluation before I fully understood the “fidelity issue.” Fidelity, loosely defined, is how faithful program implementers are to the recipe of a program intervention. The first time I became concerned with fidelity, I was evaluating the implementation of 50 hours of curriculum. As I interviewed the teachers, it became clear that they were spending vastly different amounts of time on topics and activities. Like all good teachers, they had made the curriculum their own, but in many ways the intended project intervention disappeared. This made it hard to learn much about the intervention. I have since evolved to include a fidelity feedback process in projects, to statistically adjust for that natural variation or to help examine differing impacts based on intervention fidelity. (A minimal sketch of that kind of adjustment follows this list.)
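
To illustrate the kind of statistical adjustment mentioned in point 4, here is a minimal sketch that includes a fidelity score as a covariate in a regression model. The data file and variable names are hypothetical; this is not the author’s actual analysis.

```python
# Minimal sketch: adjust outcomes for implementation fidelity.
# Assumes a hypothetical CSV with columns "posttest", "pretest", and
# "fidelity" (e.g., share of the 50-hour curriculum actually delivered).
import pandas as pd
import statsmodels.formula.api as smf

classes = pd.read_csv("classroom_fidelity.csv")

# Including fidelity as a covariate adjusts for natural variation in how
# faithfully teachers implemented the curriculum; its coefficient also
# suggests whether higher-fidelity delivery relates to better outcomes.
model = smf.ols("posttest ~ pretest + fidelity", data=classes).fit()
print(model.summary())
```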

In the last 30 years, program evaluation as a field has become increasingly useful and important. Like my days of eating donuts for breakfast, the days of “superficial” evaluation are increasingly gone. They have been replaced by evaluation strategies that are collaboratively planned, engaged, and flexible, which (like my wheat toast and cereal) get evaluators and project leadership further along the shared journey. Although I do periodically miss the donuts, I never miss the superficial evaluations. Overall, I am always really glad that I now have the cereal and toast—and that I conduct strong and collaborative program evaluations.

Blog: Becoming a Sustainability Sleuth: Leaving and Looking for Clues of Long-Term Impact

Posted on August 1, 2018 in Blog

Director, SageFox Consulting Group

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello! I’m Rebecca from SageFox Consulting Group, and I’d like to start a conversation about measuring sustainability. Many of us work on ambitious projects whose long-term impacts cannot be achieved within the grant period and require activities to be sustained beyond it. Projects are often tasked with providing evidence of sustainability but are not given the funding to assess sustainability and impact after grant funding ends. In five, 10, or 15 years, if someone were to pick up your final report, would they be able to use it to get a baseline understanding of what occurred during the grant, and would they know where to look for evidence of impact and sustainability? Below are some suggestions for documenting “clues” of sustainability:

Relationships are one way projects are sustained. You may want to consider documenting evidence of the depth of relationships: are they person-dependent, or have they become true partnerships between entities? Evidence of the depth of relationships is often revealed when a key supporter leaves their position but the relationship continues. You might also try to distinguish a person from a role. For example, one project I worked on lost the support of a key contact (due to a reorganization) at a federal agency that hosted student interns during the summer. There was enough goodwill and experience, however, that continued efforts from the project leadership resulted in more requests for interns than there were students available to fill them.

Documenting how and why the innovation evolves can provide evidence of sustainability. Often the adopter, user, or customer finds their own value in relation to their unique context. Understanding how and why someone adapts the product or process gives great insight into which elements may live on and in what contexts. For example, you might ask users, “What modifications were needed for your context and why?”

In one of my projects, we began with a set of training modules for students, but we found that an online test preparation module for a certification was also valuable. Through a relationship with the testing agency, a revenue stream was developed that also allowed the project to continue classroom work with students.

Institutionalization (adoption of key products or processes by an institution) reflects sustainability, often through a dedicated budget line item for a previously grant-funded student support position. For example, when a grant-funded program found a permanent home at the university by expanding its student-focused training in entrepreneurship to faculty members, it aligned itself with the mission of the department. Asking “What components of this program are critical for the host institution?” is one way to uncover institutionalization opportunities.

Revenue generation is another indicator of customer demand for the product or process. Many projects are reluctant to commercialize their innovations, but commercialization can be part of a sustainability plan. There are even National Science Foundation (NSF) programs to help plan for commercialization (e.g., NSF Innovation Corps), and seed money to get started is also available (e.g., NSF Small Business Innovation Research).

Looking for clues of sustainability often requires a qualitative approach to evaluation, capturing the story from the leadership team and participants. It also involves being on the lookout for unanticipated outcomes in addition to the deliberate avenues a project takes to ensure the longevity of the work.

Blog: Successful Practices in ATE Evaluation Planning

Posted on July 19, 2018 in Blog

President, Mullins Consulting, Inc.

Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In this essay, I identify what helps me create a strong evaluation plan when working with new Advanced Technological Education (ATE) program partners. I hope my notes add value to current and future proposal-writing conversations.

Become involved as early as possible in the proposal-planning process. With ATE projects, as with most evaluation projects, the sooner an evaluator is included in the project planning, the better. Even if the evaluator just observes the initial planning meetings, their involvement helps them become familiar with the project’s framework, the community partnerships, and the way in which project objectives are taking shape. Such involvement also helps familiarize the evaluator with the language used to frame project components and the new or established relationships expected for project implementation.

Get to know your existing and anticipated partners. Establishing or strengthening partnerships is a core component of ATE planning, as ATE projects often engage with multiple institutions through the creation of new certifications, development of new industry partnerships, and expansion of outreach efforts in public schools. The evaluator should take detailed notes on the internal and external partnerships involved with the project. Sometimes, to support my own understanding as an evaluator, it helps to visually map these relationships. The evaluator should also prepare for the unexpected: partners may change during the planning process as partner roles and program purposes become more clearly defined.

Integrate evaluation thinking into conversations early on. Once the team gets through the first couple of proposal drafts, it helps if the evaluator creates an evaluation plan and the team makes time to review it as a group. This will help the planning team clarify the evaluation questions to be addressed and outcomes to be measured. This review also allows the team to see how their outcomes can be clearly attached to program activities and measured through specific methods of data collection. Sometimes during this process, I speak up if a component could use further discussion (e.g., cohort size, mentoring practices). If an evaluator has been engaged from the beginning and has gotten to know the partners, they have likely built the trust necessary to add value to the discussion of the proposal’s central components.

Operate as an illuminator. A colleague I admire once suggested that evaluation be used as a flashlight, not as a hammer. This perspective of prioritizing exploration and illumination over determination of cause and effect has informed my work. Useful evaluations certainly require sound evaluation methodology, but they also require the crafting of results into compelling stories, told with data guiding the way. This requires working with others as interpretations unfold, discovering how findings can be communicated to different audiences, and listening to what stakeholders need to move their initiatives forward.

ATE programs offer participants critical opportunities to be a part of our country’s future workforce. Stakeholders are passionate about their programs. Careful, thoughtful engagement throughout the proposal-writing process builds trust while contributing to a quality proposal with a strong evaluation plan.