Newsletter - Guest Columnist

Newsletter: How Do You Know if Your Program is Meeting Industry Needs?

Posted on January 1, 2015 in Newsletter

Since the core of the ATE program is preparing a technical workforce, it is critical to match the skills that students are developing with the needs of local industry. Most ATE projects and centers work with local industry and have advisory groups, but additional resources are available for evaluating how well a curricular program is meeting industry needs, on both the industry demand and educational supply sides.

Burning Glass Labor/Insight™ interactive software can be used to generate real-time demand (jobs) data. The beauty of Burning Glass is that the user can find the job titles for advertised positions. This helps ATE projects gauge the demand for specific jobs and then match program supply information against it for comparison. The Standard Occupational Classification (SOC) system is used by Federal statistical agencies to classify workers into one of 840 occupational categories. The North American Industry Classification System (NAICS) is the standard used by Federal statistical agencies to classify U.S. business establishments. The SOC and NAICS codes do not always identify the job titles used in job announcements, so it is advisable to use a combination of data sources. By using Burning Glass information in conjunction with the federal codes, a more accurate determination of the demand for technicians can be made. Institutional data, statewide community or technical college data, and student follow-up can be helpful in determining the supply side.
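As a rough illustration of the kind of demand-and-supply comparison described above, the short Python sketch below tallies advertised openings against program completers by SOC code. The counts are invented and the SOC codes are used only as examples; this is not output from Burning Glass or any federal dataset, just a sketch of how the two sides might be lined up.

    # Hypothetical counts of advertised openings (demand) by SOC code,
    # of the kind one might assemble from a Burning Glass Labor/Insight query.
    demand = {"19-4021": 220, "17-3026": 145, "15-1152": 90}

    # Hypothetical counts of annual program completers (supply) by SOC code,
    # of the kind one might assemble from institutional or statewide data.
    supply = {"19-4021": 60, "17-3026": 150, "15-1152": 40}

    # Estimate the gap between demand and supply for each occupation.
    for soc in sorted(demand):
        completers = supply.get(soc, 0)
        gap = demand[soc] - completers
        print(f"SOC {soc}: demand {demand[soc]}, supply {completers}, gap {gap}")

In practice, a comparison like this would also need to account for geography, replacement openings, and other programs feeding the same occupations, which is why combining multiple data sources matters.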

Environmental scans can focus on labor market needs that warrant a community college response. A recent publication addressing the comparison of demand and supply is the Life Sciences & Biotechnology Middle Skills Workforce in California (October 2014) report, available from coeccc.net.

Newsletter: Using Evaluation Results to Guide Decision Making

Posted on October 1, 2014 in Newsletter

As the PI of an ATE project or center, you know that working with evaluators provides key information for the success of the project. Gathering that information and synthesizing it contributes to the creation of knowledge. Knowledge can be viewed as a valuable asset for the project and for others. Some knowledge can be easily obtained from text or graphics, but other knowledge comes from experience. Tools exist to help with the management of such knowledge. In the Synergy: Research Practice Transformation (SynergyRPT) project, we used tools such as logic models, innovation attributes, and value creation worksheets to learn about and practice knowledge creation (see http://synergyrpt.org/resources-2). Recently, knowledge management software has been developed that can help organize information for projects. Some useful organizing tools include Trello, Dropbox, SharePoint, BrainKeeper, and IntelligenceBank.

One example of effective evaluation management is the Southwest Center for Microsystems Education, where the external and internal evaluators use a value creation framework as part of continuous improvement, as described in their evaluation handbook by David Hata and James Hyder (see http://bit.ly/scme-eval). This approach has proved useful and helps build understanding within the ATE community. Developing common definitions of terms has been essential to communication among interested parties. Transfer of successful assessment, knowledge creation, and evaluation outcomes continues to provide broader impact for ATE projects.

Certainly the most important part of the evaluator’s and PI’s jobs is to promote a culture of sharing. Without the human desire to share knowledge and build true communities of practice, the knowledge gained is limited to minor tweaks of a single project. Along with the desire to share comes support for strong evaluation plans and ways to disseminate findings. With that mindset, both PIs and evaluators can work with their networks to build trust and create communities of practice that are committed to sustainability and scale. In that way, lessons learned from the evaluation are not lost over time.

Newsletter: Tips for Writing the Results of Prior Support Section for NSF Proposals

Posted on July 1, 2014 in Newsletter

Where are the hidden opportunities to positively influence proposal reviewers? Surprisingly, this is often the Results from Prior Support section. Many proposers do not go beyond simply recounting what they did in prior grants. They miss the chance to “wow” the reader with impact examples, such as Nano-Link’s Nano-Infusion Project, which has integrated nanoscale modules into multiple grade levels of K-14 across the nation. Teachers are empowered with tools to teach nanoscale concepts effectively, as evidenced by their survey feedback. New leaders are emerging, and enthusiasm for science can be seen in the videos available on the website. NSF funding also made possible additional synergistic projects that allowed for scaling activities and growing a national presence.

Any PI who has received NSF support in the past 5 years must include a summary of the results (up to 5 pages) and explain how those results support the current proposal. Because pages in this subsection count toward the 15-page total, many people worry that they are using too much space to describe what has been done. These pages, however, can give the proposal punch and energy through metrics, outcomes, and stories. This is the time to quote the evaluator’s comments and tie the results to the evaluation plan. The external view provides valuable decision-making information to the reviewers. This discussion of prior support helps reviewers evaluate the proposal, allows them to make comments, and provides evidence that the new activities will add value.

According to the NSF Grant Proposal Guide, updated in 2013, the subsection must include: the award number, amount, and period of support; the title of the project; a summary of results described under the separate headings of Intellectual Merit and Broader Impacts; publications acknowledging NSF support; evidence of research products and their availability; and the relation of the completed work to the proposed work.

The bottom line is that the beginning of the project description sets the stage for the entire proposal. Data and examples that demonstrate intellectual merit and broader impacts clearly define what has been done, leaving room for a clear description of the new directions that will require funding.

 

Newsletter: The Power of Evaluator and PI Collaboration

Posted on April 1, 2014 in Newsletter

The PI of an ATE center or project has the responsibility of keeping a strong communication flow with the evaluator. This begins even before the project is funded and continues in a dynamic interchange throughout the funding cycle. There are easy ways that the PI and evaluator can add value to a project. First, simply asking for help is sometimes overlooked.

A recent example demonstrates how an ATE center used the expertise of its evaluator to get specific feedback on the use of clearinghouse materials. The co-PI asked the evaluator for assistance, and the resulting survey allowed the evaluator to gather additional information about the use of curriculum and instructional materials and allowed the center’s PIs to gain valuable input about the use of its existing materials.

Second, it is important to actually use the information gained from the evaluation data. This is a natural, built-in opportunity for the PI and the team to take advantage of impact data to drive the future direction of the center or project. Using data to make decisions provides an opportunity to test assumptions and to learn whether current practices and products are working.

Third, the evaluation develops evidence that can be used to obtain further funding, advance technical education, and contribute to the field of evaluation. Through regular communication and collaboration, the project, the PI, and the evaluator all gain value and can more effectively contribute to the design of current and future projects. Together, the PI and the evaluator can learn about impact, trends, and key successes that are appropriate for scaling. Thus evaluation becomes more than reporting; it becomes a tool for strategic planning.

The Bio-Link evaluator, Candiya Mann, not only provides a written document that can be used for reporting and planning but also works with me to expand my connections with other projects and people that share an interest in using data to drive actions and achieve broader impact. Removing isolation contributes new ideas for metrics and can actually make evaluation fun.

Learn more about Bio-Link at www.bio-link.org.

Newsletter: The PI Guide to Working with Evaluators

Posted on January 1, 2014 in Newsletter

Principal Research Scientist, Education Development Center, Inc.

(originally published as a blog post at ltd.edc.org/strong-pievaluator-partnerships-users-guide on January 10, 2013)

Evaluation can be a daunting task for PIs. It can seem like the evaluator speaks another language, and the stakes for the project can seem very high. Evaluators face their own challenges: often working with tight budgets and timeframes, they are expected to deliver both rigor and relevance, along with evidence of project impact. With all this and more in the mix, it’s no surprise that tension can mount and miscommunication can drive animosity and stress.

As the head of evaluation for the ITEST Learning Resource Center and as an NSF program officer, I saw dysfunctional relationships between PIs and their evaluators contribute to missed deadlines, missed opportunities, and frustration on all sides. As an evaluator, I am deeply invested in building evaluators’ capacity to communicate their work and in helping program staff understand the value of evaluation and what it brings to their programs. I was concerned that these dysfunctional relationships would thwart the potential of evaluation to provide vital information for program staff to make decisions and demonstrate the value of their programs.

To help strengthen PI/evaluator collaborations, I’ve done a lot of what I call “evaluation marriage counseling” for PI/evaluator pairs. Through these “counseling sessions,” I learned that evaluation relationships are not so different from any other relationships. Expectations aren’t always made clear, communication often breaks down, and, more than anything else, all relationships need care and feeding.

As a program officer, I had the chance to help shape and create a new resource that supports PIs and evaluators in forming strong working relationships. Rick Bonney of the Cornell Lab of Ornithology and I developed a guide to working with evaluators, written by PIs, for PIs. Although it was designed for the Informal Science Education community, the lessons translate to just about any situation in which program staff are working with evaluators. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects is available at bit.ly/1l28nTt.

Newsletter: Developing a Culture of Evaluation

Posted on October 1, 2013 in Newsletter

Principal Research Scientist, Education Development Center, Inc.

As an ATE project, you and your team collect a lot of data: You complete the annual monitoring survey, you work with your evaluator to measure outcomes, you may even track your participants longitudinally in order to learn how they integrate their experiences into their lives. As overwhelming as it may seem at times to manage all the data collection logistics and report writing, these data are important to telling the story of your project and the ATE program. Developing a culture of evaluation in your project and your organization can help to meaningfully put these data to use.

Fostering a culture of evaluation in your project means that evaluation practices are not disconnected from program planning, implementation, and reporting. You’re thinking of evaluation in planning project activities and looking for ways to use data to reflect on and improve your work. During implementation, you consult your evaluator regularly so that you can hear what they’re learning from the data collection, and ensure that they know what’s new in the project. And at analysis and reporting times, you’re ensuring that the right people are thinking about how to use the evaluation findings to make improvements and demonstrate your project’s value to important stakeholder audiences. You and your team are reflecting on how the evaluation went and what can be improved. In a project that has an “evaluation culture,” evaluators are partners, collecting important information to inform decision making.

A great example of evaluators-as-partners came from an NSF PI who shared that he regularly talks with his evaluator, peer to peer, about the state of the field, not just about his particular project. He wants to know what his evaluator is learning about practice in the field from other projects, workshops, conferences, and meetings, and he uses these insights to help him reflect on his own work.

Newsletter: What Makes a Good Evaluation Section of a Proposal?

Posted on July 1, 2013 in Newsletter

Principal Research Scientist, Education Development Center, Inc.

As a program officer, I read hundreds of proposals for different NSF programs and I saw many different approaches to writing a proposal evaluation section. From my vantage point, here are a few tips that may help to ensure that your evaluation section shines.

First, make sure to involve your evaluator in writing the proposal’s evaluation section. Program officers and reviewers can tell when an evaluation section was written without the consultation of an evaluator. This makes them think you aren’t integrating evaluation into your project planning.

Don’t just call an evaluator a couple of weeks before the proposal is due! A strong evaluation section comes from a thoughtful, robust, tailored evaluation plan. This takes collaboration with an evaluator! Get them on board early and talk with them often as you develop your proposal. They can help you develop measurable objectives, add insight to the proposal’s organization, and, of course, work with you to develop an appropriate evaluation plan.

Reviewers and program officers look to see that the evaluator understands the project. This can be shown with a logic model or with a paragraph that justifies the evaluation design based on the proposed project design. The evaluation section should also connect the project objectives and targeted outcomes to evaluation questions, data collection methods and analysis, and dissemination plans. This can be done in a matrix format, which helps the reader see clearly which data will answer which evaluation question and how these are connected to the objectives of the project.
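For example, one row of such a matrix might pair a single project objective with its evaluation question, data sources, analysis, and reporting plan. The entries below are purely hypothetical and are included only to illustrate the format:

    Objective: Increase completion of the biotechnology certificate by 20 percent
    Evaluation question: Did certificate completion rates increase over the grant period?
    Data collection: Institutional completion records; student exit survey
    Analysis: Year-over-year comparison of completion rates
    Dissemination: Annual NSF report; industry advisory board briefing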

A strong evaluation plan shows that the evaluator and the project team are in sync and working together, applies a rigorous design and reasonable data collection methods, and answers important questions that will help to demonstrate the value of the project and surface areas for improvement.

Newsletter: Setting the Stage for Useful Reporting

Posted on April 1, 2013 in Newsletter

Principal Research Scientist, Education Development Center, Inc.

In my fifteen years as an evaluator, I’ve written quite a few reports and thought a lot about what makes an evaluation report useful. In addition, I was a program officer at NSF in the Division of Research on Learning, where I was an evaluation client and strove to put evaluation findings to good use. Here are some thoughts on how you can ensure that evaluation information gets used.

Communicating early and often is the foundation for strong evaluation reporting and use. PIs should initiate these conversations about reporting with their evaluators, expressing needs and expectations about when they’d like evaluation reports, on what topics, and in what form.

Would you like a brief report about data collection activities? Talk with your evaluator about how you’d like this to look, what you might do with the data, and how these reports will get included in the annual report. This could be just bullet points about the key findings, or it could be data tables generated from a survey.

Do you want monthly progress reports? Talk with your evaluator about a template for an easy-to-read format. This report might detail funds expended to date, highlight upcoming tasks, and offer a place to raise questions and issues that need timely management.

Would you like a report that you can share with community stakeholders? This could be a one-page list of significant findings, a three-page executive summary, a PowerPoint presentation, or even a shortened version of the full report.

PIs and evaluators can talk about what’s possible, how these choices will affect the budget, and how they plan to work together to ensure that the evaluation reports are targeted for maximum use.