
Newsletter: How Do You Know if Your Program is Meeting Industry Needs?

Posted on January 1, 2015 by Elaine Johnson in Newsletter

Since the core of the ATE program is preparing a technical workforce, it is critical to match the skills that students are developing with the needs of local industry. Most ATE projects and centers work with local industry and have advisory groups, but additional resources are available for evaluating how well a curricular program is meeting industry needs, on both the industry demand side and the educational supply side.

Burning Glass Labor/Insight™ interactive software can be used to generate real-time demand (jobs) data. The beauty of Burning Glass is that the user can find the job titles for advertised positions. This helps ATE projects gauge the demand for specific jobs and then match that demand against program supply information for comparison. The Standard Occupational Classification (SOC) system is used by federal statistical agencies to classify workers into one of 840 occupational categories. The North American Industry Classification System (NAICS) is the standard used by federal statistical agencies to classify U.S. business establishments. Because the SOC and NAICS codes do not always identify the job titles used in job announcements, it is advisable to use a combination of data sources. By using Burning Glass information in conjunction with the federal codes, a more accurate determination of the demand for technicians can be made. Institutional data, statewide community or technical college data, and student follow-up can be helpful in determining the supply side.
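For projects that can export such data, a rough demand/supply comparison can even be scripted. The short Python sketch below is only a minimal illustration: it assumes two hypothetical CSV exports, one of advertised positions tagged with SOC codes (for example, from a Labor/Insight query) and one of program completers by the same codes (for example, from institutional or statewide data). The file and column names are invented for illustration and are not part of any real Burning Glass interface.

import csv
from collections import Counter

def count_by_soc(path, soc_field, count_field=None):
    # Tally rows (or sum a numeric column) per SOC code.
    totals = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            soc = row[soc_field].strip()
            totals[soc] += int(row[count_field]) if count_field else 1
    return totals

# Demand: one row per advertised position (hypothetical export).
demand = count_by_soc("job_postings.csv", soc_field="soc_code")

# Supply: completers per program, already aggregated (hypothetical export).
supply = count_by_soc("program_completers.csv",
                      soc_field="soc_code", count_field="completers")

# Gap: a positive number suggests more advertised jobs than graduates.
print(f"{'SOC code':<12}{'Demand':>8}{'Supply':>8}{'Gap':>8}")
for soc in sorted(set(demand) | set(supply)):
    d, s = demand.get(soc, 0), supply.get(soc, 0)
    print(f"{soc:<12}{d:>8}{s:>8}{d - s:>8}")

Even a simple tally like this can make conversations with an advisory committee concrete, since gaps by occupation are easier to discuss than raw postings data.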

Environmental scans can focus on labor market needs that warrant a community college response. A recent publication comparing demand and supply is the Life Sciences & Biotechnology Middle Skills Workforce in California report (October 2014), available from coeccc.net.

Newsletter: Using Evaluation Results to Guide Decision Making

Posted on October 1, 2014 by Elaine Johnson in Newsletter

For the PI of an ATE project or center, it is clear that working with evaluators provides key information for the success of the project. Gathering and synthesizing that information contributes to the creation of knowledge. Knowledge can be viewed as a valuable asset for the project and for others. Some knowledge can be easily obtained from text or graphics, but other knowledge comes from experience. Tools exist to help with the management of such knowledge. In the Synergy: Research Practice Transformation (SynergyRPT) project, we used tools such as logic models, innovation attributes, and value creation worksheets to learn about and practice knowledge creation (see http://synergyrpt.org/resources-2). More recently, knowledge management software has been developed that can help organize information for projects. Some useful organizing tools include Trello, Dropbox, SharePoint, BrainKeeper, and IntelligenceBank.

One example of effective evaluation management is the Southwest Center for Microsystems Education, where the external and internal evaluators use a value creation framework as part of continuous improvement, as described in the evaluation handbook by David Hata and James Hyder (see http://bit.ly/scme-eval). This approach has proved useful and helps build understanding within the ATE community. Developing common definitions of terms has been essential to communication among interested parties. The transfer of successful assessment, knowledge creation, and evaluation outcomes continues to provide broader impact for ATE projects.

Certainly the most important part of the evaluator's and PI's jobs is to promote a culture of sharing. Without the human desire to share knowledge and build true communities of practice, knowledge is limited to small tweaks of a project. Along with the desire to share comes support for strong evaluation plans and ways to disseminate findings. With that mindset, both PIs and evaluators can work with their networks to build trust and create communities of practice that are committed to sustainability and scale. In that way, lessons learned from the evaluation are not lost over time.

Newsletter: Tips for Writing the Results of Prior Support Section for NSF Proposals

Posted on July 1, 2014 by Elaine Johnson in Newsletter

Where are the hidden opportunities to positively influence proposal reviewers? Surprisingly, one of the best is the Results from Prior Support section. Many proposers do not go beyond simply recounting what they did in prior grants. They miss the chance to "wow" the reader with impact examples, such as Nano-Link's Nano-Infusion Project, which has resulted in the integration of nanoscale modules into multiple grade levels of K-14 across the nation. Teachers are empowered with tools to teach nanoscale concepts effectively, as evidenced by their survey feedback. New leaders are emerging, and enthusiasm for science can be seen in the videos available on the website. Because of NSF funding, additional synergistic projects allowed for scaling activities and growing a national presence.

Any PI who has received NSF support in the past five years must include a summary of the results (up to five pages) and how those results support the current proposal. Because pages in this subsection count toward the 15-page total, many people worry that they are using too much space to describe what has been done. These pages, however, can give the proposal punch and energy with metrics, outcomes, and stories. This is the time to quote the evaluator's comments and tie the results to the evaluation plan. The external view provides valuable decision-making information to the reviewers. This discussion of prior support helps reviewers evaluate the proposal, allows them to make comments, and provides evidence that the new activities will add value. According to the NSF Grant Proposal Guide, updated in 2013, the subsection must include: award number, amount, and period of support; title of the project; a summary of results described under the separate headings of Intellectual Merit and Broader Impacts; publications acknowledging NSF support; evidence of research products and their availability; and the relation of completed work to proposed work.
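Treating those requirements as a checklist can keep a draft honest. The toy Python sketch below simply paraphrases the GPG elements listed above as a self-check; the field names are invented here for illustration and are not NSF terminology.

# Hypothetical self-check for a Results from Prior Support draft.
REQUIRED = [
    "award_number", "amount", "period_of_support", "project_title",
    "intellectual_merit_summary", "broader_impacts_summary",
    "publications", "research_products", "relation_to_proposed_work",
]

def missing_elements(draft):
    # Return the required elements the draft has not yet addressed.
    return [field for field in REQUIRED if not draft.get(field)]

draft = {"award_number": "DUE-0000000", "project_title": "Example Center"}
print(missing_elements(draft))  # lists everything still to be written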

The bottom line is that the beginning of the project description sets the stage for the entire proposal. Data and examples that demonstrate intellectual merit and broader impacts clearly define what has been done, leaving room for a clear description of the new directions that will require funding.

Newsletter: The Power of Evaluator and PI Collaboration

Posted on April 1, 2014 by Elaine Johnson in Newsletter

The PI of an ATE center or project is responsible for maintaining a strong flow of communication with the evaluator. This begins even before the project is funded and continues as a dynamic interchange throughout the funding cycle. There are easy ways that the PI and evaluator can add value to a project. First, simply asking for help is often overlooked.

A recent example demonstrates how an ATE center used the evaluator's expertise to get specific feedback on the use of clearinghouse materials. The co-PI asked the evaluator for assistance, and a well-designed survey was created that allowed the evaluator to gather additional information about the usage of curriculum and instructional materials and the center's PIs to gain valuable input about the use of existing materials.

Second, it is important to actually use the information gained from the evaluation data. This is a natural, built-in opportunity for the PI and the team to take advantage of impact data to drive the future direction of the center or project. Using data to make decisions provides an opportunity to test assumptions and to learn whether current practices and products are working.

Third, the evaluation develops evidence that can be used to obtain further funding, advance technical education, and contribute to the field of evaluation. Through regular communication and collaboration, the project, the PI, and the evaluator all gain value and can more effectively contribute to the design of current and future projects. Together, the PI and the evaluator can learn about impact, trends, and key successes that are appropriate for scaling. Evaluation thus becomes more than reporting; it becomes a tool for strategic planning.

The Bio-Link evaluator, Candiya Mann, not only provides a written document that can be used for reporting and planning but also works with me to expand my connections with other projects and people who share an interest in using data to drive actions and achieve broader impact. Removing isolation contributes new ideas for metrics and can actually make evaluation fun.

Learn more about Bio-Link at www.bio-link.org.