Newsletter: Fall 2016

Posted on October 19, 2016

Happy New Year!

The calendar year may be coming to a close, but a new academic year just started and many ATE program grantees recently received their award notifications from the National Science Foundation. ‘Tis the season to start up or revisit evaluation plans for the coming year. This digital-only issue of EvaluATE’s newsletter is all about helping project leaders and evaluators get the new evaluation year off on the right track.

Don’t launch (or relaunch) your evaluation before taking these steps

Mentor-Connect’s one-page checklist tells project leaders what they need to do to set the stage for a successful evaluation.

You won’t hear this from anyone else

EvaluATE’s director, Lori Wingate, shares Three Inconvenient Truths about ATE Evaluation in her latest contribution to the EvaluATE blog. You may find them unsettling, but ignorance is not bliss when it comes to these facts about evaluation.

Is your evaluation on track?

Use the Evaluation Progress Checklist to make sure your evaluation is on course. It’s on pages 26-28 in Westat’s Guidelines for Working with Third Party Evaluators, which also includes guidance for resolving problems and other tips for nonevaluators.

Myth: All evaluation stakeholders should be engaged equally

Monitor, facilitate, consult, or co-create? Use our stakeholder identification worksheet to figure out the right way to engage different types of stakeholders in your evaluation.

EvaluATE at the ATE PI Conference: October 26-29

A Practical Approach to Outcome Evaluation: Step-by-Step
WORKSHOP: Wednesday 1-4 p.m.
DEMONSTRATION: Thursday 4:45-5:15 p.m.

SHOWCASES: We will be at all three showcase sessions.

Check out the conference program.

Next Webinar

Did you miss our recent webinars?

Check out slides, handouts, and recordings

Shape the future of EvaluATE

EvaluATE has been funded for another four years! Let us know how you would like us to invest our resources to advance evaluation in the ATE program.

Complete our two-minute survey today.

Want to receive our newsletter via email?

Newsletter: Getting the Most out of Your Logic Model

Posted on July 1, 2016

I recently led two workshops at the American Evaluation Association’s Summer Evaluation Institute. To get a sense of the types of projects that the participants were working on, I asked them to send me a brief project description or logic model in advance of the Institute. I received more than 50 responses, representing a diverse array of projects in the areas of health, human rights, education, and community development. While I have long advocated for logic models as a succinct way to communicate the nature and purpose of projects, it wasn’t until I received these responses that I realized how efficient logic models really are at conveying what a project does, whom it serves, and how it is intended to bring about change.

In reviewing the logic models, I was able to quickly understand the main project activities and outcomes. My workshops were on developing evaluation questions, and I was amazed by how quickly I could frame evaluation questions and indicators based on what was presented in the models. It wasn’t as straightforward with the narrative project descriptions, which were much less consistent in terms of the types of information conveyed and the degree to which the elements were linked conceptually. When participants showed me their models in the workshop, I quickly remembered their projects and could give them specific feedback based on my previous review of their models.

Think of NSF proposal reviewers who have to read numerous 15-page project descriptions. It’s not easy to keep straight all the details of a single project, let alone the details of 10 or more such proposals. In a logic model, all the key information about a project’s activities, products, and outcomes is presented in one graphic. This helps reviewers consume the project information as a “package.” For reviewers who are especially interested in the quality of the evaluation plan, a quick comparison of the evaluation plan against the model will reveal how well the plan is aligned to the project’s activities, scope, and purpose. Specifically, mentally mapping the evaluation questions and indicators onto the logic model provides a good sense of whether the evaluation will adequately address both project implementation and outcomes.

One of the main reasons for creating a logic model—other than the fact it may be required by a funding agency—is to illustrate how key project elements logically relate to one another. I have found that representing a project’s planned activities, products, and outcomes in a logic model format can reveal weaknesses in the project’s plan. For example, there may be an activity that doesn’t seem to lead anywhere or ambitious outcomes that aren’t adequately supported by activities or outputs.  It is much better if you, as a project proposer, spot those weaknesses before an NSF reviewer does. A strong logic model can then serve as a blueprint for the narrative project description—all key elements of the model should be apparent in the project description and vice versa.

I don’t think there is such a thing as the perfect logic model. The trick is to recognize when it is good enough. Check to make sure the elements are located in the appropriate sections of the model, that all main project activities (or activity areas) and outcomes are included, and that they are logically linked. Ask someone from outside your team to review it, and revise if they see problems or opportunities to increase clarity. But don’t overwork it—treat it as a living document that you can update when and if necessary.

Download the logic model template from http://bit.ly/lm-temp.

Newsletter: What’s the Difference Between Outputs, Outcomes, and Impacts?

Posted on July 1, 2016

A common source of confusion among individuals who are learning about logic models is the difference between outputs, outcomes, and impacts. While most people generally understand that project activities are the things that a project does, the other terms may be less straightforward.

Outputs are the tangible products of project activities. I think of outputs as things whose existence can be observed directly, such as websites, videos, curricula, labs, tools, software, training materials, journal articles, and books. They tend to be the things that remain after a project ends or goes away.

Outcomes are the changes brought about through project activities and outputs/products.  Outcomes may include changes in individual knowledge, skills, attitudes, awareness, or behaviors; organizational practices; and broader social/economic conditions.  In her blog post “Outputs are for programs, outcomes are for people” (http://bit.ly/srob0314), Sheila Robinson offers this guidance: “OUTCOMES are changes in program participants or recipients (aka the target population). They can be identified by answering the question:  How will program participants change as a result of their participation in the program?” This is a great way to check to see if your logic model elements are located in the right place.  If the outcomes in your logic model include things that don’t sound like an appropriate answer to that question, then you may need to move things around.

The term impact is usually used to refer to outcomes that are especially large in scope or the ultimate outcomes a project is seeking to bring about. Sometimes the terms impacts and long-term outcomes are used interchangeably.

For example, one of EvaluATE’s main activities is webinars. Outputs of these webinars include resource materials, presentation slides, and recordings. Short-term outcomes for webinar participants are expected to include increased knowledge of evaluation. Mid-term outcomes include changes in their evaluation practice. Long-term outcomes are improved quality and utility of ATE project evaluations. The ultimate intended impact is for ATE projects to achieve better outcomes through strategic use of high-quality evaluations.

Keep in mind that not all logic models use these specific terms, and not everyone adheres to these particular definitions. That’s OK! The important thing when developing a logic model is to understand what YOU mean by these terms and to apply them consistently in your model and elsewhere. And regardless of how you define them, each column in your model should present new information, not a reiteration of something already communicated.

Newsletter: ATE Logic Model Template

Posted on July 1, 2016

A logic model is a graphic depiction of how a project translates its resources into activities and outcomes. The ATE Project Logic Model Template presents the basic format for a logic model, with question prompts and examples to guide users in distilling their project plans into succinct statements about planned activities and products and desired outcomes. Paying attention to the prompts and ATE-specific examples will help users avoid common logic model mistakes, like placing outputs (tangible products) under outcomes (changes in people, organizations, or conditions brought about through project activities and outputs).

The template is in PowerPoint so you may use the existing elements and start creating your own logic model right away—just delete the instructional parts of the document and input your project’s information.  We have found that when a document has several graphic elements, PowerPoint is easier to work in than Word.  Alternatively, you could create a simple table in Word that mirrors the layout in the template.

Formatting tips:

  • If you find you need special paper to print the logic model and maintain its legibility, it’s too complicated. It should be readable on an 8.5” x 11” sheet of paper. If you simply have too much information to fit on a single page, use general summary statements or categories in the model and put the detailed explanations in a proposal narrative or other project planning document.
  • You may wish to add arrows to connect specific activities to specific outputs or outcomes.  However, if you find that all activities are leading to all outcomes (and that is actually how the project is intended to work), there is no need to clutter your model with arrows leading everywhere.
  • Use a consistent font and font size.
  • Align, align, align! Alignment is one of the most important design principles. When logic model elements are out of alignment, the model can seem messy and unprofessional.
  • Don’t worry if your logic model doesn’t capture all the subtle nuances of your project. It should provide an overview of what a project does and is intended to accomplish and  convey a clear logic as to how the pieces are connected.  Your proposal narrative or project plan is where the details go.

Download the template from http://bit.ly/lm-temp.

Newsletter: Project Spotlight: Geospatial Technician Education – Unmanned Aircraft Systems & Expanding Geospatial Technician Education through Virginia’s Community Colleges

Posted on July 1, 2016

Chris Carter is the Deputy Director of the Virginia Space Grant Consortium, where he leads two ATE projects.

How do you use logic models in your ATE projects?

Our team recently received our fourth ATE award, which will support the development of academic pathways and faculty training in unmanned aircraft systems (UAS). UAS, when combined with geospatial technologies, will revolutionize spatial data collection and analysis.

Visualizing desired impacts and outcomes is an important first step to effective project management. Logic models are wonderful tools for creating a roadmap of key project components. As a principal investigator on two ATE projects, I have used logic models to conceptualize project outcomes and the change that our team desires to create. Logic models are also effective tools for articulating the inputs and resources that are leveraged to offer the activities that bring about this change.

With facilitation and guidance from our partner and external evaluator, our team developed several project logic models. We developed one overarching project logic model to conceptualize the intended outcomes and desired change of the regional project. Each community college partner also developed a logic model to capture its unique goals and theory of change while also articulating how it contributes to the larger effort. These complementary logic models allowed the team members to visualize and understand their contributions while ensuring everyone was on the same path.

Faculty partners used these logic models to inform their administrations, business partners, and employers about their work. They are great tools for sharing the vision of change and building consensus among key stakeholders.

Our ATE projects are focused on creating career pathways and building faculty competencies to prepare technicians. The geospatial and UAS workforce is a very dynamic employment sector that is constantly evolving. We find logic models helpful tools for keeping the team and partners focused on the desired outputs and outcomes. The models remind us of our goals and help us understand how the components fit together. It is crucial to identify the project inputs and understand that as these evolve, project activities also need to evolve. Constantly updating a logic model and understanding the relationships between the various sections are key pieces of project management.

I encourage all ATE project leaders to work closely with their project evaluators and integrate logic models. Our external evaluator was instrumental in influencing our team to adopt these models. Project evaluators must be viewed as team members and partners from the beginning. I cannot imagine effectively managing a project without the aid of this project blueprint.

Newsletter: Revisiting Intellectual Merit and Broader Impacts

Posted on January 1, 2016

If you have ever written a proposal to the National Science Foundation (NSF) or participated in a proposal review panel for NSF, you probably instantly recognize the terms Intellectual Merit and Broader Impacts as NSF’s merit review criteria. Proposals are rated and funding decisions are made based on how well they address these criteria. Therefore, proposers must describe the potential of their proposed work to advance knowledge and understanding (Intellectual Merit) and benefit society (Broader Impacts).

Like cramming for an exam and then forgetting 90 percent of what you memorized, it’s all too easy for principal investigators to lose sight of Intellectual Merit and Broader Impacts after proposal submission. But there are two important reasons to maintain focus on Intellectual Merit and Broader Impacts after an award is made and throughout project implementation.

First, the goals and activities expressed in a proposal are commitments about how a particular project will advance knowledge (Intellectual Merit) and bring tangible benefits to individuals, institutions, communities, and/or our nation (Broader Impacts). Simply put, PIs have an ethical obligation to follow through on these commitments to the best of their abilities.

Second, when funded PIs seek subsequent grants from NSF, they must describe the results of their prior NSF funding in terms of Intellectual Merit and Broader Impacts. In other words, proposers must explain how they used their NSF funding to actually advance knowledge and understanding and benefit society. PIs who have evidence of their accomplishments in these areas and can convey it succinctly will be well-positioned to seek additional funding. To ensure evidence of both Intellectual Merit and Broader Impacts is being captured, PIs should revisit project evaluation plans with their evaluators, crosschecking the proposal’s claims about potential Intellectual Merit and Broader Impacts against the evaluation questions and data collection plan.

Last October, I conducted a workshop on this topic at the ATE Principal Investigators Conference with colleague Kirk Knestis, an evaluator from Hezel Associates. Dr. Celeste Carter, ATE program co-lead, spoke about how to frame results of prior NSF support in proposals. She noted that a common misstep she has seen is proposers addressing results from prior support by simply reiterating what they said they were going to do in their funded proposals, rather than describing the actual outcomes of the grant. Project summaries (the one-page descriptions, required in all NSF proposals, that address a proposed project’s Intellectual Merit and Broader Impacts) are necessarily written in a prospective, future-oriented manner because the work hasn’t been initiated yet. In contrast, the Results of Prior NSF Support section is about completed work and therefore is written in past tense and should include evidence of accomplishments. Describing achievements and presenting evidence of the quality and impact of those achievements shows reviewers that the proposer is a responsible steward of federal funds, can deliver on promises, and is building on prior success.

Take time now, well before it is time to submit a new proposal or a Project Outcomes Report, to make sure you haven’t lost sight of the Intellectual Merit and Broader Impacts aspects of your grant and how you promised to contribute to these national priorities.

Newsletter: How can PIs demonstrate that their projects have “advanced knowledge”?

Posted on January 1, 2016

NSF’s Intellectual Merit criterion is about advancing knowledge and understanding within a given field or across fields. Publication in peer-reviewed journals provides strong evidence of the Intellectual Merit of completed work. It is an indication that the information generated by a project is important and novel. The peer review process ensures that articles meet a journal’s standard of quality, as determined by a panel of reviewers who are subject matter experts.

In addition, publishing in an academic journal is the best way of ensuring that the new knowledge you have generated is available to others, becomes part of a shared scientific knowledge base, and is sustained over time. Websites and digital libraries tend to come and go with staff and funding changes. Journals are archived by libraries worldwide and, importantly, indexed to enable searches using standard search terms and logic. Even if a journal is discontinued, its articles remain available through libraries. Conference presentations are important dissemination vehicles, but don’t have the staying power of publishing. Some conferences publish presented papers in conference proceedings documents, which helps with long-term accessibility of information presented at these events.

The peer review process that journals employ to determine if they should publish a given manuscript is essentially an evaluative process. A small group of reviewers assesses the manuscript against criteria established for the journal. If the manuscript is accepted for publication, it met the specified quality threshold. Therefore, it is not necessary for the quality of published articles produced by ATE projects to be separately evaluated as part of the project’s external evaluation. However, it may be worthwhile to investigate the influence of published works, such as through citation analysis (i.e., determination of the impact of a published article based on the number of times it has been cited—to learn more, see http://bit.ly/cit-an).

Journals focused on two-year colleges and technical education are good outlets for ATE-related publications. Examples include Community College Enterprise, Community College Research Journal, Community College Review, Journal of Applied Research in the Community College, New Directions for Community Colleges, Career and Technical Education Research, Journal of Career and Technical Education, and Journal of Education and Work. (For more options, see the list of journals maintained by the Center of Education and Work (CEW) at the University of Wisconsin at http://bit.ly/cew-journals.)

NSF’s Intellectual Merit criterion is about contributing to collective knowledge. For example, if a project develops embedded math modules for inclusion in an electrical engineering e-book, students may improve their understanding of math concepts and how they relate to a technical task—and that is certainly important given the goals of the ATE program. However, if the project does not share what was learned about developing, implementing, and evaluating such modules and present evidence of their effectiveness so that others may learn from and build on those advances, the project hasn’t advanced disciplinary knowledge and understanding.

If you are interested in preparing a journal manuscript to disseminate knowledge generated by your project, first look at the type of articles that are being published in your field (check out CEW’s list of journals referenced above). You will get an idea of what is involved and how the articles are typically structured. Publishing can become an important part of a PI’s professional development, as well as a project’s overall effort to disseminate results and advance knowledge.

Newsletter: Communicating Results from Prior NSF Support

Posted on January 1, 2016

ATE proposal season, which arrives in early October, is still many months away, but if you are submitting for new funding this year, now is the time to reflect on your project’s achievements and make sure you will be able to write a compelling account of your current or past project’s results as they relate to the NSF review criteria of Intellectual Merit and Broader Impacts. A section titled Results from Prior NSF Support is required whenever a proposal PI or co-PI has received previous grants from NSF in the past five years. A proposal may be returned without review if it does not use the specific headings of “Intellectual Merit” and “Broader Impacts” when presenting results from prior support.

Given that these specific headings are required, you should have something to say about your project’s achievements in these distinct areas. It is OK for some projects to emphasize one area over another (Intellectual Merit or Broader Impacts), but grantees should be able to demonstrate value in both areas. Descriptions of achievements should be supported with evidence. Bold statements about a proposed project’s potential broader impacts, for example, will be more convincing to reviewers if the proposer can describe tangible benefits of previously funded work.

To help with this aspect of proposal development, EvaluATE has created a Results from Prior NSF Support Checklist (see http://bit.ly/prior-check). This one-page checklist covers the NSF requirements for this section of a proposal, as well as our additional suggestions for what to include and how.

Two EvaluATE blog posts offer additional guidance in this area: Amy Germuth (http://bit.ly/ag-reapply) provides specific advice on wording and structure, and Lori Wingate (http://bit.ly/nsf-merit) shares tips for assessing the quality and quantity of evidence of a project’s Intellectual Merit and Broader Impacts, with links to helpful resources.

The task of identifying and collecting evidence of results from prior support should not wait until proposal writing time. It should be embedded in a project’s ongoing evaluation.

Newsletter: Project Spotlight: PATHTECH Successful Academic & Employment Pathways in Advanced Technologies

Posted on January 1, 2016

Will Tyson is PI for PathTech, an ATE targeted research project. He is an associate professor of sociology at the University of South Florida. Learn more about his project at
www.sociology.usf.edu/pathtech/.

Q: What advice do you have for PIs who want to pursue targeted research in technician education?

The Targeted Research on Technician Education strand of ATE is an ideal avenue for current ATE PIs looking to fund small projects to learn more about student outcomes resulting from prior activities. The best advice I have is to seek out scholars with backgrounds in social science and education, preferably with NSF experience, to partner with on a targeted research submission.

Q: You’ve published numerous articles on your research. What is your sense of what journal editors and reviewers are looking for when it comes to research on technician education?

I’m not sure journal editors and reviewers are actually looking for research on technician education. This is both a challenge and an opportunity. Most STEM education research generally ignores the “T” and focuses on traditional pathways to science, engineering, and mathematics degrees and careers. I think people know “good tech jobs” exist, but generally lack knowledge about the educational pathways to those jobs and the rich life stories of community college students in technician education programs.

Q: How do you see ATE research fitting within the NSF-IES Common Guidelines for Education Research and Development?

I think there are some challenges to fitting ATE research into the Common Guidelines. There are several research types and ATE researchers have to be careful to make sure the type they choose is the best fit for their research questions. The Guidelines are a good start for new investigators, but senior investigators should continue to build upon their work and use prior research to justify their new research ideas.

Q: Based on your experience as an NSF proposer and reviewer, what are some common mistakes when it comes to targeted research proposals?

Everyone should pay close attention to the goals of the Targeted Research on Technician Education track as outlined in the ATE program solicitation, which are to stimulate and support research on technician education and build the partnership capacity between 2- and 4-year institutions to design and conduct research and development projects. All projects should focus on studying education through partnerships between 2- and 4-year institutions. In my experience, targeted research proposals tend to be led by 2-year college faculty or by scholars from 4-year institutions or private research institutes. The 2-year personnel often lack the capacity to conduct targeted research because of limited experience or staffing, as evidenced by their biosketches. On the other hand, 4-year personnel tend to lack familiarity with 2-year colleges and seek to use students as “guinea pigs.” Proposals often do not show that the scholar will be able to recruit student participants. Targeted research proposals should show clear evidence that the 2- and 4-year institutions or private research institutes are going to work collaboratively.

Newsletter: Shorten the Evaluation Learning Curve: Avoid These Common Pitfalls

Posted on October 1, 2015

This EvaluATE newsletter issue is focused on getting started with evaluation. It’s oriented to new ATE principal investigators who are getting their projects off the ground, but I think it holds some good reminders for veteran PIs as well. To shorten the evaluation learning curve, avoid these common pitfalls:

Searching for the truth about “what NSF wants from evaluation.” NSF is not prescriptive about what an ATE evaluation should or shouldn’t look like. So, if you’ve been concerned that you’ve somehow missed the one document that spells out exactly what NSF wants from an ATE evaluation—rest assured, you haven’t overlooked anything. But there is information that NSF requests from all projects in annual reports and that you are asked to report on the annual ATE survey. So it’s worthwhile to preview the Research.gov reporting template (bit.ly/nsf_prt) and the ATE annual survey questions (bit.ly/ATEsurvey16). And if you’re doing research, be sure to review the Common Guidelines for Education Research and Development – which are pretty cut-and-dried criteria for different types of research (bit.ly/cg-checklist). Most importantly, put some time into thinking about what you, as a project leader, need to learn from the evaluation. If you’re still concerned about meeting expectations, talk to your program officer.

Thinking your evaluator has all the answers. Even for veteran evaluators, every evaluation is new and has to be tailored to context. Don’t expect your evaluator to produce a detailed, actionable evaluation plan on Day 1. He or she will need to work out the details of the plan with you. And if something doesn’t seem right to you, it’s OK to ask for something different.

Putting off dealing with the evaluation until you are less busy. “Less busy” is a mythical place and you will probably never get there. I am both an evaluator and a client of evaluation services, and even I have been guilty of paying less attention to evaluation in favor of “more urgent” matters. Here are some tips for ensuring your project’s evaluation gets the attention it needs: (a) Set a recurring conference call or meeting with your evaluator (e.g., every two to three weeks); (b) Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation matters; (c) Give someone other than the PI responsibility for attending to the evaluation—not to replace the PI’s attention, but to ensure the PI and other project members are staying on top of the evaluation and communicating regularly with the evaluator; (d) Commit to using the evaluation results in a timely way—if you do something on a recurring basis, make sure you gather feedback from those involved and use it to improve the next activity.

Assuming you will need your first evaluation report at the end of Year 1. PIs must submit their annual reports to NSF within the 90 days prior to the end of the current budget period. So if your grant started on September 1, your first annual report is due between the beginning of June and the end of August. And it will take some time to prepare, so you should probably start writing a month or so before you plan to submit it. You’ll want to include at least some of your evaluation results, so start working with your evaluator now to figure out what information is most important to collect for your Year 1 report.

Veteran PIs: What tips do you have for shortening the evaluation learning curve? Submit a blog post to EvaluATE and tell your story and lessons learned for the benefit of new PIs: evalu-ate.org/category/blog/.