Archive: evidence

Blog: A Call to Action: Advancing Technician Education through Evidence-Based Decision-Making

Posted on May 1, 2019 by Faye R. Jones and Marcia A. Mardis in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Faye R. Jones, Senior Research Associate, Florida State
Marcia A. Mardis, Associate Professor and Assistant Dean, Florida State


Evaluators contribute to developing the Advanced Technological Education (ATE) community’s awareness and understanding of theories, concepts, and practices that can advance technician education at the discrete project level as well as at the ATE program level. Regardless of focus, project teams explore, develop, implement, and test interventions designed to lead to successful outcomes in line with ATE’s goals. At the program level, all ATE community members, including program officers, benefit from reviewing and compiling project outcomes to build an evidence base for better preparing the technical workforce.

Evidence-based decision-making is one way to ensure that project outcomes lead to high-quality and systematic program outcomes. As indicated in Figure 1, good decision-making depends on three domains of evidence within an environmental and organizational context: contextual (e.g., participant characteristics and needs); experiential (i.e., resources, including practitioner expertise); and the best available research evidence (Satterfield et al., 2009).

Figure 1. Domains that influence evidence-based decision-making (Satterfield et al., 2009)

As Figure 1 suggests, at the project level, as National Science Foundation (NSF) ATE principal investigators (PIs) work, evaluators can assist them in making project design and implementation decisions based on the best available research evidence, considering participant, environmental, and organizational dimensions. For example, researchers and evaluators work together to compile the best research evidence about specific populations (e.g., underrepresented minorities) in which interventions can thrive. Then, they establish mutually beneficial researcher-practitioner partnerships to make decisions based on their practical expertise and current experiences in the field.

At the NSF ATE program level, program officers often review and qualitatively categorize project outcomes provided by project teams, including their evaluators, as shown in Figure 2.

 

Figure 2. Quality of Evidence Pyramid (Paynter, 2009)

As Figure 2 suggests, aggregated project outcomes tell a story about what the ATE community has learned and still needs to know about advancing technician education. At the highest levels of evidence, program officers strive to obtain strong evidence that can lead to best practice guidelines and manuals, grounded in quantitative studies and trials and enhanced by rich, in-depth qualitative studies and clinical experiences. Evaluators can meet PIs’ and program officers’ evidence needs with project-level formative and summative feedback (such as outcome and impact evaluations) and program-level data, such as outcome estimates from multiple studies (i.e., meta-analyses of project outcome studies). Through these complementary sources of evidence, evaluators facilitate the sharing of the most promising interventions and best practices.
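For readers who want a concrete picture of what an outcome estimate from multiple studies involves, a standard inverse-variance (fixed-effect) meta-analysis pools each study’s effect estimate, weighting it by its precision. The formula below is a generic illustration of that kind of synthesis, not a method prescribed by the ATE program:

    \hat{\theta}_{\mathrm{pooled}} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\widehat{\mathrm{Var}}(\hat{\theta}_i)}

where \hat{\theta}_i is the outcome estimate from project study i, w_i is its weight, and k is the number of studies being combined. Studies with more precise estimates (smaller variances) contribute more to the pooled result.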

In this call to action, we charge PIs and evaluators with working closely together to ensure that project outcomes are clearly identified and supported by evidence that benefits the ATE community’s knowledge base. Evaluators’ roles include guiding leaders to 1) identify new or promising strategies for making evidence-based decisions; 2) use or transform current data to make informed decisions; and, when needed, 3) document how assessment and evaluation strengthen evidence gathering and decision-making.

References:

Paynter, R. A. (2009). Evidence-based research in the applied social sciences. Reference Services Review, 37(4), 435–450. doi:10.1108/00907320911007038

Satterfield, J., Spring, B., Brownson, R., Mullen, E., Newhouse, R., Walker, B., & Whitlock, E. (2009). Toward a transdisciplinary model of evidence-based practice. The Milbank Quarterly, 87(2), 368–390.

Blog: Getting Ready to Reapply – Highlighting Results of Prior Support

Posted on December 2, 2015 by Amy A. Germuth in Blog

Founder and President, EvalWorks, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello. My name is Amy A. Germuth and I own EvalWorks, LLC, an education evaluation firm in Durham, NC, which has a strong focus on evaluating STEM projects. Having conducted evaluations of ATE and multiple other NSF STEM projects since the early 2000s, I have worked with PIs to help them better respond to NSF solicitations.

For every ATE solicitation, NSF has required that proposers identify the “Results of Prior Support.” NSF requests that proposers provide the following information:

  1. The NSF award number, amount and period of support
  2. The title of the project
  3. A summary of the results of the completed work
  4. A list of publications resulting from the NSF award
  5. A brief description of available data, samples, physical collections, and other related research products not described elsewhere
  6. If the proposal is for renewal of a grant, a description of the relation of the completed work to the proposed work

This is an excellent opportunity for proposers who have been funded previously by NSF to highlight how their prior funds were used to support a positive change among the targeted group or individuals. For point 3, rather than simply stating the number of persons served, proposers should do the following:

  • State briefly the main goal(s) of the project.
  • Identify who was served, how many were served, and in what capacity.
  • Explain the impact on these persons that resulted from their participation in this project.
  • Describe the evidence used to support the above inferences.

An example may read something like this:

“As part of this project, our goal was to increase the number of women who successfully earned an associate’s degree in welding. To this end, we began a targeted recruiting campaign focusing on women who were about to complete or had recently completed related programs such as pipefitting and construction, and we developed a brochure for new students that included positive images of women in welding. We used funding to develop the Women in Welding program and to support its team-building and outreach efforts. Institutional data reveal that since this project started, the number of women in the welding program has almost tripled, from 12 (2006–2010), of whom only 8 graduated, to 34 (2011–2016), of whom 17 have already graduated and 5 have only one semester left. Even if the remaining 17 were not to graduate, the 17 who already have done so is more than double the 8 female students who graduated from the program between 2006 and 2010.”

To summarize, if you have received prior support from NSF, use this opportunity to show how the funding supported project activities that made a difference and, if applicable, how those activities inform your current proposal. Reviewers look to this section to ascertain the degree to which you have been a good steward of the funding you received and what impacts it had. Attention to this section will provide one more measure by which reviewers will judge the likelihood that your proposed project will be successful.

Blog: Intellectual Merit and Broader Impacts: Identifying Your Project’s Achievements and Supporting Evidence

Posted on October 21, 2015 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Updated April 2019 for Throwback Thursday series.

The deadline for proposals to the National Science Foundation’s Advanced Technological Education program for this year just passed, so a blog about what to include in the Results from Prior NSF Support section in an ATE proposal may seem untimely. But if you’re involved in a project that will seek additional funding from NSF in the next year or two, there’s no better time than NOW to assess the quality and quantity of the evidence of your current project’s intellectual merit and broader impacts (NSF’s review criteria for proposed and completed work).

Understand the fundamentals of intellectual merit and broader impacts: In a nutshell, intellectual merit is about advancing knowledge and understanding. Broader impacts are benefits to society. If your project is mainly research, it’s likely that most of your achievements are related to intellectual merit. If you’re mostly doing development and implementation, your achievements are probably more in the area of broader impacts. But you should have evidence of both aspects of your work.

Identify your project’s intellectual merit and broader impacts: To home in on your project’s intellectual merit and broader impacts, it helps to break these big ideas down into smaller chunks. To identify your project’s intellectual merit, ask yourself: What are we doing that is generating new knowledge or improved understanding? Are we using novel research methods or investigating a novel topic to better understand an aspect of STEM education? Is our project transformative, bringing about extraordinary or revolutionary change? In terms of broader impacts, what are we doing to serve groups that have been historically underrepresented in STEM, develop a diverse workforce, create partnerships between academia and industry, enhance education infrastructure, increase economic competitiveness, or improve STEM education in general?

Identify gaps in evidence: It’s not enough to profess your achievements; you need evidence. Evidence is not the method you used to collect data (tests, surveys, observations, etc.); it is what those data show (a genetic test is not evidence that someone committed a crime; the result of that test is). If you don’t have good evidence of important achievements, revise your evaluation plan and start collecting data as soon as possible. Make sure you have evidence of more than just the completion of activities. For example, if your achievement is that you developed a new certification program, then to demonstrate broader impacts you need evidence that it is a high-quality program and that students are enrolling, graduating, and getting jobs (or at least evidence that goes as far down the outcomes chain as is reasonable). Plotting your evidence on a logic model is a good way to figure out whether you have sufficient evidence regarding outcomes as well as activities and outputs.
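As a rough illustration of that last suggestion, you could lay the logic model stages out in a short script (or an equivalent table) and flag where evidence is still missing. The stage names and entries below are hypothetical, offered only as a sketch of the idea, not as part of any NSF requirement:

    # Hypothetical sketch: map each logic model stage to the evidence on hand
    # and flag stages where evidence is still missing.
    logic_model = {
        "Activities": ["Certification program curriculum developed"],
        "Outputs": ["Program approved and offered each semester"],
        "Short-term outcomes": [],  # e.g., enrollment and completion data
        "Long-term outcomes": [],   # e.g., graduates employed in the field
    }

    for stage, evidence in logic_model.items():
        status = "evidence on file" if evidence else "GAP - plan data collection"
        print(f"{stage}: {status}")

Even a simple inventory like this makes clear which links in the chain you can document and which you cannot yet support.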

If you find gaps that will impair your ability to make a compelling case about what you’ve achieved with your current grant, update your evaluation plan accordingly. When you write your next proposal, you will be required to present evidence of your achievements under the specific headings of “Intellectual Merit” and “Broader Impacts” – if you don’t, your proposal is at risk of being returned without review.

 

To learn more, check out these resources:

NSF Grant Proposal Guide (this link goes directly to the section on Results from Prior NSF Support): http://bit.ly/nsf-results

NSF Merit Review Website: http://www.nsf.gov/bfa/dias/policy/merit_review/

NSF Important Notice #130: Transformative Research (for details about what NSF considers transformative, one dimension of intellectual merit): http://www.nsf.gov/pubs/2007/in130/in130.jsp

NSF Examples of Broader Impacts: http://www.nsf.gov/pubs/2002/nsf022/bicexamples.pdf

Perspectives on Broader Impacts: http://www.nsf.gov/od/oia/publications/Broader_Impacts.pdf

National Alliance for Broader Impacts: http://broaderimpacts.net/about/

Materials from our ATE PI conference workshop on this topic, including presentation slides, worksheets, and EvaluATE’s Results from Prior NSF Support Checklist: http://www.evalu-ate.org/library/conference/pi-conference/

Blog: Evidence and Evaluation in STEM Education: The Federal Perspective

Posted on August 12, 2015 in Blog

Evaluation Manager, NASA Office of Education

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

If you have been awarded federal grants over many years, you probably have seen the increasing emphasis on evaluation and evidence. As a federal evaluator working at NASA, I have seen firsthand the government-wide initiative to increase use of evidence to improve social programs. Federal agencies have been strongly encouraged by the administration to better integrate evidence and rigorous evaluation into their budget, management, operational, and policy decisions by:

(1) making better use of already-collected data within government agencies; (2) promoting the use of high-quality, low-cost evaluations and rapid, iterative experimentation; (3) adopting more evidence-based structures for grant programs; and (4) building agency evaluation capacity and developing tools to better communicate what works. (https://www.whitehouse.gov/omb/evidence)

Federal STEM education programs have also been affected by this increasing focus on evidence and evaluation. Read, for example, the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan (2013),1 which was prepared by the Committee on STEM Education of the National Science and Technology Council.2 This strategic plan provides an overview of the importance of STEM education to American society and describes the current state of federal STEM education efforts. It discusses five priority STEM education investment areas in which a coordinated federal strategy is under development, and it presents methods for building and sharing evidence. Finally, the plan lays out several strategic objectives for improving the exploration and sharing of evidence-based practices, including supporting syntheses of existing research that can inform federal investments in the STEM education priority areas, improving and aligning evaluation and research expertise and strategies across federal agencies, and streamlining processes for interagency collaboration (e.g., Memoranda of Understanding, Interagency Agreements).

Another key federal document that is influencing evaluation in STEM agencies is the Common Guidelines for Education Research and Development (2013),3 jointly prepared by the U.S. Department of Education’s Institute of Education Sciences and the National Science Foundation. This document describes the two agencies’ shared understanding of the roles that various types of research play in generating evidence about strategies and interventions for increasing student learning. These research types range from studies that generate fundamental understandings related to education and learning (“Foundational Research”) to studies that assess the impact of an intervention on an education-related outcome, including efficacy research, effectiveness research, and scale-up research. The Common Guidelines provide the two agencies and the broader education research community with a common vocabulary to describe the critical features of these study types.

Both documents have shaped, and will continue to shape, federal STEM programs and their evaluations. Reading them will help federal grantees gain a richer understanding of the larger federal context that is influencing reporting and evaluation requirements for grant awards.

1 A copy of the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan can be obtained here: https://www.whitehouse.gov/sites/default/files/microsites/ostp/stem_stratplan_2013.pdf

2 For more information on the Committee on Science, Technology, Engineering, and Math Education, visit https://www.whitehouse.gov/administration/eop/ostp/nstc/committees/costem.

3 A copy of the Common Guidelines for Education Research and Development can be obtained here: http://ies.ed.gov/pdf/CommonGuidelines.pdf