EvaluATE Blog - General Issues

Blog: Evaluating Network Growth through Social Network Analysis

Posted on May 11, 2017

Tracie Evans Reding, Doctoral Student, College of Education, University of Nebraska at Omaha


One of the most impactful takeaways from the ATE Principal Investigators Conference I attended in October 2016 was the growing use of Business and Industry Leadership Team (BILT) partnerships to develop and implement new STEM curricula throughout the country. The need for cross-sector partnerships has become apparent and has been reinforced through specific National Science Foundation (NSF) grants.

The need for empirical data about networks and collaborations is increasing within the evaluation realm, and social network surveys are one method of quickly and easily gathering such data. Social network surveys come in a variety of forms. The social network survey I have used is in a roster format: every participant in the program is listed, and each individual completes the survey by selecting the option that best describes their relationship with each of the others. The options range from not knowing that person at one extreme to having formally collaborated with that person at the other. In the past, data from these types of surveys were analyzed through social network analysis, which required substantial programming knowledge. Thanks to recent technological advancements, newer social network analysis programs make this analysis much more approachable for non-programmers.

I have worked on an NSF-funded project at the University of Nebraska at Omaha whose goal is to provide professional development and facilitate the growth of a network of middle school teachers who create and implement computer science lessons within their current curriculum (visit the SPARCS website). One of the methods for evaluating the facilitation of the network is a social network analysis questionnaire. This method has proved very helpful in determining the extent to which the professional relationships of the cohort members have evolved over the course of their year-long experience in the program.
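To make the roster format concrete, here is a minimal sketch of how responses of this kind could be turned into a network and summarized. It uses Python's networkx library purely for illustration (my own analysis, described below, used NodeXL), and the participant names and the intermediate points on the relationship scale are hypothetical; only the two endpoints ("does not know" and "has formally collaborated") come from the survey description above.

```python
# Illustrative sketch only (not the SPARCS instrument): converting roster-style
# survey responses into a directed, weighted network with networkx.
# Names and the 0-3 scale encoding are hypothetical.
import networkx as nx

# Each row: (respondent, person being rated, relationship level)
# 0 = does not know ... 3 = has formally collaborated
responses = [
    ("Teacher A", "Teacher B", 3),
    ("Teacher A", "Teacher C", 1),
    ("Teacher B", "Teacher A", 2),
    ("Teacher C", "Teacher A", 0),
]

G = nx.DiGraph()
for source, target, level in responses:
    if level > 0:                      # omit "does not know" ties
        G.add_edge(source, target, weight=level)

# Simple whole-network and node-level measures often reported in SNA
print("Density:", nx.density(G))
print("In-degree centrality:", nx.in_degree_centrality(G))
```

Administering the same roster survey at the start and end of a program and comparing measures such as network density would show how much participants' professional relationships have grown.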

The social network analysis program I have been using is NodeXL, an Excel add-in. It is very user-friendly and can easily be used to generate quantitative data on network development. I was able to take the data gathered from the social network analysis, conduct research, and present my article, “Identification of the Emergent Leaders within a CSE Professional Development Program,” at an international conference in Germany. While the article is not focused on evaluation, it does review the survey instrument itself. You may access the article through this link (note that your organization may need ACM access): Tracie Evans Reding WiPSCE Article. The article is also posted on my Academia.edu page.

Another National Science Foundation funding strand emphasizing networks is Inclusion across the Nation of Communities of Learners of Underrepresented Discoverers in Engineering and Science (INCLUDES). The long-term goal of NSF INCLUDES is to “support innovative models, networks, partnerships, technical capabilities and research that will enable the U.S. science and engineering workforce to thrive by ensuring that traditionally underrepresented and underserved groups are represented in percentages comparable to their representation in the U.S. population.” Noted in the synopsis for this funding opportunity is the importance of “efforts to create networked relationships among organizations whose goals include developing talent from all sectors of society to build the STEM workforce.” The increased funding available for cross-sector collaborations makes it imperative that evaluators be able to empirically measure these collaborations. While the notion of “networks” is not new, the availability of resources such as NodeXL will make evaluating these networks much easier.

 

Full Citation for Article:

Evans Reding, T., Dorn, B., Grandgenett, N., Siy, H., Youn, J., Zhu, Q., & Engelmann, C. (2016). Identification of the Emergent Teacher Leaders within a CSE Professional Development Program. Proceedings of the 11th Workshop in Primary and Secondary Computing Education. Münster, Germany: ACM.

Blog: Evolution of Evaluation as ATE Grows Up

Posted on March 15, 2017

Independent Consultant


I attended a packed workshop by EvaluATE called “A Practical Approach to Outcome Evaluation” at the 2016 NSF ATE Principal Investigators Conference. Two lessons from the workshop reminded me that the most significant part of the evaluation process is the demystification of the process itself:

  • “Communicate early and often with human data sources about the importance of their cooperation.”
  • “Ensure everyone understands their responsibilities related to data collection.”

Stepping back, it made me reflect upon the evolution of evaluation in the ATE community. When I first started out in the ATE world in 1995, I was on the staff of one of the first ATE centers ever funded. Back then, being “evaluated” was perceived as quite a different experience, something akin to taking your first driver’s test or defending a dissertation—a meeting of the tester and the tested.

As the ATE community has matured, so has our approach to both evaluation and the integral communication component that goes with it. When we were a fledgling center, the meetings with our evaluator could have been a chance to take advantage of the evaluation team’s many years of experience of what works and what doesn’t. Yet, at the start we didn’t realize that it was a two-way street where both parties learned from each other. Twenty years ago, evaluator-center/project relationships were neither designed nor explained in that fashion.

Today, my colleague, Dr. Sandra Mikolaski, and I are co-evaluators for NSF ATE clients who range from a small new-to-ATE grant (there weren’t any of those back in the day!) to a large center grant that provides resources to a number of other centers and projects and even has its own internal evaluation team. The experience of working with our new-to-ATE client was perhaps what forced us to be highly thoughtful about how we hope both parties view their respective roles and input. Because the “fish don’t talk about the water” (i.e., project teams are often too close to their own work to toot their own horn), evaluators can provide not only perspective and advice, but also connections to related work and to other project and center principal investigators. This perspective can have a tremendous impact on how activities are carried out and on the goals and objectives of a project.

We use EvaluATE webinars like “User-Friendly Evaluation Reports” and “Small-Scale Evaluation” as references and resources not only for ourselves but also for our clients. These webinars help them understand that an evaluation is not meant to assess and critique, but to inform, amplify, modify, and benefit.

We have learned from being on the other side of the fence that an ongoing dialog, an ethnographic approach (on-the-ground research, participant observation, a holistic perspective), and a formative, input-based partnership with our clients make for a more fruitful process for everyone.

Blog: National Science Foundation-funded Resources to Support Your Advanced Technological Education (ATE) Project

Posted on August 3, 2016

Doctoral Associate, EvaluATE


Did you know that other National Science Foundation programs focused on STEM education also have centers that provide services to their projects? EvaluATE offers evaluation-specific resources for the Advanced Technological Education program, while some of the other centers are broader in scope and purpose, offering technical support, resources, and information targeted at projects within specific NSF funding programs. A brief overview of each of these centers is provided below, highlighting evaluation-related resources. Be sure to check out the sites for further information if you see something that might be of value for your project!

The Community for Advancing Discovery Research in Education (CADRE) is a network for NSF’s Discovery Research K-12 (DR K-12) program. The evaluation resource on the CADRE site is a paper on evaluation options (formative and summative), which differentiates evaluation from the research and development efforts carried out as part of project implementation. Other, more general resources include guidelines and tools for proposal writing, a library of reports and briefs, and a video showcase of DR K-12 projects.

The Center for the Advancement of Informal Science Education (CAISE) has an evaluation section of its website that is searchable by type of resource (i.e., reports, assessment instruments, etc.), learning environment, and audience. For example, there are over 850 evaluation reports and 416 evaluation instruments available for review. The site hosts the Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects, which was developed as an initiative of the Visitor Studies Association and has sections such as working with an evaluator, developing an evaluation plan, creating evaluation tools and reporting.

The Math and Science Partnership Network (MSPnet) supports the Math and Science Partnership (MSP) and STEM+C (computer science) communities. MSPnet has a digital library with over 2,000 articles; a search using the term “eval” found 467 listings, dating back to 1987. There is a toolbox with materials such as assessments, evaluation protocols, and form letters. Other resources in the MSPnet library include articles and reports related to teaching and learning, professional development, and higher education.

The Center for Advancing Research and Communication (ARC) supports the NSF Research and Evaluation on Education in Science and Engineering (REESE) program through technical assistance to principal investigators. Evaluation-specific resources include material from a workshop on implementation evaluation (also known as process evaluation).

The STEM Learning and Research Center (STELAR) provides technical support for the Innovative Technology Experiences for Students and Teachers (ITEST) program. Its website includes links to a variety of instruments, such as the Grit Scale, which can be used to assess students’ resilience in learning and could be part of a larger evaluation plan.

Blog: Professional Development Opportunities in Evaluation – What’s Out There?

Posted on April 29, 2016

Doctoral Associate, EvaluATE


To assist the EvaluATE community in learning more about evaluation, we have compiled a list of free and low-cost online and short-term professional development opportunities. There are always new things available, so this is only a place to start!  If you run across a good resource, please let us know and we will add it to the list.

Free Online Learning

Live Webinars

EvaluATE provides webinars created specifically for projects funded through the National Science Foundation’s Advanced Technological Education program. The series includes four live events per year. Recordings, slides, and handouts of previous webinars are available. http://www.evalu-ate.org/category/webinars/

MEASURE Evaluation is a USAID-funded project with resources targeted to the field of global health monitoring and evaluation. Webinars are offered nearly every month on various topics related to impact evaluation and data collection; recordings of past webinars are also available. http://www.cpc.unc.edu/measure/resources/webinars

Archived Webinars and Videos

Better Evaluation’s archives include recordings of an eight-part webinar series on impact evaluation commissioned by UNICEF. http://betterevaluation.org/search/site/webinar

Centers for Disease Control’s National Asthma Control Program offers recordings of its four-part webinar series on evaluation basics, including an introduction to the CDC’s Framework for Program Evaluation in Public Health. http://www.cdc.gov/asthma/program_eval/evaluation_webinar.htm

EvalPartners has offered several webinars on topics related to monitoring and evaluation (M&E) and also has a series of self-paced e-learning courses. The focus of all programs is to improve competency in conducting evaluation, with an emphasis on evaluation in the community development context. http://www.mymande.org/webinars

Engineers Without Borders partners with communities to help them meet their basic human needs. They offer recordings of their live training events focused on monitoring, evaluation, and reporting. http://www.ewb-usa.org/resources?_sfm_cf-resources-type=video&_sft_ct-international-cd=impact-assessment

The University of Michigan School of Social Work has created six free interactive web-based learning modules on a range of evaluation topics. The target audience is students, researchers, and evaluators. Each module ends with a competency skills test, and a printable certificate of completion is available. https://sites.google.com/a/umich.edu/self-paced-learning-modules-for-evaluation-research/

Low-Cost Online Learning

The American Evaluation Association (AEA) Coffee Break Webinars are 20-minute sessions on a variety of topics. At this time, non-members may register for the live webinars, but you must be a member of AEA to view the archived broadcasts. There are typically one or two sessions offered each month. http://comm.eval.org/coffee_break_webinars/coffeebreak

AEA’s eStudy program is a series of in-depth, real-time professional development courses; these sessions are not recorded. http://comm.eval.org/coffee_break_webinars/estudy

The Canadian Evaluation Society (CES) offers webinars to members on a variety of evaluation topics. Reduced membership rates are available for members of AEA. http://evaluationcanada.ca/webinars

Face-to-Face Learning

AEA Summer Evaluation Institute is offered annually in June, with a number of workshops and conference sessions.  http://www.eval.org/p/cm/ld/fid=232

The Evaluators’ Institute offers one- to five-day courses in Washington, DC, in February and July. Four levels of certificates are available to participants. http://tei.cgu.edu/

Beyond these professional development opportunities, university degree and certificate programs are listed on the AEA website under the “Learn” tab.  http://www.eval.org/p/cm/ld/fid=43

Blog: Researching Evaluation Practice while Practicing Evaluation

Posted on November 10, 2015

Director of Research, The Evaluation Center at Western Michigan University


There is a dearth of research on evaluation practice, particularly of the sort that practitioners can use to improve their own work (according to Nick Smith in a forthcoming edition of New Directions for Evaluation, “Using Action Design Research to Research and Develop Evaluation Practice”1,2).

Action design research is described by Dr. Smith as a “strategy for developing and testing alternative evaluation practices within a case-based, practical reasoning view of evaluation practice.” This approach is grounded in the understanding that evaluation is not a “generalizable intervention to be evaluated, but a collection of performances to be investigated” (p. 5). Importantly, action design research is conducted in real time, in authentic evaluation contexts. Its purpose is not only to better understand evaluation practices, but to develop effective solutions to common challenges.

We at EvaluATE are always on the lookout for opportunities to test out ideas for improving evaluation practice as well as our own work in providing evaluation education.  A chronic problem for many evaluators is low response rates. Since 2009, EvaluATE has presented 4 to 6 webinars per year, each concluding with a brief feedback survey. Given that these webinars are about evaluation, a logical conclusion is that participants are predisposed to evaluation and will readily complete the surveys, right? Not really. Our response rates for these surveys range from 34 to 96 percent, with an average of 60 percent. I believe we should consistently be in the 90 to 100 percent range.

So in the spirit of action design research on evaluation, I decided to try a little experiment. At our last webinar, before presenting any content, I showed a slide with the following statement beside an empty checkbox: “I agree to complete the <5-minute feedback survey at the end of this webinar.” I noted the importance of evaluation for improving our center’s work and for our accountability to the National Science Foundation. We couldn’t tell exactly how many people checked the box, but it was clear that several did. I was optimistic that asking for this public (albeit anonymous) commitment at the start of the webinar would boost response rates substantially.

The result: 72 percent completed the survey. Pretty good, but well short of my standard for excellence. It was our eighth-highest response rate ever and the highest for the past year, but four of the five webinar surveys in 2013-14 had response rates between 65 and 73 percent. As is so often the case in research, the initial results are inconclusive, and we will have to investigate further: How are webinar response rates affected by audience composition, perceptions of the webinar’s quality, or asking for participation multiple times? As Nick Smith pointed out in his review of a draft of this blog: “What you are really after is not just a high response rate, but a greater understanding of what affects webinar evaluation response rates. That kind of insight turns your efforts from local problem solving to generalizable knowledge – from Action Design Problem Solving to Action Design Research.”
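For anyone who wants to track this kind of comparison across their own webinars, here is a minimal sketch of the underlying arithmetic in Python. The attendee and respondent counts are hypothetical placeholders; only the benchmark figures quoted above (a 34 to 96 percent range with a 60 percent average) come from this post.

```python
# Minimal sketch of the response-rate comparison described above.
# All counts below are hypothetical; only the benchmark range/average
# (34-96%, averaging about 60%) come from the post.
historical_rates = [0.34, 0.52, 0.60, 0.65, 0.73, 0.96]  # hypothetical past webinars

attendees = 150        # hypothetical attendance at the experimental webinar
respondents = 108      # hypothetical number of completed surveys

rate = respondents / attendees
mean_rate = sum(historical_rates) / len(historical_rates)
rank = sorted(historical_rates + [rate], reverse=True).index(rate) + 1

print(f"Response rate: {rate:.0%}")          # 72%
print(f"Historical average: {mean_rate:.0%}")
print(f"Rank among all webinars: {rank}")
```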

I am sharing this experience not because I found the sure-fire way to get people to respond to webinar evaluation surveys. Rather, I am sharing it as a lesson learned and to invite you to conduct your own action design research on evaluation and tell us about it here on the EvaluATE blog.

1 Disclosure: Nick Smith is the chairperson of EvaluATE’s National Visiting Committee, an advisory panel that reports to the National Science Foundation.

2 Smith, N. L. (in press). Using action design research to research and develop evaluation practice. In P. R. Brandon (Ed.), Recent developments in research on evaluation. New Directions for Evaluation.

Blog: Quick Graphics with Canva

Posted on November 4, 2015

Project Manager, The Evaluation Center


Quick Graphics

In your ATE project, you often have to develop different communication materials for your stakeholders. To make them more interesting, and to move beyond clip art, you might want to consider moving up to a graphic design tool. In this blog post, I share my experience with graphic design programs and give a quick tour of how I use Canva for this purpose.

When it comes to graphic work, I have a tendency to keep to my old-school ways. I love Adobe products; I have been using them for fifteen years and don’t like to veer off my path. But when I offer advice to beginners, I steer them away from Adobe. First, there is a steep learning curve: many people are intimidated by the thought of Adobe Illustrator or Photoshop and won’t even attempt to learn them. Second, these products can be expensive. So with two strikes against Adobe and the constant challenge to try something new, I ventured out into the wild and tried Canva.

Free: Canva.com is a free online graphic design tool. It has a variety of pre-sized design templates you can choose from, which can be used for social media, blogs, email, presentations, posters, and other graphic materials; or you can create your own. Canva provides you with the choice of several different graphic sizes, which takes the guesswork out of designing for social media or print. Once your canvas size is set, you enter into design mode. Canva features a library of over one million graphics, layouts, and illustrations to choose from. Some elements are free, and some cost only $1. The prices are clearly marked as you browse through the options.

Quick and Easy: So after trying out Canva, I was really impressed. Is it Photoshop or Illustrator? No, but for doing basic graphic design, it is really good. The hardest part of designing any document is staring at the blank page. Canva helps get past “designer’s block” by providing templates, so you can just put in your text and hit save. For those who are ready for the next creative challenge, you can pick a blank template and choose a photo/graphic from the library or upload your own. It’s just that easy!

Social Media and Outreach: I have started using Canva for designing basic graphics for our social media and outreach items. Not only am I cutting down on the time spent working on these tasks, I am also being more creative with my designs. Seeing all the options within the system really brings out my creativity. I encourage you to go onto Canva.com and make your own graphics. Get rid of the boring white paper flyer, and wow the audience with a new look from Canva. It’s quick and easy. Happy Designing!

Canva Quick Guide: [eight-step screenshot walkthrough]

Blog: Evaluation Training and Professional Development

Posted on October 7, 2015

Doctoral Associate, EvaluATE


Hello ATE Community!

My name is Cheryl Endres, and I am the new blog editor and doctoral associate for EvaluATE. I am a doctoral student in the Interdisciplinary Ph.D. in Evaluation program at Western Michigan University. To help me begin to learn more about ATE and identify blog topics, we at EvaluATE took a closer look at some results from the survey conducted by EvaluATE’s external evaluator. As you can see from the chart, the majority of ATE evaluators have gained their evaluation knowledge on the job, through self-study, and through nonacademic professional development. Knowing this gives us some idea about additional resources for building your evaluation “toolkit.”

[Chart: how ATE evaluators gained their evaluation knowledge]

It may be difficult for practicing evaluators to take time for formal, graduate-level coursework. Fortunately, there are abundant opportunities just a click away on the Internet! Since wading through the array of options can be somewhat daunting, we have compiled a short list. As the evaluation field continues to expand, so do the opportunities, with a growing number of online and in-person options for building your evaluation knowledge base. Listed below are just a few to get you started:

  • The EvaluATE webinars evalu-ate.org/category/webinars/ are a great place to get started for information specific to evaluation in the ATE context.
  • The American Evaluation Association has a “Learn” tab that provides information about the Coffee Break Webinar series, eStudies, and the Summer Evaluation Institute. There are also links to online and in-person events around the country (and world) and university programs, some of which offer certificate programs in evaluation in addition to degree programs (master’s or doctoral level). The AEA annual conference in November is also a great option, offering an array of preconference workshops: eval.org
  • The Canadian Evaluation Society offers free webinars to members. The site includes archived webinars as well: evaluationcanada.ca/professional-learning
  • The Evaluators’ Institute at George Washington University offers in-person institutes in Washington, D.C. in February and July. They offer four different certificates in evaluation. Check out the schedules at tei.gwu.edu
  • EvalPartners has a number of free e-learning programs: mymande.org/elearning

These should get you started. If you find other good sources, please email me at cheryl.endres@wmich.edu.

Blog: Evaluation and Planning: Partners Through the Journey

Posted on August 19, 2015

Director, Collaborative for Evaluation and Assessment Capacity, University of Pittsburgh


Evaluation goes hand in hand with good planning and, in turn, with good implementation. To plan well, you need to know the areas of priority need (a good needs assessment is critical and often the backbone of planning for efficient use of resources!), and to implement well, you need to know about both process and outcomes. In our complex world, it is usually not enough to simply claim an outcome; the first question after that is usually, “How did you accomplish that?” Evaluations that are more integrated with both planning and implementation can better address those questions and support a strong organizational learning agenda.

Often in grant-funded work, evaluators are asked to come in quite late in the process, to evaluate a program or intervention already in action, after funding and programming have begun. While this form of evaluation is possible and can be important, we find it better to be consulted at the front end of planning and grant writing. Our expertise is often helpful to our clients in connecting their specific needs with the resources they seek, through the most effective processes for achieving their intended outcomes. Evaluation can become the “connective tissue” between resources and outcomes, needs and processes, and activities and outcomes. Evaluation and planning are iterative partners, continuing to inform each other throughout the history of a project.

We often use tools such as logic modeling and the development of a theory of action or change to identify and articulate the most important elements of the equation. By identifying these components for program planners and articulating the theory of action, evaluation planning also assists in illustrating good project planning!

Evaluation provides the iterative planning and reflection process that is the hallmark of good programming and of effective, efficient use of resources. Consider viewing evaluation more holistically, and resist the narrower definition of evaluation as something that comes only at the end of planning and implementation efforts!

By including a requirement for integrated evaluation in their requests for proposals (RFPs), grant funders can help project staff write better proposals for funding and, once projects are funded, help ensure better planning toward achieving goals. Foundations such as the W. K. Kellogg Foundation and the Robert Wood Johnson Foundation, along with a number of more local funders such as the Heinz Endowments, the Grable Foundation, and the FISA Foundation in our own region of southwestern Pennsylvania, have come to recognize these needs. The Common Guidelines for Education Research and Development, published in 2013 and used by the U.S. Department of Education and the National Science Foundation to guide research efforts, identifies the need for gathering and making meaning from evidence in all aspects of change endeavors, including evaluation.

In this 2015 International Year of Evaluation, let’s further examine how we use evaluation to inform all of the aspects of our work, with evaluation, planning and implementation as a seamless partnership!

Blog: Evidence and Evaluation in STEM Education: The Federal Perspective

Posted on August 12, 2015

Evaluation Manager, NASA Office of Education


If you have been awarded federal grants over many years, you probably have seen the increasing emphasis on evaluation and evidence. As a federal evaluator working at NASA, I have seen firsthand the government-wide initiative to increase use of evidence to improve social programs. Federal agencies have been strongly encouraged by the administration to better integrate evidence and rigorous evaluation into their budget, management, operational, and policy decisions by:

(1) making better use of already-collected data within government agencies; (2) promoting the use of high-quality, low-cost evaluations and rapid, iterative experimentation; (3) adopting more evidence-based structures for grant programs; and (4) building agency evaluation capacity and developing tools to better communicate what works. (https://www.whitehouse.gov/omb/evidence)

Federal STEM education programs have also been affected by this increasing focus on evidence and evaluation. Read, for example, the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan (2013),1 which was prepared by the Committee on STEM Education of the National Science and Technology Council.2 The strategic plan provides an overview of the importance of STEM education to American society and describes the current state of federal STEM education efforts. It discusses five priority STEM education investment areas in which a coordinated federal strategy is under development, and it presents methods to build and share evidence. Finally, the plan lays out several strategic objectives for improving the exploration and sharing of evidence-based practices, including supporting syntheses of existing research that can inform federal investments in the STEM education priority areas, improving and aligning evaluation and research expertise and strategies across federal agencies, and streamlining processes for interagency collaboration (e.g., Memoranda of Understanding, Interagency Agreements).

Another key federal document influencing evaluation in STEM agencies is the Common Guidelines for Education Research and Development (2013),3 jointly prepared by the U.S. Department of Education’s Institute of Education Sciences and the National Science Foundation. This document describes the two agencies’ shared understanding of the roles that various types of research play in generating evidence about strategies and interventions for increasing student learning. These research types range from studies that generate fundamental understandings related to education and learning (“Foundational Research”) to studies that assess the impact of an intervention on an education-related outcome, including efficacy research, effectiveness research, and scale-up research. The Common Guidelines provide the two agencies and the broader education research community with a common vocabulary for describing the critical features of these study types.

Both documents have shaped, and will continue to shape, federal STEM programs and their evaluations. Reading them will help federal grantees gain a richer understanding of the larger federal context that is influencing reporting and evaluation requirements for grant awards.

1 A copy of the Federal Science, Technology, Engineering and Mathematics (STEM) Education 5-Year Strategic Plan can be obtained here: https://www.whitehouse.gov/sites/default/files/microsites/ostp/stem_stratplan_2013.pdf

2 For more information on the Committee on Science, Technology, Engineering, and Math Education, visit https://www.whitehouse.gov/administration/eop/ostp/nstc/committees/costem.

3 A copy of the Common Guidelines for Education Research and Development can be obtained here: http://ies.ed.gov/pdf/CommonGuidelines.pdf

Blog: Creation, Dissemination, and Accessibility of ATE-Funded Resources

Posted on July 15, 2015
Kendra Bouda, Metadata and Information Specialist – Internet Scout Research Group, University of Wisconsin-Madison

Rachael Bower, Director/PI – Internet Scout Research Group, University of Wisconsin-Madison

As most ATE community members are aware, the National Science Foundation requires that all grant applicants provide a one- to two-page data management plan describing how the grantee’s proposal will meet NSF guidelines on the dissemination of grant-funded work. In 2014, NSF added a new requirement to the ATE solicitation mandating that newly funded grantees archive their deliverables with ATE Central.

We were curious to find out more about the materials created within the ATE community. So, when EvaluATE approached us about including questions related to data management planning and archiving in their annual survey of ATE grantees, we jumped at the chance. We had an interest in discovering not only what resources have been created, but also how those resources are disseminated to larger audiences. Additionally, we hoped to discover whether grantees are actively making their materials web accessible to users with disabilities—a practice that ensures access by the broadest possible audience.

The survey responses highlight that the most widely created materials are (not surprisingly) curriculum and professional development materials, with newsletters and journal articles bringing up the rear. Other materials created by the ATE community include videos, white papers and reports, data sets, and webinars.

However, although grantees are creating a lot of valuable resources, they may not be sharing them widely and, in some cases, may be unsure of how best to make them available after funding ends. The graphs below illustrate the availability of these materials, both currently and after grant funding ends.

[Charts: availability of ATE-created materials, currently and after grant funding ends]

Data from the annual survey shows that 65 percent of respondents are aware of accessibility standards—specifically Section 508 of the Rehabilitation Act; however, 35 percent are not. Forty-eight percent of respondents indicated that some or most of their materials are accessible, while another 22 percent reported that all materials generated by their project or center adhere to accessibility standards. Happily, only 1 percent of respondents reported that their materials do not adhere to standards; however, 29 percent are unsure whether their materials adhere to those standards or not.

For more information about accessibility, visit the official Section 508 site, the World Wide Web Consortium’s (W3C) Accessibility section or the Web Content Accessibility Guidelines 2.0 area of W3C.

Many of us struggle with issues related to sustaining our resources, which is part of the reason we are all asked by NSF to create a data management plan. To help PIs plan for long-term access, ATE Central offers an assortment of free services. Specifically, ATE Central supports data management planning efforts, provides sustainability training, and archives materials created by ATE projects and centers, ensuring access to these materials beyond the life of the project or center that created them.

For more about ATE Central, check out our suite of tools, services, and publications or visit our website. If you have questions or comments, contact us at info@atecentral.net.