Blog: Scavenging Evaluation Data

Posted on January 17, 2017

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

But little Mouse, you are not alone,
In proving foresight may be vain:
The best laid schemes of mice and men
Go often askew,
And leave us nothing but grief and pain,
For promised joy!

From To a Mouse, by Robert Burns (1785), modern English version

Research and evaluation textbooks are filled with elegant designs for studies that will illuminate our understanding of social phenomena and programs. But as any evaluator will tell you, the real world is fraught with all manner of hazards and imperfect conditions that wreak havoc on design, bringing grief and pain, rather than the promised joy of a well-executed evaluation.

Probably the biggest hindrance to executing planned designs is that evaluation is just not the most important thing to most people. (GASP!) They are reluctant to give two minutes for a short survey, let alone an hour for a focus group. Your email imploring them to participate in your data collection effort is one of hundreds of requests for their time and attention that they are bombarded with daily.

So, do all the things the textbooks tell you to do. Take the time to develop a sound evaluation design and do your best to follow it. Establish expectations early with project participants and other stakeholders about the importance of their cooperation. Use known best practices to enhance participation and response rates.

In addition: Be a data scavenger. Here are two ways to get data for an evaluation that do not require hunting down project participants and convincing them to give you information.

1. Document what the project is doing.

I have seen a lot of evaluation reports in which evaluators painstakingly recount a project’s activities as a tedious story rather than a straightforward account. This task typically requires the evaluator to ask many questions of project staff, pore over documents, and track down materials. It is much more efficient for project staff to keep a record of their own activities. For example, see EvaluATE’s resume. It is a no-nonsense record of our funding, activities, dissemination, scholarship, personnel, and contributors. In and of itself, our resume does most of the work of the accountability aspect of our evaluation (i.e., Did we do what we promised?). In addition, the resume can be used to address questions like these:

  • Is the project advancing knowledge, as evidenced by peer-reviewed publications and presentations?
  • Is the project’s productivity adequate in relation to its resources (funding and personnel)?
  • To what extent is the project leveraging the expertise of the ATE community?

2. Track participation.

If your project holds large events, use a sign-in sheet to get attendance numbers. If you hold webinars, you almost certainly have records with information about registrants and attendees. If you hold smaller events, pass around a sign-in sheet asking for basic information like name, institution, email address, and job title (or major if it’s a student group). If the project has developed a course, get enrollment information from the registrar. Most importantly: Don’t put these records in a drawer. Compile them in a spreadsheet and analyze the heck out of them (a minimal analysis sketch follows the list below). Here are example data points that we glean from EvaluATE’s participation records:

  • Number of attendees
  • Number of attendees from various types of organizations (such as two- and four-year colleges, nonprofits, government agencies, and international organizations)
  • Number and percentage of attendees who return for subsequent events
  • Geographic distribution of attendees
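To make the compiling-and-analyzing step concrete, here is a minimal sketch in Python with pandas of how a compiled sign-in file might be mined for the data points above. The file name and column names (event, email, org_type, state) are hypothetical stand-ins, not EvaluATE's actual records.

```python
import pandas as pd

# Hypothetical compiled sign-in records: one row per person per event,
# with columns: event, name, email, org_type, state
df = pd.read_csv("participation_records.csv")

# Number of attendees per event
print(df.groupby("event")["email"].nunique())

# Attendees by organization type (two-year college, four-year college, nonprofit, ...)
print(df.groupby("org_type")["email"].nunique())

# Number and percentage of attendees who returned for subsequent events
events_per_person = df.groupby("email")["event"].nunique()
returners = int((events_per_person > 1).sum())
print(returners, "returners,", round(100 * returners / len(events_per_person), 1), "percent")

# Geographic distribution (one row per unique attendee)
print(df.drop_duplicates("email")["state"].value_counts())
```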

Project documentation and participation data will be most helpful for process evaluation and accountability. You will still need cooperation from participants for outcome evaluation—and you should engage them early to garner their interest and support for evaluation efforts. Still, you may be surprised by how much valuable information you can get from these two sources—documentation of activities and participation records—with minimal effort.

Get creative about other data you can scavenge, such as institutional data that colleges already collect; website data, such as Google Analytics; and citation analytics for published articles.

Blog: Research Goes to School (RGS) Model

Posted on January 10, 2017

Project Coordinator, Discovery Learning Research Center, Purdue University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Data regarding pathways to STEM careers indicate that a critical transition point exists between high school and college.  Many students who are initially interested in STEM disciplines and could be successful in these fields either do not continue to higher education or choose non-STEM majors in college.  In part, these students do not see what role they can have in STEM careers.  For this reason, the STEM curriculum needs to reflect its applicability to today’s big challenges and connect students to the roles that these issues have for them on a personal level.

We proposed a project that infused high school STEM curricula with cross-cutting topics related to the hot research areas that scientists are working on today.  We began by focusing on sustainable energy concepts and then shifted to nanoscience and technology.

Pre-service and in-service teachers came to a large Midwestern research university for two weeks of intensive professional development in problem-based learning (PBL) pedagogy.  Along with PBL training, participants also connected with researchers in the grand challenge areas of sustainable energy (in project years 1-3) and nanoscience and technology (years 4-5).

We proposed a two-tiered approach:

1. Develop a model for education that consisted of two parts:

  • Initiate a professional development program that engaged pre-service and in-service high school teachers around research activities in grand challenge programs.
  • Support these teachers to transform their curricula and classroom practice by incorporating concepts of the grand challenge programs.

2. Establish a systemic approach for integrating research and education activities.

Results provided a framework for creating professional development with researchers and STEM teachers that culminates in the integration of grand challenge concepts into education curricula.

Through developmental evaluation over a multi-year process, core practices for an effective program began to emerge:

  • Researchers must identify the basic scientific concepts their work entails. For example, biofuels researchers work with the energy and carbon cycles; nanotechnology researchers must thoroughly understand size-dependent properties, forces, self-assembly, size and scale, and surface area-to-volume ratio.
  • Once identified, these concepts must be mapped to teachers’ state teaching standards and Next Generation Science Standards (NGSS), making them relevant for teachers.
  • Professional development must be planned for researchers to help them share their research at an appropriate level for use by high school teachers in their classrooms.
  • Professional development must be planned for teachers to help them integrate the research content into their teaching and learning standards in meaningful ways.
  • The professional development for teachers must include illustrative activities that demonstrate scientific concepts and be mapped to state and NGSS teaching standards.

The iterative and rapid feedback processes of developmental evaluation allowed the program to evolve. Feedback from data provided the impetus for change, while debriefing sessions provided insight into the program and its core practices. To evaluate the core practices identified in the biofuels topic in years 1-3, we applied them to a dissimilar topic, nanotechnology, in years 4-5. We saw greater integration of research and education activities in teachers’ curricula as the core practices became more fully developed through iteration, even with a new topic. The core practices held regardless of topic, and practitioners became better at delivery with more repetitions in years 4 and 5.

 

Blog: Evaluating Creativity in the Context of STEAM Education

Posted on December 16, 2016
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Shelly Engelman
Senior Researcher
The Findings Group, LLC
Morgan Miller
Research Associate
The Findings Group, LLC

At The Findings Group, we are assessing a National Science Foundation Discovery Research K-12 project that gives students an opportunity to learn about computing in the context of music through EarSketch. As with other STEAM (Science, Technology, Engineering, Arts, Math) approaches, EarSketch aims to motivate and engage students in computing through a creative, cross-disciplinary approach. Our challenge with this project was threefold: 1) defining creativity within the context of STEAM education, 2) measuring creativity, and 3) demonstrating how creativity gives rise to more engagement in computing.

The 4Ps of Creativity

To understand creativity, we turned first to the literature. According to previous research, creativity has been discussed from four perspectives, or the 4Ps of creativity: Process, Person, Press/Place, and Product. For our study, we focused on creativity from the perspective of the Person and the Place. Person refers to the traits, tendencies, and characteristics of the individual who creates something or engages in a creative endeavor. Place refers to the environmental factors that encourage creativity.

Measuring Creativity – Person

Building on previous work by Carroll (2009) and colleagues, we developed a self-report Creativity – Person measure that taps into six aspects of personal expressiveness within computing. These aspects include:

  • Expressiveness: Conveying one’s personal view through computing
  • Exploration:  Investigating ideas in computing
  • Immersion/Flow: Feeling absorbed by the computing activity
  • Originality: Generating unique and personally novel ideas in computing

Through a series of pilot tests with high school students, our final Creativity – Person scale consisted of 10 items and yielded excellent reliability (Cronbach’s alpha = .90 to .93); it also correlated positively with other psychosocial measures such as computing confidence, enjoyment, and identity and belongingness.
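For evaluators who want to run the same kind of reliability check on their own scales, here is a minimal sketch of Cronbach's alpha in Python with pandas. The input file and item columns are hypothetical; this is not the authors' instrument or data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a data frame with one column per scale item."""
    k = items.shape[1]                               # number of items
    item_variances = items.var(ddof=1)               # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to a 10-item scale: one column per item, one row per student
responses = pd.read_csv("creativity_person_items.csv")
print(round(cronbach_alpha(responses), 2))
```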

Measuring Creativity—Place

Assessing creativity at the environmental level proved to be more of a challenge! In building the Creativity – Place scale, we turned our attention to previous work by Shaffer and Resnick (1999) who assert that learning environments or materials that are “thickly authentic”—personally-relevant and situated in the real world—promote engagement in learning. Using this as our operational definition of a creative environment, we designed a self-report scale that taps into four identifiable components of a thickly authentic learning environment:

  • Personal: Learning that is personally meaningful for the learner
  • Real World: Learning that relates to the real world outside of school
  • Disciplinary: Learning that provides an opportunity to think in the modes of a particular discipline
  • Assessment: Learning where the means of assessment reflect the learning process.

Our Creativity – Place scale consisted of 8 items and also yielded excellent reliability (Cronbach’s alpha=.91).

 Predictive Validity

Once we had our two self-report questionnaires in hand—the Creativity – Person and Creativity – Place scales—we collected data from high school students who used EarSketch as part of their computing course. Our main findings were:

  • Students show significant increases from pre to post in personal expressiveness in computing (Creativity – Person), and
  • A creative learning environment (Creativity – Place) predicted students’ engagement in computing and intent to persist. That is, through a series of multiple regression analyses, we found that a creative learning environment, fueled by a meaningful and personally relevant curriculum, drives improvements in students’ attitudes and intent to persist in computing.
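The post does not include the authors' model specification, but a regression of the kind described in the second bullet might be sketched like this with statsmodels; the file and variable names are hypothetical, not the project's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data: Creativity - Place score, pre-survey attitude, outcome
data = pd.read_csv("earsketch_survey.csv")

# Does a creative learning environment predict intent to persist in computing,
# controlling for students' attitudes at the start of the course?
model = smf.ols("intent_to_persist ~ place_score + pre_attitude", data=data).fit()
print(model.summary())
```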

Moving forward, we plan on expanding our work by examining other facets of creativity (e.g., Creativity – Product) through the development of creativity rubrics to assess algorithmic music compositions.

References

Carroll, E.A., Latulipe, C. Fung, R., & Terry, M. (2009). Creativity factor evaluation: Towards a standardized survey metric for creativity support. In C&C ’09: Proceedings of the Seventh ACM Conference on Creativity and Cognition (pp. 127-136). New York, NY:  Association for Computing Machinery.

Engelman, S., Magerko, M., McKlin, T., Miller, M., Douglas, E., & Freeman, J. (in press). Creativity in authentic STEAM education with EarSketch. SIGCSE ’17: Proceedings of the 48th ACM Technical Symposium on Computer Science Education. Seattle, WA: Association for Computing Machinery.

Shaffer, D. W., & Resnick, M. (1999). “Thick” authenticity: New media and authentic learning. Journal of Interactive Learning Research, 10(2), 195-215.

Blog: The Value of Using a Psychosocial Framework to Evaluate STEM Outcomes Among Underrepresented Students

Posted on December 1, 2016
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Drs. Dawn X. Henderson, Breonte S. Guy, and Chad Markert serve as Co-Principal Investigators of an HBCU UP Targeted Infusion Project grant. Funded by the National Science Foundation, the project aims to explore how infusing lab-bench techniques into the Exercise Physiology curriculum informs undergraduate students’ attitudes about research and science and intentions to persist in STEM-related careers.

The National Science Foundation aims to fund projects that increase retention and persistence in STEM-related careers. Developing project proposals usually involves creating a logic model and an evaluation plan. An intervention, particularly one designed to change individuals’ behavior and outcomes, relies on a combination of psychological and social factors. For example, increasing the retention and persistence of underrepresented groups in the STEM education-to-workforce pipeline depends on attitudes about science, behaviors, and the ability to access resources that provide exposure to and exploration of STEM.

As faculty interested in designing interventions in STEM education, we developed a psychosocial framework to inform project design and evaluation and believe we offer an insightful strategy to investigators and evaluators. When developing a theory of change or logic model, you can create a visual map (see figure below) to identify underlying psychological and social factors and assumptions that influence program outcomes. In this post, we highlight a psychosocial framework for developing theories of change—specifically as it relates to underrepresented groups in STEM.

Visual mapping can outline the relationship between the intervention and psychological (cognitive) and social domains.
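Maps like this can be drawn by hand, with Theory Maker (linked under Useful Resource below), or programmatically. As a rough sketch, the Python graphviz package can produce a comparable diagram; the node labels below are illustrative, not the project's full framework.

```python
from graphviz import Digraph  # requires the Graphviz system binaries to be installed

g = Digraph("psychosocial_framework")
g.edge("Intervention\n(lab-bench curriculum, summer research)",
       "Social factors\n(mentoring, funding, resources)")
g.edge("Social factors\n(mentoring, funding, resources)",
       "Psychological factors\n(attitudes about science, self-efficacy)")
g.edge("Psychological factors\n(attitudes about science, self-efficacy)",
       "Persistence in STEM education and careers")
g.render("psychosocial_framework", format="png", cleanup=True)
```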

What do we mean by psychosocial framework?

Both retention and persistence rely on social factors, such as financial resources, mentoring, and other forms of social support. For example, in our work, we proposed introducing underrepresented students to lab-bench techniques in the Exercise Physiology curriculum and providing summer research enrichment opportunities through which students receive funding and mentoring. Providing these social resources introduced students to scientific techniques they would not encounter in a traditional curriculum. Psychological factors, such as individual attitudes about science and self-efficacy, are also key contributors to STEM persistence. For instance, self-efficacy is the belief that one has the capacity to accomplish a specific task and achieve a specific outcome.

A practical exercise in developing the psychosocial framework is asking critical questions:

  • What are some social factors driving a project’s outcomes? For example, you may modify social factors by redesigning curriculum to engage students in hands-on experiences, providing mentoring or improving STEM teaching.
  • How can these social factors influence psychological factors? For example, improving STEM education can change the way students think about STEM. Outcomes then could relate to attitudes towards and beliefs about science.
  • How do psychological factors relate to persistence in STEM? For example, changing the way students think about STEM, their attitudes, and beliefs may shape their science identity and increase their likelihood to persist in STEM education (Guy, 2013).

What is the value-added?

Evaluation plans, specifically those seeking to measure changes in human behavior, hinge on a combination of psychological and social factors. The ways in which individuals think and form attitudes and behaviors, combined with their ability to access resources, influence programmatic outcomes. A psychosocial framework can be used to identify how psychological processes and social assets and resources contribute to increased participation and persistence of underrepresented groups in STEM-related fields and the workforce. More specifically, the recognition of psychological and social factors in shaping science attitudes, behaviors, and intentions to persist in STEM-related fields can generate value in project design and evaluation.

Reference
Guy, B. (2013). Persistence of African American men in science: Exploring the influence of scientist identity, mentoring, and campus climate. (Doctoral dissertation).

Useful Resource

Steve Powell’s AEA365 blog post, Theory Maker: Free web app for drawing theory of change diagrams

Blog: Course Improvement Through Evaluation: Improving Undergraduate STEM Majors’ Capacity for Delivering Inquiry-Based Mathematics and Science Lessons

Posted on November 16, 2016

Associate Professor, Graduate School of Education, University of Massachusetts Lowell

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One of the goals of the University of Massachusetts (UMass) UTeach program is to produce mathematics and science teachers who not only are highly knowledgeable in their disciplines, but also can engage students through inquiry-based instruction. The Research Methods course is one of the core program courses designed to accomplish this goal. The Research Methods course is centered around a series of inquiry-based projects.

What We Did

Specifically, the first inquiry was a simple experiment. Students were asked to look around in their kitchens, come up with a research question, and carry out an experiment to investigate the question. The second inquiry required them to conduct a research project in their own disciplinary field. The third inquiry asked students to pretend to be teachers of a middle or high school math/science course who were about to teach their students topics that involve the concept of slope and its applications. This inquiry required students to develop and administer an assessment tool. In addition, they analyzed and interpreted assessment data in order to find out their pretend-students’ prior knowledge and understanding of the concept of slope and its applications in different STEM disciplines (i.e., using assessment information for lesson planning purposes).

Our Goal

We investigated whether the course achieved its goal of helping enrollees develop the pedagogical skills needed to deliver inquiry-based instruction on the mathematical and scientific concepts embedded in the inquiry projects.

What We Learned

Examinations of the quality of students’ written inquiry reports showed that students were able to do increasingly difficult work with a higher degree of competency as the course progressed.

Comparisons of students’ responses to pre- and post-course surveys that consisted of questions about a hypothetical experiment indicated that students gained skills in identifying and classifying experimental variables and sources of measurement errors. However, they struggled with articulating research questions and justifying whether a question was researchable. These results were consistent with what we observed in their written reports. As the course progressed, students were more explicit in identifying variables and their relationships and were better at explaining how their research designs addressed possible measurement errors. For most students, however, articulating a researchable question was the most difficult aspect of an inquiry project.

Students’ self-reflections and focus group discussions suggested that our course modeled inquiry-based learning quite well, which was a sharp departure from the step-by-step laboratory activities they were used to as K-12 students. Students also noted that the opportunity to independently conceptualize and carry out an experiment before getting peer and instructor feedback, revising, and producing a final product created a reflective process that they had not experienced in other university course work. Finally, students appreciated the opportunity to articulate the connection between practicing inquiry skills as part of their professional requirements (i.e., as STEM majors) and using inquiry as a pedagogical tool to teach the math and science concepts to middle or high school students. They also noted that knowing how to evaluate their own students’ prior knowledge was an important skill for lesson planning down the road.

Blog: 3 Inconvenient Truths about ATE Evaluation

Posted on October 14, 2016

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Many evaluations fall short of their potential to provide useful, timely, and accurate feedback to projects because project leaders or evaluators (or both) have unrealistic expectations. In this blog, I expose three inconvenient truths about ATE evaluation. Dealing with these truths head-on will help project leaders avoid delays and misunderstandings.

1. Your evaluator does not have all the answers.

Even for highly experienced evaluators, every evaluation is new and has to be tailored to the project’s particular context. Do not expect your evaluator to produce an ideal evaluation plan on Day 1, be able to pull the perfect data collection instrument off his or her shelf, or know just the right strings to pull to get data from your institutional research office. Your evaluator is an expert on evaluation, not your project or your institution.

As an evaluator, when I ask clients for input on an aspect of their evaluation, the last thing I want to hear is “Whatever you think, you’re the expert.” Work with your evaluator to refine your evaluation plan to ensure it fits your project, your environment, and your information needs. Question elements that don’t seem right to you and provide constructive feedback. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects (Chapter 4) has detailed information about how project leaders can bring their expertise to the evaluation process.

2. There is no one right answer to the question, “What does NSF want from evaluation?”

This is the question I get the most as the director of the evaluation support center for the National Science Foundation’s Advanced Technological Education (ATE) program. The truth is, NSF is not prescriptive about what an ATE evaluation should look like, and different program officers have different expectations. So, if you’ve been looking for the final word on what NSF wants from an ATE evaluation, you can end your search because you won’t find it.

However, NSF does request common types of information from all projects via their annual reports and the annual ATE survey. To make sure you are not caught off guard, preview the Research.gov reporting template and the most recent ATE annual survey questions. If you are doing research, get familiar with the Common Guidelines for Education Development and Research.

If you’re still concerned about meeting expectations, talk to your NSF program officer.

3. Project staff need to put in time and effort.

Evaluation matters often get put on a project’s backburner so more urgent issues can be addressed.  (Yes, even an evaluation support center is susceptible to no-time-for-evaluation-itis.) But if you put off dealing with evaluation matters until you feel like you have time for them, you will miss key opportunities to collect data and use the information to make improvements to your project.

To make sure your project’s evaluation gets the attention it needs:

  • Set a recurring conference call or meeting with your evaluator—at least once a month.
  • Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation.
  • Assign one person on your project team to be the point person for evaluation.
  • Commit to using your evaluation results in a timely way—if you have a recurring project activity, make sure you gather feedback from those involved and use it to improve the next event.

 

Blog: Best Practices for Two-Year Colleges to Create Competitive Evaluation Plans

Posted on September 28, 2016
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Kelly Ball
Jeff Grebinoski

Northeast Wisconsin Technical College’s (NWTC) Grants Office works closely with its Institutional Research Office to create ad hoc evaluation teams in order to meet the standards of evidence required in funders’ calls for proposals. Faculty members at two-year colleges often make up the project teams that are responsible for National Science Foundation (NSF) grant project implementation. However, they often need assistance navigating among terms and concepts that are traditionally found in scientific research and social science methodology.

Federal funding agencies are now requiring more evaluative rigor in their grant proposals than simply documenting deliverables. For example, the NSF’s Scholarships in Science, Technology, Engineering, and Mathematics (S-STEM) program saw dramatic changes in 2015: The program solicitation increased the amount of non-scholarship budget from 15% of the scholarship amount to 40% of the total project budget to increase supports for students and to investigate the effectiveness of those supports.

Technical colleges, in particular, face a unique challenge as solicitations change: These colleges traditionally have faculty members from business, health, and trades industries. Continuous improvement is a familiar concept to these professionals; however, they tend to have varying levels of expertise evaluating education interventions.

The following are a few best practices we have developed for assisting project teams in grant proposal development and project implementation at NWTC.

  • Where possible, work with an external evaluator at the planning stage. External evaluators can provide expertise that principal investigators and project teams might lack, as they are well-versed in current evaluation methods, trends, and techniques.
  • As they develop their projects, teams should meet with their Institutional Research Office to better understand data gathering and research capacity. Some data needed for evaluation plans might be readily available, whereas others might require advance planning to develop a system to track information. Conversations about what the data will be used for and what questions the team wants to answer will help ensure that the correct data can be gathered.
  • After a grant is awarded, have a conversation early with all internal and external evaluative parties about clarifying data roles and responsibilities. Agreeing to reporting deadlines and identifying who will collect the data and conduct further analysis will help avoid delays.
  • Create a “data dictionary” for more complicated projects and variables to ensure that everyone is on the same page about what terms mean. For example, “student persistence” can be defined term-to-term or year-to-year, and all parties need to understand which data will need to be tracked (a minimal sketch of a data dictionary follows this list).
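A data dictionary does not require special tooling. As a minimal sketch (with hypothetical fields and definitions, not NWTC's actual terms), it can be as simple as a shared table or a small structured file kept alongside the project's analysis files:

```python
# Hypothetical data dictionary entries for a scholarship project
data_dictionary = {
    "student_persistence": {
        "definition": "Enrolled in the subsequent fall term (year-to-year), per Registrar records",
        "source": "Institutional Research Office",
        "collected": "Each fall census date",
        "owner": "Project PI / IR liaison",
    },
    "stem_gpa": {
        "definition": "Cumulative GPA in STEM-prefixed courses only",
        "source": "Student information system",
        "collected": "End of each term",
        "owner": "Institutional Research Office",
    },
}
```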

With some planning and the right working relationships in place, two-year colleges can maintain their federal funding competitiveness even as agencies increase evaluation requirements.

Blog: Possible Selves: A Way to Assess Identity and Career Aspirations

Posted on September 14, 2016

Professor of Psychology, Arkansas State University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Children are often asked the question “What do you want to be when you grow up?” Many of us evaluate programs where the developers are hoping that participating in their program will change this answer. In this post, I’d like to suggest using “possible self” measures as a means of evaluating if a program changed attendees’ sense of identity and career aspirations.

What defines the term?

Possible selves are our representations of our future. We all think about what we ideally would like to become (the hoped-for possible self), things that we realistically expect to become (the expected possible self), and things that we are afraid of becoming (the feared-for possible self).[1][2] Possible selves can change many times over the lifespan and thus can be a useful measure to examine participants’ ideas about themselves in the future.

How can it be measured?

There are various ways to measure possible selves. One of the simplest is to use an open-ended measure that asks people to describe what they think will occur in the future. For example, we presented the following (adapted from Oyserman et al., 2006[2]) to youth participants in a science enrichment camp (funded by an NSF-ITEST grant to Arkansas State University):

Probably everyone thinks about what they are going to be like in the future. We usually think about the kinds of things that are going to happen to us and the kinds of people we might become.

  1. Please list some things that you most strongly hope will be true of you in the future.
  2. Please list some things that you think will most likely be true of you in the future.

The measure was used both before and after participating in the program. We purposely did not include a feared-for possible self, given the context of a summer camp.

What is the value-added?

Using this type of open-ended measure allows for participants’ own voices to be heard. Instead of imposing preconceived notions of what participants should “want” to do, it allows participants to tell us what is most important to them. We learned a great deal about participants’ worldviews, and their answers helped us fine-tune programs to better serve their needs and be responsive to our participants. Students’ answers focused on careers, but also included hoped-for personal ideals. For instance, European-American students were significantly more likely to mention school success than African-American students. Conversely, African-American students were significantly more likely to describe hoped-for positive social/emotional futures compared to European-American students. These results allowed program developers to gain a more nuanced understanding of the motivations driving participants. Although we regarded the multiple areas of focus as a strength of the measure, evaluators considering using a possible selves measure may also want to include more directed follow-up questions.

For more information on how to assess possible selves, see Professor Daphna Oyserman’s website.

References

[1] Markus, H. R., & Nurius, P. (1986). Possible selves. American Psychologist, 41, 954–969.

[2] Oyserman, D., Bybee, D., & Terry, K. (2006). Possible selves and academic outcomes: How and when possible selves impel action. Journal of Personality and Social Psychology, 91, 188–204.

Blog: Six Data Cleaning Checks

Posted on September 1, 2016

Research Associate, WestEd’s STEM Program

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Data cleaning is the process of verifying and editing data files to address issues of inconsistency and missing information. Errors in data files can appear at any stage of an evaluation, making it difficult to produce reliable data. Data cleaning is a critical step in program evaluation because clients rely on accurate results to inform decisions about their initiatives. Below are six essential steps I include in my data cleaning process to minimize issues during data analysis:

1. Compare the columns of your data file against the columns of your codebook.

Sometimes unexpected columns might appear in your data file or columns of data may be missing. Data collected from providers external to your evaluation team (e.g., school districts) might include sensitive participant information like social security numbers. Failures in software used to collect data can lead to responses not being recorded. For example, if a wireless connection is lost while a file is being downloaded, some information in that file might not appear in the downloaded copy. Unnecessary data columns should be removed before analysis and, if possible, missing data columns should be retrieved.
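A minimal sketch of this check in Python with pandas (the file and column names are hypothetical):

```python
import pandas as pd

data = pd.read_csv("survey_responses.csv")
codebook = pd.read_csv("codebook.csv")  # one row per expected column, listed in "column_name"

expected = set(codebook["column_name"])
actual = set(data.columns)

print("Unexpected columns (candidates for removal):", actual - expected)
print("Missing columns (to retrieve from the data provider):", expected - actual)
```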

2. Check your unique identifier column for duplicate values.

An identifier is a unique value used to label a participant and can take the form of a person’s full name or a number assigned by the evaluator. Multiple occurrences of the same identifier in a data file usually indicate an error. Duplicate identifier values can occur when participants complete an instrument more than once or when a participant identifier is mistakenly assigned to multiple records. If participants move between program sites, they might be asked to complete a survey for a second time. Administrators might record a participant’s identifier incorrectly, using a value assigned to another participant. Data collection software can malfunction and duplicate rows of records. Duplicate records should be identified and resolved.
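A sketch of flagging duplicate identifiers with pandas (hypothetical file and identifier column):

```python
import pandas as pd

data = pd.read_csv("survey_responses.csv")

# Keep every record whose identifier appears more than once so the duplicates
# can be reviewed and resolved side by side
duplicates = data[data["participant_id"].duplicated(keep=False)]
print(duplicates.sort_values("participant_id"))
```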

3. Transform categorical data into standard values.

Non-standard data values often appear in data gathered from external data providers. School districts, for example, often provide student demographic information but vary in the categorical codes they use. The following table shows a range of values I received from different districts to represent students’ ethnicities:

[Image: table of the ethnicity codes received from different districts]

To aid in reporting on participant ethnicities, I transformed these values into the race and ethnicity categories used by the National Center for Education Statistics.

When cleaning your own data, you should decide on standard values to use for categorical data, transform ambiguous data into a standard form, and store these values in a new data column.  OpenRefine is a free tool that facilitates data transformations.
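The same transformation can also be scripted so it is reproducible across districts; below is a minimal pandas sketch in which the district codes and mapping are hypothetical stand-ins for the values in the table above.

```python
import pandas as pd

data = pd.read_csv("district_demographics.csv")

# Map each district's raw code to a standard category (codes here are illustrative)
ethnicity_map = {
    "HISP": "Hispanic or Latino",
    "H": "Hispanic or Latino",
    "AFAM": "Black or African American",
    "BL": "Black or African American",
    "WH": "White",
}

# Store standardized values in a new column, leaving the original data untouched
data["ethnicity_std"] = data["ethnicity_raw"].map(ethnicity_map)
print(data["ethnicity_std"].value_counts(dropna=False))
```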

4. Check your data file for missing values.

Missing values occur when participants choose not to answer an item, are absent the day of administration, or skip an item due to survey logic. If missing values are found, apply a code to indicate the reason for the missing data point. For example, 888888 can indicate an instrument was not administered and 999999 can indicate a participant chose not to respond to an item. The use of codes can help data analysts determine how to handle the missing data. Analysts sometimes need to report on the frequency of missing data, use statistical methods to replace the missing data, or remove the missing data before analysis.
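A sketch of applying such codes in pandas; the 888888 and 999999 codes come from the example above, while the file and column names are hypothetical.

```python
import pandas as pd

data = pd.read_csv("survey_responses.csv")

NOT_ADMINISTERED = 888888
NO_RESPONSE = 999999

# Items skipped because the instrument was not administered to this group
data.loc[data["cohort"] == "comparison", "post_score"] = NOT_ADMINISTERED

# Remaining blanks are treated as participants choosing not to respond
data["post_score"] = data["post_score"].fillna(NO_RESPONSE)
```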

5. Check your data file for extra or missing records.

Attrition and recruitment can occur at all stages of an evaluation. Sometimes people who are not participating in the evaluation are allowed to submit data. Check the number of records in your data file against the number of recruited participants for discrepancies. Tracking dates when participants join a project, leave a project, and complete instruments can facilitate this review.
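A sketch of reconciling records against the recruitment roster (hypothetical files and identifier column):

```python
import pandas as pd

data = pd.read_csv("survey_responses.csv")
roster = pd.read_csv("recruited_participants.csv")

submitted = set(data["participant_id"])
recruited = set(roster["participant_id"])

print("Records from people not in the recruited sample:", submitted - recruited)
print("Recruited participants with no record (possible attrition):", recruited - submitted)
```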

6. Correct erroneous or inconsistent values.

When instruments are completed on paper, participants can enter unexpected values. Online tools may be configured incorrectly and allow illegal values to be submitted. Create a list of validation criteria for each data field and compare all values against this list. de Jonge and van der Loo provide a tutorial for checking invalid data using R.
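The de Jonge and van der Loo tutorial shows how to do this in R; an equivalent check is easy to sketch in pandas as well. The validation rules below are hypothetical examples, not a complete criteria list.

```python
import pandas as pd

data = pd.read_csv("survey_responses.csv")

# Validation criteria for each field (illustrative rules)
rules = {
    "age": lambda s: s.between(10, 110),
    "likert_q1": lambda s: s.isin([1, 2, 3, 4, 5]),
}

for column, rule in rules.items():
    invalid = data[~rule(data[column]) & data[column].notna()]
    if not invalid.empty:
        print(f"{column}: {len(invalid)} value(s) to correct or recode")
```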

Data cleaning can be a time-consuming process. These checks can help reduce the time you spend on data cleaning and get results to your clients more quickly.

Blog: Three Tips for a Strong NSF Proposal Evaluation Plan

Posted on August 17, 2016

Principal Research Scientist, Education Development Center, Inc.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m Leslie Goodyear and I’m an evaluator who also served as a program officer for three years at the National Science Foundation in the Division of Research on Learning, which is in the Education and Human Resources Directorate. While I was there, I oversaw evaluation activities in the Division and reviewed many, many evaluation proposals and grant proposals with evaluation sections.

In May 2016, I had the pleasure of participating in the webinar “Meeting Requirements, Exceeding Expectations: Understanding the Role of Evaluation in Federal Grants.” Hosted by Lori Wingate of EvaluATE and Ann Beheler of the Centers Collaborative for Technical Assistance, the webinar covered topics such as evaluation fundamentals; evaluation requirements and expectations; and evaluation staffing, budgeting, and utilization.

On the webinar, I shared my perspective on the role of evaluation at NSF, strengths and weaknesses of evaluation plans in proposals, and how reviewers assess Results from Prior NSF Support sections of proposals, among other topics. In this blog, I’ll give a brief overview of some important takeaways from the webinar.

First, if you’re making a proposal to education or outreach programs, you’ll likely need to include some form of project evaluation in your proposal. Be sure to read the program solicitation carefully to know what the specific requirements are for that program. There are no agency-wide evaluation requirements—instead they are specified in each solicitation. Lori had a great suggestion on the webinar:  Search the solicitation for “eval” to make sure you find all the evaluation-related details.

Second, you’ll want to make sure that your evaluation plan is tailored to your proposed activities and outcomes. NSF reviewers and program officers can smell a “cookie cutter” evaluation plan, so make sure that you’ve talked with your evaluator while developing your proposal and that they’ve had the chance to read the goals and objectives of your proposed work before drafting the plan. You want the plan to be incorporated into the proposal so that it appears seamless.

Third, indicators of a strong evaluation plan include carefully crafted, relevant overall evaluation questions, a thoughtful project logic model, a detailed data collection plan that is coordinated with project activities, and a plan for reporting and dissemination of findings. You’ll also want to include a bio for your evaluator so that the reviewers know who’s on your team and what makes them uniquely qualified to carry out the evaluation of your project.

Additions that can make your plan “pop” include:

  • A table that maps out the evaluation questions to the data collection plans. This can save space by conveying lots of information in a table instead of in narrative.
  • Combining the evaluation and project timelines so that the reviewers can see how the evaluation will be coordinated with the project and offer timely feedback.

Some programs allow for using the Supplemental Documents section for additional evaluation information. Remember that reviewers are not required to read these supplemental docs, so be sure that the important information is still in the 15-page proposal.

For the Results of Prior NSF Support section, you want to be brief and outcome-focused. Use this space to describe what resulted from the prior work, not what you did. And be sure to make clear how that work informs the proposed work, for example by showing that those outcomes set up the questions you’re pursuing in this proposal.