
Blog: Using Embedded Assessment to Understand Science Skills

Posted on August 5, 2015 in Blog

Cathlyn Stylinski
Senior Agent
University of Maryland Center for Environmental Science

Karen Peterman
President
Karen Peterman Consulting

Rachel Becker-Klein
Senior Research Associate
PEER Associates

As our field explores the impact of informal (and formal) science programs on learning and skill development, it is imperative that we integrate research and evaluation methods into the fabric of the programs being studied. Embedded assessments (EAs) are “opportunities to assess participant progress and performance that are integrated into instructional materials and are virtually indistinguishable from day-to-day [program] activities” (Wilson & Sloane, 2000, p. 182). As such, EAs allow learners to demonstrate their science competencies through tasks that are integrated seamlessly into the learning experience itself.

Since they require that participants demonstrate their skills, rather than simply rate their confidence in using them, EAs offer an innovative way to understand and advance the evidence base for knowledge about the impacts of informal science programs. EAs can take on many forms and can be used in a variety of settings. The essential defining feature is that these assessments document and measure participant learning as a natural component of the program implementation and often as participants apply or demonstrate what they are learning.

Related concepts that you may have heard of:

  • Performance assessments: EA methods can include performance assessments, in which participants do something to demonstrate their knowledge and skills (e.g., scientific observation).
  • Authentic assessments: assessments of skills in which the learning tasks mirror real-life problem-solving situations (e.g., the specific data collection techniques used in a project) and could be embedded into project activities (Rural School and Community Trust, 2001; Wilson & Sloane, 2000).

You can use EAs to measure participants’ abilities alongside more traditional research and evaluation measures and also to measure skills across time. So, along with surveys of content knowledge and confidence in a skill area, you might consider adding experiential and hands-on ways of assessing participant skills. For instance, if you were interested in assessing participants’ skills in observation, you might already be asking them to make some observations as a part of your program activities. You could then develop and use a rubric to assess the depth of that observation.
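To make the rubric idea concrete, here is a minimal sketch, written in Python purely for illustration (it is not from the original post), of how a simple observation-depth rubric might be stored and how scores assigned with it could be summarized. The levels, descriptors, and scores are hypothetical.

```python
# Hypothetical observation-depth rubric; levels and descriptors are illustrative only.
OBSERVATION_RUBRIC = {
    1: "Names the object observed with little or no descriptive detail",
    2: "Describes one or two surface features (e.g., color, size)",
    3: "Describes multiple features and notes comparisons or changes",
    4: "Describes multiple features, makes comparisons, and poses questions or inferences",
}

def summarize_scores(scores):
    """Summarize rubric scores (1-4) assigned to participants' written observations."""
    return {
        "n": len(scores),
        "mean": round(sum(scores) / len(scores), 2),
        "share_at_level_3_or_above": round(sum(s >= 3 for s in scores) / len(scores), 2),
    }

# Example: scores a rater assigned to ten participants' observations mid-program.
print(summarize_scores([2, 3, 3, 4, 1, 3, 2, 4, 3, 3]))
```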

Although EA offers many benefits, the method also poses some significant challenges that have prevented widespread adoption to date. For the application of EA to be successful, there are two significant challenges to address: (1) the need for a standard EA development process that includes reliability and validity testing and (2) the need for professional development related to EA.

With these benefits and challenges in mind, we encourage project leaders, evaluators, and researchers to help us to push the envelope by:

  • Thinking critically about the inquiry skills fostered by their informal science projects and ensuring that those skills are measured as part of the evaluation and research plans.
  • Considering whether projects include practices that could be used as an EA of skill development and, if so, taking advantage of those systems for evaluation and research purposes.
  • Developing authentic methods that address the complexities of measuring skill development.
  • Sharing these experiences broadly with the community in an effort to highlight the valuable role that such projects can play in engaging the public with science.

We are currently working on a National Science Foundation grant (Embedded Assessment for Citizen Science – EA4CS) that is investigating the effectiveness of embedded assessment as a method to capture participant gains in science and other skills. We are conducting a needs assessment and working on creating embedded assessments at each of three different case study sites. Look for updates on our progress and additional blogs over the next year or so.

Rural School and Community Trust (2001). Assessing Student Work. Available from http://www.ruraledu.org/user_uploads/file/Assessing_Student_Work.pdf

Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181-208. Available from http://dx.doi.org/10.1207/S15324818AME1302_4

Blog: Visualizing Network Data

Posted on July 29, 2015 in Blog

Rick Orlina
Evaluation and Research Consultant
Rocinante Research

Veronica S. Smith
Data Scientist, Founder
data2insight, LLC

Social Network Analysis (SNA) is a methodology that we have found useful when answering questions about relationships. For example, our independent evaluation work with National Science Foundation-funded Integrative Graduate Education and Research Traineeship (IGERT) programs typically includes a line of inquiry about the nature of interdisciplinary relationships among trainees and faculty, and how those relationships change over time.

Sociograms are data displays that stakeholders can use to understand network patterns and identify potential ways to effect desired changes in the network. There are currently few, if any, rules regarding how to draw sociograms to facilitate effective communication with stakeholders. While there is only one network—the particular set of nodes and the ties that connect them—there are many ways to draw it. We share two methods for visualizing networks and describe how they have been helpful when communicating evaluation findings to clients.

Approach 1: Optimized Force-Directed Maps

Figure 1 presents sociograms for one of the relationships defined as part of an IGERT evaluation as measured at two time points. Specifically, this relationship reflects whether participants reported that they designed or taught a course, seminar, or workshop together.

In these diagrams, individuals (nodes) who share a tie tend to be close together, while individuals who do not share a tie tend to be farther apart. When drawn in this way, the sociogram reveals how people organize into clusters. Red lines represent interdisciplinary relationships, making it possible to see patterns in the connections that bridge disciplinary boundaries. Node positions are based on the combined data from three years, so nodes do not move from one sociogram to the next. Nodes appear and disappear as individuals enter and leave the network, and the ties connecting people appear and disappear as reported relationships change. Thus, it is easy to see how connections—around individuals and across the network—evolve over time.

One shortcoming of a standard force-directed display is that it can be difficult to identify the same person (node) in a set of sociograms spanning multiple time periods. With the additional data processing described above, however, it is possible to create a set of aligned sociograms (in which node positions are fixed) that make visual analysis of changes over time easier.
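For readers who want to experiment with this kind of aligned layout, here is a minimal sketch using Python's networkx and matplotlib; this is an assumed toolchain for illustration (the sociograms in this post were produced with NetDraw), and the nodes, ties, and departments are hypothetical. Positions are computed once from the union of ties across years and then reused for each year's sociogram, so nodes stay put.

```python
# Sketch: fixed-position force-directed sociograms across years (hypothetical data).
import matplotlib.pyplot as plt
import networkx as nx

# "Taught with" ties reported in each year; every node has a department.
ties_by_year = {
    3: [("A", "B"), ("B", "C")],
    4: [("A", "B"), ("B", "D"), ("C", "D")],
}
department = {"A": "Biology", "B": "Biology", "C": "Engineering", "D": "Policy"}

# A union graph of all years determines where each node sits.
union = nx.Graph()
for edges in ties_by_year.values():
    union.add_edges_from(edges)
pos = nx.spring_layout(union, seed=42)  # force-directed layout, reproducible

for year, edges in ties_by_year.items():
    g = nx.Graph()
    g.add_nodes_from(union.nodes)  # keep every node so positions align across years
    g.add_edges_from(edges)
    # Interdisciplinary ties (endpoints in different departments) drawn in red.
    colors = ["red" if department[u] != department[v] else "gray" for u, v in g.edges]
    plt.figure()
    nx.draw(g, pos, edge_color=colors, node_color="lightblue", with_labels=True)
    plt.title(f'"Taught with" ties, year {year}')
    plt.savefig(f"sociogram_year{year}.png")
```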

Figure 1: Sociograms — Fixed node locations based on ties reported across all years

a) “Taught with” relationship year 4


b) “Taught with” relationship year 3


Approach 2: Circular/Elliptical Maps

Figure 2 introduces another way to present a sociogram: a circular layout that places all nodes on the perimeter of a circle with ties drawn as chords passing through the middle of the circle (or along the periphery when connecting neighboring nodes). Using the same data used for Figure 1, Figure 2 groups nodes along the elliptical boundary by department and, within each department, by role. By imposing this arrangement on the nodes, interdisciplinary ties pass through the central area of the ellipse, making it easy to see the density of interdisciplinary ties and to identify people and departments that contribute to interdisciplinary connections.

One limitation of this map is that it is difficult to see the clustering and to distinguish people who are central in the group versus people who tend to occupy a position around the group’s periphery.
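A comparable sketch for the circular layout, again assuming networkx and matplotlib rather than the software used for the figures, with hypothetical departments, roles, and ties: ordering the nodes by department (and by role within department) before passing them to the layout reproduces the grouping described above, so interdisciplinary ties cross the interior of the circle.

```python
# Sketch: circular sociogram with nodes grouped by department and role (hypothetical data).
import matplotlib.pyplot as plt
import networkx as nx

g = nx.Graph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")])
department = {"A": "Biology", "B": "Biology", "C": "Engineering", "D": "Policy"}
role = {"A": "faculty", "B": "trainee", "C": "trainee", "D": "faculty"}

# Sort nodes by department, then role; circular_layout keeps this order around the circle.
ordered = sorted(g.nodes, key=lambda n: (department[n], role[n]))
pos = nx.circular_layout(ordered)

colors = ["red" if department[u] != department[v] else "gray" for u, v in g.edges]
nx.draw(g, pos, edge_color=colors, node_color="lightyellow", with_labels=True)
plt.title('"Taught with" ties, circular layout grouped by department')
plt.show()
```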

Figure 2: Sociograms—All nodes from all years of survey placed in a circular layout and fixed

a) “Taught with” relationship year 4


b) “Taught with” relationship year 3


Because both network diagrams have strengths and limitations, consider using multiple layouts and choose maps that best address stakeholders’ questions. Two excellent—and free—software packages are available for people interested in getting started with network visualization: NetDraw (https://sites.google.com/site/netdrawsoftware/home), which was used to create the sociograms in this post, and Gephi (http://gephi.github.io), which is also capable of computing a variety of network measures.

Blog: Crowdsourcing Interview Data Analysis

Posted on June 24, 2015 in Blog

Associate Professor, Claremont Graduate University

Most of the evaluations I conduct include interview or focus group data. These data provide a sense of student experiences and outcomes as students progress through a program. After collecting the data, we transcribe, read, code, re-read, and recode to identify themes that capture the complex interactions among the participants, the program, and their environment. However, in reporting these data, we are often restricted to describing themes and providing illustrative quotes to represent participant experiences. This is an important part of the report, but I have always felt that we could do more.

This led me to think of ways to quantify the transcribed interviews to obtain a broader impression of participant experiences and to compare across interviews. I also came across the idea of crowdsourcing, which involves recruiting a large number of people to each perform a very specific task, usually for a small payment. For example, a few years ago 30,000 people were asked to review satellite images to locate a crashed airplane. Crowdsourcing has been around for a long time (e.g., the Oxford English Dictionary was crowdsourced), but it has become considerably easier to access the “crowd.” Amazon’s Mechanical Turk (MTurk.com) gives researchers access to over 500,000 people around the world. It allows you to post specific tasks and have them completed within hours. For example, if you want to test the reliability of a survey or survey items, you can post it on MTurk and have 200 people take it (depending on the survey’s length, you can pay them $.50 to $1.00 each).

So the idea of crowdsourcing got me thinking about the kind of information we could get if we had 100, 200, or 300 people read through interview transcripts. For simplicity, I wanted MTurk participants (called Workers on MTurk) to read transcripts and rate (using a Likert scale) students’ experiences in specific programs, as well as select text that they deemed important and illustrative of those experiences. We conducted a series of studies using this procedure and found that the crowd’s average ratings of the students’ experiences were stable and consistent across five different samples. We also found that the text the crowd selected was the same across the five samples. This is important from a reporting standpoint: it helped to identify the most relevant quotes for the reports, and the ratings provided a summary of the student experiences that could be used to compare different interview transcripts.
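For readers curious about the mechanics, below is a minimal sketch of posting one transcript-rating task programmatically with the boto3 MTurk client. This is illustrative only: the post does not say how the tasks were created (MTurk’s web interface works just as well), and the form content, reward, and Worker counts are hypothetical.

```python
# Sketch: posting a transcript-rating HIT with boto3 (hypothetical task and amounts).
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# A simplified HTMLQuestion: in a real HIT the form must also echo the assignmentId
# that MTurk passes to the page, which typically requires a bit of JavaScript.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <p>Read the de-identified transcript excerpt, rate the student's program experience
         from 1 (very negative) to 5 (very positive), and paste the sentence you found
         most illustrative of that experience.</p>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>600</FrameHeight>
</HTMLQuestion>
"""

response = mturk.create_hit(
    Title="Rate a student's program experience from an interview transcript",
    Description="Read a short transcript excerpt and rate the experience on a 1-5 scale",
    Keywords="transcript, rating, research",
    Reward="0.75",                        # dollars, passed as a string
    MaxAssignments=100,                   # number of Workers who rate this transcript
    AssignmentDurationInSeconds=30 * 60,  # time allowed per Worker
    LifetimeInSeconds=7 * 24 * 60 * 60,   # how long the HIT stays available
    Question=question_xml,
)
print("HIT ID:", response["HIT"]["HITId"])
```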

If you are interested in trying this approach out, here are a few suggestions:

1) Make sure that you remove any identifying information about the program from the transcripts before posting them on MTurk (to protect privacy and comply with HSIRB requirements).

2) Pay Workers more for tasks that take more time. If a task takes 15 to 20 minutes, I suggest a minimum payment of $.50 per response. If the task takes more than 20 minutes, I suggest $.75 to $2.00, depending on the time required to complete it.

3) Be specific about what you want the crowd to do. There should be no ambiguity about the task (this can be accomplished by pilot testing the instructions and tasks and asking the MTurk participants to provide you feedback on the clarity of the instructions).
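Once the ratings come back, the crowd’s judgments can be summarized per transcript and compared across independent Worker samples, as described above. Here is a hypothetical sketch of that kind of stability check using pandas; it is not the authors’ actual analysis, and the ratings are invented.

```python
# Sketch: checking whether mean ratings are stable across independent Worker samples.
import pandas as pd

# Each row is one Worker's 1-5 rating of one transcript, from sample A or B.
ratings = pd.DataFrame({
    "transcript": ["T1", "T1", "T2", "T2", "T3", "T3", "T1", "T2", "T3"],
    "sample":     ["A",  "B",  "A",  "B",  "A",  "B",  "A",  "B",  "A"],
    "rating":     [4,    5,    2,    2,    3,    4,    5,    3,    3],
})

# Mean rating per transcript within each sample.
means = ratings.groupby(["transcript", "sample"])["rating"].mean().unstack("sample")
print(means)

# If the crowd's ratings are stable, per-transcript means from different samples
# should track each other closely; their correlation is one simple summary.
print("Correlation between samples A and B:", means["A"].corr(means["B"]))
```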

I hope you found this useful. Please let me know how you have used crowdsourcing in your practice.

Blog: Tips for Data Preparation

Posted on June 17, 2015 in Blog

EvaluATE Blog Editor, The Evaluation Center at Western Michigan University

Over the course of analyzing data for a number of projects, I have encountered several situations that can make it hard to work with spreadsheet-based data, especially when working with multiple programs. Today, I would like to share some frustration-saving tips for preparing spreadsheet data for use with programs like Excel, Access, and SPSS.

1) In Excel, use an apostrophe at the beginning of numbers that start with a zero: Excel will usually remove zeros at the beginning of a field, so ZIP Codes or other numbers with leading zeros must be entered with a single apostrophe in front. This is not necessary for quantitative values, where a leading zero carries no meaning.

2) Minimize the number of columns in your data download: Survey systems like Qualtrics or SurveyMonkey usually provide an option to export single-question responses as multiple columns (e.g., Gender_male, Gender_female, and Gender_other). Using this option can make it difficult to clean and analyze your data later. If the question was a “select all that apply” type, multiple columns are appropriate. If the respondent can select only one option, keep the answers in one column, with a distinct code for each type of response.

3) Use simple column headings when setting up spreadsheets: Both SPSS and Access like to read the first row of spreadsheets as variable names. The second row will always be interpreted as data. To save yourself frustration, instead of putting the question title “GPA” in the first line, and a series of years in the second line, simply have one line of variable names that includes “GPA2014” and “GPA2015.”

4) Avoid special characters: Many computer programs hate special characters (like quotation marks or @ symbols) in variable names or text fields. For example, Access will read quotation marks as delimiters in text fields during some operations, which will trigger errors that can cause the database to fault.

5) Use built-in number formats instead of hand-entering symbols: Avoid hand-entering percent symbols and dollar signs in Excel fields. Instead, enter .xx for percentages and xx.xx for dollars, and then set the number format for the whole column to percentage or currency (drop down menu on the ribbon). Excel will keep the numbers as entered, but will display them properly. Also make sure all cells in a column are set to the same number format.

6) Assign a code to identify missing data: Empty cells are interpreted differently by different programs, depending on the data type. Without going into too much detail as to why, using a number that could never appear in your data (like 99999) is better than using zeros to represent missing data. Use formulas to exclude those codes from any calculations you do in Excel (with 99999 as the missing-data code, the formula =AVERAGEIF(A:A,"<99999") would average only the cells in column A that are not coded as missing). This approach is also more upload-friendly for Access or SPSS. Use the word “None” to represent missing qualitative data in text columns. Doing this will speed up error checking and upload/transfer to other programs.
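If you later pull the prepared spreadsheet into a scripting environment, the same conventions carry over. Below is a small, self-contained pandas sketch (an addition for illustration, not part of the original tips) showing leading zeros preserved and the 99999 and "None" codes treated as missing; the column names and values are made up.

```python
# Sketch: reading a prepared export while honoring the conventions above (hypothetical data).
import io
import pandas as pd

csv_text = """zip_code,GPA2014,GPA2015,comments
01234,3.2,3.4,Great progress
05678,99999,3.1,None
"""

df = pd.read_csv(
    io.StringIO(csv_text),
    dtype={"zip_code": str},       # keep leading zeros in ZIP codes
    na_values=["99999", "None"],   # missing-data codes become NaN
)
print(df)
print("Mean 2014 GPA:", df["GPA2014"].mean())  # the 99999 code is excluded automatically
```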

Following these tips can save you time and effort in the long run.

Blog: LGBT-Inclusive Language in Data Collection

Posted on May 27, 2015 in Blog

Coordinator of LGBT Student Services, Western Michigan University

In order to obtain relevant and reliable demographic information regarding lesbian, gay, bisexual and transgender (LGBT) individuals, it is important to understand the complexities of sex, sexual orientation, gender identity and gender expression. In many English-speaking communities, the concepts of sex, gender and sexual orientation are frequently conflated in ways that can marginalize those who do not fit the dominant heterosexual, men are men/women are women narrative. Thus, when asking about an individual’s sex, gender, or sexual orientation on a survey, it is important that survey developers have a clear understanding of these terms and what it really means to ask for specific demographic information to ensure that the information collected is valid.

In Western culture, sex assignment is usually determined by a physician around the time of birth based on the appearance of external genitalia. There is an assumption that the development of external genitalia aligns with an expected chromosomal make up and release of hormones. Sex categories are commonly identified exclusively as either female or male, but a growing number of communities, cultures, and countries are advocating for expanded recognition of additional sex categories, including intersex.

[Example survey question: sex assigned at birth]

Gender identity, while frequently used interchangeably with and conflated with sex assigned at birth, describes the way a person sees themselves in terms of gender categories, such as man, woman, agender, third-gender, and other gender identifier language. Gender expression describes the ways a person expresses their gender to other people through roles, mannerisms, clothing, interactions, hair styles, and other perceivable ways. If seeking to better understand how the respondent interacts with the outside world, a survey may ask for gender identity and gender expression.

The normative alignment of sex assigned at birth and gender identity, such as a person assigned female at birth who identifies as a woman, is described by the term cisgender. Transgender is a broad term for identities in which a person’s sex assigned at birth and gender identity do not align. It’s important to recognize that those who identify as transgender may or may not identify with binary male/female or man/woman categories. Surveys seeking reliable data regarding transgender populations should ask descriptive and precise questions about transgender identity, sex assigned at birth, and gender identity, with the option for respondents to provide their own identity term.

[Example survey question: gender identity]

Finally, sexual orientation describes a person’s emotional and/or physical attraction to other people. Common sexual orientation terms include straight, gay, lesbian, bisexual, asexual, and many others. Sexual orientation and sexual behavior, however, are not the same. If we are seeking to address health disparities that result from same-sex sexual behavior, it is more relevant to ask about sexual behavior than about sexual orientation.

[Example survey questions: sexual orientation and sexual behavior]

It’s important that research tools reflect an understanding of the complexity and meaning of each of these categories in order to collect relevant demographic information that serves to answer an intended question. An important rule of thumb is to only ask what you really need to know about.

Blog: Some of My Favorite Tools and Apps

Posted on May 20, 2015 in Blog

Co-Principal Investigator, Op-Tec Center

The Formative Assessment Systems for ATE project (FAS4ATE) focuses on assessment practices that serve the ongoing evaluation needs of projects and centers. Determining these information needs and organizing data collection activities is a complex and demanding task, and we’ve used logic models as a way to map them out. Over the next five weeks, we offer a series of blog posts that provide examples and suggestions of how you can make formative assessment part of your ATE efforts. – Arlen Gullickson, PI, FAS4ATE

Week 5 – How am I supposed to keep track of all this information?

I’ve been involved in NSF center and project work for over 17 years now. When it comes to keeping track of information from meetings, having a place to store evaluation data, and tracking project progress, there are a few habits and apps I’ve found particularly useful.

Habit: backing up my files
All the apps I use are cloud-based, so I can access my files anywhere, anytime, with any device. However, I also use Apple’s Time Machine to automatically back up my entire system to an external hard drive on a daily basis. I also have three cloud-based storage accounts (Dropbox, Google Drive, and Amazon Cloud Drive). When the FAS4ATE files on Dropbox were accidentally deleted last year, I was able to upload my backup copy, and we recovered everything with relative ease.

Habit + App: Keeping track of notes with Evernote
I’ve been using Evernote since the beta was released in 2008 and absolutely love it. If you’ve been in a meeting with me and seen me typing away, I’m not emailing or tweeting; I’m taking meeting notes in Evernote. Notes can include anything: text, pictures, web links, voice memos, and so on, and you can attach files such as Word documents and spreadsheets. Notes are organized in folders and are archived and searchable from any connected device. There are versions available for all of the popular operating systems and devices, and notes can easily be shared among users. If it’s a weekend and I’m five miles off the coast fishing and you call me about a meeting we had seven years ago, guess what? With a few clicks I can do a search from my phone, find those notes, and send them to you in a matter of seconds. Evernote has both a free, limited version and an inexpensive paid version.

App: LucidChart
When we first started with the FAS4ATE project, we thought we’d be developing our own cloud-based logic model dashboard-type app. We decided to start by looking at what was out there, so we investigated lots of project management apps like Basecamp. We tried to force Evernote into a logic model format; we liked DoView. However, at this time we’ve decided to go with LucidChart. LucidChart is a web-based diagramming app that runs in a browser and allows multiple users to collaborate and work together in real time. The app allows in-editor chat, comments, and video chat. It is fully integrated with Google Drive and Microsoft Office 2013 and right now appears to be our best option for collaborative (evaluator, PI, etc.) logic model work. You may have seen this short video logic model demonstration.

As we further develop our logic model-based dashboard, we’ll be looking for centers and projects to pilot it. If you are interested in learning more about being a pilot site, contact us by emailing Amy Gullickson, one of our co-PIs, at amy.gullickson@unimelb.edu.au. We’d love to work with you!

Blog: Finding Opportunity in Unintended Outcomes

Posted on April 15, 2015 in Blog

Research and Evaluation Consultant, Steven Budd Consulting

Working with underage students carries an increased responsibility for their supervision. Concerns may arise during the implementation of activities that were never envisioned when the project was designed. These unintended consequences may be revealed during an evaluation, presenting an opportunity for PIs and evaluators both to learn and to intervene.

One project I’m evaluating includes a website designed for young teens, and features videos from ATETV and other sources. The site encourages our teen viewers to share information about the site with their peers and to explore links to videos hosted on other popular sites like YouTube. The overarching goal is to attract kids to STEM and technician careers by piquing their interest with engaging and accurate science content. What we didn’t anticipate was the volume of links to pseudoscience, science denial, and strong political agendas they would encounter. The question for the PI and Co-PIs became, “How do we engage our young participants in a conversation about good versus not-so-good science and how to think critically about what they see?”

As the internal project evaluator, I first began a conversation with the project PI and senior personnel around the question of responsibility. What is the responsibility of the PIs to engage our underage participants in a conversation about critical thinking and learning, so they can discriminate between questionable and solid content? Such content is readily accessible to young teens as they surf the Web, so a more important question was how the project team might capture this reality and capitalize on it. In this sense, was a teaching moment at hand?

As evaluators on NSF-funded projects, we know that evaluator engagement is critical right from the start. Formative review becomes especially important when even well-designed and well-thought-out activities take unanticipated turns. Our project incorporates a model of internal evaluation, which enables project personnel to gather data and provide real-time assessment of activity outcomes. We then present the data, with comment, to our external evaluator. The evaluation team works with project leadership to identify concerns as they arise and to strategize a response. That response might include refining activities and how they are implemented, or creating entirely new activities that address a concern directly.

After thinking it through, the project leadership chose to open a discussion about critical thinking and science content with the project’s teen advisory group. Our response was to:

  • Initiate more frequent “check-ins” with our teen advisers and have more structured conversations around science content and what they think.
  • Sample other teen viewers as they join their peers in the project’s discussion groups and social media postings.
  • Seek to better understand how teens engage Internet-based content and how they make sense of what they see.
  • Seek new approaches to activities that engage young teens in building their science literacy and critical thinking.

Tips to consider

  • Adjust your evaluation questions to better understand the actual experience of your project’s participants, and then look for the teaching opportunities in response to what you hear.
  • Vigilant evaluation may reveal the first signs of unintended impacts.

Blog: Still Photography in Evaluation

Posted on April 1, 2015 in Blog

Consultant/Evaluator, TEMPlaTe Educational Consulting

I have used my photographic skills in my work as principal investigator for my ATE projects and more recently as external evaluator for other ATE projects and centers. If you have attended the annual ATE PI conference or the HI-TEC conference, you may have seen some of the photo books that I have produced for my clients at their showcase booths.

Photography is an important part of special events, e.g., weddings, birthdays, anniversaries, and the births of new family members. Why is photography so important at these times? Couldn’t we just write a summary of what happened and compile a list of who participated, and in the case of weddings, save a lot of money? Of course not. Photographic images are one of the best ways to help you and others remember events, relive them, and tell a story. Certainly, your project activities may not rank in importance with these family events, but it is still valuable to make an effort to document them through still photographs and/or video in order to tell your story.

Storytelling in evaluation has been discussed in the literature. An example is Richard Krueger’s chapter, “Using Stories in Evaluation,” in the Handbook of Practical Program Evaluation (3rd ed.), edited by J. Wholey, H. Hatry, and K. Newcomer. What is missing from the literature is how to use photography to tell stories in evaluation, rather than simply capturing images for marketing purposes or to embellish an evaluation report. This is an area where we need to develop uses of photography in evaluation.

Taking photos during the event is the first step. I have found that some pre-event planning helps. Looking over the program or agenda will help you identify where you will want to be at any given time, especially if multiple sessions are happening concurrently. I also look for activities where the participants will be doing something (action shots) rather than just listening to a speaker and viewing PowerPoint slides (passive shots), although a few photos of the latter activity might be useful. Then, I develop a “shoot sheet” or a list of the types of images I want to capture.

Sherry Boyce in an AEA365 blog listed some questions to keep in mind when thinking about the types of photos you will need to tell your evaluation story (http://aea365.org/blog/sherry-boyce-on-using-photos-in-evaluation-reports/):

  • How will the photos help answer the evaluation questions for the project?
  • How will the photos help tell your evaluation story?
  • Will the photos be representative of the experiences of many participants?
  • Do the photos illustrate a participant’s engagement?
  • Do the photos illustrate “mastery”?

The next step is to arrange the photos to tell your story about the event, whether a conference, workshop, or celebration. Computers enable me to arrange my photos to make a photo book that can be uploaded to one of many sites for printing. Here is a photo book I produced for The Southwest Center for Microsystems Education (SCME) used in a display at a recent Micro Nano Tech Conference.

[Photo: SCME photo book]

Photo books I have made for my ATE clients have been used by PIs to promote and advocate for their project or center. People feel comfortable perusing a photo book, and photo books require no equipment for viewing and are easily transported. They can be effective conversation starters.

Editor’s Note: Dave informed us that he works almost exclusively in iPhoto, since he is a MacBook Pro user. He uses Apple’s printing services, which are contracted out. Dave did tell us about some additional photo hosting/photo book software available on the web, although he did not vouch for their quality.

Blog: A Rose Isn’t as Sweet by Any Other Name: Lessons on Subject Lines for Web Surveys

Posted on February 25, 2015 in Blog

Principal Consultant, The Rucks Group

Survey developers typically spend a great deal of time on the content of questionnaires. We struggle with what items to include, how to word each question, and whether an item should be closed-ended or open-ended; the list of considerations goes on. After all that effort, we generally spend less time on a small element that is incredibly important to web surveys: the email subject line.

I have come to appreciate the extent to which the subject line acts as a “frame” for a survey. In simplistic terms, a frame is how a concept is categorized. Framing is the difference between calling an unwanted situation a challenge versus a problem. There is a significant literature that suggests that the nature of a frame will produce particular types of behaviors. For instance, my firm recently disseminated a questionnaire to gain feedback on the services that EvaluATE provides. As shown in the chart below, initially we received about 100 responses. With that questionnaire invitation, we used the subject line EvaluATE Services Survey. Based on past experience, we would have expected the next dissemination to garner about 50 responses, but we got closer to 90. So what happened? We had started playing with the subject line.

[Chart: Survey responses by dissemination]

EvaluATE’s Director, Lori Wingate, sent out a reminder email with the subject line, What do you think of EvaluATE? When we sent out the actual questionnaire, we used the subject line, Tell us what you think. For the next two iterations of dissemination, we had slightly higher than expected response rates.

For the third dissemination, Lori conducted an experiment. She sent out reminder notices but manipulated the subject lines. There were seven different subject lines in total, each sent to about 100 different individuals. The actual questionnaire disseminated had a constant subject line of Would you share your thoughts today? As you see below, the greatest response rate occurred when the subject line of the reminder was How is EvaluATE doing?, while the lowest response rate was when Just a few days was used.

[Chart: Response rates by reminder subject line]

These results aren’t completely surprising. In the 2012 presidential election, the Obama campaign devoted much effort to identifying subject lines that produced the highest response rates and found that a “gap in information” was the most effective. Given that explanation, one might ask why the subject line Just a few days garnered the lowest response rate, since it also presents a gap in information. The reason is unclear. One possibility is that the incongruity between the urgency implied by the subject line and the actual importance of the email’s topic made respondents feel tricked, so they opted not to complete the survey.
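To gauge whether differences like these are larger than what chance alone would produce, one simple option is a chi-square test on the respond/no-respond counts for each subject line. The sketch below uses Python with hypothetical counts (roughly 100 recipients per subject line, as in the experiment); these are not the actual EvaluATE numbers.

```python
# Sketch: chi-square test of response rates across reminder subject lines (hypothetical counts).
from scipy.stats import chi2_contingency

# (responded, did not respond) per subject line, ~100 recipients each.
counts = {
    "How is EvaluATE doing?":        (32, 68),
    "What do you think of EvaluATE?": (24, 76),
    "Just a few days":                (11, 89),
}

table = [list(pair) for pair in counts.values()]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value suggests the response-rate differences are unlikely to be chance alone.
```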

Taking all of these findings together tells us that a “rose by any other name would not smell as sweet” and that what something is called does make a difference. So when you are designing your next web survey, make sure crafting the subject line is part of the design process.

Blog: Look No Further! Potential Sources of Institutional Data

Posted on February 4, 2015 in Blog

Carolyn Brennan
Assistant Vice Chancellor for Research
University of Washington Bothell

Russell Cannon
Director of Institutional Research
University of Washington Bothell

This blog entry is a follow-up to our article in EvaluATE’s Winter 2015 newsletter on the use of institutional data for evaluation and grant proposals. In that article, we highlight data collected by most higher education institutions. Here we discuss additional sources that may be available on your campus.

  • Longitudinal Data Systems: As part of new federal regulations, states must track students longitudinally through and beyond the education pipeline. The implementation of these systems, the extent of the data stored, and the data availability varies greatly by state, with Texas, Florida, and Washington leading the pack.
  • Support Services: Check the college’s catalog of specific support service offerings and their target population; such listings are often created as resources for students as part of campus student success efforts. This information can help shape a grant proposal narrative and the services themselves may be potential spaces for embedding and assessing interventions.
  • Surveys: Many institutions administer surveys that track student beliefs, behaviors, and self-reported actions and characteristics. These include national surveys (which allow external benchmarking but less customization) and internal surveys (which allow more customization but only internal benchmarking). Data from such surveys can help shape grant proposals and evaluations in more qualitative ways.

Caution: All surveys may suffer from low response rates, nonresponse bias, and the subjectivity of responses; they should only be used when “harder” data are not available or to augment those data.

  • National Student Clearinghouse (NSC) data: Although schools are required to track data on student success at their own institutions, many are increasingly using tools like the National Student Clearinghouse to track where students transfer, whether they eventually graduate, and whether they go on to professional and graduate school. NSC is nearly always the most accurate source of data on graduate school attainment and can add nuance by reframing some “drop-outs” as transfers who eventually graduate.
  • Data on student behavior: While self-reported student behavior data can be obtained through surveys, many institutions have adopted card-swipe systems and tracking mechanisms that monitor student activity on learning management systems. These provide hard data on certain elements of student behavior, such as participation in extracurricular activities, time spent with study groups or learning resources, and behaviors such as coming late to class.
  • Campus-level assessment: Some institutions use standardized national tools like ACT’s Collegiate Assessment of Academic Proficiency or the Council for Aid to Education’s Collegiate Learning Assessment. They are sometimes administered to all students; more often they are administered to a sample, sometimes on a voluntary basis (which may result in bias). At the urging of internal efforts or external accreditors, some institutions have developed in-house instruments (rubric-graded analysis of actual student work). While these may not be as “accurate” or “reliable” as nationally developed instruments, they are often better proxies of faculty and campus engagement.
  • Program-level assessment: Many programs may have program-specific capstones or shared assessments that can be used as part of the evaluation process.

These are all potential sources of data that can improve the assessment and description of interventions designed to support the mission of higher education institutions: increased student retention, enhanced academic performance, and improved graduation rates. We’d like to hear whether you’ve used any of these sources, or others, successfully toward these aims.