
Blog: Understanding Data Literacy

Posted on February 19, 2020 in Blog

Dean of Institutional Effectiveness, Coastline College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In today’s data-filled society, institutions are awash in data but often lack data literacy: the ability to transform data into usable information and then use that knowledge to facilitate actionable change.

Data literacy is a foundational driver of an institution’s capacity to gather, consume, and use data to build insight and inform action. Institutions can use a variety of strategies to determine the maturity of their data utilization culture. The following methods can help you better understand your organization’s level of data literacy:

  • Conduct a survey that provides insight into areas of awareness, access, application, and action associated with data utilization. For example, Coastline College uses a data utilization maturity index tool, the EDUCAUSE benchmark survey, and annual utilization statistics to get this information. The survey can be conducted in person or electronically, based on the access and comfort employees or stakeholders have with technology. The goal of this strategy is to gain surface-level insight into the maturity of your organizational data culture.
  • Lead focus groups with a variety of stakeholders (e.g., faculty members, project directors) to gather rich insight into ideas about and challenges associated with data. The goal of this approach is to glean a deeper understanding of the associated “whys” found in broader assessments (e.g., observations, institutional surveys, operational data mining).
  • Compare your organizational infrastructure and operations to similar institutions that have been identified as having successful data utilization. The goal of this strategy is to help visualize and understand what a data culture is, how your organization compares to others, and how your organization can adapt or differentiate its data strategy (or adopt another one). A few resources I would recommend include Harvard Business Review’s Analytics topic library, EDUCAUSE’s Analytics library, What Works Clearinghouse, McKinsey & Company’s data culture article, and Tableau’s article on data culture.
  • Host open discussions with stakeholders (e.g., faculty members, project directors, administrators) about the benefits, disadvantages, optimism, and fears related to data. This method can build awareness, interest, and insight to support your data planning. The goal of this approach is to effectively prepare and address any challenges prior to your data plan investment and implementation.

Based on the insight collected, organizational leadership can develop an implementation plan to adopt and adapt tools, operations, and trainings to build awareness, access, application, and action associated with data utilization.

Avoid the following pitfalls:

  • Investing in a technology prior to engaging stakeholders and understanding the organizational data culture. In these instances, the technology will help but will not be the catalyst or foundation to build the data culture. The “build it and they will come” theory is not applicable in today’s data society. Institutions must first determine what they are seeking to achieve. Clay Christensen’s Jobs to Be Done Theory is a resource that may bring clarity to this matter.
  • Assuming individuals have a clear understanding of the technical aspects of data. This assumption could lead to misuse or limited use of your data. To address this issue, institutions need to conduct an assessment to understand the realities in which they are operating.
  • Hiring for a single position to lead the effort of building a data culture. In this instance, a title does not validate the effort or ensure that an institution has a data-informed strategy and infrastructure. To alleviate this challenge, institutions must invest in teams and continuous trainings. For example, Coastline College has an online data coaching course, in-person hands-on data labs, and open discussion forums and study sessions to learn about data access and utilization.

As institutions better understand and foster their data cultures, the work of evaluators can be tailored and utilized to meet project stakeholders (e.g., project directors, faculty members, supporters, and advisory boards) where they are. By understanding institutional data capacity, evaluators can support continuous improvement and scaling through the provision of meaningful and palatable evaluations, presentations, and reports.

Blog: How I Came to Learn R, and Why You Should Too!

Posted on February 5, 2020 in Blog

Founder, R for the Rest of Us

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


A few years ago, I left my job on the research team at the Oregon Community Foundation and started working as an independent evaluation consultant. No longer constrained by the data analysis software choices made by others, I was free to use whatever tool I wanted. As an independent consultant, I couldn’t afford proprietary software such as SPSS, so I used Excel. But the limits of Excel quickly became apparent, and I went in search of other options.

I had heard of R, but it was sort of a black box in my mind. I knew it was a tool for data analysis and visualization, but I had no idea how to use it. I had never coded before, and the prospect of learning was daunting. But my desire to find a new tool was strong enough that I decided to take up the challenge of learning R.

My journey to successfully using R was rocky and circuitous. I would start many projects in R before finding I couldn’t do something, and I would have to slink back to Excel. Eventually, though, it clicked, and I finally felt comfortable using R for all of my work.

The more I used R, the more I came to appreciate its power.

  1. The code that had caused me such trouble when I was learning became second nature. And I could reuse code in multiple projects, so my workflow became more efficient.
  2. The data visualizations I made in R were far better and more varied than anything I had produced in Excel.
  3. The most fundamental shift in my work, though, has come from using RMarkdown. This tool enables me to go from data import to final report in R, avoiding the dance across, say, SPSS (for analyzing data), Excel (for visualizing data), and Word (for reporting). And when I receive new data, I can simply rerun my code, automatically generating my report. (A minimal sketch of this import-to-report workflow appears just below.)
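
To make that workflow concrete, here is a minimal sketch in R of the import, analyze, and visualize steps. The file name and column names (survey_data.csv, site, satisfaction) are hypothetical placeholders, and in practice this code would sit inside an RMarkdown document that is re-knit whenever new data arrive.

```r
# Minimal sketch of an import -> analyze -> visualize workflow in R.
# File and column names are hypothetical placeholders.

library(readr)    # data import
library(dplyr)    # data manipulation
library(ggplot2)  # data visualization

# Import: read the raw survey export
survey <- read_csv("survey_data.csv")

# Analyze: mean satisfaction and respondent count by site
summary_tbl <- survey %>%
  group_by(site) %>%
  summarise(mean_satisfaction = mean(satisfaction, na.rm = TRUE),
            respondents = n())

# Visualize: this chart regenerates automatically whenever the code is rerun
ggplot(summary_tbl, aes(x = site, y = mean_satisfaction)) +
  geom_col() +
  labs(title = "Mean satisfaction by site", x = NULL, y = "Mean rating")
```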

In 2019, I started R for the Rest of Us to help evaluators and others learn to embrace the power of R. Through online courses, workshops, coaching, and custom training for organizations, I’ve helped many people transition to R.

I’m delighted to share some videos here that show you a bit more about what R is and why you might consider learning it. You’ll learn about what importing data into R looks like and how you can use a few lines of code to analyze your data, and you’ll see how you can do this all in RMarkdown. The videos should give you a good sense of what working in R looks like and help you decide if it makes sense for you to learn it.

I always tell people considering R that it is challenging to learn. But I also tell them that the time and energy you invest in learning R is very much worth it in the end. Learning R will not only improve the quality of your data analysis, data visualization, and workflow, but also ensure that you have access to this powerful tool forever—because, oh, did I mention that R is free? Learning R is an investment in your current self and your future self. What could be better than that?

R Video Series

Blog: Increasing Response Rates*

Posted on January 9, 2020 in Blog

Founder and President, EvalWorks, LLC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Higher response rates yield larger samples and reduce the risk of nonresponse bias. Research on ways to increase response rates for mail and Internet surveys suggests that the following steps will improve the odds that participants will complete and return your survey.

Make the survey as salient as possible to potential respondents.
Relevance can be tested with a small group of people similar to your respondents.

If possible, use Likert-type questions, versus open-ended questions, to increase response rates. 
Generally, the shorter the survey appears to respondents, the better.

Limit the number of questions of a sensitive nature, when possible.
Additionally, if possible, make the survey anonymous, as opposed to confidential.

Include prenotification and follow-ups to survey respondents.
Personalizing these contacts will also increase response rates. In addition, surveys conducted by noncommercial institutions (e.g., colleges) obtain higher response rates than those conducted by commercial institutions.

Provide additional copies of or links to the survey.
This can be done as part of follow-up with potential respondents.

Provide incentives. 
Incentives included in the initial mailing produce higher return rates than those contingent upon survey return, with twice the increase when monetary (versus nonmonetary) incentives are included up-front.

Consider these additional strategies for mail surveys:
Sending surveys by recorded delivery, printing them on colored paper, and providing addressed, stamped return envelopes.

Consider the following when conducting an Internet survey:
A visual indicator of how much of the survey respondents have completed or, alternatively, how much they have left to complete.

Although there are no hard-and-fast rules for what constitutes an appropriate response rate, many government agencies require response rates of 80 percent or higher before they are willing to report results. If you have conducted a survey and still have a low response rate, make additional efforts or use a different survey mode to reach non-respondents; however, it is important to check that these late respondents do not answer differently than initial respondents and that the survey mode itself did not introduce bias.
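
To make that check concrete, here is a minimal sketch in R; the file and variable names (survey_responses.csv, wave, satisfaction, completed_program) are hypothetical placeholders, not part of the original article.

```r
# Hypothetical sketch: check whether respondents reached through follow-up
# (or a different survey mode) answer differently than initial respondents.

survey <- read.csv("survey_responses.csv")  # includes a 'wave' column
                                            # coded "initial" or "follow_up"

# Compare a key continuous item across waves
t.test(satisfaction ~ wave, data = survey)

# Compare a categorical item across waves
chisq.test(table(survey$wave, survey$completed_program))
```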

 

*This blog is a reprint of an article from an EvaluATE newsletter published in spring 2010.

Blog: LinkedIn for Alumni Tracking

Posted on June 13, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Benjamin Reid, President of Impact Allies
Kevin Cooper, PI of RCNET and Dean of Advanced Technology at IRSC

Post-program outcomes for students are key indicators of success and primary metrics for measuring medium- and long-term outcomes and impacts. EvaluATE’s 2019 revised Advanced Technological Education (ATE) Annual Survey states, “ATE program stakeholders would like to know more about post-program outcomes for students.” It lists the types of data sought:

    • Job placement
    • Salary
    • Employer satisfaction
    • Pursuit of additional STEM education
    • Acquisition of industry certifications or licenses

The survey also asks for the sources used to collect this data, giving the following choices:

    • Institutional research office
    • Survey of former students
    • Local economic data
    • Personal outreach to former students
    • State longitudinal data systems
    • Other (describe)

This blog introduces an “Other” data source: LinkedIn Alumni Tool (LAT).

LAT is data rich and free, yet underutilized. Each alum’s professional information is readily available (i.e., there is no permissions process for the researcher) and personally updated. The information is also remarkably accurate, because open visibility and network effects help ensure honesty. These factors make LAT a great tool for quick health checks and an alternative to contacting each person to request the same information.

Even better, LinkedIn is a single tool that is useful for evaluators, principal investigators, instructors, and students. For example, a couple of years ago, Kevin, principal investigator for the Regional Center for Nuclear Education and Training (RCNET), and I (RCNET’s evaluator) realized that our respective work was leading us to use the same tool — LinkedIn — and that we should co-develop our strategies for connecting and communicating with students and alumni on this medium. Kevin uses it to help RCNET’s partner colleges communicate opportunities (jobs, internships, scholarships, continued education) and develop soft skills (professional presentation, networking, awareness of industry news). I use it to glean information about students’ educational and professional experiences leading up to and during their programs and to track their paths and outcomes after graduation. LinkedIn is also a user-centric tool for students that — rather than ceasing to be useful after graduation — actually becomes more useful.

When I conducted a longitudinal study of RCNET’s graduates across the country over the preceding eight years, I used LinkedIn for two purposes: triangulation and connecting with alumni via another channel, because after college many students change their email addresses and telephone numbers. More than 30 percent of the alumni who responded were reached via LinkedIn, as their contact information on file with the colleges had since changed.

Using LAT, I viewed their current and former employers, job positions, promotions, locations, skills, and further education (and there were insignificant differences between what alumni reported on the survey and interview and what was on their LinkedIn profiles). That is, three of the five post-program outcomes for students of interest to ATE program stakeholders (plus a lot more) can be seen for many alumni via LinkedIn.

Visit https://university.linkedin.com/higher-ed-professionals for short videos about how to use the LinkedIn Alumni Tool and many others. Many of the videos take an institutional perspective, but here is a tip on how to pinpoint program-specific students and alumni. Find your college’s page, click Alumni, and type your program’s name in the search bar. This will filter the results only to the people in your program. It’s that simple.

 

Blog: Increase Online Survey Response Rates with These Four Tips

Posted on April 3, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Molly Henschel, Researcher and Evaluator, Magnolia Consulting, LLC
Elizabeth Peery, Researcher and Evaluator, Magnolia Consulting, LLC
Anne Cosby, Researcher and Evaluation Associate, Magnolia Consulting, LLC

 

Greetings! We are Molly Henschel, Beth Perry, and Anne Cosby with Magnolia Consulting. We often use online surveys in our Advanced Technological Education (ATE) projects. Online surveys are an efficient data collection method for answering evaluation questions and providing valuable information to ATE project teams. However, low response rates threaten the credibility and usefulness of survey findings. At Magnolia Consulting, we use proven strategies to increase response rates, which, in turn, helps ensure survey results are representative of the population. We offer the following four strategies to promote high response rates:

1. Ensure the survey is easy to complete. Keep certain factors in mind as you create your survey. For example, is the survey clear and easy to read? Is it free of jargon? Is it concise? You do not want respondents to lose interest in completing a survey because it is difficult to read or too lengthy. To help respondents finish the survey, consider:

      • collaborating with the ATE project team to develop survey questions that are straightforward, clear, and relevant;
      • distributing survey questions across several pages to decrease cognitive load and minimize the need for scrolling;
      • including a progress bar; and
      • ensuring your survey is compatible with both computers and mobile devices.

Once the survey is finalized, coordinate with program staff to send the survey during ATE-related events, when the respondents have protected time to complete the survey.

2. Send a prenotification. Prior to sending the online survey, send a prenotification to all respondents, informing them of the upcoming survey. A prenotification establishes survey trustworthiness, boosts survey anticipation, and reduces the possibility that a potential respondent will disregard the survey. The prenotification can be sent by email, but research shows that using a mixed-mode strategy (i.e., email and postcard) can have positive effects on response rates (Dillman, Smyth, & Christian, 2014; Kaplowitz, Lupi, Couper, & Thorp, 2012). We have also found that asking the ATE principal investigator (PI) or co-principal investigators (co-PIs) to send the prenotification helps yield higher response rates.

3. Use an engaging and informative survey invitation. The initial survey invitation is an opportunity to grab your respondents’ attention. First, use a short and engaging subject line that will encourage respondents to open your email. In addition, follow best practices to ensure your email is not diverted into a recipient’s spam folder. Next, make sure the body of your email provides respondents with relevant survey information, including:

      • a clear survey purpose;
      • a statement on the importance of their participation;
      • realistic survey completion time;
      • a deadline for survey completion;
      • information on any stipend requirements or incentives (if your budget allows);
      • a statement about survey confidentiality;
      • a show of appreciation for time and effort; and
      • contact information for any questions about the survey.

4. Follow up with nonresponders. Track survey response rates on a regular basis (one simple way to do this is sketched after this list). To address low response rates:

      • continue to follow up with nonresponders, sending at least two reminders;
      • investigate potential reasons the survey has not been completed and offer any assistance (e.g., emailing a paper copy) to make survey completion less burdensome;
      • contact nonresponders via a different mode (e.g., phone); or
      • enlist the help of the ATE PI and co-PI to personally follow up with nonresponders. In our experience, the relationship between the ATE PI or co-PI and the respondents can be helpful in collecting those final surveys.
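
One simple way to track a response rate and build a reminder list is sketched below in R; the file names and the email identifier are hypothetical placeholders.

```r
# Hypothetical sketch: compute the current response rate and list
# nonresponders for targeted reminders.

roster    <- read.csv("invited_participants.csv")  # everyone invited
responses <- read.csv("completed_surveys.csv")     # completed surveys

response_rate <- nrow(responses) / nrow(roster)
message(sprintf("Current response rate: %.1f%%", 100 * response_rate))

# Nonresponders = invited people with no completed survey on file
nonresponders <- roster[!(roster$email %in% responses$email), ]
write.csv(nonresponders, "reminder_list.csv", row.names = FALSE)
```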

 

Resources:

Nulty, D. (2008). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301–314.

References:

Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, mail, and mixed-mode surveys: The tailored design method (4th ed.). New York: Wiley.

Kaplowitz, M. D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The effect of invitation design on web survey response rates. Social Science Computer Review, 30, 339–349.

Blog: From Instruments to Analysis: EvalFest’s Outreach Training Offerings

Posted on February 26, 2019 in Blog

President, Karen Peterman Consulting, Co.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Looking for a quick way to train field researchers? How about quick tips on data management or a reminder about what a p-value is? The new EvalFest website hosts brief training videos and related resources to support evaluators and practitioners. EvalFest is a community of practice, funded by the National Science Foundation, that was designed to explore what we could learn about science festivals by using shared measures. The videos on the website were created to fit the needs of our 25 science festival partners from across the United States. Even though they were created within the context of science festival evaluation, the videos and website have been framed generally to support anyone who is evaluating outreach events.

Here’s what you should know:

  1. The resources are free!
  2. The resources have been vetted by our partners, advisors, and/or other leaders in the STEM evaluation community.
  3. You can download PDF and video content directly from the site.

Here’s what we have to offer:

  • Instruments — The site includes 10 instruments, some of which include validation evidence. The instruments gather data from event attendees, potential attendees who may or may not have attended your outreach event, event exhibitors and partners, and scientists who conduct outreach. Two observation protocols are also available, including a mystery shopper protocol and a timing and tracking protocol.
  • Data Collection Tools — EvalFest partners often need to train staff or field researchers to collect data during events, so this section includes eight videos that our partners have used to provide consistent training to their research teams. Field researchers typically watch the videos on their own and then attend a “just in time” hands-on training to learn the specifics about the event and to practice using the evaluation instruments before collecting data. Topics include approaching attendees to do surveys during an event, informed consent, and online survey platforms, such as QuickTapSurvey and SurveyMonkey.
  • Data Management Videos — Five short videos are available to help clean and organize your data and to help begin to explore it in Excel. These videos include the kinds of data that are typically generated by outreach surveys, and they show step-by-step how to do things like filter your data, recode your data, and create pivot tables.
  • Data Analysis Videos — Available in this section are 18 videos and 18 how-to guides that provide quick explanations of things like the p-value, exploratory data analysis, the chi-square test, independent-samples t-test, and analysis of variance. The conceptual videos describe how each statistical test works in nonstatistical terms. The how-to resources are then provided in both video and written format and walk users through conducting each analysis in Excel, SPSS, and R. (A brief R sketch of a few of these tests follows this list.)
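
To give a flavor of what those analyses look like in code, here is a brief R sketch of three of the tests the videos cover; the data file and variable names are hypothetical placeholders, not EvalFest instruments.

```r
# Hypothetical sketch of three tests covered in the analysis videos, in R.

attendees <- read.csv("festival_survey.csv")

# Chi-square test: is interest in STEM independent of attendee group?
chisq.test(table(attendees$group, attendees$interested_in_stem))

# Independent-samples t-test: do first-time and returning attendees
# differ in mean satisfaction?
t.test(satisfaction ~ first_time, data = attendees)

# One-way analysis of variance: does satisfaction differ across sites?
summary(aov(satisfaction ~ site, data = attendees))
```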

Our website tagline is “A Celebration of Evaluation.” It is our hope that the resources on the site help support STEM practitioners and evaluators in conducting high-quality evaluation work for many years to come. We will continue to add resources throughout 2019. So please check out the website, let us know what you think, and feel free to suggest resources that you’d like us to create next!

Blog: Using Think-Alouds to Test the Validity of Survey Questions

Posted on February 7, 2019 in Blog

Research Associate, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Those who have spent time creating and analyzing surveys know that surveys are complex instruments that can yield misleading results when not well designed. A great way to test your survey questions is to conduct a think-aloud (sometimes referred to as a cognitive interview). A type of validity testing, a think-aloud asks potential respondents to read through a survey and discuss out loud how they interpret the questions and how they would arrive at their responses. This approach can help identify questions that are confusing or misleading to respondents, questions that take too much time and effort to answer, and questions that don’t seem to be collecting the information you originally intended to capture.

Distorted survey results generally stem from four problem areas associated with the cognitive tasks of responding to a survey question: failure to comprehend, failure to recall, problems summarizing, and problems reporting answers. First, respondents must be able to understand the question. Confusing sentence structure or unfamiliar terminology can doom a survey question from the start.

Second, respondents must be able to access or recall the answer. Problems in this area can arise when questions ask for specific details from far in the past or ask about things the respondent simply does not know.

Third, sometimes respondents remember things in different ways from how the survey is asking for them. For example, respondents might remember what they learned in a program but are unable to assign these different learnings to a specific course. This might lead respondents to answer incorrectly or not at all.

Finally, respondents must translate the answer constructed in their heads to fit the survey response options. Confusing or vague answer formats can lead to unclear interpretation of responses. It is helpful to think of these four problem areas when conducting think-alouds.

Here are some tips when conducting a think-aloud to test surveys:

    • Make sure the participant knows the purpose of the activity is to have them evaluate the survey and not just respond to the survey. I have found that it works best when participants read the questions aloud.
    • If a participant seems to get stuck on a particular question, it might be helpful to probe them with one of these questions:
      • What do you think this question is asking you?
      • How do you think you would answer this question?
      • Is this question confusing?
      • What does this word/concept mean to you?
      • Is there a different way you would prefer to respond?
    • Remember to give the participant space to think and respond. It can be difficult to hold space for silence, but it is particularly important when asking for thoughtful answers.
    • Ask the participant reflective questions at the end of the survey. For example:
      • Looking back, does anything seem confusing?
      • Is there something in particular you hoped was going to be asked but wasn’t?
      • Is there anything else you feel I should know to truly understand this topic?
    • Perform think-alouds and revisions in an iterative process. This will allow you to test the changes you make and ensure they address the issues initially identified.

Blog: PhotoVoice: A Method of Inquiry in Program Evaluation

Posted on January 25, 2019 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Ouen Hunter, Doctoral Student, The Evaluation Center
Emma Perk, Managing Director, EvaluATE
Michael Harnar, Assistant Professor, Interdisciplinary Ph.D. in Evaluation, The Evaluation Center

Hello, EvaluATE! We are Ouen Hunter (a doctoral student in the Interdisciplinary Ph.D. in Evaluation, or IDPE), Emma Perk (co-PI of EvaluATE at The Evaluation Center), and Michael Harnar (an assistant professor in the IDPE) from Western Michigan University. We recently used PhotoVoice in our evaluation of a Michigan-based Upward Bound (UB) program, a college preparation program focused on 14- to 19-year-old youth living in low-income families in which neither parent has a bachelor’s degree.

PhotoVoice is a method of inquiry that engages participants in creating photographs and short captions in response to specific prompts. The photos and captions provide contextually grounded insights that might otherwise be unreachable by those not living that experience. We opted to use PhotoVoice because the photos and narratives could provide insights into participants’ perspectives that cannot be captured using close-ended questionnaires.

We created two prompts, in the form of questions, and introduced PhotoVoice in person with the UB student participants (see the instructional handout below). Students used their cell phones to take one photo per prompt. For confidentiality reasons, we also asked the students to avoid taking pictures of human faces. Students were asked to write a two- to three-sentence caption for each photo. The caption was to include a short description of the photo, what was happening in the photo, and the reason for taking the photo.


Figure 1: PhotoVoice Handout

PhotoVoice participation was part of the UB summer programming and overseen by the UB staff. Participants had two weeks to complete the tasks. After receiving the photographs and captions, we analyzed them using MAXQDA 2018. We coded the pictures and the narratives using an inductive thematic approach.

After the preliminary analysis, we then went back to our student participants to see if our themes resonated with them. Each photo and caption was printed on a large sheet of paper (see figure 2 below) and posted on the wall. During a gallery walk, students were asked to review each photo and caption combination and to indicate whether they agree or disagree with our theme selections (see figure 3). We gave participants stickers and asked them to place the stickers in either the “agree” or “disagree” section on the bottom of each poster. After the gallery walk, we discussed the participants’ ratings to understand their photos and write-ups better.

Figure 2: Gallery walk layout (photo and caption on large pieces of paper)

Figure 3: Participants browsing the photographs

Using the participants’ insights, we finalized the analysis, created a webpage, and developed a two-page report for the program staff. To learn more about our reporting process, see our next blog. Below is a diagram of the activities that we completed during the evaluation.

Figure 4: Activities conducted in the Upward Bound evaluation

The PhotoVoice activity provided us with rich insights that we would not have received from the survey that was previously used. The UB student participants enjoyed learning about and being a part of the evaluation process. The program staff valued the reports and insights the method provided. The exclusion of faces in the photographs enabled us to avoid having to obtain parental permission to release the photos for use in the evaluation and by UB staff. Having the students use cell phone cameras kept costs low. Overall, the evaluation activity went over well with the group, and we plan to continue using PhotoVoice in the future.

Blog: Using Mixed-Mode Survey Administration to Increase Response

Posted on September 26, 2018 in Blog

Program Evaluator, Cold Spring Harbor Laboratory, DNA Learning Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

“Why aren’t people responding?”

This is the perpetual question asked by anyone doing survey research, and it’s one that I am no stranger to myself. There are common strategies to combat low survey participation, but what happens when they fail?

Last year, I was co-principal investigator on a small Advanced Technological Education (ATE) grant to conduct a nationwide survey of high school biology teachers. This was a follow-up to a 1998 survey done as part of an earlier ATE grant my institution had received. In 1998, the survey was done entirely by mail and had a 35 percent response rate. In 2018, we administered an updated version of this survey to nearly 13,000 teachers. However, this time, there was one big difference: we used email.

After a series of four messages over two months (pre-notice, invitation, and two reminders), an incentivized survey, and intentional targeting of high school biology teachers, our response rate was only 10 percent. We anticipated that teachers would be busy and that a 15-minute survey might be too much for many of them to deal with at school. However, there appeared to be a bigger problem: nearly two-thirds of our messages were never opened and perhaps never even seen.

To boost our numbers, we decided to return to what had worked previously: the mail. Rather than send more emails, we mailed an invitation to individuals who had not completed the survey, followed by postcard reminders. Individuals were reminded of the incentive and directed to a web address where they could complete the survey online. The end result was a 14 percent response rate.
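
Rounding the figures reported above (roughly 13,000 invitations, a 10 percent response rate after the email phase, and 14 percent after the mail follow-up), a quick back-of-the-envelope calculation shows how much each contact mode contributed; the exact counts below are approximations, not figures from the study.

```r
# Back-of-the-envelope sketch using rounded figures from this post.

invited         <- 13000
email_responses <- round(0.10 * invited)  # ~1,300 after the email phase
final_responses <- round(0.14 * invited)  # ~1,820 after the mail follow-up
mail_responses  <- final_responses - email_responses  # incremental gain

data.frame(
  mode          = c("email", "mail follow-up", "total"),
  responses     = c(email_responses, mail_responses, final_responses),
  response_rate = c(email_responses, mail_responses, final_responses) / invited
)
```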

I noticed that, particularly when emailing teachers at their school-provided email addresses, many messages never reached the intended recipients. Although a return to a mail-only design may not be realistic, an alternative is to heed the advice of Millar and Dillman (2011): administer a mixed-mode, web-then-mail contact strategy so that spam filters don’t prevent potential participants from ever seeing the survey. Asking the following questions can help guide your method-of-contact decisions and help you avoid troubleshooting a low response rate mid-survey.

  1. Have I had low response rates from a similar population before?
  2. Do I have the ability to contact individuals via multiple methods?
  3. Is using the mail cost- or time-prohibitive for this particular project?
  4. What is the sample size necessary for my sample to reasonably represent the target population?
  5. Have I already made successful contact with these individuals over email?
  6. Does the survey tool I’m using (SurveyMonkey, Qualtrics, etc.) tend to be snagged by spam filters if I use its built-in invitation management features?

These are just some of the considerations that may help you avoid major spam filter issues in your forthcoming project. Spam filters may not be the only reason for a low response rate, but anything that can be done to mitigate their impact is a step toward a better response rate for your surveys.


Reference

Millar, M., & Dillman, D. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly 75, 249–269.

Blog: Using Rubrics to Demonstrate Educator Mastery in Professional Development

Posted on September 18, 2018 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Nena Bloom
Evaluation Coordinator
Center for Science Teaching and Learning, Northern Arizona University
Lori Rubino-Hare
Professional Development Coordinator
Center for Science Teaching and Learning, Northern Arizona University

We are Nena Bloom and Lori Rubino-Hare, the internal evaluator and principal investigator, respectively, of the Advanced Technological Education project Geospatial Connections Promoting Advancement to Careers and Higher Education (GEOCACHE). GEOCACHE is a professional development (PD) project that aims to enable educators to incorporate geospatial technology (GST) into their classes, to ultimately promote careers using these technologies. Below, we share how we collaborated on creating a rubric for the project’s evaluation.

One important outcome of effective PD is participants’ mastery of new knowledge and skills (Guskey, 2000; Haslam, 2010). GEOCACHE defines “mastery” as participants’ effective application of the new knowledge and skills in educator-created lesson plans.

GEOCACHE helps educators teach their content through Project Based Instruction (PBI) that integrates GST. In PBI, students collaborate and critically examine data to solve a problem or answer a question. Educators were provided 55 hours of PD, during which they experienced model lessons integrated with GST content. Educators then created lesson plans tied to the curricular goals of their courses, infusing opportunities for students to learn appropriate subject matter through the exploration of spatial data. “High-quality GST integration” was defined as opportunities for learners to collaboratively use GST to analyze and/or communicate patterns in data to describe phenomena, answer spatial questions, or propose solutions to problems.

We analyzed the educator-created lesson plans using a rubric to determine if GEOCACHE PD supported participants’ ability to effectively apply the new knowledge and skills within lessons. We believe this is a more objective indicator of the effectiveness of PD than solely using self-report measures. Rubrics, widespread methods of assessing student performance, also provide meaningful information for program evaluation (Davidson, 2004; Oakden, 2013). A rubric illustrates a clear standard and set of criteria for identifying different levels of performance quality. The objective is to understand the average skill level of participants in the program on the particular dimensions of interest. Davidson (2004) proposes that rubrics are useful in evaluation because they help make judgments transparent. In program evaluation, scores for each criterion are aggregated across all participants.
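
As a minimal illustration of that aggregation step, the R sketch below summarizes scores per criterion across all participants; the file name, column names, and the assumed 1-4 performance scale are hypothetical, not GEOCACHE’s actual rubric.

```r
# Hypothetical sketch: aggregate rubric scores across participants.
# Expected columns: participant_id, criterion, score (assumed 1-4 scale).

scores <- read.csv("lesson_plan_rubric_scores.csv")

# Mean score per rubric criterion, across all participants
aggregate(score ~ criterion, data = scores, FUN = mean)

# Share of lesson plans reaching the top performance level, per criterion
with(scores, tapply(score >= 4, criterion, mean))
```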

Practices we used to develop and utilize the rubric included the following:

  • We developed the rubric collaboratively with the program team to create a shared understanding of performance expectations.
  • We focused on aligning the criteria and expectations of the rubric with the goal of the lesson plan (i.e., to use GST to support learning goals through PBI approaches).
  • Because good rubrics existed but were not entirely aligned with our project goal, we chose to adapt existing technology integration rubrics (Britten & Cassady, 2005; Harris, Grandgenett, & Hofer, 2010) and PBI rubrics (Buck Institute for Education, 2017) to include GST use, rather than start from scratch.
  • We checked that the criteria at each level were clearly defined, to ensure that scoring would be accurate and consistent.
  • We pilot tested the rubric with several units, using several scorers, and revised accordingly.

This authentic assessment of educator learning informed the evaluation. It provided information about the knowledge and skills educators were able to master and how the PD might be improved.


References and resources

Britten, J. S., & Cassady, J. C. (2005). The Technology Integration Assessment Instrument: Understanding planned use of technology by classroom teachers. Computers in the Schools, 22(3), 49-61.

Buck Institute for Education. (2017). Project design rubric. Retrieved from http://www.bie.org/object/document/project_design_rubric

Davidson, E. J. (2004). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Corwin Press.

Harris, J., Grandgenett, N., & Hofer, M. (2010). Testing a TPACK-based technology integration assessment instrument. In C. D. Maddux, D. Gibson, & B. Dodge (Eds.), Research highlights in technology and teacher education 2010 (pp. 323-331). Chesapeake, VA: Society for Information Technology and Teacher Education.

Haslam, M. B. (2010). Teacher professional development evaluation guide. Oxford, OH: National Staff Development Council.

Oakden, J. (2013). Evaluation rubrics: How to ensure transparent and clear assessment that respects diverse lines of evidence. Melbourne, Australia: BetterEvaluation.