Blog: Shorten the Evaluation Learning Curve: Avoid These Common Pitfalls*

Posted on September 16, 2020 in Blog

Executive Director, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This EvaluATE blog is focused on getting started with evaluation. It’s oriented to new ATE principal investigators who are getting their projects off the ground, but I think it holds some good reminders for veteran PIs as well. To shorten the evaluation learning curve, avoid these common pitfalls:

Searching for the truth about “what NSF wants from evaluation.” NSF is not prescriptive about what an ATE evaluation should or shouldn’t look like. So, if you’ve been concerned that you’ve somehow missed the one document that spells out exactly what NSF wants from an ATE evaluation—rest assured, you haven’t overlooked anything. But there is information that NSF requests from all projects in annual reports and that you are asked to report on the annual ATE survey. So it’s worthwhile to preview the Research.gov reporting template (bit.ly/nsf_prt) and the ATE annual survey questions (bit.ly/ATEsurvey16). And if you’re doing research, be sure to review the Common Guidelines for Education Research and Development – which are pretty cut-and-dried criteria for different types of research (bit.ly/cg-checklist). Most importantly, put some time into thinking about what you, as a project leader, need to learn from the evaluation. If you’re still concerned about meeting expectations, talk to your program officer.

Thinking your evaluator has all the answers. Even for veteran evaluators, every evaluation is new and has to be tailored to context. Don’t expect your evaluator to produce a detailed, actionable evaluation plan on Day 1. He or she will need to work out the details of the plan with you. And if something doesn’t seem right to you, it’s OK to ask for something different.

 Putting off dealing with the evaluation until you are less busy. “Less busy” is a mythical place and you will probably never get there. I am both an evaluator and a client of evaluation services, and even I have been guilty of paying less attention to evaluation in favor of “more urgent” matters. Here are some tips for ensuring your project’s evaluation gets the attention it needs: (a) Set a recurring conference call or meeting with your evaluator (e.g., every two to three weeks); (b) Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation matters; (c) Give someone other than the PI responsibility for attending to the evaluation—not to replace the PI’s attention, but to ensure the PI and other project members are staying on top of the evaluation and communicating regularly with the evaluator; (d) Commit to using the evaluation results in a timely way—if you do something on a recurring basis, make sure you gather feedback from those involved and use it to improve the next activity.

Assuming you will need your first evaluation report at the end of Year 1. PIs must submit their annual reports to NSF 90 days prior to the end of the current budget period. So if your grant started on September 1, your first annual report is due around June 1. And it will take some time to prepare, so you should probably start writing in early May. You’ll want to include at least some of your evaluation results, so start working with your evaluator now to figure out what information is most important to collect right now.

Veteran PIs: What tips do you have for shortening the evaluation learning curve?  Submit a blog to EvaluATE and tell your story and lessons learned for the benefit of new PIs.

*This blog is a reprint of a 2015 newsletter article.

Blog: Making the Most of Virtual Conferences: An Exercise in Evaluative Thinking

Posted on September 2, 2020 by Lyssa Becho in Blog

Research Associate, Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

We at EvaluATE affectionately call the fall “conference season.” Both the ATE PI Conference and the American Evaluation Association’s annual conference usually take place between October and November every year. This year, both conferences will be virtual events. Planning how our project will engage in this new virtual venue got me thinking: What makes a virtual conference successful for attendees? What would make a virtual conference successful for me?

I started by considering what makes an in-person conference successful, and I quickly realized that this was an exercise in evaluative thinking. The concept of evaluative thinking has been defined in a variety of ways—as a “type of reflective practice” (Baker & Bruner, 2012, p. 1), a combination of “critical thinking, creative thinking, inferential thinking, and practical thinking” (Patton, 2018, p. 21), and a “problem-solving approach” (Vo, 2013, p. 105). In this case, I challenged myself to consider what my personal evaluation criteria would be for a successful conference and what my ideal outcomes would look like.

In my reflection process, I came up with a list of key outcomes for attending a conference. Specifically, at conferences, I hope to:

  • build new relationships with peers;
  • grow relationships with existing partners;
  • learn about new trends in research and practice;
  • learn about future research opportunities (places I might be able to fill in the gaps); and
  • feel part of a community and re-energized about my work.

I realized that many of these outcomes are typically achieved through happenstance. For example, at previous conferences, most of my new relationships with peers occurred because of a hallway chat or because I sat next to someone in a session and we struck up a conversation and exchanged information. It’s unlikely these situations would occur organically in a virtual conference setting. I would need to be intentional about how I participated in a virtual conference to achieve the same outcomes.

I began to work backwards to determine what actions I could take to ensure I achieved these outcomes in a virtual conference format. In true evaluator fashion, I constructed a logic model for my virtual conference experience (shown in Figure 1). I realized I needed to identify specific activities—agreements with myself—to get the most out of the experience and have a successful virtual conference.

For example, one of my favorite parts of a conference is feeling like I am part of a larger community and becoming re-energized about my work. Being at home, it can be easy to become distracted and not fully engage with the virtual platform, potentially threatening these important outcomes. To address this, I have committed to blocking off time on my schedule during both conferences to authentically engage with the content and attendees.

How do you define a successful conference? What outcomes do you want to achieve in upcoming conferences that have gone virtual? While you don’t have to make a logic model out of your thoughts, I would challenge you to think evaluatively about upcoming conferences, asking yourself what you hope to achieve and how you can ensure that it happens.

Figure 1. Lyssa’s Logic Model to Achieve a Successful Virtual Conference

Blog: Designing Accessible Digital Evaluation Materials

Posted on August 19, 2020 by Don Glass in Blog

Developmental Evaluator

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

* This blog was originally published on AEA365 on July 23, 2020:
https://aea365.org/blog/designing-accessible-digital-evaluation-materials-by-don-glass/


Hi, I am Don Glass, a DC-based developmental evaluator, learning designer, and proud member of the AEA Disabilities and Underrepresented Populations TIG.

COVID-19 has increased our reliance on, and perhaps fast-tracked our use of, digital and online communication to serve our diverse evaluation clients and audiences. This is an opportunity to push our evaluation communication design to the next level. Just like AEA members enthusiastically embraced Stephanie Evergreen’s and Sheila Robinson’s contributions to Potent Presentations and established a flourishing Data Visualization TIG, we can now integrate inclusive design routines into our communication practice!

Being inclusive is part of the AEA mission, and for some of us a legal duty: making sure that our digital communications are barrier-free and accessible to all. This article is a quick reference guide for design considerations for digital communication like AEA365 blogs, social media, online webinars/courses, virtual conference presentations, and evaluation reports: any digital content, really, that uses text, images, and media.

The evaluation field has a solid foundation in its literature to guide inclusive evaluation thinking and design. Donna M. Mertens’ 1999 AEA Presidential Address crystallized the rationale for inclusive approaches to evaluation. In 2011, Jennifer Sulewski and June Gothberg first developed a Universal Design for Evaluation Checklist to help evaluators think systematically about the inclusive design of all aspects of their evaluation practice. The guidance in this blog focuses on:

Principle 4: Perceptible Information. The design communicates necessary information effectively to the user, regardless of ambient conditions or the user’s sensory abilities.

Social Media Accessibility: Plain language, CamelCase Hashtags, Image Descriptions, Captioning and Audio, Link Shorteners

Hot Tips

Text: Provide supports to access this primary form of content and navigate its organization.

  • Structured Text: Use headers and bulleted/numbered lists. Think about reading order.
  • Fonts and Font Size: Make text large and legible enough to easily read. Avoid serif fonts.
  • Colors and Contrast: Make sure text and background are not too similar. Consider a contrast checker tool.
  • Descriptive Hyperlinks: Embed links in text that describe the destination. Remember, links should look like links.

Images: Provide a barrier-free and purposeful use of images beyond aesthetics.

  • Alternative Text: Write a short description of an image’s content and function that can be read by a screen reader, web browser, or search engine.
  • Accessible Images: Select or design images and diagrams to enhance comprehension and communication.

Media: Provide supports to make media content accessible and searchable.

  • Closed Captioning: Make text versions of the spoken word presented in multimedia. Consider auto-captioning on YouTube.
  • Transcripts: Make a full text version of spoken word presented in multimedia. Explore searching transcripts as a way of navigating media.
  • Audio Description: A narration that describes visual-only content in media. Check out examples of Descriptive Video Service on your streaming service.


Blog: Quick Reference Guides Evaluators Can’t Live Without

Posted on August 5, 2020 by Kelly Robertson in Blog

Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

* This blog was originally published on AEA365 on May 15, 2020:
https://aea365.org/blog/quick-reference-guides-evaluators-cant-live-without-by-kelly-robertson/

My name is Kelly Robertson, and I work at The Evaluation Center at Western Michigan University and EvaluATE, the National Science Foundation–funded evaluation hub for Advanced Technological Education.

I’m a huge fan of quick reference guides. Quick reference guides are brief summaries of important content that can be used to improve practice in real time. They’re also commonly referred to as job aids or cheat sheets.

I found quick reference guides to be especially helpful when I was just learning about evaluation. For example, Thomas Guskey’s Five Critical Levels of Professional Development Evaluation helped me learn about different levels of outcomes (e.g., reaction, learning, organizational support, application of skills, and target population outcomes).

Even with 10-plus years of experience, I still turn to quick reference guides every now and then.

My colleague Lyssa Becho is also a huge fan of quick reference guides, and together we compiled a list of over 50 evaluation-related quick reference guides. The list draws on the results from a survey we conducted as part of our work at EvaluATE. It includes quick reference guides that 45 survey respondents rated as most useful for each stage of the evaluation process.

Here are some popular quick reference guides from the list:

  • Evaluation Planning: Patton’s Evaluation Flash Cards introduce core evaluation concepts such as evaluation questions, standards, and reporting in an easily accessible format.
  • Evaluation Design: Wingate’s Evaluation Data Matrix Template helps evaluators organize information about evaluation indicators, data collection sources, analysis, and interpretation.
  • Data Collection: Wingate and Schroeter’s Evaluation Questions Checklist for Program Evaluation provides criteria to help evaluators understand what constitutes high-quality evaluation questions.
  • Data Analysis: Hutchinson’s You’re Invited to a Data Party! explains how to engage stakeholders in collective data analysis.
  • Evaluation Reporting: Evergreen and Emery’s Data Visualization Checklist is a guide for the development of high-impact data visualizations. Topics covered include text, arrangement, color, and lines.

If you find that any helpful evaluation-related quick reference guides are missing from the full collection, please contact kelly.robertson@wmich.edu.

Blog: Shift to Remote Online Work: Assets to Consider

Posted on July 22, 2020 in Blog

Principal Partner, Education Design, INC

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m the principal partner of Education Design in Boston, focusing on STEM program evaluation. I first engaged in online instruction and design in 1994 with CU-SeeMe, a very early desktop videoconferencing app (without audio… that came in 1995!). While I’m certainly no expert in online learning, I have watched this shift toward virtual learning, now newly accelerated, unfold over several decades.

During 2020 we’ve seen nearly all of our personal and professional meetings converted to online interactions. In education this has been both challenging and illuminating. For decades, many in our field have planned and designed for the benefits online and digital learning might offer, often with predictive optimism. Clearly the future we anticipated is upon us.

Here, I want to identify some of the key assets and benefits of online and remote learning. I don’t intend to diminish the value of in-person human contact, but rather to help projects thrive in the current environment.

More Embrace than Rejection of Virtual

In nearly all our STEM learning projects, I’ve noticed far more embrace than rejection of virtual learning and socializing spaces.

In one project with partner colleges located in different states, online meetings and remote professional training were part of the original design. Funded in early 2020, the work has begun seamlessly, pandemic notwithstanding, owing to the colleges’ commitment to remote sharing and learning. These partners, leaders from a previous ATE project, will now become mentors for technical college partners, and that work will most likely be done remotely as well.

While forced to change approaches and learning modes, these partners haven’t just accepted remote interactions. Rather than focus on what is missing (site visits will not occur at this time), they’re actively seeking to understand the benefits and assets of connecting remotely.


Opportunities of the Online Context

  1. Videoconferencing presents some useful benefits: facial communication enables trust and human contact. Conversations flow more easily. Chat text boxes provide a platform for comments and freeform notes, and most platforms allow recording of sessions for later review. In larger meetings, group breakout functionality helps facilitate smaller sub-sessions.
  2. Online, sharing and retaining documents and artifacts becomes part of the conversation without depending on the in-person promise to “email it later.”
  3. There is an inherent scalability to online models, whether for instructional activities, such as complete courses or teaching examples, or for materials.
  4. It’s part of tomorrow’s landscape, pandemic or not. Online working, learning, and sharing has leapt forward out of necessity. It’s highly likely that when we return to a post-virus environment, many of the online shifts that have shown value and efficiency will remain in schools and the workforce, leading toward newer hybrid models. If you’re part of the development now, you’re better positioned for those changes.

Tip

As an evaluator, my single most helpful action has been to attend more meetings and events than originally planned, engaging with the team more, building the trust necessary to collect quality data. Your Zoom face is your presence.

Less Change than You’d Think

In most projects, some recalibration has been necessary, but you’d be surprised how few changes are required to continue your project work successfully in this new context; often a simple change of perspective is enough.

Blog: Data Cleaning Tips in R*

Posted on July 8, 2020 by David Keyes in Blog

Founder, R for the Rest of Us

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I recently came across a set of data cleaning tips in Excel from EvaluATE, which provides support for people looking to improve their evaluation practice.


Screenshot of the Excel Data Cleaning Tips

As I looked through the tips, I realized that I could show how to do each of the five tips listed in the document in R. Many people come to R from Excel, so having a set of Excel-to-R equivalents (also see this post on a similar topic) is helpful.

The tips are not intended to be comprehensive, but they do show some common things that people do when cleaning messy data. I did a live stream recently where I took each tip listed in the document and showed its R equivalent.

As I mention at the end of the video, while you can certainly do data cleaning in Excel, switching to R enables you to make your work reproducible. Say you have some surveys that need cleaning today. You write your code and save it. Then, when you get 10 new surveys next week, you can simply rerun your code, saving you countless Excel points and clicks.
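As a minimal written sketch of that workflow (the script name, function, and column name below are hypothetical, not taken from the video), the cleaning steps can live in one saved script that is simply re-run whenever new data arrives:

    # clean_surveys.R (hypothetical script): all cleaning steps in one place
    library(dplyr)
    library(readr)

    clean_surveys <- function(path) {
      read_csv(path) %>%
        distinct() %>%                  # drop exact duplicate rows
        filter(!is.na(respondent_id))   # drop blank records (hypothetical column)
    }

    # Next week, run the same code on the new file instead of re-clicking in Excel:
    # cleaned <- clean_surveys("surveys_week2.csv")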

You can watch the full video at the very bottom or go to each tip by using the videos immediately below. I hope it’s helpful in giving an overview of data cleaning in R!

Tip #1: Identify all cells that contain a specific word or (short) phrase in a column with open-ended text
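The video walks through the details; as a rough written sketch (not the exact code from the video), one way to do this uses dplyr and stringr. The data frame and column names here are hypothetical:

    library(dplyr)
    library(stringr)

    # Hypothetical open-ended responses
    surveys <- data.frame(
      id       = 1:4,
      comments = c("Great workshop", "Too long", "great content overall", "Enjoyed the labs")
    )

    # Keep rows whose comments mention "great", ignoring case
    surveys %>%
      filter(str_detect(comments, regex("great", ignore_case = TRUE)))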

Tip #2: Identify and remove duplicate data
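Again as a sketch rather than the video’s exact code, duplicate rows can be inspected with base R’s duplicated() and dropped with dplyr’s distinct(); the example data is made up:

    library(dplyr)

    # Hypothetical data in which respondent 2 was entered twice
    responses <- data.frame(
      id    = c(1, 2, 2, 3),
      score = c(10, 8, 8, 9)
    )

    responses[duplicated(responses), ]   # inspect the duplicated rows
    responses %>% distinct()             # keep only unique rows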

Tip #3: Identify the outliers within a data set
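One common convention, shown here as a sketch rather than the video’s method, is to flag values more than 1.5 times the interquartile range beyond the first or third quartile; the scores are hypothetical:

    library(dplyr)

    # Hypothetical scores with one extreme value
    scores <- data.frame(score = c(12, 15, 14, 13, 16, 95))

    q1  <- quantile(scores$score, 0.25)
    q3  <- quantile(scores$score, 0.75)
    iqr <- q3 - q1

    # Values far outside the middle 50 percent are flagged as potential outliers
    scores %>%
      filter(score < q1 - 1.5 * iqr | score > q3 + 1.5 * iqr)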

Tip #4: Separate data from a single column into two or more columns
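tidyr’s separate() handles this kind of split; here is a minimal sketch with a hypothetical full_name column (not data from the video):

    library(dplyr)
    library(tidyr)

    # Hypothetical column combining first and last names
    roster <- data.frame(full_name = c("Ada Lovelace", "Grace Hopper"))

    # Split one column into two at the space
    roster %>%
      separate(full_name, into = c("first_name", "last_name"), sep = " ")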

Tip #5: Categorize data in a column, such as class assignments or subject groups
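dplyr’s case_when() is one way to assign categories; the cut points and data below are hypothetical examples, not taken from the video:

    library(dplyr)

    # Hypothetical scores to bin into groups
    grades <- data.frame(score = c(95, 82, 71, 58))

    grades %>%
      mutate(group = case_when(
        score >= 90 ~ "A",
        score >= 80 ~ "B",
        score >= 70 ~ "C",
        TRUE        ~ "Below C"
      ))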

Full Video

*This is a repost of David Keyes’ blog post Data Cleaning Tips in R.

Blog: What I’ve Learned about Evaluation: Lessons from the Field

Posted on June 21, 2020 in Blog

Coordinator in Educational Leadership, San Francisco State University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I’m completing my second year as the external evaluator of a three-year ATE project. As a first-time evaluator, I have to confess that I’ve had a lot to learn.

The first surprise was that, in spite of my best intentions, my evaluation process seems always a bit messy. A grant proposal is just that: a proposed plan. It is an idealized vision of what may come. Therefore, the evaluation plan based on that vision is also idealized. Over time, I have had to reconsider my evaluation as grant activities and circumstances evolved—what data is to be collected, how it is to be collected, or whether that data is to be collected at all.

I also thought that my evaluations would somehow reveal something startling to my project team. In reality, my evaluations have served as a mirror to them, acknowledging what they have done and mostly confirming what they already suspect to be true. In a few instances, the manner in which I’ve analyzed data has allowed the team to challenge some assumptions made along the way. In general, though, my work is less revelatory than I had expected.

Similarly, I anticipated my role as a data analyst would be more important. However, this project was designed to use iterative continuous improvement, so the team has met frequently to analyze and consider anecdotal data and impromptu surveys. This more immediate feedback on project activities was regularly used to guide changes. So while my planned evaluation activities and formal data analysis have been important, their contribution has been less significant than I had expected.

Instead, I’ve added the greatest value to the team by serving as a critical colleague. Benefiting from distance from the day-to-day work, I can offer a more objective, outsider’s view of the project activities. By doing so, I’m able to help a talented, innovative, and ambitious team consider their options and determine whether investing in certain activities promotes the goals of the grant or moves the team tangentially. This, of course, is critical for a small grant on a small budget.

In my short time in this work, I’ve seen that being brought into the project from the beginning, and encouraged to offer guidance along the way, has allowed me both to assess progress toward the grant goals and to observe and document how individuals work together effectively to achieve them. This insight highlights another important service evaluators can offer: telling the stories of successful teams to their stakeholders.

As evaluators, we are accountable to our project teams and also to their funders. It is in the funders’ interest to learn how teams work effectively to achieve results. I had not expected it, but I now see that it’s in the teams’ interest for the external evaluators to understand their successful collaboration and bring it to light.

Blog: Improving the Quality of Evaluation Data from Participants

Posted on June 10, 2020 in Blog

Professor of Educational Research and Evaluation, Tennessee Tech University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


I have had experience evaluating a number of ATE projects, all of them collaborative projects among several four-year and two-year community colleges. One of the projects’ overarching goals is to provide training to college instructors as well as elementary-, middle-, and high-school teachers, to advance their skills in additive manufacturing and/or smart manufacturing.

The training is done via the use of train-the-trainer studios (TTS). TTSs provide novel hands-on learning experiences to program participants. As with any program, the evaluation of such projects needs to be informed by rich data to capture participants’ entire experience, including the knowledge they gain.

Here’s one lesson I’ve learned from evaluating these projects: Participants’ perception of their value in the project contributes crucially to the quality of data collected.

As the evaluator, one can make participants feel that the data they are being asked to provide (regarding technical knowledge gained, their application of it, and perceptions about all aspects of the training) will be beneficial to the overall program and to them directly or indirectly.

If they feel that their importance is minimal, and that the information they provide will not matter, they will provide the barest amount of information (regardless of the method of data collection employed). If they understand the importance of their participation, they’re more likely to provide rich data.

How can you make them feel valued?

Establish good rapport with each of the participants, particularly if the group(s) is(are) of reasonable size. Make sure to interact informally with each participant throughout the training workshop(s). Inquire about their professional work, and ask them about supports that they might need when they return to their workplace.

The responses to the open-ended questions on most of my workshop evaluations have been very rich and detailed, much more so than those from participants to whom I administered the survey remotely, without ever meeting. Program participants want to connect to a real person, not a remote evaluator. In the event that in-person connections are not possible, explore other innovative ways of establishing rapport with individual participants, before and during the program.

How can you improve the quality of data they will provide?

 Sell the evaluation. Make it clear how the evaluation findings will be used and how the results will benefit the participants and their constituents specifically, directly or indirectly.

 Share success stories. During the training workshops that I have been evaluating, I’ve shared some previous success stories with participants in order to show them what they are capable of accomplishing as well.

The time and energy you spend building these connections with participants will result in high-quality evaluation data, ultimately helping the program serve participants better.

Blog: Evaluating Critical Thinking Skills

Posted on May 27, 2020 in Blog

Professor of Sociology and Co-Director of the Center for Assessment and Improvement of Learning, Tennessee Technological University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Like many of you, I wear multiple professional hats. Critical thinking skills are at the nexus of all my roles. The importance of improving critical thinking transcends disciplines, even though the contexts and applications vary. As a sociologist, I see how the deficit of critical thinking skills has a negative impact on society. As an evaluator, I find that these skills are frequent targets for NSF projects across disciplines.

Identifying important skills and implementing strategies to improve them is only one part of a grant proposal. An equally challenging issue is finding appropriate assessments.

Over the years, I have learned some useful tips for selecting an instrument that best complements your evaluation needs. You should select an assessment that:

    1. Aligns with the skills that are important to your project.
    2. Is transparent about both the questions and the rubric for assessing those questions.
    3. Provides insight and/or training as to how you can improve the skills.
    4. Provides flexible comparison groups for you (e.g., pre and post, or national user norms).
    5. Provides reports that are easy to read and use.
    6. Demonstrates validity, reliability, and cultural fairness.

When we struggled to find assessment options that met these standards, we developed and refined our own: the Critical-thinking Assessment Test (CAT). If you are seeking to improve students’ critical thinking skills, you may want to consider this instrument.

The Critical-thinking Assessment Test (CAT)

This NSF-funded instrument is the product of 20 years’ extensive development, testing, and refinement with faculty and students from over 350 institutions and over 40 NSF projects. One innovation of this assessment is its integration of short-answer essay questions based on real-world situations. It provides quantitative and qualitative data on the skills that faculty believe are most important for their students to have 10 years after graduating.

Skills Assessed by the CAT:

Evaluating Information

    • Separate factual information from inferences.
    • Interpret numerical relationships in graphs.
    • Understand the limitations of correlational data.
    • Evaluate evidence and identify inappropriate conclusions.

Creative Thinking

    • Identify alternative interpretations for data and observations.
    • Identify new information that might support or contradict a hypothesis.
    • Explain how new information can change a problem.

Learning and Problem Solving

    • Separate relevant from irrelevant information.
    • Integrate information to solve problems.
    • Learn and apply new information.
    • Use mathematical skills to solve real-world problems.

Communication

    • Communicate ideas effectively.

Our team truly enjoys working with evaluators and PIs to help them assess these skills and provide evidence of their success. Some NSF projects and courses have made gains in critical thinking equivalent to those gained in an entire four-year college experience. Our partner institutions have experienced positive outcomes, growth, and learning from working with the CAT.

You can find more information about the CAT here, or you can contact me with any questions you have.

 

References:

Haynes, A., Lisic, E., Goltz, M., Stein, B., & Harris, K. (2016). Moving beyond assessment to improving students’ critical thinking skills: A model for implementing change. Journal of the Scholarship of Teaching and Learning, 16(4), 44–61.

Stein, B., & Haynes, A. (2011). Engaging faculty in the assessment and improvement of students’ critical thinking using the CAT. Change: The Magazine of Higher Learning, 43, 44–49.

Blog: Building Capacity for High-Quality Data Collection

Posted on May 13, 2020 in Blog

Director of Evaluation, Thomas P. Miller & Associates, LLC 

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

As I, like everyone else, am adjusting to working at home and practicing social distancing, I have been thinking about how to conduct my evaluation projects remotely. One thing that’s struck me as I’ve been retooling evaluation plans and data collection timelines is the need for even more evaluation capacity building around high-quality data collection for our clients. We will continue to rely on our clients to collect program data, and now that they’re working remotely too, a refresher on how to collect data well feels timely.  

Below are some tips and tricks for increasing your clients’ capacity to collect their own high-quality data for use in evaluation and informed decision making.

Identify who will need to collect the data.  

Especially with multiple-site programs or programs with multiple collectors, identifying who will be responsible for data collection and ensuring that all data collectors use the same tools are key to collecting consistent data across the program.

Determine what is going to be collected.  

Examine your tool. Consider the length of the tool, the types of data being requested, and the language used in the tool itself. When creating a tool that will be used by others, be certain that your tool will yield the data that you need and will make sense to those who will be using it. Test the tool with a small group of your data collectors, if possible, before full deployment.  

Make sure data collectors know why the data is being collected.  

When those collecting data understand how the data will be used, they’re more likely to be invested in the process and more likely to collect and report their data carefully. When you emphasize the crucial role that stakeholders play in collecting data, they see the value in the time they are spending using your tools. 

Train data collectors on how to use your data collection tools.  

Walking data collectors through the step-by-step process of using your data collection tool, even if the tool is a basic intake form, will ensure that all collectors use the tool in the same way. It will also ensure they have had a chance to walk through the best way to use the tool before they actually need to implement it. Provide written instructions, too, so that they can refer to them in the future.  

Determine an appropriate schedule for when data will be reported.  

To ensure that your data reporting schedule is not overly burdensome, consider the time commitment that the data collection may entail, as well as what else the collectors have on their plates.  

 Conduct regular quality checks of what data is collected.  

Checking the data regularly allows you to employ a quality control process and promptly identify when data collectors are having issues. Catching these errors quickly will allow for easier course correction.