Archive: data collection

Blog: Project Data for Evaluation: Google Groups Project Team Feedback

Posted on February 11, 2016 in Blog

President and CEO, Censeo Group

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

At Censeo Group, a program evaluation firm in northeast Ohio, we evaluate a number of STEM projects and often face the challenge of collecting valid and reliable data about the impact of curriculum implementation: what implementation looks like, students’ perceptions of the program, project leaders’ comfort with lessons, and the extent to which students find project activities engaging and beneficial.

We use various methods to gather curriculum implementation data. Observations offer a glimpse into how faculty deliver new curriculum materials and how students interact and react to those materials, but they are time-intensive and require clear observation goals and tools. Feedback surveys give students and staff the opportunity to provide responses that support improvement or a summative analysis of the implementation, but not everyone responds, and some responses may be superficial. During a recent project, we were able to draw on an ongoing, rich, genuine, and helpful source of project information for the purpose of evaluation.

Google Groups Blog

Project leaders created a Google Group and invited all project staff and the evaluation team to join with the following message:

“Welcome to our Google Group! This will be a format for sharing updates from interventions and our sites each week. Thanks for joining in our discussion!”

The team chose Google Groups because everyone was comfortable with the environment and because it is free, easy to use, and easy to access.

Organizing the Posts

Project leaders posted a prompt each week, asking staff to “Post experiences from Week X below.” This chronological organization kept each week’s feedback clustered. However, a different organizing principle could be used, such as curriculum unit or school.

In the case of this Google Group, the simple prompt resonated well with project staff, who wrote descriptive and reflective entries. Graduate students, who were delivering a new curriculum to high school students, offered colleagues who would be teaching the content later in the week recommendations about how to organize instruction, engage students, manage technology, or address questions that had come up during their lessons. Graduate students also referred to each other’s posts, indicating that this interactive method of project communication was useful to them as they worked in the schools, for example, in organizing materials or modifying lessons based on available time or student interest.

Capturing and Analyzing the Data 

The evaluation team used NVIVO’s NCapture, a Web browser add-on for the NVIVO qualitative data analysis software, to import the blog posts quickly into the software for analysis. Once in NVIVO, the team coded the data to analyze the successes and challenges of using the new curriculum in the high schools.

Genuine and Ongoing Data

The project team is now implementing the curriculum for the second time with a new group of students. Staff members are posting weekly feedback about this second implementation. This ongoing use of the Google Group blog will allow the evaluation team to analyze and compare implementation by semester (Fall 2015 versus Spring 2016), by staff type (reveal changes in graduate students’ skills and experience), by school, and other relevant categories.

From a strictly data management perspective, a weekly survey of project staff using a tool such as Google Forms or an online survey system, from which data could be transferred directly into a spreadsheet, likely would have been easier to manage and analyze. However, the richness of the data that the Google Groups entries generated was well worth the extra time required to capture and upload each post. Rather than giving staff an added “evaluation” activity that was removed from the work of the project, and to which not all staff would likely have responded as enthusiastically, the blog posts gave evaluation staff a glimpse into real-time, genuine staff communication and classroom implementation challenges and successes.

The ongoing feedback about students’ reactions to specific activities supported project implementation by helping PIs understand which materials needed to be enhanced to support students of different skill levels as the curriculum was being delivered. The blog posts also provided insights into the graduate students’ comfort with the curriculum materials and highlighted the need for additional training for them about specific STEM careers. The blog allowed PIs to make changes quickly during the semester and provided the evaluation team with information about how the curriculum was being implemented and how changes affected the project over the course of the semester.

You can find additional information about NVIVO at http://www.qsrinternational.com/product. The site includes training resources and videos.

You can learn how to create and use a Google Group at the Google Groups Help Center:
https://support.google.com/groups/?hl=en#topic=9216

Newsletter: Data Collection Planning Matrix

Posted on July 1, 2015 in Newsletter

The part of your proposal’s evaluation plan that reviewers will probably scrutinize most closely is the data collection plan. Given that the evaluation section of a proposal is typically just 1-2 pages, you have minimal space to communicate a clear plan for gathering evidence of your project’s quality and impact. An efficient way to convey this information is in a matrix format. To help with this task, we’ve created a Data Collection Planning Matrix, available at bit.ly/data-matrix.

This tool prompts the user to specify the evaluation questions that will serve as the foundation for the evaluation; what indicators[1] will be used to answer each evaluation question; how data for each indicator will be collected, from what sources, by whom, and when; and how the data will be analyzed. (The document includes definitions for each of these components to support shared understanding among members of the proposal development team.) Including details about data collection in your proposal shows reviewers that you have been thoughtful and strategic in determining how you will build a body of evidence about the effectiveness and quality of your NSF-funded work. The value of putting this information in a matrix format is that it ensures you have a clear plan for gathering data that will enable you to fully address all the evaluation questions and, conversely, that all the data you plan to collect will serve a specific purpose.
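
To make the matrix format concrete, here is a sketch of what a single row might contain, expressed as a simple Python data structure. The evaluation question, indicator, and other entries below are hypothetical illustrations, not content from the template itself:

    # One hypothetical row of a data collection planning matrix.
    # All values are illustrative; the actual template defines its
    # own components and layout.
    matrix_row = {
        "evaluation_question": "To what extent did professional development "
                               "improve instructors' delivery of the new curriculum?",
        "indicator": "Percentage of instructors rating their preparedness "
                     "as 'good' or 'excellent'",
        "collection_method": "Post-workshop survey",
        "data_source": "Participating instructors",
        "collected_by": "External evaluator",
        "when": "Within two weeks of each workshop",
        "analysis": "Descriptive statistics, compared against baseline responses",
    }

    # Reviewing each completed row helps confirm that every planned
    # data point serves a specific purpose.
    for component, entry in matrix_row.items():
        print(f"{component}: {entry}")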

A good rule of thumb is to develop at least one overarching evaluation question for each main element of a project logic model (i.e., activities, outputs, and short-, mid-, and long-term outcomes). Although not required for ATE program proposals, logic models are an efficient way to convey how your project’s activities and products will lead to intended outcomes. Whether or not you use a logic model, the evaluation’s data collection plan should align clearly with your project’s activities and goals. If you are interested in developing a logic model for your project and want to learn more, see our ATE Logic Model Template at bit.ly/ate-logic.

If you have questions about the data collection planning matrix or logic model template, or suggestions for improving them, let us know: email us at info@evalu-ate.org.

[1] For more on indicators and how to select ones that will serve your evaluation well, see Goldie MacDonald’s checklist, Criteria for Selection of High-Performing Indicators, available at bit.ly/indicator-eval.

Blog: Still Photography in Evaluation

Posted on April 1, 2015 in Blog

Consultant/Evaluator, TEMPlaTe Educational Consulting

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I have used my photographic skills in my work as principal investigator for my ATE projects and more recently as external evaluator for other ATE projects and centers. If you have attended the annual ATE PI conference or the HI-TEC conference, you may have seen some of the photo books that I have produced for my clients at their showcase booths.

Photography is an important part of special events, e.g., weddings, birthdays, anniversaries, and the births of new family members. Why is photography so important at these times? Couldn’t we just write a summary of what happened and compile a list of who participated, and in the case of weddings, save a lot of money? Of course not: photographic images are one of the best ways to help you and others remember events, relive them, and tell a story. Your project activities may not rival these family events in importance, but it is still valuable to document them through still photographs and/or video in order to tell your story.

Storytelling in evaluation has been discussed in the literature. An example is Richard Krueger’s chapter, “Using Stories in Evaluation,” in the Handbook of Practical Program Evaluation (3rd ed.), edited by J. Wholey, H. Hatry, and K. Newcomer. What is missing from the literature is how to use photography to tell stories in evaluation, not just to capture images for marketing purposes or to embellish an evaluation report. This is an area of evaluation practice that needs development.

Taking photos during the event is the first step. I have found that some pre-event planning helps. Looking over the program or agenda will help you identify where you will want to be at any given time, especially if multiple sessions are happening concurrently. I also look for activities where participants will be doing something (action shots) rather than just listening to a speaker and viewing PowerPoint slides (passive shots), although a few photos of the latter might be useful. Then I develop a “shoot sheet,” a list of the types of images I want to capture.

In an AEA365 blog post (http://aea365.org/blog/sherry-boyce-on-using-photos-in-evaluation-reports/), Sherry Boyce listed some questions to keep in mind when thinking about the types of photos you will need to tell your evaluation story:

  • How will the photos help answer the evaluation questions for the project?
  • How will the photos help tell your evaluation story?
  • Will the photos be representative of the experiences of many participants?
  • Do the photos illustrate a participant’s engagement?
  • Do the photos illustrate “mastery”?

The next step is to arrange the photos to tell your story about the event, whether a conference, workshop, or celebration. Using my computer, I arrange the photos into a photo book that can be uploaded to one of many sites for printing. Here is a photo book I produced for the Southwest Center for Microsystems Education (SCME), used in a display at a recent Micro Nano Tech Conference.

[Photo: MNT1 photo book]

Photo books I have made for my ATE clients have been used by PIs to promote and advocate for their project or center. People feel comfortable perusing a photo book, and photo books require no equipment for viewing and are easily transported. They can be effective conversation starters.

Editor’s Note: Dave informed us that he works almost exclusively in iPhoto, since he is a MacBook Pro user. He uses Apple’s printing services, which are contracted out. Dave also told us about some additional photo hosting/photo book software available on the web, although he did not vouch for their quality.

Newsletter: Data Management Plan (DMP)

Posted on July 1, 2013 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

DMPs are not evaluation plans, but they should address how evaluation data will be handled and possibly shared.

DMPs are required for all NSF proposals and are uploaded as a Supplementary Document in the FastLane.gov proposal submission system. They can be up to two pages long and should describe:

  • the kinds of data you will gather
  • how you’ll format the data and metadata (metadata are documentation about what your primary data are)
  • what policies will govern access to and use of your data by others
  • how you’ll protect privacy, confidentiality, security, and intellectual property
  • who would be interested in accessing your data and how they might use them
  • your plans for archiving the data and preserving access to them


Newsletter: Connecting the Dots between Data and Conclusions

Posted on April 1, 2013 in Newsletter

Director of Research, The Evaluation Center at Western Michigan University

Data don’t speak for themselves. But the social and educational research traditions within which many evaluators have been trained offer little in the way of tools to support the task of translating data into meaningful, evaluative conclusions in transparent and justifiable ways (see Jane Davidson’s article). However, we can draw on what educators already do when they develop and use rubrics for grading student writing, presentations, and other assessment tasks.

Rubrics can be used in similar ways to aid in the interpretation of project evaluation results. Rubrics can be developed for individual indicators, such as the number of women in a degree program or the percentage of participants expressing satisfaction with a professional development workshop. Or, a holistic rubric can be created to assess larger aspects of a project that are impractical to parse into distinct data points. Rubrics are a means for increasing transparency in terms of how conclusions are generated from data.

For example, if a project claimed that it would increase enrollment of students from underrepresented minority (URM) groups, an important variable would be the percentage increase in URM enrollment. The evaluator could engage project stakeholders in developing a rubric to interpret the data for this variable, in consultation with secondary sources such as the research literature and/or national data. When the results are in, the evaluator can refer to the rubric to determine the degree to which the project was successful on this dimension.

To learn more about how to connect the dots between data and conclusions, see the recording, handout, and slides from EvaluATE’s March webinar at evalu-ate.org/events/march_2013/.
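
As a sketch of how such a rubric might work in practice, consider the hypothetical Python example below for the URM enrollment variable. The thresholds and rating labels are invented for illustration; in a real evaluation they would be set with stakeholders and grounded in the research literature or national data.

    # A hypothetical rubric for interpreting one indicator:
    # percentage-point increase in URM enrollment. Thresholds are
    # illustrative only; a real rubric would be developed with
    # project stakeholders and informed by secondary sources.
    RUBRIC = [
        (10.0, "Excellent: increase far exceeds the project's target"),
        (5.0, "Good: increase meets the project's target"),
        (1.0, "Fair: some increase, but below the project's target"),
        (0.0, "Poor: no meaningful increase"),
    ]

    def interpret(increase_pct_points: float) -> str:
        """Map an observed increase onto an evaluative conclusion."""
        for threshold, conclusion in RUBRIC:
            if increase_pct_points >= threshold:
                return conclusion
        return "Poor: enrollment declined"

    print(interpret(6.5))  # Good: increase meets the project's target

Writing the rubric down in this explicit form is what makes the interpretation transparent: anyone can see exactly how a data point was turned into an evaluative conclusion.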