Candiya Mann

Senior Research Manager, Social & Economic Sciences Research Center at Washington State University

Candiya Mann is the independent evaluator for several National Science Foundation (NSF) grantees across multiple programs, including 10 Advanced Technology Education (ATE) centers and projects. She specializes in K-16 education and youth workforce issues and has conducted evaluations for clients including the US Department of Labor, Washington State Office of the Superintendent of Public Instruction, United Way, school districts, community-based organizations, and workforce development agencies. Mann served on the advisory group for the NSF ATE Evaluation Community of Practice. She is a senior research manager with the Social and Economic Sciences Research Center at Washington State University, where she has spent over 18 years.


Blog: 6 Tips for Evaluating Multisite Projects*

Posted on August 21, 2019 by Candiya Mann in Blog


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Conducting evaluations for multisite projects presents unique challenges and opportunities. For example, evaluators must take care to capture consistent data across sites, which can be difficult. On the other hand, having results from multiple sites can lead to stronger conclusions about an intervention’s impact. The following tips can help.

1. Investigate the consistency of project implementation. Just because the same guidelines have been provided to each site does not mean that they have been implemented the same way! Variations in implementation can create difficulties in collecting data and interpreting evaluation results.

2. Standardize data collection tools across sites. This minimizes confusion and yields a single dataset with information on all sites (see the sketch following this list). On the downside, it may mean limiting the data to the subset of information that is available across all sites.

3. Help the project managers at each site understand the evaluation plan. Provide a clear, comprehensive overview of the evaluation plan that includes the expectations of the managers. Simplify their roles as much as possible.

4. Be sensitive in reporting side-by-side results of the sites. Consult with project stakeholders to determine whether it is appropriate or helpful to include side-by-side comparisons of the sites’ performance.

5. Analyze the extent to which differences in outcomes are due to variations in project implementation. Variation in results across sites can provide clues to factors that facilitate or impede the achievement of certain outcomes (also illustrated in the sketch following this list).

6. Report the evaluation results back to the site managers in whatever form is most useful to them. This is an excellent opportunity to recruit the site managers as supporters of evaluation, especially if they see that the results can aid their participant recruitment and fundraising efforts.
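
As a concrete illustration of tips 2 and 5, here is a minimal sketch in Python (pandas) of pooling standardized per-site data into a single dataset and then comparing an outcome across sites and implementation variants. The file names, column names, and the “completed” outcome are hypothetical placeholders, not part of the original handout; a real multisite evaluation would substitute whatever common fields its sites actually collect.

```python
# Hypothetical sketch: pool per-site data into one dataset (tip 2)
# and compare outcomes across implementation variants (tip 5).
# File names, columns, and the "completed" outcome are illustrative only.
import pandas as pd

SITE_FILES = {
    "site_a": "site_a_participants.csv",
    "site_b": "site_b_participants.csv",
    "site_c": "site_c_participants.csv",
}

# Keep only the fields every site collects, so the pooled dataset is
# consistent (at the cost of dropping site-specific extras).
COMMON_COLUMNS = ["participant_id", "cohort", "completed", "implementation_model"]

frames = []
for site, path in SITE_FILES.items():
    df = pd.read_csv(path, usecols=COMMON_COLUMNS)
    df["site"] = site  # record provenance before pooling
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# Tip 5: look at how the outcome varies with implementation differences.
completion_by_model = (
    combined.groupby(["implementation_model", "site"])["completed"]
    .mean()
    .rename("completion_rate")
)
print(completion_by_model)
```

Restricting the merge to the shared columns up front makes the trade-off in tip 2 explicit in code: the pooled dataset is clean and comparable, but any site-specific measures are deliberately left out.
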

 

* This blog is a reprint of a conference handout from an EvaluATE workshop at the 2011 ATE PI Conference.

 

FOR MORE INFORMATION

Lawrenz, F., & Huffman, D. (2003). How can multi-site evaluations be participatory? American Journal of Evaluation, 24(4), 471–482.

Smith-Moncrieffe, D. (2009, October). Planning multi-site evaluations of model and promising programs. Paper presented at the Canadian Evaluation Society Conference, Ontario, Canada.

Webinar: Developing and Validating Survey Instruments

Posted on May 18, 2011 in Webinars

Presenter(s): Arlen Gullickson, Candiya Mann, Stephanie Evergreen, Wayne Welch
Date(s): May 18, 2011
Recording: https://vimeo.com/24031893

You know your project’s goals. And you know you need to measure your progress toward reaching them. You probably even know whether a survey questionnaire would help you measure that progress. But what sort of questions belong on a survey instrument? And how should they be worded? This webinar explains the questionnaire development process, using ATE survey work as examples. Along with the EvaluATE team, Candiya Mann will showcase her work, and discussant Wayne Welch will share his process of establishing face and content validity with a method that is practical for most ATE projects and centers. Both examples emphasize the importance of thinking from a measurement perspective to get more trustworthy data.
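
The webinar description does not spell out the validation procedure itself, but for readers wanting a concrete starting point, below is a minimal sketch of one standard quantitative check of content validity, Lawshe’s content validity ratio (CVR). This is an illustration under stated assumptions, not the method presented in the webinar: the panel ratings are invented, and CVR is only one of several ways to quantify expert agreement on survey items.

```python
# Hypothetical sketch of Lawshe's content validity ratio (CVR), one
# common quantitative check of content validity. Not the validation
# method used in the webinar; ratings below are invented.
# CVR = (n_e - N/2) / (N/2), where n_e is the number of expert
# panelists rating an item "essential" and N is the panel size.

def content_validity_ratio(essential_votes: int, panel_size: int) -> float:
    """Return Lawshe's CVR for one survey item; ranges from -1 to +1."""
    half = panel_size / 2
    return (essential_votes - half) / half

# Invented ratings: how many of 8 panelists called each item essential.
panel_size = 8
item_votes = {"item_1": 8, "item_2": 6, "item_3": 3}

for item, votes in item_votes.items():
    print(f"{item}: CVR = {content_validity_ratio(votes, panel_size):+.2f}")
```

Items with a CVR near +1 have broad expert endorsement, while items near or below zero are candidates for revision or removal before the instrument is fielded.
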

Resources:
Slide PDF
Handout PDF