We EvaluATE - General Issues

Blog: Attending to culture, diversity, and equity in STEM program evaluation (Part 2)

Posted on May 9, 2018 in Blog

Assistant Professor, Department of Educational Research Methodology, University of North Carolina at Greensboro

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

In my previous post, I gave an overview of two strategies you can use to inform yourself about the theoretical aspect of engagement with culture, diversity, and equity in evaluation. I now present two practical strategies, which I believe should follow the theoretical strategies presented in my previous post.

Strategy three: Engage with related sensitive topics informally

To begin to feel comfortable with these topics, engage with these issues during interactions with your evaluation team members, clients, or other stakeholders. Evaluators should acknowledge differing stakeholder opinions, while also attempting to assist stakeholders in surfacing their own values, prejudices, and subjectivities (Greene, Boyce, & Ahn, 2011).

To do this, bring up issues of race, power, inequity, diversity, and culture for dialogue in meetings, emails, and conversations (Boyce, 2017). Call out and discuss micro-aggressions (Sue, 2010) and practice acts of micro-validation (Packard, Gagnon, LaBelle, Jeffers, & Lynn, 2011). For example, when meeting with clients, you might ask them to discuss how they plan to ensure not just diversity but also inclusivity within their program. You can also ask them to chart out program goals through a logic model and to consider whether underrepresented participants might experience the program differently than majority participants. Ask clients whether they have considered cultural sensitivity training for program managers and/or participants.

Strategy four: Attend to issues of culture, equity, and diversity formally

Numerous scholars have addressed the implications of cultural responsiveness in practice (Frierson, Hood, Hughes, & Thomas, 2010; Hood, Hopson, & Kirkhart, 2015), with some encouraging contemplation of threats to, as well as evidence for, multicultural validity by examining relational, consequential, theoretical, experiential, and methodological justificatory perspectives (Kirkhart, 2005, 2010). I believe the ultimate goal is to be able to attend to culture and context in all formal aspects of research and evaluation. It is especially important to take a strengths-based, anti-deficit approach (Chun & Evans, 2009) and to focus on intersectionality in research (Collins, 2000).

To do this, you can begin with the framing of the program goals. Many programs aim to give underrepresented minorities in STEM the skills to survive in the field. This perspective assumes that something is inherently wrong with these students. Instead, consider rewording evaluation questions to examine the culture of the department or program and to explore why underrepresented groups do not thrive at rates at least on par with their representation in the population. Further, evaluators can attempt to include these topics in evaluation questions, develop culturally commensurate data collection instruments, and be sensitive to these issues during data collection, analysis, and reporting. Challenge yourself to treat this attention as more than the inclusion of symbolic and politically correct buzzwords (Boyce & Chouinard, 2017), but as a true infusion of these aspects into your practice. For example, I always include an evaluation question about diversity, equity, and culture in my evaluation plans.

These two blog posts are really just the tip of the iceberg. I hope you find these strategies useful as you begin to engage with culture, equity, and diversity in your work. As I previously noted, I have included citations throughout so that you can read more about these important concepts. In a recently published article, my colleague Jill Anne Chouinard and I discuss how we trained evaluators to work through these strategies in a Culturally Responsive Approaches to Research and Evaluation course (Boyce & Chouinard, 2017).

Blog: Attending to culture, diversity, and equity in STEM program evaluation (Part 1)

Posted on May 1, 2018 in Blog

Assistant Professor, Department of Educational Research Methodology, University of North Carolina at Greensboro

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The conversation, both practical and theoretical, surrounding culture, diversity, and equity in evaluation has increased in recent years. As many STEM education programs aim to broaden participation of women, ethnic minority groups, and persons with disabilities, attention to culture, diversity, and equity is paramount. In two blog posts, I will provide a brief overview of four strategies to meaningfully and respectfully engage with these important topics. In this first blog, I will focus on two strategies that help you learn more about these issues but that are theoretical rather than directly tied to evaluation practice. I should note that I purposely have included a number of citations so that you may read further about these topics.

Strategy one: Recognize social inquiry is a cultural product

Social science knowledge about minority populations, constructed from narrow worldviews, has demeaned their characteristics, distorted interpretations of their conditions and potential, and remained limited in its capacity to inform efforts to improve the life chances of historically disadvantaged populations (Ladson-Billings, 2000). Begin by educating yourself about the roles that communicentric bias—the tendency to make one's own community, often the majority class, the center of the conceptual frame that constrains all thought (Gordon, Miller, & Rollock, 1990)—and individual, institutional, societal, and civilizational racism play in education and the social sciences (Scheurich & Young, 2002). Seek to understand the culture, historical perspectives, power, oppressions, and privilege at work in each new context (Greene, 2005; Pon, 2009).

To do this, you can read and discuss books, articles, and chapters related to epistemologies—theories of knowledge—of difference, racialized discourses, and critiques about the nature of social inquiry. Some excellent examples include Stamped from the Beginning by Ibram X. Kendi, The Shape of the River by William G. Bowen and Derek Bok, and Race Matters by Cornel West. Each of these books is illuminating and a must-read as you begin or continue your journey to better understand race and privilege in America. Perhaps start a book club so that you can process these ideas with colleagues and friends.

Strategy two: Locate your own values, prejudices, and identities

The lens through which we view the world influences all evaluation processes, from design to implementation and interpretation (Milner, 2007; Symonette, 2015). In order to think critically about issues of culture, power, equity, class, race, and diversity, evaluators should understand their own personal and cultural values (Symonette, 2004). As Peshkin (1988) has noted, the practice of locating oneself can result in a better understanding of one's own subjectivities. In my own work, I always attempt to acknowledge the role my education, gender, class, and ethnicity will play in my work.

To do this, you can reflect on your own educational background, personal identities, experiences, values, prejudices, predispositions, beliefs, and intuition. Focus on your own social identity, the identities of others, whether you belong to any groups with power and privilege, and how your educational background and identities shape your beliefs, your role as an evaluator, and your experiences. To unearth some of your deeper, underlying values, you might consider participating in a privilege walk exercise and reflecting on your responses to current events.

These two strategies are just the beginning. In my second blog post, I will focus on engaging with these topics informally and formally within your evaluation practice.

Blog: Evaluator, Researcher, Both?

Posted on June 21, 2017 in Blog

Professor, College of William & Mary

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Having served as both a project evaluator and a project researcher, I have seen how critical it is to have conversations about roles at the outset of funded projects. Early and open conversations can help avoid confusion, prevent missed opportunities to collect critical data, and highlight how responsibilities differ for each project team role. The blurring over time of strict differences between evaluator and researcher requires project teams, evaluators, and researchers to create new definitions for project roles, to understand the scope of responsibility for each role, and to build data systems that allow information to be shared across roles.

Evaluation serves a central role in funded research projects. The lines between the role of the evaluator and that of the researcher can blur, however, because many researchers also conduct evaluations. Scriven (2003/2004) saw the role of evaluation as a means to determine "the merit, worth, or value of things" (para. 1), whereas social science research is "restricted to empirical (rather than evaluative) research, and bases its conclusion only on factual results—that is, observed, measured, or calculated data" (para. 2). Consider, too, how Powell (2006) posited that "evaluation research can be defined as a type of study that uses standard social research methods for evaluative purposes" (p. 102). It is easy to see how confusion arises.

Taking a step back can shed light on the differences between these roles and the ways they are now being redefined. The researcher brings a different perspective to a project, as a goal of research is the production of knowledge, whereas the role of the external evaluator is to provide an "independent" assessment of the project and its outcomes. Typically, an evaluator is seen as a judge of a project's merits, which assumes that a "right" outcome exists. Yet inherent in evaluation are the values held by the evaluator, the project team, and the stakeholders, as context influences the process and shapes who decides where to focus attention, why, and how feedback is used (Skolits, Morrow, & Burr, 2009). Knowing how the project team intends to use evaluation results to improve project outcomes requires a shared understanding of the evaluator's role (Langfeldt & Kyvik, 2011).

Evaluators seek to understand what information is important to collect and review and how best to use the findings to relate outcomes to stakeholders (Levin-Rozalis, 2003). Researchers instead focus on investigating a particular issue or topic in depth, with the goal of producing new ways of understanding in these areas. In a perfect world, the roles of evaluators and researchers are distinct and separate. But, given requirements for funded projects to produce outcomes that inform the field, new knowledge is also discovered by evaluators. This swirl of roles results in evaluators publishing project results that inform the field, researchers leveraging their evaluator roles to publish scholarly work, and both borrowing strategies from each other to conduct their work.

The blurring of roles requires project leaders to provide clarity about evaluator and researcher team functions. The following questions can help in this process:

  • How will the evaluator and researcher share data?
  • What are the expectations for publication from the project?
  • What kinds of formative evaluation might occur that ultimately changes the project trajectory? How do these changes influence the research portion of the project?
  • How will the project team develop a shared understanding of terms, roles, scope of work, and authority?

Knowing how the evaluator and researcher will work together provides an opportunity to leverage expertise in ways that move beyond the simple additive effect of the two roles. Opportunities to share information are only possible when roles are coordinated, which requires advance planning. It is important to move beyond siloed roles and toward more collaborative models of evaluation and research within projects. Collaboration requires more time and attention to sharing information and defining roles, but the time spent coordinating these joint efforts is worth it given the contributions to both the project and the field.


References

Levin-Rozalis, M. (2003). Evaluation and research: Differences and similarities. The Canadian Journal of Program Evaluation, 18(2), 1-31.

Powell, R. R. (2006). Evaluation research: An overview. Library Trends, 55(1), 102-120.

Scriven, M. (2003/2004). Michael Scriven on the differences between evaluation and social science research. The Evaluation Exchange, 9(4).

Blog: Evaluating Network Growth through Social Network Analysis

Posted on May 11, 2017 in Blog

Doctoral Student, College of Education, University of Nebraska at Omaha

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One of the most impactful lessons from the ATE Principal Investigators Conference I attended in October 2016 was learning about the growing use of Business and Industry Leadership Team (BILT) partnerships in developing and implementing new STEM curricula throughout the country. The need for cross-sector partnerships has become apparent and has been reinforced through specific National Science Foundation (NSF) grant programs.

The need for empirical data about networks and collaborations is increasing within the evaluation realm, and social network surveys are one method of quickly and easily gathering such data. Social network surveys come in a variety of forms. The one I have used is in a roster format: each participant of the program is listed, and each individual completes the survey by selecting the option that best describes their relationship with every other person. The options vary in degree, from not knowing that person at one extreme to having formally collaborated with that person at the other. In the past, data from these surveys were analyzed through social network analysis software that required considerable programming knowledge. Thanks to recent technological advancements, newer social network analysis programs make analyzing these data much more approachable for non-programmers.

I have worked on an NSF-funded project at the University of Nebraska at Omaha whose goal is to provide professional development and facilitate the growth of a network of middle school teachers so they can create and implement computer science lessons in their current curriculum (visit the SPARCS website). One of the methods for evaluating the facilitation of the network is a social network analysis questionnaire. This method has proved very helpful in determining the extent to which the professional relationships of the cohort members have evolved over the course of their year-long experience in the program.

The social network analysis program I have been using is known as NodeXL and is an Excel add-in. It is very user-friendly and can easily be used to generate quantitative data on network development. I was able to take the data gathered from the social network analysis, conduct research, and present my article, “Identification of the Emergent Leaders within a CSE Professional Development Program,” at an international conference in Germany. While the article is not focused on evaluation, it does review the survey instrument itself.  You may access the article through this link (although I think your organization must have access to ACM):  Tracie Evans Reding WiPSCE Article. The article is also posted on my Academia.edu page.
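
For evaluators who prefer a scripted workflow, the same kind of roster data can also be analyzed with open-source tools. The snippet below is a minimal sketch, not taken from the SPARCS project or NodeXL: it uses the Python library networkx, and the participant names, the four-point relationship scale, and the measures computed are all illustrative assumptions.

# A minimal sketch (not from the original post) of analyzing roster-style
# social network survey data with the open-source networkx library.
# The relationship scale, names, and measures below are hypothetical.
import networkx as nx

# Each respondent rates every other participant on an ordinal scale, e.g.,
# 0 = do not know, 1 = know of, 2 = have interacted, 3 = have formally collaborated.
roster_responses = {
    "Alice": {"Bob": 3, "Carol": 1, "Dan": 0},
    "Bob":   {"Alice": 3, "Carol": 2, "Dan": 1},
    "Carol": {"Alice": 1, "Bob": 2, "Dan": 2},
    "Dan":   {"Alice": 0, "Bob": 1, "Carol": 2},
}

# Build a directed, weighted graph from the survey responses.
G = nx.DiGraph()
for respondent, ratings in roster_responses.items():
    for peer, strength in ratings.items():
        if strength > 0:  # omit "do not know" ties
            G.add_edge(respondent, peer, weight=strength)

# Simple measures that can be compared across survey waves to gauge
# whether the network is growing or densifying over time.
print("Density:", nx.density(G))
print("In-degree centrality:", nx.in_degree_centrality(G))
print("Reciprocity:", nx.reciprocity(G))

Running the same analysis on pre- and post-program survey waves gives a simple quantitative picture of how relationships in a cohort have changed, which is the kind of evidence of network growth described above.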

Another National Science Foundation funding strand that emphasizes networks is Inclusion across the Nation of Communities of Learners of Underrepresented Discoverers in Engineering and Science (NSF INCLUDES). The long-term goal of NSF INCLUDES is to "support innovative models, networks, partnerships, technical capabilities and research that will enable the U.S. science and engineering workforce to thrive by ensuring that traditionally underrepresented and underserved groups are represented in percentages comparable to their representation in the U.S. population." Noted in the synopsis for this funding opportunity is the importance of "efforts to create networked relationships among organizations whose goals include developing talent from all sectors of society to build the STEM workforce." The increased funding available for cross-sector collaborations makes it imperative that evaluators be able to empirically measure these collaborations. While the notion of "networks" is not a new one, the availability of resources such as NodeXL will make the evaluation of these networks much easier.

 

Full Citation for Article:

Evans Reding, T., Dorn, B., Grandgenett, N., Siy, H., Youn, J., Zhu, Q., & Engelmann, C. (2016). Identification of the Emergent Teacher Leaders within a CSE Professional Development Program. Proceedings of the 11th Workshop in Primary and Secondary Computing Education. Münster, Germany: ACM.

Blog: Evolution of Evaluation as ATE Grows Up

Posted on March 15, 2017 in Blog

Independent Consultant

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I attended a packed workshop by EvaluATE called “A Practical Approach to Outcome Evaluation” at the 2016 NSF ATE Principal Investigators Conference. Two lessons from the workshop reminded me that the most significant part of the evaluation process is the demystification of the process itself:

  • “Communicate early and often with human data sources about the importance of their cooperation.”
  • “Ensure everyone understands their responsibilities related to data collection.”

Stepping back, I reflected on the evolution of evaluation in the ATE community. When I first started out in the ATE world in 1995, I was on the staff of one of the first ATE centers ever funded. Back then, being "evaluated" was perceived as quite a different experience, something akin to taking your first driver's test or defending a dissertation—a meeting of the tester and the tested.

As the ATE community has matured, so has our approach to both evaluation and the integral communication component that goes with it. When we were a fledgling center, meetings with our evaluator could have been a chance to take advantage of the evaluation team's many years of experience with what works and what doesn't. Yet at the start we didn't realize that it was a two-way street in which both parties learn from each other. Twenty years ago, evaluator-center/project relationships were neither designed nor explained in that fashion.

Today, my colleague Dr. Sandra Mikolaski and I are co-evaluators for NSF ATE clients who range from a small new-to-ATE grant (there weren't any of those back in the day!) to a large center grant that provides resources to a number of other centers and projects and even has its own internal evaluation team. The experience of working with our new-to-ATE client was perhaps what forced us to be highly thoughtful about how we hope both parties view their respective roles and input. Because the "fish don't talk about the water" (i.e., project teams are often too close to their own work to toot their own horn), evaluators can provide not only perspective and advice but also connections to related work and to other project and center principal investigators. This perspective can have a tremendous impact on how activities are carried out and on the goals and objectives of a project.

We use EvaluATE webinars like "User-Friendly Evaluation Reports" and "Small-Scale Evaluation" as references and resources, not only for ourselves but also for our clients. These webinars help them understand that an evaluation is not meant merely to assess and critique, but to inform, amplify, modify, and benefit.

We have learned from being on the other side of the fence that an ongoing dialog, an ethnographic approach (on-the-ground research, participant observation, a holistic view), and a formative, input-based partnership with our client make for a more fruitful process for everyone.

Blog: National Science Foundation-funded Resources to Support Your Advanced Technological Education (ATE) Project

Posted on August 3, 2016 in Blog

Doctoral Associate, EvaluATE

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Did you know that other National Science Foundation programs focused on STEM education have centers that provide services to projects? EvaluATE offers evaluation-specific resources for the Advanced Technological Education program, while some of the others are broader in scope and purpose. They offer technical support, resources, and information targeted at projects within the scope of specific NSF funding programs. A brief overview of each of these centers is provided below, highlighting evaluation-related resources. Make sure to check the sites out for further information if you see something that might be of value for your project!

The Community for Advancing Discovery Research in Education (CADRE) is a network for NSF's Discovery Research K-12 (DR K-12) program. The evaluation resource on the CADRE site is a paper on evaluation options (formative and summative), which differentiates evaluation from the research and development efforts carried out as part of project implementation. Other, more general resources include guidelines and tools for proposal writing, a library of reports and briefs, and a video showcase of DR K-12 projects.

The Center for the Advancement of Informal Science Education (CAISE) has an evaluation section on its website that is searchable by type of resource (e.g., reports, assessment instruments), learning environment, and audience. For example, there are over 850 evaluation reports and 416 evaluation instruments available for review. The site hosts the Principal Investigator's Guide: Managing Evaluation in Informal STEM Education Projects, which was developed as an initiative of the Visitor Studies Association and has sections on working with an evaluator, developing an evaluation plan, creating evaluation tools, and reporting.

The Math and Science Partnership Network (MSPnet) supports the Math and Science Partnership (MSP) and STEM+C (computer science) communities. MSPnet has a digital library with over 2,000 articles; a search using the term "eval" found 467 listings, dating back to 1987. There is a toolbox with materials such as assessments, evaluation protocols, and form letters. Other resources in the MSPnet library include articles and reports related to teaching and learning, professional development, and higher education.

The Center for Advancing Research and Communication (ARC) supports the NSF Research and Evaluation on Education in Science and Engineering (REESE) program through technical assistance to principal investigators. Evaluation-specific resources include material from a workshop on implementation evaluation (also known as process evaluation).

The STEM Learning and Research Center (STELAR) provides technical support for the Innovative Technology Experiences for Students and Teachers (ITEST) program. Its website includes links to a variety of instruments, such as the Grit Scale, which can be used to assess students' resilience in learning as part of a larger evaluation plan.

Blog: Professional Development Opportunities in Evaluation – What’s Out There?

Posted on April 29, 2016 in Blog

Doctoral Associate, EvaluATE

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

To assist the EvaluATE community in learning more about evaluation, we have compiled a list of free and low-cost online and short-term professional development opportunities. There are always new things available, so this is only a place to start!  If you run across a good resource, please let us know and we will add it to the list.

Free Online Learning

Live Webinars

EvaluATE provides webinars created specifically for projects funded through the National Science Foundation's Advanced Technological Education program. The series includes four live events per year. Recordings, slides, and handouts of previous webinars are available. http://www.evalu-ate.org/category/webinars/

MEASURE Evaluation is a USAID-funded project with resources targeted to the field of global health monitoring and evaluation. Webinars are offered nearly every month on various topics related to impact evaluation and data collection; recordings of past webinars are also available. http://www.cpc.unc.edu/measure/resources/webinars

Archived Webinars and Videos

Better Evaluation’s archives include recordings of an eight-part webinar series on impact evaluation commissioned by UNICEF. http://betterevaluation.org/search/site/webinar

The Centers for Disease Control and Prevention's National Asthma Control Program offers recordings of its four-part webinar series on evaluation basics, including an introduction to the CDC's Framework for Program Evaluation in Public Health. http://www.cdc.gov/asthma/program_eval/evaluation_webinar.htm

EvalPartners has offered several webinars on topics related to monitoring and evaluation (M&E) and also has a series of self-paced e-learning courses. The focus of all programs is to improve competency in conducting evaluation, with an emphasis on evaluation in the community development context. http://www.mymande.org/webinars

Engineers Without Borders partners with communities to help them meet their basic human needs. They offer recordings of their live training events focused on monitoring, evaluation, and reporting. http://www.ewb-usa.org/resources?_sfm_cf-resources-type=video&_sft_ct-international-cd=impact-assessment

The University of Michigan School of Social Work has created six free, interactive, web-based learning modules on a range of evaluation topics. The target audience is students, researchers, and evaluators. Each module ends with a competency skills test and offers a printable certificate of completion. https://sites.google.com/a/umich.edu/self-paced-learning-modules-for-evaluation-research/

Low-Cost Online Learning

The American Evaluation Association (AEA) Coffee Break Webinars are 20-minute webinars on varying topics.  At this time non-members may register for the live webinars, but you must be a member of AEA to view the archived broadcasts. There are typically one or two sessions offered each month.  http://comm.eval.org/coffee_break_webinars/coffeebreak

AEA’s eStudy program is a series of in-depth real-time professional development opportunities and are not recorded.  http://comm.eval.org/coffee_break_webinars/estudy

The Canadian Evaluation Society (CES) offers webinars to members on a variety of evaluation topics. Reduced membership rates are available for members of AEA. http://evaluationcanada.ca/webinars

Face-to-Face Learning

AEA Summer Evaluation Institute is offered annually in June, with a number of workshops and conference sessions.  http://www.eval.org/p/cm/ld/fid=232

The Evaluator’s Institute offers one- to five-day courses in Washington, DC in February and July. Four levels of certificates are available to participants. http://tei.cgu.edu/

Beyond these professional development opportunities, university degree and certificate programs are listed on the AEA website under the “Learn” tab.  http://www.eval.org/p/cm/ld/fid=43

Blog: Researching Evaluation Practice while Practicing Evaluation

Posted on November 10, 2015 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

There is a dearth of research on evaluation practice, particularly of the sort that practitioners can use to improve their own work (according to Nick Smith in a forthcoming edition of New Directions for Evaluation, “Using Action Design Research to Research and Develop Evaluation Practice”1,2).

Action design research is described by Dr. Smith as a “strategy for developing and testing alternative evaluation practices within a case-based, practical reasoning view of evaluation practice.” This approach is grounded in the understanding that evaluation is not a “generalizable intervention to be evaluated, but a collection of performances to be investigated” (p. 5). Importantly, action design research is conducted in real time, in authentic evaluation contexts. Its purpose is not only to better understand evaluation practices, but to develop effective solutions to common challenges.

We at EvaluATE are always on the lookout for opportunities to test out ideas for improving evaluation practice as well as our own work in providing evaluation education.  A chronic problem for many evaluators is low response rates. Since 2009, EvaluATE has presented 4 to 6 webinars per year, each concluding with a brief feedback survey. Given that these webinars are about evaluation, a logical conclusion is that participants are predisposed to evaluation and will readily complete the surveys, right? Not really. Our response rates for these surveys range from 34 to 96 percent, with an average of 60 percent. I believe we should consistently be in the 90 to 100 percent range.

So in the spirit of action design research on evaluation, I decided to try a little experiment. At our last webinar, before presenting any content, I showed a slide with the following statement beside an empty checkbox: "I agree to complete the <5-minute feedback survey at the end of this webinar." I noted the importance of evaluation for improving our center's work and for our accountability to the National Science Foundation. We couldn't tell exactly how many people checked the box, but it was clear that several did. I was optimistic that asking for this public (albeit anonymous) commitment at the start of the webinar would boost response rates substantially.

The result: 72 percent completed the survey. Pretty good, but well short of my standard for excellence. It was our eighth-highest response rate ever and the highest of the past year, but four of the five webinar surveys in 2013-14 had response rates between 65 and 73 percent. As is so often the case in research, the initial results are inconclusive and we will have to investigate further: How are webinar response rates affected by audience composition, perceptions of the webinar's quality, or asking for participation multiple times? As Nick Smith pointed out in his review of a draft of this blog: "What you are really after is not just a high response rate, but a greater understanding of what affects webinar evaluation response rates. That kind of insight turns your efforts from local problem solving to generalizable knowledge – from Action Design Problem Solving to Action Design Research."
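
As a side note for readers who want to formalize this kind of comparison, the short sketch below is my own addition, not part of the original experiment. It shows how one might test whether a single webinar's response rate differs from the pooled historical rate using a simple two-proportion z-test in Python. The attendance counts are hypothetical; only the 72 percent and roughly 60 percent figures come from the post, and, as the post argues, even a statistically detectable difference would not explain why the rate changed.

# Illustrative sketch only: a two-proportion z-test comparing one webinar's
# response rate with the pooled historical rate. Attendance counts are
# hypothetical; only the 72% and ~60% figures come from the blog post.
from statistics import NormalDist

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Return (z, two-sided p-value) for H0: the two response rates are equal."""
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (successes_a / n_a - successes_b / n_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: 72 of 100 attendees responded at the experimental
# webinar versus 600 of 1,000 attendees across earlier webinars (about 60%).
z, p = two_proportion_ztest(72, 100, 600, 1000)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
# Even a small p-value cannot say *why* the rate changed (audience composition,
# topic, reminders), which is the larger point of the post.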

I am sharing this experience not because I found the sure-fire way to get people to respond to webinar evaluation surveys. Rather, I am sharing it as a lesson learned and to invite you to conduct your own action design research on evaluation and tell us about it here on the EvaluATE blog.

1 Disclosure: Nick Smith is the chairperson of EvaluATE’s National Visiting Committee, an advisory panel that reports to the National Science Foundation.

2 Smith, N. L. (in press). Using action design research to research and develop evaluation practice. In P. R. Brandon (Ed.), Recent developments in research on evaluation. New Directions for Evaluation.

Blog: Quick Graphics with Canva

Posted on November 4, 2015 in Blog

Project Manager Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Quick Graphics

In your ATE project, you often have to develop different communication materials for your stakeholders. To make them more engaging and move beyond clip art, you might consider a graphic design tool. In this blog, I share my experience with graphic design programs and give a quick tour of how I use Canva for this purpose.

When it comes to graphic work, I have a tendency to keep to my old-school ways. I love Adobe products; I have been using them for fifteen years and don't like to veer off my path of using them. But when I offer advice to beginners, I steer them away from Adobe. First, there is a steep learning curve: many people are intimidated by the thought of Adobe Illustrator or Photoshop and won't even attempt to learn them. Second, these products can be expensive. So with two strikes against Adobe and the constant challenge to try something new, I ventured out into the wild and tried Canva.

Free: Canva.com is a free online graphic design tool. It has a variety of pre-sized design templates you can choose from, which can be used for social media, blogs, email, presentations, posters, and other graphic materials; or you can create your own. Canva provides you with the choice of several different graphic sizes, which takes the guesswork out of designing for social media or print. Once your canvas size is set, you enter into design mode. Canva features a library of over one million graphics, layouts, and illustrations to choose from. Some elements are free, and some cost only $1. The prices are clearly marked as you browse through the options.

Quick and Easy: So after trying out Canva, I was really impressed. Is it Photoshop or Illustrator? No, but for doing basic graphic design, it is really good. The hardest part of designing any document is staring at the blank page. Canva helps get past “designer’s block” by providing templates, so you can just put in your text and hit save. For those who are ready for the next creative challenge, you can pick a blank template and choose a photo/graphic from the library or upload your own. It’s just that easy!

Social Media and Outreach: I have started using Canva for designing basic graphics for our social media and outreach items. Not only am I cutting down on the time spent working on these tasks, I am also being more creative with my designs. Seeing all the options within the system really brings out my creativity. I encourage you to go onto Canva.com and make your own graphics. Get rid of the boring white paper flyer, and wow the audience with a new look from Canva. It’s quick and easy. Happy Designing!

Canva Quick Guide: an eight-step image walkthrough (screenshots not reproduced here).

Blog: Evaluation Training and Professional Development

Posted on October 7, 2015 in Blog

Doctoral Associate, EvaluATE

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello ATE Community!

My name is Cheryl Endres, and I am the new blog editor and doctoral associate for EvaluATE. I am a doctoral student in the Interdisciplinary Ph.D. in Evaluation program at Western Michigan University. To help me begin to learn more about ATE and identify blog topics, we at EvaluATE took a closer look at some results from the survey conducted by EvaluATE's external evaluator. As you can see from the chart, the majority of ATE evaluators have gained their knowledge about evaluation on the job, through self-study, and through nonacademic professional development. Knowing this gives us some idea of additional resources for building your evaluation "toolkit."

[Chart: sources of ATE evaluators' evaluation knowledge]

It may be difficult for practicing evaluators to take time for formal, graduate-level coursework. Fortunately, there are abundant opportunities just a click away on the Internet! As the evaluation field continues to expand, so do the online and in-person options for building your knowledge base about evaluation. Since wading through the array of options can be daunting, we have compiled a short list to get you started:

  • The EvaluATE webinars evalu-ate.org/category/webinars/ are a great place to get started for information specific to evaluation in the ATE context.
  • The American Evaluation Association has a “Learn” tab that provides information about the Coffee Break Webinar series, eStudies, and the Summer Evaluation Institute. There are also links to online and in-person events around the country (and world) and university programs, some of which offer certificate programs in evaluation in addition to degree programs (master’s or doctoral level). The AEA annual conference in November is also a great option, offering an array of preconference workshops: eval.org
  • The Canadian Evaluation Society offers free webinars to members. The site includes archived webinars as well: evaluationcanada.ca/professional-learning
  • The Evaluators’ Institute at George Washington University offers in-person institutes in Washington, D.C. in February and July. They offer four different certificates in evaluation. Check out the schedules at tei.gwu.edu
  • EvalPartners has a number of free e-learning programs: mymande.org/elearning

These should get you started. If you find other good sources, please email me at cheryl.endres@wmich.edu.