Archive: professional development

Blog: Evaluating Professional Development Projects*

Posted on October 16, 2019


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

 

Terryll Bailey
Founder and President
The Allison Group
Lori Wingate
Director of Research
The Evaluation Center at Western Michigan University

A good prompt to start thinking about how to approach the evaluation of an Advanced Technological Education (ATE) professional development (PD) project is the ATE program solicitation. Regarding PD grants, the solicitation states that “projects should be designed to enhance the educators’ disciplinary capabilities, teaching skills, and understanding of current technologies and practices, and employability skills.” It further recommends the “evaluation should demonstrate use in the classrooms and sustainable changes in practice of participating faculty and teachers leading to more qualified technicians for the industry. Changes in student learning outcomes as well as students’ perceptions of technical careers should be assessed” (National Science Foundation, p. 5).

ATE grants span multiple years; however, sustainable, lasting systemic change is the long-term goal. It is important to consider the potential for systemic change as the project begins and to build in realistic indicators that project activities are influencing the system. The following are some tips to consider when evaluating PD projects.

  1. Evaluate the design and process of PD interventions, as well as the outcomes. This is especially helpful for formative evaluation, which provides feedback for improving interventions while they’re underway. It’s also critical for illuminating the strengths and weaknesses of a PD effort to aid in understanding why certain outcomes were or were not achieved. Learning Forward’s Standards for Professional Learning and the Southern Regional Education Board’s Standards for Online Professional Development are good sources of information about what high-quality PD looks like. Fellow instructors or program deans with content knowledge can be helpful collaborators and internal evaluators, providing feedback on the quality of the content, instruction, and materials.
  2. Don’t reinvent the wheel with your evaluation design. PD is one of a relatively few areas where there are well-established frameworks for evaluation. Donald Kirkpatrick was the guru of PD evaluation and the originator of the “Four Levels” approach. Thomas Guskey adapted the Kirkpatrick model specifically for education contexts and defined five levels of professional learning evaluation. Jack and Patti Phillips bring a return-on-investment perspective to this work. Check out their materials for great ideas for framing your PD evaluation and for guidance in determining which data and data sources to employ. Joellen Killion brings these models together in her book Assessing Impact, which offers six levels to consider: reaction, learning, organizational support, application, impact on students, and return on investment.
  3. Once you embrace the “levels” approach to PD evaluation, project stakeholders can work collaboratively to define the intended outcomes for each level and the evaluation data collection methods and sources. One way to focus this work is to recall the National Science Foundation’s interest in impacting (1) educators’ disciplinary capabilities, teaching skills, understanding of current technologies and practices, and employability skills, and (2) students’ learning outcomes and perceptions of technical careers. (A planning sketch follows this list.)
  4. If a professional learning community (e.g., community of practice, virtual learning community) is involved, pay special attention to capturing the nature of the interactions and associated learning among participants. In this type of PD initiative, assessing process is crucial. To learn more about evaluating professional communities, see Etienne and Beverly Wenger-Trayner’s overview of communities of practice.
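
To make the “levels” planning concrete, here is a minimal sketch of an evaluation planning matrix organized around the six levels Killion describes (reaction, learning, organizational support, application, impact on students, and return on investment). The intended outcomes and data sources shown are illustrative assumptions a project team might fill in collaboratively, not prescriptions from any of the models cited above.

```python
# A minimal sketch of an evaluation planning matrix keyed to Killion's six levels.
# The outcomes and data sources are hypothetical examples, not required measures.

evaluation_plan = {
    "reaction": {
        "intended_outcome": "Participants rate the PD as relevant and well designed",
        "data_sources": ["end-of-workshop survey"],
    },
    "learning": {
        "intended_outcome": "Participants demonstrate new disciplinary and technical knowledge",
        "data_sources": ["pre/post knowledge assessment", "artifact review"],
    },
    "organizational_support": {
        "intended_outcome": "Departments provide time and equipment for implementation",
        "data_sources": ["administrator interviews"],
    },
    "application": {
        "intended_outcome": "Participants use the new practices in their courses",
        "data_sources": ["follow-up survey", "lesson plan review", "classroom observation"],
    },
    "impact_on_students": {
        "intended_outcome": "Students show gains in learning and interest in technical careers",
        "data_sources": ["course assessments", "student career-perception survey"],
    },
    "return_on_investment": {
        "intended_outcome": "Benefits justify the cost of the PD",
        "data_sources": ["cost records", "enrollment and completion data"],
    },
}

# Print the plan as a quick worksheet for stakeholder review.
for level, row in evaluation_plan.items():
    print(f"{level}: {row['intended_outcome']}")
    print(f"  data sources: {', '.join(row['data_sources'])}")
```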

Online PD presents its own set of evaluation challenges, but tools and frameworks are available to address them. Back-end analytics from the various online platforms can actually make evaluation easier, because participation records are kept automatically.
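As an illustration of putting those automatically kept records to use, the sketch below summarizes a generic activity export from an online PD platform. The file name and column names (participant_id, module, completed) are assumptions about a hypothetical export, not any specific platform’s format.

```python
# A minimal sketch of summarizing back-end analytics exported from an online PD
# platform. The file name and columns are hypothetical; real exports vary.
import csv
from collections import defaultdict

completed = defaultdict(set)   # module -> participants who finished it
enrolled = set()               # all participants appearing in the export

with open("pd_activity_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        enrolled.add(row["participant_id"])
        if row["completed"].strip().lower() == "yes":
            completed[row["module"]].add(row["participant_id"])

# Report completion rates per module as simple formative-evaluation evidence.
for module, finishers in sorted(completed.items()):
    rate = len(finishers) / len(enrolled) * 100
    print(f"{module}: {len(finishers)}/{len(enrolled)} participants ({rate:.0f}%)")
```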

ADDITIONAL RESOURCES

The Evaluation Exchange’s special issue on professional development (see especially the article by Spicer et al. about online professional development).

Example professional development follow-up survey developed by the ATE project, Destination Problem-Based Learning.

The Student Assessment of Their Learning Gains Instrument for use by college instructors to “gather learning-focused feedback from students.”

* This blog is based on a handout from an EvaluATE workshop at the 2011 ATE Principal Investigators Conference.

Blog: Using Rubrics to Demonstrate Educator Mastery in Professional Development

Posted on September 18, 2018
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Nena Bloom
Evaluation Coordinator
Center for Science Teaching and Learning, Northern Arizona University
Lori Rubino-Hare
Professional Development Coordinator
Center for Science Teaching and Learning, Northern Arizona University

We are Nena Bloom and Lori Rubino-Hare, the internal evaluator and principal investigator, respectively, of the Advanced Technological Education project Geospatial Connections Promoting Advancement to Careers and Higher Education (GEOCACHE). GEOCACHE is a professional development (PD) project that aims to enable educators to incorporate geospatial technology (GST) into their classes, to ultimately promote careers using these technologies. Below, we share how we collaborated on creating a rubric for the project’s evaluation.

One important outcome of effective PD is participants’ mastery of new knowledge and skills (Guskey, 2000; Haslam, 2010). GEOCACHE defines “mastery” as participants’ effective application of the new knowledge and skills in educator-created lesson plans.

GEOCACHE helps educators teach their content through Project Based Instruction (PBI) that integrates GST. In PBI, students collaborate and critically examine data to solve a problem or answer a question. Educators were provided 55 hours of PD, during which they experienced model lessons integrated with GST content. Educators then created lesson plans tied to the curricular goals of their courses, infusing opportunities for students to learn appropriate subject matter through the exploration of spatial data. “High-quality GST integration” was defined as opportunities for learners to collaboratively use GST to analyze and/or communicate patterns in data to describe phenomena, answer spatial questions, or propose solutions to problems.

We analyzed the educator-created lesson plans using a rubric to determine whether the GEOCACHE PD supported participants’ ability to apply the new knowledge and skills effectively within lessons. We believe this is a more objective indicator of PD effectiveness than self-report measures alone. Rubrics, widely used to assess student performance, also provide meaningful information for program evaluation (Davidson, 2004; Oakden, 2013). A rubric sets out a clear standard and a set of criteria for identifying different levels of performance quality. The objective is to understand participants’ average skill level on the particular dimensions of interest. Davidson (2004) proposes that rubrics are useful in evaluation because they make judgments transparent. In program evaluation, scores for each criterion are aggregated across all participants.
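As a simple illustration of aggregating rubric scores across participants, the sketch below averages each criterion across a set of scored lesson plans. The criteria names, scale, and scores are hypothetical examples, not the GEOCACHE rubric itself.

```python
# A minimal sketch of aggregating rubric scores for program evaluation:
# average each criterion across all scored lesson plans. Criteria and scores
# are hypothetical placeholders (e.g., a 1-4 scale), not the project's rubric.
from statistics import mean

# Each dict holds one participant's lesson-plan scores.
scores = [
    {"GST integration": 3, "PBI design": 2, "Alignment to course goals": 4},
    {"GST integration": 4, "PBI design": 3, "Alignment to course goals": 3},
    {"GST integration": 2, "PBI design": 2, "Alignment to course goals": 3},
]

for criterion in scores[0]:
    avg = mean(s[criterion] for s in scores)
    print(f"{criterion}: average {avg:.1f} across {len(scores)} lesson plans")
```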

Practices we used to develop and utilize the rubric included the following:

  • We developed the rubric collaboratively with the program team to create a shared understanding of performance expectations.
  • We focused on aligning the criteria and expectations of the rubric with the goal of the lesson plan (i.e., to use GST to support learning goals through PBI approaches).
  • Because good rubrics existed but were not entirely aligned with our project goal, we chose to adapt existing technology-integration rubrics (Britten & Cassady, 2005; Harris, Grandgenett, & Hofer, 2010) and PBI rubrics (Buck Institute for Education, 2017) to include GST use, rather than start from scratch.
  • We checked that the criteria at each level were clearly defined, to ensure that scoring would be accurate and consistent.
  • We pilot tested the rubric with several units, using several scorers, and revised accordingly.

This authentic assessment of educator learning informed the evaluation. It provided information about the knowledge and skills educators were able to master and how the PD might be improved.


References and resources

Britten, J. S., & Cassady, J. C. (2005). The Technology Integration Assessment Instrument: Understanding planned use of technology by classroom teachers. Computers in the Schools, 22(3), 49-61.

Buck Institute for Education. (2017). Project design rubric. Retrieved from http://www.bie.org/object/document/project_design_rubric

Davidson, E. J. (2004). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: Sage Publications, Inc.

Guskey, T. R. (2000). Evaluating professional development. Thousand Oaks, CA: Corwin Press.

Harris, J., Grandgenett, N., & Hofer, M. (2010). Testing a TPACK-based technology integration assessment instrument. In C. D. Maddux, D. Gibson, & B. Dodge (Eds.), Research highlights in technology and teacher education 2010 (pp. 323-331). Chesapeake, VA: Society for Information Technology and Teacher Education.

Haslam, M. B. (2010). Teacher professional development evaluation guide. Oxford, OH: National Staff Development Council.

Oakden, J. (2013). Evaluation rubrics: How to ensure transparent and clear assessment that respects diverse lines of evidence. Melbourne, Australia: BetterEvaluation.

Blog: Gauging the Impact of Professional Development Activities on Students

Posted on January 17, 2018

Executive Director of Emerging Technology Grants, Collin College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Many Advanced Technological Education (ATE) grants hold professional development events for faculty. As the lead for several ATE grants, I have been concerned that while data obtained from faculty surveys immediately after these events are useful, they do not gauge the impact of the training on students. The National Convergence Technology Center’s (CTC) approach, described below, uses longitudinal survey data from the faculty attendees to begin to provide evidence on student impact. I believe the approach is applicable to any discipline.

The CTC provides information technology faculty with a free intensive professional development event titled Working Connections Faculty Development Institute. The institute is held twice per year: five days in the summer and two and a half days in the winter. The summer institute gives faculty members the skills needed to create a new course or make major updates to an existing one; the shorter winter institute provides enough to update a course. Over the years, more than 1,700 faculty have enrolled in the training. From the beginning, we have gathered attendee feedback via two surveys at the end of each event. One survey focuses on the specific topic track, asking about the extent to which attendees feel the track’s three learning outcomes were mastered, as well as about the instructor’s pacing, classroom management, and so on. The other survey asks about the overall event, including attendees’ reactions to the focused lunch programs and how many new courses have been created or enhanced as a result of past attendance.

The CTC educates faculty members as a vehicle for educating students. To learn how the training impacts students and programs, we also send out longitudinal surveys at 6, 18, 30, 42, and 54 months after each summer Working Connections training. These surveys ask faculty members to report on what they did with what they learned at each training, including how many students they taught as a result. Faculty are also asked to report how many certificates and degrees were created or enhanced. Each Working Connections cohort receives a separate survey invitation (i.e., someone who attended two Working Connections will get two separate invitations) that includes a link to the survey as well as a roster to help attendees remember which track they took that year. Participation is voluntary, but over the years, we have consistently and strongly emphasized the importance of getting this longitudinal data so that we can provide some evidence of student impact to the National Science Foundation. Our response rate from surveys sent in January 2016 was 59%.
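To illustrate how this kind of longitudinal follow-up can be tallied, here is a small sketch that computes a response rate per cohort and sums faculty-reported counts across cohorts. The cohort names, invitation counts, and reported numbers are hypothetical placeholders, not the CTC’s actual data.

```python
# A minimal sketch of tallying longitudinal follow-up survey results by cohort.
# All numbers and field names are hypothetical placeholders.
cohorts = {
    "Summer 2014": {"invited": 120, "returned": 71, "students_reported": 4350},
    "Summer 2015": {"invited": 135, "returned": 80, "students_reported": 3900},
}

total_invited = total_returned = total_students = 0
for name, c in cohorts.items():
    rate = c["returned"] / c["invited"] * 100
    print(f'{name}: {c["returned"]}/{c["invited"]} surveys returned ({rate:.0f}%), '
          f'{c["students_reported"]} students reported')
    total_invited += c["invited"]
    total_returned += c["returned"]
    total_students += c["students_reported"]

print(f"Overall response rate: {total_returned / total_invited * 100:.0f}%")
print(f"Total students reported across cohorts: {total_students}")
```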

Responses from surveys from 2008-2016 indicate the following:

  • Students taught the subjects covered in the training: 88,591
  • New or enhanced courses: 241
  • Sections taught: 4,899
  • New or enhanced certificates and degrees: 310

While these data still do not tell us how individual students experienced the attendees’ learning, they do provide evidence that is one step closer to capturing student impact than faculty feedback collected immediately after each training. We are considering what else we can do to further unpack the impact on students, but Family Educational Rights and Privacy Act (FERPA) restrictions prevent the CTC from contacting affected students directly without their permission.

Tip: A longitudinal survey effort must be intentional and consistent. It is also extremely important to keep promoting the value of the surveys, both during the professional development events and in emails preceding the annual survey invitations. It is all too easy for attendees to simply delete a longitudinal survey if they do not see the point of filling it out.

Blog: Professional Development Opportunities in Evaluation – What’s Out There?

Posted on April 29, 2016

Doctoral Associate, EvaluATE

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

To assist the EvaluATE community in learning more about evaluation, we have compiled a list of free and low-cost online and short-term professional development opportunities. There are always new things available, so this is only a place to start!  If you run across a good resource, please let us know and we will add it to the list.

Free Online Learning

Live Webinars

EvaluATE provides webinars created specifically for projects funded through the National Science Foundation’s Advanced Technological Education program. The series includes four live events per year. Recordings, slides, and handouts from previous webinars are available. https://www.evalu-ate.org/category/webinars/

MEASURE Evaluation is a USAID-funded project with resources targeted to the field of global health monitoring and evaluation. Webinars are offered nearly every month on various topics related to impact evaluation and data collection; recordings of past webinars are also available. http://www.cpc.unc.edu/measure/resources/webinars

Archived Webinars and Videos

Better Evaluation’s archives include recordings of an eight-part webinar series on impact evaluation commissioned by UNICEF. http://betterevaluation.org/search/site/webinar

Centers for Disease Control’s National Asthma Control Program offers recordings of its four-part webinar series on evaluation basics, including an introduction to the CDC’s Framework for Program Evaluation in Public Health. http://www.cdc.gov/asthma/program_eval/evaluation_webinar.htm

EvalPartners offers several webinars on topics related to monitoring and evaluation (M&E), as well as a series of self-paced e-learning courses. The focus of all programs is to improve competency in conducting evaluation, with an emphasis on evaluation in the community development context. http://www.mymande.org/webinars

Engineers Without Borders partners with communities to help them meet their basic human needs. They offer recordings of their live training events focused on monitoring, evaluation, and reporting. http://www.ewb-usa.org/resources?_sfm_cf-resources-type=video&_sft_ct-international-cd=impact-assessment

The University of Michigan School of Social Work has created six free interactive web-based learning modules on a range of evaluation topics. The target audience is students, researchers, and evaluators. Each module ends with a competency skills test and offers a printable certificate of completion. https://sites.google.com/a/umich.edu/self-paced-learning-modules-for-evaluation-research/

Low-Cost Online Learning

The American Evaluation Association (AEA) Coffee Break Webinars are 20-minute webinars on varying topics.  At this time non-members may register for the live webinars, but you must be a member of AEA to view the archived broadcasts. There are typically one or two sessions offered each month.  http://comm.eval.org/coffee_break_webinars/coffeebreak

AEA’s eStudy program is a series of in-depth real-time professional development opportunities and are not recorded.  http://comm.eval.org/coffee_break_webinars/estudy

The Canadian Evaluation Society (CES) offers webinars to members on a variety of evaluation topics. Reduced membership rates are available for members of AEA. http://evaluationcanada.ca/webinars

Face-to-Face Learning

The AEA Summer Evaluation Institute is offered annually in June, with a number of workshops and conference sessions. http://www.eval.org/p/cm/ld/fid=232

The Evaluator’s Institute offers one- to five-day courses in Washington, DC in February and July. Four levels of certificates are available to participants. http://tei.cgu.edu/

Beyond these professional development opportunities, university degree and certificate programs are listed on the AEA website under the “Learn” tab.  http://www.eval.org/p/cm/ld/fid=43

Blog: Evaluation Training and Professional Development

Posted on October 7, 2015

Doctoral Associate, EvaluATE

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Hello ATE Community!

My name is Cheryl Endres, and I am the new blog editor and doctoral associate for EvaluATE. I am a doctoral student in the Interdisciplinary Ph.D. in Evaluation program at Western Michigan University. To help me begin learning more about ATE and identify blog topics, we (EvaluATE) took a closer look at some results from the survey conducted by EvaluATE’s external evaluator. As you can see from the chart, the majority of ATE evaluators have gained their knowledge about evaluation on the job, through self-study, and through nonacademic professional development. Knowing this gives us some idea about additional resources for building your evaluation “toolkit.”

[Chart: how ATE evaluators acquired their evaluation knowledge]

It may be difficult for practicing evaluators to take time for formal, graduate-level coursework.  Fortunately, there are abundant opportunities just a click away on the Internet!  Since wading through the array of options can be somewhat daunting, we have compiled a short list to get you started in your quest. As the evaluation field continues to expand, the opportunities do as well, and there are a number of online and in-person options for continuing to build your knowledge base about evaluation. Listed below are just a few to get you started:

  • The EvaluATE webinars evalu-ate.org/category/webinars/ are a great place to get started for information specific to evaluation in the ATE context.
  • The American Evaluation Association has a “Learn” tab that provides information about the Coffee Break Webinar series, eStudies, and the Summer Evaluation Institute. There are also links to online and in-person events around the country (and world) and university programs, some of which offer certificate programs in evaluation in addition to degree programs (master’s or doctoral level). The AEA annual conference in November is also a great option, offering an array of preconference workshops: eval.org
  • The Canadian Evaluation Society offers free webinars to members. The site includes archived webinars as well: evaluationcanada.ca/professional-learning
  • The Evaluators’ Institute at George Washington University offers in-person institutes in Washington, D.C. in February and July. They offer four different certificates in evaluation. Check out the schedules at tei.gwu.edu
  • EvalPartners has a number of free e-learning programs: mymande.org/elearning

These should get you started. If you find other good sources, please email me at cheryl.endres@wmich.edu.