Blog: Overcoming Writer’s Block – Strategies for Writing Your NSF Annual Report

Posted on February 14, 2018

Supervisor, Grant Projects, Columbus State Community College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

For many new grantees and seasoned principal investigators, nothing is more daunting than an email from Research.gov titled “Annual Project Report Is NOW DUE.” In this blog post, I will help tackle the challenge of writing the annual project report by highlighting the strategy Columbus State Community College has developed for effectively writing annual reports and discussing why this strategy also serves as a planning and feedback tool.

Columbus State’s strategy for Advanced Technological Education (ATE) annual reporting developed organically, with input and collaboration from faculty, staff, and external evaluators, and is based on three key components:

  • shared knowledge of reporting requirements and format,
  • a structured annual reporting timeline, and
  • best-practice sharing and learning from past experience.

This three-pronged approach was used by four ATE projects during 2017 and builds on the old adage that “you only get out what you put in.” The key to annual reporting, and what also makes it an important planning and feedback tool, is the adoption of a structured reporting timeline. The 10-week timeline outlined below ensures that adequate time is dedicated to writing the annual report, and it is designed to be collaborative, spurring discussion around key milestones, lessons learned, and next steps for revising and improving project plans.

PREPARE

Week 1: Communicating Reporting Requirements

Weeks 1-3: Planning and Data Collection

  • All team members should actively participate in the planning and data collection phase.
  • Project teams should collect a wide breadth of information related to project achievements and milestones. Common types of information collected include individual progress updates, work samples, project work plans and documentation, survey and evaluation feedback, and project metrics.

Week 4: Group Brainstorming

  • Schedule a 60- to 90-minute meeting that focuses specifically on brainstorming and discussing content for the annual report. Include all project team members and your evaluator.
  • Use the project reports template to guide the conversation.

WRITE

Weeks 5-6: Writing the First Draft and Seeking Clarification

  • All information is compiled by the project team and assembled into a first draft.
  • It may be useful to mirror the format of a grant proposal or project narrative during this phase to ensure that all project areas are addressed and considered.
  • The focus of this stage is ensuring that all information is accurately captured and integrated.

REVISE

Week 7: First Draft Review

  • First drafts should be reviewed by the project team and two to three people outside of the project team.
  • Including individuals from both inside and outside of the project team will help ensure that useful content is not omitted and that content is presented in an accessible manner.

Weeks 8-9: Final Revisions

  • Feedback and comments are evaluated and final revisions are made.

Week 10: Annual Report Submission

  • The final version of the annual report, with appendices and the evaluation report, is uploaded and submitted through Research.gov.
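For teams that like to anchor the phases to calendar dates, here is a minimal sketch (in Python, using a hypothetical due date) of how the 10-week timeline above can be back-dated from the Research.gov submission deadline; the phase names and week ranges simply mirror the outline above.

```python
from datetime import date, timedelta

# Hypothetical Research.gov due date -- replace with your project's actual deadline.
due_date = date(2018, 6, 30)

# Phases from the 10-week timeline above, each spanning a range of weeks (1-indexed).
phases = [
    ("Communicating reporting requirements", 1, 1),
    ("Planning and data collection", 1, 3),
    ("Group brainstorming", 4, 4),
    ("Writing the first draft and seeking clarification", 5, 6),
    ("First draft review", 7, 7),
    ("Final revisions", 8, 9),
    ("Annual report submission", 10, 10),
]

# Week 10 ends on the due date, so week 1 begins 10 weeks (less one day) earlier.
week1_start = due_date - timedelta(weeks=10) + timedelta(days=1)

for name, first_week, last_week in phases:
    start = week1_start + timedelta(weeks=first_week - 1)
    end = week1_start + timedelta(weeks=last_week) - timedelta(days=1)
    print(f"Weeks {first_week}-{last_week}: {name}: {start:%b %d} to {end:%b %d}")
```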

For additional information about Columbus State’s writing tips, please view our full white paper.

Blog: Part 2: Using Embedded Assessment to Understand Science Skills

Posted on January 31, 2018

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Rachel Becker-Klein, Senior Research Associate, PEER Associates
Karen Peterman, President, Karen Peterman Consulting
Cathlyn Stylinski, Senior Agent, University of Maryland Center for Environmental Science

In our last EvaluATE blog, we defined embedded assessments (EAs) and described the benefits and challenges of using EAs to measure and understand science skills. Since then, our team has been testing the development and use of EAs for three citizen science projects through our National Science Foundation (NSF) project, Embedded Assessment for Citizen Science. Below we describe our journey and findings, including the creation and testing of an EA development model.

Our project first worked to test a process model for the development of EAs that could be both reliable and valid (Peterman, Becker-Klein, Stylinski, & Grack-Nelson, 2017). Stage 1 was about articulating program goals and determining evidence for documenting those goals. In Stage 2, we collected both content validity evidence (the extent to which a measure was related to the identified goal) and response process validity evidence (how understandable the task was to participants). Finally, the third stage involved field-testing the EA. The exploratory process, with stages and associated products, is depicted in the figure below.

We applied our EA development approach to three citizen-science case study sites and were successful at creating an EA for each. For instance, for Nature’s Notebook (an online monitoring program where naturalists record observations of plants and animals to generate long-term datasets), we worked together to create an EA of paying close attention. This EA was developed for participants to use in the in-person workshop, where they practiced observation skills by collecting data about flora and fauna at the training site. Participants completed a Journal and Observation Worksheet as part of their training, and the EA process standardized the worksheet and also included a rubric for assessing how participants’ responses reflected their ability to pay close attention to the flora and fauna around them.

Embedded Assessment Development Process

Lessons Learned:

  • The EA development process had the flexibility to accommodate the needs of each case study to generate EAs that included a range of methods and scientific inquiry skills.
  • Both the SMART goals and Measure Design Template (see Stage 1 in the figure above) proved useful as a way to guide the articulation of project goals and activities, and the identification of meaningful ways to document evidence of inquiry learning.
  • The response process validity component (from Stage 2) resulted in key changes to each EA, such as changes to the assessment itself (e.g., streamlining the activities) as well as the scoring procedures.

Opportunities for using EAs:

  • Modifying existing activities. All three of the case studies had project activities that we could build on to create an EA. We were able to work closely with program staff to modify the activities to increase their rigor and standardization.
  • Formative use of EAs. Since a true EA is indistinguishable from the program itself, the process of developing and using an EA often resulted in strengthened project activities.

Challenges of using EAs:

  • Fine line between EA and program activities. If an EA is truly indistinguishable from the project activity itself, it can be difficult for project leaders and evaluators to determine where the program ends and the assessment begins. This ambiguity can create tension in cases where volunteers are not performing scientific inquiry skills as expected, making it difficult to disentangle whether the results were due to shortcomings of the program or a failing of the EA designed to evaluate the program.
  • Group versus individual assessments. Another set of challenges in administering EAs relates to the group-based implementation of many informal science projects. A group score may not represent the skills of each individual in the group, making the results biased and difficult to interpret.

Though the results of this study are promising, we are at the earliest stages of understanding how to capture authentic evidence to document learning related to science skills. The use of a common EA development process, with common products, has the potential to generate new research to address the challenges of using EAs to measure inquiry learning in the context of citizen science projects and beyond. We will continue to explore these issues in our new NSF grant, Streamlining Embedded Assessment for Citizen Science (DRL #1713424).

Acknowledgments:

We would like to thank our case study partners: LoriAnne Barnett from Nature’s Notebook; Chris Goforth, Tanessa Schulte, and Julie Hall from Dragonfly Detectives; and Erick Anderson from the Young Scientists Club. This work was supported by the National Science Foundation under grant number DRL#1422099.

Resource:

Peterman, K., Becker-Klein, R., Stylinski, C., & Grack-Nelson, A. (2017). Exploring embedded assessment to document scientific inquiry skills within citizen science. In C. Herodotou, M. Sharples, & E. Scanlon (Eds.), Citizen inquiry: A fusion of citizen science and inquiry learning (pp. 63-82). New York, NY: Routledge.

Gauging the Impact of Professional Development Activities on Students

Posted on January 17, 2018

Executive Director of Emerging Technology Grants, Collin College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Many Advanced Technological Education (ATE) grants hold professional development events for faculty. As the lead for several ATE grants, I have been concerned that while data obtained from faculty surveys immediately after these events are useful, they do not gauge the impact of the training on students. The National Convergence Technology Center’s (CTC) approach, described below, uses longitudinal survey data from the faculty attendees to begin to provide evidence on student impact. I believe the approach is applicable to any discipline.

The CTC provides information technology faculty with a free, intensive professional development event titled Working Connections Faculty Development Institute. The institute is held twice per year: five days in the summer and two and a half days in the winter. The summer institute gives faculty members the skills needed to create a new course or make major updates to an existing one; the shorter winter institute provides enough training to update a course. Over the years, more than 1,700 faculty have enrolled in the training. From the beginning, we have gathered attendee feedback via two surveys at the end of each event. One survey focuses on the specific topic track, asking about the extent to which attendees feel the track’s three learning outcomes were mastered, as well as about the instructor’s pacing, classroom management, and so on. The other survey asks about the overall event, including attendees’ reactions to the focused lunch programs and how many new courses have been created or enhanced as a result of past attendance.

The CTC educates faculty members as a vehicle for educating students. To learn how the training impacts students and programs, we also send out longitudinal surveys at 6, 18, 30, 42, and 54 months after each summer Working Connections training. These surveys ask faculty members to report on what they did with what they learned at each training, including how many students they educated as a result of what they learned. Faculty are also asked to report how many certificates and degrees were created or enhanced. Each Working Connections cohort receives a separate survey invitation (i.e., someone who attended two Working Connections will get two separate invitations) that includes a link to the survey as well as a roster to help attendees remember which track they took that year. Participation is voluntary, but over the years, we have consistently and strongly emphasized the importance of getting this longitudinal data so that we can provide some evidence of student impact to the National Science Foundation. Our response rate from surveys sent in January 2016 was 59%.

Responses from surveys from 2008-2016 indicate the following:

  • Number of students taught the subjects from the training: 88,591
  • Number of new/enhanced courses: 241
  • Number of sections taught: 4,899
  • Number of new/enhanced certificates and degrees: 310
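
For projects that keep these follow-up responses in a spreadsheet or database, the sketch below (Python) shows one way such per-respondent records could be rolled up into program-level totals like those above; the field names and sample rows are hypothetical, not the CTC’s actual survey export.

```python
# Hypothetical per-respondent records from the longitudinal follow-up surveys.
survey_responses = [
    {"cohort": 2014, "students_taught": 120, "new_or_enhanced_courses": 1, "sections_taught": 4},
    {"cohort": 2014, "students_taught": 65,  "new_or_enhanced_courses": 0, "sections_taught": 2},
    {"cohort": 2015, "students_taught": 210, "new_or_enhanced_courses": 2, "sections_taught": 7},
]

# Sum each metric across all respondents and cohorts.
metrics = ["students_taught", "new_or_enhanced_courses", "sections_taught"]
totals = {m: sum(r[m] for r in survey_responses) for m in metrics}

for metric, value in totals.items():
    print(f"{metric.replace('_', ' ').capitalize()}: {value:,}")
```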

While these data still do not tell us exactly how students experienced what attendees learned, they provide evidence that is one step closer to student impact than faculty feedback collected immediately after each training. We are considering what else we can do to further unpack the impact on students, but Family Educational Rights and Privacy Act (FERPA) restrictions prevent the CTC from contacting affected students directly without their permission.

Tip: A longitudinal survey effort must be intentional and consistent. It is also extremely important to consistently promote the need for attendees to complete the surveys, both during the professional development events and in emails preceding the annual survey invitations. It is all too easy for attendees to simply delete the longitudinal survey if they do not see the point of filling it out.

Tips and Tricks When Writing Interview Questions

Posted on January 2, 2018

Senior Research Analyst, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Developing a well-constructed interview protocol is by no means an easy task. Fortunately, Michael Patton (2015) dedicates an entire chapter of his book, Qualitative Research & Evaluation Methods, to question formulation. As with any skill, the key to improving your craft is practice. That’s why I wanted to share a few ideas from Patton and contribute some of my own thoughts to help improve how you formulate interview questions.

One approach I find useful is to consider the category of question you are asking. With qualitative research, the categories of questions can sometimes seem infinite. However, Patton provides a few overarching categories that can help frame your thinking, allowing you to ask questions with more precision and intention. Patton (2015, p. 444) suggests general categories and provides a few question examples, which are presented below. So, when trying to formulate a question, consider the type you are interested in asking:

  • Experience and behavior questions: If I had been in the program with you, what would I have seen you doing?
  • Opinion and value questions: What would you like to see happen?
  • Feeling questions: How do you feel about that?
  • Knowledge questions: Who is eligible for this program?
  • Sensory questions: What does the counselor ask you when you meet with her? What does she actually say? (Questions that describe stimuli)
  • Background and demographic questions: How old are you?

Once the category is known and you start writing or editing questions, double-check that you are writing truly open-ended questions and avoiding jargon. For instance, don’t assume that your interviewee knows the acronyms you’re using. As evaluators, we sometimes know the program better than the informants, which makes it all the more important to write questions clearly. Everyone wins when you take the time to be intentional and design clear questions: you get better data, and you won’t confuse your interviewee.

Another important point from Patton is to make sure you are asking a singular question. Think about when you’re conducting quantitative research and writing an item for a questionnaire: a red flag is a double-barreled item (i.e., one that asks more than one question simultaneously). For example, a poorly framed questionnaire item about experiences in a mentorship program might read: To what extent do you agree with the statement, “I enjoyed this program and would do it again.” You simply wouldn’t put that item in a questionnaire, since a person might enjoy the program but not necessarily want to do it again. Although you have more latitude during an interview, it’s always best to write your questions with precision. Interviews also give you a chance to flex your skills in the moment, probing effectively when you need to shift the conversation or dive deeper based on what you hear.

Keep in mind that there is no single right way to formulate interview questions. However, having multiple tools in your tool kit lets you lean on different strategies as appropriate, helping you develop stronger and more rigorous qualitative studies.

 

Reference:

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice. Thousand Oaks, CA: SAGE.

Blog: One Pagers: Simple and Engaging Reporting

Posted on December 20, 2017

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Lyssa Wilson Becho, Research Associate, EvaluATE
Emma Perk, Co-Principal Investigator, EvaluATE

Traditional, long-form reports are often used to detail the depth and specifics of an evaluation. However, many readers simply don’t have the time or bandwidth to digest a 30-page report. Distilling the key information into one page can help catch the eye of busy program staff, college administrators, or policy makers.

When we say “one pager,” we mean a single-page document that summarizes evaluation data, findings, or recommendations. It’s generally a stand-alone document that supplements a longer report, dataset, or presentation.

One pagers are a great way to give your client the information they need to make data-driven decisions. These summaries can work well as companion documents for long reports or as a highlight piece for an interim report. We created a 10-step process to help facilitate the creation of a one pager. Additional materials are available, including detailed slides, grid layouts, videos, and more.

Ten-step process for creating a one pager:

1. Identify the audience

Be specific about who you are talking to and their information priorities. The content and layout of the document should be tailored to meet the needs of this audience.

2. Identify the purpose

Write a purpose statement that identifies why you are creating the one pager. This will help you decide what information to include or to exclude.

3. Prioritize your information

Categorize the information most relevant to your audience. Then rank each category from highest to lowest priority to help inform the layout of the document.

4. Choose a grid

Use a grid to intentionally organize elements visually for readers. Check out our free pre-made grids, which you can use for your own one pagers, and instructions on how to use them in PowerPoint (video).

5. Draft the layout

Print out your grid layout and sketch your design by hand. This will allow you to think creatively without technological barriers and will save you time.

6. Create an intentional visual path

Pay attention to how the reader’s eye moves around the page. Use elements like large numbers, ink density, and icons to guide the reader’s visual path. Keep in mind page symmetry and the need to balance visual density. For more tips, see Canva’s Design Elements and Principles.

7. Create a purposeful hierarchy

Use headings intentionally to help your readers navigate and identify the content.

8. Use white space

The brain subconsciously views content grouped together as a cohesive unit. Add white space to indicate that a new section is starting.

9. Get feedback

Run your designs by a colleague or client to help catch errors, note areas needing clarification, and ensure the document makes sense to others. You will likely need to go through a few rounds of feedback before the document is finalized.

10. Triple-check consistency

Triple-check, and possibly quadruple-check, for consistency of fonts, alignment, size, and colors. Style guides can be a useful way to keep track of consistency in and across documents. Take a look at EvaluATE’s style guide here.

The demand for one pagers is growing, and now you are equipped with the information you need to succeed in creating one. So, start creating your one pagers now!

Vlog: Checklist for Program Evaluation Report Content

Posted on December 6, 2017

Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

This video provides an overview of EvaluATE’s Checklist for Program Evaluation Report Content and offers three reasons why this checklist is useful to evaluators and clients.

Blog: Addressing Challenges in Evaluating ATE Projects Targeting Outcomes for Educators

Posted on November 21, 2017

CEO, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Kirk Knestis, CEO of Hezel Associates and a former career and technology educator and professional development provider, here to share some strategies for addressing challenges unique to evaluating Advanced Technological Education (ATE) projects that target outcomes for teachers and college faculty.

In addition to funding projects that directly train future technicians, the National Science Foundation (NSF) ATE program funds initiatives to improve the abilities of grade 7-12 teachers and college faculty, the expectation being that improving their practice will directly benefit technical education. ATE tracks focusing on professional development (PD), capacity building for faculty, and technological education teacher preparation all rely implicitly on theories of action (typically illustrated by a logic model) that presume outcomes for educators will translate into outcomes for student technicians. This assumption can present challenges to evaluators trying to understand how such efforts are working. Keep a generic logic model of this kind in mind for the discussion that follows.

Setting aside project activities acting directly on students, any strategy aimed at educators (e.g., PD workshops, faculty mentoring, or preservice teacher training) must leave them fully equipped with the dispositions, knowledge, and skills necessary to implement effective instruction with students. Educators must then turn those outcomes into actions to realize similar types of outcomes for their learners. Students’ action outcomes (e.g., entering, persisting in, and completing training programs) depend, in turn, on their having the dispositions, knowledge, and skills educators are charged with furthering. If educators fail to learn what they should, or do not activate those abilities, students are less likely to succeed. So what are the implications, in terms of challenges and possible solutions, for NSF ATE evaluations?

  • EDUCATOR OUTCOMES ARE OFTEN NOT WELL EXPLICATED. Work with program designers to force them to define the new dispositions, understandings, and abilities that technical educators require to be effective. Facilitate discussion about all three outcome categories to lessen the chance of missing something. Press until outcomes are defined in terms of persistent changes educators will take away from project activities, not what they will do during them.
  • EDUCATORS ARE DIFFICULT TO TEST. To truly understand whether an ATE project is making a difference in instruction, it is necessary to assess whether the precursor outcomes for educators are realized. Dispositions (attitudes) are easy to assess with self-report questionnaires, but measuring real knowledge and skills requires proper assessments, ideally performance assessments. Work with project staff to “bake” assessments into project strategies so they are more authentic and less intrusive. Strive for more than self-report measures of increased abilities.
  • INSTRUCTIONAL PRACTICES ARE DIFFICULT AND EXPENSIVE TO ASSESS. The only way to truly evaluate instruction is to see it, assessing pedagogy, content, and quality with rubrics or checklists. Consider replacing expensive on-site visits with the collection of digital videos or real-time, web-based telepresence.

With clear definitions of outcomes and collaboration with ATE project designers, evaluators can assess whether the educators who train technicians are gaining the necessary dispositions, knowledge, and skills, and whether they are putting those abilities into practice with students. Assessing students is the next challenge, but until we can determine whether educator outcomes are being achieved, we cannot honestly say that educator-improvement efforts made any difference.

Blog: Partnering with Clients to Avoid Drive-by Evaluation

Posted on November 14, 2017

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

John Cosgrove, Senior Partner, Cosgrove & Associates
Maggie Cosgrove, Senior Partner, Cosgrove & Associates

If a prospective client says, “We need an evaluation, and we will send you the dataset for evaluation,” our advice is that this type of “drive-by evaluation” may not be in their best interest.

As calls for program accountability and data-driven decision making increase, so does demand for evaluation. Given this context, evaluation services are being offered in a variety of modes. Before choosing an evaluator, we recommend the client pause to consider what they would like to learn about their efforts and how evaluation can add value to such learning. This perspective requires one to move beyond data analysis and reporting of required performance measures to examining what is occurring inside the program.

By engaging our clients in conversations related to what they would like to learn, we are able to begin a collaborative and discovery-oriented evaluation. Our goal is to partner with our clients to identify and understand strengths, challenges, and emerging opportunities related to program/project implementation and outcomes. This process helps clients understand not only which strategies worked but also why they worked, and it lays the foundation for sustainability and scaling.

These initial conversations can be a bit of a dance, as clients often focus on funder-required accountability and performance measures. This is when it is critically important to elucidate the differences between evaluation and auditing or inspecting. Ann-Murray Brown examines this question and provides guidance as to why evaluation is more than just keeping score in Evaluation, Inspection, Audit: Is There a Difference? As we often remind clients, “we are not the evaluation police.”

During our work with clients to clarify logic models, we encourage them to think of their logic model in terms of storytelling. We pose commonsense questions such as: When you implement a certain strategy, what changes do you expect to occur? Why do you think those changes will take place? What do you need to learn to support current and future strategy development?

Once our client has clearly outlined their “story,” we move quickly to connect data collection to client-identified questions and, as soon as possible, we engage stakeholders in interpreting and using their data. We incorporate Veena Pankaj and Ann Emery’s (2016) data placemat process to engage clients in data interpretation.  By working with clients to fully understand their key project questions, focus on what they want to learn, and engage in meaningful data interpretation, we steer clear of the potholes associated with drive-by evaluations.

Pankaj, V., & Emery, A. (2016). Data placemats: A facilitative technique designed to enhance stakeholder understanding of data. In R. S. Fierro, A. Schwartz, & D. H. Smart (Eds.), Evaluation and facilitation. New Directions for Evaluation, 149, 81-93.

Blog: Thinking Critically about Critical Thinking Assessment

Posted on October 31, 2017

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Vera Beletzan, Senior Special Advisor, Essential Skills, Humber College
Paula Gouveia, Dean, School of Liberal Arts and Sciences, Humber College

Humber College, as part of a learning outcomes assessment consortium funded by the Higher Education Quality Council of Ontario (HEQCO), has developed an assessment tool to measure student gains in critical thinking (CT) as expressed through written communication (WC).

In Phase 1 of this project, a cross-disciplinary team of faculty and staff researched and developed a tool to assess students’ CT skills through written coursework. The tool was tested for usability by a variety of faculty and in a variety of learning contexts. Based on this pilot, we revised the tool to focus on two CT dimensions: comprehension and integration of writer’s ideas, within which are six variables: interpretation, analysis, evaluation, inference, explanation, and self-regulation.

In Phase 2, our key questions were:

  1. What is the validity and reliability of the assessment tool?
  2. Where do students experience greater levels of CT skill achievement?
  3. Are students making gains in learning CT skills over time?
  4. What is the usability and scalability of the tool?

To answer the first question, we examined the inter-rater reliability of the tool and compared CTWC assessment scores with students’ final grades. We conducted a cross-sectional analysis comparing diverse CT and WC learning experiences in different contexts: our mandatory semester I and II cross-college writing courses, where CTWC skills are taught explicitly and reinforced as course learning outcomes; vocationally oriented courses in police foundations, where the skills are implicitly embedded as deemed essential by industry; and a critical thinking course in our general arts and sciences programs, where CT is taught as content knowledge.
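
For readers curious how an inter-rater reliability check like the one mentioned above can be computed, here is a minimal sketch (Python, with invented rubric scores); percent agreement and Cohen’s kappa are shown as one common approach, not necessarily the statistics used in the HEQCO study.

```python
from collections import Counter

# Hypothetical rubric scores (1-4) assigned by two raters to the same ten student papers.
rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]

n = len(rater_a)
observed_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, based on each rater's marginal score distribution.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected_agreement = sum(
    (freq_a[s] / n) * (freq_b[s] / n) for s in set(rater_a) | set(rater_b)
)

# Cohen's kappa corrects observed agreement for chance agreement.
kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(f"Percent agreement: {observed_agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```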

We also performed a longitudinal analysis by assessing CTWC gains in a cohort of students across two semesters in their mandatory writing courses.

Overall, our tests showed positive results for reliability and validity. Our cross-sectional analysis showed the greatest CT gains in courses where the skill is explicitly taught. Our longitudinal analysis showed only modest gains, indicating that a two-semester span is insufficient for significant improvement to occur.

In terms of usability, faculty agreed that the revised tool was straightforward and easy to apply. However, there was less agreement on the tool’s meaningfulness to students, indicating that further research needs to include student feedback.

Lessons learned:

  • Build faculty buy-in at the outset and recognize workload issues
  • Ensure project team members are qualified
  • For scalability, align project with other institutional priorities

Recommendations:

  • Teach CT explicitly and consistently, as a skill, and over time
  • Strategically position courses where CT is taught explicitly throughout a program for maximum reinforcement
  • Assess and provide feedback on students’ skills at regular intervals
  • Implement faculty training to build a common understanding of the importance of essential skills and their assessment
  • For the tool to be meaningful, students must understand which skills are being assessed and why

Our project will inform Humber’s new Essential Skills Strategy, which includes the development of an institutional learning outcomes framework and assessment process.

A detailed report, including our assessment tool, will be available through HEQCO in the near future. For further information, please contact the authors at vera.beletzan@humber.ca or paula.gouveia@humber.ca.

Getting Your New ATE Project’s Evaluation off to a Great Start

Posted on October 17, 2017

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

New ATE project principal investigators (PIs): When you worked with your evaluator to develop an evaluation plan for your project proposal, you were probably focused on the big picture—how to gather credible and meaningful evidence about the quality and impact of your work. To ensure your evaluation achieves its aims, take the steps below now to make sure your project provides the human resources, time, and information needed for a successful evaluation:

  1. Schedule regular meetings with your evaluator. Regular meetings help ensure that your project’s evaluation receives adequate attention. These exchanges should be in real time—via phone call, web meetings, or face-to-face—not just email. See EvaluATE’s new Communication Plan Checklist for ATE PIs and Evaluators for a list of other communication issues to discuss with your evaluator at the start of a project.
  2. Work with your evaluator to create a project evaluation calendar. This calendar should span the life of your project and include the following:
  • Due dates for National Science Foundation (NSF) annual reports: You should include your evaluation reports or at least information from the evaluation in these reports. Work backward from their due dates to determine when evaluation reports should be completed. To find out when your annual report is due, go to Research.gov, enter your NSF login information, select “Awards & Reporting,” then “Project Reports.”
  • Advisory committee meeting dates: You may want your evaluator to attend these meetings to learn more about your project and to communicate directly with committee members.
  • Project events: Activities such as workshops and outreach events present valuable opportunities to collect data directly from the individuals involved in the project. Make sure your evaluator is aware of them.
  • Due dates for new proposal submissions: If submitting to NSF again, you will need to include evidence of your current project’s intellectual merit and broader impacts. Working with your evaluator now will ensure you have compelling evidence to support a future submission.
  3. Keep track of what you’re doing and who is involved. Don’t leave these tasks to your evaluator or wait until the last minute. Taking an active—and proactive—role in documenting the project’s work will save you time and result in more accurate information. Your evaluator can then use that information when preparing their reports. Moreover, you will find it immensely useful to have good documentation at your fingertips when preparing your annual NSF report.
  • Maintain a record of project activities and products—such as conference presentations, trainings, outreach events, competitions, publications—as they are completed. Check out EvaluATE’s project vita as an example.
  • Create a participant database (or spreadsheet): Everyone who engages with your project should be listed. Record their contact information, role in the project, and pertinent demographic characteristics (such as whether a student is a first-generation college student, a veteran, or part of a group that has been historically underrepresented in STEM). You will probably find several uses for this database, such as for follow-up with participants for evaluation purposes, for outreach, and as evidence of your project’s broader impacts.
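
As a starting point for the participant database described above, here is a minimal sketch (Python writing a CSV file); the column names and the example row are only illustrative suggestions, not a required format.

```python
import csv

# Suggested columns for a simple participant database; adjust to fit your project.
FIELDS = [
    "name", "email", "role_in_project", "first_generation_student",
    "veteran", "underrepresented_in_stem", "date_first_involved",
]

# Hypothetical example row -- real entries would come from registration forms and rosters.
participants = [
    {
        "name": "Jane Doe",
        "email": "jane.doe@example.edu",
        "role_in_project": "student intern",
        "first_generation_student": "yes",
        "veteran": "no",
        "underrepresented_in_stem": "yes",
        "date_first_involved": "2017-09-05",
    },
]

# Write the roster to a CSV file that can double as evidence of broader impacts.
with open("participants.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(participants)
```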

An ounce of prevention is worth a pound of cure: Investing time up front to make sure your evaluation is on solid footing will save headaches down the road.