When considering online platforms for virtual evaluation, it is important to think about the types of activities you will need to conduct, such as interviews, surveys, collaboration, and planning sessions. Below are some communication platforms for data collection and stakeholder engagement.
Senior Grant Specialist
Northeast Wisconsin Technical College
Northeast Wisconsin Technical College’s (NWTC) Grants Office works closely with its Institutional Research Office to create ad hoc evaluation teams that meet the standards of evidence required in funders’ calls for proposals. Faculty members at two-year colleges often make up the project teams responsible for National Science Foundation (NSF) grant project implementation, but they frequently need assistance navigating terms and concepts traditionally found in scientific research and social science methodology.
Federal funding agencies now require more evaluative rigor in grant proposals than simply documenting deliverables. For example, the NSF’s Scholarships in Science, Technology, Engineering, and Mathematics (S-STEM) program changed dramatically in 2015: the program solicitation increased the allowable non-scholarship budget from 15% of the scholarship amount to 40% of the total project budget, both to expand supports for students and to investigate the effectiveness of those supports.
Technical colleges, in particular, face a unique challenge as solicitations change: these colleges traditionally draw faculty members from business, health, and trades industries. Continuous improvement is a familiar concept to these professionals; however, they tend to have varying levels of expertise in evaluating educational interventions.
The following are a few best practices we have developed for assisting project teams in grant proposal development and project implementation at NWTC.
- Where possible, work with an external evaluator at the planning stage. External evaluators can provide expertise that principal investigators and project teams might lack, as they are well-versed in current evaluation methods, trends, and techniques.
- As they develop their projects, teams should meet with their Institutional Research Office to better understand data-gathering and research capacity. Some data needed for evaluation plans might be readily available, whereas other data might require advance planning to develop a tracking system. Conversations about what the data will be used for and what questions the team wants to answer will help ensure that the correct data can be gathered.
- After a grant is awarded, meet early with all internal and external evaluation parties to clarify data roles and responsibilities. Agreeing on reporting deadlines and identifying who will collect the data and conduct further analysis will help avoid delays.
- Create a “data dictionary” for more complicated projects and variables to ensure that everyone is on the same page about what terms mean. For example, “student persistence” can be defined term-to-term or year-to-year, and all parties need to understand which data will be tracked.
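To make the data dictionary idea concrete, here is a minimal sketch of what such a shared reference might look like if kept in code or a structured file. The field names, definitions, and offices shown are hypothetical illustrations, not NWTC's actual terms or records.

```python
# Minimal sketch of a project data dictionary.
# All entries below are hypothetical examples for illustration only.
data_dictionary = {
    "student_persistence": {
        # The team must agree on ONE definition; term-to-term is chosen here.
        "definition": "Student re-enrolls in the subsequent term (term-to-term).",
        "source": "Institutional Research Office enrollment records",
        "collected_by": "Institutional Research Office",
        "reported": "End of each term",
    },
    "scholarship_recipient": {
        "definition": "Student receiving S-STEM scholarship funds this term.",
        "source": "Grants Office award records",
        "collected_by": "Grants Office",
        "reported": "Start of each term",
    },
}

def describe(term: str) -> str:
    """Return the agreed definition for a term, so all parties use one meaning."""
    entry = data_dictionary[term]
    return f"{term}: {entry['definition']}"

print(describe("student_persistence"))
```

A spreadsheet shared between the project team, the evaluator, and the Institutional Research Office serves the same purpose; the point is that every variable has one agreed definition, source, and owner.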
With some planning and the right working relationships in place, two-year colleges can maintain their federal funding competitiveness even as agencies increase evaluation requirements.
Melanie is CEO of SPEC Associates, a nonprofit program evaluation and process improvement organization headquartered in downtown Detroit. Melanie is also on the faculty of Michigan State University, where she teaches Evaluation Management in the M.A. in Program Evaluation program. Melanie holds a Ph.D. in Applied Social Psychology and has directed evaluations for almost 40 years, both locally and nationally. Her professional passion is making evaluation an engaging, fun learning experience for both program stakeholders and evaluators. To this end, Melanie co-created EvaluationLive!, an evaluation practice model that guides evaluators in ways to breathe life into the evaluation experience.
Why is it that meetings with evaluation stakeholders sometimes generate anxiety and boredom, while other times they generate excitement, a hunger for learning, and, yes, even fun?
My colleague, Mary Williams, and I started exploring this question about eight years ago. Drawing on 60 years of collective evaluation experience, we documented and analyzed cases and conducted an in-depth literature review seeking an answer. We homed in on two things: (1) a definition of what exemplary stakeholder engagement looks and feels like, and (2) a set of factors that seem to predict when maximum stakeholder engagement exists.
To define “exemplary stakeholder engagement” we looked to the field of positive psychology and specifically to Mihaly Csikszentmihalyi’s (2008) Flow Theory. Csikszentmihalyi defines “flow” as that highly focused mental state where time seems to stand still. Think of a musician composing a sonata. Think of a basketball player being in the “zone.” Flow theory says that this feeling of “flow” occurs when the person perceives that the task at hand is challenging and also perceives that she or he has the skill level sufficient to accomplish the task.
The EvaluationLive! model asserts that maximizing stakeholder engagement with an evaluation – having a flow-like experience during encounters between the evaluator and the stakeholders – requires certain characteristics of the evaluator/evaluation team, of the client organization, and of the relationship between them.
- The evaluator/evaluation team must (1) be competent in the conduct of program evaluation; (2) have expertise in the subject matter of the evaluation; (3) have skills in the art of interpersonal, nonverbal, verbal, and written communication; (4) be willing to be flexible in order to meet stakeholders’ needs, typically for delivering results in time for decision making; and (5) approach the work with a non-egotistical learner attitude.
- The client organization must (1) be a learning organization open to hearing good, bad, and ugly news; (2) drive the questions that the evaluation will address; and (3) have a champion positioned within the organization who knows what information the organization needs when, and can put the right information in front of the right people at the right time.
- The relationship between the evaluator and client must be based on (1) trust, (2) a belief that both parties are equally expert in their own arenas, and (3) a sense that the evaluation will require shared responsibility on the part of the evaluator and the client organization.
Feedback from the field shows EvaluationLive!’s goalposts help evaluators develop strategies to emotionally engage clients in their evaluations. EvaluationLive! has been used to diagnose problem situations and to direct “next steps.” Evaluators are also using the model to guide how to develop new client relationships. We invite you to learn and get involved.
Who are the stakeholders in your project evaluation? How should they be engaged in the evaluation? An evaluation stakeholder is anyone who is involved in or affected by a project or its evaluation, from the student experiencing a new curriculum to an NSF program officer monitoring a project’s progress. Engagement can range from serving as an information source for the evaluation to participating in data interpretation and recommendation development. With such broad definitions, it can be difficult to figure out the right mix of whom to involve in evaluation activities and how.
We have created a new resource to support reflection and decision making around this issue. The Identifying Stakeholders and their Role in an Evaluation worksheet presents a series of prompts to help PIs and evaluators move from thinking generically about stakeholder engagement to identifying specific individuals and the type of involvement best suited to them.
Involving stakeholders in a project’s evaluation has many benefits. For example, when stakeholders are engaged in various aspects of an evaluation, it usually increases the evaluation’s relevance and usefulness to the project. When key stakeholders demonstrate support for the evaluation, it may enhance cooperation with data collection. Stakeholders’ knowledge of a project’s context and content typically exceeds that of an external evaluator; that knowledge can be tapped for myriad purposes throughout an evaluation. But stakeholder engagement is not a one-size-fits-all activity. It is neither necessary nor usually feasible to involve all stakeholders to the same degree in an evaluation. Some may only need to be kept abreast of evaluation activities, while others should take a more active role in decision making. The worksheet is intended to help you figure out what stakeholder engagement should look like in your project.
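As a rough illustration of the kind of mapping the worksheet prompts, the sketch below pairs hypothetical stakeholders with differing engagement levels. The names, roles, and three-level engagement scheme are invented for this example and are not taken from the worksheet itself.

```python
# Hypothetical stakeholder register; entries and the inform/consult/collaborate
# scheme are illustrative assumptions, not the worksheet's actual categories.
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    role: str
    engagement: str  # "inform", "consult", or "collaborate"

stakeholders = [
    Stakeholder("Curriculum students", "affected by the project", "inform"),
    Stakeholder("Project PI", "leads implementation", "collaborate"),
    Stakeholder("NSF program officer", "monitors progress", "inform"),
    Stakeholder("Institutional Research Office", "supplies enrollment data", "consult"),
]

def by_engagement(level: str) -> list[str]:
    """List the stakeholders assigned a given engagement level."""
    return [s.name for s in stakeholders if s.engagement == level]

# Stakeholders who only need to be kept abreast of evaluation activities:
print(by_engagement("inform"))
```

Even a simple table like this makes explicit that different stakeholders warrant different degrees of involvement, which is the worksheet's central point.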
Click on the link to download the Identifying Stakeholders and their Role in an Evaluation worksheet.