Blog: Sustaining Career Pathways System Development Efforts

Posted on February 15, 2017
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Debbie Mills
Director
National Career Pathways Network
Steven Klein
Director
RTI International

Career pathways are complex systems that leverage education, workforce development, and social service supports to help people obtain the skills they need to find employment and advance in their careers. Coordinating people, services, and resources across multiple state agencies and training providers can be a complicated, confusing, and at times, frustrating process. Changes to longstanding organizational norms can feel threatening, which may lead some to question or actively resist proposed reforms.

To ensure lasting success, sustainability planning and evaluation should be built into career pathways system development and implementation from the outset, so that new programmatic connections are robust and positioned for longevity.

To support states and local communities in evaluating and planning for sustainability, RTI International created A Tool for Sustaining Career Pathways Efforts.

This innovative paper draws upon change management theory and lessons learned from a multi-year, federally-funded initiative to support five states in integrating career and technical education into their career pathways. Hyperlinks embedded within the paper allow readers to access and download state resources developed to help evaluate and sustain career pathways efforts. A Career Pathways Sustainability Checklist, included at the end of the report, can be used to assess your state’s or local community’s progress toward building a foundation for the long-term success of its career pathways system development efforts.

The paper identifies three factors that contribute to sustainability in career pathways systems:

1. Craft a Compelling Vision and Build Support for Change

Lasting system transformation begins with lowering organizational resistance to change. This requires that stakeholders build consensus around a common vision and set of goals for the change process, establish new management structures to facilitate cross-agency communications, obtain endorsements from high-level leaders willing to champion the initiative, and publicize project work through appropriate communication channels.

2. Engage Partners and Stakeholders in the Change Process

Relationships play a critical role in maintaining systems over time. Sustaining change requires actively engaging a broad range of partners in an ongoing dialogue to share information about project work, progress, and outcomes, making course corrections when needed. Employer involvement also is essential to ensure that education and training services are aligned with labor market demand.

3. Adopt New Behaviors, Practices, and Processes

Once initial objectives are achieved, system designers will want to lock down new processes and connections to prevent systems from reverting to their original form. This can be accomplished by formalizing new partner roles and expectations, creating an infrastructure for ensuring ongoing communication, formulating accountability systems to track systemic outcomes, and securing new long-term resources and making more effective use of existing funding.

For additional information contact the authors:

Steve Klein; sklein@rti.org
Debbie Mills; fdmills1@comcast.net

Blog: Declutter Your Reports: The Checklist for Straightforward Evaluation Reports

Posted on February 1, 2017

Senior Research Associate, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation reports have a reputation for being long, overly complicated, and impractical. The recent buzz about fresh starts and tidying up for the new year got me thinking about the similarities between these infamous evaluation reports and the disastrously cluttered homes featured on reality makeover shows. The towering piles of stuff overflowing from these homes remind me of the technical language and details that clutter up so many evaluation reports. Informational clutter, like physical clutter, can turn reports into difficult-to-navigate obstacle courses and render their contents virtually unusable. If you are looking for ideas on how to organize and declutter your reports, check out the Checklist for Straightforward Evaluation Reports that Lori Wingate and I developed. The checklist provides guidance on how to produce comprehensive evaluation reports that are concise, easy to understand, and easy to navigate. Main features of the checklist include:

  • Quick reference sheet: A one-page summary of content to include in an evaluation report and tips for presenting content in a straightforward manner.
  • Detailed checklist: A list and description of possible content to include in each report section.
  • Straightforward reporting tips: General and section-specific suggestions on how to present content in a straightforward manner.
  • Recommended resources: List of resources that expand on information presented in the checklist.

Evaluators, evaluation clients, and other stakeholders can use the checklist to set reporting expectations, such as what content to include and how to present information.

Straightforward Reporting Tips

Here are some tips, inspired by the checklist, on how to tidy up your reports:

  • Use short sentences: Each sentence should communicate one idea. Sentences should contain no more than 25 words. Downsize your words to only the essentials, just like you might downsize your closet.
  • Use headings: Use concise and descriptive headings and subheadings to clearly label and distinguish report sections. Use report headings, like labels on boxes, to make it easier to locate items in the future.
  • Organize results by evaluation questions: Organize the evaluation results section by evaluation question with separate subheadings for findings and conclusions under each evaluation question. Just like most people don’t put decorations for various holidays in one box, don’t put findings for various evaluation questions in one findings section.
  • Present takeaway messages: Label each figure with a numbered title and a separate takeaway message. Similarly, use callouts to grab readers’ attention and highlight takeaway messages. For example, use a callout in the results section to summarize the conclusion in one sentence under the evaluation question.
  • Minimize report body length: Reduce page length as much as possible without compromising quality. One way to do this is to place details that enhance understanding—but are not critical for basic understanding—in the appendices. Only information that is critical for readers’ understanding of the evaluation process and results should be included in the report body. Think of the appendices like a storage area such as a basement, attic, or shed where you keep items you need but don’t use all the time.

If you’d like to provide feedback, you can write your comments in an email or return a review form to info@evalu-ate.org. We are especially interested in getting feedback from individuals who have used the checklist as they develop evaluation reports.

Blog: Scavenging Evaluation Data

Posted on January 17, 2017

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

But little Mouse, you are not alone,
In proving foresight may be vain:
The best laid schemes of mice and men
Go often askew,
And leave us nothing but grief and pain,
For promised joy!

From To a Mouse, by Robert Burns (1785), modern English version

Research and evaluation textbooks are filled with elegant designs for studies that will illuminate our understanding of social phenomena and programs. But as any evaluator will tell you, the real world is fraught with all manner of hazards and imperfect conditions that wreak havoc on design, bringing grief and pain, rather than the promised joy of a well-executed evaluation.

Probably the biggest hindrance to executing planned designs is that evaluation is just not the most important thing to most people. (GASP!) They are reluctant to give two minutes for a short survey, let alone an hour for a focus group. Your email imploring them to participate in your data collection effort is one of hundreds of requests for their time and attention that they are bombarded with daily.

So, do all the things the textbooks tell you to do. Take the time to develop a sound evaluation design and do your best to follow it. Establish expectations early with project participants and other stakeholders about the importance of their cooperation. Use known best practices to enhance participation and response rates.

In addition: Be a data scavenger. Here are two ways to get data for an evaluation that do not require hunting down project participants and convincing them to give you information.

1. Document what the project is doing.

I have seen a lot of evaluation reports in which evaluators painstakingly recount a project’s activities as a tedious story rather than a straightforward account. This task typically requires the evaluator to ask many questions of project staff, pore through documents, and track down materials. It is much more efficient for project staff to keep a record of their own activities. For example, see EvaluATE’s resume. It is a no-nonsense record of our funding, activities, dissemination, scholarship, personnel, and contributors. In and of itself, our resume does most of the work of the accountability aspect of our evaluation (i.e., Did we do what we promised?). In addition, the resume can be used to address questions like these (a minimal tallying sketch follows the list):

  • Is the project advancing knowledge, as evidenced by peer-reviewed publications and presentations?
  • Is the project’s productivity adequate in relation to its resources (funding and personnel)?
  • To what extent is the project leveraging the expertise of the ATE community?
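
Here is a minimal tallying sketch, assuming project staff log each activity as a row in a simple spreadsheet. The file name, columns, and activity categories are assumptions for illustration, not the actual structure of EvaluATE’s resume.

```python
# A minimal sketch (hypothetical file and column names): tally a project's
# self-documented activities to help answer questions like those above.
import pandas as pd

# Assumes staff log each activity as a row with "year", "type", and "title"
log = pd.read_csv("project_activities.csv")

# Knowledge advancement: peer-reviewed publications and presentations per year
scholarship = log[log["type"].isin(["publication", "presentation"])]
print(scholarship.groupby(["year", "type"]).size())

# Productivity snapshot: total documented activities per year
print(log.groupby("year").size())
```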

2. Track participation.

If your project holds large events, use a sign-in sheet to get attendance numbers. If you hold webinars, you almost certainly have records with information about registrants and attendees. If you hold smaller events, pass around a sign-in sheet asking for basic information like name, institution, email address, and job title (or major if it’s a student group). If the project has developed a course, get enrollment information from the registrar. Most importantly: Don’t put these records in a drawer. Compile them in a spreadsheet and analyze the heck out of them (a minimal analysis sketch follows the list). Here are example data points that we glean from EvaluATE’s participation records:

  • Number of attendees
  • Number of attendees from various types of organizations (such as two- and four-year colleges, nonprofits, government agencies, and international organizations)
  • Number and percentage of attendees who return for subsequent events
  • Geographic distribution of attendees
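
Here is a minimal sketch of that analysis, assuming the sign-in sheets and webinar registrations have been compiled into a single spreadsheet. The file name and columns are assumptions, not EvaluATE’s actual records.

```python
# A minimal sketch, assuming participation records compiled into one CSV with
# hypothetical columns "email", "event", "org_type", and "state".
import pandas as pd

records = pd.read_csv("participation.csv")

# Number of attendees (unique people across all events)
print("Unique attendees:", records["email"].nunique())

# Attendees by organization type (two-year college, nonprofit, government, etc.)
print(records.groupby("org_type")["email"].nunique())

# Number and percentage of attendees who return for subsequent events
events_per_person = records.groupby("email")["event"].nunique()
repeats = int((events_per_person > 1).sum())
print(f"Repeat attendees: {repeats} ({repeats / len(events_per_person):.0%})")

# Geographic distribution of attendees
print(records.groupby("state")["email"].nunique())
```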

Project documentation and participation data will be most helpful for process evaluation and accountability. You will still need cooperation from participants for outcome evaluation—and you should engage them early to garner their interest and support for evaluation efforts. Still, you may be surprised by how much valuable information you can get from these two sources—documentation of activities and participation records—with minimal effort.

Get creative about other data you can scavenge, such as institutional data that colleges already collect; website data, such as Google Analytics; and citation analytics for published articles.

Blog: Research Goes to School (RGS) Model

Posted on January 10, 2017

Project Coordinator, Discovery Learning Research Center, Purdue University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Data regarding pathways to STEM careers indicate that a critical transition point exists between high school and college. Many students who are initially interested in STEM disciplines and could be successful in these fields either do not continue to higher education or choose non-STEM majors in college. In part, these students do not see what role they can have in STEM careers. For this reason, the STEM curriculum needs to reflect its applicability to today’s big challenges and connect students, on a personal level, to the roles they can play in addressing those challenges.

We proposed a project that infused high school STEM curricula with cross-cutting topics related to the hot research areas that scientists are working on today.  We began by focusing on sustainable energy concepts and then shifted to nanoscience and technology.

Pre-service and in-service teachers came to a large Midwestern research university for two weeks of intensive professional development in problem-based learning (PBL) pedagogy.  Along with PBL training, participants also connected with researchers in the grand challenge areas of sustainable energy (in project years 1-3) and nanoscience and technology (years 4-5).

We proposed a two-tiered approach:

1. Develop a model for education consisting of two parts:

  • Initiate a professional development program that engaged pre-service and in-service high school teachers around research activities in grand challenge programs.
  • Support these teachers to transform their curricula and classroom practice by incorporating concepts of the grand challenge programs.

2. Establish a systemic approach for integrating research and education activities.

The results provided a framework for creating professional development with researchers and STEM teachers that culminates in the integration of grand challenge concepts into education curricula.

Using developmental evaluation over a multi-year process, we saw core practices for an effective program begin to emerge:

  • Researchers must identify the basic scientific concepts their work entails. For example, biofuels researchers work with the energy and carbon cycles; nanotechnology researchers must thoroughly understand size-dependent properties, forces, self-assembly, size and scale, and surface area-to-volume ratio.
  • Once identified, these concepts must be mapped to teachers’ state teaching standards and Next Generation Science Standards (NGSS), making them relevant for teachers.
  • Professional development must be planned for researchers to help them share their research at an appropriate level for use by high school teachers in their classrooms.
  • Professional development must be planned for teachers to help them integrate the research content into their teaching and learning standards in meaningful ways.
  • The professional development for teachers must include illustrative activities that demonstrate scientific concepts and be mapped to state and NGSS teaching standards.

The iterative and rapid feedback processes of developmental evaluation allowed the program to evolve. Feedback from data provided the impetus for change, while debriefing sessions provided insight into the program and its core practices. To evaluate the core practices identified in the biofuels topic in years 1-3, we used a dissimilar topic, nanotechnology, in years 4-5. Even with the new topic, we saw greater integration of research and education activities in teachers’ curricula as the core practices became more fully developed through iteration. The core practices remained true regardless of topic, and practitioners became better at delivery with more repetitions in years 4 and 5.

 

Blog: Evaluating Creativity in the Context of STEAM Education

Posted on December 16, 2016
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Shelly Engelman
Senior Researcher
The Findings Group, LLC
Morgan Miller
Research Associate
The Findings Group, LLC

At The Findings Group, we are assessing a National Science Foundation Discovery Research K-12 project that gives students an opportunity to learn about computing in the context of music through EarSketch. As with other STEAM (Science, Technology, Engineering, Arts, Math) approaches, EarSketch aims to motivate and engage students in computing through a creative, cross-disciplinary approach. Our challenge with this project was threefold: 1) defining creativity within the context of STEAM education, 2) measuring creativity, and 3) demonstrating how creativity gives rise to more engagement in computing.

The 4Ps of Creativity

To understand creativity, we first turned to the literature. According to previous research, creativity has been discussed from four perspectives, known as the 4Ps of creativity: Process, Person, Press/Place, and Product. For our study, we focused on creativity from the perspectives of the Person and the Place. Person refers to the traits, tendencies, and characteristics of the individual who creates something or engages in a creative endeavor. Place refers to the environmental factors that encourage creativity.

Measuring Creativity – Person

Building on previous work by Carroll (2009) and colleagues, we developed a self-report Creativity – Person measure that taps into six aspects of personal expressiveness within computing. These aspects include:

  • Expressiveness: Conveying one’s personal view through computing
  • Exploration:  Investigating ideas in computing
  • Immersion/Flow: Feeling absorbed by the computing activity
  • Originality: Generating unique and personally novel ideas in computing

Through a series of pilot tests with high school students, we arrived at a final Creativity – Person scale that consisted of 10 items and yielded excellent reliability (Cronbach’s alpha = .90 to .93); it also correlated positively with other psychosocial measures such as computing confidence, enjoyment, and identity and belongingness.
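
For readers who want to reproduce this kind of reliability estimate, here is a minimal sketch of Cronbach’s alpha. It assumes item responses are stored in a hypothetical CSV with one column per item and one row per student; it is not the authors’ analysis code.

```python
# A minimal sketch of computing Cronbach's alpha for a multi-item scale.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)"""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical file: one column per scale item, one row per student
responses = pd.read_csv("creativity_person_items.csv")
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```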

Measuring Creativity—Place

Assessing creativity at the environmental level proved to be more of a challenge! In building the Creativity – Place scale, we turned our attention to previous work by Shaffer and Resnick (1999) who assert that learning environments or materials that are “thickly authentic”—personally-relevant and situated in the real world—promote engagement in learning. Using this as our operational definition of a creative environment, we designed a self-report scale that taps into four identifiable components of a thickly authentic learning environment:

  • Personal: Learning that is personally meaningful for the learner
  • Real World: Learning that relates to the real-world outside of school
  • Disciplinary: Learning that provides an opportunity to think in the modes of a particular discipline
  • Assessment: Learning where the means of assessment reflect the learning process.

Our Creativity – Place scale consisted of 8 items and also yielded excellent reliability (Cronbach’s alpha=.91).

Predictive Validity

Once we had our two self-report questionnaires in hand—Creativity – Person and Creativity – Place scales—we collected data among high school students who utilized EarSketch as part of their computing course. Our main findings were:

  • Students showed significant increases from pre to post in personal expressiveness in computing (Creativity – Person), and
  • A creative learning environment (Creativity – Place) predicted students’ engagement in computing and intent to persist. That is, through a series of multiple regression analyses, we found that a creative learning environment, fueled by a meaningful and personally relevant curriculum, drives improvements in students’ attitudes and intent to persist in computing (see the sketch below).
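
Here is a minimal sketch of one such regression, using hypothetical variable and file names rather than the authors’ actual models: it regresses post-survey engagement on the Creativity – Place score while controlling for pre-survey engagement.

```python
# A minimal sketch of a regression predicting engagement from the creative
# learning environment score (hypothetical column names, not the study's data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("earsketch_survey.csv")  # hypothetical file of scale scores
model = smf.ols("engagement_post ~ creativity_place + engagement_pre", data=df).fit()
print(model.summary())
```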

Moving forward, we plan on expanding our work by examining other facets of creativity (e.g., Creativity – Product) through the development of creativity rubrics to assess algorithmic music compositions.

References

Carroll, E. A., Latulipe, C., Fung, R., & Terry, M. (2009). Creativity factor evaluation: Towards a standardized survey metric for creativity support. In C&C ’09: Proceedings of the Seventh ACM Conference on Creativity and Cognition (pp. 127-136). New York, NY: Association for Computing Machinery.

Engelman, S., Magerko, M., McKlin, T., Miller, M., Douglas, E., & Freeman, J. (in press). Creativity in authentic STEAM education with EarSketch. In SIGCSE ’17: Proceedings of the 48th ACM Technical Symposium on Computer Science Education. Seattle, WA: Association for Computing Machinery.

Shaffer, D. W., & Resnick, M. (1999). “Thick” authenticity: New media and authentic learning. Journal of Interactive Learning Research, 10(2), 195-215.

Blog: The Value of Using a Psychosocial Framework to Evaluate STEM Outcomes Among Underrepresented Students

Posted on December 1, 2016
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Drs. Dawn X. Henderson, Breonte S. Guy, and Chad Markert serve as Co-Principal Investigators of an HBCU UP Targeted Infusion Project grant. Funded by the National Science Foundation, the project aims to explore how infusing lab-bench techniques into the Exercise Physiology curriculum informs undergraduate students’ attitudes about research and science and intentions to persist in STEM-related careers.

The National Science Foundation aims to fund projects that increase retention and persistence in STEM-related careers. Developing project proposals usually involves creating a logic model and an evaluation plan. An intervention, particularly one designed to change an individual’s behavior and outcomes, relies on a combination of psychological and social factors. For example, increasing the retention and persistence of underrepresented groups in the STEM education-to-workforce pipeline depends on attitudes about science, behavior, and the ability to access resources that provide exploration of and exposure to STEM.

As faculty interested in designing interventions in STEM education, we developed a psychosocial framework to inform project design and evaluation and believe we offer an insightful strategy to investigators and evaluators. When developing a theory of change or logic model, you can create a visual map (see figure below) to identify underlying psychological and social factors and assumptions that influence program outcomes. In this post, we highlight a psychosocial framework for developing theories of change—specifically as it relates to underrepresented groups in STEM.

Visual mapping can outline the relationship between the intervention and psychological (cognitive) and social domains.
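
Such a map can be drawn with diagramming tools (see the Theory Maker resource at the end of this post) or with a few lines of code. The sketch below uses the Python graphviz package with illustrative node labels; it is an illustration only, not the authors’ actual framework figure.

```python
# A minimal sketch of a psychosocial visual map using the graphviz package.
# Node labels are illustrative placeholders.
from graphviz import Digraph

g = Digraph("psychosocial_framework")
g.attr(rankdir="LR")  # left-to-right flow from intervention to outcome

g.node("I", "Intervention: lab-bench techniques + summer research")
g.node("S", "Social factors: mentoring, funding, access to resources")
g.node("P", "Psychological factors: attitudes, self-efficacy, science identity")
g.node("O", "Outcome: persistence in STEM")

g.edge("I", "S")
g.edge("S", "P")
g.edge("P", "O")

print(g.source)  # DOT text; g.render("psychosocial_framework") writes an image file
```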

What do we mean by psychosocial framework?

Both retention and persistence rely on social factors, such as financial resources, mentoring, and other forms of social support. For example, in our work, we proposed introducing underrepresented students to lab-bench techniques in the Exercise Physiology curriculum and providing summer research enrichment opportunities that include funding and mentoring. Providing these social resources introduced students to scientific techniques they would not encounter in a traditional curriculum. Psychological factors, such as individual attitudes about science and self-efficacy, are also key contributors to STEM persistence. For instance, self-efficacy is the belief that one has the capacity to accomplish a specific task and achieve a specific outcome.

A practical exercise in developing the psychosocial framework is asking critical questions:

  • What are some social factors driving a project’s outcomes? For example, you may modify social factors by redesigning curriculum to engage students in hands-on experiences, providing mentoring or improving STEM teaching.
  • How can these social factors influence psychological factors? For example, improving STEM education can change the way students think about STEM. Outcomes then could relate to attitudes towards and beliefs about science.
  • How do psychological factors relate to persistence in STEM? For example, changing the way students think about STEM, their attitudes, and beliefs may shape their science identity and increase their likelihood to persist in STEM education (Guy, 2013).

What is the value-added?

Evaluation plans, specifically those seeking to measure changes in human behavior, hinge on a combination of psychological and social factors. The ways in which individuals think and form attitudes and behaviors, combined with their ability to access resources, influence programmatic outcomes. A psychosocial framework can be used to identify how psychological processes and social assets and resources contribute to increased participation and persistence of underrepresented groups in STEM-related fields and the workforce. More specifically, the recognition of psychological and social factors in shaping science attitudes, behaviors, and intentions to persist in STEM-related fields can generate value in project design and evaluation.

Reference
Guy, B. (2013). Persistence of African American men in science: Exploring the influence of scientist identity, mentoring, and campus climate. (Doctoral dissertation).

Useful Resource

Steve Powell’s AEA365 blog post, Theory Maker: Free web app for drawing theory of change diagrams

Blog: Course Improvement Through Evaluation: Improving Undergraduate STEM Majors’ Capacity for Delivering Inquiry-Based Mathematics and Science Lessons

Posted on November 16, 2016

Associate Professor, Graduate School of Education, University of Massachusetts Lowell

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One of the goals of the University of Massachusetts (UMass) UTeach program is to produce mathematics and science teachers who not only are highly knowledgeable in their disciplines, but also can engage students through inquiry-based instruction. The Research Methods course, one of the core program courses designed to accomplish this goal, is centered on a series of inquiry-based projects.

What We Did

Specifically, the first inquiry was a simple experiment. Students were asked to look around in their kitchens, come up with a research question, and carry out an experiment to investigate the question. The second inquiry required them to conduct a research project in their own disciplinary field. The third inquiry asked students to pretend to be teachers of a middle or high school math/science course who were about to teach their students topics that involve the concept of slope and its applications. This inquiry required students to develop and administer an assessment tool. In addition, they analyzed and interpreted assessment data in order to find out their pretend-students’ prior knowledge and understanding of the concept of slope and its applications in different STEM disciplines (i.e., using assessment information for lesson planning purposes).

Our Goal

We investigated whether the course achieved its goal of enhancing enrollees’ pedagogical skills for delivering inquiry-based instruction on the mathematical and scientific concepts embedded in the inquiry projects.

What We Learned

Examinations of the quality of students’ written inquiry reports showed that students were able to do increasingly difficult work with a higher degree of competency as the course progressed.

Comparisons of students’ responses to pre- and post-course surveys that consisted of questions about a hypothetical experiment indicated that students gained skills at identifying and classifying experimental variables and sources of measurement errors. However, they struggled with articulating research questions and justifying whether a question was researchable. These results were consistent with what we observed in their written reports. As the course progressed, students were more explicit in identifying variables and their relationships and were better at explaining how their research designs addressed possible measurement errors. For most students, however, articulating a researchable question was the most difficult aspect of an inquiry project.
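
As an illustration of this kind of pre/post comparison, here is a minimal sketch that assumes each student’s pre- and post-course responses were scored numerically. The file, columns, and test choice are assumptions; the post does not specify the authors’ analysis.

```python
# A minimal sketch of a paired pre/post comparison on hypothetical scored data.
import pandas as pd
from scipy import stats

scores = pd.read_csv("research_methods_prepost.csv")  # hypothetical columns: "pre", "post"
gain = scores["post"] - scores["pre"]
t, p = stats.ttest_rel(scores["post"], scores["pre"])
print(f"Mean gain: {gain.mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```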

Students’ self-reflections and focus group discussions suggested that our course modeled inquiry-based learning quite well, which was a sharp departure from the step-by-step laboratory activities they were used to as K-12 students. Students also noted that the opportunity to independently conceptualize and carry out an experiment before getting peer and instructor feedback, revising, and producing a final product created a reflective process that they had not experienced in other university course work. Finally, students appreciated the opportunity to articulate the connection between practicing inquiry skills as part of their professional requirements (i.e., as STEM majors) and using inquiry as a pedagogical tool to teach the math and science concepts to middle or high school students. They also noted that knowing how to evaluate their own students’ prior knowledge was an important skill for lesson planning down the road.

Blog: 3 Inconvenient Truths about ATE Evaluation

Posted on October 14, 2016

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Many evaluations fall short of their potential to provide useful, timely, and accurate feedback to projects because project leaders or evaluators (or both) have unrealistic expectations. In this blog, I expose three inconvenient truths about ATE evaluation. Dealing with these truths head-on will help project leaders avoid delays and misunderstandings.

1. Your evaluator does not have all the answers.

Even for highly experienced evaluators, every evaluation is new and has to be tailored to the project’s particular context. Do not expect your evaluator to produce an ideal evaluation plan on Day 1, be able to pull the perfect data collection instrument off his or her shelf, or know just the right strings to pull to get data from your institutional research office. Your evaluator is an expert on evaluation, not your project or your institution.

As an evaluator, when I ask clients for input on an aspect of their evaluation, the last thing I want to hear is “Whatever you think, you’re the expert.” Work with your evaluator to refine your evaluation plan to ensure it fits your project, your environment, and your information needs. Question elements that don’t seem right to you and provide constructive feedback. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects (Chapter 4) has detailed information about how project leaders can bring their expertise to the evaluation process.

2. There is no one right answer to the question, “What does NSF want from evaluation?”

This is the question I get the most as the director of the evaluation support center for the National Science Foundation’s Advanced Technological Education (ATE) program. The truth is, NSF is not prescriptive about what an ATE evaluation should look like, and different program officers have different expectations. So, if you’ve been looking for the final word on what NSF wants from an ATE evaluation, you can end your search because you won’t find it.

However, NSF does request common types of information from all projects via their annual reports and the annual ATE survey. To make sure you are not caught off guard, preview the Research.gov reporting template and the most recent ATE annual survey questions. If you are doing research, get familiar with the Common Guidelines for Education Research and Development.

If you’re still concerned about meeting expectations, talk to your NSF program officer.

3. Project staff need to put in time and effort.

Evaluation matters often get put on a project’s backburner so more urgent issues can be addressed.  (Yes, even an evaluation support center is susceptible to no-time-for-evaluation-itis.) But if you put off dealing with evaluation matters until you feel like you have time for them, you will miss key opportunities to collect data and use the information to make improvements to your project.

To make sure your project’s evaluation gets the attention it needs:

  • Set a recurring conference call or meeting with your evaluator—at least once a month.
  • Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation.
  • Assign one person on your project team to be the point person for evaluation.
  • Commit to using your evaluation results in a timely way—if you have a recurring project activity, make sure you gather feedback from those involved and use it to improve the next event.

 

Blog: Best Practices for Two-Year Colleges to Create Competitive Evaluation Plans

Posted on September 28, 2016
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Kelly Ball
Jeff Grebinoski

Northeast Wisconsin Technical College’s (NWTC) Grants Office works closely with its Institutional Research Office to create ad hoc evaluation teams in order to meet the standards of evidence required in funders’ calls for proposals. Faculty members at two-year colleges often make up the project teams responsible for National Science Foundation (NSF) grant project implementation. However, they often need assistance navigating the terms and concepts traditionally found in scientific research and social science methodology.

Federal funding agencies are now requiring more evaluative rigor in grant proposals than simply documenting deliverables. For example, the NSF’s Scholarships in Science, Technology, Engineering, and Mathematics (S-STEM) program saw dramatic changes in 2015: The program solicitation raised the allowable non-scholarship budget from 15% of the scholarship amount to 40% of the total project budget, both to increase supports for students and to investigate the effectiveness of those supports.

Technical colleges, in particular, face a unique challenge as solicitations change: These colleges traditionally have faculty members from business, health, and trades industries. Continuous improvement is a familiar concept to these professionals; however, they tend to have varying levels of expertise evaluating education interventions.

The following are a few best practices we have developed for assisting project teams in grant proposal development and project implementation at NWTC.

  • Where possible, work with an external evaluator at the planning stage. External evaluators can provide expertise that principal investigators and project teams might lack, as they are well versed in current evaluation methods, trends, and techniques.
  • As they develop their projects, teams should meet with their Institutional Research Office to better understand data gathering and research capacity. Some data needed for evaluation plans might be readily available, whereas other data might require advance planning to develop a tracking system. Conversations about what the data will be used for and what questions the team wants to answer will help ensure that the correct data can be gathered.
  • After a grant is awarded, have a conversation early with all internal and external evaluative parties about clarifying data roles and responsibilities. Agreeing to reporting deadlines and identifying who will collect the data and conduct further analysis will help avoid delays.
  • Create a “data dictionary” for more complicated projects and variables to ensure that everyone is on the same page about what terms mean. For example, “student persistence” can be defined term-to-term or year-to-year, and all parties need to understand which data will be tracked (see the sketch below).
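
As an illustration, here is a minimal sketch of what a data dictionary might look like in code form. The variable names, definitions, and reporting details are hypothetical examples, not NWTC’s actual dictionary.

```python
# A minimal sketch of a shared data dictionary (hypothetical entries):
# one agreed-upon definition per variable so the project team, institutional
# research office, and external evaluator all track the same thing.
data_dictionary = {
    "student_persistence": {
        "definition": "Enrolled in the subsequent fall term (year-to-year)",
        "source": "Institutional Research Office enrollment file",
        "reporting": "Annually, by October 15",
    },
    "credential_completion": {
        "definition": "Earned the targeted credential within 150% of normal time",
        "source": "Registrar degree audit",
        "reporting": "Annually",
    },
}

for variable, entry in data_dictionary.items():
    print(f"{variable}: {entry['definition']} (source: {entry['source']})")
```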

With some planning and the right working relationships in place, two-year colleges can maintain their federal funding competitiveness even as agencies increase evaluation requirements.

Blog: Possible Selves: A Way to Assess Identity and Career Aspirations

Posted on September 14, 2016

Professor of Psychology, Arkansas State University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Children are often asked the question “What do you want to be when you grow up?” Many of us evaluate programs where the developers are hoping that participating in their program will change this answer. In this post, I’d like to suggest using “possible self” measures as a means of evaluating if a program changed attendees’ sense of identity and career aspirations.

What defines the term?

Possible selves are our representations of our future. We all think about what we ideally would like to become (the hoped-for possible self), things that we realistically expect to become (the expected possible self), and things that we are afraid of becoming (the feared-for possible self).[1][2] Possible selves can change many times over the lifespan and thus can be a useful measure to examine participants’ ideas about themselves in the future.

How can it be measured?

There are various ways to measure possible selves. One of the simplest is to use an open-ended measure that asks people to describe what they think will occur in the future. For example, we presented the following (adapted from Oyserman et al., 2006[2]) to youth participants in a science enrichment camp (funded by an NSF-ITEST grant to Arkansas State University):

Probably everyone thinks about what they are going to be like in the future. We usually think about the kinds of things that are going to happen to us and the kinds of people we might become.

  1. Please list some things that you most strongly hope will be true of you in the future.
  2. Please list some things that you think will most likely be true of you in the future.

The measure was used both before and after participating in the program. We purposely did not include a feared-for possible self, given the context of a summer camp.

What is the value-added?

Using this type of open-ended measure allows for participants’ own voices to be heard. Instead of imposing preconceived notions of what participants should “want” to do, it allows participants to tell us what is most important to them. We learned a great deal about participants’ world views, and their answers helped us to fine-tune programs to better serve their needs and to be responsive to our participants. Students’ answers focused on careers, but also included hoped-for personal ideals. For instance, European-American students were significantly more likely to mention school success than African-American students. Conversely, African-American students were significantly more likely to describe hoped-for positive social/emotional futures compared to European-American students. These results allowed program developers to gain a more nuanced understanding of motivations driving participants. Although we regarded the multiple areas of focus as a strength of the measure, evaluators considering using a possible selves measure may also want to include more directed follow-up questions.
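
For evaluators who want to test this kind of group difference in coded open-ended responses, here is a minimal sketch of a chi-square test of independence. The counts are invented for illustration and are not the study’s data.

```python
# A minimal sketch: does mention of a coded theme (e.g., "school success")
# differ between two groups of respondents? Counts below are hypothetical.
from scipy.stats import chi2_contingency

#                     mentioned  not mentioned
contingency_table = [[30, 10],   # hypothetical Group A
                     [15, 25]]   # hypothetical Group B
chi2, p, dof, expected = chi2_contingency(contingency_table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```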

For more information on how to assess possible selves, see Professor Daphna Oyserman’s website.

References

[1] Markus, H. R., & Nurius, P. (1986). Possible selves. American Psychologist, 41, 954–969.

[2] Oyserman, D., Bybee, D., & Terry, K. (2006). Possible selves and academic outcomes: How and when possible selves impel action. Journal of Personality and Social Psychology, 91, 188–204.