
Webinar: Evaluation: A Key Ingredient for a Successful ATE Proposal

Posted on August 21, 2013 in Webinars

Presenter(s): Connie Della-Piana, Krystin Martens, Lori Wingate, Rachael Bower
Date(s): August 21, 2013
Time: 1:00 p.m. EDT
Recording: http://youtu.be/CK-nM-CEr6I

Evaluation is more than a requirement for ATE proposals; it’s an essential ingredient for increasing the coherence and competitiveness of your submission. Developing your proposal with an evaluative perspective can help you avoid common proposal pitfalls, such as writing goals that are either too lofty or too simplistic, or failing to demonstrate a logical relationship between your activities and your intended outcomes. In this webinar, we’ll share our recipe for a strong ATE proposal that includes all the necessary and important evaluative ingredients. Veteran ATE and NSF personnel will provide additional insights on how to enhance your proposal.

Resources:
Slide PDF
Evaluation Planning Checklist for NSF-ATE Proposals

Newsletter: Evaluator Qualifications: Which is more important: evaluation or subject-matter expertise?

Posted on July 1, 2013 in Newsletter

When shopping for an external evaluator, a common question is whether it is better to hire someone who really understands your project’s content area or someone with technical expertise in evaluation, research, and measurement.

In an ideal world, an evaluator or evaluation team would have high levels of both types of expertise. Since most evaluators hail from a non-evaluation content area, it may be possible for your project to find someone who is both an expert in your content area and a skilled evaluator. But such combinations are relatively rare. So, back to the original question: If you do have to choose between the two types of expertise, which way should you lean? My answer is evaluation expertise.

Most professional evaluators have experience working in many different content areas. For example, here at The Evaluation Center at Western Michigan University, where EvaluATE is based, in the past year alone we have evaluated the proposal review process for the Swiss National Science Foundation, a project focused on performative design, a graduate-level optical science program, institutional reform at a small university, a community literacy initiative, substance abuse treatment, a career and technical education research center, a preschool music education program, and a set of International Labour Organisation evaluations—among other things. Situational Practice, defined as “attending to the unique interests, issues, and contextual circumstances in which evaluation skills are being applied,” is a core competency for professional evaluators (Canadian Evaluation Society, 2010). Regardless of prior expertise, any evaluator should take the time to learn about the content and context of your project, asking questions and doing background research to fill in any significant knowledge gaps. Additionally, if your project has an internal evaluation component, you most likely already have subject-matter expertise embedded in your evaluation activities.

While it might feel more natural to lean toward someone with a high level of subject-matter expertise who “speaks your language,” hiring an evaluator for subject-matter knowledge rather than evaluation expertise carries a risk: without a strong foundation in education and social science research methods, such an evaluator may rely on opinion and personal experience to formulate judgments about a project rather than on systematic inquiry and interpretation. If you find that the perspectives and guidance of subject-matter experts bring value to your work, a better role for them might be that of advisor.

Regardless of the type of evaluator you select, a key to a successful evaluation is regular and open communication between project stakeholders and the evaluation team. Your project is unique, so even if your evaluator has been involved with similar efforts, he or she will still benefit from learning about your specific context.

For more information about evaluator competencies see http://bit.ly/10v3dc3.

Webinar: The Nuts and Bolts of ATE Evaluation Reporting

Posted on May 15, 2013 in Webinars

Presenter(s): Jason Burkhardt, Krystin Martens, Lori Wingate, MATE – Marine Advanced Technology Education Center, Michael Lesiecki
Date(s): May 15, 2013
Time: 1:00 p.m. EDT
Recording: https://vimeo.com/66343717

In this webinar, we will give practical advice about evaluation reporting in the ATE context, including report content and structure, integrating evaluation report content into annual reports to NSF, and using results. We will provide step-by-step guidance for developing an ATE evaluation report that balances the competing demands that reports be both comprehensive and concise. We’ll discuss the how, where, and what of including evaluation results in NSF annual reports and project outcome reports. Finally, we’ll address how to use evaluation results to inform project-level improvements and build the case for further funding. Participants will leave the webinar with a clear strategy for creating effective ATE evaluation reports that meet NSF accountability requirements and support project-level improvement. *This webinar was inspired, in part, by the work of Jane Davidson, author of Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (Sage, 2005).

Resources:
Slide PDF
Handout PDF
Example highlights report: MATE Program Highlights

Newsletter: Evaluation Use

Posted on April 1, 2013 in Newsletter

All evaluators want their evaluations to be useful and used. Evaluation clients need evaluations to bring value to their work in order to make the investment worthwhile. What does evaluation use look like in your context? It should be more than accountability reporting. Here are common types of evaluation use as defined in the evaluation literature:

Instrumental Use is using evaluation for decision-making purposes. These decisions are most commonly focused on improvement, such as changing marketing strategies or modifying curriculum. Or, they can be more summative in nature, such as deciding to continue, expand, or reinvent a project.

Process Use happens when involvement in an evaluation leads to learning or different ways of thinking or working.

Conceptual Use is evaluation use for knowledge. For example, a college dean might use an evaluation of her academic programs to further understand an issue related to another aspect of STEM education. This evaluation influences her thinking, but does not trigger any specific action.

Symbolic Use is the use of evaluation findings to advance an existing agenda. Using evaluation results to market an ATE program or to support an application for further funding are examples.

Webinar: Developing Questions for Effective Surveys

Posted on January 16, 2013 in Webinars

Presenter(s): Krystin Martens, Lori Wingate
Date(s): January 16, 2013
Recording: http://youtu.be/j2ePUZwkUIc

The use of surveys for data collection is ubiquitous in evaluation, so writing good survey questions is an essential skill for any evaluator. In this webinar, we’ll cover the key DOs and DON’Ts of writing survey items. Crafting good questions is an art as well as a science and requires careful attention to context, including respondents, evaluation purposes, and intended use of results. We’ll dissect examples of good and bad question phrasing and response options, explore the implications of various question-and-answer formats for data analysis, and offer strategies to ensure that surveys will yield meaningful and useful data for your evaluation.

Resources:
Slide PDF
Handout PDF

Webinar: ATE Evaluation: Measuring Reaction, Learning, Behavior, and Results

Posted on November 28, 2012 in Webinars

Presenter(s): Jo Ann Balsamo, Krystin Martens, Lori Wingate
Date(s): November 28, 2012
Recording: https://vimeo.com/55389625

In this webinar, participants will learn about evaluating ATE initiatives in practical yet meaningful ways. The Kirkpatrick “Levels” model for evaluation is a systematic approach for assessing a project’s quality and effectiveness in terms of participants’ satisfaction, learning of the material, application of new skills or content, and resulting impacts. Participants will learn what questions should drive data collection at each level, what steps to take to obtain data of sufficient quantity and quality, and how to interpret and use the evaluation results. Kevin Cooper, PI of the Regional Center for Nuclear Energy Education and Training at Indian River State College, will share proven strategies for tracking students’ employment outcomes.

Resources:
Slide PDF
Handout PDF