We EvaluATE - Evaluation Management

Blog: 3 Inconvenient Truths about ATE Evaluation

Posted on October 14, 2016 in Blog

Director of Research, The Evaluation Center at Western Michigan University

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Many evaluations fall short of their potential to provide useful, timely, and accurate feedback to projects because project leaders or evaluators (or both) have unrealistic expectations. In this blog, I expose three inconvenient truths about ATE evaluation. Dealing with these truths head-on will help project leaders avoid delays and misunderstandings.

1. Your evaluator does not have all the answers.

Even for highly experienced evaluators, every evaluation is new and has to be tailored to the project’s particular context. Do not expect your evaluator to produce an ideal evaluation plan on Day 1, be able to pull the perfect data collection instrument off his or her shelf, or know just the right strings to pull to get data from your institutional research office. Your evaluator is an expert on evaluation, not your project or your institution.

As an evaluator, when I ask clients for input on an aspect of their evaluation, the last thing I want to hear is “Whatever you think, you’re the expert.” Work with your evaluator to refine your evaluation plan to ensure it fits your project, your environment, and your information needs. Question elements that don’t seem right to you and provide constructive feedback. The Principal Investigator’s Guide: Managing Evaluation in Informal STEM Education Projects (Chapter 4) has detailed information about how project leaders can bring their expertise to the evaluation process.

2. There is no one right answer to the question, “What does NSF want from evaluation?”

This is the question I get the most as the director of the evaluation support center for the National Science Foundation’s Advanced Technological Education (ATE) program. The truth is, NSF is not prescriptive about what an ATE evaluation should look like, and different program officers have different expectations. So, if you’ve been looking for the final word on what NSF wants from an ATE evaluation, you can end your search because you won’t find it.

However, NSF does request common types of information from all projects via their annual reports and the annual ATE survey. To make sure you are not caught off guard, preview the Research.gov reporting template and the most recent ATE annual survey questions. If you are doing research, get familiar with the Common Guidelines for Education Development and Research.

If you’re still concerned about meeting expectations, talk to your NSF program officer.

3. Project staff need to put in time and effort.

Evaluation matters often get put on a project’s back burner so more urgent issues can be addressed. (Yes, even an evaluation support center is susceptible to no-time-for-evaluation-itis.) But if you put off dealing with evaluation matters until you feel like you have time for them, you will miss key opportunities to collect data and use the information to make improvements to your project.

To make sure your project’s evaluation gets the attention it needs:

  • Set a recurring conference call or meeting with your evaluator—at least once a month.
  • Put evaluation at the top of your project team’s meeting agendas, or hold separate meetings to focus exclusively on evaluation.
  • Assign one person on your project team to be the point person for evaluation.
  • Commit to using your evaluation results in a timely way—if you have a recurring project activity, make sure you gather feedback from those involved and use it to improve the next event.


Blog: Good Communication Is Everything!

Posted on February 3, 2016 in Blog

Evaluator, South Carolina Advanced Technological Education Resource Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

I am new to the field of evaluation, and the most important thing I learned in my first nine months is that effective communication is critical to the success of a project’s evaluation. Whether your interactions are primarily virtual or face-to-face, it is important to know your client’s communication preferences. Knowing the client’s schedule is also important. For example, if you are working with faculty, having a copy of their teaching and office hours schedule for each semester can help.

While having long lead times to get to know the principal investigator and project team is desirable and can promote strong relationship building in advance of implementing evaluation strategies, that isn’t always possible. With my first project, contracts were finalized with the client and evaluators only days before a major project event. There was little time to prepare and no opportunity to get to know the principal investigator or grant team before launching into evaluation activities. In preparation, I had an evaluation plan, a copy of the proposal as submitted, and other project-related documents. Also, I was working with a veteran evaluator who knew the PI and had experience evaluating another project for the client. Nonetheless, there were surprises that caught both the veteran evaluator and me off guard. As the two evaluators worked with the project team to home in on the data needed to make the evaluation stronger, we discovered that the goals, objectives, and some of the activities had been changed during the project’s negotiations with NSF prior to funding. As evaluators, we discovered that we were working from a different playbook than the PI and other team members! The memory of this discovery still sends chills down my back!

A mismatch regarding communication styles and anticipated response times can also get an evaluation off to a rocky start. If not addressed, unmet expectations can lead to disappointment and animosity. In our case, face-to-face interaction was key to keeping the evaluation moving forward. Even when a project is clearly doing exciting and impactful work, it isn’t always possible to collect all of the data called for in the evaluation plan. I’ve learned firsthand that the tug-of-war between an evaluator’s desire and preparation to conduct a rigorous evaluation and the need to be flexible and work within the constraints of a particular situation isn’t always comfortable.

Lessons learned

From this experience, I learned some important points that I think will be helpful to new evaluators.

  • Establishing a trusting relationship can be as important as conducting the evaluation. Find out early if you and the principal investigator are compatible and can work together. The PI and evaluator should get to know each other and establish some common expectations at the earliest possible date.
  • Determine how you will communicate and ensure a common understanding of what constitutes a reasonable response time for emails, telephone calls, or requests for information from either party. Individual priorities differ and thus need to be understood by both parties.
  • Be sure to ask at the outset whether there have been changes to the goals and objectives for the project since the proposal was submitted. Adjust the evaluation plan accordingly.
  • Determine the data that can be and will be collected and who will be responsible for providing what information. In some situations, it helps to secure permission to work directly with an institutional research office or internal evaluator for a project to collect data.
  • When there are differences of opinion or misunderstandings, confront them head on. If the relationship continues to be contentious in any way, changing evaluators may be the best solution.

I hope that some of my comments will help other newcomers to realize that the yellow brick road does have some potential potholes and road closures.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Evaluator’s Perspective

Posted on December 16, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Manu Platt and Ayesha Boyce

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

In this second part of the conversation, a Principal Investigator (client) interviews the independent evaluator to unearth key points within our professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. In this post, the principal investigator (PI)/client interviews the evaluator, and key takeaways are suggested for evaluation clients (see our prior post, in which the tables are turned).

Understanding of Evaluation

PI (Manu): What were your initial thoughts about evaluation before we began working together?

Evaluator (Ayesha): “I thought evaluation was this amazing field where you had the ability to positively impact programs. I assumed that everyone else, including my clients, would believe evaluation was just as exciting and awesome as I did.”

Key takeaway: Many evaluators are passionate about their work and ultimately want to provide valid and useful feedback to clients.

Evaluation Reports

PI: What were your initial thoughts when you submitted the evaluation reports to me and the rest of the leadership team?

Evaluator: “I thought you (stakeholders) were all going to rush to read them. I had spent a lot of time writing them.”

PI: Then you found out I wasn’t reading them.

Evaluator: “Yes! Initially I was frustrated, but I realized that maybe, because you hadn’t been exposed to evaluation, I should set up a meeting to sit down and go over the reports with you. I also decided to write brief evaluation memos that had just the highlights.”

Key takeaway: As a client, you may need to explicitly ask for the type of evaluation reporting that will be useful to you. You may need to let the evaluator know that it is not always feasible for you to read and digest long evaluation reports.

Ah ha moment!

PI: When did you have your “Ah ha! – I know how to make this evaluation useful” moment?

Evaluator: “I had two. The first was when I began to go over the qualitative formative feedback with you. You seemed really excited and interested in the data and recommendations.

“The second was when I began comparing your program to other similar programs I was evaluating. I saw that it was incredibly useful to you to see what their pitfalls and successful strategies were.”

Key takeaway: As a client, you should check in with the evaluator and explicitly state the type of data you find most useful. Don’t assume that the evaluator will know. Additionally, ask whether the evaluator has evaluated similar programs and whether she or he can share some of the strengths and challenges those programs faced.

Blog: Improving Evaluator Communication and PI Evaluation Understanding to Increase Evaluation Use: The Principal Investigator’s Perspective

Posted on December 10, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Ayesha Boyce and Manu Platt

As an evaluator, have you ever spent hours working on an evaluation report only to find that your client skimmed it or didn’t read it?

As a project PI, have you ever glanced at an evaluation report and wished it had been presented in a different format to be more useful?

In this blog post, an independent evaluator and principal investigator (client) interview each other to unearth key points in their professional relationship that led to clarity and increased evaluation use. This is a real conversation that took place between the two of us as we brainstormed ideas to contribute to the EvaluATE blog. We believe these key points (understanding of evaluation, evaluation reporting, and “ah ha” moments) will be useful to other STEM evaluators and clients. Here, the evaluator interviews the client, and key takeaways are suggested for evaluators (watch for our follow-up post, in which the tables are turned).

Understanding of Evaluation

Evaluator (Ayesha): What were your initial thoughts about evaluation before we began working together?
PI (Manu): “Before this I had no idea about evaluation, never thought about it. I had probably been involved in some before as a participant or subject but never really thought about it.”

Key takeaway: Clients come to a project with varying levels of experience with evaluation, which can make it harder for some to appreciate its power at first.

Evaluation Reports

Evaluator: What were your initial thoughts about the evaluation reports provided to you?
PI: “So for the first year, I really didn’t look at them. And then you would ask, ‘Did you read the evaluation report?’ and I responded, ‘Uuuuhhh… no.’”

Key takeaway: Don’t assume that your client is reading your evaluation reports. It might be necessary to check in with them to ensure utilization.

Evaluator: Then I pushed you to read them thoroughly, and what happened?
PI: “Well, I heard the way you put it and thought, ‘Oh, I should probably read it.’ I found out that it was part of your job and not just your Ph.D. project, and it became more important. Then when I read it, it was interesting! Part of the thing I noticed – you know, we’re three institutions partnering – was what people thought about the other institutions. I was hearing from some of the faculty at the other institutions about the program. I love the qualitative data even more nowadays. That’s the part that I care about the most.”

Key takeaway: Check with your client to see what type of data and what structure of reporting they find most useful. Sometimes a final summative report isn’t enough.

Ah ha moment!

Evaluator: When did you have your “Ah ha! – the evaluation is useful” moment?
PI: “I had two. I realized as diversity director that I was the one who was supposed to stand up and comment on evaluation findings to the National Science Foundation representatives during the project’s site visit. I would have to explain the implementation, satisfaction rate, and effectiveness of our program. I would be standing there alone trying to explain why there was unhappiness here, or why the students weren’t going into graduate school at these institutions.

“The second was, as you’ve grown as an evaluator and worked with more and more programs, you would also give us comparisons to other programs. You would say things like, ‘Oh, other similar programs have had these issues and they’ve done these things. I see that they’re different from you in these aspects, but this is something you can consider.’ Really, the formative feedback has been so important.”

Key takeaway: You may need to talk to your client about how they plan to use your evaluation results, especially when it comes to being accountable to the funder. Also, if you evaluate similar programs, it can be valuable to share triumphs and challenges across programs without compromising their confidentiality (share the feedback without naming specific programs).

Blog: The Shared Task of Evaluation

Posted on November 18, 2015 in Blog

Independent Educational Program Evaluator

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Evaluation was an important strand at the recent ATE meeting in Washington, DC. As I reflected on my own practice as an external evaluator and listened to the comments of my peers, I was impressed once again with how dependent evaluation is on a shared effort by project stakeholders. Ironically, the more external an evaluator is to a project, the more important it is to collaborate closely with PIs, program staff, and participating institutions. Many assessment and data collection activities that are technically part of the outside evaluation are logistically and financially dependent on the internal workings of the project.

This has implications for the scope of work for evaluation and for the evaluation budget. A task might appear in the project proposal as, “survey all participants,” and it would likely be part of the evaluator’s scope of work. But in practice, tasks such as deciding what to ask on the survey, reaching the participants, and following up with nonresponders are likely to require work by the PIs or their assistants.

Occasionally you hear certain percentages cited as appropriate levels of effort for evaluation. Whatever overall portion evaluation plays in a project, my approach is to think of that portion as the sum of my efforts and those of my clients. This has several advantages:

  • During planning, it immediately highlights data that might be difficult to collect. It is much easier to come up with a solution or an alternative in advance and avoid a big gap in the evidence record.
  • It makes clear who is responsible for what activities and avoids embarrassing confrontations along the lines of, “I thought you were going to do that.”
  • It keeps innocents on the project and evaluation staffs from being stuck with long (and possibly uncompensated) hours trying to carry out tasks outside their expected job descriptions.
  • It allows for more accurate budgeting. If I know that a particular study involves substantial clerical support for pulling records from school databases, I can reduce my external evaluation fee, while at the same time warning the PI to anticipate those internal evaluation costs (see the rough example after this list).
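To make the shared-budget idea concrete, here is a purely hypothetical illustration (the figures are invented, not drawn from any actual project): suppose evaluation is budgeted at 10 percent of a $600,000 project, or $60,000 of total effort. If roughly a third of that effort is internal (staff time to administer surveys, pull records from institutional databases, and schedule site visits), the external evaluation contract would be about $40,000, and the remaining $20,000 should be planned and budgeted as internal project staff time rather than left unaccounted for.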

The simplest way to assure that these dependencies are identified is to consider them during the initial logic modelling of the project. If an input is professional development, and its output is instructors who use the professional development, and the evidence for the output is use of project resources, who will have to be involved in collecting that evidence? Even if the evaluator proposes to visit every instructor and watch them in practice, it is likely that those visits will have to be coordinated by someone close to the instructional calendar and daily schedule. Specifying and fairly sharing those tasks produces more data, better data, and happier working relationships.

Blog: Checklists for Evaluating K-12 and Credentialing Testing Programs

Posted on October 14, 2015 in Blog
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Audra Kosh
Doctoral Student
Learning Sciences and Psychological Studies
University of North Carolina, Chapel Hill
Gregory Cizek
Professor
Educational Measurement and Evaluation
University of North Carolina, Chapel Hill

In 2012, we created two checklists for evaluators to use as a tool for evaluating K-12 and credentialing assessment programs. The purpose of these checklists is to assist evaluators in thoroughly reviewing testing programs by distilling the best practices for testing outlined in various professional standards and guidelines, including the Standards for Educational and Psychological Testing, the U.S. Department of Education’s Standards and Assessment Peer Review Guidance, the Standards for the Accreditation of Certification Programs, the Code of Fair Testing Practices in Education, and the Rights and Responsibilities of Test Takers.

The checklists were developed to allow evaluation of five aspects of testing: 1) Test Development, 2) Test Administration, 3) Reliability Evidence, 4) Validity Evidence, and 5) Scoring and Reporting. A separate checklist was developed for each area; each of the checklists presents detailed indicators of quality testing programs that evaluators can check off as observed (O), not observed (N), or not applicable (NA) as they conduct evaluations. Three examples of checklist items are included below (one each from the Test Development, Test Administration, and Scoring and Reporting checklists).

The checklists are intended to be used by those wishing to evaluate K-12 or credentialing assessment programs against consensus criteria regarding quality standards for such programs. One of the main sources informing development of the original checklists was the guidance provided in the then-current edition of the Standards for Educational and Psychological Testing (AERA, APA, NCME, 1999). However, much has changed in testing since the publication of the 1999 Standards, and the Standards were revised in 2014 to address emerging methods and concerns related to K-12 and credentialing assessment programs. Consequently, revised checklists have been produced to reflect the new Standards.

The latest edition of the Standards, as compared to the 1999 edition, pays greater attention to testing diverse populations and the role of new technologies in testing. For example, the following three key revisions to the Standards are reflected in the new checklists:

  1. Validity and reliability evidence should be produced and documented for subgroups of test takers. Testing programs should collect validity evidence for various subgroups of test takers from different socioeconomic, linguistic, and cultural backgrounds, as opposed to aggregating validity evidence for an entire sample of test takers. A focus on validity evidence within unique subgroups helps ensure that test interpretations remain valid for all members of the intended testing population.
  2. Tests should be administered in an appropriate language. Given that test takers can come from linguistically diverse backgrounds, evaluators should check that tests are administered in the most appropriate language for the intended population and intended purpose of the test. Interpreters, if used, should be fluent in both the language and content of the test.
  3. Automated scoring methods should be described. Current tests increasingly rely on automated scoring methods to score constructed-response items previously scored by human raters. Testing programs should document how automated scoring algorithms are used and how scores obtained from such algorithms should be interpreted.

Although these three new themes in the Standards illustrate the breadth of coverage of the checklists, they provide only a sample of the changes embodied in the full version of the revised checklists, which contain approximately 100 specific practices that testing programs should follow, distilled from contemporary professional standards for assessment programs. The revised checklists are particularly helpful in that they provide users with a single-source compilation of the most up-to-date and broadly endorsed elements of defensible testing practice. Downloadable copies of the revised checklists for K-12 and credentialing assessment programs can be found at bit.ly/checklist-assessment.

Blog: Five Questions All Evaluators Should Ask Their Clients

Posted on July 8, 2015 in Blog

Senior Research Analyst, Hezel Associates

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

One of the things that I love about program evaluation is the diversity of models and methods you must think about to analyze a program. But even before you develop and solidify your evaluation design, there is a lot of legwork to do up front. In successful evaluations, that legwork starts with asking the right questions. Here are a few questions you can use to get a conversation rolling with your client and gain confidence that your evaluation is moving in the right direction.

1. What do you hope to achieve with this program?

A common challenge for all organizations is goal setting, and in an evaluation setting, having clear and measurable goals is absolutely essential. Too often goals are defined but not actually matched to participant or organizational needs. As evaluators, we should pay close attention to the gap between stated goals and actual needs, because doing so enables us to help clients improve the implementation of their programs and guide them toward their anticipated outcomes.

2. What’s the history of this program?

New program or old, you’re going to need to know the background of the initiative. That will lead you to understand the funding, core stakeholders, requirements, and any other information needed to evaluate the program. You might learn interesting stories about why the program has struggled, which can help you design your evaluation and create research questions. It’s also a great way to get to know a client, learn about their past pain points, and really understand their objectives for the evaluation.

3. What kind of data do you plan on collecting or do you have access to?

Every program evaluator has faced the challenge of getting the data they need to conduct an evaluation. You need to know early on what data the evaluation requires and whether you will have access to them. Don’t wait to have those conversations with your clients. If you put this off until you are ready to run your analyses, it may very well be too late.

4. What challenges do you foresee with program implementation?

Program designs might change as challenges to design and delivery arise. But if you can spot red flags early on, you might be able to help your client navigate implementation challenges and avoid roadblocks. The key is to stay flexible, work with your client to understand and anticipate implementation issues, and address them in advance.

5. What excites you about this program?

This question allows you to get to know the client a bit more, understand their interests, and build a relationship. I love this question because it reinforces the idea of the evaluator as a partner in the program. By acting as a partner, you can provide your clients with the right kind of evaluation and strengthen that relationship along the way.

Program evaluation presents some very challenging and complex questions for evaluators. Starting with these five questions will allow you to focus the evaluation and set your client and the evaluation team up for success.


Blog: Some of My Favorite Tools and Apps

Posted on May 20, 2015 in Blog

Co-Principal Investigator, Op-Tec Center

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The Formative Assessment Systems for ATE project (FAS4ATE) focuses on assessment practices that serve the ongoing evaluation needs of projects and centers. Determining these information needs and organizing data collection activities is a complex and demanding task, and we’ve used logic models as a way to map them out. Over the next five weeks, we offer a series of blog posts that provide examples and suggestions of how you can make formative assessment part of your ATE efforts. – Arlen Gullickson, PI, FAS4ATE

Week 5 – How am I supposed to keep track of all this information?

I’ve been involved in NSF center and project work for over 17 years now. When it comes to keeping track of information from meetings, having a place to store evaluation data, and tracking project progress, there are a few habits and apps I’ve found particularly useful.

Habit: Backing up my files
All the apps I use are cloud-based, so I can access my files anywhere, anytime, with any device. However, I also use Apple’s Time Machine to automatically back up my entire system to an external hard drive every day. I also have three cloud-based storage accounts (Dropbox, Google Drive, and Amazon Cloud Drive). When the FAS4ATE files on Dropbox were accidentally deleted last year, I was able to upload my backup copy, and we recovered everything with relative ease.

Habit + App: Keeping track of notes with Evernote
I’ve been using Evernote since the beta was released in 2008 and absolutely love it. If you’ve been in a meeting with me and you’ve seen me typing away – I’m not emailing or tweeting – I’m taking meeting notes using Evernote. Notes can include anything: text, pictures, web links, voice memos, and more, and you can attach files such as Word documents and spreadsheets. Notes are organized in folders and are archived and searchable from any connected device. There are versions available for all of the popular operating systems and devices, and notes can easily be shared among users. If it’s a weekend and I’m five miles off the coast fishing and you call me about a meeting we had seven years ago, guess what? With a few clicks I can do a search from my phone, find those notes, and send them to you in a matter of seconds. Evernote has both a free, limited version and an inexpensive paid version.

App: LucidChart
When we first started with the FAS4ATE project, we thought we’d be developing our own cloud-based logic model dashboard-type app. We decided to start by looking at what was out there, so we investigated lots of project management apps like Basecamp. We tried to force Evernote into a logic model format; we liked DoView. However, at this time we’ve decided to go with LucidChart. LucidChart is a web-based diagramming app that runs in a browser and allows multiple users to collaborate and work together in real time. The app allows in-editor chat, comments, and video chat. It is fully integrated with Google Drive and Microsoft Office 2013 and right now appears to be our best option for collaborative (evaluator, PI, etc.) logic model work. You may have seen this short video logic model demonstration.

As we further develop our logic model-based dashboard, we’ll be looking for centers and projects to pilot it. If you are interested in learning more about being a pilot site, contact us by emailing Amy Gullickson, one of our co-PIs, at amy.gullickson@unimelb.edu.au. We’d love to work with you!

Blog: Understanding Dosage

Posted on May 6, 2015 in Blog

Director, Centre for Program Evaluation, The Melbourne Graduate School of Education, The University of Melbourne

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The Formative Assessment Systems for ATE project (FAS4ATE) focuses on assessment practices that serve the ongoing evaluation needs of projects and centers. Determining these information needs and organizing data collection activities is a complex and demanding task, and we’ve used logic models as a way to map them out. Over the next five weeks, we offer a series of blog posts that provide examples and suggestions of how you can make formative assessment part of your ATE efforts. – Arlen Gullickson, PI, FAS4ATE

Week 3 – Why am I not seeing the results I expected?

Using a logic model helps you to see the connections between the parts of your project. Once you have a clear connection set up, another critical consideration is dosage. Dosage is how much of an intervention (activities like training, outputs like curriculum, etc.) is delivered to the target audience. Understanding dosage is critical to understanding the size of outcomes you can reasonably expect to see as a result of your efforts. As a program developer, it is essential that you know how much your participants need to engage with your intervention to achieve the desired impact.

Think about dosage in relation to medicine. If you have a mild bacterial infection, a doctor will prescribe a specific antibiotic at a specific dosage. Assuming you take all your pills, you should recover. If you don’t feel better, it may be because the bacteria were stronger than the antibiotic. The doctor will then prescribe a different, probably stronger, antibiotic or dose to ensure you get better. So the dosage is directly related to the desired outcome.

In a program, the same is true: The dose needs to match the desired size of change. Consider the New Media Enabled Technician ATE project, which Joyce Malyn-Smith from EDC discussed in our first FAS4ATE webinar. They wanted to improve what students know and are able to do with social media to market their small businesses (outcome). The EDC team planned to create social media scenarios and grading rubrics for community college faculty to use in their existing classes. Scenarios and rubrics (outputs) were the initial, intended dose.

However, preliminary discussions with potential faculty participants showed the majority of them had limited social media experience. They wouldn’t be able to use the scenarios as a free-standing intervention, because they weren’t familiar with the subject matter. Thus, the dosage would not be enough to get the desired result, because of the characteristics of the intended implementers.


So Joyce’s team changed the dosage by creating scenario-driven class projects with detailed instructional resources for the faculty. This adaptation, suited to their target faculty, enabled them to get closer to the desired level of project outcomes.


So as we develop programs from great ideas, we need to think about dosage. How much of that great idea do we need to convey to ensure we get the outcomes we’re looking for? How much do our participants need in terms of engagement or materials to achieve the desired results? The logic model can direct our formative assessment activities to help us discover places where our dosage is not quite right. Then, we can make a change early in the life of the project, like Joyce’s team did with the community college faculty, to ensure we’ve got the correct amount of intervention needed.

Blog: Finding Opportunity in Unintended Outcomes

Posted on April 15, 2015 in Blog

Research and Evaluation Consultant, Steven Budd Consulting

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Working with underage students carries an increased responsibility for their supervision. Concerns that were never envisioned when the project was designed may arise during the implementation of activities. These unintended consequences may be revealed during an evaluation, presenting an opportunity for PIs and evaluators to both learn and intervene.

One project I’m evaluating includes a website designed for young teens, and features videos from ATETV and other sources. The site encourages our teen viewers to share information about the site with their peers and to explore links to videos hosted on other popular sites like YouTube. The overarching goal is to attract kids to STEM and technician careers by piquing their interest with engaging and accurate science content. What we didn’t anticipate was the volume of links to pseudoscience, science denial, and strong political agendas they would encounter. The question for the PI and Co-PIs became, “How do we engage our young participants in a conversation about good versus not-so-good science and how to think critically about what they see?”

As the internal project evaluator, I first began a conversation with the project PI and senior personnel around the question of responsibility. What is the responsibility of the PIs to engage our underage participants in a conversation about critical thinking and learning, so they can discriminate between questionable and solid content? Such content is readily accessible to young teens as they surf the Web, so a more important question was how the project team might capture this reality and capitalize on it. In this sense, was a teaching moment at hand?

As evaluators on NSF-funded projects, we know that evaluator engagement is critical right from the start. Formative review becomes especially important when even well-designed and well-thought-out activities take unanticipated turns. Our project incorporates a model of internal evaluation, which enables project personnel to gather data and provide real-time assessment of activity outcomes. We then present the data, with comment, to our external evaluator. The evaluation team works with the project leadership to identify concerns as they arise and strategize a response. That response might include refining activities and how they are implemented or creating entirely new activities that address a concern directly.

After thinking it through, the project leadership chose to open a discussion about critical thinking and science content with the project’s teen advisory group. Our response was to:

  • Initiate more frequent “check-ins” with our teen advisers and have more structured conversations around science content and what they think.
  • Sample other teen viewers as they join their peers in the project’s discussion groups and social media postings.
  • Seek to better understand how teens engage with Internet-based content and how they make sense of what they see.
  • Seek new approaches to activities that engage young teens in building their science literacy and critical thinking.

Tips to consider

  • Adjust your evaluation questions to better understand the actual experience of your project’s participants, and then look for the teaching opportunities in response to what you hear.
  • Vigilant evaluation may reveal the first signs of unintended impacts.