Archive: FAS4ATE

Blog: Understanding Dosage

Posted on May 6, 2015 in Blog

Director, Centre for Program Evaluation, The Melbourne Graduate School of Education, The University of Melbourne

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The Formative Assessment Systems for ATE project (FAS4ATE) focuses on assessment practices that serve the ongoing evaluation needs of projects and centers. Determining these information needs and organizing data collection activities is a complex and demanding task, and we’ve used logic models as a way to map them out. Over the next five weeks, we offer a series of blog posts that provide examples and suggestions of how you can make formative assessment part of your ATE efforts. – Arlen Gullickson, PI, FAS4ATE

Week 3 – Why am I not seeing the results I expected?

Using a logic model helps you see the connections between the parts of your project. Once those connections are clear, another critical consideration is dosage: how much of an intervention (activities like training, outputs like curriculum, etc.) is delivered to the target audience. Understanding dosage is critical to understanding the size of the outcomes you can reasonably expect to see as a result of your efforts. As a program developer, you need to know how much your participants must engage with your intervention to achieve the desired impact.

Think about dosage in relation to medicine. If you have a mild bacterial infection, a doctor will prescribe a specific antibiotic at a specific dosage. Assuming you take all your pills, you should recover. If you don’t feel better, it may be because the bacteria were resistant to that antibiotic at that dose, and the doctor will prescribe a different, probably stronger, antibiotic to ensure you get better. The dosage is directly related to the desired outcome.

In a program, the same is true: The dose needs to match the desired size of change. Consider the New Media Enabled Technician ATE project, which Joyce Malyn-Smith from EDC discussed in our first FAS4ATE webinar. They wanted to improve what students know and are able to do with social media to market their small businesses (outcome). The EDC team planned to create social media scenarios and grading rubrics for community college faculty to use in their existing classes. Scenarios and rubrics (outputs) were the initial, intended dose.

However, preliminary discussions with potential faculty participants showed that the majority of them had limited social media experience. They wouldn’t be able to use the scenarios as a free-standing intervention because they weren’t familiar with the subject matter. Thus, the dosage would not be enough to get the desired result, because of the characteristics of the intended implementers.

So Joyce’s team changed the dosage by creating scenario-driven class projects with detailed instructional resources for the faculty. This adaptation, suited to their target faculty, enabled them to get closer to the desired level of project outcomes.

So as we develop programs from great ideas, we need to think about dosage. How much of that great idea do we need to convey to ensure we get the outcomes we’re looking for? How much engagement or material do our participants need to achieve the desired results? The logic model can direct our formative assessment activities to help us discover places where our dosage is not quite right. Then we can make a change early in the life of the project, as Joyce’s team did with the community college faculty, to ensure we deliver the amount of intervention actually needed.

Blog: Get the Right People In!

Posted on April 29, 2015 in Blog

Director, National Convergence Technology Center, Collin College

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

The Formative Assessment Systems for ATE project (FAS4ATE) focuses on assessment practices that serve the ongoing evaluation needs of projects and centers. Determining these information needs and organizing data collection activities is a complex and demanding task, and we’ve used logic models as a way to map them out. Over the next five weeks, we offer a series of blog posts that provide examples and suggestions of how you can make formative assessment part of your ATE efforts. – Arlen Gullickson, PI, FAS4ATE

Week 2 – Why who you invite to your professional development makes a difference for your results.

What would happen if you hosted an event and were careless about the invitation list? You’d probably get plenty of people to come, but they might not be the ones you need to make the event a success, and sheer attendance numbers alone don’t indicate success.

At the National Convergence Technology Center, we offer professional development events called Working Connections. The purpose of these week-long institutes is to prepare community college faculty to teach new IT topics in upcoming semesters.

Who would be the “wrong” people to invite to Working Connections? Anyone BUT community college IT faculty!

When you are working on the input section of your logic model, sometimes you need to look at the outputs and outcomes sections first (i.e., what kinds of outputs and outcomes do you want, and what inputs are needed to achieve them?). In our case, we wanted to show the impact of professional development for IT faculty. For example, did the faculty actually teach the courses they learned about at Working Connections? How did the training change the way they teach? How did Working Connections sessions affect the students these professors taught? Did these new skills improve student learning?

We gather data from attendees at the completion of each Working Connections (overall and topic-track surveys), and then we follow up with longitudinal questions at 6, 18, 30, 42, and 54 months after the event.
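If you track these follow-ups programmatically, the sketch below shows one way to compute the survey dates from an event’s end date. This is a minimal illustration under stated assumptions, not part of the Working Connections toolkit; the example event date and helper names are hypothetical.

    from datetime import date

    # Follow-up points (in months after the event) described above.
    FOLLOW_UP_MONTHS = [6, 18, 30, 42, 54]

    def add_months(d, months):
        """Return the date `months` months after `d`, clamping to the month's last day."""
        month_index = d.month - 1 + months
        year = d.year + month_index // 12
        month = month_index % 12 + 1
        # Days in each month, accounting for leap years in February.
        leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
        days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        return date(year, month, min(d.day, days_in_month[month - 1]))

    def follow_up_schedule(event_end):
        """Dates on which to send each longitudinal follow-up survey."""
        return [add_months(event_end, m) for m in FOLLOW_UP_MONTHS]

    # Hypothetical event end date, for illustration only.
    for due in follow_up_schedule(date(2015, 7, 17)):
        print(due.isoformat())

A schedule like this makes it easy to see, at a glance, when each cohort is due for its next round of longitudinal questions.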

When we first implemented the surveys, we noticed that some participants had not planned to teach the track they studied at Working Connections. We wondered why, looked at a variety of possibilities, and soon discovered that some of our registrants were not community college IT faculty.

We instituted a simple step in the registration process to verify the participant’s job, which consisted of two items on the registration form: (1) Please provide your supervisor’s name, title, phone number, and email; and (2) What IT/convergence classes do you currently teach or supervise? (Working Connections is intended solely for IT/convergence faculty or academic administrators.)

Soon our impact data started trending upward. We also highlighted this requirement in BOLD on our event website: http://summerworkingconnections2014.mobilectc.wikispaces.net/home.

In a practical sense, we also wanted to make sure the money we invested in the event was going toward the right target. Getting the “right” people in the room also meant we were getting the best bang for the buck.

It seems like such a simple thing, but examining who you are involving in your program events can make a big difference in the success of your project.