In the past month, I’ve been thinking and reading a lot about evaluation, particularly in light of my videoconferencing program at Berrien RESA. Evaluation is more than just counting how many videoconferences occurred (which is my usual pattern); it means determining whether the program is effective. To do that, we need some expected results, a way to measure them, and a way to find out whether the program actually produced them.
What Are the Expected Results?
One of the books I read on evaluation (McNeil et al., 2005) suggested that to evaluate a program, you should first consider what you expect the results of the program to be. So, what do you expect as the results of your curriculum videoconferencing program? What benefits do we expect for the students involved? We know that teachers see a benefit to their students, but what is that benefit, specifically?
- Motivation (for what? Learning?)
- Achievement? (in specific content areas?)
- Expanded learning opportunities
- Interactivity?
- Cross-cultural exposure?
- Increased communication skills?
How Can We Measure Those Results?
Next, we consider how we might measure those results. This is where I get seriously confused and unsure. Most of my teachers do one or two videoconferences a year. The highest is 12 videoconferences, and that was a preschool teacher with an a.m. and a p.m. class, so that was really just six videoconferences per class. Can six videoconferences in a school year make any measurable change in students? Can one videoconference make a measurable change beyond an anecdote?
- If we expect increased motivation, is that measurable after just one videoconference? Two?
- If we are measuring achievement, how would it be measured across my 70 schools with VC carts, which use VC in a myriad of ways across the curriculum?
- If we are measuring expanded learning opportunities, maybe we just count how many they did and leave it at that?
- Do we determine the effectiveness of our videoconferences by how interactive they are? Do students learn more from a one-on-one vs. a view-only session? Maybe. But sometimes the teacher needs to see a view-only session before they will attempt a live interactive one.
- If we are measuring cross-cultural exposure, does a Michigan-Texas videoconference count? (That’s for you, Rox!) What about connections to zoos & museums?
- How would we measure students’ increased communication skills? Teacher perceptions? Student surveys? (A rough tallying sketch follows this list.)
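For the simplest of the options above (counting connections as expanded learning opportunities, and tallying a student survey), the bookkeeping part is easy enough. Here is a rough sketch in Python, using made-up records and hypothetical field names like building and survey_score, just to make the idea concrete. Nothing here comes from my actual log; the hard part is deciding what the survey should ask.

```python
# Rough sketch: tally videoconference participation per building and
# average a hypothetical student communication-skills survey item.
# The records and field names below are made up for illustration only.
from collections import Counter
from statistics import mean

vc_log = [
    {"building": "Elementary A", "survey_score": 4},
    {"building": "Elementary A", "survey_score": 5},
    {"building": "Middle School B", "survey_score": 3},
]

# Participation: how many videoconferences each building did
counts = Counter(rec["building"] for rec in vc_log)

# A very crude outcome measure: average survey response per building
scores = {}
for rec in vc_log:
    scores.setdefault(rec["building"], []).append(rec["survey_score"])

for building, n in counts.items():
    print(f"{building}: {n} VCs, avg survey score {mean(scores[building]):.1f}")
```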
Measure Effectiveness by Comparing Results
The next step would be to compare results from before the program (baseline data) with results after the program. McNeil et al. suggest using last year’s data as the baseline for this year.
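If we ever did settle on a measure, the comparison step itself is just arithmetic. Here is a rough sketch, with completely made-up numbers and a hypothetical average survey score, of what comparing this year to last year’s baseline might look like:

```python
# Rough sketch: compare this year's measures to last year's baseline
# (per the suggestion of using last year's data as the baseline).
# Both the numbers and the measures themselves are hypothetical.
baseline = {"avg_survey_score": 3.4, "vc_count": 180}   # last year
current = {"avg_survey_score": 3.9, "vc_count": 215}    # this year

for measure, before in baseline.items():
    after = current[measure]
    change = after - before
    pct = change / before * 100
    print(f"{measure}: {before} -> {after} ({change:+.1f}, {pct:+.1f}%)")
```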
But to do this, we first have to resolve what the benefit is and whether we can measure it.
I’m not asking these questions because I don’t think there is a benefit. I do!! But can we get clearer and more articulate about the benefits of using curriculum videoconferencing? Can we improve our annual evaluations beyond just counting participation? Is it possible?
Reflection
The problem with asking these questions is that it starts to make you think that videoconferencing may not be worth doing. But are the things worth doing only those we can measure? Certainly the current NCLB climate leans in that direction.
How did schools justify in-person field trips? Should there be hard scientifically based research (SBR) data to prove that field trips to a zoo or museum are worth doing before we actually do them? Similarly, do short-term learning experiences like videoconferences need hard data to show they’re worth doing?
I don’t know the answers to these questions. But I sure want to talk to you all about them!! Please comment and share your thoughts/reactions to these questions even if you don’t have an answer either! Help me think about this!!
References
McNeil, K. A., Newman, I., & Steinhauser, J. (2005). How to be involved in program evaluation: What every administrator needs to know. Lanham, MD: Scarecrow Education.