Here’s a question for you. How do you define the successful implementation of curriculum videoconferencing in a building? How do you measure it? What does “successful” curriculum videoconferencing in a building look like?
When we first started using VC, from about 1999-2005 or so, we had access only in the high schools. Elementary and middle school teachers used it occasionally for activities that seemed worth getting a bus to the high school to experience. During this time we counted how many videoconferences a district did each year. Districts that used it for 20-40 curriculum videoconferences a year were our “high use” districts. The others did 2-5 or so per year.
Total Events and Percentage of Teacher Use
Last year was the first year most of our districts had access to VC in all of their buildings. District use shot up to 80-90 curriculum videoconferences in some areas. Some buildings alone were doing 50+ conferences, with individual teachers using it at least twice in the year. So last year, for the first time, we started counting the number of teachers who participated in a videoconference at least once throughout the year. That number, combined with the total number of events for a building, gives a picture not only of how much they use it, but of the breadth of implementation across all the teachers in the building. For the first time, I have building coordinators setting a goal to have EVERY teacher participate in at least one event this year.
Is “More Use” Success?
So here’s the question. If we are only counting how much they use it, does that imply “using it more” is success? Does that imply we believe everyone should use it at least once a year? Do we believe that everyone should use it every year? If this measurement of success isn’t “up to par”, then what would be?
Someone suggested to me last evening that I should be measuring the “effectiveness” of curriculum videoconferencing. I’m not sure about this. What do you think? Can you measure the effectiveness of VC in a school? I know for sure that we can measure the effectiveness of particular programs (ASK, content providers, MysteryQuest, Read Around the Planet, etc.). We do collect evaluation data on these events. But can we measure the effectiveness of VC in a school? What is the desired end result? Can a few 1-2 hour events through the year impact student achievement? Can a few events through the year impact students’ attitudes toward the world around them? Can a few events – or even one event – impact students’ understanding of technology communication? What do you think?
What about measuring the quality of the experiences? Isn’t that based more on what events the teachers choose, the money they have available to spend on content providers, etc.? Would you say connecting to COSI for a live surgery is better quality than Monster Match? Is that a fair comparison? Don’t they both meet curriculum goals in unique ways?
What about the ability of the schools to use videoconferencing on their own? I think that increased use results in building coordinators and teachers being able to connect and participate in videoconferences on their own. (Of course, they call for help when something major is broken.) So is their independence in using and creating videoconferencing experiences a measure of success? Or is that independence actually reflected in how much they are using it? I don’t have any buildings using it at a high rate who aren’t able to connect and create their own VC experiences.
Integration into the Curriculum
Can we measure how well videoconferencing is integrated into the curriculum? Can we measure if the events participated in actually address curriculum goals? Would this be a measure of successful implementation?
What do you think? How do you know if the buildings you service, or even your own building, are successfully using videoconferencing? What is your definition of successful implementation? I really want to know!! So please comment!