Testing effectiveness of live tutoring vs. recorded video

Jul 29, 2013 • Luke Lee

For our live-tutoring sessions, Luke chose to teach an introduction to Python’s virtualenv, and Philipp chose to teach a quick introduction to Go (the programming language).

Here’s how we think the success of live tutoring could be measured: First, create a survey of a few possible topics to teach. Then, have a group of students fill it out by rating their familiarity with each topic on a scale of 1-5. You could then pick the topic that the majority of the students are least familiar with and separate the students into two groups. Of course, you’d have to select a diverse set of possible topics to ensure at least one of them is unfamiliar enough that you can split the students into two comparable groups.
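
To make that concrete, here’s a minimal sketch of how the survey analysis and split might work in Python. The survey data format, the alternating split strategy, and all the names here are made up for illustration, not part of any actual plan:

```python
# Hypothetical survey results: student -> {topic: familiarity rating, 1-5}.
survey = {
    "alice": {"virtualenv": 1, "go": 3, "git": 4},
    "bob":   {"virtualenv": 2, "go": 1, "git": 5},
    "carol": {"virtualenv": 1, "go": 2, "git": 3},
    "dave":  {"virtualenv": 2, "go": 2, "git": 4},
}

def least_familiar_topic(survey):
    """Return the topic with the lowest average familiarity rating."""
    topics = next(iter(survey.values()))
    averages = {
        t: sum(ratings[t] for ratings in survey.values()) / len(survey)
        for t in topics
    }
    return min(averages, key=averages.get)

def split_groups(survey, topic):
    """Sort students by familiarity with the topic, then alternate
    assignment so both groups get a similar familiarity spread."""
    students = sorted(survey, key=lambda s: survey[s][topic])
    return students[::2], students[1::2]

topic = least_familiar_topic(survey)
live_group, video_group = split_groups(survey, topic)
print(topic, live_group, video_group)
```

Sorting before alternating is just one cheap way to keep the two groups balanced on prior familiarity; a real study would want proper randomization on top of that.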

Next, create a script/lesson plan to teach this topic, and give half of the students the material in a live-tutoring session and the other half a recorded video version of the lesson. Finally, give the students a short ‘test’ that has a few simple fact-based/memorization multiple-choice questions and one question that requires them to apply their new knowledge to solve a non-trivial problem. You could then use these test results to informally determine which method, if any, was better suited for fact-based information versus the ‘higher-order’ understanding that experts typically have.
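
Here’s a rough sketch of how those results might be tallied and compared. The per-student score format (facts score, application score) is an assumption on our part, and with groups this small any comparison is informal at best:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical results: one (facts_score, application_score) per student.
live_results = [(4, 1), (3, 1), (5, 0)]
video_results = [(5, 0), (4, 1), (4, 0)]

def summarize(results, label):
    """Print average scores for the fact-recall and application parts."""
    facts = [f for f, _ in results]
    application = [a for _, a in results]
    print(f"{label}: facts avg {mean(facts):.2f}, "
          f"application avg {mean(application):.2f}")

summarize(live_results, "live")
summarize(video_results, "video")
```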

We also had another, more intrusive idea: record students’ faces while they watch both types of lessons. You could then analyze this footage to see whether their eyes, facial reactions, etc. give any insight into how they are taking in information, and perhaps even whether they are bored. We believe there was a published paper related to this idea but were unable to find it online.
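
Purely as a starting point, here’s a sketch of what the very first step of that analysis might look like using OpenCV’s stock face detector. This only measures whether a face is visible at all (a crude proxy for facing the screen); real expression or boredom analysis would need far more sophisticated tooling, and the video filename here is hypothetical:

```python
# Requires the opencv-python package; the cascade file ships with it.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

video = cv2.VideoCapture("student_session.mp4")  # hypothetical recording
frames_with_face = 0
total_frames = 0

while True:
    ok, frame = video.read()
    if not ok:
        break  # end of video
    total_frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        frames_with_face += 1

video.release()
print(f"face visible in {frames_with_face}/{total_frames} frames")
```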