Effectiveness measurement—post-lesson assessment.

Jul 31, 2013 • Neal Davis

David Merand and I discussed the following after the lesson. (I didn’t condense it into three paragraphs since I wasn’t certain how much of a consensus we reached—feel free to edit this further if you like, David.)

tl;dr: Some form of post-lesson accountability will be required (i.e., you can’t judge the outcome on the basis of the lesson in isolation). This should be fitted to the strengths of each method in order to give them all a fair shot at success.

David:

As I understand it, the brief is to design a measure to determine which of the 3 instruction methods is most effective…. One would need to set up instructional videos, real-time tutoring sessions, and face-to-face tutoring with 3 different groups/individuals of similar levels of proficiency.

The topic would need to be taught in a similar way, i.e., the same lesson using the same teaching constructs (e.g., examples, techniques) with only a variation in the technology used. Proficiency tests and a short questionnaire could then be used to gauge the effectiveness of the 3 different teaching modes.

Limitations of this approach would include ensuring that the 3 groups are similar, ensuring that the teacher can teach equally effectively with all 3 methods, and the possibility that the topic is better suited to one approach than another.
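
Neither of us wrote down what “gauge the effectiveness” would look like numerically. As a minimal sketch of one possibility (my addition, not David’s), the post-lesson proficiency scores of the three groups could be compared with a one-way ANOVA; the group labels and scores below are entirely invented, and the short questionnaire is left out.

```python
# Sketch (not David's actual design): invented post-lesson proficiency scores
# for three hypothetical groups, one per teaching mode, 11 learners each.
from scipy import stats

scores = {
    "video":        [72, 65, 80, 58, 77, 69, 74, 81, 63, 70, 76],
    "online_tutor": [75, 82, 68, 79, 71, 84, 77, 66, 73, 80, 78],
    "face_to_face": [83, 76, 88, 79, 85, 72, 90, 81, 77, 86, 84],
}

# One-way ANOVA: do mean proficiency scores differ across the three modes?
f_stat, p_value = stats.f_oneway(*scores.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
print("Difference detected at the 5% level." if p_value < 0.05
      else "No detectable difference at this sample size.")
```

Even a clear difference in such a test says nothing about the limitations listed above, so matching the groups (and reading the questionnaires) still matters.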

Neal:

Arguably, face-to-face teaching is superior, since the instructor can immediately adjust to the demonstrated skill level and interests of the pupil. Effective instruction requires feedback (which face-to-face teaching has, in principle) and accountability (which either face-to-face teaching or video lectures may or may not have). Self-assessments of perceived learning are probably unreliable and are not controlled experiments in any case. So you have to clearly outline the desiderata of the lesson and then perform an objective assessment (a quiz) immediately thereafter in each case, with a “reasonable” sample size (n > 10 in each pool).
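
As an aside of my own (not something David raised): whether n > 10 in each pool is actually “reasonable” depends on how large a difference the quiz is expected to reveal. A quick simulation with made-up means and spread is one way to sanity-check that choice before anyone sits the lesson.

```python
# Rough sanity check of whether ~10 learners per pool can detect a plausible
# difference between two lesson formats. The assumed true means (70 vs. 78
# points) and spread (sd = 10) are arbitrary placeholders; swap in numbers
# that match the actual quiz.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_per_pool, trials, alpha = 11, 5000, 0.05

detections = 0
for _ in range(trials):
    video = rng.normal(loc=70, scale=10, size=n_per_pool)
    tutored = rng.normal(loc=78, scale=10, size=n_per_pool)
    # Welch's t-test: no equal-variance assumption between the two pools.
    _, p = stats.ttest_ind(video, tutored, equal_var=False)
    detections += p < alpha

print(f"Estimated power with n = {n_per_pool} per pool: {detections / trials:.0%}")
```

The point is not the particular numbers but that a “reasonable” sample size is a function of the assumed effect size, not of n alone.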

So David and I definitely concur on the use of an objective assessment shortly after the lesson. I’m skeptical about constraining the tutoring sessions to cover the same materials with the same techniques as the pre-recorded video, since that hobbles those techniques precisely where they are strongest. (I understand the scientific rationale, but I think that in this case such rigor will invalidate the final assessment.) It’s not just that the topic might be suited to one approach or another, but that the medium itself is subject to the same constraint: the medium is the message, after all.

It’s a situation analogous to GPU programming. Much of the literature in that field compares an optimized GPU kernel to the naïve CPU algorithm, but it’s more honest to compare a fully-tuned CPU code to a fully-tuned GPU code. So if that holds, then I would argue for comparing the highest-quality video one can make to tutoring performed by a skilled instructor—that way strengths are compared to strengths. Instructional psychology is a soft science because one cannot perform truly controlled experiments—one is forced to use statistics well and to make often-imprecise comparisons (this isn’t a dig at IP, by the by). If we are going to make imprecise comparisons anyway, let’s make them on a playing field that allows each method its unique expression.

That’s all, folks.