Wrapping Up MCQs, Starting on Motivation

Feb 15, 2014 • Greg Wilson

Software Carpentry Instructor Training Round 8.3 (Thursday, February 13, 2014)

We had a good turnout this week, with 31 people taking part in three meetings to discuss multiple-choice questions. The notes are below; a few of the things that came up in discussion were:

  1. MCQs are best used for formative assessment, to tell you (and your learners) what needs to be covered, because they’re quick and scalable. They shouldn’t be used for summative assessment (to measure what has been learned) because they don’t give you insight into thought processes: if someone does all the steps right, but drops a decimal place, all you’ll get is “wrong answer” rather than “right process, a bit sloppy”.
  2. It takes several iterations to create an MCQ that’s unambiguous and has good distractors. If you’re teaching a class several times, you can beta-test MCQs with the current learners for use with the next group (the GRE does this).
  3. A concept inventory uses carefully-validated MCQs to triangulate on learners’ misconceptions (see Allison Tew’s thesis from Georgia Tech, or this paper, for a concept inventory for basic computing concepts). The problem is, they’re a lot of work to develop.
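To make the point about distractors concrete, here is a hypothetical Python MCQ (not one from this round's submissions) where each wrong answer is tied to a specific misconception, so that a learner's wrong choice tells the instructor *which* misunderstanding they hold:

```python
# Hypothetical MCQ stem: what does this program print?
first = [1, 2, 3]
second = first        # aliasing: both names now refer to the same list
second.append(4)
print(first)

# (a) [1, 2, 3]      <- misconception: assignment copies the list
# (b) [1, 2, 3, 4]   <- correct: both names refer to the same object
# (c) [4]            <- misconception: assignment rebinds to a new empty list
# (d) an error       <- misconception: modifying through an alias is illegal
```

Each distractor here was chosen to correspond to one plausible mental model, which is exactly the kind of design that takes several iterations to get right.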

Perhaps the most important discussion point was this: no matter what you do, or how long you spend preparing, it won’t be perfect. We have a wide spread of backgrounds, abilities, and interests in every class we teach, and nothing we can prepare will be exactly right for everyone, so a “New Jersey” solution (90% right, can ship today) isn’t something to be ashamed of: it still helps learners a lot, and the time we would put into polishing is better spent helping people climb the next hill.

For the next meeting:

As you’ll have learned from Chapter 3 of How Learning Works, motivation is often the most important determinant of whether or not someone learns something. It’s easy to think of ways to demotivate people who are learning to program (telling them that it’s easy, or taking the keyboard away from them because it’s easier for you to type in the solution than to watch them struggle, are two ways). Our goal this week is to come up with ways to motivate them instead. Write a blog post detailing something specific you could do at some point during your teaching to get learners excited about the topic you’re about to cover, and make them believe that they can learn it, and want to learn it. Be concrete: “tell them a story about how this tool helped me” isn’t something that someone else could pick up and use.

Separately, please have a look at these three papers from the web site (and any others from our materials page that catch your eye as well):

19:00 Eastern

14:00 Eastern

  • Greg Wilson (Mozilla, Toronto): no MCQ URL
    • Which MCQ (other than your own) did you learn the most from (either about the subject, or about MCQs)?
    • How long did it take you to come up with an MCQ you were happy with?
    • How much insight do you think answers to your MCQ would give you into the minds of your learners?
  • Daniel Chen (Mailman School of Public Health, NYC): http://teaching.software-carpentry.org/2014/02/11/mcq-python-flow-control/
    • The git index MCQ
    • A lot longer than I expected; I confused myself sometimes
    • I think some of my questions go beyond what I was testing (flow control), in that they force people to go line by line through what exactly is going on in each loop (good for debugging)
  • Jeramia Ory (King’s College, Wilkes-Barre, PA): http://teaching.software-carpentry.org/2014/02/11/mcq-and-exercise-find-command/
    • I enjoyed the exercise on string processing; it seemed a good match for the SWC boot camps, although it was difficult to do “on the screen.” It would be fine for a paper test.
    • I’m still not happy with mine, but it took quite a bit of thought to avoid “throw-away” answers
    • A reasonable amount into the minds of novices, very little into those of experts
  • Chris Friedline: http://teaching.software-carpentry.org/2014/02/12/mcq/
    • I liked Matthias’s on display hooks (b/c I’ve had issues sorting this out for myself in the past), as well as the VNC/SSH one, because it was a really tough topic and I would have had a hard time putting that one together myself. The one with comments on supplemental figures was also good.
    • Not long enough: I grossly underestimated the time and felt like I rushed through it. Also, my topic was kind of high-level, and it was hard to narrow the entire map down into a single MCQ that distinguishes novices from competent practitioners
    • Hard to say from this exercise, but well-designed questions should be helpful, especially for teaching subsequent lectures.
  • Jess Hamrick (UC Berkeley): http://teaching.software-carpentry.org/2014/02/12/mcq-making-a-git-commit/
    • I really liked the exercise from Python Equality and Inequality — I didn’t learn anything about the subject material, but I thought the way the exercise was structured worked well
    • I spent a long time thinking about it. Actually writing it… maybe an hour?
    • It’s a little hard to tell, because I think it’s easy to miss something in the question and then give the wrong answer, and it’s hard to identify if someone has given a wrong answer because they misunderstood, or because they genuinely don’t know. The exercise is actually more illuminating, I think, because it’s a bit more freeform but also more applied, so it reveals more about the thought process.
  • Mark Stacy (University of Oklahoma): http://teaching.software-carpentry.org/2014/02/13/mcq-python-file-object/
    • Didn’t look at a lot of the MCQs; I looked at the Aggregation in SQL one. I liked the flow of the course material.
    • It took a while and I’m still not satisfied with the results; I changed the topic three times.
    • Insight: it provides a way to see if you are actually teaching the material.
  • Atul Varma: http://teaching.software-carpentry.org/2014/02/13/mcq-relative-paths-and-urls/
    • The second MCQ on variable scope in Python made me think about how the Python interpreter deals with variable scope in a way I hadn’t thought about before.
    • I think it took me maybe 15-30 minutes? It was made easier partly because of past attempts at teaching the material, and seeing a student’s actual mistakes (which I then supplied as distractors).
    • I think it would give me a decent amount of insight, though depending on the material, I’d like an explanation of why the student arrived at the answer they did. For instance, a wrong answer arrived at through careful yet flawed reasoning is much more useful than a wrong answer arrived at by random selection.
  • Abigail Cabunoc: http://teaching.software-carpentry.org/2014/02/13/mcq-select-statement-sql/
    • Learn: I didn’t look at a ton of them, but I enjoyed Mark Stacy’s Python file object one. I had to open up the Python docs for a bit.
    • How long: Not very long, maybe 20 minutes. But I taught this yesterday.
    • Insight: One commenter said this seemed more like an exercise in logic than a test of knowledge. I think SQL is close enough to English that it’s not hard to figure out what a command does. It may be able to separate complete novices, but a lot is guessable if you have some programming knowledge, not necessarily SQL.
  • Jason Orendorff (Mozilla, Nashville): no MCQ URL due to delinquency
    • Which MCQ did you learn from?: I think that’s the wrong question; we are not the target audience for these questions ;-)
    • Looking at others’ work: Something I noticed is that some answers are tricky and some questions are tricky. Is it bad to use a tricky question?
    • How long: It took me 20-30 minutes to come up with a decent MCQ. Coming up with a pre-test MCQ was extremely hard. The hardest part was finding something simultaneously simple enough and incisive enough.
    • Insight: Not so much. At most two bits per student, right? And they may be random… But hopefully their own reaction will give them some insight into their own mental state.
  • Likit Preeyanon: http://teaching.software-carpentry.org/2014/02/06/mcqs-python-text-processing/
    • I liked the MCQ about Aggregation in SQL
    • It took me about 2 hours
    • I designed my MCQ so that learners would be able to apply what they learn to real problems, as well as be aware of gotchas that may trip them up. So I think they will gain good insight into text processing using Python
  • Isabel Fenton: http://teaching.software-carpentry.org/2014/02/13/mcq-python-while-loops/
    • I learned most from the other question about Python loops, as it allowed me to see how someone else approached the same problem
    • It took me about an hour to work out the MCQs and the exercise, though writing the short practical exercise was the most difficult part and took the most time
    • I think mine would identify people who had no knowledge of loops (provided they didn’t just put the code into Python and check), but I’m not sure it tests a particularly deep level of understanding. It’s also hard to be sure of the reasoning behind wrong answers
  • Stéfan van der Walt: http://teaching.software-carpentry.org/2014/02/13/mcq-numpy-broadcasting/
    • I liked how Jess “doubled up” the answers to her questions—I wonder how long that takes for a student to parse!
    • Time: quite a while of background thinking, 5 minutes to type up
    • Hard to know what I wanted to teach the students (e.g., I first chose list comprehensions, found that was almost purely syntax, and eventually chose a more difficult topic just so I would be able to ask more in-depth questions, i.e. it gave me a wider range of questions to choose from)
    • Still not sure what distinguishes a good question from a bad one—what type of question teaches me most as an instructor
  • Matthias Bussonnier: http://teaching.software-carpentry.org/2014/02/12/mcq-ipython-explicit-display-vs-displayhook-on-output/
    • How much did I learn: honestly I didn’t have much time to look at many MCQs, but I saw a lot of subjects I wouldn’t even have thought to teach, and they are things I take for granted.
    • It is difficult to make the questions that distinguish beginners from advanced learners really different from the post-class exam questions.
    • I found that some other MCQs had “subjective” answers.
    • It took me around 30 minutes to get things right, and I had to iterate between the before MCQ and the after MCQ so they didn’t overlap too much.
    • Insight: I’m not sure I’ll be able to tell whether students just know what they should do by heart, or really understand the underlying concept.

10:00 Eastern

  • Greg Wilson (Mozilla, Toronto): no MCQ
    • Which MCQ (other than your own) did you learn the most from, and why?
    • How long did it take you to come up with an MCQ you were happy with?
    • How much insight do you think the mix of right/wrong answers to your MCQ would give you into the minds of your learners?
  • Gabriel Devenyi (McMaster, Hamilton): http://teaching.software-carpentry.org/2014/02/13/mcq-stdout-stderr-and-redirects/
    • I’m not sure I can say I learned anything from the questions directly (this was mostly because the questions were presented with very little context): most questions were either too hard or too vague
    • Preparation of the question took about 30 minutes, mostly because I needed to make sure I understood the topic properly first
    • Unfortunately very little, since my topic was so small that I think it wasn’t possible to have any conceptual difficulty (10 minutes of teaching is very little time)
  • JC Leyder (ESA, Spain): http://teaching.software-carpentry.org/2014/02/03/mcqs-the-role-of-the-index-in-git/
    • I learned the most from the “Objects in JavaScript”, I found the MCQs and exercises well designed.
    • It took me a few minutes to pick the topic, then several hours (spread over a few days) to get an MCQ I liked.
    • I think my MCQs would allow me to see if the learners are just applying commands, or if they truly understand the meaning of each step they take.
  • Brian Miles (UNC-CH): http://teaching.software-carpentry.org/2014/02/11/mcq-exercise-regular-expressions/
    • Unit testing, I learned that I haven’t read/thought deeply about unit tests in a while
    • 15-20 minutes
    • If the alternatives are well crafted, I think they can be good diagnostics.
  • Stephen Turner (UVA): http://teaching.software-carpentry.org/2014/02/06/mcq-matrix-manipulation-in-r/
    • Robert’s Aggregation in SQL. It’s been a while since I used SQL, and as far as MCQs go, the challenge there was remembering the order of WHERE versus aggregate operations in GROUP BY, etc.
    • About a half hour writing questions, checking them, and thinking about the level of expertise of my audience.
    • I guess I didn’t consider this very much. With my question, knowing what wrong choice the student gave wouldn’t have really given me much insight into what the student (mis)understood.
  • Alexis Pyrkosz (MSU): http://teaching.software-carpentry.org/2014/02/12/round-8-2-mcq-python-loops/
    • Most of them seemed too hard for a standard workshop
    • Too long: easier to test small chunks/implementation than to test larger concepts directly
    • The answers were cut and dried, but were designed to see if the students were paying attention
  • Patrick Marsh: http://teaching.software-carpentry.org/2014/02/12/mcq-thoughts/
    • I learned from just about all of them actually; not necessarily from the content of the question, but about the design of the questions and the assessment between the different categories
    • I’m still not happy with my question, so 2+ weeks?
    • I actually think you learn a lot, and part of my frustration was coming up with questions in which you can craft good distractors. I ended up questioning my own knowledge more than I probably should have.
  • Anne Moroney: http://teaching.software-carpentry.org/2014/02/12/mcq-pretestpost-test-for-github-contribution-workflow-using-a-patch-branch/
    • Learned most from: the Objective-C testing one, because I am trying to learn ObjC and love TDD, so it was perfect for trying my hand.
    • The MCQ questions were not that hard (maybe an hour plus a write-up), but the exercise took me another 3 hours, I think.
    • I do think the wrong answers will help show which words confuse people.
  • Benjamin Bradshaw: http://teaching.software-carpentry.org/2014/02/12/python-equality-vs-identity/
    • The “here’s some output, fill in the blanks in the input” idea, which I saw in the SQL aggregation one, struck me as a good approach for the exercise
    • maybe about 30 minutes, mostly brainstorming ideas for the concept to teach
    • A little. They were designed to tell me in what way people might be going wrong, but that wouldn’t tell me necessarily which incorrect concepts were leading them down that wrong path.
  • Rob Beagrie: