From Concept Maps to Assessment Questions

Apr 4, 2013 • Greg Wilson

Meeting of the Software Carpentry Instructors Study Group
2013-04-03
Round 4.1/4.2

Agenda

  • What did you learn about your chosen concept from doing a concept map?
  • What would you do differently having seen other people’s concept maps?
  • Too big, too small, too high-level, too low-level, too mixed-level, …?
  • See also Steve Eddins on testing image processing (to motivate Will Trimble’s concept map on floating point representation; see the short sketch after this agenda)
  • Would you use this:
    • to prepare for teaching
    • in class?
  • How would you tell if someone had understood the concepts?
  • How would you distinguish these three levels of understanding:
    • Novice (doesn’t yet know what they don’t know)
    • Intermediate (can apply rules, doesn’t know when to break them)
    • Expert (knows when and why to break the rules)
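
Since floating point comes up above, here is a minimal sketch (in Python, Software Carpentry’s usual teaching language) of the pitfall behind Steve Eddins’ advice on testing numerical code: exact equality on floats is unreliable, so tests should compare within a tolerance. The helper name and tolerance value are illustrative choices, not anything prescribed at the meeting.

    # Exact equality fails: 0.1, 0.2, and 0.3 have no exact binary representation.
    print(0.1 + 0.2 == 0.3)   # False
    print(0.1 + 0.2)          # 0.30000000000000004

    # Compare within a tolerance instead (tol chosen for illustration).
    def close_enough(a, b, tol=1e-9):
        return abs(a - b) <= tol

    print(close_enough(0.1 + 0.2, 0.3))   # True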

Greg: embedding images in a concept map for image processing

Few used questions/question marks in their concept maps

  • Exceptions: Rich FitzJohn, Patrick March
  • Dunning-Kruger effect: the less people know, the less accurate their estimation of their own knowledge
  • Project planning: let the bubbles explode, then ask “Which subset of about half a dozen bubbles am I going to tackle?”
    • (roughly seven items of information, about the limit of short-term memory)
  • Using concept maps in teaching
    • test your own understanding of a concept while developing a lesson
    • refresh ideas before class
    • assess students’ understanding of a concept
    • provide students with a concept map as a form of “teaching to the test”
  • What is skill level?
    • Novice:
      • doesn’t know what they don’t know (incompetent and ignorant)
      • the less competent you are, the less accurately you estimate your own competence
    • Competent Practitioner:
      • can use it proficiently
      • asks questions about special/edge cases
    • Expert:
      • knows when to break the rules
      • has a more densely connected concept map in their head: fewer jumps between concepts (this has both pros and cons)
      • can handle special cases
      • knows why, not just how
      • an expert’s estimate of how hard a task is tends to be reliable (a novice is just guessing)
  • How do you determine whether someone is a novice or an expert?
    • There is no universal scale of ability (and self-assessment is unreliable, as above)
  • How do you tell if someone simply didn’t get it?
    • a carefully chosen test question? (see the sketch after this list)
    • can they transfer the skill to another context, or is the skill context-dependent?
  • The longer a field has been dealing with big data, the more likely you are to find these skills in it, but the variance within groups is much larger than the differences between group means
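
To make the “transfer to another context” test concrete, here is a hypothetical example (ours, not one discussed at the meeting): a learner who was taught to loop over a list is asked to apply the same pattern to the lines of a file. Re-using the iteration pattern on a new data source suggests they learned the concept, not just the memorized example. The data is invented, and io.StringIO stands in for a real file so the sketch is self-contained.

    import io

    # Taught context: summing numbers in a list.
    values = [1.0, 2.5, 4.0]
    total = 0.0
    for v in values:
        total += v
    print(total)      # 7.5

    # Transfer context: counting non-blank lines, iterating a "file" the same way.
    fake_file = io.StringIO("first\n\nsecond\n")
    count = 0
    for line in fake_file:
        if line.strip():
            count += 1
    print(count)      # 2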

For April 17 meeting:

  • Read Chapter 4 of How Learning Works (“How Do Students Develop Mastery?”)
  • Choose a topic that someone else did a concept map for (either in Round 4.1 or the previous round)
  • By Wednesday April 10, write a blog post (in Round 4.2) with:
    • Two questions that distinguish novice from competent practitioner (correctness of answers should be objective)
    • Two questions that distinguish competent practitioner from expert (correctness of answers may be subjective, i.e., more than one way to do it)
  • By Tuesday April 16, comment on at least two other people’s posts
  • We’ll meet at the same times on April 17
  • Please make sure you have a copy of Facts and Fallacies of Software Engineering by April 17
  • If you are not already on the Software Carpentry ‘discuss’ list, please join