Instructor Training

Key Points

  • This workshop will cover general teaching pedagogy and how it applies specifically to Software and Data Carpentry.

  • Trainee motivation and prior knowledge vary widely, but can be explored with a quick multiple choice quiz.

Novices and Formative Assessment
  • Novices: don’t know what they don’t know.

  • Competent practitioners: have a usable mental model that’s good enough for everyday purposes.

  • Experts: can handle edge cases.

  • The goal when teaching novices is to help them construct a usable mental model.

  • To do this, we must clear up their misconceptions.

  • Summative assessment: done at the end of teaching to see whether learning took place.

  • Formative assessment: done during teaching to guide learning.

  • Can use multiple choice questions (MCQs) as formative assessments to diagnose misconceptions.

  • This training aims to strengthen participants’ teaching skills and to connect them with each other.

  • Educational psychology: the study of how people learn.

  • Instructional design: the engineering of lessons.

  • Pedagogical content knowledge: connects general understanding of teaching to domain-specific content.
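
One way to make a diagnostic MCQ concrete is a code-reading question where each wrong answer maps to a specific misconception. The snippet below is a hypothetical example (not taken from the Carpentries lessons); the question text and distractor labels are invented for illustration:

```python
# A diagnostic multiple choice question for a novice Python lesson.
# Learners predict the output; each distractor maps to one misconception.
question = """
values = [1, 2, 3]
copy = values
copy.append(4)
print(values)

What does this program print?
"""

choices = {
    "a": ("[1, 2, 3]", "misconception: assignment copies the list"),
    "b": ("[1, 2, 3, 4]", "correct: both names refer to the same list"),
    "c": ("[4]", "misconception: assignment creates a new, empty list"),
    "d": ("an error", "misconception: a list cannot be changed through an alias"),
}

# Sanity-check that the answer marked 'correct' really is correct.
values = [1, 2, 3]
copy = values
copy.append(4)
assert str(values) == choices["b"][0]
```

An answer of “a”, “c”, or “d” tells the instructor exactly which mental model to repair, which is what makes the question diagnostic rather than merely a quiz.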

Teaching as a Performance Art
  • Great teachers are made, not born.

  • Formal written descriptions of teaching practices are ineffective.

  • Lesson study (‘jugyokenkyu’) is essential to transferring skills between teachers.

  • Feedback is most effective when those involved share ground rules and expectations.

Morning Wrap-Up
  • Have learners write minute cards as exit tickets to get actionable feedback.

Expertise and Memory
  • Experts’ mental models are much more densely connected than those of non-experts.

  • Expert blind spot: knowing something so well that it seems easy when it’s not.

  • Can represent mental models using concept maps.

  • Relationships are as important as concepts.

  • Long-term memory is large but slow, while short-term is fast but (very) small.

  • Most adults can store 7±2 items in short-term memory for a few seconds before loss.

  • Things seen together repeatedly are remembered (or mis-remembered) in chunks.

  • Teaching consists of loading short-term memory and reinforcing it long enough for items to be transferred to long-term memory.

  • Lesson episodes should not overload short-term memory.

Performance Revisited
  • Practice makes perfect.

Cognitive Load
  • Self-directed (inquiry-based) learning is less effective than guided instruction.

  • Cognitive load theory predicts that focusing on one aspect at a time improves learning.

  • Use faded examples to focus attention when learning.
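
A faded example gives learners a worked solution followed by near-complete code with a few blanks, so they practise one step at a time instead of writing from scratch. A minimal sketch (the function names and task are invented for illustration):

```python
# Step 1: the instructor shows a complete worked example.
def sum_squares(values):
    """Sum the squares of a list of numbers."""
    total = 0
    for v in values:
        total += v ** 2
    return total

# Step 2: learners get the same pattern with blanks (____) to fill in,
# so their attention stays on the loop-and-accumulate idea:
#
# def sum_cubes(values):
#     total = 0
#     for v in values:
#         total += ____      # learners fill in: v ** 3
#     return ____            # learners fill in: total

assert sum_squares([1, 2, 3]) == 14
```

Each successive exercise fades a little more of the scaffold, until learners are writing the whole function themselves.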

Afternoon Wrap-Up
  • Use ‘one up, one down’ to get wide-ranging feedback.

Live Coding
  • Watching instructors write software is more informative and more compelling than being presented with the finished product.

  • Live coding allows instructors to follow learners.

  • The mistakes are the pedagogy.

Carpentry Teaching Practices
  • Live coding is a more effective way to teach programming than slides or whiteboarding.

  • Making and correcting mistakes in front of learners is good teaching practice.

  • Try to segment learners by prior knowledge.

  • Ask more advanced learners to help colleagues during lessons.

  • Use sticky notes as status indicators.

  • Collaborative note-taking improves learning outcomes.

  • Pair programming aids learning, but have everyone pair so no-one feels singled out.

Motivation and Demotivation
  • People learn best when they are intrinsically motivated.

  • The two biggest demotivators are indifference and unfairness.

  • Teach what’s most immediately useful first in order to gain learners’ trust.

  • Be careful not to remind learners of negative stereotypes when teaching.

  • We’re all faking it.

  • Don’t teach or learn alone.

  • Belief that competence comes with practice improves learning outcomes.

  • Measures taken to improve accessibility aid everyone.

  • Measures taken to make learning more inclusive aid everyone.

Morning Wrap-Up
  • Use sticky notes for collecting end-of-class feedback.

Lessons and Objectives
  • Write learner profiles to clarify audience for a lesson.

  • Communicate lesson goals by writing specific, verifiable learning objectives.

  • Bloom’s Taxonomy classifies levels of understanding.

  • Use reverse instructional design to create lessons: pick the concepts, write the summative assessment, write formative assessments, then plan the teaching.

  • Software Carpentry’s lessons cover the Unix shell, version control, programming, SQL, and Make.

  • Data Carpentry’s lessons cover data cleanup, management, analysis, and visualization in a variety of fields.

The Carpentries
  • Software Carpentry was founded in 1998 to teach scientists how to program better.

  • Data Carpentry was founded in 2014 to teach researchers how to handle data.

  • Their materials are all openly licensed, but their names and logos are trademarked.

  • They share teaching methods and a common instructor pool.

  • The workshop operations guide summarizes what they have learned about organizing and delivering training.

Afternoon Wrap-Up
  • Final steps to qualify are to make a contribution, take part in a discussion, and do a teaching demo.

Top Ten

  1. Be kind: all else is details.
  2. Never teach alone.
  3. No lesson survives first contact with learners.
  4. Nobody will be more excited about the lesson than you are.
  5. Every lesson is too short from the teacher’s point of view and too long from the learner’s.
  6. Never hesitate to sacrifice truth for clarity.
  7. Every mistake is a lesson.
  8. “I learned this a long time ago” is not the same as “this is easy”.
  9. You learn with your learners.
  10. You can’t help everyone, but you can always help someone.

A Few Other Things

  1. Everyone in our community is required to abide by our Code of Conduct, both at workshops and online, to ensure that everyone else feels welcome.
  2. You teach our material, not your own, and you need to work through the materials before teaching to verify your own understanding and figure out where people might have trouble.
  3. We organize some workshops, but we expect people to organize workshops locally as well. You can charge people to attend, and we will charge you when we help organize.
  4. We expect you to teach at least once within a year of certifying in exchange for this training.
  5. Expect a broad range of expertise and experience, and be prepared to adapt your teaching to accommodate beginners or those who struggle.
  6. We use live coding instead of slides: instructors work through the lesson material, typing in the code or instructions, while the learners follow along.
  7. Use sticky notes for real-time feedback and minute cards or “one up, one down” at lunch and at the end of the day in order to find out how the class is going while there’s still time to fix things.
  8. The “I don’t know what I’m doing” feeling never goes away. You just learn the “but I can figure it out” part. – Sciencegurl

A note on #2: some instructors start improvising after they’ve taught the core lessons as-is a few times, but you should know what you’re improvising around—remember, our materials have been used hundreds of times, and probably address problems you don’t yet know will arise.

What Kinds of Practice and Feedback Enhance Learning?

  1. Mismatched expectations can be difficult to diagnose and waste much time.
  2. In the absence of structure, learners tend to glide along more comfortable paths (e.g., making slides prettier rather than more content-rich).
  3. Deliberate practice without effective feedback can instill new bad habits that go unnoticed.
  4. Learning and performance are best fostered when students engage in practice that:
    1. focuses on a specific goal or criterion for performance,
    2. targets an appropriate level of challenge relative to students’ current performance, and
    3. is of sufficient quantity and frequency to meet the performance criteria.
  5. Articulate goals in measurable ways:
    1. Use metrics that relate both to the objective and to the learner’s likely performance.
    2. Include higher-level goals.
  6. Concurrent learning can work, but often not at the novice skill level.
  7. While quality of practice matters, time on task is also important.
  8. Practice tends to be most effective at improving skills in the “competent” range. (Novices grapple with known knowns, competent practitioners with known unknowns, and experts with unknown unknowns.)
  9. Instructors should point out progress as it is made so that students recognize their accomplishment and discern the change in their behavior, especially when gradual.
  10. Grades and scores provide some information on the degree to which students’ performance has met the criteria, but they do not explain which aspects did or did not meet the criteria and how, so more specific feedback is necessary.

How Does Students’ Prior Knowledge Affect Their Learning?

  1. Learners come with past experiences and models of knowledge. If we can activate that prior knowledge and correctly link it to what we are trying to teach, the effect will be increased retention and a greater ability to apply what we are teaching to novel problems.
  2. If the learner’s past knowledge is not activated, we lose this integration of knowledge and the amplifying effect of their past experience and their declarative and procedural knowledge.
  3. If the learner comes with incorrect information or misunderstands how the new knowledge relates to their past experience, their learning can be hindered until the misconception is addressed.
  4. The nature of misconceptions is that the learner will not realize they have them. Specifically, if asked they may well report that they understand the situation.
  5. Testing knowledge (e.g., doing diagnostic assessment with well-designed multiple choice questions) will reveal the misconceptions and form a basis for correcting them.
  6. Well-crafted challenges will give the learner information about their understanding. Faded examples support the student, provide a “win” at the start, and reveal a gap in understanding when the learner can no longer complete the challenge.
  7. Successfully completing a challenge while still holding a misconception about the subject of the challenge is a very bad thing, because it increases the learners’ confidence in their incorrect model.
  8. Using examples that involve universal activities rather than domain-specific or highly technical examples will maximize the number of correct connections that form the basis of transferring knowledge. It is a delicate balance: if the problem is too simple, students may dismiss it as unimportant and switch off to conserve energy because they believe they already understand it. Some humour or an interesting story will keep engagement while speaking directly to most people’s experiences.
  9. Analogies are useful in connecting past understanding to a current problem, but be explicit about how it applies to the situation because the learner may not understand where the analogy breaks down or stops being applicable.

Why Do Student Development and Course Climate Matter for Student Learning?

  1. Make uncertainty safe: support students who are uncomfortable with ambiguity (e.g., a problem may have several valid solutions).
  2. Resist a single right answer: acknowledge that we are teaching them one way to do things, but there are many tools that could be used to do the same thing (version control, visualization, etc.).
  3. Examine your assumptions about students: don’t expect people to be able (or unable) to do a specific task based on their race, gender, age, or experience.
  4. Reduce anonymity: try to remember names (or encourage learners to wear their badges), provide opportunities for learners to interact during breaks, and remember to interact yourself rather than reading emails during breaks and appearing aloof.
  5. Establish and reinforce ground rules for interaction: refer to the code of conduct and have a plan of action for when the code of conduct is breached.
  6. Use the syllabus and first day of class to establish the course climate: sticking to the published agenda and course timings is one of the things that we get the most consistent positive comment about. Tell learners what they can expect and keep to what you told them. If things have to change, inform them promptly. Once people start to feel like they don’t know what is going on, it’s hard for them to focus on learning new skills.
  7. Set up processes to get feedback: collect sticky notes before lunch and before the end of day for two-day workshops, and develop another strategy for workshops that run over multiple half days or other formats. Go through the feedback immediately and act upon the suggestions that can be dealt with immediately. A responsive instructor gains the trust of learners and makes them feel important and heard.
  8. Model inclusive language, behaviour, and attitudes: as the instructor, try to address all learners equally rather than only talking to the ones who follow along nicely or demand more of your attention.
  9. Be mindful of low-ability cues: (an example from How Learning Works is, “I’ll be happy to help you with this because I know girls have trouble with math.”) Think about how you address your students and what stereotypes you are reinforcing unintentionally (e.g., with jokes).
  10. Address tensions early: sometimes workshops have one or two very experienced computational people in a room otherwise filled with novices. Keep them engaged by asking them to act as helpers rather than learners; they might not like it if they have paid to attend a course where they hoped to learn something new, so find ways to acknowledge their help as well.

How do Students Become Self-Directed Learners?

  1. Learners can become self-directed when they can assess the demands of a task, evaluate their own knowledge and skills, plan an approach, monitor their own progress, and adjust strategy. Emphasizing these steps can help learners on their path to achieving competence.
  2. Learners come to us with a variety of preconceptions about how to learn, and about whether they are “good” at “computers”, “programming”, etc. We may need to address unfounded preconceptions.
  3. Learners can easily ignore instructors/instructions on how to proceed with an exercise (e.g. “Did they even read the assignment?”). As you assign challenges, remind learners about the point/purpose of the exercises and get their feedback to confirm they understand the goals. Be more explicit than you may think necessary.
  4. Learners can be (and usually are) poor judges of their knowledge and skills.
  5. Novices spend little time in planning approaches and more time trying to find solutions. Emphasize planning as a first-line strategy to problem solving.
  6. Learners will typically continue with strategies that work moderately well rather than change to a new strategy that would work better.
  7. Asking for peer assessment can be a positive and productive learning experience when everyone is given criteria to give feedback on.
  8. Help learners set realistic expectations. Learners should be able to develop a sense of how long it may take to develop particular skills.
  9. Discuss metacognition in the classroom. The evidence-based teaching style of Software Carpentry is not proprietary or hidden, so share what you know about learning in the classroom and why you are taking certain approaches.
  10. Provide heuristics for self-correction. Learners need to develop a skill for evaluating their own work. While this skill will take time to develop, you can provide ‘guideposts’ for what their code and results should look like.
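
One way to provide such guideposts is to hand out a small self-check alongside an exercise, so learners can evaluate their own result before asking for help. The sketch below is hypothetical: the exercise, field names, and plausible-year range are invented for illustration:

```python
# Hypothetical self-check handed out with a data-cleaning exercise.
# Learners run it against their own cleaned records to judge progress.
def check_cleaned(records):
    """Return a list of human-readable problems (empty list = looks good)."""
    problems = []
    if not records:
        problems.append("no records survived cleaning; did you filter too much?")
    if any(r.get("year") is None for r in records):
        problems.append("some records are missing a 'year' field")
    if any(not (1900 <= r["year"] <= 2025) for r in records if r.get("year")):
        problems.append("some years fall outside the plausible 1900-2025 range")
    return problems

# A learner's cleaned data, passing all checks:
cleaned = [{"year": 1998, "site": "A"}, {"year": 2014, "site": "B"}]
print(check_cleaned(cleaned))  # → []
```

Because the check reports problems in plain language rather than pass/fail, it doubles as the kind of specific feedback that grades and scores alone do not provide.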

Motivational Strategies


Susan Ambrose et al: How Learning Works: Seven Research-Based Principles for Smart Teaching.
An excellent overview of what we know about education and why we believe it’s true, covering everything from cognitive psychology to social factors.
Stephen D. Brookfield and Stephen Preskill: The Discussion Book.
Describes fifty different ways to get groups talking productively.
Elizabeth Green: Building a Better Teacher.
A well-written look at why educational reforms in the past 50 years have mostly missed the mark, and what we should be doing instead.
Mark Guzdial: Learner-Centered Design of Computing Education: Research on Computing for Everyone.
A well-researched investigation of what it means to design computing courses for everyone, not just people who are going to become professional programmers, from one of the leading researchers in CS education.
Doug Lemov: Teach Like a Champion 2.0.
Presents 62 classroom techniques drawn from intensive study of thousands of hours of video of good teachers in action.
Therese Huston: Teaching What You Don’t Know.
A pointed, funny, and very useful book that explores exactly what the title suggests.
James Lang: Small Teaching.
A short guide to evidence-based teaching practices that can be adopted without requiring large up-front investments of time and money.
Jane Margolis and Allan Fisher: Unlocking the Clubhouse: Women in Computing.
A groundbreaking report on the gender imbalance in computing, and the steps Carnegie-Mellon took to address the problem.
Claude M. Steele: Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do.
Explains and explores stereotype threat and strategies for addressing it.


Baume: “Writing and Using Good Learning Outcomes”.
A detailed guide to constructing good learning outcomes.
Borrego and Henderson: “Increasing the Use of Evidence-Based Teaching in STEM Higher Education: A Comparison of Eight Change Strategies”.
Describes eight approaches to effecting change in STEM education that form a useful framework for thinking about how Software Carpentry and Data Carpentry can change the world.
Brown and Altadmri: “Investigating Novice Programming Mistakes: Educator Beliefs vs Student Data”.
Compares teachers’ opinions about common programming errors with data from over 100,000 students, and finds only weak consensus amongst teachers and between teachers and data.
Carroll, Smith-Kerker, Ford, and Mazur-Rimetz: “The Minimal Manual”. Human–Computer Interaction, 3(2):123–153, 1987.
Outlines an approach to documentation and instruction in which each lesson is one page long and describes how to accomplish one concrete task. Its focus on immediate application, error recognition and recovery, and reference use after training makes it an interesting model for Software and Data Carpentry.
Crouch and Mazur: “Peer Instruction: Ten Years of Experience and Results”.
An early report on peer instruction and its effects in the classroom.
Deans for Impact: “The Science of Learning”.
Summarizes cognitive science research related to how students learn, and connects it to practical implications for teaching and learning.
Guzdial: “Exploring Hypotheses about Media Computation”.
A look back on 10 years of media computation research.
De Bruyckere et al: “Urban Myths About Learning and Education”.
A one-page summary drawn from their book of the same name.
Gormally et al: “Feedback about Teaching in Higher Ed: Neglected Opportunities to Promote Change”.
Summarizes best practices for providing instructional feedback and recommends specific strategies for sharing instructional expertise.
Guzdial: “Why Programming is Hard to Teach”.
A chapter from Making Software that explores why programming seems so much harder to teach than some other standard subjects.
Kirschner et al: “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching”.
Argues that inquiry-based learning is less effective for novices than guided instruction.
Lee: “What can I do today to create a more inclusive community in CS?”.
A brief, practical guide on exactly that with references to the research literature.
Mayer and Moreno: “Nine Ways to Reduce Cognitive Load in Multimedia Learning”.
Shows how research into how we absorb and process information can be applied to the design of instructional materials.
Porter et al: “Success in Introductory Programming: What Works?”.
Summarizes the evidence that three techniques—peer instruction, media computation, and pair programming—can significantly improve outcomes in introductory programming courses.
Wiggins and McTighe: “UbD in a Nutshell”.
A four-page summary of the authors’ take on reverse instructional design.
Wilson et al: “Good Enough Practices in Scientific Computing”.
Describes and justifies a minimal set of computing practices that every researcher could and should adopt.
Wilson et al: “Best Practices for Scientific Computing”.
Describes and justifies the practices that mature scientific software developers ought to use.
Wilson: “Software Carpentry: Lessons Learned”.
Summarizes what we’ve learned in 17 years of running classes for scientists.