Instructor Training

Lessons and Objectives

Overview

Teaching: 30 min
Exercises: 30 min
Questions
  • How can I design more effective lessons?

  • What lessons do Software and Data Carpentry currently contain?

Objectives
  • Describe the four steps in reverse instructional design and explain why following them is an efficient way to create good teaching materials.

  • Follow the steps in the reverse instructional design process to create a short lesson.

  • Analyze a lesson by comparing it to the elements of WHERETO.

  • Describe the characteristics of a good learning objective and correctly state whether a learning objective meets those criteria.

  • Classify the level of a learning objective in terms of Bloom’s taxonomy and similar cognitive hierarchies.

  • Write a learner profile describing a typical member of your intended audience.

  • Summarize the existing Software Carpentry and Data Carpentry lessons.

We have already covered certain elements of lesson design in our previous lessons on educational psychology and how that can inform teaching tools. In this lesson, we will look at writing learning objectives and a repeatable process for lesson design.

Writing Learning Objectives

Summative and formative assessments help instructors figure out what they’re going to teach, but in order to communicate that to learners and other instructors, we should also write learning objectives. It’s easy to come up with fuzzy ones like, “Learners will gain an appreciation of the role of research software engineers in the research process.” Useful ones take a bit more work.

What we want are specific, verifiable descriptions of what learners can do to demonstrate their learning. Each learning objective should have a measurable or verifiable verb specifying what the learner will do, and should specify the criteria for acceptable performance. For example, a better learning objective than the one above would be, “Learners will list three things that make research software engineers distinct from other specialists involved in the research process.”

In order to formulate good learning objectives we need to decide what kinds of learning we are aiming for. There is a difference between knowing the atomic weight of fluorine and understanding what elements it’s likely to bond with and why. Similarly, there’s a difference between being able to figure out why a microscope isn’t focusing properly and being able to design a new microscope that focuses more easily. What we need is a taxonomy of understanding that is hierarchical, measurable, stable, and cross-cultural.

The best-known attempt to build one is Bloom’s taxonomy, which was first published in 1956. More recent efforts are Wiggins and McTighe’s facets of understanding and Fink’s taxonomy from his book Creating Significant Learning Experiences. The table below compares them and shows some of the verbs typically used in learning objectives written for each level.

Each level below pairs the matching entries from Bloom's taxonomy, Wiggins and McTighe's facets of understanding, and Fink's taxonomy of significant learning, followed by verbs typically used in learning objectives written at that level.

Level 1 (typical verbs: name, define, recall)
  • Bloom: Knowledge. Recalling learned information.
  • Wiggins & McTighe: Explain. Provide sophisticated and apt explanations and theories that give knowledgeable and justified accounts of phenomena, facts, and data.
  • Fink: Foundational knowledge. The facts, terms, formulas, concepts, and principles that one understands and remembers.

Level 2 (typical verbs: restate, locate, explain, recognize)
  • Bloom: Comprehension. Explaining the meaning of information.
  • Wiggins & McTighe: Interpret. Offer interpretations, narratives, and translations that provide meaning; make subjects personal or accessible through images, anecdotes, analogies, and models.
  • Fink: Application. Using critical, creative, and practical (decision-making, problem-solving) skills.

Level 3 (typical verbs: apply, demonstrate, use)
  • Bloom: Application. Applying what one knows to novel, concrete situations.
  • Wiggins & McTighe: Apply. Use and adapt what one knows in new situations and various contexts.
  • Fink: Integration. Making connections among ideas, subjects, and people.

Level 4 (typical verbs: differentiate, criticize, compare)
  • Bloom: Analysis. Breaking down a whole into its component parts and explaining how each part contributes to the whole.
  • Wiggins & McTighe: Have perspective. Take critical and insightful points of view; see the big picture.
  • Fink: Human dimensions. Learning about and changing one's self; interacting with others.

Level 5 (typical verbs: design, construct, organize)
  • Bloom: Synthesis. Assembling components to form a new and integrated whole.
  • Wiggins & McTighe: Empathize. Get inside another's feelings and perspectives; use prior indirect experience to perceive sensitively.
  • Fink: Caring. Identifying and changing one's feelings, values, and interests.

Level 6 (typical verbs: choose, rate, select)
  • Bloom: Evaluation. Using evidence to make judgments about the relative merits of ideas and materials.
  • Wiggins & McTighe: Have self-knowledge. Perceive how one's patterns of thought and action shape and impede one's own understanding.
  • Fink: Learning how to learn. Becoming a better, self-directed learner; learning to ask and answer questions.
Reproduced with additions from Allen and Tanner's "Putting the Horse Back in Front of the Cart: Using Visions and Decisions about High-Quality Learning Experiences to Drive Course Design" (2007)

Baume’s guide to writing and using good learning outcomes offers a longer discussion of these issues.

Evaluate SWC and DC Learning Objectives

Your instructor has posted links to a handful of current Software and Data Carpentry lessons in the Etherpad. Take a minute to select one learning objective from one of those lessons, then complete the following steps to evaluate it and reword it to make it sharper.

  1. Identify the learning objective verb.
  2. Decide what type of learning outcome it targets (e.g., comprehension, application, or evaluation).
  3. Reword the learning objective to target a different type of learning outcome (e.g., from an application-based outcome to a knowledge-based one, or vice versa).
  4. Pair up to discuss your rewording, or help each other with steps 2 or 3 if necessary.
  5. Share the original and your re-worded learning objectives in the Etherpad.

Learner Profiles

One way to characterize the audience for a course is to write learner profiles. This technique is borrowed from user interface design, where short profiles of typical users are created to help designers think about their audience’s needs, and to give them a shorthand for talking about specific cases.

Learner profiles have three parts: the person’s general background, the problem they face, and how the course will help them. A learner profile for Software Carpentry might be:

João is an agricultural engineer doing his masters in soil physics. His only programming experience is a first-year course using C. He was never able to apply this low-level programming to his work, and has not programmed since that course.

His work consists of evaluating physical properties of soil samples from different conditions. Some of the soil properties are measured by an automated device that sends logs in a text format to his machine. João has to open each file in Excel, crop the first and last quarters of points, and calculate an average.

Software Carpentry will show João how to write shell scripts to count the lines and crop the right range for each file, and how to use R to read these files and calculate the required statistics. It will also show him how to put his programs and files under version control so that he can re-run analyses and figure out which results may have been affected by changes.
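The profile names shell scripts and R because those are what the workshop teaches; purely as an illustration of the end goal, here is a minimal sketch of João's task in Python. The single-reading-per-line log format and the ".log" file extension are assumptions, not part of the profile.

    # crop_and_average.py: an illustrative sketch, not part of the lesson.
    # Assumes each log file contains one numeric reading per line.
    import glob

    def crop_and_average(values):
        """Drop the first and last quarters of the readings, average the rest."""
        quarter = len(values) // 4
        middle = values[quarter : len(values) - quarter]
        return sum(middle) / len(middle)

    for filename in sorted(glob.glob("*.log")):   # ".log" is an assumed extension
        with open(filename) as reader:
            readings = [float(line) for line in reader if line.strip()]
        print(filename, crop_and_average(readings))

Once João can write and version something like this, re-running an analysis is a single command rather than an afternoon of pointing and clicking in Excel.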

Write a Learner Profile

Read Software Carpentry’s learner profiles and then write one that describes a fictional colleague of your own. Who are they, what problems do they face, and how will this training help them? Try to be as specific as possible.

Existing Lessons

Software Carpentry’s most commonly used lessons cover the Unix shell, version control, and programming in Python or R. Only one of the three programming lessons (Python or one of the two R lessons) is used in a typical workshop. Software Carpentry also maintains lessons on other topics, but these are less frequently used.

The main aim of the Unix shell lesson is to familiarize people with a handful of basic concepts that crop up in many other areas of computing:
  • interacting with a computer by issuing textual commands rather than pointing and clicking
  • combining single-purpose tools (e.g., with pipes and filters) to accomplish complex tasks
  • automating repetitive tasks using loops and shell scripts

The aims of the version control lesson are to teach people:
  • how to keep track of changes to their files and directories
  • how to examine and recover earlier versions of their work
  • how to share and synchronize their work with collaborators

The ostensible aim of the programming lessons is to show people how to build modular programs out of small functions that can be read, tested, and re-used. However, these concepts turn out to be hard to convey to people who are still learning the syntax of a programming language (they cannot yet see the forest for the trees), so in practice the programming lessons focus primarily on the mechanics of doing common operations in those languages.

Data Carpentry’s lessons are domain-specific and cover data organization, manipulation, and visualization skills relevant to the target domain. Currently, there are fully-developed workshops for ecology.

There are also materials in development and testing for other domains, such as genomics and geospatial data.

Other Data Carpentry lessons are in the incubator stage.

Lesson Development

As stated above, the lesson materials for Software and Data Carpentry are hosted on GitHub, under the swcarpentry and datacarpentry organizations, and are developed collaboratively: in 2015 alone, almost 200 people made contributions to various lessons. Each lesson is in a separate repository and consists of narrative lesson material plus an associated directory containing the data or scripts the lesson needs. The source material is then served as a website using GitHub’s “gh-pages” feature.

Lesson contributions are managed within each repository using GitHub “issues” and “pull requests”. New problems or suggestions are raised as issues and discussed by the community, then addressed via a pull request, which proposes a set of changes and can itself be discussed before being merged.

Many Ways to Contribute

We recognize that GitHub as a medium may be restrictive for some who wish to contribute to our lessons. We are always searching for ways to make the process friendlier, whether through contribution training or alternative routes to contribution. If you have ideas about how we might make contributing easier, please let us know.

Reverse Instructional Design

Most people design courses as follows:

  1. The chair tells you that you have to teach something you haven’t thought about in ten years.
  2. You start writing slides to explain what you know about the subject.
  3. After two or three weeks, you make up an assignment based more or less on what you’ve taught so far.
  4. You repeat step 3 several times.
  5. You stay up ‘til the wee hours to make up a final exam.

There’s a better way, and to explain it, we first need to explain how test-driven development (TDD) is used in software development. When programmers are using TDD, they don’t write software and then (possibly) write tests. Instead, they write the tests first, then write just enough new software to make those tests pass, and then clean up a bit.

TDD works because writing tests forces programmers to specify exactly what they’re trying to accomplish and what “done” looks like. It’s easy to be vague when using a human language like English or Korean; it’s much harder to be vague in Python or R.

TDD also reduces the risk of endless polishing, and increases the likelihood that tests will actually get written. (Somehow, people always seem to run out of time…) Finally, writing the tests first reduces the risk of confirmation bias: someone who hasn’t written a program is much more likely to be objective when testing it than its original author.
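To make the analogy concrete, here is a minimal sketch of the test-first cycle in Python. The function count_vowels and its specification are invented for illustration, and the tests are written in the style that a runner such as pytest would collect and execute.

    # Step 1: write the tests first. They specify exactly what "done"
    # looks like, and they fail until count_vowels is implemented.

    def test_simple_word():
        assert count_vowels("science") == 3

    def test_empty_string():
        assert count_vowels("") == 0

    def test_mixed_case():
        assert count_vowels("AeIoU") == 5

    # Step 2: write just enough code to make the tests pass.

    def count_vowels(text):
        """Return the number of vowels in text, ignoring case."""
        return sum(1 for char in text.lower() if char in "aeiou")

    # Step 3: run the tests, then clean up (refactor) with the tests as a safety net.

The order on the page mirrors the order of work: the assertions pin down the goal before any implementation exists, just as a summative assessment pins down a lesson’s goal before any episodes are written.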

A similar “backward” method works very well for lesson design. As described in Wiggins and McTighe’s Understanding by Design, the method proceeds through four stages:

  1. Identify what is worth learning (e.g., draw concept maps).
  2. Decide what constitutes evidence that learning has taken place (i.e., create the final exam or some other summative assessment).
  3. Design practice work to prepare learners for what they will have to do during the summative assessment. These should include formative assessments to be done in class and the exercises to be done out of class.
  4. Sort those practices in order of increasing complexity and then write short episodes to close the gap between what learners know and what they need to know in order to do each one. (An actual classroom lesson will then consist of several such episodes, each building toward a quick formative assessment.)

This reverse instructional design method helps keep teaching focused on its objectives. It also ensures that learners don’t face anything on the final exam that the course hasn’t prepared them for. When writing the lessons themselves, Wiggins and McTighe use the acronym WHERETO:
  • W: ensure learners know Where the lesson is going and Why.
  • H: Hook learners and Hold their attention.
  • E: Equip learners, and help them Experience and Explore the key ideas.
  • R: provide opportunities to Rethink and Revise.
  • E: allow learners to Evaluate their own work and its implications.
  • T: Tailor the lesson to different needs, interests, and abilities.
  • O: Organize the lesson to maximize engagement and effectiveness.

How and Why to Fake It

One of the most influential papers in the history of software engineering was Parnas and Clements’ “A Rational Design Process: How and Why to Fake It”, in which they pointed out that in real life we move back and forth between gathering requirements, designing interfaces, programming, and testing. When we write up our work, though, it is important to describe it as if we had done these steps one after another so that other people can retrace our steps. The same is true of lesson design: while we may change our minds about what to teach based on something that occurs to us while writing an MCQ, we want the notes we leave behind to present things in the order described above.

Teaching to the Test

Is reverse instructional design “teaching to the test”? I.e., does it steer teachers toward getting their students to pass an exam rather than learn things?

Reverse instructional design is not the same thing as “teaching to the test”. When using RID, teachers set goals to aid in lesson design, and may never actually give the final exam they wrote as a goal. In many school systems, on the other hand, an external authority defines assessment criteria for all learners, regardless of their individual situations, and the outcomes of those summative assessments directly affect teachers’ pay and promotion. Green’s Building a Better Teacher describes how that kind of focus on measurement offers little (usually no) help for improvement. Centralized, standardized testing is appealing (particularly to those with the power to set the tests), but as Scott pointed out in Seeing Like a State, large organizations invariably prefer uniformity to productivity.

Validate Learning Objectives

Choose one topic from a Data Carpentry or Software Carpentry lesson and read through its learning objectives. Does the lesson accomplish what it sets out to achieve? Does it contain too much? Does the content stay on point with the learning objectives?

Key Points