Instructor Training: Discussion

This training course is only a start. If you’d like to help us make it better, we would welcome additions discussing:

We would also appreciate additions to this list of things we don’t do, and explanations of why not:

peer instruction
This powerful teaching method has been proven effective, but we are already asking workshop participants to assimilate a lot of new things, and picking up a new learning technique while learning the basics of coding and data wrangling seems too much to ask.
certification
Many people have asked us to certify workshop participants in the same way that we certify instructors, but any meaningful certification process would require a lot of resources to set up and run.

Effecting Change

Henderson et al.’s “Facilitating Change in Undergraduate STEM Instructional Practices” discusses ways to get educational institutions to actually change what they teach. Their findings are summarized below:

The table crosses the aspect of the system to be changed (individuals, or environments and structures) with the intended outcome (prescribed or emergent), giving four quadrants:

I. Disseminating: Curriculum & Pedagogy (individuals, prescribed outcome)
Change agent role: tell/teach individuals about new teaching conceptions and/or practices and encourage their use.
Approaches: Diffusion, Implementation.

II. Developing: Reflective Teachers (individuals, emergent outcome)
Change agent role: encourage/support individuals to develop new teaching conceptions and/or practices.
Approaches: Scholarly Teaching, Faculty Learning Communities.

III. Enacting: Policy (environments and structures, prescribed outcome)
Change agent role: enact new environmental features that require/encourage new teaching conceptions and/or practices.
Approaches: Quality Assurance, Organizational Development.

IV. Developing: Shared Vision (environments and structures, emergent outcome)
Change agent role: empower/support stakeholders to collectively develop new environmental features that encourage new teaching conceptions and/or practices.
Approaches: Learning Organizations, Complexity Leadership.

Henderson et al. describe each of these eight approaches in more detail in their paper.

Why Do(n’t) We Teach X?

Workshop attendees and trainee instructors often ask why we don’t teach high-performance computing, machine learning, Perl, or a long list of other topics. Our answer is that, as with every curriculum, the question is not, “What would we like to add?” but, “What are we willing to take out in order to make room?” We believe our core topics are the absolute minimum that researchers need to know in order to work efficiently and reproducibly. More importantly, we don’t know what we could take out to make space for something else.

One thing we do know is that we do not wish to become embroiled in debates over the relative merits of different languages or operating systems. No one has ever demonstrated that R programmers are more productive than Python programmers, and proficient users of Windows seem just as productive as equally proficient users of Unix. If a learner asserts that their favorite tool is better than the alternatives in some way, ask them for their data. If they don’t have any, point out as gently as possible that we are supposed to be scientists: if we want politicians, business leaders, and the general public to pay attention to our findings on climate change and drug-resistant diseases, it behooves us to try to meet those same standards ourselves.

Evidence and Its Absence

As far as is practical, our teaching methods are based on the best available evidence. We wish we could say the same about our content, but very little research has been done on which computational tools researchers actually use or what impact those tools have on their productivity. An example of what we wish existed is this summary by Stefik et al. of empirical research on the usability of programming languages (and this full-length paper gives an idea of what’s possible).

Why We’re Not a MOOC

If you use robots to teach, you teach people to be robots.

The difference between what novices are doing when they learn and what competent practitioners do is one of the reasons we have stopped trying to teach via recorded video with auto-graded drill exercises. Recorded content is as ineffective for most learners as broadcast television, or as a professor standing in front of 400 people in a lecture hall, because none of them can intervene to clear up specific learners’ misconceptions. Some people happen to already have the right conceptual categories for a subject, or happen to form them correctly early on; they are the ones who stick with most massive online courses, but many discussions of the effectiveness of such courses ignore this survivorship bias.

Program Assessment

The Carpentries’ greatest weakness is a lack of systematic assessment. We have done some small-scale studies of the impact we have on our learners, and Dr. Beth Duckles’ studies of why instructors join us and why people qualify but then don’t teach are very insightful, but we still don’t know what learners actually adopt from our workshops or what effect that has on their productivity, the reproducibility of their work, and so on.

We have sometimes used this as the basis for an in-class exercise. Working in groups of four, trainees brainstorm answers to the following: “Your dean has provisionally agreed to set aside funds to support some Carpentry workshops over the next year, but wants to know how you will tell at the end of those workshops whether the money was worth spending. Given the resources you have, what information can you collect, how would you analyze it, and why do you think it would be convincing?” Each group then presents its best idea, which the trainers and other trainees critique.

This exercise always generates a lot of discussion, but end-of-day assessment has usually indicated that trainees don’t find it particularly useful. We have therefore cut it, but may re-introduce it if and when we include a module on program assessment.