Inquiry learning isn’t – a call for direct explicit instruction

In 2006 Paul Kirschner published, with John Sweller and Richard E Clark, a now-seminal piece of research that threatened to blow the doors off an often-accepted orthodoxy in teaching: that students learn best when they discover things by themselves. They proposed that not only was this not the case, but that the best learning frequently took place when guided direct instruction by an expert was the main strategy.

Decades of research demonstrates that for novices (the state of most students), direct explicit instruction is more effective and efficient – and in the long run more enjoyable – than minimal guidance. So, when teaching new content and skills to novices, teachers are more effective when they provide explicit support and guidance. Direct, explicit instruction fully explains the concepts and skills that students are required to learn. It can be provided through all types of media and pedagogies (e.g., lectures, modelling, videos, computer-based presentations, demonstrations, class discussions, hands-on activities, etc.) as long as the teacher ensures that the relevant information is explicitly provided and practised. Minimal instructional guidance, on the other hand, expects students to discover on their own most, if not all, of the concepts and skills they are supposed to learn. This approach has been given various names, such as discovery learning, problem-based learning, inquiry learning, experiential learning, and constructivist learning.

Rich Mayer examined studies conducted from 1950 to the late 1980s that compared discovery learning (defined as unguided, problem-based instruction) with guided forms of instruction. In his famous three-strikes paper,2 he suggested that in each decade since the mid-1950s, after empirical studies provided solid evidence that the then-popular form of unguided approach did not work, a similar approach soon popped up under a different name with the cycle then repeating itself. This pattern produced discovery learning, then experiential learning, then problem-based and inquiry learning, then constructivist pedagogies, ad infinitum. He concluded that the ‘debate about discovery has been replayed many times in education but each time, the evidence has favored a guided approach to learning’ (p. 18).

Evidence from well-designed, properly controlled experimental studies as well as classroom studies from the 1980s to today also supports direct instructional guidance. The research has shown that when students try to learn with discovery methods or with minimal feedback, they often become lost and frustrated, and their confusion can lead to misconceptions. And because false starts (where students pursue misguided hypotheses) are common, unguided discovery is also inefficient.

In a very important study,3 researchers not only tested whether science learners learned more via a discovery versus direct-instruction route but also, once learning had occurred, whether the quality of learning differed. The findings were unambiguous. Direct instruction involving considerable guidance, including examples, resulted in vastly more learning than discovery. Those relatively few students who learned via discovery showed no signs of superior quality of learning or superior transfer. Also, even if a problem or project is devised that all students succeed in completing, minimally guided instruction is much less efficient than explicit guidance. What can be taught directly in a 25-minute demonstration and discussion, followed by 15 minutes of independent practice with good teacher feedback, may take several class periods to learn via minimally guided projects and/or problem solving.

And finally, minimally guided instruction can increase the achievement gap. A review of approximately 70 studies4 found not only that higher-skilled learners tend to learn more with less-guided instruction while lower-skilled learners tend to learn more with more-guided instruction, but also that lower-skilled students taught with less-guided instruction scored significantly lower on post-tests than on pre-test measures. For these relatively weak students, the failure to provide strong instructional support and guidance produced a measurable loss of learning.

Now let’s look at how we learn. There are two essential components that influence how we learn: long-term memory (LTM) and working memory (WM; often called short-term memory). LTM is a big mental warehouse of things while WM is a limited mental ‘space’ in which we think. However, to dispel a common misconception, LTM is not a passive repository of discrete, isolated fragments of information that permit us to repeat what we have learned, having only peripheral influence on complex cognitive processes such as critical thinking and problem solving. It is, rather, the central, dominant structure of human cognition. Everything we see, hear, and think about depends on and is influenced by our LTM. Expert problem solvers, for example, derive their skill by drawing on the extensive experience stored in their LTM in the form of concepts and procedures, known as mental schemas. They retrieve memories of past procedures and solutions, and then quickly select and apply the best ones for solving problems. We are skilful in an area if our LTM contains huge amounts of information concerning the area. That information permits us to quickly recognise the characteristics of a situation and indicates to us, often immediately and unconsciously, what to do and when to do it. And what are the instructional consequences of LTM? First and foremost, LTM provides us with the ultimate justification for instruction: the aim of all instruction is to add knowledge and skills to LTM. If nothing has been added to LTM, nothing has been learned.

WM, in contrast, is the cognitive structure in which conscious processing occurs. We are only conscious of the information currently being processed in WM and are more or less oblivious to the far larger amount of information stored in LTM. When processing novel information, WM is very limited in duration and capacity. We have known at least since the 1950s that almost all information stored in WM is lost within 30 seconds if it is not rehearsed, and that its capacity is limited to a very small number of elements: classically estimated at about 7, and in more recent work at around 4 ± 1.

For instruction, the interactions between WM and LTM may be even more important than the processing limitations. The limitations of WM only apply to new, to-be-learned information (i.e., information that has not yet been stored in LTM). When dealing with previously learned information stored in LTM, these limitations disappear. Since information can be brought back from LTM to WM as needed, the 30-second limit of WM becomes irrelevant. Similarly, there are no known limits to the amount of such information that can be brought into WM from LTM.

These two facts – that WM is very limited when dealing with novel information, but is not limited when dealing with information stored in LTM – explain why minimally guided instruction typically is ineffective for novices, but can be effective for experts. When given a problem to solve, novices’ only resource is their very constrained WM while experts have both their WM and all the relevant knowledge and skill stored in LTM.

One of the best examples of an instructional approach that takes into account how our working and long-term memories interact is the ‘worked example effect’. Solving a problem requires searching for a solution, which must occur using our limited WM. If the learner has no relevant concepts or procedures in LTM, the only thing they can do is blindly search for possible solution steps that bridge the gap between the problem and its solution. This process places a great burden on WM capacity because the problem solver has to continually hold and process the current problem state in WM (e.g., Where am I right now in the problem-solving process? How far have I come towards finding a solution?) along with the goal state (e.g., Where do I have to go? What is the solution?), the relations between the goal state and the problem state (e.g., Is this a good step toward solving the problem? Has what I’ve done helped me get nearer to where I need to go?), the solution steps that could further reduce the differences between the two states (e.g., What should the next step be? Will that step bring me closer to the solution? Is there another solution strategy that I can use that might be better?), and any subgoals along the way. Thus, searching for a solution overburdens limited WM and diverts working-memory resources away from storing information in LTM. As a consequence, novices can engage in problem-solving activities for extended periods and learn almost nothing.

In contrast, studying worked examples reduces the burden on WM (because the solution only has to be comprehended, not discovered) and directs attention (i.e., directs WM resources) toward storing the essential relations between problem-solving moves in LTM. Students learn to recognise which moves are required for particular problems, which is the basis for developing knowledge and skill as a problem solver. As the learner progresses, various steps can be faded away so that the learner needs to think up and complete those steps themselves (partially worked examples).

It is important to note that this discussion of worked examples applies to novices – not experts. In fact, the worked-example effect first disappears and then reverses as the learners’ expertise increases. That is, for experts with lots of knowledge in LTM, solving a problem can be more effective than studying a worked example.

Why then, with all of this evidence, do people continue to think that inquiry-based learning works? Turning back to Mayer’s review of the literature, educators seem to confuse constructivism as a theory of how one learns and sees the world with constructivism as a prescription for how to teach. In cognitive science, ‘constructivism’ is a widely accepted theory of learning; it claims that learners must construct mental representations of the world by engaging in active cognitive processing (i.e., schema construction). Many educators (unfortunately including professors in colleges of education) have latched on to this notion of students having to ‘construct’ their own knowledge and assume that the best way to promote such construction is to have students discover new knowledge or solve new problems without much guidance from the teacher. Unfortunately, this assumption is both widespread and incorrect. Mayer calls it the ‘constructivist teaching fallacy’. Simply put, cognitive activity can happen with or without behavioural activity, and behavioural activity does not in any way guarantee cognitive activity. In fact, the type of active cognitive processing that students need to engage in to ‘construct’ knowledge can happen through reading a book, listening to a lecture, watching a teacher conduct an experiment while simultaneously describing what he or she is doing, etc. Learning requires the construction of knowledge. Construction is not facilitated by withholding information from students.

After a half-century of advocacy associated with instruction using minimal guidance, it appears that there is no body of sound research that supports using the technique with anyone other than the most expert students. Evidence from controlled, experimental (AKA ‘gold standard’) studies almost uniformly supports direct instructional guidance rather than minimal guidance for novice to intermediate learners. These findings and their associated theories suggest teachers should provide their students with clear, explicit instruction rather than merely assisting students in attempting to discover knowledge themselves.



References

1
This is a condensed version of the article ‘The case for direct, explicit instruction’ written for American Educator by the original authors which itself summarised parts of the original article ‘Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching’ by Kirschner, P. A., Sweller, J. and Clark, R. E., originally published in Educational Psychologist 41 (2) pp. 75–86.

2
Mayer, R. (2004) ‘Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction’, American Psychologist, 59 (1) pp. 14–19.

3
Klahr, D. and Nigam, M. (2004) ‘The equivalence of learning paths in early science instruction: effects of direct instruction and discovery learning’, Psychological Science 15 (10) pp. 661–667.

4
Clark, R. E. (1989) ‘When teaching kills learning: research on mathemathantics’ in Mandl, H., Bennett, N., De Corte, E. and Friedrich, H. (eds) Learning and instruction: European research in an international context, volume 2. London, UK: Pergamon, pp. 1–22.

Comparative judgement: the next big revolution in assessment?

Director of Education at No More Marking, Daisy Christodoulou outlines why teachers should rethink how they assess, why they assess and, vitally, how much time they should spend doing it.

Marking writing reliably is hard. To understand why, try this thought experiment. Imagine that you have a mathematics exam paper. It’s a simple paper with just 40 questions and all those questions are fairly straightforward. One mark is available for each question, and there are no marks for method. Suppose I then give that paper to a pupil and get them to complete it. If I then copy their answer script and give it to a group of 100 maths teachers, I would expect all of those teachers to agree on the mark that script should be awarded, even if they had never met before or discussed the questions on the paper.

Now take the same pupil, and imagine they have been asked to write a short description of the town where they live. Suppose again that we copy their script, distribute it to 100 teachers, and ask them to give the script a mark out of 40. It is far less likely that the teachers will all agree on the mark that script should be awarded. Even if they had all undergone training in the meaning of the mark scheme, and met in advance to discuss what the mark scheme meant, it would be highly unlikely that they would all then independently agree on the mark that one script deserved.

To a certain extent, this is to be expected. There is no one right answer to an extended writing question, and different people will have different ideas about how to weight the various different aspects that make up a piece of writing. However, whilst we might accept that we will never get markers to agree on the exact mark, we surely do want them to be able to agree on an approximate mark. We may not all agree that a pupil deserves 20/40, but perhaps we can all agree that they deserve 20/40, plus or minus a certain number of marks. The larger this margin of error is, the more difficulty we have in working out what the assessment is telling us. Suppose, hypothetically, that the margin of error on this question was plus or minus 15. A pupil with 20/40 might have scored anywhere between 5 and 35! Large margins of error make it difficult to see how well a pupil is doing, and they also make it even more difficult to see if a pupil is making progress, as then you have to contend with the margin of error on two assessed pieces of work.
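To see why, suppose (again hypothetically) that both pieces of work carry that plus or minus 15 margin. A pupil marked 20/40 in the autumn and 25/40 in the summer shows an apparent gain of 5 marks, but their true change could lie anywhere between a 25-mark decline and a 35-mark improvement, because the errors on the two marks stack.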

In order to know how well pupils are doing, and whether they are improving, we therefore need a method of reliably assessing extended writing. In order to consider how we might arrive at this, let us first look at two reasons why it is so difficult to mark extended writing at the moment.

First, traditional writing assessment often depends on absolute judgements. Markers look at a piece of writing and attempt to decide which grade is the best fit for it. This may feel like the obvious thing to do, but in fact humans are very bad at making such absolute judgements. This is not just true of marking essays, either, but of all kinds of absolute judgement. For example, if you are given a shade of blue and asked to identify how dark a shade it is on a scale of 1 to 10, or given a line and asked to identify its exact length, you will probably struggle. However, if you are given two shades of blue and asked to find the darker one, or two lines and asked to find the longer one, you will find that much easier. Absolute judgement is hard; comparative judgement is much easier, but traditional essay marking works mainly on the absolute model.1

Second, traditional writing assessment depends on the use of prose descriptions of performance, such as those found in mark schemes or exam rubrics. The idea is that markers can use these descriptions to guide their judgements. For example, with one exam board, the top band for writing is described in the following way:

  • Writing is compelling, incorporating a range of convincing and complex ideas
  • Varied and inventive use of structural features2

The next band down is described as follows:

  • Writing is highly engaging, with a range of developed complex ideas
  • Varied and effective structural features

It is not hard to see the kinds of problems such descriptors can cause. What is the difference between ‘compelling’ and ‘highly engaging’? Or between ‘effective’ use of structural features and ‘inventive’ use? Such descriptors cause as many disagreements as they resolve, because prose descriptors are capable of being interpreted in a number of different ways. As Alison Wolf says, ‘One cannot, either in principle or in theory, develop written descriptors so tight that they can be applied reliably, by multiple assessors, to multiple assessment situations.’3

Comparative judgement offers a way of assessing writing which, as its name suggests, does not involve difficult absolute judgements, and which also reduces reliance on prose descriptors. Instead of markers grading one essay at a time, comparative judgement requires the marker to look at a pair of essays, and to judge which one is better. The judgement they make is a holistic one about the overall quality of the writing. It is not guided by a rubric, and can be completed fairly quickly. If each marker makes a series of such judgements, it is possible for an algorithm to combine all the judgements and use them to construct a measurement scale.4 This algorithm is not new: it was developed in the 1920s by Louis Thurstone.5 In the last few years, the existence of online comparative judgement engines has made it easy and quick for teachers to experiment with such a method of assessment.
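For the technically curious, here is a minimal sketch of how such a scale can be built. It uses the Bradley–Terry model, a logistic cousin of Thurstone’s, fitted by simple iteration; the essay names and judgements are invented for illustration, and real comparative judgement engines use their own, more sophisticated refinements of the same idea.

```python
# A toy sketch, not any real engine's code: fit a Bradley-Terry model
# (a logistic cousin of Thurstone's 1927 model) to pairwise judgements.
# The essays and judgements below are invented for illustration.
from collections import defaultdict
import math

# Each tuple records one judgement: (winner, loser).
judgements = [
    ("essay_a", "essay_b"), ("essay_a", "essay_c"), ("essay_b", "essay_c"),
    ("essay_a", "essay_b"), ("essay_c", "essay_b"), ("essay_a", "essay_c"),
]

essays = sorted({essay for pair in judgements for essay in pair})
wins = defaultdict(int)   # total wins per essay
bouts = defaultdict(int)  # total comparisons per unordered pair
for winner, loser in judgements:
    wins[winner] += 1
    bouts[frozenset((winner, loser))] += 1

# Iterative fit: repeatedly set each essay's strength to
# wins / sum over opponents of (comparisons / (own strength + opponent's)).
strengths = {essay: 1.0 for essay in essays}
for _ in range(200):
    updated = {}
    for essay in essays:
        denom = sum(
            bouts[frozenset((essay, other))] / (strengths[essay] + strengths[other])
            for other in essays if other != essay
        )
        updated[essay] = wins[essay] / denom
    total = sum(updated.values())
    strengths = {essay: s / total for essay, s in updated.items()}

# The log of each strength is the essay's position on the measurement scale.
for essay in essays:
    print(f"{essay}: {math.log(strengths[essay]):+.2f}")
```

An essay that wins more of its comparisons ends up higher on the scale, and with enough judgements per script the fitted positions stabilise – which is what makes it possible to quantify how reliable the judging has been.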

At No More Marking, where I am Director of Education, we have used our comparative judgement engine for a number of projects at primary and secondary. In our assessments of pupils’ writing, we can measure the reliability of our markers, and we are routinely able to reduce the margin of error to just plus or minus 2 marks on a 40-mark question. Teachers are also able to complete these judgements relatively rapidly, leading to reductions in workload too. In the longer term, our hope is that wider use of comparative judgement will allow teachers to identify promising teaching methods with greater accuracy, and also to reduce the influence that tick-box style mark schemes have on teaching and learning.

To find out more, read Making Good Progress – the Future of Assessment for Learning (2016) by Daisy Christodoulou, published by Oxford University Press.



References

1
Laming, D. (2003) Human judgment: the eye of the beholder. Boston, MA: Cengage Learning.

2
AQA, GCSE English Language 8700, Paper 2 Mark Scheme.
filestore.aqa.org.uk/resources/english/AQA-87002-SMS.PDF

3
Wolf, A. (1998) ‘Portfolio assessment as national policy: the National Council for Vocational Qualifications and its quest for a pedagogical revolution’, Assessment in Education: Principles, Policy & Practice, 5 (3) pp. 413–445, p. 442.

4
Pollitt, A. (2012) ‘Comparative judgement for assessment’, International Journal of Technology and Design Education, 22 (2) pp. 157–170.

5
Thurstone, L. L. (1927) ‘A law of comparative judgment’, Psychological Review, 34 (4) pp. 273–286.

Challenging the ‘education is broken’ and Silicon Valley narratives

Over the last 100 years an unassailable myth about education has taken root in popular culture: the formal enterprise of education is in some way ‘broken’ and in urgent need of drastic reform. In the last 20 years this myth has gone into overdrive with the advent of what Audrey Watters calls the ‘Silicon Valley narrative’, described as ‘the story that the technology industry tells about the world – not only the world-as-is but the world-as-Silicon-Valley-wants-it-to-be’. This narrative positions technology as the saviour of the ‘factory model’ of education, seeks to ‘personalise’ every aspect of learning and views knowledge as obsolete in an age of Google. However, its roots lie in a familiar kind of revolutionary zeal and entrepreneurial fatuity. Writing in 1922, Thomas Edison proclaimed:

‘I believe that the motion picture is destined to revolutionize our educational system and that in a few years it will supplant largely, if not entirely, the use of textbooks. I should say that on average we get about two percent efficiency out of school books as they are written today.’ (Edison in Cuban, 1986, p. 9)

Many of the claims from the early 20th century were focused on the radio, with television being hailed as the next transformative force in the 1940s and ’50s; but with the advent of computing devices in the 1960s, the notion of ‘teaching machines’ began to emerge and so did a narrative of technology as not just augmenting traditional education structures, but replacing them altogether.

Techno-evangelists have sold us the internet as a form of emancipation, freeing us from the ‘factory model’ of education.

A common trope in the ‘education is broken’ narrative is a sinister call for the annihilation of the teacher. A 1981 book – School, Work and Play (World of Tomorrow) – makes the claim that:

‘If we look further into the future, there could be no schools and no teachers. Schoolwork may not exist. Instead you will have to do homework, for you will learn everything at home using your home video computer.’ (Ardley, 1981, p. 54)

The advent of mass digital technology and the internet in the last 20 years led to ever more sensationalist claims that the fundamental enterprise of education is in some way in need of wholesale change or ‘disruption’, a term coined by Clayton Christensen in his 1997 book, The Innovator’s Dilemma. The term refers to radical approaches, often cheaper and technology-based, which challenge and ‘disrupt’ existing structures and eventually supplant them with innovative alternatives. Companies like Amazon and Netflix are examples of disruptive technologies that have supplanted traditional ones like high street retail and video rental services, and have provided consumers with higher-quality products at a cheaper rate. However, as Martin Weller argues, the disruptive model is one that has been applied ‘much more broadly than its original concept, to the point where it is almost meaningless and rarely critically evaluated’ (Weller, 2014, p. 125). Just because Uber offers consumers a cheaper and more efficient alternative to cabs, it does not follow that the same model will work in education. Education’s stakeholders are not ‘consumers’, for one thing, and the ultimate goal of education is not efficiency.

In their 2008 book, Disrupting Class, Christensen and his co-authors argue that ‘disruption is a necessary and overdue chapter in our public schools’; Christensen would later claim that half of all high school classes would be taught online by 2019. Other disruption enthusiasts like Michael Staton have claimed that the traditional credential of a higher education degree is in crisis, writing in the Harvard Business Review in 2014 that university degrees are ‘doomed’ because employers can learn much more about prospective employees from cheaper alternatives, such as online apps that aggregate the content and skills they have created:

‘In these fields in the innovation economy, traditional credentials are not only unnecessary but sometimes even a liability. A software CEO I spoke with recently said he avoids job candidates with advanced software engineering degrees because they represent an over-investment in education that brings with it both higher salary demands and hubris.’

Many of these sorts of claims are focused on higher education and argue that those institutions are now bloated, anachronistic monuments to the past. In a 1997 interview in Forbes magazine, management consultant Peter F Drucker noted: ‘Thirty years from now the big university campuses will be relics. Universities won’t survive. It’s as large a change as when we first got the printed book.’

However, despite these grandiose claims there appears to be scant evidence on which to ground them. In fact, there is an emerging picture of technology as a highly distracting influence on students’ attentional capacities and their long-term ability to focus. A recent study (Ruest, 2016) showed that children who spent up to four hours a day using devices outside of schoolwork had a much lower rate (23%) of finishing their homework, compared to children who spent less than two hours a day on digital devices. A 2015 report from the OECD surveyed millions of students about their use of technology, correlated the responses with attainment scores, and found that use of technology had a detrimental effect on overall student achievement.

‘Students who use computers very frequently at school do a lot worse in most learning outcomes, even after controlling for social background and student demographics.’ (OECD, 2015)

Many studies in technology are correlational or based on self-report; however, a more recent study (Ravizza, Uitvlugt, Fenn, 2017) sought to address these issues by objectively measuring students’ use of laptops during lectures through the use of a proxy server that monitored and tracked precisely what websites were used during class. The central finding was that non-academic use of the internet in classes was highly prevalent and inversely related to performance in the final exam, regardless of interest in the class, motivation to succeed, and intelligence. In addition, using the internet for academic purposes during class did not yield a benefit in performance. The results showed that participants spent a median of 37 minutes per class browsing the internet for non-class-related purposes with their laptops and ‘spent the most time using social media, followed by reading e-mail, shopping, watching videos, chatting, reading news, and playing games’ (Ravizza, Uitvlugt, Fenn, 2017, p. 174) while they spent a total of four minutes browsing class-related websites.

A recent wide-ranging empirical review of the literature (Bulman and Fairlie, 2016) evaluating the impact of technology in terms of classroom use in schools and home use by students found that many policies promote investment in computer hardware or internet access, and that the ‘majority of studies find that such policies result in increased computer use in schools, but few studies find positive effects on educational outcomes’. A 2015 report suggests that the reason for such findings is that technology in the classroom has both positive and negative effects, resulting in an overall null effect:

‘Classroom computers are beneficial to student achievement when used to look up ideas and information but detrimental when used to practice skills and procedures.’ (Falck, Mang and Woessmann, 2015, p. 23)

More worryingly, the work of Jean Twenge suggests that the ubiquity of phones and the ‘always-on’ culture of social media is having a detrimental effect on the mental health of the ‘iGen’ generation, those born between 1995 and 2012:

‘Rates of teen depression and suicide have skyrocketed since 2011. It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.’

It’s a bleak view of the future, often described as dystopian; but for Neil Postman, there is an interesting distinction between the dystopian visions of Orwell’s Nineteen Eighty-Four and Huxley’s Brave New World. The former portrayed a bleak vision of oppressive state control in the form of Big Brother which sought to actively ban expression and limit human agency; however, in Brave New World there is a far more horrifying phenomenon at work:

‘In Huxley’s vision, no Big Brother is required to deprive people of their autonomy, maturity and history. As he saw it, people will come to love their oppression, to adore the technologies that undo their capacities to think. What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one.’ (Postman, 1985, p. 10)

It must be said that technology has afforded us some incredible opportunities for education, such as comparative judgement or the JSTOR Shakespeare digital library, where every line of the plays is hyperlinked to critical commentary. Used judiciously in a purposeful and well-structured environment, technology can bring many benefits for SEN students; but increasingly, we are suffering from what Sartre called ‘the agony of choice’ as we become more and more connected to the internet of things. Until relatively recently, you had to sit down and use a computer to connect to the internet, but now even your central heating is online. Allowing kids to browse the internet in a lesson and then expecting they will work productively is like bringing them to McDonald’s and hoping they’ll order the salad.

Techno-evangelists have sold us the internet as a form of emancipation, freeing us from the ‘factory model’ of education, but often technology seems to represent a solution in search of a problem. (Interestingly, the model they seek to disrupt has in fact led to unprecedented improvements in educational outcomes: between 1900 and 2015, global literacy rose from 21% to 86% of the world’s population.) What’s notable about many of these claims is that they usually come from outside education, often from entrepreneurs with little or no experience in education and with significant financial investment in a digital utopia devoid of teachers. Perhaps the most liberating and empowering thing educators can do for young people today is to create a space for them where they can read the great works of human thought undisturbed and where we can ‘disrupt’ the current culture of distraction.

Carl Hendrick is author of What Does This Look Like in the Classroom? and the Head of Learning and Research at Wellington College where he teaches English. He is also completing a PhD in education at King’s College London.

@C_hendrick

chronotopeblog.com



References

Ardley, N. (1981) School, work and play (world of tomorrow). London: Franklin Watts Library.

Bulman, G. and Fairlie, R. (2016) ‘Technology and education: computers, software and the internet’ in Hanushek, E. A. et al. (eds) Handbook of the economics of education, volume 5. Amsterdam: Elsevier, pp. 239–280.

Christensen, C. (2011) The innovator’s dilemma: when new technologies cause great firms to fail. Boston: Harvard Business School Press.

Cuban, L. (1986) Teachers and Machines: The Classroom Use of Technology Since 1920. New York: Teachers College Press.

Falck, O., Mang, C. and Woessmann, L. (2015) ‘Virtually no effect? Different uses of classroom computers and their effect on student achievement’, CESifo Working Paper No. 5266.

Lamont Johnson, D. and Maddux, C. (2003) ‘Technology in education: a twenty-year retrospective’, Computers in the Schools, 20 (1/2).

OECD (2015) ‘Students, Computers and Learning: Making the Connection’, PISA, OECD Publishing, available at: dx.doi.org/10.1787/9789264239555-en accessed 05.12.2017

Postman, N. (1985) Amusing Ourselves to Death: Public Discourse in the Age of Show Business. New York: Viking.

Ravizza, S. M., Uitvlugt, M. G. and Fenn, K. M. (2017) ‘Logged in and zoned out: how laptop internet use impacts classroom learning’, Psychological Science, 28 (2) pp. 171–180.

Ruest, S. et al. (2016) ‘Digital media exposure in school-aged children decreases the frequency of homework’, Abstract 319984, presented at the AAP National Conference and Exhibition, 21–25 October 2016, San Francisco, CA.

Watters, A. (2016) The Curse of the Monsters of Education Technology. Tech Gypsies.

Weller, M. (2014) Battle for Open: How openness won and why it doesn’t feel like victory. London: Ubiquity Press. DOI: doi.org/10.5334/bam

Myth-Busting: Learning Styles

In the first of a series, Dr Pedro De Bruyckere explores the reality behind some of the more popular misconceptions in education, and asks if there is any truth in them.
This issue: learning styles

The great pretender – the truth behind learning styles

I’ll start this piece with a little confession. As a songwriter I couldn’t help including a song about my job as an educational myth-buster on the first album of my band. On Kiss Me Twice by Blue and Broke, there’s a song called ‘Naïve’ and one line of the song provided some inspiration for the title of these short articles on education myths: ‘There is some truth in every lie.’

What Paul Kirschner, Casper Hulshof and myself have discovered over the past few years is that there are often some grains of truth hidden in ideas that can rightfully be called Urban Myths about Learning and Education. For example, the shape of the infamous learning pyramid – one of my favourite myths that I call ‘the Loch Ness Monster of education’ – is actually based on one of the oldest theories on the use of multimedia in the classroom, the ‘Cone of Experience’ by Edgar Dale…from 1946!

Maybe I’ll tackle that myth later in the series, but let’s first start with another big one: what is the grain of truth hidden in learning styles?

The myth in short

For the people who think you should adapt your teaching to the supposed learning styles of your pupils, know this:

1. There is no evidence that it works

2. There are plenty of different categorisations

3. If you think it works, you can try to win $5000!

If you’d like to know how to win the prize, I’ll share the short version1 with you. Take at least 70 pupils and give them all a learning style test. I’ll explain what you need to do with two possible learning styles (auditory and visual learners) but you can pick whatever theory you like (e.g., Kolb, Honey and Mumford, Felder-Silverman, etc.) from the 71 known categorisations (Coffield et al., 2004). Then you’ll need to organise the groups into two conditions:

Group 1 will be taught according to their assumed learning style. The visual learners will get their information graphically presented; the auditory learners will get to listen to the information.

Group 2 will be taught according to the opposite of their assumed learning style. The auditory learners will get their information shown to them, the visual learners will get to listen to the information.

You randomly put half of the 70 pupils in the first group and the other 35 in the second group. If you can demonstrate that the pupils in group 1 have learned a sizeable amount more than the pupils in group 2, you might be in line to win the $5000 reward that Will Thalheimer offered many years ago. Check his website for the longer version of the challenge. Do note, however: nobody has succeeded yet.
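If you are curious what testing for ‘a sizeable amount more’ might look like, here is a toy sketch in Python. The scores are entirely invented and are generated under the null hypothesis that matching instruction to learning style confers no benefit; a real attempt at the challenge would, of course, use pupils’ actual post-test results.

```python
# A toy sketch of the analysis, with entirely invented data: scores are
# simulated under the null hypothesis that matching instruction to
# "learning style" confers no benefit. A real attempt at the challenge
# would use pupils' actual post-test scores instead.
import random
import statistics

random.seed(1)

# Randomly assign the 70 pupils to the two conditions, 35 in each.
pupils = list(range(70))
random.shuffle(pupils)
matched, mismatched = pupils[:35], pupils[35:]

# Simulated post-test scores out of 40: under the null, both groups draw
# from the same distribution (mean 25, standard deviation 5).
matched_scores = [random.gauss(25, 5) for _ in matched]
mismatched_scores = [random.gauss(25, 5) for _ in mismatched]

observed_gap = statistics.mean(matched_scores) - statistics.mean(mismatched_scores)

# Permutation test: reshuffle the same 70 scores many times and count how
# often a random split produces a gap at least as large as the observed one.
all_scores = matched_scores + mismatched_scores
trials = 10_000
at_least_as_big = 0
for _ in range(trials):
    random.shuffle(all_scores)
    gap = statistics.mean(all_scores[:35]) - statistics.mean(all_scores[35:])
    if gap >= observed_gap:
        at_least_as_big += 1

print(f"observed gap: {observed_gap:+.2f} marks, p = {at_least_as_big / trials:.3f}")
```

Because the simulated scores come from one and the same distribution, the observed gap hovers around zero and the p-value is rarely small – which mirrors what real matched-versus-mismatched studies have found.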

There is no correlation between following your learning preferences and better learning results.

The grain(s) of truth in the myth

As with most myths, there’s a grain of truth lurking somewhere. In fact, there are actually two grains of truth in the learning styles myth: a misleading one and a potentially helpful one.

Let’s start with the more misleading truth: people probably do not have a learning style – a best way of learning that a teacher needs to adapt to; however, people do often have learning preferences. Why is this a bit misleading? It’s because people become convinced that these preferences are the best way to learn: ‘Yeah, I just have to write stuff down and I will remember it best that way.’ There is a sad fact I need to share with you though: there is no correlation between following your learning preferences and better learning results (e.g., Rogowsky et al., 2015).

The second grain of truth is more helpful. If you combine different modalities (e.g. both visual and auditory senses) people will typically learn more. For example, dual-coding theory suggests that it’s better to combine images with words if you want to remember something (e.g., Mayer & Anderson, 1992).

I’ll leave you with Yana Weinstein (2016) from The Learning Scientists, who offers a great four-step summary of the science:

1. People have preferences for how they learn.

2. All people learn better when more senses are engaged.

3. Some people benefit from additional modalities more than other people.

4. No one suffers from the addition of a modality that’s not their favourite.

Check this page for the long version: www.worklearning.com/2014/08/04/learning-styles-challenge-year-eight/



References

Coffield, F., Moseley, D., Hall, E., and Ecclestone, K. (2004) Learning styles and pedagogy in post-16 learning: a systematic and critical review. London: Learning and Skills Research Centre.

Rogowsky, B. A., Calhoun, B. M., & Tallal, P. (2015) ‘Matching learning style to instructional method: effects on comprehension’, Journal of Educational Psychology, 107 (1) pp. 64–78.

Mayer, R. E. and Anderson, R. B. (1992) ‘The instructive animation: helping students build connections between words and pictures in multimedia learning’, Journal of Educational Psychology, 84 (4) pp. 444–452.

Weinstein, Y. (2016) ‘Just semantics? Subtle but important misunderstandings about learning styles, modalities, and preferences’, The Learning Spy [blog]. www.learningspy.co.uk/featured/just-semantics-subtle-but-important-misunderstandings-about-learning-styles-modalities-and-preferences/

The Science of Learning

US-based Deans for Impact are not only one of the leading organisations driving evidence-informed teacher training, but also ground-breaking communicators of evidence-informed education. One of their most successful publications, The Science of Learning, is central to that reputation. Benjamin Riley and Charis Anderson explain what it is, and why it has proven such an international success.

Benjamin Riley and Charis Anderson

When Deans for Impact launched in 2015, its members – all leaders of US educator-preparation programmes – wanted to chart a new course in education that pushed for the broader use of scientifically supported learning principles within programmes that prepare future teachers. At the same time, we wanted to make sure whatever we did would resonate with practising educators in the field. Could we create a resource to do both?

From this question, The Science of Learning – a short, six-page summary of principles of cognitive science and their application to teaching practice – was born. Three years after its publication, it remains the most widely used resource Deans for Impact has developed, with ongoing international interest. And we think the reason for this stems in part from the fact that the main authors of The Science of Learning – Daniel Willingham and Paul Bruno – spanned the ‘research to practice’ divide that so often creates a barrier to improving education.

Willingham, a professor at the University of Virginia, is a cognitive scientist. Earlier in his career, his research focused solely on the brain basis of learning and memory, but since around 2000, he has focused on the application of cognitive psychology to K-16 education. The Science of Learning offered Willingham another opportunity to bring information about cognitive psychology to educators in a useful way.

By contrast, when Bruno started working on The Science of Learning, he was fresh out of the classroom after spending five years teaching middle-school science in Oakland and Los Angeles. Bruno’s own teacher-preparation experience had left him with relatively little understanding of the science of learning, and much of what he did know he learned on his own. Based on his own experience, Bruno thought there was an enormous need to help make learning-science research accessible for educators.

‘I think it’s great when teachers take the initiative and want to dive into the research themselves,’ said Bruno, who is now a PhD student at USC Rossier. ‘But I think it is pretty unfair, for most teachers, to demand that they do that proficiently: that’s not their job.’

There’s a distinction between being a practitioner and being a researcher of how the mind works, according to Willingham. ‘Knowing what the mind does is not identical to knowing how to put those principles into practice in a classroom,’ Willingham said.

The Science of Learning focuses on the cognitive view of learning in order to focus on those principles that are most applicable to what teachers do in classrooms, such as helping students understand new ideas or motivating students to learn. The principles are organised through six framing questions – e.g., how do students understand new ideas? – and are paired with specific, concrete implications for instruction. Above all, The Science of Learning makes the research accessible.

The field of education often lacks clear paths to keep practitioners up to date on the latest relevant research. This stands in contrast to other professions, such as the medical field, where the American Medical Association takes an active interest in continuing education for physicians, according to Willingham. But in teaching, ‘I would say that most teachers feel they’re sort of on their own in navigating the research world and figuring out what’s new in research and what’s quality,’ Willingham said. Bruno agreed. ‘Particularly for a new teacher, it can be very helpful to have something like The Science of Learning that you can get your arms around and is relatively digestible,’ he said.

The lack of specificity or clarity in standards and other guidance given to teachers – both novice and more experienced – is also a real problem, in Bruno’s eyes. For example, teachers are told that it’s important for their students to have foundational knowledge as a precursor for critical thinking – but what is meant by ‘having foundational knowledge’? And what specific things do teachers need to do to help their students gain that knowledge?

‘A lot of times, educational advice can sound very aspirational, and watching teachers who are good can often seem like you’re watching something that’s indistinguishable from magic,’ Bruno said. A novice teacher who is told to differentiate her instruction, but isn’t given clear directions on what that means or looks like – or even on what basis instruction needs to be differentiated – will be left fishing for plausible ways of achieving the objective.

It’s in these types of situations where neuromyths like learning styles can easily take hold, Bruno believes. ‘Learning styles seems to offer some of this concreteness: take the activity you were doing, and turn it into something visual, or something kinesthetic,’ he said. ‘That seems actionable, and it’s something to latch onto.’

Empowering individual teachers with knowledge of learning science principles can change the way instruction is delivered in individual classrooms and contribute to changing the norms of the profession. Indeed, while we originally conceived of The Science of Learning as a tool to support individual learning, at Deans for Impact we’ve increasingly come to see the principles of learning science as central to organisational learning as well. We’re now using The Science of Learning to undergird a vision of change within educator-preparation programmes that prioritises candidate learning above all else.

In our most recent publication, Building Blocks, we laid out a vision for effective educator preparation that connects learning-science principles with practical considerations about how teacher preparation should be designed. In this vision, not only do teacher-educators teach and model behaviours that are aligned with our best scientific knowledge, but programmes themselves are designed with that knowledge at their core.

When teacher-educators model effective pedagogy, for example, it gives aspiring teachers ‘worked examples’ – step-by-step demonstrations that break down a teaching practice into its component parts – that reduce their cognitive burdens and help them see and understand the underlying concepts.

Interleaving practice opportunities throughout teacher-candidates’ preparation experience helps them better learn content and understand theory and practice as interrelated concepts. Pairing those practice opportunities with feedback that is targeted toward developing a specific skill and given as soon as possible after the skill is practiced – and giving teacher-candidates another opportunity to practice the skill – makes them powerful levers for improvement.

Finally, designing the arc of the preparation process to build teacher-candidate knowledge, skill, and understanding over time helps align theory to practice and creates a coherent experience for all candidates. This approach to program design is based on one of the bedrock principles of cognitive science: that we learn new ideas by referencing ideas we already know.

Three years after Deans for Impact first conceived the idea for The Science of Learning, it continues to guide much of our work. We believe that cognitive science can drive improvements within individual teachers’ classrooms and within the organizations that prepare those teachers – and researchED is playing a pivotal role in helping spread these ideas across the globe. We have made a great deal of progress – and our best work lies ahead.

You can download all Deans for Impact publications (including The Science of Learning and Building Blocks) for free here: deansforimpact.org/resources



Founded in 2015, Deans for Impact is a US nonprofit organisation that empowers, supports, and advocates on behalf of leaders at all levels of educator preparation who are committed to transforming the field and elevating the teaching profession.

Benjamin Riley is the founder and executive director of Deans for Impact. Prior to founding Deans for Impact, Ben conducted research on the New Zealand education system, worked as the policy director for a national education nonprofit, and served as deputy attorney general for the State of California. He received his bachelor’s degree from the University of Washington and JD from the Yale Law School.

Charis Anderson is the senior director of communications at Deans for Impact. Prior to joining Deans for Impact, she was the director of publications for a Boston-based national education nonprofit. Charis also worked as a reporter at a local newspaper in Massachusetts, for an independent high school in San Francisco, and at a management consulting firm. Charis received her bachelor’s degree in psychology from Williams College and her master’s degree in journalism from Columbia University Graduate School of Journalism.

Research that changed my teaching

In the first of a series in which educators explain how research has transformed their practice, English and media teacher Hélène Galdin-O’Shea tells us about one paper that changed everything for her classroom.

Research paper: ‘Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching’

Authors: Kirschner, Sweller and Clark, 2006.

The end of my first decade as a teacher was nearly the end of my career as a teacher. I had become so frustrated with the way in which ‘outstanding’ teaching was defined and enforced that I was ready to give up. It was a horrendous regime of having lessons graded against a never-ending tick-list of dubious items and the dual premises of minimal teacher talk (no more than five to ten minutes, and based in great part on the flawed – and now thankfully debunked – cone of learning or learning pyramid), complete with compulsory group work (or a ‘fail’), and finding a way to demonstrate ‘visible progress’ in 20 minutes. Five minutes of talking is just about enough to give a set of learning objectives and a set of instructions for group work if you want to avoid utter confusion when the signal is given.

Organising resources which are accessible and will give students something from which they can learn new information on their own is time-consuming enough, but add to that the provision of clearly defined roles for group members in order to make them ‘accountable’, and tasks through which students can engage with the materials, can do ‘something’ with the knowledge and prepare to feed back in a way that does not make students and teacher want to kill themselves after group 3 of 6 have had a go – well, all that is quite a feast. Dishearteningly, my role of ‘facilitator’ often led to the need to re-teach the materials – and ‘un-teach’ misconceptions. Could the group work task have worked better with clearly guided instruction at the start? Certainly so. But these were the rules of the game then. And boy, did I try!

When the focus of lesson planning becomes ‘What can I do in order not to explain this explicitly?’ as opposed to ‘How can I refine my explanations and polish the scaffolding work to maximise students’ understanding?’, something has to shift. It had become painfully obvious that the way ‘independent learning’ (as cited in the ‘outstanding lesson’ criteria) had come to be interpreted in schools was unhelpful. Did it really mean letting students struggle mostly on their own trying to make sense of the materials, organising themselves and others, formulating a response, and preparing to feed back that response? Even with timely interventions to redirect or explain, the process was painful, particularly for students who had a lower starting point. Why not provide more structured guidance with instant corrective feedback to start with?

After 13 years on the job, I went online, connected with many colleagues, and started reading. I am eternally grateful to whoever pointed in the direction of a paper which gave me new teacher-life, so to speak. It was a paper by Paul Kirschner, John Sweller and Richard Clark (2006) titled ‘Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching’ in which the authors make the case for fully guided instruction and the idea that most people learn best when provided with explicit instructional guidance. They argue that it is an ‘instructional procedure’ that takes into consideration the ‘structures that constitute human cognitive architecture’, with over 50 years of evidence from empirical studies to support its effectiveness.

The aim of all instruction is to alter long-term memory. If nothing has changed in long-term memory, nothing has been learned.

A couple of years later, someone shared a follow-up article which had been published in American Educator in 2012 – ‘Putting students on the path to learning: the case for fully guided instruction’ – which, to this day, I use with teacher trainees as it presents the research evidence in a very clear and accessible way. The first paper helped me redefine what had become for me a bête noire: the concept of ‘independent learning’, and what it may mean, firstly by shifting the idea to ‘independent practice’, and more broadly by conceptualising it as guiding students towards independent learning from a novice status to a more expert one over the course of a unit of study but also over the course of a year, a key stage, one’s formal education. In this model, guided then independent practice logically follows carefully guided instruction, feedback is proffered as an ongoing process and its two-way nature is reinforced as the teacher tweaks instruction taking cues from student response. It seems obvious now but the concept of cognitive load was an eye-opener in so far as it greatly explained why many of my students had struggled to learn and retain information through the convoluted tasks I used to prepare for them.

The paper also opened for me the ideas behind the role of memory in learning and allowed me to plan sequences of lessons aimed at carefully revisiting and building on knowledge, taking into consideration ways in which I could help my students with ‘knowledge organisation and schema acquisition’. They suggested that ‘there is also evidence that [unguided instruction] may have negative results when students acquire misconceptions or incomplete or disorganised knowledge’, which again chimed strongly with my experience. The lofty aims of ‘higher-order thinking’ that we were asked to prioritise now made sense as part of a carefully orchestrated and rehearsed foundational knowledge base, since ‘expert problem solvers derive their skill by drawing on the extensive experience stored in their long-term memory and then quickly select and apply the best procedures for solving problems.’ The paper culminated for me in the assertion that ‘the aim of all instruction is to alter long-term memory. If nothing has changed in long-term memory, nothing has been learned.’

The authors also introduced me to the worked example effect and the expertise reversal effect, the latter being summed up in: ‘The advantage of guidance begins to recede only when learners have sufficiently high prior knowledge to provide “internal” guidance.’ After a few years of chewing over these concepts and reading far more about them (starting with Barak Rosenshine’s ‘Principles of instruction’), I find it hard to believe that I was not introduced to these ideas at the start of my career. I am certain that teachers get a much better deal today but my own training can broadly be summed up by ‘Do group work’.

Now at the end of my second decade as a teacher, I feel more at peace with my practice and enthused about the future, knowing that I still have much to learn, practise and refine, but also knowing that there is a clearer path ahead in terms of finding helpful reading and research evidence, and having colleagues with whom discussions focus on student learning as opposed to nebulous proxies.

See Paul Kirschner’s article for more on this research paper.



References

Benjes-Small, C. (2014) ‘Tales of the undead…Learning theories: the learning pyramid’, ACRLog [blog]. acrlog.org/2014/01/13/tales-of-the-undead-learning-theories-the-learning-pyramid/

Kirschner, P. A., Sweller, J. and Clark, R. E. (2006) ‘Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching’, Educational Psychologist, 41 (2) pp. 75–86. www.cogtech.usc.edu/publications/kirschner_Sweller_Clark.pdf

Clark, R., Kirschner, P. and Sweller, J. (2012) ‘Putting students on the path to learning: the case for fully guided instruction’, American Educator, 36 (1) pp. 6–11. www.aft.org/sites/default/files/periodicals/Clark.pdf

Rosenshine, B. (2012) ‘Principles of instruction: research-based strategies that all teachers should know’, American Educator, 36 (1) pp. 12–19, 39. www.aft.org/sites/default/files/periodicals/Rosenshine.pdf

The psychology of habits

Teacher, blogger and trainer Joe Kirby takes a look at the force of habit – one of the most powerful influences we have on our behaviour whether we like it or not – and how we can use this in school.

Scientific research suggests that cues and consistency make habits last.

Why do we automatically wash our hands after going to the toilet? Why do we automatically tend to put our seatbelt on when we get into a car? Why do we tend to forget our New Year’s resolutions by March?

These puzzles can partly be explained by the psychology of habit. Knowing this scientific research can come in very handy as teachers and school leaders.

Scientific research

In 1899, one of the founders of modern psychology, William James, gave some talks to teachers on the human mind. ‘It is very important that teachers realise the importance of habit, and psychology helps us greatly at this point … Habits cover a very large part of life,’ James argued; much of our activity is automatic and habitual. ‘The more of the details of our daily life we can hand over to the effortless custody of automatism, the more our higher powers of mind will be set free for their own proper work’ (James, 1899).

Research a century on suggests that around 45% of our daily actions are habitual (Wood et al., 2002; Wood et al., 2005; Wood & Neal, 2007; Evans & Stanovich, 2013). Scientifically, habits are learned, contextual, automatic responses (Verplanken & Aarts, 1999; Wood & Neal, 2007). Simply repeating an action consistently in the same context leads to the action being activated on later exposure to the same cue (Lally & Gardner, 2013). Using the toilet is the cue for washing our hands. Getting into a car is the cue for putting on a seatbelt. When a specific behaviour is performed repeatedly in an unvarying context, a habit will develop.

Habits, scientists have found, do not rely on conscious attention or motivation, so they persist even after conscious motivation or interest dissipates (Bargh, 1994). Habits also free mental resources for other tasks. For example, learning to drive requires conscious attention to the pedals at first, but once driving becomes a learned habit, attention is freed for scanning the road and for conversation.

Decades of studies show that habit strength increases with repetition of a behaviour after the same cue (Hull, 1943; Lally et al., 2010; Lally et al., 2011). Cues and consistency combine to create a new habit. One study showed that it took an average of 66 days for a habit to form, with a range of 18 to 254 days (Lally et al., 2010). The time taken to automate the habit depended partly on its complexity: drinking a glass of water every day is easier than doing 50 sit-ups every day. Psychologists now argue that habit formation advice – that is, to repeat an action consistently in the same context – offers a simple path to long-term behaviour change (Gardner, Lally & Wardle, 2012).
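Lally and colleagues modelled habit formation as an asymptotic curve: automaticity climbs steeply over the first repetitions and then levels off. The sketch below is a toy version of that general shape, not their fitted model – the rate constant is my own choice, picked only so that roughly 95% of the plateau is reached around the average 66-day mark.

```python
import math

def automaticity(day, rate=0.045):
    """Toy asymptotic habit curve: big early gains, then a plateau.
    The rate constant is illustrative, not fitted to Lally et al.'s data."""
    return 1 - math.exp(-rate * day)

for day in (1, 7, 30, 66, 254):
    print(f"day {day:>3}: automaticity ≈ {automaticity(day):.0%}")
```

The practical point is the plateau: beyond a certain number of consistent repetitions, the behaviour runs largely on the cue rather than on willpower.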

Cues and consistency

In schools, we can use the power of habit to improve our pupils’ lives, just as a parent says to their child, ‘What’s the magic word?’ to teach them to be thankful and thoughtful. From the research evidence, two principles suggest themselves to make a habit last:

  • Choose a ‘cue’ or a reminder that occurs without fail at least daily.
  • Repeat the action consistently after the cue for as many days in a row as possible.

The best cues recur unfailingly, such as waking up or entering or leaving a lesson. This explains why so many of us forget our New Year’s resolutions: we haven’t turned them into daily habits with unfailing cues or consistency.

Greeting people professionally is a useful habit for young people to learn for any interview they attend and anywhere they work later in life. A simple cue is seeing a teacher. I have seen how teaching pupils to smile and greet teachers cheerfully with ‘good morning!’ or ‘good afternoon!’ helps them learn how to interact positively and politely. Because this cue occurs many times a day at school, pupils have many chances every day to practise. Some pupils already have this automated and are at an advantage in later life. Schools can help all pupils to achieve this advantage by teaching and reinforcing it consistently until it is an automatic habit for everyone.

Pupils have to remember lots of items every day: uniform, books, equipment, homework and kit. Quite often, something gets forgotten. Checking they’ve got what they need in their bag the night before and in the morning is a useful habit. A simple cue is to check their bag just after they’ve woken up. When it comes to exams, having this habit automated hugely reduces stress, pressure and panic.

Focusing on practice in lessons straight away, rather than time-wasting, is another habit that gives pupils great advantages that accumulate rapidly over time. Compared to a pupil who wastes just the first two minutes of practice each lesson, a pupil who focuses gains an extra 10,000 minutes of learning from Year 7 to Year 11 (a rough check on this figure follows below). A simple cue to start practice, such as ‘Ready… go!’, is powerful when it is consistently applied. If all teachers in the school give the same cue, it makes it easier for pupils to establish the habit.
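As a back-of-envelope check on that figure (the five lessons a day and the 190-day school year are my assumptions, not numbers given in the article):

$$2\ \tfrac{\text{min}}{\text{lesson}} \times 5\ \tfrac{\text{lessons}}{\text{day}} \times 190\ \tfrac{\text{days}}{\text{year}} \times 5\ \text{years} = 9{,}500\ \text{minutes} \approx 10{,}000\ \text{minutes}$$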

If teachers and school leaders decide on collective cues and ensure consistency together, they can set their pupils up for habitual success.


References

Bargh, J. A. (1994) ‘The four horsemen of automaticity: awareness, intention, efficiency, and control in social cognition’ in Wyer, R. S. & Srull, T. K. (eds) Handbook of social cognition, vol. 1: basic processes. Hove: Lawrence Erlbaum Associates, pp. 1–40.

Evans, J. & Stanovich, K. (2013) ‘Dual-process theories of higher cognition: advancing the debate’, Perspectives on Psychological Science, 8 (3) pp. 223–241.

Gardner, B., Lally, P. & Wardle, J. (2012) ‘Making health habitual: the psychology of “habit-formation” and general practice’, The British Journal of General Practice, 62 (605) pp. 664–666.

Hull, C. L. (1943) Principles of behavior: an introduction to behavior theory. New York, NY: Appleton-Century-Crofts.

James, W. (1899) Talks to teachers on psychology. New York, NY: Metropolitan Books/Henry Holt and Company.

Lally, P. & Gardner, B. (2013) ‘Promoting habit formation’, Health Psychology Review, 7 (sup. 1) pp. 137–158.

Lally, P., Wardle, J. & Gardner, B. (2011) ‘Experiences of habit formation: a qualitative study’, Psychology, Health & Medicine, 16 (4) pp. 484–489.

Lally, P., van Jaarsveld, C. H. M., Potts, H. W. W. & Wardle, J. (2010) ‘How are habits formed: modelling habit formation in the real world’, European Journal of Social Psychology, 40 (6) pp. 998–1009.

Verplanken, B. & Aarts, H. (1999) ‘Habit, attitude, and planned behaviour: is habit an empty construct or an interesting case of goal-directed automaticity?’, European Review of Social Psychology, 10 (1) pp. 101–134.

Wood, W. & Neal, D. T. (2007) ‘A new look at habits and the habit-goal interface’, Psychological Review, 114 (4) pp. 843−863.

Wood, W., Quinn, J. M. & Kashy, D. A. (2002) ‘Habits in everyday life: thought, emotion, and action’, Journal of Personality and Social Psychology, 83 (6) pp. 1281−1297.

Wood, W., Tam, L. & Witt, M. G. (2005) ‘Changing circumstances, disrupting habits’, Journal of Personality and Social Psychology, 88 (6) pp. 918−933.

Read this book now! Why Don’t Students Like School?

This issue:
Why Don’t Students Like School? by Daniel Willingham

Published in 2009, Professor Daniel Willingham’s book Why Don’t Students Like School? set out to describe as simply as possible – but no simpler – the main lessons that cognitive psychology could teach us about memory, learning, focus, motivation and a host of other topics vital to education. In doing so, it helped catalyse a revival of interest in evidence-informed education that is still spreading around the world. Consultant and former headteacher Tom Sherrington tells us why it turned the way he taught and led teaching upside down.

Tom Sherrington

It’s incredible to consider that, as teachers, we’re only recently beginning to understand the processes we muddle through every day. Thankfully, help is at hand. Way up high on my list of ‘books every teacher should read’ is Why Don’t Students Like School? by Daniel Willingham. Packed with insights, it’s a masterpiece of communication, making the complex world of cognitive science accessible for teachers.

Written in 2009, the book continues to be highly influential. My recent re-reading made me realise just how many ideas I’ve encountered in the last few years are covered in the book – from his sound debunking of learning styles to his exploration of knowledge as the foundation of skills and the famous line ‘memory is the residue of thought’. Of course, Willingham is not alone in his field but, without question, he is one of its best communicators and we owe him a great deal for his ability to penetrate the wall of institutional inertia and edu-dogma with evidence and wisdom.

My favourite chapter in Why Don’t Students Like School? is ‘Why do students forget everything I say?’ This frustration resonates widely with teachers I talk to. Willingham offers advice that he suggests ‘may represent the most general and useful idea that cognitive psychology can offer teachers’: Review each lesson plan in terms of what the student is likely to think about. Superficially this may sound blindingly obvious but actually it requires a great deal of thought.

Take an example – learning about thermal decomposition in chemistry. A teacher might reasonably think it useful – as well as memorable – to explore this by engaging in a practical experiment. If you heat copper carbonate, a green powder, it becomes copper oxide, a black powder, plus invisible carbon dioxide. However, if you consider what students think about whilst doing an experiment, largely it is the business of assembling apparatus and then the process of examining the original green stuff that turns into black stuff. Most of the thinking is at a macro human scale, not about atoms, formulae, chemical bonds or even the terminology. They will form valuable memories about doing experiments and some general ideas about chemical change – but not necessarily that copper carbonate decomposes to copper oxide or the related formula.
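For readers who want the formula the pupils are missing – this is standard chemistry, not a quotation from Willingham – the reaction is the thermal decomposition of copper carbonate:

$$\mathrm{CuCO_3\,(s)} \;\xrightarrow{\ \text{heat}\ }\; \mathrm{CuO\,(s)} + \mathrm{CO_2\,(g)}$$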

If you want students to learn this reaction in detail – i.e., to retain the knowledge in long-term memory – they must spend time thinking about the words and their semantic meaning; if you want them to develop a mental model of atoms being rearranged, they need to spend time thinking about a representation of the model you want them to learn.

That’s my example, but one that Willingham cites is the use of PowerPoint. If you ask a class to present their findings from research on the Amazon rainforest, for example, via PowerPoint, they will need to spend time thinking about its features – fonts, graphics, animation tools and so on, especially if those skills are recently acquired. This is time they are not spending thinking about features of the Amazon rainforest. In the long term, they may retain more knowledge of the PowerPoint features than the key aspects of the Amazon because of the focus of their thinking. Memory is the residue of thought – so make students do things that give them no choice but to think about the ideas you want them to learn.

This powerful advice feeds into various other considerations. Willingham suggests teachers explicitly construct learning so that students think about what new words mean, rating them or ranking them; he recommends using ideas that create conflicts to resolve or using narrative structures that place ideas in meaningful sequences. At the same time, ‘attention grabbers’ and discovery learning need careful consideration because unless they provide immediate feedback that the subject is being thought about in the right way, there’s a big risk that students think about the wrong things; they will remember things but not what you actually intended.

Another favourite chapter is ‘Why is it so hard for students to understand abstract ideas?’ The key piece of advice is to make deep knowledge the spoken and unspoken emphasis. This means avoiding giving the impression that learning some superficial facts is enough; there are always underlying models and concepts. It means making explicit comparisons between connected ideas such as literary themes or techniques in different poems, building up students’ knowledge of different examples of abstract ideas, but not just learning each example at a surface level.

I love the way Willingham acknowledges how hard it is to build abstract understanding while also giving very clear guidance as to where to focus our energies. That sense of being grounded in teachers’ realities helps him to communicate his thoughts. Helpfully, Willingham devotes some of his thinking to the nature of teachers’ professional learning. His main advice should be no surprise: teaching, like any cognitive skill, must be practised to be improved. This needs experience – but that’s not enough; it also requires conscious effort and feedback. ‘Education makes better minds, and knowledge of the mind can make better education.’ Amen!

Professor Daniel Willingham’s book Why Don’t Students Like School? is published by Jossey-Bass (ISBN 047059196X) and is available to buy on Amazon.

The fight for phonics in early years reading

One of the most important things a child will do at school is learn to read, but there are few battlefields in educational discourse as contested as how best to teach it. Here, Jennifer Buckingham outlines the evidence base for systematic synthetic phonics as the most reliable method we have – and also why so many find it hard to accept.

There is extensive research on how children learn to read and how best to teach them. One of the most consistent findings from methodologically sound scientific research is that learning to decode words using phonics is an essential element of early reading instruction.1 Language comprehension (vocabulary and understanding of semantics, syntax, and so on) is also essential to gain meaning from reading, of course. But children must first be able to accurately identify the words on the page or screen before they can bring meaning to what they are reading.2

Many high-quality studies over the last two decades in particular, including systematic reviews, have shown that classroom programmes and interventions with an explicit, systematic phonics instruction component are more effective in teaching children to read than those without such a component.3 More recently, a teaching method called systematic synthetic phonics (SSP) has garnered strong evidence in its favour.4 In synthetic phonics, teaching starts with a sequence of simple letter-sound correspondences, building to the more complex code as children master the skills of blending and segmenting.5
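To make those terms concrete, here is a toy sketch of what ‘segmenting’ (splitting a word into its graphemes) and ‘blending’ (running the corresponding sounds together) amount to. The grapheme inventory and phoneme notation below are illustrative inventions of mine, not drawn from any actual SSP programme.

```python
# Toy model of segmenting and blending in synthetic phonics.
# The letter-sound correspondences below are illustrative only.
GRAPHEME_TO_PHONEME = {
    "s": "/s/", "a": "/a/", "t": "/t/", "p": "/p/", "i": "/i/", "n": "/n/",
    "sh": "/sh/",  # a digraph: two letters standing for one sound
}

def segment(word):
    """Split a word into graphemes, preferring two-letter matches."""
    graphemes, i = [], 0
    while i < len(word):
        if word[i:i + 2] in GRAPHEME_TO_PHONEME:
            graphemes.append(word[i:i + 2])
            i += 2
        else:
            graphemes.append(word[i])
            i += 1
    return graphemes

def blend(word):
    """Sound out each grapheme, then blend the sounds into the word."""
    return " ".join(GRAPHEME_TO_PHONEME[g] for g in segment(word))

print(blend("tap"))   # /t/ /a/ /p/
print(blend("ship"))  # /sh/ /i/ /p/
```

In the classroom it is the child, not a program, doing this work; the point of the synthetic phonics sequence is that complex correspondences such as the ‘sh’ digraph are introduced only once the simple single-letter ones are secure.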

Systematic synthetic phonics is well-researched in school classrooms and in clinical settings. It is also supported by cognitive science research on the processes that take place in the brain when children learn to read. This research shows that reading is not like speaking: the human brain is not innately wired for reading to develop automatically with exposure to print. Making the cognitive connections between print, sound and meaning requires making physical neurological connections between three distinct areas of the brain.6 Some children create these neural connections relatively quickly but others require methodical, repeated and explicit teaching.7 This is particularly true for a complex language like English, where the relationship between letters and sounds is not uniform across words.

Despite the clear evidence supporting systematic phonics instruction, there is still debate about the role of phonics in learning to read and how to teach it effectively. The reasons for this are many, and interrelated. While the points listed here are drawn from the Australian context and experience (particularly in the state of New South Wales), they are also relevant in other countries.

  • Many teachers do not have sound knowledge of language constructs and the most effective ways to teach reading, and generally overestimate what they know.8 A recent study of prep teachers (prep being the first year of formal schooling) found that only 53% could correctly define a morpheme and only 38% could correctly define phonemic awareness.9 The latter is a powerful predictor of reading ability and a critical element of initial reading instruction.10
  • Initial Teacher Education courses do not consistently provide graduate teachers with evidence-based reading instruction strategies and this is often compounded by low-quality professional learning.11
  • Contradictions within one department lead to teachers being given strongly conflicting messages. For example, the NSW government reading programme ‘L3’ is inconsistent with a document on effective, evidence-based reading instruction produced by the same government.12
  • Important policy decisions are frequently made by education ministers and department executives who don’t have a good understanding of the evidence and research. They are often guided by people whose knowledge and experience is in literacy more broadly, or even just primary education generally, whereas early reading instruction and intervention is a highly specialised field of research and expertise. An example of this was the NSW Ministerial Advisory Group on Literacy and Numeracy (MAGLAN), which produced a report that misrepresented important educational strategies such as response to intervention.13
  • Very few literacy teaching programmes and interventions are subjected to rigorous trials or evaluations.14
  • Endorsement of expensive and unproven interventions that invoke neuroscience or involve computers, or both. There are numerous programmes that claim to help children learn to read by doing anything but actually teaching them to read.15
  • The influence of people in both the public and private sectors who continue to promote theories of reading that do not reflect current research on effective reading instruction.16
  • Rejection of research-informed policy proposals without careful consideration of the evidence, instead relying on conspiracy theories and ad hominem attacks.17
  • The perception of some programmes and policies as being ‘too big to fail’. It can take years, and sometimes even decades, to replace them even after research has shown them to be ineffective (for example, Reading Recovery).18
  • Significant investment in resources, buildings and furniture that are connected to outmoded and ineffective ways of teaching. For example:
      • Schools have spent thousands of dollars building up libraries of levelled readers and other resources designed for reading methods based around whole language and ‘three-cueing’ approaches. This makes it difficult for those schools to make dramatic changes to reading instruction.
      • School furniture and buildings are frequently designed in ways that do not accommodate explicit instruction pedagogies. The open classroom is one example of this: research has shown that noise levels in open classrooms are a problem for students.19 Yet many new government and Catholic schools are being built with open classrooms that exacerbate these problems.
  • Widespread misinformation about effective teaching methods, including the misrepresentation of synthetic phonics and the misuse of terms like ‘explicit teaching’.20

Despite all of this, there are reasons for optimism. The NSW government has recently allowed public schools to use funding that was earmarked for the Reading Recovery programme for other reading interventions; the Australian government is negotiating with the state and territory governments to introduce a Year 1 Phonics Check; and the newest version of the Australian Curriculum has a much greater emphasis on phonemic awareness and phonics. Acknowledgement of the importance of explicit instruction is growing, even if it is not always put perfectly into practice. Much has been achieved but there is still much to be done.

Dr Jennifer Buckingham is a senior research fellow and director of the FIVE from FIVE reading project at The Centre for Independent Studies (www.fivefromfive.org.au). Jennifer’s doctoral research was on effective instruction for struggling readers and she has written numerous reports and peer-reviewed articles on reading instruction and literacy policy. She is a board member of the Australian Institute for Teaching and School Leadership, an Associate Investigator at the Centre for Cognition and Its Disorders at Macquarie University, a member of the Learning Difficulties Australia Council, and recently chaired an Australian Government expert advisory panel on the introduction of a Year 1 literacy and numeracy check.


References

1. Hulme, C. & Snowling, M. J. (2013) ‘Learning to read: what we know and what we need to understand better’, Child Development Perspectives, 7 (1) pp. 1–5.

2. Stuart, M., Stainthorp, R. & Snowling, M. J. (2008) ‘Literacy as a complex activity: deconstructing the simple view of reading’, Literacy, 42 (2) pp. 59–66. www.researchgate.net/profile/Morag_Stuart/publication/233440978_Learning_to_read_developing_processes_for_recognizing_understanding_and_pronouncing_written_words/links/576bb64e08aef2a864d42e42.pdf

3. Ehri, L. C., Nunes, S. R., Stahl, S. A. & Willows, D. M. (2001) ‘Systematic phonics instruction helps students learn to read: evidence from the National Reading Panel’s meta-analysis’, Review of Educational Research, 71 (3) pp. 393–447. journals.sagepub.com/doi/abs/10.3102/00346543071003393

4. Johnston, R. S., McGeown, S. & Watson, J. E. (2011) ‘Long-term effects of synthetic versus analytic phonics teaching on the reading and spelling ability of 10 year old boys and girls’, Reading and Writing, 25 (6) pp. 1365–1384. doi.org/10.1007/s11145-011-9323-x; Seidenberg, M. (2017) Language at the speed of sight. New York, NY: Basic Books.

5. Five from Five (no date) ‘Explicit phonics instruction’. www.fivefromfive.org.au/explicit-phonics-instruction/

6. Wolf, M., Ullman-Shade, C. & Gottwald, S. (2016) ‘Lessons from the reading brain for reading development and dyslexia’, Australian Journal of Learning Difficulties, 21 (2) pp. 143–156. doi.org/10.1080/19404158.2016.1337364

7. Rupley, W. H., Blair, T. R. & Nichols, W. D. (2009) ‘Effective reading instruction for struggling readers: the role of direct/explicit teaching’, Reading & Writing Quarterly, 25 (2–3) pp. 125–138. doi.org/10.1080/10573560802683523

8. Snow, P. (2016) ‘Elizabeth Usher Memorial Lecture: language is literacy is language – positioning speech-language pathology in education policy, practice, paradigms and polemics’, International Journal of Speech-Language Pathology, 18 (3) pp. 216–228. www.tandfonline.com/doi/full/10.3109/17549507.2015.1112837

9. Stark, H. L., Snow, P., Eadie, P. A. & Goldfeld, S. R. (2016) ‘Language and reading instruction in early years’ classrooms: the knowledge and self-rated ability of Australian teachers’, Annals of Dyslexia, 66 (1) pp. 28–54. link.springer.com/article/10.1007/s11881-015-0112-0

10. Melby-Lervåg, M., Lyster, S. A. & Hulme, C. (2012) ‘Phonological skills and their role in learning to read: a meta-analytic review’, Psychological Bulletin, 138 (2) pp. 322–352. dx.doi.org/10.1037/a0026744

11. Meeks, L. J. & Kemp, C. R. (2017) ‘How well prepared are Australian preservice teachers to teach early reading skills?’, Australian Journal of Teacher Education, 42 (11) pp. 1–17. dx.doi.org/10.14221/ajte.2017v42n11.1

12. Neilson, R. & Howell, S. (2015) ‘A critique of the L3 Early Years Literacy Program’, Learning Difficulties Australia Bulletin, 47 (2) pp. 7–12; NSW CESE (2017) ‘Effective reading instruction in the early years of school’. Sydney: NSW Centre for Education Statistics and Evaluation. www.cese.nsw.gov.au//images/stories/PDF/Effective_Reading_Instruction_AA.pdf

13. Buckingham, J. (2012) ‘Mistakes writ large if reading goes wrong’, The Sydney Morning Herald, 7 May. www.smh.com.au/federal-politics/political-news/mistakes-writ-large-if-reading-goes-wrong-20120506-1y6ry.html; Ministerial Advisory Group on Literacy and Numeracy (2012) ‘Report on the outcomes of consultation: literacy and numeracy action plan – initial framework’.

14. Meiers, M., Reid, K., McKenzie, P. & Mellor, S. (2013) Literacy and numeracy interventions in the early years of schooling: a literature review: report to the Ministerial Advisory Group on Literacy and Numeracy. research.acer.edu.au/policy_analysis_misc/20

15. Han, E. (2013) ‘Brain Gym claims challenged’, The Sydney Morning Herald, 13 January. www.smh.com.au/nsw/brain-gym-claims-challenged-20130112-2cmes.html; Wood, P. (2017) ‘Experts question Arrowsmith program for kids with learning difficulties’, ABC News Online, 21 March. www.abc.net.au/news/2017-03-21/experts-question-arrowsmith-program-for-learning-difficulties/8363690

16. Emmitt, M., Hornsby, D. & Wilson, L. (2013) ‘The place of phonics in learning to read and write’. Australian Literacy Educators’ Association. www.alea.edu.au/documents/item/773

17. Mulheron, M. (2017) ‘President writes: the darker purpose’, Education NSW Teachers Federation website. www.education.nswtf.org.au/education27/news-and-features-1/president-writes/

18. NSW CESE (2015) Reading recovery: a sector-wide analysis. Sydney: NSW Centre for Education Statistics and Evaluation. www.cese.nsw.gov.au/publications-filter/reading-recovery-evaluation

19. Mealings, K. (2015) ‘Students struggle to hear in new fad open-plan classrooms’, The Conversation, 10 February. www.theconversation.com/students-struggle-to-hear-teacher-in-new-fad-open-plan-classrooms-37102

20. Adoniou, M. (2017) ‘How the national phonics test is failing England and why it will fail Australia too’, EduResearch Matters, Australian Association for Research in Education. www.aare.edu.au/blog/?p=2533

Battling the Bandwidth of your Brain

Why some people think cognitive load theory might be the most important thing a teacher can understand.

Recently, there has been a surge of interest in cognitive load theory, perhaps aided by comments made by Dylan Wiliam on Twitter that it is ‘the single most important thing for teachers to know’ (Wiliam, 2017). So, what is cognitive load theory, how did it arise and what are the implications for teachers in the classroom?

The origins of cognitive load theory can be traced back to the results of an experiment published by John Sweller and his colleagues in the early 1980s (Sweller, 2016). In this experiment, students were asked to transform a given number into a goal number by using a sequence of two possible moves; they could multiply by 3 or subtract 29. Unknown to the students, the problems had been designed so that they could all be solved by simply alternating the two moves e.g. ×3, –29 or ×3, –29, ×3, –29.

The students who were given these problems were all undergraduates and they solved them relatively easily. However, very few of them figured out the pattern.

By that time, it had been established that people solve novel problems by means-ends analysis: problem-solvers work backwards from the goal, comparing their current state with the goal state and looking for moves that will reduce the distance between the two. Sweller wondered whether this process drew so heavily on the mind’s resources that there was nothing left over for learning the pattern. In other words, solving problems induces a heavy ‘cognitive load’.
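To make the structure of those puzzles concrete, here is a minimal sketch. The start and goal numbers are illustrative values of mine that happen to fit the hidden pattern; they are not items from the original study.

```python
def alternate_moves(start, goal, max_moves=10):
    """Apply the hidden pattern – alternate ×3 and −29 – and return
    the trace of values if it reaches the goal within max_moves."""
    value, trace = start, [start]
    for i in range(max_moves):
        value = value * 3 if i % 2 == 0 else value - 29
        trace.append(value)
        if value == goal:
            return trace
    return None

print(alternate_moves(31, 64))  # [31, 93, 64]
print(alternate_moves(15, 28))  # [15, 45, 16, 48, 19, 57, 28]
```

A student working by means-ends analysis searches the space of possible moves instead of spotting this regularity – which is precisely the resource-hungry process Sweller was interested in.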

It has been known since the 1950s that our short-term memory is severely limited. In a classic 1956 psychology paper, George Miller argued that the maximum number of items that can be held in memory for a short period is about seven (Miller, 1956). However, an important question arises: what is an ‘item’? One of the tasks Miller examined was reciting a string of random digits, with each digit representing one item. Compare this with a string of letters such as ‘SPIDERS’ – this is no longer seven items. Instead, it represents a single item, because most people already possess a concept of what a spider is. An item is therefore the largest unit of meaning that we are dealing with, and this will depend upon what a person already knows. When we gain new knowledge – new meanings – we reduce the number of items that we need to consider, a process known as ‘chunking’.

We now know that different kinds of item impose different limits (Shiffrin & Nosofsky, 1994). Words, for instance, consume more capacity than digits, cutting short-term capacity further. Many cognitive scientists today accept a model of the mind that includes a ‘working memory’ (e.g., Baddeley, 1992). The concept of working memory is similar to that of short-term memory except that it doesn’t just store information, it also manipulates it. The limitations of working memory are what lead to cognitive overload.

Sweller’s initial experiments did not involve tasks that are educationally relevant, and so a natural progression was to examine the kinds of problems that students are asked to solve in real academic courses. Working with Graham Cooper, Sweller tested whether school students and university students learned more by solving simple algebra problems or by studying worked examples. If Sweller’s hunch were correct, students might well be able to solve some of these problems, but the cognitive load imposed by doing so would lead them to learn little. Conversely, by imposing less cognitive load, the worked examples should lead to more learning. This is exactly what the research found (Sweller & Cooper, 1985), and the finding has now been replicated in many different situations involving a wide variety of subject matter (Sweller, 2016).
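For readers unfamiliar with the term, a ‘worked example’ simply presents a complete, step-by-step solution for study rather than asking the student to produce one. The item below is an illustration in the general spirit of such algebra materials, not one of Sweller and Cooper’s actual problems – solve for $a$:

$$\frac{a+b}{c} = d \quad\Rightarrow\quad a + b = dc \quad\Rightarrow\quad a = dc - b$$

Each line makes one move explicit (multiply both sides by $c$, then subtract $b$), so the student’s working memory is spent on the structure of the solution rather than on searching for it.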

However, these results seemed counterintuitive and presented researchers with a conundrum. How is it possible for small children to pick up their mother tongue by simple immersion? Wouldn’t that lead to cognitive overload? If Sweller and colleagues were right, wouldn’t we need to give children worked examples of talking and listening in order for them to learn?

The answer to this problem may be found in the work of David Geary. His suggestion is that some forms of learning are ‘biologically primary’. Humans have presumably been speaking a kind of language for hundreds of thousands, perhaps millions, of years and this is long enough for evolution to have had an impact, equipping babies with a mental module for picking up language without conscious effort. In contrast, reading and writing (and all other academic subjects, for that matter) have been around for only a few thousand years, and for much of that period only a small elite engaged with them. They therefore cannot have been shaped by evolution; instead, they rely on repurposing biologically primary mental modules, and so are known as ‘biologically secondary’ (Geary, 1995).

Cognitive load theory suggests that all biologically secondary knowledge must pass through our limited working memories in order to be stored in long-term memory. For learning new, complex academic concepts such as algebra or grammar or the causes of the First World War – as opposed to learning simple lists – it is probably wise to try to minimise cognitive load by avoiding approaches that look like problem solving and to instead utilise those that provide clear and explicit, step-by-step guidance (Kirschner et al., 2006).

In the process of its development, cognitive load theory has also incorporated a number of learning effects that are related to the load that they impose. For instance, the ‘split-attention effect’ demonstrates that it is better to place labels directly on a diagram rather than provide an adjacent key because this avoids the need to cross-reference, which imposes unnecessary load. Similarly, the ‘redundancy effect’ shows that it is best to avoid adding unnecessary additional information for students to process. For example, if a diagram of the heart clearly shows the direction of blood flow then adding a label saying which way the blood flows is redundant (Sweller, 2016). This has clear implications for teaching – don’t provide lots of text on a PowerPoint slide and simultaneously explain the same concepts verbally. In general, it is best to minimise the number of different things that students have to pay attention to at any one time. Remove those fancy borders, animations and cartoons unless they are fundamental to what is being communicated.

And this is why cognitive load theory is so powerful. Unlike much of what we are told during training and professional development, cognitive load theory has real implications for teachers in the classroom that are based on sound evidence derived from robust research designs. Perhaps Dylan Wiliam is onto something. Perhaps cognitive load theory is an important thing for teachers to know.


References

Baddeley, A. (1992) ‘Working memory’, Science, 255 (5044) pp. 556–559.

Geary, D. C. (1995) ‘Reflections of evolution and culture in children’s cognition: implications for mathematical development and instruction’, American Psychologist, 50 (1) pp. 24–37.

Kirschner, P. A., Sweller, J. & Clark, R. E. (2006) ‘Why minimal guidance during instruction does not work: an analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching’, Educational Psychologist, 41 (2) pp. 75–86.

Miller, G. A. (1956) ‘The magical number seven, plus or minus two: some limits on our capacity for processing information’, Psychological Review, 63 (2) pp. 81–97.

Shiffrin, R. M. & Nosofsky, R. M. (1994) ‘Seven plus or minus two: a commentary on capacity limitations’, Psychological Review, 101 (2) pp. 357–361.

Sweller, J. (2016) ‘Story of a research program’, Education Review, 23.

Sweller, J. & Cooper, G. A. (1985) ‘The use of worked examples as a substitute for problem solving in learning algebra’, Cognition and Instruction, 2 (1) pp. 59–89.

Wiliam, D. (2017) ‘I’ve come to the conclusion Sweller’s Cognitive Load Theory is the single most important thing for teachers to know’ [Twitter], 26 January. twitter.com/dylanwiliam/status/824682504602943489