Everything starts with the curriculum

We are starting to see policy makers and politicians engage with evidence and with the research discussions that increasingly dominate discourse among educators. In this article, Nuno Crato, Portugal's Minister of Education and Science from 2011 to 2015, describes his experience of leading education policy reform in a direction characterised by how firmly it engages with recent ideas about curriculum, learning and assessment.

In June 2011, Portugal was coming to grips with the most serious financial crisis of its recent history. The state was broke and unable to adopt the short-term solutions available to monetarily independent countries. The country had joined the euro 12 years earlier and the state was unable to finance its debt. In May 2011, a bailout had been agreed with the IMF and the EC, and the government had fallen. Elections were held and a new prime minister appointed: the social democrat Pedro Passos Coelho. I was in Berlin on a stopover during a conference trip when I received a phone call and an invitation to join the government.

I am not a politician and did not join any party, but my strong educational convictions were well known to the new prime minister. I barely knew him, but he gave me total support for the reforms I had been preaching for years through books, opinion articles, and press interviews. These reforms are easy to enumerate: a strong, demanding, and well-structured knowledge-based curriculum, frequent student evaluation, rigorous initial teacher training, school autonomy, support for failing students, vocational paths, and results-based school incentives. In practical terms, they were a continuation and acceleration of Portugal's progress in education. But in terms of discourse, they were a paradigm shift from a competences-based, student-centred education to a knowledge-based, more direct teaching approach.

Since 2000, our country had been progressively abandoning the romantic and failed ideas that dominated the school reforms of the ’80s and ’90s: a loose curriculum, no external evaluation of students, no memorisation, spurning high culture, emphasising popular culture, and so on. In 1995, the TIMSS results had been a wake-up call, and then, step by step, different governments put in place a series of reforms that went essentially in one direction: more attention to results.

This was done by introducing some exams, discussing school results, and setting up rules for teacher evaluation. But so far it had been done in a very inconsistent way. At the same time, the education apparatchiks were still preaching the benefits of a loose curriculum and trying to impose non-directive teaching methods.

By 2011, teachers were tired of this constant interference. For years, new fads had been imposed one after another: competences instead of knowledge, learning in context, discovery learning – you name it. Paradoxically (or maybe not!), the ministry was controlling processes while resisting any evaluation of results.

The reforms we introduced in 2011 and in the subsequent years were greeted by teachers and parents as a welcome increase in quality and rigour. Unions and opposition political parties contained their resistance and only later became hostile. But I think the results speak for themselves.

From 2011 to 2015, Portugal not only continued to improve its educational system, but also accelerated that improvement.

From the first cabinet meeting, it was clear to me that the governmental priorities in education were going to be dominated by the need to reduce expenditure. The agreement that had been signed with the troika (IMF, ECB and EC) had singled out education for a significant budget reduction. Teachers represented about one-third of civil servants in the country, and salaries in schools, universities, the ministry's services, and research centres represented about half of the state's salary budget.

We clearly had to design a way out; ‘obtain more with less’ became the motto. To put this into practice without hurting education simply meant we had to concentrate our efforts on the essentials. And the essentials are not teachers’ salaries, school buildings, or computer equipment. The essentials are students’ learning, students’ skills development, and students’ ethical growth. In a word: students.

We would build upon previous progress. In my opinion, this progress was due essentially to one key factor: increased attention to results.

Changes started at the turn of the century. In 1996 and 1997, first-wave TIMSS results were released and revealed the appalling situation Portuguese students were in. In 2001, a fierce political and legal battle forced the ministry to disclose nationwide school grades, showing finally that some schools were able to raise their students to reasonable levels while others were unable to do the same. More interestingly, the school divide did not coincide with the socioeconomic status of the students. This led to a national debate in which it became clear to parents that schools were different, and some were doing a better job than others. The ministry, school principals, and teachers were put under healthy pressure – they were challenged to do better.

In 2006, a new minister introduced exams at the end of compulsory education (at the time, 9th grade). In 2009, another minister introduced standards as a way of making the curriculum clearer and more detailed. The narrative was still relatively romantic: to encourage students to learn in a joyful environment, and so on. But the practical changes were clear.

2011: Everything starts with the curriculum

Portugal used to have a very centralised and rigid curriculum. In the school year 2011/12 we decided to assign more school time for reading and mathematics. We also gave more freedom to schools to reorganise the school timetable according to their needs.

But this was only the first change. Throughout this first year we prepared the ground for the second one by restructuring the mandatory curriculum to give more class time to the fundamental subjects. To begin with, this meant reading and mathematics, then history, geography and sciences, then English. This was done at the expense of vague and unstructured subjects/themes such as ‘learning in company’, the ‘project area’, ‘civic education’ and the like. Although these topics may have corresponded to important activities and ethical development, they were not structured. Frequently they were just a source of vague politically correct indoctrination – or simply a waste of time. They were not grounded in any substantive subject knowledge.

In parallel, we set up new standards, and by that we meant detailed lists of learning outcomes. Those lists needed to be precise, well structured, and conducive to sequential learning. Moreover, the listed contents should be precise enough to convey unambiguously to students, teachers, parents, textbook authors, and examiners what the desired outcomes were. This definition stands in sharp contrast to the previously adopted ‘competences’ approach. In fact, one of our major criticisms (made in a series of documents(1)) of this approach, imported from the Franco-Swiss work of Perrenoud(2) and other authors, was that its learning outcomes were impossible to pinpoint and to evaluate. Another major criticism we made was its undervaluation of knowledge, which was considered important only when it led to practical competences.

When needed, we also adjusted the curricular programmes. In our tradition, a ‘programme’ is a reasoned general explanation of subject content for a given discipline at a given school year or cycle of years. The new standards complemented the programmes, but sometimes the programme itself had to be adjusted.

Underlying these reforms there was a firm belief in students’ capacity to learn more and to progress further. Consequently, the new curriculum was much more ambitious, much more demanding, and much more rigorous.

Evaluation helps students

To learn is one thing, but how do we know that we have learnt? The second major area of progress we made was to generalise, improve, and increase the frequency of standardised tests. By 2015 we had put in place standardised tests in the 4th, 6th, 8th, and 12th grades. These tests were closely aligned with the curricular standards. They were public, schools’ average results were made public, and action was taken as a result. Failing and near-to-failing students received special help, and schools received resource incentives whenever they were able to show that these resources were used to improve students’ results. We put in place a complex system of credits that would reward and encourage those who could simultaneously reduce retention and improve students’ results in standardised tests.

The educationalist apparatchiks abhorred these changes, but they were unable to rely on their well-rehearsed, fallacious arguments. Results were obviously improving, and not only for the elite students: the number of failing and near-to-failing students decreased, and drop-out rates decreased. Teachers predominantly saw end-of-cycle tests as a boon to their efforts to encourage students to learn.

Alternatives help students

One of the most propagated but false dilemmas in education is the so-called opposition between rigour and inclusion – the idea that we cannot sharply improve education for all. It is the argument that if we are demanding, then we are increasing students’ inequalities; and if we want to help all of them to progress, then we should be guided by the weakest students’ needs and learning pace.

This dilemma assumes many forms, but it’s a false one. Can’t we aim at high standards for all and give extra help to struggling students? Of course we can – and that’s what we did in 2012. Through a series of legal provisions, the ministry gave more freedom to schools, allowing and encouraging them to assign teacher hours for this type of extra support. Simultaneously, we allowed the creation of something akin to ‘temporary tracking’. Struggling kids were not pulled out of their regular classes, but had additional study hours with dedicated teachers. For each student, this was temporary: it lasted for months, not years. I’m convinced this type of measure helped everybody.

Vocational training for students wanting to finish schooling with a professional certificate was the second most successful measure. Drawing on various international experiences, we created two types of vocational path: one regular, the other for students with special academic difficulties. This helped everybody.

2015: Things can change rapidly when we pay attention to the essentials

When the PISA and TIMSS results came out in December 2016, many people were surprised by the dramatic progress of Portuguese students. For the first time in our history, we exceeded the OECD average for PISA, and we did so in all three PISA areas: mathematics, reading, and sciences. In TIMSS we outperformed many more-advanced countries, jumping from 475 points to 541 points in 4th grade maths. When we started, in 1995, only two countries were below us in the rankings. Now, 36 countries were below us. And among these was Finland – which was no minor success for us.

In many countries, from Spain to the UK and Argentina, the press highlighted these results. On 6 December 2016, The Economist interviewed me and highlighted the importance of standards, testing and support for under-achieving students.

One of the most reassuring results emphasised by the PISA 2015 report was the fact that Portugal was one of the very few countries/regions able to simultaneously increase the number of top performers and reduce the number of low performers. I hope readers will forgive me for being proud of our students’ results.


References

1. See, for example, Crato, N. (2006) Eduquês em discurso directo: uma crítica da pedagogia romântica e construtivista. Lisbon: Gradiva.

2. Perrenoud, P. (2011) Construire des compétences dès l’école. Montrouge: ESF Éditeur.

Cognitive load theory in the classroom

Cognitive load theory is rapidly becoming one of the most talked-about theories of how we learn. But what are the implications for how we teach? Teacher and blogger Tom Needham outlines the basics, and what they could mean for you, in the first of this three-part series.

Six years ago, I read Why Don’t Students Like School? by Daniel Willingham, a text that not only made me reconsider almost all aspects of how I was teaching but also acted as a springboard into the depths of educational research. His explanation of the importance of memory and the conceptual distinction between working and long-term memory revolutionised how I thought about instruction and made it abundantly clear that I had not been focusing upon the vital notion of retention. Cognitive load theory is also based on the conceptual difference between working and long-term memory and provides a number of strategies to optimise instruction within that framework.

An overview of some of the theory

What is it that makes experts proficient? In 1973, a study(1) was conducted to investigate what made grandmaster chess players superior to other players. While an intuitive answer may have attributed their dominance to more proficient problem-solving abilities, the application of a generic ‘means-ends’ analytical approach or the fact that they weighed up and considered a wider range of alternative strategies, the reality was a difference in their memories. Players, both expert and novice, were shown a chessboard with pieces arranged in plausible and typical game situations for five seconds. When asked to recall the positions of the chess pieces, expert players were significantly and consistently better than novices.

However, if the pieces were arranged randomly, then this gap in performance disappeared: experts and novices performed the same. With the random configurations, experts could not rely upon recalling thousands of game configurations, as the pieces did not conform to the game patterns they had stored in long-term memory. Similar results have also been found in other domains, including recall of text and algebra. The conclusion of these studies was that when solving problems or engaged in cognitive work, experts within a field rely upon their larger and more developed long-term memory stores – patterns of information that are also called schemata. While short-term memory has a limited capacity, long-term memory capacity is vast and seemingly endless.

Recognising the fact that novices have less relevant knowledge stored in their long-term memory, Sweller et al. explain: ‘Novices need to use thinking skills. Experts use knowledge.’(2) Because ‘thinking skills’ rely upon working memory, an aspect of cognition that has a small and fixed capacity for holding and manipulating items, novices soon reach those limits and, due to excessive cognitive load, find tasks difficult or impossible as a result. The implications of these findings are striking for teachers. In a general sense, we should be spending much – if not most – of our time as teachers trying to increase our students’ domain-specific background knowledge, so that we can help them overcome the seemingly unalterable capacity of their short-term memory and recall, apply and use relevant knowledge from their long-term memories. Sweller et al. posit that ‘we should provide learners with as much relevant information as we are able’(3) and that ‘assisting learners to obtain needed information during problem solving should be beneficial’(4). They also posit that ‘providing [learners] with that information directly and explicitly should be even more beneficial’(5). Explicit teaching, at least for novices, is almost certainly preferable to asking students to discover things for themselves. If we are not explicit, there is a chance that students will not retain and understand what we are teaching, resulting in a missed opportunity for them to increase their knowledge.

In order to develop expertise, students need to increase their knowledge; and in order for them to increase their knowledge efficiently, they need direct and explicit teaching.

The worked example effect

In short, the worked example effect refers to the idea that if you want novices to succeed in a particular domain, they would be better off studying the solutions to problems rather than attempting to solve them. Asking students to repeatedly write extended answers to questions ‘unnecessarily adds problem-solving search to the interacting elements, thus imposing an extraneous cognitive load’.(6) In the absence of well-developed background knowledge, students flounder because they have little stored in their long-term memories to help them. Comments in class such as ‘I don’t know how to start’ and ‘What do I write?’ are sometimes indicative of this scenario.

I teach English, and responding analytically to texts is a complex activity containing multiple components, many of which are abstruse for novice learners. If you try to describe these elements, you are forced to use abstract phrases such as ‘sophisticated analysis’ and ‘judicious use of quotations’; and, in the absence of examples, these terms merely serve to mystify the process further. This is the language of mark schemes, terminology that may make sense to experts but leaves novices confused. Creating worked examples – in English this may mean sentences, paragraphs or essays – exemplifies these opaque terms, converting the abstract into the concrete.

Sweller et al. argue that ‘worked examples can efficiently provide us with the problem-solving schemas that need to be stored in long-term memory’.(7) Studying worked examples is beneficial because it helps to build and develop students’ background knowledge within their long-term memories, information that can then be recalled and applied when attempting problems. The grandmasters in the chess study were successful because of the breadth and depth of their background knowledge. Similarly, English teachers find writing (one of the problems in our domain) easy because we have long-term memories that contain myriad ‘problem-solving schemas’ and mental representations of analytical responses to texts.

If we accept the notion that short-term memory capacity is pretty much fixed – as well as the idea that we cannot really teach generic higher-order thinking skills – then building domain-specific background knowledge may be our most important job as teachers. Studying worked examples is more effective and efficient than merely attempting problems. Deconstructing and studying model sentences, paragraphs and essays should, in the long run, be superior to merely writing them.

Research into the worked-example effect in English

In Cognitive Load Theory, Sweller et al. refer to English, the humanities and the arts as ‘ill-structured learning domains’(8) to distinguish them from mathematics and science. They make the point that in maths and science problems, we can ‘clearly specify the various problem states and the problem-solving operators’(9) – essentially rules that dictate process and approach. ‘Ill-structured domains’ do not have such rigid constraints. There are subjective elements within English and often innumerable ways of approaching a task, and different approaches may be considered of equal worth and demonstrate a comparable level of proficiency. The variables within analytical writing can, like the colours within a painter’s palette, be arranged in numerous and diverse patterns; however, these different configurations can be judged to contain equivalent skill and quality. Despite this, the researchers make the important point that ‘the cognitive architecture … does not distinguish between well-structured and ill-structured problems’,(10) meaning that the findings of Cognitive Load Theory apply to all domains. The researchers also explain that ‘the solution variations available for ill-structured problems are larger than for well-structured problems but they are not infinite and experts have learned more of the possible variations than novices’.(11) Over the years, teachers have read, thought about and produced innumerable pieces of analysis and, as a result, have developed rich schemata of this kind of knowledge which they can recall, choose from and apply when dealing with problems.

Sweller et al. point out that ‘even though some exposure to worked examples is used in most traditional instructional procedures, worked examples, to be most effective, need to be used much more systematically and consistently to reduce the influence of extraneous problem-solving demands’(12). A five-year curriculum that systematically and consistently uses worked examples should help students build rich schemata of ‘possible variations’,(13) moving them more quickly and efficiently along the continuum from novice to expert than if they had just completed lots of writing tasks. The constant studying of concrete worked examples is far superior to describing proficiency through abstract and often vague descriptors and success criteria. When describing complex performance in the absence of concrete examples (which is the purpose of a mark scheme), the sheer breadth and possible variation of what is being described necessitates a wide lens of representation. While this is advantageous to the expert, allowing complexity to be summarised and condensed, it is obfuscatory and perhaps even meaningless for students. Experts have abundant and detailed schemata that exemplify abstract terms like ‘critical analysis’, ‘judicious references’ and ‘contextual factors’; novices do not.

In Cognitive Load Theory, two studies directly relevant to English are referenced. In the first,(14) students were given extracts from Shakespearean plays, half receiving texts with accompanying explanatory notes, the other half receiving no additional notes. Perhaps unsurprisingly, the group who were given the notes performed better on a comprehension task. In the other study,(15) students were given an essay question to answer. One group received model answers to study; the other did not. The study found that ‘the worked example group performed significantly better than the conventional problem-solving group’(16).

What does this look like in English?

If we want students to perform well in complex tasks like writing, we should be giving them the necessary information ‘directly and explicitly’. Echoing Engelmann’s sentiment that we should ‘teach everything students will need’,(17) the work of Sweller et al. also points to the superiority of explicit, direct instruction – approaches that seem more efficient and effective for novice learners. With regard to English, we should be explicitly teaching sentence structures and vocabulary. We should provide this information to students when they are completing extended writing, and one way of doing this is through vocabulary tables that contain definitions and examples: not just examples of how the vocabulary words are used, but also examples of the sentence styles that students should include. Each of these example sentences is a worked example in itself and, with effective teacher questioning and annotation, can be a powerful way of turning abstract and amorphous success criteria (‘use sophisticated sentences’/‘use a range of complex sentences’ etc.) into concrete examples that the learner can ‘study and emulate’.(18)

To minimise cognitive load, students have these tables to hand when they are annotating a poem, allowing them to make the link between text and interpretation.

Although Cognitive Load Theory contains a number of different effects, the worked example effect is described by the researchers as being ‘the most important’;(19) and, because of this importance, we have incorporated it into all stages and aspects of our curriculum. Almost always, when students are asked to write, they will have studied a related and relevant worked example.

If you would like to know more about cognitive load theory, here are some useful resources:

1) Greg Ashman’s blog has many detailed posts about CLT.(20)

2) This succinct and practical summary.(21)

3) Oliver Caviglioli’s fantastic graphic overview of Cognitive Load Theory by Sweller, Ayres and Kalyuga.(22)

Parts of this article first appeared on Tom Needham’s blog. Reproduced with permission.


References

1. Chase, W. G. and Simon, H. A. (1973) ‘Perception in chess’, Cognitive Psychology 4 (1) pp. 55–81.

2. Sweller, J., Ayres, P. and Kalyuga, S. (2011) Cognitive load theory. Berlin: Springer, p. 21.

3. Ibid. p. 31.

4. Ibid.

5. Ibid.

6. Ibid. p. 99.

7. Ibid.

8. Ibid. p.102

9. Ibid.

10. Ibid.

11. Ibid.

12. Ibid. p. 100.

13. Ibid. p. 102.

14. Oksa, A., Kalyuga, S. and Chandler, P. (2010) ‘Expertise reversal effect in using explanatory notes for readers of Shakespearean text’, Instructional Science 38 (3) pp. 217–236.

15. Kyun, S. A., Kalyuga, S. and Sweller, J. (in preparation) The effect of worked examples when learning English literature.

16. Ibid. p. 101.

17. Engelmann, S. (2014) Successful and confident students with direct instruction. Eugene, OR: NIFDI Press, p. 35.

18. Sweller, J., Ayres, P. and Kalyuga, S. (2011) p. 100.

19. Ibid. p. 108.

20. www.gregashman.wordpress.com

21. www.goo.gl/5QowTA

22. www.goo.gl/ZfLf1x

How I became the leader of evidence in my school

Jodie Lomax is a teacher at Culcheth High School. Here she writes about her journey towards becoming a research lead – a relatively recent role in some schools whose holder acts as the main facilitator for driving evidence and research use in the classroom.

Between 2013 and 2015, Culcheth High School experienced a tumultuous period, going through four headteachers and many senior leadership changes. This ‘led to the school lacking direction and being without focused leadership’(1). September 2015 saw a new lease of life injected into our school community. We had a new senior leadership team, with Chris Hunt appointed as headteacher; and collectively, the leadership team embarked on a mission to change the culture of the school to ensure that all members of the school community could live up to the school’s motto of being ‘the best that we can be’.

Having joined the Teacher Development Trust (TDT) in 2015, we went on to achieve the Bronze CPD Quality Award, a clear reflection of the commitment made to put professional development at the forefront of the school improvement agenda. However, more needed to be done; and in order to sustain incremental change and improvement, the SLT decided to appoint a research lead to further support the development of teaching, learning and assessment through evidence-based research and best-practice studies.

My journey

During the summer of 2016, I somehow found my way onto EduTwitter. Twitter itself was not unfamiliar to me, but I was curious about the movement I had been hearing so much about. I was immediately inspired by the largely altruistic online education community. I read about Rosenshine’s ‘Principles of Instruction’(2). I saw ‘cognitive load theory’ being discussed and I was introduced to the incredible work of the Learning Scientists. I was hooked. This all seemed so simple – so obvious, to a degree! Why had I not been taught this during my teaching practice?

During staff meetings and professional development sessions, SLT were increasingly referencing evidence and research to rationalise school policies and procedures. There was an underlying tone of ‘research engagement’ and the tide of CPD at CHS was changing. I was introduced to the likes of Dylan Wiliam, Rob Coe and Daisy Christodoulou. Finally, things were beginning to become clear. It was a light bulb moment in my teaching career.

I knew that engaging with research would help me to become a better teacher and enable my colleagues to make better-informed decisions about their own practice.

When the role of research lead was advertised, I knew straight away: this was the role for me. Having worked with trainee teachers for a number of years, I had developed a passion for teacher improvement, and I was frustrated to find that some ITT programmes were simply not moving forward. Trainees were engaging with lesson plans that required them to record which ‘learning style’ their activity would meet, and each lesson saw at least three objectives and three outcomes as ‘non-negotiable’ requirements. I knew that engaging with research would not only help me to become a better teacher but also enable my colleagues to make better-informed decisions about their own practice. Simply put, engaging in research is empowering! Being the research lead has truly revolutionised my teaching. Now I focus on teaching. I focus on the learning that takes place. I focus on assessing that learning and knowing the best ways to plug any gaps.

This was a brand-new role and I was incredibly fortunate that I was given the autonomy from Chris Hunt (headteacher) and Adam Brown (AHT responsible for professional development) to make this role my own. I got to work and immersed myself in Tom Bennett’s report on The School Research Lead published by the Education Development Trust. This report gave me a really clear picture of what my role could be and the kind of things that I should prioritise in order to pursue the development of an evidence culture at Culcheth High School. I consider myself to be the ‘auditor’ who is tasked with ‘evaluating the whole school’s relationship with current research, and then using that baseline evaluation to generate targets and a vision for where the school needed to be’.(3)

My first priority was to introduce my colleagues to the fundamental principles of what makes great teaching and the ‘must-read’ research in a way that was timely and accessible. The newly designed #AlwaysLearning page was born – a portal that contains a range of research summaries, full papers and relevant blogs/articles. This included papers such as What Makes Great Teaching?(4), Rosenshine’s ‘Principles of Instruction’ and Sweller’s study on cognitive load theory(5). The portal now includes hundreds of reading materials that staff can access during #AlwaysLearning sessions, as well as in their own time, should they so wish.

I later developed the #AlwaysLearning newsletter, which was published termly in order to provide staff with ‘research bites’ and practical ideas for how this research could be put into practice. This evolved into a ‘reading briefing’ that would take place on alternate Thursday mornings, allowing staff to collaborate with peers in order to discuss and debate a range of reading materials and consider practical ways to transfer evidence into practice. Now, we are not only engaging with research but also engaging in research, by adopting the PICO format(6), originally designed for evidence-based medicine, as a fundamental part of our whole-school CPD programme.

A massively reduced workload, rising staff morale and consistently improving results are clear evidence that avoiding silver bullets and using research evidence is having a positive impact on teaching and learning in our school.

Over the last two years, the school leadership team have worked tirelessly to ensure that teacher development really is ‘the main thing’(7). In our last inspection, Ofsted found that ‘teachers are provided with a well thought out programme of ongoing training which has the teachers’ standards at its core … where pupils and staff can flourish’. We have been awarded the Silver CPD Quality Award by the TDT, who noted that ‘effective professional development includes significant engagement with external expertise and research to support and challenge practice. There have been many initiatives to support engagement with external providers, including the #AlwaysLearning page, appointment of the school research lead, and the research element within coaching plans.’ Most recently, CHS featured in the Parliamentary Review 2017/18, which highlighted the fact that ‘professional development at Culcheth High School takes an evidence-based approach’. This level of feedback is a true testament to the positive impact that a research lead can have on moving towards a culture where everything that you do is supported by evidence.

In 2017, I was awarded the Accomplished Lesson Study Practitioner Award, accredited by Sheffield Hallam University in conjunction with the TDT. This programme offered me an insight into how the lesson study process can aid teacher engagement with research. I was tasked with designing and implementing a ‘lesson study’ programme that fitted the context of our school and our professional development needs. This provides colleagues with dedicated time to engage with research, collaborate with peers, put evidence into practice and evaluate what works. Feedback has been positive, and colleagues particularly enjoy the opportunity to collaborate with their peers and explore new approaches.

Where are we going?

Colleagues have recently completed an evaluation survey of their own research engagement, inspired by the evaluation tools recommended by the Chartered College of Teaching. 93% of colleagues reported that they are aware of how and where to access appropriate research materials. 85% of colleagues reported that they have an ‘evidence mind-set’ and are conscious of the need to engage with evidence to improve practice, whilst 83% reported that their increasing engagement with evidence and research is improving their practice. A large proportion of colleagues said that they wanted to be given more time – more time to engage in collaborative research with peers, more time to put evidence into practice and more time to evaluate their own practice, using research to drive their practice forward.

The governing body and SLT, in consultation with the school community, have shown huge commitment to driving our evidence-informed school improvement agenda forward by changing the structure of the school timetable: students leave school an hour early one day a fortnight and this time is purely dedicated to #AlwaysLearning and collaborative engagement with research and evidence. This is a clear reflection that embedding evidence-informed professional development is absolutely ‘the main thing’; and a massively reduced workload, rising staff morale and consistently improving results are clear evidence that avoiding silver bullets and using research evidence is having a positive impact on teaching and learning in our school.

As Simon Smith (@smithsmm) wrote in his recent blog, ‘quality research should inform our practice but we need to be wary of assuming there is a silver bullet’. He argues that ‘when teachers are more knowledgeable about what works, that can only be good for schools’. However, as a school it is crucial that we always remain ‘healthily sceptical’. It is only through scepticism that we can effect change.


References

1. The Parliamentary Review (2018) The Parliamentary Review 2017/18 – secondary education: north west. Available at: www.goo.gl/1Tmftz

2. Rosenshine, B. (2012) ‘Principles of instruction: research-based strategies that all teachers should know’, American Educator 36 (1) pp. 12–39.

3. Bennett, T. (2016) The school research lead. Reading: Education Development Trust.

4. Coe, R., Aloisi, C., Higgins, S. and Major, L. E. (2014) What makes great teaching? London: The Sutton Trust.

5. Sweller, J. (1988) ‘Cognitive load during problem solving: effects on learning’, Cognitive Science 12 (2) pp. 257–285.

6. Jones, G. (2015) ‘The school research lead and asking better questions – part one’, Evidence Based Educational Leadership [blog]. Available at: www.goo.gl/9he4WB

7. Covey, S. (2004) The 8th habit: from effectiveness to greatness. New York, NY: Free Press.

Opinion – Knowledge is the Road to Joy

The work of E D Hirsch and many others has been cited as pivotal in the recent interest – particularly in the UK and the US – in ‘knowledge-based curriculums’. That’s great, says Will Orr-Ewing – as long as we don’t forget joy.

A knowledge-based approach is on the march in UK schools. For any traditionalist who was working in the early 2000s – when a knowledge-based approach would have been dismissed as boring, reactionary and (thanks to Google) redundant – this must feel like an unexpected victory. It is a mark of how far we have come from the days of the 2007 National Curriculum and the RSA Opening Minds curriculum that the majority of the UK’s most prominent schools and educationalists now publicly favour a knowledge-based (or knowledge-rich) approach and the education minister can proudly call himself a ‘Hirschian’.

With the battle won (in theory if not quite yet in practice) and the victors sweeping the battlefield, finishing off dead and wounded progressives, many educationalists are now moving on from philosophy to implementation. Before they do, it is worth pausing to stake a philosophical claim that might determine the forms this implementation takes. This claim, neglected in debates over the last decade but treasured by older thinkers, is that knowledge – whatever its other educational benefits – brings joy: that knowledge gained is not just a means to other ends but is its own reward, and that this is one of its most important features and benefits. It is understandable that, in the fierce heat of contemporary squabbles, heads and educationalists prefer to talk up the more empirical benefits of a knowledge approach; but, by doing so, they leave the implementation of a knowledge-based approach open to those who would happily squander its joy for its effectiveness. In order to illustrate the way that a knowledge approach is currently advocated, it is necessary to summarise the arguments of its defenders very briefly. There are three main strands, all interrelated and often evoked as one.

1. Knowledge = access. Children need a secure knowledge base to access, firstly, texts of increasing complexity (cf. E D Hirsch, Daniel Willingham, Doug Lemov et al.) and, secondly, higher-order skills such as creativity, interdisciplinary thinking, critical thinking etc. (cf. Dylan Wiliam, Daisy Christodoulou, David Didau, Joe Kirby et al.). Here is a representative quote from Carl Hendrick: ‘The extent to which we can think critically about something is directly related to how much we “know” about that specific domain and “knowing” means changes in long-term memory.’ This contention is sometimes summarised as ‘the Matthew effect’ based on the passage from Matthew’s Gospel: ‘For all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away.’

2. Knowledge = success. Because higher-order skills, including exam skills, cannot be accessed without knowledge, the best way to prepare for long-term exam success is via a knowledge-rich curriculum. The work of schools such as Michaela and those in the Inspiration Trust exemplifies this approach. Christine Counsell, Director of Education for the latter, says: ‘I feel quite passionate about the broad curriculum in key stage 3 serving attainment in GCSE.’

3. Knowledge = power. Building on the two positions above, if schools do not teach knowledge, only those children from more privileged backgrounds whose parents pass on their own knowledge (even if unwittingly) will be able to read well, access higher-order skills and achieve exam success. This is the social justice case for a knowledge approach advanced by all of the above, as well as the likes of the West London Free School. See also Michael Young’s concept of ‘powerful knowledge’.

These arguments, prosecuted on Twitter, on blogs and at conferences, have generally and rightly won out – remarkably so, given the headwinds of a progressive teaching establishment. And yet, despite the fact that such arguments are often labelled ‘traditional’, they feel rather too bound within late modernity’s norms and values. As the summaries above show, knowledge is almost exclusively presented as a means rather than an end. The search for empirical benefits, able to justify approaches only in instrumentalist terms, has missed the marrow at the heart of knowledge and so risks erecting an educational project as thin and dreary as the orthodoxy it correctly seeks to replace.

Perhaps we need older perspectives – from an Aristotle or a C S Lewis or anyone who might be said to defend a liberal education in the old sense of that phrase – to remind us of just how much we are selling knowledge short. This older view of what knowledge can do is perhaps best encapsulated in the writing of Charlotte Mason, who saw herself both as the inheritor of this ‘liberal education’ tradition and as being charged with spreading its fruits to children of every background in late Victorian and Edwardian England. Here is what a knowledge-based approach meant to her:

‘We launch children upon too arid and confined a life. Personal delight and joy in living is a chief object of education … It is for their own sakes that children should get knowledge. The power to take a generous view of men and their motives, to see where the greatness of a given character lies, to have one’s judgment of a present event illustrated and corrected by historic and literary parallels … these are admirable assets within the power of every one according to the measure of his mind; and these are not the only gains which knowledge affords. The person who can live upon his own intellectual resources and never know a dull hour (though anxious and sad hours will come) is indeed enviable in these days of intellectual inanition, when we depend upon spectacular entertainments pour passer le temps.’

In her writing and in her schools, knowledge was never presented as a means to something else.

She talked of a child’s ‘knowledge-hunger’, an appetite of the mind akin to the appetite of the body for food. Knowledge was inherently ‘delightful’, ‘enlivening’, ‘vitalising’, helping children to see a world that pulsated with meaning. It required no further justification. Beyond the philosophical differences, she also contrasts with today’s defenders of knowledge in the implementation of her vision. There are many interesting ways in which the approaches diverge (and, naturally, converge) but the three summaries below will stand as illustrations:

1. Role of the teacher. It seems fair to say that those who promote knowledge today also tend to favour a more prominent role for the teacher than the ‘guide on the side’ proposed by progressives. Many knowledge-rich schools make much of their teachers’ subject knowledge, for instance. Mason would not have had a problem with this per se, but she worried that a charismatic teacher could come between a child and knowledge. There is an interesting piece by one of her followers on her views on Vygotsky’s ‘scaffolding’, which shows her dislike of the way teachers would often unwittingly come between children and ‘the mountain’ (or what she elsewhere called ‘the feast’) of knowledge through excessive talking. Teachers of course have their role to play in elucidating meaning, but their role was one of ‘masterly inactivity’ – something unlikely to find favour with contemporary knowledge advocates, who tend to prefer direct instruction and other ‘sage on the stage’ roles for the teacher, sometimes going as far as prescribing scripts for teachers.

2. Books vs textbooks. Because Mason feared that teachers often came between children and knowledge, her lessons were rooted in reading. She condemned the way that educationalists ‘wrote down’ to children in ‘dry as dust’ textbooks, diluting the delightful aspects of knowledge, and she would have disapproved of the generally pro-textbook stance of knowledge’s defenders today, not to mention the printable worksheets, précis and simplified versions that are still so common across classrooms today.
She placed her trust not in all books but in certain well-chosen books, especially those with lively narratives and the right expressions, which expertly conveyed meaning from the mind of the author to the mind of the child. The teacher’s role is to elucidate the meaning in the books but not to be the main purveyor of the knowledge itself.

3. Knowledge demonstrated vs teaching to the test. Today’s defenders of knowledge seem to see the UK’s examination system as a worthy demonstration of their pupils’ knowledge, boasting of high attainment at GCSE or, in the case of private schools, of places won at top senior schools or universities. Mason, on the other hand, worried that any teaching to the test, any academic marks or prizes, winnowed the innate desire within children for knowledge for its own sake. She favoured a method called narration, whereby children told back (either in writing or out loud) what they had heard or read. Now that schools can boast of their pupils’ knowledge via social media, YouTube etc., where are the demonstrations of that joyful knowledge that Mason would surely have used if she were still alive today? (Her equivalent was to publish a list of substantive nouns and proper nouns written in a typical exam in her schools – e.g. Africa, Alsace-Lorraine, Antigonus, Abdomen, Antennae, Aphis, Antwerp, Alder, etc.) The closest thing to it is Michaela’s moving videos of their children chanting great poetry, but where are the others?

By aligning a knowledge approach with textbooks, charismatic teaching and excellent examination prep, amongst many other implementations, there is a danger that today’s defenders of knowledge are dampening exactly that aspect of knowledge that makes it so genuinely ‘rich’, ‘powerful’ and delightful. It is time to reclaim joy as the rightful aim of a knowledge-based approach (could it even be hoped that a knowledge approach implemented on Mason’s grounds could go some way to pushing back at the awful incidence of childhood unhappiness we see about us?) and time to experiment with other methods that protect and uphold this worthy goal for a great and liberal education.

Graham Nuthall: Educational research at its best

Professor Emeritus Graham Nuthall, an educational researcher from New Zealand, is credited with one of the longest series of studies of teaching and learning in the classroom that has ever been carried out. A pioneer in his field, his research focused on the classroom, and what impact certain factors – for example, teaching – had on the outcomes of learners. Perhaps his most famous work is The Hidden Lives of Learners, which is increasingly being seen as a seminal text for understanding learning.

Jan Tishauser, programme manager for researchED Netherlands, explores his contribution to the education debate, and why his work is extraordinarily relevant today.

The outcomes of the research that Graham Nuthall conducted into the classroom experience of learners are little known, notwithstanding their far-reaching implications for our classroom practice. He demonstrated the need for formative assessment and discovered which factors influence learning most. He also pinpointed metacognition’s role in learning outcomes.

Nuthall started recording classroom conversations as a student, and he kept doing so throughout his career, from 1960 until 2000. In some ways his research was an expedition into unknown territory. His first question was: what actually happens during a lesson? His final research question was: what is the role of ability in learning?

Taking off

It all started in 1960, when Nuthall (at that time a young student) obtained permission from a number of experienced teachers to record their lessons with a number of students. At this time, he had not yet developed a sound design for his research. He was simply driven by curiosity, wondering what actually happens in a lesson. He worked under the assumption that one needs to observe experienced teachers to spot good teaching.

On the surface, his initial results showed a seemingly spontaneous interaction between teachers and students; but beneath this surface, his analysis revealed set patterns of communication and predictable structures and rules for social interaction. Nuthall replicated his research in the US and Japan; these rituals were identical everywhere. But the purpose of the rituals was not clear at that time. He concluded that ‘like language, teaching has its own underlying grammatical rules’.

Learning that experience makes no difference

In the period between 1968 and 1974, Nuthall and his PhD students started to work with an experimental design. Together with a group of teachers, they scripted a series of lessons about the black-backed gull. They wanted to know whether a teacher’s experience or training influenced the learning of students. They analysed differences between three groups of teachers: experienced teachers, inexperienced teacher trainees and teacher trainees who were trained to analyse their lessons using micro-teaching and recording. The results were rather unexpected: experience and training made no difference; instead it was only the type of feedback the teachers gave and their style of questioning students that mattered.

Dead end

Nuthall and his PhD students thought they were on to something and continued to work with scripted lessons. They worked with experienced teachers, made recordings, and ran pre- and post-tests, trying to find the factors that had a positive effect on learning outcomes.

Finally, they came up with results: the way teachers gave feedback, questioned students and activated students made a difference. This might not seem so amazing to us now, but in 1974, these were promising results. One of the problems brought to the surface by their intensive monitoring of classroom interactions was the enormously complex reality of the classroom. To supplement their findings, they would have had to do hundreds of intensive follow-up studies, which would most likely have produced an endless, useless list of dos and don’ts. It could have led to a ‘robotification’ of the teacher, while their own research had shown them that this was impossible and undesirable:

‘I realized I was following a path that satisfied the cultural rituals of the research community, but would be of little value to teachers, and probably do them harm.’ Nuthall hit a dead end. He describes this period as ‘roaming in the desert’.

A focus on student learning

Then Adrienne Alton-Lee, an experienced teacher, started working on a PhD in 1978. Her research question focused on the students. What causes a student to learn the course material? In her classroom practice she was unable to predict when a given student would have learned the material and when they would not. Alton-Lee dissected the course material in great detail, down to what she called ‘concepts’ and ‘items’, using a rolodex system. For example, a simple series of lessons on climate could contain as many as 500 items.

What stands out most in Nuthall’s research is that only the ‘three times’ rule has predictive value. Ability or intelligence or similar properties do not.

A ‘concept’ could be: Antarctica is the driest continent. Examples of ‘items’:

  • There is little precipitation.
  • There is more precipitation in the Sahara.
  • Because of the low temperatures the snow never melts.

Every 15 seconds, all student communication and every action was recorded: what they did, and what they said to themselves and to others. All the material a student encountered was registered, and everything a student made or wrote was photographed. This led to a dissertation published in a leading journal.

Replication crisis

Because Alton-Lee had followed a mere three students, Nuthall decided he needed replication studies. He designed three follow-up studies in order to replicate her findings. Technological advancements made it possible to gather even more information. Linking the students’ learning experiences, the course material and the outcomes seemed to work. Together, they collected a mountain of information.

They identified four simultaneous processes going on:

1. The invisible thinking of the student

2. The self-talk

3. The social interaction between peers (mostly invisible to the teacher)

4. The teacher-led public discussion

The self-talk and the interaction between peers are well hidden. This was illustrated by the fact that, even though each student had an observer, the observers still missed 40% of the talk that was on tape. Nuthall concluded that the opinions of peers were more important and more readily believed than the teacher’s opinions, including those related to the course material.

The study also concluded that:

  • When you start a lesson, half of what you are about to teach is already known.
  • Every student holds a different piece of the puzzle.
  • Almost every student learns something different in your lesson.
  • In practice, they learn more from each other than from the teacher – including misconceptions – which is obviously not always a good thing.

The often-chaotic nature of the classroom explains the function of the rituals that Nuthall found in his first study. The rituals allow the teacher to focus on the class as a whole; the teacher simply doesn’t have the resources to follow individual students. Part of the ritual is the ‘nodding and smiling’ of the students who draw the attention of the teacher. Students also make sure to appear to focus on their work whenever the teacher is in their vicinity. ‘Appear’ is the key word here.

Eureka!

Ultimately, Nuthall decided to precisely map out the learning process of one student in relation to one topic. He analysed the interactions of ‘John’ in regard to the topic ‘The migration to New York’. That’s when some light was finally shed on a recurring pattern.

His analysis of John’s learning experience made it possible to define learning in the following terms: it is a positive change of what we know or can do; it takes place by means of a sequence of events and learning experiences; each experience builds on the previous one and every change in the order of the learning experiences will lead to a different outcome. The learning activities of a student consist of understanding and making sense of the learning experiences. A student understands, learns and remembers a concept if they have encountered all the underlying information three times.

They built on this insight and did one replication study after another, with increasing numbers of students, classes and topics. And they could predict with 85% accuracy which student would correctly answer which question on a test.

If ability doesn’t matter, what does?

What stands out most in Nuthall’s research is that only the ‘three times’ rule has predictive value. Ability or intelligence or similar properties do not. Yet the ‘better’ students learn more. Nuthall dedicated his last research period to solving this conundrum. These students had more prior knowledge and they profited more from the lessons. The secret seems to be that they make sure to get more out of the lessons. They possess better metacognitive skills; they understand what it takes to get results.

The Hidden Lives of Learners

At the end of his life, Nuthall hastily wrote The Hidden Lives of Learners, drawing these conclusions for the classroom based on his research:

  • Standardised tests appear to offer certainty, but are no more reliable than interviews held with students.
  • Learning activities should be designed to take into account how memory works.
  • The subject matter should be repeated in different ways.
  • Follow the individual learning experience.
  • Less is more: we should confine the curriculum to the big questions. Teachers need the time to design rich learning experiences, conduct pre-tests and get to know the social processes in the class. Learners need the time and the space to really master the content.

Nuthall’s diligent research efforts gave us lasting insights into the fundamentals of learning and teaching. We should take his research into account both in our current teaching practice and in our curriculum design. For me, the two fundamentals are that learning takes time and that it is not necessarily related to ability. The latter is really a finding that should encourage us all to set high goals for ourselves and our students.


BIBLIOGRAPHY

Nuthall, G. and Alton-Lee, A. (1993) ‘Predicting learning from student experience of teaching: a theory of student knowledge construction in classrooms’, American Educational Research Journal 30 (4) pp. 799–840.

Nuthall, G. (1999) ‘The way students learn: acquiring knowledge from an integrated science and social studies unit’, Elementary School Journal 99 (4) pp. 303–341.

Nuthall, G. (2004) 'Relating classroom teaching to student learning: a critical analysis of why research has failed to bridge the theory-practice gap', Harvard Educational Review 74 (3) pp. 273–306.

Nuthall, G. (2007) The hidden lives of learners. Wellington: NZCER Press.

Nuthall, G. (2012a) ‘Understanding what students learn’ in Kaur, B. (ed.) Understanding teaching and learning. Rotterdam: Sense, pp. 1–40.

Nuthall, G. (2012b) ‘The acquisition of conceptual knowledge in the classroom: a case study’ in Kaur, B. (ed.) Understanding teaching and learning. Rotterdam: Sense, pp. 97–134.

Wright, C. J. and Nuthall, G. (1970) ‘Relationships between teacher behaviours and pupil achievement in three experimental elementary science lessons’, American Educational Research Journal 7 (4) pp. 477–491.

The light is winning

WHY RESEARCH IS (SLOWLY) TRANSFORMING TEACHING. TOM BENNETT’S THOUGHTS ON researchED’S SUCCESSES AND CHALLENGES SO FAR

At the recent researchED in Haninge, Sweden, researchED magazine’s editor Tom Bennett closed the conference with a speech that tried to understand where we had got to in evidence-informed education, and what the landscape now looked like. This is a transcript of that speech.

The sleep of reason produces monsters – at least it does in education, where we see teaching full of myths, snake oil and poorly evidenced practices and strategies. Why have we succumbed so much to learning styles and worse, and why have we found ourselves basing our vital practice on gut feelings, hunches and intuition? I think it’s because misconceptions creep into the spaces where:

• we don’t know much about the topic,

• we like the answers junk science provides, or

• we’re too busy to find out the facts.

How did we get here? Let’s reframe that question. Where did you acquire your ideas about teaching, learning, pedagogy etc? Chances are your answer revolves around the following: teacher training; memories of your own school experience; your mentor; your early class experiences.

Up to a point, that’s fine. Teaching is to a great extent a craft. But craft without structured evidence to interrogate its biases and misconceptions can lead to what I call ‘folk teaching’, where we reproduce the mistakes of our predecessors as easily as we do their successes.

So what? Because folk teaching alone leaves us at the mercy of snake oil, fads, fashions, ideology, bias. We can think of an ocean of cargo cult voodoo that often dominated educational discourse in the past: Shift Happens; TED talks; the Great Interactive Whiteboard Con; most links you see shared on Facebook. We recall the training days hosted by inexpert experts; the books by charismatic gurus; the often-quoted rentagobs that fill TV, radio and print and seem to know so much about classrooms despite never having worked in one. Know-nothings elevated by other know-nothings.

In this landscape, discussions about teaching become a battle of prejudices – Pokémon debates where we simply hurl one unprovable claim against another until someone blinks.

A new hope?

My naive ambition in 2013 when I began researchED was simple: we should lean on evidence where it exists; we should try to become more research-literate as a profession; and crucially we should ask for evidence at every turn. That was as far as I had gotten, strategy-wise. But surprisingly, amazingly, researchED took off, despite its lack of blueprint or funding. It was a movement that wanted to happen, and we started to respond to demand by hosting events across the UK and, quickly, around the world. Since then we have been to 14 countries, 5 continents, and seen 17,000 unique visitors to our events. researchED has 30,000 followers on Twitter (not counting the local accounts), and we have been graced with 1000 speakers (none of whom are paid). We pay no salaries (least of all to myself) and entirely self-fund each event. It is a humbling testimony to what can be achieved for next to nothing if love and altruism and mutual benefit are all you want to achieve. And it reminds me of the best in people – always.

The dangers of research

But it is important to always retain a sense of caution alongside the enthusiasm. The sleep of reason produces monsters, even with good intentions. There have been some reasonable responses and criticisms of this new age of evidence enquiry:

Evidence in the wild

Bad research – the ‘not even wrong’ categories like learning styles – isn’t the only problem. What happens to evidence in the wild is crucial. One thing this has taught me is that high-quality research is, by itself, not enough. If it doesn’t reach the classroom in a useful state then it may as well not have happened. And often good research gets lost in translation. I call this the Magic Mirror. Sometimes research goes through the mirror and schools turn it into something else. Research translation is as important as research generation. Poor old assessment for learning drops into the Black Box and gets mangled into levelled homework and termly tests, weird mutant versions of what it was meant to be. And some research is simply misunderstood: project-based learning, homework, collaborative learning all have utility in the right contexts. But how many teachers know the nuance of their evidence bases? Homework, for example, has variable utility depending on circumstances. Grasping the when and the how of ‘what works’ is essential, otherwise we oversimplify.

A brave new world that hath such teachers in it

I think researchED is a symptom of a new age of evidence interest. Perhaps also a catalyst – one of many that now exist, from the Deans for Impact1 to the Learning Scientists2 to the Five from Five3 programme and many more. This is indicative of an appetite that was always there. We now host more conferences and visit more countries every year. We have more first-timers, both attendees and speakers. Like a can of worms once opened, the worms cannot go back in. This car has no reverse gear. Successful innovations, once perceived, cannot be unseen.

Policy makers

I once asked ex-UK premier Tony Blair what research he relied on when making education decisions. He replied that there ‘wasn’t any useful evidence at the time’. This attitude still dominates the biggest lever-pullers. We still see at a policy level multiple factors driving decisions away from evidence bases:

• Budgets
• Policy/ministerial churn
• Lack of insider representation
• Reliance on personal experiences

But the more the profession talks the language of evidence, the more they will have to listen to it. And I have always believed that we should reward policy-makers when they participate in evidence-driven discussions. That’s why I’m proud we try to engage rather than barrack our political representatives. And why every year we invite ministers of every party to our party.

Schools

Leadership is still the biggest lever in driving evidence adoption. One evidence-literate school leader cascades far more than one teacher. Some schools are now embracing the 'research lead' role, and devoting staff resources to this area. There is a moral and a practical duty for leadership to attend to evidence, because an era of dwindling resources demands better, more efficient decisions – less waste, more impact, from training to workload to tech. Let us abandon the days we tried to buy our way out of our problems, as if a chequebook were a magic lamp. And I sometimes wonder whether raising budgets is, by itself, enough; the most important thing is to be judicious in spending the money we have.

Teachers

In the absence of a coherent, evidence-informed system it is necessary for teachers to drive their own research articulacy. It is necessary. Teachers should not be pseudo-researchers, but they should become literate: sharing, disseminating and interpreting high-quality research, and helping us to develop a herd immunity, where enough of us are learned enough to recognise zombie learning and junk pedagogy when it rises – as it always does – from the grave.

Embrace ambiguity

We have one more duty to observe. Teachers must become active participants in the research ecosystem rather than passive recipients. But teaching is driven by practice, and the data is subtler than we suspect. We frequently seek definite answers where none exist. Research often unpacks ambiguity, and we need to embrace nuance, uncertainty and probability rather than dress high-quality research up as eternal and immutable fact. We should avoid universals and certainty – and always remember that context is king. Otherwise we perpetuate dogma and become that which we seek to surpass.

The gatekeepers

One thing I didn’t expect – but should have – is that the existing system objects to its own reinvention. Whenever power shifts, former custodians of power seek to preserve privilege; and this new age of evidence adoption has frequently been dismissed by some academics, some education faculties, commercial interests, some teaching bodies. But the habit of command dies slowly. Education has relied on arguments from authority for decades. Evidence challenges their dominance like mystics challenge the Church. I have faith that evidence and truth will win, but it will not be because it was easy. Arguments must be made; evidence bases must be made transparent.

Evidence doesn’t obliterate professionalism – it liberates it

We enter a new age of evidence. Once seen it cannot be unseen, and science cannot be uninvented, although ideas can change. Fears that evidence makes us slaves to research are no more rational than the fear that understanding how to cook makes you a worse chef. It empowers. If you object to where evidence takes us, then find better evidence. Otherwise, ask yourself if your opinion is dogma, or if something more animates your objections.

Caveat emptor. In a complex field we need interpreters and brokers of research, but we must also take care not to create a new priesthood – the neo-shamans of evidence, who act as irrefutable guardians of divine truth. The OECD, for example, in some ways has become the new international inspectorate, blessing or banishing entire countries on the basis of their data. Is this healthy? I don’t think so. Beware also the New Generation of Consultants selling ‘Snake Oil 2.0’ who have updated their absurdities by simply stapling the phrase ‘evidence-based’ onto their bags of magic beans. And don’t think I’m ignoring the danger of researchED succumbing to this, like mortal ring bearers corrupted by Sauron. This is why we curate events to include challenge and debate, like the grit in the oyster that helps to make the pearl.

The future

We begin to see new models of professional groupings emerge – digital collaborations, conference communities that no longer require permission to exist. Self-propelled, self-sustaining, self-regulating, they exist only as long as people want to go. These fluid, accessible, dynamic, virtual colleges are needed until they are no longer needed because the profession will have reinvented itself. We’re not there yet. Which is why we commit to cheap, accessible events that are democratic, inclusive and most of all, directed at discovering what works – and when, and why, and how.

My ambition is that we begin to drive this voluntary professional development, and then that cascades back into schools and starts conversations which set off sparks in classrooms – ones that catch fire and burn down dogma. And also that initial teacher training increasingly makes evidence its foundation (where it does not do so already), platforming the best of what we know rather than perpetuating the best of what we prefer. For new teachers to be given skills to discern good evidence from bad. And for that to eventually bleed into leadership; and from there, into the structures that govern us.

I’m reminded of the story about the eternal battle between darkness and light in the sky. A pessimist could look up and think that darkness was nearly everywhere. But the optimist doesn’t see that. The optimist knows that once, there was only darkness.

If you ask me, the light’s winning.

This transcript was first published on Tom’s blog, The Behaviour Guru.

REFERENCES

  1. www.deansforimpact.org/resources/the-science-of-learning/
  2. www.learningscientists.org
  3. www.fivefromfive.org.au


From the editor

The relationship between education policy and education evidence has never been easy. The realpolitik of education is pulled hither and thither by many horses, and research bases are only one of several influences. In 2010 the CfBT report Instinct or Reason: How education policy is made asked every surviving post-war UK minister what the principal reasons behind their policy decisions in education were. The answers were sobering, if unsurprising:

  • Urgency – a sense that ‘something must be done’
  • Ideology – the values and beliefs of policymakers
  • International exemplars
  • Cost
  • Electoral popularity
  • Pressure groups
  • Personal experience
  • Research evidence

Notice where research sits: right at the bottom, gathering dust.

There are many reasons why this is perfectly understandable, of course. Parties are elected to deliver a manifesto, which is composed to reflect the values and ideologies they seek to represent. Evidence that confounds or contradicts these platforms can be seen as an obstacle rather than an ally to the policy process.

But there is cause for hope. The growing, international appetite for evidence-informed education we see at researchED events and beyond is fuelling a renewed demand for evidence-informed policy to drive that agenda.

Change in policy can be slow; ministerial churn can be fast. In this issue, I speak to Nick Gibb, the UK Schools Minister, a politician who, probably more than most in the UK, has spearheaded a drive towards evidence-informed education, particularly in the field of phonics and literacy, but also more broadly in pedagogy. This ministerial interest in what happens in the classroom has not been met with open arms, and Gibb has attracted criticism for walking into what was once described as the 'secret garden' of education.

It is easy for politicians and policy-makers to look to education for the engine of their reform programmes. The Jesuit philosophy of catching them young is attractive; you have a reasonably compliant cohort of tomorrow's scientists and sailors who, crucially, can't yet vote. Society-building and vocational imperatives are also big drivers in policy behaviour. But where does the ambitious politico turn for expertise and answers? Why, the experts. But which ones? In a field as contested as education, it is understandable if politicians recruit advisors who flatter rather than inform.

Which is why evidence-informed education has never been needed more. Education strategies must be as evidence-informed as possible, from the classroom to the Oval Office. It is entirely right that democracies should define the goals of education; it is imperative that once that will has been conceived, evidence should be the backbone of how we seek to realise it.

Which is why at researchED we engage with everyone involved in the education ecosystem, from teaching assistants to cabinet ministers, with the ambition that informed and careful conversations will save us from the dogma and superstition that has characterised our extraordinary and turbulent profession. I hope you enjoy our second issue of researchED magazine, and find something to challenge, inspire and enthuse you in your practice.

Thanks for reading.

Tom

Give me your answer do: An interview with Daisy Christodoulou

Education’s fastest talker tells us about mythbusting, why assessment drives everything else, and the seven myths of edutech

Daisy Christodoulou is the author of Seven Myths about Education and Making Good Progress?: The Future of Assessment for Learning, as well as the influential blog, The Wing to Heaven. She is currently the Director of Education at No More Marking, a provider of online comparative judgement. She works closely with schools on developing new approaches to assessment. Before that she was Head of Assessment at Ark Schools, a network of 35 academy schools. She has taught English in two London comprehensives and has been part of UK government commissions on the future of teacher training and assessment.

@daisychristo

What’s your background?

I did Teach First and trained as an English teacher, in a school in London for three years, then another secondary school. I was working in a school that went into special measures. It was challenging. And I learned that a large amount of the advice out there for us – or what was being mandated for teachers – didn't reflect reality.

Like what?

We were getting a lot of Ofsted scrutiny. I write about this in Seven Myths. The kind of information we were getting was about how you succeed for Ofsted, and lots of the advice wasn't based in reality and didn't have any evidence backing it up.

For example?

The biggest thing I came back to in Seven Myths was an example of a best practice lesson for an English teacher about Romeo and Juliet: teaching students by getting them to make puppets. These aren’t straw men. One criticism Seven Myths gets is that this is a ‘straw man’. But it’s all based on Ofsted reports from that era. If only I’d made this up, if only this had been a figment of my imagination and not best practice. The problem with that – and it’s not just a knee-jerk reaction, ‘all puppets are stupid’ – is that when you look at the evidence, you remember what you think about. And what you think about is how you made the puppets. You won’t be thinking about Romeo and Juliet, you’ll be thinking about puppet mechanics. It’s not that I’m averse to making puppets. If that’s your aim, great. But as an English teacher, learning about Romeo and Juliet, that advice to make puppets wasn’t very helpful.

Why do you hate puppets so much? I think we need to unpack this a bit more.

*crickets*

The reason why facts do matter isn’t an ideological argument. It’s an evidence-based argument.

So you were an English teacher in challenging schools. Fast forward, you’ve written an international sensation of a book. What happened in between? What caused the awakening?

Part of it was a nagging feeling that something wasn't right. All the examples in the book are backed up – they're referenced from Ofsted inspections or consultants or ITT. There were other things that I put in the book that were also pretty bonkers. You would hear consultants talk about 'talkless teaching' – there was this point where if you were actually intervening or talking or teaching, you must be doing something wrong. It was a nagging feeling that it was wrong. It didn't make sense. What you're inclined to do is think 'Well, all of these people are saying the same thing. It can't be them; it must be me.' The awakening led to me reading more, and researching more, and realising that the evidence suggested maybe my nagging feelings had something to them.

What kind of things were you reading?

Willingham, obviously. That was a lightbulb moment. And the first real insight I had was reading Hirsch, and his Cultural Literacy. Thing about that is that it’s – as Willingham says – a book about cognitive science, and all the heat and the light is generated by the list of the facts at the end. I then read a bit by Herbert Simon – who is enormously interesting, one of the great polymaths of the 20th century – and his work on chess players, how they think and learn. And he was incredibly insightful. And realising that there’s this research out there by a Nobel Prize winner, that was completely contradicting so much of what I was hearing in teacher training.

And that inspired you to write?

It did. I got so frustrated hearing what I was hearing. It’s hard to imagine now but back in 2009, 2010, these ideas were things that people just took for granted – ‘You can just google it.’ It was just so frustrating. Everyone saying these things. And there was all this evidence out there by serious people saying, ‘No, this is not the case. It’s not how we learn, you can’t rely on Google, you can’t access memory through the cloud.’ And that was how Seven Myths came about. They were just the seven things I got most annoyed by.

Can you summarise the main ideas?

The über myth is that facts don't matter or knowledge doesn't matter. It's been around a long time, at least back to Rousseau. Modern conceptions of thinking skills and so on seem very new, but they are actually a rehashing of ideas that are, in some cases, over 100 years old. And the reason why facts do matter isn't an ideological argument. It's an evidence-based argument. We need facts in long-term memory in order to think, because we have working memory and long-term memory, and our working memory is very limited, while long-term memory is the seat of all intellectual skill. Working memory can only hold four to seven items of information at any one time, so whenever you solve a problem, your working memory can very quickly become overwhelmed. So particularly with very young children: you give them a multiple-step maths problem, and if they're not secure on their maths facts and processes, by the time they get to the end, they've forgotten the beginning. That's not because they're stupid. We've all got a working memory issue.

So, the idea is to get as many facts or chunks of facts into long-term memory as possible, and free up that precious space in working memory. That’s the value of e.g. maths facts. It’s also necessary if you want to be able to read and you want to read fluently, but you don’t want to have to sound out every word or stop to look up every word in the dictionary. If you have to do all that – as you’ll know from learning a foreign language – then you quickly get overwhelmed. But when you can read fluently, it’s a smooth process and you can read for hours and not get tired and enjoy the act of it. But if you stop and start, it’s not a pleasant process and you can’t enjoy the meaning.

But surely nobody is against teaching facts?

(Laughs) That's why the structure of the book is designed to try and show you that some people actually are against teaching facts. That's why the structure of each chapter is 'What does the research say?', 'What are people saying today in theory?' and 'What are people recommending in practice?' I structured it like that because a lot of the rhetoric in education is frustrating.

You’ll get some who’ll spend a chapter saying why facts are bad, and projects are great. I’m not against teaching facts. It’s very easy to spend a long time dismissing facts, rubbishing facts and then saying ‘But of course we’re not against teaching facts.’ So what I wanted to do was to try to move beyond an argument about words and to actually look at practice. What is the actual lesson advice you are expected to follow? The moment you start to dig into that you realise that all the types of lessons and practice that people were recommending were disagreeing with what the evidence said. And lots of lesson types that fitted the evidence were being dismissed as worst practice.

The best example of this is direct instruction. DI has an enormous research base behind it, huge amounts of evidence. Whenever you try to deploy DI-style tactics in a lesson, people will react with horror. It was the kind of thing you saw in the literature; the advice teachers were getting was to avoid that kind of approach.

Where was this advice coming from?

The whole point was that I was trying to find reputable examples of people in authority who were recommending this. And that’s why I often go back to Ofsted. It’s not because I think Ofsted were the only ones responsible. There were a lot of people doing this. The issue with Ofsted is that everyone accepts their authority and they have a very big record of their reports. But it wasn’t just them. The whole general world view reflected it. Ofsted weren’t saying things that were controversial to the wider world. They weren’t criticised for this. They were criticised for other things. I should say that I think Ofsted have gone through a big reform process and have changed a lot of this.

I asked online what people thought the impact had been on them. There was a deluge of support from people talking about the immensity of your influence. Were you surprised?

Yes! It felt quite niche. I remember going through all the Ofsted reports and thinking, 'This is just a moment in time. In one country, in one system. Who's going to be interested?' I thought it would be quite ephemeral, and that it might date because of the reports and the era it was in. But I'm most pleased that people are still reading it. It was controversial to begin with, but as time has gone on and people have thought about it, they seem to have warmed to it. It wasn't intended to be an ideological polemic. It was meant to be about the evidence; 'Here is the state of how we learn.'

If you were publishing it for the first time today, would you change anything?

No, I think it's fine as it is. Although the thing I realised needed expanding very quickly was assessment. There's a section in Seven Myths – very short – where I'm critical of teacher assessments. It's just a couple of lines, and there were clearly a lot of people who seized upon that and thought, 'Oh, she just wants teaching to the test.' What happened was that people associated a knowledge-based approach with teaching to the test or a massive exam focus. I realised – that was just a couple of sentences – that I didn't talk about exams very much at all. And they are such a massive part of our modern education system that I realised we had to address that. Because there are massive problems with the way some teach to the test, and there are legitimate critiques of the exam-factory model of schooling that I have a lot of sympathy for. And I'd always been aware of that. I didn't address it enough in the book. You can't address education without this discussion: the role of exams.

Seven Myths became very well known, especially in the UK. How did you get from that to assessment?

When I read the responses to Seven Myths, it felt like the most interesting arguments were about exams – how does this fit in with them? The second thing: I was working with schools on how to make some of my ideas a reality, and what I realised very quickly was that you can't do anything about curriculum – especially in English schools – unless you do something about assessment.

Why?

Look at GCSEs. I was working on this when levels were abolished. Even at primary, if you try to introduce a new curriculum approach, people instantly say, 'How can I level this?' So, for example, say you want to bring in a direct instruction approach. How do I give a level at the end of it? If your new curriculum doesn't match up with the way you currently assess, you have a problem. And that was the issue I kept running into. Look at DI programmes like Expressive Writing. That doesn't fit very well with an old UK national curriculum approach. So what do you do? Tweak it? Or do you bring the levels in? Change the assessment? To what?

So when you started to look into assessments, where did that lead you?

The big thing I struggled with was this idea that you can just separate formative and summative assessment. When I started teaching, what you were seeing was lots of assessments that you would do six times a year, and the problem with that is you were assessing big, complex tasks. But these big, complex tasks, like essays, are really like projects, even though they sit inside an assessment. One of my arguments is that projects are not a good way to learn. And if you are assessing kids with a big, complex task every six weeks, you don't have the time to break that task down into smaller chunks. The big argument in Seven Myths is that we need to decompose the skill. As a practical example: as an English teacher, you try to judge a piece of writing.

A great book published a year ago, The Writing Revolution, is really good on this. The problem it identifies is that we aren't training students to write; we don't teach writing, and that is exactly the issue I find. We were assessing writing – a lot – but at what point do we sit them down and say, 'Here are the nuts and bolts of writing; when you break it down, this is what you need'? This wasn't compatible with a levelled or even a graded approach, because when you grade or level you do want to assess a large piece of writing. But when you teach it, you want to break it down. And the analogy I use in Making Good Progress is that when you run a marathon, 26.2 miles is the end goal. But nobody, unless you're already an elite marathon runner, begins by running 26.2 miles. Nobody runs 26.2 miles in every training session. And nobody thinks that the way you make progress towards your end goal is by running marathons. So people do all kinds of other tasks. They go to the gym. They do cross-training, swimming, shorter runs, speed work. And all of those tasks go towards the complex goal.

So that's how I got so involved in assessment: by realising that if you wanted to focus on a knowledge-based curriculum, the only way you could properly do it was within the framework of the assessment you were working with.

Which leads us neatly to comparative judgement.

As an English teacher, the biggest thing is that assessing writing is really hard. The minute you are writing in an extended way, those pieces are extremely hard to mark reliably. And not only that, but they start to have a negative impact on teaching and learning. Because what you end up with is marking to the rubric. And the rubric might say something like ‘uses vocabulary originally…’. There’s a list of things that define good writing. And the problem with that is that those sentences end up becoming the lesson objective. This creates the problem that you’re not teaching at the nuts-and-bolts level. You’re teaching at this generic level. You start saying things to students like ‘You need to infer more insightfully.’ Hang on, how helpful is that? The whole point of feedback is to give people something they can do next. The rubric isn’t designed to be helpful like that! But it’s not even that useful for markers, because two different markers can interpret the same line in different ways.

So what comparative judgement tries to do is to help with reliability, efficiency and validity. The first two are quick wins. You get much better agreement and you'll get there much quicker. And that's amazing. There's another benefit: it lets you move away from the rubric. So when you look at two pieces of writing beside each other and you ask, 'Which is the better piece?', you just go on your gut instinct and your knowledge of what good writing is. And the power is that you move away from teaching to the rubric.

How do people criticise this?

I think people find it odd at first when you move away from the mark scheme, when you say use your gut instinct. They're quick to ask, 'How do I know my gut instinct is right? And even if it is, what about everyone else's?' The way you get around those issues is that comparative judgement generates an enormously sophisticated model. You have data on every judgement and every judge, so you can tell if a judge is an outlier, and that's quite rare. So you can see if they're in line with the group or not. The initial criticism is that 'this just feels hopelessly subjective'. But we can prove it isn't, because we can show you afterwards that the agreement and consistency between judges in the room is greater than you get with a rubric. And we can prove that. It feels subjective, but it isn't; and marking with a rubric feels objective…but it isn't.
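Christodoulou doesn't go into the statistics here, but comparative judgement engines of this kind are commonly built on a Bradley-Terry-style model, which infers a quality score for each script from many pairwise decisions. The sketch below is a minimal illustration of that idea in Python, using made-up script names and judgements; it is not the model No More Marking actually runs, and the agreement check at the end is only a crude stand-in for how judge consistency might be monitored.

# A minimal Bradley-Terry-style sketch of comparative judgement (illustration only).
# Each pairwise decision nudges the latent quality scores of the two scripts via
# gradient ascent on the log-likelihood; a real system would add regularisation.
import math
from collections import defaultdict

def fit_scores(judgements, n_iters=200, lr=0.1):
    """judgements: list of (winner, loser) script ids from 'which is better?' decisions."""
    scores = defaultdict(float)  # one latent quality score per script
    for _ in range(n_iters):
        for winner, loser in judgements:
            # Probability the current scores assign to the observed decision
            p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            scores[winner] += lr * (1.0 - p_win)
            scores[loser] -= lr * (1.0 - p_win)
    return dict(scores)

def agreement(judgements, scores):
    """Fraction of decisions consistent with the fitted scores - a crude outlier check."""
    consistent = sum(scores[w] > scores[l] for w, l in judgements)
    return consistent / len(judgements)

# Hypothetical data: each tuple is (preferred script, other script)
decisions = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("B", "C")]
fitted = fit_scores(decisions)
print(fitted)                         # higher score = judged to be better writing
print(agreement(decisions, fitted))   # close to 1.0 when judges broadly agree

In a real system each decision would also record which judge made it, so agreement could be computed per judge and an outlier flagged, which is the kind of check Christodoulou describes.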

What’s next?

I'm still very involved in assessment. But I really want to do some writing on education technology. Comparative judgement is quite a tech approach, so I've been thinking about it. And what I find fascinating is that there are some really amazing, innovative examples of tech use, but there are also a lot of gimmicks. And being in the world of ed-tech at its worst can feel like education from years ago: 'Kids don't need to know stuff, they can just google it.' That is like a mantra in ed-tech. It's early stages, but I want to find out which approaches in technology work with the mind and are going to help learning, and which ones aren't there yet. It might be, in some ways, similar to Seven Myths, because it'll be looking at different approaches to technology and wondering which ones are working with the grain of how our minds work and which ones aren't.

Seven Myths about Education (2014) is available to buy from Routledge. Making Good Progress? (2017) is available from Oxford University Press.

The grateful ped(agogue)

Why giving thanks may be a gift that gives to the giver

From the philosophers Epictetus and Confucius to our own parents and teachers, wise thinkers have always encouraged us to count our blessings. Joe Kirby puts this sage advice to the test, and explains why it’s great to be grateful.

The secret to happiness? Gratitude – or so the Greek philosopher Epictetus said in Rome, some 2000 years ago. In Ancient China, Confucius said it was ‘better to light one small candle of gratitude than to curse the darkness’. Buddhists put it even more succinctly: ‘grateful heart – peaceful mind’. For centuries, great thinkers around the world have taught this simple idea: ‘Want to be happy? Be grateful!’

Let’s put this ancient wisdom to the test of modern science and see what psychologists have learned. What actually happens when people express what they’re grateful for?

Research

Two decades of seminal psychological research studies have found that after practising gratitude, people say they feel happier. In two studies, people wrote nine weekly gratitude journal entries, or daily entries for two weeks.1 Both groups reported better wellbeing, optimism and social connectedness than control groups. These studies were replicated with a third group.2 In another study, people kept a daily gratitude journal for a week, and reported lasting increases in happiness, even six months later.3 A 2006 study found that practising gratitude raised and sustained positive mood.4 But this was only with adults. What about teenagers and children?

A 2006 study of 221 young teenagers asked them to list five things they felt thankful for daily for two weeks. This enhanced their optimism and life satisfaction and decreased negative emotion, including after a three-week follow-up.5 A 2009 study found that children with lower positive emotion levels especially benefit from gratitude interventions.6 Two more studies replicated the findings: writing gratitude letters increased participants’ happiness and life satisfaction.7,8 After ten years of clinical trials, the world’s leading scientific expert on the topic, Robert Emmons, concluded that gratitude makes a measurable, positive impact on happiness.9

Other researchers found that people reported that gratitude improved relationships.10,11,12 Further studies also found that expressing gratitude increases people’s patience.13,14

One complication comes out of this research. One study suggested weekly appreciative writing outperformed daily.15 Perhaps writing too frequently loses freshness and meaning?

A recent trial, just published this year, involved students seeking counselling for depression and anxiety, with clinically low levels of mental health. They were divided into three groups: one wrote gratitude letters, one group wrote their deepest thoughts about negative experiences, and one did not do any writing. What did they find? Those expressing gratitude reported significantly better mental health four weeks afterwards – and even larger effects 12 weeks afterwards.16 Perhaps Confucius was right.

Three applications in schools

How might we apply these research insights in schools?

1. Termly postcards to teachers

Once a term in forms, tutors can give students gratitude postcards to write to teachers who have made a difference in their lives. It is easy for students to forget how much teachers do for them. It makes children feel happy to notice and acknowledge those who support them. It also makes teachers feel happy to be thoughtfully appreciated. Teachers can model this by writing an appreciative postcard to one pupil each day: over a school year of around 200 days, each teacher will have written some 200 cards, and across a staff of 50 that adds up to some 10,000 acts of encouragement. Students like showing these to their parents to make them feel proud; some families display them proudly on the fridge at home. Some students I know even keep and frame postcards they earn over the years!

2. Termly postcards to families

In forms, tutors can ask students to write gratitude postcards to their own parents, siblings or families at the end of term. It is hard for children and teenagers to remember how much the adults and family members in their lives do for them, and how sad they’d be if they lost them. Students and parents feel much more positively about the school when they see how much their family relationships matter to teachers.

3. Thanks to end lessons and form

Every day, teachers and students make great efforts. Leaving a lesson creates an opportunity for students and teachers to say 'Thank you!' to show they appreciate each other. If both say 'thank you' politely as they part, this creates a very upbeat atmosphere around the school. Combine this with a mantra – 'It's great to be grateful!' – to encourage students who are appreciative. Assemblies on the benefits of gratitude can help children understand why it is helpful to really notice the good things in our lives.

Applying the research of gratitude is a promising way of helping children, teachers and families feel happy about school.


References

1. Emmons, R. and McCullough, M. (2003) ‘Counting blessings versus burdens: an experimental investigation of gratitude and subjective well-being in daily life’, Journal of Personality and Social Psychology 84 (2) pp. 377–389.

2. Ibid.

3. Seligman, M. E., Steen, T. A., Park, N. and Peterson, C. (2005) 'Positive psychology progress: empirical validation of interventions', American Psychologist 60 (5) pp. 410–421.

4. Sheldon, K. M. and Lyubomirsky, S. (2006) ‘How to increase and sustain positive emotion: the effects of expressing gratitude’, The Journal of Positive Psychology 1 (2) pp. 73–82.

5. Froh, J. J., Sefick, W. J. and Emmons, R. A. (2008) ‘Counting blessings in early adolescents: an experimental study of gratitude and subjective well-being’, Journal of School Psychology 46 (2) pp. 213–233.

6. Froh, J. J., Kashdan, T. B., Ozimkowski, K. M. and Miller, N. (2009) ‘Who benefits the most from a gratitude intervention in children and adolescents?’, The Journal of Positive Psychology 4 (5) pp. 408–422.

7. Toepfer, S. M. and Walker, K. (2009) ‘Letters of gratitude: improving well-being through expressive writing’, Journal of Writing Research 1 (3) pp. 181–198.

8. Toepfer, S. M., Cichy, K. and Peters, P. (2012) ‘Letters of gratitude: further evidence for author benefits’, Journal of Happiness Studies 13 (10) pp. 187–201.

9. Emmons, R. A. (2013) Gratitude works! San Francisco, CA: Jossey-Bass.

10. Bartlett, M. Y. and DeSteno, D. (2006) ‘Gratitude and prosocial behavior: helping when it costs you’, Psychological Science 17 (4) pp. 319–325.

11. Lambert, N. M., Clark, M. S., Durtschi, J., Fincham, F. D. and Graham, S. M. (2010) ‘Benefits of expressing gratitude: expressing gratitude to a partner changes one’s view of the relationship’, Psychological Science 21 (4) pp. 574–580.

12. Grant, A. M. and Gino, F. (2010) 'A little thanks goes a long way: explaining why gratitude expressions motivate prosocial behavior', Journal of Personality and Social Psychology 98 (6) pp. 946–955.

13. DeSteno, D., Li, Y., Dickens, L. and Lerner, J. S. (2014) ‘Gratitude: a tool for reducing economic impatience’, Psychological Science 25 (6) pp. 1262–1267.

14. Dickens, L. and DeSteno, D. (2016) ‘The grateful are patient: heightened daily gratitude is associated with attenuated temporal discounting’, Emotion 16 (4) pp. 421–425.

15. Ibid. 3.

16. Wong, Y. J., Owen, J., Gabana, N. T., Brown, J. W., McInnis, S., Toth, P. and Gilman, L. (2018) ‘Does gratitude writing improve the mental health of psychotherapy clients? Evidence from a randomized controlled trial’, Psychotherapy Research 28 (2) pp. 192–202.

Harder, better, faster, longer?

Rebecca Foster explains how to introduce ‘desirable’ difficulties into your teaching – and why learning shouldn’t be easy.

‘The mistake we pop stars fall into is stating the obvious. “War is bad. Starvation is bad. Don’t chop down the rainforest.” It’s boring. It’s much better to hide it, to fold the meaning into some sort of metaphor or maze, if you like, and for the listener to have to journey to find it.’

Sting

The fetishisation of ease is ubiquitous: you only need to look down at your smartphone to see how advances in technology have converged to squeeze a multitude of processes into one hand-held device for your convenience – a camera, easy access to cat videos and social media all in one place! We don’t even have to get up from our sofas to change the TV channel or rely on a map to get us from A to B anymore. But at what cost this ease? In making life as easy as possible, what are we losing? Aren’t some difficulties in fact desirable?

These are questions we ought to be asking of our classroom practice too. When we make learning easy in the classroom, what is the cost? The work of Bjork and other researchers suggests that practices that ‘appear optimal during instruction’,1 such as massing study sessions and blocking practice, ‘can fail to support long-term retention and transfer of knowledge’. Whereas introducing certain difficulties that ‘slow the apparent rate of learning’, such as reducing feedback to the learner and interleaving practice on separate topics or tasks, ‘remarkably’ has the opposite effect.

Bjork asks why, 'if the research picture is so clear', are 'massed practice, excessive feedback, fixed conditions of training, and limited opportunities for retrieval practice – among other nonproductive manipulations – such common features of real-world training programs?'2 One answer, in school contexts, might be a type of 'operant conditioning' that teachers are exposed to. Several school systems serve to reinforce practices that encourage the teacher to increase the performance rate of their students to satisfy a demand for 'rapid progress'. For example, frequent data-trawls encourage teachers to teach in a way that will maximise the short-term performance of their students. If I have to enter data on a student six times a year, and especially if that data is used to judge my performance as a teacher or to inform the pay I'm entitled to, am I not motivated to do what's necessary to push students over short-term hurdles? That is quite apart from the perfectly admirable desire, as a teacher, to see my students perform well.

It’s a bit like confiscating everybody’s satnavs: probably not a great idea if their timely arrival on a certain day is important; but if you want people to get better at finding their way in the longer term then it’s a sensible strategy that has merit.

As teachers we may also be led to favour practices that boost performance at the acquisition stage of learning, because many of the 'desirable' difficulties Bjork suggests will produce 'the best retention performance'3 result in 'poorer performance' at the point of learning new information. It is deeply counterintuitive for a teacher to degrade the performance of students in the classroom. It's a bit like confiscating everybody's satnavs: probably not a great idea if their timely arrival on a certain day is important; but if you want people to get better at finding their way in the longer term then it's a sensible strategy that has merit.

While short-term performance goals are understandable, our sights as teachers need to stretch far beyond the end of the lesson, unit or course of study. With supportive whole-school structures, teachers can be freed up to introduce desirable difficulties that may impede short-term performance but have long-term positive impact.

I've been leading the English department at my current school for two years and have introduced a range of 'desirable difficulties' that have been a challenge for both teachers and students. However, the effectiveness of the learning taking place in my department's English lessons is revealed by how much our students retain over time.

Distributing practice

One of the biggest changes I introduced was a move away from massed practice, or traditional term-long units of study. In the past, students might study a novel for a term and then move on to study creative writing, followed by four other units – each conveniently one term long. I can only assume that the rationale for the length of the units was that this is how the year is broken up, and that an end-of-unit assessment would fall just before a data drop, with all of the work leading up to it building the knowledge and skills necessary to perform well in that assessment. However, when a topic was returned to a year or more later, students' long-term recall and performance were hindered by this approach.

Now, at KS3, we have two key units that are studied for roughly half of the year: a novel and a Shakespeare play. These are interleaved with studying poetry, fiction writing, non-fiction writing and analysis of both fiction and non-fiction. In practice this means that no two English lessons within a single week are on the same topic (a rough sketch of what such a rotation can look like follows below). Whilst this was a real challenge for teachers at first, our students haven't been in the least bit fazed, and we've seen the impact this model has had on the development of our students' knowledge and skills.
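Foster doesn't spell out the timetabling mechanics, but one simple way to picture the constraint – no topic repeated within a week – is as a rotation through the topic list. The sketch below is a hypothetical illustration in Python; the topic names and the four-lessons-a-week assumption are mine, not the department's actual scheme of work.

# Hypothetical sketch of an interleaved weekly rotation (illustration only).
# With more topics than lessons per week, drawing consecutive topics from a
# rotation guarantees no topic repeats within any single week.
from itertools import cycle

topics = ["Novel", "Shakespeare", "Poetry", "Fiction writing", "Non-fiction writing"]
lessons_per_week = 4  # assumed number of English lessons in a week
rotation = cycle(topics)

for week in range(1, 4):
    plan = [next(rotation) for _ in range(lessons_per_week)]
    print(f"Week {week}: {', '.join(plan)}")

Over successive weeks every topic keeps coming back around, which is the spaced, distributed exposure the approach relies on.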

Of course, were I working in a school that demanded an assessment every six weeks, I may find myself in hot water; but thankfully, I work in a school that only requires one data entry a year at KS3 and two or three at KS4.

Using tests as learning events

Lots of evidence points to the idea that recalling information is more effective than a further study event, and it also serves the purpose of giving students feedback on their current knowledge of a given topic. In my department, we have introduced a range of tests as learning events, including retrieval practice starters and knowledge tests. One of the most effective things we've introduced, after reading Battle Hymn of the Tiger Teachers: The Michaela Way, is self-quizzing homework. Students are required to test how much they can recall from their knowledge organisers and then, in a different coloured pen, fill in any gaps or make corrections. Not only is this weekly, structured activity improving students' learning of key knowledge, but it's also providing regular feedback to both teacher and learner about what they know or don't know. Furthermore, it has the added benefit of not needing to be marked – a difficulty that is certainly not desirable!