Give me your answer do: An interview with Daisy Christodoulou

Education’s fastest talker tells us about mythbusting, why assessment drives everything else, and the seven myths of ed-tech

Daisy Christodoulou is the author of Seven Myths about Education and Making Good Progress?: The Future of Assessment for Learning, as well as the influential blog, The Wing to Heaven. She is currently the Director of Education at No More Marking, a provider of online comparative judgement. She works closely with schools on developing new approaches to assessment. Before that she was Head of Assessment at Ark Schools, a network of 35 academy schools. She has taught English in two London comprehensives and has been part of UK government commissions on the future of teacher training and assessment.

@daisychristo

What’s your background?

I did Teach First and trained as an English teacher. I taught in a school in London for three years, then moved to another secondary school. I was working in a school that went into special measures. It was challenging. And I learned that a large amount of the advice out there for us – or what was being mandated for teachers – didn’t reflect reality.

Like what?

We were getting a lot of Ofsted scrutiny. I write about this in Seven Myths. We were given a lot of information about how you succeed for Ofsted, and much of that advice wasn’t based in reality and didn’t have any evidence backing it up.

For example?

The biggest example I came back to in Seven Myths was a best-practice lesson for an English teacher on Romeo and Juliet: teaching students by getting them to make puppets. One criticism Seven Myths gets is that these are ‘straw men’. But they aren’t – it’s all based on Ofsted reports from that era. If only I’d made this up, if only it had been a figment of my imagination and not best practice. The problem with that – and it’s not just a knee-jerk reaction, ‘all puppets are stupid’ – is that when you look at the evidence, you remember what you think about. And what you think about is how you made the puppets. You won’t be thinking about Romeo and Juliet, you’ll be thinking about puppet mechanics. It’s not that I’m averse to making puppets. If that’s your aim, great. But for an English teacher teaching Romeo and Juliet, the advice to make puppets wasn’t very helpful.

Why do you hate puppets so much? I think we need to unpack this a bit more.

*crickets*

The reason why facts do matter isn’t an ideological argument. It’s an evidence-based argument.

So you were an English teacher in challenging schools. Fast forward, you’ve written an international sensation of a book. What happened in between? What caused the awakening?

Part of it was a nagging feeling that something wasn’t right. All the examples in the book are backed up – they’re referenced from Ofsted inspections or consultants or ITT. There were other things I put in the book that were also pretty bonkers. You would hear consultants talk about ‘talkless teaching’ – there was this point where if you were actually intervening or talking or teaching, you must be doing something wrong. It was a nagging feeling that it was wrong. It didn’t make sense. What you’re inclined to do is think, ‘Well, all of these people are saying the same thing. It can’t be them; it must be me.’ The awakening led to me reading more, and researching more, and realising that the evidence suggested maybe my nagging feelings had something to them.

What kind of things were you reading?

Willingham, obviously. That was a lightbulb moment. And the first real insight I had was reading Hirsch, and his Cultural Literacy. The thing about that book is that – as Willingham says – it’s a book about cognitive science, but all the heat and the light is generated by the list of facts at the end. I then read a bit by Herbert Simon – who is enormously interesting, one of the great polymaths of the 20th century – and his work on chess players, how they think and learn. And he was incredibly insightful. There was this research out there, by a Nobel Prize winner, that completely contradicted so much of what I was hearing in teacher training.

And that inspired you to write?

It did. I got so frustrated hearing what I was hearing. It’s hard to imagine now, but back in 2009, 2010, these ideas were things people just took for granted – ‘You can just google it.’ It was so frustrating. Everyone was saying these things. And there was all this evidence out there from serious people saying, ‘No, this is not the case. It’s not how we learn; you can’t rely on Google; you can’t access memory through the cloud.’ And that was how Seven Myths came about. They were just the seven things I got most annoyed by.

Can you summarise the main ideas?

The über myth is that facts don’t matter, or knowledge doesn’t matter. It’s been around a long time, at least back to Rousseau. The modern conceptions of thinking skills and so on seem very new, but they are actually a rehashing of ideas that are, in some cases, over 100 years old. And the reason why facts do matter isn’t an ideological argument. It’s an evidence-based argument. We need facts in long-term memory in order to think, because we have working memory and long-term memory, and our working memory is very limited, while long-term memory is the seat of all intellectual skill. Working memory can only hold four to seven items of information at any one time, so whenever you solve a problem, your working memory can very quickly become overwhelmed. So particularly with very young children, you give them a multi-step maths problem. If they’re not secure on their maths facts and processes, by the time they get to the end, they’ve forgotten the beginning. That’s not because they’re stupid. We’ve all got a working memory issue.

So, the idea is to get as many facts, or chunks of facts, into long-term memory as possible, and free up that precious space in working memory. That’s the value of, say, maths facts. It’s also what you need if you want to read fluently, without having to sound out every word or stop to look up every word in the dictionary. If you have to do all that – as you’ll know from learning a foreign language – you quickly get overwhelmed. But when you can read fluently, it’s a smooth process: you can read for hours, not get tired, and enjoy the act of it. If you stop and start, it’s not a pleasant process and you can’t enjoy the meaning.

But surely nobody is against teaching facts?

(Laughs) That’s why the structure of the book is designed to show you that some people actually are against teaching facts. Each chapter asks: ‘What does the research say?’, ‘What are people saying today in theory?’ and ‘What are people recommending in practice?’ I structured it like that because a lot of the rhetoric in education is frustrating.

You’ll get some who’ll spend a chapter saying why facts are bad and projects are great. I’m not against teaching facts. It’s very easy to spend a long time dismissing facts, rubbishing facts, and then saying, ‘But of course we’re not against teaching facts.’ So what I wanted to do was to move beyond an argument about words and actually look at practice. What is the actual lesson advice you are expected to follow? The moment you start to dig into that, you realise that the types of lessons and practice people were recommending were at odds with what the evidence said. And lots of lesson types that fitted the evidence were being dismissed as worst practice.

The best example of this is direct instruction. DI has an enormous research base behind it, huge amounts of evidence. Yet whenever you try to deploy DI-style tactics in a lesson, people react with horror. That was the kind of thing you saw in the literature: the advice teachers were getting was to avoid that kind of approach.

Where was this advice coming from?

The whole point was that I was trying to find reputable examples of people in authority who were recommending this. And that’s why I often go back to Ofsted. It’s not because I think Ofsted were the only ones responsible. There were a lot of people doing this. The issue with Ofsted is that everyone accepts their authority and they have a very big record of their reports. But it wasn’t just them. The whole general world view reflected it. Ofsted weren’t saying things that were controversial to the wider world. They weren’t criticised for this. They were criticised for other things. I should say that I think Ofsted have gone through a big reform process and have changed a lot of this.

I asked online what people thought the impact had been on them. There was a deluge of support from people talking about the immensity of your influence. Were you surprised?

Yes! It felt quite niche. I remember going through all the Ofsted reports and thinking, ‘This is just a moment in time, in one country, in one system. Who’s going to be interested?’ I thought it would be quite ephemeral, that it might date because of the reports and the era it was written in. But I’m most pleased that people are still reading it – and that, while it was controversial to begin with, as time has gone on and people have thought about it, they seem to have warmed to it. It wasn’t intended to be an ideological polemic. It was meant to be about the evidence: ‘Here is the state of how we learn.’

If you were publishing it for the first time today, would you change anything?

No, I think it’s fine as it is. Although the thing I realised very quickly needed expanding was assessment. There’s a section in Seven Myths – very short – where I’m critical of teacher assessment. It’s just a couple of lines, and there were clearly a lot of people who seized upon that and thought, ‘Oh, she just wants teaching to the test.’ What happened was that people associated a knowledge-based approach with teaching to the test, or a massive exam focus. I realised that it was just a couple of sentences – I didn’t talk about exams very much at all. And they are such a massive part of our modern education system that I realised we had to address that. Because there are massive problems with the way some people teach to the test, and there are legitimate critiques of the exam-factory model of schooling that I have a lot of sympathy for. I’d always been aware of that; I just didn’t address it enough in the book. You can’t address education without discussing the role of exams.

Seven Myths became very well known, especially in the UK. How did you get from that to assessment?

When I read the responses to Seven Myths, it felt like the most interesting arguments were about exams – how does this fit in with them? The second thing was that I was working with schools on how to make some of my ideas a reality, and what I realised very quickly was that you can’t do anything about curriculum – especially in English schools – unless you do something about assessment.

Why?

Look at GCSEs. I was working on this when levels were abolished. Even at primary, if you try to introduce a new curriculum approach, people instantly say, ‘How can I level this?’ So, for example, say you want to bring in a direct instruction approach: how do I give a level at the end of it? If your new curriculum doesn’t match up with the way you currently assess it, you have a problem. And that was the issue I kept running into. Look at DI programmes like Expressive Writing. That doesn’t fit very well with the old UK national curriculum approach. So what do you do? Tweak it? Bring the levels in? Change the assessment? To what?

So when you started to look into assessments, where did that lead you?

The big thing I struggled with was this idea that you can just separate formative and summative assessment. When I started teaching, what you saw was lots of assessments that you would do six times a year, and the problem was that you were assessing big, complex tasks. But these big, complex tasks, like essays – just because they’re used as assessments doesn’t change the fact that they’re essentially projects. And one of my arguments is that projects are not a good way to learn. If you are assessing kids with a big, complex task every six weeks, you don’t have the time to break that task down into smaller chunks. The big argument in Seven Myths is that we need to decompose the skill. As a practical example: as an English teacher, you’re trying to judge a piece of writing.

A great book published a year ago, The Writing Revolution, is really good on this. The problem, it says, is that we don’t teach writing – we aren’t training students in how to do it – and that is exactly the issue I found. We were assessing writing – a lot – but at what point do we sit students down and say, ‘Here are the nuts and bolts of writing; when you break it down, this is what you need’? That wasn’t compatible with a levelled, or even a graded, approach. Because when you grade or level, you do want to assess a large piece of writing; but when you teach it, you want to break it down. The analogy I use in Making Good Progress is the marathon. When you run a marathon, 26.2 miles is the end goal. But nobody, unless you’re already an elite marathon runner, begins by running 26.2 miles. Nobody runs 26.2 miles in every training session. And nobody thinks that the way you make progress towards your end goal is by running marathons. So people do all kinds of other tasks. They go to the gym. They do cross-training, swimming, shorter runs, speed work. And all of those tasks go towards the complex goal.

So that’s how I got so involved in assessment: by realising that if you wanted to focus on a knowledge-based curriculum, the only way you could properly do it was within the framework of the assessment you were working with.

Which leads us neatly to comparative judgement.

As an English teacher, the biggest thing is that assessing writing is really hard. The minute students are writing in an extended way, those pieces are extremely hard to mark reliably. And not only that, but the marking starts to have a negative impact on teaching and learning, because what you end up with is marking to the rubric. The rubric might say something like ‘uses vocabulary originally…’ – a list of things that define good writing. The problem is that those sentences end up becoming the lesson objectives. You’re no longer teaching at the nuts-and-bolts level; you’re teaching at this generic level. You start saying things to students like ‘You need to infer more insightfully.’ Hang on – how helpful is that? The whole point of feedback is to give people something they can do next, and the rubric isn’t designed to be helpful like that. It’s not even that useful for markers, because two different markers can interpret the same line in different ways.

So what comparative judgement tries to do is help with reliability, efficiency and validity. The first two are quick wins: you get much better agreement, and you get there much quicker. And that’s amazing. There’s another benefit: it lets you move away from the rubric. When you look at two pieces of writing side by side and ask, ‘Which is the better piece?’, you go on your gut instinct and your knowledge of what good writing is. And the power of that is that you move away from teaching to the rubric.

How do people criticise this?

I think people find it odd at first when you move away from the mark scheme, when you say ‘use your gut instinct’. They’re quick to ask, ‘How do I know my gut instinct is right? And even if it is, what about everyone else’s?’ The way you get around those issues is that comparative judgement generates an enormously sophisticated statistical model. You have data on every judgement and every judge, so you can tell if a judge is an outlier – and that’s quite rare. You can see whether they’re in line with the group or not. The initial criticism is that ‘this just feels hopelessly subjective’. But we can prove it isn’t, because we can show you afterwards that the reliability you get from this – the agreement and consistency between the judges in the room – is greater than you get with a rubric. It feels subjective, but it isn’t; and marking with a rubric feels objective…but it isn’t.
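To make that a little more concrete, here is a minimal, illustrative sketch of the kind of statistical model that sits behind comparative judgement: a Bradley-Terry-style fit that turns a pile of pairwise ‘which is better?’ decisions into a quality scale. The script names and judgements below are invented toy data, and this is a sketch of the general technique, not No More Marking’s implementation.

```python
import math
from collections import defaultdict

# Toy pairwise judgements: (winner, loser). In real comparative judgement
# these come from many judges, each choosing the better of two scripts.
judgements = [
    ("script_A", "script_B"),
    ("script_A", "script_C"),
    ("script_B", "script_C"),
    ("script_B", "script_A"),
    ("script_C", "script_D"),
    ("script_D", "script_B"),
]

scripts = sorted({s for pair in judgements for s in pair})
theta = {s: 0.0 for s in scripts}  # estimated 'quality' per script

def p_win(a, b):
    """Bradley-Terry probability that script a is judged better than b."""
    return 1.0 / (1.0 + math.exp(theta[b] - theta[a]))

# Fit the qualities by simple gradient ascent on the log-likelihood.
learning_rate = 0.1
for _ in range(1000):
    grad = defaultdict(float)
    for winner, loser in judgements:
        p = p_win(winner, loser)
        grad[winner] += 1.0 - p
        grad[loser] -= 1.0 - p
    for s in scripts:
        theta[s] += learning_rate * grad[s]
    # Anchor the scale: only differences in theta are identified.
    mean = sum(theta.values()) / len(theta)
    for s in scripts:
        theta[s] -= mean

# The fitted thetas form a measurement scale; a judge whose decisions
# consistently disagree with it would show up as an outlier.
for s, quality in sorted(theta.items(), key=lambda kv: -kv[1]):
    print(f"{s}: {quality:+.2f}")
```

Fitted to real judging data, richer versions of this kind of model are what make it possible to report the reliability of the scale and to flag a judge whose decisions sit well outside the consensus.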

What’s next?

I’m still very involved in assessment. But I really want to do some writing on education technology. Comparative judgement is quite a tech-driven approach, so I’ve been thinking about it. And what I find fascinating is that there are some really amazing, innovative examples of tech use, but there are also a lot of gimmicks. Ed-tech at its worst can feel like education from years ago: ‘Kids don’t need to know stuff, they can just google it.’ That is like a mantra in ed-tech. It’s early stages, but I want to find out which approaches in technology work with the mind and are going to help learning, and which ones aren’t there yet. It might be, in some ways, similar to Seven Myths, because it’ll be looking at different approaches to technology and asking which ones are working with the grain of how our minds work and which ones aren’t.

Seven Myths about Education (2014) is available from Routledge. Making Good Progress? (2017) is available from Oxford University Press.