Eric Kalenze, researchED ambassador to the US, writes about how quickly his understanding of evidence in education has changed, and how being part of a network was crucial to that growth.
Do you remember the educator you used to be? Like, the one you were before you learned all you have from education research?
If you haven’t done so in a while, I invite you to think back to the person you were so many research ‘thresholds’1 ago. Compare what you believed then about matters like effective learning conditions, kids’ development, assessing students’ progress, etc., to the things you think now.
Also, compare the practices you designed and carried out then to those of now. Do they look the same, or did you alter them over time to reflect the research insights you acquired?
And consider the support network you had when all those research-sparked epiphanies started popping: the people, in other words, you shared your new learning with, had your thinking pushed by, got clarifications from when necessary, and collaborated with on new practical actions. Were you surrounded by fellow travelers in your school/workplace, for instance, all of you similarly inspired by common sources? Or were you on your own to take in new concepts and accordingly re-design your instruction (and subsequently run online for necessary support, answers and echoes)?
I’m suggesting you consider these kinds of questions because the ‘pre-research educator’ has been on my mind a lot of late – first, because I’ve recently had the chance to get re-acquainted with my own pre-research self; and second, because that re-acquaintance has reminded me of how exciting it is to have an evidence-informed improvement movement like researchED gaining momentum in the US.
For with researchED, we finally have a way – through a network of fellow educator-learners, that is – to cut through the fads and snake-oil slicks out there and get the best instructional information straight to the people applying it every day. And let’s face it: with so much of the field having been unaware for so long about what research actually says about kids’ learning and the conditions that enable such learning, we’ve needed a better way for some time now. (As the late Jeanne Chall observed in her 2000 posthumously released classic The Academic Achievement Challenge, educators choose practices ‘in a direction opposite from the existing research evidence’.2)
Now let me back up a minute to explain how I came to be thinking about all this.
I’ve been able to spend some time with my own pre-research self via work I’m doing on my next book,3 and it’s been remarkably instructive. Through interviews with former colleagues, supervisors, and students, as well as through a review of various planning documents and classroom activities I’d created, I’ve been struck by a couple of revelations. First, the research I was studying at the time really did transform my instructional priorities, planning, and execution – and, of course, kids’ results (!). In other words, this is not something that my imagination has overblown through the years and frozen into some ego-protecting amber. Second (and important to this piece), I was struck by how difficult it was, learning and designing largely by myself, to bring that research into practice.
The time period covered by my book-in-progress is 2004–2008, which means I was nowhere near Twitter (heck, it didn’t exist until 2006), and a watershed cognitive-science-and-education title like Dan Willingham’s Why Don’t Students Like School? hadn’t even been published. (However, I was familiar with Dan via his ‘Ask the Cognitive Scientist’ column in the professional journal of the American Federation of Teachers, American Educator.4) As my only real guides in the early 2000s were the references sections of the works I was reading, my research wasn’t particularly time- or energy-efficient.
Also, self-study revealed to me that my research learning and application were a bit too random. Essentially, as I looked over my past work I could see that I was pretty much choosing research-guided solutions according to my classroom’s most pressing needs. To put it another way: while I may have been doing something to build background knowledge here and tweaking my writing/conventions instruction there, I was really taking a ‘band-aid’ approach to applying research. And while consistency and depth weren’t helped by the various priorities of my department and school (at multiple points of my self-study I found myself wondering, ‘What’s this meaningless film unit doing in here? And why in hell did I take them to the computer lab for this thing?’), the fact remains: as my pre-research self was growing into using research-informed practices, I was rather all over the place.
Still, looking back on it this many years out, it’s easy to see which ideas from research were resonating with me enough to productively build around.5 Getting there was a few-years-long process, though, and it was undoubtedly buoyed by my colleagues in the school-within-a-school (which, again, I joined in 2004). Indeed, by the latter part of the 2004–2008 span that is the focus of my book, I could see my random practices deepening into actual classroom premiums – philosophies, even. And I’m not sure I’d have seen the same without such a network to affirm and push me.
I also gradually acquired the confidence necessary to challenge instructional truths many of my colleagues had long accepted as self-evident, thus widening my impact beyond my classroom. (Like I say, this experience was profound. If you are interested in learning more, see the book when it’s ready!)
Though conducting this kind of mesearch wasn’t ever my aim, doing so through my current book-work led me to consider a number of important things about building evidence-supported practices in education. Most of all, it reminded me that everyone starts somewhere, and that some help can go a long way to building focused, sensible instructional practices supported by evidence.
As researchED exists for educators to hold one another up through just these sorts of learning, design, and application efforts, I’m thrilled to be part of organising it here in the US (and, of course, taking part at other conferences and online). We’ve needed a better way for a long time, and I feel like with researchED it might actually be here. I can’t wait to see how many education professionals’ careers – and, by extension, kids’ futures – benefit via the researchED learning network.
1. Cousin, G. (2006) ‘An introduction to threshold concepts’, Planet 17 (1) pp. 4–5. Available at: www.goo.gl/zXVKAn
2. Chall, J. S. (2000) The academic achievement challenge: what really works in the classroom? New York, NY: Guilford.
3. By the way: the book revolves around a profound teaching/leadership experience I had while a classroom teacher (designing and teaching in a school-within-a-school for at-risk high-schoolers, that is). I’m aiming to have drafting complete by late spring or so and, if all goes well, a release in early 2019 with John Catt Educational.
4. I found in my self-search that I’d used Willingham’s piece for AFT on teaching critical thinking in some staff PD I’d done in that period. See Willingham, D. T. (2007) ‘Critical thinking: why is it so hard to teach?’, American Educator 31 (2) pp. 8–19.
5. It’s clear that Ravitch’s Left Back (2001) and Egan’s Getting it Wrong from the Beginning (2004) had helped me see where my teacher-training had come from, for example, so I’d started jettisoning some pieces I’d long considered obligatory. Also, based on adjustments I was making to my English instruction, I can see the effects of having read at least some E. D. Hirsch (in particular The Schools We Need (1999) and The Knowledge Deficit (2006)), Anne Cunningham and Keith Stanovich (their sample piece for American Educator, ‘What Reading Does for the Mind’ (1998)), and John McWhorter’s The Power of Babel (2003) and Word on the Street (2007).