How does reading work?

Wednesday 8th Apr 2026, 12.30pm

Aside from the odd unfamiliar or complicated word, a lot of us won’t think twice when it comes to reading. It’s a skill we take for granted. But, relatively speaking, it’s actually a fairly new skill – beginning about 5,500 years ago. This suggests it’s not something we evolved to do, so how does it work? We chat to Prof Ole Jensen from Oxford’s Departments of Experimental Psychology and Psychiatry, who is trying to gain a better understanding of the neural processes behind reading, with the ultimate aim of helping those who are struggling to learn.

Read Transcript

[Music]

Emily Elias: Remember learning how to read? It starts out really hard, and then pretty soon you get the hang of it. And now you don’t even realise that you’re doing it. What is going on inside our brains when we are reading? On this episode of the Oxford Sparks Big Questions podcast, we’re asking, how does reading work?

Hello, I’m Emily Elias, and this is the show where we seek out the brightest minds at the University of Oxford, and we ask them the big questions. And for this one, we have found a researcher who’s trying to understand how reading works so he can help kids master the task.

Ole Jensen: My name is Ole Jensen. So I’m a professor at Oxford, and I work half my time in the Department of Experimental Psychology and the other half in Psychiatry. So the idea is that we do cognitive neuroscience with brain imaging as a focus. And what we discover in terms of mechanisms supporting cognition – the idea is to also translate that into clinical applications in psychiatry.

Emily: And particularly you’re interested in reading.

Ole: Yeah.

Emily: I think I take my reading for granted these days. I just read. I never really think too deeply about it. Can you just tell me how reading works?

Ole: Yeah. So that is what we are trying to uncover. So it turns out that, of course, reading has been studied quite a lot in terms of eye tracking and different kinds of behavioural experiments, but not so much is known about the neuronal mechanisms supporting reading. And that is what we are trying to get closer to understanding. So the interesting thing about reading is that the first written languages go back about five thousand years, right? So it’s a relatively new skill to be able to read and write. That also suggests that we have not evolved to have reading skills. In other words, our reading abilities rely on the same hardware as visual processing does. So we think that by taking some of the insight that we have gained on normal visual processing, so to speak, and spatial attention, and applying that to reading, we should be able to better understand the neurophysiology supporting reading.

Emily: So what is my brain doing when I see words on a page? What connections is it making?

Ole: Yeah. So there are several stages. So first of all, there’s the letters, the text. So at least when you are learning to read, you learn how to pronounce that text. And after that you also have to sort of look up the word you are pronouncing. So there is what we call lexical access. And eventually you want to understand what that word means. So that’s meaning, or semantics. And then that meaning of the word needs to be integrated into context, so you can understand the sentence you’re reading. So there are all these levels of the reading process that one can try to tease apart.

Emily: And what part of our brain is getting activated when we’re reading?

Ole: So first of all, there’s the occipital cortex, that is the back of your head, doing the visual processing. Then after that, there’s the so-called visual word form area that is doing the early processing of the word form, so to speak. And then as you go down the so-called ventral stream, that’s sort of the side part of the brain, the temporal lobe, there’s higher and higher level processing going on, where eventually you also process the meaning of the words. Then at the same time, we now also have evidence that the auditory cortex is being engaged. So it seems like this subvocal translation of the word to phonology is occurring even in skilled readers. And then things have to work together. You also have to move your eyes, right? One word at a time. At the same time, there’s parafoveal processing, the reading of the next word over. That also has to occur for you to plan the eye movements to the upcoming words. So all this has to be coordinated.

Emily: So how do we go from like “see Spot, see Spot run, run, Spot run” to like Shakespeare or something that really kind of twists our brain?

Ole: Yeah. So that’s a very good question, right? Because it also requires that you take the information you’re reading and then integrate it. You have to integrate that into your existing knowledge. That’s something we also hope to get at. But as a first stage, we are more sort of in the visual end of the process where we are trying to understand how you go from text to phonology to meaning, and then to sentence integration. And our focus is on how you take in multiple words at a time. So no doubt that when you are learning to read, you learn how to pronounce the words. And eventually you learn to read sentences and then you don’t do that overtly, right? But when you become a fluent reader, you also take in multiple words at the same time, right? So that means that you fixate on a word, but before you move your eyes to the next word, you already start to sort of pre-process that. And it’s that pre-processing of the upcoming words that we can study with our brain imaging tools.

Emily: Tell me about your study. What exactly are you looking at when it comes to how somebody reads text?

Ole: Yeah. So there are two aspects. So what we want to understand is how important is it to process these words in the parafovea, meaning the upcoming words.

Emily: Parafovea is a big word for me. That would trip me up if I was reading it.

Ole: Yeah. So with parafovea, we mean that when you fixate on a word, that word is sort of in your direct field of view. And it turns out that the further out you go in your visual field, the more blurry things become. So there’s a question as to how well you can process multiple words out there to the right of the word you are reading, because things become gradually more blurred. So what we do is that we measure the brain response not only to the word you fixate on, but also to the next word over. And what we can do is play tricks: we can change that next word over so it doesn’t fit into the context, right? So “the man took a walk with his pizza”. That sort of sounds weird to us, right? You would have expected “dog”. So there’s a bit of a violation of our expectation there. So now we can ask: when that word is not the word we fixate on, but in the so-called parafovea, is there a different brain response to it? So if the brain responds more strongly to one of these words that doesn’t fit into the context, that says something about the brain regions involved in processing the meaning of that word, but also the time course of that processing.

Also, your ability to process these upcoming words is predictive of how fast you can read, right? So faster readers are also the ones that take in more than one word at a time.

Emily: And as we’re going through all of those little stages, our brains are sort of getting faster and faster at taking in that information.

Ole: Yeah. You can argue that as you move from being a person who has just learned to read to becoming a fluent reader, it’s about optimising these different steps of the processing chain.

Emily: And has this sort of stuff been looked at before?

Ole: Yeah. There has been work done before on this, but there have also been recent developments in the tools for looking at brain activity that allow us to look at phenomena that have not been examined before.

Emily: What sort of things are you talking about?

Ole: Yeah. So for instance, we are playing different tricks. So we flicker words very fast, at sixty hertz. So you don’t see that flicker, but the brain responds to it. The word we are flickering could, for instance, be the next word, the word you haven’t yet fixated on. Then we can measure the brain response to that flickering. We call it rapid invisible frequency tagging. We can measure that response, and we can then manipulate whether the word fits into the sentence context. And then by measuring that response, we can make statements about how much spatial attention you move to that word, and to what extent that allocation of spatial attention depends on how well the word fits into the sentence context.

Emily: So what is that going to look like? Are you literally going to put somebody in an MRI and make them read stuff?

Ole: Yeah. So the tool we work with is called MEG, magnetoencephalography. It’s a tool where we measure brain activity in terms of magnetic fields. So you might have done this in physics: you take a battery and a wire, hook up the wire to the battery, and you have a compass, and then you see a deflection on the compass produced by the electrical current. Now it turns out that when neurons are communicating, there are also electrical currents running in the brain. These currents produce magnetic fields that we can detect outside the head.

Emily: So would it look like somebody wearing like a funny cap with lots of wires on it?

Ole: Yeah. So actually what it looks like now is like a big old-fashioned hairdryer that you put your head into, because the conventional systems are sort of quite big and bulky. However, there’s a new generation of these systems that relies on sensors that can be placed closer to the head. And a system like this was just installed at Oxford. And this is the system we want to use in children. And the big advantage is that the helmets, the sensor arrays of these systems, can be adapted to the smaller heads of the children. So we should be able to get better data.

Emily: How old are the children going to be that you’re going to be working with on this project?

Ole: Yeah. So that would be a range from seven to eleven. However, we will start with this and develop the tools. Maybe we want to also look in younger children eventually. But first of all, we want to look at relatively proficient readers, develop the tools, and then maybe go younger, but also look into children with problems acquiring reading.

Emily: What is your hope that you could sort of unlock with this research?

Ole: I think what really excites me is that we have been working on visual attention and brain oscillations for decades, and here we have a practical application to an important skill, right, meaning reading. And down the line, by also doing this research in children, there could be applications where we might be able to provide guidance on how to teach children to read. We are of course not there yet, but that is one of our long-term goals. Then in the longer term, that could have several practical applications. For instance, dyslexia is, of course, an issue for many children. We can now try to do a sort of sub-characterisation, because different children may have different underlying problems associated with reading difficulties. So this is what we hope to tease apart. And then if we can identify the more specific problems, of course, that can also be used to guide intervention.

[music]

Emily: This podcast was brought to you by Oxford Sparks from the University of Oxford, with music by John Lyons. And a special thanks to Professor Ole Jensen. Tell us what you think about this podcast. We are on the internet at Oxford Sparks, or you can go to our website oxfordsparks.ox.ac.uk.

I’m Emily Elias. Bye for now.

[music]
