Every office or family has one: the colleague who walked straight into a wall, or the sibling who mistook a closed glass door for an open entrance.
Most of us, however, seem to have an innate sense of a room’s geometry. When we roll out of bed, our feet know exactly where to find the floor. We know we can fit through a doorway without needing a measuring tape to confirm its height and width.
Now, a team of neuroscientists, including Psychology professor Marieke Mur of Western’s Brain and Mind Institute, has identified the areas of the brain that help us navigate our surroundings safely. The findings not only show how we travel safely from Point A to Point B, but also provide key insights for artificial-intelligence technology aimed at mimicking our visual powers.
“We were interested in learning what regions of the brain were involved in understanding the boundaries of a scene,” said Mur, who co-authored the study with Linda Henriksson of Aalto University in Finland and principal investigator Nikolaus Kriegeskorte of Columbia University. “It’s very important for our navigation and for understanding where we can walk safely and where we can’t walk.”
How we ‘see’ our colourful, shape-filled, texture-rich world is a complex process, involving continuous stages of processing and communication between the retina and different parts of the brain.
The researchers wanted to discover how people seem to grasp the layout of a room almost instantaneously, and where exactly the brain processes that information. Using two non-invasive brain-imaging technologies, they presented study participants with a series of slightly different images of three-dimensional scenes.
If the first image showed a room with three walls, a ceiling and a floor, the next image might depict just two walls and a floor, or walls with no ceiling. The researchers also introduced different surface textures, such as clouds and fences. All told, there were 32 different spatial layouts, each rendered in three textures, for a total of 96 scenes.
As each image popped into view, the researchers monitored participants’ brain activity with functional magnetic resonance imaging (fMRI), which offers high spatial resolution, and magnetoencephalography (MEG), which tracks the timing of neural activity in the brain.
As they constructed, deconstructed and reconstructed these virtual rooms, researchers saw how the brain received and interpreted these visual scenes.
Together, the scans showed that a brain processing area called the occipital place area (OPA) was diligently interpreting the geometry of what participants saw. The OPA’s millions of neurons were already known to encode scenes rather than objects, but this work found the area is key to helping a person understand the space around them and their place within it.
And the neurons were doing that work at a speed comparable to how quickly we judge whether a face near us is that of a dangerous lion or a friendly parent.
“Within 100 milliseconds, you see that the region that is involved in extracting that information is already at work. It’s very fast,” Mur said. “Things that are important for us, that are important for behaviour or survival, require fast processing speed.”
The textures of the walls, floor and ceiling made no apparent difference to the brain region’s ability, or its speed, in computing the room’s layout, the study showed.
The paper, “Rapid invariant encoding of scene layout in human OPA,” is newly published in the journal Neuron. Mur’s work on the paper took place while she was at the MRC Cognition and Brain Sciences Unit, University of Cambridge.