Close your eyes.
You are immersed in a rainforest where birds cackle, monkeys howl and a light rain stirs a rustling in the trees.
Close your eyes again.
You are seated at an Austrian playground, the happy shrieks of children filling the air. In the next instant, you are transported to the middle of the Autobahn, where a cacophony of cars reverberates from your scalp to your toes.
Now open your eyes.
You are in the centre of a geodesic dome arrayed with 91 speakers – a virtual acoustic space that’s helping neuroscientists at Western understand how our brains process sounds.
“For too long, auditory research has involved people listening to single sounds in sound booths – and that’s just not how we listen in the real world and make sense of our auditory environment,” said Ingrid Johnsrude, Director of Western’s Brain and Mind Institute.
“This array allows us to present real auditory scenes to our research participants to really understand how they make sense of and organize sounds. It is part of our emphasis at the Brain and Mind Institute to study human behaviour and cognition in real-world environments.”
As people age, their brains are less nimble at interpreting layers of sounds: a conversation at a café, for example, or verbal instructions in a waiting room.
But researchers are working out exactly how and where the brain processes those complex, competing signals.
Is it easier to decipher familiar voices or unfamiliar ones in a soup of other sounds? Does our response change in lower registers or higher ones? Do we interpret a thread of conversation better or worse, or just differently, if the background is a constant buzz of voices or if there’s a sudden loud noise? Exactly what sound inputs can the brain untangle, and under what ideal or adverse circumstances?
Participants in neuroscience studies are often asked to offer feedback through behavioural reports – ‘I hear this, I can’t hear that’ – or through electroencephalographs (EEGs) that record their brain waves in response to sound.
But just as watching a nature film on a mobile phone pales in comparison to the real thing, listening to traffic noise through headphones is no match for the actual Autobahn.
“We’re used to thinking of virtual reality, visually,” Johnsrude said. “This is virtual reality, auditorially.”
For added realism, and to adapt to different research questions, researchers outside the room can manipulate which speakers emit which sounds and can toggle foreground voices and background noise levels as well, just as volumes and cadences change in real life.
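The article does not describe the control software, but the routing idea it mentions can be illustrated with a minimal sketch: each sound source is given a per-speaker gain, and each speaker's output is the gain-weighted sum of all active sources. The `Source` class, gain values and speaker assignments below are purely hypothetical, not sonible's actual system.

```python
# Hypothetical sketch: routing sound sources to a multi-speaker array.
# Each source carries one gain per speaker; a speaker's output is the
# gain-weighted sum of every source. All names and values are illustrative.

from dataclasses import dataclass, field

NUM_SPEAKERS = 91  # the dome described in the article has 91 speakers


@dataclass
class Source:
    name: str
    samples: list  # mono audio samples in the range -1.0 .. 1.0
    gains: list = field(default_factory=lambda: [0.0] * NUM_SPEAKERS)


def mix(sources, num_samples):
    """Return one list of mixed samples per speaker."""
    out = [[0.0] * num_samples for _ in range(NUM_SPEAKERS)]
    for src in sources:
        for spk, gain in enumerate(src.gains):
            if gain == 0.0:
                continue  # source is silent on this speaker
            for i in range(num_samples):
                out[spk][i] += gain * src.samples[i]
    return out


# Example: a foreground voice on speaker 0 at full level, background
# noise spread quietly across speakers 1-3. Raising or lowering these
# gains is the "toggling" of foreground and background levels.
voice = Source("voice", [0.5, -0.5, 0.5, -0.5])
voice.gains[0] = 1.0

noise = Source("noise", [0.1, 0.1, 0.1, 0.1])
for spk in (1, 2, 3):
    noise.gains[spk] = 0.2

mixed = mix([voice, noise], num_samples=4)
```

Real spatialization engines use far more sophisticated techniques (amplitude panning across neighbouring speakers, room-acoustic simulation), but the per-speaker gain matrix is the basic control surface this sketch assumes.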
The virtual acoustic space is capable of rendering three-dimensional soundfields and programmable acoustics – making it a four-dimensional facility.
“Combining auditory space and time is going to be really interesting for research,” Johnsrude said.
The research has implications for one of the main audiological concerns of middle-aged and older adults: that they can no longer pick up threads of conversation in a noisy room.
It also has applications in understanding how blind people use sound to navigate through rooms and among crowds.
“This equipment represents an impressive feat of hardware and software engineering. This was more than three years in development and only a handful of systems like this exist in North America,” she said.
The geodesic dome, speakers and assorted other equipment were newly installed by Austrian firm sonible.com, and the $700,000 cost is supported through investments from the Canada Foundation for Innovation (CFI) and the Ontario Research Fund (ORF).