A new study from Western’s renowned Brain and Mind Institute shows that when humans reach out and grab things, they do not rely on the same visual pathways in the brain that are used to perceive an object’s size.
Images of people and objects projected onto human eyes are constantly shrinking, expanding and changing shape as they move through the world. Yet remarkably, the world is seen as stable, and things are perceived to be the size they really are.
This is a good thing because otherwise perception of the world would be chaotic and impossible to interpret. The ability to see the real-world size of objects despite dramatic changes in the images captured by human eyes is called ‘size constancy.’
It is thought that the brain creates size constancy by taking into account how far away an object is: the farther away the object, the smaller the retinal image. As a consequence, even though the image of a car driving away from a person becomes smaller and smaller on the retina, people continue to see the car as being the same size as when it was nearer.
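The geometry behind this account is simple, and a minimal sketch can make it concrete. The snippet below is purely illustrative, not a model of the brain’s actual computation: it assumes a hypothetical 4.5-metre car and shows that while the visual angle the car subtends on the retina shrinks as it recedes, rescaling that angle by viewing distance recovers the same real-world size every time.

```python
import math

def retinal_angle(object_size_m: float, distance_m: float) -> float:
    """Visual angle (in radians) an object subtends at a given viewing distance."""
    return 2 * math.atan(object_size_m / (2 * distance_m))

def size_constancy_estimate(angle_rad: float, distance_m: float) -> float:
    """Invert the projection: rescale the retinal angle by distance to recover real size."""
    return 2 * distance_m * math.tan(angle_rad / 2)

car_length = 4.5  # metres (illustrative value)
for d in (10, 20, 40):
    theta = retinal_angle(car_length, d)
    print(f"{d:>3} m away: retinal angle = {math.degrees(theta):5.2f} deg, "
          f"recovered size = {size_constancy_estimate(theta, d):.2f} m")
```

Running this shows the retinal angle roughly halving each time the distance doubles, while the recovered size stays fixed at 4.5 m, which is the essence of what size constancy accomplishes.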
It is seldom appreciated, however, that object constancies must also operate for the visual guidance of goal-directed action. For example, when someone reaches out to pick up an object, the hand’s ‘in-flight’ aperture is scaled to the size of the goal object and ignores the decrease in retinal image size with increased viewing distance. This phenomenon is called ‘grip constancy.’
The new international study, published in Current Biology and directed by distinguished university professor Melvyn Goodale, investigated whether grip constancy depends on the same visual structures in the brain as perceptual size constancy, and found that it does not. Goodale collaborated on the study with Robert Whitwell, a visiting scientist from the University of British Columbia, and with neuroscientists from the University of Trento, the University of Exeter and La Trobe University.
For the study, the researchers examined the visual abilities of a woman (known by the initials MC) who suffered damage to primary visual cortex and related structures on both sides of her brain as a result of a stroke.
MC’s estimates of object size do not match the real size of objects but instead co-vary with the size of the image on her retina – thus, the closer the object, the bigger she thinks it is. Remarkably, however, the opening of her hand scales to the real size of objects located at different distances when she reaches out to pick them up.
“MC’s preserved ability to match her grasp to the real size of a goal object – despite a striking deficit in the ability to perceive the size of that object – means that grip constancy must depend on pathways in the brain that bypass primary visual cortex and other brain structures that mediate our perception of the world,” said Whitwell, lead author on the study.
According to Goodale, this new understanding can help engineers who are trying to devise machine vision systems for everything from object recognition to self-driving cars.
“These findings represent a first step in understanding how our brain provides us with a compelling but stable representation of the visual world – and at the same time allows us to interact with objects in that world,” said Goodale.