Summary: The brain builds a hierarchy of knowledge by connecting lower-order sensory details with higher-order concepts, shaping our perception of the world. A new study shows how visual experiences affect the brain’s feedback connections, allowing us to integrate context and recognize patterns based on past experiences.
Researchers have found that visual input refines these connections, improving our ability to interpret complex stimuli and update our understanding of the environment.
Key data:
- Feedback connections in the brain link higher-order concepts with lower-order sensory details, thus supporting perception.
- Visual experiences refine these connections and enable more effective integration of contextual information.
- Understanding the combination of prior knowledge and new sensory impressions can provide insight into disorders such as autism and schizophrenia.
source: Champalimaud Center for the Unknown
How do we learn to understand our environment?
Over time, our brains build a hierarchy of knowledge in which higher-order concepts are linked to the lower-order features that make them up. For example, we learn that cupboards have drawers and that Dalmatians have black and white spots, rather than the other way around.
This interconnected framework shapes our expectations and perceptions of the world, allowing us to identify what we see based on context and experience.
“Take an elephant,” says Leopoldo Petreanu, lead author of the La Caixa-funded study.
“Elephants are associated with lower-level attributes such as color, size and weight, as well as higher-level contexts such as jungles or safaris. Linking concepts helps us understand the world and interpret ambiguous stimuli. If you are on a safari, you are more likely to spot an elephant behind the bushes than otherwise.
“If you know it is an elephant, it is similarly likely that you will perceive it as grey even in the dim light of dusk. But where in the brain tissue is this prior knowledge stored and how is it learned?”
The brain’s visual system consists of a network of cooperating areas, with lower areas processing simple details (e.g., small areas of space, colors, edges) and higher areas representing more complex concepts (e.g., larger areas of space, animals, faces).
Cells in higher areas send “feedback” connections to lower areas and are thus able to learn and internalize real-world relationships shaped by experience. Cells that encode an “elephant,” for example, could send feedback to cells that process features such as “gray,” “big,” and “heavy.”
The researchers therefore investigated how visual experiences influence the organization of these feedback projections, whose functional role is still largely unknown.
“We wanted to understand how these feedback projections store information about the world,” says Rodrigo Dias, one of the study’s lead authors.
“To do this, we studied the effects of visual experience on feedback projections to a lower visual area called V1 in mice. We reared two groups of mice differently: one in a normal environment with regular light exposure and the other in darkness. We then observed how the feedback connections and the cells they target in V1 responded to different areas of the visual field.”
In mice raised in the dark, the feedback connections and the V1 cells directly beneath them represented the same areas of visual space. Lead author Radhika Rajan takes up the story:
“It was amazing to see how well the spatial representations of higher and lower areas matched in the mice raised in the dark.
“This suggests that the brain has an inherent genetic blueprint for organizing these spatially aligned connections, independent of visual input.”
However, in normally raised mice, these connections were less precisely coordinated and there were more feedback inputs conveying information from surrounding areas of the visual field.
Rajan continues, “We found that feedback during visual experience provides more contextual and novel information and improves the ability of V1 cells to capture information from a larger area of the visual scene.”
This effect depended on the layer of origin within the higher visual area: feedback projections from deeper layers were more likely to convey surround information than those from superficial layers.
In addition, the team found that in normally raised mice, feedback inputs from the deep layers of the higher visual area to V1 are organized according to the visual patterns they “prefer” to see, such as vertical or horizontal lines.
“For example,” says Dias, “inputs that prefer vertical lines avoid sending surround information to V1 regions located along the vertical direction of visual space. In contrast, we found no such connectivity bias in mice raised in the dark.”
“This suggests that visual experience plays a crucial role in fine-tuning feedback connections and shaping the spatial information transmitted from higher to lower visual areas,” notes Petreanu.
“We have developed a computational model that shows how experience leads to a selection process that reduces the connections between feedback and V1 cells whose representations overlap too much. This minimizes redundancy, allowing V1 cells to integrate a more diverse range of feedback.”
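The selection process Petreanu describes can be illustrated with a toy simulation. The sketch below is not the authors' actual model; it is a minimal, hypothetical illustration of the general idea: starting from feedback inputs whose receptive fields (RFs) are retinotopically matched to a V1 neuron, pruning the most overlapping connections leaves the neuron pooling feedback from a wider, less redundant region of visual space. All parameters (RF width, input count, pruning threshold) are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: one V1 neuron with a receptive field (RF) centered
# at the origin, and many candidate feedback inputs whose RF centers start
# out clustered near it (i.e., retinotopically matched).
n_inputs = 200
rf_width = 1.0                                       # shared Gaussian RF width (arbitrary units)
centers = rng.normal(0.0, 0.8, size=(n_inputs, 2))   # feedback RF centers

def overlap(center, width=rf_width):
    """Gaussian RF overlap between a feedback input and the V1 neuron at (0, 0)."""
    d2 = np.sum(center**2)
    return np.exp(-d2 / (4 * width**2))

overlaps = np.array([overlap(c) for c in centers])

# "Experience" as a selection process: prune the half of the connections
# whose RFs overlap most with the V1 neuron's RF, keeping the rest.
keep = overlaps < np.quantile(overlaps, 0.5)
survivors = centers[keep]

# Surviving inputs lie, on average, farther from the V1 RF center, so the
# neuron now integrates feedback from a broader area of the visual scene.
mean_dist_before = np.linalg.norm(centers, axis=1).mean()
mean_dist_after = np.linalg.norm(survivors, axis=1).mean()
print(mean_dist_after > mean_dist_before)  # True: redundancy is reduced
```

Because overlap falls off with distance, removing the highest-overlap inputs necessarily shifts the surviving population outward, which is the sense in which the pruning "minimizes redundancy" in the quote above.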
Perhaps counterintuitively, this suggests that the brain encodes learned knowledge by connecting cells that represent unrelated concepts, ones that are less likely to be activated together given real-world patterns.
This could be an energy-efficient way to store information so that when a new stimulus, such as a pink elephant, is encountered, the brain’s preconfigured wiring maximizes activation, improves recognition, and updates predictions about the world.
Identifying this brain interface where existing knowledge connects with new sensory information could be helpful in developing interventions in cases where this integration process fails.
Petreanu concludes: “It is believed that such imbalances occur in disorders such as autism and schizophrenia. In autism, people may perceive everything as novel because prior information is not strong enough to influence perception.
“Conversely, in schizophrenia, prior information may be overly dominant, leading to perceptions that are internally generated rather than based on actual sensory input. Understanding how sensory information and prior knowledge are integrated may help to correct these imbalances.”
About this research news on learning and visual neuroscience
Author: Hedi Young
Source: Champalimaud Center for the Unknown
Contact: Hedi Young – Champalimaud Center for the Unknown
Image: The image is credited to Neuroscience News.
Original research: Open access.
“Visual experience reduces spatial redundancy between cortical feedback inputs and primary visual cortex neurons” by Leopoldo Petreanu et al., Neuron
Abstract
Visual experience reduces spatial redundancy between cortical feedback inputs and primary visual cortex neurons
Highlights
- Visual experience reduces the receptive field overlap between LM inputs and V1 neurons
- LM inputs from L5 relay more surround information to V1 neurons than those from L2/3
- The tuning-dependent organization of the LM inputs of L5 requires visual experience
- Minimizing spatial redundancy explains visual experience effects in LM inputs
Summary
The role of experience in the organization of cortical feedback (FB) is still unknown. We measured the effects of manipulating visual experience on the retinotopic specificity of supragranular and infragranular projections from the lateromedial (LM) visual area in layer (L)1 of the mouse primary visual cortex (V1).
LM inputs were, on average, retinotopically matched with V1 neurons in both normally reared and dark-reared mice, but visual exposure reduced the proportion of spatially overlapping inputs to V1. FB inputs from L5 conveyed more surround information to V1 than those from L2/3.
The organization of LM inputs from L5 depended on their orientation preference and was perturbed by dark rearing.
These observations were recapitulated by a model in which visual experience minimizes the receptive field overlap between LM inputs and V1 neurons.
Our results provide a mechanism for the experience dependence of surround modulations and offer clues as to how expected interareal coactivation patterns are learned in cortical circuits.