
Organization of high-level visual cortex in human infants

Ben Deen et al. Nat Commun. 2017 Jan 10;8:13995. doi: 10.1038/ncomms13995.

Abstract

How much of the structure of the human mind and brain is already specified at birth, and how much arises from experience? In this article, we consider the test case of extrastriate visual cortex, where a highly systematic functional organization is present in virtually every normal adult, including regions preferring behaviourally significant stimulus categories, such as faces, bodies, and scenes. Novel methods were developed to scan awake infants with fMRI while they viewed multiple categories of visual stimuli. Here we report that the visual cortex of 4–6-month-old infants contains regions that respond preferentially to abstract categories (faces and scenes), with a spatial organization similar to that of adults. However, precise response profiles and patterns of activity across multiple visual categories differ between infants and adults. These results demonstrate that the large-scale organization of category preferences in visual cortex is adult-like within a few months after birth but is subsequently refined through development.


Figures

Figure 1. Category-sensitive responses to faces and scenes in infants show adult-like spatial organization.
Regions preferring faces over scenes are reported in red/yellow, and regions preferring scenes over faces in blue. The top two rows of whole-brain activation maps show results from the two individual infants with the largest amount of usable data, while the third shows a group map with statistics across infants. Maps are thresholded at P<0.01 voxelwise and corrected for multiple comparisons using a clusterwise threshold of P<0.05.
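As an illustration of the thresholding described in this caption, the sketch below applies a voxelwise P-cutoff and then discards small clusters. It is a minimal stand-in, assuming a precomputed P-map and a cluster-extent cutoff that would in practice be derived from the null distribution behind the clusterwise P<0.05 correction; the function and parameter names are hypothetical, not the authors' code.

    import numpy as np
    from scipy import ndimage

    def cluster_threshold(p_map, voxel_p=0.01, min_cluster_size=50):
        # Voxelwise threshold (P<0.01 in the maps above).
        mask = p_map < voxel_p
        # Label contiguous suprathreshold clusters.
        labels, n_clusters = ndimage.label(mask)
        # Cluster sizes in voxels; background (label 0) is excluded.
        sizes = ndimage.sum(mask, labels, index=range(1, n_clusters + 1))
        # Keep only clusters at least as large as the extent cutoff
        # (min_cluster_size stands in for the clusterwise P<0.05 criterion).
        keep_labels = np.nonzero(np.asarray(sizes) >= min_cluster_size)[0] + 1
        return np.isin(labels, keep_labels)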
Figure 2. The location and reliability of responses to faces and scenes are consistent across infants and adults.
Brain images show heat maps of region-of-interest (ROI) locations across participants (% of ROIs that included a given voxel), with ROIs defined as the top 5% of voxels responding to faces over scenes (or vice versa) within an anatomical region. Bar plots show each ROI's response (per cent signal change, PSC) to faces and scenes in independent data, separately for Expts. 1, 2–8. Error bars show the standard deviation of a permutation-based null distribution for the corresponding value. Baseline corresponds to the response to scrambled scenes (Expts. 1–3, 7–8) or scrambled objects (Expts. 4–6). Statistics for infant data are presented in the main text; as expected, face and scene preferences were highly significant in adults for all regions (permutation test, n=3; all P's < 10^−15).
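The ROI definition used in this figure (top 5% of contrast voxels within an anatomical search space) and the permutation-based error bars can be sketched as follows. This is a schematic under stated assumptions, not the authors' code: the sign-flipping scheme is one common choice of permutation null, and all names are illustrative.

    import numpy as np

    def define_roi(contrast_map, anat_mask, top_frac=0.05):
        # Top 5% of voxels by (e.g.) faces-minus-scenes contrast,
        # restricted to a boolean anatomical mask.
        cutoff = np.percentile(contrast_map[anat_mask], 100 * (1 - top_frac))
        return anat_mask & (contrast_map >= cutoff)

    def permutation_null(psc_per_run, n_perm=10000, seed=0):
        # Null distribution for the mean PSC across runs, built by
        # randomly flipping each run's sign (an illustrative choice).
        rng = np.random.default_rng(seed)
        signs = rng.choice([-1.0, 1.0], size=(n_perm, len(psc_per_run)))
        return (signs * np.asarray(psc_per_run)).mean(axis=1)

The standard deviation of permutation_null(...) would then play the role of the error bars described above.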
Figure 3. Comparison of categorical and visual feature-based models of region-of-interest (ROI) responses.
(a) Schematic showing how high- and low-frequency content and rectilinearity were computed from movie frames. (b) Mean values of these visual features across the stimuli used in Expts. 1–2, normalized such that the maximum value across categories is set to 1. (c) Model fits of category and visual feature models to ROI responses, with error bars specifying standard error. In all three face-preferring regions, there was no significant difference between the category model and the best-performing visual feature model (ventral face region, t(54)=−0.48, P=0.64; lateral face region, t(54)=0.25, P=0.80; STS face region, t(54)=1.55, P=0.13). In these regions, the category model (including a high response to faces) and the rectilinearity model (a low response to rectilinearity) made very similar predictions; other types of stimuli (such as curve-scrambled faces) may be needed to distinguish these hypotheses. In contrast, for scene regions, the category model and low-level feature models made distinct predictions due to the inclusion of a highly rectilinear non-scene condition (grid-scrambled movies). For the two scene-preferring regions, the category model significantly outperformed all visual feature models. For brevity, we report statistics only for the comparison with the best-performing model (ventral scene region, t(54)=3.56, P=7.8 × 10^−4; lateral scene region, t(54)=2.56, P=0.013). HF, high-frequency content; L+H+R, low-frequency content, high-frequency content and rectilinearity; LF, low-frequency content; Recti, rectilinearity.
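Panel (a) summarizes how the three visual features were extracted from movie frames. The sketch below shows one plausible way to compute them for a single grayscale frame: frequency content split by a radial cutoff in the Fourier domain, and rectilinearity approximated as the share of gradient energy near cardinal orientations. The cutoff, the orientation tolerance, and the rectilinearity proxy itself are assumptions for illustration; the paper's actual rectilinearity measure is a dedicated filter-based index.

    import numpy as np

    def frequency_content(frame, cutoff=20):
        # Split Fourier amplitude into low vs high spatial frequencies
        # around a radial cutoff (in cycles per image; value illustrative).
        amp = np.abs(np.fft.fftshift(np.fft.fft2(frame)))
        h, w = frame.shape
        yy, xx = np.mgrid[:h, :w]
        radius = np.hypot(yy - h / 2, xx - w / 2)
        return amp[radius <= cutoff].sum(), amp[radius > cutoff].sum()

    def rectilinearity(frame, tol_deg=10):
        # Crude proxy: fraction of gradient energy at near-horizontal or
        # near-vertical orientations (angles folded into [0, 90) degrees).
        gy, gx = np.gradient(frame.astype(float))
        mag = np.hypot(gx, gy)
        theta = np.degrees(np.mod(np.arctan2(gy, gx), np.pi / 2))
        cardinal = (theta < tol_deg) | (theta > 90 - tol_deg)
        return mag[cardinal].sum() / (mag.sum() + 1e-12)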
Figure 4. Infants lack strongly category-selective regions.
Region-of-interest (ROI) responses (per cent signal change, PSC, in independent data) in regions defined by comparing faces to objects and scenes to objects, in infants and adults. Bar plots show responses of ROIs defined as the top 5% of voxels within an anatomical region, while line graphs show how the difference between face and object or scene and object responses varies as a function of ROI size. Adults show strongly selective responses, substantially higher to the preferred category than any other category, while infants do not show a reliable or selective response at any ROI size. Error bars show the standard deviation of a permutation-based null distribution for the corresponding PSC value or PSC difference. Baseline corresponds to the response to scrambled scenes (Expts. 2–3) or scrambled objects (Expts. 4–6).
Figure 5. Distinct representational similarity for multiple visual categories in infants and adults.
Left two images show representational similarity matrices: correlations between spatial patterns of response across extrastriate visual cortex, to faces (F), objects (O), bodies (B) and scenes (S). Bar graph on the right shows rank correlations (Kendall's tau) between similarity measures from pairs of participants. Within-group (adult–adult and infant–infant) rank correlations are significantly higher than between-group (adult–infant) rank correlations, indicating a reliably distinct similarity structure across groups. *denotes P<0.05.
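The similarity analysis in this figure reduces to two steps: correlate spatial response patterns across conditions within each participant, then rank-correlate the resulting matrices between participants. A minimal sketch, assuming each condition's response pattern has already been extracted as a vector over extrastriate voxels (names illustrative):

    import numpy as np
    from scipy.stats import kendalltau

    CONDITIONS = ["faces", "objects", "bodies", "scenes"]

    def similarity_matrix(patterns):
        # patterns: dict mapping condition name -> 1-D voxel response vector.
        # Returns the 4x4 matrix of pairwise pattern correlations.
        return np.corrcoef(np.vstack([patterns[c] for c in CONDITIONS]))

    def between_participant_tau(sim_a, sim_b):
        # Kendall's tau between two participants' similarity structures,
        # computed over the unique off-diagonal entries only.
        idx = np.tril_indices(len(CONDITIONS), k=-1)
        tau, _p = kendalltau(sim_a[idx], sim_b[idx])
        return tau

Comparing the distribution of within-group taus (adult–adult, infant–infant) against between-group taus (adult–infant) then gives the group contrast plotted in the bar graph.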


