1. Ratan Murty NA, Bashivan P, Abate A, DiCarlo JJ, Kanwisher N. Computational models of category-selective brain regions enable high-throughput tests of selectivity. Nat Commun 2021; 12:5540. PMID: 34545079; PMCID: PMC8452636; DOI: 10.1038/s41467-021-25409-6.
Abstract
Cortical regions apparently selective to faces, places, and bodies have provided important evidence for domain-specific theories of human cognition, development, and evolution. But claims of category selectivity are not quantitatively precise and remain vulnerable to empirical refutation. Here we develop artificial neural network-based encoding models that accurately predict the response to novel images in the fusiform face area, parahippocampal place area, and extrastriate body area, outperforming descriptive models and experts. We use these models to subject claims of category selectivity to strong tests, by screening for and synthesizing images predicted to produce high responses. We find that these high-response-predicted images are all unambiguous members of the hypothesized preferred category for each region. These results provide accurate, image-computable encoding models of each category-selective region, strengthen evidence for domain specificity in the brain, and point the way for future research characterizing the functional organization of the brain with unprecedented computational precision.
2. Ratan Murty NA, Teng S, Beeler D, Mynick A, Oliva A, Kanwisher N. Visual experience is not necessary for the development of face-selectivity in the lateral fusiform gyrus. Proc Natl Acad Sci U S A 2020; 117:23011-23020. PMID: 32839334; PMCID: PMC7502773; DOI: 10.1073/pnas.2004607117.
Abstract
The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.
3. Ratan Murty NA, Arun SP. Dynamics of 3D view invariance in monkey inferotemporal cortex. J Neurophysiol 2015; 113:2180-94. PMID: 25609108; PMCID: PMC4416554; DOI: 10.1152/jn.00810.2014.
Abstract
Rotations in depth are challenging for object vision because features can appear, disappear, be stretched or compressed. Yet we easily recognize objects across views. Are the underlying representations view invariant or dependent? This question has been intensely debated in human vision, but the neuronal representations remain poorly understood. Here, we show that for naturalistic objects, neurons in the monkey inferotemporal (IT) cortex undergo a dynamic transition in time, whereby they are initially sensitive to viewpoint and later encode view-invariant object identity. This transition depended on two aspects of object structure: it was strongest when objects foreshortened strongly across views and were similar to each other. View invariance in IT neurons was present even when objects were reduced to silhouettes, suggesting that it can arise through similarity between external contours of objects across views. Our results elucidate the viewpoint debate by showing that view invariance arises dynamically in IT neurons out of a representation that is initially view dependent.
4. Khosla M, Ratan Murty NA, Kanwisher N. A highly selective response to food in human visual cortex revealed by hypothesis-free voxel decomposition. Curr Biol 2022; 32:4159-4171.e9. PMID: 36027910; PMCID: PMC9561032; DOI: 10.1016/j.cub.2022.08.009.
Abstract
Prior work has identified cortical regions selectively responsive to specific categories of visual stimuli. However, this hypothesis-driven work cannot reveal how prominent these category selectivities are in the overall functional organization of the visual cortex, or what others might exist that scientists have not thought to look for. Furthermore, standard voxel-wise tests cannot detect distinct neural selectivities that coexist within voxels. To overcome these limitations, we used data-driven voxel decomposition methods to identify the main components underlying fMRI responses to thousands of complex photographic images. Our hypothesis-neutral analysis rediscovered components selective for faces, places, bodies, and words, validating our method and showing that these selectivities are dominant features of the ventral visual pathway. The analysis also revealed an unexpected component with a distinct anatomical distribution that responded highly selectively to images of food. Alternative accounts based on low- to mid-level visual features, such as color, shape, or texture, failed to account for the food selectivity of this component. High-throughput testing and control experiments with matched stimuli on a highly accurate computational model of this component confirm its selectivity for food. We registered our methods and hypotheses before replicating them on held-out participants and in a novel dataset. These findings demonstrate the power of data-driven methods and show that the dominant neural responses of the ventral visual pathway include not only selectivities for faces, scenes, bodies, and words but also the visually heterogeneous category of food, thus constraining accounts of when and why functional specialization arises in the cortex.
5. Murty NAR, Tiwary E, Sharma R, Nair N, Gupta R. γ-Glutamyl transpeptidase from Bacillus pumilus KS 12: decoupling autoprocessing from catalysis and molecular characterization of N-terminal region. Enzyme Microb Technol 2011; 50:159-64. PMID: 22305170; DOI: 10.1016/j.enzmictec.2011.08.005.
Abstract
Gamma-glutamyl transpeptidase from Bacillus pumilus KS12 (GGTBP) was cloned and expressed in a pET-28 E. coli expression system as a heterodimeric enzyme with molecular weights of 45 and 20 kDa for the large and small subunits, respectively. It was purified by nickel affinity chromatography, with hydrolytic and transpeptidase activities of 1.82 U/mg and 4.35 U/mg, respectively. Sequence analysis revealed that GGTBP was most closely related to Bacillus licheniformis GGT and had all the catalytic residues and nucleophiles for autoprocessing recognized from E. coli. It was optimally active at pH 8 and 60°C, exhibited pH stability from pH 6 to 9, and showed high thermostability with a t(1/2) of 15 min at 70°C. Its K(m) and V(max) were 0.045 mM and 4.35 μmol/mg/min, respectively. Decoupling autoprocessing from catalysis by co-expressing the large and small subunits in a pET-Duet1 E. coli expression system yielded active enzyme with a transpeptidase activity of 5.31 U/mg. Although N-terminal truncations of rGGTBP up to 95 aa did not affect autoprocessing, activity was lost with truncations beyond 63 aa.
6. Hatamimajoumerd E, Ratan Murty NA, Pitts M, Cohen MA. Decoding perceptual awareness across the brain with a no-report fMRI masking paradigm. Curr Biol 2022; 32:4139-4149.e4. PMID: 35981538; DOI: 10.1016/j.cub.2022.07.068.
Abstract
Does perceptual awareness arise within the sensory regions of the brain or within higher-level regions (e.g., the frontal lobe)? To answer this question, researchers traditionally compare neural activity when observers report being aware versus being unaware of a stimulus. However, it is unclear whether the resulting activations are associated with the conscious perception of the stimulus or the post-perceptual processes associated with reporting that stimulus. To address this limitation, we used both report and no-report conditions in a visual masking paradigm while participants were scanned using functional MRI (fMRI). We found that the overall univariate response to visible stimuli in the frontal lobe was robust in the report condition but disappeared in the no-report condition. However, using multivariate patterns, we could still decode in both conditions whether a stimulus reached conscious awareness across the brain, including in the frontal lobe. These results help reconcile key discrepancies in the recent literature and provide a path forward for identifying the neural mechanisms associated with perceptual awareness.
7. Ratan Murty NA, Arun SP. Seeing a straight line on a curved surface: decoupling of patterns from surfaces by single IT neurons. J Neurophysiol 2016; 117:104-116. PMID: 27733595; PMCID: PMC5209550; DOI: 10.1152/jn.00551.2016.
Abstract
We have no difficulty seeing a straight line drawn on a paper even when the paper is bent, but this inference is in fact nontrivial. Doing so requires either matching local features or representing the pattern after factoring out the surface shape. Here we show that single neurons in the monkey inferior temporal (IT) cortex show invariant responses to patterns across rigid and nonrigid changes of surfaces. We recorded neuronal responses to stimuli in which the pattern and the surrounding surface were varied independently. In a subset of neurons, we found pattern-surface interactions that produced similar responses to stimuli across congruent pattern and surface transformations. These interactions produced systematic shifts in curvature tuning of patterns when overlaid on convex and flat surfaces. Our results show that surfaces are factored out of patterns by single neurons, thereby enabling complex perceptual inferences. NEW & NOTEWORTHY: We have no difficulty seeing a straight line on a curved piece of paper, but in fact, doing so requires decoupling the shape of the surface from the pattern itself. Here we report a novel form of invariance in the visual cortex: single neurons in monkey inferior temporal cortex respond similarly to congruent transformations of patterns and surfaces, in effect decoupling patterns from the surface on which they are overlaid.
8. Murty N. Detection and determination of phenylhydrazine. Talanta 1984; 31:466. DOI: 10.1016/0039-9140(84)80120-4.
9. Kamps FS, Richardson H, Murty NAR, Kanwisher N, Saxe R. Using child-friendly movie stimuli to study the development of face, place, and object regions from age 3 to 12 years. Hum Brain Mapp 2022; 43:2782-2800. PMID: 35274789; PMCID: PMC9120553; DOI: 10.1002/hbm.25815.
Abstract
Scanning young children while they watch short, engaging, commercially produced movies has emerged as a promising approach for increasing data retention and quality. Movie stimuli also evoke a richer variety of cognitive processes than traditional experiments, allowing the study of multiple aspects of brain development simultaneously. However, because these stimuli are uncontrolled, it is unclear how effectively distinct profiles of brain activity can be distinguished from the resulting data. Here we develop an approach for identifying multiple distinct subject-specific regions of interest (ssROIs) using fMRI data collected during movie viewing. We focused on the test case of higher-level visual regions selective for faces, scenes, and objects. Adults (N = 13) were scanned while viewing a 5.6-min child-friendly movie, as well as a traditional localizer experiment with blocks of faces, scenes, and objects. We found that just 2.7 min of movie data could identify subject-specific face, scene, and object regions. While successful, movie-defined ssROIs still showed weaker domain selectivity than traditional ssROIs. Having validated our approach in adults, we then used the same methods on movie data collected from 3- to 12-year-old children (N = 122). Movie response timecourses in 3-year-old children's face, scene, and object regions were already significantly and specifically predicted by timecourses from the corresponding regions in adults. We also found evidence of continued developmental change, particularly in the face-selective posterior superior temporal sulcus. Taken together, our results reveal both early maturity and functional change in face, scene, and object regions, and more broadly highlight the promise of short, child-friendly movies for developmental cognitive neuroscience.
10. Ratan Murty NA, Teng S, Beeler D, Mynick A, Oliva A, Kanwisher N. Strong face selectivity in the fusiform can develop in the absence of visual experience. J Vis 2019. DOI: 10.1167/19.10.54a.
11. Ratan Murty NA, Arun SP. Effect of silhouetting and inversion on view invariance in the monkey inferotemporal cortex. J Neurophysiol 2017; 118:353-362. PMID: 28381484; PMCID: PMC5501916; DOI: 10.1152/jn.00008.2017.
Abstract
We effortlessly recognize objects across changes in viewpoint, but we know relatively little about the features that underlie viewpoint invariance in the brain. Here, we set out to characterize how viewpoint invariance in monkey inferior temporal (IT) neurons is influenced by two image manipulations: silhouetting and inversion. Reducing an object to its silhouette removes internal detail, so this would reveal how much viewpoint invariance depends on the external contours. Inverting an object retains but rearranges features, so this would reveal how much viewpoint invariance depends on the arrangement and orientation of features. Our main findings are 1) view invariance is weakened by silhouetting but not by inversion; 2) view invariance was stronger in neurons that generalized across silhouetting and inversion; 3) neuronal responses to natural objects matched those of silhouettes early and those of inverted objects only later, indicative of coarse-to-fine processing; and 4) the impact of silhouetting and inversion depended on object structure. Taken together, our results elucidate the underlying features and dynamics of view-invariant object representations in the brain. NEW & NOTEWORTHY: We easily recognize objects across changes in viewpoint, but the underlying features are unknown. Here, we show that view invariance in the monkey inferotemporal cortex is driven mainly by external object contours and is not specialized for object orientation. We also find that responses to natural objects match those of their silhouettes early in the response, and those of inverted versions later in the response, indicative of a coarse-to-fine processing sequence in the brain.
12. Lahner B, Dwivedi K, Iamshchinina P, Graumann M, Lascelles A, Roig G, Gifford AT, Pan B, Jin S, Ratan Murty NA, Kay K, Oliva A, Cichy R. Modeling short visual events through the BOLD moments video fMRI dataset and metadata. Nat Commun 2024; 15:6241. PMID: 39048577; PMCID: PMC11269733; DOI: 10.1038/s41467-024-50310-3.
Abstract
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain neural networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1000 short (3 s) naturalistic video clips of visual events across ten human subjects. We use the videos' extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
13. Ratan Murty NA, Arun S. Surfaces are factored out of patterns by monkey IT neurons. J Vis 2016. DOI: 10.1167/16.12.180.
14. Abate A, Mieczkowski E, Khosla M, DiCarlo J, Kanwisher N, Murty NAR. Computational Models Recapitulate Key Signatures of Face, Body and Scene Processing in the FFA, EBA, and PPA. J Vis 2022. DOI: 10.1167/jov.22.14.4337.
15. Cohen M, Lydic K, Ratan Murty NA. Perceptual awareness of natural scenes is limited by higher-level visual features: Evidence from deep neural networks. J Vis 2022. DOI: 10.1167/jov.22.14.4383.
16. Hatamimajoumerd E, Murty NAR, Pitts M, Cohen M. What are the neural correlates of perceptual awareness? Evidence from an fMRI no-report masking paradigm. J Vis 2022. DOI: 10.1167/jov.22.14.3732.
17. Luo X, Rechardt A, Sun G, Nejad KK, Yáñez F, Yilmaz B, Lee K, Cohen AO, Borghesani V, Pashkov A, Marinazzo D, Nicholas J, Salatiello A, Sucholutsky I, Minervini P, Razavi S, Rocca R, Yusifov E, Okalova T, Gu N, Ferianc M, Khona M, Patil KR, Lee PS, Mata R, Myers NE, Bizley JK, Musslick S, Bilgin IP, Niso G, Ales JM, Gaebler M, Ratan Murty NA, Loued-Khenissi L, Behler A, Hall CM, Dafflon J, Bao SD, Love BC. Large language models surpass human experts in predicting neuroscience results. Nat Hum Behav 2025; 9:305-315. PMID: 39604572; PMCID: PMC11860209; DOI: 10.1038/s41562-024-02046-9.
Abstract
Scientific discoveries often hinge on synthesizing decades of research, a task that potentially outstrips human information processing capacities. Large language models (LLMs) offer a solution. LLMs trained on the vast scientific literature could potentially integrate noisy yet interrelated findings to forecast novel results better than human experts. Here, to evaluate this possibility, we created BrainBench, a forward-looking benchmark for predicting neuroscience results. We find that LLMs surpass experts in predicting experimental outcomes. BrainGPT, an LLM we tuned on the neuroscience literature, performed better yet. Like human experts, when LLMs indicated high confidence in their predictions, their responses were more likely to be correct, which presages a future where LLMs assist humans in making discoveries. Our approach is not neuroscience specific and is transferable to other knowledge-intensive endeavours.
18. Dipani A, McNeal N, Ratan Murty NA. Linking faces to social cognition: The temporal pole as a potential social switch. Proc Natl Acad Sci U S A 2024; 121:e2411735121. PMID: 39024106; PMCID: PMC11295026; DOI: 10.1073/pnas.2411735121.
19. Khosla M, Murty NAR, Kanwisher N. Data-driven component modeling reveals the functional organization of high-level visual cortex. J Vis 2022. DOI: 10.1167/jov.22.14.4184.