1
Kanwisher N. Animal models of the human brain: Successes, limitations, and alternatives. Curr Opin Neurobiol 2025; 90:102969. [PMID: 39914250] [DOI: 10.1016/j.conb.2024.102969]
Abstract
The last three decades of research in human cognitive neuroscience have given us an initial "parts list" for the human mind in the form of a set of cortical regions with distinct and often very specific functions. But current neuroscientific methods in humans have limited ability to reveal exactly what these regions represent and compute, the causal role of each in behavior, and the interactions among regions that produce real-world cognition. Animal models can help to answer these questions when homologues exist in other species, like the face system in macaques. When homologues do not exist in animals, for example for speech and music perception, and understanding of language or other people's thoughts, intracranial recordings in humans play a central role, along with a new alternative to animal models: artificial neural networks.
Affiliation(s)
- Nancy Kanwisher: Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, United States.
2
Sabu A, Irvine D, Grayden DB, Fallon J. Ensemble responses of auditory midbrain neurons in the cat to speech stimuli at different signal-to-noise ratios. Hear Res 2025; 456:109163. [PMID: 39657280] [DOI: 10.1016/j.heares.2024.109163]
Abstract
Originally reserved for those who are profoundly deaf, cochlear implantation is now common for people with partial hearing loss, particularly when combined with a hearing aid. This combined intervention enhances speech comprehension and sound quality when compared to electrical stimulation alone, particularly in noisy environments, but the physiological basis for the benefits is not well understood. Our long-term aim is to elucidate the underlying physiological mechanisms of this improvement, and as a first step in this process, we have investigated, in normal-hearing cats, the degree to which the patterns of neural activity evoked in the inferior colliculus (IC) by speech sounds in various levels of noise allow discrimination between those sounds. Neuronal responses were recorded simultaneously from 32 sites across the tonotopic axis of the IC in anaesthetised normal-hearing cats (n = 7). Speech sounds were presented at 20, 40 and 60 dB SPL in quiet and with increasing levels of additive noise (signal-to-noise ratios (SNRs) -20, -15, -10, -5, 0, +5, +10, +15, +20 dB). Neural discrimination was assessed using a Euclidean measure of distance between neural responses, resulting in a function reflecting speech sound differentiation across various SNRs. Responses of IC neurons reliably encoded the speech stimuli when presented in quiet, with optimal performance when an analysis bin-width of 5-10 ms was used. Discrimination thresholds did not depend on stimulus level and were best for shorter analysis bin-widths. This study sheds light on how the auditory midbrain represents speech sounds and provides baseline data with which responses to electro-acoustic speech sounds in partially deafened animals can be compared.
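The Euclidean-distance discrimination described in this abstract amounts to a nearest-template classifier over binned ensemble responses. A minimal sketch follows; the bin width, templates, and noise level are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def bin_spikes(spike_times, duration, binwidth):
    # Histogram spike times (seconds) into fixed-width analysis bins.
    edges = np.arange(0.0, duration + binwidth, binwidth)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

def nearest_template(response, templates):
    # Classify a trial as the template with minimum Euclidean distance.
    dists = [np.linalg.norm(np.asarray(response) - np.asarray(t))
             for t in templates]
    return int(np.argmin(dists))

# Toy example: two "speech sounds" with distinct binned firing profiles.
rng = np.random.default_rng(0)
template_a = np.array([5.0, 1.0, 4.0, 0.5])
template_b = np.array([0.5, 4.0, 1.0, 5.0])
trial = template_a + rng.normal(0.0, 0.2, size=4)  # noisy rendition of sound A
label = nearest_template(trial, [template_a, template_b])
```

Sweeping the bin width of `bin_spikes` and plotting classification accuracy against it reproduces the kind of bin-width analysis the abstract reports.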
Affiliation(s)
- Anu Sabu: Bionics Institute, Fitzroy, Victoria, Australia; Medical Bionics Department, The University of Melbourne, Parkville, Victoria, Australia.
- Dexter Irvine: Bionics Institute, Fitzroy, Victoria, Australia; School of Psychological Sciences, Monash University, Clayton, Victoria, Australia.
- David B Grayden: Bionics Institute, Fitzroy, Victoria, Australia; Department of Biomedical Engineering and Graeme Clark Institute, The University of Melbourne, Melbourne, Victoria, Australia.
- James Fallon: Bionics Institute, Fitzroy, Victoria, Australia; Medical Bionics Department, The University of Melbourne, Parkville, Victoria, Australia.
3
Méndez JM, Cooper BG, Goller F. Note similarities affect syntactic stability in zebra finches. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2025; 211:35-52. [PMID: 39133335] [DOI: 10.1007/s00359-024-01713-6]
Abstract
The acquisition of an acoustic template is a fundamental component of vocal imitation learning, which is used to refine innate vocalizations and develop a species-specific song. In the absence of a model, birds fail to develop species typical songs. In zebra finches (Taeniopygia guttata), tutored birds produce songs with a stereotyped sequence of distinct acoustic elements, or notes, which form the song motif. Songs of untutored individuals feature atypical acoustic and temporal structure. Here we studied songs and associated respiratory patterns of tutored and untutored male zebra finches to investigate whether similar acoustic notes influence the sequence of song elements. A subgroup of animals developed songs with multiple acoustically similar notes that are produced with similar respiratory motor gestures. These birds also showed increased syntactic variability in their adult motif. Sequence variability tended to occur near song elements which showed high similarity in acoustic structure and underlying respiratory motor gestures. The duration and depth of the inspirations preceding the syllables where syntactic variation occurred did not allow prediction of the following sequence of notes, suggesting that the varying duration and air requirement of the following expiratory pulse is not predictively encoded in the motor program. This study provides a novel method for calculation of motor/acoustic similarity, and the results suggest that the note is a fundamental acoustic unit in the organization of the motif and could play a role in the neural code for song syntax.
Affiliation(s)
- Jorge M Méndez: Department of Physics and Astronomy, Minnesota State University-Mankato, Mankato, MN, USA.
- Brenton G Cooper: Department of Psychology, Texas Christian University, Fort Worth, TX, USA.
- Franz Goller: Department of Biology, University of Utah, Salt Lake City, UT, USA; Institute of Zoophysiology, University of Münster, Münster, Germany.
4
Schmid P, Reichert C, Knight RT, Dürschmid S. Differential contributions of the C1 ERP and broadband high-frequency activity to visual processing. J Neurophysiol 2025; 133:78-84. [PMID: 39589840 DOI: 10.1152/jn.00292.2024]
Abstract
The high-frequency activity (HFA; 80-150 Hz) in human intracranial recordings shows a differential modulation to different degrees of contrast when stimuli are behaviorally relevant, indicating a feedforward process. However, the HFA is also significantly dominated by superficial layers and exhibits a peak before 200 ms, suggesting that it is more likely a feedback signal. Magnetoencephalographic (MEG) recordings are suited to reveal an HFA modulation similar to its modulation in intracranial recordings. This allows for noninvasive, direct comparison of HFA with the C1, an established measure for feedforward input to V1, to test whether HFA represents feedforward or rather feedback processing. In simultaneous recordings, we used the EEG-C1 event-related potential (ERP) component and MEG-HFA to define feedforward processing in visual cortices. C1 latency preceded the HFA peak modulation, which had a more sustained response. Furthermore, modulation parameters like onset, peak time, and peak amplitude were uncorrelated. Most importantly, the C1 but not HFA distinguished small task-irrelevant contrast differences in visual stimulation. These results highlight the differential roles for the C1 and HFA in visual processing, with the C1 measuring feedforward discrimination ability and HFA indexing feedforward and feedback processing.
NEW & NOTEWORTHY Whether the broadband high-frequency activity (HFA) represents exclusively feedforward or feedback processing remains unclear. In this study, we compared the response characteristics of the HFA-magnetoencephalographic (MEG) and the C1-EEG component to systematic contrast modulations of task-irrelevant visual stimulation. Our findings reveal that the more sustained HFA follows the C1 component and, unlike the C1, is not modulated by task-irrelevant contrast differences. This timing of the HFA modulation suggests that HFA encompasses both feedforward and feedback processing.
Affiliation(s)
- Paul Schmid: Department of Cellular Neuroscience, Leibniz Institute for Neurobiology, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany.
- Christoph Reichert: Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany.
- Robert T Knight: Department of Psychology and the Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States.
- Stefan Dürschmid: Department of Cellular Neuroscience, Leibniz Institute for Neurobiology, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany; Department of Psychology and the Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States.
5
Li J, Cao D, Li W, Sarnthein J, Jiang T. Re-evaluating human MTL in working memory: insights from intracranial recordings. Trends Cogn Sci 2024; 28:1132-1144. [PMID: 39174398] [DOI: 10.1016/j.tics.2024.07.008]
Abstract
The study of human working memory (WM) holds significant importance in neuroscience, yet exploring the role of the medial temporal lobe (MTL) in WM has been limited by the technological constraints of noninvasive methods. Recent advancements in human intracranial neural recordings have indicated the involvement of the MTL in WM processes. These recordings show that different regions of the MTL are involved in distinct aspects of WM processing and also dynamically interact with each other and the broader brain network. These findings support incorporating the MTL into models of the neural basis of WM. This integration can better reflect the complex neural mechanisms underlying WM and enhance our understanding of WM's flexibility, adaptability, and precision.
Affiliation(s)
- Jin Li: School of Psychology, Capital Normal University, Beijing 100048, China.
- Dan Cao: School of Psychology, Capital Normal University, Beijing 100048, China; Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
- Wenlu Li: Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China.
- Johannes Sarnthein: Department of Neurosurgery, University Hospital Zurich, University of Zurich, 8091 Zurich, Switzerland; Zurich Neuroscience Center, ETH Zurich, 8057 Zurich, Switzerland.
- Tianzi Jiang: Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Xiaoxiang Institute for Brain Health and Yongzhou Central Hospital, Yongzhou 425000, Hunan Province, China.
6
Zhang D, Wang Z, Qian Y, Zhao Z, Liu Y, Hao X, Li W, Lu S, Zhu H, Chen L, Xu K, Li Y, Lu J. A brain-to-text framework for decoding natural tonal sentences. Cell Rep 2024; 43:114924. [PMID: 39485790] [DOI: 10.1016/j.celrep.2024.114924]
Abstract
Speech brain-computer interfaces (BCIs) directly translate brain activity into speech sound and text. Despite successful applications in non-tonal languages, the distinct syllabic structures and pivotal lexical information conveyed through tonal nuances present challenges in BCI decoding for tonal languages like Mandarin Chinese. Here, we designed a brain-to-text framework to decode Mandarin sentences from invasive neural recordings. Our framework dissects speech onset, base syllables, and lexical tones, integrating them with contextual information through Bayesian likelihood and a Viterbi decoder. The results demonstrate accurate tone and syllable decoding during naturalistic speech production. The overall word error rate (WER) for 10 offline-decoded tonal sentences with a vocabulary of 40 high-frequency Chinese characters is 21% (chance: 95.3%) averaged across five participants, and tone decoding accuracy reaches 93% (chance: 25%), surpassing previous intracranial Mandarin tonal syllable decoders. This study provides a robust and generalizable approach for brain-to-text decoding of continuous tonal speech sentences.
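The framework's final stage, combining per-step neural likelihoods for syllables/tones with contextual probabilities through a Viterbi search, rests on the standard Viterbi recursion. A minimal sketch over a hypothetical two-syllable vocabulary follows; all probabilities here are invented for illustration, not taken from the paper.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    # Most-likely state sequence from per-step emission log-likelihoods
    # (T x S), transition log-probs (S x S), and initial log-probs (S,).
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # rows index the previous state
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):              # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical vocabulary of two syllables: neural evidence (emissions)
# favors syllable 0, then 1, then 0; the bigram prior favors repetition.
emit = np.array([[0.9, 0.1], [0.05, 0.95], [0.9, 0.1]])
trans = np.array([[0.7, 0.3], [0.3, 0.7]])
init = np.array([0.5, 0.5])
path = viterbi(np.log(emit), np.log(trans), np.log(init))
```

In the paper's setting the emission scores would come from the base-syllable and lexical-tone classifiers, and the transition scores from a Mandarin character-level language model.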
Affiliation(s)
- Daohan Zhang: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China; National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China.
- Zhenjie Wang: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China.
- Youkun Qian: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China; National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China.
- Zehao Zhao: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China; National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China.
- Yan Liu: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China; National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China.
- Xiaotao Hao: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China; National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China.
- Wanxin Li: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China; National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China.
- Shuo Lu: Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou 510080, China.
- Honglin Zhu: Faculty of Life Sciences and Medicine, King's College London, London SE1 1UL, UK.
- Luyao Chen: School of International Chinese Language Education, Beijing Normal University, Beijing 100875, China.
- Kunyu Xu: Institute of Modern Languages and Linguistics, Fudan University, Shanghai 200433, China.
- Yuanning Li: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai 201210, China; Shanghai Clinical Research and Trial Center, Shanghai 201210, China.
- Junfeng Lu: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China; National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China; Institute of Modern Languages and Linguistics, Fudan University, Shanghai 200433, China; MOE Frontiers Center for Brain Science, Huashan Hospital, Fudan University, Shanghai 200040, China.
7
Lakretz Y, Friedmann N, King JR, Mankin E, Rangel A, Tankus A, Dehaene S, Fried I. Modality-Specific and Amodal Language Processing by Single Neurons. bioRxiv 2024:2024.11.16.623907. [PMID: 39605371] [PMCID: PMC11601528] [DOI: 10.1101/2024.11.16.623907]
Abstract
According to psycholinguistic theories, during language processing, spoken and written words are first encoded along independent phonological and orthographic dimensions, then enter into modality-independent syntactic and semantic codes. Non-invasive brain imaging has isolated several cortical regions putatively associated with those processing stages, but lacks the resolution to identify the corresponding neural codes. Here, we describe the firing responses of over 1000 neurons, and mesoscale field potentials from over 1400 microwires and 1500 iEEG contacts in 21 awake neurosurgical patients with implanted electrodes during written and spoken sentence comprehension. Using forward modeling of temporal receptive fields, we determined which sensory or abstract dimensions are encoded. We observed a double dissociation between superior temporal neurons sensitive to phonemes and phonological features and previously unreported ventral occipito-temporal neurons sensitive to letters and orthographic features. We also discovered novel neurons, primarily located in middle temporal and inferior frontal areas, which are modality-independent and show responsiveness to higher linguistic features. Overall, these findings show how language processing can be linked to neural dynamics, across multiple brain regions at various resolutions and down to the level of single neurons.
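The forward modeling of temporal receptive fields mentioned here, predicting a neural response from time-lagged stimulus features, is commonly fit with ridge regression. A minimal sketch follows; the toy stimulus, kernel, and regularization value are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    # Design matrix of time-lagged copies of a 1-D stimulus feature.
    T = len(stimulus)
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:T - lag]
    return X

def fit_trf(stimulus, response, n_lags, ridge=1e-6):
    # Ridge-regularized least squares: response ~ lagged stimulus.
    X = lagged_design(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ response)

rng = np.random.default_rng(0)
stim = rng.normal(size=500)
true_kernel = np.array([1.0, 0.5, 0.25])        # hypothetical receptive field
resp = lagged_design(stim, 3) @ true_kernel     # noiseless toy response
estimated = fit_trf(stim, resp, 3)
```

In the study's setting, the stimulus columns would be phonetic or orthographic feature time series and the response a neuron's firing rate; comparing fits across feature sets is what identifies which dimensions a neuron encodes.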
Affiliation(s)
- Yair Lakretz: Laboratoire des Sciences Cognitives et Psycholinguistiques, Département d'études cognitives, Ecole Normale Supérieure, PSL University, CNRS, Paris, France; Cognitive Neuroimaging Unit, CEA, INSERM U 992, Université Paris-Saclay, NeuroSpin center, Gif-sur-Yvette, France.
- Naama Friedmann: School of Education, Tel-Aviv University, Tel-Aviv, Israel; Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel.
- Jean-Rémi King: Laboratoire des Systèmes Perceptifs, Département d'études cognitives, Ecole Normale Supérieure, PSL University, CNRS, Paris, France.
- Emily Mankin: Department of Neurosurgery, David Geffen School of Medicine, UCLA, Los Angeles, California, USA.
- Anthony Rangel: Department of Neurosurgery, David Geffen School of Medicine, UCLA, Los Angeles, California, USA.
- Ariel Tankus: Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel; Faculty of Medicine, Tel Aviv University, Tel-Aviv, Israel; Functional Neurosurgery Unit, Tel Aviv Sourasky Medical Center, Tel-Aviv, Israel.
- Stanislas Dehaene: Cognitive Neuroimaging Unit, CEA, INSERM U 992, Université Paris-Saclay, NeuroSpin center, Gif-sur-Yvette, France; Collège de France, Université Paris Sciences Lettres (PSL), Paris, France.
- Itzhak Fried: Department of Neurosurgery, David Geffen School of Medicine, UCLA, Los Angeles, California, USA; Faculty of Medicine, Tel Aviv University, Tel-Aviv, Israel.
8
Kurteff GL, Field AM, Asghar S, Tyler-Kabara EC, Clarke D, Weiner HL, Anderson AE, Watrous AJ, Buchanan RJ, Modur PN, Hamilton LS. Spatiotemporal Mapping of Auditory Onsets during Speech Production. J Neurosci 2024; 44:e1109242024. [PMID: 39455254] [PMCID: PMC11580786] [DOI: 10.1523/jneurosci.1109-24.2024]
Abstract
The human auditory cortex is organized according to the timing and spectral characteristics of speech sounds during speech perception. During listening, the posterior superior temporal gyrus is organized according to onset responses, which segment acoustic boundaries in speech, and sustained responses, which further process phonological content. When we speak, the auditory system is actively processing the sound of our own voice to detect and correct speech errors in real time. This manifests in neural recordings as suppression of auditory responses during speech production compared with perception, but whether this differentially affects the onset and sustained temporal profiles is not known. Here, we investigated this question using intracranial EEG recorded from seventeen pediatric, adolescent, and adult patients with medication-resistant epilepsy while they performed a reading/listening task. We identified onset and sustained responses to speech in the bilateral auditory cortex and observed a selective suppression of onset responses during speech production. We conclude that onset responses provide a temporal landmark during speech perception that is redundant with forward prediction during speech production and are therefore suppressed. Phonological feature tuning in these "onset suppression" electrodes remained stable between perception and production. Notably, auditory onset responses and phonological feature tuning were present in the posterior insula during both speech perception and production, suggesting an anatomically and functionally separate auditory processing zone that we believe to be involved in multisensory integration during speech perception and feedback control.
Affiliation(s)
- Garret Lynn Kurteff: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, Texas 78712.
- Alyssa M Field: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, Texas 78712.
- Saman Asghar: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, Texas 78712; Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030.
- Elizabeth C Tyler-Kabara: Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, Texas 78712; Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, Texas 78712.
- Dave Clarke: Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, Texas 78712; Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, Texas 78712; Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, Texas 78712.
- Howard L Weiner: Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030.
- Anne E Anderson: Department of Pediatrics, Baylor College of Medicine, Houston, Texas 77030.
- Andrew J Watrous: Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030.
- Robert J Buchanan: Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, Texas 78712.
- Pradeep N Modur: Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, Texas 78712.
- Liberty S Hamilton: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, Texas 78712; Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, Texas 78712.
9
Sohoglu E, Beckers L, Davis MH. Convergent neural signatures of speech prediction error are a biological marker for spoken word recognition. Nat Commun 2024; 15:9984. [PMID: 39557848] [PMCID: PMC11574182] [DOI: 10.1038/s41467-024-53782-5]
Abstract
We use MEG and fMRI to determine how predictions are combined with speech input in superior temporal cortex. We compare neural responses to words in which first syllables strongly or weakly predict second syllables (e.g., "bingo", "snigger" versus "tango", "meagre"). We further compare neural responses to the same second syllables when predictions mismatch with input during pseudoword perception (e.g., "snigo" and "meago"). Neural representations of second syllables are suppressed by strong predictions when predictions match sensory input but show the opposite effect when predictions mismatch. Computational simulations show that this interaction is consistent with prediction error but not alternative (sharpened signal) computations. Neural signatures of prediction error are observed 200 ms after second syllable onset and in early auditory regions (bilateral Heschl's gyrus and STG). These findings demonstrate prediction error computations during the identification of familiar spoken words and perception of unfamiliar pseudowords.
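The key dissociation this abstract tests (suppression for matching strong predictions, enhancement for mismatches under prediction error, with the opposite ordering under a sharpened-signal code) can be pictured with a toy feature model. The two response rules and feature vectors below are illustrative stand-ins, not the paper's computational simulations.

```python
import numpy as np

def pe_response(stimulus, prediction):
    # Prediction error: units signal the part of the input
    # left unexplained by the prediction.
    return np.abs(stimulus - prediction).sum()

def sharpened_response(stimulus, prediction):
    # Sharpened signal: the prediction acts as a gain on expected features.
    return (stimulus * prediction).sum()

syllable = np.array([1.0, 0.0])  # features of the heard second syllable
match = np.array([0.9, 0.0])     # strong, fulfilled prediction (e.g., "bingo")
mismatch = np.array([0.0, 0.9])  # strong, violated prediction (pseudoword)

# Under prediction error, a matching strong prediction suppresses the
# response while a mismatch enhances it: the interaction reported here.
suppressed = pe_response(syllable, match)
enhanced = pe_response(syllable, mismatch)
```

A sharpened code gives the reverse ordering (`sharpened_response` is larger for the match), which is why the observed interaction discriminates between the two accounts.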
Affiliation(s)
- Ediz Sohoglu: School of Psychology, University of Sussex, Brighton, UK.
- Loes Beckers: MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK; Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands; Cochlear Ltd., Mechelen, Belgium.
- Matthew H Davis: MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK.
10
Stringer C, Pachitariu M. Analysis methods for large-scale neuronal recordings. Science 2024; 386:eadp7429. [PMID: 39509504] [DOI: 10.1126/science.adp7429]
Abstract
Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.
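One of the statistical pitfalls this kind of review warns about is estimating population dimensionality on the same trials used to fit the components, which inflates explained variance. A minimal cross-validated sketch follows; the toy data and dimensions are illustrative assumptions, not an example from the article.

```python
import numpy as np

# Simulate a population whose activity lives in a 3-D latent space.
rng = np.random.default_rng(1)
n_trials, n_neurons, latent_dim = 200, 40, 3
latents = rng.normal(size=(n_trials, latent_dim))
mixing = rng.normal(size=(latent_dim, n_neurons))
data = latents @ mixing + 0.1 * rng.normal(size=(n_trials, n_neurons))

# Fit PCs on half the trials; score variance captured on the held-out half.
train = data[::2] - data[::2].mean(0)
test = data[1::2] - data[1::2].mean(0)
_, _, components = np.linalg.svd(train, full_matrices=False)  # PCs as rows

def heldout_variance(x, components, k):
    # Fraction of held-out variance captured by the top-k training PCs.
    proj = x @ components[:k].T @ components[:k]
    return 1.0 - ((x - proj) ** 2).sum() / (x ** 2).sum()

v3 = heldout_variance(test, components, 3)    # saturates near the true dim
v10 = heldout_variance(test, components, 10)  # extra PCs add almost nothing
```

Held-out variance plateaus at the true latent dimensionality, whereas within-sample variance keeps rising with every added component.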
Affiliation(s)
- Carsen Stringer: Howard Hughes Medical Institute (HHMI) Janelia Research Campus, Ashburn, VA, USA.
- Marius Pachitariu: Howard Hughes Medical Institute (HHMI) Janelia Research Campus, Ashburn, VA, USA.
11
Park S, Lipton M, Dadarlat MC. Decoding multi-limb movements from two-photon calcium imaging of neuronal activity using deep learning. J Neural Eng 2024; 21:066006. [PMID: 39508456] [DOI: 10.1088/1741-2552/ad83c0]
Abstract
Objective. Brain-machine interfaces (BMIs) aim to restore sensorimotor function to individuals suffering from neural injury and disease. A critical step in implementing a BMI is to decode movement intention from recorded neural activity patterns in sensorimotor areas. Optical imaging, including two-photon (2p) calcium imaging, is an attractive approach for recording large-scale neural activity with high spatial resolution using a minimally invasive technique. However, relating slow two-photon calcium imaging data to fast behaviors is challenging due to the relatively low optical imaging sampling rates. Nevertheless, neural activity recorded with 2p calcium imaging has been used to decode information about stereotyped single-limb movements and to control BMIs. Here, we expand upon prior work by applying deep learning to decode multi-limb movements of running mice from 2p calcium imaging data. Approach. We developed a recurrent encoder-decoder network (LSTM-encdec) in which the output is longer than the input. Main results. LSTM-encdec could accurately decode information about all four limbs (contralateral and ipsilateral front and hind limbs) from calcium imaging data recorded in a single cortical hemisphere. Significance. Our approach provides interpretability measures to validate decoding accuracy and expands the utility of BMIs by establishing the groundwork for control of multiple limbs. Our work contributes to the advancement of neural decoding techniques and the development of next-generation optical BMIs.
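The core design idea, a decoder whose output sequence is longer than its input so that fast behavior can be read out from slow imaging frames, can be sketched with a linear stand-in for the paper's LSTM-encdec; the dimensions and exactly-linear toy behavior below are illustrative assumptions.

```python
import numpy as np

def fit_upsampling_decoder(calcium, behavior, k):
    # Least-squares map from each slow imaging frame (n_frames x n_cells)
    # to k fast behavior samples per frame, so the decoded output is
    # k times longer than the input (a linear stand-in for LSTM-encdec).
    n_frames = calcium.shape[0]
    n_limbs = behavior.shape[1]
    targets = behavior.reshape(n_frames, k * n_limbs)  # group k samples/frame
    W, *_ = np.linalg.lstsq(calcium, targets, rcond=None)
    return W

def decode(calcium, W, k, n_limbs):
    return (calcium @ W).reshape(calcium.shape[0] * k, n_limbs)

# Toy data: 50 imaging frames of 5 cells; 4 limb positions sampled 3x faster.
rng = np.random.default_rng(0)
calcium = rng.normal(size=(50, 5))
true_map = rng.normal(size=(5, 12))
behavior = (calcium @ true_map).reshape(150, 4)  # exactly linear toy behavior
W = fit_upsampling_decoder(calcium, behavior, k=3)
```

Real calcium dynamics are nonlinear and temporally smeared by the indicator, which is what motivates the recurrent encoder-decoder in the paper rather than a frame-wise linear map.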
Affiliation(s)
- Seungbin Park: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47906, United States of America.
- Megan Lipton: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47906, United States of America.
- Maria C Dadarlat: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47906, United States of America.
12
Theofanopoulou C. Tapping into the vocal learning and rhythmic synchronization hypothesis. BMC Neurosci 2024; 25:63. [PMID: 39506690] [PMCID: PMC11539701] [DOI: 10.1186/s12868-024-00863-2]
Abstract
In this article, I present three main points that could benefit the "vocal learning and rhythmic synchronization hypothesis", encompassing neurogenetic mechanisms of gene expression transmission and single motor neuron function, classification of different behavioral motor phenotypes (e.g., spontaneous vs. voluntary), and other evolutionary considerations (i.e., the involvement of reward mechanisms).
Collapse
Affiliation(s)
- Constantina Theofanopoulou
- Rockefeller University, New York, NY, USA.
- Center for Ballet and the Arts, New York University, New York, NY, USA.
- Drexel University, Philadelphia, PA, USA.
| |
Collapse
|
13
|
Lee JY, Lee S, Mishra A, Yan X, McMahan B, Gaisford B, Kobashigawa C, Qu M, Xie C, Kao JC. Non-invasive brain-machine interface control with artificial intelligence copilots. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.10.09.615886. [PMID: 39416032 PMCID: PMC11482823 DOI: 10.1101/2024.10.09.615886] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 10/19/2024]
Abstract
Motor brain-machine interfaces (BMIs) decode neural signals to help people with paralysis move and communicate. Even with important advances in the last two decades, BMIs face key obstacles to clinical viability. Invasive BMIs achieve proficient cursor and robotic arm control but require neurosurgery, posing significant risk to patients. Non-invasive BMIs carry no neurosurgical risk but achieve lower performance, sometimes proving prohibitively frustrating to use, which prevents widespread adoption. We take a step toward breaking this performance-risk tradeoff by building performant non-invasive BMIs. The critical limitation that bounds decoder performance in non-invasive BMIs is their poor neural signal-to-noise ratio. To overcome this, we contribute (1) a novel EEG decoding approach and (2) artificial intelligence (AI) copilots that infer task goals and aid action completion. We demonstrate that with this "AI-BMI," in tandem with a new adaptive decoding approach using a convolutional neural network (CNN) and ReFIT-like Kalman filter (KF), healthy users and a paralyzed participant can autonomously and proficiently control computer cursors and robotic arms. Using an AI copilot improves goal acquisition speed by up to 4.3× in the standard eight-target center-out cursor control task and enables users to control a robotic arm to perform the sequential pick-and-place task, moving 4 randomly placed blocks to 4 randomly chosen locations. As AI copilots improve, this approach may result in clinically viable non-invasive AI-BMIs.
Collapse
Affiliation(s)
- Johannes Y. Lee
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
| | - Sangjoon Lee
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
| | - Abhishek Mishra
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
| | - Xu Yan
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
| | - Brandon McMahan
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
| | - Brent Gaisford
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
| | - Charles Kobashigawa
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
| | - Mike Qu
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
| | - Chang Xie
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
| | - Jonathan C. Kao
- Dept of Electrical and Computer Engineering, University of California, Los Angeles, CA, 90024, United States
- Neurosciences Program, University of California, Los Angeles, CA, 90024, United States
| |
Collapse
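[Editor's note] The entry above pairs a CNN with a ReFIT-like Kalman filter for cursor velocity decoding. A generic Kalman predict/update cycle for a 2D velocity state is sketched below; this is not the paper's implementation, and all matrices are toy values chosen for illustration.

```python
import numpy as np

# Toy linear-Gaussian model: state x = [vx, vy], observation y = H x + noise.
A = np.eye(2)                   # random-walk velocity dynamics
W = np.eye(2) * 0.01            # process noise covariance
H = np.array([[1.0, 0.2],       # hypothetical neural observation matrix
              [0.1, 1.0],       # (3 "channels" observing 2 velocities)
              [0.5, 0.5]])
Q = np.eye(3) * 0.1             # observation noise covariance

def kalman_step(x, P, y):
    """One predict/update cycle of the Kalman filter."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    S = H @ P_pred @ H.T + Q                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Feed noisy observations of a fixed true velocity; the estimate converges.
x, P = np.zeros(2), np.eye(2)
true_v = np.array([1.0, -0.5])
rng = np.random.default_rng(1)
for _ in range(50):
    y = H @ true_v + 0.1 * rng.standard_normal(3)
    x, P = kalman_step(x, P, y)
print(x)  # estimate near true_v
```

ReFIT-style training additionally re-fits the observation model after rotating decoded velocity vectors toward the user's intended target, a step omitted here.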
|
14
|
Hahn MA, Lendner JD, Anwander M, Slama KSJ, Knight RT, Lin JJ, Helfrich RF. A tradeoff between efficiency and robustness in the hippocampal-neocortical memory network during human and rodent sleep. Prog Neurobiol 2024; 242:102672. [PMID: 39369838 DOI: 10.1016/j.pneurobio.2024.102672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2024] [Revised: 08/30/2024] [Accepted: 10/03/2024] [Indexed: 10/08/2024]
Abstract
Sleep constitutes a brain state of disengagement from the external world that supports memory consolidation and restores cognitive resources. The precise mechanisms by which sleep and its varied stages support information processing remain largely unknown. Synaptic scaling models imply that daytime learning accumulates neural information, which is then consolidated and downregulated during sleep. Currently, there is a lack of in-vivo data from humans and rodents that elucidate if, and how, sleep renormalizes information processing capacities. From an information-theoretical perspective, a consolidation process should entail a reduction in neural pattern variability over the course of a night. Here, in a cross-species intracranial study, we identify a tradeoff in the neural population code: information coding efficiency is higher in the neocortex than in the hippocampal archicortex, higher in humans than in rodents, and higher during wakefulness than during sleep. Critically, non-REM sleep selectively reduces information coding efficiency through pattern repetition in the neocortex in both species, indicating a transition to a more robust information coding regime. Conversely, the coding regime in the hippocampus remained consistent from wakefulness to non-REM sleep. These findings suggest that new information could be imprinted to the long-term mnemonic storage in the neocortex through pattern repetition during sleep. Lastly, our results show that task engagement increased coding efficiency, while medically-induced unconsciousness disrupted the population code. In sum, these findings suggest that neural pattern variability could constitute a fundamental principle underlying cognitive engagement and memory formation, while pattern repetition reflects robust coding, possibly underlying the consolidation process.
Collapse
Affiliation(s)
- Michael A Hahn
- Hertie-Institute for Clinical Brain Research, University Medical Center Tübingen, Otfried-Müller Str. 27, Tübingen 72076, Germany.
| | - Janna D Lendner
- Hertie-Institute for Clinical Brain Research, University Medical Center Tübingen, Otfried-Müller Str. 27, Tübingen 72076, Germany; Department of Anesthesiology and Intensive Care Medicine, University Medical Center Tübingen, Hoppe-Seyler-Str 3, Tübingen 72076, Germany
| | - Matthias Anwander
- Hertie-Institute for Clinical Brain Research, University Medical Center Tübingen, Otfried-Müller Str. 27, Tübingen 72076, Germany
| | - Katarina S J Slama
- Department of Psychology and the Helen Wills Neuroscience Institute, UC Berkeley, 130 Barker Hall, Berkeley, CA 94720, USA
| | - Robert T Knight
- Department of Psychology and the Helen Wills Neuroscience Institute, UC Berkeley, 130 Barker Hall, Berkeley, CA 94720, USA
| | - Jack J Lin
- Department of Neurology, UC Davis, 3160 Folsom Blvd, Sacramento, CA 95816, USA; Center for Mind and Brain, UC Davis, 267 Cousteau Pl, Davis, CA 95618, USA
| | - Randolph F Helfrich
- Hertie-Institute for Clinical Brain Research, University Medical Center Tübingen, Otfried-Müller Str. 27, Tübingen 72076, Germany.
| |
Collapse
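[Editor's note] The entry above operationalizes coding efficiency through repetition versus variability of neural population patterns. A common proxy for this is the Shannon entropy of the distribution over discretized population "words": more repetition means lower entropy. A minimal sketch with hypothetical binarized patterns:

```python
import math
from collections import Counter

def pattern_entropy(patterns):
    """Shannon entropy (bits) of the empirical distribution over patterns.
    Lower entropy = more repetition = a more robust, less efficient code."""
    counts = Counter(patterns)
    n = len(patterns)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy binarized population words (hypothetical): a wake-like varied sequence
# versus a sleep-like repetitive sequence.
wake = ["1010", "0110", "1100", "0011", "1010", "0101", "1001", "0110"]
sleep = ["1010", "1010", "1010", "0110", "1010", "1010", "0110", "1010"]

print(pattern_entropy(wake) > pattern_entropy(sleep))  # True
```

The study's actual efficiency measures are more involved, but the direction of the comparison (varied wake-like coding scores higher than repetitive sleep-like coding) follows this logic.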
|
15
|
Franks K, Schaefer A. Olfactory neurons selectively respond to related visual and verbal cues. Nature 2024; 634:547-548. [PMID: 39384914 DOI: 10.1038/d41586-024-03056-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/11/2024]
|
16
|
Regev TI, Casto C, Hosseini EA, Adamek M, Ritaccio AL, Willie JT, Brunner P, Fedorenko E. Neural populations in the language network differ in the size of their temporal receptive windows. Nat Hum Behav 2024; 8:1924-1942. [PMID: 39187713 DOI: 10.1038/s41562-024-01944-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 07/03/2024] [Indexed: 08/28/2024]
Abstract
Although the brain areas that support language comprehension have long been known, our knowledge of the neural computations that these frontal and temporal regions implement remains limited. One important unresolved question concerns functional differences among the neural populations that comprise the language network. Here we leveraged the high spatiotemporal resolution of human intracranial recordings (n = 22) to examine responses to sentences and linguistically degraded conditions. We discovered three response profiles that differ in their temporal dynamics. These profiles appear to reflect different temporal receptive windows, with average windows of about 1, 4 and 6 words, respectively. Neural populations exhibiting these profiles are interleaved across the language network, which suggests that all language regions have direct access to distinct, multiscale representations of linguistic input-a property that may be critical for the efficiency and robustness of language processing.
Collapse
Affiliation(s)
- Tamar I Regev
- Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA.
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
| | - Colton Casto
- Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA.
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Program in Speech and Hearing Bioscience and Technology (SHBT), Harvard University, Boston, MA, USA.
- Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Allston, MA, USA.
| | - Eghbal A Hosseini
- Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Markus Adamek
- National Center for Adaptive Neurotechnologies, Albany, NY, USA
- Department of Neurosurgery, Washington University School of Medicine, St Louis, MO, USA
| | | | - Jon T Willie
- National Center for Adaptive Neurotechnologies, Albany, NY, USA
- Department of Neurosurgery, Washington University School of Medicine, St Louis, MO, USA
| | - Peter Brunner
- National Center for Adaptive Neurotechnologies, Albany, NY, USA
- Department of Neurosurgery, Washington University School of Medicine, St Louis, MO, USA
- Department of Neurology, Albany Medical College, Albany, NY, USA
| | - Evelina Fedorenko
- Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA.
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Program in Speech and Hearing Bioscience and Technology (SHBT), Harvard University, Boston, MA, USA.
| |
Collapse
|
17
|
Mackey CA, Duecker K, Neymotin S, Dura-Bernal S, Haegens S, Barczak A, O'Connell MN, Jones SR, Ding M, Ghuman AS, Schroeder CE. Is there a ubiquitous spectrolaminar motif of local field potential power across primate neocortex? BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.09.18.613490. [PMID: 39345528 PMCID: PMC11429918 DOI: 10.1101/2024.09.18.613490] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 10/01/2024]
Abstract
Mendoza-Halliday, Major et al., 2024 ("The Paper")1 advocates a local field potential (LFP)-based approach to functional identification of cortical layers during "laminar" (simultaneous recordings from all cortical layers) multielectrode recordings in nonhuman primates (NHPs). The Paper describes a "ubiquitous spectrolaminar motif" in the primate neocortex: 1) 75-150 Hz power peaks in the supragranular layers, 2) 10-19 Hz power peaks in the infragranular layers and 3) the crossing point of their laminar power gradients identifies Layer 4 (L4). Identification of L4 is critical in general, but especially for The Paper as the "motif" discovery is couched within a framework whose central hypothesis is that gamma activity originates in the supragranular layers and reflects feedforward activity, while alpha-beta activity originates in the infragranular layers and reflects feedback activity. In an impressive scientific effort, The Paper analyzed laminar data from 14 cortical areas in 2 prior macaque studies and compared them to marmoset, mouse, and human data to further bolster the canonical nature of the motif. Identification of such canonical principles of brain operation is clearly a topic of broad scientific interest. Similarly, a reliable online method for L4 identification would be of broad scientific value for the rapidly increasing use of laminar recordings using numerous evolving technologies. Despite The Paper's strengths, and its potential for scientific impact, a series of concerns that are fundamental to the analysis and interpretation of laminar activity profile data in general, and LFP signals in particular, led us to question its conclusions. We thus evaluated the generality of The Paper's methods and findings using new datasets consisting of stimulus-evoked laminar response profiles from primary and higher-order auditory cortices (A1 and belt cortex), and primary visual cortex (V1).
The rationale for using these areas as a test bed for new methods is that their laminar anatomy and physiology have already been extensively characterized by prior studies, and there is general agreement across laboratories on key matters like L4 identification. Our analyses indicate that The Paper's findings do not generalize well to any of these cortical areas. In particular, we find The Paper's methods for L4 identification to be unreliable. Moreover, both methodological and statistical concerns, outlined below and in the supplement, call into question the stated prevalence of the motif in The Paper's published dataset. After summarizing our findings and related broader concerns, we briefly critique the evidence from biophysical modeling studies cited to support The Paper's conclusions. While our findings are at odds with the proposition of a ubiquitous spectrolaminar motif in the primate neocortex, The Paper has already sparked, and will continue to spark, debate and further experimentation. Hopefully this countervailing presentation will lead to robust collegial efforts to define optimal strategies for applying laminar recording methods in future studies.
Collapse
Affiliation(s)
- C A Mackey
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
| | - K Duecker
- Department of Neuroscience, Brown University, Providence, Rhode Island 02912
| | - S Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Department Psychiatry, NYU Grossman School of Medicine, New York, NY, USA
| | - S Dura-Bernal
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
| | - S Haegens
- Department of Psychiatry, Columbia University, New York, USA
- Division of Systems Neuroscience, New York State Psychiatric Institute, New York, USA
| | - A Barczak
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
| | - M N O'Connell
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Department of Psychiatry, New York University School of Medicine, 550 First Avenue, New York, NY 10016, USA
| | - S R Jones
- Department of Neuroscience, Brown University, Providence, Rhode Island 02912
- Center for Neurorestoration and Neurotechnology, Providence VA Medical Center, Providence, Rhode Island 02908
| | - M Ding
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL
| | - A S Ghuman
- Center for Neuroscience at the University of Pittsburgh, University of Pittsburgh, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
| | - C E Schroeder
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Departments of Psychiatry and Neurology, Columbia University, New York, USA
| |
Collapse
|
18
|
Karthik G, Cao CZ, Demidenko MI, Jahn A, Stacey WC, Wasade VS, Brang D. Auditory cortex encodes lipreading information through spatially distributed activity. Curr Biol 2024; 34:4021-4032.e5. [PMID: 39153482 PMCID: PMC11387126 DOI: 10.1016/j.cub.2024.07.073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2024] [Revised: 04/29/2024] [Accepted: 07/19/2024] [Indexed: 08/19/2024]
Abstract
Watching a speaker's face improves speech perception accuracy. This benefit is enabled, in part, by implicit lipreading abilities present in the general population. While it is established that lipreading can alter the perception of a heard word, it is unknown how these visual signals are represented in the auditory system or how they interact with auditory speech representations. One influential, but untested, hypothesis is that visual speech modulates the population-coded representations of phonetic and phonemic features in the auditory system. This model is largely supported by data showing that silent lipreading evokes activity in the auditory cortex, but these activations could alternatively reflect general effects of arousal or attention or the encoding of non-linguistic features such as visual timing information. This gap limits our understanding of how vision supports speech perception. To test the hypothesis that the auditory system encodes visual speech information, we acquired functional magnetic resonance imaging (fMRI) data from healthy adults and intracranial recordings from electrodes implanted in patients with epilepsy during auditory and visual speech perception tasks. Across both datasets, linear classifiers successfully decoded the identity of silently lipread words using the spatial pattern of auditory cortex responses. Examining the time course of classification using intracranial recordings, lipread words were classified at earlier time points relative to heard words, suggesting a predictive mechanism for facilitating speech. These results support a model in which the auditory system combines the joint neural distributions evoked by heard and lipread words to generate a more precise estimate of what was said.
Collapse
Affiliation(s)
- Ganesan Karthik
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
| | - Cody Zhewei Cao
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
| | | | - Andrew Jahn
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
| | - William C Stacey
- Department of Neurology, University of Michigan, Ann Arbor, MI 48109, USA
| | - Vibhangini S Wasade
- Henry Ford Hospital, Detroit, MI 48202, USA; Department of Neurology, Wayne State University School of Medicine, Detroit, MI 48201, USA
| | - David Brang
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA.
| |
Collapse
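[Editor's note] The entry above decodes the identity of silently lipread words from the spatial pattern of auditory cortex responses using linear classifiers. The logic can be sketched with a nearest-centroid classifier, a simple linear decoder; this is illustrative, not the authors' pipeline, and the data and dimensions are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 40 trials x 20 electrodes, 4 word classes, each class
# defined by a distinct spatial activity pattern plus trial-by-trial noise.
n_per_class, n_elec, n_classes = 10, 20, 4
centroids = rng.standard_normal((n_classes, n_elec)) * 2
X = np.vstack([centroids[k] + rng.standard_normal((n_per_class, n_elec))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

def nearest_centroid_predict(X_train, y_train, X_test):
    """Classify each test pattern by its closest class-mean spatial pattern."""
    means = np.array([X_train[y_train == k].mean(axis=0)
                      for k in np.unique(y_train)])
    dists = np.linalg.norm(X_test[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Split-half: even-indexed trials train, odd-indexed trials test.
train, test = np.arange(0, 40, 2), np.arange(1, 40, 2)
pred = nearest_centroid_predict(X[train], y[train], X[test])
accuracy = (pred == y[test]).mean()
print(accuracy)  # well above the 0.25 chance level for these synthetic data
```

Above-chance cross-validated accuracy of this kind is what licenses the claim that word identity is linearly readable from the spatial response pattern.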
|
19
|
Grinde B. Consciousness makes sense in the light of evolution. Neurosci Biobehav Rev 2024; 164:105824. [PMID: 39047928 DOI: 10.1016/j.neubiorev.2024.105824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2024] [Revised: 07/18/2024] [Accepted: 07/21/2024] [Indexed: 07/27/2024]
Abstract
I believe consciousness is a property of advanced nervous systems, and as such a product of evolution. Thus, to understand consciousness we need to describe the trajectory leading to its evolution and the selective advantages conferred. A deeper understanding of the neurology would be a significant contribution, but other advanced functions, such as hearing and vision, are explained with a comparable lack of detailed knowledge of the brain processes responsible. In this paper, I try to add details and credence to a previously suggested, evolution-based model of consciousness. According to this model, the feature started to evolve in early amniotes (reptiles, birds, and mammals) some 320 million years ago. The reason was the introduction of feelings as a strategy for making behavioral decisions.
Collapse
Affiliation(s)
- Bjørn Grinde
- Professor Emeritus, University of Oslo, Problemveien 11, Oslo 0313, Norway.
| |
Collapse
|
20
|
Cleary DR, Tchoe Y, Bourhis A, Dickey CW, Stedelin B, Ganji M, Lee SH, Lee J, Siler DA, Brown EC, Rosen BQ, Kaestner E, Yang JC, Soper DJ, Han SJ, Paulk AC, Cash SS, Raslan AM, Dayeh SA, Halgren E. Syllable processing is organized in discrete subregions of the human superior temporal gyrus. PLoS Biol 2024; 22:e3002774. [PMID: 39241107 PMCID: PMC11410217 DOI: 10.1371/journal.pbio.3002774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2024] [Revised: 09/18/2024] [Accepted: 07/29/2024] [Indexed: 09/08/2024] Open
Abstract
Modular organization at approximately 1 mm scale could be fundamental to cortical processing, but its presence in human association cortex is unknown. Using custom-built, high-density electrode arrays placed on the cortical surface of 7 patients undergoing awake craniotomy for tumor excision, we investigated receptive speech processing in the left (dominant) human posterior superior temporal gyrus. Responses to consonant-vowel syllables and noise-vocoded controls recorded with 1,024 channel micro-grids at 200 μm pitch demonstrated roughly circular domains approximately 1.7 mm in diameter, with sharp boundaries observed in 128 channel linear arrays at 50 μm pitch, possibly consistent with a columnar organization. Peak latencies to syllables in different modules were bimodally distributed centered at 252 and 386 ms. Adjacent modules were sharply delineated from each other by their distinct time courses and stimulus selectivity. We suggest that receptive language cortex may be organized in discrete processing modules.
Collapse
Affiliation(s)
- Daniel R Cleary
- Department of Neurosurgery, University of California San Diego, La Jolla, California, United States of America
- Department of Neurological Surgery, Oregon Health & Science University, Portland, Oregon, United States of America
| | - Youngbin Tchoe
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, United States of America
| | - Andrew Bourhis
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, United States of America
| | - Charles W Dickey
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California, United States of America
| | - Brittany Stedelin
- Department of Neurological Surgery, Oregon Health & Science University, Portland, Oregon, United States of America
| | - Mehran Ganji
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, United States of America
| | - Sang Heon Lee
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, United States of America
| | - Jihwan Lee
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, United States of America
| | - Dominic A Siler
- Department of Neurological Surgery, Oregon Health & Science University, Portland, Oregon, United States of America
| | - Erik C Brown
- Department of Neurological Surgery, Oregon Health & Science University, Portland, Oregon, United States of America
| | - Burke Q Rosen
- Department of Neuroscience, Washington University School of Medicine, St. Louis, Missouri, United States of America
| | - Erik Kaestner
- Center for Multimodal Imaging and Genetics, University of California San Diego, La Jolla, California, United States of America
| | - Jimmy C Yang
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts, United States of America
| | - Daniel J Soper
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, United States of America
| | - Seunggu Jude Han
- Department of Neurological Surgery, Oregon Health & Science University, Portland, Oregon, United States of America
| | - Angelique C Paulk
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Sydney S Cash
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Ahmed M Raslan
- Department of Neurological Surgery, Oregon Health & Science University, Portland, Oregon, United States of America
| | - Shadi A Dayeh
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, United States of America
- Materials Science and Engineering Program, University of California San Diego, La Jolla, California, United States of America
- Department of Bioengineering, University of California San Diego, La Jolla, California, United States of America
| | - Eric Halgren
- Department of Radiology, University of California San Diego, La Jolla, California, United States of America
- Department of Neuroscience, University of California San Diego, La Jolla, California, United States of America
| |
Collapse
|
21
|
Wu Y, Chang T, Chen S, Jiang N, Mao Q, Yang Y, He J. Exploration of Brain Tumor Localization with High-density Electrocorticography. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2024; 2024:1-4. [PMID: 40039891 DOI: 10.1109/embc53108.2024.10781554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2025]
Abstract
The fast and accurate detection of brain tumor locations has always been a focus of research in the field of neuroscience. The main emphasis of this paper is to propose a potential intraoperative fast glioma localization technique. This study exploits the rapid, high-temporal-resolution nature of electrocorticography to investigate the electroactive properties of the glioma region from a functional variability perspective and uses their differences to detect the location of the glioma. Analysis of two minutes of data shows that the structural similarity and cosine similarity of tumor localization reach 0.8277 and 0.9460, respectively. Additionally, this method achieves stable average performance comparable to that on the full-length data using only 1 s of data, indicating the potential of this technology for real-time automatic glioma localization and its expected application in clinical intraoperative settings.
Collapse
|
22
|
Lee AT, Chang EF, Paredes MF, Nowakowski TJ. Large-scale neurophysiology and single-cell profiling in human neuroscience. Nature 2024; 630:587-595. [PMID: 38898291 DOI: 10.1038/s41586-024-07405-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Accepted: 04/09/2024] [Indexed: 06/21/2024]
Abstract
Advances in large-scale single-unit human neurophysiology, single-cell RNA sequencing, spatial transcriptomics and long-term ex vivo tissue culture of surgically resected human brain tissue have provided an unprecedented opportunity to study human neuroscience. In this Perspective, we describe the development of these paradigms, including Neuropixels and recent brain-cell atlas efforts, and discuss how their convergence will further investigations into the cellular underpinnings of network-level activity in the human brain. Specifically, we introduce a workflow in which functionally mapped samples of human brain tissue resected during awake brain surgery can be cultured ex vivo for multi-modal cellular and functional profiling. We then explore how advances in human neuroscience will affect clinical practice, and conclude by discussing societal and ethical implications to consider. Potential findings from the field of human neuroscience will be vast, ranging from insights into human neurodiversity and evolution to providing cell-type-specific access to study and manipulate diseased circuits in pathology. This Perspective aims to provide a unifying framework for the field of human neuroscience as we welcome an exciting era for understanding the functional cytoarchitecture of the human brain.
Collapse
Affiliation(s)
- Anthony T Lee
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
| | - Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
| | - Mercedes F Paredes
- Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
| | - Tomasz J Nowakowski
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA.
- Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA.
- Department of Anatomy, University of California, San Francisco, San Francisco, CA, USA.
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, San Francisco, CA, USA.
- Eli and Edythe Broad Center for Regeneration Medicine and Stem Cell Research, University of California, San Francisco, San Francisco, CA, USA.
| |
Collapse
|
23
|
Fedorenko E, Piantadosi ST, Gibson EAF. Language is primarily a tool for communication rather than thought. Nature 2024; 630:575-586. [PMID: 38898296 DOI: 10.1038/s41586-024-07522-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Accepted: 05/03/2024] [Indexed: 06/21/2024]
Abstract
Language is a defining characteristic of our species, but the function, or functions, that it serves has been debated for centuries. Here we bring recent evidence from neuroscience and allied disciplines to argue that in modern humans, language is a tool for communication, contrary to a prominent view that we use language for thinking. We begin by introducing the brain network that supports linguistic ability in humans. We then review evidence for a double dissociation between language and thought, and discuss several properties of language that suggest that it is optimized for communication. We conclude that although the emergence of language has unquestionably transformed human culture, language does not appear to be a prerequisite for complex thought, including symbolic thought. Instead, language is a powerful tool for the transmission of cultural knowledge; it plausibly co-evolved with our thinking and reasoning capacities, and only reflects, rather than gives rise to, the signature sophistication of human cognition.
Affiliation(s)
- Evelina Fedorenko: Massachusetts Institute of Technology, Cambridge, MA, USA; Speech and Hearing in Bioscience and Technology Program at Harvard University, Boston, MA, USA
24
Kurteff GL, Field AM, Asghar S, Tyler-Kabara EC, Clarke D, Weiner HL, Anderson AE, Watrous AJ, Buchanan RJ, Modur PN, Hamilton LS. Processing of auditory feedback in perisylvian and insular cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.05.14.593257. [PMID: 38798574 PMCID: PMC11118286 DOI: 10.1101/2024.05.14.593257] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2024]
Abstract
When we speak, we not only make movements with our mouth, lips, and tongue, but we also hear the sound of our own voice. Thus, speech production in the brain involves not only controlling the movements we make, but also auditory and sensory feedback. Auditory responses are typically suppressed during speech production compared to perception, but how this manifests across space and time is unclear. Here we recorded intracranial EEG in seventeen pediatric, adolescent, and adult patients with medication-resistant epilepsy who performed a reading/listening task to investigate how auditory responses are modulated during speech production. We identified onset and sustained responses to speech in bilateral auditory cortex, with a selective suppression of onset responses during speech production. Onset responses provide a temporal landmark during speech perception that is redundant with forward prediction during speech production. Phonological feature tuning in these "onset suppression" electrodes remained stable between perception and production. Notably, the posterior insula responded at sentence onset for both perception and production, suggesting a role in multisensory integration during feedback control.
Affiliation(s)
- Garret Lynn Kurteff: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Alyssa M. Field: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Saman Asghar: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Elizabeth C. Tyler-Kabara: Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA; Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Dave Clarke: Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA; Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA; Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Howard L. Weiner: Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Anne E. Anderson: Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Andrew J. Watrous: Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Robert J. Buchanan: Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Pradeep N. Modur: Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Liberty S. Hamilton (lead contact): Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA; Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
25
Fedorenko E, Ivanova AA, Regev TI. The language network as a natural kind within the broader landscape of the human brain. Nat Rev Neurosci 2024; 25:289-312. [PMID: 38609551 DOI: 10.1038/s41583-024-00802-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/23/2024] [Indexed: 04/14/2024]
Abstract
Language behaviour is complex, but neuroscientific evidence disentangles it into distinct components supported by dedicated brain areas or networks. In this Review, we describe the 'core' language network, which includes left-hemisphere frontal and temporal areas, and show that it is strongly interconnected, independent of input and output modalities, causally important for language and language-selective. We discuss evidence that this language network plausibly stores language knowledge and supports core linguistic computations related to accessing words and constructions from memory and combining them to interpret (decode) or generate (encode) linguistic messages. We emphasize that the language network works closely with, but is distinct from, both lower-level - perceptual and motor - mechanisms and higher-level systems of knowledge and reasoning. The perceptual and motor mechanisms process linguistic signals, but, in contrast to the language network, are sensitive only to these signals' surface properties, not their meanings; the systems of knowledge and reasoning (such as the system that supports social reasoning) are sometimes engaged during language use but are not language-selective. This Review lays a foundation both for in-depth investigations of these different components of the language processing pipeline and for probing inter-component interactions.
Affiliation(s)
- Evelina Fedorenko: Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; The Program in Speech and Hearing in Bioscience and Technology, Harvard University, Cambridge, MA, USA
- Anna A Ivanova: School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Tamar I Regev: Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
26
Raghavan VS, O’Sullivan J, Herrero J, Bickel S, Mehta AD, Mesgarani N. Improving auditory attention decoding by classifying intracranial responses to glimpsed and masked acoustic events. IMAGING NEUROSCIENCE (CAMBRIDGE, MASS.) 2024; 2:10.1162/imag_a_00148. [PMID: 39867597 PMCID: PMC11759098 DOI: 10.1162/imag_a_00148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/28/2025]
Abstract
Listeners with hearing loss have trouble following a conversation in multitalker environments. While modern hearing aids can generally amplify speech, these devices are unable to tune into a target speaker without first knowing to which speaker a user aims to attend. Brain-controlled hearing aids have been proposed using auditory attention decoding (AAD) methods, but current methods use the same model to compare the speech stimulus and neural response, regardless of the dynamic overlap between talkers which is known to influence neural encoding. Here, we propose a novel framework that directly classifies event-related potentials (ERPs) evoked by glimpsed and masked acoustic events to determine whether the source of the event was attended. We present a system that identifies auditory events using the local maxima in the envelope rate of change, assesses the temporal masking of auditory events relative to competing speakers, and utilizes masking-specific ERP classifiers to determine if the source of the event was attended. Using intracranial electrophysiological recordings, we showed that high gamma ERPs from recording sites in auditory cortex can effectively decode the attention of subjects. This method of AAD provides higher accuracy, shorter switch times, and more stable decoding results compared with traditional correlational methods, permitting the quick and accurate detection of changes in a listener's attentional focus. This framework also holds unique potential for detecting instances of divided attention and inattention. Overall, we extend the scope of AAD algorithms by introducing the first linear, direct-classification method for determining a listener's attentional focus that leverages the latest research in multitalker speech perception. This work represents another step toward informing the development of effective and intuitive brain-controlled hearing assistive devices.
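The first stage of the pipeline described above, identifying auditory events as local maxima in the rate of change of the speech envelope, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the function name `detect_envelope_events`, the Hilbert-transform envelope, and the minimum-separation parameter are all choices made here for the sketch.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def detect_envelope_events(audio, fs, min_separation_s=0.05):
    """Return sample indices of candidate acoustic events,
    taken as local maxima in the envelope's rate of change."""
    # Amplitude envelope via the analytic signal.
    envelope = np.abs(hilbert(audio))
    # Rate of change of the envelope; keep positive slopes (onsets).
    rate = np.diff(envelope, prepend=envelope[0]) * fs
    rate = np.clip(rate, 0.0, None)
    # Local maxima, at least min_separation_s apart.
    peaks, _ = find_peaks(rate, distance=max(1, int(min_separation_s * fs)))
    return peaks

# Toy check: a 1 s signal whose energy switches on at 0.5 s
# should yield an event near sample 500.
fs = 1000
t = np.arange(fs) / fs
audio = np.where(t >= 0.5, np.sin(2 * np.pi * 50 * t), 0.0)
events = detect_envelope_events(audio, fs)
```

In the full framework, each detected event would then be labeled glimpsed or masked relative to the competing talker's envelope before being passed to the masking-specific ERP classifiers.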
Affiliation(s)
- Vinay S. Raghavan: Department of Electrical Engineering, Columbia University, New York, NY, United States; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- James O’Sullivan: Department of Electrical Engineering, Columbia University, New York, NY, United States; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Jose Herrero: The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, United States; Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, United States
- Stephan Bickel: The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, United States; Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, United States; Department of Neurology, Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, United States
- Ashesh D. Mehta: The Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, United States; Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, United States
- Nima Mesgarani: Department of Electrical Engineering, Columbia University, New York, NY, United States; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
27
Free T. Recording the brain in vivo: emerging technologies for the exploration of mental health conditions. Biotechniques 2024; 76:121-124. [PMID: 38482795 DOI: 10.2144/btn-2024-0013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/28/2024] Open
Abstract
Mounting interest in mental health conditions over the last two decades has been coupled with the increasing sophistication of techniques to study the brain in vivo.
Affiliation(s)
- Tristan Free: Senior Digital Editor, Taylor & Francis, Unitec House, 2 Albert Place, London, N3 1QB, UK
28
Boubenec Y. How speech is produced and perceived in the human cortex. Nature 2024; 626:485-486. [PMID: 38297041 DOI: 10.1038/d41586-024-00078-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2024]
29
Naddaf M. Mind-reading devices are revealing the brain's secrets. Nature 2024; 626:706-708. [PMID: 38378830 DOI: 10.1038/d41586-024-00481-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/22/2024]
30
Cleary DR, Tchoe Y, Bourhis A, Dickey CW, Stedelin B, Ganji M, Lee SH, Lee J, Siler DA, Brown EC, Rosen BQ, Kaestner E, Yang JC, Soper DJ, Han SJ, Paulk AC, Cash SS, Raslan AMT, Dayeh SA, Halgren E. Modular Phoneme Processing in Human Superior Temporal Gyrus. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.17.576120. [PMID: 38293030 PMCID: PMC10827201 DOI: 10.1101/2024.01.17.576120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2024]
Abstract
Modular organization is fundamental to cortical processing, but its presence in human association cortex is unknown. We characterized phoneme processing with 128-1024 channel micro-arrays at 50-200 µm pitch on the superior temporal gyrus of 7 patients. High gamma responses were highly correlated within ~1.7 mm diameter modules, sharply delineated from adjacent modules with distinct time-courses and phoneme-selectivity. We suggest that receptive language cortex may be organized in discrete processing modules.
Affiliation(s)
- Daniel R Cleary: Department of Neurosurgery, University of California, San Diego, La Jolla, CA 92093, USA; Department of Neurological Surgery, Oregon Health & Science University, Portland, OR 97239, USA
- Youngbin Tchoe: Departments of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Andrew Bourhis: Departments of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Charles W Dickey: Department of Neurosurgery, University of California, San Diego, La Jolla, CA 92093, USA; Department of Neurological Surgery, Oregon Health & Science University, Portland, OR 97239, USA; Departments of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA; Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA; Center for Multimodal Imaging and Genetics, University of California San Diego, La Jolla, CA 92093, USA; Department of Neurology and Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA; Materials Science and Engineering Program, University of California San Diego, La Jolla, CA 92093, USA; Departments of Bioengineering, University of California San Diego, La Jolla, CA 92093, USA; Departments of Radiology and Neuroscience, University of California San Diego, La Jolla, CA 92093, USA
- Brittany Stedelin: Department of Neurological Surgery, Oregon Health & Science University, Portland, OR 97239, USA
- Mehran Ganji: Departments of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Sang Hoen Lee: Departments of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Jihwan Lee: Departments of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA
- Dominic A Siler: Department of Neurological Surgery, Oregon Health & Science University, Portland, OR 97239, USA
- Erik C Brown: Department of Neurological Surgery, Oregon Health & Science University, Portland, OR 97239, USA
- Burke Q Rosen: Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, USA
- Erik Kaestner: Center for Multimodal Imaging and Genetics, University of California San Diego, La Jolla, CA 92093, USA
- Jimmy C Yang: Department of Neurology and Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Daniel J Soper: Department of Neurology and Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Seunggu Jude Han: Department of Neurological Surgery, Oregon Health & Science University, Portland, OR 97239, USA
- Angelique C Paulk: Department of Neurology and Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Sydney S Cash: Department of Neurology and Neurosurgery, Massachusetts General Hospital, Boston, MA 02114, USA
- Ahmed M T Raslan: Department of Neurological Surgery, Oregon Health & Science University, Portland, OR 97239, USA
- Shadi A Dayeh: Departments of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA 92093, USA; Materials Science and Engineering Program, University of California San Diego, La Jolla, CA 92093, USA; Departments of Bioengineering, University of California San Diego, La Jolla, CA 92093, USA
- Eric Halgren: Departments of Radiology and Neuroscience, University of California San Diego, La Jolla, CA 92093, USA