1
Xie R, Huang H, Cai Q, Lu J, Chen T, Xie K, Chen J, Chen C. Hippocampus-sparing volume-modulated arc therapy in patients with World Health Organization grade II glioma: a feasibility study. Front Oncol 2025; 14:1445558. PMID: 39902134; PMCID: PMC11788287; DOI: 10.3389/fonc.2024.1445558.
Abstract
Background: Radiotherapy can improve survival in patients with glioma; at the same time, radiation-induced cognitive impairment has come to the forefront, with the radiosensitive hippocampus implicated as the organ at fault. This study aimed to assess the feasibility of hippocampus-sparing volumetric-modulated arc therapy (HS VMAT) in patients with World Health Organization (WHO) grade II glioma.
Methods: HS VMAT plans and non-hippocampus-sparing volumetric-modulated arc therapy (NHS VMAT) plans were generated from the computed tomography (CT) datasets of 10 patients who underwent postoperative radiotherapy. The dose-volume histogram (DVH), homogeneity index (HI), conformity index (CI), and doses delivered to the hippocampus and other organs at risk (OARs) were analyzed.
Results: No significant differences in HI or CI were observed between the two plans. HS VMAT plans protected the OARs equally well and even lowered the doses to the brainstem (35.56 vs. 41.74 Gy, p = 0.017) and spinal cord (1.34 vs. 1.43 Gy, p = 0.006). Notably, HS VMAT plans markedly reduced doses to both the ipsilateral and contralateral hippocampus, demonstrating their efficacy in hippocampal dose reduction.
Conclusion: HS VMAT plans can efficiently lower the dose delivered to the hippocampus and may, to some extent, lessen the risk of cognitive damage. These encouraging results need to be validated in clinical trials to confirm the benefit of HS VMAT plans for preserving cognitive function in patients with glioma.
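For readers unfamiliar with the plan-quality indices named in the abstract, a minimal sketch of two common formulations follows (the ICRU 83 homogeneity index and the Paddick conformity index). The paper's own definitions may differ, and all numeric values below are hypothetical.

```python
# Illustrative only: common formulations of the plan-quality indices named in
# the abstract. The paper's exact definitions may differ.

def homogeneity_index(d2, d98, d50):
    """ICRU 83 homogeneity index: (D2% - D98%) / D50%.
    Lower values indicate a more homogeneous dose within the target."""
    return (d2 - d98) / d50

def paddick_ci(tv, piv, tv_piv):
    """Paddick conformity index.
    tv:     target volume (cc)
    piv:    prescription isodose volume (cc)
    tv_piv: target volume covered by the prescription isodose (cc)
    The ideal value is 1.0."""
    return tv_piv ** 2 / (tv * piv)

# Hypothetical example values (doses in Gy, volumes in cc):
print(homogeneity_index(d2=52.0, d98=48.0, d50=50.0))  # prints 0.08
print(paddick_ci(tv=100.0, piv=110.0, tv_piv=95.0))    # ~0.82
```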
Affiliation(s)
- Renxian Xie
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
- Shantou University Medical College, Shantou, China
- Hongxin Huang
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
- Shantou University Medical College, Shantou, China
- Qingxin Cai
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
- Jiayang Lu
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
- Tong Chen
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
- Shantou University Medical College, Shantou, China
- Keyan Xie
- Shantou University Medical College, Shantou, China
- Jianzhou Chen
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
- Chuangzhen Chen
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
2
Clark AM, Huynh A, Poletti M. Oculomotor Contributions to Foveal Crowding. J Neurosci 2024; 44:e0594242024. PMID: 39455258; PMCID: PMC11604144; DOI: 10.1523/jneurosci.0594-24.2024.
Abstract
Crowding, the phenomenon of impaired visual discrimination due to nearby objects, has been extensively studied and linked to cortical mechanisms. Traditionally, crowding has been studied extrafoveally; its underlying mechanisms in the central fovea, where acuity is highest, remain debated. While low-level oculomotor factors are not thought to play a role in crowding, this study shows that they are key in defining foveal crowding. Here, we investigate the influence of fixational behavior on foveal crowding and provide a comprehensive assessment of the magnitude and extent of this phenomenon (N = 13 human participants, four males). Leveraging a unique blend of tools for high-precision eyetracking and retinal stabilization, we show that removing the retinal motion introduced by oculomotor behavior diminishes the negative effects of crowding. Ultimately, these results indicate that ocular drift contributes to foveal crowding, causing the same pooling region to be stimulated by both the target and nearby objects over the course of time, not just in space. This temporal aspect is peculiar to crowding at this scale and indicates that the mechanisms contributing to foveal and extrafoveal crowding differ.
Affiliation(s)
- Ashley M Clark
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627
- Center for Visual Science, University of Rochester, Rochester, New York 14642
- Aaron Huynh
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627
- Department of Neuroscience, University of Rochester, Rochester, New York 14642
- Martina Poletti
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627
- Center for Visual Science, University of Rochester, Rochester, New York 14642
- Department of Neuroscience, University of Rochester, Rochester, New York 14642
3
Moran C, Johnson PA, Landau AN, Hogendoorn H. Decoding Remapped Spatial Information in the Peri-Saccadic Period. J Neurosci 2024; 44:e2134232024. PMID: 38871460; PMCID: PMC11270511; DOI: 10.1523/jneurosci.2134-23.2024.
Abstract
It has been suggested that, prior to a saccade, visual neurons predictively respond to stimuli that will fall in their receptive fields after completion of the saccade. This saccadic remapping process is thought to compensate for the shift of the visual world across the retina caused by eye movements. To map the timing of this predictive process in the brain, we recorded neural activity using electroencephalography during a saccade task. Human participants (male and female) made saccades between two fixation points while covertly attending to oriented gratings briefly presented at various locations on the screen. Data recorded during trials in which participants maintained fixation were used to train classifiers on stimuli in different positions. Subsequently, data collected during saccade trials were used to test for the presence of remapped stimulus information at the post-saccadic retinotopic location in the peri-saccadic period, providing unique insight into when remapped information becomes available. We found that the stimulus could be decoded at the remapped location ∼180 ms post-stimulus onset, but only when the stimulus was presented 100-200 ms before saccade onset. Within this range, we found that the timing of remapping was dictated by stimulus onset rather than saccade onset. We conclude that presenting the stimulus immediately before the saccade allows for optimal integration of the corollary discharge signal with the incoming peripheral visual information, resulting in a remapping of activation to the relevant post-saccadic retinotopic neurons.
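The train-on-fixation, test-on-saccade logic described in the abstract can be sketched with synthetic data. This is a toy nearest-centroid decoder, not the authors' EEG pipeline; all names, channel counts, and noise levels are illustrative assumptions.

```python
import random

random.seed(0)

def make_trial(pos, n_ch=8, noise=0.1):
    # Toy "EEG pattern": one channel elevated for the stimulated position.
    return [(1.0 if ch == pos else 0.0) + random.gauss(0, noise)
            for ch in range(n_ch)]

def train_centroids(trials_by_pos):
    # Mean pattern per stimulus position (the trained "classifier").
    return {pos: [sum(col) / len(xs) for col in zip(*xs)]
            for pos, xs in trials_by_pos.items()}

def classify(x, centroids):
    # Nearest centroid by squared Euclidean distance.
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda p: dist(x, centroids[p]))

# "Fixation" trials: 20 per stimulus position, used only for training.
fixation = {pos: [make_trial(pos) for _ in range(20)] for pos in range(4)}
centroids = train_centroids(fixation)

# A "saccade" trial whose activity has remapped to position 2 is then
# classified against the fixation-trained centroids:
remapped_trial = make_trial(2)
print(classify(remapped_trial, centroids))  # prints 2
```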
Affiliation(s)
- Caoimhe Moran
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Philippa A Johnson
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- Cognitive Psychology Unit, Institute of Psychology & Leiden Institute for Brain and Cognition, Leiden University, Leiden 2333 AK, The Netherlands
- Ayelet N Landau
- Department of Psychology, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Department of Cognitive and Brain Sciences, Hebrew University of Jerusalem, Mount Scopus, Jerusalem 9190501, Israel
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, The University of Melbourne, Parkville, Melbourne, Victoria 3052, Australia
- School of Psychology and Counselling, Queensland University of Technology, Kelvin Grove, Queensland 4059, Australia
4
Kreyenmeier P, Kumbhani R, Movshon JA, Spering M. Shared Mechanisms Drive Ocular Following and Motion Perception. eNeuro 2024; 11:ENEURO.0204-24.2024. PMID: 38834301; PMCID: PMC11208981; DOI: 10.1523/eneuro.0204-24.2024.
Abstract
How features of complex visual patterns are combined to drive perception and eye movements is not well understood. Here we simultaneously assessed human observers' perceptual direction estimates and ocular following responses (OFR) evoked by moving plaids made from two summed gratings with varying contrast ratios. When the gratings were of equal contrast, observers' eye movements and perceptual reports followed the motion of the plaid pattern. However, when the contrasts were unequal, eye movements and reports during early phases of the OFR were biased toward the direction of the high-contrast grating component; during later phases, both responses followed the plaid pattern direction. The shift from component- to pattern-driven behavior resembles the shift in tuning seen under similar conditions in neuronal responses recorded from monkey MT. Moreover, for some conditions, pattern tracking and perceptual reports were correlated on a trial-by-trial basis. The OFR may therefore provide a precise behavioral readout of the dynamics of neural motion integration for complex visual patterns.
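The "plaid pattern direction" referred to above is conventionally taken as the intersection-of-constraints (IOC) solution: each grating constrains only the velocity component along its own normal, and the pattern velocity is the unique vector satisfying both constraints. The following is our own minimal sketch of that computation, not code from the paper.

```python
import math

def ioc_velocity(theta1, s1, theta2, s2):
    """Intersection-of-constraints pattern velocity for a two-grating plaid.
    theta_i: direction of grating i's normal (radians); s_i: speed along it.
    Solves v . n_i = s_i for both gratings (a 2x2 linear system)."""
    a, b = math.cos(theta1), math.sin(theta1)
    c, d = math.cos(theta2), math.sin(theta2)
    det = a * d - b * c  # nonzero as long as the gratings are not parallel
    vx = (s1 * d - s2 * b) / det
    vy = (a * s2 - c * s1) / det
    return vx, vy

# Two gratings drifting 30 degrees either side of rightward, equal speeds:
vx, vy = ioc_velocity(math.radians(30), 1.0, math.radians(-30), 1.0)
# The pattern moves straight rightward (vy = 0) and faster than either
# component: vx = 1 / cos(30 deg), about 1.155.
```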
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Romesh Kumbhani
- Center for Neural Science, New York University, New York, New York 10003
- J Anthony Movshon
- Center for Neural Science, New York University, New York, New York 10003
- Department of Psychology, New York University, New York, New York 10003
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia V5Z 3N9, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia V6T 1Z3, Canada
5
Liao C, Sawayama M, Xiao B. Probing the Link Between Vision and Language in Material Perception Using Psychophysics and Unsupervised Learning. bioRxiv 2024:2024.01.25.577219. PMID: 38328102; PMCID: PMC10849714; DOI: 10.1101/2024.01.25.577219.
Abstract
We can visually discriminate and recognize a wide range of materials. Meanwhile, we use language to express our subjective understanding of visual input and to communicate relevant information about materials. Here, we investigate the relationship between visual judgment and language expression in material perception to understand how visual features relate to semantic representations. We use deep generative networks to construct an expandable image space that systematically creates materials of well-defined and ambiguous categories. From this space, we sampled diverse stimuli and compared the representations of materials from two behavioral tasks: visual material similarity judgments and free-form verbal descriptions. Our findings reveal a moderate but significant correlation between vision and language at the categorical level. However, analyzing the representations with an unsupervised alignment method, we discover structural differences that arise at the image-to-image level, especially among materials morphed between known categories. Moreover, visual judgments exhibit more individual variability than verbal descriptions. Our results show that while verbal descriptions capture material qualities at a coarse level, they may not fully convey the visual features that characterize a material's optical properties. Analyzing the image representations of materials obtained from various data-rich pre-trained deep neural networks, we find that the similarity structures of human visual judgments align more closely with those of a text-guided visual-semantic model than with purely vision-based models. Our findings suggest that while semantic representations facilitate material categorization, non-semantic visual features also play a significant role in discriminating materials at a finer level. This work illustrates the need to consider the vision-language relationship when building a comprehensive model of material perception. Moreover, we propose a novel framework for quantitatively evaluating the alignment and misalignment between representations from different modalities, leveraging information from human behavior and computational models.
Affiliation(s)
- Chenxi Liao
- American University, Department of Neuroscience, Washington, DC 20016, USA
- Masataka Sawayama
- The University of Tokyo, Graduate School of Information Science and Technology, Tokyo, 113-0033, Japan
- Bei Xiao
- American University, Department of Computer Science, Washington, DC 20016, USA
6
Tünçok E, Carrasco M, Winawer J. Spatial attention alters visual cortical representation during target anticipation. bioRxiv 2024:2024.03.02.583127. PMID: 38496524; PMCID: PMC10942396; DOI: 10.1101/2024.03.02.583127.
Abstract
Attention enables us to efficiently and flexibly interact with the environment by prioritizing some image features in preparation for responding to a stimulus. Using a concurrent psychophysics-fMRI experiment, we investigated how covert spatial attention affects responses in human visual cortex prior to target onset, and how it affects subsequent behavioral performance. Performance improved at cued locations and worsened at uncued locations, relative to distributed attention, demonstrating a selective tradeoff in processing. Pre-target BOLD responses in cortical visual field maps changed in two ways: First, there was a stimulus-independent baseline shift, positive in map locations near the cued location and negative elsewhere, paralleling the behavioral results. Second, population receptive field centers shifted toward the attended location. Both effects increased in higher visual areas. Together, the results show that spatial attention has large effects on visual cortex prior to target appearance, altering neural response properties throughout and across multiple visual field maps.
7
Lee GM, Rodríguez-Deliz CL, Bushnell BN, Majaj NJ, Movshon JA, Kiorpes L. Developmentally stable representations of naturalistic image structure in macaque visual cortex. bioRxiv 2024:2024.02.24.581889. PMID: 38463955; PMCID: PMC10925106; DOI: 10.1101/2024.02.24.581889.
Abstract
We studied visual development in macaque monkeys using texture stimuli, matched in local spectral content but varying in "naturalistic" structure. In adult monkeys, naturalistic textures preferentially drive neurons in areas V2 and V4, but not V1. We paired behavioral measurements of naturalness sensitivity with separately-obtained neuronal population recordings from neurons in areas V1, V2, V4, and inferotemporal cortex (IT). We made behavioral measurements from 16 weeks of age and physiological measurements as early as 20 weeks, and continued through 56 weeks. Behavioral sensitivity reached half of maximum at roughly 25 weeks of age. Neural sensitivities remained stable from the earliest ages tested. As in adults, neural sensitivity to naturalistic structure increased from V1 to V2 to V4. While sensitivities in V2 and IT were similar, the dimensionality of the IT representation was more similar to V4's than to V2's.
Affiliation(s)
- Gerick M. Lee
- Center for Neural Science, New York University, New York, NY 10003, USA
- Najib J. Majaj
- Center for Neural Science, New York University, New York, NY 10003, USA
- Lynne Kiorpes
- Center for Neural Science, New York University, New York, NY 10003, USA
8
Samonds JM, Szinte M, Barr C, Montagnini A, Masson GS, Priebe NJ. Mammals Achieve Common Neural Coverage of Visual Scenes Using Distinct Sampling Behaviors. eNeuro 2024; 11:ENEURO.0287-23.2023. PMID: 38164577; PMCID: PMC10860624; DOI: 10.1523/eneuro.0287-23.2023.
Abstract
Most vertebrates use head and eye movements to quickly change gaze orientation and sample different portions of the environment with periods of stable fixation. Visual information must be integrated across fixations to construct a complete perspective of the visual environment. In concert with this sampling strategy, neurons adapt to unchanging input to conserve energy and ensure that only novel information from each fixation is processed. We demonstrate how adaptation recovery times and saccade properties interact and thus shape spatiotemporal tradeoffs observed in the motor and visual systems of mice, cats, marmosets, macaques, and humans. These tradeoffs predict that in order to achieve similar visual coverage over time, animals with smaller receptive field sizes require faster saccade rates. Indeed, we find comparable sampling of the visual environment by neuronal populations across mammals when integrating measurements of saccadic behavior with receptive field sizes and V1 neuronal density. We propose that these mammals share a common statistically driven strategy of maintaining coverage of their visual environment over time calibrated to their respective visual system characteristics.
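The tradeoff claimed above (animals with smaller receptive fields need faster saccade rates to achieve similar coverage) can be illustrated with a toy scaling argument. This is our own sketch with made-up numbers, not the authors' model: if each fixation samples the scene at a grain set by receptive-field (RF) area, then matching coverage per unit time requires the saccade rate to scale inversely with RF area.

```python
# Toy illustration only; the reference values below are invented.

def required_saccade_rate(rf_diameter_deg, reference_rate=0.3, reference_rf_deg=10.0):
    """Saccade rate (saccades/s) needed to match the coverage per unit time
    of a hypothetical reference animal with RF diameter `reference_rf_deg`
    (degrees) saccading at `reference_rate`. Coverage per fixation is taken
    to scale with RF area, i.e. with diameter squared."""
    return reference_rate * (reference_rf_deg / rf_diameter_deg) ** 2

for rf in (10.0, 5.0, 2.0):  # smaller RFs demand faster sampling
    print(f"RF {rf:>4} deg -> {required_saccade_rate(rf):.2f} saccades/s")
```

Halving the RF diameter quadruples the required rate, which captures the direction (though not the measured magnitudes) of the cross-species comparison in the abstract.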
Affiliation(s)
- Jason M Samonds
- Center for Learning and Memory and the Institute for Neuroscience, The University of Texas at Austin, Austin, Texas 78712
- Martin Szinte
- Institut de Neurosciences de la Timone (UMR 7289), Centre National de la Recherche Scientifique and Aix-Marseille Université, 13385 Marseille, France
- Carrie Barr
- Center for Learning and Memory and the Institute for Neuroscience, The University of Texas at Austin, Austin, Texas 78712
- Anna Montagnini
- Institut de Neurosciences de la Timone (UMR 7289), Centre National de la Recherche Scientifique and Aix-Marseille Université, 13385 Marseille, France
- Guillaume S Masson
- Institut de Neurosciences de la Timone (UMR 7289), Centre National de la Recherche Scientifique and Aix-Marseille Université, 13385 Marseille, France
- Nicholas J Priebe
- Center for Learning and Memory and the Institute for Neuroscience, The University of Texas at Austin, Austin, Texas 78712
9
Barthélemy FV, Fleuriet J, Perrinet LU, Masson GS. A behavioral receptive field for ocular following in monkeys: Spatial summation and its spatial frequency tuning. eNeuro 2022; 9:ENEURO.0374-21.2022. PMID: 35760525; PMCID: PMC9275147; DOI: 10.1523/eneuro.0374-21.2022.
Abstract
In human and non-human primates, reflexive tracking eye movements can be initiated at very short latency in response to a rapid shift of the image. Previous studies in humans have shown that only part of the central visual field is optimal for driving ocular following responses. Here, we investigated spatial summation of motion information across a wide range of spatial frequencies and speeds of drifting gratings by recording short-latency ocular following responses in macaque monkeys. We show that the optimal stimulus size for driving ocular responses covers a small (<20° diameter), central part of the visual field that shrinks with higher spatial frequency. This signature of linear motion integration remains invariant with speed and temporal frequency. For low and medium spatial frequencies, we found a strong suppressive influence from surround motion, evidenced by a decrease in response amplitude for stimulus sizes larger than optimal. Such suppression disappears with gratings at high frequencies. The contribution of peripheral motion was investigated by presenting grating annuli of increasing eccentricity. We observed an exponential decay of response amplitude with grating eccentricity, the decrease being faster for higher spatial frequencies. Weaker surround suppression can thus be explained by sparser eccentric inputs at high frequencies. A Difference-of-Gaussians model best renders the antagonistic contributions of peripheral and central motions. Its best-fit parameters coincide with several well-known spatial properties of area MT neuronal populations. These results describe the mechanism by which central motion information is automatically integrated in a context-dependent manner to drive ocular responses.
Significance Statement: Ocular following is driven by visual motion at ultra-short latency in both humans and monkeys. Its dynamics reflect the properties of low-level motion integration. Here, we show that a strong center-surround suppression mechanism modulates initial eye velocity. Its spatial properties depend on the spatial frequency of the visual input but are insensitive to its temporal frequency or speed. These properties are best described by a Difference-of-Gaussians model of spatial integration whose parameters reflect many spatial characteristics of motion-sensitive neuronal populations in monkey area MT. Our results further outline the computational properties of the behavioral receptive field underpinning automatic, context-dependent motion integration.
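The Difference-of-Gaussians size-tuning account described in this abstract can be sketched as follows. Integrating a radial Gaussian over a disk of radius r gives a term proportional to 1 - exp(-r^2 / (2*sigma^2)), so the response first grows with stimulus size and then declines once the surround term catches up. This is a minimal illustration with made-up gains and widths, not the authors' fitted model.

```python
import math

def dog_response(r, ke=1.0, se=4.0, ki=0.6, si=12.0):
    """Difference-of-Gaussians spatial summation for a centered patch of
    radius r (degrees). ke/ki: center/surround gains; se/si: center/surround
    Gaussian widths (degrees). All parameter values are illustrative."""
    center = ke * (1.0 - math.exp(-r ** 2 / (2.0 * se ** 2)))
    surround = ki * (1.0 - math.exp(-r ** 2 / (2.0 * si ** 2)))
    return center - surround

# Response grows with size, peaks at an optimal radius, then declines
# (surround suppression), as described for low spatial frequencies:
for r in (1, 2, 4, 8, 16, 32):
    print(f"radius {r:>2} deg -> response {dog_response(r):.3f}")
```

Shrinking the surround gain `ki` toward zero removes the decline at large sizes, mirroring the reported disappearance of suppression at high spatial frequencies.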
Affiliation(s)
- Frédéric V Barthélemy
- Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
- Jérome Fleuriet
- Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
- Assistance Publique-Hôpitaux de Paris, Intensive Care Unit, Raymond Poincaré Hospital, Garches, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France
- Guillaume S Masson
- Institut de Neurosciences de la Timone, UMR7289, CNRS & Aix-Marseille Université, 13385 Marseille, France