1
Park J, Soucy E, Segawa J, Mair R, Konkle T. Immersive scene representation in human visual cortex with ultra-wide-angle neuroimaging. Nat Commun 2024; 15:5477. [PMID: 38942766] [PMCID: PMC11213904] [DOI: 10.1038/s41467-024-49669-0]
Abstract
While human vision spans 220°, traditional functional MRI setups display images only within the central 10-15°. Thus, it remains unknown how the brain represents a scene perceived across the full visual field. Here, we introduce a method for ultra-wide-angle display and probe signatures of immersive scene representation. An unobstructed view of 175° is achieved by bouncing the projected image off angled mirrors onto a custom-built curved screen. To avoid perceptual distortion, scenes are rendered with a wide field of view from custom virtual environments. We find that immersive scene representation drives medial cortex with far-peripheral preferences, but shows minimal modulation in classic scene regions. Further, scene- and face-selective regions maintain their content preferences even with extreme far-peripheral stimulation, highlighting that not all far-peripheral information is automatically integrated into the computations of scene regions. This work provides clarifying evidence on content versus peripheral preferences in scene representation and opens new avenues for research on immersive vision.
Affiliation(s)
- Jeongho Park
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Edward Soucy
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Jennifer Segawa
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Ross Mair
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Talia Konkle
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Kempner Institute for Biological and Artificial Intelligence, Harvard University, Boston, MA, USA
2
Satish A, Keller VG, Raza S, Fitzpatrick S, Horner AJ. Theta and alpha oscillations in human hippocampus and medial parietal cortex support the formation of location-based representations. Hippocampus 2024; 34:284-301. [PMID: 38520305] [DOI: 10.1002/hipo.23605]
Abstract
Our ability to navigate in a new environment depends on learning new locations. Mental representations of locations are quickly accessible during navigation and allow us to know where we are regardless of our current viewpoint. Recent functional magnetic resonance imaging (fMRI) research using pattern classification has shown that these location-based representations emerge in the retrosplenial cortex and parahippocampal gyrus, regions theorized to be critically involved in spatial navigation. However, little is currently known about the oscillatory dynamics that support the formation of location-based representations. We used magnetoencephalography (MEG) to investigate region-specific oscillatory activity in a task where participants could form location-based representations. Participants viewed videos showing that two perceptually distinct scenes (180° apart) belonged to the same location. This "overlap" video allowed participants to bind the two distinct scenes together into a more coherent location-based representation. Participants also viewed control "non-overlap" videos in which two distinct scenes from two different locations were shown, so that no location-based representation could be formed. In a post-video behavioral task, participants successfully matched the two viewpoints shown in the overlap videos, but not the non-overlap videos, indicating that they learned the locations in the overlap condition. Comparing oscillatory activity between the overlap and non-overlap videos, we found greater theta and alpha/beta power during the overlap relative to non-overlap videos, specifically at time points when we expected scene integration to occur. These oscillations localized to regions in the medial parietal cortex (precuneus and retrosplenial cortex) and the medial temporal lobe, including the hippocampus. We therefore conclude that theta and alpha/beta oscillations in the hippocampus and medial parietal cortex are likely involved in the formation of location-based representations.
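The band-power contrast at the heart of this abstract (greater theta and alpha/beta power during overlap than non-overlap videos) can be illustrated with a minimal sketch. This is not the authors' MEG pipeline; it is a generic band-power computation using Welch's method on synthetic signals, with the sampling rate, band edges, and signal construction chosen purely for illustration.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin, fmax):
    """Mean power spectral density within [fmin, fmax] Hz (Welch's method)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

# Synthetic illustration: a 6 Hz ("theta") oscillation embedded in noise
# versus noise alone, standing in for an overlap vs. non-overlap contrast.
fs = 250  # sampling rate in Hz (an assumption; MEG systems vary)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
overlap = np.sin(2 * np.pi * 6 * t) + rng.standard_normal(t.size)
non_overlap = rng.standard_normal(t.size)

theta_overlap = band_power(overlap, fs, 4, 8)       # theta band: 4-8 Hz
theta_non_overlap = band_power(non_overlap, fs, 4, 8)
print(theta_overlap > theta_non_overlap)  # True: more theta power with the oscillation
```

In practice, an analysis like the one reported here involves epoching, artifact rejection, and source localization before any power contrast; the sketch shows only the final band-power step.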
Affiliation(s)
- Akul Satish
- Department of Psychology, University of York, York, UK
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Sumaiyah Raza
- Department of Psychology, University of York, York, UK
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Aidan J Horner
- Department of Psychology, University of York, York, UK
- York Biomedical Research Institute, University of York, York, UK
3
Westebbe L, Liang Y, Blaser E. The Accuracy and Precision of Memory for Natural Scenes: A Walk in the Park. Open Mind (Camb) 2024; 8:131-147. [PMID: 38435706] [PMCID: PMC10898787] [DOI: 10.1162/opmi_a_00122]
Abstract
It is challenging to quantify the accuracy and precision of scene memory because it is unclear what 'space' scenes occupy (how can we quantify error when misremembering a natural scene?). To address this, we exploited the ecologically valid, metric space in which scenes occur and are represented: routes. In a delayed estimation task, participants briefly saw a target scene drawn from a video of an outdoor 'route loop', then used a continuous report wheel of the route to pinpoint the scene. Accuracy was high and unbiased, indicating there was no net boundary extension/contraction. Interestingly, precision was higher for routes that were more self-similar (as characterized by the half-life, in meters, of a route's Multiscale Structural Similarity index), consistent with previous work finding a 'similarity advantage' where memory precision is regulated according to task demands. Overall, scenes were remembered to within a few meters of their actual location.
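The "half-life" measure used above to characterize route self-similarity can be grounded with a toy sketch: given similarity scores between scene pairs as a function of distance along the route, fit an exponential decay and report the distance at which similarity halves. This is not the paper's pipeline; the synthetic `similarities` values merely stand in for MS-SSIM scores, and the fitting method (least squares on the log-similarity, through the origin) is one simple choice among several.

```python
import numpy as np

def half_life(distances, similarities):
    """Fit s(d) ~ exp(-lam * d) by least squares on log(s);
    return the half-life ln(2)/lam in the same units as `distances`."""
    distances = np.asarray(distances, dtype=float)
    log_s = np.log(np.asarray(similarities, dtype=float))
    # Linear fit through the origin: log s = -lam * d
    lam = -(distances @ log_s) / (distances @ distances)
    return np.log(2) / lam

# Toy stand-in for MS-SSIM scores decaying with distance along a route
d = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # meters
s = np.exp(-0.1 * d)                      # perfect decay, lam = 0.1
print(round(half_life(d, s), 2))  # 6.93  (= ln 2 / 0.1)
```

A route whose frames stay similar over longer distances yields a longer half-life; under the paper's finding, such routes would support higher memory precision.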
Affiliation(s)
- Leo Westebbe
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Yibiao Liang
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Erik Blaser
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
4
Park J, Soucy E, Segawa J, Mair R, Konkle T. Immersive scene representation in human visual cortex with ultra-wide angle neuroimaging. bioRxiv [Preprint] 2024:2023.05.14.540275. [PMID: 37292806] [PMCID: PMC10245572] [DOI: 10.1101/2023.05.14.540275]
Abstract
While humans experience the visual environment in a panoramic 220° view, traditional functional MRI setups are limited to displaying images, like postcards, in the central 10-15° of the visual field. Thus, it remains unknown how a scene is represented in the brain when perceived across the full visual field. Here, we developed a novel method for ultra-wide-angle visual presentation and probed for signatures of immersive scene representation. To accomplish this, we bounced the projected image off angled mirrors directly onto a custom-built curved screen, creating an unobstructed view of 175°. Scene images were created from custom-built virtual environments with a compatible wide field of view to avoid perceptual distortion. We found that immersive scene representation drives medial cortex with far-peripheral preferences but, surprisingly, has little effect on classic scene regions: these regions showed relatively minimal modulation over dramatic changes in visual size. Further, we found that scene- and face-selective regions maintain their content preferences even under conditions of central scotoma, when only the extreme far-peripheral visual field is stimulated. These results highlight that not all far-peripheral information is automatically integrated into the computations of scene regions, and that there are routes to high-level visual areas that do not require direct stimulation of the central visual field. Broadly, this work provides new clarifying evidence on content versus peripheral preferences in scene representation and opens new neuroimaging research avenues for understanding immersive visual representation.
Affiliation(s)
- Ross Mair
- Center for Brain Science, Harvard University
- Department of Radiology, Harvard Medical School
- Department of Radiology, Massachusetts General Hospital
- Talia Konkle
- Department of Psychology, Harvard University
- Center for Brain Science, Harvard University
- Kempner Institute for Biological and Artificial Intelligence, Harvard University
5
Chai XJ, Tang L, Gabrieli JDE, Ofen N. From vision to memory: How scene-sensitive regions support episodic memory formation during child development. Dev Cogn Neurosci 2024; 65:101340. [PMID: 38218015] [PMCID: PMC10825658] [DOI: 10.1016/j.dcn.2024.101340]
Abstract
Previous brain imaging studies have identified three brain regions that selectively respond to visual scenes: the parahippocampal place area (PPA), the occipital place area (OPA), and the retrosplenial cortex (RSC). There is growing evidence that these scene-sensitive regions process different types of scene information and may follow different developmental timelines in supporting scene perception. How these regions support memory functions during child development is largely unknown. We investigated PPA, OPA, and RSC activations associated with episodic memory formation in childhood (5-7 years of age) and young adulthood, using a subsequent scene memory paradigm and a functional localizer for scenes. PPA, OPA, and RSC subsequent memory activation and functional connectivity differed between children and adults. Subsequent memory effects were found in the activations of all three scene regions in adults. In children, however, robust subsequent memory effects were found only in the PPA. Functional connectivity during successful encoding was significant among the three regions in adults, but not in children. PPA subsequent memory activations and PPA-RSC subsequent memory functional connectivity correlated with accuracy in adults, but not in children. These age-related differences add new evidence linking the protracted development of the scene-sensitive regions to the protracted development of episodic memory.
Affiliation(s)
- Xiaoqian J Chai
- Department of Neurology and Neurosurgery, McGill University, Canada
- Lingfei Tang
- Department of Psychology and the Institute of Gerontology, Wayne State University, USA
- John DE Gabrieli
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Noa Ofen
- Department of Psychology and the Institute of Gerontology, Wayne State University, USA
- Center for Vital Longevity and School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
6
Steel A, Silson EH, Garcia BD, Robertson CE. A retinotopic code structures the interaction between perception and memory systems. Nat Neurosci 2024; 27:339-347. [PMID: 38168931] [PMCID: PMC10923171] [DOI: 10.1038/s41593-023-01512-3]
Abstract
Conventional views of brain organization suggest that regions at the top of the cortical hierarchy process internally oriented information using an abstract, amodal neural code. Despite this, recent reports have described the presence of retinotopic coding at the cortical apex, including in the default mode network. What is the functional role of retinotopic coding atop the cortical hierarchy? Here we report that retinotopic coding structures interactions between internally oriented (mnemonic) and externally oriented (perceptual) brain areas. Using functional magnetic resonance imaging, we observed robust inverted (negative) retinotopic coding in category-selective memory areas at the cortical apex, which is functionally linked to the classic (positive) retinotopic coding in category-selective perceptual areas in high-level visual cortex. These functionally linked retinotopic populations in mnemonic and perceptual areas exhibit spatially specific opponent responses during both bottom-up perception and top-down recall, suggesting that these areas are interlocked in a mutually inhibitory dynamic. These results show that retinotopic coding structures interactions between perceptual and mnemonic neural systems, providing a scaffold for their dynamic interaction.
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Edward H Silson
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
- Brenda D Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Caroline E Robertson
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
7
Benchimol-Elkaim B, Khoury B, Tsimicalis A. Nature-based mindfulness programs using virtual reality to reduce pediatric perioperative anxiety: a narrative review. Front Pediatr 2024; 12:1334221. [PMID: 38283632] [PMCID: PMC10820709] [DOI: 10.3389/fped.2024.1334221]
Abstract
Over 75% of pediatric surgery patients experience preoperative anxiety, which can lead to complicated recoveries. Current interventions are less effective for children over 12 years of age, so new interventions, such as mindfulness-based interventions (MBIs), are needed. MBIs are effective at reducing mental health symptoms in youth, but they can be challenging for beginners. Virtual reality (VR) nature settings can help bridge this gap by providing an engaging 3-D practice environment that minimizes distractions and enhances presence. However, no study has investigated the combined effects of mindfulness training in natural VR settings for pediatric surgery patients, leaving a significant gap for a novel intervention. This paper aims to fill that gap with a narrative review exploring the potential of a nature-based mindfulness program using VR to reduce pediatric preoperative anxiety. It begins by addressing the risks of anxiety in children undergoing surgery, emphasizing its impact on physical recovery, and supporting the use of VR for anxiety reduction in hospitals. The review then examines VR's role in nature and mindfulness interventions, discussing theoretical concepts, clinical applications, and effectiveness. It also examines how the combination of mindfulness, nature, and VR can create an effective intervention, supported by relevant literature. Finally, it synthesizes the limitations, findings, gaps, and contradictions of the existing literature, concluding with research and clinical implications.
Affiliation(s)
- Bassam Khoury
- Department of Educational and Counselling Psychology, McGill University, Montreal, QC, Canada
- Argerie Tsimicalis
- Ingram School of Nursing, McGill University, Montreal, QC, Canada
- Shriners Hospital for Children, Montreal, QC, Canada
8
Steel A, Garcia BD, Goyal K, Mynick A, Robertson CE. Scene Perception and Visuospatial Memory Converge at the Anterior Edge of Visually Responsive Cortex. J Neurosci 2023; 43:5723-5737. [PMID: 37474310] [PMCID: PMC10401646] [DOI: 10.1523/jneurosci.2043-22.2023]
Abstract
To fluidly engage with the world, our brains must simultaneously represent both the scene in front of us and our memory of the immediate surrounding environment (i.e., local visuospatial context). How does the brain's functional architecture enable sensory and mnemonic representations to closely interface while also avoiding sensory-mnemonic interference? Here, we asked this question using first-person, head-mounted virtual reality and fMRI. Using virtual reality, human participants of both sexes learned a set of immersive, real-world visuospatial environments in which we systematically manipulated the extent of visuospatial context associated with a scene image in memory across three learning conditions, spanning from a single FOV to a city street. We used individualized, within-subject fMRI to determine which brain areas support memory of the visuospatial context associated with a scene during recall (Experiment 1) and recognition (Experiment 2). Across the whole brain, activity in three patches of cortex was modulated by the amount of known visuospatial context, each located immediately anterior to one of the three scene perception areas of high-level visual cortex. Individual subject analyses revealed that these anterior patches corresponded to three functionally defined place memory areas, which selectively respond when visually recalling personally familiar places. In addition to showing activity levels that were modulated by the amount of visuospatial context, multivariate analyses showed that these anterior areas represented the identity of the specific environment being recalled. Together, these results suggest a convergence zone for scene perception and memory of the local visuospatial context at the anterior edge of high-level visual cortex.
SIGNIFICANCE STATEMENT As we move through the world, the visual scene around us is integrated with our memory of the wider visuospatial context. Here, we sought to understand how the functional architecture of the brain enables coexisting representations of the current visual scene and memory of the surrounding environment. Using a combination of immersive virtual reality and fMRI, we show that memory of visuospatial context outside the current FOV is represented in a distinct set of brain areas immediately anterior and adjacent to the perceptually oriented scene-selective areas of high-level visual cortex. This functional architecture would allow efficient interaction between immediately adjacent mnemonic and perceptual areas while also minimizing interference between mnemonic and perceptual representations.
Affiliation(s)
- Adam Steel
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Brenda D Garcia
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Kala Goyal
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Anna Mynick
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Caroline E Robertson
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
9
Alexander AS, Place R, Starrett MJ, Chrastil ER, Nitz DA. Rethinking retrosplenial cortex: Perspectives and predictions. Neuron 2023; 111:150-175. [PMID: 36460006] [DOI: 10.1016/j.neuron.2022.11.006]
Abstract
The last decade has produced exciting new ideas about retrosplenial cortex (RSC) and its role in integrating diverse inputs. Here, we review the diversity in forms of spatial and directional tuning of RSC activity, temporal organization of RSC activity, and features of RSC interconnectivity with other brain structures. We find that RSC anatomy and dynamics are more consistent with roles in multiple sensorimotor and cognitive processes than with any isolated function. However, two more generalized categories of function may best characterize roles for RSC in complex cognitive processes: (1) shifting and relating perspectives for spatial cognition and (2) prediction and error correction for current sensory states with internal representations of the environment. Both functions likely take advantage of RSC's capacity to encode conjunctions among sensory, motor, and spatial mapping information streams. Together, these functions provide the scaffold for intelligent actions, such as navigation, perspective taking, interaction with others, and error detection.
Affiliation(s)
- Andrew S Alexander
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA
- Ryan Place
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA
- Michael J Starrett
- Department of Neurobiology & Behavior, University of California, Irvine, Irvine, CA 92697, USA
- Elizabeth R Chrastil
- Department of Neurobiology & Behavior, University of California, Irvine, Irvine, CA 92697, USA
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, USA
- Douglas A Nitz
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA
10
Dissociating Hippocampal and Cortical Contributions to Predictive Processing. J Neurosci 2023; 43:184-186. [PMID: 36646458] [PMCID: PMC9838692] [DOI: 10.1523/jneurosci.1840-22.2022]
11
Bruni F, Mancuso V, Stramba-Badiale C, Greci L, Cavallo M, Borghesi F, Riva G, Cipresso P, Stramba-Badiale M, Pedroli E. ObReco-2: Two-step validation of a tool to assess memory deficits using 360° videos. Front Aging Neurosci 2022; 14:875748. [PMID: 35966782] [PMCID: PMC9366856] [DOI: 10.3389/fnagi.2022.875748]
Abstract
Traditional neuropsychological evaluations are usually carried out using psychometric paper-and-pencil tests. Nevertheless, there is ongoing debate about how well such tests capture real-life abilities. The introduction of new technologies, such as Virtual Reality (VR) and 360° spherical photos and videos, has improved the ecological validity of neuropsychological assessment. The possibility of simulating realistic environments and situations allows clinicians to evaluate patients in realistic activities. Moreover, 360° photos and videos seem to provide higher graphical realism and greater technical user-friendliness than standard VR, despite their limited interactivity. We developed a novel 360° tool, ObReco-2 (Object Recognition version 2), for the assessment of visual memory, which simulates a daily situation in a virtual house. More precisely, patients are asked to memorize some objects that need to be moved for a relocation. After this phase, they are asked to recall the objects after 15 minutes and, later, to recognize them in the same environment. Here we present a first study on the usability of ObReco-2, and a second one exploring its clinical efficacy along with updated usability data. We focused on Free Recall and Recognition scores, comparing the performances obtained by the participants on the standard and the 360° test. The preliminary results support the use of 360° technology for enhancing the ecological value of standard memory assessment tests.
Affiliation(s)
- Chiara Stramba-Badiale
- Applied Technology for Neuropsychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Luca Greci
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, Milan, Italy
- Marco Cavallo
- Faculty of Psychology, eCampus University, Novedrate, Italy
- Francesca Borghesi
- Applied Technology for Neuropsychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Giuseppe Riva
- Applied Technology for Neuropsychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Humane Technology Lab, Catholic University of the Sacred Heart, Milan, Italy
- Pietro Cipresso
- Applied Technology for Neuropsychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, University of Turin, Turin, Italy
- Marco Stramba-Badiale
- Department of Geriatrics and Cardiovascular Medicine, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Elisa Pedroli
- Faculty of Psychology, eCampus University, Novedrate, Italy
- Applied Technology for Neuropsychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy
12
Pedroli E, Mancuso V, Stramba-Badiale C, Cipresso P, Tuena C, Greci L, Goulene K, Stramba-Badiale M, Riva G, Gaggioli A. Brain M-App's Structure and Usability: A New Application for Cognitive Rehabilitation at Home. Front Hum Neurosci 2022; 16:898633. [PMID: 35782042] [PMCID: PMC9248351] [DOI: 10.3389/fnhum.2022.898633]
Abstract
Cognitive frailty is defined as a clinical condition characterized by both physical frailty and cognitive impairment, without reaching the criteria for dementia. The major goal of rehabilitation intervention is to assist patients in performing ordinary personal duties without the assistance of another person, or at the very least to remove the need for additional support, using adaptive approaches and facilities. In this regard, home-based rehabilitation allows patients to continue an intervention begun in a hospital setting while also ensuring support and assistance when access to healthcare systems is limited, such as during a pandemic. We thus present Brain m-App, a tablet-based application designed for the home-based cognitive rehabilitation of frail subjects, addressing spatial memory, attention, and executive functions. The app exploits the potential of 360° videos, which are well suited to home-based rehabilitation. Brain m-App comprises 10 days of activities that include a variety of exercises, chosen to mirror those patients performed during clinical practice in the hospital, with the aim of improving their independence and autonomy in daily tasks. A preliminary usability test, conducted with five older adults, revealed a sufficient level of usability, although the sample size was modest. Results from the clinical study with 10 patients revealed that Brain m-App improved executive functions and memory performance in particular.
Affiliation(s)
- Elisa Pedroli
- Applied Technology for Neuro-Psychology Lab, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy
- Faculty of Psychology, eCampus University, Novedrate, Italy
- Valentina Mancuso
- Faculty of Psychology, eCampus University, Novedrate, Italy
- Chiara Stramba-Badiale
- Applied Technology for Neuro-Psychology Lab, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy
- Pietro Cipresso
- Applied Technology for Neuro-Psychology Lab, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, University of Turin, Turin, Italy
- Cosimo Tuena
- Applied Technology for Neuro-Psychology Lab, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Luca Greci
- Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing – National Research Council, Milan, Italy
- Karine Goulene
- Department of Geriatrics and Cardiovascular Medicine, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy
- Marco Stramba-Badiale
- Department of Geriatrics and Cardiovascular Medicine, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy
- Giuseppe Riva
- Applied Technology for Neuro-Psychology Lab, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milan, Italy
- Andrea Gaggioli
- Applied Technology for Neuro-Psychology Lab, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
13
Děchtěrenko F, Lukavský J. False memories when viewing overlapping scenes. PeerJ 2022; 10:e13187. [PMID: 35411252] [PMCID: PMC8994494] [DOI: 10.7717/peerj.13187]
Abstract
Humans can memorize and later recognize many objects and complex scenes. In this study, we prepared large photographs and presented participants with only partial views to test the fidelity of their memories. The unpresented parts of the photographs were used as a source of distractors with similar semantic and perceptual information. Additionally, we presented overlapping views to determine whether the second presentation provided a memory advantage for later recognition tests. Experiment 1 (N = 28) showed that while people were good at recognizing presented content and identifying new foils, they showed a remarkable level of uncertainty about foils selected from the unseen parts of presented photographs (false-alarm rate: 59%). Recognition accuracy was higher for the parts that were shown twice, irrespective of whether the identical photograph was viewed twice or two photographs with overlapping content were observed. In Experiment 2 (N = 28), the memorability of the large image was estimated by a pretrained deep neural network. Neither the recognition accuracy for an image part nor the tendency toward false alarms correlated with memorability. Finally, in Experiment 3 (N = 21), we repeated the experiment while measuring eye movements. Fixations were biased toward the center of the original large photograph in the first presentation, and this bias was repeated during the second presentation in both identical and overlapping views. Altogether, our experiments show that people recognize parts of remembered photographs, but they find it difficult to reject foils from unseen parts, suggesting that their memory representation is not sufficiently detailed to rule such foils out as distractors.
Affiliation(s)
- Filip Děchtěrenko, Institute of Psychology, Czech Academy of Sciences, Prague, Czech Republic
- Jiří Lukavský, Institute of Psychology, Czech Academy of Sciences, Prague, Czech Republic
14
Yan Y, Burgess N, Bicanski A. A model of head direction and landmark coding in complex environments. PLoS Comput Biol 2021; 17:e1009434. [PMID: 34570749 PMCID: PMC8496825 DOI: 10.1371/journal.pcbi.1009434] [Received: 02/23/2021] [Revised: 10/07/2021] [Accepted: 09/08/2021]
Abstract
Environmental information is required to stabilize estimates of head direction (HD) based on angular path integration. However, it is unclear how this happens in real-world (visually complex) environments. We present a computational model of how visual feedback can stabilize HD information in environments that contain multiple cues of varying stability and directional specificity. We show how combinations of feature-specific visual inputs can generate a stable unimodal landmark bearing signal, even in the presence of multiple cues and ambiguous directional specificity. This signal is associated with the retrosplenial HD signal (inherited from thalamic HD cells) and conveys feedback to the subcortical HD circuitry. The model predicts neurons with a unimodal encoding of the egocentric orientation of the array of landmarks, rather than any one particular landmark. The relationship between these abstract landmark bearing neurons and head direction cells is reminiscent of the relationship between place cells and grid cells. Their unimodal encoding is formed from visual inputs via a modified version of Oja's Subspace Algorithm. The rule allows the landmark bearing signal to disconnect from directionally unstable or ephemeral cues, incorporate newly added stable cues, support orientation across many different environments (high memory capacity), and is consistent with recent empirical findings on bidirectional HD firing reported in the retrosplenial cortex. Our account of visual feedback for HD stabilization provides a novel perspective on neural mechanisms of spatial navigation within richer sensory environments, and makes experimentally testable predictions.
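The learning rule named in this abstract can be illustrated numerically. Below is a minimal sketch of the standard Oja subspace rule (the paper uses a modified version whose details are not reproduced here); the toy data, dimensions, and learning rate are illustrative assumptions, not taken from the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_subspace_step(W, x, eta=0.01):
    """One step of Oja's subspace rule: y = W^T x, then W += eta * (x y^T - W y y^T)."""
    y = W.T @ x
    return W + eta * (np.outer(x, y) - W @ np.outer(y, y))

# Toy inputs: 5-D "visual feature" vectors whose variance lies mostly in a 2-D subspace.
basis = np.linalg.qr(rng.normal(size=(5, 2)))[0]   # orthonormal basis of the true subspace
W = 0.1 * rng.normal(size=(5, 2))                  # random initial weights

for _ in range(10_000):
    x = basis @ rng.normal(size=2) + 0.05 * rng.normal(size=5)
    W = oja_subspace_step(W, x)

# After learning, the columns of W form a near-orthonormal basis of the dominant
# subspace, without any explicit normalization step.
print(np.round(W.T @ W, 2))
```

The self-stabilizing property (weights stay bounded and converge to an orthonormal basis of the principal subspace) is what makes Oja-type rules attractive for modeling a bearing signal that tracks the whole landmark array rather than any single cue.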
Affiliation(s)
- Yijia Yan, Institute of Cognitive Neuroscience, University College London, London, United Kingdom; Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Neil Burgess, Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Andrej Bicanski, Institute of Cognitive Neuroscience, University College London, London, United Kingdom; School of Psychology, Newcastle University, Newcastle upon Tyne, United Kingdom
15
Steel A, Billings MM, Silson EH, Robertson CE. A network linking scene perception and spatial memory systems in posterior cerebral cortex. Nat Commun 2021; 12:2632. [PMID: 33976141 PMCID: PMC8113503 DOI: 10.1038/s41467-021-22848-z] [Received: 07/16/2020] [Accepted: 04/05/2021]
Abstract
The scene-perception and spatial-memory systems of the human brain are well described. But how do these neural systems interact? Here, using fine-grained individual-subject fMRI, we report three cortical areas of the human brain, each lying immediately anterior to a region of the scene perception network in posterior cerebral cortex, that selectively activate when recalling familiar real-world locations. Despite their close proximity to the scene-perception areas, network analyses show that these regions constitute a distinct functional network that interfaces with spatial memory systems during naturalistic scene understanding. These "place-memory areas" offer a new framework for understanding how the brain implements memory-guided visual behaviors, including navigation.
Affiliation(s)
- Adam Steel, Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Madeleine M. Billings, Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Edward H. Silson, Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, EH8 9JZ, UK
- Caroline E. Robertson, Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
16
The parahippocampal place area and hippocampus encode the spatial significance of landmark objects. Neuroimage 2021; 236:118081. [PMID: 33882351 DOI: 10.1016/j.neuroimage.2021.118081] [Received: 01/03/2021] [Revised: 03/13/2021] [Accepted: 04/12/2021]
Abstract
Landmark objects are points of reference that can anchor one's internal cognitive map to the external world while navigating. They are especially useful in indoor environments where other cues such as spatial geometries are often similar across locations. We used functional magnetic resonance imaging (fMRI) and multivariate pattern analysis (MVPA) to understand how the spatial significance of landmark objects is represented in the human brain. Participants learned the spatial layout of a virtual building with arbitrary objects as unique landmarks in each room during a navigation task. They were scanned while viewing the objects before and after learning. MVPA revealed that the neural representation of landmark objects in the right parahippocampal place area (rPPA) and the hippocampus transformed systematically according to their locations. Specifically, objects in different rooms became more distinguishable than objects in the same room. These results demonstrate that rPPA and the hippocampus encode the spatial significance of landmark objects in indoor spaces.
17
Baumann O, Mattingley JB. Extrahippocampal contributions to spatial navigation in humans: A review of the neuroimaging evidence. Hippocampus 2021; 31:640-657. [DOI: 10.1002/hipo.23313] [Received: 07/11/2020] [Revised: 01/18/2021] [Accepted: 01/24/2021]
Affiliation(s)
- Oliver Baumann, School of Psychology, Bond University, Robina, Queensland, Australia
- Jason B. Mattingley, Queensland Brain Institute, The University of Queensland, Brisbane, Queensland, Australia; School of Psychology, The University of Queensland, Brisbane, Queensland, Australia; Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario, Canada
18
Berens SC, Joensen BH, Horner AJ. Tracking the Emergence of Location-based Spatial Representations in Human Scene-Selective Cortex. J Cogn Neurosci 2020; 33:445-462. [PMID: 33284080 PMCID: PMC8658499 DOI: 10.1162/jocn_a_01654]
Abstract
Scene-selective regions of the human brain form allocentric representations of locations in our environment. These representations are independent of heading direction and allow us to know where we are regardless of our direction of travel. However, we know little about how these location-based representations are formed. Using fMRI representational similarity analysis and linear mixed models, we tracked the emergence of location-based representations in scene-selective brain regions. We estimated patterns of activity for two distinct scenes, taken before and after participants learnt they were from the same location. During a learning phase, we presented participants with two types of panoramic videos: (1) an overlap video condition displaying two distinct scenes (0° and 180°) from the same location and (2) a no-overlap video displaying two distinct scenes from different locations (which served as a control condition). In the parahippocampal cortex (PHC) and retrosplenial cortex (RSC), representations of scenes from the same location became more similar to each other only after they had been shown in the overlap condition, suggesting the emergence of viewpoint-independent location-based representations. Whereas these representations emerged in the PHC regardless of task performance, RSC representations only emerged for locations where participants could behaviorally identify the two scenes as belonging to the same location. The results suggest that we can track the emergence of location-based representations in the PHC and RSC in a single fMRI experiment. Further, they support computational models that propose the RSC plays a key role in transforming viewpoint-independent representations into behaviorally relevant representations of specific viewpoints.
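The core measure in a representational similarity analysis like this is simply the correlation between multivoxel activity patterns. A minimal sketch with simulated data follows; the voxel count, noise levels, and pattern structure are hypothetical illustrations, not the study's data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 100  # voxels in a hypothetical ROI (e.g., PHC)

def pattern_similarity(p1, p2):
    """RSA-style similarity: Pearson correlation between two voxel patterns."""
    return np.corrcoef(p1, p2)[0, 1]

# Before learning: the two scenes from one location evoke unrelated patterns.
pre_a = rng.normal(size=n_vox)
pre_b = rng.normal(size=n_vox)

# After overlap learning: both scene patterns share a common "location" component.
location_code = rng.normal(size=n_vox)
post_a = location_code + 0.8 * rng.normal(size=n_vox)
post_b = location_code + 0.8 * rng.normal(size=n_vox)

print(f"pre-learning similarity:  {pattern_similarity(pre_a, pre_b):+.2f}")   # near zero
print(f"post-learning similarity: {pattern_similarity(post_a, post_b):+.2f}")  # elevated
```

An increase in same-location pattern similarity after the overlap condition is the signature the study tests for; the real analysis additionally models stimulus, session, and subject effects with linear mixed models.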
Affiliation(s)
- Bárður H Joensen, University of York; UCL Institute of Cognitive Neuroscience; UCL Institute of Neurology
19
Huffman DJ, Ekstrom AD. An Important Step toward Understanding the Role of Body-based Cues on Human Spatial Memory for Large-Scale Environments. J Cogn Neurosci 2020; 33:167-179. [PMID: 33226317 DOI: 10.1162/jocn_a_01653]
Abstract
Moving our body through space is fundamental to human navigation; however, technical and physical limitations have hindered our ability to study the role of these body-based cues experimentally. We recently designed an experiment using novel immersive virtual-reality technology, which allowed us to tightly control the availability of body-based cues to determine how these cues influence human spatial memory [Huffman, D. J., & Ekstrom, A. D. A modality-independent network underlies the retrieval of large-scale spatial environments in the human brain. Neuron, 104, 611-622, 2019]. Our analysis of behavior and fMRI data revealed a similar pattern of results across a range of body-based cues conditions, thus suggesting that participants likely relied primarily on vision to form and retrieve abstract, holistic representations of the large-scale environments in our experiment. We ended our paper by discussing a number of caveats and future directions for research on the role of body-based cues in human spatial memory. Here, we reiterate and expand on this discussion, and we use a commentary in this issue by A. Steel, C. E. Robertson, and J. S. Taube (Current promises and limitations of combined virtual reality and functional magnetic resonance imaging research in humans: A commentary on Huffman and Ekstrom (2019). Journal of Cognitive Neuroscience, 2020) as a helpful discussion point regarding some of the questions that we think will be the most interesting in the coming years. We highlight the exciting possibility of taking a more naturalistic approach to study the behavior, cognition, and neuroscience of navigation. Moreover, we share the hope that researchers who study navigation in humans and nonhuman animals will synergize to provide more rapid advancements in our understanding of cognition and the brain.
20
Steel A, Robertson CE, Taube JS. Current Promises and Limitations of Combined Virtual Reality and Functional Magnetic Resonance Imaging Research in Humans: A Commentary on Huffman and Ekstrom (2019). J Cogn Neurosci 2020; 33:159-166. [PMID: 33054553 DOI: 10.1162/jocn_a_01635]
Abstract
Real-world navigation requires movement of the body through space, producing a continuous stream of visual and self-motion signals, including proprioceptive, vestibular, and motor efference cues. These multimodal cues are integrated to form a spatial cognitive map, an abstract, amodal representation of the environment. How the brain combines these disparate inputs and the relative importance of these inputs to cognitive map formation and recall are key unresolved questions in cognitive neuroscience. Recent advances in virtual reality technology allow participants to experience body-based cues when virtually navigating, and thus it is now possible to consider these issues in new detail. Here, we discuss a recent publication that addresses some of these issues (D. J. Huffman and A. D. Ekstrom. A modality-independent network underlies the retrieval of large-scale spatial environments in the human brain. Neuron, 104, 611-622, 2019). In doing so, we also review recent progress in the study of human spatial cognition and raise several questions that might be addressed in future studies.
21
Riva G, Wiederhold BK. How Cyberpsychology and Virtual Reality Can Help Us to Overcome the Psychological Burden of Coronavirus. Cyberpsychol Behav Soc Netw 2020; 23:277-279. [PMID: 32310689 DOI: 10.1089/cyber.2020.29183.gri]
Affiliation(s)
- Giuseppe Riva, Applied Technology for Neuro-Psychology Lab, IRCCS Istituto Auxologico Italiano, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Brenda K Wiederhold, Virtual Reality Medical Center, La Jolla, California, USA; Virtual Reality Medical Institute, Brussels, Belgium
22
Fischer LF, Mojica Soto-Albors R, Buck F, Harnett MT. Representation of visual landmarks in retrosplenial cortex. eLife 2020; 9:e51458. [PMID: 32154781 PMCID: PMC7064342 DOI: 10.7554/elife.51458] [Received: 08/29/2019] [Accepted: 02/03/2020]
Abstract
The process by which visual information is incorporated into the brain’s spatial framework to represent landmarks is poorly understood. Studies in humans and rodents suggest that retrosplenial cortex (RSC) plays a key role in these computations. We developed an RSC-dependent behavioral task in which head-fixed mice learned the spatial relationship between visual landmark cues and hidden reward locations. Two-photon imaging revealed that these cues served as dominant reference points for most task-active neurons and anchored the spatial code in RSC. This encoding was more robust after task acquisition. Decoupling the virtual environment from mouse behavior degraded spatial representations and provided evidence that supralinear integration of visual and motor inputs contributes to landmark encoding. V1 axons recorded in RSC were less modulated by task engagement but showed surprisingly similar spatial tuning. Our data indicate that landmark representations in RSC are the result of local integration of visual, motor, and spatial information.

When moving through a city, people often use notable or familiar landmarks to help them navigate. Landmarks provide us with information about where we are and where we need to go next. But despite the ease with which we – and most other animals – use landmarks to find our way around, it remains unclear exactly how the brain makes this possible. One area that seems to have a key role is the retrosplenial cortex, which is located deep within the back of the brain in humans. This area becomes more active when animals use visual landmarks to navigate. It is also one of the first brain regions to be affected in Alzheimer's disease, which may help to explain why patients with this condition can become lost and disoriented, even in places they have been many times before. To find out how the retrosplenial cortex supports navigation, Fischer et al. measured its activity in mice exploring a virtual reality world. The mice ran through simulated corridors in which visual landmarks indicated where hidden rewards could be found. The activity of most neurons in the retrosplenial cortex was most strongly influenced by the mouse’s position relative to the landmark; for example, some neurons were always active 10 centimeters after the landmark. In other experiments, when the landmarks were present but no longer indicated the location of a reward, the same neurons were much less active. Fischer et al. also measured the activity of the neurons when the mice were running with nothing shown on the virtual reality, and when they saw a landmark but did not run. Notably, the activity seen when the mice were using the landmarks to find rewards was greater than the sum of that recorded when the mice were just running or just seeing the landmark without a reward, making the “landmark response” an example of so-called supralinear processing. Fischer et al. showed that visual centers of the brain send information about landmarks to retrosplenial cortex. But only the latter adjusts its activity depending on whether the mouse is using that landmark to navigate. These findings provide the first evidence for a “landmark code” at the level of neurons and lay the foundations for studying impaired navigation in patients with Alzheimer's disease. By showing that retrosplenial cortex neurons combine different types of input in a supralinear fashion, the results also point to general principles for how neurons in the brain perform complex calculations.
Affiliation(s)
- Lukas F Fischer, Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
- Raul Mojica Soto-Albors, Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
- Friederike Buck, Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
- Mark T Harnett, Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
23
Cooper RA, Ritchey M. Progression from Feature-Specific Brain Activity to Hippocampal Binding during Episodic Encoding. J Neurosci 2020; 40:1701-1709. [PMID: 31826947 PMCID: PMC7046330 DOI: 10.1523/jneurosci.1971-19.2019] [Received: 08/14/2019] [Revised: 12/04/2019] [Accepted: 12/05/2019]
Abstract
The hallmark of episodic memory is recollecting multiple perceptual details tied to a specific spatial-temporal context. To remember an event, it is therefore necessary to integrate such details into a coherent representation during initial encoding. Here we tested how the brain encodes and binds multiple, distinct kinds of features in parallel, and how this process evolves over time during the event itself. We analyzed data from 27 human subjects (16 females, 11 males) who learned a series of objects uniquely associated with a color, a panoramic scene location, and an emotional sound while fMRI data were collected. By modeling how brain activity relates to memory for upcoming or just-viewed information, we were able to test how the neural signatures of individual features as well as the integrated event changed over the course of encoding. We observed a striking dissociation between early and late encoding processes: left inferior frontal and visuo-perceptual signals at the onset of an event tracked the amount of detail subsequently recalled and were dissociable based on distinct remembered features. In contrast, memory-related brain activity shifted to the left hippocampus toward the end of an event, which was particularly sensitive to binding item color and sound associations with spatial information. These results provide evidence of early, simultaneous feature-specific neural responses during episodic encoding that predict later remembering and suggest that the hippocampus integrates these features into a coherent experience at an event transition.

SIGNIFICANCE STATEMENT: Understanding and remembering complex experiences are crucial for many socio-cognitive abilities, including being able to navigate our environment, predict the future, and share experiences with others. Probing the neural mechanisms by which features become bound into meaningful episodes is a vital part of understanding how we view and reconstruct the rich detail of our environment. By testing memory for multimodal events, our findings show a functional dissociation between early encoding processes that engage lateral frontal and sensory regions to successfully encode event features, and later encoding processes that recruit hippocampus to bind these features together. These results highlight the importance of considering the temporal dynamics of encoding processes supporting multimodal event representations.
Affiliation(s)
- Rose A Cooper, Department of Psychology, Boston College, Chestnut Hill, Massachusetts 02467
- Maureen Ritchey, Department of Psychology, Boston College, Chestnut Hill, Massachusetts 02467
24
Riva G, Bernardelli L, Browning MHEM, Castelnuovo G, Cavedoni S, Chirico A, Cipresso P, de Paula DMB, Di Lernia D, Fernández-Álvarez J, Figueras-Puigderrajols N, Fuji K, Gaggioli A, Gutiérrez-Maldonado J, Hong U, Mancuso V, Mazzeo M, Molinari E, Moretti LF, Ortiz de Gortari AB, Pagnini F, Pedroli E, Repetto C, Sforza F, Stramba-Badiale C, Tuena C, Malighetti C, Villani D, Wiederhold BK. COVID Feel Good-An Easy Self-Help Virtual Reality Protocol to Overcome the Psychological Burden of Coronavirus. Front Psychiatry 2020; 11:563319. [PMID: 33173511 PMCID: PMC7538634 DOI: 10.3389/fpsyt.2020.563319] [Received: 05/20/2020] [Accepted: 08/31/2020]
Abstract
BACKGROUND: Living in the time of COVID-19 means experiencing not only a global health emergency but also extreme psychological stress with potential emotional side effects such as sadness, grief, irritability, and mood swings. Crucially, lockdown and confinement measures isolate people, who become the first and only ones in charge of their own mental health: people are left alone facing a novel and potentially lethal situation, and, at the same time, they need to develop adaptive strategies to face it at home. In this view, easy-to-use, inexpensive, and scientifically validated self-help solutions that aim to reduce the psychological burden of coronavirus are extremely necessary.
AIMS: This pragmatic trial aims to provide evidence that a weekly self-help virtual reality (VR) protocol can help overcome the psychological burden of the coronavirus by relieving anxiety, improving well-being, and reinforcing social connectedness. The protocol is based on the "Secret Garden" 360° VR video online (www.covidfeelgood.com), which simulates a natural environment aiming to promote relaxation and self-reflection. Three-hundred-sixty-degree or spherical videos allow the user to control the viewing direction. In this way, the user can explore the content from any angle like a panorama and experience presence and immersion. The "Secret Garden" video is combined with daily exercises that are designed to be experienced with another person (not necessarily physically together), to facilitate a process of critical examination and eventual revision of core assumptions and beliefs related to personal identity, relationships, and goals.
METHODS: This is a multicentric, pragmatic pilot randomized controlled trial involving individuals who experienced the COVID-19 pandemic and underwent lockdown and quarantine procedures. The trial is approved by the Ethics Committee of the Istituto Auxologico Italiano. Each research group in all the countries joining the pragmatic trial aims to enroll at least 30 individuals in the experimental group experiencing the self-help protocol, and 30 in the control group, over a period of 3 months to verify the feasibility of the intervention.
CONCLUSION: The goal of this protocol is for VR to become the "surgical mask" of mental health treatment. Although surgical masks do not provide the wearer with a reliable level of protection against the coronavirus compared with FFP2 or FFP3 masks, surgical masks are very effective in protecting others from the wearer's respiratory emissions. The goal of the VR protocol is the same: not necessarily to solve complex mental health problems but rather to improve well-being and preserve social connectedness through the beneficial social effects generated by positive emotions.
Affiliation(s)
- Giuseppe Riva, IRCCS Istituto Auxologico Italiano, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Gianluca Castelnuovo, IRCCS Istituto Auxologico Italiano, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Alice Chirico, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Pietro Cipresso, IRCCS Istituto Auxologico Italiano, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Daniele Di Lernia, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Kei Fuji, Division of Psychology, University of Tsukuba, Tokyo, Japan
- Andrea Gaggioli, IRCCS Istituto Auxologico Italiano, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Upyong Hong, Department of Media and Communication, Konkuk University, Seoul, South Korea
- Milena Mazzeo, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Enrico Molinari, IRCCS Istituto Auxologico Italiano, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Luciana F Moretti, Sociedad Española de Realidad Virtual y Psicología, Las Rozas - Madrid, Spain
- Angelica B Ortiz de Gortari, The Centre for the Science of Learning & Technology (SLATE), University of Bergen, Bergen, Norway; Psychology and Neuroscience of Cognition Research Unit, University of Liège, Liège, Belgium
- Francesco Pagnini, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Elisa Pedroli, IRCCS Istituto Auxologico Italiano, Milan, Italy; Faculty of Psychology, University of eCampus, Novedrate, Italy
- Claudia Repetto, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Cosimo Tuena, IRCCS Istituto Auxologico Italiano, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Clelia Malighetti, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Daniela Villani, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Brenda K Wiederhold, Virtual Reality Medical Center, La Jolla, CA, United States; Virtual Reality Medical Institute, Brussels, Belgium
25
Ventura S, Brivio E, Riva G, Baños RM. Immersive Versus Non-immersive Experience: Exploring the Feasibility of Memory Assessment Through 360° Technology. Front Psychol 2019; 10:2509. [PMID: 31798492 PMCID: PMC6868024 DOI: 10.3389/fpsyg.2019.02509] [Received: 06/17/2019] [Accepted: 10/23/2019]
Abstract
Episodic memory is essential to effectively perform a number of daily activities, as it enables individuals to consciously recall experiences within their spatial and temporal environments. Virtual Reality (VR) serves as an efficacious instrument to assess cognitive functions like attention and memory. Previous studies have adopted computer-simulated VR to assess memory, which realized greater benefits compared to traditional procedures (paper and pencil). One of the most recent trends of immersive VR experiences is the 360° technology. In order to evaluate its capabilities, this study aims to compare memory performance through two tasks: immersive task and non-immersive task. These tasks differ based on the participant's view of the 360° picture: (1) head-mounted display (HMD) for immersive task and (2) tablet for non-immersive task. This study seeks to compare how memory is facilitated in both the 360° immersive picture as well as the non-immersive 360° picture. A repeated measure design was carried out in a sample of 42 participants, randomized into two groups of 21. Group 1 first observed Picture A (immersive) followed by Picture B (non-immersive) while Group 2 began with Picture B and then looked at Picture A. Each 360° picture contains specific items with some items appearing in both. Memory evaluation is assessed immediately after the exposure (recall task), then again after a 10-min delay (recognition task). Results reveal that Group 1, which began with the immersive task, demonstrated stronger memory performance in the long term as compared to Group 2, which began with the non-immersive task. Preliminary data ultimately supports the efficacy of the 360° technology in evaluating cognitive function.
Affiliation(s)
- Sara Ventura, Department of Personality, Assessment and Psychological Treatments, University of Valencia, Valencia, Spain
- Eleonora Brivio, Department of Psychology, Centro Studi e Ricerche di Psicologia della Comunicazione, Università Cattolica del Sacro Cuore, Milan, Italy
- Giuseppe Riva, Department of Psychology, Centro Studi e Ricerche di Psicologia della Comunicazione, Università Cattolica del Sacro Cuore, Milan, Italy; Applied Technology for Neuro-Psychology Laboratory, Auxologico Institute, Milan, Italy
- Rosa M Baños, Department of Personality, Assessment and Psychological Treatments, University of Valencia, Valencia, Spain; CIBERObn Ciber Physiopathology of Obesity and Nutrition, Madrid, Spain
26
Ramanoël S, York E, Le Petit M, Lagrené K, Habas C, Arleo A. Age-Related Differences in Functional and Structural Connectivity in the Spatial Navigation Brain Network. Front Neural Circuits 2019; 13:69. [PMID: 31736716 PMCID: PMC6828843 DOI: 10.3389/fncir.2019.00069] [Received: 05/29/2019] [Accepted: 10/09/2019]
Abstract
Spatial navigation involves multiple cognitive processes including multisensory integration, visuospatial coding, memory, and decision-making. These functions are mediated by the interplay of cerebral structures that can be broadly separated into a posterior network (subserving visual and spatial processing) and an anterior network (dedicated to memory and navigation planning). Within these networks, areas such as the hippocampus (HC) are known to be affected by aging and to be associated with cognitive decline and navigation impairments. However, age-related changes in brain connectivity within the spatial navigation network remain to be investigated. For this purpose, we performed a neuroimaging study combining functional and structural connectivity analyses between cerebral regions involved in spatial navigation. Nineteen young (μ = 27 years, σ = 4.3; 10 F) and 22 older (μ = 73 years, σ = 4.1; 10 F) participants were examined in this study. Our analyses focused on the parahippocampal place area (PPA), the retrosplenial cortex (RSC), the occipital place area (OPA), and the projections into the visual cortex of central and peripheral visual fields, delineated from independent functional localizers. In addition, we segmented the HC and the medial prefrontal cortex (mPFC) from anatomical images. Our results show an age-related decrease in functional connectivity between low-level visual areas and the HC, together with an increase in functional connectivity between OPA and PPA in older participants compared to young ones. Concerning structural connectivity, we found age-related differences in white matter integrity within the navigation brain network, with the exception of the OPA. The OPA is known to be involved in egocentric navigation, as opposed to allocentric strategies, which are more related to the hippocampal region. The increase in functional connectivity between the OPA and PPA may thus reflect a compensatory mechanism for the age-related alterations around the HC, favoring the use of the preserved structural network mediating egocentric navigation. Overall, these findings on age-related differences in functional and structural connectivity may help to elucidate the cerebral bases of spatial navigation deficits in healthy and pathological aging.
Affiliation(s)
- Stephen Ramanoël: Sorbonne Universités, INSERM, CNRS, Institut de la Vision, Paris, France
- Elizabeth York: Sorbonne Universités, INSERM, CNRS, Institut de la Vision, Paris, France; Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Marine Le Petit: Sorbonne Universités, INSERM, CNRS, Institut de la Vision, Paris, France; Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- Karine Lagrené: Sorbonne Universités, INSERM, CNRS, Institut de la Vision, Paris, France
- Angelo Arleo: Sorbonne Universités, INSERM, CNRS, Institut de la Vision, Paris, France
27. Pedroli E, Cipresso P, Greci L, Arlati S, Boilini L, Stefanelli L, Rossi M, Goulene K, Sacco M, Stramba-Badiale M, Gaggioli A, Riva G. An Immersive Motor Protocol for Frailty Rehabilitation. Front Neurol 2019; 10:1078. PMID: 31681149; PMCID: PMC6803811; DOI: 10.3389/fneur.2019.01078.
Abstract
Frailty is a pre-clinical condition that worsens physical health and quality of life. One of its most frequent consequences is an increased risk of falling. To reduce this risk, we propose an innovative virtual reality motor rehabilitation program based on an immersive tool. All exercises will take place in a CAVE, a four-screen immersive room equipped with a stationary bike. The protocol will include two types of balance exercises: "Positive Bike" and "Avoid the Rocks." Outcomes will be assessed with evaluation scales covering both functional aspects and subjective perception of balance. Our aim is to show that this innovative motor rehabilitation protocol is as effective as, or more effective than, classical rehabilitation.
Affiliation(s)
- Elisa Pedroli: Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano - Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
- Pietro Cipresso: Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano - Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Luca Greci: Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council, Milan, Italy
- Sara Arlati: Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council, Milan, Italy; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Lorenzo Boilini: Department of Geriatrics and Cardiovascular Medicine, Istituto Auxologico Italiano - Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
- Laura Stefanelli: Department of Geriatrics and Cardiovascular Medicine, Istituto Auxologico Italiano - Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
- Monica Rossi: Department of Geriatrics and Cardiovascular Medicine, Istituto Auxologico Italiano - Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
- Karine Goulene: Department of Geriatrics and Cardiovascular Medicine, Istituto Auxologico Italiano - Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
- Marco Sacco: Institute of Intelligent Industrial Technologies and Systems for Advanced Manufacturing, National Research Council, Milan, Italy
- Marco Stramba-Badiale: Department of Geriatrics and Cardiovascular Medicine, Istituto Auxologico Italiano - Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
- Andrea Gaggioli: Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano - Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Giuseppe Riva: Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano - Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy; Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
28.
Abstract
Humans are remarkably adept at perceiving and understanding complex real-world scenes. Uncovering the neural basis of this ability is an important goal of vision science. Neuroimaging studies have identified three cortical regions that respond selectively to scenes: parahippocampal place area, retrosplenial complex/medial place area, and occipital place area. Here, we review what is known about the visual and functional properties of these brain areas. Scene-selective regions exhibit retinotopic properties and sensitivity to low-level visual features that are characteristic of scenes. They also mediate higher-level representations of layout, objects, and surface properties that allow individual scenes to be recognized and their spatial structure ascertained. Challenges for the future include developing computational models of information processing in scene regions, investigating how these regions support scene perception under ecologically realistic conditions, and understanding how they operate in the context of larger brain networks.
Affiliation(s)
- Russell A Epstein: Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Chris I Baker: Section on Learning and Plasticity, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20892, USA
29. Elshout JA, van den Berg AV, Haak KV. Human V2A: A map of the peripheral visual hemifield with functional connections to scene-selective cortex. J Vis 2018; 18(9):22. PMID: 30267074; PMCID: PMC6159387; DOI: 10.1167/18.9.22.
Abstract
Humans can recognize a scene in the blink of an eye. This gist-based visual scene perception is thought to be underpinned by specialized visual processing emphasizing the visual periphery at a cortical locus relatively low in the visual processing hierarchy. Using wide-field retinotopic mapping and population receptive field (pRF) modeling, we identified a new visual hemifield map anterior of area V2d and inferior to area V6, which we propose to call area V2A. Based on its location relative to other visual areas, V2A may correspond to area 23V described in nonhuman primates. The pRF analysis revealed unique receptive field properties for V2A: a large (FWHM ∼23°) and constant receptive field size across the central ∼70° of the visual field. Resting-state fMRI connectivity analysis further suggests that V2A is ideally suited to quickly feed the scene-processing network with information that is not biased towards the center of the visual field. Our findings not only indicate a likely cortical locus for the initial stages of gist-based visual scene perception, but also suggest a reappraisal of the organization of human dorsomedial occipital cortex with a strip of separate hemifield representations anterior to the early visual areas (V1, V2d, and V3d).
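The pRF modeling mentioned in this abstract can be illustrated with a toy sketch: a candidate Gaussian receptive field is overlapped with each stimulus aperture to predict a response timecourse, and the receptive-field parameters best matching the measured data are found by grid search. This is a simplified 1D illustration under assumed parameter values (no HRF convolution, synthetic "voxel" data), not the authors' actual analysis code.

```python
import numpy as np

def gaussian_prf(x, center, size):
    """Unit-height Gaussian receptive field over visual-field positions x (deg)."""
    return np.exp(-0.5 * ((x - center) / size) ** 2)

def predicted_response(stimulus, x, center, size):
    """Predicted response per timepoint: overlap of stimulus aperture and pRF.

    stimulus: (T, P) binary masks, 1 where the bar covers position x[p].
    """
    return stimulus @ gaussian_prf(x, center, size)

def fit_prf(stimulus, x, data, centers, sizes):
    """Grid search for the (center, size) whose prediction best correlates with data."""
    best = (None, None, -np.inf)
    for c in centers:
        for s in sizes:
            pred = predicted_response(stimulus, x, c, s)
            r = np.corrcoef(pred, data)[0, 1]
            if r > best[2]:
                best = (c, s, r)
    return best

# Toy experiment: a 10-deg-wide bar sweeping a 1D "visual field" from -35 to +35 deg.
x = np.linspace(-35, 35, 141)
T = 60
bar_centers = np.linspace(-35, 35, T)
stimulus = np.zeros((T, x.size))
for t, b in enumerate(bar_centers):
    stimulus[t] = (np.abs(x - b) < 5).astype(float)

# Synthetic voxel with a far-peripheral pRF (center 25 deg, size 8 deg) plus noise.
rng = np.random.default_rng(0)
true = predicted_response(stimulus, x, 25.0, 8.0)
data = true + rng.normal(0, 0.1 * true.std(), T)

center, size, r = fit_prf(stimulus, x, data,
                          centers=np.linspace(-30, 30, 61),
                          sizes=np.linspace(2, 20, 19))
print(center, size, round(r, 3))
```

With low noise and the true parameters inside the grid, the fit recovers a peripheral center near 25 deg; real pRF pipelines add HRF convolution and nonlinear optimization on top of this logic.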
Affiliation(s)
- Joris A Elshout: Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands
- Albert V van den Berg: Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands
- Koen V Haak: Department of Cognitive Neuroscience, Radboud University Medical Centre, Nijmegen, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, the Netherlands
30. Nag S, Berman D, Golomb JD. Category-selective areas in human visual cortex exhibit preferences for stimulus depth. Neuroimage 2019; 196:289-301. PMID: 30978498; DOI: 10.1016/j.neuroimage.2019.04.025.
Abstract
Multiple regions in the human brain are dedicated to accomplishing the feat of object recognition; yet our brains must also compute the 2D and 3D locations of the objects we encounter in order to make sense of our visual environments. A number of studies have explored how various object category-selective regions are sensitive to, and have preferences for, specific 2D spatial locations in addition to processing their preferred stimulus categories, but there has been no survey of how these regions respond to depth information. In a blocked functional MRI experiment, subjects viewed a series of category-specific (i.e., faces, objects, scenes) and category-unspecific (e.g., random moving dots) stimuli through red/green anaglyph glasses. Critically, these stimuli were presented at different depth planes such that they appeared in front of, behind, or at the same (i.e., middle) depth plane as the fixation point (Experiment 1), or simultaneously in front of and behind fixation (i.e., mixed depth; Experiment 2). Comparisons of mean response magnitudes between back, middle, and front depth planes reveal that face and object regions OFA and LOC exhibit a preference for front depths, and motion area MT+ exhibits a strong linear preference for front, followed by middle, followed by back depth planes. In contrast, scene-selective regions PPA and OPA prefer front and/or back depth planes (relative to middle). Moreover, the occipital place area demonstrates a strong preference for mixed depth above and beyond back alone, raising potential implications for its particular role in scene perception. Crucially, the observed depth preferences in nearly all areas were evoked irrespective of the semantic stimulus category being viewed. These results reveal that object category-selective regions may play a role in processing or incorporating depth information that is orthogonal to their primary processing of object category information.
Affiliation(s)
- Samoni Nag: Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, USA; Department of Psychology, The George Washington University, USA
- Daniel Berman: Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, USA
- Julie D Golomb: Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, USA
31. Cooper RA, Ritchey M. Cortico-hippocampal network connections support the multidimensional quality of episodic memory. eLife 2019; 8:e45591. PMID: 30900990; PMCID: PMC6450667; DOI: 10.7554/eLife.45591.
Abstract
Episodic memories reflect a bound representation of multimodal features that can be reinstated with varying precision. Yet little is known about how brain networks involved in memory, including the hippocampus and posterior-medial (PM) and anterior-temporal (AT) systems, interact to support the quality and content of recollection. Participants learned color, spatial, and emotion associations of objects, later reconstructing the visual features using a continuous color spectrum and 360-degree panorama scenes. Behaviorally, dependencies in memory were observed for the gist but not precision of event associations. Supporting this integration, hippocampus, AT, and PM regions showed increased connectivity and reduced modularity during retrieval compared to encoding. These inter-network connections tracked a multidimensional, objective measure of memory quality. Moreover, distinct patterns of connectivity tracked item color and spatial memory precision. These findings demonstrate how hippocampal-cortical connections reconfigure during episodic retrieval, and how such dynamic interactions might flexibly support the multidimensional quality of remembered events.
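The continuous-report measures described here (a color wheel, positions in 360-degree panoramas) quantify memory precision as angular error on a circular space. As an illustrative sketch of that kind of circular-error computation (not the authors' analysis code), the error must be wrapped so that, for example, responding 10° to a 350° target counts as a 20° error, not 340°:

```python
import numpy as np

def circular_error_deg(response, target):
    """Signed angular error in degrees, wrapped to the range (-180, 180]."""
    return (np.asarray(response) - np.asarray(target) + 180.0) % 360.0 - 180.0

# Responses that straddle the 0/360 boundary are handled correctly:
responses = np.array([10.0, 350.0, 181.0])
targets = np.array([350.0, 10.0, 0.0])
print(circular_error_deg(responses, targets))  # errors of +20, -20, and -179 degrees
```

Precision summaries (e.g., the spread of these wrapped errors across trials) can then be computed with standard circular statistics.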
Affiliation(s)
- Rose A Cooper: Department of Psychology, Boston College, Boston, United States
- Maureen Ritchey: Department of Psychology, Boston College, Boston, United States
32. Lescroart MD, Gallant JL. Human Scene-Selective Areas Represent 3D Configurations of Surfaces. Neuron 2018; 101:178-192.e7. PMID: 30497771; DOI: 10.1016/j.neuron.2018.11.004.
Abstract
It has been argued that scene-selective areas in the human brain represent both the 3D structure of the local visual environment and low-level 2D features (such as spatial frequency) that provide cues for 3D structure. To evaluate the degree to which each of these hypotheses explains variance in scene-selective areas, we develop an encoding model of 3D scene structure and test it against a model of low-level 2D features. We fit the models to fMRI data recorded while subjects viewed visual scenes. The fit models reveal that scene-selective areas represent the distance to and orientation of large surfaces, at least partly independent of low-level features. Principal component analysis of the model weights reveals that the most important dimensions of 3D structure are distance and openness. Finally, reconstructions of the stimuli based on the model weights demonstrate that our model captures unprecedented detail about the local visual environment from scene-selective areas.
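The voxelwise encoding-model logic summarized here (fit a model mapping stimulus features to each voxel's response, then test on held-out data) can be sketched with regularized regression. This is a toy illustration with synthetic data and hypothetical "3D-structure" features, not the authors' actual feature spaces or fitting pipeline:

```python
import numpy as np

def fit_ridge(X, y, alpha):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Toy data: each "scene" is described by a handful of hypothetical structure
# features (e.g., distance and openness channels); one synthetic voxel
# responds to a weighted mix of them plus noise.
rng = np.random.default_rng(1)
n_train, n_test, n_feat = 200, 50, 6
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
true_w = np.array([1.5, -0.8, 0.0, 0.4, 0.0, 0.0])  # voxel cares about a few features
y_train = X_train @ true_w + rng.normal(0, 0.3, n_train)
y_test = X_test @ true_w + rng.normal(0, 0.3, n_test)

w = fit_ridge(X_train, y_train, alpha=1.0)
pred = X_test @ w
r = np.corrcoef(pred, y_test)[0, 1]  # prediction accuracy on held-out scenes
print(round(r, 3))
```

Comparing held-out prediction accuracy between competing feature spaces (e.g., 3D structure vs. low-level 2D features) is the core of the model comparison the abstract describes; interpreting the fitted weights `w` parallels the paper's principal component analysis of model weights.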
Affiliation(s)
- Mark D Lescroart: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Jack L Gallant: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
33. Serino S, Repetto C. New Trends in Episodic Memory Assessment: Immersive 360° Ecological Videos. Front Psychol 2018; 9:1878. PMID: 30333780; PMCID: PMC6176050; DOI: 10.3389/fpsyg.2018.01878.
Abstract
How best to measure memory in a reliable and valid way has been intensely debated in the neuropsychological literature. Classical neuropsychological tests often fail to predict real-life performance or to capture the multifaceted nature of memory function. To address these issues, there has been a growing emphasis on more ecological memory assessment, and several virtual reality-based tools have been developed to evaluate memory function. The aim of the current perspective is to critically discuss the possibilities that one of the most innovative trends in the technology field, 360° video, offers for episodic memory assessment. Immersivity, egocentric view, and realism appear to be the crucial features that enable 360° videos to enhance the ecological validity of classical tools for assessing memory abilities.
Affiliation(s)
- Silvia Serino: Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy; Applied Technology for Neuro-Psychology Laboratory, Istituto Auxologico Italiano, Milan, Italy
- Claudia Repetto: Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
34. Clark BJ, Simmons CM, Berkowitz LE, Wilber AA. The retrosplenial-parietal network and reference frame coordination for spatial navigation. Behav Neurosci 2018; 132:416-429. PMID: 30091619; PMCID: PMC6188841; DOI: 10.1037/bne0000260.
Abstract
The retrosplenial cortex is anatomically positioned to integrate sensory, motor, and visual information and is thought to have an important role in processing spatial information and guiding behavior through complex environments. Anatomical and theoretical work has argued that the retrosplenial cortex participates in spatial behavior in concert with input from the parietal cortex. Although the nature of these interactions is unknown, a central position is that the functional connectivity is hierarchical with egocentric spatial information processed in the parietal cortex and higher-level allocentric mappings generated in the retrosplenial cortex. Here, we review the evidence supporting this proposal. We begin by summarizing the key anatomical features of the retrosplenial-parietal network, and then review studies investigating the neural correlates of these regions during spatial behavior. Our summary of this literature suggests that the retrosplenial-parietal circuitry does not represent a strict hierarchical parcellation of function between the two regions but instead a heterogeneous mixture of egocentric-allocentric coding and integration across frames of reference. We also suggest that this circuitry should be represented as a gradient of egocentric-to-allocentric information processing from parietal to retrosplenial cortices, with more specialized encoding of global allocentric frameworks within the retrosplenial cortex and more specialized egocentric and local allocentric representations in parietal cortex. We conclude by identifying the major gaps in this literature and suggest new avenues of research.
35. Lavoie EB, Valevicius AM, Boser QA, Kovic O, Vette AH, Pilarski PM, Hebert JS, Chapman CS. Using synchronized eye and motion tracking to determine high-precision eye-movement patterns during object-interaction tasks. J Vis 2018; 18(6):18. DOI: 10.1167/18.6.18.
Affiliation(s)
- Ewen B. Lavoie: Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
- Aïda M. Valevicius: Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Quinn A. Boser: Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Ognjen Kovic: Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, Alberta, Canada
- Albert H. Vette: Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, Canada; Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta, Canada; Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, Alberta, Canada; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Patrick M. Pilarski: Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, Alberta, Canada
- Jacqueline S. Hebert: Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, Canada; Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, Alberta, Canada; Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, Alberta, Canada; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Craig S. Chapman: Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
36. Mitchell AS, Czajkowski R, Zhang N, Jeffery K, Nelson AJD. Retrosplenial cortex and its role in spatial cognition. Brain Neurosci Adv 2018; 2:2398212818757098. PMID: 30221204; PMCID: PMC6095108; DOI: 10.1177/2398212818757098.
Abstract
Retrosplenial cortex is a region within the posterior neocortical system that is heavily interconnected with an array of cortical and subcortical brain networks and is engaged by a myriad of cognitive tasks. Although there is no consensus as to its precise function, evidence from both human and animal studies clearly points to a role in spatial cognition. However, the spatial processing impairments that follow retrosplenial cortex damage are not straightforward to characterise, leading to difficulties in defining the exact nature of its role. In this article, we review this literature and classify the ideas that have been put forward into three broad, somewhat overlapping classes: (1) learning of landmark location, stability and permanence; (2) integration between spatial reference frames; and (3) consolidation and retrieval of spatial knowledge (schemas). We evaluate these models and suggest ways to test them, before briefly discussing whether the spatial function may be a subset of a more general function in episodic memory.
Affiliation(s)
- Anna S. Mitchell: Department of Experimental Psychology, University of Oxford, Oxford, UK
- Rafal Czajkowski: Department of Molecular and Cellular Neurobiology, Nencki Institute of Experimental Biology, Warsaw, Poland
- Ningyu Zhang: Institute of Behavioural Neuroscience, Division of Psychology and Language Sciences, University College London, London, UK
- Kate Jeffery: Institute of Behavioural Neuroscience, Division of Psychology and Language Sciences, University College London, London, UK
37. Pennartz CMA. Consciousness, Representation, Action: The Importance of Being Goal-Directed. Trends Cogn Sci 2017; 22:137-153. PMID: 29233478; DOI: 10.1016/j.tics.2017.10.006.
Abstract
Recent years have witnessed fierce debates on the dependence of consciousness on interactions between a subject and the environment. Reviewing neuroscientific, computational, and clinical evidence, I will address three questions. First, does conscious experience necessarily depend on acute interactions between a subject and the environment? Second, does it depend on specific perception-action loops in the longer run? Third, which types of action does consciousness cohere with, if not with all of them? I argue that conscious contents do not necessarily depend on acute or long-term brain-environment interactions. Instead, consciousness is proposed to be specifically associated with, and subserve, deliberate, goal-directed behavior (GDB). Brain systems implied in conscious representation are highly connected to, but distinct from, neural substrates mediating GDB and declarative memory.
Affiliation(s)
- Cyriel M A Pennartz: Swammerdam Institute for Life Sciences, Center for Neuroscience, Faculty of Science, University of Amsterdam, The Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, The Netherlands
38. An independent, landmark-dominated head-direction signal in dysgranular retrosplenial cortex. Nat Neurosci 2017; 20:173-175. PMID: 27991898; PMCID: PMC5274535; DOI: 10.1038/nn.4465.
Abstract
We investigated how landmarks influence the brain’s computation of head direction and found that in a bi-directionally symmetrical environment, some neurons in dysgranular retrosplenial cortex showed bi-directional firing patterns. This indicates dominance of neural activity by local environmental cues even when these conflict with the global head direction signal. It suggests a mechanism for associating landmarks to or dissociating them from the head direction signal, according to their directional stability/utility.