1
Blanc T, Godard C, Grevent D, El Beheiry M, Salomon LJ, Hajj B, Masson JB. Photorealistic rendering of fetal faces from raw magnetic resonance imaging data. Ultrasound in Obstetrics & Gynecology 2025. [PMID: 39825872 DOI: 10.1002/uog.29165] [Received: 05/24/2024] [Revised: 11/11/2024] [Accepted: 11/29/2024] [Indexed: 01/20/2025]
Affiliation(s)
- T Blanc
- Laboratoire Physico-Chimie, Institut Curie, PSL Research University, CNRS UMR168, Paris, France
- Sorbonne Universités, UPMC Univ Paris 06, Paris, France
- C Godard
- Decision and Bayesian Computation, Neuroscience & Computational Biology Departments, CNRS UMR 3571, Institut Pasteur, Paris, France
- Epiméthée, INRIA, Paris, France
- AVATAR MEDICAL, Paris, France
- D Grevent
- LUMIERE Platform, EA Fetus 7328, Université de Paris Cité, Paris, France
- Department of Radiology, Necker-Enfants Malades Hospital, AP-HP, Paris, France
- M El Beheiry
- Decision and Bayesian Computation, Neuroscience & Computational Biology Departments, CNRS UMR 3571, Institut Pasteur, Paris, France
- Epiméthée, INRIA, Paris, France
- AVATAR MEDICAL, Paris, France
- L J Salomon
- LUMIERE Platform, EA Fetus 7328, Université de Paris Cité, Paris, France
- Department of Obstetrics, Fetal Medicine and Surgery, Necker-Enfants Malades Hospital, AP-HP, Paris, France
- B Hajj
- Laboratoire Physico-Chimie, Institut Curie, PSL Research University, CNRS UMR168, Paris, France
- Sorbonne Universités, UPMC Univ Paris 06, Paris, France
- J-B Masson
- Decision and Bayesian Computation, Neuroscience & Computational Biology Departments, CNRS UMR 3571, Institut Pasteur, Paris, France
- Epiméthée, INRIA, Paris, France
- AVATAR MEDICAL, Paris, France
2
Chen J, Yuan Z, Xi J, Gao Z, Li Y, Zhu X, Shi YS, Guan F, Wang Y. Efficient and Accurate Semi-Automatic Neuron Tracing with Extended Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:7299-7309. [PMID: 39255163 DOI: 10.1109/tvcg.2024.3456197] [Indexed: 09/12/2024]
Abstract
Neuron tracing, also referred to as neuron reconstruction, is the procedure of extracting a digital representation of three-dimensional neuronal morphology from stacks of microscopic images. Accurate neuron tracing is critical for profiling neuroanatomical structure at the single-cell level and for analyzing neuronal circuits and projections at the whole-brain scale. However, the process often demands substantial human involvement and remains a nontrivial task. Conventional solutions to neuron tracing often contend with challenges such as non-intuitive user interactions, suboptimal data-generation throughput, and ambiguous visualization. In this paper, we introduce a novel method that leverages extended reality (XR) for intuitive, progressive, semi-automatic neuron tracing in real time. We define a set of interactors for controllable and efficient neuron-tracing interactions in an immersive environment, and we develop a GPU-accelerated automatic tracing algorithm that generates updated neuron reconstructions in real time. In addition, we built a visualizer for a fast and improved visual experience, particularly when working with both volumetric images and 3D objects. Our method has been implemented successfully on a virtual reality (VR) headset and an augmented reality (AR) headset with satisfactory results. Two user studies demonstrated the effectiveness of the interactors and the efficiency of our method compared with other approaches to neuron tracing.
3
Sasaki N, Lee S. Evaluation of Remote Surgical Hands-on Training in Veterinary Education Using a Hololens Mixed Reality Head-Mounted Display. Journal of Veterinary Medical Education 2024:e20230115. [PMID: 39504195 DOI: 10.3138/jvme-2023-0115] [Indexed: 11/08/2024]
Abstract
Conferencing system-assisted online classes have been conducted worldwide since the COVID-19 pandemic, and the use of three-dimensional (3D) glasses may improve pre-clinical veterinary education. However, students' satisfaction with this technique, rather than their ability to perform surgery using these devices, has not been assessed. This study aimed to evaluate the effectiveness of remote online hands-on training in veterinary education using 3D glasses. Sixty students enrolled at the Faculty of Veterinary Medicine at Yamaguchi University voluntarily participated and were randomly divided into a 3D glasses group and a tablet group, each with 30 students. Each student completed one orthopedic and one ophthalmological task. The orthopedic task was performing surgery on a limb model, whereas the ophthalmological task involved incising a cornea on an eye model. The 3D glasses group completed the ophthalmology task and then the orthopedic task at a venue separate from the instructor; the tablet group completed the same tasks using a tablet. In the student questionnaire, orthopedic screw fixation showed significantly higher satisfaction in the 3D glasses group than in the tablet group, indicating a preference for this method. By contrast, for ophthalmic corneal suturing, the tablet group reported significantly higher satisfaction than the 3D glasses group. Our findings show that 3D glasses have high educational value in practical training requiring depth and angle information.
Affiliation(s)
- Naoki Sasaki
- Equine Emergency Surgery and Critical Care, Joint Faculty of Veterinary Medicine, Department of Clinical Veterinary Science, Yamaguchi University
- Sanchan Lee
- Equine Emergency Surgery and Critical Care, Joint Faculty of Veterinary Medicine, Department of Clinical Veterinary Science, Yamaguchi University
4
De la Cruz-Ku G, Mallouh MP, Torres Roman JS, Linshaw D. Three-dimensional virtual reality in surgical planning for breast cancer with reconstruction. SAGE Open Med Case Rep 2023; 11:2050313X231179299. [PMID: 37325162 PMCID: PMC10262605 DOI: 10.1177/2050313x231179299] [Received: 10/10/2022] [Accepted: 05/15/2023] [Indexed: 06/17/2023]
Abstract
Breast surgery is performed to achieve local control in patients with breast cancer. Visualization of the anatomy with a virtual reality software platform reconstructed from magnetic resonance imaging data improves surgical planning with regard to the volume and localization of the tumor, lymph nodes, blood vessels, and surrounding tissue when performing oncoplastic tissue rearrangement. We report the use and advantages of virtual reality added to the magnetic resonance imaging assessment in a 36-year-old woman with breast cancer who underwent nipple-sparing mastectomy with tissue expander reconstruction.
Affiliation(s)
- Gabriel De la Cruz-Ku
- Department of Surgery, University of Massachusetts Medical School, Worcester, MA, USA
- Universidad Científica del Sur, Lima, Perú
- Michael P Mallouh
- Department of Surgery, University of Massachusetts Medical School, Worcester, MA, USA
- Jr Smith Torres Roman
- South American Center for Education and Research in Public Health, Universidad Norbert Wiener, Lima, Perú
- David Linshaw
- Department of Surgery, University of Massachusetts Medical School, Worcester, MA, USA
5
Cordero Cervantes D, Khare H, Wilson AM, Mendoza ND, Coulon-Mahdi O, Lichtman JW, Zurzolo C. 3D reconstruction of the cerebellar germinal layer reveals tunneling connections between developing granule cells. Science Advances 2023; 9:eadf3471. [PMID: 37018410 PMCID: PMC10075961 DOI: 10.1126/sciadv.adf3471] [Received: 10/15/2022] [Accepted: 03/02/2023] [Indexed: 06/19/2023]
Abstract
The difficulty of retrieving high-resolution, in vivo evidence of the proliferative and migratory processes occurring in neural germinal zones has limited our understanding of neurodevelopmental mechanisms. Here, we took a connectomic approach, using a high-resolution, serial-sectioning scanning electron microscopy volume to investigate the laminar cytoarchitecture of the transient external granular layer (EGL) of the developing cerebellum, where granule cells coordinate a series of mitotic and migratory events. By integrating image segmentation, three-dimensional reconstruction, and deep-learning approaches, we found and characterized anatomically complex intercellular connections bridging pairs of cerebellar granule cells throughout the EGL. Connected cells were either mitotic, migratory, or transitioning between these two cell stages, displaying a chronological continuum of proliferative and migratory events never previously observed in vivo at this resolution. This unprecedented ultrastructural characterization raises intriguing hypotheses about intercellular connectivity between developing progenitors and its possible role in the development of the central nervous system.
Affiliation(s)
- Diégo Cordero Cervantes
- Membrane Traffic and Pathogenesis, Institut Pasteur, Université Paris Cité, CNRS UMR 3691, F-75015 Paris, France
- Université Paris-Saclay, 91405 Orsay, France
- Harshavardhan Khare
- Membrane Traffic and Pathogenesis, Institut Pasteur, Université Paris Cité, CNRS UMR 3691, F-75015 Paris, France
- Alyssa Michelle Wilson
- Department of Neurology, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Nathaly Dongo Mendoza
- Membrane Traffic and Pathogenesis, Institut Pasteur, Université Paris Cité, CNRS UMR 3691, F-75015 Paris, France
- Research Center in Bioengineering, Universidad de Ingeniería y Tecnología-UTEC, Lima 15049, Peru
- Orfane Coulon-Mahdi
- Membrane Traffic and Pathogenesis, Institut Pasteur, Université Paris Cité, CNRS UMR 3691, F-75015 Paris, France
- Jeff William Lichtman
- Department of Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Chiara Zurzolo
- Membrane Traffic and Pathogenesis, Institut Pasteur, Université Paris Cité, CNRS UMR 3691, F-75015 Paris, France
6
Yang G, Wang L, Qin X, Chen X, Liang Y, Jin X, Chen C, Zhang W, Pan W, Li H. Heterogeneities of zebrafish vasculature development studied by a high throughput light-sheet flow imaging system. Biomedical Optics Express 2022; 13:5344-5357. [PMID: 36425637 PMCID: PMC9664872 DOI: 10.1364/boe.470058] [Received: 07/11/2022] [Revised: 08/17/2022] [Accepted: 08/30/2022] [Indexed: 06/16/2023]
Abstract
Zebrafish is one of the ideal model animals to study the structural and functional heterogeneities in development. However, the lack of high throughput 3D imaging techniques has limited studies to only a few samples, despite zebrafish spawning tens of embryos at once. Here, we report a light-sheet flow imaging system (LS-FIS) based on light-sheet illumination and a continuous flow imager. LS-FIS enables whole-larva 3D imaging of tens of samples within half an hour. The high throughput 3D imaging capability of LS-FIS was demonstrated with the developmental study of the zebrafish vasculature from 3 to 9 days post-fertilization. Statistical analysis shows significant variances in trunk vessel development but less in hyaloid vessel development.
Affiliation(s)
- Guang Yang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Linbo Wang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Xiaofei Qin
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Xiaohu Chen
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Yong Liang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Xin Jin
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Chong Chen
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou 215163, China
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Wenjuan Zhang
- Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai 200031, China
- Weijun Pan
- Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai 200031, China
- Hui Li
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
7
Valades-Cruz CA, Leconte L, Fouche G, Blanc T, Van Hille N, Fournier K, Laurent T, Gallean B, Deslandes F, Hajj B, Faure E, Argelaguet F, Trubuil A, Isenberg T, Masson JB, Salamero J, Kervrann C. Challenges of intracellular visualization using virtual and augmented reality. Frontiers in Bioinformatics 2022; 2:997082. [PMID: 36304296 PMCID: PMC9580941 DOI: 10.3389/fbinf.2022.997082] [Received: 07/18/2022] [Accepted: 08/26/2022] [Indexed: 11/22/2022]
Abstract
Microscopy image observation is commonly performed on 2D screens, which limits human capacities to grasp volumetric, complex, and discrete biological dynamics. With the massive production of multidimensional images (3D + time, multi-channel) and derived images (e.g., restored images, segmentation maps, and object tracks), scientists need appropriate visualization and navigation methods to better apprehend the information they contain. New modes of visualization have emerged, including virtual reality (VR) and augmented reality (AR) approaches, which should allow more accurate analysis and exploration of large time series of volumetric images, such as those produced by the latest 3D + time fluorescence microscopy. They include integrated algorithms that allow researchers to interactively explore complex spatiotemporal objects at the scale of single cells or multicellular systems, in near real time. In practice, however, immersion of the user within 3D + time microscopy data represents both a paradigm shift in human-image interaction and an acculturation challenge for the community concerned. To promote broader adoption of these approaches by biologists, further dialogue is needed between the bioimaging community and VR/AR developers.
Affiliation(s)
- Cesar Augusto Valades-Cruz
- SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France
- SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universités, Paris, France
- Ludovic Leconte
- SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France
- SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universités, Paris, France
- Gwendal Fouche
- SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France
- SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universités, Paris, France
- Inria, CNRS, IRISA, University Rennes, Rennes, France
- Thomas Blanc
- Laboratoire Physico-Chimie, Institut Curie, PSL Research University, Sorbonne Universités, CNRS UMR168, Paris, France
- Kevin Fournier
- SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France
- SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universités, Paris, France
- Inria, CNRS, IRISA, University Rennes, Rennes, France
- Tao Laurent
- LIRMM, Université Montpellier, CNRS, Montpellier, France
- Bassam Hajj
- Laboratoire Physico-Chimie, Institut Curie, PSL Research University, Sorbonne Universités, CNRS UMR168, Paris, France
- Emmanuel Faure
- LIRMM, Université Montpellier, CNRS, Montpellier, France
- Alain Trubuil
- MaIAGE, INRAE, Université Paris-Saclay, Jouy-en-Josas, France
- Jean-Baptiste Masson
- Decision and Bayesian Computation, Neuroscience and Computational Biology Departments, CNRS UMR 3571, Institut Pasteur, Université Paris Cité, Paris, France
- Jean Salamero
- SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France
- SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universités, Paris, France
- Charles Kervrann
- SERPICO Project Team, Inria Centre Rennes-Bretagne Atlantique, Rennes, France
- SERPICO/STED Team, UMR144 CNRS Institut Curie, PSL Research University, Sorbonne Universités, Paris, France
8
Taylor S, Soneji S. Bioinformatics and the Metaverse: Are We Ready? Frontiers in Bioinformatics 2022; 2:863676. [PMID: 36304263 PMCID: PMC9580841 DOI: 10.3389/fbinf.2022.863676] [Received: 01/27/2022] [Accepted: 04/20/2022] [Indexed: 02/01/2023]
Abstract
COVID-19 forced humanity to think about new ways of working globally without being physically present with other people, and eXtended Reality (XR) systems (encompassing Virtual Reality, Augmented Reality, and Mixed Reality) offer a potentially elegant solution. Although previously seen as mainly for gaming, XR solutions are now being investigated by commercial and research institutions to solve real-world problems in training, simulation, mental health, data analysis, and the study of disease progression. More recently, large corporations such as Microsoft and Meta have announced that they are developing the Metaverse as a new paradigm for interacting with the digital world. This article looks at how visualization can leverage the Metaverse in bioinformatics research, the pros and cons of this technology, and what the future may hold.
Affiliation(s)
- Stephen Taylor
- Analysis, Visualization and Informatics Group, MRC Weatherall Institute of Computational Biology, MRC Weatherall Institute of Molecular Medicine, Oxford, United Kingdom
- Shamit Soneji
- Division of Molecular Hematology, Department of Laboratory Medicine, Faculty of Medicine, BMC, Lund University, Lund, Sweden
- Lund Stem Cell Center, Faculty of Medicine, BMC, Lund University, Lund, Sweden
9
Guérinot C, Marcon V, Godard C, Blanc T, Verdier H, Planchon G, Raimondi F, Boddaert N, Alonso M, Sailor K, Lledo PM, Hajj B, El Beheiry M, Masson JB. New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing. Frontiers in Bioinformatics 2022; 1:777101. [PMID: 36303792 PMCID: PMC9580868 DOI: 10.3389/fbinf.2021.777101] [Received: 09/14/2021] [Accepted: 12/15/2021] [Indexed: 01/02/2023]
Abstract
Three-dimensional imaging is at the core of medical imaging and is becoming a standard in biological research. As a result, there is an increasing need to visualize, analyze, and interact with data in a natural three-dimensional context. By combining stereoscopy and motion tracking, commercial virtual reality (VR) headsets provide a solution to this critical visualization challenge by allowing users to view volumetric image stacks in a highly intuitive fashion. While optimizing the visualization and interaction process in VR remains an active topic, one of the most pressing issues is how to use VR for data annotation and analysis. Annotating data is often a required step for training machine learning algorithms, and improving this capability matters: in biological research, newly acquired three-dimensional data may come in limited quantities, and medical data annotation is often time-consuming and requires expert knowledge to identify structures of interest correctly. Moreover, simultaneous data analysis and visualization in VR is computationally demanding. Here, we introduce a new procedure to visualize, interact with, annotate, and analyze data by combining VR with cloud computing. VR is leveraged to provide natural interactions with volumetric representations of experimental imaging data. In parallel, cloud computing performs costly computations to accelerate data annotation with minimal input required from the user. We demonstrate multiple proof-of-concept applications of our approach on volumetric fluorescence microscopy images of mouse neurons and on tumor and organ annotations in medical images.
Affiliation(s)
- Corentin Guérinot
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3571, Université de Paris, Institut Pasteur, Paris, France
- Perception and Memory Unit, CNRS UMR 3571, Institut Pasteur, Paris, France
- Sorbonne Université, Collège Doctoral, Paris, France
- Valentin Marcon
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3571, Université de Paris, Institut Pasteur, Paris, France
- Charlotte Godard
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3571, Université de Paris, Institut Pasteur, Paris, France
- École Doctorale Physique en Île-de-France, PSL University, Paris, France
- Thomas Blanc
- Sorbonne Université, Collège Doctoral, Paris, France
- Laboratoire Physico-Chimie, Institut Curie, PSL Research University, CNRS UMR168, Paris, France
- Hippolyte Verdier
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3571, Université de Paris, Institut Pasteur, Paris, France
- Histopathology and Bio-Imaging Group, Sanofi R&D, Vitry-sur-Seine, France
- Université de Paris, UFR de Physique, Paris, France
- Guillaume Planchon
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3571, Université de Paris, Institut Pasteur, Paris, France
- Francesca Raimondi
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3571, Université de Paris, Institut Pasteur, Paris, France
- Unité Médicochirurgicale de Cardiologie Congénitale et Pédiatrique, Centre de Référence des Malformations Cardiaques Congénitales Complexes M3C, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France
- Pediatric Radiology Unit, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France
- UMR-1163 Institut Imagine, Hôpital Universitaire Necker-Enfants Malades, AP-HP, Paris, France
- Nathalie Boddaert
- Pediatric Radiology Unit, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France
- UMR-1163 Institut Imagine, Hôpital Universitaire Necker-Enfants Malades, AP-HP, Paris, France
- Mariana Alonso
- Perception and Memory Unit, CNRS UMR 3571, Institut Pasteur, Paris, France
- Kurt Sailor
- Perception and Memory Unit, CNRS UMR 3571, Institut Pasteur, Paris, France
- Pierre-Marie Lledo
- Perception and Memory Unit, CNRS UMR 3571, Institut Pasteur, Paris, France
- Bassam Hajj
- Sorbonne Université, Collège Doctoral, Paris, France
- École Doctorale Physique en Île-de-France, PSL University, Paris, France
- Mohamed El Beheiry
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3571, Université de Paris, Institut Pasteur, Paris, France
- Jean-Baptiste Masson
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3571, Université de Paris, Institut Pasteur, Paris, France
10
Blanc T, Verdier H, Regnier L, Planchon G, Guérinot C, El Beheiry M, Masson JB, Hajj B. Towards Human in the Loop Analysis of Complex Point Clouds: Advanced Visualizations, Quantifications, and Communication Features in Virtual Reality. Frontiers in Bioinformatics 2022; 1:775379. [PMID: 36303735 PMCID: PMC9580855 DOI: 10.3389/fbinf.2021.775379] [Received: 09/13/2021] [Accepted: 12/24/2021] [Indexed: 11/13/2022]
Abstract
Multiple fields in biological and medical research produce large amounts of point cloud data with high dimensionality and complexity. A wide range of experiments generate point clouds, including segmented medical data and single-molecule localization microscopy, in which individual molecules are observed within their natural cellular environment. Analyzing this type of experimental data is a complex task that presents unique challenges, where providing extra physical dimensions for visualization and analysis could be beneficial. Furthermore, whether highly noisy data come from single-molecule recordings or segmented medical data, the need to guide analysis with user intervention creates both an ergonomic challenge, to facilitate this interaction, and a computational challenge, to provide fluid interactions while information is being processed. Several applications, including our software DIVA for image stacks and our platform Genuage for point clouds, have leveraged virtual reality (VR) to visualize and interact with data in 3D. While the visualization aspects can be made compatible with different types of data, quantifications are far from being standard. In addition, complex analyses can require significant computational resources, making the real-time VR experience uncomfortable. Moreover, visualization software is mainly designed to represent a set of data points but lacks flexibility in manipulating and analyzing the data. This paper introduces new libraries that enhance interaction and human-in-the-loop analysis of point cloud data in virtual reality, integrated into the open-source platform Genuage. We first detail a new toolbox of communication tools that enhance the user experience and improve flexibility. We then introduce a mapping toolbox that allows physical properties to be represented in space, overlaid on a 3D mesh, while maintaining a point-cloud-dedicated shader. We also introduce a new programmable video capture tool for VR and desktop modes for intuitive data dissemination. Finally, we highlight the protocols that allow simultaneous analysis and fluid manipulation of data at a high refresh rate. We illustrate this principle by performing real-time inference of the random walk properties of recorded trajectories with a pre-trained graph neural network running in Python.
Affiliation(s)
- Thomas Blanc
- Laboratoire Physico-Chimie, Institut Curie, PSL Research University, CNRS UMR168, Paris, France
- Sorbonne Universités, UPMC Univ Paris 06, Paris, France
- Hippolyte Verdier
- Decision and Bayesian Computation, CNRS USR 3756, Department of Computational Biology and Neuroscience, CNRS UMR 3571, Institut Pasteur, Université de Paris, Paris, France
- Louise Regnier
- Laboratoire Physico-Chimie, Institut Curie, PSL Research University, CNRS UMR168, Paris, France
- Sorbonne Universités, UPMC Univ Paris 06, Paris, France
- Guillaume Planchon
- Decision and Bayesian Computation, CNRS USR 3756, Department of Computational Biology and Neuroscience, CNRS UMR 3571, Institut Pasteur, Université de Paris, Paris, France
- Corentin Guérinot
- Decision and Bayesian Computation, CNRS USR 3756, Department of Computational Biology and Neuroscience, CNRS UMR 3571, Institut Pasteur, Université de Paris, Paris, France
- Sorbonne Universités, Collège Doctoral, Paris, France
- Mohamed El Beheiry
- Decision and Bayesian Computation, CNRS USR 3756, Department of Computational Biology and Neuroscience, CNRS UMR 3571, Institut Pasteur, Université de Paris, Paris, France
- Jean-Baptiste Masson
- Decision and Bayesian Computation, CNRS USR 3756, Department of Computational Biology and Neuroscience, CNRS UMR 3571, Institut Pasteur, Université de Paris, Paris, France
- Bassam Hajj
- Laboratoire Physico-Chimie, Institut Curie, PSL Research University, CNRS UMR168, Paris, France
- Sorbonne Universités, UPMC Univ Paris 06, Paris, France
11
El Beheiry M, Gaillard T, Girard N, Darrigues L, Osdoit M, Feron JG, Sabaila A, Laas E, Fourchotte V, Laki F, Lecuru F, Couturaud B, Binder JP, Masson JB, Reyal F, Malhaire C. Breast Magnetic Resonance Image Analysis for Surgeons Using Virtual Reality: A Comparative Study. JCO Clin Cancer Inform 2021; 5:1127-1133. [PMID: 34767435 DOI: 10.1200/cci.21.00048] [Received: 03/14/2021] [Revised: 08/23/2021] [Accepted: 09/29/2021] [Indexed: 12/24/2022]
Abstract
PURPOSE The treatment of breast cancer, the leading cause of cancer and cancer mortality among women worldwide, is mainly based on surgery. In this study, we describe the use of a virtual reality (VR)-based medical image visualization tool, entitled DIVA, for breast cancer tumor localization by surgeons. The aim of this study was to evaluate the speed and accuracy of surgeons using DIVA for the analysis of breast magnetic resonance imaging (MRI) scans relative to standard slice-based visualization tools. MATERIALS AND METHODS In our study, residents and practicing surgeons used two breast MRI reading modalities: the common slice-based radiology interface and the DIVA system in its VR mode. Measured metrics were compared against postoperative anatomical-pathologic reports. RESULTS Eighteen breast surgeons from the Institut Curie performed all the analyses presented. MRI analysis time was significantly lower with the DIVA system than with slice-based visualization for residents, practitioners, and consequently the entire group (P < .001). The accuracy of determining which breast contained the lesion increased significantly with DIVA for residents (P = .003) and practitioners (P = .04). There was little difference between DIVA and slice-based visualization in determining the number of lesions. The accuracy of quadrant determination was significantly improved by DIVA for practicing surgeons (P = .01) but not significantly for residents (P = .49). CONCLUSION This study indicates that VR visualization of medical images systematically improves surgeons' analysis of preoperative breast MRI scans across several metrics, irrespective of surgeon seniority.
Affiliation(s)
- Mohamed El Beheiry
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) and Neuroscience Department CNRS UMR 3571, Institut Pasteur and CNRS, Paris, France
| | - Thomas Gaillard
- Surgery Department, Institut Curie, PSL Research University, Paris, France
| | - Noémie Girard
- Surgery Department, Institut Curie, PSL Research University, Paris, France
| | - Lauren Darrigues
- Surgery Department, Institut Curie, PSL Research University, Paris, France
| | - Marie Osdoit
- Surgery Department, Institut Curie, PSL Research University, Paris, France
| | | | - Anne Sabaila
- Surgery Department, Institut Curie, PSL Research University, Paris, France
| | - Enora Laas
- Surgery Department, Institut Curie, PSL Research University, Paris, France
| | | | - Fatima Laki
- Surgery Department, Institut Curie, PSL Research University, Paris, France
| | - Fabrice Lecuru
- Surgery Department, Institut Curie, PSL Research University, Paris, France
| | - Benoit Couturaud
- Surgery Department, Institut Curie, PSL Research University, Paris, France
| | | | - Jean-Baptiste Masson
- Decision and Bayesian Computation, USR 3756 (C3BI/DBC) and Neuroscience Department CNRS UMR 3571, Institut Pasteur and CNRS, Paris, France
| | - Fabien Reyal
- Surgery Department, Institut Curie, PSL Research University, Paris, France
- U932, Immunity and Cancer, INSERM, Institut Curie, Paris, France
| | - Caroline Malhaire
- Department of Medical Imaging, Institut Curie, PSL Research University, Paris, France
- Institut Curie, INSERM, LITO Laboratory, Orsay, France
| |
|
12
|
Raimondi F, Vida V, Godard C, Bertelli F, Reffo E, Boddaert N, El Beheiry M, Masson JB. Fast-track virtual reality for cardiac imaging in congenital heart disease. J Card Surg 2021; 36:2598-2602. [PMID: 33760302 DOI: 10.1111/jocs.15508] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2021] [Accepted: 02/03/2021] [Indexed: 02/06/2023]
Abstract
BACKGROUND AND AIM OF THE STUDY We sought to evaluate the appropriateness of cardiac anatomy renderings produced by a new virtual reality (VR) technology, DIVA, which is applied directly to raw magnetic resonance imaging (MRI) data without intermediate segmentation steps, in comparison with standard three-dimensional (3D) rendering techniques (3D PDF and 3D printing). Differences in post-processing times were also evaluated. METHODS We reconstructed 3D models (STL, 3D PDF and 3D printed) and VR models for three patients with different types of complex congenital heart disease (CHD). We then asked a senior pediatric heart surgeon to compare and grade the results. RESULTS All anatomical structures were well visualized in both the VR and the 3D PDF/printed models. Ventricular-arterial connections and their relationship with the great vessels were better visualized with the VR model (Case 2); aortic arch anatomy and details were also better visualized with the VR model (Case 3). The median post-processing time to generate VR models using DIVA was 5 min, compared with 8 h (range 8-12 h, including printing time) for the 3D models (PDF/printed). CONCLUSIONS VR applied directly to non-segmented 3D MRI data sets is a promising technique for advanced 3D modeling in CHD. It was systematically more consistent and faster than standard 3D modeling techniques.
Affiliation(s)
- Francesca Raimondi
- Unité médico-chirurgicale de cardiologie congénitale et pédiatrique, centre de référence des maladies cardiaques congénitales complexes-M3C, Hôpital universitaire Necker-Enfants Malades, Université de Paris, France; Decision and Bayesian Computation, Computational Biology Department, CNRS USR 3756, Neuroscience Department, CNRS UMR 3571, Institut Pasteur, Paris, France; Pediatric Radiology Unit, Hôpital universitaire Necker-Enfants Malades, Université de Paris, France
| | - Vladimiro Vida
- Pediatric and Congenital Cardiac Surgery Unit, University of Padua, Italy
| | - Charlotte Godard
- Decision and Bayesian Computation, Computational Biology Department, CNRS USR 3756, Neuroscience Department, CNRS UMR 3571, Institut Pasteur, Paris, France
| | - Francesco Bertelli
- Pediatric and Congenital Cardiac Surgery Unit, University of Padua, Italy
| | - Elena Reffo
- Pediatric Cardiology Unit, University of Padua, Italy
| | - Nathalie Boddaert
- Pediatric Radiology Unit, Hôpital universitaire Necker-Enfants Malades, Université de Paris, France
| | - Mohamed El Beheiry
- Decision and Bayesian Computation, Computational Biology Department, CNRS USR 3756, Neuroscience Department, CNRS UMR 3571, Institut Pasteur, Paris, France
| | - Jean-Baptiste Masson
- Decision and Bayesian Computation, Computational Biology Department, CNRS USR 3756, Neuroscience Department, CNRS UMR 3571, Institut Pasteur, Paris, France
| |
|
13
|
Laas E, El Beheiry M, Masson JB, Malhaire C. Partial breast resection for multifocal lower quadrant breast tumour using virtual reality. BMJ Case Rep 2021; 14:e241608. [PMID: 33727303 PMCID: PMC7970286 DOI: 10.1136/bcr-2021-241608] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022] Open
Abstract
Oncoplastic surgery has increased the number of indications for conservative breast cancer treatment. However, uncertainty as to whether it can be performed still exists in certain situations, such as with multicentric or multifocal lesions, even when the breast volume can accommodate it. With the aid of virtual reality software, DIVA, which allows precise visualisation of tumours and breast volumes based entirely on the patient's MRI, we report the ability to rapidly confirm and secure an indication for partial surgery of multiple lesions in a 31-year-old patient. With the described approach, the patient avoided significant disfigurement from breast cancer surgery without compromising safety.
Affiliation(s)
- Enora Laas
- Surgery Department, Institut Curie, Paris, France
| | - Mohamed El Beheiry
- Decision and Bayesian Computation, Neuroscience Department, CNRS UMR 3571, Institut Pasteur, Paris, France; Decision and Bayesian Computation, Computational Biology Department, CNRS USR 3756, Institut Pasteur, Paris, France
| | - Jean-Baptiste Masson
- Decision and Bayesian Computation, Neuroscience Department, CNRS UMR 3571, Institut Pasteur, Paris, France; Decision and Bayesian Computation, Computational Biology Department, CNRS USR 3756, Institut Pasteur, Paris, France
| | | |
|
14
|
Bouaoud J, El Beheiry M, Jablon E, Schouman T, Bertolus C, Picard A, Masson JB, Khonsari RH. DIVA, a 3D virtual reality platform, improves undergraduate craniofacial trauma education. J Stomatol Oral Maxillofac Surg 2020; 122:367-371. [PMID: 33007493 DOI: 10.1016/j.jormas.2020.09.009] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Accepted: 09/11/2020] [Indexed: 12/15/2022]
Abstract
Craniofacial fracture management is challenging to teach because of the complex anatomy of the head, even when using three-dimensional CT scan images. DIVA is software that allows straightforward visualization of CT scans in a user-friendly three-dimensional virtual reality environment. Here, we assess DIVA as an educational tool for craniofacial trauma for undergraduate medical students. Three craniofacial trauma cases (jaw fracture, naso-orbito-ethmoid complex fracture and Le Fort 3 fracture) were presented to 50 undergraduate medical students, who had to provide diagnoses and treatment plans. Each student then completed an 8-item questionnaire assessing satisfaction, potential benefit, ease of use and tolerance. Additionally, 4 postgraduate students were asked to explore these cases and to place 6 anatomical landmarks on both the virtual reality renderings and the usual slice-based three-dimensional CT scan visualizations. High degrees of satisfaction (98%) without specific tolerance issues (86%) were reported. A potential benefit for better understanding of craniofacial trauma using virtual reality was reported by almost all students (98%). Virtual reality allowed reliable localization of key anatomical landmarks compared with standard three-dimensional CT scan visualization. Virtual reality interfaces such as DIVA help medical students better understand craniofacial trauma and provide a reliable rendering of craniofacial anatomy.
Affiliation(s)
- Jebrane Bouaoud
- Assistance Publique - Hôpitaux de Paris, Service de Chirurgie Maxillo-Faciale et Chirurgie Plastique, Hôpital Universitaire Necker - Enfants Malades, Université Paris Descartes, Université de Paris, Paris, France; Assistance Publique - Hôpitaux de Paris, Service de Chirurgie Maxillo-Faciale et Stomatologie, Hôpital Universitaire Pitié-Salpêtrière, Université Pierre et Marie Curie, Sorbonne Université, Paris, France.
| | - Mohamed El Beheiry
- Decision and Bayesian Computation, Neuroscience Department UMR 3571 & USR 3756 (C3BI/DBC), Institut Pasteur & CNRS, Paris, France
| | - Eve Jablon
- Université Paris Descartes, Université de Paris, Paris, France
| | - Thomas Schouman
- Assistance Publique - Hôpitaux de Paris, Service de Chirurgie Maxillo-Faciale et Stomatologie, Hôpital Universitaire Pitié-Salpêtrière, Université Pierre et Marie Curie, Sorbonne Université, Paris, France
| | - Chloé Bertolus
- Assistance Publique - Hôpitaux de Paris, Service de Chirurgie Maxillo-Faciale et Stomatologie, Hôpital Universitaire Pitié-Salpêtrière, Université Pierre et Marie Curie, Sorbonne Université, Paris, France
| | - Arnaud Picard
- Assistance Publique - Hôpitaux de Paris, Service de Chirurgie Maxillo-Faciale et Chirurgie Plastique, Hôpital Universitaire Necker - Enfants Malades, Université Paris Descartes, Université de Paris, Paris, France
| | - Jean-Baptiste Masson
- Decision and Bayesian Computation, Neuroscience Department UMR 3571 & USR 3756 (C3BI/DBC), Institut Pasteur & CNRS, Paris, France
| | - Roman H Khonsari
- Assistance Publique - Hôpitaux de Paris, Service de Chirurgie Maxillo-Faciale et Chirurgie Plastique, Hôpital Universitaire Necker - Enfants Malades, Université Paris Descartes, Université de Paris, Paris, France
| |
|