1. Otesteanu CF, Caldelari R, Heussler V, Sznitman R. Machine learning for predicting Plasmodium liver stage development in vitro using microscopy imaging. Comput Struct Biotechnol J 2024;24:334-342. PMID: 38690550; PMCID: PMC11059334; DOI: 10.1016/j.csbj.2024.04.029.
Abstract
Malaria, a significant global health challenge, is caused by Plasmodium parasites. The Plasmodium liver stage plays a pivotal role in the establishment of the infection. This study focuses on the liver stage development of the model organism Plasmodium berghei, employing fluorescent microscopy imaging and convolutional neural networks (CNNs) for analysis. Convolutional neural networks have been recently proposed as a viable option for tasks such as malaria detection, prediction of host-pathogen interactions, or drug discovery. Our research aimed to predict the transition of Plasmodium-infected liver cells to the merozoite stage, a key development phase, 15 hours in advance. We collected and analyzed hourly imaging data over a span of at least 38 hours from 400 sequences, encompassing 502 parasites. Our method was compared to human annotations to validate its efficacy. Performance metrics, including the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, were evaluated on an independent test dataset. The outcomes revealed an AUC of 0.873, a sensitivity of 84.6%, and a specificity of 83.3%, underscoring the potential of our CNN-based framework to predict liver stage development of P. berghei. These findings not only demonstrate the feasibility of our methodology but also could potentially contribute to the broader understanding of parasite biology.
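The reported test-set metrics can be computed directly from raw model scores. As a rough illustration (the scores, labels, and threshold below are made up for demonstration and are not the study's data), AUC follows from the Mann-Whitney rank identity, and sensitivity/specificity from a confusion matrix at a chosen operating point:

```python
def auc(scores, labels):
    # AUC via the Mann-Whitney identity: the probability that a randomly
    # chosen positive sample is scored above a randomly chosen negative one.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, threshold):
    # Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) at a fixed threshold.
    tp = sum(y == 1 and s >= threshold for s, y in zip(scores, labels))
    fn = sum(y == 1 and s < threshold for s, y in zip(scores, labels))
    tn = sum(y == 0 and s < threshold for s, y in zip(scores, labels))
    fp = sum(y == 0 and s >= threshold for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Toy scores for 4 positive and 4 negative test samples (illustrative only).
scores = [0.9, 0.8, 0.35, 0.7, 0.2, 0.6, 0.4, 0.1]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(auc(scores, labels))                           # 0.875
print(sensitivity_specificity(scores, labels, 0.5))  # (0.75, 0.75)
```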
Affiliation(s)
- Corin F. Otesteanu
- Artificial Intelligence in Medicine group, University of Bern, Switzerland
- Reto Caldelari
- Institute of Cell Biology, University of Bern, Switzerland
- Raphael Sznitman
- Artificial Intelligence in Medicine group, University of Bern, Switzerland
2. Dotti P, Fernandez-Tenorio M, Janicek R, Márquez-Neila P, Wullschleger M, Sznitman R, Egger M. A deep learning-based approach for efficient detection and classification of local Ca²⁺ release events in full-frame confocal imaging. Cell Calcium 2024;121:102893. PMID: 38701707; DOI: 10.1016/j.ceca.2024.102893.
Abstract
The release of Ca2+ ions from intracellular stores plays a crucial role in many cellular processes, acting as a secondary messenger in various cell types, including cardiomyocytes, smooth muscle cells, hepatocytes, and many others. Detecting and classifying associated local Ca2+ release events is particularly important, as these events provide insight into the mechanisms, interplay, and interdependencies of local Ca2+ release events underlying global intracellular Ca2+ signaling. However, time-consuming and labor-intensive procedures often complicate analysis, especially with low signal-to-noise ratio imaging data. Here, we present an innovative deep learning-based approach for automatically detecting and classifying local Ca2+ release events. This approach is exemplified with rapid full-frame confocal imaging data recorded in isolated cardiomyocytes. To demonstrate the robustness and accuracy of our method, we first use conventional evaluation methods by comparing the intersection between manual annotations and the segmentation of Ca2+ release events provided by the deep learning method, as well as the annotated and recognized instances of individual events. In addition to these methods, we compare the performance of the proposed model with the annotation of six experts in the field. Our model can recognize more than 75% of the annotated Ca2+ release events and correctly classify more than 75% of them. A key result was that there were no significant differences between the annotations produced by human experts and the result of the proposed deep learning model. We conclude that the proposed approach is a robust and time-saving alternative to conventional full-frame confocal imaging analysis of local intracellular Ca2+ events.
Affiliation(s)
- Prisca Dotti
- Department of Physiology, Universität Bern, Bern, Switzerland; ARTORG Center, Universität Bern, Bern, Switzerland
- Marcel Egger
- Department of Physiology, Universität Bern, Bern, Switzerland
3. Hahne C, Chabouh G, Chavignon A, Couture O, Sznitman R. RF-ULM: ultrasound localization microscopy learned from radio-frequency wavefronts. IEEE Trans Med Imaging 2024. PMID: 38640052; DOI: 10.1109/tmi.2024.3391297.
Abstract
In Ultrasound Localization Microscopy (ULM), achieving high-resolution images relies on the precise localization of contrast agent particles across a series of beamformed frames. However, our study uncovers an enormous potential: The process of delay-and-sum beamforming leads to an irreversible reduction of Radio-Frequency (RF) channel data, while its implications for localization remain largely unexplored. The rich contextual information embedded within RF wavefronts, including their hyperbolic shape and phase, offers great promise for guiding Deep Neural Networks (DNNs) in challenging localization scenarios. To fully exploit this data, we propose to directly localize scatterers in RF channel data. Our approach involves a custom super-resolution DNN using learned feature channel shuffling, non-maximum suppression, and a semi-global convolutional block for reliable and accurate wavefront localization. Additionally, we introduce a geometric point transformation that facilitates seamless mapping to the B-mode coordinate space. To understand the impact of beamforming on ULM, we validate the effectiveness of our method by conducting an extensive comparison with State-Of-The-Art (SOTA) techniques. We present the inaugural in vivo results from a wavefront-localizing DNN, highlighting its real-world practicality. Our findings show that RF-ULM bridges the domain shift between synthetic and real datasets, offering a considerable advantage in terms of precision and complexity. To enable the broader research community to benefit from our findings, our code and the associated SOTA methods are made available at https://github.com/hahnec/rf-ulm.
4. Ghamsarian N, El-Shabrawi Y, Nasirihaghighi S, Putzgruber-Adamitsch D, Zinkernagel M, Wolf S, Schoeffmann K, Sznitman R. Cataract-1K dataset for deep-learning-assisted analysis of cataract surgery videos. Sci Data 2024;11:373. PMID: 38609405; PMCID: PMC11014927; DOI: 10.1038/s41597-024-03193-4.
Abstract
In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons' skills, operating room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available on Synapse.
Affiliation(s)
- Negin Ghamsarian
- Center for Artificial Intelligence in Medicine (CAIM), Department of Medicine, University of Bern, Bern, Switzerland
- Yosuf El-Shabrawi
- Department of Ophthalmology, Klinikum Klagenfurt, Klagenfurt, Austria
- Sahar Nasirihaghighi
- Department of Information Technology, University of Klagenfurt, Klagenfurt, Austria
- Sebastian Wolf
- Department of Ophthalmology, Inselspital, Bern, Switzerland
- Klaus Schoeffmann
- Department of Information Technology, University of Klagenfurt, Klagenfurt, Austria
- Raphael Sznitman
- Center for Artificial Intelligence in Medicine (CAIM), Department of Medicine, University of Bern, Bern, Switzerland
5. Christ M, Habra O, Monnin K, Vallotton K, Sznitman R, Wolf S, Zinkernagel M, Márquez Neila P. Deep learning-based automated detection of retinal breaks and detachments on fundus photography. Transl Vis Sci Technol 2024;13:1. PMID: 38564203; PMCID: PMC10996975; DOI: 10.1167/tvst.13.4.1.
Abstract
Purpose The purpose of this study was to develop a deep learning algorithm to detect retinal breaks and retinal detachments on ultra-widefield fundus (UWF) Optos images using artificial intelligence (AI). Methods Optomap UWF images of the database were annotated into four groups by two retina specialists: (1) retinal breaks without detachment, (2) retinal breaks with retinal detachment, (3) retinal detachment without visible retinal breaks, and (4) a combination of groups 1 to 3. The fundus image data set was split into a training set and an independent test set following an 80% to 20% ratio. Image preprocessing methods were applied. An EfficientNet classification model was trained with the training set and evaluated with the test set. Results A total of 2489 UWF images were included in the dataset, resulting in a training set of 2008 UWF images and a test set of 481 images. The classification models achieved an area under the receiver operating characteristic curve (AUC) on the test set of 0.975 for lesion detection, an AUC of 0.972 for retinal detachment, and an AUC of 0.913 for retinal breaks. Conclusions A deep learning system to detect retinal breaks and retinal detachment using UWF images is feasible and has good specificity. This is relevant for clinical routine, as there can be a high rate of missed breaks in clinics. Future clinical studies will be necessary to evaluate the cost-effectiveness of applying such an algorithm as an automated auxiliary tool in large practices or tertiary referral centers. Translational Relevance This study demonstrates the relevance of applying AI to diagnosing peripheral retinal breaks in clinical routine on UWF fundus images.
Affiliation(s)
- Merlin Christ
- Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Oussama Habra
- Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Killian Monnin
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Kevin Vallotton
- Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Sebastian Wolf
- Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Bern Photographic Reading Center, Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Martin Zinkernagel
- Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Bern Photographic Reading Center, Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Pablo Márquez Neila
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
6. Desideri LF, Scandella D, Berger L, Sznitman R, Zinkernagel M, Anguita R. Prediction of chronic central serous chorioretinopathy through combined manual annotation and AI-assisted volume measurement of flat irregular pigment epithelium. Ophthalmologica 2024. PMID: 38555632; DOI: 10.1159/000538543.
Abstract
INTRODUCTION The aim of this study is to investigate the role of an artificial intelligence (AI)-developed OCT program to predict the clinical course of central serous chorioretinopathy (CSC) based on baseline pigment epithelium detachment (PED) features. METHODS Single-center, observational study with a retrospective design. Treatment-naïve patients with acute CSC and chronic CSC were recruited, and OCTs were analyzed by an AI-developed platform (Discovery OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland), providing automatic detection and volumetric quantification of PEDs. Flat irregular PED presence was annotated manually and afterwards measured automatically by the AI program. RESULTS 115 eyes of 101 patients with CSC were included, of which 70 were diagnosed with chronic CSC and 45 with acute CSC. Patients with baseline presence of foveal flat PEDs and multiple flat foveal and extrafoveal PEDs had a higher chance of developing the chronic form. AI-based volumetric analysis revealed no significant differences between the groups. CONCLUSIONS While more evidence is needed to confirm the effectiveness of AI-based PED quantitative analysis, this study highlights the significance of identifying flat irregular PEDs at the earliest stage possible in patients with CSC, to optimize patient management and long-term visual outcomes.
7. Ferro Desideri L, Anguita R, Berger LE, Feenstra HMA, Scandella D, Sznitman R, Boon CJF, van Dijk EHC, Zinkernagel MS. Baseline spectral domain optical coherence tomographic retinal layer features identified by artificial intelligence predict the course of central serous chorioretinopathy. Retina 2024;44:316-323. PMID: 37883530; DOI: 10.1097/iae.0000000000003965.
Abstract
PURPOSE To identify optical coherence tomography (OCT) features that predict the course of central serous chorioretinopathy (CSC) with an artificial intelligence-based program. METHODS Multicenter, observational study with a retrospective design. Treatment-naïve patients with acute CSC and chronic CSC were enrolled. Baseline OCTs were examined by an artificial intelligence-developed platform (Discovery OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland). Through this platform, automated retinal layer thicknesses and volumes, including intraretinal and subretinal fluid, and pigment epithelium detachment were measured. Baseline OCT features were compared between acute CSC and chronic CSC patients. RESULTS One hundred and sixty eyes of 144 patients with CSC were enrolled, of which 100 had chronic CSC and 60 acute CSC. Retinal layer analysis of baseline OCT scans showed that the inner nuclear layer, the outer nuclear layer, and the photoreceptor-retinal pigmented epithelium complex were significantly thicker at baseline in eyes with acute CSC in comparison with those with chronic CSC (P < 0.001). Similarly, the choriocapillaris and choroidal stroma and retinal thickness (RT) were thicker in acute CSC than chronic CSC eyes (P = 0.001). Volume analysis revealed greater average subretinal fluid volumes in the acute CSC group in comparison with chronic CSC (P = 0.041). CONCLUSION Optical coherence tomography features may be helpful to predict the clinical course of CSC. The baseline presence of increased thickness in the outer retinal layers, choriocapillaris and choroidal stroma, and subretinal fluid volume seems to be associated with an acute course of the disease.
Affiliation(s)
- Lorenzo Ferro Desideri
- Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Bern Photographic Reading Center, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Rodrigo Anguita
- Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Moorfields Eye Hospital NHS Foundation Trust, London
- Lieselotte E Berger
- Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Department for Bio-Medical Research, University of Bern, Bern, Switzerland
- Helena M A Feenstra
- ARTORG Research Center Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Davide Scandella
- ARTORG Research Center Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Raphael Sznitman
- ARTORG Research Center Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Camiel J F Boon
- Department of Ophthalmology, Leiden University Medical Center, Leiden, The Netherlands
- Department of Ophthalmology, Amsterdam University Medical Centers, Amsterdam, The Netherlands
- Elon H C van Dijk
- Department of Ophthalmology, Leiden University Medical Center, Leiden, The Netherlands
- Martin S Zinkernagel
- Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Bern Photographic Reading Center, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Department for Bio-Medical Research, University of Bern, Bern, Switzerland
8. Ghamsarian N, Wolf S, Zinkernagel M, Schoeffmann K, Sznitman R. DeepPyramid+: medical image segmentation using Pyramid View Fusion and Deformable Pyramid Reception. Int J Comput Assist Radiol Surg 2024. PMID: 38189905; DOI: 10.1007/s11548-023-03046-2.
Abstract
PURPOSE Semantic segmentation plays a pivotal role in many applications related to medical image and video analysis. However, designing a neural network architecture for medical image and surgical video segmentation is challenging due to the diverse features of relevant classes, including heterogeneity, deformability, transparency, blunt boundaries, and various distortions. We propose a network architecture, DeepPyramid+, which addresses diverse challenges encountered in medical image and surgical video segmentation. METHODS The proposed DeepPyramid+ incorporates two major modules, namely "Pyramid View Fusion" (PVF) and "Deformable Pyramid Reception" (DPR), to address the outlined challenges. PVF replicates a deduction process within the neural network, aligning with the human visual system, thereby enhancing the representation of relative information at each pixel position. Complementarily, DPR introduces shape- and scale-adaptive feature extraction techniques using dilated deformable convolutions, enhancing accuracy and robustness in handling heterogeneous classes and deformable shapes. RESULTS Extensive experiments conducted on diverse datasets, including endometriosis videos, MRI images, OCT scans, and cataract and laparoscopy videos, demonstrate the effectiveness of DeepPyramid+ in handling various challenges such as shape and scale variation, reflection, and blur degradation. DeepPyramid+ demonstrates significant improvements in segmentation performance, achieving up to a 3.65% increase in Dice coefficient for intra-domain segmentation and up to a 17% increase in Dice coefficient for cross-domain segmentation. CONCLUSIONS DeepPyramid+ consistently outperforms state-of-the-art networks across diverse modalities considering different backbone networks, showcasing its versatility. Accordingly, DeepPyramid+ emerges as a robust and effective solution, successfully overcoming the intricate challenges associated with relevant content segmentation in medical images and surgical videos. Its consistent performance and adaptability indicate its potential to enhance precision in computerized medical image and surgical video analysis applications.
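The Dice coefficient used to report these gains is the standard overlap measure for segmentation. A minimal sketch on toy binary masks (the pixel index sets below are illustrative, not the paper's evaluation code):

```python
def dice(pred, target):
    # Dice coefficient between two binary masks, here represented as
    # sets of foreground pixel indices: 2*|A ∩ B| / (|A| + |B|).
    inter = len(pred & target)
    return 2 * inter / (len(pred) + len(target))

# Toy "masks" as sets of foreground pixel indices (illustrative only).
pred = {1, 2, 3, 4}
target = {3, 4, 5, 6}
print(dice(pred, target))  # 0.5
```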
Affiliation(s)
- Negin Ghamsarian
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Sebastian Wolf
- Department of Ophthalmology, Inselspital, Bern, Switzerland
- Klaus Schoeffmann
- Department of Information Technology, University of Klagenfurt, Klagenfurt, Austria
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
9. Tejero JG, Neila PM, Kurmann T, Gallardo M, Zinkernagel M, Wolf S, Sznitman R. Predicting OCT biological marker localization from weak annotations. Sci Rep 2023;13:19667. PMID: 37952011; PMCID: PMC10640596; DOI: 10.1038/s41598-023-47019-6.
Abstract
Recent developments in deep learning have shown success in accurately predicting the location of biological markers in Optical Coherence Tomography (OCT) volumes of patients with Age-Related Macular Degeneration (AMD) and Diabetic Retinopathy (DR). We propose a method that automatically locates biological markers to the Early Treatment Diabetic Retinopathy Study (ETDRS) rings, only requiring B-scan-level presence annotations. We trained a neural network using 22,723 OCT B-Scans of 460 eyes (433 patients) with AMD and DR, annotated with slice-level labels for Intraretinal Fluid (IRF) and Subretinal Fluid (SRF). The neural network outputs were mapped into the corresponding ETDRS rings. We incorporated the class annotations and domain knowledge into a loss function to constrain the output with biologically plausible solutions. The method was tested on a set of OCT volumes with 322 eyes (189 patients) with Diabetic Macular Edema, with slice-level SRF and IRF presence annotations for the ETDRS rings. Our method accurately predicted the presence of IRF and SRF in each ETDRS ring, outperforming previous baselines even in the most challenging scenarios. Our model was also successfully applied to en-face marker segmentation and showed consistency within C-scans, despite not incorporating volume information in the training process. We achieved a correlation coefficient of 0.946 for the prediction of the IRF area.
Affiliation(s)
- Javier Gamazo Tejero
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Pablo Márquez Neila
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Thomas Kurmann
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Mathias Gallardo
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Martin Zinkernagel
- Department of Ophthalmology, Bern University Hospital, 3010, Bern, Switzerland
- Sebastian Wolf
- Department of Ophthalmology, Bern University Hospital, 3010, Bern, Switzerland
- Raphael Sznitman
- Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
10. Zbinden L, Catucci D, Suter Y, Hulbert L, Berzigotti A, Brönnimann M, Ebner L, Christe A, Obmann VC, Sznitman R, Huber AT. Automated liver segmental volume ratio quantification on non-contrast T1-Vibe Dixon liver MRI using deep learning. Eur J Radiol 2023;167:111047. PMID: 37690351; DOI: 10.1016/j.ejrad.2023.111047.
Abstract
PURPOSE To evaluate the effectiveness of automated liver segmental volume quantification and calculation of the liver segmental volume ratio (LSVR) on a non-contrast T1-vibe Dixon liver MRI sequence using a deep learning segmentation pipeline. METHOD A dataset of 200 liver MRI with a non-contrast 3 mm T1-vibe Dixon sequence was manually labeled slice-by-slice by an expert for Couinaud liver segments, while portal and hepatic veins were labeled separately. A convolutional neural network was trained using 170 liver MRI for training and 30 for evaluation. Liver segmental volumes without liver vessels were retrieved and LSVR was calculated as the liver segmental volumes I-III divided by the liver segmental volumes IV-VIII. LSVR was compared with the expert manual LSVR calculation and the LSVR calculated on CT scans in 30 patients with CT and MRI within 6 months. RESULTS The convolutional neural network classified the Couinaud segments I-VIII with an average Dice score of 0.770 ± 0.03, ranging between 0.726 ± 0.13 (segment IVb) and 0.810 ± 0.09 (segment V). The calculated mean LSVR with liver MRI unseen by the model was 0.32 ± 0.14, as compared with the manually quantified LSVR of 0.33 ± 0.15, resulting in a mean absolute error (MAE) of 0.02. A comparable LSVR of 0.35 ± 0.14, with a MAE of 0.04, resulted from the LSVR retrieved from the CT scans. The automated LSVR showed significant correlation with the manual MRI LSVR (Spearman r = 0.97, p < 0.001) and the CT LSVR (Spearman r = 0.95, p < 0.001). CONCLUSIONS A convolutional neural network allowed for accurate automated liver segmental volume quantification and calculation of LSVR based on a non-contrast T1-vibe Dixon sequence.
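Given the abstract's definition, LSVR follows directly from the per-segment volumes. A minimal sketch with made-up volumes (the segment labels follow the Couinaud scheme with IV split into IVa/IVb as in the abstract; the numbers are illustrative, not values from the study):

```python
# Hypothetical per-segment liver volumes in mL for Couinaud segments I-VIII
# (illustrative numbers only, not data from the paper).
segment_volumes = {
    "I": 30.0, "II": 60.0, "III": 80.0,
    "IVa": 50.0, "IVb": 45.0, "V": 110.0,
    "VI": 100.0, "VII": 120.0, "VIII": 140.0,
}

def lsvr(volumes):
    # LSVR = total volume of segments I-III divided by total volume of
    # segments IV-VIII (vessels already excluded from the segment volumes).
    left = sum(volumes[s] for s in ("I", "II", "III"))
    right = sum(volumes[s] for s in ("IVa", "IVb", "V", "VI", "VII", "VIII"))
    return left / right

print(round(lsvr(segment_volumes), 3))  # 0.301
```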
Affiliation(s)
- Lukas Zbinden
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland
- Damiano Catucci
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland; Graduate School for Health Sciences, University of Bern, Switzerland
- Yannick Suter
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Leona Hulbert
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland
- Annalisa Berzigotti
- Hepatology, Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Michael Brönnimann
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland
- Lukas Ebner
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland
- Andreas Christe
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland
- Verena Carola Obmann
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Adrian Thomas Huber
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland
11. Sampaio P, Lopez-Antuña M, Storni F, Wicht J, Sökeland G, Wartenberg M, Márquez-Neila P, Candinas D, Demory BO, Perren A, Sznitman R. Müller matrix polarimetry for pancreatic tissue characterization. Sci Rep 2023;13:16417. PMID: 37775538; PMCID: PMC10541901; DOI: 10.1038/s41598-023-43195-7.
Abstract
Polarimetry is an optical characterization technique capable of analyzing the polarization state of light reflected by materials and biological samples. In this study, we investigate the potential of Müller matrix polarimetry (MMP) to analyze fresh pancreatic tissue samples. Because pancreatic tissue is highly heterogeneous in appearance, differentiating tissue types is a complex task. Furthermore, its challenging location in the body makes direct imaging difficult. However, accurate and reliable methods for diagnosing pancreatic diseases are critical for improving patient outcomes. To this end, we measured the Müller matrices of ex-vivo unfixed human pancreatic tissue and leveraged the feature-learning capabilities of a machine-learning model to derive an optimized data representation that minimizes normal-abnormal classification error. We show experimentally that our approach accurately differentiates between normal and abnormal pancreatic tissue. This is, to our knowledge, the first study to use ex-vivo unfixed human pancreatic tissue combined with feature-learning from raw Müller matrix readings for this purpose.
Affiliation(s)
- Paulo Sampaio
- ARTORG Center, University of Bern, Bern, Switzerland
- Federico Storni
- Department of Visceral Surgery and Medicine, Bern University Hospital, Bern, Switzerland
- Jonatan Wicht
- ARTORG Center, University of Bern, Bern, Switzerland
- Center for Space and Habitability, University of Bern, Bern, Switzerland
- Greta Sökeland
- Institute of Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
- Martin Wartenberg
- Institute of Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
- Daniel Candinas
- Department of Visceral Surgery and Medicine, Bern University Hospital, Bern, Switzerland
- Aurel Perren
- Institute of Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
Collapse
|
12
|
Hayoz M, Hahne C, Gallardo M, Candinas D, Kurmann T, Allan M, Sznitman R. Learning how to robustly estimate camera pose in endoscopic videos. Int J Comput Assist Radiol Surg 2023:10.1007/s11548-023-02919-w. [PMID: 37184768 PMCID: PMC10329609 DOI: 10.1007/s11548-023-02919-w]
Abstract
PURPOSE Surgical scene understanding plays a critical role in the technology stack of tomorrow's intervention-assisting systems for endoscopic surgery. Tracking the endoscope pose is a key component, but it remains challenging due to illumination conditions, deforming tissues and the breathing motion of organs. METHODS We propose a solution for stereo endoscopes that estimates depth and optical flow to minimize two geometric losses for camera pose estimation. Most importantly, we introduce two learned adaptive per-pixel weight mappings that balance contributions according to the input image content. To do so, we train a Deep Declarative Network to combine the expressiveness of deep learning with the robustness of a novel geometric optimization approach. We validate our approach on the publicly available SCARED dataset and introduce a new in vivo dataset, StereoMIS, which covers a wider spectrum of typically observed surgical settings. RESULTS Our method outperforms state-of-the-art methods on average and, more importantly, in difficult scenarios where tissue deformations and breathing motion are visible. We observed that the proposed weight mappings attenuate the contribution of pixels in ambiguous regions of the images, such as deforming tissue. CONCLUSION We demonstrate the effectiveness of our solution in robustly estimating camera pose in challenging endoscopic surgical scenes. Our contributions can be used to improve related tasks such as simultaneous localization and mapping (SLAM) or 3D reconstruction, thereby advancing surgical scene understanding in minimally invasive surgery.
Affiliation(s)
- Michel Hayoz
- ARTORG Center, University of Bern, Bern, Switzerland
- Daniel Candinas
- Department of Visceral Surgery and Medicine, Inselspital, Bern, Switzerland

13
Hu J, Mougiakakou S, Xue S, Afshar-Oromieh A, Hautz W, Christe A, Sznitman R, Rominger A, Ebner L, Shi K. Artificial intelligence for reducing the radiation burden of medical imaging for the diagnosis of coronavirus disease. Eur Phys J Plus 2023; 138:391. [PMID: 37192839 PMCID: PMC10165296 DOI: 10.1140/epjp/s13360-023-03745-4]
Abstract
Medical imaging was intensively employed in screening, diagnosis and monitoring during the COVID-19 pandemic. With the improvement of RT-PCR and rapid inspection technologies, the diagnostic references have shifted, and current recommendations tend to limit the application of medical imaging in the acute setting. Nevertheless, the efficiency and complementary value of medical imaging were recognized at the beginning of the pandemic, when an unknown infectious disease had to be faced with insufficient diagnostic tools. Optimizing medical imaging for pandemics may therefore still have encouraging implications for future public health, especially for theranostics of long-lasting post-COVID-19 syndrome. A critical concern in applying medical imaging is the increased radiation burden, particularly when imaging is used for screening and rapid containment. Emerging artificial intelligence (AI) technology provides an opportunity to reduce this radiation burden while maintaining diagnostic quality. This review summarizes current AI research on dose reduction for medical imaging; retrospectively identifying its potential for COVID-19 may still carry positive implications for future public health.
Affiliation(s)
- Jiaxi Hu
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Stavroula Mougiakakou
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Song Xue
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Ali Afshar-Oromieh
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Wolf Hautz
- University Emergency Center, Inselspital, University of Bern, Freiburgstrasse 15, 3010 Bern, Switzerland
- Andreas Christe
- Department of Radiology, Inselspital, Bern University Hospital, University of Bern, 3012 Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Lukas Ebner
- Department of Radiology, Inselspital, Bern University Hospital, University of Bern, 3012 Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland

14
Jungo A, Doorenbos L, Da Col T, Beelen M, Zinkernagel M, Márquez-Neila P, Sznitman R. Unsupervised out-of-distribution detection for safer robotically guided retinal microsurgery. Int J Comput Assist Radiol Surg 2023:10.1007/s11548-023-02909-y. [PMID: 37133678 DOI: 10.1007/s11548-023-02909-y]
Abstract
PURPOSE A fundamental problem in designing safe machine learning systems is identifying when samples presented to a deployed model differ from those observed at training time. Detecting so-called out-of-distribution (OoD) samples is crucial in safety-critical applications such as robotically guided retinal microsurgery, where distances between the instrument and the retina are derived from sequences of 1D images that are acquired by an instrument-integrated optical coherence tomography (iiOCT) probe. METHODS This work investigates the feasibility of using an OoD detector to identify when images from the iiOCT probe are inappropriate for subsequent machine learning-based distance estimation. We show how a simple OoD detector based on the Mahalanobis distance can successfully reject corrupted samples coming from real-world ex vivo porcine eyes. RESULTS Our results demonstrate that the proposed approach can successfully detect OoD samples and help maintain the performance of the downstream task within reasonable levels. MahaAD outperformed a supervised approach trained on the same kind of corruptions and achieved the best performance in detecting OoD cases from a collection of iiOCT samples with real-world corruptions. CONCLUSION The results indicate that detecting corrupted iiOCT data through OoD detection is feasible and does not need prior knowledge of possible corruptions. Consequently, MahaAD could aid in ensuring patient safety during robotically guided microsurgery by preventing deployed prediction models from estimating distances that put the patient at risk.
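The abstract names the Mahalanobis distance as the core of the detector; a minimal sketch of that idea — fit a Gaussian to in-distribution feature vectors, then flag samples whose distance exceeds a threshold — might look as follows. The feature extraction and the threshold choice are placeholders, not the paper's actual MahaAD pipeline.

```python
import numpy as np

def fit_gaussian(train_features):
    """Estimate mean and (regularized) precision matrix of
    in-distribution feature vectors, one row per training sample."""
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize for invertibility
    return mu, np.linalg.inv(cov)

def mahalanobis_score(x, mu, precision):
    """Mahalanobis distance of a sample to the training distribution;
    large values suggest an out-of-distribution (e.g. corrupted) sample."""
    d = x - mu
    return float(np.sqrt(d @ precision @ d))

def is_ood(x, mu, precision, threshold):
    return mahalanobis_score(x, mu, precision) > threshold
```

Note that this scheme needs no examples of corruptions at training time, which is the property the abstract highlights over the supervised baseline.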
Affiliation(s)
- Alain Jungo
- ARTORG Center, University of Bern, Bern, Switzerland
- Martin Zinkernagel
- Department of Ophthalmology and Department of Clinical Research, Bern University Hospital, Bern, Switzerland

15
Fountoukidou T, Sznitman R. A reinforcement learning approach for VQA validation: An application to diabetic macular edema grading. Med Image Anal 2023; 87:102822. [PMID: 37182321 DOI: 10.1016/j.media.2023.102822]
Abstract
Recent advances in machine learning models have greatly increased the performance of automated methods in medical image analysis. However, the internal functioning of such models is largely hidden, which hinders their integration into clinical practice. Explainability and trust are therefore viewed as essential for the widespread adoption of these methods in clinical communities. As such, validating machine learning models is important, and yet most methods are only validated in a limited way. In this work, we provide a richer and more appropriate validation approach for highly capable Visual Question Answering (VQA) algorithms. To better understand the performance of these methods, which answer arbitrary questions related to images, this work focuses on an automatic visual Turing test (VTT). That is, we propose an automatic adaptive questioning method that aims to expose the reasoning behavior of a VQA algorithm. Specifically, we introduce a reinforcement learning (RL) agent that observes the history of previously asked questions and uses it to select the next question to pose. We demonstrate our approach in the context of evaluating algorithms that automatically answer questions related to diabetic macular edema (DME) grading. The experiments show that such an agent behaves similarly to a clinician, asking questions that are relevant to key clinical concepts.
Affiliation(s)
- Tatiana Fountoukidou
- Artificial Intelligence in Medical Imaging, ARTORG Center, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Raphael Sznitman
- Artificial Intelligence in Medical Imaging, ARTORG Center, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland

16
Dotti PR, Fernandez-Tenorio M, Janicek R, Márquez-Neila P, Wullschleger M, Sznitman R, Egger M. Detection and classification of Ca²⁺ release events in cardiomyocytes using 3D-UNet neural network. Biophys J 2023; 122:238a. [PMID: 36783167 DOI: 10.1016/j.bpj.2022.11.1394]
Affiliation(s)
- Pablo Márquez-Neila
- Artificial Intelligence in Medical Imaging, ARTORG Center, Universität Bern, Bern, Switzerland
- Raphael Sznitman
- Artificial Intelligence in Medical Imaging, ARTORG Center, Universität Bern, Bern, Switzerland

17
Nilius H, Cuker A, Haug S, Nakas C, Studt JD, Tsakiris DA, Greinacher A, Mendez A, Schmidt A, Wuillemin WA, Gerber B, Kremer Hovinga JA, Vishnu P, Graf L, Kashev A, Sznitman R, Bakchoul T, Nagler M. A machine-learning model for reducing misdiagnosis in heparin-induced thrombocytopenia: A prospective, multicenter, observational study. EClinicalMedicine 2023; 55:101745. [PMID: 36457646 PMCID: PMC9706528 DOI: 10.1016/j.eclinm.2022.101745]
Abstract
BACKGROUND Diagnosing heparin-induced thrombocytopenia (HIT) at the bedside remains challenging, leaving a significant number of patients at risk of delayed diagnosis or overtreatment. We hypothesized that machine-learning algorithms could be utilized to develop a more accurate and user-friendly diagnostic tool that integrates diverse clinical and laboratory information and accounts for complex interactions. METHODS We conducted a prospective cohort study including 1393 patients with suspected HIT between 2018 and 2021 from 10 study centers. Detailed clinical information and laboratory data were collected, and various immunoassays were conducted. The washed platelet heparin-induced platelet activation assay (HIPA) served as the reference standard. FINDINGS HIPA diagnosed HIT in 119 patients (prevalence 8.5%). The feature selection process in the training dataset (75% of patients) yielded the following predictor variables: (1) immunoassay test result, (2) platelet nadir, (3) unfractionated heparin use, (4) CRP, (5) timing of thrombocytopenia, and (6) other causes of thrombocytopenia. The best-performing models were a support vector machine for the chemiluminescent immunoassay (CLIA) and the ELISA, and a gradient boosting machine for the particle-gel immunoassay (PaGIA). In the validation dataset (25% of patients), the AUROC of all models was 0.99 (95% CI: 0.97, 1.00). Compared to the currently recommended diagnostic algorithm (4Ts score, immunoassay), the number of false-negative patients was reduced from 12 to 6 (-50.0%; ELISA), from 9 to 3 (-66.7%; PaGIA) and from 14 to 5 (-64.3%; CLIA). The number of false-positive individuals was reduced from 87 to 61 (-29.8%; ELISA) and from 200 to 63 (-68.5%; PaGIA), and increased from 50 to 63 (+26.0%) for the CLIA. INTERPRETATION Our user-friendly machine-learning algorithm for the diagnosis of HIT (https://toradi-hit.org) was substantially more accurate than the currently recommended diagnostic algorithm. It has the potential to reduce delayed diagnosis and overtreatment in clinical practice. Future studies should validate this model in wider settings. FUNDING Swiss National Science Foundation (SNSF) and International Society on Thrombosis and Haemostasis (ISTH).
Affiliation(s)
- Henning Nilius
- Department of Clinical Chemistry, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Adam Cuker
- Department of Medicine and Department of Pathology and Laboratory Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Sigve Haug
- Mathematical Institute, University of Bern, Bern, Switzerland
- Albert Einstein Center for Fundamental Physics and Laboratory for High Energy Physics, University of Bern, Bern, Switzerland
- Christos Nakas
- Department of Clinical Chemistry, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Laboratory of Biometry, School of Agriculture, University of Thessaly, Volos, Greece
- Jan-Dirk Studt
- Division of Medical Oncology and Hematology, University and University Hospital Zurich, Zurich, Switzerland
- Andreas Greinacher
- Institut für Immunologie und Transfusionsmedizin, Universitätsmedizin Greifswald, Greifswald, Germany
- Adriana Mendez
- Department of Laboratory Medicine, Kantonsspital Aarau, Aarau, Switzerland
- Adrian Schmidt
- Clinic of Medical Oncology and Hematology, Municipal Hospital Zurich Triemli, Zurich, Switzerland
- Walter A. Wuillemin
- Division of Hematology and Central Hematology Laboratory, Cantonal Hospital of Lucerne and University of Bern, Switzerland
- Bernhard Gerber
- Clinic of Hematology, Oncology Institute of Southern Switzerland, Bellinzona, Switzerland
- Johanna A. Kremer Hovinga
- Department of Hematology and Central Hematology Laboratory, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Prakash Vishnu
- Division of Hematology, CHI Franciscan Medical Group, Seattle, United States
- Lukas Graf
- Cantonal Hospital of St Gallen, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Tamam Bakchoul
- Centre for Clinical Transfusion Medicine, University Hospital of Tübingen, Tübingen, Germany
- Michael Nagler
- Department of Clinical Chemistry, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Corresponding author. Inselspital, Bern University Hospital, University of Bern, 3010 Bern, Switzerland

18
Hong J, Brendel M, Erlandsson K, Sari H, Lu J, Clement C, Bui NV, Meindl M, Ziegler S, Barthel H, Sabri O, Choi H, Sznitman R, Rominger A, Shi K. Forecasting the Pharmacokinetics With Limited Early Frames in Dynamic Brain PET Imaging Using Neural Ordinary Differential Equation. IEEE Trans Radiat Plasma Med Sci 2023. [DOI: 10.1109/trpms.2023.3253261]
Affiliation(s)
- Jimin Hong
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Matthias Brendel
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
- Kjell Erlandsson
- Institute of Nuclear Medicine, University College London, London, United Kingdom
- Hasan Sari
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- Jiaying Lu
- Huashan Hospital, Fudan University, Shanghai, China
- Christoph Clement
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Ngoc Vinh Bui
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
- Maria Meindl
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
- Sibylle Ziegler
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
- Henryk Barthel
- Department of Nuclear Medicine, University Hospital Leipzig, Leipzig, Germany
- Osama Sabri
- Department of Nuclear Medicine, University Hospital Leipzig, Leipzig, Germany
- Hongyoon Choi
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland

19
Habra O, Gallardo M, Meyer Zu Westram T, De Zanet S, Jaggi D, Zinkernagel M, Wolf S, Sznitman R. Evaluation of an Artificial Intelligence-Based Detector of Sub- and Intraretinal Fluid on a Large Set of Optical Coherence Tomography Volumes in Age-Related Macular Degeneration and Diabetic Macular Edema. Ophthalmologica 2022; 245:516-527. [PMID: 36215958 DOI: 10.1159/000527345]
Abstract
INTRODUCTION In this retrospective cohort study, we evaluated the performance of an artificial intelligence (AI) algorithm in detecting retinal fluid in spectral-domain OCT volume scans from a large cohort of patients with neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME), and analyzed its insights. METHODS A total of 3,981 OCT volumes from 374 patients with AMD and 11,501 OCT volumes from 811 patients with DME were acquired with a Heidelberg Spectralis OCT device (Heidelberg Engineering Inc., Heidelberg, Germany) between 2013 and 2021. Each OCT volume was annotated for the presence or absence of intraretinal fluid (IRF) and subretinal fluid (SRF) by masked reading-center graders (ground truth). The performance of a previously published AI algorithm in detecting IRF and SRF separately, as well as that of a combined fluid detector (IRF and/or SRF), was evaluated on the same OCT volumes. We analyzed the sources of disagreement between annotation and prediction and their relationship to central retinal thickness, and computed the mean areas under the receiver operating characteristic curves (AUC) and under the precision-recall curves (AP), accuracy, sensitivity, specificity, and precision. RESULTS The AUC for IRF was 0.92 and 0.98, and for SRF 0.98 and 0.99, in the AMD and DME cohorts, respectively. The AP for IRF was 0.89 and 1.00, and for SRF 0.97 and 0.93, in the AMD and DME cohorts, respectively. The accuracy, specificity, and sensitivity for IRF were 0.87, 0.88, 0.84 and 0.93, 0.95, 0.93, and for SRF 0.93, 0.93, 0.93 and 0.95, 0.95, 0.95, in the AMD and DME cohorts, respectively. For detecting any fluid, the AUC was 0.95 and 0.98, and the accuracy, specificity, and sensitivity were 0.89, 0.93, 0.90 and 0.95, 0.88, 0.93, in the AMD and DME cohorts, respectively. False positives occurred in the presence of retinal shadow artifacts and strong retinal deformation; false negatives were due to small hyporeflective areas combined with poor image quality. The combined detector correctly predicted more OCT volumes than the single detectors for IRF and SRF: 89.0% versus 81.6% in the AMD cohort and 93.1% versus 88.6% in the DME cohort. DISCUSSION/CONCLUSION The AI-based fluid detector achieves high performance for retinal fluid detection in a very large dataset dedicated to AMD and DME. Combining the single detectors provides better fluid-detection accuracy than considering them separately. The observed independence of the single detectors indicates that they learned features particular to IRF and SRF.
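The reported metrics (AUC, sensitivity, specificity) can be computed from per-volume fluid scores and ground-truth labels; a self-contained illustration, with AUC computed as the Mann-Whitney rank statistic:

```python
def auc(scores, labels):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U / (n_pos * n_neg)); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, thresh):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) at a threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= thresh)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < thresh)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < thresh)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= thresh)
    return tp / (tp + fn), tn / (tn + fp)
```

Evaluating the combined "any fluid" detector, as the study does, amounts to taking the logical OR of the IRF and SRF annotations before computing these metrics.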
Affiliation(s)
- Oussama Habra
- Department for Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Damian Jaggi
- Department for Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Martin Zinkernagel
- Department for Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Sebastian Wolf
- Department for Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland

20
Guo R, Xue S, Hu J, Sari H, Mingels C, Zeimpekis K, Prenosil G, Wang Y, Zhang Y, Viscione M, Sznitman R, Rominger A, Li B, Shi K. Using domain knowledge for robust and generalizable deep learning-based CT-free PET attenuation and scatter correction. Nat Commun 2022; 13:5882. [PMID: 36202816 PMCID: PMC9537165 DOI: 10.1038/s41467-022-33562-9]
Abstract
Despite the potential of deep learning (DL)-based methods to replace CT-based PET attenuation and scatter correction and thereby enable CT-free PET imaging, a critical bottleneck is their limited ability to handle the large heterogeneity of tracers and scanners in PET imaging. This study employs a simple way to integrate domain knowledge into DL for CT-free PET imaging. In contrast to conventional direct DL methods, we simplify the complex problem by a domain decomposition, so that the learning of anatomy-dependent attenuation correction can be achieved robustly in a low-frequency domain while the original anatomy-independent high-frequency texture is preserved during processing. Even when trained on one tracer and one scanner, the effectiveness and robustness of our proposed approach are confirmed in tests with various external imaging tracers on different scanners. This robust, generalizable, and transparent DL development may enhance the potential for clinical translation.
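The abstract describes a domain decomposition in which the network corrects only a low-frequency component while the anatomy-independent high-frequency texture bypasses it. A toy sketch of that split follows; the separable box blur stands in for the paper's actual decomposition, and `correct_low` for the trained correction network — both are our simplifications.

```python
import numpy as np

def split_frequencies(img, k=5):
    """Separate a 2-D image into a low-frequency component (separable
    box blur of width k) and the high-frequency residual texture."""
    kernel = np.ones(k) / k
    low = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    low = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, low)
    return low, img - low

def ct_free_correction(img, correct_low):
    """`correct_low` stands in for the learned attenuation/scatter
    correction: it only ever sees the low-frequency component, so the
    original high-frequency texture is preserved by construction."""
    low, high = split_frequencies(img)
    return correct_low(low) + high
```

The design point the sketch illustrates: because the residual texture is added back untouched, the learned part of the pipeline cannot corrupt it, which is one source of the robustness claimed across tracers and scanners.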
Affiliation(s)
- Rui Guo
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Song Xue
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Jiaxi Hu
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Hasan Sari
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- Clemens Mingels
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Konstantinos Zeimpekis
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- George Prenosil
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Yue Wang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Yu Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Marco Viscione
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Raphael Sznitman
- ARTORG Center, University of Bern, Bern, Switzerland
- Center of Artificial Intelligence in Medicine (CAIM), University of Bern, Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Center of Artificial Intelligence in Medicine (CAIM), University of Bern, Bern, Switzerland
- Computer Aided Medical Procedures and Augmented Reality, Institute of Informatics I16, Technical University of Munich, Munich, Germany

21
Hong J, Kang SK, Alberts I, Lu J, Sznitman R, Lee JS, Rominger A, Choi H, Shi K. Image-level trajectory inference of tau pathology using variational autoencoder for Flortaucipir PET. Eur J Nucl Med Mol Imaging 2022; 49:3061-3072. [PMID: 35226120 PMCID: PMC9250490 DOI: 10.1007/s00259-021-05662-z]
Abstract
Purpose Alzheimer's disease (AD) studies have revealed that abnormal tau deposition spreads in a specific spatial pattern, described by the Braak stages. However, Braak staging is based on post mortem brains, each of which represents only a cross-section of the tau trajectory in disease progression, and numerous cases that do not conform to the model have been reported. This study therefore aimed to identify the tau trajectory and quantify tau progression in a data-driven approach using the continuous latent space learned by a variational autoencoder (VAE). Methods A total of 1080 [18F]Flortaucipir brain positron emission tomography (PET) images were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A VAE was built to compress the hidden features of the tau images into a latent space. Hierarchical agglomerative clustering and a minimum spanning tree (MST) were applied to organize the features and calibrate them to tau progression, thus deriving a pseudo-time. The image-level tau trajectory was inferred by continuously sampling across the calibrated latent features. We assessed the pseudo-time with regard to tau standardized uptake value ratio (SUVr) in AD-vulnerable regions, amyloid deposition, glucose metabolism, cognitive scores, and clinical diagnosis. Results We identified four clusters that plausibly capture certain stages of AD and organized the clusters in the latent space. The inferred tau trajectory agreed with Braak staging. According to the derived pseudo-time, tau first deposits in the parahippocampal gyrus and amygdala, and then spreads to the fusiform gyrus, inferior temporal lobe, and posterior cingulate. Amyloid accumulates prior to the regional tau deposition. Conclusion The spatiotemporal trajectory of tau progression inferred in this study was consistent with Braak staging, and the profile of other biomarkers in disease progression agreed well with previous findings. We note that this approach additionally has the potential to quantify tau progression as a continuous variable by taking a whole-brain tau image into account.
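The pseudo-time construction — organize latent features with a minimum spanning tree and read progression off the tree — can be illustrated on plain points. In the paper the points are VAE latent features and the root is calibrated to early-stage scans; here the root choice and function names are illustrative.

```python
import heapq
import math

def mst_pseudotime(points, root=0):
    """Pseudo-time of each point = cumulative edge length from `root`
    along a Euclidean minimum spanning tree, built with Prim's algorithm."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    parent = {root: None}
    heap = [(dist(root, j), root, j) for j in range(n) if j != root]
    heapq.heapify(heap)
    while len(parent) < n:
        d, i, j = heapq.heappop(heap)
        if j in parent:        # already attached to the tree
            continue
        parent[j] = i
        for k in range(n):
            if k not in parent:
                heapq.heappush(heap, (dist(j, k), j, k))
    def time_of(j):            # walk back to the root, summing edge lengths
        t = 0.0
        while parent[j] is not None:
            t += dist(j, parent[j])
            j = parent[j]
        return t
    return [time_of(j) for j in range(n)]
```

Because the MST is a tree, every sample gets a unique path back to the root, which is what lets a cross-sectional dataset be ordered into a single continuous trajectory.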
Affiliation(s)
- Jimin Hong
- Department of Nuclear Medicine, Inselspital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- ARTORG Center, University of Bern, Bern, Switzerland
- Seung Kwan Kang
- Department of Nuclear Medicine, Seoul National University Hospital, 28 Yeon Gun, Jong Ro, Seoul, Republic of Korea
- Ian Alberts
- Department of Nuclear Medicine, Inselspital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Jiaying Lu
- Department of Nuclear Medicine, Inselspital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University Hospital, 28 Yeon Gun, Jong Ro, Seoul, Republic of Korea
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Hongyoon Choi
- Department of Nuclear Medicine, Seoul National University Hospital, 28 Yeon Gun, Jong Ro, Seoul, Republic of Korea
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Department of Informatics, Technical University of Munich, Munich, Germany

22
Maier-Hein L, Eisenmann M, Sarikaya D, März K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Nötzel D, Kenngott HG, Kikinis R, Mündermann L, Navab N, Onogur S, Roß T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ückert F, Müller-Stich BP, Jannin P, Speidel S. Surgical data science - from concepts toward clinical translation. Med Image Anal 2022; 76:102306. [PMID: 34879287 PMCID: PMC9135051 DOI: 10.1016/j.media.2021.102306]
Abstract
Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany.
- Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Duygu Sarikaya
- Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey; LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Anand Malpani
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Hubertus Feussner
- Department of Surgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Stamatia Giannarou
- The Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Adrian Park
- Department of Surgery, Anne Arundel Health System, Annapolis, Maryland, USA; Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Carla Pugh
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Swaroop S Vedula
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Kevin Cleary
- The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, D.C., USA
- Germain Forestier
- L'Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), University of Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- Bernard Gibaud
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Teodor Grantcharov
- University of Toronto, Toronto, Ontario, Canada; The Li Ka Shing Knowledge Institute of St. Michael's Hospital, Toronto, Ontario, Canada
- Makoto Hashizume
- Kyushu University, Fukuoka, Japan; Kitakyushu Koga Hospital, Fukuoka, Japan
- Doreen Heckmann-Nötzel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tobias Roß
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Russell H Taylor
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Minu D Tizabi
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Gregory D Hager
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Justin Collins
- Division of Surgery and Interventional Science, University College London, London, United Kingdom
- Ines Gockel
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, Leipzig University Hospital, Leipzig, Germany
- Jan Goedeke
- Pediatric Surgery, Dr. von Hauner Children's Hospital, Ludwig-Maximilians-University, Munich, Germany
- Daniel A Hashimoto
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA; Surgical AI and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Luc Joyeux
- My FetUZ Fetal Research Center, Department of Development and Regeneration, Biomedical Sciences, KU Leuven, Leuven, Belgium; Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium; Department of Obstetrics and Gynecology, Division Woman and Child, Fetal Medicine Unit, University Hospitals Leuven, Leuven, Belgium; Michael E. DeBakey Department of Surgery, Texas Children's Hospital and Baylor College of Medicine, Houston, Texas, USA
- Kyle Lam
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Daniel R Leff
- Department of BioSurgery and Surgical Technology, Imperial College London, London, United Kingdom; Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Breast Unit, Imperial Healthcare NHS Trust, London, United Kingdom
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Ontario, Canada
- Hani J Marcus
- National Hospital for Neurology and Neurosurgery, and UCL Queen Square Institute of Neurology, London, United Kingdom
- Ozanan Meireles
- Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Alexander Seitel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dogu Teber
- Department of Urology, City Hospital Karlsruhe, Karlsruhe, Germany
- Frank Ückert
- Institute for Applied Medical Informatics, Hamburg University Hospital, Hamburg, Germany
- Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Pierre Jannin
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
23
Eichhorn C, Greulich S, Bucciarelli-Ducci C, Sznitman R, Kwong RY, Gräni C. Multiparametric Cardiovascular Magnetic Resonance Approach in Diagnosing, Monitoring, and Prognostication of Myocarditis. JACC Cardiovasc Imaging 2021; 15:1325-1338. [PMID: 35592889] [DOI: 10.1016/j.jcmg.2021.11.017] [Citation(s) in RCA: 35]
Abstract
Myocarditis represents the entity of an inflamed myocardium and is a diagnostic challenge caused by its heterogeneous presentation. Contemporary noninvasive evaluation of patients with clinically suspected myocarditis using cardiac magnetic resonance (CMR) includes dimensions and function of the heart chambers, conventional T2-weighted imaging, late gadolinium enhancement, novel T1 and T2 mapping, and extracellular volume fraction calculation. CMR feature-tracking, texture analysis, and artificial intelligence emerge as potential modern techniques to further improve diagnosis and prognostication in this clinical setting. This review will describe the evidence surrounding different CMR methods and image postprocessing methods and highlight their values for clinical decision making, monitoring, and risk stratification across stages of this condition.
Affiliation(s)
- Christian Eichhorn
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Simon Greulich
- Department of Cardiology and Angiology, University of Tübingen, Tübingen, Germany
- Chiara Bucciarelli-Ducci
- Bristol Heart Institute, NIHR Bristol Biomedical Research Centre, University Hospitals Bristol NHS Foundation Trust and University of Bristol, Bristol, United Kingdom
- Raphael Sznitman
- Artificial Intelligence in Medical Imaging, ARTORG Center, University of Bern, Bern, Switzerland
- Raymond Y Kwong
- Noninvasive Cardiovascular Imaging Section, Cardiovascular Division, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Christoph Gräni
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
24
Xue S, Guo R, Bohn KP, Matzke J, Viscione M, Alberts I, Meng H, Sun C, Zhang M, Zhang M, Sznitman R, El Fakhri G, Rominger A, Li B, Shi K. A cross-scanner and cross-tracer deep learning method for the recovery of standard-dose imaging quality from low-dose PET. Eur J Nucl Med Mol Imaging 2021; 49:1843-1856. [PMID: 34950968] [PMCID: PMC9015984] [DOI: 10.1007/s00259-021-05644-1] [Citation(s) in RCA: 17]
Abstract
Purpose: A critical bottleneck for the credibility of artificial intelligence (AI) is replicating the results in the diversity of clinical practice. We aimed to develop an AI that can be independently applied to recover high-quality imaging from low-dose scans on different scanners and tracers.
Methods: Brain [18F]FDG PET imaging of 237 patients scanned with one scanner was used for the development of AI technology. The developed algorithm was then tested on [18F]FDG PET images of 45 patients scanned with three different scanners, [18F]FET PET images of 18 patients scanned with two different scanners, as well as [18F]Florbetapir images of 10 patients. A conditional generative adversarial network (GAN) was customized for cross-scanner and cross-tracer optimization. Three nuclear medicine physicians independently assessed the utility of the results in a clinical setting.
Results: The improvement achieved by AI recovery significantly correlated with the baseline image quality indicated by structural similarity index measurement (SSIM) (r = −0.71, p < 0.05) and normalized dose acquisition (r = −0.60, p < 0.05). Our cross-scanner and cross-tracer AI methodology showed utility based on both physical and clinical image assessment (p < 0.05).
Conclusion: The deep learning development for extensible application on unknown scanners and tracers may improve the trustworthiness and clinical acceptability of AI-based dose reduction.
Supplementary Information: The online version contains supplementary material available at 10.1007/s00259-021-05644-1.
Affiliation(s)
- Song Xue
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Rui Guo
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Karl Peter Bohn
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Jared Matzke
- Department of Informatics, Technical University of Munich, Munich, Germany
- Marco Viscione
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Ian Alberts
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Hongping Meng
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Chenwei Sun
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Miao Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Min Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Axel Rominger
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Ruijin Center, Shanghai, China
- Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
25
Stapelfeldt J, Kucur SS, Huber N, Höhn R, Sznitman R. Virtual Reality-Based and Conventional Visual Field Examination Comparison in Healthy and Glaucoma Patients. Transl Vis Sci Technol 2021; 10:10. [PMID: 34614166] [PMCID: PMC8496417] [DOI: 10.1167/tvst.10.12.10] [Citation(s) in RCA: 20]
Abstract
Purpose: To clinically evaluate the noninferiority of a custom virtual reality (VR) perimetry system compared to a clinically and routinely used perimeter in both healthy subjects and glaucoma patients.
Methods: We used a custom-designed VR perimetry system tailored for visual field testing. The system uses an Oculus Quest VR headset (Facebook Technologies, LLC), which includes a clicker for participant response feedback. A prospective, single-center study was conducted at the Department of Ophthalmology of the Bern University Hospital (Bern, Switzerland) over 12 months. Of the 114 participants recruited, 70 (36 healthy subjects and 34 glaucoma patients with early to moderate visual field loss) were included in the study. Participants underwent perimetry tests on an Octopus 900 (Haag-Streit, Köniz, Switzerland) as well as on the custom VR perimeter. In both cases, the standard dynamic strategy (DS) was used in conjunction with the G testing pattern. Collected visual fields (VFs) from both devices were then analyzed and compared.
Results: High mean defect (MD) correlations between the two systems (Spearman, ρ ≥ 0.75) were obtained. The VR system was found to slightly underestimate VF defects in glaucoma subjects (1.4 dB). No significant bias was found with respect to eccentricity or subject age. On average, a similar number of stimulus presentations per VF was necessary when measuring glaucoma patients and healthy subjects.
Conclusions: This study demonstrates that a clinically used perimeter and the proposed VR perimetry system perform comparably with respect to a number of perimetry parameters in healthy subjects and glaucoma patients with early to moderate visual field loss.
Translational Relevance: This suggests that VR perimeters have the potential to assess VFs with high enough confidence, thereby alleviating challenges in current perimetry practices by providing a portable and more accessible visual field test.
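The device agreement reported in this abstract boils down to two numbers: a Spearman rank correlation between the mean defect (MD) values measured by the two perimeters, and the mean inter-device difference (bias). A minimal sketch of both computations follows; all MD values below are invented for illustration and are not study data.

```python
import numpy as np

def rankdata(x):
    """1-based average ranks; tied values share the mean of their rank positions."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    sx = x[order]
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # mean of ranks i+1..j+1
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    return float(np.corrcoef(rankdata(x), rankdata(y))[0, 1])

# Hypothetical per-eye mean defect values (dB) from the two devices.
md_octopus = np.array([0.5, 1.2, 3.4, 6.8, 9.1, 12.3, 2.2, 4.5])
md_vr      = np.array([0.1, 0.4, 2.1, 5.2, 7.8, 10.6, 1.0, 3.1])

rho  = spearman_rho(md_octopus, md_vr)
bias = float(np.mean(md_octopus - md_vr))  # positive: VR underestimates the defect
```

With these perfectly monotone toy numbers, rho comes out at 1.0 and the bias at about 1.2 dB, the same kind of summary (ρ ≥ 0.75, ~1.4 dB underestimation) the study reports.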
Affiliation(s)
- Jan Stapelfeldt
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Serife Seda Kucur
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Nina Huber
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- René Höhn
- Department of Ophthalmology, Bern University Hospital, Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
26
Hu S, Hall DA, Zubler F, Sznitman R, Anschuetz L, Caversaccio M, Wimmer W. Bayesian brain in tinnitus: Computational modeling of three perceptual phenomena using a modified Hierarchical Gaussian Filter. Hear Res 2021; 410:108338. [PMID: 34469780] [DOI: 10.1016/j.heares.2021.108338] [Citation(s) in RCA: 1]
Abstract
Recently, Bayesian brain-based models emerged as a possible composite of existing theories, providing a universal explanation of tinnitus phenomena. Yet, the involvement of multiple synergistic mechanisms complicates the identification of behavioral and physiological evidence. To overcome this, an empirically tested computational model could support the evaluation of theoretical hypotheses by intrinsically encompassing different mechanisms. The aim of this work was to develop a generative computational tinnitus perception model based on the Bayesian brain concept. The behavioral responses of 46 tinnitus subjects who underwent ten consecutive residual inhibition assessments were used for model fitting. Our model was able to replicate the behavioral responses during residual inhibition in our cohort (median linear correlation coefficient of 0.79). Using the same model, we simulated two additional tinnitus phenomena: residual excitation and the occurrence of tinnitus in non-tinnitus subjects after sensory deprivation. In the simulations, the trajectories of the model were consistent with previously obtained behavioral and physiological observations. Our work introduces generative computational modeling to the research field of tinnitus. It has the potential to quantitatively link experimental observations to theoretical hypotheses and to support the search for neural signatures of tinnitus by finding correlates between the latent variables of the model and measured physiological data.
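The "Bayesian brain" ingredient at the heart of such models is the precision-weighted belief update: the percept moves along the prediction error by the relative precision of the sensory evidence. The toy update below is only a didactic sketch of that single idea, not the authors' modified Hierarchical Gaussian Filter.

```python
# One Bayesian update of a Gaussian belief N(mu, 1/pi_prior) given a
# Gaussian observation x with sensory precision pi_sens. Didactic sketch
# only -- NOT the modified Hierarchical Gaussian Filter of the paper.

def update(mu, pi_prior, x, pi_sens):
    pi_post = pi_prior + pi_sens
    mu_post = mu + (pi_sens / pi_post) * (x - mu)  # precision-weighted prediction error
    return mu_post, pi_post

# A belief drifting toward a persistent (hypothetical) phantom input:
mu, pi = 0.0, 1.0
for x in [1.0, 1.0, 1.0]:
    mu, pi = update(mu, pi, x, pi_sens=4.0)
```

After three identical observations the belief mean has moved most of the way to the input (12/13 ≈ 0.92) and its precision has grown from 1 to 13; in HGF-style models such updates are stacked over several hierarchical levels.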
Affiliation(s)
- Suyi Hu
- Department for Otolaryngology, Head and Neck Surgery, Inselspital, University Hospital Bern, University of Bern, Switzerland; Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
- Deborah A Hall
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK; Department of Psychology, School of Social Sciences, Heriot-Watt University Malaysia, Putrajaya, Malaysia
- Frédéric Zubler
- Department of Neurology, Inselspital, University Hospital Bern, University of Bern, Switzerland
- Raphael Sznitman
- Artificial Intelligence in Medical Imaging, ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
- Lukas Anschuetz
- Department for Otolaryngology, Head and Neck Surgery, Inselspital, University Hospital Bern, University of Bern, Switzerland
- Marco Caversaccio
- Department for Otolaryngology, Head and Neck Surgery, Inselspital, University Hospital Bern, University of Bern, Switzerland; Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
- Wilhelm Wimmer
- Department for Otolaryngology, Head and Neck Surgery, Inselspital, University Hospital Bern, University of Bern, Switzerland; Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
27
Lejeune L, Sznitman R. A positive/unlabeled approach for the segmentation of medical sequences using point-wise supervision. Med Image Anal 2021; 73:102185. [PMID: 34461559] [DOI: 10.1016/j.media.2021.102185] [Citation(s) in RCA: 0]
Abstract
The ability to quickly annotate medical imaging data plays a critical role in training deep learning frameworks for segmentation. Doing so for image volumes or video sequences is even more pressing, as annotating these is particularly burdensome. To alleviate this problem, this work proposes a new method to efficiently segment medical imaging volumes or videos using point-wise annotations only. This allows annotations to be collected extremely quickly and remains applicable to numerous segmentation tasks. Our approach trains a deep learning model with an appropriate Positive/Unlabeled objective function on sparse point-wise annotations. While most methods of this kind assume that the proportion of positive samples in the data is known a priori, we introduce a novel self-supervised method to estimate this prior efficiently by combining a Bayesian estimation framework and new stopping criteria. Our method iteratively estimates appropriate class priors and yields high segmentation quality for a variety of object types and imaging modalities. In addition, a spatio-temporal tracking framework lets us regularize our predictions over the complete data volume. We show experimentally that our approach outperforms state-of-the-art methods tailored to the same problem.
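The family of Positive/Unlabeled objectives the abstract refers to can be sketched with the standard non-negative PU risk estimator (Kiryo et al.): losses on labeled positives are weighted by the class prior π, which is exactly the quantity the paper estimates, and the negative-risk estimate is clamped at zero. The scores and prior below are invented for illustration; this is not the authors' exact loss.

```python
import numpy as np

def sigmoid_loss(z):
    """l(z) = sigmoid(-z): a smooth surrogate for the 0/1 loss."""
    return 1.0 / (1.0 + np.exp(z))

def nnpu_risk(scores_p, scores_u, pi):
    """Non-negative PU risk for classifier scores f(x).

    scores_p: scores on labeled-positive samples
    scores_u: scores on unlabeled samples
    pi:       class prior (fraction of positives among unlabeled data)
    """
    r_p_pos = sigmoid_loss(scores_p).mean()   # positives scored as positive
    r_p_neg = sigmoid_loss(-scores_p).mean()  # positives scored as negative
    r_u_neg = sigmoid_loss(-scores_u).mean()  # unlabeled scored as negative
    # The implied negative risk can dip below zero on finite samples; clamp it.
    return pi * r_p_pos + max(0.0, r_u_neg - pi * r_p_neg)

scores_p = np.array([2.0, 1.5, 3.0])          # hypothetical scores on point-annotated pixels
scores_u = np.array([-1.0, 0.2, -2.0, 1.8])   # hypothetical scores on unlabeled pixels
risk = nnpu_risk(scores_p, scores_u, pi=0.3)
```

Minimizing this risk with respect to the scorer's parameters trains a binary segmenter from positive points and unlabeled pixels alone, which is why a good estimate of π matters so much.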
Affiliation(s)
- Laurent Lejeune
- Artificial Intelligence in Medical Imaging, ARTORG Center, University of Bern, Murtenstrasse 50, Bern 3008, Switzerland.
- Raphael Sznitman
- Artificial Intelligence in Medical Imaging, ARTORG Center, University of Bern, Murtenstrasse 50, Bern 3008, Switzerland.
28
Kurmann T, Márquez-Neila P, Allan M, Wolf S, Sznitman R. Mask then classify: multi-instance segmentation for surgical instruments. Int J Comput Assist Radiol Surg 2021; 16:1227-1236. [PMID: 34143374] [PMCID: PMC8260538] [DOI: 10.1007/s11548-021-02404-2] [Citation(s) in RCA: 6]
Abstract
PURPOSE: The detection and segmentation of surgical instruments has been a vital step for many applications in minimally invasive surgical robotics. Previously, the problem was tackled from a semantic segmentation perspective, yet these methods fail to provide good segmentation maps of instrument types and do not contain any information on the instance affiliation of each pixel. We propose to overcome this limitation with a novel instance segmentation method which first masks instruments and then classifies them into their respective type.
METHODS: We introduce a novel method for instance segmentation where a pixel-wise mask of each instance is found prior to classification. An encoder-decoder network is used to extract instrument instances, which are then separately classified using the features of the previous stages. Furthermore, we present a method to incorporate instrument priors from surgical robots.
RESULTS: Experiments are performed on the robotic instrument segmentation dataset of the 2017 endoscopic vision challenge. We perform a fourfold cross-validation and show an improvement of over 18% over the previous state-of-the-art. Furthermore, we perform an ablation study which highlights the importance of certain design choices and observe an increase of 10% over semantic segmentation methods.
CONCLUSIONS: We have presented a novel instance segmentation method for surgical instruments which outperforms previous semantic segmentation-based methods. Our method further provides a more informative output of instance-level information while retaining a precise segmentation mask. Finally, we have shown that robotic instrument priors can be used to further increase performance.
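The mask-then-classify ordering can be shown in miniature: first extract one mask per instance, then classify each instance on its own. The sketch below substitutes connected-component extraction and a size-based rule for the paper's encoder-decoder and learned classifier; the masks and class names are invented.

```python
import numpy as np
from collections import deque

def extract_instances(mask):
    """4-connected components of a binary 2D mask -> list of boolean instance masks."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    instances = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        comp = np.zeros_like(mask)        # one new instance, grown by BFS
        q = deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            comp[y, x] = True
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        instances.append(comp)
    return instances

def classify(instance):
    """Stand-in for the per-instance classifier: here, a trivial size rule."""
    return "large-tool" if instance.sum() >= 4 else "small-tool"

mask = [[1, 1, 0, 0, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 1, 0]]
labels = [classify(inst) for inst in extract_instances(mask)]
```

The point of the ordering is that classification sees one coherent object at a time, so every pixel automatically carries both an instance identity and a type label.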
Affiliation(s)
- Max Allan
- Intuitive Surgical Inc., Sunnyvale, USA
- Sebastian Wolf
- Department of Ophthalmology, Bern University Hospital, Bern, Switzerland
29
Gallardo M, Munk MR, Kurmann T, De Zanet S, Mosinska A, Karagoz IK, Zinkernagel MS, Wolf S, Sznitman R. Machine learning can predict anti-VEGF treatment demand in a Treat-and-Extend regimen for patients with nAMD, DME and RVO associated ME. Ophthalmol Retina 2021; 5:604-624. [DOI: 10.1016/j.oret.2021.05.002] [Citation(s) in RCA: 9]
30
Jacques M, Dobrzyński M, Gagliardi PA, Sznitman R, Pertz O. CODEX, a neural network approach to explore signaling dynamics landscapes. Mol Syst Biol 2021; 17:e10026. [PMID: 33835701] [PMCID: PMC8034356] [DOI: 10.15252/msb.202010026] [Citation(s) in RCA: 9]
Abstract
Current studies of cell signaling dynamics that use live cell fluorescent biosensors routinely yield thousands of single-cell, heterogeneous, multi-dimensional trajectories. Typically, the extraction of relevant information from time series data relies on predefined, human-interpretable features. Without a priori knowledge of the system, the predefined features may fail to cover the entire spectrum of dynamics. Here we present CODEX, a data-driven approach based on convolutional neural networks (CNNs) that identifies patterns in time series. It does not require a priori information about the biological system and the insights into the data are built through explanations of the CNNs' predictions. CODEX provides several views of the data: visualization of all the single-cell trajectories in a low-dimensional space, identification of prototypic trajectories, and extraction of distinctive motifs. We demonstrate how CODEX can provide new insights into ERK and Akt signaling in response to various growth factors, and we recapitulate findings in p53 and TGFβ-SMAD2 signaling.
Affiliation(s)
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Olivier Pertz
- Institute of Cell Biology, University of Bern, Bern, Switzerland
31
Sandu RM, Paolucci I, Ruiter SJS, Sznitman R, de Jong KP, Freedman J, Weber S, Tinguely P. Volumetric Quantitative Ablation Margins for Assessment of Ablation Completeness in Thermal Ablation of Liver Tumors. Front Oncol 2021; 11:623098. [PMID: 33777768] [PMCID: PMC7988092] [DOI: 10.3389/fonc.2021.623098] [Citation(s) in RCA: 15]
Abstract
BACKGROUND In thermal ablation of liver tumors, complete coverage of the tumor volume by the ablation volume with a sufficient ablation margin is the most important factor for treatment success. Evaluation of ablation completeness is commonly performed by visual inspection in 2D and is prone to inter-reader variability. This work aimed to introduce a standardized approach for evaluation of ablation completeness after CT-guided thermal ablation of liver tumors, using volumetric quantitative ablation margins (QAM). METHODS A QAM computation metric based on volumetric segmentations of tumor and ablation areas and signed Euclidean surface distance maps was developed, including a novel algorithm to address QAM computation in subcapsular tumors. The code for QAM computation was verified in artificial examples of tumor and ablation spheres simulating varying scenarios of ablation margins. The applicability of the QAM metric was investigated in representative cases extracted from a prospective database of colorectal liver metastases (CRLM) treated with stereotactic microwave ablation (SMWA). RESULTS Applicability of the proposed QAM metric was confirmed in artificial and clinical example cases. Numerical and visual options of data presentation displaying substrata of QAM distributions were proposed. For subcapsular tumors, the underestimation of tumor coverage by the ablation volume when applying an unadjusted QAM method was confirmed, supporting the benefits of using the proposed algorithm for QAM computation in these cases. The computational code for developed QAM was made publicly available, encouraging the use of a standard and objective metric in reporting ablation completeness and margins. CONCLUSION The proposed volumetric approach for QAM computation including a novel algorithm to address subcapsular liver tumors enables precision and reproducibility in the assessment of ablation margins. 
The quantitative feedback on ablation completeness opens possibilities for intra-operative decision making and for refined analyses on predictability and consistency of local tumor control after thermal ablation of liver tumors.
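The signed surface-distance idea behind a QAM-style metric can be sketched on binary masks. This is a hedged illustration, not the authors' released code: the helper names are ours, and the brute-force pairwise distance is only practical for toy masks (a real implementation would use a Euclidean distance transform).

```python
import numpy as np

def surface_voxels(mask):
    """Coordinates of voxels on the boundary of a binary mask.
    Assumes the mask does not touch the array border (np.roll wraps)."""
    m = mask.astype(bool)
    interior = m.copy()
    for ax in range(m.ndim):
        interior &= np.roll(m, 1, axis=ax) & np.roll(m, -1, axis=ax)
    return np.argwhere(m & ~interior)

def signed_margin(tumor, ablation):
    """For each tumor-surface voxel, Euclidean distance to the ablation
    surface, negative where the tumor voxel lies outside the ablation
    (i.e. where the tumor is not covered)."""
    t_surf = surface_voxels(tumor)
    a_surf = surface_voxels(ablation)
    d = np.linalg.norm(t_surf[:, None, :] - a_surf[None, :, :], axis=-1).min(axis=1)
    covered = ablation.astype(bool)[tuple(t_surf.T)]
    return np.where(covered, d, -d)
```

On a pair of concentric disk masks, a larger ablation than tumor yields strictly positive margins, while the reversed pair yields strictly negative ones.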
Affiliation(s)
- Raluca-Maria Sandu
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Iwan Paolucci
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Simeon J. S. Ruiter
- Department of Hepato-Pancreato-Biliary Surgery and Liver Transplantation, University Medical Center Groningen, Groningen, Netherlands
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Koert P. de Jong
- Department of Hepato-Pancreato-Biliary Surgery and Liver Transplantation, University Medical Center Groningen, Groningen, Netherlands
- Jacob Freedman
- Division of Surgery, Department of Clinical Sciences, Karolinska Institutet at Danderyd Hospital, Stockholm, Sweden
- Stefan Weber
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Pascale Tinguely
- Division of Surgery, Department of Clinical Sciences, Karolinska Institutet at Danderyd Hospital, Stockholm, Sweden
- Department of Visceral Surgery and Medicine, Inselspital University Hospital Bern, University of Bern, Bern, Switzerland
32
Taghavi K, Moono M, Mwanahamuntu M, Basu P, Limacher A, Tembo T, Kapesa H, Hamusonde K, Asangbeh S, Sznitman R, Low N, Manasyan A, Bohlius J. Screening test accuracy to improve detection of precancerous lesions of the cervix in women living with HIV: a study protocol. BMJ Open 2020; 10:e037955. [PMID: 33371015 PMCID: PMC7751198 DOI: 10.1136/bmjopen-2020-037955] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/22/2020] [Revised: 11/07/2020] [Accepted: 11/17/2020] [Indexed: 12/24/2022] Open
Abstract
INTRODUCTION The simplest and cheapest method for cervical cancer screening is visual inspection after application of acetic acid (VIA). However, this method has limitations for correctly identifying precancerous cervical lesions (sensitivity) and women free from these lesions (specificity). We will assess alternative screening methods that could improve sensitivity and specificity in women living with human immunodeficiency virus (WLHIV) in Southern Africa. METHODS AND ANALYSIS We will conduct a paired, prospective, screening test accuracy study among consecutive, eligible women aged 18-65 years receiving treatment for HIV/AIDS at Kanyama Hospital, Lusaka, Zambia. We will assess a portable magnification device (Gynocular, Gynius Plus AB, Sweden) based on the Swede score assessment of the cervix, a test for high-risk subtypes of human papillomavirus (HR-HPV, GeneXpert, Cepheid, USA), and VIA. All study participants will receive all three tests and the reference standard at baseline and at six-month follow-up. The reference standard is histological assessment of two to four biopsies of the transformation zone. The primary histological endpoint is cervical intraepithelial neoplasia grade two and above (CIN2+). Women who are VIA-positive or have histologically confirmed CIN2+ lesions will be treated as per national guidelines. We plan to enrol 450 women. Primary outcome measures for test accuracy include the sensitivity and specificity of each stand-alone test. In secondary analyses, we will evaluate combinations of tests. Pre-planned additional studies include the use of cervigrams to test an automated visual assessment tool using image pattern recognition, cost analysis, and associations with trichomoniasis.
ETHICS AND DISSEMINATION Ethical approval was obtained from the University of Zambia Biomedical Research Ethics Committee, Zambian National Health Regulatory Authority, Zambia Medicines Regulatory Authority, Swissethics and the International Agency for Research on Cancer Ethics Committee. Results of the study will be submitted for publication in a peer-reviewed journal. TRIAL REGISTRATION NUMBER NCT03931083; Pre-results.
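The primary accuracy measures are straightforward to compute once each index test result is paired with the histological reference standard. The sketch below is a generic illustration with toy data and a hypothetical function name, not study code.

```python
def sensitivity_specificity(test_pos, disease_pos):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    disease_pos encodes the reference-standard (histology) result."""
    pairs = list(zip(test_pos, disease_pos))
    tp = sum(1 for t, d in pairs if t and d)
    fn = sum(1 for t, d in pairs if not t and d)
    tn = sum(1 for t, d in pairs if not t and not d)
    fp = sum(1 for t, d in pairs if t and not d)
    return tp / (tp + fn), tn / (tn + fp)
```

For example, a test that flags three of four women, two of whom truly have CIN2+, has sensitivity 1.0 and specificity 0.5.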
Affiliation(s)
- Katayoun Taghavi
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Misinzo Moono
- Centre for Infectious Disease Research in Zambia (CIDRZ), Lusaka, Zambia
- Mulindi Mwanahamuntu
- Obstetrics and Gynaecology, University Teaching Hospital, Lusaka, Zambia
- Women and Newborn health, Levy Mwanawasa Medical University Hospital, Lusaka, Zambia
- Partha Basu
- International Agency for Research on Cancer (IARC), World Health Organization, Lyon, France
- Taniya Tembo
- Centre for Infectious Disease Research in Zambia (CIDRZ), Lusaka, Zambia
- Herbert Kapesa
- Centre for Infectious Disease Research in Zambia (CIDRZ), Lusaka, Zambia
- Kalongo Hamusonde
- Centre for Infectious Disease Research in Zambia (CIDRZ), Lusaka, Zambia
- Serra Asangbeh
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Nicola Low
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
- Albert Manasyan
- Centre for Infectious Disease Research in Zambia (CIDRZ), Lusaka, Zambia
- University of Alabama at Birmingham (UAB), Birmingham, Alabama, USA
- Julia Bohlius
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, Switzerland
33
Kucur ŞS, Häckel S, Stapelfeldt J, Odermatt J, Iliev ME, Abegg M, Sznitman R, Höhn R. Comparative Study Between the SORS and Dynamic Strategy Visual Field Testing Methods on Glaucomatous and Healthy Subjects. Transl Vis Sci Technol 2020; 9:3. [PMID: 33344047 PMCID: PMC7718825 DOI: 10.1167/tvst.9.13.3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Accepted: 09/22/2020] [Indexed: 11/30/2022] Open
Abstract
Purpose To clinically validate the noninferiority of the sequentially optimized reconstruction strategy (SORS) when compared to the dynamic strategy (DS). Methods SORS is a novel perimetry testing strategy that evaluates a subset of test locations of a visual field (VF) test pattern and estimates the untested locations by linear approximation. By testing fewer locations, SORS has been shown in computer simulations to improve speed over conventional perimetry tests while maintaining high acquisition quality. To validate SORS, a prospective clinical study was conducted at the Department of Ophthalmology of Bern University Hospital over 12 months. Eighty-three subjects (32 healthy subjects and 51 glaucoma patients with early to moderate visual field loss) of 114 participants were included in the study. The subjects underwent perimetry tests on an Octopus 900 (Haag-Streit, Köniz, Switzerland) using the G pattern with both DS and SORS. The sensitivity thresholds (ST) acquired by both tests were analyzed and compared. Results DS-acquired VFs were used as a reference. High correlations between individual STs (r ≥ 0.74), as well as between mean defect values (r ≥ 0.88), given by DS and SORS were obtained. The mean absolute error of SORS was under 3 dB, with a 70% reduction in acquisition time. SORS overestimated healthy VFs while slightly underestimating glaucomatous VFs. Qualitatively, SORS acquisition yielded VFs with detectable defect patterns, albeit some isolated and small defects were occasionally missed. Conclusions This clinical study showed that for healthy and glaucomatous patients, SORS-acquired VFs sufficiently correlated with DS-acquired VFs, with up to a 70% reduction in acquisition time.
Translational Relevance This clinical study suggests that the novel perimetry strategy SORS could be used in routine clinical practice with comparable utility to the current standard DS, thereby providing a shorter and more comfortable perimetry experience.
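The core reconstruction step of SORS, estimating untested locations by linear approximation from a tested subset, can be sketched with a least-squares map learned from complete training fields. This is our simplified reading of the published idea: the variable names and training procedure are illustrative, and the actual SORS additionally optimizes which locations are tested and in what order.

```python
import numpy as np

def fit_reconstructor(train_vfs, tested_idx):
    """Least-squares linear map from sensitivity thresholds at the tested
    locations to the full visual field, learned from training fields.
    train_vfs: (n_fields, n_locations) array of complete VFs."""
    W, *_ = np.linalg.lstsq(train_vfs[:, tested_idx], train_vfs, rcond=None)
    return W  # shape: (len(tested_idx), n_locations)

def reconstruct(measured, W):
    """Estimate the full field from the measured subset."""
    return measured @ W
```

On synthetic low-rank fields, measuring as few locations as the field's intrinsic rank already allows near-exact reconstruction of the remaining locations.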
Affiliation(s)
- Şerife Seda Kucur
- Artificial Intelligence in Medical Imaging Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Sebastian Häckel
- Department of Ophthalmology, University Hospital Bern, University of Bern, Bern, Switzerland
- Jan Stapelfeldt
- Artificial Intelligence in Medical Imaging Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Milko E Iliev
- Department of Ophthalmology, University Hospital Bern, University of Bern, Bern, Switzerland
- Mathias Abegg
- Department of Ophthalmology, University Hospital Bern, University of Bern, Bern, Switzerland
- Raphael Sznitman
- Artificial Intelligence in Medical Imaging Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Rene Höhn
- Department of Ophthalmology, University Hospital Bern, University of Bern, Bern, Switzerland
34
Vu MH, Lofstedt T, Nyholm T, Sznitman R. A Question-Centric Model for Visual Question Answering in Medical Imaging. IEEE Trans Med Imaging 2020; 39:2856-2868. [PMID: 32149682 DOI: 10.1109/tmi.2020.2978284] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Deep learning methods have proven extremely effective at performing a variety of medical image analysis tasks. With their potential use in clinical routine, however, their lack of transparency has been one of their few weak points, raising concerns regarding their behavior and failure modes. While most research on inferring model behavior has focused on indirect strategies that estimate prediction uncertainties and visualize model support in the input image space, the ability to explicitly query a prediction model regarding its image content offers a more direct way to determine the behavior of trained models. To this end, we present a novel Visual Question Answering approach that allows an image to be queried by means of a written question. Experiments on a variety of medical and natural image datasets show that, by fusing image and question features in a novel way, the proposed approach achieves equal or higher accuracy compared to current methods.
35
Apostolopoulos S, Salas J, Ordóñez JLP, Tan SS, Ciller C, Ebneter A, Zinkernagel M, Sznitman R, Wolf S, De Zanet S, Munk MR. Automatically Enhanced OCT Scans of the Retina: A proof of concept study. Sci Rep 2020; 10:7819. [PMID: 32385371 PMCID: PMC7210925 DOI: 10.1038/s41598-020-64724-8] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Accepted: 04/15/2020] [Indexed: 11/18/2022] Open
Abstract
In this work, we evaluated a customized postprocessing software for automatic enhancement of retinal OCT B-scans, providing noise reduction, contrast enhancement, and improved depth quality, applicable to Heidelberg Engineering Spectralis OCT devices. A trained deep neural network was used to process images from an OCT dataset with ground-truth biomarker gradings. Performance was assessed by two expert graders, who rated B-scan image quality with a clear preference for enhanced over original images. Objective measures such as SNR and noise estimation showed a significant improvement in quality. Presence grading of seven biomarkers (IRF, SRF, ERM, drusen, RPD, GA, and iRORA) resulted in similar intergrader agreement, with improvements for IRF and RPD and disagreement for high-variance biomarkers such as GA and iRORA.
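One of the objective quality measures mentioned, SNR, can be estimated from a B-scan given a noise-only background region. The definition below (mean signal intensity over background standard deviation, in dB) is a common convention we assume here, not necessarily the exact metric used in the paper.

```python
import numpy as np

def snr_db(image, background_mask):
    """SNR estimate: mean intensity outside the background region divided
    by the standard deviation of the background noise, in decibels."""
    signal = image[~background_mask].mean()
    noise = image[background_mask].std()
    return 20.0 * np.log10(signal / noise)
```

For a synthetic scan whose tissue half sits 100 intensity units above unit-variance noise, this yields roughly 40 dB.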
Affiliation(s)
- Jazmín Salas
- Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- José L P Ordóñez
- Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Andreas Ebneter
- Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Martin Zinkernagel
- Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Sebastian Wolf
- Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Marion R Munk
- Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
36
Mendizabal A, Sznitman R, Cotin S. Force classification during robotic interventions through simulation-trained neural networks. Int J Comput Assist Radiol Surg 2019; 14:1601-1610. [DOI: 10.1007/s11548-019-02048-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2019] [Accepted: 07/30/2019] [Indexed: 11/30/2022]
37
Chen ECS, Harada K, Sznitman R. IJCARS-IPCAI 2019 special issue: conference information processing for computer-assisted interventions, 10th international conference 2019-part 1. Int J Comput Assist Radiol Surg 2019; 14:911-912. [PMID: 31123987 DOI: 10.1007/s11548-019-02000-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Elvis C S Chen
- Imaging Research Laboratories, Robarts Research Institute, Western University, 1151 Richmond St., London, ON, N6A 3K7, Canada
- Kanako Harada
- Mitsuishi-Suita Lab, Department of Bioengineering/Mechanical Engineering, School of Engineering, The University of Tokyo, Eng. Bldg 2-71C1, 7-3-1, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Raphael Sznitman
- ARTORG Research Center Biomedical Engineering, University of Bern, Murtenstrasse 50, 3008, Bern, Switzerland
38
Giannakaki-Zimmermann H, Huf W, Schaal KB, Schürch K, Dysli C, Dysli M, Zenger A, Ceklic L, Ciller C, Apostolopoulos S, De Zanet S, Sznitman R, Ebneter A, Zinkernagel MS, Wolf S, Munk MR. Comparison of Choroidal Thickness Measurements Using Spectral Domain Optical Coherence Tomography in Six Different Settings and With Customized Automated Segmentation Software. Transl Vis Sci Technol 2019; 8:5. [PMID: 31110908 PMCID: PMC6503890 DOI: 10.1167/tvst.8.3.5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Accepted: 01/29/2019] [Indexed: 11/24/2022] Open
Abstract
Purpose We investigated which spectral domain optical coherence tomography (SD-OCT) setting is superior for measuring subfoveal choroidal thickness (CT) and compared the results with those of an automated segmentation software. Methods Thirty patients underwent enhanced depth imaging (EDI)-OCT. B-scans were extracted in six different settings (W+N = white background/normal contrast 9; W+H = white background/maximum contrast 16; B+N = black background/normal contrast 12; B+H = black background/maximum contrast 16; C+N = color-encoded image on black background at a predefined contrast of 9; and C+H = color-encoded image on black background at high/maximal contrast of 16), resulting in 180 images. Subfoveal CT was manually measured by nine graders and by automated segmentation software. Intraclass correlation (ICC) was assessed. Results ICC was higher in normal-contrast than in high-contrast images, and better for achromatic black than for white background images. Achromatic images were better than color images. The highest ICC was achieved in B+N (ICC = 0.64), followed by B+H (ICC = 0.54) and W+N and W+H (ICC = 0.5 each). The weakest ICC was obtained with Spectral-color (ICC = 0.47). Mean manual CT versus mean computer-estimated CT showed a correlation of r = 0.6 (P = 0.001). Conclusion Black background with a white image at normal contrast (B+N) seems the best setting to manually assess subfoveal CT. Automated assessment of CT seems to be a reliable tool for CT assessment. Translational Relevance To define optimized OCT analysis settings to improve the evaluation of in vivo imaging.
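Intraclass correlation is computed from the subjects-by-graders measurement matrix. The sketch below implements one common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater, after Shrout and Fleiss); this choice of variant is our assumption, as the abstract does not state which formula was used.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) for an (n_subjects, k_raters) measurement matrix."""
    n, k = x.shape
    grand = x.mean()
    row_m = x.mean(axis=1)  # per-subject means
    col_m = x.mean(axis=0)  # per-rater means
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)  # between-subject MS
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)  # between-rater MS
    resid = x - row_m[:, None] - col_m[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))    # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement gives an ICC of 1.0, while a constant offset between two raters is penalized because ICC(2,1) measures absolute agreement, not just consistency.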
Affiliation(s)
- Helena Giannakaki-Zimmermann
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Wolfgang Huf
- Karl Landsteiner Institute for Clinical Risk Management, Vienna, Austria
- Karen B Schaal
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Kaspar Schürch
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Chantal Dysli
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Muriel Dysli
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Anita Zenger
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Lala Ceklic
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Stephanos Apostolopoulos
- RetinAI Medical AG, Bern, Switzerland
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Andreas Ebneter
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Martin S Zinkernagel
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Sebastian Wolf
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Marion R Munk
- Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Bern Photographic Reading Center, University Hospital Bern, Switzerland
- Department of Ophthalmology, Northwestern University, Feinberg School of Medicine, Chicago, Illinois, USA
39
Kucur ŞS, Márquez-Neila P, Abegg M, Sznitman R. Patient-attentive sequential strategy for perimetry-based visual field acquisition. Med Image Anal 2019; 54:179-192. [PMID: 30933865 DOI: 10.1016/j.media.2019.03.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2018] [Revised: 03/08/2019] [Accepted: 03/14/2019] [Indexed: 11/28/2022]
Abstract
Perimetry is a non-invasive clinical psychometric examination used for diagnosing ophthalmic and neurological conditions. At its core, perimetry relies on a subject pressing a button whenever they see a visual stimulus within their field of view. This sequential process then yields a 2D visual field image that is critical for clinical use. Perimetry is painfully slow, however, with examinations lasting 7-8 minutes per eye. Maintaining high levels of concentration for that long is exhausting for the patient and negatively affects the acquired visual field. We introduce PASS, a novel perimetry testing strategy based on reinforcement learning that requires fewer locations in order to effectively estimate 2D visual fields. PASS uses a selection policy that determines which locations should be tested in order to reconstruct the complete visual field as accurately as possible, and then separately reconstructs the visual field from the sparse observations. Furthermore, PASS is patient-specific and non-greedy. It adaptively selects which locations to query based on the patient's answers to previous queries, and the locations are jointly selected to maximize the quality of the final reconstruction. In our experiments, we show that PASS outperforms state-of-the-art methods, leading to more accurate reconstructions while reducing the duration of the patient examination by 30% to 70%.
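PASS itself learns a non-greedy selection policy with reinforcement learning. As a much simpler point of comparison, a greedy uncertainty-driven strategy under a Gaussian field model can be sketched as follows; the function name and the covariance model are illustrative assumptions, not part of PASS.

```python
import numpy as np

def greedy_locations(cov, k):
    """Greedily pick k test locations: at each step, take the location with
    the largest remaining conditional variance under a Gaussian field model
    with covariance `cov`, then condition the model on that measurement.
    Assumes k does not exceed the rank of cov."""
    cov = cov.copy()
    picked = []
    for _ in range(k):
        i = int(np.argmax(np.diag(cov)))
        picked.append(i)
        ci = cov[:, i].copy()
        cov -= np.outer(ci, ci) / cov[i, i]  # Schur complement update
    return picked
```

With an uncorrelated prior, this simply visits locations in order of decreasing prior variance; with correlated locations, measuring one location also shrinks the uncertainty of its neighbours, which is what makes sparse testing worthwhile.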
Affiliation(s)
- Şerife Seda Kucur
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Pablo Márquez-Neila
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Mathias Abegg
- Department of Ophthalmology, Bern University Hospital, Inselspital, Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
40
Hu S, Anschuetz L, Huth ME, Sznitman R, Blaser D, Kompis M, Hall DA, Caversaccio M, Wimmer W. Association Between Residual Inhibition and Neural Activity in Patients with Tinnitus: Protocol for a Controlled Within- and Between-Subject Comparison Study. JMIR Res Protoc 2019; 8:e12270. [PMID: 30626571 PMCID: PMC6329433 DOI: 10.2196/12270] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2018] [Revised: 10/25/2018] [Accepted: 10/25/2018] [Indexed: 01/19/2023] Open
Abstract
Background Electroencephalography (EEG) studies indicate possible associations between tinnitus and changes in neural activity. However, inconsistent results require further investigation to better understand this heterogeneity and inform the interpretation of previous findings. Objective This study aims to investigate the feasibility of EEG measurements as an objective indicator for the identification of tinnitus-associated neural activity. Methods To reduce heterogeneity, participants in the tinnitus group will serve as their own controls in a within-subject EEG study design, using residual inhibition (RI) to modulate tinnitus perception; comparison with a nontinnitus control group will additionally allow for between-subject comparisons. We will apply RI stimulation to generate tinnitus and nontinnitus conditions in the same subject. Furthermore, high-frequency audiometry (up to 13 kHz) and tinnitometry will be performed. Results This work was funded by the Infrastructure Grant of the University of Bern, Bern, Switzerland and Bernafon AG, Bern, Switzerland. Enrollment for the study described in this protocol commenced in February 2018. Data analysis is currently under way and the first results are expected to be submitted for publication in 2019. Conclusions This study design allows the neural activity between conditions to be compared in the same individual, thereby addressing a notable limitation of previous EEG tinnitus studies. In addition, the high-frequency assessment will help to analyze and classify tinnitus symptoms beyond the conventional clinical standard. International Registered Report Identifier (IRRID) RR1-10.2196/12270
Affiliation(s)
- Suyi Hu
- Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Lukas Anschuetz
- Department of Ears, Nose, Throat, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Markus E Huth
- Department of Ears, Nose, Throat, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Raphael Sznitman
- Ophthalmic Technology Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Daniela Blaser
- Department of Ears, Nose, Throat, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Martin Kompis
- Department of Ears, Nose, Throat, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Deborah A Hall
- National Institute for Health Research Nottingham Biomedical Research Centre, University of Nottingham, Nottingham, United Kingdom
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Nottingham University Hospitals National Health Service Trust, Queens Medical Centre, Nottingham, United Kingdom
- Malaysia Campus, University of Nottingham, Semeniyh, Malaysia
- Marco Caversaccio
- Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Ears, Nose, Throat, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Wilhelm Wimmer
- Hearing Research Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Ears, Nose, Throat, Head and Neck Surgery, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
41
Abstract
PURPOSE To investigate the suitability of multi-scale spatial information in 30° visual fields (VF), computed from a convolutional neural network (CNN) classifier, for early-glaucoma vs. control discrimination. METHODS Two data sets of VFs acquired with the OCTOPUS 101 G1 program and the Humphrey Field Analyzer 24-2 pattern were subdivided into control and early-glaucomatous groups and converted into a new image using a novel Voronoi representation to train a custom-designed CNN to discriminate between control and early-glaucomatous eyes. Saliency maps that highlight which regions of the VF contribute maximally to the classification decision were computed to provide classification justification. Model fitting was cross-validated, and average precision (AP) score performances were computed for our method, Mean Defect (MD), square root of Loss Variance (sLV), their combination (MD+sLV), and a neural network (NN) that does not use convolutional features. RESULTS The CNN achieved the best AP score (0.874±0.095) across all test folds for one data set compared to the others (MD = 0.869±0.064, sLV = 0.775±0.137, MD+sLV = 0.839±0.085, NN = 0.843±0.089) and the third-best AP score (0.986±0.019) on the other, with only a slight difference from the other methods (MD = 0.986±0.023, sLV = 0.992±0.016, MD+sLV = 0.987±0.017, NN = 0.985±0.017). In general, the CNN consistently led to high AP across different data sets. Qualitatively, the computed saliency maps appeared to provide clinically relevant information on the CNN decision for individual VFs. CONCLUSION The proposed CNN offers high classification performance for the discrimination of control and early-glaucoma VFs when compared with standard clinical decision measures. The CNN classification, aided by saliency visualization, may support clinicians in the automatic discrimination of early-glaucomatous and normal VFs.
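The average precision (AP) score used for evaluation can be computed directly from ranked classifier scores. Below is a standard implementation of the usual definition (mean of precision taken at the rank of each positive example), not the authors' evaluation code.

```python
def average_precision(labels, scores):
    """AP: mean of precision@k evaluated at the rank of each positive.
    labels: 0/1 ground truth; scores: classifier confidences.
    Assumes at least one positive label; ties are broken by index."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            total += hits / rank
    return total / hits
```

A classifier that ranks every glaucomatous field above every control field scores an AP of 1.0 regardless of the absolute score values.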
Affiliation(s)
- Şerife Seda Kucur
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Gábor Holló
- Department of Ophthalmology, Semmelweis University, Budapest, Hungary
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
42
Nguyen HG, Sznitman R, Maeder P, Schalenbourg A, Peroni M, Hrbacek J, Weber DC, Pica A, Bach Cuadra M. Personalized Anatomic Eye Model From T1-Weighted Volume Interpolated Gradient Echo Magnetic Resonance Imaging of Patients With Uveal Melanoma. Int J Radiat Oncol Biol Phys 2018; 102:813-820. [PMID: 29970318 DOI: 10.1016/j.ijrobp.2018.05.004] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2018] [Revised: 04/06/2018] [Accepted: 05/01/2018] [Indexed: 02/03/2023]
Abstract
PURPOSE We present a 3-dimensional patient-specific eye model from magnetic resonance imaging (MRI) for proton therapy treatment planning of uveal melanoma (UM). During MRI acquisition of UM patients, maintaining point fixation can be difficult and, together with physiological blinking, can introduce motion artifacts into the images, thus challenging the model creation. Furthermore, the unclear boundaries of small structures (eg, lens, optic nerve) near the muscles, or of tumors with hemorrhage and tantalum clips, can limit model accuracy. METHODS AND MATERIALS A dataset of 37 subjects, including 30 healthy eyes of volunteers and 7 eyes of UM patients, was investigated. In our previous work, an active shape model was successfully applied to retinoblastoma eye segmentation in T1-weighted 3T MRI. Here, we evaluate this method in a more challenging setting, based on 1.5T MRI acquisition and different datasets of awake adult eyes with UM. The lens and cornea, together with the sclera, vitreous humor, and optic nerve, were automatically segmented and validated against manual delineations of a senior ocular radiation oncologist in terms of the Dice similarity coefficient and Hausdorff distance. RESULTS Leave-one-out cross validation (mixing both volunteers and UM patients) yielded median Dice similarity coefficient values (with corresponding Hausdorff distances) of 94.5% (1.64 mm) for the sclera, 92.2% (1.73 mm) for the vitreous humor, 88.3% (1.09 mm) for the lens, and 81.9% (1.86 mm) for the optic nerve. The average computation time for an eye was 10 seconds. CONCLUSIONS To our knowledge, our work is the first attempt to automatically segment adult eyes, including those of patients with UM. Our results show that automated active shape model segmentation can succeed in the presence of motion, tumors, and tantalum clips. These results are promising for inclusion in clinical practice.
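The two validation metrics used here, the Dice similarity coefficient and the Hausdorff distance, have standard definitions that can be sketched directly. The brute-force pairwise Hausdorff below is only practical for small point sets, and the helper names are ours.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape (n, dim):
    the largest distance from any point of one set to the nearest point of
    the other."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards volumetric overlap, while the Hausdorff distance penalizes the single worst boundary deviation, which is why the two are usually reported together.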
Affiliation(s)
- Huu-Giao Nguyen
- Proton Therapy Center, Paul Scherrer Institut, ETH Domain, Villigen, Switzerland; Ophthalmic Technology Laboratory, ARTORG Center of the University of Bern, Bern, Switzerland; Radiology Department, Lausanne University Hospital, Lausanne, Switzerland; Medical Image Analysis Laboratory, Centre d'Imagerie BioMédicale, University of Lausanne, Lausanne, Switzerland
- Raphael Sznitman
- Ophthalmic Technology Laboratory, ARTORG Center of the University of Bern, Bern, Switzerland
- Philippe Maeder
- Radiology Department, Lausanne University Hospital, Lausanne, Switzerland
- Ann Schalenbourg
- Adult Ocular Oncology Unit, Jules-Gonin Eye Hospital, FAA, Department of Ophthalmology, University of Lausanne, Switzerland
- Marta Peroni
- Proton Therapy Center, Paul Scherrer Institut, ETH Domain, Villigen, Switzerland
- Jan Hrbacek
- Proton Therapy Center, Paul Scherrer Institut, ETH Domain, Villigen, Switzerland
- Damien C Weber
- Proton Therapy Center, Paul Scherrer Institut, ETH Domain, Villigen, Switzerland
- Alessia Pica
- Proton Therapy Center, Paul Scherrer Institut, ETH Domain, Villigen, Switzerland
- Meritxell Bach Cuadra
- Radiology Department, Lausanne University Hospital, Lausanne, Switzerland; Medical Image Analysis Laboratory, Centre d'Imagerie BioMédicale, University of Lausanne, Lausanne, Switzerland; Signal Processing Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
|
43
|
Du X, Kurmann T, Chang PL, Allan M, Ourselin S, Sznitman R, Kelly JD, Stoyanov D. Articulated Multi-Instrument 2-D Pose Estimation Using Fully Convolutional Networks. IEEE Trans Med Imaging 2018; 37:1276-1287. [PMID: 29727290 PMCID: PMC6051486 DOI: 10.1109/tmi.2017.2787672] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/19/2017] [Revised: 12/22/2017] [Accepted: 12/22/2017] [Indexed: 05/25/2023]
Abstract
Instrument detection, pose estimation, and tracking in surgical videos are important vision components for computer-assisted interventions. While significant advances have been made in recent years, articulation detection remains a major challenge. In this paper, we propose a deep neural network for articulated multi-instrument 2-D pose estimation, trained on detailed annotations of endoscopic and microscopic datasets. Our model is a fully convolutional detection-regression network: joints and associations between joint pairs in our instrument model are located by the detection subnetwork and subsequently refined by a regression subnetwork. From the model output, the poses of the instruments are inferred using maximum bipartite graph matching. Our estimation framework is driven entirely by deep learning, without any direct kinematic information from a robot. It is evaluated on single-instrument RMIT data, as well as on multi-instrument EndoVis and in vivo data, with promising results. In addition, the dataset annotations are publicly released along with our code and model.
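The final pose-inference step described above assigns detected joints to instruments via maximum bipartite graph matching. The paper's network and matching formulation are not reproduced here; purely as an illustration of the assignment idea, the sketch below brute-forces the minimum-cost one-to-one assignment over a hypothetical cost matrix (practical implementations use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`):

```python
from itertools import permutations

def best_assignment(cost):
    """Minimum-cost one-to-one assignment of n detections to n instruments,
    found by exhaustive search over permutations (fine for tiny n)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best)

# hypothetical cost matrix: cost[i][j] = distance of detected joint i
# from the predicted location for instrument j
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.6, 0.3]]
print(best_assignment(cost))  # [0, 1, 2]: each joint matched to its nearest instrument
```

The returned list maps each detection index to its assigned instrument; ties and missed detections would need extra handling in a real pipeline.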
|
44
|
Fountoukidou T, Raisin P, Kaufmann D, Justiz J, Sznitman R, Wolf S. Motion-invariant SRT treatment detection from direct M-scan OCT imaging. Int J Comput Assist Radiol Surg 2018. [PMID: 29520526 DOI: 10.1007/s11548-018-1720-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
PURPOSE Selective retina therapy (SRT) is a laser treatment targeting specific posterior retinal layers. It aims to induce damage in the retinal pigment epithelium (RPE) while sparing other retinal tissue, in contrast to traditional photocoagulation. However, the targeted RPE layer is invisible to most imaging modalities, so induced SRT lesions cannot be monitored. In this work, imaging scans acquired from an experimental setup that couples the SRT laser beam with an optical coherence tomography (OCT) beam are analyzed in order to evaluate treatments as they occur. METHODS We isolate a small part of the time-resolved scan corresponding to the end of the treatment, for which we have microscopic evidence of the SRT outcome. We then use a convolutional neural network to map each scan to the treatment result, and explore which aspects of the scan convey the most valuable information for a robust therapy evaluation. By using only this small part of the scan, we achieve an online estimation that is resilient to eye movement. RESULTS The dataset consists of time-resolved OCT scans of 98 ex vivo porcine eyes, treated with different energy levels. The proposed method yields high performance in predicting whether the applied energy was adequate for SRT treatment, by focusing on the immediate OCT signal acquired during treatment. CONCLUSIONS We propose a strategy for online, noninvasive SRT treatment assessment that provides a reliable evaluation of the treatment status and could therefore be used to plan the continuation of the treatment.
Affiliation(s)
- Daniel Kaufmann
- Engineering and Information Technology, Berner Fachhochschule, Biel/Bienne, Switzerland
- Jörn Justiz
- Engineering and Information Technology, Berner Fachhochschule, Biel/Bienne, Switzerland
- Sebastian Wolf
- Inselspital, University Hospital of Bern, Bern, Switzerland
|
45
|
Glowacki P, Pinheiro MA, Mosinska A, Turetken E, Lebrecht D, Sznitman R, Holtmaat A, Kybic J, Fua P. Reconstructing Evolving Tree Structures in Time Lapse Sequences by Enforcing Time-Consistency. IEEE Trans Pattern Anal Mach Intell 2018; 40:755-761. [PMID: 28333621 DOI: 10.1109/tpami.2017.2680444] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
We propose a novel approach to reconstructing curvilinear tree structures evolving over time, such as road networks in 2D aerial images or neural structures in 3D microscopy stacks acquired in vivo. To enforce temporal consistency, we simultaneously process all images in a sequence, as opposed to reconstructing structures of interest in each image independently. We formulate the problem as a Quadratic Mixed Integer Program and demonstrate the additional robustness that comes from using all available visual clues at once, instead of working frame by frame. Furthermore, when the linear structures undergo local changes over time, our approach automatically detects them.
|
46
|
Affiliation(s)
- Derk Wild
- Ophthalmic Technology Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Serife Seda Kucur
- Ophthalmic Technology Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Raphael Sznitman
- Ophthalmic Technology Laboratory, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
|
47
|
Abstract
Perimetry testing is an automated method to measure visual function and is heavily used for diagnosing ophthalmic and neurological conditions. Its working principle is to sequentially query a subject about perceived light using different brightness levels at different visual field locations. At a given location, this query-patient-feedback process is expected to converge on the perceived sensitivity, such that a shown stimulus intensity is observed and reported 50% of the time. Given this inherently time-intensive and noisy process, fast testing strategies are necessary in order to measure visual field regions more effectively and reliably. In this work, we present a novel meta-strategy that relies on the correlative nature of visual field locations to sharply reduce the number of locations that need to be examined. To do this, we sequentially determine, in an initial training phase, the locations that most effectively reduce visual field estimation errors. We then exploit these locations at examination time and show that our approach can easily be combined with existing perceived-sensitivity estimation schemes to speed up examinations. Compared to state-of-the-art strategies, our approach shows marked performance gains, with a better accuracy-speed trade-off for both mixed and sub-populations.
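The query-patient-feedback loop described above is classically implemented as a staircase procedure: the stimulus steps toward the level the subject reports 50% of the time, and the step size is reduced after each response reversal. The sketch below is an illustrative 4-2 staircase with a deterministic toy observer, not the meta-strategy proposed in the paper:

```python
def staircase_threshold(sees, start=25, steps=(4, 2), max_trials=20):
    """Toy 4-2 staircase for a single visual field location: present a
    stimulus, step toward threshold (seen -> dimmer, missed -> brighter),
    shrink the step after a response reversal, stop after the final one.
    `sees(level)` models the patient's response at a given brightness."""
    level, last_seen, step_idx = start, None, 0
    for _ in range(max_trials):
        seen = sees(level)
        if last_seen is not None and seen != last_seen:
            step_idx += 1
            if step_idx == len(steps):
                break  # second reversal: the staircase has converged
        level += -steps[step_idx] if seen else steps[step_idx]
        last_seen = seen
    return level

# deterministic toy observer with true threshold 13 (no response noise)
print(staircase_threshold(lambda lv: lv >= 13))  # 13
```

A real patient responds stochastically near threshold, which is exactly why the abstract stresses that per-location estimation is slow and noisy, and why reducing the number of tested locations pays off.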
Affiliation(s)
- Şerife Seda Kucur
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
|
48
|
Ciller C, De Zanet S, Kamnitsas K, Maeder P, Glocker B, Munier FL, Rueckert D, Thiran JP, Bach Cuadra M, Sznitman R. Multi-channel MRI segmentation of eye structures and tumors using patient-specific features. PLoS One 2017; 12:e0173900. [PMID: 28350816 PMCID: PMC5369682 DOI: 10.1371/journal.pone.0173900] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2016] [Accepted: 02/28/2017] [Indexed: 02/03/2023] Open
Abstract
Retinoblastoma and uveal melanoma are fast-spreading eye tumors usually diagnosed using 2D Fundus Image Photography (Fundus) and 2D Ultrasound (US). Diagnosis and treatment planning of such diseases often require additional complementary imaging to confirm the tumor extent via 3D Magnetic Resonance Imaging (MRI). In this context, automatic segmentations to estimate the size and distribution of the pathological tissue would be advantageous for tumor characterization. Until now, the alternative has been the manual delineation of eye structures, a rather time-consuming and error-prone task that must be conducted in multiple MRI sequences simultaneously. This situation, and the lack of tools for accurate eye MRI analysis, reduces the interest in MRI beyond the qualitative evaluation of optic nerve invasion and the confirmation of recurrent malignancies below calcified tumors. In this manuscript, we propose a new framework for the automatic segmentation of eye structures and ocular tumors in multi-sequence MRI. Our key contribution is the introduction of a pathological eye model from which Eye Patient-Specific Features (EPSF) can be computed. These features combine intensity and shape information of pathological tissue embedded in the healthy structures of the eye. We assess our work on a dataset of pathological patient eyes by computing the Dice Similarity Coefficient (DSC) of the sclera, the cornea, the vitreous humor, the lens, and the tumor. In addition, we quantitatively show the superior performance of our pathological eye model compared to the segmentation obtained using a healthy model (over 4% DSC improvement) and demonstrate the relevance of our EPSF, which improve the final segmentation regardless of the classifier employed.
Affiliation(s)
- Carlos Ciller
- Radiology Department, CIBM, Lausanne University and University Hospital, Lausanne, Switzerland
- Ophthalmic Technology Group, ARTORG Center Univ. of Bern, Bern, Switzerland
- Sandro De Zanet
- Ophthalmic Technology Group, ARTORG Center Univ. of Bern, Bern, Switzerland
- Philippe Maeder
- Radiology Department, CIBM, Lausanne University and University Hospital, Lausanne, Switzerland
- Ben Glocker
- Biomedical Image Analysis Group, Imperial College London, London, United Kingdom
- Francis L. Munier
- Unit of Pediatric Ocular Oncology, Jules Gonin Eye Hospital, Lausanne, Switzerland
- Daniel Rueckert
- Biomedical Image Analysis Group, Imperial College London, London, United Kingdom
- Jean-Philippe Thiran
- Radiology Department, CIBM, Lausanne University and University Hospital, Lausanne, Switzerland
- Signal Processing Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Meritxell Bach Cuadra
- Radiology Department, CIBM, Lausanne University and University Hospital, Lausanne, Switzerland
- Signal Processing Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Raphael Sznitman
- Ophthalmic Technology Group, ARTORG Center Univ. of Bern, Bern, Switzerland
|
49
|
Abstract
Since its introduction 25 years ago, Optical Coherence Tomography (OCT) has contributed tremendously to the diagnostic and monitoring capabilities for pathologies in the field of ophthalmology. Despite rapid progress in hardware and software technology, however, the price of OCT devices has remained high, limiting their use in private practice and in screening examinations. In this paper, we present a slitlamp-integrated OCT device, built with off-the-shelf components, which can generate high-quality volumetric images of the posterior eye segment. To do so, we present a novel strategy for 3D image reconstruction in this challenging domain that allows state-of-the-art OCT volumes to be generated at high speed. The result is an OCT device that can match current systems in clinical practice at a significantly lower cost.
|
50
|
Apostolopoulos S, De Zanet S, Ciller C, Wolf S, Sznitman R. Pathological OCT Retinal Layer Segmentation Using Branch Residual U-Shape Networks. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017 2017. [DOI: 10.1007/978-3-319-66179-7_34] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|