1
Hendrickx J, Gracea RS, Vanheers M, Winderickx N, Preda F, Shujaat S, Jacobs R. Can artificial intelligence-driven cephalometric analysis replace manual tracing? A systematic review and meta-analysis. Eur J Orthod 2024; 46:cjae029. PMID: 38895901; PMCID: PMC11185929; DOI: 10.1093/ejo/cjae029.
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to investigate the accuracy and efficiency of artificial intelligence (AI)-driven automated landmark detection for cephalometric analysis on two-dimensional (2D) lateral cephalograms and three-dimensional (3D) cone-beam computed tomographic (CBCT) images. SEARCH METHODS An electronic search was conducted in the following databases: PubMed, Web of Science, Embase, and grey literature, with a search timeline extending up to January 2024. SELECTION CRITERIA Studies that employed AI for 2D or 3D cephalometric landmark detection were included. DATA COLLECTION AND ANALYSIS The selection of studies, data extraction, and quality assessment of the included studies were performed independently by two reviewers. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. A meta-analysis was conducted to evaluate the accuracy of 2D landmark identification based on both mean radial error and standard error. RESULTS Following the removal of duplicates, title and abstract screening, and full-text reading, 34 publications were selected. Amongst these, 27 studies evaluated the accuracy of AI-driven automated landmarking on 2D lateral cephalograms, while 7 studies involved 3D-CBCT images. A meta-analysis, based on the success detection rate of landmark placement on 2D images, revealed that the error was below the clinically acceptable threshold of 2 mm (1.39 mm; 95% confidence interval: 0.85-1.92 mm). For 3D images, a meta-analysis could not be conducted due to significant heterogeneity amongst the study designs. However, qualitative synthesis indicated that the mean error of landmark detection on 3D images ranged from 1.0 to 5.8 mm. Both automated 2D and 3D landmarking proved to be time-efficient, taking less than 1 min. Most studies exhibited a high risk of bias in data selection (n = 27) and reference standard (n = 29). CONCLUSION The performance of AI-driven cephalometric landmark detection on both 2D cephalograms and 3D-CBCT images showed potential in terms of accuracy and time efficiency. However, the generalizability and robustness of these AI systems could benefit from further improvement. REGISTRATION PROSPERO: CRD42022328800.
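For orientation, the pooled 2D quantities (mean radial error and the rate of detections within the 2 mm threshold) can be computed from predicted and ground-truth landmark coordinates roughly as follows. This is a generic sketch, not code from the review; the array shapes and the pixel spacing are assumptions:

```python
import numpy as np

def landmark_metrics(pred, gt, mm_per_px=0.1, threshold_mm=2.0):
    """Mean radial error (MRE) and success detection rate (SDR).

    pred, gt : (n_landmarks, 2) arrays of landmark coordinates in pixels.
    mm_per_px : assumed pixel spacing of the cephalogram.
    """
    radial_err_mm = np.linalg.norm(pred - gt, axis=1) * mm_per_px
    mre = radial_err_mm.mean()
    sdr = (radial_err_mm <= threshold_mm).mean()  # fraction within 2 mm
    return mre, sdr

# Toy usage with simulated landmarks
rng = np.random.default_rng(0)
gt = rng.uniform(0, 2000, size=(19, 2))        # 19 cephalometric landmarks
pred = gt + rng.normal(0, 10, size=gt.shape)   # simulated AI predictions
print(landmark_metrics(pred, gt))
```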
Affiliation(s)
- Julie Hendrickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Rellyca Sola Gracea
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia
- Michiel Vanheers
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Nicolas Winderickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Flavia Preda
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh 14611, Kingdom of Saudi Arabia
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, 141 04 Stockholm, Sweden
2
Suri A, Mukherjee P, Pickhardt PJ, Summers RM. A Comparison of CT-Based Pancreatic Segmentation Deep Learning Models. Acad Radiol 2024:S1076-6332(24)00373-8. PMID: 38944630; DOI: 10.1016/j.acra.2024.06.015.
Abstract
RATIONALE AND OBJECTIVES Pancreas segmentation accuracy at CT is critical for the identification of pancreatic pathologies and is essential for the development of imaging biomarkers. Our objective was to benchmark the performance of five high-performing pancreas segmentation models across multiple metrics, stratified by scan and patient/pancreatic characteristics that may affect segmentation performance. MATERIALS AND METHODS In this retrospective study, PubMed and ArXiv searches were conducted to identify pancreas segmentation models, which were then evaluated on a set of annotated imaging datasets. Results (Dice score, Hausdorff distance [HD], average surface distance [ASD]) were stratified by contrast status and quartiles of peri-pancreatic attenuation (5 mm region around the pancreas). Multivariate regression was performed to identify imaging characteristics and biomarkers (n = 9) that were significantly associated with Dice score. RESULTS Five pancreas segmentation models were identified: Abdomen Atlas [AAUNet, AASwin, trained on 8448 scans], TotalSegmentator [TS, 1204 scans], nnUNetv1 [MSD-nnUNet, 282 scans], and a U-Net based model for predicting diabetes [DM-UNet, 427 scans]. These were evaluated on 352 CT scans (30 females, 25 males, 297 sex unknown; age 58 ± 7 years [± 1 SD], 327 age unknown) from 2000-2023. Overall, TS, AAUNet, and AASwin were the best performers, with Dice = 80 ± 11%, 79 ± 16%, and 77 ± 18%, respectively (pairwise Sidak tests: not significantly different). AASwin and MSD-nnUNet performed worse (for all metrics) on non-contrast scans (vs contrast, P < .001). The worst performer was DM-UNet (Dice = 67 ± 16%). All algorithms except TS showed lower Dice scores with increasing peri-pancreatic attenuation (P < .01). Multivariate regression showed that non-contrast scans (P < .001, MSD-nnUNet), smaller pancreatic length (P = .005, MSD-nnUNet), and smaller pancreatic height (P = .003, DM-UNet) were associated with lower Dice scores. CONCLUSION The convolutional neural network-based models trained on a diverse set of scans performed best (TS, AAUNet, and AASwin). TS performed equivalently to AAUNet and AASwin with only 14% of the training set size (1204 vs 8448 scans). Though trained on the same dataset, a transformer network (AASwin) had poorer performance on non-contrast scans, whereas its convolutional network counterpart (AAUNet) did not. This study highlights how the aggregate assessment metrics of pancreatic segmentation algorithms seen in other literature are not enough to capture differential performance across common patient and scanning characteristics in clinical populations.
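For reference, the three reported metrics can be computed from a pair of boolean masks along the following lines. This is a generic sketch rather than the study's evaluation code, and the voxel spacing is an assumed parameter:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two non-empty boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances (in physical units) from the surface of mask a to the surface of mask b."""
    surf_a = a ^ binary_erosion(a)  # boundary voxels of a
    surf_b = b ^ binary_erosion(b)
    # Euclidean distance from every voxel to the nearest surface voxel of b
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def asd_hd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average (symmetric) surface distance and Hausdorff distance."""
    d_ab = surface_distances(a, b, spacing)
    d_ba = surface_distances(b, a, spacing)
    asd = (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
    return asd, max(d_ab.max(), d_ba.max())
```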
Affiliation(s)
- Abhinav Suri
- Radiology and Imaging Sciences, National Institutes of Health, Clinical Center, Bethesda, Maryland, USA; David Geffen School of Medicine at UCLA, Los Angeles, California, USA
- Pritam Mukherjee
- Radiology and Imaging Sciences, National Institutes of Health, Clinical Center, Bethesda, Maryland, USA
- Perry J Pickhardt
- University of Wisconsin Madison School of Medicine, Madison, Wisconsin, USA
- Ronald M Summers
- Radiology and Imaging Sciences, National Institutes of Health, Clinical Center, Bethesda, Maryland, USA
3
Wang L, Cheng Y, Meftaul IM, Luo F, Kabir MA, Doyle R, Lin Z, Naidu R. Advancing Soil Health: Challenges and Opportunities in Integrating Digital Imaging, Spectroscopy, and Machine Learning for Bioindicator Analysis. Anal Chem 2024; 96:8109-8123. PMID: 38490962; DOI: 10.1021/acs.analchem.3c05311.
Affiliation(s)
- Liang Wang
- Global Centre for Environmental Remediation, College of Engineering, Science and Environment, University of Newcastle, Callaghan, New South Wales 2308, Australia
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- Ying Cheng
- Global Centre for Environmental Remediation, College of Engineering, Science and Environment, University of Newcastle, Callaghan, New South Wales 2308, Australia
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- Islam Md Meftaul
- Global Centre for Environmental Remediation, College of Engineering, Science and Environment, University of Newcastle, Callaghan, New South Wales 2308, Australia
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- Fang Luo
- Ministry of Education Key Laboratory for Analytical Science of Food Safety and Biology, Fujian Provincial Key Laboratory of Analysis and Detection for Food Safety, Fuzhou University, Fuzhou, Fujian 350108, China
- Muhammad Ashad Kabir
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- School of Computing, Mathematics and Engineering, Charles Sturt University, Bathurst, New South Wales 2795, Australia
- Richard Doyle
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
- Tasmanian Institute of Agriculture (TIA), University of Tasmania, Launceston, Tasmania 7250, Australia
- Zhenyu Lin
- Ministry of Education Key Laboratory for Analytical Science of Food Safety and Biology, Fujian Provincial Key Laboratory of Analysis and Detection for Food Safety, Fuzhou University, Fuzhou, Fujian 350108, China
- Ravi Naidu
- Global Centre for Environmental Remediation, College of Engineering, Science and Environment, University of Newcastle, Callaghan, New South Wales 2308, Australia
- The Cooperative Research Centre for High-Performance Soils, Callaghan, New South Wales 2308, Australia
4
Finnegan RN, Quinn A, Booth J, Belous G, Hardcastle N, Stewart M, Griffiths B, Carroll S, Thwaites DI. Cardiac substructure delineation in radiation therapy - A state-of-the-art review. J Med Imaging Radiat Oncol 2024. PMID: 38757728; DOI: 10.1111/1754-9485.13668.
Abstract
Delineation of cardiac substructures is crucial for a better understanding of radiation-related cardiotoxicities and to facilitate accurate and precise cardiac dose calculation for developing and applying risk models. This review examines recent advancements in cardiac substructure delineation in the radiation therapy (RT) context, aiming to provide a comprehensive overview of the current level of knowledge, challenges and future directions in this evolving field. Imaging used for RT planning presents challenges in reliably visualising cardiac anatomy. Although cardiac atlases and contouring guidelines aid in standardisation and reduction of variability, significant uncertainties remain in defining cardiac anatomy. Coupled with the inherent complexity of the heart, this necessitates auto-contouring for consistent large-scale data analysis and improved efficiency in prospective applications. Auto-contouring models, developed primarily for breast and lung cancer RT, have demonstrated performance comparable to manual contouring, marking a significant milestone in the evolution of cardiac delineation practices. Nevertheless, several key concerns require further investigation. There is an unmet need for expanding cardiac auto-contouring models to encompass a broader range of cancer sites. A shift in focus is needed from ensuring accuracy to enhancing the robustness and accessibility of auto-contouring models. Addressing these challenges is paramount for the integration of cardiac substructure delineation and associated risk models into routine clinical practice, thereby improving the safety of RT for future cancer patients.
Affiliation(s)
- Robert N Finnegan
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, New South Wales, Australia
- Alexandra Quinn
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Jeremy Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, New South Wales, Australia
- Gregg Belous
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Brisbane, Queensland, Australia
- Nicholas Hardcastle
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
- Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Maegan Stewart
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
- Brooke Griffiths
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Susan Carroll
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
- David I Thwaites
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, New South Wales, Australia
- Radiotherapy Research Group, Leeds Institute of Medical Research, St James's Hospital and University of Leeds, Leeds, UK
5
Zheng W, Mu H, Chen Z, Liu J, Xia D, Cheng Y, Jing Q, Lau PM, Tang J, Bi GQ, Wu F, Wang H. NEATmap: a high-efficiency deep learning approach for whole mouse brain neuronal activity trace mapping. Natl Sci Rev 2024; 11:nwae109. PMID: 38831937; PMCID: PMC11145917; DOI: 10.1093/nsr/nwae109.
Abstract
Quantitative analysis of the neurons activated by a specific stimulus in mouse brains is usually a primary step in locating responsive neurons throughout the brain. However, it is challenging to comprehensively and consistently analyze neuronal activity traces in the whole brains of a large cohort of mice from many terabytes of volumetric imaging data. Here, we introduce NEATmap, deep learning-based, high-efficiency, high-precision, and user-friendly software for whole-brain neuronal activity trace mapping by automated segmentation and quantitative analysis of immunofluorescence-labeled c-Fos+ neurons. We applied NEATmap to study brain-wide differentiated neuronal activation in response to physical and psychological stressors in cohorts of mice.
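As a schematic of the counting step only (NEATmap replaces the fixed threshold below with a trained segmentation network), quantifying c-Fos+ components in a volume might look like this:

```python
import numpy as np
from scipy import ndimage as ndi

def count_cfos(volume, threshold):
    """Threshold an immunofluorescence volume and count connected
    c-Fos+ components (6-connectivity by default in 3D)."""
    mask = volume > threshold
    labels, n = ndi.label(mask)
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))  # voxels per component
    return n, sizes

# Toy usage on a random volume standing in for one imaging tile
vol = np.random.default_rng(1).random((64, 64, 64))
n, sizes = count_cfos(vol, threshold=0.999)
```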
Affiliation(s)
- Weijie Zheng
- AHU-IAI AI Joint Laboratory, Anhui University, Hefei 230039, China
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- National Engineering Laboratory for Brain-inspired Intelligence Technology and Application, School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Huawei Mu
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Zhiyi Chen
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- National Engineering Laboratory for Brain-inspired Intelligence Technology and Application, School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Jiajun Liu
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- National Engineering Laboratory for Brain-inspired Intelligence Technology and Application, School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Debin Xia
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- National Engineering Laboratory for Brain-inspired Intelligence Technology and Application, School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Yuxiao Cheng
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Qi Jing
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Pak-Ming Lau
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Interdisciplinary Center for Brain Information, Brain Cognition and Brain Disease Institute, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jin Tang
- AHU-IAI AI Joint Laboratory, Anhui University, Hefei 230039, China
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- Guo-Qiang Bi
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Interdisciplinary Center for Brain Information, Brain Cognition and Brain Disease Institute, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Feng Wu
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- National Engineering Laboratory for Brain-inspired Intelligence Technology and Application, School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Hao Wang
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
- National Engineering Laboratory for Brain-inspired Intelligence Technology and Application, School of Information Science and Technology, University of Science and Technology of China, Hefei 230026, China
6
Gut D, Trombini M, Kucybała I, Krupa K, Rozynek M, Dellepiane S, Tabor Z, Wojciechowski W. Use of superpixels for improvement of inter-rater and intra-rater reliability during annotation of medical images. Med Image Anal 2024; 94:103141. PMID: 38489896; DOI: 10.1016/j.media.2024.103141.
Abstract
In the context of automatic medical image segmentation based on statistical learning, raters' variability of ground truth segmentations in training datasets is a widely recognized issue. Indeed, the reference information is provided by experts, but bias due to their knowledge may affect the quality of the ground truth data, thus hindering the creation of robust and reliable datasets employed in segmentation, classification, or detection tasks. In such a framework, automatic medical image segmentation would significantly benefit from utilizing some form of presegmentation during the training data preparation process, which could lower the impact of experts' knowledge and reduce time-consuming labeling efforts. The present manuscript proposes a superpixel-driven procedure for annotating medical images. Three different superpixel methods, each with two different numbers of superpixels, were evaluated on three different medical segmentation tasks and compared with manual annotations. In the superpixel-based annotation procedure, medical experts interactively select superpixels of interest and apply manual corrections when necessary; the accuracy of the annotations, the time needed to prepare them, and the number of manual corrections are then assessed. This study shows that the proposed procedure reduces inter- and intra-rater variability, leading to more reliable annotation datasets which, in turn, may be beneficial for the development of more robust classification or segmentation models. In addition, the proposed approach reduces the time needed to prepare the annotations.
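As an illustration of the general idea rather than the paper's pipeline, SLIC superpixels from scikit-image can be turned into an annotation mask once a rater selects the relevant regions; the image, segment count, compactness, and selected ids below are all stand-ins:

```python
import numpy as np
from skimage import data, segmentation

# Any 2D grayscale image stands in for a medical slice here.
img = data.camera()

# Oversegment into superpixels (SLIC); n_segments and compactness are tunable.
# channel_axis=None marks the image as grayscale (scikit-image >= 0.19).
sp = segmentation.slic(img, n_segments=400, compactness=0.1, channel_axis=None)

# A rater "annotates" by selecting superpixels; the ids are hypothetical clicks.
selected = [12, 13, 27]
mask = np.isin(sp, selected)  # binary annotation assembled from whole superpixels
```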
Affiliation(s)
- Daniel Gut
- Department of Biocybernetics and Biomedical Engineering, AGH University of Krakow, al. Mickiewicza 30, 30-059 Krakow, Poland
- Marco Trombini
- Department of Electric, Electronic, and Telecommunication Engineering and Naval Architecture - DITEN, Università degli Studi di Genova, Via all'Opera Pia 11, 16145 Genoa, Italy
- Iwona Kucybała
- Department of Radiology, Jagiellonian University Medical College, ul. Kopernika 19, 31-501 Krakow, Poland
- Kamil Krupa
- Department of Radiology, Jagiellonian University Medical College, ul. Kopernika 19, 31-501 Krakow, Poland
- Miłosz Rozynek
- Department of Radiology, Jagiellonian University Medical College, ul. Kopernika 19, 31-501 Krakow, Poland
- Silvana Dellepiane
- Department of Electric, Electronic, and Telecommunication Engineering and Naval Architecture - DITEN, Università degli Studi di Genova, Via all'Opera Pia 11, 16145 Genoa, Italy
- Zbisław Tabor
- Department of Biocybernetics and Biomedical Engineering, AGH University of Krakow, al. Mickiewicza 30, 30-059 Krakow, Poland
- Wadim Wojciechowski
- Department of Radiology, Jagiellonian University Medical College, ul. Kopernika 19, 31-501 Krakow, Poland
7
Jung J, Han J, Han JM, Ko J, Yoon J, Hwang JS, Park JI, Hwang G, Jung JH, Hwang DDJ. Prediction of neovascular age-related macular degeneration recurrence using optical coherence tomography images with a deep neural network. Sci Rep 2024; 14:5854. PMID: 38462646; PMCID: PMC10925587; DOI: 10.1038/s41598-024-56309-6.
Abstract
Neovascular age-related macular degeneration (nAMD) can result in blindness if left untreated, and patients often require repeated anti-vascular endothelial growth factor injections. Although the treat-and-extend method is becoming popular as a way to reduce vision loss attributed to recurrence, it may pose a risk of overtreatment. This study aimed to develop a deep learning model based on DenseNet201 to predict nAMD recurrence within 3 months after confirming dry-up 1 month following three loading injections in treatment-naïve patients. A dataset of 1076 spectral-domain optical coherence tomography (OCT) images from 269 patients diagnosed with nAMD was used. The performance of the model was compared with that of 6 ophthalmologists using 100 randomly selected samples. The DenseNet201-based model achieved 53.0% accuracy in predicting nAMD recurrence using a single pre-injection image and 60.2% accuracy after viewing all the images immediately after the 1st, 2nd, and 3rd injections. The model outperformed the experienced ophthalmologists, whose average accuracy was 52.17% using a single pre-injection image and 53.3% after examining the four images from before and after the three loading injections. In conclusion, the artificial intelligence model demonstrated a promising ability to predict nAMD recurrence using OCT images and outperformed experienced ophthalmologists. These findings suggest that deep learning models can assist in nAMD recurrence prediction, thus improving patient outcomes and optimizing treatment strategies.
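The backbone named in the abstract is available in torchvision; a minimal fine-tuning sketch for the binary recurrence task might look as follows. The hyperparameters, input size, and three-channel replication of OCT B-scans are assumptions, not the authors' settings:

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained DenseNet201 with a new two-class head (recur / no recur)
model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data
x = torch.randn(8, 3, 224, 224)   # a batch of OCT B-scans resized to 224x224
y = torch.randint(0, 2, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```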
Affiliation(s)
- Juho Jung
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- Jinyoung Han
- Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, Korea
- Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Seoul, Korea
- Jeong Mo Han
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
- Kong Eye Center, Seoul, Korea
- Ji In Park
- Department of Medicine, Kangwon National University Hospital, Kangwon National University School of Medicine, Chuncheon, Gangwon-do, Korea
- Gyudeok Hwang
- Department of Ophthalmology, Hangil Eye Hospital, 35 Bupyeong-Daero, Bupyeong-gu, Incheon, 21388, Korea
- Jae Ho Jung
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul, Korea
- Daniel Duck-Jin Hwang
- Department of Ophthalmology, Hangil Eye Hospital, 35 Bupyeong-Daero, Bupyeong-gu, Incheon, 21388, Korea
- Department of Ophthalmology, Catholic Kwandong University College of Medicine, Incheon, Korea
- Lux Mind, Incheon, Korea
8
Kar J, Cohen MV, McQuiston SA, Poorsala T, Malozzi CM. Automated segmentation of the left-ventricle from MRI with a fully convolutional network to investigate CTRCD in breast cancer patients. J Med Imaging (Bellingham) 2024; 11:024003. PMID: 38510543; PMCID: PMC10950093; DOI: 10.1117/1.jmi.11.2.024003.
Abstract
Purpose: The goal of this study was to develop a fully convolutional network (FCN) tool to automatically segment the left-ventricular (LV) myocardium in displacement encoding with stimulated echoes MRI. The segmentation results are used for LV chamber quantification and strain analyses in breast cancer patients susceptible to cancer therapy-related cardiac dysfunction (CTRCD). Approach: A DeepLabV3+ FCN with a ResNet-101 backbone was custom-designed to conduct chamber quantification on 45 female breast cancer datasets (23 training, 11 validation, and 11 test sets). LV structural parameters and LV ejection fraction (LVEF) were measured, and myocardial strains estimated with the radial point interpolation method. Myocardial classification was validated against quantization-based ground truth with computations of accuracy, Dice score, average perpendicular distance (APD), Hausdorff distance, and others. Additional validations were conducted with equivalence tests and Cronbach's alpha (C-α) intraclass correlation coefficients between the FCN and a vendor tool on chamber quantification and myocardial strain computations. Results: Myocardial classification results against ground truth were Dice = 0.89, APD = 2.4 mm, and accuracy = 97% for the validation set, and Dice = 0.90, APD = 2.5 mm, and accuracy = 97% for the test set. The confidence intervals (CI) and two one-sided t-test results of the equivalence tests between the FCN and the vendor tool were CI = -1.36% to 2.42%, p-value < 0.001 for LVEF (58 ± 5% versus 57 ± 6%), and CI = -0.71% to 0.63%, p-value < 0.001 for longitudinal strain (-15 ± 2% versus -15 ± 3%). Conclusions: The validation results were found equivalent to the vendor tool-based parameter estimates, which shows that accurate LV chamber quantification followed by strain analysis for CTRCD investigation can be achieved with our proposed FCN methodology.
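The named architecture can be approximated off the shelf: torchvision ships DeepLabV3 with a ResNet-101 backbone (without the '+' decoder the authors customized), so the following is a close stand-in rather than the study's model:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet101

# Two classes: LV myocardium vs background; trained from scratch in this sketch
model = deeplabv3_resnet101(weights=None, num_classes=2)
model.eval()

# One DENSE-style frame replicated to 3 channels (an assumption for the demo)
x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    logits = model(x)["out"]       # shape (1, 2, 256, 256)
pred = logits.argmax(dim=1)        # binary myocardium mask per pixel
```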
Affiliation(s)
- Julia Kar
- University of South Alabama, Departments of Mechanical Engineering and Pharmacology, Alabama, United States
- Michael V. Cohen
- University of South Alabama, Department of Cardiology, College of Medicine, Alabama, United States
- Samuel A. McQuiston
- University of South Alabama, Department of Radiology, Alabama, United States
- Teja Poorsala
- University of South Alabama, Departments of Oncology and Hematology, Alabama, United States
- Christopher M. Malozzi
- University of South Alabama, Department of Cardiology, College of Medicine, Alabama, United States
9
Lo CM, Wang CC, Hung PH. Interactive content-based image retrieval with deep learning for CT abdominal organ recognition. Phys Med Biol 2024; 69:045004. PMID: 38232396; DOI: 10.1088/1361-6560/ad1f86.
Abstract
Objective. Recognizing the most relevant seven organs in an abdominal computed tomography (CT) slice requires sophisticated knowledge. This study proposed automatically extracting relevant features and applying them in a content-based image retrieval (CBIR) system to provide similar evidence for clinical use. Approach. A total of 2827 abdominal CT slices, including 638 liver, 450 stomach, 229 pancreas, 442 spleen, 362 right kidney, 424 left kidney, and 282 gallbladder tissues, were collected to evaluate the proposed CBIR in the present study. Upon fine-tuning, high-level features used to automatically interpret the differences among the seven organs were extracted via deep learning architectures, including DenseNet, Vision Transformer (ViT), and Swin Transformer v2 (SwinViT). Three images with different annotations were employed in the classification and query. Main results. The resulting performances included the classification accuracy (94%-99%) and retrieval result (0.98-0.99). Considering global features and multiple resolutions, SwinViT performed better than ViT. ViT also benefited from a better receptive field to outperform DenseNet. Additionally, the use of hole images can obtain almost perfect results regardless of which deep learning architectures are used. Significance. The experiment showed that using pretrained deep learning architectures and fine-tuning with enough data can achieve successful recognition of seven abdominal organs. The CBIR system can provide more convincing evidence for recognizing abdominal organs via similarity measurements, which could lead to additional possibilities in clinical practice.
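A generic CBIR skeleton conveys the mechanism described here: embed slices with a pretrained backbone, then rank the database by similarity to the query embedding. The backbone choice (DenseNet-121), input shapes, and random tensors below are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained backbone with the classifier removed, leaving the pooled feature
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Identity()   # outputs the 1024-d pooled vector
backbone.eval()

@torch.no_grad()
def embed(batch):                            # batch: (N, 3, 224, 224) CT slices
    return F.normalize(backbone(batch), dim=1)

database = embed(torch.randn(100, 3, 224, 224))  # stand-in for indexed slices
query = embed(torch.randn(1, 3, 224, 224))
scores = query @ database.T                      # cosine similarities
top5 = scores.topk(5).indices                    # most similar database entries
```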
Affiliation(s)
- Chung-Ming Lo
- Graduate Institute of Library, Information and Archival Studies, National Chengchi University, Taipei, Taiwan
- Chi-Cheng Wang
- Department of Radiology, Mackay Memorial Hospital, Taipei, Taiwan
- Peng-Hsiang Hung
- Department of Radiology, Mackay Memorial Hospital, Taipei, Taiwan
10
Molière S, Hamzaoui D, Granger B, Montagne S, Allera A, Ezziane M, Luzurier A, Quint R, Kalai M, Ayache N, Delingette H, Renard-Penna R. Reference standard for the evaluation of automatic segmentation algorithms: Quantification of inter observer variability of manual delineation of prostate contour on MRI. Diagn Interv Imaging 2024; 105:65-73. PMID: 37822196; DOI: 10.1016/j.diii.2023.08.001.
Abstract
PURPOSE The purpose of this study was to investigate inter-reader variability in manual prostate contour segmentation on magnetic resonance imaging (MRI) examinations and to determine the optimal number of readers required to establish a reliable reference standard. MATERIALS AND METHODS Seven radiologists with various levels of experience independently performed manual segmentation of the prostate contour (whole-gland [WG] and transition zone [TZ]) on 40 prostate MRI examinations obtained in 40 patients. Inter-reader variability in prostate contour delineations was estimated using standard metrics (Dice similarity coefficient [DSC], Hausdorff distance, and volume-based metrics). The impact of the number of readers (from two to seven) on segmentation variability was assessed using pairwise metrics (consistency) and metrics with respect to a reference segmentation (conformity), obtained either with majority voting or with the simultaneous truth and performance level estimation (STAPLE) algorithm. RESULTS The average segmentation DSC for two readers in pairwise comparison was 0.919 for WG and 0.876 for TZ. Variability decreased with the number of readers: the interquartile ranges of the DSC were 0.076 (WG) / 0.021 (TZ) for configurations with two readers, 0.005 (WG) / 0.012 (TZ) for configurations with three readers, and 0.002 (WG) / 0.0037 (TZ) for configurations with six readers. The interquartile range decreased slightly faster between two and three readers than between three and six readers. When using consensus methods, variability often reached its minimum with three readers (with STAPLE, DSC = 0.96 [range: 0.945-0.971] for WG and DSC = 0.94 [range: 0.912-0.957] for TZ), and the interquartile range was minimal for configurations with three readers. CONCLUSION The number of readers affects inter-reader variability, in terms of both inter-reader consistency and conformity to a reference. Variability is minimal with three readers, or three readers represent a tipping point in the evolution of variability, for both pairwise metrics and metrics with respect to a reference. Accordingly, three readers may represent an optimal number to determine references for artificial intelligence applications.
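The consensus step can be sketched with a simple majority vote, with pairwise Dice giving the 'consistency' view (SimpleITK also offers a STAPLE filter for the alternative used in the study); the reader masks below are synthetic:

```python
import numpy as np
from itertools import combinations

def majority_vote(masks):
    """Consensus reference: a voxel is foreground when more than half
    of the readers marked it (a simple alternative to STAPLE)."""
    return np.stack(masks).sum(axis=0) > len(masks) / 2

def pairwise_dsc(masks):
    """Pairwise Dice similarity coefficients across all reader pairs."""
    d = lambda a, b: 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
    return [d(a, b) for a, b in combinations(masks, 2)]

# Toy usage: seven simulated readers of one 3D contour
rng = np.random.default_rng(0)
truth = np.zeros((32, 32, 32), bool)
truth[8:24, 8:24, 8:24] = True
readers = [truth ^ (rng.random(truth.shape) < 0.01) for _ in range(7)]

reference = majority_vote(readers)
dsc_iqr = np.subtract(*np.percentile(pairwise_dsc(readers), [75, 25]))
```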
Affiliation(s)
- Sébastien Molière
- Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France; Breast and Thyroid Imaging Unit, Institut de Cancérologie Strasbourg Europe, 67200, Strasbourg, France; IGBMC, Institut de Génétique et de Biologie Moléculaire et Cellulaire, 67400, Illkirch, France
- Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, 06902, Nice, France
- Benjamin Granger
- Sorbonne Université, INSERM, Institut Pierre Louis d'Epidémiologie et de Santé Publique, IPLESP, AP-HP, Hôpital Pitié Salpêtrière, Département de Santé Publique, 75013, Paris, France
- Sarah Montagne
- Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
- Alexandre Allera
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Malek Ezziane
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Anna Luzurier
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Raphaelle Quint
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Mehdi Kalai
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Nicholas Ayache
- Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Hervé Delingette
- Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Raphaële Renard-Penna
- Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
11
Khabaz K, Yuan K, Pugar J, Jiang D, Sankary S, Dhara S, Kim J, Kang J, Nguyen N, Cao K, Washburn N, Bohr N, Lee CJ, Kindlmann G, Milner R, Pocivavsek L. The geometric evolution of aortic dissections: Predicting surgical success using fluctuations in integrated Gaussian curvature. PLoS Comput Biol 2024; 20:e1011815. PMID: 38306397; PMCID: PMC10866512; DOI: 10.1371/journal.pcbi.1011815.
Abstract
Clinical imaging modalities are a mainstay of modern disease management, but the full utilization of imaging-based data remains elusive. Aortic disease is defined by anatomic scalars quantifying aortic size, even though aortic disease progression initiates complex shape changes. We present an imaging-based geometric descriptor, inspired by fundamental ideas from topology and soft-matter physics, that captures dynamic shape evolution. The aorta is reduced to a two-dimensional mathematical surface in space whose geometry is fully characterized by the local principal curvatures. Disease causes deviation from the smooth bent cylindrical shape of normal aortas, leading to a family of highly heterogeneous surfaces of varying shapes and sizes. To deconvolute changes in shape from size, the shape is characterized using integrated Gaussian curvature or total curvature. The fluctuation in total curvature (δK) across aortic surfaces captures heterogeneous morphologic evolution by characterizing local shape changes. We discover that aortic morphology evolves with a power-law-defined behavior, with rapidly increasing δK forming the hallmark of aortic disease. Divergent δK is seen for highly diseased aortas, indicative of impending topologic catastrophe or aortic rupture. We also show that aortic size (surface area or enclosed aortic volume) scales as a generalized cylinder for all shapes. Classification accuracy for predicting aortic disease state (normal, diseased with successful surgery, and diseased with failed surgical outcomes) is 92.8 ± 1.7%. The analysis of δK can be applied to any three-dimensional geometric structure and thus may be extended to other clinical problems of characterizing disease through captured anatomic changes.
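A small sketch of the descriptor's ingredients, under stated assumptions rather than the authors' pipeline: per-vertex angle defects of a triangulated surface give the discrete integrated Gaussian curvature, their sum obeys Gauss-Bonnet (4π for a sphere), and a δK-style fluctuation can be read off as the spread of the local values:

```python
import trimesh

# Stand-in surface: an icosphere in place of a segmented aortic surface.
mesh = trimesh.creation.icosphere(subdivisions=3, radius=1.0)

# Per-vertex angle defects = discrete integrated Gaussian curvature.
defects = mesh.vertex_defects

total_K = defects.sum()   # Gauss-Bonnet: ~4*pi (~12.566) for any closed sphere-like surface
delta_K = defects.std()   # fluctuation of locally integrated curvature, cf. the paper's δK
print(total_K, delta_K)
```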
Affiliation(s)
- Kameel Khabaz
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Karen Yuan
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Joseph Pugar
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Departments of Material Science and Engineering, Biomedical Engineering, and Chemistry, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- David Jiang
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Seth Sankary
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Sanjeev Dhara
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Junsung Kim
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Janet Kang
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Nhung Nguyen
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Kathleen Cao
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Newell Washburn
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Nicole Bohr
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Cheong Jun Lee
- Department of Surgery, NorthShore University Health System, Evanston, Illinois, United States of America
- Gordon Kindlmann
- Department of Computer Science, The University of Chicago, Chicago, Illinois, United States of America
- Ross Milner
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
- Luka Pocivavsek
- Department of Surgery, The University of Chicago, Chicago, Illinois, United States of America
12
Maloca PM, Pfau M, Janeschitz-Kriegl L, Reich M, Goerdt L, Holz FG, Müller PL, Valmaggia P, Fasler K, Keane PA, Zarranz-Ventura J, Zweifel S, Wiesendanger J, Kaiser P, Enz TJ, Rothenbuehler SP, Hasler PW, Juedes M, Freichel C, Egan C, Tufail A, Scholl HPN, Denk N. Human selection bias drives the linear nature of the more ground truth effect in explainable deep learning optical coherence tomography image segmentation. J Biophotonics 2024; 17:e202300274. PMID: 37795556; DOI: 10.1002/jbio.202300274.
Abstract
Supervised deep learning (DL) algorithms are highly dependent on training data annotated by human graders, for example, for optical coherence tomography (OCT) image annotation. Despite the tremendous success of DL, because they rest on human judgment, these ground truth labels can be inaccurate and/or ambiguous and can introduce a human selection bias. We therefore investigated the impact of the size of the ground truth and of variable numbers of graders on the predictive performance of the same DL architecture, and repeated each experiment three times. The largest training dataset delivered a prediction performance close to that of human experts. All the DL systems utilized were highly consistent. Nevertheless, the DL under-performers could not achieve any further autonomous improvement even after repeated training. Furthermore, a quantifiable linear relationship between ground truth ambiguity and the beneficial effect of having a larger amount of ground truth data was detected and marked as the more-ground-truth effect.
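The reported linear relationship can be pictured with an ordinary least-squares fit; the numbers below are synthetic placeholders, not the study's data:

```python
import numpy as np

# Hypothetical grader disagreement (ambiguity) and the performance gain from
# enlarging the ground truth at each ambiguity level.
ambiguity = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
benefit = np.array([0.01, 0.03, 0.05, 0.08, 0.10])

slope, intercept = np.polyfit(ambiguity, benefit, 1)
print(slope, intercept)  # a positive slope mirrors the "more-ground-truth effect"
```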
Affiliation(s)
- Peter M Maloca
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Maximilian Pfau
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Lucas Janeschitz-Kriegl
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Michael Reich
- Eye Center, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Lukas Goerdt
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Frank G Holz
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Philipp L Müller
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Makula Center, Suedblick Eye Centers, Augsburg, Germany
- Philippe Valmaggia
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Katrin Fasler
- Department of Ophthalmology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Pearse A Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Sandrine Zweifel
- Department of Ophthalmology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Tim J Enz
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Pascal W Hasler
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Marlene Juedes
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche, Innovation Center Basel, Basel, Switzerland
- Christian Freichel
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche, Innovation Center Basel, Basel, Switzerland
- Catherine Egan
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Hendrik P N Scholl
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Nora Denk
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland
- Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
- Pharma Research and Early Development (pRED), Pharmaceutical Sciences (PS), Roche, Innovation Center Basel, Basel, Switzerland
13
Swaity A, Elgarba BM, Morgan N, Ali S, Shujaat S, Borsci E, Chilvarquer I, Jacobs R. Deep learning driven segmentation of maxillary impacted canine on cone beam computed tomography images. Sci Rep 2024; 14:369. PMID: 38172136; PMCID: PMC10764895; DOI: 10.1038/s41598-023-49613-0.
Abstract
The process of creating virtual models of dentomaxillofacial structures through three-dimensional segmentation is a crucial component of most digital dental workflows. This process is typically performed using manual or semi-automated approaches, which can be time-consuming and subject to observer bias. The aim of this study was to train and assess the performance of a convolutional neural network (CNN)-based online cloud platform for automated segmentation of maxillary impacted canines on CBCT images. A total of 100 CBCT images with maxillary canine impactions were randomly allocated into two groups: a training set (n = 50) and a testing set (n = 50). The training set was used to train the CNN model, and the testing set was employed to evaluate the model performance. Both tasks were performed on an online cloud-based platform, 'Virtual patient creator' (Relu, Leuven, Belgium). The performance was assessed using voxel- and surface-based comparison between automated and semi-automated ground truth segmentations. In addition, the time required for segmentation was also calculated. The automated tool showed high performance for segmenting impacted canines, with a Dice similarity coefficient of 0.99 ± 0.02. Moreover, it was 24 times faster than the semi-automated approach. The proposed CNN model achieved fast, consistent, and precise segmentation of maxillary impacted canines.
Affiliation(s)
- Abdullah Swaity
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Prosthodontic Department, King Hussein Medical Center, Jordanian Royal Medical Services, Amman, Jordan
- Bahaaeldeen M Elgarba
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Department of Prosthodontics, Tanta University, Tanta, Egypt
- Nermin Morgan
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Egypt
- Saleem Ali
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Restorative Dentistry Department, King Hussein Medical Center, Jordanian Royal Medical Services, Amman, Jordan
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Elena Borsci
- Oral Diagnostic Clinic, Karolinska Institute, Stockholm, Sweden
- Israel Chilvarquer
- Department of Oral Radiology, School of Dentistry, University of São Paulo (USP), São Paulo, Brazil
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Leuven, Belgium
- Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
14
Schacherer DP, Herrmann MD, Clunie DA, Höfener H, Clifford W, Longabaugh WJR, Pieper S, Kikinis R, Fedorov A, Homeyer A. The NCI Imaging Data Commons as a platform for reproducible research in computational pathology. Comput Methods Programs Biomed 2023; 242:107839. PMID: 37832430; PMCID: PMC10841477; DOI: 10.1016/j.cmpb.2023.107839.
Abstract
BACKGROUND AND OBJECTIVES Reproducibility is a major challenge in developing machine learning (ML)-based solutions in computational pathology (CompPath). The NCI Imaging Data Commons (IDC) provides >120 cancer image collections according to the FAIR principles and is designed to be used with cloud ML services. Here, we explore its potential to facilitate reproducibility in CompPath research. METHODS Using the IDC, we implemented two experiments in which a representative ML-based method for classifying lung tumor tissue was trained and/or evaluated on different datasets. To assess reproducibility, the experiments were run multiple times with separate but identically configured instances of common ML services. RESULTS The results of different runs of the same experiment were reproducible to a large extent. However, we observed occasional, small variations in AUC values, indicating a practical limit to reproducibility. CONCLUSIONS We conclude that the IDC facilitates approaching the reproducibility limit of CompPath research (i) by enabling researchers to reuse exactly the same datasets and (ii) by integrating with cloud ML services so that experiments can be run in identically configured computing environments.
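The reproducibility check described here, re-running an identically configured evaluation and inspecting the spread of AUC values, can be sketched as follows (synthetic data and a stand-in for the pipeline):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)   # fixed evaluation labels

aucs = []
for run in range(3):
    # stand-in for re-running an identically configured training/eval pipeline
    y_score = y_true + rng.normal(0.0, 0.8, size=y_true.size)
    aucs.append(roc_auc_score(y_true, y_score))

print(f"mean AUC = {np.mean(aucs):.3f}, run-to-run spread = {np.ptp(aucs):.4f}")
```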
Affiliation(s)
- Daniela P Schacherer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Markus D Herrmann
- Department of Pathology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Henning Höfener
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- Andrey Fedorov
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- André Homeyer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
15
Fedorov A, Longabaugh WJR, Pot D, Clunie DA, Pieper SD, Gibbs DL, Bridge C, Herrmann MD, Homeyer A, Lewis R, Aerts HJWL, Krishnaswamy D, Thiriveedhi VK, Ciausu C, Schacherer DP, Bontempi D, Pihl T, Wagner U, Farahani K, Kim E, Kikinis R. National Cancer Institute Imaging Data Commons: Toward Transparency, Reproducibility, and Scalability in Imaging Artificial Intelligence. Radiographics 2023; 43:e230180. PMID: 37999984; DOI: 10.1148/rg.230180.
Abstract
The remarkable advances of artificial intelligence (AI) technology are revolutionizing established approaches to the acquisition, interpretation, and analysis of biomedical imaging data. Development, validation, and continuous refinement of AI tools requires easy access to large high-quality annotated datasets, which are both representative and diverse. The National Cancer Institute (NCI) Imaging Data Commons (IDC) hosts large and diverse publicly available cancer image data collections. By harmonizing all data based on industry standards and colocalizing it with analysis and exploration resources, the IDC aims to facilitate the development, validation, and clinical translation of AI tools and address the well-documented challenges of establishing reproducible and transparent AI processing pipelines. Balanced use of established commercial products with open-source solutions, interconnected by standard interfaces, provides value and performance, while preserving sufficient agility to address the evolving needs of the research community. Emphasis on the development of tools, use cases to demonstrate the utility of uniform data representation, and cloud-based analysis aim to ease adoption and help define best practices. Integration with other data in the broader NCI Cancer Research Data Commons infrastructure opens opportunities for multiomics studies incorporating imaging data to further empower the research community to accelerate breakthroughs in cancer detection, diagnosis, and treatment. Published under a CC BY 4.0 license.
Collapse
Affiliation(s)
- Andrey Fedorov, William J R Longabaugh, David Pot, David A Clunie, Steven D Pieper, David L Gibbs, Christopher Bridge, Markus D Herrmann, André Homeyer, Rob Lewis, Hugo J W L Aerts, Deepa Krishnaswamy, Vamsi Krishna Thiriveedhi, Cosmin Ciausu, Daniela P Schacherer, Dennis Bontempi, Todd Pihl, Ulrike Wagner, Keyvan Farahani, Erika Kim, Ron Kikinis
- From the Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 399 Revolution Dr, Somerville, MA 02145 (A.F., D.K., V.K.T., C.C., R.K.); Institute for Systems Biology, Seattle, Wash (W.J.R.L., D.L.G.); General Dynamics Information Technology, Rockville, Md (D.P.); PixelMed Publishing, Bangor, Pa (D.A.C.); Isomics, Cambridge, Mass (S.D.P.); Departments of Radiology (C.B.) and Pathology (M.D.H.), Massachusetts General Hospital and Harvard Medical School, Boston, Mass; Fraunhofer MEVIS, Bremen, Germany (A.H., D.P.S.); Radical Imaging, Boston, Mass (R.L.); Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, Mass (H.J.W.L.A., D.B.); Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands (H.J.W.L.A., D.B.); Frederick National Laboratory for Cancer Research, Rockville, Md (T.P., U.W.); and National Cancer Institute, Bethesda, Md (K.F., E.K.)
| |
|
16
|
Burke J, Pugh D, Farrah T, Hamid C, Godden E, MacGillivray TJ, Dhaun N, Baillie JK, King S, MacCormick IJC. Evaluation of an Automated Choroid Segmentation Algorithm in a Longitudinal Kidney Donor and Recipient Cohort. Transl Vis Sci Technol 2023; 12:19. [PMID: 37975844 PMCID: PMC10668611 DOI: 10.1167/tvst.12.11.19] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Accepted: 10/16/2023] [Indexed: 11/19/2023] Open
Abstract
Purpose To evaluate the performance of an automated choroid segmentation algorithm in optical coherence tomography (OCT) data using a longitudinal kidney donor and recipient cohort. Methods We assessed 22 donors and 23 patients requiring renal transplantation for up to 1 year posttransplant. We measured choroidal thickness (CT) and area and compared our automated CT measurements to manual ones at the same locations. We estimated associations between choroidal measurements and markers of renal function (estimated glomerular filtration rate [eGFR], serum creatinine, and urea) using correlation and linear mixed-effects (LME) modeling. Results There was good agreement between manual and automated CT. Automated measures were more precise because of smaller measurement error over time. External adjudication of major discrepancies was in favor of automated measures. Significant differences were observed in the choroid pre- and posttransplant in both cohorts, and LME modeling revealed significant linear associations between choroidal measures and renal function in recipients. Associations were mostly stronger with automated CT (eGFR, P < 0.001; creatinine, P = 0.004; urea, P = 0.04) than with manual CT (eGFR, P = 0.002; creatinine, P = 0.01; urea, P = 0.03). Conclusions Our automated approach has greater precision than human-performed manual measurements, which may explain the stronger associations with renal function. To improve detection of meaningful associations with clinical endpoints in longitudinal OCT studies, reducing measurement error should be a priority, and automated measurements help achieve this. Translational Relevance We introduce a novel choroid segmentation algorithm that can replace manual grading for studying the choroid in renal disease and other clinical conditions.
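The LME analysis described above can be reproduced in outline with statsmodels. The sketch below is illustrative only (not the authors' code), with hypothetical file and column names: choroidal thickness regressed on eGFR, with a random intercept per patient to account for repeated visits.

```python
# Hedged sketch of a linear mixed-effects model linking choroidal thickness
# (CT) to renal function across repeated visits. Names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("choroid_measurements.csv")  # columns: patient, visit, ct_um, egfr

model = smf.mixedlm("ct_um ~ egfr", data=df, groups=df["patient"])
result = model.fit()
print(result.summary())  # the egfr coefficient estimates the CT-renal function association
```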
Affiliation(s)
- Jamie Burke
- School of Mathematics, University of Edinburgh, College of Science and Engineering, Edinburgh, UK
| | - Dan Pugh
- University/BHF Centre for Cardiovascular Science, The Queen's Medical Research Institute, University of Edinburgh, Edinburgh, UK
| | - Tariq Farrah
- University/BHF Centre for Cardiovascular Science, The Queen's Medical Research Institute, University of Edinburgh, Edinburgh, UK
| | - Charlene Hamid
- Imaging Facility, University of Edinburgh, The Queen's Medical Research Institute, Edinburgh, UK
| | - Emily Godden
- Emergency Department, Royal Infirmary of Edinburgh, Edinburgh, UK
| | | | - Neeraj Dhaun
- University/BHF Centre for Cardiovascular Science, The Queen's Medical Research Institute, University of Edinburgh, Edinburgh, UK
| | - J. Kenneth Baillie
- Deanery of Clinical Sciences, University of Edinburgh, College of Medicine and Veterinary Medicine, Edinburgh, UK
| | - Stuart King
- School of Mathematics, University of Edinburgh, College of Science and Engineering, Edinburgh, UK
| | - Ian J. C. MacCormick
- Centre for Inflammation Research, The Queen's Medical Research Institute, University of Edinburgh, Edinburgh, UK
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
| |
|
17
|
Escudero Sanchez L, Buddenkotte T, Al Sa’d M, McCague C, Darcy J, Rundo L, Samoshkin A, Graves MJ, Hollamby V, Browne P, Crispin-Ortuzar M, Woitek R, Sala E, Schönlieb CB, Doran SJ, Öktem O. Integrating Artificial Intelligence Tools in the Clinical Research Setting: The Ovarian Cancer Use Case. Diagnostics (Basel) 2023; 13:2813. [PMID: 37685352 PMCID: PMC10486639 DOI: 10.3390/diagnostics13172813] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 07/31/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023] Open
Abstract
Artificial intelligence (AI) methods applied to healthcare problems have shown enormous potential to alleviate the burden of health services worldwide and to improve the accuracy and reproducibility of predictions. In particular, developments in computer vision are creating a paradigm shift in the analysis of radiological images, where AI tools are already capable of automatically detecting and precisely delineating tumours. However, such tools are generally developed in technical departments that remain siloed from the settings where their real benefit would be achieved. Significant effort is still needed to make these advancements available, first in academic clinical research and ultimately in the clinical setting. In this paper, we demonstrate a prototype pipeline built entirely on free, open-source software that bridges this gap: it simplifies the integration of tools and models developed within the AI community into the clinical research setting and provides an accessible platform with visualisation applications that allow end users such as radiologists to view and interact with the output of these AI tools.
Affiliation(s)
- Lorena Escudero Sanchez
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
| | - Thomas Buddenkotte
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK
- Department for Diagnostic and Interventional Radiology and Nuclear Medicine, University Hospital Hamburg-Eppendorf, 20246 Hamburg, Germany
- Jung Diagnostics GmbH, 22335 Hamburg, Germany
| | - Mohammad Al Sa’d
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
- Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College, London SW7 2AZ, UK
| | - Cathal McCague
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ, UK
| | - James Darcy
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
- Division of Radiotherapy and Imaging, Institute of Cancer Research, London SW7 3RP, UK
| | - Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, 84084 Fisciano, Italy
| | - Alex Samoshkin
- Office for Translational Research, School of Clinical Medicine, University of Cambridge, Cambridge CB2 0SP, UK
| | - Martin J. Graves
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ, UK
| | - Victoria Hollamby
- Research and Information Governance, School of Clinical Medicine, University of Cambridge, Cambridge CB2 0SP, UK
| | - Paul Browne
- High Performance Computing Department, University of Cambridge, Cambridge CB3 0RB, UK
| | - Mireia Crispin-Ortuzar
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- Department of Oncology, University of Cambridge, Cambridge CB2 0XZ, UK
| | - Ramona Woitek
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- Research Centre for Medical Image Analysis and Artificial Intelligence (MIAAI), Department of Medicine, Faculty of Medicine and Dentistry, Danube Private University, 3500 Krems, Austria
| | - Evis Sala
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, Li Ka Shing Centre, Cambridge CB2 0RE, UK
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
- Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ, UK
- Dipartimento di Scienze Radiologiche ed Ematologiche, Universita Cattolica del Sacro Cuore, 00168 Rome, Italy
- Dipartimento Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
| | - Carola-Bibiane Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK
| | - Simon J. Doran
- National Cancer Imaging Translational Accelerator (NCITA) Consortium, UK
- Division of Radiotherapy and Imaging, Institute of Cancer Research, London SW7 3RP, UK
| | - Ozan Öktem
- Department of Mathematics, KTH-Royal Institute of Technology, SE-100 44 Stockholm, Sweden
| |
|
18
|
Järnstedt J, Sahlsten J, Jaskari J, Kaski K, Mehtonen H, Hietanen A, Sundqvist O, Varjonen V, Mattila V, Prapayasatok S, Nalampang S. Reproducibility analysis of automated deep learning based localisation of mandibular canals on a temporal CBCT dataset. Sci Rep 2023; 13:14159. [PMID: 37644067 PMCID: PMC10465591 DOI: 10.1038/s41598-023-40516-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 08/11/2023] [Indexed: 08/31/2023] Open
Abstract
Preoperative radiological identification of mandibular canals is essential for maxillofacial surgery. This study demonstrates the reproducibility of a deep learning system (DLS) by evaluating its localisation performance on 165 heterogeneous cone beam computed tomography (CBCT) scans from 72 patients in comparison to an experienced radiologist's annotations. We evaluated the performance of the DLS using the symmetric mean curve distance (SMCD), the average symmetric surface distance (ASSD), and the Dice similarity coefficient (DSC). The reproducibility of the SMCD was assessed using the within-subject coefficient of repeatability (RC). Three other experts rated the diagnostic validity twice using a 0-4 Likert scale. The reproducibility of the Likert scoring was assessed using the repeatability measure (RM). The RC of SMCD was 0.969 mm, the median (interquartile range) SMCD and ASSD were 0.643 (0.186) mm and 0.351 (0.135) mm, respectively, and the mean (standard deviation) DSC was 0.548 (0.138). The DLS performance was most affected by postoperative changes. The RM of the Likert scoring was 0.923 for the radiologist and 0.877 for the DLS. The mean (standard deviation) Likert score was 3.94 (0.27) for the radiologist and 3.84 (0.65) for the DLS. The DLS demonstrated proficient qualitative and quantitative reproducibility, temporal generalisability, and clinical validity.
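Two of the reported agreement metrics are straightforward to compute from voxel masks; the sketch below shows the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD) via distance transforms. This is a generic illustration assuming non-empty binary masks; the paper's symmetric mean curve distance (SMCD) is specific to curve-like structures and is not reproduced here.

```python
# Hedged sketch of DSC and ASSD between two binary 3D masks (numpy arrays).
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def surface(mask: np.ndarray) -> np.ndarray:
    return mask & ~ndimage.binary_erosion(mask)  # boundary voxels

def assd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    da = ndimage.distance_transform_edt(~sa, sampling=spacing)  # distance to A's surface
    db = ndimage.distance_transform_edt(~sb, sampling=spacing)  # distance to B's surface
    return (db[sa].sum() + da[sb].sum()) / (sa.sum() + sb.sum())
```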
Affiliation(s)
- Jorma Järnstedt
- Medical Imaging Centre, Department of Radiology, Tampere University Hospital, Teiskontie 35, 33520, Tampere, Finland.
- The Graduate School, Chiang Mai University, 239 Huaykaew Road, Suthep, Mueang, Chiang Mai, Thailand.
| | - Jaakko Sahlsten
- Aalto University School of Science, Otakaari 1, 02150, Espoo, Finland
| | - Joel Jaskari
- Aalto University School of Science, Otakaari 1, 02150, Espoo, Finland
| | - Kimmo Kaski
- Aalto University School of Science, Otakaari 1, 02150, Espoo, Finland.
- Alan Turing Institute, British Library, 96 Euston Rd, London, NW1 2DB, UK.
| | - Helena Mehtonen
- Medical Imaging Centre, Department of Radiology, Tampere University Hospital, Teiskontie 35, 33520, Tampere, Finland
| | - Ari Hietanen
- Planmeca Oy, Asentajankatu 6, 00880, Helsinki, Finland
| | | | - Vesa Varjonen
- Planmeca Oy, Asentajankatu 6, 00880, Helsinki, Finland
| | - Vesa Mattila
- Planmeca Oy, Asentajankatu 6, 00880, Helsinki, Finland
| | - Sangsom Prapayasatok
- Division of Oral and Maxillofacial Radiology, Faculty of Dentistry, Chiang Mai University, Suthep Rd., T. Suthep, A. Muang, Chiang Mai, Thailand
| | - Sakarat Nalampang
- Division of Oral and Maxillofacial Radiology, Faculty of Dentistry, Chiang Mai University, Suthep Rd., T. Suthep, A. Muang, Chiang Mai, Thailand
| |
|
19
|
Booker EP, Paak M, Negahdar M. Quantitative Assessment of COVID-19 Lung Disease Severity: A Segmentation-based Approach. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082954 DOI: 10.1109/embc40787.2023.10340181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
We present the use of mean Hounsfield units within the lungs as a metric of disease severity for comparing image analysis models in patients with COPD and COVID-19. We used this metric to assess the performance of a novel 3D global context attention network for image segmentation that produces lung masks from thoracic HRCT scans. Results showed that mean Hounsfield units enable a detailed comparison of our 3D implementation of the GC-Net model with the V-Net segmentation algorithm. We implemented a biomimetic data augmentation strategy and used a quantitative severity metric to assess its performance. Framing our investigation around lung segmentation for patients with respiratory diseases allows analysis of the strengths and weaknesses of the implemented models in this context. Clinical Relevance - Mean Hounsfield units within the lung volume can be used as an objective measure of respiratory disease severity for the comparison of CT scan analysis algorithms.
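The severity metric itself reduces to a masked mean over the CT volume. The following is a minimal sketch under assumed file names, not the authors' pipeline:

```python
# Hedged sketch: mean Hounsfield units (HU) inside a lung mask as a severity score.
# File names are placeholders; any NIfTI/DICOM reader yielding HU arrays works.
import numpy as np
import SimpleITK as sitk

ct = sitk.GetArrayFromImage(sitk.ReadImage("chest_ct.nii.gz"))         # voxel HU values
mask = sitk.GetArrayFromImage(sitk.ReadImage("lung_mask.nii.gz")) > 0  # binary lung mask

mean_hu = float(ct[mask].mean())
print(f"Mean lung HU: {mean_hu:.1f}")  # less negative values suggest denser, more affected lung
```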
|
20
|
Mirikharaji Z, Abhishek K, Bissoto A, Barata C, Avila S, Valle E, Celebi ME, Hamarneh G. A survey on deep learning for skin lesion segmentation. Med Image Anal 2023; 88:102863. [PMID: 37343323 DOI: 10.1016/j.media.2023.102863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 02/01/2023] [Accepted: 05/31/2023] [Indexed: 06/23/2023]
Abstract
Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works and from a systematic viewpoint, examining how those choices have influenced current trends and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.
Affiliation(s)
- Zahra Mirikharaji
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
| | - Kumar Abhishek
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
| | - Alceu Bissoto
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
| | - Catarina Barata
- Institute for Systems and Robotics, Instituto Superior Técnico, Avenida Rovisco Pais, Lisbon 1049-001, Portugal
| | - Sandra Avila
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
| | - Eduardo Valle
- RECOD.ai Lab, School of Electrical and Computing Engineering, University of Campinas, Av. Albert Einstein 400, Campinas 13083-952, Brazil
| | - M Emre Celebi
- Department of Computer Science and Engineering, University of Central Arkansas, 201 Donaghey Ave., Conway, AR 72035, USA.
| | - Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada.
| |
|
21
|
Morgan N, Meeus J, Shujaat S, Cortellini S, Bornstein MM, Jacobs R. CBCT for Diagnostics, Treatment Planning and Monitoring of Sinus Floor Elevation Procedures. Diagnostics (Basel) 2023; 13:1684. [PMID: 37238169 DOI: 10.3390/diagnostics13101684] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Revised: 05/05/2023] [Accepted: 05/06/2023] [Indexed: 05/28/2023] Open
Abstract
Sinus floor elevation (SFE) is a standard surgical technique used to compensate for alveolar bone resorption in the posterior maxilla. Such a surgical procedure requires radiographic imaging pre- and postoperatively for diagnosis, treatment planning, and outcome assessment. Cone beam computed tomography (CBCT) has become a well-established imaging modality in the dentomaxillofacial region. This narrative review aims to provide clinicians with an overview of the role of three-dimensional (3D) CBCT imaging in the diagnostics, treatment planning, and postoperative monitoring of SFE procedures. CBCT imaging prior to SFE provides surgeons with a more detailed view of the surgical site, allows potential pathologies to be detected three-dimensionally, and helps to plan the procedure virtually and more precisely while reducing patient morbidity. In addition, it serves as a useful follow-up tool for assessing sinus and bone graft changes. Meanwhile, the use of CBCT imaging has to be standardized and justified on the basis of recognized diagnostic imaging guidelines, taking into account both technical and clinical considerations. Future studies are recommended to incorporate artificial intelligence-based solutions for automating and standardizing the diagnostic and decision-making process in the context of SFE procedures, to further improve the standards of patient care.
Affiliation(s)
- Nermin Morgan
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura 35516, Egypt
| | - Jan Meeus
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Campus Sint-Rafael, 3000 Leuven, Belgium
| | - Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Campus Sint-Rafael, 3000 Leuven, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh 11426, Saudi Arabia
| | - Simone Cortellini
- Department of Oral Health Sciences, Section of Periodontology, KU Leuven, 3000 Leuven, Belgium
- Department of Dentistry, University Hospitals Leuven, KU Leuven, 3000 Leuven, Belgium
| | - Michael M Bornstein
- Department of Oral Health & Medicine, University Center for Dental Medicine Basel UZB, University of Basel, 4058 Basel, Switzerland
| | - Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Campus Sint-Rafael, 3000 Leuven, Belgium
- Department of Dental Medicine, Karolinska Institute, 141 04 Huddinge, Sweden
| |
|
22
|
Manual Versus Artificial Intelligence-Based Segmentations as a Pre-processing Step in Whole-body PET Dosimetry Calculations. Mol Imaging Biol 2023; 25:435-441. [PMID: 36195742 PMCID: PMC10006025 DOI: 10.1007/s11307-022-01775-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Revised: 09/12/2022] [Accepted: 09/19/2022] [Indexed: 10/10/2022]
Abstract
PURPOSE As novel tracers are continuously under development, it is important to obtain reliable radiation dose estimates to optimize the amount of activity that can be administered while keeping radiation burden within acceptable limits. Organ segmentation is required for quantification of specific uptake in organs of interest and whole-body dosimetry but is a time-consuming task which induces high interobserver variability. Therefore, we explored using manual segmentations versus an artificial intelligence (AI)-based automated segmentation tool as a pre-processing step for calculating whole-body effective doses to determine the influence of variability in volumetric whole-organ segmentations on dosimetry. PROCEDURES PET/CT data of six patients undergoing imaging with 89Zr-labelled pembrolizumab were included. Manual organ segmentations were performed, using in-house developed software, and biodistribution information was obtained. Based on the activity biodistribution information, residence times were calculated. The residence times served as input for OLINDA/EXM version 1.0 (Vanderbilt University, 2003) to calculate the whole-body effective dose (mSv/MBq). Subsequently, organ segmentations were performed using RECOMIA, a cloud-based AI platform for nuclear medicine and radiology research. The workflow for calculating residence times and whole-body effective doses, as described above, was repeated. RESULTS Data were acquired on days 2, 4, and 7 post-injection, resulting in 18 scans. Overall analysis time per scan was approximately 4 h for manual segmentations compared to ≤ 30 min using AI-based segmentations. Median Jaccard similarity coefficients between manual and AI-based segmentations varied from 0.05 (range 0.00-0.14) for the pancreas to 0.78 (range 0.74-0.82) for the lungs. Whole-body effective doses differed minimally for the six patients with a median difference in received mSv/MBq of 0.52% (range 0.15-1.95%). CONCLUSION This pilot study suggests that whole-body dosimetry calculations can benefit from fast, automated AI-based whole organ segmentations.
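The reported agreement between manual and AI-based organ masks uses the Jaccard similarity coefficient, sketched generically below (not the study's code):

```python
# Hedged sketch: Jaccard similarity coefficient between two binary organ masks.
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Toy example with 3D masks:
manual = np.zeros((4, 4, 4), dtype=bool); manual[1:3, 1:3, 1:3] = True
auto = np.zeros((4, 4, 4), dtype=bool); auto[1:3, 1:4, 1:3] = True
print(f"Jaccard = {jaccard(manual, auto):.2f}")  # 0.67 for these toy masks
```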
|
23
|
Kushwaha A, Mourad RF, Heist K, Tariq H, Chan HP, Ross BD, Chenevert TL, Malyarenko D, Hadjiiski LM. Improved Repeatability of Mouse Tibia Volume Segmentation in Murine Myelofibrosis Model Using Deep Learning. Tomography 2023; 9:589-602. [PMID: 36961007 PMCID: PMC10037585 DOI: 10.3390/tomography9020048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 03/02/2023] [Accepted: 03/03/2023] [Indexed: 03/09/2023] Open
Abstract
A murine model of myelofibrosis in tibia was used in a co-clinical trial to evaluate segmentation methods for application of image-based biomarkers to assess disease status. The dataset (32 mice with 157 3D MRI scans including 49 test-retest pairs scanned on consecutive days) was split into approximately 70% training, 10% validation, and 20% test subsets. Two expert annotators (EA1 and EA2) performed manual segmentations of the mouse tibia (EA1: all data; EA2: test and validation). Attention U-net (A-U-net) model performance was assessed for accuracy with respect to the EA1 reference using the average Jaccard index (AJI), volume intersection ratio (AVI), volume error (AVE), and Hausdorff distance (AHD) for four training scenarios: full training, two half-splits, and a single-mouse subset. The repeatability of computer versus expert segmentations for tibia volume of test-retest pairs was assessed by the within-subject coefficient of variance (%wCV). A-U-net models trained on the full and half-split training sets achieved similar average accuracy (with respect to EA1 annotations) on the test set: AJI = 83-84%, AVI = 89-90%, AVE = 2-3%, and AHD = 0.5 mm-0.7 mm, exceeding EA2 accuracy: AJI = 81%, AVI = 83%, AVE = 14%, and AHD = 0.3 mm. The A-U-net model repeatability, wCV [95% CI]: 3 [2, 5]%, was notably better than that of expert annotators EA1: 5 [4, 9]% and EA2: 8 [6, 13]%. The developed deep learning model effectively automates murine bone marrow segmentation with accuracy comparable to human annotators and substantially improved repeatability.
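The repeatability statistic used above can be sketched as follows; this is one common definition of %wCV (root of the mean within-subject variance over the grand mean), not necessarily the exact estimator used in the paper:

```python
# Hedged sketch: within-subject coefficient of variation (%wCV) from
# test-retest measurement pairs (e.g., tibia volumes on consecutive days).
import numpy as np

def wcv_percent(test: np.ndarray, retest: np.ndarray) -> float:
    pairs = np.stack([test, retest], axis=1)
    within_var = pairs.var(axis=1, ddof=1)  # per-subject variance of the two replicates
    return 100.0 * np.sqrt(within_var.mean()) / pairs.mean()

test = np.array([10.2, 9.8, 11.1, 10.5])    # toy volumes
retest = np.array([10.4, 9.5, 11.3, 10.2])
print(f"wCV = {wcv_percent(test, retest):.1f}%")
```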
|
24
|
Zhou T, Ruan S, Hu H. A literature survey of MR-based brain tumor segmentation with missing modalities. Comput Med Imaging Graph 2023; 104:102167. [PMID: 36584536 DOI: 10.1016/j.compmedimag.2022.102167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 11/01/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022]
Abstract
Multimodal MR brain tumor segmentation is one of the most active research topics in medical image processing. However, acquiring the complete set of MR modalities is not always possible in clinical practice, owing to acquisition protocols, image corruption, scanner availability, scanning cost, or allergies to certain contrast materials. The missing information can hamper brain tumor diagnosis, monitoring, treatment planning, and prognosis. Thus, it is highly desirable to develop brain tumor segmentation methods that address the missing-modality problem. Building on recent advancements, this review provides a detailed analysis of the missing-modality issue in MR-based brain tumor segmentation. First, we briefly introduce the biomedical background concerning brain tumors, MR imaging techniques, and the current challenges in brain tumor segmentation. Then, we provide a taxonomy of the state-of-the-art methods with five categories, namely, image synthesis-based methods, latent feature space-based models, multi-source correlation-based methods, knowledge distillation-based methods, and domain adaptation-based methods. In addition, the principles, architectures, benefits, and limitations of each method are elaborated. Following that, the corresponding datasets and widely used evaluation metrics are described. Finally, we analyze the current challenges and provide a prospect for future development trends. This review aims to provide readers with a thorough knowledge of the recent contributions in the field of brain tumor segmentation with missing modalities and to suggest potential future directions.
Affiliation(s)
- Tongxue Zhou
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
| | - Su Ruan
- Université de Rouen Normandie, LITIS - QuantIF, Rouen 76183, France
| | - Haigen Hu
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China; Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, China.
| |
|
25
|
Rajaraman S, Yang F, Zamzmi G, Xue Z, Antani S. Assessing the Impact of Image Resolution on Deep Learning for TB Lesion Segmentation on Frontal Chest X-rays. Diagnostics (Basel) 2023; 13:747. [PMID: 36832235 PMCID: PMC9955202 DOI: 10.3390/diagnostics13040747] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Revised: 02/10/2023] [Accepted: 02/15/2023] [Indexed: 02/18/2023] Open
Abstract
Deep learning (DL) models are state-of-the-art in segmenting anatomical and disease regions of interest (ROIs) in medical images. In particular, a large number of DL-based techniques have been reported using chest X-rays (CXRs). However, these models are reportedly trained on reduced image resolutions, owing to limited computational resources. The literature is sparse on the optimal image resolution for training these models to segment tuberculosis (TB)-consistent lesions in CXRs. In this study, we investigated performance variations of an Inception-V3 UNet model across various image resolutions, with and without lung ROI cropping and aspect ratio adjustments, and identified the optimal image resolution through extensive empirical evaluations to improve TB-consistent lesion segmentation performance. We used the Shenzhen CXR dataset, which includes 326 normal patients and 336 TB patients. We proposed a combinatorial approach consisting of storing model snapshots, optimizing the segmentation threshold and test-time augmentation (TTA), and averaging the snapshot predictions to further improve performance at the optimal resolution. Our experimental results demonstrate that higher image resolutions are not always necessary; however, identifying the optimal image resolution is critical to achieving superior performance.
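The combinatorial strategy described above (snapshot averaging plus test-time augmentation, then thresholding) can be sketched framework-agnostically; the function below illustrates the idea and is not the authors' implementation:

```python
# Hedged sketch: average predictions over model snapshots and a horizontal-flip
# TTA view, then apply a tuned segmentation threshold.
import numpy as np

def predict_with_snapshots_and_tta(models, image: np.ndarray, threshold: float = 0.5):
    """`models` is a list of callables mapping an HxW image to an HxW probability map."""
    probs = []
    for model in models:                                  # snapshot ensembling
        probs.append(model(image))                        # identity view
        probs.append(np.fliplr(model(np.fliplr(image))))  # flip TTA, mapped back
    mean_prob = np.mean(probs, axis=0)
    return (mean_prob >= threshold).astype(np.uint8)
```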
|
26
|
Isaksson LJ, Pepa M, Summers P, Zaffaroni M, Vincini MG, Corrao G, Mazzola GC, Rotondi M, Lo Presti G, Raimondi S, Gandini S, Volpe S, Haron Z, Alessi S, Pricolo P, Mistretta FA, Luzzago S, Cattani F, Musi G, Cobelli OD, Cremonesi M, Orecchia R, Marvaso G, Petralia G, Jereczek-Fossa BA. Comparison of automated segmentation techniques for magnetic resonance images of the prostate. BMC Med Imaging 2023; 23:32. [PMID: 36774463 PMCID: PMC9921124 DOI: 10.1186/s12880-023-00974-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Accepted: 01/20/2023] [Indexed: 02/13/2023] Open
Abstract
BACKGROUND Contouring of anatomical regions is a crucial step in the medical workflow and is both time-consuming and prone to intra- and inter-observer variability. This study compares different strategies for automatic segmentation of the prostate in T2-weighted MRIs. METHODS This study included 100 patients diagnosed with prostate adenocarcinoma who had undergone multi-parametric MRI and prostatectomy. From the T2-weighted MR images, ground truth segmentation masks were established by consensus from two expert radiologists. The prostate was then automatically contoured with six different methods: (1) a multi-atlas algorithm, (2) a proprietary algorithm in the syngo.via medical imaging software, and four deep learning models: (3) a V-net trained from scratch, (4) a pre-trained 2D U-net, (5) a GAN extension of the 2D U-net, and (6) a segmentation-adapted EfficientDet architecture. The resulting segmentations were compared and scored against the ground truth masks with one 70/30 and one 50/50 train/test data split. We also analyzed the association between segmentation performance and clinical variables. RESULTS The best performing method was the adapted EfficientDet (model 6), achieving a mean Dice coefficient of 0.914, a mean absolute volume difference of 5.9%, a mean surface distance (MSD) of 1.93 pixels, and a mean 95th percentile Hausdorff distance of 3.77 pixels. The deep learning models were less prone to serious errors (0.854 minimum Dice and 4.02 maximum MSD), and no significant relationship was found between segmentation performance and clinical variables. CONCLUSIONS Deep learning-based segmentation techniques can consistently achieve Dice coefficients of 0.9 or above with as few as 50 training patients, regardless of architectural archetype. The atlas-based and syngo.via methods found in commercial clinical software performed significantly worse (0.855-0.887 Dice).
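Among the reported boundary metrics, the 95th percentile Hausdorff distance can be sketched for 2D masks in pixel units (matching the units reported above); this is a generic illustration, not the study's evaluation code:

```python
# Hedged sketch: 95th percentile Hausdorff distance (HD95) between the
# boundaries of two binary 2D masks, in pixels. Assumes non-empty masks.
import numpy as np
from scipy import ndimage

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    sa = a & ~ndimage.binary_erosion(a)       # boundary of mask A
    sb = b & ~ndimage.binary_erosion(b)       # boundary of mask B
    da = ndimage.distance_transform_edt(~sa)  # distance to A's boundary
    db = ndimage.distance_transform_edt(~sb)  # distance to B's boundary
    dists = np.concatenate([db[sa], da[sb]])  # symmetric boundary distances
    return float(np.percentile(dists, 95))
```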
Affiliation(s)
- Lars Johannes Isaksson
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Matteo Pepa
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Paul Summers
- Division of Radiology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Mattia Zaffaroni
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Maria Giulia Vincini
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Giulia Corrao
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Giovanni Carlo Mazzola
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
| | - Marco Rotondi
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
| | - Giuliana Lo Presti
- Molecular and Pharmaco-Epidemiology Unit, Department of Experimental Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Sara Raimondi
- Molecular and Pharmaco-Epidemiology Unit, Department of Experimental Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Sara Gandini
- Molecular and Pharmaco-Epidemiology Unit, Department of Experimental Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Stefania Volpe
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
| | - Zaharudin Haron
- Radiology Department, National Cancer Institute, Putrajaya, Malaysia
| | - Sarah Alessi
- Division of Radiology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Paola Pricolo
- Division of Radiology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Francesco Alessandro Mistretta
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Division of Urology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Stefano Luzzago
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Division of Urology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Federica Cattani
- Medical Physics Unit, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Gennaro Musi
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Division of Urology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Ottavio De Cobelli
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Division of Urology, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Marta Cremonesi
- Radiation Research Unit, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Roberto Orecchia
- Scientific Direction, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Giulia Marvaso
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
| | - Giuseppe Petralia
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Precision Imaging and Research Unit, Department of Medical Imaging and Radiation Sciences, IEO European Institute of Oncology IRCCS, Milan, Italy
| | - Barbara Alicja Jereczek-Fossa
- Department of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
| |
|
27
|
Andrade KM, Silva BPM, de Oliveira LR, Cury PR. Automatic dental biofilm detection based on deep learning. J Clin Periodontol 2023; 50:571-581. [PMID: 36635042 DOI: 10.1111/jcpe.13774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 12/06/2022] [Accepted: 01/09/2023] [Indexed: 01/14/2023]
Abstract
AIM To estimate the automated biofilm detection capacity of the U-Net neural network on tooth images. MATERIALS AND METHODS Two datasets of intra-oral photographs taken in the frontal and lateral views of permanent and deciduous dentitions were employed. The first dataset consisted of 96 photographs taken before and after applying a disclosing agent and was used to validate the domain expert's biofilm annotation (intra-class correlation coefficient = .93). The second dataset comprised 480 photographs, with or without orthodontic appliances and without disclosing agents, and was used to train the neural network to segment the biofilm. Dental biofilm labelled by the dentist (without disclosing agents) was considered the ground truth. Segmentation performance was measured using accuracy, F1 score, sensitivity, and specificity. RESULTS The U-Net model achieved an accuracy of 91.8%, an F1 score of 60.6%, a specificity of 94.4%, and a sensitivity of 67.2%. Accuracy was higher in the presence of orthodontic appliances (92.6%). CONCLUSIONS Segmenting dental biofilm with a U-Net is feasible and can assist professionals and patients in identifying dental biofilm, thus improving oral hygiene and health.
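As an illustration of the architecture named above, the sketch below is a minimal two-level U-Net-style encoder-decoder in PyTorch; the layer sizes and input resolution are illustrative assumptions, not the study's configuration.

```python
# Minimal two-level U-Net-style encoder-decoder for binary segmentation
# (a sketch; layer sizes are illustrative, not taken from the paper).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)       # RGB photograph in
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)      # skip connection doubles channels
        self.head = nn.Conv2d(32, 1, 1)     # 1-channel biofilm probability map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d1))

model = TinyUNet()
pred = model(torch.rand(1, 3, 256, 256))    # probability mask in [0, 1]
print(pred.shape)                           # torch.Size([1, 1, 256, 256])
```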
Collapse
Affiliation(s)
- Katia Montanha Andrade
- Graduate Program in Dentistry and Health, School of Dentistry, Federal University of Bahia, Salvador, Brazil
| | | | | | - Patricia Ramos Cury
- Division of Periodontics, School of Dentistry, Federal University of Bahia, Salvador, Brazil
| |
Collapse
|
28
|
Shoaib MA, Chuah JH, Ali R, Dhanalakshmi S, Hum YC, Khalil A, Lai KW. Fully Automatic Left Ventricle Segmentation Using Bilateral Lightweight Deep Neural Network. LIFE (BASEL, SWITZERLAND) 2023; 13:life13010124. [PMID: 36676073 PMCID: PMC9864753 DOI: 10.3390/life13010124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 12/22/2022] [Accepted: 12/29/2022] [Indexed: 01/04/2023]
Abstract
The segmentation of the left ventricle (LV) is one of the fundamental procedures required to obtain quantitative measures of the heart, such as its volume, area, and ejection fraction. In clinical practice, the delineation of the LV is still often conducted semi-automatically, leaving it open to operator subjectivity. Automatic LV segmentation from echocardiography images is a challenging task due to poorly defined boundaries and operator dependency. Recent research has demonstrated that deep learning can perform the segmentation process automatically. However, well-known state-of-the-art segmentation models still fall short in accuracy and speed. This study aims to develop a single-stage lightweight segmentation model that precisely and rapidly segments the LV from 2D echocardiography images. In this research, a backbone network is used to acquire both low-level and high-level features. Two parallel blocks, known as the spatial feature unit and the channel feature unit, are employed to enhance and refine these features. The refined features are merged by an integration unit to segment the LV. The performance of the model and the time taken to segment the LV are compared to established segmentation models: DeepLab, FCN, and Mask R-CNN. The model achieved the highest values of the Dice similarity index (0.9446), intersection over union (0.8445), and accuracy (0.9742). The evaluation metrics and processing time demonstrate that the proposed model not only provides superior quantitative results but also trains and segments the LV in less time, indicating improved performance over competing segmentation models.
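The Dice similarity index and intersection over union reported above can be computed directly from binary masks; the following is a generic NumPy sketch, not the authors' code.

```python
# Dice similarity index and intersection-over-union for binary masks.
import numpy as np

def dice(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def iou(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (inter + eps) / (union + eps)

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True
print(round(dice(pred, truth), 3), round(iou(pred, truth), 3))  # 0.562 0.391
```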
Collapse
Affiliation(s)
- Muhammad Ali Shoaib
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Faculty of Information and Communication Technology, BUITEMS, Quetta 87300, Pakistan
| | - Joon Huang Chuah
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
| | - Raza Ali
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Faculty of Information and Communication Technology, BUITEMS, Quetta 87300, Pakistan
| | - Samiappan Dhanalakshmi
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur 603203, India
| | - Yan Chai Hum
- Department of Mechatronics and Biomedical Engineering (DMBE), Lee Kong Chian Faculty of Engineering and Science (LKC FES), Universiti Tunku Abdul Rahman (UTAR), Jalan Sungai Long, Bandar Sungai Long, Cheras, Kajang 43000, Malaysia
| | - Azira Khalil
- Faculty of Science and Technology, Universiti Sains Islam Malaysia (USIM), Nilai 71800, Malaysia
| | - Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Correspondence:
| |
Collapse
|
29
|
Rodriguez-Vila B, Gonzalez-Hospital V, Puertas E, Beunza JJ, Pierce DM. Democratization of deep learning for segmenting cartilage from MRIs of human knees: Application to data from the osteoarthritis initiative. J Orthop Res 2022. [PMID: 36573479 DOI: 10.1002/jor.25509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Revised: 11/29/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022]
Abstract
In this study, we aimed to democratize access to convolutional neural networks (CNNs) for segmenting cartilage volumes, generating state-of-the-art results for specialized, real-world applications in hospitals and research. Segmentation of cross-sectional and/or longitudinal magnetic resonance (MR) images of articular cartilage facilitates both clinical management of joint damage/disease and fundamental research. Manual delineation of such images is a time-consuming task susceptible to high intra- and interoperator variability and prone to errors. Thus, enabling reliable and efficient analyses of MRIs of cartilage requires automated segmentation of cartilage volumes. Two main limitations arise in the development of hospital- or population-specific deep learning (DL) models for image segmentation: specialized knowledge and specialized hardware. We present a relatively easy and accessible implementation of a DL model to automatically segment MRIs of human knees with state-of-the-art accuracy. In representative examples, we trained CNN models in 6-8 h and obtained results quantitatively comparable to the state of the art for every anatomical structure. We established and evaluated our methods using two publicly available MRI data sets originating from the Osteoarthritis Initiative, Stryker Imorphics, and Zuse Institute Berlin (ZIB), as representative test cases. We used Google Colab for editing and adapting the Python code and for selecting a runtime environment leveraging high-performance graphics processing units. We designed our solution for novice users to apply to any data set with relatively few adaptations requiring only basic programming skills. To facilitate the adoption of our methods, we provide a complete guideline for using our methods and software, as well as the software tools themselves. Clinical significance: We establish and detail methods that clinical personnel can apply to create their own DL models without specialized knowledge of DL or specialized hardware/infrastructure, and obtain results comparable with the state of the art, to facilitate both clinical management of joint damage/disease and fundamental research.
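A minimal first step in a Colab workflow of the kind described is confirming that a GPU runtime is attached before training; the snippet below is a generic PyTorch check, assumed rather than taken from the published code.

```python
# Generic runtime check for a Colab-style training session: fall back to
# CPU if no CUDA device is available (illustrative, not the authors' code).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"training on: {device}")
if device.type == "cuda":
    print(torch.cuda.get_device_name(0))   # e.g., the Colab-assigned GPU
```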
Collapse
Affiliation(s)
- Borja Rodriguez-Vila
- Department of Electronics, Universidad Rey Juan Carlos, Madrid, Spain
- Medical Image Analysis and Biometry Laboratory, Universidad Rey Juan Carlos, Madrid, Spain
- IAsalud, School for Doctoral Studies and Research, Universidad Europea de Madrid, Madrid, Spain
| | - Vera Gonzalez-Hospital
- IAsalud, School for Doctoral Studies and Research, Universidad Europea de Madrid, Madrid, Spain
| | - Enrique Puertas
- IAsalud, School for Doctoral Studies and Research, Universidad Europea de Madrid, Madrid, Spain
- Department of Computer Science and Technology, School of Architecture, Engineering and Design, Universidad Europea de Madrid, Madrid, Spain
| | - Juan-Jose Beunza
- IAsalud, School for Doctoral Studies and Research, Universidad Europea de Madrid, Madrid, Spain
- Department of Medicine, School of Biomedical and Health Sciences, Universidad Europea de Madrid, Madrid, Spain
| | - David M Pierce
- Department of Mechanical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
| |
Collapse
|
30
|
Alwakid G, Gouda W, Humayun M, Sama NU. Melanoma Detection Using Deep Learning-Based Classifications. Healthcare (Basel) 2022; 10:healthcare10122481. [PMID: 36554004 PMCID: PMC9777935 DOI: 10.3390/healthcare10122481] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 12/02/2022] [Accepted: 12/05/2022] [Indexed: 12/13/2022] Open
Abstract
One of the most prevalent cancers worldwide is skin cancer, and it is becoming more common as the population ages. As a general rule, the earlier skin cancer can be diagnosed, the better. Following the success of deep learning (DL) algorithms in other industries, there has been a substantial increase in automated diagnosis systems in healthcare. This work proposes DL as a method for precisely extracting a lesion zone. First, the image is enhanced using Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) to improve its quality. Then, segmentation is used to extract Regions of Interest (ROI) from the full image. We employed data augmentation to correct the class imbalance. The image is then analyzed with a convolutional neural network (CNN) and a modified version of ResNet-50 to classify skin lesions. This analysis utilized an imbalanced sample of seven kinds of skin cancer from the HAM10000 dataset. With an accuracy of 0.86, a precision of 0.84, a recall of 0.86, and an F-score of 0.86, the proposed CNN-based model outperformed the earlier study's results by a significant margin. The study culminates in an improved automated method for diagnosing skin cancer that benefits medical professionals and patients.
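The reported accuracy, precision, recall, and F-score are standard multi-class metrics; as a sketch, they can be computed for any classifier's outputs with scikit-learn (the labels below are synthetic, not HAM10000 predictions).

```python
# Accuracy, precision, recall and F-score for a seven-class skin-lesion
# classifier, computed with scikit-learn on simulated labels.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

rng = np.random.default_rng(0)
y_true = rng.integers(0, 7, size=200)              # 7 lesion classes
y_pred = np.where(rng.random(200) < 0.8, y_true,   # simulate ~80% correct
                  rng.integers(0, 7, size=200))

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(f"acc={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```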
Collapse
Affiliation(s)
- Ghadah Alwakid
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
- Correspondence:
| | - Walaa Gouda
- Department of Computer Engineering and Networks, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
| | - Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
| | - Najm Us Sama
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Kota Samarahan 94300, Sarawak, Malaysia
| |
Collapse
|
31
|
Li Z, Zhang W, Li B, Zhu J, Peng Y, Li C, Zhu J, Zhou Q, Yin Y. Patient-specific daily updated deep learning auto-segmentation for MRI-guided adaptive radiotherapy. Radiother Oncol 2022; 177:222-230. [PMID: 36375561 DOI: 10.1016/j.radonc.2022.11.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Revised: 10/31/2022] [Accepted: 11/06/2022] [Indexed: 11/13/2022]
Abstract
BACKGROUND AND PURPOSE Deep learning (DL) has shown great potential but still has limited success in online contouring for MR-guided adaptive radiotherapy (MRgART). This study proposed a patient-specific DL auto-segmentation (DLAS) strategy that uses the patient's previous images and contours to update the model and improve segmentation accuracy and efficiency for MRgART. METHODS AND MATERIALS A prototype model was trained for each patient using the first set of MRI and corresponding contours as inputs. The patient-specific model was updated after each fraction with all the available fractional MRIs/contours, and then used to predict the segmentation for the next fraction. During model training, the model was fitted under consistency constraints, limiting the differences in volume, length, and centroid between predictions for the latest MRI to a reasonable range. Model performance was evaluated for both organs-at-risk and tumor auto-segmentation in a total of 6 abdominal/pelvic cases (each with at least 8 sets of MRIs/contours) that underwent MRgART, using the Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD95), and was compared with deformable image registration (DIR) and a frozen DL model (no updating after pre-training). Contouring time was also recorded and analyzed. RESULTS The proposed model achieved superior performance with a higher mean DSC (0.90, 95% CI: 0.88-0.95) compared to DIR (0.63, 95% CI: 0.59-0.68) and the frozen DL model (0.74, 95% CI: 0.71-0.79). For tumors, the proposed method yielded a median DSC of 0.95 (95% CI: 0.94-0.97) and a median HD95 of 1.63 mm (95% CI: 1.22-2.06 mm). Contouring time was reduced significantly (p < 0.05) with the proposed method (73.4 ± 6.5 seconds) compared to the manual process (12-22 minutes). Online ART time was reduced to 1650 ± 274 seconds with the proposed method, compared to 3251.8 ± 447 seconds with the original workflow. CONCLUSION The proposed patient-specific DLAS method can significantly improve segmentation accuracy and efficiency for longitudinal MRIs, thereby facilitating the routine practice of MRgART.
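A rough sketch of the update strategy described, assuming a PyTorch workflow (the tiny stand-in model and loss are illustrative, not the authors' implementation): each fraction's MRI/contour pair is appended to the patient's history and the model is briefly fine-tuned before predicting the next fraction.

```python
import torch
import torch.nn as nn

# Stand-in segmentation network; the study used a dedicated CNN.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 1))
loss_fn = nn.BCEWithLogitsLoss()

# One (MRI, contour) pair per treatment fraction (random stand-in data).
fractions = [(torch.rand(1, 1, 64, 64),
              (torch.rand(1, 1, 64, 64) > 0.5).float())
             for _ in range(5)]

history = []
for mri, contour in fractions:
    if history:
        with torch.no_grad():                   # predict the new fraction with
            pred = torch.sigmoid(model(mri))    # the model updated so far
            # (in the workflow, pred would be reviewed/corrected online)
    history.append((mri, contour))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(10):                         # brief per-fraction fine-tuning
        for x, y in history:                    # on all data seen so far
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```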
Collapse
Affiliation(s)
- Zhenjiang Li
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan 250117, Shandong Province, P.R. China.
| | - Wei Zhang
- Manteia Technologies Co., Ltd., 1903, B Tower, Zijin Plaza, No. 1811 Huandao East Road, Xiamen 361001, China.
| | - Baosheng Li
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan 250117, Shandong Province, P.R. China.
| | - Jian Zhu
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan 250117, Shandong Province, P.R. China.
| | - Yinglin Peng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China.
| | - Chengze Li
- Manteia Technologies Co., Ltd., 1903, B Tower, Zijin Plaza, No. 1811 Huandao East Road, Xiamen 361001, China.
| | - Jennifer Zhu
- Department of Biochemistry and Molecular Biology, University of British Columbia, Canada, 8 Edenstone View NW, Calgary, AB T3A 3Z2, Canada.
| | - Qichao Zhou
- Manteia Technologies Co., Ltd., 1903, B Tower, Zijin Plaza, No. 1811 Huandao East Road, Xiamen 361001, China.
| | - Yong Yin
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan 250117, Shandong Province, P.R. China.
| |
Collapse
|
32
|
Mohammadi A, Mirza-Aghazadeh-Attari M, Faeghi F, Homayoun H, Abolghasemi J, Vogl TJ, Bureau NJ, Bakhshandeh M, Acharya RU, Abbasian Ardakani A. Tumor Microenvironment, Radiology, and Artificial Intelligence: Should We Consider Tumor Periphery? JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2022; 41:3079-3090. [PMID: 36000351 DOI: 10.1002/jum.16086] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Revised: 08/02/2022] [Accepted: 08/05/2022] [Indexed: 06/15/2023]
Abstract
OBJECTIVES The tumor microenvironment (TME) consists of cellular and noncellular components that enable the tumor to interact with its surroundings, and it plays an important role in tumor progression and in how the immune system reacts to the malignancy. In the present study, we investigate the diagnostic potential of the TME in differentiating benign and malignant lesions using image quantification and machine learning. METHODS A total of 229 breast lesions and 220 cervical lymph nodes were included in the study. A group of expert radiologists first performed medical imaging and segmented the lesions, after which a rectangular mask was drawn encompassing the entire contour. The mask was extended in each axis by up to 50%, and 29 radiomics features were extracted from each mask. Radiomics features that showed a significant difference for each contour were used to develop a support vector machine (SVM) classifier for benign and malignant lesions in breast and lymph node images separately. RESULTS Single radiomics features extracted from the extended contours outperformed those from the radiologists' contours in both breast and lymph node lesions. Furthermore, when fed into the SVM model, features from the extended masks also outperformed those from the radiologists' contours, achieving areas under the receiver operating characteristic curve of 0.887 and 0.970 in differentiating breast and lymph node lesions, respectively. CONCLUSIONS Our results provide convincing evidence regarding the importance of the tumor periphery and TME in medical imaging diagnosis. We propose that the immediate tumor periphery should be considered when differentiating benign and malignant lesions in image quantification studies.
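The modelling step described, an SVM over radiomics feature vectors scored by the area under the ROC curve, can be sketched with scikit-learn as follows; the feature matrix here is synthetic, standing in for the 29 extracted features.

```python
# Fitting an SVM on radiomics feature vectors and scoring it with the area
# under the ROC curve (synthetic features, not the study's data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(229, 29))              # 29 radiomics features per lesion
y = rng.integers(0, 2, size=229)            # 0 = benign, 1 = malignant
X[y == 1] += 0.8                            # give the classes some separation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC = {auc:.3f}")
```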
Collapse
Affiliation(s)
- Afshin Mohammadi
- Department of Radiology, Faculty of Medicine, Urmia University of Medical Science, Urmia, Iran
| | | | - Fariborz Faeghi
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Hasan Homayoun
- Urology Research Center, Tehran University of Medical Sciences, Tehran, Iran
| | - Jamileh Abolghasemi
- Department of Biostatistics, School of Public Health, Iran University of Medical Sciences, Tehran, Iran
| | - Thomas J Vogl
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
| | - Nathalie J Bureau
- Department of Radiology, Centre Hospitalier de l'Université de Montréal, Montreal, Canada
| | - Mohsen Bakhshandeh
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Rajendra U Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
| | - Ali Abbasian Ardakani
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| |
Collapse
|
33
|
Mumuni AN, Hasford F, Udeme NI, Dada MO, Awojoyogbe BO. A SWOT analysis of artificial intelligence in diagnostic imaging in the developing world: making a case for a paradigm shift. PHYSICAL SCIENCES REVIEWS 2022. [DOI: 10.1515/psr-2022-0121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Diagnostic imaging (DI) refers to techniques and methods of creating images of the body's internal parts and organs, with or without the use of ionizing radiation, for purposes of diagnosing, monitoring and characterizing diseases. By default, DI equipment is technology-based, and in recent times there has been widespread automation of DI operations in high-income countries, while low- and middle-income countries (LMICs) have yet to gain traction in automated DI. Advanced DI techniques employ artificial intelligence (AI) protocols to enable imaging equipment to perceive data more accurately than humans do and, automatically or under expert evaluation, make clinical decisions such as diagnosis and characterization of diseases. In this narrative review, a SWOT analysis is used to examine the strengths, weaknesses, opportunities and threats associated with the deployment of AI-based DI protocols in LMICs. Drawing from this analysis, a case is then made to justify the need for widespread AI applications in DI in resource-poor settings. Among other strengths discussed, AI-based DI systems could enhance accuracy in diagnosis, monitoring and characterization of diseases and offer efficient image acquisition, processing, segmentation and analysis procedures, but they may have weaknesses regarding the need for big data, high initial and maintenance costs, and inadequate technical expertise among professionals. They present opportunities for synthetic modality transfer, increased access to imaging services, and protocol optimization, and threats of input training data biases, lack of regulatory frameworks, and a perceived fear of job losses among DI professionals. The analysis showed that successful integration of AI in DI procedures could position LMICs towards the achievement of universal health coverage by 2030/2035. LMICs will, however, have to learn from the experiences of advanced settings, train critical staff in relevant areas of AI, and proceed to develop in-house AI systems with all relevant stakeholders on board.
Collapse
Affiliation(s)
| | - Francis Hasford
- Department of Medical Physics, University of Ghana, Ghana Atomic Energy Commission, Accra, Ghana
| | | | | | | |
Collapse
|
34
|
Geerlings-Batt J, Tillett C, Gupta A, Sun Z. Enhanced Visualisation of Normal Anatomy with Potential Use of Augmented Reality Superimposed on Three-Dimensional Printed Models. MICROMACHINES 2022; 13:1701. [PMID: 36296054 PMCID: PMC9608320 DOI: 10.3390/mi13101701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Revised: 10/04/2022] [Accepted: 10/08/2022] [Indexed: 06/16/2023]
Abstract
Anatomical knowledge underpins the practice of many healthcare professions. While cadaveric specimens are generally used to demonstrate realistic anatomy, high cost, ethical considerations and limited accessibility can often impede their suitability as teaching tools. This study aimed to develop an alternative to traditional teaching methods: a novel teaching tool using augmented reality (AR) and three-dimensional (3D) printed models to accurately demonstrate normal ankle and foot anatomy. The open-source software 3D Slicer was used to segment a high-resolution magnetic resonance imaging (MRI) dataset of a healthy volunteer's ankle and to produce virtual bone and musculature objects. Bone and musculature were segmented using seed-planting and interpolation functions, respectively. The virtual models were imported into Unity 3D, which was used to develop the user interface and achieve interactability prior to export to the Microsoft HoloLens 2. Three life-size models of bony anatomy were printed in yellow polylactic acid and thermoplastic polyurethane, with another model printed in white Visijet SL Flex with a supporting base attached to its plantar aspect. An interactive user interface with functional toggle switches was developed. Object recognition did not function as intended, with adequate tracking and AR superimposition not achieved. The models accurately demonstrate bony foot and ankle anatomy in relation to the associated musculature. Although segmentation outcomes were sufficient, the process was highly time-consuming, and effective object-recognition tools remain relatively inaccessible. This may limit the reproducibility of augmented reality learning tools on a larger scale. Further research is required to determine the extent to which this tool accurately demonstrates anatomy and to ascertain whether its use improves learning outcomes and is effective for teaching anatomy.
Collapse
Affiliation(s)
- Jade Geerlings-Batt
- Discipline of Medical Radiation Science, Curtin Medical School, Curtin University, Perth, WA 6845, Australia
| | - Carley Tillett
- Curtin HIVE (Hub for Immersive Visualisation and eResearch), Curtin University, Perth, WA 6845, Australia
| | - Ashu Gupta
- Department of Medical Imaging, Fiona Stanley Hospital, Perth, WA 6150, Australia
| | - Zhonghua Sun
- Discipline of Medical Radiation Science, Curtin Medical School, Curtin University, Perth, WA 6845, Australia
- Curtin Health Innovation Research Institute (CHIRI), Curtin University, Perth, WA 6845, Australia
| |
Collapse
|
35
|
Johann to Berens P, Schivre G, Theune M, Peter J, Sall SO, Mutterer J, Barneche F, Bourbousse C, Molinier J. Advanced Image Analysis Methods for Automated Segmentation of Subnuclear Chromatin Domains. EPIGENOMES 2022; 6:epigenomes6040034. [PMID: 36278680 PMCID: PMC9624336 DOI: 10.3390/epigenomes6040034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 09/19/2022] [Accepted: 10/01/2022] [Indexed: 11/07/2022] Open
Abstract
The combination of ever-increasing microscopy resolution with cytogenetic tools allows for detailed analyses of nuclear functional partitioning. However, reliable qualitative and quantitative methodologies to detect and interpret the dynamics of chromatin sub-nuclear organization are crucial for deciphering the underlying molecular processes. Access to properly automated tools for accurate and fast recognition of complex nuclear structures remains an important issue. Cognitive biases associated with human-based curation or decisions for object segmentation tend to introduce variability and noise into image analysis. Here, we report the development of two complementary segmentation methods, one semi-automated (iCRAQ) and one based on deep learning (Nucl.Eye.D), and their evaluation using a collection of A. thaliana nuclei with contrasted or poorly defined chromatin compartmentalization. Both methods allow for fast, robust and sensitive detection as well as quantification of subtle nuclear features. Based on these developments, we highlight the advantages of semi-automated and deep learning-based analyses applied to plant cytogenetics.
Collapse
Affiliation(s)
| | - Geoffrey Schivre
- Institut de Biologie de l’Ecole Normale Supérieure (IBENS), Ecole Normale Supérieure, Centre National de la Recherche Scientifique, Inserm, Université PSL, 75230 Paris, France
- Université Paris-Saclay, 91190 Orsay, France
| | - Marius Theune
- FB 10 / Molekulare Pflanzenphysiologie, Bioenergetik in Photoautotrophen, Universität Kassel, 34127 Kassel, Germany
| | - Jackson Peter
- Institut de Biologie Moléculaire des Plantes du CNRS, 67000 Strasbourg, France
| | | | - Jérôme Mutterer
- Institut de Biologie Moléculaire des Plantes du CNRS, 67000 Strasbourg, France
| | - Fredy Barneche
- Institut de Biologie de l’Ecole Normale Supérieure (IBENS), Ecole Normale Supérieure, Centre National de la Recherche Scientifique, Inserm, Université PSL, 75230 Paris, France
| | - Clara Bourbousse
- Institut de Biologie de l’Ecole Normale Supérieure (IBENS), Ecole Normale Supérieure, Centre National de la Recherche Scientifique, Inserm, Université PSL, 75230 Paris, France
- Correspondence: (C.B.); (J.M.)
| | - Jean Molinier
- Institut de Biologie Moléculaire des Plantes du CNRS, 67000 Strasbourg, France
- Correspondence: (C.B.); (J.M.)
| |
Collapse
|
36
|
Zhao J, Sun L, Zhou X, Huang S, Si H, Zhang D. Residual-atrous attention network for lumbosacral plexus segmentation with MR image. Comput Med Imaging Graph 2022; 100:102109. [DOI: 10.1016/j.compmedimag.2022.102109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 07/12/2022] [Accepted: 07/28/2022] [Indexed: 10/15/2022]
|
37
|
Madireddy I, Wu T. Rule and Neural Network-Based Image Segmentation of Mice Vertebrae Images. Cureus 2022; 14:e27247. [PMID: 36039207 PMCID: PMC9401637 DOI: 10.7759/cureus.27247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/23/2022] [Indexed: 12/03/2022] Open
Abstract
Background Image segmentation is a fundamental technique that allows researchers to process images from various sources into individual components for certain applications, such as visual or numerical evaluations. Image segmentation is beneficial when studying medical images for healthcare purposes. However, existing semantic image segmentation models like the U-Net are computationally intensive. This work aimed to develop simpler models that could still segment images accurately. Methodology Rule-based and linear-layer neural network models were developed in Mathematica and trained on mouse vertebrae micro-computed tomography scans. These models were tasked with segmenting the cortical shell from the whole-bone image. A U-Net model was also set up for comparison. Results The linear-layer neural network had accuracy comparable to the U-Net model in segmenting the mouse vertebrae scans. Conclusions This work provides two separate models that allow automated segmentation of mouse vertebral scans, which could be valuable in applications such as pre-processing murine vertebral scans for further evaluation of the effect of drug treatment on bone micro-architecture.
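As a loose stand-in for the linear-layer model idea (the original was built in Mathematica; this sketch uses Python), a per-pixel linear classifier can be fitted on simple local-intensity features.

```python
# Per-pixel linear classification of a bright region, as a lightweight
# alternative to a U-Net (illustrative analogue, not the study's model).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
image = rng.random((64, 64)); image[20:44, 20:44] += 1.0  # bright "shell"
truth = np.zeros((64, 64), dtype=int); truth[20:44, 20:44] = 1

# Features per pixel: raw intensity and a local 5x5 mean.
feats = np.stack([image.ravel(), uniform_filter(image, 5).ravel()], axis=1)
clf = LogisticRegression().fit(feats, truth.ravel())
mask = clf.predict(feats).reshape(image.shape)
print(f"pixel accuracy: {(mask == truth).mean():.3f}")
```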
Collapse
|
38
|
Preda F, Morgan N, Van Gerven A, Nogueira-Reis F, Smolders A, Wang X, Nomidis S, Shaheen E, Willems H, Jacobs R. Deep convolutional neural network-based automated segmentation of the maxillofacial complex from cone-beam computed tomography - A validation study. J Dent 2022; 124:104238. [PMID: 35872223 DOI: 10.1016/j.jdent.2022.104238] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 07/14/2022] [Accepted: 07/17/2022] [Indexed: 02/08/2023] Open
Abstract
OBJECTIVES The present study investigated the accuracy, consistency, and time-efficiency of a novel deep CNN-based model for automated maxillofacial bone segmentation from CBCT images. METHOD A dataset of 144 scans was acquired from two CBCT devices and randomly divided into three subsets: a training set (n = 110), a validation set (n = 10) and a testing set (n = 24). A three-dimensional (3D) U-Net (CNN) model was developed, and the achieved automated segmentation was compared with a manual approach. RESULTS The average time required for automated segmentation was 39.1 seconds, a 204-fold decrease in time consumption compared to manual segmentation (132.7 minutes). The model is highly accurate for identification of the bony structures of the anatomical region of interest, with a Dice similarity coefficient (DSC) of 92.6%. Additionally, the fully deterministic nature of the CNN model provided 100% consistency without any variability. The inter-observer consistency for expert-based minor correction of the automated segmentation showed an excellent DSC of 99.7%. CONCLUSION The proposed CNN model provided a time-efficient, accurate, and consistent CBCT-based automated segmentation of the maxillofacial complex. CLINICAL SIGNIFICANCE Automated segmentation of the maxillofacial complex could act as a potent alternative to conventional segmentation techniques for improving the efficiency of digital workflows. This approach could deliver accurate, ready-to-print three-dimensional (3D) models that are essential to patient-specific digital treatment planning for orthodontics, maxillofacial surgery, and implant placement.
Collapse
Affiliation(s)
- Flavia Preda
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
| | - Nermin Morgan
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
- Department of Oral Medicine, Faculty of Dentistry, Mansoura University, 35516 Mansoura, Dakahlia, Egypt
| | | | - Fernanda Nogueira-Reis
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), Av. Limeira 901, Piracicaba, São Paulo 13414-903, Brazil
| | | | - Xiaotong Wang
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
| | | | - Eman Shaheen
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
| | | | - Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging & Pathology, Faculty of Medicine, KU Leuven & Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, BE-3000 Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, Box 4064, 141 04 Huddinge, Stockholm, Sweden
| |
Collapse
|
39
|
Gryska E, Björkman-Burtscher I, Jakola AS, Dunås T, Schneiderman J, Heckemann RA. Deep learning for automatic brain tumour segmentation on MRI: evaluation of recommended reporting criteria via a reproduction and replication study. BMJ Open 2022; 12:e059000. [PMID: 35851016 PMCID: PMC9297223 DOI: 10.1136/bmjopen-2021-059000] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/03/2022] Open
Abstract
OBJECTIVES To determine the reproducibility and replicability of studies that develop and validate segmentation methods for brain tumours on MRI and that follow established reproducibility criteria, and to evaluate whether the reporting guidelines are sufficient. METHODS Two eligible validation studies of distinct deep learning (DL) methods were identified. We implemented the methods using published information and retraced the reported validation steps. We evaluated to what extent the description of the methods enabled reproduction of the results. We further attempted to replicate reported findings on a clinical set of images acquired at our institute consisting of high-grade glioma (HGG), low-grade glioma (LGG), and meningioma (MNG) cases. RESULTS We successfully reproduced one of the two tumour segmentation methods. Insufficient description of the preprocessing pipeline and our inability to replicate the pipeline resulted in failure to reproduce the second method. The replication of the first method showed promising results in terms of Dice similarity coefficient (DSC) and sensitivity (Sen) on HGG cases (DSC=0.77, Sen=0.88) and LGG cases (DSC=0.73, Sen=0.83); however, poorer performance was observed for MNG cases (DSC=0.61, Sen=0.71). Preprocessing errors were identified that contributed to low quantitative scores in some cases. CONCLUSIONS Established reproducibility criteria do not sufficiently emphasise description of the preprocessing pipeline. Discrepancies in preprocessing as a result of insufficient reporting are likely to influence segmentation outcomes and hinder clinical utilisation. A detailed description of the whole processing chain, including preprocessing, is thus necessary to obtain stronger evidence of the generalisability of DL-based brain tumour segmentation methods and to facilitate translation of the methods into clinical practice.
Collapse
Affiliation(s)
- Emilia Gryska
- MedTech West at Sahlgrenska University Hospital, University of Gothenburg, Gothenburg, Sweden
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - Isabella Björkman-Burtscher
- Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Asgeir Store Jakola
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Tora Dunås
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - Justin Schneiderman
- MedTech West at Sahlgrenska University Hospital, University of Gothenburg, Gothenburg, Sweden
- Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - Rolf A Heckemann
- MedTech West at Sahlgrenska University Hospital, University of Gothenburg, Gothenburg, Sweden
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| |
Collapse
|
40
|
Dentamaro V, Giglio P, Impedovo D, Moretti L, Pirlo G. AUCO ResNet: an end-to-end network for Covid-19 pre-screening from cough and breath. PATTERN RECOGNITION 2022; 127:108656. [PMID: 35313619 PMCID: PMC8920577 DOI: 10.1016/j.patcog.2022.108656] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 03/10/2022] [Accepted: 03/14/2022] [Indexed: 05/09/2023]
Abstract
This study presents the Auditory Cortex ResNet (AUCO ResNet), a biologically inspired deep neural network designed for sound classification and, more specifically, for Covid-19 recognition from audio tracks of coughs and breaths. Unlike other approaches, it can be trained end-to-end, thus optimizing (with gradient descent) all the modules of the learning algorithm: mel-like filter design, feature extraction, feature selection, dimensionality reduction and prediction. The network includes three attention mechanisms: the squeeze-and-excitation mechanism, the convolutional block attention module, and the novel sinusoidal learnable attention. The attention mechanism is able to merge relevant information from activation maps at various levels of the network. The network takes raw audio files as input and is also able to fine-tune the feature-extraction phase: a mel-like filter is designed during training, adapting the filter banks to important frequencies. AUCO ResNet has proved to provide state-of-the-art results on many datasets. First, it was tested on several datasets containing Covid-19 coughs and breaths. This choice reflects the fact that cough and breath are language-independent, allowing for cross-dataset tests aimed at generalization. These tests demonstrate that the approach can be adopted as a low-cost, fast and remote Covid-19 pre-screening tool. The network was also tested on the well-known UrbanSound8K dataset, achieving state-of-the-art accuracy without any data preprocessing or data augmentation technique.
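Of the three attention mechanisms listed, the squeeze-and-excitation block has a compact standard form (Hu et al.); the PyTorch sketch below shows that generic form, not the authors' exact module.

```python
# Standard squeeze-and-excitation block: global average pooling "squeezes"
# each channel to a scalar, a small bottleneck MLP produces per-channel
# weights, and the input feature map is rescaled channel-wise.
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze: (B, C)
        return x * w.view(b, c, 1, 1)      # excite: channel-wise rescale

x = torch.rand(2, 64, 32, 32)
print(SqueezeExcite(64)(x).shape)          # torch.Size([2, 64, 32, 32])
```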
Collapse
Affiliation(s)
- Vincenzo Dentamaro
- Università degli studi di Bari "Aldo Moro", Department of Computer Science, via Orabona 4, Bari, 70125, Italy
| | - Paolo Giglio
- Università degli studi di Bari "Aldo Moro", Department of Computer Science, via Orabona 4, Bari, 70125, Italy
| | - Donato Impedovo
- Università degli studi di Bari "Aldo Moro", Department of Computer Science, via Orabona 4, Bari, 70125, Italy
| | - Luigi Moretti
- Università degli studi di Bari "Aldo Moro", Medical School, Bari, Italy
| | - Giuseppe Pirlo
- Università degli studi di Bari "Aldo Moro", Department of Computer Science, via Orabona 4, Bari, 70125, Italy
| |
Collapse
|
41
|
Algorithms used in medical image segmentation for 3D printing and how to understand and quantify their performance. 3D Print Med 2022; 8:18. [PMID: 35748984 PMCID: PMC9229760 DOI: 10.1186/s41205-022-00145-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Accepted: 05/30/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND 3D printing (3DP) has enabled medical professionals to create patient-specific medical devices to assist in surgical planning. Anatomical models can be generated from patient scans using a wide array of software, but there are limited studies on the geometric variance that is introduced during the digital conversion of images to models. The final accuracy of the 3D printed model is a function of manufacturing hardware quality control and the variability introduced during the multiple digital steps that convert patient scans to a printable format. This study provides a brief summary of common algorithms used for segmentation and refinement. Parameters for each that can introduce geometric variability are also identified. Several metrics for measuring variability between models and validating processes are explored and assessed. METHODS Using a clinical maxillofacial CT scan of a patient with a tumor of the mandible, four segmentation and refinement workflows were processed using four software packages. Differences in segmentation were calculated using several techniques including volumetric, surface, linear, global, and local measurements. RESULTS Visual inspection of print-ready models showed distinct differences in the thickness of the medial wall of the mandible adjacent to the tumor. Volumetric intersections and heatmaps provided useful local metrics of mismatch or variance between models made by different workflows. They also allowed calculations of aggregate percentage agreement and disagreement which provided a global benchmark metric. For the relevant regions of interest (ROIs), statistically significant differences were found in the volume and surface area comparisons for the final mandible and tumor models, as well as between measurements of the nerve central path. As with all clinical use cases, statistically significant results must be weighed against the clinical significance of any deviations found. CONCLUSIONS Statistically significant geometric variations from differences in segmentation and refinement algorithms can be introduced into patient-specific models. No single metric was able to capture the true accuracy of the final models. However, a combination of global and local measurements provided an understanding of important geometric variations. The clinical implications of each geometric variation is different for each anatomical location and should be evaluated on a case-by-case basis by clinicians familiar with the process. Understanding the basic segmentation and refinement functions of software is essential for sites to create a baseline from which to evaluate their standard workflows, user training, and inter-user variability when using patient-specific models for clinical interventions or decisions.
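The volumetric comparisons described, physical volumes plus aggregate percentage agreement and disagreement between two workflows' masks, reduce to simple voxel arithmetic; below is a generic NumPy sketch with an assumed voxel spacing.

```python
# Comparing two segmentation workflows' masks of the same scan: physical
# volumes from the voxel spacing, plus aggregate percentage agreement and
# disagreement over the union (generic sketch of the metrics described).
import numpy as np

spacing = (0.5, 0.5, 0.5)                  # voxel size in mm (illustrative)
voxel_mm3 = np.prod(spacing)

a = np.zeros((40, 40, 40), dtype=bool); a[10:30, 10:30, 10:30] = True
b = np.zeros((40, 40, 40), dtype=bool); b[12:32, 10:30, 10:30] = True

inter = np.logical_and(a, b).sum()
union = np.logical_or(a, b).sum()
print(f"workflow A volume: {a.sum() * voxel_mm3:.1f} mm^3")
print(f"workflow B volume: {b.sum() * voxel_mm3:.1f} mm^3")
print(f"agreement:    {100 * inter / union:.1f}% of the union")
print(f"disagreement: {100 * (union - inter) / union:.1f}% of the union")
```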
Collapse
|
42
|
Müller D, Soto-Rey I, Kramer F. Towards a guideline for evaluation metrics in medical image segmentation. BMC Res Notes 2022; 15:210. [PMID: 35725483 PMCID: PMC9208116 DOI: 10.1186/s13104-022-06096-y] [Citation(s) in RCA: 47] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Accepted: 06/07/2022] [Indexed: 11/10/2022] Open
Abstract
In the last decade, research on artificial intelligence has seen rapid growth with deep learning models, especially in the field of medical image segmentation. Various studies demonstrated that these models have powerful prediction capabilities and achieved similar results as clinicians. However, recent studies revealed that the evaluation in image segmentation studies lacks reliable model performance assessment and showed statistical bias by incorrect metric implementation or usage. Thus, this work provides an overview and interpretation guide on the following metrics for medical image segmentation evaluation in binary as well as multi-class problems: Dice similarity coefficient, Jaccard, Sensitivity, Specificity, Rand index, ROC curves, Cohen's Kappa, and Hausdorff distance. Furthermore, common issues like class imbalance and statistical as well as interpretation biases in evaluation are discussed. As a summary, we propose a guideline for standardized medical image segmentation evaluation to improve evaluation quality, reproducibility, and comparability in the research field.
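As a worked example of three of the listed metrics, the sketch below computes sensitivity, specificity, and the (symmetric) Hausdorff distance for a pair of binary masks using NumPy and SciPy; it is a minimal illustration, not code from the guideline.

```python
# Sensitivity, specificity, and the Hausdorff distance for binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

pred = np.zeros((32, 32), dtype=bool); pred[8:20, 8:20] = True
truth = np.zeros((32, 32), dtype=bool); truth[10:22, 10:22] = True

tp = np.logical_and(pred, truth).sum()
tn = np.logical_and(~pred, ~truth).sum()
fp = np.logical_and(pred, ~truth).sum()
fn = np.logical_and(~pred, truth).sum()
sensitivity = tp / (tp + fn)               # true-positive rate
specificity = tn / (tn + fp)               # true-negative rate

# Symmetric Hausdorff distance between the masks' foreground coordinates.
p_pts, t_pts = np.argwhere(pred), np.argwhere(truth)
hd = max(directed_hausdorff(p_pts, t_pts)[0],
         directed_hausdorff(t_pts, p_pts)[0])
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} HD={hd:.2f}")
```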
Collapse
Affiliation(s)
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, Augsburg, Germany
| | - Iñaki Soto-Rey
- Medical Data Integration Center, Institute for Digital Medicine, University Hospital Augsburg, Augsburg, Germany
| | - Frank Kramer
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
| |
Collapse
|
43
|
Trimpl MJ, Primakov S, Lambin P, Stride EPJ, Vallis KA, Gooding MJ. Beyond automatic medical image segmentation-the spectrum between fully manual and fully automatic delineation. Phys Med Biol 2022; 67. [PMID: 35523158 DOI: 10.1088/1361-6560/ac6d9c] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Accepted: 05/06/2022] [Indexed: 12/19/2022]
Abstract
Semi-automatic and fully automatic contouring tools have emerged as an alternative to fully manual segmentation, reducing the time spent contouring and increasing contour quality and consistency. In particular, fully automatic segmentation has seen exceptional improvements through the use of deep learning in recent years. These fully automatic methods may not require user interactions, but the resulting contours are often not suitable for clinical use without review by the clinician. Furthermore, they need large amounts of labelled data to be available for training. This review presents alternatives to manual or fully automatic segmentation methods along the spectrum of variable user interactivity and data availability. The challenge lies in determining how much user interaction is necessary and how this user interaction can be used most effectively. While deep learning is already widely used for fully automatic tools, interactive methods are only starting to be transformed by it. Interaction between clinician and machine, via artificial intelligence, can go both ways, and this review presents the avenues being pursued to improve medical image segmentation.
Collapse
Affiliation(s)
- Michael J Trimpl
- Mirada Medical Ltd, Oxford, United Kingdom
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
| | - Sergey Primakov
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University, Maastricht, NL, The Netherlands
| | - Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW-School for Oncology, Maastricht University, Maastricht, NL, The Netherlands
| | - Eleanor P J Stride
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
| | - Katherine A Vallis
- Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
| | | |
Collapse
|
44
|
Chavva IR, Crawford AL, Mazurek MH, Yuen MM, Prabhat AM, Payabvash S, Sze G, Falcone GJ, Matouk CC, de Havenon A, Kim JA, Sharma R, Schiff SJ, Rosen MS, Kalpathy-Cramer J, Iglesias Gonzalez JE, Kimberly WT, Sheth KN. Deep Learning Applications for Acute Stroke Management. Ann Neurol 2022; 92:574-587. [PMID: 35689531 DOI: 10.1002/ana.26435] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 05/27/2022] [Accepted: 06/04/2022] [Indexed: 11/08/2022]
Abstract
Brain imaging is essential to the clinical care of patients with stroke, a leading cause of disability and death worldwide. Whereas advanced neuroimaging techniques offer opportunities for aiding acute stroke management, several factors, including time delays, inter-clinician variability, and lack of systemic conglomeration of clinical information, hinder their maximal utility. Recent advances in deep machine learning (DL) offer new strategies for harnessing computational medical image analysis to inform decision making in acute stroke. We examine the current state of the field for DL models in stroke triage. First, we provide a brief, clinical practice-focused primer on DL. Next, we examine real-world examples of DL applications in pixel-wise labeling, volumetric lesion segmentation, stroke detection, and prediction of tissue fate postintervention. We evaluate recent deployments of deep neural networks and their ability to automatically select relevant clinical features for acute decision making, reduce inter-rater variability, and boost reliability in rapid neuroimaging assessments, and integrate neuroimaging with electronic medical record (EMR) data in order to support clinicians in routine and triage stroke management. Ultimately, we aim to provide a framework for critically evaluating existing automated approaches, thus equipping clinicians with the ability to understand and potentially apply DL approaches in order to address challenges in clinical practice.
Collapse
Affiliation(s)
- Isha R Chavva
- Department of Neurology, Yale School of Medicine, New Haven, CT
| | - Anna L Crawford
- Department of Neurology, Yale School of Medicine, New Haven, CT
| | - Mercy H Mazurek
- Department of Neurology, Yale School of Medicine, New Haven, CT
| | - Matthew M Yuen
- Department of Neurology, Yale School of Medicine, New Haven, CT
| | | | - Sam Payabvash
- Department of Radiology, Yale School of Medicine, New Haven, CT
| | - Gordon Sze
- Department of Radiology, Yale School of Medicine, New Haven, CT
| | - Guido J Falcone
- Department of Neurology, Yale School of Medicine, New Haven, CT
| | - Charles C Matouk
- Department of Neurosurgery, Yale School of Medicine, New Haven, CT
| | - Adam de Havenon
- Department of Neurology, Yale School of Medicine, New Haven, CT
| | - Jennifer A Kim
- Department of Neurology, Yale School of Medicine, New Haven, CT
| | - Richa Sharma
- Department of Neurology, Yale School of Medicine, New Haven, CT
| | - Steven J Schiff
- Departments of Neurosurgery, Engineering Science and Mechanics and Physics, Penn State University, University Park, PA
| | - Matthew S Rosen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA
| | - Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA
| | - Juan E Iglesias Gonzalez
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA
| | - W Taylor Kimberly
- Department of Neurology, Division of Neurocritical Care, Massachusetts General Hospital, Boston, MA
| | - Kevin N Sheth
- Department of Neurology, Yale School of Medicine, New Haven, CT
| |
Collapse
|
45
|
Kugler EC, Rampun A, Chico TJA, Armitage PA. Analytical Approaches for the Segmentation of the Zebrafish Brain Vasculature. Curr Protoc 2022; 2:e443. [PMID: 35617469 DOI: 10.1002/cpz1.443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
With advancements in imaging techniques, data visualization allows new insights into fundamental biological processes of development and disease. However, although biomedical science is heavily reliant on imaging data, interpretation of datasets is still often based on subjective visual assessment rather than rigorous quantitation. This overview presents steps to validate image processing and segmentation, using zebrafish brain vasculature data acquired with light sheet fluorescence microscopy as a use case. Blood vessels are of particular interest to both medical and biomedical science, and specific image enhancement filters have been developed that enhance blood vessels in imaging data prior to segmentation. Using the Sato enhancement filter as an example, we discuss how filter application can be evaluated and optimized. Approaches from the medical field such as simulated, experimental, and augmented datasets can be used to get the most out of the data at hand. Using such datasets, we provide an overview of how biologists and data analysts can assess the accuracy, sensitivity, and robustness of the segmentation approaches that extract objects from images. Importantly, even after optimization and testing of a segmentation workflow (e.g., from one reporter line to another or between immunostaining processes), its generalizability is often limited, and this can be tested using double-transgenic reporter lines. Lastly, given the increasing importance of deep learning networks, a comparative approach can be adopted to study their applicability to biological datasets. In summary, we present a broad methodological overview, ranging from image enhancement to segmentation, with a mixed approach of experimental, simulated, and augmented datasets to assess and validate vascular segmentation, using the zebrafish brain vasculature as an example. HIGHLIGHTS: Simulated, experimental, and augmented datasets provide an alternative to overcome the lack of segmentation gold standards and phantom models for zebrafish cerebrovascular segmentation. Direct generalization of a segmentation approach to data for which it was not optimized (e.g., different transgenics or antibody stainings) should be treated with caution. Comparison of different deep learning segmentation methods can be used to assess their applicability to data. Here, we show that the zebrafish cerebral vasculature can be segmented with U-Net-based architectures, which outperform SegNet architectures.
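The Sato enhancement filter discussed above has a readily available implementation in scikit-image; the sketch below applies it to a synthetic image before a simple Otsu threshold, with parameter values that are illustrative assumptions rather than the protocol's.

```python
# Sato vessel-enhancement filtering with scikit-image, followed by a
# simple Otsu threshold (illustrative parameters, synthetic image).
import numpy as np
from skimage.filters import sato, threshold_otsu

rng = np.random.default_rng(0)
image = rng.random((128, 128))             # stand-in for a microscopy slice
image[60:68, :] += 2.0                     # a bright tube-like structure

enhanced = sato(image, sigmas=range(1, 6), black_ridges=False)
mask = enhanced > threshold_otsu(enhanced) # simple segmentation step
print(mask.sum(), "pixels classified as vessel")
```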
Collapse
Affiliation(s)
- Elisabeth C Kugler
- Institute of Ophthalmology, Faculty of Brain Sciences, University College London, Greater London; Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Medical School, Beech Hill Road, Sheffield, United Kingdom; The Bateson Centre, Firth Court, University of Sheffield, Western Bank, Sheffield, United Kingdom; Insigneo Institute for in silico Medicine, The Pam Liversidge Building, Sheffield, United Kingdom
| | - Andrik Rampun
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Medical School, Beech Hill Road, Sheffield, United Kingdom; Insigneo Institute for in silico Medicine, The Pam Liversidge Building, Sheffield, United Kingdom
| | - Timothy J A Chico
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Medical School, Beech Hill Road, Sheffield, United Kingdom; The Bateson Centre, Firth Court, University of Sheffield, Western Bank, Sheffield, United Kingdom; Insigneo Institute for in silico Medicine, The Pam Liversidge Building, Sheffield, United Kingdom
| | - Paul A Armitage
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Medical School, Beech Hill Road, Sheffield, United Kingdom; The Bateson Centre, Firth Court, University of Sheffield, Western Bank, Sheffield, United Kingdom; Insigneo Institute for in silico Medicine, The Pam Liversidge Building, Sheffield, United Kingdom
| |
Collapse
|
46
|
Sharma P, Ninomiya T, Omodaka K, Takahashi N, Miya T, Himori N, Okatani T, Nakazawa T. A lightweight deep learning model for automatic segmentation and analysis of ophthalmic images. Sci Rep 2022; 12:8508. [PMID: 35595784 PMCID: PMC9122907 DOI: 10.1038/s41598-022-12486-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Accepted: 05/11/2022] [Indexed: 12/04/2022] Open
Abstract
Detection, diagnosis, and treatment of ophthalmic diseases depend on the extraction of information (features and/or their dimensions) from images, and deep learning (DL) models are crucial for automating this extraction. Here, we report on the development of a lightweight DL model that can precisely segment/detect the required features automatically. The model uses dimensionality reduction of the image to extract important features, and channel contraction to retain only the high-level features necessary for reconstructing the segmented feature image. The present model performs well in detecting glaucoma from optical coherence tomography angiography (OCTA) images of the retina (area under the receiver-operator characteristic curve, AUC ~ 0.81). Bland–Altman analysis showed exceptionally low bias (~ 0.00185) and a high Pearson correlation coefficient (p = 0.9969) between parameters determined from manual and DL-based segmentation. On the same dataset, the bias of commercial software was an order of magnitude higher (~ 0.0694, p = 0.8534). The present model is 10 times lighter than U-Net (popular for biomedical image segmentation) and has better segmentation accuracy and model-training reproducibility (based on the analysis of 3670 OCTA images). High dice similarity coefficients (D) for a variety of ophthalmic images suggest a wider scope for precise segmentation of images, even from other fields. Our concept of channel narrowing is important not only for segmentation problems; it can also significantly reduce the number of parameters in object-classification models. Enhanced disease-diagnostic accuracy can thus be achieved on resource-limited devices (such as mobile phones, Nvidia's Jetson, and the Raspberry Pi) used in self-monitoring and tele-screening (memory size of the trained model ~ 35 MB).
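The "channel contraction" idea is easiest to see in code. Below is a deliberately toy PyTorch encoder-decoder, not the authors' published architecture: the 1x1 convolution squeezes the feature maps down to a handful of channels so that only the high-level features needed to reconstruct the segmentation survive, which is what shrinks the parameter count.

```python
# A toy encoder-decoder illustrating channel contraction; all layer
# sizes are invented for illustration, not the authors' architecture.
import torch
import torch.nn as nn

class NarrowSegNet(nn.Module):
    def __init__(self, in_ch: int = 1, out_ch: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            # channel contraction: collapse 32 feature maps down to 4
            nn.Conv2d(32, 4, 1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.decoder(self.encoder(x)))

model = NarrowSegNet()
mask = model(torch.randn(1, 1, 256, 256))  # -> (1, 1, 256, 256) probability map
```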
Collapse
Affiliation(s)
- Parmanand Sharma
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan.
| | - Takahiro Ninomiya
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Kazuko Omodaka
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Ophthalmic Imaging and Information Analytics, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Naoki Takahashi
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Takehiro Miya
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Ophthalmic Imaging and Information Analytics, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Noriko Himori
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Aging Vision Healthcare, Tohoku University Graduate School of Biomedical Engineering, Sendai, Japan
| | - Takayuki Okatani
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
| | - Toru Nakazawa
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Retinal Disease Control, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Ophthalmic Imaging and Information Analytics, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Advanced Ophthalmic Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan.
| |
Collapse
|
47
|
Fully Automatic Whole-Volume Tumor Segmentation in Cervical Cancer. Cancers (Basel) 2022; 14:2372. [PMID: 35625977 PMCID: PMC9139985 DOI: 10.3390/cancers14102372] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Revised: 05/02/2022] [Accepted: 05/05/2022] [Indexed: 02/04/2023] Open
Abstract
Simple Summary: Uterine cervical cancer (CC) is a leading cause of cancer-related deaths in women worldwide. Pelvic magnetic resonance imaging (MRI) allows the assessment of local tumor extent and guides the choice of primary treatment. MRI tumor segmentation enables whole-volume radiomic tumor profiling, which is potentially useful for prognostication and individualization of therapy in CC. Manual tumor segmentation is, however, labor intensive and thus not part of the routine clinical workflow. In the current work, we trained a deep learning (DL) algorithm to automatically segment the primary tumor in CC patients. Although the segmentation performance achieved by the trained DL algorithm is slightly lower than that of human experts, it is still relatively good. This study suggests that automated MRI primary tumor segmentation by DL algorithms, without any human interaction, is possible in patients with CC.
Abstract: Uterine cervical cancer (CC) is the most common gynecologic malignancy worldwide. Whole-volume radiomic profiling from pelvic MRI may yield prognostic markers for tailoring treatment in CC. However, radiomic profiling relies on manual tumor segmentation, which is unfeasible in the clinic. We present a fully automatic method for the 3D segmentation of primary CC lesions using state-of-the-art deep learning (DL) techniques. In 131 CC patients, the primary tumor was manually segmented on T2-weighted MRI by two radiologists (R1, R2). Patients were separated into a training/validation cohort (n = 105) and a test cohort (n = 26). The segmentation performance of the DL algorithm compared with R1/R2 was assessed with Dice coefficients (DSCs) and Hausdorff distances (HDs) in the test cohort. The trained DL network retrieved whole-volume tumor segmentations yielding median DSCs of 0.60 and 0.58 for DL compared with R1 (DL-R1) and R2 (DL-R2), respectively, whereas the DSC for R1-R2 was 0.78. Agreement on primary tumor volumes was excellent between raters (R1-R2: intraclass correlation coefficient (ICC) = 0.93) but lower between the DL algorithm and the raters (DL-R1: ICC = 0.43; DL-R2: ICC = 0.44). The developed DL algorithm enables automated estimation of tumor size and primary CC tumor segmentation. However, segmentation agreement between raters is better than that between the DL algorithm and the raters.
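As a concrete illustration of the two agreement metrics used here, the snippet below computes a Dice coefficient and a symmetric Hausdorff distance between two binary masks with NumPy and SciPy. Note that it measures HD in voxel units; a real comparison would scale by voxel spacing to get millimeters.

```python
# A hedged sketch of the two agreement metrics reported in the abstract,
# computed between a predicted mask and a reference (rater) mask.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_and_hausdorff(pred: np.ndarray, ref: np.ndarray) -> tuple[float, float]:
    pred, ref = pred.astype(bool), ref.astype(bool)
    # Dice similarity coefficient (DSC): overlap relative to total mask size.
    dsc = 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())
    # Symmetric Hausdorff distance (HD) between the voxel coordinate sets.
    # Coordinates are in voxel units; multiply by voxel spacing for mm.
    p, r = np.argwhere(pred), np.argwhere(ref)
    hd = max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])
    return dsc, hd
```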
Collapse
|
48
|
Thambawita V, Salehi P, Sheshkal SA, Hicks SA, Hammer HL, Parasa S, de Lange T, Halvorsen P, Riegler MA. SinGAN-Seg: Synthetic training data generation for medical image segmentation. PLoS One 2022; 17:e0267976. [PMID: 35500005 PMCID: PMC9060378 DOI: 10.1371/journal.pone.0267976] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Accepted: 04/19/2022] [Indexed: 12/20/2022] Open
Abstract
Analyzing medical data to find abnormalities is a time-consuming and costly task, particularly for rare abnormalities, requiring tremendous effort from medical experts. Therefore, artificial intelligence has become a popular tool for the automatic processing of medical data, acting as a supportive tool for doctors. However, the machine learning models used to build these tools are highly dependent on the data used to train them. Large amounts of data can be difficult to obtain in medicine due to privacy concerns, expensive and time-consuming annotation, and a general lack of samples for infrequent lesions. In this study, we present a novel synthetic data generation pipeline, called SinGAN-Seg, to produce synthetic medical images with corresponding masks using a single training image. Our method differs from traditional generative adversarial networks (GANs) because our model needs only a single image and the corresponding ground truth to train. We also show that the synthetic data generation pipeline can be used to produce alternative artificial segmentation datasets, with corresponding ground truth masks, when real datasets cannot be shared. The pipeline is evaluated using qualitative and quantitative comparisons between real and synthetic data, showing that the style transfer technique used in our pipeline significantly improves the quality of the generated data, and that our method is better than other state-of-the-art GANs at preparing synthetic images when the size of the training dataset is limited. By training UNet++ on both real data and synthetic data generated by the SinGAN-Seg pipeline, we show that models trained on synthetic data perform very similarly to those trained on real data when both datasets contain a considerable amount of training data. In contrast, synthetic data generated by the SinGAN-Seg pipeline improves the performance of segmentation models when training datasets lack a considerable amount of data. All experiments were performed using an open dataset, and the code is publicly available on GitHub.
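The training setup the authors describe, segmentation models fed with real pairs, synthetic pairs, or both, boils down to concatenating datasets. A minimal PyTorch sketch follows; the folder layout and .pt storage format are hypothetical stand-ins, not the authors' code (which is on GitHub).

```python
# A minimal sketch of mixing real and SinGAN-Seg-style synthetic
# (image, mask) pairs into one training set; paths are placeholders.
import glob
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class PairDataset(Dataset):
    """Loads (image, mask) tensor pairs saved as .pt files -- a stand-in
    for however the real and synthetic pairs are actually stored."""
    def __init__(self, folder: str):
        self.paths = sorted(glob.glob(f"{folder}/*.pt"))

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, i: int):
        image, mask = torch.load(self.paths[i])
        return image, mask

# Real pairs plus synthetic pairs generated by the pipeline; a model
# (e.g., UNet++) is then trained on the combined loader as usual.
train_ds = ConcatDataset([PairDataset("data/real"), PairDataset("data/singan_seg")])
loader = DataLoader(train_ds, batch_size=8, shuffle=True)
```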
Collapse
Affiliation(s)
- Vajira Thambawita
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
| | | | | | | | - Hugo L. Hammer
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
| | - Sravanthi Parasa
- Department of Gastroenterology, Swedish Medical Group, Seattle, WA, United States of America
| | - Thomas de Lange
- Medical Department, Sahlgrenska University Hospital-Möndal, Gothenburg, Sweden
- Department of Molecular and Clinical Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Augere Medical, Oslo, Norway
| | - Pål Halvorsen
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
| | | |
Collapse
|
49
|
The Relationship between Intelligent Image Simulation and Recognition Technology and the Health Literacy and Quality of Life of the Elderly. Contrast Media Mol Imaging 2022; 2022:9984873. [PMID: 35280704 PMCID: PMC8890847 DOI: 10.1155/2022/9984873] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Revised: 12/15/2021] [Accepted: 01/25/2022] [Indexed: 12/29/2022]
Abstract
To explore the relationship between intelligent image recognition technology and the mentality and quality of life of the elderly, this paper applies intelligent image simulation technology to recognize elderly people's behavior, protect their safety, and provide timely feedback on adverse conditions. It also improves on the traditional intelligent image recognition algorithm, validates the proposed method experimentally, and puts forward corresponding suggestions. The investigation shows that health literacy among elderly patients with chronic diseases is low. Future health education should therefore be strengthened for these patients, using different mass media to disseminate health knowledge and to promote the formation of healthy lifestyles and behaviors. The experiments also verified that the proposed intelligent image recognition technology has a positive effect on improving the mentality and quality of life of the elderly.
Collapse
|
50
|
Karlsson RA, Hardarson SH. Artery vein classification in fundus images using serially connected U-Nets. Comput Methods Programs Biomed 2022; 216:106650. [PMID: 35139461 DOI: 10.1016/j.cmpb.2022.106650] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Revised: 01/12/2022] [Accepted: 01/18/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Retinal vessels provide valuable information when diagnosing or monitoring various diseases affecting the retina and disorders affecting the cardiovascular or central nervous systems. Automated retinal vessel segmentation can assist clinicians and researchers when interpreting retinal images. As there are differences in both the structure and function of retinal arteries and veins, separating these two vessel types is essential. Since manual segmentation of retinal images is impractical, an accurate automated method is required. METHODS In this paper, we propose a convolutional neural network based on serially connected U-nets that simultaneously segments the retinal vessels and classifies them as arteries or veins. Detailed ablation experiments are performed to understand how the major components contribute to the overall system's performance. The proposed method is trained and tested on the public DRIVE and HRF datasets and a proprietary dataset. RESULTS The proposed convolutional neural network achieves an F1 score of 0.829 for vessel segmentation on the DRIVE dataset and an F1 score of 0.814 on the HRF dataset, consistent with the state-of-the-art methods on the former and outperforming the state-of-the-art on the latter. On the task of classifying the vessels into arteries and veins, the method achieves an F1 score of 0.952 on the DRIVE dataset, exceeding the state-of-the-art performance. On the HRF dataset, the method achieves an F1 score of 0.966, which is consistent with the state-of-the-art. CONCLUSIONS The proposed method demonstrates competitive performance on both vessel segmentation and artery-vein classification compared with state-of-the-art methods. The method outperforms human experts on the DRIVE dataset when classifying retinal images into arteries, veins, and background simultaneously. The method segments the vasculature in the proprietary dataset and classifies the retinal vessels accurately, even in challenging pathological images. The ablation experiments, which use repeated runs for each configuration, provide statistical evidence for the appropriateness of the proposed solution. Connecting several simple U-nets significantly improved artery-vein classification performance. The proposed way of serially connecting base networks is not limited to the proposed base network or to segmenting retinal vessels and could be applied to other tasks.
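A sketch of the serial-connection idea follows: each U-Net after the first receives the fundus image concatenated with the previous stage's class probabilities and refines the artery/vein/background prediction. The base network here comes from the segmentation_models_pytorch package purely for brevity; it is an assumed stand-in, not the authors' implementation.

```python
# A hedged sketch of serially connected U-Nets for artery-vein
# classification; stage count and encoder choice are illustrative.
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp

class SerialUNets(nn.Module):
    def __init__(self, n_stages: int = 3, classes: int = 3):
        super().__init__()
        self.stages = nn.ModuleList([
            smp.Unet(
                encoder_name="resnet18",
                encoder_weights=None,  # no pretrained download for the sketch
                # stage 0 sees the RGB fundus image; later stages also
                # receive the previous stage's per-class probabilities
                in_channels=3 if i == 0 else 3 + classes,
                classes=classes,  # artery / vein / background
            )
            for i in range(n_stages)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.stages[0](x)
        for stage in self.stages[1:]:
            logits = stage(torch.cat([x, logits.softmax(dim=1)], dim=1))
        return logits

model = SerialUNets()
out = model(torch.randn(1, 3, 256, 256))  # -> (1, 3, 256, 256) logits
```

Feeding each stage both the raw image and the previous prediction lets later nets correct earlier mistakes, which is consistent with the abstract's finding that chaining several simple U-nets significantly improved artery-vein classification.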
Collapse
Affiliation(s)
- Robert Arnar Karlsson
- Faculty of Medicine at the University of Iceland, Sæmundargata 2, Reykjavík, 102, Iceland; Faculty of Electrical and Computer Engineering at the University of Iceland, Sæmundargata 2, Reykjavík, 102, Iceland.
| | - Sveinn Hakon Hardarson
- Faculty of Medicine at the University of Iceland, Sæmundargata 2, Reykjavík, 102, Iceland.
| |
Collapse
|