1
Kumar K, Yeo AU, McIntosh L, Kron T, Wheeler G, Franich RD. Deep Learning Auto-Segmentation Network for Pediatric Computed Tomography Data Sets: Can We Extrapolate From Adults? Int J Radiat Oncol Biol Phys 2024; 119:1297-1306. PMID: 38246249; DOI: 10.1016/j.ijrobp.2024.01.201.
Abstract
PURPOSE Artificial intelligence (AI)-based auto-segmentation models hold promise for enhanced efficiency and consistency in organ contouring for adaptive radiation therapy and radiation therapy planning. However, their performance on pediatric computed tomography (CT) data and their cross-scanner compatibility remain unclear. This study aimed to evaluate the performance of AI-based auto-segmentation models trained on adult CT data when applied to pediatric data sets, to explore the improvement gained by including pediatric training data, and to examine the models' ability to accurately segment CT data acquired from different scanners. METHODS AND MATERIALS Using the nnU-Net framework, segmentation models were trained on data sets of adult, pediatric, and combined CT scans for 7 pelvic/thoracic organs, with 290 to 300 cases per category and organ. Training data sets combined clinical data and several open repositories. The study incorporated a database of 459 pediatric (0-16 years) and 950 adult (>18 years) CT scans, all with human expert ground-truth contours of the selected organs. Performance was evaluated using Dice similarity coefficients (DSC) of the model-generated contours. RESULTS AI models trained exclusively on adult data underperformed on pediatric data, especially for the 0 to 2 age group: mean DSC was below 0.5 for the bladder and spleen. Adding pediatric training data yielded significant improvement for all age groups, achieving a mean DSC above 0.85 for all organs in every age group. Larger organs such as the liver and kidneys maintained consistent performance across all models and age groups. No significant difference emerged in the cross-scanner performance evaluation, suggesting robust cross-scanner generalization. CONCLUSIONS For optimal segmentation across age groups, it is important to include pediatric data in the training of segmentation models. The successful cross-scanner generalization also supports the real-world clinical applicability of these AI models. This study emphasizes the significance of data set diversity in training robust AI systems for medical image interpretation tasks.
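As a reference for the headline metric in this and several of the following entries: the Dice similarity coefficient (DSC) compares a model-generated mask against the ground-truth contour. A minimal NumPy sketch, illustrative only and not the authors' code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two overlapping 2D masks (4 and 6 voxels, 4 shared)
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1
print(round(dice_coefficient(a, b), 2))  # 2*4/(4+6) = 0.8
```

A DSC of 1 means identical masks and 0 means no overlap, which is why a mean below 0.5 for the bladder and spleen (above) indicates clinically unusable contours.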
Affiliation(s)
- Kartik Kumar
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
- Adam U Yeo
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Lachlan McIntosh
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
- Tomas Kron
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Greg Wheeler
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Rick D Franich
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
2
Shephard AJ, Bashir RMS, Mahmood H, Jahanifar M, Minhas F, Raza SEA, McCombe KD, Craig SG, James J, Brooks J, Nankivell P, Mehanna H, Khurram SA, Rajpoot NM. A fully automated and explainable algorithm for predicting malignant transformation in oral epithelial dysplasia. NPJ Precis Oncol 2024; 8:137. PMID: 38942998; PMCID: PMC11213925; DOI: 10.1038/s41698-024-00624-8.
Abstract
Oral epithelial dysplasia (OED) is a premalignant histopathological diagnosis given to lesions of the oral cavity. Its grading suffers from significant inter-/intra-observer variability and does not reliably predict malignancy progression, potentially leading to suboptimal treatment decisions. To address this, we developed an artificial intelligence (AI) algorithm that assigns an Oral Malignant Transformation (OMT) risk score based on Haematoxylin and Eosin (H&E)-stained whole slide images (WSIs). Our AI pipeline leverages an in-house segmentation model to detect and segment both nuclei and epithelium. Subsequently, a shallow neural network utilises interpretable morphological and spatial features, emulating histological markers, to predict progression. We conducted internal cross-validation on our development cohort (Sheffield; n = 193 cases) and independent validation on two external cohorts (Birmingham and Belfast; n = 89 cases). On external validation, the proposed OMTscore achieved an AUROC of 0.75 (Recall = 0.92) in predicting OED progression, outperforming other grading systems (Binary: AUROC = 0.72, Recall = 0.85). Survival analyses showed the prognostic value of our OMTscore (C-index = 0.60, p = 0.02), compared to WHO (C-index = 0.64, p = 0.003) and binary grades (C-index = 0.65, p < 0.001). Nuclear analyses elucidated the presence of peri-epithelial and intra-epithelial lymphocytes in highly predictive patches of transforming cases (p < 0.001). This is the first study to propose a completely automated, explainable, and externally validated algorithm for predicting OED transformation. Our algorithm shows comparable-to-human-level performance, offering a promising solution to the challenges of grading OED in routine clinical practice.
Affiliation(s)
- Adam J Shephard
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Hanya Mahmood
- School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Mostafa Jahanifar
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Kris D McCombe
- Precision Medicine Centre, Patrick G. Johnston Centre for Cancer Research, Queen's University Belfast, Belfast, UK
- Stephanie G Craig
- Precision Medicine Centre, Patrick G. Johnston Centre for Cancer Research, Queen's University Belfast, Belfast, UK
- Jacqueline James
- Precision Medicine Centre, Patrick G. Johnston Centre for Cancer Research, Queen's University Belfast, Belfast, UK
- Jill Brooks
- Institute of Head and Neck Studies and Education, Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham, UK
- Paul Nankivell
- Institute of Head and Neck Studies and Education, Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham, UK
- Hisham Mehanna
- Institute of Head and Neck Studies and Education, Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham, UK
- Syed Ali Khurram
- School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Nasir M Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
3
Luo J, Dai P, He Z, Huang Z, Liao S, Liu K. Deep learning models for ischemic stroke lesion segmentation in medical images: A survey. Comput Biol Med 2024; 175:108509. PMID: 38677171; DOI: 10.1016/j.compbiomed.2024.108509.
Abstract
This paper provides a comprehensive review of deep learning models for ischemic stroke lesion segmentation in medical images. Ischemic stroke is a severe neurological disease and a leading cause of death and disability worldwide. Accurate segmentation of stroke lesions in medical images such as MRI and CT scans is crucial for diagnosis, treatment planning and prognosis. This paper first introduces common imaging modalities used for stroke diagnosis, discussing their capabilities in imaging lesions at different disease stages from the acute to chronic stage. It then reviews three major public benchmark datasets for evaluating stroke segmentation algorithms: ATLAS, ISLES and AISD, highlighting their key characteristics. The paper proceeds to provide an overview of foundational deep learning architectures for medical image segmentation, including CNN-based and transformer-based models. It summarizes recent innovations in adapting these architectures to the task of stroke lesion segmentation across the three datasets, analyzing their motivations, modifications and results. A survey of loss functions and data augmentations employed for this task is also included. The paper discusses various aspects related to stroke segmentation tasks, including prior knowledge, small lesions, and multimodal fusion, and then concludes by outlining promising future research directions. Overall, this comprehensive review covers critical technical developments in the field to support continued progress in automated stroke lesion segmentation.
Affiliation(s)
- Jialin Luo
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Peishan Dai
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Zhuang He
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Zhongchao Huang
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, Hunan, China
- Shenghui Liao
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Kun Liu
- Brain Hospital of Hunan Province (The Second People's Hospital of Hunan Province), Changsha, Hunan, China
4
Hirani R, Noruzi K, Khuram H, Hussaini AS, Aifuwa EI, Ely KE, Lewis JM, Gabr AE, Smiley A, Tiwari RK, Etienne M. Artificial Intelligence and Healthcare: A Journey through History, Present Innovations, and Future Possibilities. Life (Basel) 2024; 14:557. PMID: 38792579; PMCID: PMC11122160; DOI: 10.3390/life14050557.
Abstract
Artificial intelligence (AI) has emerged as a powerful tool in healthcare, significantly impacting practices from diagnostics to treatment delivery and patient management. This article examines the progress of AI in healthcare, from the field's inception in the 1960s to present-day innovative applications in areas such as precision medicine, robotic surgery, and drug development. It also explores how the COVID-19 pandemic accelerated the adoption of AI in technologies such as telemedicine and chatbots, enhancing accessibility and medical education. Looking forward, the paper speculates on the promising future of AI in healthcare while critically addressing the ethical and societal considerations that accompany the integration of AI technologies. Furthermore, the potential of AI to mitigate health disparities and the ethical implications surrounding data usage and patient privacy are discussed, emphasizing the need for evolving guidelines to govern AI's application in healthcare.
Affiliation(s)
- Rahim Hirani
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
- Kaleb Noruzi
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Hassan Khuram
- College of Medicine, Drexel University, Philadelphia, PA 19129, USA
- Anum S. Hussaini
- Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA 02115, USA
- Esewi Iyobosa Aifuwa
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Kencie E. Ely
- Kirk Kerkorian School of Medicine, University of Nevada Las Vegas, Las Vegas, NV 89106, USA
- Joshua M. Lewis
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Ahmed E. Gabr
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Abbas Smiley
- School of Medicine and Dentistry, University of Rochester, Rochester, NY 14642, USA
- Raj K. Tiwari
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
- Mill Etienne
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Department of Neurology, New York Medical College, Valhalla, NY 10595, USA
5
Liu R, Li X, Liu Y, Du L, Zhu Y, Wu L, Hu B. A high-speed microscopy system based on deep learning to detect yeast-like fungi cells in blood. Bioanalysis 2024; 16:289-303. PMID: 38334080; DOI: 10.4155/bio-2023-0193.
Abstract
Background: Blood-invasive fungal infections can cause the death of patients, while diagnosis of fungal infections is challenging. Methods: A high-speed microscopy detection system was constructed that included a microfluidic system, a microscope connected to a high-speed camera and a deep learning analysis section. Results: For training data, the sensitivity and specificity of the convolutional neural network model were 93.5% (92.7-94.2%) and 99.5% (99.1-99.5%), respectively. For validating data, the sensitivity and specificity were 81.3% (80.0-82.5%) and 99.4% (99.2-99.6%), respectively. Cryptococcal cells were found in 22.07% of blood samples. Conclusion: This high-speed microscopy system can analyze fungal pathogens in blood samples rapidly with high sensitivity and specificity and can help dramatically accelerate the diagnosis of fungal infectious diseases.
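The sensitivity and specificity reported above derive directly from confusion-matrix counts. A minimal sketch, with toy counts chosen only to reproduce the reported training figures (not the study's actual data):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 1000 fungal-positive and 1000 negative image patches
sens, spec = sensitivity_specificity(tp=935, fn=65, tn=995, fp=5)
print(f"{sens:.1%}, {spec:.1%}")  # 93.5%, 99.5%
```

Note the asymmetry in the study's validation results: specificity held at 99.4% while sensitivity dropped to 81.3%, i.e., the model missed more true fungal cells than it raised false alarms.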
Affiliation(s)
- Ruiqi Liu
- Guangxi Key Laboratory of Special Biomedicine, School of Medicine, Guangxi University, Nanning, Guangxi, P.R. China
- Xiaojie Li
- Department of Laboratory Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, P.R. China
- Yingyi Liu
- Guangxi Key Laboratory of Special Biomedicine, School of Medicine, Guangxi University, Nanning, Guangxi, P.R. China
- Lijun Du
- Department of Clinical Laboratory, Huadu District People's Hospital of Guangzhou, Guangdong, China
- Yingzhu Zhu
- Guangzhou Waterrock Gene Technology, Guangdong, China
- Lichuan Wu
- Guangxi Key Laboratory of Special Biomedicine, School of Medicine, Guangxi University, Nanning, Guangxi, P.R. China
- Bo Hu
- Department of Laboratory Medicine, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, P.R. China
6
Morita D, Kawarazaki A, Koimizu J, Tsujiko S, Soufi M, Otake Y, Sato Y, Numajiri T. Automatic orbital segmentation using deep learning-based 2D U-net and accuracy evaluation: A retrospective study. J Craniomaxillofac Surg 2023; 51:609-613. PMID: 37813770; DOI: 10.1016/j.jcms.2023.09.003.
Abstract
The purpose of this study was to verify whether the accuracy of automatic segmentation (AS) of computed tomography (CT) images of fractured orbits using deep learning (DL) is sufficient for clinical application. In the surgery of orbital fractures, many methods have been reported to create a 3D anatomical model for use as a reference. However, because the orbit bone is thin and complex, creating a segmentation model for 3D printing is complicated and time-consuming. Here, the training of DL was performed using U-Net as the DL model, and the AS output was validated with Dice coefficients and average symmetric surface distance (ASSD). In addition, the AS output was 3D printed and evaluated for accuracy by four surgeons, each with over 15 years of clinical experience. One hundred twenty-five CT images were prepared, and manual orbital segmentation was performed in all cases. Ten orbital fracture cases were randomly selected as validation data, and the remaining 115 were set as training data. AS was successful in all cases, with good accuracy: Dice, 0.860 ± 0.033 (mean ± SD); ASSD, 0.713 ± 0.212 mm. In evaluating AS accuracy, the expert surgeons generally considered that it could be used for surgical support without further modification. The orbital AS algorithm developed using DL in this study is extremely accurate and can create 3D models rapidly at low cost, potentially enabling safer and more accurate surgeries.
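The two metrics here are complementary: Dice measures volume overlap, while ASSD measures boundary error in millimetres. A minimal sketch of ASSD over two surface point sets, illustrative only; real implementations extract surface voxels from the masks and typically use distance transforms:

```python
import numpy as np

def assd(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    """Average symmetric surface distance between point sets of shape (N, 3) and (M, 3):
    each point's distance to the nearest point on the other surface, averaged
    over both directions."""
    # Pairwise Euclidean distances via broadcasting: shape (N, M)
    d = np.linalg.norm(surface_a[:, None, :] - surface_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest-neighbour distance from each A point
    b_to_a = d.min(axis=0)  # nearest-neighbour distance from each B point
    return (a_to_b.sum() + b_to_a.sum()) / (len(a_to_b) + len(b_to_a))

# Toy surfaces: two parallel segments 1 mm apart
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
print(assd(a, b))  # 1.0
```

On this scale, the reported ASSD of 0.713 mm means the predicted orbital surface sits, on average, well under a millimetre from the expert contour.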
Affiliation(s)
- Daiki Morita
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Ayako Kawarazaki
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Jungen Koimizu
- Department of Plastic and Reconstructive Surgery, Omihachiman Community Medical Center, Shiga, Japan
- Shoko Tsujiko
- Department of Plastic and Reconstructive Surgery, Saiseikai Shigaken Hospital, Shiga, Japan
- Mazen Soufi
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshito Otake
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshinobu Sato
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Toshiaki Numajiri
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
7
Piraianu AI, Fulga A, Musat CL, Ciobotaru OR, Poalelungi DG, Stamate E, Ciobotaru O, Fulga I. Enhancing the Evidence with Algorithms: How Artificial Intelligence Is Transforming Forensic Medicine. Diagnostics (Basel) 2023; 13:2992. PMID: 37761359; PMCID: PMC10529115; DOI: 10.3390/diagnostics13182992.
Abstract
BACKGROUND The integration of artificial intelligence (AI) into various fields has ushered in a new era of multidisciplinary progress. Defined as the ability of a system to interpret external data, learn from it, and adapt to specific tasks, AI is poised to revolutionize the world. In forensic medicine and pathology, algorithms play a crucial role in data analysis, pattern recognition, anomaly identification, and decision making. This review explores the diverse applications of AI in forensic medicine, encompassing fields such as forensic identification, ballistics, traumatic injuries, postmortem interval estimation, forensic toxicology, and more. RESULTS A thorough review of 113 articles revealed a subset of 32 papers directly relevant to the research, covering a wide range of applications. These included forensic identification, ballistics and additional factors of shooting, traumatic injuries, post-mortem interval estimation, forensic toxicology, sexual assaults/rape, crime scene reconstruction, virtual autopsy, and medical act quality evaluation. The studies demonstrated the feasibility and advantages of employing AI technology in various facets of forensic medicine and pathology. CONCLUSIONS The integration of AI in forensic medicine and pathology offers promising prospects for improving accuracy and efficiency in medico-legal practices. From forensic identification to post-mortem interval estimation, AI algorithms have shown the potential to reduce human subjectivity, mitigate errors, and provide cost-effective solutions. While challenges surrounding ethical considerations, data security, and algorithmic correctness persist, continued research and technological advancements hold the key to realizing the full potential of AI in forensic applications. As the field of AI continues to evolve, it is poised to play an increasingly pivotal role in the future of forensic medicine and pathology.
Affiliation(s)
- Ana Fulga
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania
- Elena Stamate
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania
8
Poalelungi DG, Musat CL, Fulga A, Neagu M, Neagu AI, Piraianu AI, Fulga I. Advancing Patient Care: How Artificial Intelligence Is Transforming Healthcare. J Pers Med 2023; 13:1214. PMID: 37623465; PMCID: PMC10455458; DOI: 10.3390/jpm13081214.
Abstract
Artificial Intelligence (AI) has emerged as a transformative technology with immense potential in the field of medicine. By leveraging machine learning and deep learning, AI can assist in diagnosis, treatment selection, and patient monitoring, enabling more accurate and efficient healthcare delivery. The widespread implementation of AI in healthcare has the potential to revolutionize patient outcomes and transform the way healthcare is practiced, leading to improved accessibility, affordability, and quality of care. This article explores the diverse applications and reviews the current state of AI adoption in healthcare. It concludes by emphasizing the need for collaboration between physicians and technology experts to harness the full potential of AI.
Affiliation(s)
- Diana Gina Poalelungi
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei St., 800578 Galati, Romania
- Carmina Liana Musat
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei St., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania
- Ana Fulga
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei St., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania
- Marius Neagu
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei St., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania
- Anca Iulia Neagu
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania
- ‘Saint John’ Clinical Emergency Hospital for Children, 800487 Galati, Romania
- Alin Ionut Piraianu
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei St., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania
- Iuliu Fulga
- Saint Apostle Andrew Emergency County Clinical Hospital, 177 Brailei St., 800578 Galati, Romania
- Faculty of Medicine and Pharmacy, Dunarea de Jos University of Galati, 35 AI Cuza St., 800010 Galati, Romania
9
Morita D, Mazen S, Tsujiko S, Otake Y, Sato Y, Numajiri T. Deep-learning-based automatic facial bone segmentation using a two-dimensional U-Net. Int J Oral Maxillofac Surg 2023; 52:787-792. PMID: 36328865; DOI: 10.1016/j.ijom.2022.10.015.
Abstract
The use of deep learning (DL) in medical imaging is becoming increasingly widespread. Although DL has been used previously for the segmentation of facial bones in computed tomography (CT) images, there are few reports of segmentation involving multiple areas. In this study, a U-Net was used to investigate the automatic segmentation of facial bones into eight areas, with the aim of facilitating virtual surgical planning (VSP) and computer-aided design and manufacturing (CAD/CAM) in maxillofacial surgery. CT data from 50 patients were prepared and used for training, and five-fold cross-validation was performed. The output results generated by the DL model were validated by Dice coefficient and average symmetric surface distance (ASSD). The automatic segmentation was successful in all cases, with a mean ± standard deviation Dice coefficient of 0.897 ± 0.077 and ASSD of 1.168 ± 1.962 mm. The accuracy was very high for the mandible (Dice coefficient 0.984, ASSD 0.324 mm) and zygomatic bones (Dice coefficient 0.931, ASSD 0.487 mm), and these could be introduced for VSP and CAD/CAM without any modification. The results for other areas, particularly the teeth, were slightly inferior, with possible reasons being the effects of defects, bonded maxillary and mandibular teeth, and metal artefacts. A limitation of this study is that the data were from a single institution. Hence further research is required to improve the accuracy for some facial areas and to validate the results in larger and more diverse populations.
Affiliation(s)
- D Morita
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- S Mazen
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- S Tsujiko
- Department of Plastic and Reconstructive Surgery, Saiseikai Shigaken Hospital, Shiga, Japan
- Y Otake
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Y Sato
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- T Numajiri
- Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
10
Liu Y, Zabihollahy F, Yan R, Lee B, Janzen C, Devaskar SU, Sung K. Evaluation of Spatial Attentive Deep Learning for Automatic Placental Segmentation on Longitudinal MRI. J Magn Reson Imaging 2023; 57:1533-1540. PMID: 37021577; PMCID: PMC10080136; DOI: 10.1002/jmri.28403.
Abstract
BACKGROUND Automated segmentation of the placenta by MRI in early pregnancy may help predict normal and aberrant placental function, which could improve the efficiency of placental assessment and the prediction of pregnancy outcomes. An automated segmentation method that works at one gestational age may not transfer effectively to other gestational ages. PURPOSE To evaluate a spatial attentive deep learning method (SADL) for automated placental segmentation on longitudinal placental MRI scans. STUDY TYPE Prospective, single-center. SUBJECTS A total of 154 pregnant women who underwent MRI scans at both 14-18 weeks of gestation and 19-24 weeks of gestation, divided into training (N = 108), validation (N = 15), and independent testing (N = 31) datasets. FIELD STRENGTH/SEQUENCE A 3 T, T2-weighted half Fourier single-shot turbo spin-echo (T2-HASTE) sequence. ASSESSMENT The reference standard of placental segmentation was manual delineation on T2-HASTE by a third-year neonatology clinical fellow (B.L.) under the supervision of an experienced maternal-fetal medicine specialist (C.J. with 20 years of experience) and an MRI scientist (K.S. with 19 years of experience). STATISTICAL TESTS The three-dimensional Dice similarity coefficient (DSC) was used to measure the automated segmentation performance compared to the manual placental segmentation. A paired t-test was used to compare the DSCs between SADL and U-Net methods. A Bland-Altman plot was used to analyze the agreement between manual and automated placental volume measurements. A P value < 0.05 was considered statistically significant. RESULTS In the testing dataset, SADL achieved average DSCs of 0.83 ± 0.06 and 0.84 ± 0.05 in the first and second MRI, which were significantly higher than those achieved by U-Net (0.77 ± 0.08 and 0.76 ± 0.10, respectively). A total of 6 out of 62 MRI scans (9.6%) had differences between the SADL-based automated and manual volume measurements that fell outside the 95% limits of agreement. DATA CONCLUSIONS SADL can automatically detect and segment the placenta with high performance in MRI at two different gestational ages. LEVEL OF EVIDENCE: 4. TECHNICAL EFFICACY: Stage 2.
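The Bland-Altman check used above flags scans whose automated-vs-manual volume difference falls outside the 95% limits of agreement (mean difference ± 1.96 SD of the differences). A minimal sketch with made-up placental volumes in mL, not the study's measurements:

```python
import numpy as np

def limits_of_agreement(manual, automated):
    """Bland-Altman 95% limits of agreement: bias ± 1.96 * SD of the differences."""
    diff = np.asarray(automated, dtype=float) - np.asarray(manual, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)  # sample SD of paired differences
    return bias - half_width, bias + half_width

# Hypothetical paired volume measurements (mL)
manual    = [510.0, 520.0, 505.0, 498.0, 530.0, 515.0]
automated = [512.0, 518.0, 509.0, 500.0, 526.0, 519.0]
lo, hi = limits_of_agreement(manual, automated)
print(lo < 0 < hi)  # limits straddle zero: no systematic over- or under-estimation
```

By construction roughly 5% of scans are expected outside these limits, so the study's 9.6% (6 of 62) indicates slightly more disagreement than chance alone would produce.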
Affiliation(s)
- Yongkai Liu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Fatemeh Zabihollahy
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Ran Yan
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Department of Bioengineering, Henry Samueli School of Engineering, University of California, Los Angeles, CA, USA
- Brian Lee
- Department of Pediatrics, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Carla Janzen
- Department of Obstetrics and Gynecology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Sherin U. Devaskar
- Department of Pediatrics, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Kyunghyun Sung
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Department of Bioengineering, Henry Samueli School of Engineering, University of California, Los Angeles, CA, USA
11
|
Surianarayanan C, Lawrence JJ, Chelliah PR, Prakash E, Hewage C. Convergence of Artificial Intelligence and Neuroscience towards the Diagnosis of Neurological Disorders-A Scoping Review. Sensors (Basel) 2023; 23:3062. [PMID: 36991773 PMCID: PMC10053494 DOI: 10.3390/s23063062] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 03/09/2023] [Accepted: 03/09/2023] [Indexed: 06/19/2023]
Abstract
Artificial intelligence (AI) is a field of computer science concerned with simulating human intelligence in machines so that they gain problem-solving and decision-making capabilities similar to those of the human brain. Neuroscience is the scientific study of the structure and cognitive functions of the brain. The two fields are mutually interrelated and advance each other. Neuroscience theory has contributed many distinct improvements to AI: the biological neural network inspired the complex deep neural network architectures used to build versatile applications such as text processing, speech recognition, and object detection. Additionally, neuroscience helps to validate existing AI-based models. Reinforcement learning in humans and animals has inspired computer scientists to develop algorithms for reinforcement learning in artificial systems, enabling those systems to learn complex strategies without explicit instruction. Such learning supports complex applications like robot-assisted surgery, autonomous vehicles, and gaming. In turn, with its ability to intelligently analyze complex data and extract hidden patterns, AI is a natural fit for analyzing highly complex neuroscience data. Large-scale AI-based simulations help neuroscientists test their hypotheses. Through an interface with the brain, an AI-based system can extract brain signals and the commands generated from those signals; these commands are fed into devices, such as a robotic arm, that help move paralyzed muscles or other body parts. AI also has several use cases in analyzing neuroimaging data and reducing the workload of radiologists. The study of neuroscience helps in the early detection and diagnosis of neurological disorders.
In the same way, AI can effectively be applied to the prediction and detection of neurological disorders. Thus, in this paper, a scoping review has been carried out on the mutual relationship between AI and neuroscience, emphasizing the convergence between AI and neuroscience in order to detect and predict various neurological disorders.
Affiliation(s)
- Edmond Prakash: Research Center for Creative Arts, University for the Creative Arts (UCA), Farnham GU9 7DS, UK
- Chaminda Hewage: Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK

12
Hughes H, O'Reilly M, McVeigh N, Ryan R. The top 100 most cited articles on artificial intelligence in radiology: a bibliometric analysis. Clin Radiol 2023; 78:99-106. [PMID: 36639176 DOI: 10.1016/j.crad.2022.09.133] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Revised: 09/12/2022] [Accepted: 09/16/2022] [Indexed: 01/12/2023]
Abstract
AIM To identify the most influential publications relating to artificial intelligence (AI) in radiology, in order to identify current trends in the literature and highlight areas requiring further research. MATERIALS AND METHODS A retrospective bibliometric analysis was performed of the top 100 most-cited articles on this topic. Data pertaining to year of publication, publishing journal, journal impact factor, authorship, article title, institution, country, type of article, article subject, and keywords were collected. RESULTS The number of citations per article in the top 100 list ranged from 254 to 3,576 (median 353). The number of citations per article per year ranged from 10.4 to 894 (median 65.6). The majority of articles (n=62) were published within the last 10 years. The USA was the most common country of origin (n=44). The journal with the greatest number of articles was IEEE Transactions on Medical Imaging (n=38). University Medical Center Utrecht contributed the greatest number of articles (n=6). There were 92 original research articles, 52 of which were clinical studies. The most common clinical subjects were neuroimaging (n=25) and oncology (n=16). The most common keyword was "deep learning" (n=34). CONCLUSION This study provides an in-depth analysis of the top 100 most-cited papers on the use of AI in radiology. It gives researchers detailed insight into the current influential papers in this field, the characteristics of those studies, and potential future trends in this fast-developing area of radiology.
Affiliation(s)
- H Hughes: Department of Radiology, St Vincent's University Hospital, Dublin 4, Ireland
- M O'Reilly: Department of Radiology, Cork University Hospital, Wilton, Co. Cork, Ireland
- N McVeigh: Department of Radiology, St Vincent's University Hospital, Dublin 4, Ireland
- R Ryan: Department of Radiology, St Vincent's University Hospital, Dublin 4, Ireland

13
Liu F, Meamardoost S, Gunawan R, Komiyama T, Mewes C, Zhang Y, Hwang E, Wang L. Deep learning for neural decoding in motor cortex. J Neural Eng 2022; 19. [PMID: 36148535 DOI: 10.1088/1741-2552/ac8fb5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 09/06/2022] [Indexed: 11/12/2022]
Abstract
Objective. Neural decoding is an important tool in neural engineering and neural data analysis. Of the various machine learning algorithms adopted for neural decoding, the recently introduced deep learning promises to excel. We therefore sought to apply deep learning to decode movement trajectories from the activity of motor cortical neurons. Approach. In this paper, we assessed the performance of deep learning methods in three decoding schemes: concurrent, time-delay, and spatiotemporal. In the concurrent decoding scheme, where the input to the network is the neural activity coincident with the movement, deep learning networks including the artificial neural network (ANN) and long short-term memory (LSTM) network were applied to decode movement and compared with traditional machine learning algorithms. Both ANN and LSTM were further evaluated in the time-delay decoding scheme, in which temporal delays are allowed between neural signals and movements. Lastly, in the spatiotemporal decoding scheme, we trained a convolutional neural network (CNN) to extract movement information from images representing the spatial arrangement of neurons, their activity, and connectomes (i.e., the relative strengths of connectivity between neurons), and combined the CNN and ANN to develop a hybrid spatiotemporal network. To reveal the input features the CNN in the hybrid network discovered for movement decoding, we performed a sensitivity analysis and identified specific regions in the spatial domain. Main results. Deep learning networks (ANN and LSTM) outperformed traditional machine learning algorithms in the concurrent decoding scheme. The results of ANN and LSTM in the time-delay decoding scheme showed that including neural data from time points preceding movement enabled decoders to perform more robustly when the temporal relationship between neural activity and movement changes dynamically over time. In the spatiotemporal decoding scheme, the hybrid spatiotemporal network containing the concurrent ANN decoder outperformed single-network concurrent decoders. Significance. Taken together, our study demonstrates that deep learning can become a robust and effective method for the neural decoding of behavior.
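The time-delay decoding scheme described above feeds the decoder neural activity from time points preceding the movement. One common way to build such inputs is to concatenate lagged activity vectors; a sketch with hypothetical data (the study's actual network inputs may differ):

```python
def lagged_inputs(neural, n_lags):
    """Build time-delay decoder inputs: for each time t, concatenate the
    activity of all neurons at t, t-1, ..., t-n_lags.

    neural: list of per-timestep activity lists (n_timesteps x n_neurons).
    Returns one flattened feature vector per usable timestep.
    """
    features = []
    for t in range(n_lags, len(neural)):
        window = []
        for lag in range(n_lags + 1):  # current bin plus n_lags history bins
            window.extend(neural[t - lag])
        features.append(window)
    return features

# 5 timesteps, 2 neurons, 2 history bins -> 3 samples of length 6
activity = [[0, 1], [1, 0], [2, 2], [3, 1], [0, 4]]
X = lagged_inputs(activity, n_lags=2)
print(len(X), len(X[0]))  # 3 6
```

Each feature vector can then be paired with the movement at time t as the regression target.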
Affiliation(s)
- Fangyu Liu: Department of Civil and Environmental Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, United States of America
- Saber Meamardoost: Department of Chemical and Biological Engineering, University at Buffalo, Buffalo, NY 14260, United States of America
- Rudiyanto Gunawan: Department of Chemical and Biological Engineering, University at Buffalo, Buffalo, NY 14260, United States of America
- Takaki Komiyama: Department of Neurobiology, Center for Neural Circuits and Behavior, and Department of Neurosciences, University of California San Diego, La Jolla, CA 92093, United States of America
- Claudia Mewes: Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, United States of America
- Ying Zhang: Department of Cell and Molecular Biology, University of Rhode Island, Kingston, RI 02881, United States of America
- EunJung Hwang: Department of Neurobiology, Center for Neural Circuits and Behavior, and Department of Neurosciences, University of California San Diego, La Jolla, CA 92093, United States of America; Cell Biology and Anatomy Discipline, Center for Brain Function and Repair, Chicago Medical School, Rosalind Franklin University of Medicine and Science, North Chicago, IL 60064, United States of America
- Linbing Wang: Department of Civil and Environmental Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, United States of America

14
Mahant SS, Varma AR. Artificial Intelligence in Breast Ultrasound: The Emerging Future of Modern Medicine. Cureus 2022; 14:e28945. [PMID: 36237807 PMCID: PMC9547651 DOI: 10.7759/cureus.28945] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 09/08/2022] [Indexed: 11/25/2022] Open
Abstract
Artificial intelligence (AI) currently enjoys enormous and growing popularity, particularly for image recognition, and it has therefore been widely applied to breast ultrasound. AI can also perform quantitative evaluation, which further helps maintain diagnostic accuracy. Breast cancer is the most common cancer in women and poses a severe threat to women's health, and its early detection is closely associated with a patient's prognosis. As a result, using AI in breast cancer screening and detection is highly valuable. This brief review highlights the concept of AI from the perspective of breast ultrasound, focusing on early AI, i.e., traditional machine learning, and on deep learning algorithms. The use of AI in ultrasound, as well as in mammography, magnetic resonance imaging, nuclear medicine imaging, and the classification of breast lesions, is broadly explained, along with the challenges faced in bringing AI into daily practice.
15
Zhu W, Huang H, Zhou Y, Shi F, Shen H, Chen R, Hua R, Wang W, Xu S, Luo X. Automatic segmentation of white matter hyperintensities in routine clinical brain MRI by 2D VB-Net: A large-scale study. Front Aging Neurosci 2022; 14:915009. [PMID: 35966772 PMCID: PMC9372352 DOI: 10.3389/fnagi.2022.915009] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 07/14/2022] [Indexed: 11/13/2022] Open
Abstract
White matter hyperintensities (WMH) are imaging manifestations frequently observed in various neurological disorders, yet the clinical application of WMH quantification is limited. In this study, we designed a series of dedicated WMH labeling protocols and proposed a convolutional neural network named 2D VB-Net for the segmentation of WMH and other coexisting intracranial lesions, based on a large dataset of 1,045 subjects across various demographics and multiple scanners using the 2D thick-slice protocols more commonly applied in clinical practice. Using our labeling pipeline, the Dice consistency of the WMH regions manually depicted by two observers was 0.878, which formed a solid basis for the development and evaluation of the automatic segmentation system. The proposed algorithm outperformed other state-of-the-art methods (uResNet, 3D V-Net, and the Visual Geometry Group network) in the segmentation of WMH and other coexisting intracranial lesions and was well validated on datasets with thick-slice magnetic resonance (MR) images and on the 2017 Medical Image Computing and Computer Assisted Intervention (MICCAI) WMH Segmentation Challenge dataset (with thin-slice MR images), all showing excellent effectiveness. Furthermore, our method can subclassify WMH to display WMH distributions and is very lightweight. Additionally, in terms of correlation with visual rating scores, our algorithm showed excellent consistency with the manual delineations and was overall better than the competing methods. In conclusion, we developed an automatic WMH quantification framework for multiple application scenarios, exhibiting a promising future in clinical practice.
Affiliation(s)
- Wenhao Zhu: Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hao Huang: Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yaqi Zhou: Shanghai United Imaging Intelligence, Wuhan, China
- Feng Shi: Shanghai United Imaging Intelligence, Shanghai, China
- Hong Shen: Shanghai United Imaging Intelligence, Wuhan, China
- Ran Chen: Shanghai United Imaging Intelligence, Wuhan, China
- Rui Hua: Shanghai United Imaging Intelligence, Shanghai, China
- Wei Wang: Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shabei Xu: Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiang Luo: Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China

16
Alabi RO, Almangush A, Elmusrati M, Leivo I, Mäkitie A. Measuring the Usability and Quality of Explanations of a Machine Learning Web-Based Tool for Oral Tongue Cancer Prognostication. Int J Environ Res Public Health 2022; 19:8366. [PMID: 35886221 PMCID: PMC9322510 DOI: 10.3390/ijerph19148366] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Revised: 06/23/2022] [Accepted: 07/04/2022] [Indexed: 12/10/2022]
Abstract
Background: Machine learning models have been reported to assist in the proper management of cancer through accurate prognostication. Integrating such models as a web-based prognostic tool or calculator may help to improve cancer care and assist clinicians in making oral cancer management-related decisions. However, none of these models has been recommended in the daily practice of oral cancer care, owing to concerns related to machine learning methodologies and clinical implementation challenges; one such concern inherent to the science of machine learning is explainability. Objectives: This study measures the usability and explainability of a machine learning-based web prognostic tool designed for outcome prediction in oral tongue cancer. We used the System Usability Scale (SUS) and the System Causability Scale (SCS) to evaluate the usability and the quality of explanations of the prognostic tool. In addition, we propose a framework for the evaluation of post hoc explainability of web-based prognostic tools. Methods: A SUS- and SCS-based questionnaire was administered among pathologists, radiologists, cancer and machine learning researchers, and surgeons (n = 11) to evaluate the quality of explanations offered by the machine learning-based web prognostic tool and thereby address the concerns of explainability and usability of these models for cancer management. The examined web-based tool was developed by our group and is freely available online. Results: In terms of usability (SUS), 81.9% of respondents (45.5% strongly agreed; 36.4% agreed) indicated that neither the support of a technical assistant nor the need to learn many things was required to use the web-based tool. Furthermore, 81.8% agreed that the evaluated web-based tool was not cumbersome to use. The average SCS (explainability) score was 0.74. A total of 91.0% of the participants strongly agreed that the web-based tool can assist in clinical decision-making. These scores indicate that the examined web-based tool offers a significant level of usability and meaningful explanations of the outcome of interest. Conclusions: Integrating a trained, internally and externally validated model as a web-based tool or calculator offers an effective and easy route towards the adoption and acceptance of such models in future daily practice, an approach that has received significant attention in recent years. It is therefore important that the usability and explainability of these models are measured to achieve the touted benefits: a usable, well-explained web-based tool brings these models closer to everyday clinical practice and to more personalized, precision oncology.
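The SUS figures above come from the standard ten-item System Usability Scale. Its published scoring rule (general to SUS, not specific to this study) shifts odd-numbered (positively worded) items down by one, inverts even-numbered (negatively worded) items, and scales the sum by 2.5 onto a 0-100 range:

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are multiplied by 2.5, giving a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Best possible answers (5 on odd items, 1 on even items) give the maximum
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Neutral answers (all 3s) land exactly at the midpoint score of 50.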
Affiliation(s)
- Rasheed Omobolaji Alabi (corresponding author): Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, 00100 Helsinki, Finland; Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, 65200 Vaasa, Finland
- Alhadi Almangush: Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, 00100 Helsinki, Finland; Department of Pathology, University of Helsinki, Haartmaninkatu 3 (P.O. Box 21), FIN-00014 Helsinki, Finland; Institute of Biomedicine, Pathology, University of Turku, 20500 Turku, Finland; Faculty of Dentistry, Misurata University, Misurata 2478, Libya
- Mohammed Elmusrati: Department of Industrial Digitalization, School of Technology and Innovations, University of Vaasa, 65200 Vaasa, Finland
- Ilmo Leivo: Institute of Biomedicine, Pathology, University of Turku, 20500 Turku, Finland
- Antti Mäkitie: Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, 00100 Helsinki, Finland; Department of Otorhinolaryngology-Head and Neck Surgery, University of Helsinki, Helsinki University Hospital, 00029 HUS Helsinki, Finland; Department of Clinical Sciences, Intervention and Technology, Division of Ear, Nose and Throat Diseases, Karolinska Institute, Karolinska University Hospital, 17177 Stockholm, Sweden

17
Zhang Y, Zhu W, Li K, Yan D, Liu H, Bai J, Liu F, Cheng X, Wu T. SMANet: multi-region ensemble of convolutional neural network model for skeletal maturity assessment. Quant Imaging Med Surg 2022; 12:3556-3568. [PMID: 35782257 PMCID: PMC9246748 DOI: 10.21037/qims-21-1158] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 03/30/2022] [Indexed: 11/06/2023]
Abstract
Background Bone age assessment (BAA) is a crucial research topic in pediatric radiology, and interest in the development of automated methods for BAA is increasing. Current deep learning-based BAA algorithms display the following deficiencies: (I) most methods involve end-to-end prediction and lack integration with clinically interpretable methods; (II) BAA methods exhibit racial and geographical differences. Methods A novel, automatic skeletal maturity assessment (SMA) method incorporating clinically interpretable methods was proposed, based on a multi-region ensemble of convolutional neural networks (CNNs). The method predicts skeletal maturity scores, and thus bone age, from left-hand radiographs and key regional patches of clinical concern. Results Experiments on 4,861 left-hand radiographs from the database of Beijing Jishuitan Hospital showed a mean absolute error (MAE) of 31.4±0.19 points (skeletal maturity score) and 0.45±0.13 years (bone age) for the carpal bone series, and 29.9±0.21 points and 0.43±0.17 years, respectively, for the radius, ulna, and short (RUS) bone series based on the Tanner-Whitehouse 3 (TW3) method. Conclusions The proposed automatic SMA method, which is free of racial and geographical influence, is a novel way to assess childhood bone development via skeletal maturity. Furthermore, it provides performance comparable to endocrinologists, with greater stability and efficiency.
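The MAE figures above average the absolute difference between predicted and reference values. A minimal sketch with hypothetical bone-age readings (illustrative numbers, not the study's data):

```python
def mean_absolute_error(pred, ref):
    """Mean absolute error between predicted and reference values."""
    if len(pred) != len(ref) or not pred:
        raise ValueError("need two equal-length, non-empty sequences")
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

# Hypothetical bone-age predictions vs. reference readings (years)
predicted = [10.2, 7.8, 12.5, 9.0]
reference = [10.0, 8.3, 12.0, 9.4]
print(round(mean_absolute_error(predicted, reference), 2))  # 0.4
```

The same function applies unchanged to skeletal maturity scores in points.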
Affiliation(s)
- Yi Zhang: China Academy of Information and Communications Technology, Beijing, China
- Wenwen Zhu: China Academy of Information and Communications Technology, Beijing, China
- Kai Li: Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- Dong Yan: Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- Hua Liu: Forensic Science Service of Beijing Public Security Bureau, Beijing, China
- Jie Bai: Forensic Science Service of Beijing Public Security Bureau, Beijing, China
- Fan Liu: Forensic Science Service of Beijing Public Security Bureau, Beijing, China
- Xiaoguang Cheng: Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- Tongning Wu: China Academy of Information and Communications Technology, Beijing, China

18
Rao B H, Trieu JA, Nair P, Gressel G, Venu M, Venu RP. Artificial intelligence in endoscopy: More than what meets the eye in screening colonoscopy and endosonographic evaluation of pancreatic lesions. Artif Intell Gastrointest Endosc 2022; 3:16-30. [DOI: 10.37126/aige.v3.i3.16] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Revised: 03/07/2022] [Accepted: 05/07/2022] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI)-based tools have ushered in a new era of innovation in the field of gastrointestinal (GI) endoscopy. Despite vast improvements in endoscopic techniques and equipment, diagnostic endoscopy remains heavily operator-dependent, in particular colonoscopy and endoscopic ultrasound (EUS). Recent reports have shown that as much as 25% of colonic adenomas may be missed at colonoscopy, which can result in an increased incidence of interval colon cancer. Similarly, EUS has been shown to have high inter-observer variability and overlap in diagnoses, with relatively low specificity for pancreatic lesions. Our understanding of machine learning (ML) techniques in AI has evolved over the last decade, and their application in AI-based tools for endoscopic detection and diagnosis is being actively investigated at several centers. ML is an aspect of AI that is based on neural networks and is widely used for image classification, object detection, and semantic segmentation, which are key functional aspects of AI-related computer-aided diagnostic systems. In this review, the current status and limitations of ML, specifically for adenoma detection and the endosonographic diagnosis of pancreatic lesions, are summarized from the existing literature. This will help to better understand its role as viewed through the prism of real-world application in the field of GI endoscopy.
Affiliation(s)
- Harshavardhan Rao B: Department of Gastroenterology, Amrita Institute of Medical Sciences, Kochi 682041, Kerala, India
- Judy A Trieu: Internal Medicine - Gastroenterology, Loyola University Medical Center, Maywood, IL 60153, United States
- Priya Nair: Department of Gastroenterology, Amrita Institute of Medical Sciences, Kochi 682041, Kerala, India
- Gilad Gressel: Center for Cyber Security Systems and Networks, Amrita Vishwavidyapeetham, Kollam 690546, Kerala, India
- Mukund Venu: Internal Medicine - Gastroenterology, Loyola University Medical Center, Maywood, IL 60153, United States
- Rama P Venu: Department of Gastroenterology, Amrita Institute of Medical Sciences, Kochi 682041, Kerala, India

19
Thyreau B, Tatewaki Y, Chen L, Takano Y, Hirabayashi N, Furuta Y, Hata J, Nakaji S, Maeda T, Noguchi‐Shinohara M, Mimura M, Nakashima K, Mori T, Takebayashi M, Ninomiya T, Taki Y. Higher-resolution quantification of white matter hypointensities by large-scale transfer learning from 2D images on the JPSC-AD cohort. Hum Brain Mapp 2022; 43:3998-4012. [PMID: 35524684 PMCID: PMC9374893 DOI: 10.1002/hbm.25899] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 03/24/2022] [Accepted: 04/20/2022] [Indexed: 12/14/2022] Open
Abstract
White matter lesions (WML) commonly occur in older brains and are quantifiable on MRI, and they are often used as a biomarker in aging research. Although algorithms are regularly proposed that identify these lesions from T2 fluid-attenuated inversion recovery (FLAIR) sequences, none so far can estimate lesions directly from T1-weighted images with acceptable accuracy. Since 3D T1 is a polyvalent and higher-resolution sequence, it could be beneficial to obtain the distribution of WML directly from it. However, a serious difficulty, both for algorithms and for humans, lies in the ambiguities of brain signal intensity in T1 images. This manuscript shows that a cross-domain ConvNet (convolutional neural network) approach can help solve this problem. Still, this is non-trivial, as it would appear to require a large and varied dataset (for robustness) labelled at the same high resolution (for spatial accuracy). Instead, our model was taught from two-dimensional FLAIR images with a loss function designed to handle the super-resolution need. Crucially, we leveraged a very large training set for this task, the recently assembled multi-site Japan Prospective Studies Collaboration for Aging and Dementia (JPSC-AD) cohort. We describe the two-step procedure we followed to handle such a large number of imperfectly labeled samples. A large-scale accuracy evaluation conducted against FreeSurfer 7, and a further visual expert rating, revealed that WML segmentation from our ConvNet was consistently better. Finally, we made a directly usable software program based on the trained ConvNet model, available at https://github.com/bthyreau/deep-T1-WMH.
Affiliation(s)
- Benjamin Thyreau: Smart-Aging Research Center, Institute of Development, Aging, and Cancer, Tohoku University, Sendai, Japan
- Yasuko Tatewaki: Department of Aging Research and Geriatric Medicine, Institute of Development, Aging, and Cancer, Tohoku University, Sendai, Japan; Department of Geriatric Medicine and Neuroimaging, Tohoku University Hospital, Sendai, Japan
- Liying Chen: Smart-Aging Research Center, Institute of Development, Aging, and Cancer, Tohoku University, Sendai, Japan
- Yuji Takano: Smart-Aging Research Center, Institute of Development, Aging, and Cancer, Tohoku University, Sendai, Japan; Department of Psychological Sciences, University of Human Environments, Matsuyama, Japan
- Naoki Hirabayashi: Department of Epidemiology and Public Health, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Yoshihiko Furuta: Department of Epidemiology and Public Health, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Jun Hata: Department of Epidemiology and Public Health, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Shigeyuki Nakaji: Department of Social Medicine, Graduate School of Medicine, Hirosaki University, Hirosaki, Japan
- Tetsuya Maeda: Division of Neurology and Gerontology, Department of Internal Medicine, School of Medicine, Iwate Medical University, Iwate, Japan
- Moeko Noguchi-Shinohara: Department of Neurology and Neurobiology of Aging, Kanazawa University Graduate School of Medical Sciences, Kanazawa University, Kanazawa, Japan
- Kenji Nakashima: National Hospital Organization, Matsue Medical Center, Shimane, Japan
- Takaaki Mori: Department of Neuropsychiatry, Ehime University Graduate School of Medicine, Ehime University, Ehime, Japan
- Minoru Takebayashi: Faculty of Life Sciences, Department of Neuropsychiatry, Kumamoto University, Kumamoto, Japan
- Toshiharu Ninomiya: Department of Epidemiology and Public Health, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Yasuyuki Taki: Smart-Aging Research Center, Institute of Development, Aging, and Cancer, Tohoku University, Sendai, Japan; Department of Aging Research and Geriatric Medicine, Institute of Development, Aging, and Cancer, Tohoku University, Sendai, Japan; Department of Geriatric Medicine and Neuroimaging, Tohoku University Hospital, Sendai, Japan

20

21
Clever Hans effect found in a widely used brain tumour MRI dataset. Med Image Anal 2022; 77:102368. [DOI: 10.1016/j.media.2022.102368] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 12/19/2021] [Accepted: 01/10/2022] [Indexed: 12/11/2022]
22
Ong K, Young DM, Sulaiman S, Shamsuddin SM, Mohd Zain NR, Hashim H, Yuen K, Sanders SJ, Yu W, Hang S. Detection of subtle white matter lesions in MRI through texture feature extraction and boundary delineation using an embedded clustering strategy. Sci Rep 2022; 12:4433. [PMID: 35292654 PMCID: PMC8924181 DOI: 10.1038/s41598-022-07843-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 02/24/2022] [Indexed: 11/29/2022] Open
Abstract
White matter lesions (WML) underlie multiple brain disorders, and automatic WML segmentation is crucial to evaluate the natural disease course and the effectiveness of clinical interventions, including drug discovery. Although recent research has achieved tremendous progress in WML segmentation, accurate detection of the subtle WML present early in the disease course remains particularly challenging. Here we propose an approach to automatic segmentation of mild WML loads using an intensity standardisation technique, a gray level co-occurrence matrix (GLCM) embedded clustering technique, and a random forest (RF) classifier to extract texture features and identify morphology specific to true WML. We precisely define their boundaries through a local outlier factor (LOF) algorithm that identifies edge pixels by their local density deviation relative to their neighbors. The automated approach was validated on 32 human subjects, demonstrating strong agreement and correlation (excluding one outlier) with manual delineation by a neuroradiologist through intraclass correlation (ICC = 0.881, 95% CI 0.769, 0.941) and Pearson correlation (r = 0.895, p-value < 0.001), respectively, and outperforming three leading algorithms (Trimmed Mean Outlier Detection, Lesion Prediction Algorithm, and SALEM-LS) in five of the six established key metrics defined in the MICCAI Grand Challenge. By facilitating more accurate segmentation of subtle WML, this approach may enable earlier diagnosis and intervention.
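The GLCM used above for texture feature extraction counts how often pairs of gray levels co-occur at a fixed pixel offset; scalar features such as contrast are then computed from the normalized matrix. A sketch for a tiny quantized image (illustrative only; the study's GLCM-embedded clustering pipeline is more involved):

```python
def glcm(image, dx=1, dy=0, levels=4, symmetric=True):
    """Gray level co-occurrence matrix for one pixel offset (dx, dy).

    image: 2D list of integer gray levels in [0, levels).
    Returns a levels x levels matrix of normalized co-occurrence frequencies.
    """
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                counts[image[y][x]][image[ny][nx]] += 1
                if symmetric:  # also count the reversed pair
                    counts[image[ny][nx]][image[y][x]] += 1
    total = sum(sum(row) for row in counts)
    return [[c / total for c in row] for row in counts]

def contrast(matrix):
    """GLCM contrast: sum of P(i, j) * (i - j)^2, large for abrupt level changes."""
    return sum(p * (i - j) ** 2
               for i, row in enumerate(matrix)
               for j, p in enumerate(row))

# Toy 4-level image; horizontal offset (dx=1, dy=0)
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
m = glcm(img)
print(round(contrast(m), 3))  # 0.583
```

Other standard GLCM features (energy, homogeneity, correlation) are computed from the same normalized matrix with different weightings.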
Affiliation(s)
- Kokhaur Ong
- Bioinformatics Institute, A*STAR, Singapore
- Institute of Molecular and Cell Biology, A*STAR, Singapore
- David M Young
- Institute of Molecular and Cell Biology, A*STAR, Singapore
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, USA
- Sarina Sulaiman
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Johor, Malaysia
- Hilwati Hashim
- Department of Radiology, Faculty of Medicine, Universiti Teknologi MARA, Sungai Buloh, Malaysia
- Kahhay Yuen
- School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang, Malaysia
- Stephan J Sanders
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, USA
- Weimiao Yu
- Bioinformatics Institute, A*STAR, Singapore
- Institute of Molecular and Cell Biology, A*STAR, Singapore
- Computational Digital Pathology Laboratory, Bioinformatics Institute (BII), 30 Biopolis Street, #07-46 Matrix, Singapore 138671
- Seepheng Hang
- Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, UTM Skudai, 81310 Johor, Malaysia
|
23
|
Adamson PM, Bhattbhatt V, Principi S, Beriwal S, Strain LS, Offe M, Wang AS, Vo N, Schmidt TG, Jordan P. Technical note: Evaluation of a V‐Net autosegmentation algorithm for pediatric CT scans: Performance, generalizability and application to patient‐specific CT dosimetry. Med Phys 2022; 49:2342-2354. [PMID: 35128672 PMCID: PMC9007850 DOI: 10.1002/mp.15521] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2021] [Revised: 12/23/2021] [Accepted: 01/08/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE This study developed and evaluated a fully convolutional network (FCN) for pediatric CT organ segmentation and investigated the generalizability of the FCN across image heterogeneities such as CT scanner model/protocol and patient age. We also evaluated the autosegmentation models as part of a software tool for patient-specific CT dose estimation. METHODS A collection of 359 pediatric CT datasets with expert organ contours was used for model development and evaluation. Autosegmentation models were trained for each organ using a modified 3D V-Net FCN. An independent test set of 60 patients was withheld for testing. To evaluate the impact of scanner model/protocol and patient age heterogeneities, separate models were trained using subsets of scanner model/protocols and pediatric age groups. Training and test sets were split to answer questions about the generalizability of pediatric FCN autosegmentation models to unseen age groups and scanner model/protocols, as well as the merit of scanner model/protocol- or age-group-specific models. Finally, the organ contours resulting from the autosegmentation models were applied to patient-specific dose maps to evaluate the impact of segmentation errors on organ dose estimation. RESULTS Results demonstrate that the autosegmentation models generalize to CT scanner acquisition and reconstruction methods that were not present in the training dataset. While the models are not equally generalizable across age groups, age-group-specific models hold no advantage over combining heterogeneous age groups into a single training set. Dice similarity coefficient (DSC) and mean surface distance results are presented for 19 organ structures, for example, median DSC of 0.52 (duodenum), 0.74 (pancreas), 0.92 (stomach), and 0.96 (heart). The FCN models achieve a mean dose error within 5% of expert segmentations for all 19 organs except the spinal canal, where the mean error was 6.31%.
CONCLUSIONS Overall, these results are promising for the adoption of FCN autosegmentation models for pediatric CT, including applications for patient-specific CT dose estimation.
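The Dice similarity coefficient reported throughout these evaluations is straightforward to compute. A minimal sketch over voxel-index sets (the masks below are hypothetical illustrations, not data from the study):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel-index sets (1.0 = perfect overlap)."""
    if not a and not b:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical masks: 100 auto-segmented voxels, 90 expert voxels, 90 shared.
auto   = {(i, j, 0) for i in range(10) for j in range(10)}
expert = {(i, j, 0) for i in range(1, 10) for j in range(10)}
score = dice(auto, expert)  # 2 * 90 / (100 + 90), roughly 0.947
```

DSC rewards overlap relative to total size, which is why small, thin structures such as the duodenum tend to score lower than large organs like the heart even at similar boundary accuracy.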
Affiliation(s)
- Sara Principi
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53201, United States
- Linda S. Strain
- Department of Radiology, Children's Wisconsin and Medical College of Wisconsin, Milwaukee, WI 53226, United States
- Michael Offe
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53201, United States
- Adam S. Wang
- Department of Radiology, Stanford University, Stanford, CA 94305, United States
- Nghia-Jack Vo
- Department of Radiology, Children's Wisconsin and Medical College of Wisconsin, Milwaukee, WI 53226, United States
- Taly Gilat Schmidt
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53201, United States
- Petr Jordan
- Varian Medical Systems, Palo Alto, CA 94304, United States
|
24
|
van der Putten J, van der Sommen F. AIM in Barrett’s Esophagus. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_166] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
25
|
Hotz I, Deschwanden PF, Liem F, Mérillat S, Malagurski B, Kollias S, Jäncke L. Performance of three freely available methods for extracting white matter hyperintensities: FreeSurfer, UBO Detector, and BIANCA. Hum Brain Mapp 2021; 43:1481-1500. [PMID: 34873789 PMCID: PMC8886667 DOI: 10.1002/hbm.25739] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2021] [Revised: 11/11/2021] [Accepted: 11/26/2021] [Indexed: 11/07/2022] Open
Abstract
White matter hyperintensities (WMH) of presumed vascular origin are frequently found in MRIs of healthy older adults and are associated with aging and cognitive decline. Here, we compared and validated three algorithms for WMH extraction: FreeSurfer (T1w), UBO Detector (T1w + FLAIR), and FSL's Brain Intensity AbNormality Classification Algorithm (BIANCA; T1w + FLAIR), using a longitudinal dataset comprising MRI data of cognitively healthy older adults (baseline N = 231, age range 64–87 years). As a reference, we manually segmented WMH in T1w, three-dimensional (3D) FLAIR, and two-dimensional (2D) FLAIR images, which were used to assess the segmentation accuracy of the automated algorithms. Further, we assessed the relationships of the WMH volumes provided by the algorithms with Fazekas scores and age. FreeSurfer underestimated the WMH volumes and scored worst in Dice similarity coefficient (DSC = 0.434), but its WMH volumes correlated strongly with the Fazekas scores (rs = 0.73). BIANCA achieved the highest DSC (0.602) in 3D FLAIR images. However, its relations with the Fazekas scores were only moderate, especially in the 2D FLAIR images (rs = 0.41), and many outlier WMH volumes were detected when exploring within-person trajectories (2D FLAIR: ~30%). UBO Detector performed similarly to BIANCA in DSC with both modalities and reached the best DSC in 2D FLAIR (0.531) without requiring a tailored training dataset. In addition, it achieved very high associations with the Fazekas scores (2D FLAIR: rs = 0.80). In summary, our results emphasize the importance of carefully considering the choice of WMH segmentation algorithm and MR modality.
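The rs values above are Spearman rank correlations, which suit the ordinal Fazekas scale. A minimal sketch (ranking with tie averaging, then Pearson on the ranks; the volumes and scores below are hypothetical):

```python
def ranks(xs):
    """Fractional ranks; tied values get the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tied block
        avg = (i + j) / 2 + 1  # 1-based average rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    return pearson(ranks(x), ranks(y))

volumes = [1.2, 3.4, 0.8, 7.9, 5.5]  # hypothetical WMH volumes (ml)
fazekas = [1, 2, 0, 3, 2]            # corresponding Fazekas scores (0-3)
rs = spearman(volumes, fazekas)      # close to 1 for a monotonic relation
```

Because only ranks matter, rs is insensitive to the systematic volume under- or overestimation that separates the three tools, which is why a method can score poorly on DSC yet still track Fazekas severity well.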
Affiliation(s)
- Isabel Hotz
- Division of Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
- University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
- Franziskus Liem
- University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
- Susan Mérillat
- University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
- Brigitta Malagurski
- University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
- Spyros Kollias
- Department of Neuroradiology, University Hospital Zurich, Zurich, Switzerland
- Lutz Jäncke
- Division of Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
- University Research Priority Program (URPP), Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
|
26
|
Sundaresan V, Zamboni G, Dinsdale NK, Rothwell PM, Griffanti L, Jenkinson M. Comparison of domain adaptation techniques for white matter hyperintensity segmentation in brain MR images. Med Image Anal 2021; 74:102215. [PMID: 34454295 PMCID: PMC8573594 DOI: 10.1016/j.media.2021.102215] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 07/12/2021] [Accepted: 08/16/2021] [Indexed: 12/05/2022]
Abstract
Robust automated segmentation of white matter hyperintensities (WMHs) in different datasets (domains) is highly challenging due to differences in acquisition (scanner, sequence), population (WMH amount and location), and the limited availability of manual segmentations to train supervised algorithms. In this work we explore various domain adaptation techniques, such as transfer learning and domain adversarial learning methods (including domain adversarial neural networks and domain unlearning), to improve the generalisability of our recently proposed triplanar ensemble network, which serves as our baseline model. We used datasets with variations in intensity profile and lesion characteristics, acquired using different scanners. The source domain consisted of data acquired from 3 different scanners, while the target domain consisted of 2 datasets. We evaluated the domain adaptation techniques on the target domain datasets and, for the adversarial techniques, additionally evaluated performance on the source domain test dataset. For transfer learning, we also studied training options such as the minimal number of unfrozen layers and of subjects required for fine-tuning in the target domain. On comparing the performance of the different techniques on the target dataset, domain adversarial training of neural networks gave the best performance, making the technique promising for robust WMH segmentation.
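The "minimal number of unfrozen layers" experiment can be illustrated with a toy update rule in which only the last n layers receive gradient updates. This is purely illustrative: the layer weights and gradients below are hypothetical, and a real implementation would instead disable gradients on the frozen parameters of a deep-learning framework's model:

```python
def fine_tune_step(layers, grads, lr, n_unfrozen):
    """One gradient step that updates only the last n_unfrozen layers.

    layers / grads: lists of per-layer weight lists (a toy stand-in for a network).
    Earlier (frozen) layers are returned unchanged, mimicking transfer learning
    where pretrained early features are kept and only later layers adapt.
    """
    cut = len(layers) - n_unfrozen
    return [
        w if idx < cut else [wi - lr * gi for wi, gi in zip(w, g)]
        for idx, (w, g) in enumerate(zip(layers, grads))
    ]

layers = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # three hypothetical layers
grads  = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]
new = fine_tune_step(layers, grads, lr=0.1, n_unfrozen=1)
# Only the last layer changes; the first two are returned unmodified.
```

The trade-off the paper studies is exactly the choice of `n_unfrozen`: too few unfrozen layers and the model cannot adapt to the target domain; too many and it overfits the small target-domain training set.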
Affiliation(s)
- Vaanathi Sundaresan
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Oxford-Nottingham Centre for Doctoral Training in Biomedical Imaging, University of Oxford, UK
- Oxford India Centre for Sustainable Development, Somerville College, University of Oxford, UK
- Giovanna Zamboni
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Centre for Prevention of Stroke and Dementia, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Dipartimento di Scienze Biomediche, Metaboliche e Neuroscienze, Università di Modena e Reggio Emilia, Italy
- Nicola K Dinsdale
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Oxford-Nottingham Centre for Doctoral Training in Biomedical Imaging, University of Oxford, UK
- Peter M Rothwell
- Centre for Prevention of Stroke and Dementia, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Ludovica Griffanti
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Human Brain Activity, Department of Psychiatry, University of Oxford, Oxford, UK
- Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Australian Institute for Machine Learning (AIML), School of Computer Science, The University of Adelaide, Adelaide, Australia
- South Australian Health and Medical Research Institute (SAHMRI), Adelaide, Australia
|
27
|
Vrenken H, Jenkinson M, Pham DL, Guttmann CRG, Pareto D, Paardekooper M, de Sitter A, Rocca MA, Wottschel V, Cardoso MJ, Barkhof F. Opportunities for Understanding MS Mechanisms and Progression With MRI Using Large-Scale Data Sharing and Artificial Intelligence. Neurology 2021; 97:989-999. [PMID: 34607924 PMCID: PMC8610621 DOI: 10.1212/wnl.0000000000012884] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Accepted: 09/09/2021] [Indexed: 11/15/2022] Open
Abstract
Patients with multiple sclerosis (MS) have heterogeneous clinical presentations, symptoms, and progression over time, making MS difficult to assess and comprehend in vivo. The combination of large-scale data sharing and artificial intelligence creates new opportunities for monitoring and understanding MS using MRI. First, development of validated MS-specific image analysis methods can be boosted by verified reference, test, and benchmark imaging data. Using detailed expert annotations, artificial intelligence algorithms can be trained on such MS-specific data. Second, understanding disease processes could be greatly advanced through shared data of large MS cohorts with clinical, demographic, and treatment information. Relevant patterns in such data that may be imperceptible to a human observer could be detected through artificial intelligence techniques. This applies from image analysis (lesions, atrophy, or functional network changes) to large multidomain datasets (imaging, cognition, clinical disability, genetics). After reviewing data sharing and artificial intelligence, we highlight 3 areas that offer strong opportunities for making advances in the next few years: crowdsourcing, personal data protection, and organized analysis challenges. Difficulties as well as specific recommendations to overcome them are discussed, in order to best leverage data sharing and artificial intelligence to improve image analysis, imaging, and the understanding of MS.
Affiliation(s)
- Hugo Vrenken, Mark Jenkinson, Dzung L Pham, Charles R G Guttmann, Deborah Pareto, Michel Paardekooper, Alexandra de Sitter, Maria A Rocca, Viktor Wottschel, M Jorge Cardoso, Frederik Barkhof
- From the MS Center Amsterdam (H.V., A.d.S., V.W.), Amsterdam Neuroscience, Department of Radiology and Nuclear Medicine, Amsterdam UMC (M.P.), the Netherlands; Wellcome Centre for Integrative Neuroimaging (WIN), FMRIB (M.J.), Nuffield Department of Clinical Neurosciences (NDCN), University of Oxford, UK; Human Imaging and Image Processing Core (D.L.P.), Center for Neuroscience and Regenerative Medicine, The Henry M. Jackson Foundation, Bethesda, MD; Center for Neurological Imaging (C.R.G.G.), Department of Radiology, Brigham and Women's Hospital, Boston, MA; Section of Neuroradiology (Department of Radiology) (D.P.), Vall d'Hebron University Hospital and Research Institute (VHIR), Autonomous University Barcelona, Spain; Neuroimaging Research Unit (M.A.R.), Institute of Experimental Neurology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy; AMIGO (M.J.C.), School of Biomedical Engineering and Imaging Sciences, King's College London; and Institutes of Neurology & Healthcare Engineering (F.B.), UCL London, UK.
|
28
|
Li K, Xu Y, Liu L, Meng MQH. A Virtual Scanning Framework for Robotic Spinal Sonography with Automatic Real-time Recognition of Standard Views. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:4574-4577. [PMID: 34892234 DOI: 10.1109/embc46164.2021.9629703] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Ultrasound (US) imaging is widely used to assist in the diagnosis and intervention of the spine, but the manual scanning process would bring heavy physical and cognitive burdens on the sonographers. Robotic US acquisitions can provide an alternative to the standard handheld technique to reduce operator workload and avoid direct patient contact. However, the real-time interpretation of the acquired images is rarely addressed in existing robotic US systems. Therefore, we envision a robotic system that can automatically scan the spine and search for the standard views like an expert sonographer. In this work, we propose a virtual scanning framework based on real-world US data acquired by a robotic system to simulate the autonomous robotic spinal sonography, and incorporate automatic real-time recognition of the standard views of the spine based on a multi-scale fusion approach and deep convolutional neural networks. Our method can accurately classify 96.71% of the standard views of the spine in the test set, and the simulated clinical application preliminarily demonstrates the potential of our method.
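One common form of multi-scale fusion is to average the per-class softmax probabilities produced at each image scale and take the top class. The sketch below assumes that scheme for illustration; the paper's exact fusion architecture may differ, and the logits are hypothetical:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_scales(per_scale_logits):
    """Average class probabilities across scales, then pick the top class."""
    probs = [softmax(l) for l in per_scale_logits]
    n_classes = len(probs[0])
    fused = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    return fused.index(max(fused)), fused

# Hypothetical logits for one US frame at three image scales, three candidate views:
scales = [[2.0, 0.1, -1.0], [1.5, 0.3, -0.5], [0.2, 1.8, -0.9]]
view, fused = fuse_scales(scales)
```

Averaging after the softmax lets a confident majority of scales outvote a single scale that disagrees, which is the usual motivation for late fusion in view recognition.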
|
29
|
Sundaresan V, Zamboni G, Rothwell PM, Jenkinson M, Griffanti L. Triplanar ensemble U-Net model for white matter hyperintensities segmentation on MR images. Med Image Anal 2021; 73:102184. [PMID: 34325148 PMCID: PMC8505759 DOI: 10.1016/j.media.2021.102184] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2020] [Revised: 03/10/2021] [Accepted: 07/16/2021] [Indexed: 01/05/2023]
Abstract
White matter hyperintensities (WMHs) have been associated with various cerebrovascular and neurodegenerative diseases. Reliable quantification of WMHs is essential for understanding their clinical impact in normal and pathological populations. Automated segmentation of WMHs is highly challenging due to heterogeneity in WMH characteristics between deep and periventricular white matter, the presence of artefacts, and differences in the pathology and demographics of populations. In this work, we propose an ensemble triplanar network that combines the predictions from three different planes of brain MR images to provide an accurate WMH segmentation. The network uses anatomical information regarding WMH spatial distribution in its loss functions, to improve the efficiency of segmentation and to overcome the contrast variations between deep and periventricular WMHs. We evaluated our method on 5 datasets, of which 3 are part of a publicly available dataset (the training data for the MICCAI WMH Segmentation Challenge 2017 - MWSC 2017) consisting of subjects from three different cohorts, and we also submitted our method to MWSC 2017 to be evaluated on the unseen test datasets. On evaluating our method separately in deep and periventricular regions, we observed robust and comparable performance in both regions. Our method performed better than most existing methods, including FSL BIANCA, and on par with the top-ranking deep learning methods of MWSC 2017.
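The ensemble step can be sketched as averaging the voxelwise WMH probabilities predicted from the axial, sagittal, and coronal networks and thresholding the result. This is a minimal illustration of plane-level averaging, assuming equal weights and a 0.5 threshold; the probabilities below are hypothetical:

```python
def triplanar_ensemble(axial, sagittal, coronal, threshold=0.5):
    """Average per-voxel WMH probabilities from the three planes and threshold.

    Inputs are flat lists of probabilities over the same voxel ordering;
    returns a boolean WMH mask.
    """
    fused = [(a + s + c) / 3 for a, s, c in zip(axial, sagittal, coronal)]
    return [p >= threshold for p in fused]

# Hypothetical probabilities for four voxels from each planar network:
axial    = [0.9, 0.2, 0.6, 0.4]
sagittal = [0.8, 0.1, 0.7, 0.3]
coronal  = [0.7, 0.3, 0.8, 0.2]
mask = triplanar_ensemble(axial, sagittal, coronal)
# A voxel is labelled WMH only where the planes agree on average.
```

Averaging across planes suppresses false positives that appear in only one orientation (e.g. flow artefacts visible in a single plane), which is part of the robustness the ensemble design targets.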
Affiliation(s)
- Vaanathi Sundaresan
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Oxford-Nottingham Centre for Doctoral Training in Biomedical Imaging, University of Oxford, UK
- Oxford India Centre for Sustainable Development, Somerville College, University of Oxford, UK
- Giovanna Zamboni
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Centre for Prevention of Stroke and Dementia, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Dipartimento di Scienze Biomediche, Metaboliche e Neuroscienze, Universitá di Modena e Reggio Emilia, Italy
- Peter M. Rothwell
- Centre for Prevention of Stroke and Dementia, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Australian Institute for Machine Learning (AIML), School of Computer Science, The University of Adelaide, Adelaide, Australia
- Ludovica Griffanti
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Human Brain Activity, Department of Psychiatry, University of Oxford, Oxford, UK
| |
Collapse
|
30
|
An [18F]FDG-PET/CT deep learning method for fully automated detection of pathological mediastinal lymph nodes in lung cancer patients. Eur J Nucl Med Mol Imaging 2021; 49:881-888. [PMID: 34519888 PMCID: PMC8803782 DOI: 10.1007/s00259-021-05513-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 07/28/2021] [Indexed: 12/24/2022]
Abstract
Purpose The identification of pathological mediastinal lymph nodes is an important step in the staging of lung cancer, with the presence of metastases significantly affecting survival rates. Nodes are currently identified by a physician, but this process is time-consuming and prone to errors. In this paper, we investigate the use of artificial intelligence–based methods to increase the accuracy and consistency of this process. Methods Whole-body 18F-labelled fluoro-2-deoxyglucose ([18F]FDG) positron emission tomography/computed tomography ([18F]FDG-PET/CT) scans (Philips Gemini TF) from 134 patients were retrospectively analysed. The thorax was automatically located, and then slices were fed into a U-Net to identify candidate regions. These regions were split into overlapping 3D cubes, which were individually predicted as positive or negative using a 3D CNN. From these predictions, pathological mediastinal nodes could be identified. A second cohort of 71 patients was then acquired from a different, newer scanner (GE Discovery MI), and the performance of the model on this dataset was tested with and without transfer learning. Results On the test set from the first scanner, our model achieved a sensitivity of 0.87 (95% confidence intervals [0.74, 0.94]) with 0.41 [0.22, 0.71] false positives/patient. This was comparable to the performance of an expert. Without transfer learning, on the test set from the second scanner, the corresponding results were 0.53 [0.35, 0.70] and 0.24 [0.10, 0.49], respectively. With transfer learning, these metrics were 0.88 [0.73, 0.97] and 0.69 [0.43, 1.04], respectively. Conclusion Model performance was comparable to that of an expert on data from the same scanner. With transfer learning, the model can be applied to data from a different scanner. To our knowledge it is the first study of its kind to go directly from whole-body [18F]FDG-PET/CT scans to pathological mediastinal lymph node localisation. 
Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05513-x.
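The transfer-learning step above, freezing features learned on the first scanner's data and retraining only the classifier head on the second scanner's data, can be sketched as follows. This is a hypothetical stand-in: the authors' actual 3D CNN architecture, layer sizes, and learning rate are not given here and are assumptions:

```python
import torch
import torch.nn as nn

# Illustrative candidate classifier (not the paper's model)
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),  # features learned on scanner 1
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                            # pathological vs. benign node
)

# Transfer learning: freeze the feature extractor, retrain only the head
# on the (smaller) dataset acquired from the second scanner.
for p in model[0].parameters():
    p.requires_grad = False
optimiser = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

x = torch.randn(2, 1, 8, 8, 8)   # two candidate 3D cubes from scanner 2
y = torch.tensor([0, 1])         # expert labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                  # frozen conv receives no gradient
optimiser.step()
```

Freezing early layers keeps the features that generalise across scanners while letting the decision boundary adapt to the new intensity statistics.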
|
31
|
Krishna Priya R, Chacko S. Improved particle swarm optimized deep convolutional neural network with super-pixel clustering for multiple sclerosis lesion segmentation in brain MRI imaging. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2021; 37:e3506. [PMID: 34181310 DOI: 10.1002/cnm.3506] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Revised: 02/09/2021] [Accepted: 03/12/2021] [Indexed: 06/13/2023]
Abstract
A central nervous system (CNS) disease affecting the insulating myelin sheaths around the brain axons is called multiple sclerosis (MS). Today, MS is extensively diagnosed and monitored using MRI, because of the sensitivity of structural MRI to the dissemination of white matter lesions in space and time. The main aim of this study is to propose multiple sclerosis lesion segmentation in brain MRI using an optimized deep convolutional neural network and super-pixel clustering. The proposed methodology comprises three stages: (a) preprocessing, (b) super-pixel segmentation, and (c) super-pixel classification. In the first stage, image enhancement and skull stripping are performed as preprocessing. In the second stage, the MS lesion and non-MS lesion regions are segmented by applying the SLICO algorithm over each slice of the volume. In the third stage, CNN training and classification are performed using these segmented lesion and non-lesion regions. To handle this complex task, a newly developed Improved Particle Swarm Optimization (IPSO) based optimized convolutional neural network classifier is applied. On clinical MS data, the approach exhibits a significant increase in the accuracy of WM lesion segmentation compared with the other evaluated methods.
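The super-pixel stage groups pixels into small homogeneous regions whose features are then classified. A toy regular-grid stand-in for that idea is sketched below; this is not SLICO itself (SLICO additionally adapts region boundaries to image content), and the function name is hypothetical:

```python
import numpy as np

def grid_superpixels(image, cell=4):
    """Toy stand-in for super-pixel segmentation: partition a 2D slice
    into regular cells and return a label map plus each cell's mean
    intensity (the kind of region-level input later classified)."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    means = {}
    k = 0
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            labels[i:i + cell, j:j + cell] = k
            means[k] = float(image[i:i + cell, j:j + cell].mean())
            k += 1
    return labels, means

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 slice
labels, means = grid_superpixels(img, cell=4)    # 4 regions of 4x4 pixels
```

Classifying a few hundred regions per slice instead of every pixel is what makes the subsequent CNN step tractable.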
Affiliation(s)
- R Krishna Priya
- Department of Electrical and Communication Engineering, National University of Science and Technology, Oman
- Susamma Chacko
- Department of Quality Enhancement and Assurance, National University of Science and Technology, Oman
|
32
|
Homayoun H, Ebrahimpour-komleh H. Automated Segmentation of Abnormal Tissues in Medical Images. J Biomed Phys Eng 2021; 11:415-424. [PMID: 34458189 PMCID: PMC8385212 DOI: 10.31661/jbpe.v0i0.958] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2018] [Accepted: 08/14/2018] [Indexed: 11/29/2022]
Abstract
Nowadays, medical image modalities are available almost everywhere and form the basis for diagnosing various diseases, each sensitive to a specific tissue type. Physicians typically look for abnormalities in these modalities during diagnostic procedures. The count and volume of abnormalities are very important for optimal treatment of patients. Segmentation is a preliminary step for these measurements and for further analysis. Manual segmentation of abnormalities is cumbersome, error prone, and subjective; as a result, automated segmentation of abnormal tissue is needed. In this study, representative techniques for segmentation of abnormal tissues are reviewed, with a main focus on the segmentation of multiple sclerosis lesions, breast cancer masses, lung nodules, and skin lesions. As experimental results demonstrate, methods based on deep learning perform better than other methods, which are usually based on hand-crafted feature engineering. Finally, the most common measures used to evaluate automated abnormal tissue segmentation methods are reported.
Affiliation(s)
- Hassan Homayoun
- PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
- Hossein Ebrahimpour-komleh
- PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
|
33
|
MCA-DN: Multi-path convolution leveraged attention deep network for salvageable tissue detection in ischemic stroke from multi-parametric MRI. Comput Biol Med 2021; 136:104724. [PMID: 34388469 DOI: 10.1016/j.compbiomed.2021.104724] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 07/16/2021] [Accepted: 07/30/2021] [Indexed: 11/24/2022]
Abstract
BACKGROUND AND OBJECTIVE Accurate and timely treatment of ischemic stroke can restore the blood flow in the affected area and reduce the risk of disability and death. Identification and localisation of both direct and collateral blood flow restriction from MRI using computational intelligence play a crucial role in assisting manual diagnosis decisions in stroke treatment. METHOD A novel multi-path convolution leveraged attention based deep network (MCA-DN) is proposed to address this challenge. MCA-DN derives attention from multi-path convolution, with differently weighted filters in each attention convolution sub-path and interactions on the same level of abstraction. This allows the network to focus on voxels with enhanced weighted activations, pointing to a plausible lesion. Acquiring attention by embedding multiple filter paths also prioritizes the selective activation of multi-parametric MRI sequences. The multi-path convolution assisted attention block allows the network layers to gain more insight into the input tensor, enabling the expansion of the hypothesis search space with a controlled parameter count. RESULTS The algorithm was evaluated on 139 patients across 3 datasets comprising 4 sub-datasets, including the benchmarked ISLES-2015 and ISLES-2017 challenge datasets. MCA-DN achieved a Dice similarity coefficient of 77.3%, sensitivity of 82.8%, and specificity of 98.8% for stroke segmentation, outperforming five state-of-the-art methods. CONCLUSION The competitive performance of MCA-DN demonstrates its potential to assist patient-specific stroke treatment planning by estimating the benefit of reperfusion.
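The three evaluation measures reported above (Dice similarity coefficient, sensitivity, specificity) are standard and can be computed directly from binary masks; a minimal sketch with toy data, not taken from the paper:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice similarity coefficient, sensitivity, and specificity for
    binary lesion masks, from voxel-wise confusion counts."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

truth = np.zeros((10, 10), dtype=int); truth[2:6, 2:6] = 1  # 16 lesion voxels
pred = np.zeros((10, 10), dtype=int); pred[3:6, 2:6] = 1    # misses one row
dice, sens, spec = segmentation_metrics(pred, truth)
# tp=12, fp=0, fn=4, tn=84 -> Dice = 24/28 (about 0.857), sens = 0.75, spec = 1.0
```

Note that specificity is dominated by the large background class, which is why lesion papers report Dice and sensitivity alongside it.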
|
34
|
Liu Y, Li X, Li T, Li B, Wang Z, Gan J, Wei B. A deep semantic segmentation correction network for multi-model tiny lesion areas detection. BMC Med Inform Decis Mak 2021; 21:89. [PMID: 34330249 PMCID: PMC8323231 DOI: 10.1186/s12911-021-01430-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Accepted: 02/09/2021] [Indexed: 11/21/2022] Open
Abstract
Background Semantic segmentation of white matter hyperintensities (WMHs) related to focal cerebral ischemia (FCI) and lacunar infarction (LACI) is of significant importance for the automatic screening of tiny cerebral lesions and early prevention of LACI. However, existing studies on brain magnetic resonance imaging lesion segmentation focus on large lesions with obvious features, such as glioma and acute cerebral infarction. Owing to the multi-model tiny lesion areas of FCI and LACI, reliable and precise segmentation and/or detection of these lesion areas remains a significant challenge. Methods We propose a novel segmentation correction algorithm for estimating the lesion areas via segmentation and correction processes, in which we design two sub-models simultaneously: a segmentation network and a correction network. The segmentation network was first used to extract and segment diseased areas on T2 fluid-attenuated inversion recovery (FLAIR) images. The correction network was then used to classify these areas at the corresponding locations on T1 FLAIR images to distinguish between FCI and LACI. Finally, the results of the correction network were used to correct the segmentation results and achieve segmentation and recognition of the lesion areas. Results In our experiment on magnetic resonance images of 113 clinical patients, our method achieved a precision of 91.76% for detection and 92.89% for classification, indicating a powerful method for distinguishing between small lesions such as FCI and LACI. Conclusions Overall, we developed a complete method for segmentation and detection of WMHs related to FCI and LACI. The experimental results show its potential for clinical application. In the future, we will collect more clinical data and test more types of tiny lesions.
Affiliation(s)
- Yue Liu
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China; Qingdao Academy of Chinese Medical Sciences, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
- Xiang Li
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China; Qingdao Academy of Chinese Medical Sciences, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China; College of Intelligence and Information Engineering, Shandong University of Traditional Chinese Medicine, Jinan, 250355, China
- Tianyang Li
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China; Qingdao Academy of Chinese Medical Sciences, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
- Bin Li
- Radiology Department, Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, 250001, China
- Zhensong Wang
- Radiology Department, Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, 250001, China
- Jie Gan
- Radiology Department, Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, 250001, China
- Benzheng Wei
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China; Qingdao Academy of Chinese Medical Sciences, Shandong University of Traditional Chinese Medicine, Qingdao, 266112, China
|
35
|
Bernal J, Valverde S, Kushibar K, Cabezas M, Oliver A, Lladó X. Generating Longitudinal Atrophy Evaluation Datasets on Brain Magnetic Resonance Images Using Convolutional Neural Networks and Segmentation Priors. Neuroinformatics 2021; 19:477-492. [PMID: 33389607 DOI: 10.1007/s12021-020-09499-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/06/2020] [Indexed: 02/03/2023]
Abstract
Brain atrophy quantification plays a fundamental role in neuroinformatics since it permits studying brain development and neurological disorders. However, the lack of a ground truth prevents testing the accuracy of longitudinal atrophy quantification methods. We propose a deep learning framework to generate longitudinal datasets by deforming T1-w brain magnetic resonance imaging scans as requested through segmentation maps. Our proposal incorporates a cascaded multi-path U-Net optimised with a multi-objective loss which allows its paths to generate different brain regions accurately. We provided our model with baseline scans and real follow-up segmentation maps from two longitudinal datasets, ADNI and OASIS, and observed that our framework could produce synthetic follow-up scans that matched the real ones (Total scans= 584; Median absolute error: 0.03 ± 0.02; Structural similarity index: 0.98 ± 0.02; Dice similarity coefficient: 0.95 ± 0.02; Percentage of brain volume change: 0.24 ± 0.16; Jacobian integration: 1.13 ± 0.05). Compared to two relevant works generating brain lesions using U-Nets and conditional generative adversarial networks (CGAN), our proposal outperformed them significantly in most cases (p < 0.01), except in the delineation of brain edges where the CGAN took the lead (Jacobian integration: Ours - 1.13 ± 0.05 vs CGAN - 1.00 ± 0.02; p < 0.01). We examined whether changes induced with our framework were detected by FAST, SPM, SIENA, SIENAX, and the Jacobian integration method. We observed that induced and detected changes were highly correlated (Adj. R2 > 0.86). Our preliminary results on harmonised datasets showed the potential of our framework to be applied to various data collections without further adjustment.
Affiliation(s)
- Jose Bernal
- Computer Vision and Robotics Institute, Universitat de Girona, Girona, Spain.
- Sergi Valverde
- Computer Vision and Robotics Institute, Universitat de Girona, Girona, Spain
- Kaisar Kushibar
- Computer Vision and Robotics Institute, Universitat de Girona, Girona, Spain
- Mariano Cabezas
- Computer Vision and Robotics Institute, Universitat de Girona, Girona, Spain
- Arnau Oliver
- Computer Vision and Robotics Institute, Universitat de Girona, Girona, Spain
- Xavier Lladó
- Computer Vision and Robotics Institute, Universitat de Girona, Girona, Spain
|
36
|
Mill L, Wolff D, Gerrits N, Philipp P, Kling L, Vollnhals F, Ignatenko A, Jaremenko C, Huang Y, De Castro O, Audinot JN, Nelissen I, Wirtz T, Maier A, Christiansen S. Synthetic Image Rendering Solves Annotation Problem in Deep Learning Nanoparticle Segmentation. SMALL METHODS 2021; 5:e2100223. [PMID: 34927995 DOI: 10.1002/smtd.202100223] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 04/17/2021] [Indexed: 05/14/2023]
Abstract
Nanoparticles occur in various environments as a consequence of man-made processes, which raises concerns about their impact on the environment and human health. To allow for proper risk assessment, a precise and statistically relevant analysis of particle characteristics (such as size, shape, and composition) is required, which would greatly benefit from automated image analysis procedures. While deep learning shows impressive results in object detection tasks, its applicability is limited by the amount of representative, experimentally collected and manually annotated training data. Here, an elegant, flexible, and versatile method to bypass this costly and tedious data acquisition process is presented. It is shown that rendering software can be used to generate realistic, synthetic training data for a state-of-the-art deep neural network. Using this approach, a segmentation accuracy comparable to that of manual annotations can be achieved for toxicologically relevant metal-oxide nanoparticle ensembles, which were chosen as examples. The presented study paves the way toward the use of deep learning for automated, high-throughput particle detection in a variety of imaging techniques such as microscopy and spectroscopy, for a wide range of applications, including the detection of micro- and nanoplastic particles in water and tissue samples.
Affiliation(s)
- Leonid Mill
- Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Institute of Optics, Information and Photonics, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- David Wolff
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Nele Gerrits
- Health Unit, Flemish Institute for Technological Research, Mol, 2400, Belgium
- Patrick Philipp
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Lasse Kling
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Florian Vollnhals
- Institute of Optics, Information and Photonics, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Andrew Ignatenko
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Christian Jaremenko
- Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Yixing Huang
- Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Institut für Nanotechnologie und korrelative Mikroskopie, 91301, Forchheim, Germany
- Olivier De Castro
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Jean-Nicolas Audinot
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Inge Nelissen
- Health Unit, Flemish Institute for Technological Research, Mol, 2400, Belgium
- Tom Wirtz
- Advanced Instrumentation for Ion Nano-Analytics, Materials Research and Technology Department, Luxembourg Institute of Science and Technology, Belvaux, L-4422, Luxembourg
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Silke Christiansen
- Institute of Optics, Information and Photonics, Friedrich-Alexander-University Erlangen-Nuremberg, 91058, Erlangen, Germany
- Physics Department, Free University, 14195, Berlin, Germany
- Correlative Microscopy and Material Data Department, Fraunhofer Institute for Ceramic Technologies and Systems, 01277, Dresden, Germany
|
37
|
Bi T, Sferrazza C, D'Andrea R. Zero-Shot Sim-to-Real Transfer of Tactile Control Policies for Aggressive Swing-Up Manipulation. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3084880] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
38
|
Woldegiorgis S, Enqvist A, Baciak J. ResNet and CycleGAN for pulse shape discrimination of He-4 detector pulses: Recovering pulses conventional algorithms fail to label unanimously. Appl Radiat Isot 2021; 176:109819. [PMID: 34171767 DOI: 10.1016/j.apradiso.2021.109819] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 03/22/2021] [Accepted: 06/02/2021] [Indexed: 11/30/2022]
Abstract
Pulse shape discrimination (PSD) capable detectors, such as He-4, that respond to neutron and gamma-ray interactions have a threshold deposited energy value below which n/γ discrimination vanishes when using conventional PSD algorithms. Recent attempts at applying supervised-learning-based artificial neural networks for PSD use the pulses in the separated regions to train the networks so they can be used to classify another set of separated pulses. In doing so, pulses previously indistinguishable are not recovered for classification, which would have increased the number of neutron and gamma-ray pulses available for further analysis. Assuming that conventional PSD algorithms have unseparated regions because their parameter space fails to capture the intrinsic (but subtle) distinguishing behavior of some of the neutron and gamma-ray pulses, a cycle-consistent generative adversarial network (CycleGAN) was trained to amplify those differences and extract well-separated neutron and gamma-ray clusters. Results show that, once the network is trained with pulses from separated and unseparated regions, it is able to transform the pulses in the unseparated region to improve the PSD. Subsequent n/γ classification was performed using a deep residual network (ResNet) that takes pulses with 512 data points as input. Two different ResNets were explored: a simple ResNet, and a modified ResNet that takes segmented pulse inputs in the first layer and the corresponding time-axis values in the last hidden layer. The latter approach enables the network to extract time-correlated pulse features, enhancing its ability to capture the pulse behaviors relevant for PSD. Although it achieves slightly lower accuracy, 99.41% versus 99.89%, based simply on counting the number of correct n/γ labels assigned, the modified ResNet architecture was able to decrease the cross-entropy loss by half, which implies that the correct n/γ labels assigned are less likely to be accidental. PSD parameter distributions based on n/γ classification by the ResNet, before and after transforming unseparated pulses using the CycleGAN, show that by enhancing the separation between neutrons and gamma-rays, the transformation helps improve the performance of classifier networks trained on the labelled dataset. The enhancement of neutron and gamma-ray separation by the CycleGAN increased the PSD figure of merit (FOM) by up to 70% in some regions. The results show that, if a given detector achieves clear separation between neutron and gamma-ray pulses in any energy region, such neural network approaches can help lower the energy threshold for the separation and increase the number of neutron and gamma-ray pulses that can be used for further analysis.
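The PSD figure of merit cited above is conventionally defined as the separation between the neutron and gamma-ray peaks divided by the sum of their FWHMs. A minimal sketch under the assumption of Gaussian PSD-parameter distributions (the toy means and widths below are illustrative, not the paper's data):

```python
import numpy as np

def psd_figure_of_merit(psd_neutron, psd_gamma):
    """FOM = |mu_n - mu_g| / (FWHM_n + FWHM_g), with
    FWHM = 2*sqrt(2*ln 2)*sigma for a Gaussian peak."""
    fwhm_factor = 2.0 * np.sqrt(2.0 * np.log(2.0))  # about 2.355
    separation = abs(np.mean(psd_neutron) - np.mean(psd_gamma))
    return separation / (fwhm_factor * (np.std(psd_neutron) + np.std(psd_gamma)))

rng = np.random.default_rng(0)
neutrons = rng.normal(0.30, 0.02, 10_000)   # toy neutron PSD parameters
gammas = rng.normal(0.20, 0.02, 10_000)     # toy gamma-ray PSD parameters
fom = psd_figure_of_merit(neutrons, gammas)
# roughly 0.1 / (2.355 * 0.04), i.e. close to 1.06
```

A FOM above about 1 is commonly read as clean separation, so a 70% FOM gain in a marginal region can move it from unusable to usable.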
Affiliation(s)
- Surafel Woldegiorgis
- Nuclear Engineering Program, University of Florida, Gainesville, FL, 32611, USA.
- Andreas Enqvist
- Nuclear Engineering Program, University of Florida, Gainesville, FL, 32611, USA
- James Baciak
- Nuclear Engineering Program, University of Florida, Gainesville, FL, 32611, USA
|
39
|
Xia X, Feng B, Wang J, Hua Q, Yang Y, Sheng L, Mou Y, Hu W. Deep Learning for Differentiating Benign From Malignant Parotid Lesions on MR Images. Front Oncol 2021; 11:632104. [PMID: 34249680 PMCID: PMC8262843 DOI: 10.3389/fonc.2021.632104] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2020] [Accepted: 06/07/2021] [Indexed: 12/29/2022] Open
Abstract
Purpose/Objective(s) Salivary gland tumors are a rare, histologically heterogeneous group of tumors. The distinction between malignant and benign tumors of the parotid gland is clinically important. This study aims to develop and evaluate a deep-learning network for diagnosing parotid gland tumors via deep learning of MR images. Materials/Methods Two hundred thirty-three patients with parotid gland tumors were enrolled in this study. Histology results were available for all tumors. All patients underwent MRI scans, including T1-weighted, CE-T1-weighted and T2-weighted imaging series. The parotid glands and tumors were segmented on all three MR image series by a radiologist with 10 years of clinical experience. A total of 3791 parotid gland region images were cropped from the MR images. A label (pleomorphic adenoma and Warthin tumor, malignant tumor, or free of tumor), based on histology results, was assigned to each image. To train the deep-learning model, these data were randomly divided into a training dataset (90%, comprising 3035 MR images from 212 patients: 714 pleomorphic adenoma images, 558 Warthin tumor images, 861 malignant tumor images, and 902 images free of tumor) and a validation dataset (10%, comprising 275 images from 21 patients: 57 pleomorphic adenoma images, 36 Warthin tumor images, 93 malignant tumor images, and 89 images free of tumor). A modified ResNet model was developed to classify these images. The input images were resized to 224x224 pixels with four channels (T1-weighted, T2-weighted, and CE-T1-weighted tumor images, plus the parotid gland image). Random image flipping and contrast adjustment were used for data augmentation. The model was trained for 1200 epochs with a learning rate of 1e-6, using the Adam optimizer. The whole training procedure took approximately 2 hours. The program was developed with PyTorch (version 1.2).
Results The model accuracy with the training dataset was 92.94% (95% CI [0.91, 0.93]). The micro-AUC was 0.98. The experimental results showed that the accuracy of the final algorithm in the diagnosis and staging of parotid cancer was 82.18% (95% CI [0.77, 0.86]). The micro-AUC was 0.93. Conclusion The proposed model may be used to assist clinicians in the diagnosis of parotid tumors. However, future larger-scale multicenter studies are required for full validation.
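The key architectural change described in the Methods, a ResNet accepting four input channels instead of three, can be sketched as below. This is an illustrative stand-in, not the authors' code: only the first convolution is shown, and the fourth-channel initialisation (mean of the RGB filters) is an assumption:

```python
import torch
import torch.nn as nn

# A ResNet-style first convolution built for 3-channel RGB input
conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)

# Rebuild it for the 4-channel input (T1, T2, CE-T1 tumor crops plus
# the parotid gland image), reusing the existing filters.
new_conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
with torch.no_grad():
    new_conv1.weight[:, :3] = conv1.weight                            # keep RGB filters
    new_conv1.weight[:, 3:] = conv1.weight.mean(dim=1, keepdim=True)  # init 4th channel

x = torch.randn(2, 4, 224, 224)  # batch of 4-channel 224x224 crops
out = new_conv1(x)               # shape (2, 64, 112, 112)
```

Initialising the extra channel from the existing filters preserves any pretrained weights rather than starting that channel from random noise.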
Affiliation(s)
- Xianwu Xia
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Bin Feng
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Qianjin Hua
- Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Yide Yang
- Department of Infectious Disease, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Liang Sheng
- Department of Radiology, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Yonghua Mou
- Department of Hepatobiliary Surgery, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
|
40
|
Fletcher E, DeCarli C, Fan AP, Knaack A. Convolutional Neural Net Learning Can Achieve Production-Level Brain Segmentation in Structural Magnetic Resonance Imaging. Front Neurosci 2021; 15:683426. [PMID: 34234642 PMCID: PMC8255694 DOI: 10.3389/fnins.2021.683426] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Accepted: 05/27/2021] [Indexed: 01/18/2023] Open
Abstract
Deep learning implementations using convolutional neural nets have recently demonstrated promise in many areas of medical imaging. In this article we lay out the methods by which we have achieved consistently high quality, high throughput computation of intra-cranial segmentation from whole head magnetic resonance images, an essential but typically time-consuming bottleneck for brain image analysis. We refer to this output as “production-level” because it is suitable for routine use in processing pipelines. Training and testing with an extremely large archive of structural images, our segmentation algorithm performs uniformly well over a wide variety of separate national imaging cohorts, giving Dice metric scores exceeding those of other recent deep learning brain extractions. We describe the components involved to achieve this performance, including size, variety and quality of ground truth, and appropriate neural net architecture. We demonstrate the crucial role of appropriately large and varied datasets, suggesting a less prominent role for algorithm development beyond a threshold of capability.
Affiliation(s)
- Evan Fletcher
- Department of Neurology, University of California, Davis, Davis, CA, United States
- Charles DeCarli
- Department of Neurology, University of California, Davis, Davis, CA, United States
- Audrey P Fan
- Department of Neurology, University of California, Davis, Davis, CA, United States; Department of Biomedical Engineering, University of California, Davis, Davis, CA, United States
- Alexander Knaack
- Department of Neurology, University of California, Davis, Davis, CA, United States
|
41
|
Hirsch L, Huang Y, Parra LC. Segmentation of MRI head anatomy using deep volumetric networks and multiple spatial priors. J Med Imaging (Bellingham) 2021; 8:034001. [PMID: 34159222 DOI: 10.1117/1.jmi.8.3.034001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 05/19/2021] [Indexed: 11/14/2022] Open
Abstract
Purpose: Conventional automated segmentation of the head anatomy in magnetic resonance images distinguishes different brain and nonbrain tissues based on image intensities and prior tissue probability maps (TPMs). This works well for normal head anatomies but fails in the presence of unexpected lesions. Deep convolutional neural networks (CNNs) instead leverage spatial patterns and can learn to segment lesions, but they often ignore prior probabilities. Approach: We add three sources of prior information to a three-dimensional (3D) convolutional network: spatial priors via a TPM, morphological priors via conditional random fields, and spatial context via a wider field-of-view at lower resolution. We train and test these networks on 3D images of 43 stroke patients and 4 healthy individuals that have been manually segmented. Results: We demonstrate the benefit of each source of prior information, and we show that the new architecture, which we call the Multiprior network, improves on existing segmentation software, such as SPM, FSL, and DeepMedic, for abnormal anatomies. Comparing the relevance of the different priors, the TPM was found to be most beneficial. The benefit of adding a TPM is generic in that it can boost the performance of established segmentation networks such as DeepMedic and a U-Net. We also provide an out-of-sample validation and clinical application of the approach on an additional 47 patients with disorders of consciousness. We make the code and trained networks freely available. Conclusions: Biomedical images follow imaging protocols that can be leveraged as prior information in deep CNNs to improve performance. The network segmentations match human manual corrections performed in 3D and are comparable in performance to human segmentations obtained from scratch in 2D for abnormal brain anatomies.
Affiliation(s)
- Lukas Hirsch
- City College New York, Department of Biomedical Engineering, New York City, New York, United States
- Yu Huang
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York City, New York, United States
- Lucas C Parra
- City College New York, Department of Biomedical Engineering, New York City, New York, United States
42
Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Mandible Segmentation of Dental CBCT Scans Affected by Metal Artifacts Using Coarse-to-Fine Learning Model. J Pers Med 2021; 11:560. PMID: 34208429. PMCID: PMC8232763. DOI: 10.3390/jpm11060560.
Abstract
Accurate segmentation of the mandible from cone-beam computed tomography (CBCT) scans, which are favored for their low radiation dose and short scanning duration, is an important step in building a personalized 3D digital mandible model for maxillofacial surgery and orthodontic treatment planning. CBCT images, however, exhibit lower contrast and higher levels of noise and artifacts than conventional computed tomography (CT) because of the extremely low radiation dose, which makes automatic mandible segmentation from CBCT data challenging. In this work, we propose a novel coarse-to-fine segmentation framework based on a 3D convolutional neural network and a recurrent SegUnet for mandible segmentation in CBCT scans. Specifically, the segmentation is decomposed into two stages: localization of the mandible-like region by rough segmentation, followed by accurate segmentation of the mandible details. The method was evaluated on a dental CBCT dataset. In addition, we evaluated the proposed method and compared it with state-of-the-art methods on two CT datasets. The experiments indicate that, across these three datasets covering different imaging techniques, the proposed algorithm provides more accurate and robust segmentation results than the state-of-the-art models.
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
43
Qiu B, Guo J, Kraeima J, Glas HH, Zhang W, Borra RJH, Witjes MJH, van Ooijen PMA. Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography. J Pers Med 2021; 11:492. PMID: 34072714. PMCID: PMC8229770. DOI: 10.3390/jpm11060492.
Abstract
Purpose: Classic encoder–decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance the condyles and coronoids, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that can accurately segment these detailed anatomical structures. Methods: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during segmentation, our proposed approach performs mandible segmentation on complete 3D CT scans. The proposed method, RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, enabling recurrent connections between adjacent nodes to retain their connectivity. Each node then functions as a classic EDCNN that segments a single slice of the CT scan. The approach can perform 3D mandible segmentation on sequential data of varying lengths and does not require a large computational cost. The proposed RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. Accuracy was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. Results: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared with state-of-the-art approaches on the PDDCA dataset.
The proposed RCNNSeg generated the most accurate segmentations with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. Conclusions: The proposed RCNNSeg method generated more accurate automated segmentations than those of the other classic EDCNN segmentation techniques in terms of quantitative and qualitative evaluation. The proposed RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.
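The three metrics reported above (DSC, ASD, 95HD) have standard definitions. As an illustrative sketch only (not the authors' code, and brute-force over small voxel sets rather than the distance-transform implementations used in real evaluations), they can be computed like this:

```python
# Illustrative implementations of common segmentation metrics:
# Dice similarity coefficient (DSC), average symmetric surface distance (ASD),
# and 95% Hausdorff distance (95HD), over sets of voxel coordinates.

import math

def dice(a: set, b: set) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def _directed_dists(a, b):
    # Distance from each point of a to its nearest point of b (O(n^2) brute
    # force; practical evaluations run distance transforms on surface voxels).
    return [min(math.dist(p, q) for q in b) for p in a]

def asd(a, b):
    """Average symmetric surface distance."""
    d = _directed_dists(a, b) + _directed_dists(b, a)
    return sum(d) / len(d)

def hd95(a, b):
    """95th percentile of the symmetric nearest-neighbor distances."""
    d = sorted(_directed_dists(a, b) + _directed_dists(b, a))
    return d[min(int(round(0.95 * (len(d) - 1))), len(d) - 1)]

# Toy example: two overlapping 4x4 "segmentations" on a 2D grid.
A = {(x, y) for x in range(4) for y in range(4)}
B = {(x, y) for x in range(1, 5) for y in range(4)}
print(round(dice(A, B), 3))  # 0.75
```

Identical masks give DSC = 1 and zero ASD/95HD; the percentile step is what makes 95HD robust to a few outlier boundary voxels compared with the plain Hausdorff distance.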
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Weichuan Zhang
- Institute for Integrated and Intelligent System, Griffith University, Nathan, QLD 4111, Australia
- CSIRO Data61, Epping, NSW 1710, Australia
- Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
44
Zhang Z, Powell K, Yin C, Cao S, Gonzalez D, Hannawi Y, Zhang P. Brain Atlas Guided Attention U-Net for White Matter Hyperintensity Segmentation. AMIA Jt Summits Transl Sci Proc 2021; 2021:663-671. PMID: 34457182. PMCID: PMC8378613.
Abstract
White matter hyperintensities (WMH) are the most common manifestation of cerebral small vessel disease (cSVD) on brain MRI. Accurate WMH segmentation algorithms are important for determining cSVD burden and its clinical consequences. Most existing WMH segmentation algorithms require both fluid-attenuated inversion recovery (FLAIR) images and T1-weighted images as inputs. However, T1-weighted images are typically not part of the standard clinical scans acquired for patients with acute stroke. In this paper, we propose a novel brain atlas guided attention U-Net (BAGAU-Net) that leverages only FLAIR images together with a spatially registered white matter (WM) brain atlas to yield competitive WMH segmentation performance. Specifically, we designed a dual-path segmentation model with two novel connecting mechanisms, a multi-input attention module (MAM) and an attention fusion module (AFM), to fuse the information from the two paths for accurate results. Experiments on two publicly available datasets show the effectiveness of the proposed BAGAU-Net. With only FLAIR images and the WM brain atlas, BAGAU-Net outperforms the state-of-the-art method that uses T1-weighted images, paving the way for effective development of WMH segmentation. Availability: https://github.com/Ericzhang1/BAGAU-Net.
Affiliation(s)
- Zicong Zhang
- Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA
- Kimerly Powell
- Biomedical Informatics, The Ohio State University, Columbus, Ohio, USA
- Department of Radiology, The Ohio State University, Columbus, Ohio, USA
- Changchang Yin
- Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA
- Shilei Cao
- Tencent Jarvis Lab, Tencent, Shenzhen, China
- Dani Gonzalez
- Biomedical Engineering, The Ohio State University, Columbus, Ohio, USA
- Yousef Hannawi
- Department of Neurology, The Ohio State University, Columbus, Ohio, USA
- Ping Zhang
- Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA
- Biomedical Informatics, The Ohio State University, Columbus, Ohio, USA
45
Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Robust and Accurate Mandible Segmentation on Dental CBCT Scans Affected by Metal Artifacts Using a Prior Shape Model. J Pers Med 2021; 11:364. PMID: 34062762. PMCID: PMC8147374. DOI: 10.3390/jpm11050364.
Abstract
Accurate mandible segmentation is important in maxillofacial surgery to guide clinical diagnosis and treatment and to develop appropriate surgical plans. In particular, cone-beam computed tomography (CBCT) images acquired in the presence of metal parts, such as those used in oral and maxillofacial surgery (OMFS), often suffer from metal artifacts, including weak and blurred boundaries caused by high-attenuation materials and the low radiation dose used in image acquisition. To overcome this problem, this paper proposes a novel deep learning-based approach (SASeg) for automated mandible segmentation that incorporates overall mandible anatomical knowledge. SASeg utilizes a prior shape feature extractor (PSFE) module based on a mean mandible shape, with recurrent connections that maintain the continuity of the mandible structure. The effectiveness of the proposed network is demonstrated on a dental CBCT dataset from orthodontic treatment containing 59 patients. The experiments show that SASeg improves prediction accuracy on a dental CBCT dataset corrupted by metal artifacts. In addition, the experimental results on the PDDCA dataset demonstrate that our proposed SASeg achieves better segmentation performance than state-of-the-art mandible segmentation models.
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
46
Wang Z, Xiao Y, Weng F, Li X, Zhu D, Lu F, Liu X, Hou M, Meng Y. R-JaunLab: Automatic Multi-Class Recognition of Jaundice on Photos of Subjects with Region Annotation Networks. J Digit Imaging 2021; 34:337-350. PMID: 33634415. DOI: 10.1007/s10278-021-00432-7.
Abstract
Jaundice occurs as a symptom of various diseases, such as hepatitis and cancers of the liver, gallbladder, or pancreas. Clinical measurement with special equipment is therefore the common method used to determine the total serum bilirubin level in patients. Fully automated multi-class recognition of jaundice presents two key difficulties: (1) multi-class recognition is substantially harder than the binary case; and (2) high-resolution photos of subjects exhibit extensive individual variability, strong similarity between healthy controls and occult jaundice, and broadly inhomogeneous color distribution. We introduce a novel approach for multi-class recognition of jaundice that distinguishes occult jaundice, obvious jaundice, and healthy controls. First, a region annotation network is developed and trained to propose eye candidates. Subsequently, an efficient jaundice recognizer is proposed to learn similarity, context, localization, and global features from photos of subjects. Finally, both networks are unified through a shared convolutional layer. In a comparative study, the structured model yielded a significant performance boost (mean categorical accuracy 91.38%) over an independent human observer. Our approach exceeded a state-of-the-art convolutional neural network (96.85% and 90.06% on the training and validation subsets, respectively) and achieved a remarkable mean categorical accuracy of 95.33% on the testing subset, performing better than physicians. This work demonstrates the potential of our proposal to bring an efficient tool for multi-class recognition of jaundice into clinical practice.
Affiliation(s)
- Zheng Wang
- School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
- Science and Engineering School, Hunan First Normal University, Changsha, 410205, China
- Ying Xiao
- Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
- Futian Weng
- School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
- Xiaojun Li
- Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
- Danhua Zhu
- Department of Gastroenterology, Hunan Provincial People's Hospital, Changsha, 410002, China
- Fanggen Lu
- The Second Xiangya Hospital, Central South University, Changsha, 410083, China
- Xiaowei Liu
- Gastroenterology Department of Xiangya Hospital, Central South University, Changsha, 410083, China
- Muzhou Hou
- School of Mathematics and Statistics, Central South University, Changsha, Hunan, 410083, China
- Yu Meng
- Department of Gastroenterology and Hepatology, Shenzhen University General Hospital, Shenzhen, 518055, China
47
Martin M, Sciolla B, Sdika M, Quétin P, Delachartre P. Automatic segmentation and location learning of neonatal cerebral ventricles in 3D ultrasound data combining CNN and CPPN. Comput Biol Med 2021; 131:104268. PMID: 33639351. DOI: 10.1016/j.compbiomed.2021.104268.
Abstract
Preterm neonates are highly likely to suffer from ventriculomegaly, a dilation of the cerebral ventricular system (CVS). This condition can develop into life-threatening hydrocephalus and is correlated with future neurodevelopmental impairments. Consequently, it must be detected and monitored by physicians. In clinical routine, manual 2D measurements are performed on 2D ultrasound (US) images to estimate the CVS volume, but this practice is imprecise due to the unavailability of 3D information. A way to tackle this problem would be to develop automatic CVS segmentation algorithms for 3D US data. In this paper, we investigate the potential of 2D and 3D convolutional neural networks (CNN) to solve this complex task and propose to use a compositional pattern producing network (CPPN) to enable fully convolutional networks (FCN) to learn CVS location. Our database was composed of 25 3D US volumes collected from 21 preterm neonates at the age of 35.8±1.6 gestational weeks. We found that the CPPN makes it possible to encode CVS location, which increases the accuracy of the CNNs when they have few layers. Accuracy of the 2D and 3D FCNs reached intraobserver variability (IOV) in the case of dilated ventricles, with Dice of 0.893±0.008 and 0.886±0.004 respectively (IOV = 0.898±0.008) and with volume errors of 0.45±0.42 cm3 and 0.36±0.24 cm3 respectively (IOV = 0.41±0.05 cm3). 3D FCNs were more accurate than 2D FCNs in the case of normal ventricles, with Dice of 0.797±0.041 against 0.776±0.038 (IOV = 0.816±0.009) and volume errors of 0.35±0.29 cm3 against 0.35±0.24 cm3 (IOV = 0.2±0.11 cm3). The best segmentation time for volumes of size 320×320×320 was 3.5±0.2 s, obtained by a 2D FCN.
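The underlying idea of letting a fully convolutional network learn anatomical location can be sketched as appending normalized position maps to the input. This is a simplified stand-in (fixed coordinate grids, in the spirit of but not identical to the learned CPPN features described in the paper):

```python
# Hypothetical sketch: give a convolutional network access to spatial
# location by stacking normalized coordinate channels onto the intensity
# image. Assumes a 2D image with h, w > 1, channels-first layout.

def coord_channels(h, w):
    """Two h-by-w channels holding normalized (row, col) positions in [0, 1]."""
    ys = [[i / (h - 1)] * w for i in range(h)]
    xs = [[j / (w - 1) for j in range(w)] for _ in range(h)]
    return ys, xs

def with_coords(image):
    """Stack an h-by-w intensity image with its two coordinate channels."""
    h, w = len(image), len(image[0])
    ys, xs = coord_channels(h, w)
    return [image, ys, xs]  # 3-channel input for the network
```

Because convolutions are translation-invariant, such extra channels are one way a shallow FCN can distinguish, say, a ventricle-shaped structure near the midline from a similar-looking structure elsewhere.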
Affiliation(s)
- Matthieu Martin
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621, Lyon, France
- Bruno Sciolla
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621, Lyon, France
- Michaël Sdika
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621, Lyon, France
- Philippe Quétin
- Philippe Delachartre
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69621, Lyon, France
48
Bo ZH, Qiao H, Tian C, Guo Y, Li W, Liang T, Li D, Liao D, Zeng X, Mei L, Shi T, Wu B, Huang C, Liu L, Jin C, Guo Q, Yong JH, Xu F, Zhang T, Wang R, Dai Q. Toward human intervention-free clinical diagnosis of intracranial aneurysm via deep neural network. Patterns (N Y) 2021; 2:100197. PMID: 33659913. PMCID: PMC7892358. DOI: 10.1016/j.patter.2020.100197.
Abstract
Intracranial aneurysm (IA) is an enormous threat to human health, often resulting in nontraumatic subarachnoid hemorrhage or a dismal prognosis. Diagnosing IAs on commonly used computed tomographic angiography (CTA) examinations remains laborious and time consuming, leading to error-prone results in clinical practice, especially for small targets. In this study, we propose a fully automatic deep-learning model for IA segmentation that can be applied to CTA images. Our model, called Global Localization-based IA Network (GLIA-Net), incorporates a global localization prior and generates fine-grained three-dimensional segmentations. GLIA-Net is trained and evaluated on a large internal dataset (1,338 scans from six institutions) and two external datasets. Evaluations show that our model exhibits good tolerance to different settings and achieves superior performance to other models. A clinical experiment further demonstrates the clinical utility of our technique, which helps radiologists in the diagnosis of IAs.
Highlights:
- GLIA-Net is a deep learning method for the clinical diagnosis of IAs
- It can be applied directly to CTA images without any laborious preprocessing
- A clinical study demonstrates its effectiveness in assisting diagnosis
- An IA dataset of 1,338 CTA cases from six institutions is publicly released
Intracranial aneurysms (IAs) are enormous threats to human health with a prevalence of approximately 4%. The rupture of IAs usually causes death or severe damage to the patients. To enhance the clinical diagnosis of IAs, we present a deep learning model (GLIA-Net) for IA detection and segmentation without laborious human intervention, which achieves superior diagnostic performance validated by quantitative evaluations as well as a sophisticated clinical study. We anticipate that the publicly released data and the artificial intelligence technique would help to transform the clinical diagnostics and precision treatments of cerebrovascular diseases. They may also revolutionize the landscape of healthcare and biomedical research in the future.
Collapse
Affiliation(s)
- Zi-Hao Bo
- BNRist and School of Software, Tsinghua University, Beijing, Beijing 100084, China
| | - Hui Qiao
- BNRist and Department of Automation, Tsinghua University, Beijing, Beijing 100084, China.,Institute of Brain and Cognitive Sciences, Tsinghua University, Beijing, Beijing 100084, China
| | - Chong Tian
- Department of Radiology and Guizhou Provincial Key Laboratory of Intelligent Medical Image Analysis and Precision Diagnosis, Guizhou Provincial People's Hospital, Guiyang, Guizhou 550002, China
| | - Yuchen Guo
- BNRist and Department of Automation, Tsinghua University, Beijing, Beijing 100084, China
| | - Wuchao Li
- Department of Radiology and Guizhou Provincial Key Laboratory of Intelligent Medical Image Analysis and Precision Diagnosis, Guizhou Provincial People's Hospital, Guiyang, Guizhou 550002, China
| | - Tiantian Liang
- Department of Radiology and Guizhou Provincial Key Laboratory of Intelligent Medical Image Analysis and Precision Diagnosis, Guizhou Provincial People's Hospital, Guiyang, Guizhou 550002, China
| | - Dongxue Li
- Department of Radiology and Guizhou Provincial Key Laboratory of Intelligent Medical Image Analysis and Precision Diagnosis, Guizhou Provincial People's Hospital, Guiyang, Guizhou 550002, China
- Dan Liao
- Department of Radiology and Guizhou Provincial Key Laboratory of Intelligent Medical Image Analysis and Precision Diagnosis, Guizhou Provincial People's Hospital, Guiyang, Guizhou 550002, China
- Xianchun Zeng
- Department of Radiology and Guizhou Provincial Key Laboratory of Intelligent Medical Image Analysis and Precision Diagnosis, Guizhou Provincial People's Hospital, Guiyang, Guizhou 550002, China
- Leilei Mei
- Department of Radiology, Affiliated Hospital of Zunyi Medical University, Zunyi, Guizhou 563000, China
- Tianliang Shi
- Department of Radiology, Tongren Municipal People's Hospital, Tongren, Guizhou 554300, China
- Bo Wu
- Department of Radiology, Tongren Municipal People's Hospital, Tongren, Guizhou 554300, China
- Chao Huang
- Department of Radiology, Tongren Municipal People's Hospital, Tongren, Guizhou 554300, China
- Lu Liu
- Department of Radiology, The Second People's Hospital of Guiyang, Guiyang, Guizhou 550002, China
- Can Jin
- Department of Radiology, The Second People's Hospital of Guiyang, Guiyang, Guizhou 550002, China
- Qiping Guo
- Department of Radiology, Xingyi Municipal People's Hospital, Xingyi, Guizhou 562400, China
- Jun-Hai Yong
- BNRist and School of Software, Tsinghua University, Beijing 100084, China
- Feng Xu
- BNRist and School of Software, Tsinghua University, Beijing 100084, China; Institute of Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China
- Tijiang Zhang
- Department of Radiology, Affiliated Hospital of Zunyi Medical University, Zunyi, Guizhou 563000, China
- Rongpin Wang
- Department of Radiology and Guizhou Provincial Key Laboratory of Intelligent Medical Image Analysis and Precision Diagnosis, Guizhou Provincial People's Hospital, Guiyang, Guizhou 550002, China
- Qionghai Dai
- BNRist and Department of Automation, Tsinghua University, Beijing 100084, China; Institute of Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China
49
Gryska E, Schneiderman J, Björkman-Burtscher I, Heckemann RA. Automatic brain lesion segmentation on standard magnetic resonance images: a scoping review. BMJ Open 2021; 11:e042660. [PMID: 33514580 PMCID: PMC7849889 DOI: 10.1136/bmjopen-2020-042660] [Received: 07/11/2020] [Revised: 01/09/2021] [Accepted: 01/12/2021] [Indexed: 12/11/2022]
Abstract
OBJECTIVES Medical image analysis practices face challenges that can potentially be addressed with algorithm-based segmentation tools. In this study, we map the field of automatic MR brain lesion segmentation to understand the clinical applicability of prevalent methods and study designs, as well as challenges and limitations in the field. DESIGN Scoping review. SETTING Three databases (PubMed, IEEE Xplore and Scopus) were searched with tailored queries. Studies were included based on predefined criteria. Emerging themes during consecutive title, abstract, methods and whole-text screening were identified. The full-text analysis focused on materials, preprocessing, performance evaluation and comparison. RESULTS Out of 2990 unique articles identified through the search, 441 articles met the eligibility criteria, with an estimated growth rate of 10% per year. We present a general overview and trends in the field with regard to publication sources, segmentation principles used and types of lesions. Algorithms are predominantly evaluated by measuring the agreement of segmentation results with a trusted reference. Few articles describe measures of clinical validity. CONCLUSIONS The observed reporting practices leave room for improvement with a view to studying replication, method comparison and clinical applicability. To promote this improvement, we propose a list of recommendations for future studies in the field.
Affiliation(s)
- Emilia Gryska
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
- Justin Schneiderman
- Sektionen för klinisk neurovetenskap, Goteborgs Universitet Institutionen for Neurovetenskap och fysiologi, Goteborg, Sweden
- Rolf A Heckemann
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
50
Automatic segmentation of white matter hyperintensities from brain magnetic resonance images in the era of deep learning and big data - A systematic review. Comput Med Imaging Graph 2021; 88:101867. [PMID: 33508567 DOI: 10.1016/j.compmedimag.2021.101867] [Received: 10/09/2020] [Revised: 12/23/2020] [Accepted: 12/31/2020] [Indexed: 11/20/2022]
Abstract
BACKGROUND White matter hyperintensities (WMH), of presumed vascular origin, are visible and quantifiable neuroradiological markers of brain parenchymal change. These changes may range from damage secondary to inflammation and other neurological conditions, through to healthy ageing. Fully automatic WMH quantification methods are promising, but traditional semi-automatic methods still seem to be preferred in clinical research. We systematically reviewed the literature for fully automatic methods developed in the last five years, to assess what are considered state-of-the-art techniques, as well as trends in the analysis of WMH of presumed vascular origin. METHOD We registered the systematic review protocol with the International Prospective Register of Systematic Reviews (PROSPERO), registration number CRD42019132200. We conducted the search for fully automatic methods developed from 2015 to July 2020 on Medline, ScienceDirect, IEEE Xplore, and Web of Science. We assessed risk of bias and applicability of the studies using QUADAS-2. RESULTS The search yielded 2327 papers after removing 104 duplicates. After screening titles, abstracts and full text, 37 were selected for detailed analysis. Of these, 16 proposed a supervised segmentation method, 10 an unsupervised segmentation method, and 11 a deep learning segmentation method. Average DSC values ranged from 0.538 to 0.91, with the highest value obtained by an unsupervised segmentation method. Only four studies validated their method in longitudinal samples, and eight performed an additional validation using clinical parameters. Only 8/37 studies made their methods available in public repositories. CONCLUSIONS We found no evidence that favours deep learning methods over the more established k-NN, linear regression and unsupervised methods in this task. Data and code availability, bias in study design and ground truth generation influence the wider validation and applicability of these methods in clinical research.
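The abstracts in this list compare segmentation methods by the Dice similarity coefficient (DSC), defined as twice the overlap of two masks divided by their combined size. As a point of reference for the reported values (e.g. 0.538 to 0.91 above), here is a minimal illustrative sketch of the metric over binary masks represented as sets of voxel indices; this is not code from any of the cited studies:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks,
    each given as an iterable of voxel indices.

    DSC = 2 * |A intersect B| / (|A| + |B|), ranging from 0.0 (no overlap)
    to 1.0 (identical masks).
    """
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both masks empty: conventionally treated as perfect agreement
    return 2 * len(pred & truth) / (len(pred) + len(truth))
```

For example, masks `{1, 2, 3}` and `{2, 3, 4}` share two voxels out of six total, giving a DSC of 2/3. In practice, toolkits compute the same quantity over voxel arrays rather than index sets, but the formula is identical.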