1. Matsui Y, Ueda D, Fujita S, Fushimi Y, Tsuboyama T, Kamagata K, Ito R, Yanagawa M, Yamada A, Kawamura M, Nakaura T, Fujima N, Nozaki T, Tatsugami F, Fujioka T, Hirata K, Naganawa S. Applications of artificial intelligence in interventional oncology: An up-to-date review of the literature. Jpn J Radiol 2024. PMID: 39356439. DOI: 10.1007/s11604-024-01668-3.
Abstract
Interventional oncology provides image-guided therapies, including transarterial tumor embolization and percutaneous tumor ablation, for malignant tumors in a minimally invasive manner. As in other medical fields, the application of artificial intelligence (AI) in interventional oncology has garnered significant attention. This narrative review describes the current state of AI applications in interventional oncology based on recent literature. A literature search revealed a rapid recent increase in the number of studies relevant to this topic. Investigators have attempted to use AI for various tasks, including automatic segmentation of organs, tumors, and treatment areas; treatment simulation; improvement of intraprocedural image quality; prediction of treatment outcomes; and detection of post-treatment recurrence. Among these, AI-based prediction of treatment outcomes has been studied most extensively. Various deep and conventional machine learning algorithms have been proposed for these tasks, and radiomics features have often been incorporated into prediction and detection models. The current literature suggests that AI is potentially useful in many aspects of interventional oncology, from treatment planning to post-treatment follow-up. However, most AI-based methods discussed in this review are still at the research stage, and few have been implemented in clinical practice. To achieve widespread adoption of AI technologies in interventional oncology procedures, further research on their reliability and clinical utility is necessary. Nevertheless, given the rapid pace of research in this field, various AI technologies are likely to be integrated into interventional oncology practice in the near future.
Affiliation(s)
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-Cho, Kita-Ku, Okayama, 700-8558, Japan.
- Daiju Ueda
- Department of Artificial Intelligence, Graduate School of Medicine, Osaka Metropolitan University, Abeno-Ku, Osaka, Japan
- Shohei Fujita
- Department of Radiology, Graduate School of Medicine and Faculty of Medicine, The University of Tokyo, Bunkyo-Ku, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyo-Ku, Kyoto, Japan
- Takahiro Tsuboyama
- Department of Radiology, Kobe University Graduate School of Medicine, Chuo-Ku, Kobe, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-Ku, Tokyo, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Showa-Ku, Nagoya, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita-City, Osaka, Japan
- Akira Yamada
- Medical Data Science Course, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Showa-Ku, Nagoya, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Chuo-Ku, Kumamoto, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Kita-Ku, Sapporo, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Shinjuku-Ku, Tokyo, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Minami-Ku, Hiroshima, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo-Ku, Tokyo, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita-Ku, Sapporo, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Showa-Ku, Nagoya, Japan
2. Le QA, Pham XL, van Walsum T, Dao VH, Le TL, Franklin D, Moelker A, Le VH, Trung NL, Luu MH. Precise ablation zone segmentation on CT images after liver cancer ablation using semi-automatic CNN-based segmentation. Med Phys 2024. PMID: 39250658. DOI: 10.1002/mp.17373.
Abstract
BACKGROUND Ablation zone segmentation in contrast-enhanced computed tomography (CECT) images enables quantitative assessment of treatment success after ablation of liver lesions. However, fully automatic ablation zone segmentation in CT images remains challenging, suffering from low accuracy and requiring time-consuming manual refinement of incorrectly segmented regions. PURPOSE In this study, we developed a semi-automatic technique to address these drawbacks and improve the accuracy of liver ablation zone segmentation in CT images. METHODS Our approach combines a CNN-based automatic segmentation method with an interactive CNN-based segmentation method. First, automatic segmentation is applied for coarse ablation zone segmentation in the whole CT image. Human experts then visually validate the segmentation results. If there are errors in the coarse segmentation, local corrections can be performed on each slice via the interactive CNN-based segmentation method. The models were trained and the proposed method was evaluated using two internal datasets of post-interventional CECT images (n1 = 22, n2 = 145; 62 patients in total) and further tested on an external benchmark dataset (n3 = 12; 10 patients). RESULTS Accuracy was evaluated using the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), Hausdorff distance (HD), and volume difference (VD). The proposed approach obtained mean DSC, ASSD, HD, and VD scores of 94.0%, 0.4 mm, 8.4 mm, and 0.02, respectively, on the internal datasets, and 87.8%, 0.9 mm, 9.5 mm, and -0.03, respectively, on the benchmark dataset.
We also compared the performance of the proposed approach with that of five well-known segmentation methods; the proposed semi-automatic method achieved state-of-the-art ablation segmentation accuracy, and on average only 2 min were required to correct a segmentation. Furthermore, the accuracy of the proposed method on the benchmark dataset was comparable to that of manual segmentation by human experts (p = 0.55, t-test). CONCLUSIONS The proposed semi-automatic CNN-based segmentation method can effectively segment ablation zones, increasing the value of CECT for assessment of treatment success. For reproducibility, the trained models, source code, and demonstration tool are publicly available at https://github.com/lqanh11/Interactive_AblationZone_Segmentation.
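Two of the metrics reported in this entry, DSC and VD, have simple closed forms. A minimal pure-Python sketch on toy binary masks (illustrative only, not the study's evaluation code):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are considered a perfect match.
    return 2.0 * inter / total if total else 1.0

def volume_difference(pred, truth):
    """Signed relative volume difference: (V_pred - V_truth) / V_truth."""
    vp, vt = sum(pred), sum(truth)
    return (vp - vt) / vt

pred  = [0, 1, 1, 1, 0, 0]   # toy predicted ablation-zone mask
truth = [0, 1, 1, 0, 0, 1]   # toy ground-truth mask
print(round(dice(pred, truth), 3))             # 0.667
print(round(volume_difference(pred, truth), 3))  # 0.0
```

A signed VD near zero, as reported here, means the predicted and reference volumes match even if their overlap (captured by DSC) is imperfect.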
Affiliation(s)
- Quoc Anh Le
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam
- Xuan Loc Pham
- FET, VNU University of Engineering and Technology, Hanoi, Vietnam
- Theo van Walsum
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Viet Hang Dao
- Internal Medicine Faculty, Hanoi Medical University, Hanoi, Vietnam
- The Institute of Gastroenterology and Hepatology, Hanoi, Vietnam
- Tuan Linh Le
- Diagnostic Imaging and Interventional Radiology Center, Hanoi Medical University Hospital, Hanoi, Vietnam
- Daniel Franklin
- School of Electrical and Data Engineering, University of Technology Sydney, Sydney, Australia
- Adriaan Moelker
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Vu Ha Le
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam
- FET, VNU University of Engineering and Technology, Hanoi, Vietnam
- Nguyen Linh Trung
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam
- Manh Ha Luu
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam
- FET, VNU University of Engineering and Technology, Hanoi, Vietnam
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
3. Zossou VBS, Rodrigue Gnangnon FH, Biaou O, de Vathaire F, Allodji RS, Ezin EC. Automatic Diagnosis of Hepatocellular Carcinoma and Metastases Based on Computed Tomography Images. J Imaging Inform Med 2024. PMID: 39227538. DOI: 10.1007/s10278-024-01192-w.
Abstract
Liver cancer, a leading cause of cancer mortality, is often diagnosed by analyzing grayscale variations in liver tissue across computed tomography (CT) images. However, the intensities of different tissues can be very similar, making it difficult for radiologists to visually distinguish hepatocellular carcinoma (HCC) from metastases. Accurately differentiating these two liver cancers is crucial for management and prevention strategies. This study proposes an automated system using a convolutional neural network (CNN) to improve diagnostic accuracy in detecting HCC, metastases, and healthy liver tissue. The system combines automatic segmentation and classification: liver lesions are segmented with a residual attention U-Net, and a 9-layer CNN classifier takes as input the segmentation results combined with the original images. The dataset included 300 patients, with 223 used to develop the segmentation model and 77 to test it. These 77 patients also served as inputs for the classification model, comprising 20 HCC cases, 27 with metastases, and 30 healthy livers. The system achieved a mean Dice score of 87.65% in segmentation and a mean accuracy of 93.97% in classification, both in the test phase. The proposed method is a preliminary study with great potential for helping radiologists diagnose liver cancers.
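The classifier input described above, the segmentation result combined with the original image, is commonly realized by pairing each pixel with its predicted label, i.e., stacking mask and image as channels. A hypothetical illustration of that combination step (the paper does not specify its exact mechanism):

```python
def stack_channels(image, mask):
    """Pair each pixel intensity with its segmentation label, producing a
    2-channel input for a downstream classifier (hypothetical illustration)."""
    return [[(px, lb) for px, lb in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

image = [[10, 200], [30, 180]]   # toy grayscale CT patch
mask  = [[0, 1], [0, 1]]         # toy lesion mask from a segmentation model
print(stack_channels(image, mask))  # [[(10, 0), (200, 1)], [(30, 0), (180, 1)]]
```

Feeding the mask alongside the raw intensities lets the classifier focus on the lesion region while retaining surrounding context.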
Affiliation(s)
- Vincent-Béni Sèna Zossou
- Université Paris-Saclay, UVSQ, Univ. Paris-Sud, CESP, Équipe Radiation Epidemiology, 94805, Villejuif, France.
- Centre de recherche en épidémiologie et santé des populations (CESP), U1018, Institut national de la santé et de la recherche médicale (INSERM), 94805, Villejuif, France.
- Department of Clinical Research, Radiation Epidemiology Team, Gustave Roussy, 94805, Villejuif, France.
- Ecole Doctorale Sciences de l'Ingénieur, Université d'Abomey-Calavi, BP 526, Abomey-Calavi, Benin.
- Olivier Biaou
- Faculté des Sciences de la Santé, Université d'Abomey-Calavi, BP 188, Cotonou, Benin
- Department of Radiology, CNHU-HKM, 1213, Cotonou, Benin
- Florent de Vathaire
- Université Paris-Saclay, UVSQ, Univ. Paris-Sud, CESP, Équipe Radiation Epidemiology, 94805, Villejuif, France
- Centre de recherche en épidémiologie et santé des populations (CESP), U1018, Institut national de la santé et de la recherche médicale (INSERM), 94805, Villejuif, France
- Department of Clinical Research, Radiation Epidemiology Team, Gustave Roussy, 94805, Villejuif, France
- Rodrigue S Allodji
- Université Paris-Saclay, UVSQ, Univ. Paris-Sud, CESP, Équipe Radiation Epidemiology, 94805, Villejuif, France
- Centre de recherche en épidémiologie et santé des populations (CESP), U1018, Institut national de la santé et de la recherche médicale (INSERM), 94805, Villejuif, France
- Department of Clinical Research, Radiation Epidemiology Team, Gustave Roussy, 94805, Villejuif, France
- Eugène C Ezin
- Institut de Formation et de Recherche en Informatique, Université d'Abomey-Calavi, BP 526, Cotonou, Benin
- Institut de Mathématiques et de Sciences Physiques, Université d'Abomey-Calavi, 613, Dangbo, Benin
4. Siami M, Barszcz T, Wodecki J, Zimroz R. Semantic segmentation of thermal defects in belt conveyor idlers using thermal image augmentation and U-Net-based convolutional neural networks. Sci Rep 2024;14:5748. PMID: 38459162. PMCID: PMC10923815. DOI: 10.1038/s41598-024-55864-2.
Abstract
The belt conveyor (BC) is the main means of horizontal transportation of bulk materials at mining sites. A sudden fault in a BC module may cause unexpected stops in production lines. With the increasing use of inspection mobile robots for condition monitoring (CM) of industrial infrastructure in hazardous environments, in this article we introduce an image processing pipeline for automatic segmentation of thermal defects in thermal images of BC idlers captured by a mobile robot. This work is motivated by the fact that monitoring idler temperature is an important task for preventing sudden breakdowns in BC networks. We compared the performance of three different U-Net-based convolutional neural network architectures for the identification of thermal anomalies using a small number of hand-labeled thermal images. Experiments on the test dataset showed that the attention residual U-Net with binary cross-entropy as the loss function handled the semantic segmentation problem better than our previous approach and the other U-Net variants studied.
Affiliation(s)
- Mohammad Siami
- AMC Vibro Sp. z o.o., Pilotow 2e, 31-462, Kraków, Poland.
- Tomasz Barszcz
- Faculty of Mechanical Engineering and Robotics, AGH University, Al. Mickiewicza 30, 30-059, Kraków, Poland
- Jacek Wodecki
- Faculty of Geoengineering, Mining and Geology, Wroclaw University of Science and Technology, Na Grobli 15, 50-421, Wroclaw, Poland
- Radoslaw Zimroz
- Faculty of Geoengineering, Mining and Geology, Wroclaw University of Science and Technology, Na Grobli 15, 50-421, Wroclaw, Poland
5. Li R, An C, Wang S, Wang G, Zhao L, Yu Y, Wang L. A heuristic method for rapid and automatic radiofrequency ablation planning of liver tumors. Int J Comput Assist Radiol Surg 2023;18:2213-2221. PMID: 37145252. DOI: 10.1007/s11548-023-02921-2.
Abstract
PURPOSE Preprocedural planning is a key step in radiofrequency ablation (RFA) treatment of liver tumors. It is a complex task with multiple constraints that relies heavily on the personal experience of interventional radiologists, and existing optimization-based automatic RFA planning methods are very time-consuming. In this paper, we aim to develop a heuristic RFA planning method that rapidly and automatically produces a clinically acceptable RFA plan. METHODS First, the insertion direction is heuristically initialized based on the tumor's long axis. The 3D RFA planning problem is then divided into insertion path planning and ablation position planning, which are further reduced to 2D by projection along two orthogonal directions. A heuristic algorithm based on regular arrangement and step-wise adjustment is proposed to solve these 2D planning tasks. Experiments were conducted on multicenter data from patients with liver tumors of different sizes and shapes to evaluate the proposed method. RESULTS The proposed method automatically generated clinically acceptable RFA plans within 3 min for all cases in the test set and the clinical validation set. All RFA plans achieved 100% treatment zone coverage without damaging vital organs. Compared with an optimization-based method, the proposed method reduced planning time by a factor of several dozen while generating RFA plans of similar ablation efficiency. CONCLUSION The proposed method demonstrates a new way to rapidly and automatically generate clinically acceptable RFA plans under multiple clinical constraints. Its plans are consistent with the actual clinical plans in almost all cases, demonstrating the effectiveness of the method and its potential to reduce the burden on clinicians.
Affiliation(s)
- Ruikun Li
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Chengyang An
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Guisheng Wang
- Department of Radiology, Third Medical Centre, Chinese PLA General Hospital, Beijing, 100036, China
- Lifeng Zhao
- Department of Radiology, Daqing Longnan Hospital, Daqing, 163453, China
- Yizhou Yu
- Deepwise AI Lab, Beijing, 100080, China
- Lisheng Wang
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
6. Radiya K, Joakimsen HL, Mikalsen KØ, Aahlin EK, Lindsetmo RO, Mortensen KE. Performance and clinical applicability of machine learning in liver computed tomography imaging: a systematic review. Eur Radiol 2023;33:6689-6717. PMID: 37171491. PMCID: PMC10511359. DOI: 10.1007/s00330-023-09609-w.
Abstract
OBJECTIVES Machine learning (ML) for medical imaging is emerging for several organs and image modalities. Our objectives were to provide clinicians with an overview of this field by answering the following questions: (1) How is ML applied in liver computed tomography (CT) imaging? (2) How well do ML systems perform in liver CT imaging? (3) What are the clinical applications of ML in liver CT imaging? METHODS A systematic review was carried out according to the guidelines of the PRISMA-P statement. The search string focused on studies containing content relating to artificial intelligence, liver, and computed tomography. RESULTS One hundred ninety-one studies were included. In the majority of studies, ML was applied to CT liver imaging as image analysis without clinician intervention, whereas newer studies combine ML methods with clinician input. Several models were documented to perform very accurately on reliable but small datasets. Most models identified were deep learning-based, mainly using convolutional neural networks. Our review identified many potential clinical applications of ML in CT liver imaging, including segmentation and classification of the liver and its lesions, segmentation of vascular structures inside the liver, fibrosis and cirrhosis staging, metastasis prediction, and evaluation of chemotherapy. CONCLUSION Several studies attempted to provide transparent results for their models. To make such models suitable for clinical application, prospective clinical validation studies are urgently needed; computer scientists and engineers should seek to cooperate with health professionals to achieve this. KEY POINTS • ML shows great potential for CT liver imaging tasks such as pixel-wise segmentation and classification of the liver and liver lesions, fibrosis staging, metastasis prediction, and retrieval of relevant liver lesions from similar cases of other patients.
• Although result reporting is not standardized, many studies have attempted to present transparent results so that the performance of ML methods can be interpreted from the literature. • Prospective studies for clinical validation of ML methods are urgently needed, preferably carried out in cooperation between clinicians and computer scientists.
Affiliation(s)
- Keyur Radiya
- Department of Gastroenterological Surgery at University Hospital of North Norway (UNN), Tromso, Norway.
- Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway.
- Henrik Lykke Joakimsen
- Institute of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
- Centre for Clinical Artificial Intelligence (SPKI), University Hospital of North Norway, Tromso, Norway
- Karl Øyvind Mikalsen
- Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
- Centre for Clinical Artificial Intelligence (SPKI), University Hospital of North Norway, Tromso, Norway
- UiT Machine Learning Group, Department of Physics and Technology, UiT the Arctic University of Norway, Tromso, Norway
- Eirik Kjus Aahlin
- Department of Gastroenterological Surgery at University Hospital of North Norway (UNN), Tromso, Norway
- Rolv-Ole Lindsetmo
- Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
- Head Clinic of Surgery, Oncology and Women Health, University Hospital of North Norway, Tromso, Norway
- Kim Erlend Mortensen
- Department of Gastroenterological Surgery at University Hospital of North Norway (UNN), Tromso, Norway
- Department of Clinical Medicine, UiT The Arctic University of Norway, Tromso, Norway
7. A Lightweight Convolutional Neural Network Model for Liver Segmentation in Medical Diagnosis. Comput Intell Neurosci 2022;2022:7954333. PMID: 35755754. PMCID: PMC9225858. DOI: 10.1155/2022/7954333.
Abstract
Liver segmentation and recognition from computed tomography (CT) images is an active topic in image processing that is helpful for doctors and practitioners. Many current deep learning methods for liver segmentation take a long time to train, which makes the task challenging and limits it to larger hardware resources. In this research, we propose a very lightweight convolutional neural network (CNN) to extract the liver region from CT scan images. The suggested CNN consists of 3 convolutional and 2 fully connected layers, with softmax used to discriminate the liver from the background. Weights are initialized from a random Gaussian distribution, which achieves a distance-preserving embedding of the information. The proposed network is called Ga-CNN (Gaussian-weight initialization of CNN). Experiments are performed on three benchmark datasets: MICCAI SLiver'07, 3Dircadb01, and LiTS17. Experimental results show that the proposed method performs well on each benchmark dataset.
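The Gaussian weight initialization named above can be sketched in a few lines. The standard deviation and layer sizes below are assumed values for illustration; the paper does not state them here:

```python
import random

def gaussian_init(fan_in, fan_out, std=0.01, seed=0):
    """Initialize a fully connected weight matrix from a zero-mean Gaussian.
    std=0.01 is an assumed value, not taken from the paper."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [[rng.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

w = gaussian_init(3, 2)
print(len(w), len(w[0]))  # 3 2
```

Small zero-mean Gaussian weights keep early-layer activations in a near-linear regime at the start of training, which is one common rationale for this initialization scheme.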
8. Mahmood U, Bates DDB, Erdi YE, Mannelli L, Corrias G, Kanan C. Deep Learning and Domain-Specific Knowledge to Segment the Liver from Synthetic Dual Energy CT Iodine Scans. Diagnostics (Basel) 2022;12:672. PMID: 35328225. PMCID: PMC8947702. DOI: 10.3390/diagnostics12030672.
Abstract
We map single-energy CT (SECT) scans to synthetic dual-energy CT (synth-DECT) material density iodine (MDI) scans using deep learning (DL) and demonstrate their value for liver segmentation. A 2D pix2pix (P2P) network was trained on 100 abdominal DECT scans to infer synth-DECT MDI scans from SECT scans. The source and target domains were paired DECT monochromatic 70 keV and MDI scans. The trained P2P algorithm then transformed 140 public SECT scans to synth-DECT scans. We split 131 scans into 60% train, 20% tune, and 20% held-out test sets to train four existing liver segmentation frameworks. The remaining nine low-dose SECT scans tested system generalization. Segmentation accuracy was measured with the Dice similarity coefficient (DSC), and the DSC per slice was computed to identify sources of error. With synth-DECT (and SECT) scans, average DSC scores of 0.93±0.06 (0.89±0.01) and 0.89±0.01 (0.81±0.02) were achieved on the held-out and generalization test sets, respectively. Synth-DECT-trained systems required less data to perform as well as SECT-trained systems. Low DSC scores were primarily observed around the scan margin or were due to non-liver tissue or distortions within ground-truth annotations. Overall, training with synth-DECT scans yielded improved segmentation performance with less data.
Affiliation(s)
- Usman Mahmood
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- David D. B. Bates
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Yusuf E. Erdi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Giuseppe Corrias
- Department of Radiology, University of Cagliari, 09124 Cagliari, Italy
- Christopher Kanan
- Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY 14623, USA
9. Nam D, Chapiro J, Paradis V, Seraphin TP, Kather JN. Artificial intelligence in liver diseases: improving diagnostics, prognostics and response prediction. JHEP Rep 2022;4:100443. PMID: 35243281. PMCID: PMC8867112. DOI: 10.1016/j.jhepr.2022.100443.
Abstract
Clinical routine in hepatology involves the diagnosis and treatment of a wide spectrum of metabolic, infectious, autoimmune and neoplastic diseases. Clinicians integrate qualitative and quantitative information from multiple data sources to make a diagnosis, prognosticate the disease course, and recommend a treatment. In the last 5 years, advances in artificial intelligence (AI), particularly in deep learning, have made it possible to extract clinically relevant information from complex and diverse clinical datasets. In particular, histopathology and radiology image data contain diagnostic, prognostic and predictive information which AI can extract. Ultimately, such AI systems could be implemented in clinical routine as decision support tools. However, in the context of hepatology, this requires further large-scale clinical validation and regulatory approval. Herein, we summarise the state of the art in AI in hepatology with a particular focus on histopathology and radiology data. We present a roadmap for the further development of novel biomarkers in hepatology and outline critical obstacles which need to be overcome.
10. Chen CI, Lu NH, Huang YH, Liu KY, Hsu SY, Matsushima A, Wang YM, Chen TB. Segmentation of liver tumors with abdominal computed tomography using fully convolutional networks. J Xray Sci Technol 2022;30:953-966. PMID: 35754254. DOI: 10.3233/xst-221194.
Abstract
BACKGROUND Segmenting the liver or liver lesions depicted on computed tomography (CT) images can help tumor staging and treatment. However, most existing image segmentation approaches use manual or semi-automatic analysis, making the analysis process costly and time-consuming. OBJECTIVE This research aims to develop and apply a deep learning network architecture to segment liver tumors automatically after fine-tuning its parameters. METHODS AND MATERIALS The medical images were obtained from the International Symposium on Biomedical Imaging (ISBI) and comprise 3D abdominal CT scans of 131 patients diagnosed with liver tumors, yielding 7,190 2D CT images with corresponding labeled binary images. The labeled binary images are regarded as the gold standard for evaluating the results segmented by an FCN (fully convolutional network). The FCN backbones studied are Xception, InceptionresNetv2, MobileNetv2, ResNet18, and ResNet50. Parameters including the optimizer (SGDM or ADAM), epoch size, and batch size are also investigated. CT images are randomly divided into training and testing sets at a ratio of 9:1. Several evaluation indices, including global accuracy, mean accuracy, mean IoU (intersection over union), weighted IoU, and mean BF score, are applied to evaluate tumor segmentation results on the testing images. RESULTS The global accuracy, mean accuracy, mean IoU, weighted IoU, and mean BF score are 0.999, 0.969, 0.954, 0.998, and 0.962 using ResNet50 in the FCN with the SGDM optimizer, batch size 12, and epoch size 9. Fine-tuning the parameters of the FCN model is important: the top 20 FCN models all achieve high tumor segmentation accuracy with mean IoU over 0.900. Among them, InceptionresNetv2, MobileNetv2, ResNet18, ResNet50, and Xception occur 9, 6, 3, 5, and 2 times, respectively; therefore, InceptionresNetv2 performs better than the others.
CONCLUSIONS This study developed and tested an automated liver tumor segmentation model based on the FCN. The results demonstrate that many deep learning backbones, including InceptionresNetv2, MobileNetv2, ResNet18, ResNet50, and Xception, have high potential for segmenting liver tumors from CT images with accuracy exceeding 90%. However, accurately segmenting tiny tumors remains difficult for FCN models.
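The IoU and global accuracy indices used in this entry can be computed directly from flattened label masks. A minimal sketch on toy masks (not the study's data or code):

```python
def iou(pred, truth):
    """Intersection over Union for one class over flat 0/1 masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    # Convention: two empty masks count as a perfect match.
    return inter / union if union else 1.0

def global_accuracy(pred, truth):
    """Fraction of pixels labeled correctly, all classes pooled."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

pred  = [1, 1, 0, 0, 1]   # toy predicted tumor mask
truth = [1, 0, 0, 0, 1]   # toy ground-truth mask
print(round(iou(pred, truth), 3))    # 0.667
print(global_accuracy(pred, truth))  # 0.8
```

Because tumors occupy few pixels, global accuracy can be near 1 even when IoU is modest, which is why the study reports both.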
Affiliation(s)
- Chih-I Chen
- Division of Colon and Rectal Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City, Taiwan
- Division of General Medicine Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City, Taiwan
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung City, Taiwan
- Department of Information Engineering, I-Shou University, Kaohsiung City, Taiwan
- The School of Chinese Medicine for Post Baccalaureate, I-Shou University, Kaohsiung City, Taiwan
- Nan-Han Lu
- Department of Pharmacy, Tajen University, Pingtung City, Taiwan
- Department of Radiology, E-DA Hospital, I-Shou University, Kaohsiung City, Taiwan
- Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City, Taiwan
- Yung-Hui Huang
- Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City, Taiwan
- Kuo-Ying Liu
- Department of Radiology, E-DA Hospital, I-Shou University, Kaohsiung City, Taiwan
- Shih-Yen Hsu
- Department of Information Engineering, I-Shou University, Kaohsiung City, Taiwan
- Akari Matsushima
- Department of Radiological Technology, Faculty of Medical Technology, Teikyo University, Tokyo, Japan
- Yi-Ming Wang
- Department of Information Engineering, I-Shou University, Kaohsiung City, Taiwan
- Department of Critical Care Medicine, E-DA Hospital, I-Shou University, Kaohsiung City, Taiwan
- Tai-Been Chen
- Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City, Taiwan
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
11
Liu Y, Chen Z, Wang J, Wang X, Qu B, Ma L, Zhao W, Zhang G, Xu S. Dose Prediction Using a Three-Dimensional Convolutional Neural Network for Nasopharyngeal Carcinoma With Tomotherapy. Front Oncol 2021; 11:752007. [PMID: 34858825 PMCID: PMC8631763 DOI: 10.3389/fonc.2021.752007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 10/21/2021] [Indexed: 01/14/2023] Open
Abstract
Purpose This study focused on predicting 3D dose distributions with high precision and developed prediction methods for nasopharyngeal carcinoma (NPC) patients treated with Tomotherapy, based on the patient-specific gap between organs at risk (OARs) and planning target volumes (PTVs). Methods A convolutional neural network (CNN) was trained using CT images and contour masks as input and dose distributions as output. The CNN is based on the "3D Dense-U-Net", which combines the U-Net and the DenseNet. To evaluate the model, we retrospectively used 124 NPC patients treated with Tomotherapy, randomly split into 96 patients for training and 28 for testing. We performed comparison studies using different training matrix shapes and dimensions: 128 × 128 × 48 (Model I), 128 × 128 × 16 (Model II), and a 2D Dense U-Net (Model III). Model performance was quantitatively evaluated using clinically relevant metrics and statistical analysis. Results A greater height of the training patch yielded a better model. Errors were calculated by comparing the predicted dose with the ground truth: the mean deviations from the mean and maximum doses of the PTVs and OARs were 2.42% and 2.93%, respectively. The error for the maximum dose of the right optic nerve was 4.87 ± 6.88% in Model I, compared with 7.9 ± 6.8% in Model II (p=0.08) and 13.85 ± 10.97% in Model III (p<0.01); Model I performed best. The gamma passing rate of PTV60 for the 3%/3 mm criterion was 83.6 ± 5.2% in Model I, compared with 75.9 ± 5.5% in Model II (p<0.001) and 77.2 ± 7.3% in Model III (p<0.01); Model I again gave the best outcome. The prediction error of D95 for PTV60 was 0.64 ± 0.68% in Model I, compared with 2.04 ± 1.38% in Model II (p<0.01) and 1.05 ± 0.96% in Model III (p=0.01); Model I was again the best.
Conclusions Training dose prediction models with deep learning techniques informed by clinical logic is worthwhile. Increasing the height (Y direction) of the training patch size can improve dose prediction accuracy for tiny OARs and for the whole body. Our dose prediction network provides clinically acceptable results and a training strategy for dose prediction models, and it should be helpful for building automatic Tomotherapy planning.
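The dose-volume metrics used to evaluate the models, such as D95 and the percent deviation of a predicted dose metric from ground truth, can be derived from a dose grid and a binary structure mask. A minimal NumPy sketch under that assumption; the helper names `d95` and `relative_error` are illustrative, not from the paper:

```python
import numpy as np

def d95(dose: np.ndarray, mask: np.ndarray) -> float:
    """Dose received by at least 95% of the structure's volume,
    i.e. the 5th percentile of dose values inside the mask."""
    return float(np.percentile(dose[mask.astype(bool)], 5))

def relative_error(pred_metric: float, true_metric: float) -> float:
    """Percent deviation of a predicted dose metric from ground truth,
    as used for the D95 prediction errors reported above."""
    return abs(pred_metric - true_metric) / true_metric * 100.0
```

For example, the reported 0.64% D95 error for PTV60 in Model I corresponds to `relative_error(d95(pred_dose, ptv60_mask), d95(true_dose, ptv60_mask))` averaged over the test patients, assuming this conventional definition of D95.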
Affiliation(s)
- Yaoying Liu
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
- School of Physics, Beihang University, Beijing, China
- Jinyuan Wang
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
- Xiaoshen Wang
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
- Baolin Qu
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
- Lin Ma
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China
- Wei Zhao
- School of Physics, Beihang University, Beijing, China
- Gaolong Zhang
- School of Physics, Beihang University, Beijing, China
- Shouping Xu
- Department of Radiation Oncology, the First Medical Center of the People's Liberation Army General Hospital, Beijing, China