1
Tsuboyama T, Yanagawa M, Fujioka T, Fujita S, Ueda D, Ito R, Yamada A, Fushimi Y, Tatsugami F, Nakaura T, Nozaki T, Kamagata K, Matsui Y, Hirata K, Fujima N, Kawamura M, Naganawa S. Recent trends in AI applications for pelvic MRI: a comprehensive review. La Radiologia Medica 2024; 129:1275-1287. PMID: 39096356. DOI: 10.1007/s11547-024-01861-4. Received 03/22/2024; accepted 07/25/2024.
Abstract
Magnetic resonance imaging (MRI) is an essential tool for evaluating pelvic disorders affecting the prostate, bladder, uterus, ovaries, and/or rectum. Since the diagnostic pathway of pelvic MRI can involve various complex procedures depending on the affected organ, the Reporting and Data System (RADS) is used to standardize image acquisition and interpretation. Artificial intelligence (AI), which encompasses machine learning and deep learning algorithms, has been integrated into both pelvic MRI and the RADS, particularly for prostate MRI. This review outlines recent developments in the use of AI in various stages of the pelvic MRI diagnostic pathway, including image acquisition, image reconstruction, organ and lesion segmentation, lesion detection and classification, and risk stratification, with special emphasis on recent trends in multi-center studies, which can help to improve the generalizability of AI.
Affiliation(s)
- Takahiro Tsuboyama
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe-City, Hyogo, 650-0017, Japan.
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita-City, Osaka, 565-0871, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8519, Japan
- Shohei Fujita
- Department of Radiology, Graduate School of Medicine and Faculty of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Daiju Ueda
- Department of Artificial Intelligence, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Akira Yamada
- Medical Data Science Course, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyoku, Kyoto, 606-8507, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo Chuo-ku, Kumamoto, 860-8556, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-0016, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, 113-8421, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kita-ku, Okayama, 700-8558, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita 15 Nishi 7, Kita-ku, Sapporo, Hokkaido, 060-8648, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N15, W5, Kita-ku, Sapporo, 060-8638, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
2
Cai M, Zhao L, Qiang Y, Wang L, Zhao J. CHNet: A multi-task global-local Collaborative Hybrid Network for KRAS mutation status prediction in colorectal cancer. Artif Intell Med 2024; 155:102931. PMID: 39094228. DOI: 10.1016/j.artmed.2024.102931. Received 09/25/2023; revised 06/29/2024; accepted 07/03/2024.
Abstract
Accurate prediction of Kirsten rat sarcoma (KRAS) mutation status is crucial for personalized treatment of advanced colorectal cancer patients. However, despite the excellent performance of deep learning models in certain aspects, they often overlook the synergy among multiple tasks and the joint consideration of global and local information, which can significantly reduce prediction accuracy. To address these issues, this paper proposes the Multi-task Global-Local Collaborative Hybrid Network (CHNet) for more accurate prediction of patients' KRAS mutation status. CHNet consists of two branches that extract global and local features for the segmentation and classification tasks, respectively, and exchange complementary information so that the two tasks collaborate. Within the two branches, we design a Channel-wise Hybrid Transformer (CHT) and a Spatial-wise Hybrid Transformer (SHT). These transformers integrate the advantages of both the Transformer and the CNN, employing cascaded hybrid attention and convolution to capture global and local information for the two tasks. Additionally, we create an Adaptive Collaborative Attention (ACA) module that guides the collaborative fusion of segmentation and classification features. Furthermore, we introduce a novel Class Activation Map (CAM) loss to encourage CHNet to learn complementary information between the two tasks. We evaluate CHNet on a T2-weighted MRI dataset and achieve an accuracy of 88.93% in KRAS mutation status prediction, outperforming representative KRAS mutation status prediction methods. The results suggest that CHNet predicts KRAS mutation status more accurately by exploiting multi-task collaboration together with global and local information, which can assist doctors in formulating more personalized treatment strategies for patients.
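The multi-task setup described in this abstract can be sketched as a weighted sum of a pixel-wise segmentation loss and an image-level classification loss. This is a minimal illustration of the general idea only, not the authors' CHNet implementation (which additionally uses hybrid transformers, an ACA module, and a CAM loss); the function names and task weights below are hypothetical.

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for a single probability p against label y."""
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def multitask_loss(seg_probs, seg_mask, cls_prob, cls_label,
                   w_seg=1.0, w_cls=1.0):
    """Joint objective: mean pixel-wise segmentation BCE plus image-level
    classification BCE, combined with task weights (both weights are
    illustrative assumptions)."""
    l_seg = sum(bce(p, y) for p, y in zip(seg_probs, seg_mask)) / len(seg_probs)
    l_cls = bce(cls_prob, cls_label)
    return w_seg * l_seg + w_cls * l_cls

# Toy example: a 4-pixel lesion mask and one KRAS mutation label
loss = multitask_loss([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0], 0.7, 1)
```

Training on this joint objective is what lets gradients from classification shape the segmentation features, and vice versa.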
Affiliation(s)
- Meiling Cai
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, Shanxi, China.
- Lin Zhao
- Southeast University, Nanjing, 210037, Jiangsu, China
- Yan Qiang
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, Shanxi, China
- Long Wang
- Jinzhong College of Information, Jinzhong, 030800, Shanxi, China
- Juanjuan Zhao
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, 030024, Shanxi, China.
3
Yang M, Yang M, Yang L, Wang Z, Ye P, Chen C, Fu L, Xu S. Deep learning for MRI lesion segmentation in rectal cancer. Front Med (Lausanne) 2024; 11:1394262. PMID: 38983364. PMCID: PMC11231084. DOI: 10.3389/fmed.2024.1394262. Received 03/01/2024; accepted 06/14/2024. Open access.
Abstract
Rectal cancer (RC) is a globally prevalent malignant tumor that presents significant challenges in management and treatment. Magnetic resonance imaging (MRI) offers superior soft-tissue contrast without ionizing radiation, making it the most widely used and effective detection method for RC patients. In early screening, radiologists rely on patients' radiological characteristics and their own extensive clinical experience for diagnosis. However, diagnostic accuracy may be hindered by factors such as limited expertise, visual fatigue, and image clarity issues, resulting in misdiagnosis or missed diagnosis. Moreover, the organs surrounding the rectum are widely distributed, and some have shapes similar to the tumor with unclear boundaries; these complexities greatly impede doctors' ability to diagnose RC accurately. With recent advances in artificial intelligence, machine learning techniques such as deep learning (DL) have demonstrated immense potential in medical image analysis. Their emergence has significantly enhanced research in medical image classification, detection, and segmentation, with particular emphasis on segmentation. This review discusses the development of DL segmentation algorithms and their application to lesion segmentation in MRI images of RC, to provide theoretical guidance and support for further advances in this field.
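Segmentation quality in the literature this review covers is typically reported with the Dice similarity coefficient (and models are often trained with 1 − Dice as a loss). A minimal sketch of the metric, not tied to any particular model in the review:

```python
def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks, given as
    flattened 0/1 lists: Dice = 2|A∩B| / (|A| + |B|). The eps term
    avoids division by zero when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)

pred   = [1, 1, 0, 0, 1]   # predicted lesion pixels
target = [1, 0, 0, 0, 1]   # ground-truth lesion pixels
score = dice_score(pred, target)  # 2*2 / (3+2) = 0.8
```

A score of 1.0 means perfect overlap between predicted and ground-truth lesion masks; 0.0 means no overlap.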
Affiliation(s)
- Mingwei Yang
- Department of General Surgery, Nanfang Hospital Zengcheng Campus, Guangzhou, Guangdong, China
- Miyang Yang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Lanlan Yang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Zhaochu Wang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Peiyun Ye
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Chujie Chen
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Liyuan Fu
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Shangwen Xu
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
4
Ma Y, Guo Y, Cui W, Liu J, Li Y, Wang Y, Qiang Y. SG-Transunet: A segmentation-guided Transformer U-Net model for KRAS gene mutation status identification in colorectal cancer. Comput Biol Med 2024; 173:108293. PMID: 38574528. DOI: 10.1016/j.compbiomed.2024.108293. Received 12/19/2023; revised 02/28/2024; accepted 03/12/2024.
Abstract
Accurately identifying Kirsten rat sarcoma virus (KRAS) gene mutation status in colorectal cancer (CRC) patients can assist doctors in deciding whether to use specific targeted drugs. Although deep learning methods are popular, they are often affected by redundant features from non-lesion areas. Moreover, existing methods commonly extract spatial features from imaging data while neglecting important frequency-domain features, which may degrade the performance of KRAS gene mutation status identification. To address this deficiency, we propose a segmentation-guided Transformer U-Net (SG-Transunet) model for KRAS gene mutation status identification in CRC. Integrating the strengths of convolutional neural networks (CNNs) and Transformers, SG-Transunet offers a unique approach to both lesion segmentation and KRAS mutation status identification. Specifically, for precise lesion localization, we employ an encoder-decoder to obtain segmentation results that guide the KRAS gene mutation status identification task. Subsequently, a frequency-domain supplement block is designed to capture frequency-domain features, which are integrated with the high-level spatial features extracted in the encoding path to derive advanced spatial-frequency features. Furthermore, we introduce a pre-trained Xception block to mitigate the risk of overfitting on small-scale datasets. Following this, an aggregate attention module is devised to consolidate the spatial-frequency features with global information extracted by the Transformer at shallow and deep levels, thereby enhancing feature discriminability. Finally, we propose a mutually constrained loss function that simultaneously constrains segmentation mask acquisition and gene status identification. Experimental results demonstrate the superior performance of SG-Transunet over state-of-the-art methods in discriminating KRAS gene mutation status.
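The frequency-domain supplement block described above rests on a simple idea: a spectrum decomposes an intensity profile into smooth (low-frequency) and edge- or texture-like (high-frequency) components, information that purely spatial convolutions can miss. A toy illustration using a naive DFT on a 1-D profile — an assumption for illustration only; the paper's actual block operates on deep feature maps, not raw signals:

```python
import cmath

def dft_magnitudes(signal):
    """Naive discrete Fourier transform; returns the magnitude spectrum
    of a 1-D intensity profile. Bin 0 (DC) captures the smooth overall
    level; higher bins capture increasingly fine oscillations."""
    n = len(signal)
    mags = []
    for k in range(n):
        s = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(signal))
        mags.append(abs(s))
    return mags

# A constant (DC-only) profile: all spectral energy lands in bin 0
mags = dft_magnitudes([1.0, 1.0, 1.0, 1.0])
```

In practice one would use an FFT routine rather than this O(n²) loop; the point is only what the frequency bins represent.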
Affiliation(s)
- Yulan Ma
- Department of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Yuzhu Guo
- Department of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China
- Weigang Cui
- School of Engineering Medicine, Beihang University, Beijing, 100191, China
- Jingyu Liu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Yang Li
- Department of Automation Science and Electrical Engineering, Beihang University, Beijing, 100191, China.
- Yingsen Wang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Yan Qiang
- School of Software, North University of China, Taiyuan, China; College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China.
5
Song K, Bian Y, Zeng F, Liu Z, Han S, Li J, Tian J, Li K, Shi X, Xiao L. Photon-level single-pixel 3D tomography with masked attention network. Optics Express 2024; 32:4387-4399. PMID: 38297641. DOI: 10.1364/oe.510706. Received 10/31/2023; accepted 01/11/2024.
Abstract
Tomography plays an important role in characterizing the three-dimensional structure of samples in specialized scenarios. In this paper, a masked attention network is presented to eliminate interference between different layers of the sample, substantially enhancing the resolution of photon-level single-pixel tomographic imaging. Simulation and experimental results demonstrate that the axial and lateral resolution of the imaging system can be improved by about 3 and 2 times, respectively, at a sampling rate of 3.0%. The scheme is expected to integrate seamlessly into various tomography systems, promoting tomographic imaging in biology, medicine, and materials science.
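The forward model underlying single-pixel imaging is a set of inner products between the scene and a sequence of illumination masks, with the sampling rate given by the number of masks divided by the number of pixels. A toy sketch of that forward model under those assumptions (the paper's masked attention reconstruction network is not shown):

```python
def single_pixel_measurements(scene, masks):
    """Forward model of single-pixel imaging: each measurement is the
    total light collected (an inner product) when the scene is
    illuminated with one binary mask pattern."""
    return [sum(m * s for m, s in zip(mask, scene)) for mask in masks]

scene = [0, 1, 1, 0, 0, 1, 0, 0]            # 8-pixel toy scene
masks = [[1, 0, 1, 0, 1, 0, 1, 0],          # two illumination patterns
         [1, 1, 0, 0, 1, 1, 0, 0]]
y = single_pixel_measurements(scene, masks)  # [1, 2]
sampling_rate = len(masks) / len(scene)      # 0.25 here; 0.03 in the paper
```

Reconstruction then amounts to inverting this underdetermined map, which is where a learned network earns its keep at very low sampling rates.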
6
Choi BS, Yoo SK, Moon J, Chung SY, Oh J, Baek S, Kim Y, Chang JS, Kim H, Kim JS. Acute coronary event (ACE) prediction following breast radiotherapy by features extracted from 3D CT, dose, and cardiac structures. Med Phys 2023; 50:6409-6420. PMID: 36974390. DOI: 10.1002/mp.16398. Received 06/14/2022; revised 02/22/2023; accepted 03/21/2023. Open access.
Abstract
PURPOSE Heart toxicity, such as major acute coronary events (ACE), following breast radiation therapy (RT) is of utmost concern. Many studies have therefore investigated the effect of mean heart dose (MHD) and of the dose received by heart sub-structures on toxicity. Most studies focused on dose thresholds in the heart and its sub-structures, while few adopted computational methods such as deep neural networks (DNNs) and radiomics. This work aims to construct a feature-driven predictive model for ACE after breast RT. METHODS A recently proposed two-step predictive model that extracts features with a deep auto-segmentation network and then processes the selected features for prediction was adopted. This work refined the auto-segmentation network and the feature-processing algorithms to enhance performance in cardiac toxicity prediction. In the predictive model, a deep convolutional neural network (CNN) extracted features from 3D computed tomography (CT) images and dose distributions in three automatically segmented heart sub-structures: the left anterior descending artery (LAD), right coronary artery (RCA), and left ventricle (LV). The optimal feature-processing workflow for the extracted features was explored to enhance prediction accuracy. The regions associated with toxicity were visualized using a class activation map (CAM)-based technique. The proposed model was validated against a conventional DNN (convolutional and fully connected layers) and radiomics with a cohort of 84 cases, comprising 29 cases with and 55 without ACE. Of the 84 cases, 12 randomly chosen cases (5 toxicity and 7 non-toxicity) were set aside as an independent test set, and the remaining 72 cases underwent 4-fold stratified cross-validation.
RESULTS Our predictive model outperformed the conventional DNN by 38% and 10%, and radiomics-based predictive models by 9% and 10%, in AUC for the 4-fold cross-validation and the independent test, respectively. The enhancement was greater when dose information and heart sub-structures were incorporated into feature extraction. The model whose inputs were CT, dose, and the three sub-structures (LV, LAD, and RCA) reached an average prediction accuracy of 96% and an average area under the curve (AUC) of 0.94 in cross-validation, and achieved a prediction accuracy of 83% and an AUC of 0.83 in the independent test. For the 10 of 12 correctly predicted independent-test cases, the activation maps implied that in ACE toxicity cases higher intensity was more likely to be observed inside the LV. CONCLUSIONS The proposed model, characterized by dose distributions and cardiac sub-structures as additional model inputs and by serial feature-extraction and feature-selection techniques, can improve predictive performance for ACE following breast RT.
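The stratified 4-fold split described above (72 training cases after the independent test set is held out, i.e. 24 toxicity and 48 non-toxicity cases) can be sketched as follows. This is a generic round-robin stratification for illustration, not the authors' exact split:

```python
def stratified_kfold(labels, k=4):
    """Split sample indices into k folds while keeping each fold's
    class ratio equal to the overall ratio (here: toxicity vs. none)."""
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    folds = [[] for _ in range(k)]
    for members in by_class.values():
        for i, idx in enumerate(members):   # deal each class round-robin
            folds[i % k].append(idx)
    return folds

# 72 training cases as in the study: 24 with ACE toxicity, 48 without
labels = [1] * 24 + [0] * 48
folds = stratified_kfold(labels, k=4)       # each fold: 6 toxicity, 12 not
```

Stratification matters here because with only 24 positive cases, a plain random split could leave a fold nearly empty of toxicity examples and distort the per-fold AUC.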
Affiliation(s)
- Byong Su Choi
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Sang Kyun Yoo
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Jinyoung Moon
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Seung Yeun Chung
- Department of Radiation Oncology, Ajou University School of Medicine, Suwon, South Korea
- Jaewon Oh
- Cardiology Division, Severance Cardiovascular Hospital, and Cardiovascular Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Stephen Baek
- School of Data Science, University of Virginia, Charlottesville, Virginia, USA
- Yusung Kim
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Texas, USA
- Jee Suk Chang
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul, South Korea
- Hojin Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, South Korea
7
Predicting gene mutation status via artificial intelligence technologies based on multimodal integration (MMI) to advance precision oncology. Semin Cancer Biol 2023; 91:1-15. PMID: 36801447. DOI: 10.1016/j.semcancer.2023.02.006. Received 08/29/2022; revised 01/30/2023; accepted 02/15/2023.
Abstract
Personalized cancer treatment strategies frequently rely on the detection of genetic alterations determined by molecular biology assays. Historically, these processes required single-gene sequencing, next-generation sequencing, or visual inspection of histopathology slides by experienced pathologists in a clinical context. In the past decade, advances in artificial intelligence (AI) have demonstrated remarkable potential for assisting physicians with oncology image-recognition tasks. Meanwhile, AI techniques make it possible to integrate multimodal data such as radiology, histology, and genomics, providing critical guidance for patient stratification in precision therapy. Given that mutation testing is costly and time-consuming for a considerable number of patients, predicting gene mutations from routine clinical radiological scans or whole-slide tissue images with AI-based methods has become a topic of intense interest in clinical practice. In this review, we synthesize the general framework of multimodal integration (MMI) for molecular intelligent diagnostics beyond standard techniques. We then summarize emerging applications of AI in predicting mutational and molecular profiles of common cancers (lung, brain, breast, and other tumor types) from radiology and histology imaging. Furthermore, we conclude that multiple challenges remain for real-world application of AI in medicine, including data curation, feature fusion, model interpretability, and practice regulations. Despite these challenges, we still anticipate the clinical implementation of AI as a highly promising decision-support tool to aid oncologists in future cancer treatment management.
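The simplest MMI strategy surveyed in reviews like this one is late fusion: each modality produces its own mutation-probability score, and the scores are combined by a weighted average. A minimal sketch under that assumption; the function name and weights are illustrative, not from the review:

```python
def late_fusion(modal_scores, weights=None):
    """Weighted late fusion: combine per-modality mutation-probability
    scores (e.g. radiology, histology, genomics models) into a single
    prediction. Defaults to a plain average."""
    if weights is None:
        weights = [1.0 / len(modal_scores)] * len(modal_scores)
    return sum(w * p for w, p in zip(weights, modal_scores))

# Radiology, histology, and genomics model scores for one patient
p = late_fusion([0.8, 0.6, 0.7])
```

Early and intermediate fusion instead combine raw inputs or learned features before the final predictor, trading simplicity for richer cross-modal interactions.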