1
Zhu S, Chen G, Chen H, Lu Y, Wu M, Zheng B, Liu D, Qian C, Chen Y. Squeeze-and-excitation-attention-based mobile vision transformer for grading recognition of bladder prolapse in pelvic MRI images. Med Phys 2024; 51:5236-5249. [PMID: 38767532 DOI: 10.1002/mp.17171]
Abstract
BACKGROUND Bladder prolapse is a common clinical disorder of pelvic floor dysfunction in women, and early diagnosis and treatment can help them recover. Pelvic magnetic resonance imaging (MRI) is one of the most important methods used by physicians to diagnose bladder prolapse; however, it is highly subjective and largely dependent on the clinical experience of physicians. The application of computer-aided diagnostic techniques to achieve a graded diagnosis of bladder prolapse can help improve its accuracy and shorten the learning curve. PURPOSE The purpose of this study is to combine a convolutional neural network (CNN) and a vision transformer (ViT) for grading bladder prolapse in place of traditional neural networks, and to incorporate attention mechanisms into the mobile vision transformer (MobileViT) to assist in the grading of bladder prolapse. METHODS This study focuses on the grading of bladder prolapse in pelvic organs using a combination of a CNN and a ViT. First, this study used MobileNetV2 to extract the local features of the images. Next, a ViT was used to extract the global features by modeling the non-local dependencies at a distance. Finally, a channel attention module (i.e., a squeeze-and-excitation network) was used to improve the feature extraction network and enhance its feature representation capability. The final grading of the degree of bladder prolapse was thus achieved. RESULTS Using pelvic MRI images provided by Huzhou Maternal and Child Health Care Hospital, this study used the proposed method to grade patients with bladder prolapse. The accuracy, Kappa value, sensitivity, specificity, precision, and area under the curve of our method were 86.34%, 78.27%, 83.75%, 95.43%, 85.70%, and 95.05%, respectively. In comparison with other CNN models, the proposed method performed better. CONCLUSIONS Thus, the model based on attention mechanisms exhibits better classification performance than existing methods for grading bladder prolapse in pelvic organs, and it can effectively assist physicians in achieving a more accurate bladder prolapse diagnosis.
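For orientation, below is a minimal sketch of a squeeze-and-excitation (SE) channel-attention block of the kind named in the title: global pooling, a bottleneck of fully connected layers, and sigmoid gating of the channels. It is a generic PyTorch rendering under assumed shapes and reduction ratio, not the authors' implementation.

```python
# Generic squeeze-and-excitation (SE) channel-attention block; channel count and
# reduction ratio are placeholders, not the paper's settings.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                               # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                    # channel-wise recalibration of the feature map

# Example: recalibrate a 64-channel feature map such as one produced by a MobileNetV2 stage.
feats = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feats).shape)                         # torch.Size([2, 64, 32, 32])
```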
Affiliation(s)
- Shaojun Zhu: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Guotao Chen: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
- Hongguang Chen: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
- Ying Lu: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
- Maonian Wu: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Bo Zheng: School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, China
- Dongquan Liu: Ninghai First Hospital, Ninghai, Zhejiang, China
- Cheng Qian: Department of Colon-rectal Surgery, Huzhou Maternity & Child Health Care Hospital, Huzhou, China
- Yun Chen: Department of Colon-rectal Surgery, Huzhou Maternity & Child Health Care Hospital, Huzhou, China
2
Jang DH, Lee J, Jeon YJ, Yoon YE, Ahn H, Kang BK, Choi WS, Oh J, Lee DK. Kidney, ureter, and urinary bladder segmentation based on non-contrast enhanced computed tomography images using modified U-Net. Sci Rep 2024; 14:15325. [PMID: 38961140 PMCID: PMC11222420 DOI: 10.1038/s41598-024-66045-6]
Abstract
This study was performed to segment the urinary system as the basis for diagnosing urinary system diseases on non-contrast computed tomography (CT). This study was conducted with images obtained between January 2016 and December 2020. During the study period, non-contrast abdominopelvic CT scans of patients diagnosed with and treated for urinary stones at the emergency departments of two institutions were collected. Region of interest extraction was first performed, and urinary system segmentation was performed using a modified U-Net. Thereafter, fivefold cross-validation was performed to evaluate the robustness of the model performance. In the fivefold cross-validation results for the segmentation of the urinary system, the average Dice coefficient was 0.8673, and the Dice coefficients for each class (kidney, ureter, and urinary bladder) were 0.9651, 0.7172, and 0.9196, respectively. In the test dataset, the average Dice coefficient of the best-performing model from fivefold cross-validation for the whole urinary system was 0.8623, and the Dice coefficients for each class (kidney, ureter, and urinary bladder) were 0.9613, 0.7225, and 0.9032, respectively. The segmentation of the urinary system using the modified U-Net proposed in this study could be the basis for the detection of kidney, ureter, and urinary bladder lesions, such as stones and tumours, through machine learning.
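As a reference point for the numbers quoted above, a minimal sketch of the per-class Dice coefficient follows; the label encoding (1 = kidney, 2 = ureter, 3 = bladder) and the toy arrays are assumptions for illustration only.

```python
# Per-class Dice coefficient on integer label maps; labels here are placeholders.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, label: int, eps: float = 1e-7) -> float:
    p = (pred == label)
    t = (truth == label)
    return float((2.0 * np.logical_and(p, t).sum() + eps) / (p.sum() + t.sum() + eps))

# Example on a toy label map (0 = background).
pred  = np.array([[0, 1, 1], [2, 2, 3], [3, 3, 0]])
truth = np.array([[0, 1, 0], [2, 2, 3], [3, 0, 0]])
for name, label in [("kidney", 1), ("ureter", 2), ("bladder", 3)]:
    print(name, round(dice_coefficient(pred, truth, label), 3))
```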
Affiliation(s)
- Dong-Hyun Jang: Department of Public Healthcare Service, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Juncheol Lee: Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
- Young Eun Yoon: Department of Urology, College of Medicine, Hanyang University, Seoul, Republic of Korea
- Hyungwoo Ahn: Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Bo-Kyeong Kang: Department of Radiology, College of Medicine, Hanyang University, Seoul, Republic of Korea
- Won Seok Choi: Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Jaehoon Oh: Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
- Dong Keon Lee: Department of Emergency Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Republic of Korea; Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
3
Yue X, Huang X, Xu Z, Chen Y, Xu C. Involving logical clinical knowledge into deep neural networks to improve bladder tumor segmentation. Med Image Anal 2024; 95:103189. [PMID: 38776840 DOI: 10.1016/j.media.2024.103189]
Abstract
Segmentation of bladder tumors from medical radiographic images is of great significance for early detection, diagnosis and prognosis evaluation of bladder cancer. Deep Convolutional Neural Networks (DCNNs) have been successfully used for bladder tumor segmentation, but DCNN-based segmentation is data-hungry for model training and ignores clinical knowledge. From the clinical view, bladder tumors originate from the mucosal surface of the bladder and must rely on the bladder wall to survive and grow. This clinical knowledge of tumor location is helpful for improving bladder tumor segmentation. To achieve this, we propose a novel bladder tumor segmentation method, which incorporates the clinical logic rules of the bladder tumor and bladder wall into DCNNs to guide the tumor segmentation. Clinical logical rules provide a semantic and human-readable knowledge representation and facilitate knowledge acquisition from clinicians. In addition, incorporating logical rules of clinical knowledge helps to reduce the data dependency of the segmentation network and enables precise segmentation results even with a limited number of annotated images. Experiments on bladder MR images collected from the collaborating hospital validate the effectiveness of the proposed bladder tumor segmentation method.
Affiliation(s)
- Xiaodong Yue: Artificial Intelligence Institute of Shanghai University, Shanghai University, Shanghai 200444, China; School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Xiao Huang: School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Zhikang Xu: School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Yufei Chen: College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
- Chuanliang Xu: Department of Urology, Changhai Hospital, Shanghai 200433, China
4
Zhao X, Lai L, Li Y, Zhou X, Cheng X, Chen Y, Huang H, Guo J, Wang G. A lightweight bladder tumor segmentation method based on attention mechanism. Med Biol Eng Comput 2024; 62:1519-1534. [PMID: 38308022 DOI: 10.1007/s11517-024-03018-x]
Abstract
In endoscopic images of the bladder, accurate segmentation of bladder tumors of different grades, which have blurred boundaries and highly variable shapes, is of great significance for doctors' diagnosis and patients' subsequent treatment. We propose a nested attentional feature fusion segmentation network (NAFF-Net) based on an encoder-decoder structure formed by combining a weighted pyramid pooling module (WPPM) with nested attentional feature fusion (NAFF). The WPPM applies cascaded atrous convolutions to enlarge the overall receptive field while introducing adaptive weights to optimize multi-scale feature extraction; NAFF integrates deep semantic information into shallow feature maps, effectively focusing on edge and detail information in bladder tumor images. Additionally, a weighted mixed loss function is constructed to alleviate the impact of the imbalance between positive and negative sample distributions on segmentation accuracy. Experiments show that the proposed NAFF-Net achieves better segmentation results than other mainstream models, with an MIoU of 84.05%, MPrecision of 91.52%, MRecall of 90.81%, and F1-score of 91.16%, and also achieves good results on the public datasets Kvasir-SEG and CVC-ClinicDB. Compared to other models, NAFF-Net has a smaller number of parameters, which is a significant advantage in model deployment.
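A "weighted mixed loss" for foreground/background imbalance is commonly realized as a weighted cross-entropy term combined with a soft Dice term. The sketch below shows one such combination under assumed weighting; the positive-class weight and mixing coefficient are placeholders, and this is not necessarily the paper's exact formulation.

```python
# One common form of a weighted mixed loss: weighted BCE plus soft Dice.
# pos_weight and alpha are illustrative assumptions, not the paper's values.
import torch
import torch.nn.functional as F

def weighted_mixed_loss(logits, target, pos_weight=5.0, alpha=0.5, eps=1e-7):
    # Weighted BCE up-weights the sparse tumor (positive) pixels.
    bce = F.binary_cross_entropy_with_logits(
        logits, target, pos_weight=torch.tensor(pos_weight))
    # Soft Dice loss on the predicted probabilities.
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    return alpha * bce + (1.0 - alpha) * dice

# Example with a random prediction against a sparse binary mask.
logits = torch.randn(1, 1, 64, 64)
target = (torch.rand(1, 1, 64, 64) > 0.9).float()
print(weighted_mixed_loss(logits, target).item())
```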
Affiliation(s)
- Xiushun Zhao: School of Automation, Guangdong University of Technology, Guangzhou, 510006, China
- Libing Lai: Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, 330006, China
- Yunjiao Li: School of Automation, Guangdong University of Technology, Guangzhou, 510006, China
- Xiaochen Zhou: Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, 330006, China
- Xiaofeng Cheng: Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, 330006, China
- Yujun Chen: Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, 330006, China
- Haohui Huang: School of Automation, Guangdong University of Technology, Guangzhou, 510006, China
- Jing Guo: School of Automation, Guangdong University of Technology, Guangzhou, 510006, China
- Gongxian Wang: Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, 330006, China
5
Zhang W, Tao Y, Huang Z, Li Y, Chen Y, Song T, Ma X, Zhang Y. Multi-phase features interaction transformer network for liver tumor segmentation and microvascular invasion assessment in contrast-enhanced CT. Math Biosci Eng 2024; 21:5735-5761. [PMID: 38872556 DOI: 10.3934/mbe.2024253]
Abstract
Precise segmentation of liver tumors from computed tomography (CT) scans is a prerequisite step in various clinical applications. Multi-phase CT imaging enhances tumor characterization, thereby assisting radiologists in accurate identification. However, existing automatic liver tumor segmentation models did not fully exploit multi-phase information and lacked the capability to capture global information. In this study, we developed a pioneering multi-phase feature interaction Transformer network (MI-TransSeg) for accurate liver tumor segmentation and a subsequent microvascular invasion (MVI) assessment in contrast-enhanced CT images. In the proposed network, an efficient multi-phase features interaction module was introduced to enable bi-directional feature interaction among multiple phases, thus maximally exploiting the available multi-phase information. To enhance the model's capability to extract global information, a hierarchical transformer-based encoder and decoder architecture was designed. Importantly, we devised a multi-resolution scales feature aggregation strategy (MSFA) to optimize the parameters and performance of the proposed model. Subsequent to segmentation, the liver tumor masks generated by MI-TransSeg were applied to extract radiomic features for the clinical applications of the MVI assessment. With Institutional Review Board (IRB) approval, a clinical multi-phase contrast-enhanced CT abdominal dataset was collected that included 164 patients with liver tumors. The experimental results demonstrated that the proposed MI-TransSeg was superior to various state-of-the-art methods. Additionally, we found that the tumor mask predicted by our method showed promising potential in the assessment of microvascular invasion. In conclusion, MI-TransSeg presents an innovative paradigm for the segmentation of complex liver tumors, thus underscoring the significance of multi-phase CT data exploitation. The proposed MI-TransSeg network has the potential to assist radiologists in diagnosing liver tumors and assessing microvascular invasion.
Affiliation(s)
- Wencong Zhang: Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China; Department of Biomedical Engineering, College of Design and Engineering, National University of Singapore, Singapore
- Yuxi Tao: Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Zhanyao Huang: Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
- Yue Li: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Yingjia Chen: Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
- Tengfei Song: Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Xiangyuan Ma: Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
- Yaqin Zhang: Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
6
Akin O, Lema-Dopico A, Paudyal R, Konar AS, Chenevert TL, Malyarenko D, Hadjiiski L, Al-Ahmadie H, Goh AC, Bochner B, Rosenberg J, Schwartz LH, Shukla-Dave A. Multiparametric MRI in Era of Artificial Intelligence for Bladder Cancer Therapies. Cancers (Basel) 2023; 15:5468. [PMID: 38001728 PMCID: PMC10670574 DOI: 10.3390/cancers15225468]
Abstract
This review focuses on the principles, applications, and performance of mpMRI for bladder imaging. Quantitative imaging biomarkers (QIBs) derived from mpMRI are increasingly used in oncological applications, including tumor staging, prognosis, and assessment of treatment response. To standardize mpMRI acquisition and interpretation, an expert panel developed the Vesical Imaging-Reporting and Data System (VI-RADS). Many studies confirm the standardization and high degree of inter-reader agreement to discriminate muscle invasiveness in bladder cancer, supporting VI-RADS implementation in routine clinical practice. The standard MRI sequences for VI-RADS scoring are anatomical imaging, including T2w images, and physiological imaging with diffusion-weighted MRI (DW-MRI) and dynamic contrast-enhanced MRI (DCE-MRI). Physiological QIBs derived from analysis of DW- and DCE-MRI data and radiomic image features extracted from mpMRI images play an important role in bladder cancer. The current development of AI tools for analyzing mpMRI data and their potential impact on bladder imaging are surveyed. AI architectures are often implemented based on convolutional neural networks (CNNs), focusing on narrow/specific tasks. The application of AI can substantially impact bladder imaging clinical workflows; for example, manual tumor segmentation, which demands high time commitment and has inter-reader variability, can be replaced by an autosegmentation tool. The use of mpMRI and AI is projected to drive the field toward the personalized management of bladder cancer patients.
Affiliation(s)
- Oguz Akin: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Alfonso Lema-Dopico: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ramesh Paudyal: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Dariya Malyarenko: Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir Hadjiiski: Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Hikmat Al-Ahmadie: Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Alvin C. Goh: Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Bernard Bochner: Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Jonathan Rosenberg: Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Lawrence H. Schwartz: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Amita Shukla-Dave: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA; Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
7
Ferro M, Falagario UG, Barone B, Maggi M, Crocetto F, Busetto GM, Giudice FD, Terracciano D, Lucarelli G, Lasorsa F, Catellani M, Brescia A, Mistretta FA, Luzzago S, Piccinelli ML, Vartolomei MD, Jereczek-Fossa BA, Musi G, Montanari E, Cobelli OD, Tataru OS. Artificial Intelligence in the Advanced Diagnosis of Bladder Cancer-Comprehensive Literature Review and Future Advancement. Diagnostics (Basel) 2023; 13:2308. [PMID: 37443700 DOI: 10.3390/diagnostics13132308]
Abstract
Artificial intelligence is highly regarded as the most promising future technology that will have a great impact on healthcare across all specialties. Its subsets, machine learning, deep learning, and artificial neural networks, are able to learn automatically from massive amounts of data and can improve prediction algorithms to enhance their performance. This area is still under development, but the latest evidence shows great potential in the diagnosis, prognosis, and treatment of urological diseases, including bladder cancer, which currently rely on old prediction tools and historical nomograms. This review focuses on highly significant and comprehensive literature evidence of artificial intelligence in the management of bladder cancer and investigates its forthcoming introduction into clinical practice.
Affiliation(s)
- Matteo Ferro: Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
- Ugo Giovanni Falagario: Department of Urology and Organ Transplantation, University of Foggia, 71121 Foggia, Italy
- Biagio Barone: Urology Unit, Department of Surgical Sciences, AORN Sant'Anna e San Sebastiano, 81100 Caserta, Italy
- Martina Maggi: Department of Maternal Infant and Urologic Sciences, Policlinico Umberto I Hospital, Sapienza University of Rome, 00161 Rome, Italy
- Felice Crocetto: Department of Neurosciences and Reproductive Sciences and Odontostomatology, University of Naples Federico II, 80131 Naples, Italy
- Gian Maria Busetto: Department of Urology and Organ Transplantation, University of Foggia, 71121 Foggia, Italy
- Francesco Del Giudice: Department of Maternal Infant and Urologic Sciences, Policlinico Umberto I Hospital, Sapienza University of Rome, 00161 Rome, Italy
- Daniela Terracciano: Department of Translational Medical Sciences, University of Naples "Federico II", 80131 Naples, Italy
- Giuseppe Lucarelli: Urology, Andrology and Kidney Transplantation Unit, Department of Emergency and Organ Transplantation, University of Bari, 70124 Bari, Italy
- Francesco Lasorsa: Urology, Andrology and Kidney Transplantation Unit, Department of Emergency and Organ Transplantation, University of Bari, 70124 Bari, Italy
- Michele Catellani: Department of Urology, ASST Papa Giovanni XXIII, 24127 Bergamo, Italy
- Antonio Brescia: Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
- Francesco Alessandro Mistretta: Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Stefano Luzzago: Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Mattia Luca Piccinelli: Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
- Barbara Alicja Jereczek-Fossa: Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy; Division of Radiation Oncology, IEO-European Institute of Oncology IRCCS, 20141 Milan, Italy
- Gennaro Musi: Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Emanuele Montanari: Department of Urology, Foundation IRCCS Ca' Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy; Department of Clinical Sciences and Community Health, University of Milan, 20122 Milan, Italy
- Ottavio de Cobelli: Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Octavian Sabin Tataru: Department of Simulation Applied in Medicine, George Emil Palade University of Medicine, Pharmacy, Science and Technology of Târgu Mures, 540142 Târgu Mures, Romania
8
Cellina M, Cè M, Rossini N, Cacioppa LM, Ascenti V, Carrafiello G, Floridi C. Computed Tomography Urography: State of the Art and Beyond. Tomography 2023; 9:909-930. [PMID: 37218935 PMCID: PMC10204399 DOI: 10.3390/tomography9030075]
Abstract
Computed Tomography Urography (CTU) is a multiphase CT examination optimized for imaging kidneys, ureters, and bladder, complemented by post-contrast excretory phase imaging. Different protocols are available for contrast administration and image acquisition and timing, with different strengths and limits, mainly related to kidney enhancement, ureter distension and opacification, and radiation exposure. The availability of new reconstruction algorithms, such as iterative and deep-learning-based reconstruction, has dramatically improved image quality while reducing radiation exposure at the same time. Dual-Energy Computed Tomography also has an important role in this type of examination, with the possibility of renal stone characterization, the availability of synthetic unenhanced phases to reduce radiation dose, and the availability of iodine maps for a better interpretation of renal masses. We also describe the new artificial intelligence applications for CTU, focusing on radiomics to predict tumor grading and patients' outcomes for a personalized therapeutic approach. In this narrative review, we provide a comprehensive overview of CTU from the traditional to the newest acquisition techniques and reconstruction algorithms, together with the possibilities of advanced image interpretation, to provide an up-to-date guide for radiologists who want to better comprehend this technique.
Affiliation(s)
- Michaela Cellina: Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Maurizio Cè: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Nicolo’ Rossini: Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy
- Laura Maria Cacioppa: Division of Interventional Radiology, Department of Radiological Sciences, University Politecnica delle Marche, 60126 Ancona, Italy
- Velio Ascenti: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Gianpaolo Carrafiello: Radiology Department, Policlinico di Milano Ospedale Maggiore, Fondazione IRCCS Ca’ Granda, Via Francesco Sforza 35, 20122 Milan, Italy
- Chiara Floridi: Division of Interventional Radiology, Department of Radiological Sciences, University Politecnica delle Marche, 60126 Ancona, Italy; Division of Special and Pediatric Radiology, Department of Radiology, University Hospital “Umberto I-Lancisi-Salesi”, 60126 Ancona, Italy
9
Kushwaha A, Mourad RF, Heist K, Tariq H, Chan HP, Ross BD, Chenevert TL, Malyarenko D, Hadjiiski LM. Improved Repeatability of Mouse Tibia Volume Segmentation in Murine Myelofibrosis Model Using Deep Learning. Tomography 2023; 9:589-602. [PMID: 36961007 PMCID: PMC10037585 DOI: 10.3390/tomography9020048]
Abstract
A murine model of myelofibrosis in tibia was used in a co-clinical trial to evaluate segmentation methods for application of image-based biomarkers to assess disease status. The dataset (32 mice with 157 3D MRI scans including 49 test-retest pairs scanned on consecutive days) was split into approximately 70% training, 10% validation, and 20% test subsets. Two expert annotators (EA1 and EA2) performed manual segmentations of the mouse tibia (EA1: all data; EA2: test and validation). Attention U-net (A-U-net) model performance was assessed for accuracy with respect to the EA1 reference using the average Jaccard index (AJI), volume intersection ratio (AVI), volume error (AVE), and Hausdorff distance (AHD) for four training scenarios: full training, two half-splits, and a single-mouse subset. The repeatability of computer versus expert segmentations for the tibia volume of test-retest pairs was assessed by the within-subject coefficient of variation (%wCV). A-U-net models trained on the full and half-split training sets achieved similar average accuracy (with respect to EA1 annotations) for the test set: AJI = 83-84%, AVI = 89-90%, AVE = 2-3%, and AHD = 0.5 mm-0.7 mm, exceeding EA2 accuracy: AJI = 81%, AVI = 83%, AVE = 14%, and AHD = 0.3 mm. The A-U-net model repeatability wCV [95% CI]: 3 [2, 5]% was notably better than that of expert annotators EA1: 5 [4, 9]% and EA2: 8 [6, 13]%. The developed deep learning model effectively automates murine bone marrow segmentation with accuracy comparable to human annotators and substantially improved repeatability.
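For readers unfamiliar with the repeatability metric quoted above, a common way to compute the within-subject coefficient of variation from test-retest pairs is sketched below; this is a standard formulation offered under assumption, and the example volumes are made up.

```python
# Within-subject coefficient of variation (%wCV) for test-retest pairs:
# the within-pair variance is (difference^2)/2, normalized by the pair mean.
import numpy as np

def percent_wcv(test: np.ndarray, retest: np.ndarray) -> float:
    pair_mean = (test + retest) / 2.0
    within_var = (test - retest) ** 2 / 2.0
    return 100.0 * float(np.sqrt(np.mean(within_var / pair_mean ** 2)))

test   = np.array([102.0, 98.5, 110.2, 95.0])   # volumes, scan day 1 (arbitrary units, made up)
retest = np.array([105.1, 97.0, 108.0, 99.3])   # same subjects, scan day 2
print(f"%wCV = {percent_wcv(test, retest):.1f}")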
10
Chen W, Gong M, Zhou D, Zhang L, Kong J, Jiang F, Feng S, Yuan R. CT-based deep learning radiomics signature for the preoperative prediction of the muscle-invasive status of bladder cancer. Front Oncol 2022; 12:1019749. [PMID: 36544709 PMCID: PMC9761839 DOI: 10.3389/fonc.2022.1019749]
Abstract
Objectives Although the preoperative assessment of whether a bladder cancer (BCa) shows muscular invasion is crucial for adequate treatment, there currently exist some challenges in the preoperative diagnosis of BCa with muscular invasion. The aim of this study was to construct a deep learning radiomics signature (DLRS) for preoperatively predicting the muscle invasion status of BCa. Methods A retrospective review covering 173 patients revealed 43 with pathologically proven muscle-invasive bladder cancer (MIBC) and 130 with non-muscle-invasive bladder cancer (non-MIBC). A total of 129 patients were randomly assigned to the training cohort and 44 to the test cohort. The Pearson correlation coefficient combined with the least absolute shrinkage and selection operator (LASSO) was utilized to reduce radiomic redundancy. To decrease the dimension of the deep learning features, Principal Component Analysis (PCA) was adopted. Six machine learning classifiers were finally constructed based on the deep learning radiomics features and adopted to predict the muscle invasion status of bladder cancer. The area under the curve (AUC), accuracy, sensitivity and specificity were used to evaluate the performance of the model. Results According to the comparison, DLRS-based models performed best in predicting muscle invasion status, with MLP (Train AUC: 0.973260 (95% CI 0.9488-0.9978) and Test AUC: 0.884298 (95% CI 0.7831-0.9855)) outperforming the other models. In the test cohort, the sensitivity, specificity and accuracy of the MLP model were 0.91 (95% CI 0.551-0.873), 0.78 (95% CI 0.594-0.863) and 0.58 (95% CI 0.729-0.827), respectively. Decision curve analysis (DCA) indicated that the MLP model showed better clinical utility than the radiomics-only model. Conclusions A deep radiomics model constructed with CT images can accurately predict the muscle invasion status of bladder cancer.
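The feature-reduction workflow described above (LASSO to prune redundant radiomic features, PCA to compress deep features, then a classifier such as an MLP) can be sketched generically with scikit-learn as below; the feature matrices, dimensions, and hyperparameters are synthetic placeholders, not the study's actual pipeline or settings.

```python
# Generic sketch of a LASSO + PCA feature-reduction pipeline feeding an MLP classifier.
# All data and dimensions below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
radiomics = rng.normal(size=(129, 200))    # handcrafted radiomic features (training cohort)
deep_feats = rng.normal(size=(129, 512))   # CNN-derived deep features
y = rng.integers(0, 2, size=129)           # 0 = non-MIBC, 1 = MIBC (synthetic labels)

# LASSO keeps radiomic features with non-zero coefficients.
lasso = Lasso(alpha=0.05).fit(StandardScaler().fit_transform(radiomics), y)
kept = radiomics[:, lasso.coef_ != 0]

# PCA compresses the deep features to a handful of components.
compressed = PCA(n_components=16, random_state=0).fit_transform(deep_feats)

X = np.hstack([kept, compressed])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```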
Affiliation(s)
- Weitian Chen: Department of Urology, Zhongshan People's Hospital, Zhongshan, China
- Mancheng Gong: Department of Urology, Zhongshan People's Hospital, Zhongshan, China
- Dongsheng Zhou: First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
- Lijie Zhang: First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
- Jie Kong: First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
- Feng Jiang: First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
- Shengxing Feng: First Clinical Medical College, Guangdong Medical University, Zhanjiang, China
- Runqiang Yuan (corresponding author): Department of Urology, Zhongshan People's Hospital, Zhongshan, China
11
Li M, Jiang Z, Shen W, Liu H. Deep learning in bladder cancer imaging: A review. Front Oncol 2022; 12:930917. [PMID: 36338676 PMCID: PMC9631317 DOI: 10.3389/fonc.2022.930917]
Abstract
Deep learning (DL) is a rapidly developing field in machine learning (ML). The concept of deep learning originates from research on artificial neural networks and is an upgrade of traditional neural networks. It has achieved great success in various domains and has shown potential in solving medical problems, particularly when using medical images. Bladder cancer (BCa) is the tenth most common cancer in the world. Imaging, as a safe, noninvasive, and relatively inexpensive technique, is a powerful tool to aid in the diagnosis and treatment of bladder cancer. In this review, we provide an overview of the latest progress in the application of deep learning to the imaging assessment of bladder cancer. First, we review the current deep learning approaches used for bladder segmentation. We then provide examples of how deep learning helps in the diagnosis, staging, and treatment management of bladder cancer using medical images. Finally, we summarize the current limitations of deep learning and provide suggestions for future improvements.
Affiliation(s)
- Mingyang Li: Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zekun Jiang: Ministry of Education (MoE) Key Lab of Artificial Intelligence, Artificial Intelligence (AI) Institute, Shanghai Jiao Tong University, Shanghai, China
- Wei Shen (corresponding author): Ministry of Education (MoE) Key Lab of Artificial Intelligence, Artificial Intelligence (AI) Institute, Shanghai Jiao Tong University, Shanghai, China
- Haitao Liu (corresponding author): Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
12
Lv L, Li H, Wu Z, Zeng W, Hua P, Yang S. An artificial intelligence-based platform for automatically estimating time-averaged wall shear stress in the ascending aorta. Eur Heart J Digit Health 2022; 3:525-534. [PMID: 36710907 PMCID: PMC9779925 DOI: 10.1093/ehjdh/ztac058]
Abstract
Aims Aortopathies are a series of disorders requiring multiple indicators to assess risk. Time-averaged wall shear stress (TAWSS) is currently considered the primary indicator of aortopathy progression, and it can only be calculated by Computational Fluid Dynamics (CFD). However, the complexity and high computational cost of CFD greatly limit its application. This study aimed to construct a deep learning platform that could accurately estimate TAWSS in the ascending aorta. Methods and results A total of 154 patients who had thoracic computed tomography angiography were included and randomly divided into two parts: a training set (90%, n = 139) and a testing set (10%, n = 15). TAWSS was calculated via CFD. The artificial intelligence (AI)-based model was trained and assessed using the Dice coefficient (DC), normalized mean absolute error (NMAE), and root mean square error (RMSE). Our AI platform agreed closely with the manual segmentation (DC = 0.86) and the CFD findings (NMAE, 7.8773% ± 4.7144%; RMSE, 0.0098 ± 0.0097), while reducing the computational cost roughly 12000-fold. Conclusion The efficient and robust AI platform can automatically estimate the value and distribution of TAWSS in the ascending aorta, which may be suitable for clinical applications and provide potential ideas for CFD-based problem solving.
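The error metrics quoted above can be sketched as follows; how NMAE is normalized varies between papers, so normalization by the mean reference value here is an assumption, and the TAWSS values are made up.

```python
# RMSE and a mean-normalized MAE for comparing AI estimates against CFD references.
# Normalization choice and example values are illustrative assumptions.
import numpy as np

def rmse(pred: np.ndarray, ref: np.ndarray) -> float:
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def nmae_percent(pred: np.ndarray, ref: np.ndarray) -> float:
    return float(np.mean(np.abs(pred - ref)) / np.mean(np.abs(ref)) * 100.0)

ref  = np.array([1.20, 0.85, 1.02, 0.96])   # CFD TAWSS values (Pa), made up
pred = np.array([1.15, 0.90, 1.00, 1.05])   # AI-estimated TAWSS (Pa), made up
print(f"RMSE = {rmse(pred, ref):.4f} Pa, NMAE = {nmae_percent(pred, ref):.2f}%")
```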
Affiliation(s)
- Zonglv Wu: Department of Cardio-Vascular Surgery, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, No. 107 Yan Jiang West Road, 510120 Guangzhou, China; Department of Cardiac Surgery, Guangzhou Women and Children's Medical Center, No. 9 Jinsui Road, 510623 Guangzhou, China
- Weike Zeng: Department of Radiology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, No. 107 Yanjiang West Road, 510120 Guangzhou, China
- Ping Hua (corresponding author): Tel: +86 13 609716875, Fax: +86 20 81332199
- Songran Yang (corresponding author): Tel: +86 13926168990, Fax: +86 20 81332199
13
Rani G, Thakkar P, Verma A, Mehta V, Chavan R, Dhaka VS, Sharma RK, Vocaturo E, Zumpano E. KUB-UNet: Segmentation of Organs of Urinary System from a KUB X-ray Image. Comput Methods Programs Biomed 2022; 224:107031. [PMID: 35878485 DOI: 10.1016/j.cmpb.2022.107031]
Abstract
PURPOSE The alarming increase in diseases of the urinary system is a cause of concern for the populace and health experts. The traditional techniques used for the diagnosis of these diseases are inconvenient for patients, costly, and involve additional waiting time for generating reports. The objective of this research is to utilize the proven potential of Artificial Intelligence for organ segmentation. Correct identification and segmentation of the region of interest in a medical image are important to enhance the accuracy of disease diagnosis. Also, it improves the reliability of the system by ensuring the extraction of features only from the region of interest. METHOD Many research works have been proposed in the literature for the segmentation of organs using MRI, CT scans, and ultrasound images. However, the segmentation of the kidneys, ureters, and bladder from KUB X-ray images remains underexplored. Also, there is a lack of validated datasets comprising KUB X-ray images. These challenges motivated the authors to collaborate with a team of radiologists and gather an anonymized and validated dataset that can be used to automate the diagnosis of diseases of the urinary system. Further, they proposed a KUB-UNet model for semantic segmentation of the urinary system. RESULTS The proposed KUB-UNet model reported the highest accuracy of 99.18% for segmentation of organs of the urinary system. CONCLUSION The comparative analysis of its performance with state-of-the-art models and the validation of results by radiology experts prove its reliability, robustness, and supremacy. This segmentation phase may prove useful in extracting features only from the region of interest and improving the accuracy of diagnosis.
Affiliation(s)
- Geeta Rani: Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007
- Priyam Thakkar: Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007
- Akshat Verma: Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007
- Vanshika Mehta: Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007
- Rugved Chavan: Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007
- Vijaypal Singh Dhaka: Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur, India, 303007
- Eugenio Vocaturo: Department of Computer Engineering, Modeling, Electronics and Systems (DIMES), University of Calabria, Italy; CNR NANOTEC, National Research Council, Rende, Italy
- Ester Zumpano: Department of Computer Engineering, Modeling, Electronics and Systems (DIMES), University of Calabria, Italy; CNR NANOTEC, National Research Council, Rende, Italy
14
Habibi K, Tirdad K, Dela Cruz A, Wenger K, Mari A, Basheer M, Kuk C, van Rhijn BW, Zlotta AR, van der Kwast TH, Sadeghian A. ABC: Artificial Intelligence for Bladder Cancer grading system. Mach Learn Appl 2022. [DOI: 10.1016/j.mlwa.2022.100387]
15
16
Zhou X, Yue X, Xu Z, Denoeux T, Chen Y. PENet: Prior Evidence Deep Neural Network for Bladder Cancer Staging. Methods 2022; 207:20-28. [PMID: 36031139 DOI: 10.1016/j.ymeth.2022.08.010]
Abstract
Bladder cancer is a heterogeneous, complicated, and widespread illness with high rates of morbidity, death, and expense if not treated adequately. Accurate and exact staging of bladder cancer is fundamental for treatment choices and prognostic forecasts, as indicated by convincing evidence from randomized trials. The extraordinary capability of Deep Convolutional Neural Networks (DCNNs) to extract features is one of the primary advantages offered by these types of networks. DCNNs can work well in numerous real clinical medical applications, but they demand costly large-scale data annotation. Moreover, a lack of background knowledge hinders their effectiveness and interpretability. Clinicians identify the stage of a tumor by evaluating whether the tumor is muscle-invasive, as shown in images by the tumor's infiltration of the bladder wall. Incorporating this clinical knowledge into DCNNs can enhance the performance of bladder cancer staging and bring the predictions into accordance with medical principles. Therefore, we introduce PENet, an innovative prior evidence deep neural network, for classifying MR images for bladder cancer staging in line with clinical knowledge. To do this, first, the degree to which the tumor has penetrated the bladder wall is measured to obtain prior distribution parameters of the class probability, called prior evidence. Second, we formulate the posterior distribution of the class probability according to Bayes' theorem. Last, we modify the loss function based on the posterior distribution of the class probability, whose parameters include both prior evidence and prediction evidence, in the learning procedure. Our investigation reveals that the prediction error and the variance of PENet may be reduced by giving the network prior evidence that is consistent with the ground truth. Using MR image datasets, experiments show that PENet performs better than image-based DCNN algorithms for bladder cancer staging.
Affiliation(s)
- Xiaoqian Zhou: School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Xiaodong Yue: School of Computer Engineering and Science, Shanghai University, Shanghai, China; Artificial Intelligence Institute of Shanghai University, Shanghai, China
- Zhikang Xu: School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Thierry Denoeux: Sino-European School of Technology, Shanghai University, Shanghai, China; Université de technologie de Compiègne, Compiègne, France
- Yufei Chen: College of Electronics and Information Engineering, Tongji University, Shanghai, China
17
Dai W, Woo B, Liu S, Marques M, Engstrom C, Greer PB, Crozier S, Dowling JA, Chandra SS. CAN3D: Fast 3D medical image segmentation via compact context aggregation. Med Image Anal 2022; 82:102562. [PMID: 36049450 DOI: 10.1016/j.media.2022.102562]
Abstract
Direct automatic segmentation of objects in 3D medical imaging, such as magnetic resonance (MR) imaging, is challenging as it often involves accurately identifying multiple individual structures with complex geometries within a large volume under investigation. Most deep learning approaches address these challenges by enhancing their learning capability through a substantial increase in trainable parameters within their models. An increased model complexity will incur high computational costs and large memory requirements unsuitable for real-time implementation on standard clinical workstations, as clinical imaging systems typically have low-end computer hardware with limited memory and CPU resources only. This paper presents a compact convolutional neural network (CAN3D) designed specifically for clinical workstations and allows the segmentation of large 3D Magnetic Resonance (MR) images in real-time. The proposed CAN3D has a shallow memory footprint to reduce the number of model parameters and computer memory required for state-of-the-art performance and maintain data integrity by directly processing large full-size 3D image input volumes with no patches required. The proposed architecture significantly reduces computational costs, especially for inference using the CPU. We also develop a novel loss function with extra shape constraints to improve segmentation accuracy for imbalanced classes in 3D MR images. Compared to state-of-the-art approaches (U-Net3D, improved U-Net3D and V-Net), CAN3D reduced the number of parameters up to two orders of magnitude and achieved much faster inference, up to 5 times when predicting with a standard commercial CPU (instead of GPU). For the open-access OAI-ZIB knee MR dataset, in comparison with manual segmentation, CAN3D achieved Dice coefficient values of (mean = 0.87 ± 0.02 and 0.85 ± 0.04) with mean surface distance errors (mean = 0.36 ± 0.32 mm and 0.29 ± 0.10 mm) for imbalanced classes such as (femoral and tibial) cartilage volumes respectively when training volume-wise under only 12G video memory. Similarly, CAN3D demonstrated high accuracy and efficiency on a pelvis 3D MR imaging dataset for prostate cancer consisting of 211 examinations with expert manual semantic labels (bladder, body, bone, rectum, prostate) now released publicly for scientific use as part of this work.
Affiliation(s)
- Wei Dai: School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Boyeong Woo: School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Siyu Liu: School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Matthew Marques: School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Craig Engstrom: School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Stuart Crozier: School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Shekhar S Chandra: School of Information Technology and Electrical Engineering, The University of Queensland, Australia
18
Dong Q, Huang D, Xu X, Li Z, Liu Y, Lu H, Liu Y. Content and shape attention network for bladder wall and cancer segmentation in MRIs. Comput Biol Med 2022; 148:105809. [DOI: 10.1016/j.compbiomed.2022.105809]
19
Pettit RW, Marlatt BB, Corr SJ, Havelka J, Rana A. nnU-Net Deep Learning Method for Segmenting Parenchyma and Determining Liver Volume From Computed Tomography Images. Ann Surg Open 2022; 3:e155. [PMID: 36275876 PMCID: PMC9585534 DOI: 10.1097/as9.0000000000000155]
Abstract
Background Recipient-donor matching in liver transplantation can require precise estimations of liver volume. Currently utilized demographic-based organ volume estimates are imprecise and nonspecific. Manual organ annotation from medical imaging is effective; however, this process is cumbersome, often taking an undesirable length of time to complete. Additionally, manual organ segmentation and volume measurement incur additional direct costs to payers for either a clinician or a trained technician to complete. Deep learning-based automatic image segmentation tools are well positioned to address this clinical need. Objectives To build a deep learning model that could accurately estimate liver volumes and create 3D organ renderings from computed tomography (CT) medical images. Methods We trained an nnU-Net deep learning model to identify liver borders in images of the abdominal cavity. We used 151 publicly available CT scans. For each CT scan, a board-certified radiologist annotated the liver margins (ground truth annotations). We split our image dataset into training, validation, and test sets. We trained our nnU-Net model on these data to identify liver borders in 3D voxels and integrated these to reconstruct a total organ volume estimate. Results The nnU-Net model accurately identified the border of the liver with a mean overlap accuracy of 97.5% compared with ground truth annotations. Our calculated volume estimates achieved a mean percent error of 1.92% ± 1.54% on the test set. Conclusions Precise volume estimation of livers from CT scans is accurate using an nnU-Net deep learning architecture. Appropriately deployed, an nnU-Net algorithm is accurate and quick, making it suitable for incorporation into the pretransplant clinical decision-making workflow.
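The step of integrating voxel-wise predictions into an organ volume amounts to counting foreground voxels and multiplying by the physical voxel size; the sketch below shows this under an assumed CT spacing, with a toy mask in place of a real segmentation.

```python
# Convert a binary segmentation mask to a volume estimate (voxel count x voxel size).
# Spacing values and the toy mask are placeholders for a typical CT geometry.
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm=(0.78, 0.78, 2.5)) -> float:
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0   # mm^3 -> mL

mask = np.zeros((512, 512, 200), dtype=np.uint8)
mask[150:350, 100:300, 40:120] = 1                          # toy "liver" block
print(f"estimated volume: {mask_volume_ml(mask):.0f} mL")
```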
Affiliation(s)
- Rowland W. Pettit: Department of Medicine, Baylor College of Medicine, Houston, TX
- Stuart J. Corr: Department of Innovation Systems Engineering, Houston Methodist, Houston, TX; Department of Cardiovascular Surgery, Houston Methodist Hospital, Houston, TX; Department of Bioengineering, Rice University, Houston, TX; Department of Biomedical Engineering, University of Houston, Houston, TX; Swansea University Medical School, Wales, United Kingdom
- Abbas Rana: Department of Surgery, Division of Abdominal Transplantation, Baylor College of Medicine, Houston, TX
20
Yu J, Cai L, Chen C, Fu X, Wang L, Yuan B, Yang X, Lu Q. Cascade Path Augmentation Unet for Bladder Cancer Segmentation in MRI. Med Phys 2022; 49:4622-4631. [PMID: 35389528 DOI: 10.1002/mp.15646]
Abstract
BACKGROUND Treatment choices for patients with bladder cancer are determined by the presence of muscular invasion. The precise segmentation of the inner and outer walls (IW and OW), as well as the bladder tumor (BT), is crucial for improving computer-aided diagnosis of muscle-invasive bladder cancer. PURPOSE To propose a novel deep learning-based model to improve the segmentation accuracy of the IW, OW, and BT, which can be useful in clinical practice. METHODS We proposed a Cascade Path Augmentation Unet (CPA-Unet) network to conduct multi-regional segmentation of the bladder using 1545 T2-weighted MRI (T2WI) slices. The model employs a cascade strategy to eliminate redundant information in the background. A Unet is used to segment the bladder from the background in the rough segmentation. The path augmentation structure is used in the fine segmentation to mine multi-scale features. Additionally, a partial dense connection is adopted as the skip connection module to concatenate the low- and high-level semantic features. RESULTS The CPA-Unet is trained using 1391 T2WI slices and tested using 154 T2WI slices. In comparison to previous deep learning-based methods, the CPA-Unet achieves superior segmentation results in terms of Dice similarity coefficient (DSC) and Hausdorff distance (HD) (IW: DSC = 98.19%, HD = 2.07 mm; OW: DSC = 82.24%, HD = 2.62 mm; BT: DSC = 87.40%, HD = 0.76 mm). CONCLUSIONS Our proposed CPA-Unet network is capable of segmenting the bladder into its IW and OW, as well as tumors. The segmentation results provide a reliable and effective foundation for computer-assisted clinical diagnosis of muscle-invasive bladder cancer.
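The Hausdorff distance quoted alongside DSC above can be computed as the maximum of the two directed distances between boundary point sets; the sketch below uses SciPy's directed_hausdorff, and the conversion from voxel indices to millimetres via a placeholder spacing is an assumption.

```python
# Symmetric Hausdorff distance between two boundary point sets, scaled to millimetres.
# The spacing and the toy contours are illustrative placeholders.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(points_a: np.ndarray, points_b: np.ndarray, spacing_mm: float = 0.6) -> float:
    a = points_a * spacing_mm
    b = points_b * spacing_mm
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Toy (row, col) voxel coordinates of a predicted and a reference bladder-wall contour.
pred_boundary = np.array([[10, 10], [10, 20], [20, 20], [20, 10]])
ref_boundary  = np.array([[11, 10], [11, 21], [21, 21], [21, 10]])
print(f"HD = {hausdorff_mm(pred_boundary, ref_boundary):.2f} mm")
```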
Affiliation(s)
- Jie Yu: Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Lingkai Cai: Department of Urology, the First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Chunxiao Chen: Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Xue Fu: Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Liang Wang: Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Baorui Yuan: Department of Urology, the First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Xiao Yang: Department of Urology, the First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Qiang Lu: Department of Urology, the First Affiliated Hospital of Nanjing Medical University, Nanjing, China
21
Xu Y, Lou J, Gao Z, Zhan M. Computed Tomography Image Features under Deep Learning Algorithm Applied in Staging Diagnosis of Bladder Cancer and Detection on Ceramide Glycosylation. Comput Math Methods Med 2022; 2022:7979523. [PMID: 35035524 PMCID: PMC8759889 DOI: 10.1155/2022/7979523]
Abstract
This research aimed to investigate computed tomography (CT) imaging based on a deep learning algorithm and the application value of ceramide glycosylation in diagnosing bladder cancer. The images from ordinary CT detection were improved. In this study, 60 bladder cancer patients were selected and underwent ordinary CT detection, and the detection results were processed by the deep learning algorithm-based CT method and compared with the pathological diagnosis. In addition, Western blot technology was used to detect the expression of glucosylceramide synthase (GCS) in the cell membranes of bladder tumor tissues and normal bladder tissues. The comparison found that, in simple CT clinical staging, the coincidence rates of T1 stage, T2a stage, T2b stage, T3 stage, and T4 stage were 28.56%, 62.51%, 78.94%, 84.61%, and 74.99%, respectively, and the total coincidence rate of CT clinical staging was 63.32%, which differed greatly from the clinical staging of the pathological diagnosis (P < 0.05). In the clinical staging of the algorithm-based CT results, the coincidence rates of T1 stage and T2a stage were 50.01% and 91.65%, respectively; those of T2b stage, T3 stage, and T4 stage were 100.00%; and the total coincidence rate was 96.69%, which was not obviously different from the clinical staging of the pathological diagnosis (P > 0.05). Therefore, it could be concluded that the algorithm-based CT detection results were more accurate, and the use of CT scans based on deep learning algorithms in the preoperative staging and clinical treatment of bladder cancer showed reliable guiding significance and clinical value. In addition, it was found that the expression level of GCS in normal bladder tissues was much lower than that in bladder cancer tissues, indicating that changes in GCS were closely related to the development and prognosis of bladder cancer. Therefore, GCS may be an effective target for the treatment of bladder cancer in the future, and further research is needed.
Affiliation(s)
- Yisheng Xu
- Department of Radiology, Hangzhou Xiaoshan Hospital of Traditional Chinese Medicine, Hangzhou 311201, China
| | - Jianghua Lou
- Department of Radiology, Hangzhou Xiaoshan Hospital of Traditional Chinese Medicine, Hangzhou 311201, China
| | - Zhiqin Gao
- Department of Radiology, Hangzhou Xiaoshan Hospital of Traditional Chinese Medicine, Hangzhou 311201, China
| | - Ming Zhan
- Department of Radiology, Affiliated Xiaoshan Hospital, Hangzhou Normal University, Hangzhou 311201, China
| |
|
22
|
Ko H, Huh J, Kim KW, Chung H, Ko Y, Kim JK, Lee JH, Lee J. A Deep Residual U-Net Algorithm for Automatic Detection and Quantification of Ascites on Abdominopelvic Computed Tomography Images Acquired in the Emergency Department: Model Development and Validation. J Med Internet Res 2022; 24:e34415. [PMID: 34982041 PMCID: PMC8764611 DOI: 10.2196/34415] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 11/30/2021] [Accepted: 11/30/2021] [Indexed: 12/18/2022] Open
Abstract
Background Detection and quantification of intra-abdominal free fluid (ie, ascites) on computed tomography (CT) images are essential processes for finding emergent or urgent conditions in patients. In an emergency department, automatic detection and quantification of ascites will be beneficial. Objective We aimed to develop an artificial intelligence (AI) algorithm for the automatic detection and quantification of ascites simultaneously using a single deep learning model (DLM). Methods We developed 2D DLMs based on deep residual U-Net, U-Net, bidirectional U-Net, and recurrent residual U-Net (R2U-Net) algorithms to segment areas of ascites on abdominopelvic CT images. Based on segmentation results, the DLMs detected ascites by classifying CT images into ascites images and nonascites images. The AI algorithms were trained using 6337 CT images from 160 subjects (80 with ascites and 80 without ascites) and tested using 1635 CT images from 40 subjects (20 with ascites and 20 without ascites). The performance of the AI algorithms was evaluated for diagnostic accuracy of ascites detection and for segmentation accuracy of ascites areas. Of these DLMs, we proposed an AI algorithm with the best performance. Results The segmentation accuracy was the highest for the deep residual U-Net model with a mean intersection over union (mIoU) value of 0.87, followed by U-Net, bidirectional U-Net, and R2U-Net models (mIoU values of 0.80, 0.77, and 0.67, respectively). The detection accuracy was the highest for the deep residual U-Net model (0.96), followed by U-Net, bidirectional U-Net, and R2U-Net models (0.90, 0.88, and 0.82, respectively). The deep residual U-Net model also achieved high sensitivity (0.96) and high specificity (0.96). Conclusions We propose a deep residual U-Net–based AI algorithm for automatic detection and quantification of ascites on abdominopelvic CT scans, which provides excellent performance.
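As a rough illustration of how a segmentation output can drive both the mIoU evaluation and the slice-level ascites detection described above, the masks can be scored and thresholded as follows; the area threshold is an assumption for illustration, not the rule used by the authors:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two boolean segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def mean_iou(pred_masks, gt_masks):
    return float(np.mean([iou(p, g) for p, g in zip(pred_masks, gt_masks)]))

def detect_ascites(pred_mask, min_pixels=50):
    """Call a CT slice 'ascites' when the segmented area exceeds a threshold (illustrative value)."""
    return pred_mask.astype(bool).sum() >= min_pixels
```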
Affiliation(s)
- Hoon Ko
- Department of Biomedical Engineering, Kyung Hee University, Yongin-si, Republic of Korea
| | - Jimi Huh
- The Department of Radiology, Ajou University School of Medicine, Suwon, Republic of Korea
| | - Kyung Won Kim
- Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Heewon Chung
- Department of Biomedical Engineering, Kyung Hee University, Yongin-si, Republic of Korea
| | - Yousun Ko
- Biomedical Research Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul, Republic of Korea
| | - Jai Keun Kim
- The Department of Radiology, Ajou University School of Medicine, Suwon, Republic of Korea
| | - Jei Hee Lee
- The Department of Radiology, Ajou University School of Medicine, Suwon, Republic of Korea
| | - Jinseok Lee
- Department of Biomedical Engineering, Kyung Hee University, Yongin-si, Republic of Korea
| |
|
23
|
Artificial intelligence: A promising frontier in bladder cancer diagnosis and outcome prediction. Crit Rev Oncol Hematol 2022; 171:103601. [DOI: 10.1016/j.critrevonc.2022.103601] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 01/12/2022] [Accepted: 01/17/2022] [Indexed: 02/07/2023] Open
|
24
|
Two-Stage Segmentation Framework Based on Distance Transformation. Sensors (Basel) 2021; 22:250. [PMID: 35009793 PMCID: PMC8749866 DOI: 10.3390/s22010250] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 12/25/2021] [Accepted: 12/26/2021] [Indexed: 11/16/2022]
Abstract
With the rise of deep learning, using deep learning to segment lesions and assist in diagnosis has become an effective means to promote clinical medical analysis. However, the partial volume effect of organ tissues leads to unclear and blurred edges of ROI in medical images, making it challenging to achieve high-accuracy segmentation of lesions or organs. In this paper, we assume that the distance map obtained by performing distance transformation on the ROI edge can be used as a weight map to make the network pay more attention to the learning of the ROI edge region. To this end, we design a novel framework to flexibly embed the distance map into the two-stage network to improve left atrium MRI segmentation performance. Furthermore, a series of distance map generation methods are proposed and studied to reasonably explore how to express the weight of assisting network learning. We conduct thorough experiments to verify the effectiveness of the proposed segmentation framework, and experimental results demonstrate that our hypothesis is feasible.
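The central idea above, a distance map computed from the ROI edge and reused as a per-pixel loss weight, can be sketched as follows. The exponential decay and the way the map scales the loss are assumptions for illustration; the paper studies several map formulations and embeds them in a two-stage network:

```python
import numpy as np
from scipy import ndimage

def edge_weight_map(roi_mask, sigma=5.0):
    """Weight map that emphasizes pixels near the ROI boundary.

    roi_mask: boolean array, True inside the ROI.  Returns weights in (0, 1],
    largest on the boundary and decaying with distance from it.
    """
    roi = roi_mask.astype(bool)
    edge = roi ^ ndimage.binary_erosion(roi)        # one-pixel-wide boundary
    dist = ndimage.distance_transform_edt(~edge)    # distance of every pixel to that boundary
    return np.exp(-dist / sigma)                    # assumed decay; other formulations are possible

# a weighted per-pixel loss could then be, e.g.:  (edge_weight_map(gt) * per_pixel_ce).mean()
```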
|
25
|
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310 PMCID: PMC8625809 DOI: 10.3390/diagnostics11111964] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 10/14/2021] [Accepted: 10/19/2021] [Indexed: 12/18/2022] Open
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
| | - Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan;
| | - Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
| | - Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
| | - Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
| | - Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
| | - Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; (R.K.); (J.M.W.); (C.M.); (S.L.); (D.-M.K.)
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
| |
|
26
|
Analytical performance of aPROMISE: automated anatomic contextualization, detection, and quantification of [ 18F]DCFPyL (PSMA) imaging for standardized reporting. Eur J Nucl Med Mol Imaging 2021; 49:1041-1051. [PMID: 34463809 PMCID: PMC8803714 DOI: 10.1007/s00259-021-05497-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 07/09/2021] [Indexed: 11/21/2022]
Abstract
Purpose The application of automated image analyses could improve and facilitate standardization and consistency of quantification in [18F]DCFPyL (PSMA) PET/CT scans. In the current study, we analytically validated aPROMISE, a software as a medical device that segments organs in low-dose CT images with deep learning, and subsequently detects and quantifies potential pathological lesions in PSMA PET/CT. Methods To evaluate the deep learning algorithm, the automated segmentations of the low-dose CT component of PSMA PET/CT scans from 20 patients were compared to manual segmentations. Dice scores were used to quantify the similarities between the automated and manual segmentations. Next, the automated quantification of tracer uptake in the reference organs and detection and pre-segmentation of potential lesions were evaluated in 339 patients with prostate cancer, who were all enrolled in the phase II/III OSPREY study. Three nuclear medicine physicians performed the retrospective independent reads of OSPREY images with aPROMISE. Quantitative consistency was assessed by the pairwise Pearson correlations and standard deviation between the readers and aPROMISE. The sensitivity of detection and pre-segmentation of potential lesions was evaluated by determining the percent of manually selected abnormal lesions that were automatically detected by aPROMISE. Results The Dice scores for bone segmentations ranged from 0.88 to 0.95. The Dice scores of the PSMA PET/CT reference organs, thoracic aorta and liver, were 0.89 and 0.97, respectively. Dice scores of other visceral organs, including prostate, were observed to be above 0.79. The Pearson correlation for blood pool reference was higher between any manual reader and aPROMISE, than between any pair of manual readers. The standard deviations of reference organ uptake across all patients as determined by aPROMISE (SD = 0.21 blood pool and SD = 1.16 liver) were lower compared to those of the manual readers. Finally, the sensitivity of aPROMISE detection and pre-segmentation was 91.5% for regional lymph nodes, 90.6% for all lymph nodes, and 86.7% for bone in metastatic patients. Conclusion In this analytical study, we demonstrated the segmentation accuracy of the deep learning algorithm, the consistency in quantitative assessment across multiple readers, and the high sensitivity in detecting potential lesions. The study provides a foundational framework for clinical evaluation of aPROMISE in standardized reporting of PSMA PET/CT. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05497-8.
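The Dice scores quoted above compare an automated organ mask with a manual one using the standard definition; a minimal version for reference:

```python
import numpy as np

def dice(auto_mask, manual_mask):
    """Dice similarity coefficient between two boolean segmentation masks."""
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + m.sum()
    return 2.0 * np.logical_and(a, m).sum() / denom if denom else 1.0
```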
|
27
|
Hu Y, Zhao H, Li W, Li J. Semantic image segmentation of brain MRI with deep learning. Zhong Nan Da Xue Xue Bao Yi Xue Ban 2021; 46:858-864. [PMID: 34565730 PMCID: PMC10929963 DOI: 10.11817/j.issn.1672-7347.2021.200744] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 09/06/2020] [Indexed: 11/03/2022]
Abstract
OBJECTIVES Previous approaches to brain MRI segmentation, such as thresholding, boundary detection, and region-based methods, have not performed well in complex scenes. Building on deep learning segmentation techniques, this study constructed a neural network model using atrous convolution combined with a conditional random field (CRF) to segment the thalamus, caudate nucleus, and lenticular nucleus in brain MRI, laying a foundation for MRI-based diagnosis of brain diseases. METHODS A total of 1200 brain MRI-FLAIR images were randomly selected, and the thalamus, caudate nucleus, and lenticular nucleus were manually labeled in each; 1000 images were used as the training set and 200 as the test set. The neural network model was built from a deep convolutional neural network (DCNN) combined with the CRF algorithm. The training set was fed into the model, and the parameterized network was obtained after 30,000 iterations. The test set was used to evaluate the model and output the predicted segmentations. RESULTS The new brain MRI segmentation model, DeepXAG, achieved the highest accuracy and was therefore selected as the segmentation algorithm. Its mean intersection over union (mIoU) was 72.3%, significantly higher than that of classical segmentation algorithms (CRF-RNN, FCN-8s, DPN, RefineNet, and PSPNet). CONCLUSIONS The DeepXAG algorithm segments the anatomical structures in brain MRI images with good accuracy and robustness.
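Atrous (dilated) convolution, the building block named above, enlarges the receptive field without adding parameters by spacing out the kernel taps. A PyTorch-style sketch of such a block (the DeepXAG architecture and its CRF post-processing are not reproduced here):

```python
import torch.nn as nn

class AtrousBlock(nn.Module):
    """3x3 convolution with a configurable dilation rate, as used in atrous/dilated CNNs."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.block = nn.Sequential(
            # padding = dilation keeps the spatial size unchanged for a 3x3 kernel
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```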
Affiliation(s)
- Yimin Hu
- Department of Neurology, Beijing Chuiyangliu Hospital, Beijing 100022.
| | - Huiping Zhao
- Department of Oncology, Brain Hospital of Hunan Province, Changsha 410007, China.
| | - Wei Li
- Department of Neurology, Beijing Chuiyangliu Hospital, Beijing 100022
| | - Jun Li
- Department of Neurology, Beijing Chuiyangliu Hospital, Beijing 100022
| |
|
28
|
Xu X, Wang H, Guo Y, Zhang X, Li B, Du P, Liu Y, Lu H. Study Progress of Noninvasive Imaging and Radiomics for Decoding the Phenotypes and Recurrence Risk of Bladder Cancer. Front Oncol 2021; 11:704039. [PMID: 34336691 PMCID: PMC8321511 DOI: 10.3389/fonc.2021.704039] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Accepted: 06/30/2021] [Indexed: 12/24/2022] Open
Abstract
Urinary bladder cancer (BCa) is a highly prevalent disease among older men. Precise diagnosis of tumor phenotype and recurrence risk is of vital importance in the clinical management of BCa. Although imaging modalities such as CT and multiparametric MRI have played an essential role in the noninvasive diagnosis and prognosis of BCa, radiomics has also shown great potential in the precise diagnosis of BCa and the preoperative prediction of recurrence risk. Radiomics-empowered image interpretation can amplify the differences in tumor heterogeneity between phenotypes, i.e., high-grade vs. low-grade, early-stage vs. advanced-stage, and non-muscle-invasive vs. muscle-invasive. With a multimodal radiomics strategy, the recurrence risk of BCa can be predicted preoperatively, providing critical information for clinical decision making. We therefore reviewed the progress of radiomics-empowered medical imaging for decoding the phenotype and recurrence risk of BCa over the past 20 years, summarizing the entire radiomics pipeline, including region-of-interest definition, radiomics feature extraction, tumor phenotype prediction, and recurrence risk stratification. We particularly focus on current pitfalls, challenges, and opportunities for promoting large-scale clinical application of the radiomics pipeline in the near future.
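The pipeline stages listed above (region-of-interest definition, feature extraction, phenotype prediction, risk stratification) map onto a feature-table-plus-classifier workflow. A heavily simplified sketch, in which the feature extractor is a placeholder computing first-order statistics rather than any specific radiomics package:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(image, roi_mask):
    """Placeholder radiomics features: first-order statistics of voxels inside the ROI."""
    voxels = image[roi_mask.astype(bool)]
    return [voxels.mean(), voxels.std(), voxels.min(), voxels.max()]

def phenotype_classifier(images, roi_masks, labels):
    """Fit a phenotype (or recurrence-risk) classifier on per-lesion feature vectors."""
    X = np.array([extract_features(img, roi) for img, roi in zip(images, roi_masks)])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, labels, cv=5)   # rough performance estimate
    return clf.fit(X, labels), scores
```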
Affiliation(s)
- Xiaopan Xu
- School of Biomedical Engineering, Air Force Medical University, Xi’an, China
| | - Huanjun Wang
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
| | - Yan Guo
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
| | - Xi Zhang
- School of Biomedical Engineering, Air Force Medical University, Xi’an, China
| | - Baojuan Li
- School of Biomedical Engineering, Air Force Medical University, Xi’an, China
| | - Peng Du
- School of Biomedical Engineering, Air Force Medical University, Xi’an, China
| | - Yang Liu
- School of Biomedical Engineering, Air Force Medical University, Xi’an, China
| | - Hongbing Lu
- School of Biomedical Engineering, Air Force Medical University, Xi’an, China
| |
|
29
|
Mohammadi R, Shokatian I, Salehi M, Arabi H, Shiri I, Zaidi H. Deep learning-based auto-segmentation of organs at risk in high-dose rate brachytherapy of cervical cancer. Radiother Oncol 2021; 159:231-240. [DOI: 10.1016/j.radonc.2021.03.030] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 03/20/2021] [Accepted: 03/24/2021] [Indexed: 12/11/2022]
|
30
|
Ge R, Cai H, Yuan X, Qin F, Huang Y, Wang P, Lyu L. MD-UNET: Multi-input dilated U-shape neural network for segmentation of bladder cancer. Comput Biol Chem 2021; 93:107510. [PMID: 34044203 DOI: 10.1016/j.compbiolchem.2021.107510] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Accepted: 05/12/2021] [Indexed: 10/21/2022]
Abstract
Accurate segmentation of the tumour area is crucial for the treatment and prognosis of patients with bladder cancer. However, the complexity of MRI images makes accurate lesion segmentation challenging, owing, for example, to large inter-patient variability, variation in bladder size, and noise interference. To address these issues, we propose the MD-Unet network structure, which takes multi-scale images as input and combines max-pooling with dilated convolution to enlarge the receptive field of the convolutional network. The results show that the proposed network achieves higher precision than existing models on the bladder cancer dataset, and MD-Unet reaches state-of-the-art performance compared with other methods.
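The multi-scale input mentioned above can be produced by feeding progressively downsampled copies of the image to successive encoder stages; how exactly MD-Unet fuses them is not specified here, so the sketch only builds the pyramid:

```python
import torch.nn.functional as F

def multi_scale_inputs(image, levels=3):
    """Image pyramid for multi-scale network inputs.

    image: (B, C, H, W) tensor; returns [full resolution, 1/2, 1/4, ...] copies,
    which can be concatenated with encoder feature maps of matching size.
    """
    return [image] + [
        F.interpolate(image, scale_factor=0.5 ** i, mode="bilinear", align_corners=False)
        for i in range(1, levels)
    ]
```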
Affiliation(s)
- Ruiquan Ge
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
| | - Huihuang Cai
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
| | - Xin Yuan
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
| | - Feiwei Qin
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
| | - Yan Huang
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
| | - Pu Wang
- Computer School, Hubei University of Arts and Science, Xiangyang, 441053, China.
| | - Lei Lyu
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250014, China.
| |
|
31
|
Bandyk MG, Gopireddy DR, Lall C, Balaji KC, Dolz J. MRI and CT bladder segmentation from classical to deep learning based approaches: Current limitations and lessons. Comput Biol Med 2021; 134:104472. [PMID: 34023696 DOI: 10.1016/j.compbiomed.2021.104472] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Revised: 04/29/2021] [Accepted: 05/02/2021] [Indexed: 10/21/2022]
Abstract
Precise determination and assessment of bladder cancer (BC) extent of muscle invasion involvement guides proper risk stratification and personalized therapy selection. In this context, segmentation of both bladder walls and cancer are of pivotal importance, as it provides invaluable information to stage the primary tumor. Hence, multiregion segmentation on patients presenting with symptoms of bladder tumors using deep learning heralds a new level of staging accuracy and prediction of the biologic behavior of the tumor. Nevertheless, despite the success of these models in other medical problems, progress in multiregion bladder segmentation, particularly in MRI and CT modalities, is still at a nascent stage, with just a handful of works tackling a multiregion scenario. Furthermore, most existing approaches systematically follow prior literature in other clinical problems, without casting a doubt on the validity of these methods on bladder segmentation, which may present different challenges. Inspired by this, we provide an in-depth look at bladder cancer segmentation using deep learning models. The critical determinants for accurate differentiation of muscle invasive disease, current status of deep learning based bladder segmentation, lessons and limitations of prior work are highlighted.
Affiliation(s)
- Mark G Bandyk
- Department of Urology, University of Florida, Jacksonville, FL, USA.
| | | | - Chandana Lall
- Department of Radiology, University of Florida, Jacksonville, FL, USA
| | - K C Balaji
- Department of Urology, University of Florida, Jacksonville, FL, USA
| | | |
|
32
|
|
33
|
Hadjiiski L, Samala R, Chan HP. Image Processing Analytics: Enhancements and Segmentation. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00057-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022] Open
|
34
|
Sanders JW, Lewis GD, Thames HD, Kudchadker RJ, Venkatesan AM, Bruno TL, Ma J, Pagel MD, Frank SJ. Machine Segmentation of Pelvic Anatomy in MRI-Assisted Radiosurgery (MARS) for Prostate Cancer Brachytherapy. Int J Radiat Oncol Biol Phys 2020; 108:1292-1303. [DOI: 10.1016/j.ijrobp.2020.06.076] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 04/28/2020] [Accepted: 06/28/2020] [Indexed: 10/23/2022]
|
35
|
Esaki T, Furukawa R. [Volume Measurements of Post-transplanted Liver of Pediatric Recipients Using Workstations and Deep Learning]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:1133-1142. [PMID: 33229843 DOI: 10.6009/jjrt.2020_jsrt_76.11.1133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
PURPOSE The purpose of this study was to propose a method for segmentation and volume measurement of the graft liver and spleen of pediatric transplant recipients on digital imaging and communications in medicine (DICOM)-format images using U-Net and three-dimensional (3-D) workstations (3DWS). METHODS For segmentation accuracy assessment, Dice coefficients were calculated for the graft liver and spleen. After verifying that the created DICOM-format images could be imported using the existing 3DWS, accuracy rates between the ground truth and segmentation images were calculated via mask processing. RESULTS Dice coefficients for the test data were as follows: graft liver, 0.758, and spleen, 0.577. All created DICOM-format images were importable using the 3DWS, with accuracy rates of 87.10±4.70% and 80.27±11.29% for the graft liver and spleen, respectively. CONCLUSION The U-Net could be used for graft liver and spleen segmentation, and volume measurement using the 3DWS was simplified by this method.
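The volume-measurement half of the workflow above reduces to counting segmented voxels and scaling by the voxel size; a minimal sketch (the spacing values are illustrative):

```python
import numpy as np

def organ_volume_ml(mask, spacing_mm=(1.0, 0.7, 0.7)):
    """Volume of a binary segmentation in millilitres.

    mask: 3-D boolean array; spacing_mm: voxel size (z, y, x) in millimetres.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0   # 1 mL = 1000 mm^3
```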
Affiliation(s)
- Toru Esaki
- Department of Radiologic Technology, Jichi Medical University Hospital
| | - Rieko Furukawa
- Department of Pediatric Medical Imaging, Jichi Children's Medical Center Tochigi
| |
|
36
|
Zhang Z, Zhao T, Gay H, Zhang W, Sun B. ARPM-net: A novel CNN-based adversarial method with Markov random field enhancement for prostate and organs at risk segmentation in pelvic CT images. Med Phys 2020; 48:227-237. [PMID: 33151620 DOI: 10.1002/mp.14580] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 09/21/2020] [Accepted: 10/21/2020] [Indexed: 01/30/2023] Open
Abstract
PURPOSE The aim of this research was to develop a novel CNN-based adversarial deep learning method to improve and expedite the multi-organ semantic segmentation of CT images and to generate accurate contours on pelvic CT images. METHODS Planning CT and structure datasets for 120 patients with intact prostate cancer were retrospectively selected and divided for tenfold cross-validation. The proposed adversarial multi-residual multi-scale pooling Markov random field (MRF) enhanced network (ARPM-net) implements an adversarial training scheme. A segmentation network and a discriminator network were trained jointly, and only the segmentation network was used for prediction. The segmentation network integrates a newly designed MRF block into a variation of multi-residual U-net. The discriminator takes the product of the original CT and the prediction/ground-truth as input and classifies the input into fake/real. The segmentation network and discriminator network can be trained jointly as a whole, or the discriminator can be used for fine-tuning after the segmentation network is coarsely trained. Multi-scale pooling layers were introduced to preserve spatial resolution during pooling using less memory compared to atrous convolution layers. An adaptive loss function was proposed to enhance the training on small or low-contrast organs. The accuracy of modeled contours was measured with the Dice similarity coefficient (DSC), average Hausdorff distance (AHD), average surface Hausdorff distance (ASHD), and relative volume difference (VD) using clinical contours as references to the ground-truth. The proposed ARPM-net method was compared to several state-of-the-art deep learning methods. RESULTS ARPM-net outperformed several existing deep learning approaches and MRF methods and achieved state-of-the-art performance on a testing dataset. On the test set with 20 cases, the average DSC on the prostate, bladder, rectum, left femur, and right femur were 0.88 (±0.11), 0.97 (±0.07), 0.86 (±0.12), 0.97 (±0.01), and 0.97 (±0.01), respectively. The average HD (mm) on these organs were 1.58 (±1.77), 1.91 (±1.29), 3.14 (±2.39), 1.76 (±1.57), and 1.92 (±1.01). The average surface HD (mm) on these organs were 2.11 (±2.03), 2.36 (±2.43), 3.05 (±2.11), 1.99 (±1.66), and 2.00 (±2.07). CONCLUSION ARPM-net was designed for the automatic segmentation of pelvic CT images. With adversarial fine-tuning, ARPM-net produces state-of-the-art accurate contouring of multiple organs on CT images and has the potential to facilitate the routine pelvic cancer radiation therapy planning process.
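One reading of the adversarial scheme described above is sketched below: the discriminator scores the element-wise product of the CT image with either the ground-truth or the predicted mask, and the segmenter is additionally rewarded for fooling it. Network definitions, the MRF block, and the adaptive loss are omitted, and both networks are assumed to output probabilities; this is a schematic sketch, not the authors' exact recipe:

```python
import torch
import torch.nn.functional as F

def adversarial_step(ct, gt_mask, segmenter, discriminator, opt_s, opt_d, lambda_adv=0.1):
    """One joint update of the segmenter and the discriminator (schematic)."""
    pred = segmenter(ct)                                   # (B, 1, H, W) probabilities

    # discriminator update: real = CT * ground truth, fake = CT * prediction
    d_real = discriminator(ct * gt_mask)
    d_fake = discriminator(ct * pred.detach())
    loss_d = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # segmenter update: segmentation loss plus an adversarial term that tries to fool the discriminator
    d_fake = discriminator(ct * pred)
    loss_s = F.binary_cross_entropy(pred, gt_mask) + \
             lambda_adv * F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_s.item(), loss_d.item()
```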
Affiliation(s)
- Zhuangzhuang Zhang
- Department of Computer Science and Engineering, Washington University, One Brookings Drive, Campus Box 1045, St. Louis, MO, 63130, USA
| | - Tianyu Zhao
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO, 63110, USA
| | - Hiram Gay
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO, 63110, USA
| | - Weixiong Zhang
- Department of Computer Science and Engineering, Department of Genetics, Washington University, One Brookings Drive, Campus Box 1045, St. Louis, MO, 63130, USA
| | - Baozhou Sun
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO, 63110, USA
| |
|
37
|
Boers TGW, Hu Y, Gibson E, Barratt DC, Bonmati E, Krdzalic J, van der Heijden F, Hermans JJ, Huisman HJ. Interactive 3D U-net for the segmentation of the pancreas in computed tomography scans. Phys Med Biol 2020; 65:065002. [PMID: 31978921 DOI: 10.1088/1361-6560/ab6f99] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
The increasing incidence of pancreatic cancer will make it the second deadliest cancer by 2030. Imaging-based early diagnosis and image-guided treatment are emerging potential solutions. Artificial intelligence (AI) can help provide and improve widespread diagnostic expertise and accurate interventional image interpretation. Accurate segmentation of the pancreas is essential to create annotated data sets to train AI, and for computer-assisted interventional guidance. Automated deep learning segmentation performance in pancreas computed tomography (CT) imaging is low due to poor grey-value contrast and complex anatomy. A recent interactive deep learning segmentation framework for brain CT, which strongly improved initial automated segmentation with minimal user input, seemed a good solution. This method yielded no satisfactory results for pancreas CT, possibly due to a sub-optimal neural network architecture. We hypothesize that a state-of-the-art U-net neural network architecture is better because it can produce a better initial segmentation and is likely to be extended to work in a similar interactive approach. We implemented the existing interactive method, iFCN, and developed an interactive version of the U-net method, which we call iUnet. The iUnet is fully trained to produce the best possible initial segmentation. In interactive mode it is additionally trained on a partial set of layers on user-generated scribbles. We compare the initial segmentation performance of iFCN and iUnet on a dataset of 100 CT scans using Dice similarity coefficient analysis. Second, we assessed the performance gain in interactive use with three observers on segmentation quality and time. Average automated baseline performance was 78% (iUnet) versus 72% (FCN). Manual and semi-automatic segmentation performance was 87% in 15 min for manual segmentation and 86% in 8 min for iUnet. We conclude that iUnet provides a better baseline than iFCN and can reach expert manual performance significantly faster than manual segmentation in the case of pancreas CT. Our novel iUnet architecture is modality and organ agnostic and can be a potential novel solution for semi-automatic medical imaging segmentation in general.
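The interactive step, training "a partial set of layers on user-generated scribbles", amounts to freezing most of the network and computing the loss only at scribbled pixels. A schematic sketch; the choice of trainable layers (a "decoder" name prefix) and the loss are assumptions, not the authors' exact recipe:

```python
import torch
import torch.nn.functional as F

def finetune_on_scribbles(model, image, scribbles, trainable_prefix="decoder", steps=10, lr=1e-4):
    """Fine-tune a subset of layers on sparse user annotations.

    scribbles: (B, H, W) tensor with 1 = foreground scribble, 0 = background scribble,
    -1 = unannotated pixel (ignored by the loss).
    """
    for name, param in model.named_parameters():             # freeze everything but one block
        param.requires_grad = name.startswith(trainable_prefix)
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)

    for _ in range(steps):
        logits = model(image).squeeze(1)                      # assumes a single-channel logit map
        annotated = scribbles >= 0
        loss = F.binary_cross_entropy_with_logits(logits[annotated], scribbles[annotated].float())
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```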
Affiliation(s)
- T G W Boers
- Faculty of Science and Technology, University of Twente, Enschede, The Netherlands
| | | | | | | | | | | | | | | | | |
|
38
|
Xiong X, Linhardt TJ, Liu W, Smith BJ, Sun W, Bauer C, Sunderland JJ, Graham MM, Buatti JM, Beichel RR. A 3D deep convolutional neural network approach for the automated measurement of cerebellum tracer uptake in FDG PET-CT scans. Med Phys 2019; 47:1058-1066. [PMID: 31855287 DOI: 10.1002/mp.13970] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2019] [Revised: 12/05/2019] [Accepted: 12/05/2019] [Indexed: 01/12/2023] Open
Abstract
PURPOSE The purpose of this work was to assess the potential of deep convolutional neural networks in automated measurement of cerebellum tracer uptake in F-18 fluorodeoxyglucose (FDG) positron emission tomography (PET) scans. METHODS Three different three-dimensional (3D) convolutional neural network architectures (U-Net, V-Net, and modified U-Net) were implemented and compared regarding their performance in 3D cerebellum segmentation in FDG PET scans. For network training and testing, 134 PET scans with corresponding manual volumetric segmentations were utilized. For segmentation performance assessment, a fivefold cross-validation was used, and the Dice coefficient as well as signed and unsigned distance errors were calculated. In addition, standardized uptake value (SUV) uptake measurement performance was assessed by means of a statistical comparison to an independent reference standard. Furthermore, a comparison to a previously reported active-shape-model-based approach was performed. RESULTS Out of the three convolutional neural networks investigated, the modified U-Net showed significantly better segmentation performance. It achieved a Dice coefficient of 0.911 ± 0.026, a signed distance error of 0.220 ± 0.103 mm, and an unsigned distance error of 1.048 ± 0.340 mm. When compared to the independent reference standard, SUV uptake measurements produced with the modified U-Net showed no significant error in slope and intercept. The estimated reduction in total SUV measurement error was 95.1%. CONCLUSIONS The presented work demonstrates the potential of deep convolutional neural networks in automated SUV measurement of reference regions. While it focuses on the cerebellum, utilized methods can be generalized to other reference regions like the liver or aortic arch. Future work will focus on combining lesion and reference region analysis into one approach.
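Once the cerebellum mask is available, the reference-region SUV measurement discussed above reduces to averaging the PET voxels inside it; a minimal sketch:

```python
import numpy as np

def mean_suv(pet_suv_volume, region_mask):
    """Mean standardized uptake value inside a segmented reference region.

    pet_suv_volume: 3-D array of SUV values; region_mask: boolean array of the same shape.
    """
    return float(pet_suv_volume[region_mask.astype(bool)].mean())
```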
Affiliation(s)
- Xiaofan Xiong
- Department of Biomedical Engineering, The University of Iowa, Iowa City, IA, 52242, USA
| | - Timothy J Linhardt
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242, USA
| | - Weiren Liu
- Roy J. and Lucille A. Carver College of Medicine, The University of Iowa, Iowa City, IA, 52242, USA
| | - Brian J Smith
- Department of Biostatistics, The University of Iowa, Iowa City, IA, 52242, USA
| | - Wenqing Sun
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
| | - Christian Bauer
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242, USA
| | - John J Sunderland
- Department of Radiology, The University of Iowa, Iowa City, IA, 52242, USA
| | - Michael M Graham
- Department of Radiology, The University of Iowa, Iowa City, IA, 52242, USA
| | - John M Buatti
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
| | - Reinhard R Beichel
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, 52242, USA
| |
|
39
|
Rubin DL. Artificial Intelligence in Imaging: The Radiologist's Role. J Am Coll Radiol 2019; 16:1309-1317. [PMID: 31492409 PMCID: PMC6733578 DOI: 10.1016/j.jacr.2019.05.036] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Accepted: 05/17/2019] [Indexed: 12/15/2022]
Abstract
Rapid technological advancements in artificial intelligence (AI) methods have fueled explosive growth in decision tools being marketed by a rapidly growing number of companies. AI developments are being driven largely by computer scientists, informaticians, engineers, and businesspeople, with much less direct participation by radiologists. Participation by radiologists in AI is largely restricted to educational efforts to familiarize them with the tools and promising results, but techniques to help them decide which AI tools should be used in their practices and how to quantify their value are not being addressed. This article focuses on the role of radiologists in imaging AI and suggests specific ways they can be engaged by (1) considering the clinical need for AI tools in specific clinical use cases, (2) undertaking formal evaluation of AI tools they are considering adopting in their practices, and (3) maintaining their expertise and guarding against the pitfalls of overreliance on technology.
Affiliation(s)
- Daniel L Rubin
- Department of Biomedical Data Science, Radiology, and Medicine (Biomedical Informatics Research), Stanford University, Stanford, California.
| |
|