1
Li Z, Du W, Shi Y, Li W, Gao C. A bi-directional segmentation method for prostate ultrasound images under semantic constraints. Sci Rep 2024;14:11701. PMID: 38778034. DOI: 10.1038/s41598-024-61238-5.
Abstract
Due to the lack of sufficient labeled prostate data and the extensive, complex semantic information in ultrasound images, accurately and quickly segmenting the prostate in transrectal ultrasound (TRUS) images remains a challenging task. This paper proposes an end-to-end bidirectional semantic-constraint method for TRUS image segmentation, the BiSeC model. Experimental results show that, compared with classic and popular deep learning methods, BiSeC achieves better segmentation performance, with a Dice Similarity Coefficient (DSC) of 96.74% and an Intersection over Union (IoU) of 93.71%. The model strikes a good balance between actual boundaries and noise areas, reducing costs while preserving segmentation accuracy and speed.
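For context on the reported metrics: DSC and IoU are both overlap measures computed from binary masks. A minimal NumPy sketch (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice Similarity Coefficient and Intersection over Union
    for two binary segmentation masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * intersection / (pred.sum() + target.sum())
    iou = intersection / union
    return float(dice), float(iou)

# Toy example: two overlapping 4x4 masks
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
dice, iou = dice_and_iou(pred, target)  # intersection=3, union=4 -> Dice=6/7, IoU=3/4
```

Note that DSC is always at least as large as IoU for the same pair of masks, which is why the paper's DSC (96.74%) exceeds its IoU (93.71%).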
Affiliation(s)
- Zexiang Li
- College of Electrical Engineering and New Energy, China Three Gorges University, Yichang, Hubei, 443002, China
- Wei Du
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Yongtao Shi
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Wei Li
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Chao Gao
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
2
Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024;34:180-196. PMID: 36376203. PMCID: PMC11156786. DOI: 10.1016/j.zemedi.2022.10.005.
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still at an early stage. In this review, we first investigate and scrutinize the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarize the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. Because reproducing the results of deep learning algorithms requires both source code and training data, a second focus of this work is an analysis of the availability of open source code, open data, and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing year by year, partly self-propelled but also influenced by closely related fields. Open source code, data, and models are growing in number but are still scarce and unevenly distributed among research groups. The reluctance to publish code, data, and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement in reproducible results and standardized evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
3
Gao C, Shi Y, Yang S, Lei B. SAA-SDM: Neural Networks Faster Learned to Segment Organ Images. J Imaging Inform Med 2024;37:547-562. PMID: 38343217. PMCID: PMC11031521. DOI: 10.1007/s10278-023-00947-1.
Abstract
In the field of medicine, rapidly and accurately segmenting organs in medical images is a crucial application of computer technology. This paper introduces a feature map module, the Strength Attention Area Signed Distance Map (SAA-SDM), based on the principal component analysis (PCA) principle. The module is designed to accelerate a neural network's convergence toward high precision. Similar to the signed distance map (SDM), SAA-SDM provides the network with confidence information about the target and background, enhancing its understanding of target-related semantic information. Furthermore, this paper presents a training scheme tailored to the module, aiming for finer segmentation and improved generalization performance. We validate our approach on TRUS and chest X-ray datasets. Experimental results demonstrate that our method significantly improves neural networks' convergence speed and precision. For instance, the convergence speed of UNet and UNet++ improves by more than 30%, and SegFormer gains over 6% and 3% in mIoU (mean Intersection over Union) on two test datasets without requiring pre-trained parameters. Our approach reduces the time and resource costs of training neural networks for organ segmentation while effectively guiding the network to learn meaningfully even without pre-trained parameters.
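For context on the SDM idea referenced above: a signed distance map labels each pixel with its distance to the object boundary, signed by inside/outside membership, giving the network explicit target/background confidence. A brute-force sketch, for illustration only (the paper's SAA-SDM adds PCA-based strength-attention weighting on top, which is not reproduced here):

```python
import numpy as np

def signed_distance_map(mask):
    """Signed distance map for a binary mask: positive inside the object,
    negative outside; magnitude is the Euclidean distance to the nearest
    pixel of the opposite class. O(N^2) brute force, so illustration-sized
    only; assumes both foreground and background pixels are present."""
    fg = np.argwhere(mask == 1)
    bg = np.argwhere(mask == 0)
    sdm = np.zeros(mask.shape, dtype=float)
    for idx in np.ndindex(mask.shape):
        p = np.asarray(idx)
        other = bg if mask[idx] == 1 else fg
        dist = np.sqrt(((other - p) ** 2).sum(axis=1)).min()
        sdm[idx] = dist if mask[idx] == 1 else -dist
    return sdm

# 5x5 mask with a 3x3 foreground block in the middle
mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1
sdm = signed_distance_map(mask)
# Centre pixel is 2 pixels from the nearest background pixel;
# the corner background pixel is sqrt(2) from the nearest foreground pixel.
```

For real image sizes, an exact Euclidean distance transform (e.g. `scipy.ndimage.distance_transform_edt`) replaces the double loop.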
Affiliation(s)
- Chao Gao
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Yongtao Shi
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Shuai Yang
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Bangjun Lei
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
4
Jiang H, Imran M, Muralidharan P, Patel A, Pensa J, Liang M, Benidir T, Grajo JR, Joseph JP, Terry R, DiBianco JM, Su LM, Zhou Y, Brisbane WG, Shao W. MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images. Comput Med Imaging Graph 2024;112:102326. PMID: 38211358. DOI: 10.1016/j.compmedimag.2024.102326.
Abstract
Micro-ultrasound (micro-US) is a novel 29-MHz ultrasound technique that provides 3-4 times higher resolution than traditional ultrasound, potentially enabling low-cost, accurate diagnosis of prostate cancer. Accurate prostate segmentation is crucial for prostate volume measurement, cancer diagnosis, prostate biopsy, and treatment planning. However, prostate segmentation on micro-US is challenging due to artifacts and indistinct borders between the prostate, bladder, and urethra in the midline. This paper presents MicroSegNet, a multi-scale annotation-guided transformer UNet model designed specifically to tackle these challenges. During training, MicroSegNet focuses more on regions that are hard to segment (hard regions), characterized by discrepancies between expert and non-expert annotations. We achieve this with an annotation-guided binary cross-entropy (AG-BCE) loss that assigns a larger weight to prediction errors in hard regions and a lower weight to prediction errors in easy regions. The AG-BCE loss was integrated into training through multi-scale deep supervision, enabling MicroSegNet to capture global contextual dependencies and local information at various scales. We trained our model on micro-US images from 55 patients and evaluated it on 20 patients. MicroSegNet achieved a Dice coefficient of 0.939 and a Hausdorff distance of 2.02 mm, outperforming several state-of-the-art segmentation methods as well as three human annotators with different experience levels. Our code is publicly available at https://github.com/mirthAI/MicroSegNet and our dataset is publicly available at https://zenodo.org/records/10475293.
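The AG-BCE idea, as described, reduces to a per-pixel weighted binary cross-entropy whose weight map is derived from expert/non-expert disagreement. A hedged NumPy sketch; the weight values and array names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ag_bce_loss(pred, target, expert, nonexpert, w_hard=4.0, w_easy=1.0, eps=1e-7):
    """Annotation-guided BCE sketch: pixels where the expert and non-expert
    annotations disagree ("hard regions") get a larger weight. The weight
    values here are illustrative, not taken from the paper."""
    hard = expert != nonexpert
    weights = np.where(hard, w_hard, w_easy)
    p = np.clip(pred, eps, 1.0 - eps)           # avoid log(0)
    bce = -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
    return float((weights * bce).mean())

pred = np.array([0.9, 0.8])       # predicted foreground probabilities
target = np.array([1.0, 0.0])     # expert ground truth
expert = np.array([1, 0])
nonexpert = np.array([1, 1])      # annotators disagree on the second pixel
weighted = ag_bce_loss(pred, target, expert, nonexpert)
plain = ag_bce_loss(pred, target, expert, nonexpert, w_hard=1.0)  # ordinary BCE
```

With `w_hard > w_easy`, errors in disagreement regions dominate the loss, so gradient descent spends more capacity on exactly the boundaries annotators find ambiguous.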
Affiliation(s)
- Hongxu Jiang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32608, United States
- Muhammad Imran
- Department of Medicine, University of Florida, Gainesville, FL, 32608, United States
- Preethika Muralidharan
- Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, 32608, United States
- Anjali Patel
- College of Medicine, University of Florida, Gainesville, FL, 32608, United States
- Jake Pensa
- Department of Bioengineering, University of California, Los Angeles, CA, 90095, United States
- Muxuan Liang
- Department of Biostatistics, University of Florida, Gainesville, FL, 32608, United States
- Tarik Benidir
- Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Joseph R Grajo
- Department of Radiology, University of Florida, Gainesville, FL, 32608, United States
- Jason P Joseph
- Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Russell Terry
- Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Li-Ming Su
- Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Yuyin Zhou
- Department of Computer Science and Engineering, University of California, Santa Cruz, CA, 95064, United States
- Wayne G Brisbane
- Department of Urology, University of California, Los Angeles, CA, 90095, United States
- Wei Shao
- Department of Medicine, University of Florida, Gainesville, FL, 32608, United States
5
Peng T, Gu Y, Ruan SJ, Wu QJ, Cai J. Novel Solution for Using Neural Networks for Kidney Boundary Extraction in 2D Ultrasound Data. Biomolecules 2023;13:1548. PMID: 37892229. PMCID: PMC10604927. DOI: 10.3390/biom13101548.
Abstract
Background and Objective: Kidney ultrasound (US) imaging is a significant imaging modality for evaluating kidney health and is essential for diagnosis, treatment, surgical intervention planning, and follow-up assessments. Kidney US image segmentation extracts useful objects or regions from the whole image, which helps determine tissue organization and improve diagnosis. Obtaining accurate kidney segmentations is therefore an important first step in precisely diagnosing kidney diseases. However, manual delineation of the kidney in US images is complex and tedious in clinical practice. To overcome these challenges, we developed a novel automatic method for US kidney segmentation. Methods: Our method comprises two cascaded steps. The first step uses a coarse segmentation procedure based on a deep fusion learning network to roughly segment each input US kidney image. The second step uses a refinement procedure that fine-tunes the first step's result by combining an automatic polygon-tracking method with a machine learning network, in which a suitable and explainable mathematical formula for kidney contours is expressed through basic parameters. Results: Our method is assessed on 1380 trans-abdominal US kidney images from 115 patients. Across comprehensive comparisons at different noise levels, it achieves accurate and robust kidney segmentation, and ablation experiments confirm the significance of each component. Compared with state-of-the-art methods, its evaluation metrics are significantly higher: the Dice similarity coefficient (DSC) is 94.6 ± 3.4%, versus 89.4 ± 7.1% and 93.7 ± 3.8% for recent deep learning and hybrid algorithms, respectively. Conclusions: We develop a coarse-to-fine architecture for accurate segmentation of US kidney images. Precisely extracting kidney contour features is important because segmentation errors can cause under-dosing of the target or over-dosing of neighboring normal tissues during US-guided brachytherapy. Our method can therefore increase the rigor of kidney US segmentation.
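Robustness checks like the noise-level comparison described above are commonly run by corrupting test images with multiplicative speckle, the dominant noise type in ultrasound. A minimal NumPy sketch, assuming a simple Gaussian multiplicative model and illustrative variance levels (not the paper's exact protocol):

```python
import numpy as np

def add_speckle(image, variance=0.05, seed=0):
    """Multiplicative speckle model I' = I * (1 + n), n ~ N(0, variance);
    the result is clipped back to the [0, 1] intensity range."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, np.sqrt(variance), size=image.shape)
    return np.clip(image * (1.0 + noise), 0.0, 1.0)

img = np.full((64, 64), 0.5)   # flat synthetic "ultrasound" image in [0, 1]
# Evaluate a segmentation model at increasing noise levels
noisy_levels = {v: add_speckle(img, v) for v in (0.01, 0.05, 0.1)}
```

Re-running a segmentation model on each corrupted copy and plotting DSC against the noise variance gives the kind of sensitivity curve the evaluation describes.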
Affiliation(s)
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou 215006, China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Yidong Gu
- Department of Medical Ultrasound, Suzhou Municipal Hospital, Suzhou 215000, China
- Shanq-Jang Ruan
- Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology, Taipei City 10607, Taiwan
- Qingrong Jackie Wu
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC 27710, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
6
Peng T, Dong Y, Di G, Zhao J, Li T, Ren G, Zhang L, Cai J. Boundary delineation in transrectal ultrasound images for region of interest of prostate. Phys Med Biol 2023;68:195008. PMID: 37652058. DOI: 10.1088/1361-6560/acf5c5.
Abstract
Accurate and robust prostate segmentation in transrectal ultrasound (TRUS) images is of great interest for ultrasound-guided brachytherapy for prostate cancer. However, the current practice of manual segmentation is difficult, time-consuming, and prone to errors. To overcome these challenges, we developed an accurate prostate segmentation framework (A-ProSeg) for TRUS images. The proposed segmentation method comprises three innovative steps: (1) acquiring the sequence of vertices using an improved polygonal segment-based method with a small number of radiologist-defined seed points as prior points; (2) establishing an optimal machine learning-based method using an improved evolutionary neural network; and (3) obtaining smooth contours of the prostate region of interest using the optimized machine learning-based method. The proposed method was evaluated on 266 patients who underwent prostate cancer brachytherapy. It achieved high performance against the ground truth, with a Dice similarity coefficient of 96.2% ± 2.4%, a Jaccard similarity coefficient of 94.4% ± 3.3%, and an accuracy of 95.7% ± 2.7%, all higher than those obtained with state-of-the-art methods. A sensitivity evaluation at different noise levels demonstrated that our method is highly robust to changes in image quality. In addition, an ablation study demonstrated the significance of all key components of the proposed method.
Affiliation(s)
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou, People's Republic of China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Yan Dong
- Department of Ultrasonography, The First Affiliated Hospital of Soochow University, Suzhou, People's Republic of China
- Gongye Di
- Department of Ultrasonic, Taizhou People's Hospital Affiliated to Nanjing Medical University, Taizhou, People's Republic of China
- Jing Zhao
- Department of Ultrasound, Tsinghua University Affiliated Beijing Tsinghua Changgung Hospital, Beijing, People's Republic of China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China
- Lei Zhang
- Medical Physics Graduate Program and Data Science Research Center, Duke Kunshan University, Kunshan, Jiangsu, People's Republic of China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China
7
Zhao JZ, Ni R, Chow R, Rink A, Weersink R, Croke J, Raman S. Artificial intelligence applications in brachytherapy: A literature review. Brachytherapy 2023;22:429-445. PMID: 37248158. DOI: 10.1016/j.brachy.2023.04.003.
Abstract
PURPOSE Artificial intelligence (AI) has the potential to simplify and optimize various steps of the brachytherapy workflow, and this literature review aims to provide an overview of the work done in this field. METHODS AND MATERIALS We conducted a literature search in June 2022 on PubMed, Embase, and Cochrane for papers that proposed AI applications in brachytherapy. RESULTS A total of 80 papers satisfied the inclusion/exclusion criteria. These papers were categorized as follows: segmentation (24), registration and image processing (6), preplanning (13), dose prediction and treatment planning (11), applicator/catheter/needle reconstruction (16), and quality assurance (10). AI techniques ranged from classical models such as support vector machines and decision tree-based learning to newer techniques such as U-Net and deep reinforcement learning, and were applied to facilitate small steps of a process (e.g., optimizing applicator selection) or even automate an entire step of the workflow (e.g., end-to-end preplanning). Many of these algorithms demonstrated human-level performance and offer significant improvements in speed. CONCLUSIONS AI has the potential to augment, automate, and/or accelerate many steps of the brachytherapy workflow. We recommend that future studies adhere to standard reporting guidelines. We also stress the importance of using larger sample sizes and reporting results using clinically interpretable measures.
Affiliation(s)
- Jonathan ZL Zhao
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ruiyan Ni
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Ronald Chow
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Alexandra Rink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Robert Weersink
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Jennifer Croke
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Srinivas Raman
- Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
8
Peng T, Tang C, Wu Y, Cai J. Semi-Automatic Prostate Segmentation From Ultrasound Images Using Machine Learning and Principal Curve Based on Interpretable Mathematical Model Expression. Front Oncol 2022;12:878104. PMID: 35747834. PMCID: PMC9209717. DOI: 10.3389/fonc.2022.878104.
Abstract
Accurate prostate segmentation in transrectal ultrasound (TRUS) is a challenging problem due to the low contrast of TRUS images and the presence of imaging artifacts such as speckle and shadow regions. To address this issue, we propose a semi-automatic model, termed the Hybrid Segmentation Model (H-SegMod), for prostate Region of Interest (ROI) segmentation in TRUS images. H-SegMod contains two cascaded stages. The first stage obtains the vertex sequences based on an improved principal curve-based model, where a few radiologist-selected seed points are used as priors. The second stage finds a map function describing the smooth prostate contour based on an improved machine learning model. Experimental results show that our proposed model achieved superior segmentation results compared with several other state-of-the-art models, achieving an average Dice Similarity Coefficient (DSC), Jaccard Similarity Coefficient (Ω), and Accuracy (ACC) of 96.5%, 95.2%, and 96.3%, respectively.
Affiliation(s)
- Tao Peng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, United States
- Correspondence: Jing Cai; Tao Peng
- Caiyin Tang
- Department of Medical Imaging, Taizhou People's Hospital, Taizhou, China
- Yiyun Wu
- Department of Medical Technology, Jiangsu Province Hospital, Nanjing, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Correspondence: Jing Cai; Tao Peng