1. Hampole P, Harding T, Gillies D, Orlando N, Edirisinghe C, Mendez LC, D'Souza D, Velker V, Correa R, Helou J, Xing S, Fenster A, Hoover DA. Deep learning-based ultrasound auto-segmentation of the prostate with brachytherapy implanted needles. Med Phys 2024; 51:2665-2677. [PMID: 37888789] [DOI: 10.1002/mp.16811]
Abstract
BACKGROUND: Accurate segmentation of the clinical target volume (CTV), corresponding to the prostate with or without the proximal seminal vesicles, is required on transrectal ultrasound (TRUS) images during prostate brachytherapy procedures. Implanted needles cause artifacts that can make this task difficult and time-consuming. Previous studies have therefore focused on the simpler problem of segmentation in the absence of needles, at the cost of reduced clinical utility.

PURPOSE: To use a convolutional neural network (CNN) to segment the prostatic CTV in TRUS images acquired after needle insertion during prostate brachytherapy procedures, better meeting the demands of the clinical procedure.

METHODS: A dataset of 144 three-dimensional (3D) TRUS images with implanted metal brachytherapy needles and associated manual CTV segmentations was used to train a two-dimensional (2D) U-Net CNN with a Dice Similarity Coefficient (DSC) loss function. The images were split by patient, with 119 used for training and 25 reserved for testing. The 3D TRUS training images were resliced at radial angles (around the axis normal to the coronal plane) and oblique angles through the center of the 3D image, as well as along the axial, coronal, and sagittal planes, yielding 3689 2D TRUS images and masks for training. The network generated boundary predictions on 300 2D TRUS images obtained by reslicing each of the 25 test 3D TRUS images into 12 radial slices (15° apart), which were then reconstructed into 3D surfaces. Performance metrics included DSC, recall, precision, unsigned and signed volume percentage differences (VPD/sVPD), mean surface distance (MSD), and Hausdorff distance (HD). In addition, we studied whether providing algorithm-predicted boundaries to physicians and allowing modifications increased inter-physician agreement. A subset of 3D TRUS images from five patients was given to five physicians, who segmented the CTV using clinical software and repeated the task at least one week later. The physicians were then given the algorithm's boundary predictions and allowed to modify them, and the resulting inter- and intra-physician variability was evaluated.

RESULTS: Median DSC, recall, precision, VPD, sVPD, MSD, and HD of the 3D-reconstructed algorithm segmentations were 87.2 [84.1, 88.8]%, 89.0 [86.3, 92.4]%, 86.6 [78.5, 90.8]%, 10.3 [4.5, 18.4]%, 2.0 [-4.5, 18.4]%, 1.6 [1.2, 2.0] mm, and 6.0 [5.3, 8.0] mm, respectively. Segmentation time for a set of 12 2D radial images was 2.46 [2.44, 2.48] s. With and without U-Net starting points, intra-physician median DSCs were 97.0 [96.3, 97.8]% and 94.4 [92.5, 95.4]% (p < 0.0001), respectively, while inter-physician median DSCs were 94.8 [93.3, 96.8]% and 90.2 [88.7, 92.1]% (p < 0.0001), respectively. The median physician segmentation time with and without U-Net-generated CTV boundaries was 257.5 [211.8, 300.0] s and 288.0 [232.0, 333.5] s, respectively (p = 0.1034).

CONCLUSIONS: Our algorithm performed at a level similar to physicians in a fraction of the time. Using algorithm-generated boundaries as a starting point and allowing modifications reduced physician variability, although it did not significantly reduce segmentation time compared to fully manual segmentation.
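The DSC loss named in the methods is a standard soft Dice formulation. The paper does not include code, so the sketch below is only a minimal PyTorch illustration of such a loss, assuming sigmoid-probability predictions and binary masks of shape (N, 1, H, W) and a smoothing constant `eps`; it is not the authors' implementation.

```python
import torch

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss (1 - DSC) over a batch of 2D segmentation masks.

    pred:   (N, 1, H, W) sigmoid probabilities from the U-Net
    target: (N, 1, H, W) binary ground-truth CTV masks
    eps:    smoothing constant (an assumption; the paper does not state one)
    """
    pred = pred.flatten(1)      # (N, H*W)
    target = target.flatten(1)  # (N, H*W)
    intersection = (pred * target).sum(dim=1)
    dsc = (2.0 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return 1.0 - dsc.mean()
```

Minimizing 1 - DSC directly optimizes the overlap metric the study also uses for evaluation.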
Affiliation(s)
- Prakash Hampole
  - Department of Medical Biophysics, Western University, London, ON, Canada
  - Robarts Research Institute, Western University, London, ON, Canada
  - Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Thomas Harding
  - Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Derek Gillies
  - Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Nathan Orlando
  - Department of Medical Biophysics, Western University, London, ON, Canada
  - Robarts Research Institute, Western University, London, ON, Canada
- Lucas C Mendez
  - Department of Oncology, London Health Sciences Centre, London, ON, Canada
  - Department of Oncology, Western University, London, ON, Canada
- David D'Souza
  - Department of Oncology, London Health Sciences Centre, London, ON, Canada
  - Department of Oncology, Western University, London, ON, Canada
- Vikram Velker
  - Department of Oncology, London Health Sciences Centre, London, ON, Canada
  - Department of Oncology, Western University, London, ON, Canada
- Rohann Correa
  - Department of Oncology, London Health Sciences Centre, London, ON, Canada
  - Department of Oncology, Western University, London, ON, Canada
- Joelle Helou
  - Department of Oncology, London Health Sciences Centre, London, ON, Canada
  - Department of Oncology, Western University, London, ON, Canada
- Shuwei Xing
  - Robarts Research Institute, Western University, London, ON, Canada
  - School of Biomedical Engineering, Western University, London, ON, Canada
- Aaron Fenster
  - Department of Medical Biophysics, Western University, London, ON, Canada
  - Robarts Research Institute, Western University, London, ON, Canada
  - Department of Medical Imaging, Western University, London, ON, Canada
- Douglas A Hoover
  - Department of Medical Biophysics, Western University, London, ON, Canada
  - Department of Oncology, London Health Sciences Centre, London, ON, Canada
  - Department of Oncology, Western University, London, ON, Canada
2. Jiang H, Imran M, Muralidharan P, Patel A, Pensa J, Liang M, Benidir T, Grajo JR, Joseph JP, Terry R, DiBianco JM, Su LM, Zhou Y, Brisbane WG, Shao W. MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images. Comput Med Imaging Graph 2024; 112:102326. [PMID: 38211358] [DOI: 10.1016/j.compmedimag.2024.102326]
Abstract
Micro-ultrasound (micro-US) is a novel 29-MHz ultrasound technique that provides 3-4 times higher resolution than traditional ultrasound, potentially enabling low-cost, accurate diagnosis of prostate cancer. Accurate prostate segmentation is crucial for prostate volume measurement, cancer diagnosis, prostate biopsy, and treatment planning. However, prostate segmentation on micro-US is challenging due to artifacts and indistinct borders between the prostate, bladder, and urethra in the midline. This paper presents MicroSegNet, a multi-scale annotation-guided transformer UNet model designed specifically to tackle these challenges. During training, MicroSegNet focuses more on regions that are hard to segment (hard regions), characterized by discrepancies between expert and non-expert annotations. We achieve this with an annotation-guided binary cross-entropy (AG-BCE) loss that assigns a larger weight to prediction errors in hard regions and a smaller weight to prediction errors in easy regions. The AG-BCE loss is integrated into training through multi-scale deep supervision, enabling MicroSegNet to capture global contextual dependencies and local information at various scales. We trained the model on micro-US images from 55 patients and evaluated it on 20 patients. MicroSegNet achieved a Dice coefficient of 0.939 and a Hausdorff distance of 2.02 mm, outperforming several state-of-the-art segmentation methods as well as three human annotators with different experience levels. Our code is publicly available at https://github.com/mirthAI/MicroSegNet and our dataset at https://zenodo.org/records/10475293.
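The authors' actual implementation is available at the GitHub link above. For orientation only, the weighting idea described for the AG-BCE loss can be sketched in PyTorch as follows; the weight values `w_hard` and `w_easy` and the tensor shapes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def ag_bce_loss(pred_logits: torch.Tensor,
                expert_mask: torch.Tensor,
                nonexpert_mask: torch.Tensor,
                w_hard: float = 4.0,
                w_easy: float = 1.0) -> torch.Tensor:
    """Annotation-guided BCE: pixels where the expert and non-expert
    annotations disagree ("hard regions") are weighted more heavily than
    pixels where they agree ("easy regions"). All masks are binary float
    tensors of shape (N, 1, H, W); the expert annotation is the target.
    """
    hard = (expert_mask != nonexpert_mask).float()   # 1 in hard regions, 0 in easy
    weights = w_easy + (w_hard - w_easy) * hard      # per-pixel error weight
    bce = F.binary_cross_entropy_with_logits(pred_logits, expert_mask,
                                             reduction="none")
    return (weights * bce).mean()
```

In the paper this loss is applied at multiple decoder scales via deep supervision; the sketch shows a single scale.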
Affiliation(s)
- Hongxu Jiang
  - Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32608, United States
- Muhammad Imran
  - Department of Medicine, University of Florida, Gainesville, FL, 32608, United States
- Preethika Muralidharan
  - Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, 32608, United States
- Anjali Patel
  - College of Medicine, University of Florida, Gainesville, FL, 32608, United States
- Jake Pensa
  - Department of Bioengineering, University of California, Los Angeles, CA, 90095, United States
- Muxuan Liang
  - Department of Biostatistics, University of Florida, Gainesville, FL, 32608, United States
- Tarik Benidir
  - Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Joseph R Grajo
  - Department of Radiology, University of Florida, Gainesville, FL, 32608, United States
- Jason P Joseph
  - Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Russell Terry
  - Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Li-Ming Su
  - Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Yuyin Zhou
  - Department of Computer Science and Engineering, University of California, Santa Cruz, CA, 95064, United States
- Wayne G Brisbane
  - Department of Urology, University of California, Los Angeles, CA, 90095, United States
- Wei Shao
  - Department of Medicine, University of Florida, Gainesville, FL, 32608, United States
3. Zhang Y, Yuan Q, Muzzammil HM, Gao G, Xu Y. Image-guided prostate biopsy robots: A review. Math Biosci Eng 2023; 20:15135-15166. [PMID: 37679175] [DOI: 10.3934/mbe.2023678]
Abstract
The incidence of prostate cancer (PCa) in men is increasing year by year, so early diagnosis of PCa is of great significance. Transrectal ultrasonography (TRUS)-guided biopsy is a common method for diagnosing PCa. The biopsy procedure is performed manually by urologists, but the diagnostic rate is only 20%-30%, and its reliability and accuracy can no longer meet clinical needs. Image-guided prostate biopsy robots offer a high degree of automation, do not depend on the skill and experience of the operator, and reduce the workload and operating time of urologists. Capable of delivering biopsy needles to pre-defined biopsy locations with minimal placement error, they compensate for the shortcomings of traditional free-hand biopsy and improve the reliability and accuracy of the procedure. The integration of medical imaging technology with robotic systems is an important means of accurate tumor localization, biopsy puncture path planning, and visualization. This paper reviews image-guided prostate biopsy robots. Following the existing literature, guidance modalities are divided into magnetic resonance imaging (MRI), ultrasound (US), and fused imaging. First, robot structures are introduced and compared by guidance modality, with actuator and materials research for each modality as a secondary thread. Second, image-guided localization technology for these robots is discussed. Finally, the state of the field is summarized and suggestions for future development are provided.
Affiliation(s)
- Yongde Zhang
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
  - Foshan Baikang Robot Technology Co., Ltd, Nanhai District, Foshan City, Guangdong Province 528225, China
- Qihang Yuan
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Hafiz Muhammad Muzzammil
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Guoqiang Gao
  - Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin University of Science and Technology, Harbin 150080, China
- Yong Xu
  - Department of Urology, the Third Medical Centre, Chinese PLA (People's Liberation Army) General Hospital, Beijing 100039, China
4. Shao D, Ren L, Ma L. MSF-Net: A Lightweight Multi-Scale Feature Fusion Network for Skin Lesion Segmentation. Biomedicines 2023; 11:1733. [PMID: 37371828] [DOI: 10.3390/biomedicines11061733]
Abstract
Segmentation of skin lesion images facilitates the early diagnosis of melanoma. However, this remains a challenging task due to the diversity of target scales, irregular lesion shapes, low contrast, and blurred boundaries in dermoscopic images. This paper proposes a multi-scale feature fusion network (MSF-Net) based on the comprehensive attention convolutional neural network (CA-Net). We introduce a spatial attention mechanism into the convolution blocks through residual connections to focus on key regions. Meanwhile, Multi-scale Dilated Convolution (MDC) modules and Multi-scale Feature Fusion (MFF) modules are introduced to extract contextual information across scales and adaptively adjust the receptive field size of the feature maps. We conducted extensive experiments on the public ISIC2018 dataset to verify the validity of MSF-Net. Ablation experiments demonstrated the effectiveness of the three modules, and comparisons with existing state-of-the-art networks confirm that MSF-Net achieves better segmentation with fewer parameters.
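The abstract does not specify the internals of the MDC module. The sketch below is only a generic PyTorch illustration of a multi-scale dilated convolution block of the kind described, with parallel dilated branches fused by a 1x1 convolution; the dilation rates, normalization layers, and channel counts are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedConv(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation rates capture
    context at several receptive-field sizes; a 1x1 convolution fuses
    the concatenated branch outputs back to `out_ch` channels."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # padding=d keeps the spatial size unchanged for a 3x3 kernel
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

# Example: a 64-channel feature map passes through with its size unchanged.
# y = MultiScaleDilatedConv(64, 64)(torch.randn(1, 64, 96, 96))  # (1, 64, 96, 96)
```

Enlarging the dilation rate widens the receptive field without adding parameters, which is consistent with the paper's emphasis on a lightweight design.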
Affiliation(s)
- Dangguo Shao
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
  - Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China
- Lifan Ren
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
- Lei Ma
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China