1. Chen Z, Lu Y, Long S, Campello VM, Bai J, Lekadir K. Fetal Head and Pubic Symphysis Segmentation in Intrapartum Ultrasound Image Using a Dual-Path Boundary-Guided Residual Network. IEEE J Biomed Health Inform 2024;28:4648-4659. [PMID: 38739504] [DOI: 10.1109/jbhi.2024.3399762]
Abstract
Accurate segmentation of the fetal head and pubic symphysis in intrapartum ultrasound images and measurement of the fetal angle of progression (AoP) are critical to both outcome prediction and complication prevention in delivery. However, due to the poor quality of perinatal ultrasound imaging, with blurred target boundaries and the relatively small target of the pubic symphysis, fully automated and accurate segmentation remains challenging. In this paper, we propose a dual-path boundary-guided residual network (DBRN), a novel approach to tackling these challenges. The model contains a multi-scale weighted module (MWM) to gather global context information and enhance the feature response within the target region by weighting the feature map. The model also incorporates an enhanced boundary module (EBM) to obtain more precise boundary information. Furthermore, the model introduces a boundary-guided dual-attention residual module (BDRM) for residual learning. BDRM leverages boundary information as prior knowledge and employs spatial attention to simultaneously focus on background and foreground information, in order to capture concealed details and improve segmentation accuracy. Extensive comparative experiments have been conducted on three datasets. The proposed method achieves an average Dice score of 0.908 ± 0.05 and an average Hausdorff distance of 3.396 ± 0.66 mm. Compared with state-of-the-art competitors, the proposed DBRN achieves better results. In addition, the average difference between AoP measured automatically with this model and manual measurements is 6.157°, indicating good consistency and broad application prospects in clinical practice.
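The Dice and Hausdorff figures quoted above are standard overlap and boundary-distance metrics. As a minimal, hedged Python sketch (SciPy-based; the function names and boundary extraction are assumptions, not the authors' code), they can be computed from binary masks as follows:

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred, gt):
    """Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hausdorff_mm(pred, gt, spacing=(1.0, 1.0)):
    """Symmetric Hausdorff distance between mask boundaries, in millimetres."""
    def boundary_points(mask):
        mask = mask.astype(bool)
        edge = mask & ~binary_erosion(mask)            # one-pixel boundary
        return np.argwhere(edge) * np.asarray(spacing)
    p, g = boundary_points(pred), boundary_points(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# pred and gt are 2D arrays in {0, 1}; spacing is the pixel size in mm.
```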
2. Miyahira AK, Kamran SC, Jamaspishvili T, Marshall CH, Maxwell KN, Parolia A, Zorko NA, Pienta KJ, Soule HR. Disrupting prostate cancer research: Challenge accepted; report from the 2023 Coffey-Holden Prostate Cancer Academy Meeting. Prostate 2024;84:993-1015. [PMID: 38682886] [DOI: 10.1002/pros.24721]
Abstract
INTRODUCTION The 2023 Coffey-Holden Prostate Cancer Academy (CHPCA) Meeting, themed "Disrupting Prostate Cancer Research: Challenge Accepted," was convened at the University of California, Los Angeles, Luskin Conference Center, in Los Angeles, CA, from June 22 to 25, 2023. METHODS The 2023 meeting marked the 10th annual CHPCA Meeting, a discussion-oriented scientific think-tank conference convened annually by the Prostate Cancer Foundation that centers on innovative and emerging research topics deemed pivotal for advancing critical unmet needs in prostate cancer research and clinical care. The 2023 CHPCA Meeting was attended by 81 academic investigators and included 40 talks across 8 sessions. RESULTS The central topic areas covered at the meeting included: targeting transcription factor neo-enhancesomes in cancer, AR as a pro-differentiation and oncogenic transcription factor, why few are cured with androgen deprivation therapy and how to change dogma to cure metastatic prostate cancer without castration, reducing prostate cancer morbidity and mortality with genetics, opportunities for radiation to enhance therapeutic benefit in oligometastatic prostate cancer, novel immunotherapeutic approaches, and the new era of artificial intelligence-driven precision medicine. DISCUSSION This article provides an overview of the scientific presentations delivered at the 2023 CHPCA Meeting, so that this knowledge can help advance prostate cancer research worldwide.
Affiliation(s)
- Andrea K Miyahira
- Science Department, Prostate Cancer Foundation, Santa Monica, California, USA
- Sophia C Kamran
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Tamara Jamaspishvili
- Department of Pathology and Laboratory Medicine, SUNY Upstate Medical University, Syracuse, New York, USA
- Catherine H Marshall
- Department of Oncology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Kara N Maxwell
- Department of Medicine-Hematology/Oncology and Department of Genetics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Medicine Service, Corporal Michael J. Crescenz VA Medical Center, Philadelphia, Pennsylvania, USA
- Abhijit Parolia
- Department of Pathology, Rogel Cancer Center, University of Michigan, Ann Arbor, Michigan, USA
- Nicholas A Zorko
- Division of Hematology, Oncology and Transplantation, Department of Medicine, University of Minnesota, Minneapolis, Minnesota, USA
- University of Minnesota Masonic Cancer Center, University of Minnesota, Minneapolis, Minnesota, USA
- Kenneth J Pienta
- The James Buchanan Brady Urological Institute, The Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Howard R Soule
- Science Department, Prostate Cancer Foundation, Santa Monica, California, USA
3. Jiang H, Imran M, Muralidharan P, Patel A, Pensa J, Liang M, Benidir T, Grajo JR, Joseph JP, Terry R, DiBianco JM, Su LM, Zhou Y, Brisbane WG, Shao W. MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images. Comput Med Imaging Graph 2024;112:102326. [PMID: 38211358] [DOI: 10.1016/j.compmedimag.2024.102326]
Abstract
Micro-ultrasound (micro-US) is a novel 29-MHz ultrasound technique that provides 3-4 times higher resolution than traditional ultrasound, potentially enabling low-cost, accurate diagnosis of prostate cancer. Accurate prostate segmentation is crucial for prostate volume measurement, cancer diagnosis, prostate biopsy, and treatment planning. However, prostate segmentation on micro-US is challenging due to artifacts and indistinct borders between the prostate, bladder, and urethra in the midline. This paper presents MicroSegNet, a multi-scale annotation-guided transformer UNet model designed specifically to tackle these challenges. During the training process, MicroSegNet focuses more on regions that are hard to segment (hard regions), characterized by discrepancies between expert and non-expert annotations. We achieve this by proposing an annotation-guided binary cross entropy (AG-BCE) loss that assigns a larger weight to prediction errors in hard regions and a lower weight to prediction errors in easy regions. The AG-BCE loss was seamlessly integrated into the training process through the utilization of multi-scale deep supervision, enabling MicroSegNet to capture global contextual dependencies and local information at various scales. We trained our model using micro-US images from 55 patients, followed by evaluation on 20 patients. Our MicroSegNet model achieved a Dice coefficient of 0.939 and a Hausdorff distance of 2.02 mm, outperforming several state-of-the-art segmentation methods, as well as three human annotators with different experience levels. Our code is publicly available at https://github.com/mirthAI/MicroSegNet and our dataset is publicly available at https://zenodo.org/records/10475293.
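The annotation-guided weighting can be illustrated with a short PyTorch sketch: pixels where expert and non-expert annotations disagree are treated as hard regions and receive a larger loss weight. This is an illustrative sketch only; the weight values and function name are assumptions rather than the released MicroSegNet implementation:

```python
import torch
import torch.nn.functional as F

def ag_bce_loss(logits, expert_mask, nonexpert_mask, w_hard=2.0, w_easy=1.0):
    """Annotation-guided BCE: up-weight pixels where annotators disagree.

    logits:         raw network output, shape (B, 1, H, W)
    expert_mask:    expert annotation in {0, 1}, same shape
    nonexpert_mask: non-expert annotation in {0, 1}, same shape
    """
    hard = (expert_mask != nonexpert_mask).float()        # disagreement map
    weights = w_easy + (w_hard - w_easy) * hard
    per_pixel = F.binary_cross_entropy_with_logits(
        logits, expert_mask.float(), reduction="none")
    return (weights * per_pixel).mean()

# Multi-scale deep supervision (sketch): sum the loss over decoder side
# outputs, each upsampled to the annotation resolution before weighting.
# loss = sum(ag_bce_loss(F.interpolate(s, size=expert_mask.shape[-2:]),
#                        expert_mask, nonexpert_mask) for s in side_outputs)
```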
Affiliation(s)
- Hongxu Jiang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32608, United States
- Muhammad Imran
- Department of Medicine, University of Florida, Gainesville, FL, 32608, United States
- Preethika Muralidharan
- Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, 32608, United States
- Anjali Patel
- College of Medicine, University of Florida, Gainesville, FL, 32608, United States
- Jake Pensa
- Department of Bioengineering, University of California, Los Angeles, CA, 90095, United States
- Muxuan Liang
- Department of Biostatistics, University of Florida, Gainesville, FL, 32608, United States
- Tarik Benidir
- Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Joseph R Grajo
- Department of Radiology, University of Florida, Gainesville, FL, 32608, United States
- Jason P Joseph
- Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Russell Terry
- Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Li-Ming Su
- Department of Urology, University of Florida, Gainesville, FL, 32608, United States
- Yuyin Zhou
- Department of Computer Science and Engineering, University of California, Santa Cruz, CA, 95064, United States
- Wayne G Brisbane
- Department of Urology, University of California, Los Angeles, CA, 90095, United States
- Wei Shao
- Department of Medicine, University of Florida, Gainesville, FL, 32608, United States
4. Aung MTZ, Lim SH, Han J, Yang S, Kang JH, Kim JE, Huh KH, Yi WJ, Heo MS, Lee SS. Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study. Imaging Sci Dent 2024;54:81-91. [PMID: 38571772] [PMCID: PMC10985527] [DOI: 10.5624/isd.20230245]
Abstract
Purpose The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
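The seven networks correspond to the seven non-empty subsets of the three device groups (2^3 - 1 = 7). A small sketch of how such training-set combinations can be enumerated (group labels assumed for illustration, not taken from the study's code):

```python
from itertools import combinations

groups = ["PAN_A", "PAN_B", "PAN_C"]   # RAYSCAN Alpha, OP-100, CS8100

# Every non-empty combination of the three device groups -> 7 training sets.
training_sets = [
    combo
    for k in range(1, len(groups) + 1)
    for combo in combinations(groups, k)
]
print(len(training_sets))  # 7
# ('PAN_A',), ('PAN_B',), ('PAN_C',), ('PAN_A', 'PAN_B'), ('PAN_A', 'PAN_C'),
# ('PAN_B', 'PAN_C'), ('PAN_A', 'PAN_B', 'PAN_C')
```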
Affiliation(s)
- Moe Thu Zar Aung
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Department of Oral Medicine, University of Dental Medicine, Mandalay, Myanmar
- Sang-Heon Lim
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, Korea
- Jiyong Han
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Ju-Hee Kang
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Won-Jin Yi
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, Korea
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
5. King MT, Kehayias CE, Chaunzwa T, Rosen DB, Mahal AR, Wallburn TD, Milligan MG, Dyer MA, Nguyen PL, Orio PF, Harris TC, Buzurovic I, Guthier CV. Observer preference of artificial intelligence-generated versus clinical prostate contours for ultrasound-based high dose rate brachytherapy. Med Phys 2023;50:5935-5943. [PMID: 37665729] [DOI: 10.1002/mp.16716]
Abstract
BACKGROUND For trans-rectal ultrasound (TRUS)-based high dose rate (HDR) prostate brachytherapy, prostate contouring can be challenging due to artifacts from implanted needles, bleeding, and calcifications. PURPOSE To evaluate the geometric accuracy and observer preference of an artificial intelligence (AI) algorithm for generating prostate contours on TRUS images with implanted needles. METHODS We conducted a retrospective study of 150 patients who underwent HDR brachytherapy. These patients were randomly divided into training (104), validation (26), and testing (20) sets. An AI algorithm was trained/validated utilizing the TRUS image and reference (clinical) contours. The algorithm then provided contours for the test set. For evaluation, we calculated the Dice coefficient between AI and reference prostate contours. We then presented AI and reference contours to eight clinician observers and asked observers to select their preference. Observers were blinded to the source of contours. We calculated the percentage of cases in which observers preferred AI contours. Lastly, we evaluated whether the presence of AI contours improved the geometric accuracy of prostate contours provided by five resident observers for a 10-patient subset. RESULTS The median Dice coefficient between AI and reference contours was 0.92 (IQR: 0.90-0.94). Observers preferred AI contours for a median of 57.5% (IQR: 47.5-65.0) of the test cases. For resident observers, the presence of AI contours was associated with a 0.107 (95% CI: 0.086, 0.128; p < 0.001) improvement in Dice coefficient for the 10-patient subset. CONCLUSION The AI algorithm provided high-quality prostate contours on TRUS with implanted needles. Further prospective study is needed to better understand how to incorporate AI prostate contours into the TRUS-based HDR brachytherapy workflow.
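The summary statistics quoted above (median Dice with interquartile range, and the preference rate) can be reproduced from per-case values along the following lines; this is a sketch with invented example numbers, not the authors' analysis code:

```python
import numpy as np

# Per-case Dice coefficients between AI and reference contours (example values only).
dice = np.array([0.93, 0.91, 0.94, 0.90, 0.92, 0.89, 0.95, 0.92, 0.91, 0.93])
median = np.median(dice)
q1, q3 = np.percentile(dice, [25, 75])
print(f"Dice median {median:.2f} (IQR: {q1:.2f}-{q3:.2f})")

# Observer preference: fraction of test cases in which the AI contour was chosen.
prefers_ai = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1], dtype=bool)
print(f"AI contour preferred in {100 * prefers_ai.mean():.1f}% of cases")
```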
Affiliation(s)
- Martin T King
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Christopher E Kehayias
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Tafadzwa Chaunzwa
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Daniel B Rosen
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Amandeep R Mahal
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Tyler D Wallburn
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Michael G Milligan
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- M Aiven Dyer
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Paul L Nguyen
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Peter F Orio
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Thomas C Harris
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Ivan Buzurovic
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Christian V Guthier
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts, USA
6. Hua K, Fang X, Tang Z, Cheng Y, Yu Z. DCAM-NET: A novel domain generalization optic cup and optic disc segmentation pipeline with multi-region and multi-scale convolution attention mechanism. Comput Biol Med 2023;163:107076. [PMID: 37379616] [DOI: 10.1016/j.compbiomed.2023.107076]
Abstract
Fundus images are an essential basis for diagnosing ocular diseases, and convolutional neural networks have shown promising results for accurate fundus image segmentation. However, the difference between the training data (source domain) and the testing data (target domain) will significantly affect the final segmentation performance. This paper proposes a novel framework named DCAM-NET for fundus domain generalization segmentation, which substantially improves the generalization ability of the segmentation model to target domain data and enhances the extraction of detailed information from source domain data. This model can effectively overcome the poor performance caused by cross-domain segmentation. To enhance the adaptability of the segmentation model to target domain data, this paper proposes a multi-scale attention mechanism module (MSA) that operates at the feature extraction level. Different attribute features are extracted and passed to the corresponding scale attention modules, further capturing critical features across channel, position, and spatial regions. The MSA module also integrates the characteristics of the self-attention mechanism: it can capture dense context information, and the aggregation of multi-feature information effectively enhances the generalization of the model when dealing with unknown domain data. In addition, this paper proposes the multi-region weight fusion convolution module (MWFC), which is essential for the segmentation model to extract feature information from the source domain data accurately. Fusing multiple region weights with convolutional kernel weights over the image enhances the model's adaptability to information at different image locations, and this weight fusion deepens the capacity and depth of the model, enhancing its ability to learn from multiple regions of the source domain. Our experiments on fundus data for optic cup/disc segmentation show that introducing the MSA and MWFC modules effectively improves segmentation on unknown domains, and the proposed method performs significantly better than other current methods for domain generalization segmentation of the optic cup/disc.
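As a generic illustration of the kind of channel and spatial attention such modules build on (this is not the authors' MSA/MWFC implementation; the block structure, kernel sizes, and names are assumptions), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel + spatial attention block (illustrative only)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel attention: squeeze spatial dims, then re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)              # channel re-weighting
        avg = x.mean(dim=1, keepdim=True)         # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)        # (B, 1, H, W)
        attn = self.spatial_gate(torch.cat([avg, mx], dim=1))
        return x * attn                           # spatial re-weighting

# y = ChannelSpatialAttention(64)(torch.randn(2, 64, 32, 32))  # shape preserved
```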
Affiliation(s)
- Kaiwen Hua
- School of Computer Science and Engineering, Anhui University of Science and Technology, 232001, Huainan, Anhui, China
- Xianjin Fang
- School of Computer Science and Engineering, Anhui University of Science and Technology, 232001, Huainan, Anhui, China
- Zhiri Tang
- Academy for Engineering and Technology, Fudan University, 200433, Shanghai, China
- Ying Cheng
- School of Artificial Intelligence Academy, Anhui University of Science and Technology, 232001, Huainan, Anhui, China
- Zekuan Yu
- Academy for Engineering and Technology, Fudan University, 200433, Shanghai, China
7. Rodrigues NM, Silva S, Vanneschi L, Papanikolaou N. A Comparative Study of Automated Deep Learning Segmentation Models for Prostate MRI. Cancers (Basel) 2023;15:1467. [PMID: 36900261] [PMCID: PMC10001231] [DOI: 10.3390/cancers15051467]
Abstract
Prostate cancer is one of the most common forms of cancer globally, affecting roughly one in every eight men according to the American Cancer Society. Although the survival rate for prostate cancer is high despite the very high incidence rate, there is an urgent need to improve and develop new clinical aid systems to help detect and treat prostate cancer in a timely manner. In this retrospective study, our contributions are twofold: First, we perform a comparative unified study of different commonly used segmentation models for prostate gland and zone (peripheral and transition) segmentation. Second, we present and evaluate an additional research question regarding the effectiveness of using an object detector as a pre-processing step to aid in the segmentation process. We perform a thorough evaluation of the deep learning models on two public datasets, where one is used for cross-validation and the other as an external test set. Overall, the results reveal that the choice of model is relatively inconsequential, as the majority produce non-significantly different scores, apart from nnU-Net, which consistently outperforms the others, and that the models trained on data cropped by the object detector often generalize better, despite performing worse during cross-validation.
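The object-detector pre-processing step amounts to cropping each image to the detected prostate bounding box (plus a margin) before segmentation. A minimal sketch under assumed array and box conventions (not the authors' pipeline):

```python
import numpy as np

def crop_to_box(image, box, margin=0.1):
    """Crop a 2D image (or MRI slice) to a detector bounding box with a relative margin.

    box: (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    x0, y0, x1, y1 = box
    dx, dy = (x1 - x0) * margin, (y1 - y0) * margin
    h, w = image.shape[:2]
    r0, r1 = int(max(0, y0 - dy)), int(min(h, y1 + dy))
    c0, c1 = int(max(0, x0 - dx)), int(min(w, x1 + dx))
    return image[r0:r1, c0:c1]

# Segmentation models are then trained and evaluated on the cropped images;
# predicted masks are pasted back into the original frame before scoring.
```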
Affiliation(s)
- Nuno M. Rodrigues
- LASIGE, Faculty of Sciences, University of Lisbon, 1749-016 Lisbon, Portugal
- Champalimaud Foundation, Centre for the Unknown, 1400-038 Lisbon, Portugal
- Sara Silva
- LASIGE, Faculty of Sciences, University of Lisbon, 1749-016 Lisbon, Portugal
- Leonardo Vanneschi
- NOVA Information Management School (NOVA IMS), Campus de Campolide, Universidade Nova de Lisboa, 1070-312 Lisboa, Portugal