1
Zhang H, Yang X, Cui Y, Wang Q, Zhao J, Li D. A novel GAN-based three-axis mutually supervised super-resolution reconstruction method for rectal cancer MR image. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 257:108426. [PMID: 39368440 DOI: 10.1016/j.cmpb.2024.108426] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2024] [Revised: 08/12/2024] [Accepted: 09/14/2024] [Indexed: 10/07/2024]
Abstract
BACKGROUND AND OBJECTIVE This study aims to enhance the resolution in the axial direction of rectal cancer magnetic resonance (MR) imaging scans to improve the accuracy of visual interpretation and quantitative analysis. MR imaging is a critical technique for the diagnosis and treatment planning of rectal cancer. However, obtaining high-resolution MR images is both time-consuming and costly. As a result, many hospitals store only a limited number of slices, often leading to low-resolution MR images, particularly in the axial plane. Given the importance of image resolution in accurate assessment, these low-resolution images frequently lack the necessary detail, posing substantial challenges for both human experts and computer-aided diagnostic systems. Image super-resolution (SR), a technique developed to enhance image resolution, was originally applied to natural images. Its success has since led to its application in various other tasks, especially in the reconstruction of low-resolution MR images. However, most existing SR methods fail to account for all anatomical planes during reconstruction, leading to unsatisfactory results when applied to rectal cancer MR images. METHODS In this paper, we propose a GAN-based three-axis mutually supervised super-resolution reconstruction method tailored for low-resolution rectal cancer MR images. Our approach involves performing one-dimensional (1D) intra-slice SR reconstruction along the axial direction for both the sagittal and coronal planes, coupled with inter-slice SR reconstruction based on slice synthesis in the axial direction. To further enhance the accuracy of super-resolution reconstruction, we introduce a consistency supervision mechanism across the reconstruction results of different axes, promoting mutual learning among the axes.
A key innovation of our method is the introduction of Depth-GAN, which synthesizes intermediate slices in the axial plane by incorporating depth information within a Generative Adversarial Network (GAN) framework. Additionally, we enhance the accuracy of intermediate slice synthesis by employing a combination of supervised and unsupervised interactive learning techniques throughout the process. RESULTS We conducted extensive ablation studies and comparative analyses with existing methods to validate the effectiveness of our approach. On the test set from Shanxi Cancer Hospital, our method achieved a Peak Signal-to-Noise Ratio (PSNR) of 34.62 and a Structural Similarity Index (SSIM) of 96.34%. These promising results demonstrate the superiority of our method.
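The reported PSNR (in dB) and SSIM follow their standard definitions, with an SSIM of 96.34% corresponding to 0.9634 on the usual 0-1 scale. Below is a minimal NumPy sketch of both metrics for reference; this is illustrative only (the abstract does not state the window size or data range used, so SSIM is shown in its simplified single-window form rather than the windowed average computed by standard toolkits):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak Signal-to-Noise Ratio in decibels."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, test, data_range=255.0):
    """Single-window SSIM; the standard metric averages this over local windows."""
    x, y = np.asarray(ref, float), np.asarray(test, float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

In practice, libraries such as scikit-image provide windowed implementations of both metrics; the sketch above is only meant to make the reported numbers concrete.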
Affiliation(s)
- Huiting Zhang
- College of Computer Science and Technology, Taiyuan University of Technology, Jinzhong 030600, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan 030024, China; Intelligent Perception Engineering Technology Centre of Shanxi, Jinzhong 030600, China
- Xiaotang Yang
- Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan 030013, China
- Yanfen Cui
- Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan 030013, China
- Qiang Wang
- College of Computer and Network Engineering, Shanxi Datong University, Datong 037009, China
- Jumin Zhao
- College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Jinzhong 030600, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan 030024, China; Intelligent Perception Engineering Technology Centre of Shanxi, Jinzhong 030600, China
- Dengao Li
- College of Computer Science and Technology, Taiyuan University of Technology, Jinzhong 030600, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan 030024, China; Intelligent Perception Engineering Technology Centre of Shanxi, Jinzhong 030600, China.
2
Bai X, Wang H, Qin Y, Han J, Yu N. MatchMorph: A real-time pre- and intra-operative deformable image registration framework for MRI-guided surgery. Comput Biol Med 2024; 180:108948. [PMID: 39121681 DOI: 10.1016/j.compbiomed.2024.108948] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2023] [Revised: 06/27/2024] [Accepted: 07/25/2024] [Indexed: 08/12/2024]
Abstract
PURPOSE The technological advancements in surgical robots compatible with magnetic resonance imaging (MRI) have created an indispensable demand for real-time deformable image registration (DIR) of pre- and intra-operative MRI, but there is a lack of relevant methods. Challenges arise from dimensionality mismatch, resolution discrepancy, non-rigid deformation, and the requirement for real-time registration. METHODS In this paper, we propose a real-time DIR framework called MatchMorph, specifically designed for the registration of low-resolution local intraoperative MRI and high-resolution global preoperative MRI. First, a super-resolution network based on global inference is developed to enhance the resolution of intraoperative MRI to match that of preoperative MRI, resolving the resolution discrepancy. Second, a fast-matching algorithm is designed to identify the optimal position of the intraoperative MRI within the corresponding preoperative MRI to address the dimensionality mismatch. Further, a cross-attention-based dual-stream DIR network is constructed to estimate the deformation between pre- and intra-operative MRI in real time. RESULTS We conducted comprehensive experiments on the publicly available IXI and OASIS datasets to evaluate the performance of the proposed MatchMorph framework. Compared to the state-of-the-art (SOTA) network TransMorph, the designed dual-stream DIR network of MatchMorph achieved superior performance with a 1.306 mm smaller Hausdorff distance (HD) and a 0.07 mm smaller average surface distance (ASD) on the IXI dataset. Furthermore, the MatchMorph framework demonstrates an inference speed of approximately 280 ms. CONCLUSIONS The qualitative and quantitative registration results obtained from high-resolution global preoperative MRI and simulated low-resolution local intraoperative MRI validated the effectiveness and efficiency of the proposed MatchMorph framework.
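The HD and ASD scores cited above quantify worst-case and average disagreement between registered boundaries. A minimal NumPy sketch over small 2D point sets illustrates both definitions; this is a generic illustration, not the paper's evaluation code (which would operate on extracted anatomical surfaces):

```python
import numpy as np

def pairwise_dist(a, b):
    """All Euclidean distances between point sets a (N, d) and b (M, d)."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def hausdorff(a, b):
    """Symmetric Hausdorff distance: the worst-case nearest-neighbor gap."""
    d = pairwise_dist(a, b)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def average_surface_distance(a, b):
    """Mean nearest-neighbor gap, averaged over both directions."""
    d = pairwise_dist(a, b)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Because HD takes a maximum while ASD takes a mean, a single outlier point inflates HD but barely moves ASD, which is why registration papers commonly report both.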
Affiliation(s)
- Xinhao Bai
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China; Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China
- Hongpeng Wang
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China; Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China
- Yanding Qin
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China; Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China
- Jianda Han
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China; Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China
- Ningbo Yu
- College of Artificial Intelligence, Nankai University, Tianjin, 300350, China; Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Nankai University, Tianjin, 300350, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen, 518083, China.
3
Xing X, Li L, Sun M, Yang J, Zhu X, Peng F, Du J, Feng Y. Deep-learning-based 3D super-resolution CT radiomics model: Predict the possibility of the micropapillary/solid component of lung adenocarcinoma. Heliyon 2024; 10:e34163. [PMID: 39071606 PMCID: PMC11279278 DOI: 10.1016/j.heliyon.2024.e34163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2024] [Revised: 07/03/2024] [Accepted: 07/04/2024] [Indexed: 07/30/2024] Open
Abstract
Objective Invasive lung adenocarcinoma (ILA) with micropapillary (MPP)/solid (SOL) components has a poor prognosis. Preoperative identification is essential for decision-making for subsequent treatment. This study aims to construct and evaluate a super-resolution (SR) enhanced radiomics model designed to predict the presence of MPP/SOL components preoperatively to provide more accurate and individualized treatment planning. Methods Between March 2018 and November 2023, patients who underwent curative-intent ILA resection were included in the study. We implemented a deep transfer learning network on CT images to improve their resolution, resulting in the acquisition of preoperative super-resolution CT (SR-CT) images. Models were developed using radiomic features extracted from CT and SR-CT images. These models employed a range of classifiers, including Logistic Regression (LR), Support Vector Machines (SVM), k-Nearest Neighbors (KNN), Random Forest, Extra Trees, Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Multilayer Perceptron (MLP). The diagnostic performance of the models was assessed by measuring the area under the curve (AUC). Results A total of 245 patients were recruited, of whom 109 (44.5%) were diagnosed with ILA with MPP/SOL components. In the analysis of CT images, the SVM model exhibited outstanding effectiveness, recording AUC scores of 0.864 in the training group and 0.761 in the testing group. When this SVM approach was used to develop a radiomics model with SR-CT images, it recorded AUCs of 0.904 in the training and 0.819 in the test cohorts. The calibration curves indicated a high goodness of fit, while decision curve analysis (DCA) highlighted the model's clinical utility. Conclusion The study successfully constructed and evaluated a deep learning (DL)-enhanced SR-CT radiomics model. This model outperformed conventional CT radiomics models in predicting MPP/SOL patterns in ILA.
Continued research and broader validation are necessary to fully harness and refine the clinical potential of radiomics when combined with SR reconstruction technology.
Affiliation(s)
- Xiaowei Xing
- Cancer Center, Department of Radiology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, China
- Liangping Li
- Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Mingxia Sun
- Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Jiahu Yang
- Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Xinhai Zhu
- Department of Thoracic Surgery, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Fang Peng
- Department of Pathology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Jianzong Du
- Department of Respiratory Medicine, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Yue Feng
- Cancer Center, Department of Radiology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, China
4
Schmidt B, Soerensen SJC, Bhambhvani HP, Fan RE, Bhattacharya I, Choi MH, Kunder CA, Kao CS, Higgins J, Rusu M, Sonn GA. External validation of an artificial intelligence model for Gleason grading of prostate cancer on prostatectomy specimens. BJU Int 2024. [PMID: 38989669 DOI: 10.1111/bju.16464] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/12/2024]
Abstract
OBJECTIVES To externally validate the performance of the DeepDx Prostate artificial intelligence (AI) algorithm (Deep Bio Inc., Seoul, South Korea) for Gleason grading on whole-mount prostate histopathology, considering potential variations observed when applying AI models trained on biopsy samples to radical prostatectomy (RP) specimens due to inherent differences in tissue representation and sample size. MATERIALS AND METHODS The commercially available DeepDx Prostate AI algorithm is an automated Gleason grading system that was previously trained using 1133 prostate core biopsy images and validated on 700 biopsy images from two institutions. We assessed the AI algorithm's performance, which outputs Gleason patterns (3, 4, or 5), on 500 1-mm² tiles created from 150 whole-mount RP specimens from a third institution. These patterns were then grouped into grade groups (GGs) for comparison with expert pathologist assessments. The reference standard was the International Society of Urological Pathology GG as established by two experienced uropathologists, with a third expert to adjudicate discordant cases. We defined the main metric as the agreement with the reference standard, using Cohen's kappa. RESULTS The agreement between the two experienced pathologists in determining GGs at the tile level had a quadratically weighted Cohen's kappa of 0.94. The agreement between the AI algorithm and the reference standard in differentiating cancerous vs non-cancerous tissue had an unweighted Cohen's kappa of 0.91. Additionally, the AI algorithm's agreement with the reference standard in classifying tiles into GGs had a quadratically weighted Cohen's kappa of 0.89. In distinguishing cancerous vs non-cancerous tissue, the AI algorithm achieved a sensitivity of 0.997 and specificity of 0.88; in classifying GG ≥2 vs GG 1 and non-cancerous tissue, it demonstrated a sensitivity of 0.98 and specificity of 0.85.
CONCLUSION The DeepDx Prostate AI algorithm had excellent agreement with expert uropathologists and performance in cancer identification and grading on RP specimens, despite being trained on biopsy specimens from an entirely different patient population.
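Quadratically weighted Cohen's kappa, the agreement statistic used throughout this study, penalizes disagreements by the squared distance between ordinal grades, so confusing GG 1 with GG 5 costs far more than confusing GG 1 with GG 2. A minimal NumPy sketch of the statistic (the label vectors here are hypothetical, not the study's data):

```python
import numpy as np

def quadratic_weighted_kappa(y1, y2, n_classes):
    """Cohen's kappa with quadratic disagreement weights for ordinal labels."""
    y1, y2 = np.asarray(y1), np.asarray(y2)
    observed = np.zeros((n_classes, n_classes))       # observed confusion matrix
    for a, b in zip(y1, y2):
        observed[a, b] += 1
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # chance-expected confusion matrix from the marginal distributions
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

scikit-learn's `cohen_kappa_score(y1, y2, weights='quadratic')` computes the same statistic and is the usual choice in practice.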
Affiliation(s)
- Bogdana Schmidt
- Division of Urology, Department of Surgery, Huntsman Cancer Hospital, University of Utah, Salt Lake City, UT, USA
- Simon John Christoph Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Hriday P Bhambhvani
- Department of Urology, Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA
- Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Moon Hyung Choi
- Department of Radiology, College of Medicine, Eunpyeong St. Mary's Hospital, The Catholic University of Korea, Seoul, Korea
- Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- Chia-Sui Kao
- Department of Pathology and Laboratory Medicine, Cleveland Clinic, Cleveland, OH, USA
- John Higgins
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- Mirabela Rusu
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Geoffrey A Sonn
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
5
Gou F, Liu J, Xiao C, Wu J. Research on Artificial-Intelligence-Assisted Medicine: A Survey on Medical Artificial Intelligence. Diagnostics (Basel) 2024; 14:1472. [PMID: 39061610 PMCID: PMC11275417 DOI: 10.3390/diagnostics14141472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2024] [Revised: 07/04/2024] [Accepted: 07/05/2024] [Indexed: 07/28/2024] Open
Abstract
With improving economic conditions and rising living standards, people are paying ever more attention to their health. They are beginning to place their hopes on machines, expecting artificial intelligence (AI) to provide a more humanized medical environment and personalized services, thereby greatly expanding supply and bridging the gap between healthcare resource supply and demand. The development of IoT technology, the arrival of the 5G and 6G communication era, and, in particular, the enhancement of computing capabilities have further promoted the development and application of AI-assisted healthcare. Currently, research on and application of artificial intelligence in medical assistance continue to deepen and expand. AI holds immense economic value and has many potential applications for medical institutions, patients, and healthcare professionals. It can enhance medical efficiency, reduce healthcare costs, improve the quality of healthcare services, and provide a more intelligent and humanized service experience for healthcare professionals and patients. This study elaborates on the history and timeline of AI development in the medical field, the types of AI technologies in healthcare informatics, the application of AI in the medical field, and the opportunities and challenges of AI in medicine. The combination of healthcare and artificial intelligence has a profound impact on human life, improving health levels and quality of life and changing lifestyles.
Affiliation(s)
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Chunwen Xiao
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
6
Gundogdu B, Medved M, Chatterjee A, Engelmann R, Rosado A, Lee G, Oren NC, Oto A, Karczmar GS. Self-supervised multicontrast super-resolution for diffusion-weighted prostate MRI. Magn Reson Med 2024; 92:319-331. [PMID: 38308149 PMCID: PMC11288973 DOI: 10.1002/mrm.30047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 01/19/2024] [Accepted: 01/23/2024] [Indexed: 02/04/2024]
Abstract
PURPOSE This study addresses the challenge of low resolution and signal-to-noise ratio (SNR) in diffusion-weighted images (DWI), which are pivotal for cancer detection. Traditional methods increase SNR at high b-values through multiple acquisitions, but this results in diminished image resolution due to motion-induced variations. Our research aims to enhance spatial resolution by exploiting the global structure within multicontrast DWI scans and millimetric motion between acquisitions. METHODS We introduce a novel approach employing a "Perturbation Network" to learn subvoxel-size motions between scans, trained jointly with an implicit neural representation (INR) network. INR encodes the DWI as a continuous volumetric function, treating voxel intensities of low-resolution acquisitions as discrete samples. By evaluating this function on a finer grid, our model predicts higher-resolution signal intensities for intermediate voxel locations. The Perturbation Network's motion-correction efficacy was validated through experiments on biological phantoms and in vivo prostate scans. RESULTS Quantitative analyses revealed significantly higher structural similarity measures of super-resolution images to ground-truth high-resolution images compared to high-order interpolation (p < 0.005). In blind qualitative experiments, 96.1% of super-resolution images were assessed to have superior diagnostic quality compared to interpolated images. CONCLUSION High-resolution details in DWI can be obtained without high-resolution training data: the proposed method does not require a super-resolution training set. This is important in clinical practice because the method can easily be adapted to images with different scanner settings or body parts, whereas supervised methods do not offer such an option.
Affiliation(s)
- Batuhan Gundogdu
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Milica Medved
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Roger Engelmann
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Avery Rosado
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Grace Lee
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Nisa C Oren
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Aytekin Oto
- Department of Radiology, University of Chicago, Chicago, Illinois, USA
7
Jiang C, Gedeon A, Lyu Y, Landgraf E, Zhang Y, Hou X, Kondepudi A, Chowdury A, Lee H, Hollon T. Super-resolution of biomedical volumes with 2D supervision. CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS. IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. WORKSHOPS 2024; 2024:6966-6977. [PMID: 39355755 PMCID: PMC11444667 DOI: 10.1109/cvprw63382.2024.00690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/03/2024]
Abstract
Volumetric biomedical microscopy has the potential to increase the diagnostic information extracted from clinical tissue specimens and improve the diagnostic accuracy of both human pathologists and computational pathology models. Unfortunately, barriers to integrating 3-dimensional (3D) volumetric microscopy into clinical medicine include long imaging times, poor depth/z-axis resolution, and an insufficient amount of high-quality volumetric data. Leveraging the abundance of high-resolution 2D microscopy data, we introduce masked slice diffusion for super-resolution (MSDSR), which exploits the inherent equivalence in the data-generating distribution across all spatial dimensions of biological specimens. This intrinsic characteristic allows for super-resolution models trained on high-resolution images from one plane (e.g., XY) to effectively generalize to others (XZ, YZ), overcoming the traditional dependency on orientation. We focus on the application of MSDSR to stimulated Raman histology (SRH), an optical imaging modality for biological specimen analysis and intraoperative diagnosis, characterized by its rapid acquisition of high-resolution 2D images but slow and costly optical z-sectioning. To evaluate MSDSR's efficacy, we introduce a new performance metric, SliceFID, and demonstrate MSDSR's superior performance over baseline models through extensive evaluations. Our findings reveal that MSDSR not only significantly enhances the quality and resolution of 3D volumetric data, but also addresses major obstacles hindering the broader application of 3D volumetric microscopy in clinical diagnostics and biomedical research.
8
Shao W, Vesal S, Soerensen SJC, Bhattacharya I, Golestani N, Yamashita R, Kunder CA, Fan RE, Ghanouni P, Brooks JD, Sonn GA, Rusu M. RAPHIA: A deep learning pipeline for the registration of MRI and whole-mount histopathology images of the prostate. Comput Biol Med 2024; 173:108318. [PMID: 38522253 DOI: 10.1016/j.compbiomed.2024.108318] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 02/14/2024] [Accepted: 03/12/2024] [Indexed: 03/26/2024]
Abstract
Image registration can map the ground truth extent of prostate cancer from histopathology images onto MRI, facilitating the development of machine learning methods for early prostate cancer detection. Here, we present RAdiology PatHology Image Alignment (RAPHIA), an end-to-end pipeline for efficient and accurate registration of MRI and histopathology images. RAPHIA automates several time-consuming manual steps in existing approaches including prostate segmentation, estimation of the rotation angle and horizontal flipping in histopathology images, and estimation of MRI-histopathology slice correspondences. By utilizing deep learning registration networks, RAPHIA substantially reduces computational time. Furthermore, RAPHIA obviates the need for a multimodal image similarity metric by transferring histopathology image representations to MRI image representations and vice versa. With the assistance of RAPHIA, novice users achieved expert-level performance, and their mean error in estimating histopathology rotation angle was reduced by 51% (12 degrees vs 8 degrees), their mean accuracy of estimating histopathology flipping was increased by 5% (95.3% vs 100%), and their mean error in estimating MRI-histopathology slice correspondences was reduced by 45% (1.12 slices vs 0.62 slices). When compared to a recent conventional registration approach and a deep learning registration approach, RAPHIA achieved better mapping of histopathology cancer labels, with an improved mean Dice coefficient of cancer regions outlined on MRI and the deformed histopathology (0.44 vs 0.48 vs 0.50), and a reduced mean per-case processing time (51 vs 11 vs 4.5 min). The improved performance by RAPHIA allows efficient processing of large datasets for the development of machine learning models for prostate cancer detection on MRI. Our code is publicly available at: https://github.com/pimed/RAPHIA.
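The Dice coefficient used above to compare cancer labels on MRI with the deformed histopathology is defined as 2|A∩B| / (|A| + |B|) over the two label regions. A minimal NumPy sketch over binary masks (hypothetical inputs for illustration, not the RAPHIA pipeline's evaluation code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap of two binary masks; 1.0 means identical regions."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # convention: two empty masks are treated as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / total
```

A Dice of 0.50, as reported for RAPHIA, means the overlapping region is half the average size of the two labels, which is considered reasonable for MRI-histopathology cancer label mapping given the large cross-modality deformations involved.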
Affiliation(s)
- Wei Shao
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States; Department of Medicine, University of Florida, Gainesville, FL, 32610, United States.
- Sulaiman Vesal
- Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Simon J C Soerensen
- Department of Urology, Stanford University, Stanford, CA, 94305, United States; Department of Epidemiology and Population Health, Stanford University, Stanford, CA, 94305, United States
- Indrani Bhattacharya
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- Negar Golestani
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- Rikiya Yamashita
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94305, United States
- Christian A Kunder
- Department of Pathology, Stanford University, Stanford, CA, 94305, United States
- Richard E Fan
- Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Pejman Ghanouni
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- James D Brooks
- Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Geoffrey A Sonn
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States; Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Mirabela Rusu
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States.
9
Levy JJ, Davis MJ, Chacko RS, Davis MJ, Fu LJ, Goel T, Pamal A, Nafi I, Angirekula A, Suvarna A, Vempati R, Christensen BC, Hayden MS, Vaickus LJ, LeBoeuf MR. Intraoperative margin assessment for basal cell carcinoma with deep learning and histologic tumor mapping to surgical site. NPJ Precis Oncol 2024; 8:2. [PMID: 38172524 PMCID: PMC10764333 DOI: 10.1038/s41698-023-00477-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Accepted: 11/14/2023] [Indexed: 01/05/2024] Open
Abstract
Successful treatment of solid cancers relies on complete surgical excision of the tumor, either for definitive treatment or before adjuvant therapy. Intraoperative and postoperative radial sectioning, the most common form of margin assessment, can lead to incomplete excision and increase the risk of recurrence and repeat procedures. Mohs micrographic surgery is associated with complete removal of basal cell and squamous cell carcinoma through real-time margin assessment of 100% of the peripheral and deep margins. Real-time assessment in many tumor types is constrained by tissue size, complexity, and specimen processing/assessment time during general anesthesia. We developed an artificial intelligence platform to reduce tissue preprocessing and histological assessment time through automated grossing recommendations and mapping and orientation of the tumor to the surgical specimen. Using basal cell carcinoma as a model system, our results demonstrate that this approach can address surgical laboratory efficiency bottlenecks for rapid and complete intraoperative margin assessment.
Affiliation(s)
- Joshua J Levy
- Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA.
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA.
- Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Program in Quantitative Biomedical Sciences, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA.
- Matthew J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Michael J Davis
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Lucy J Fu
- Geisel School of Medicine at Dartmouth, Hanover, NH, 03755, USA
- Tarushii Goel
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Akash Pamal
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Virginia, Charlottesville, VA, 22903, USA
- Irfan Nafi
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Stanford University, Palo Alto, CA, 94305, USA
- Abhinav Angirekula
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- University of Illinois Urbana-Champaign, Champaign, IL, 61820, USA
- Anish Suvarna
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Ram Vempati
- Thomas Jefferson High School for Science and Technology, Alexandria, VA, 22312, USA
- Brock C Christensen
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Molecular and Systems Biology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Department of Community and Family Medicine, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Matthew S Hayden
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
- Louis J Vaickus
- Emerging Diagnostic and Investigative Technologies, Clinical Genomics and Advanced Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Matthew R LeBoeuf
- Department of Dermatology, Geisel School of Medicine at Dartmouth, Hanover, NH, 03756, USA
10
Lin J, Miao Q, Surawech C, Raman SS, Zhao K, Wu HH, Sung K. High-Resolution 3D MRI With Deep Generative Networks via Novel Slice-Profile Transformation Super-Resolution. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2023; 11:95022-95036. [PMID: 37711392 PMCID: PMC10501177 DOI: 10.1109/access.2023.3307577] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 09/16/2023]
Abstract
High-resolution magnetic resonance imaging (MRI) sequences, such as 3D turbo or fast spin-echo (TSE/FSE) imaging, are clinically desirable but suffer from long scan times and blurring when reformatted into preferred orientations. Instead, multi-slice two-dimensional (2D) TSE imaging is commonly used because of its high in-plane resolution, but it is limited clinically by poor through-plane resolution due to elongated voxels and the inability to generate multi-planar reformations due to staircase artifacts. Therefore, multiple 2D TSE scans are acquired in various orthogonal imaging planes, increasing the overall MRI scan time. In this study, we propose a novel slice-profile transformation super-resolution (SPTSR) framework with deep generative learning for through-plane super-resolution (SR) of multi-slice 2D TSE imaging. The deep generative networks were trained on low-resolution input synthesized via slice-profile downsampling (SP-DS), and the trained networks inferred on slice-profile-convolved (SP-conv) testing input for 5.5x through-plane SR. The network output was further slice-profile deconvolved (SP-deconv) to achieve isotropic super-resolution. Compared to the SMORE SR method and networks trained with conventional downsampling, our SPTSR framework demonstrated the best overall image quality in 50 testing cases, as evaluated by two abdominal radiologists. Quantitative analysis cross-validated the expert reader study results. 3D simulation experiments confirmed the quantitative improvement of the proposed SPTSR and the effectiveness of the SP-deconv step compared to 3D ground truths. Ablation studies were conducted on the individual contributions of SP-DS and SP-conv, the network structure, the training dataset size, and different slice profiles.
Affiliation(s)
- Jiahao Lin
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Department of Electrical and Computer Engineering, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Qi Miao
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning 110001, China
- Chuthaporn Surawech
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Department of Radiology, Faculty of Medicine, Chulalongkorn University, Bangkok 10330, Thailand
- Division of Diagnostic Radiology, Department of Radiology, King Chulalongkorn Memorial Hospital, Bangkok 10330, Thailand
- Steven S Raman
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Kai Zhao
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Holden H Wu
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
- Kyunghyun Sung
- Department of Radiological Sciences, University of California at Los Angeles, Los Angeles, CA 90095, USA
11
Xu M, Cao L, Lu D, Hu Z, Yue Y. Application of Swarm Intelligence Optimization Algorithms in Image Processing: A Comprehensive Review of Analysis, Synthesis, and Optimization. Biomimetics (Basel) 2023; 8:235. [PMID: 37366829 DOI: 10.3390/biomimetics8020235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Revised: 05/27/2023] [Accepted: 06/01/2023] [Indexed: 06/28/2023] Open
Abstract
Image processing has long been a prominent and challenging topic in artificial intelligence. With the rise of machine learning and deep learning methods, swarm intelligence algorithms have become a popular research direction, and combining image processing techniques with swarm intelligence algorithms has emerged as a new and effective approach. A swarm intelligence algorithm is an intelligent computing method formed by simulating the evolutionary laws, behavioral characteristics, and thinking patterns of insects, birds, natural phenomena, and other biological populations. Such algorithms offer efficient, parallel global optimization capabilities and strong optimization performance. In this paper, the ant colony algorithm, particle swarm optimization algorithm, sparrow search algorithm, bat algorithm, thimble colony algorithm, and other swarm intelligence optimization algorithms are studied in depth. The models, features, improvement strategies, and application fields of these algorithms in image processing, such as image segmentation, image matching, image classification, image feature extraction, and image edge detection, are comprehensively reviewed. The theoretical research, improvement strategies, and applications in image processing are analyzed and compared. Drawing on the current literature, the improvement methods of the above algorithms and their application to image processing technology are summarized. Representative algorithms combining swarm intelligence with image segmentation are extracted for tabulated analysis and summary. Finally, the unified framework, common characteristics, and differences of swarm intelligence algorithms are summarized, open problems are raised, and future trends are projected.
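As a concrete illustration of one reviewed family applied to image processing, a minimal particle swarm optimization of a segmentation threshold might look as follows. This is a toy sketch on synthetic pixel data, not drawn from the review; the inertia and acceleration coefficients are common textbook defaults.

```python
import numpy as np

def between_class_variance(pixels, t):
    """Otsu's criterion: variance between the two classes split at t."""
    lo, hi = pixels[pixels <= t], pixels[pixels > t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / pixels.size, hi.size / pixels.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def pso_threshold(pixels, n_particles=20, n_iter=50, seed=0):
    """Toy PSO maximizing the between-class variance of a threshold."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, 255, n_particles)       # particle positions
    vel = np.zeros(n_particles)                  # particle velocities
    pbest = pos.copy()
    pbest_val = np.array([between_class_variance(pixels, t) for t in pos])
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # Standard PSO update: inertia + cognitive + social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 255)
        val = np.array([between_class_variance(pixels, t) for t in pos])
        better = val > pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest

# Bimodal toy "image": dark background around 50, bright object around 200.
pixels = np.concatenate([np.full(500, 50.0), np.full(500, 200.0)])
t = pso_threshold(pixels)
print(50 <= t < 200)   # a sensible threshold separates the two modes
```

The other reviewed algorithms differ mainly in how candidate positions are generated and updated, while the image-processing objective (here, Otsu's criterion) stays the same.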
Affiliation(s)
- Minghai Xu
- School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
- Li Cao
- School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
- Dongwan Lu
- Intelligent Information Systems Institute, Wenzhou University, Wenzhou 325035, China
- Zhongyi Hu
- Intelligent Information Systems Institute, Wenzhou University, Wenzhou 325035, China
- Yinggao Yue
- School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
- Intelligent Information Systems Institute, Wenzhou University, Wenzhou 325035, China
12
Singh A, Kwiecinski J, Cadet S, Killekar A, Tzolos E, Williams MC, Dweck MR, Newby DE, Dey D, Slomka PJ. Automated nonlinear registration of coronary PET to CT angiography using pseudo-CT generated from PET with generative adversarial networks. J Nucl Cardiol 2023; 30:604-615. [PMID: 35701650 PMCID: PMC9747983 DOI: 10.1007/s12350-022-03010-8] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 05/04/2022] [Indexed: 12/15/2022]
Abstract
BACKGROUND Coronary 18F-sodium-fluoride (18F-NaF) positron emission tomography (PET) has shown promise in imaging coronary artery disease activity. Currently, image processing remains subjective due to the need for manual registration of PET and computed tomography (CT) angiography data. We aimed to develop a novel, fully automated method to register coronary 18F-NaF PET to CT angiography using pseudo-CT generated by generative adversarial networks (GAN). METHODS A total of 169 patients, 139 in the training set and 30 in the testing set, were considered for generation of pseudo-CT from non-attenuation-corrected (NAC) PET using GAN. Non-rigid registration was used to register pseudo-CT to CT angiography, and the resulting transformation was used to align PET with CT angiography. We compared translations, maximal standard uptake value (SUVmax), and target-to-background ratio (TBRmax) at the location of plaques obtained after observer and automated alignment. RESULTS Automatic end-to-end registration was performed for 30 patients with 88 coronary vessels and took 27.5 seconds per patient. The difference in displacement motion vectors between GAN-based and observer-based registration in the x-, y-, and z-directions was 0.8 ± 3.0, 0.7 ± 3.0, and 1.7 ± 3.9 mm, respectively. TBRmax had a coefficient of repeatability (CR) of 0.31, a mean bias of 0.03, and narrow limits of agreement (LOA) (95% LOA: -0.29 to 0.33). SUVmax had a CR of 0.26, a mean bias of 0, and narrow LOA (95% LOA: -0.26 to 0.26). CONCLUSION Pseudo-CT images generated by GAN are inherently registered to PET and can be used to facilitate quick and fully automated registration of PET and CT angiography.
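The agreement statistics reported above (mean bias, 95% limits of agreement, and coefficient of repeatability) follow the standard Bland-Altman definitions and can be sketched as below. The paired values here are invented toy data, not the study's measurements.

```python
import numpy as np

def agreement_stats(a, b):
    """Bland-Altman-style agreement between two paired measurement sets:
    mean bias, 95% limits of agreement, and the coefficient of
    repeatability (1.96 * SD of the paired differences)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    cr = 1.96 * d.std(ddof=1)
    return bias, (bias - cr, bias + cr), cr

# Toy TBRmax values: observer-based vs automated registration.
obs = [1.8, 2.1, 1.5, 2.4, 1.9]
auto = [1.7, 2.2, 1.6, 2.3, 2.0]
bias, loa, cr = agreement_stats(obs, auto)
print(round(bias, 3))   # mean bias near zero indicates good agreement
```

Narrow limits of agreement around a near-zero bias, as reported for TBRmax and SUVmax, indicate that the automated registration reproduces the observer-based measurements.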
Affiliation(s)
- Ananya Singh
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA, 90048, USA
- Jacek Kwiecinski
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA, 90048, USA
- Department of Interventional Cardiology and Angiology, Institute of Cardiology, Warsaw, Poland
- Sebastien Cadet
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA, 90048, USA
- Aditya Killekar
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA, 90048, USA
- Evangelos Tzolos
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Michelle C Williams
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Marc R Dweck
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- David E Newby
- BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Damini Dey
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA, 90048, USA
- Piotr J Slomka
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Imaging and Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA, 90048, USA.
13
Lu X, Zhang S, Liu Z, Liu S, Huang J, Kong G, Li M, Liang Y, Cui Y, Yang C, Zhao S. Ultrasonographic pathological grading of prostate cancer using automatic region-based Gleason grading network. Comput Med Imaging Graph 2022; 102:102125. [PMID: 36257091 DOI: 10.1016/j.compmedimag.2022.102125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 08/26/2022] [Accepted: 09/20/2022] [Indexed: 11/05/2022]
Abstract
The Gleason scoring system is a reliable method for quantifying the aggressiveness of prostate cancer, providing an important reference for clinical assessment of therapeutic strategies. However, to the best of our knowledge, no study has addressed the pathological grading of prostate cancer from single ultrasound images. In this work, a novel Automatic Region-based Gleason Grading (ARGG) network for prostate cancer based on deep learning is proposed. ARGG consists of two stages: (1) a region labeling object detection (RLOD) network is designed to label the prostate cancer lesion region; (2) a Gleason grading network (GNet) is proposed for pathological grading of prostate ultrasound images. In RLOD, a new feature fusion structure, the Skip-connected Feature Pyramid Network (CFPN), is proposed as an auxiliary branch for extracting features and enhancing the fusion of high-level and low-level features, which helps detect small lesions and extract image detail information. In GNet, we designed a synchronized pulse enhancement module (SPEM) based on pulse-coupled neural networks to enhance the RLOD detection results for use as training samples; the enhanced results and the original ones were then fed into the channel attention classification network (CACN), which introduces an attention mechanism to benefit the prediction of cancer grading. Experiments on a dataset of prostate ultrasound images collected from hospitals show that the proposed Gleason grading model outperforms manual diagnosis by physicians, with a precision of 0.830. In addition, we evaluated the lesion detection performance of RLOD, which achieves a mean Dice metric of 0.815.
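The Dice metric used to report lesion detection overlap can be illustrated with a minimal implementation on toy binary masks (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
truth = np.array([[0, 1, 0],
                  [0, 1, 1]])
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```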
Affiliation(s)
- Xu Lu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Pazhou Lab, Guangzhou 510330, China
- Shulian Zhang
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Zhiyong Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Shaopeng Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Jun Huang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Guoquan Kong
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Mingzhu Li
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yinying Liang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yunneng Cui
- Department of Radiology, Foshan Maternity and Children's Healthcare Hospital Affiliated to Southern Medical University, Foshan 528000, China
- Chuan Yang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China.
- Shen Zhao
- Department of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China.
14
Ruchti A, Neuwirth A, Lowman AK, Duenweg SR, LaViolette PS, Bukowy JD. Homologous point transformer for multi-modality prostate image registration. PeerJ Comput Sci 2022; 8:e1155. [PMID: 36532813 PMCID: PMC9748842 DOI: 10.7717/peerj-cs.1155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Accepted: 10/24/2022] [Indexed: 06/17/2023]
Abstract
Registration is the process of transforming images so that they are aligned in the same coordinate space. In the medical field, image registration is often used to align multi-modal or multi-parametric images of the same organ. A uniquely challenging subset of medical image registration is cross-modality registration: the task of aligning images captured with different scanning methodologies. In this study, we present a transformer-based deep learning pipeline for performing cross-modality, radiology-pathology image registration for human prostate samples. While existing solutions for multi-modality prostate image registration focus on the prediction of transform parameters, our pipeline predicts a set of homologous points on the two image modalities. The homologous point registration pipeline achieves better average control point deviation than the current state-of-the-art automatic registration pipeline. It reaches this accuracy without requiring masked MR images, which may enable this approach to achieve similar results in other organ systems and for partial tissue samples.
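While the paper's pipeline predicts the homologous points with a transformer, the downstream step of turning corresponding point pairs into a spatial transform can be sketched with a least-squares affine fit. This is an illustrative assumption, not the authors' exact formulation:

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src points to dst.
    src, dst: (N, 2) arrays of homologous (corresponding) points.
    Returns a 2x3 matrix A such that dst ≈ [x, y, 1] @ A.T."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])              # (N, 3) homogeneous coords
    # Solve X @ A.T = dst in the least-squares sense.
    A_t, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_t.T                            # (2, 3)

# Homologous points related by a pure translation (+10, -5).
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
dst = src + np.array([10.0, -5.0])
A = fit_affine_2d(src, dst)
mapped = np.hstack([src, np.ones((4, 1))]) @ A.T
print(np.allclose(mapped, dst))  # True
```

In practice a non-rigid warp would typically be fit through the same point correspondences; the affine case shows the principle with minimal machinery.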
Affiliation(s)
- Alexander Ruchti
- Department of Electrical Engineering and Computer Science, Milwaukee School of Engineering, Milwaukee, WI, United States
- Alexander Neuwirth
- Department of Electrical Engineering and Computer Science, Milwaukee School of Engineering, Milwaukee, WI, United States
- Allison K. Lowman
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States
- Savannah R. Duenweg
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, WI, United States
- Peter S. LaViolette
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States
- Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, WI, United States
- John D. Bukowy
- Department of Electrical Engineering and Computer Science, Milwaukee School of Engineering, Milwaukee, WI, United States
15
Ji J, Wan T, Chen D, Wang H, Zheng M, Qin Z. A deep learning method for automatic evaluation of diagnostic information from multi-stained histopathological images. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
16
Bhattacharya I, Lim DS, Aung HL, Liu X, Seetharaman A, Kunder CA, Shao W, Soerensen SJC, Fan RE, Ghanouni P, To'o KJ, Brooks JD, Sonn GA, Rusu M. Bridging the gap between prostate radiology and pathology through machine learning. Med Phys 2022; 49:5160-5181. [PMID: 35633505 PMCID: PMC9543295 DOI: 10.1002/mp.15777] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 05/10/2022] [Accepted: 05/10/2022] [Indexed: 11/27/2022] Open
Abstract
Background Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, magnetic resonance imaging (MRI) is considered the most sensitive non‐invasive imaging modality that enables visualization, detection, and localization of prostate cancer, and is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited due to high rates of false positives and false negatives as well as low inter‐reader agreements. Purpose Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture, but also in the ground truth labeling strategies used for model training. We compare different labeling strategies and the effects they have on the performance of different machine learning models for prostate cancer detection on MRI. Methods Four different deep learning models (SPCNet, U‐Net, branched U‐Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients with radical prostatectomy, and evaluated using 40 patients with radical prostatectomy and 275 patients with targeted biopsy. Each deep learning model was trained with four different label types: pathology‐confirmed radiologist labels, pathologist labels on whole‐mount histopathology images, and lesion‐level and pixel‐level digital pathologist labels (previously validated deep learning algorithm on histopathology images to predict pixel‐level Gleason patterns) on whole‐mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre‐operative MRI using an automated MRI‐histopathology registration platform. Results Radiologist labels missed cancers (ROC‐AUC: 0.75‐0.84), had lower lesion volumes (~68% of pathology lesions), and lower Dice overlaps (0.24‐0.28) when compared with pathology labels. 
Consequently, machine learning models trained with radiologist labels also showed inferior performance compared to models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC‐AUC: 0.97‐1, lesion Dice: 0.75‐0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC‐AUC: 0.91‐0.94), and had generalizable and comparable performance to pathologist label‐trained models in the targeted biopsy cohort (aggressive lesion ROC‐AUC: 0.87‐0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel‐level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human‐annotated label type. Conclusions Machine learning models for prostate MRI interpretation that are trained with digital pathologist labels showed higher or comparable performance relative to pathologist label‐trained models in both the radical prostatectomy and targeted biopsy cohorts. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, and inter‐ and intra‐reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
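The ROC-AUC values used throughout this comparison can be illustrated with a minimal rank-based computation on toy data (a generic sketch of the metric, not the study's evaluation pipeline):

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that
    a randomly chosen positive is scored above a randomly chosen
    negative (ties count half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(roc_auc(labels, scores))  # 8/9 = 0.888...
```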
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- David S Lim
- Department of Computer Science, Stanford University, Stanford, CA 94305
- Han Lin Aung
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
- Xingchen Liu
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
- Arun Seetharaman
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305
- Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA 94305
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Simon J C Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA 94305
- Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Katherine J To'o
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA 94304
- James D Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Geoffrey A Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
17
Texture Analysis of Enhanced MRI and Pathological Slides Predicts EGFR Mutation Status in Breast Cancer. BIOMED RESEARCH INTERNATIONAL 2022; 2022:1376659. [PMID: 35663041 PMCID: PMC9162871 DOI: 10.1155/2022/1376659] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Revised: 04/25/2022] [Accepted: 04/29/2022] [Indexed: 12/02/2022]
Abstract
Objective Image texture information was extracted from enhanced magnetic resonance imaging (MRI) and pathological hematoxylin and eosin- (HE-) stained images of female breast cancer patients. We first established models from each data source individually and then combined the two kinds of data to build a joint model. Through this method, we verified whether sufficient information could be obtained from enhanced MRI and pathological slides to assist in determining epidermal growth factor receptor (EGFR) mutation status. Methods We obtained enhanced MRI data from patients with breast cancer before treatment and selected diffusion-weighted imaging (DWI), T1 fast-spin echo (T1 FSE), and T2 fast-spin echo (T2 FSE) as the data sources for extracting texture information. Imaging physicians manually outlined the 3D regions of interest (ROIs), and texture features were extracted according to the gray level cooccurrence matrix (GLCM) of the images. For the HE-stained images, we adopted a specific normalization algorithm to simulate images stained with only hematoxylin or only eosin and extracted textures from these. The extracted texture features were used to predict EGFR expression. After evaluating the predictive power of each model, the models from the two data sources were combined for remodeling. Results For enhanced MRI data, a model built on the texture information of T1 FSE predicted EGFR mutation status well. For pathological images, eosin-stained images achieved a better prediction effect. We selected these two classifiers as the weak classifiers of the final model and obtained good results (training group: AUC, 0.983; 95% CI, 0.95-1.00; accuracy, 0.962; specificity, 0.936; and sensitivity, 0.979; test group: AUC, 0.983; 95% CI, 0.94-1.00; accuracy, 0.943; specificity, 1.00; and sensitivity, 0.905). Conclusion The EGFR mutation status of patients with breast cancer can be well predicted from enhanced MRI and pathological data. This could help hospitals that do not test EGFR mutation status in patients with breast cancer. The technique gives clinicians more information about breast cancer, helping them make accurate diagnoses and select suitable treatments.
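The GLCM step described in the Methods can be sketched as follows: a minimal 2D gray level co-occurrence matrix for a single pixel offset, with one derived texture feature (contrast). The study used 3D ROIs and a larger feature set; the 4-level toy image is illustrative only.

```python
import numpy as np

def glcm(image, levels=4, offset=(0, 1)):
    """Gray level co-occurrence matrix for one non-negative pixel
    offset, normalized to a joint probability table."""
    img = np.asarray(image)
    dr, dc = offset
    m = np.zeros((levels, levels), float)
    rows, cols = img.shape
    # Count co-occurring gray-level pairs at the given offset.
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum_ij (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img, levels=4, offset=(0, 1))
print(round(contrast(p), 4))  # 1/3: transitions occur at block edges
```

Other standard GLCM features (energy, homogeneity, correlation) are computed from the same probability table `p`.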
18
Yurt M, Özbey M, Dar SUH, Tinaz B, Oguz KK, Çukur T. Progressively Volumetrized Deep Generative Models for Data-Efficient Contextual Learning of MR Image Recovery. Med Image Anal 2022; 78:102429. [DOI: 10.1016/j.media.2022.102429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 03/14/2022] [Accepted: 03/18/2022] [Indexed: 10/18/2022]
19
Urakami A, Arimura H, Takayama Y, Kinoshita F, Ninomiya K, Imada K, Watanabe S, Nishie A, Oda Y, Ishigami K. Stratification of prostate cancer patients into low- and high-grade groups using multiparametric magnetic resonance radiomics with dynamic contrast-enhanced image joint histograms. Prostate 2022; 82:330-344. [PMID: 35014713 DOI: 10.1002/pros.24278] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 11/09/2021] [Accepted: 11/23/2021] [Indexed: 01/04/2023]
Abstract
PURPOSE This study aimed to investigate the potential of stratifying prostate cancer patients into low- and high-grade groups (GGs) using multiparametric magnetic resonance (mpMR) radiomics in conjunction with two-dimensional (2D) joint histograms computed from dynamic contrast-enhanced (DCE) images. METHODS A total of 101 prostate cancer regions extracted from the MR images of 44 patients were identified and divided into training (n = 31 with 72 cancer regions) and test datasets (n = 13 with 29 cancer regions). Each dataset included low-grade tumors (International Society of Urological Pathology [ISUP] GG ≤ 2) and high-grade tumors (ISUP GG ≥ 3). A total of 137,970 features were computed for each cancer region, consisting of mpMR image-based features (16 types of images in four sequences) and joint histogram-based features (DCE images at 10 phases). Joint histogram features can visualize temporally changing perfusion patterns in prostate cancer based on the joint histograms between different phases or subtraction phases of DCE images. Nine signatures (sets of significant features related to GGs) were determined using the best combinations of features selected with the least absolute shrinkage and selection operator. Support vector machine models with the nine signatures were then built using leave-one-out cross-validation on the training dataset and evaluated with receiver operating characteristic (ROC) curve analysis. RESULTS The best-performing signature was constructed from six features derived from the joint histograms, DCE original images, and apparent diffusion coefficient maps. The areas under the ROC curves for the training and test datasets were 1.00 and 0.985, respectively. CONCLUSION This study suggests that the proposed approach, using mpMR radiomics in conjunction with 2D joint histograms computed from DCE images, has the potential to stratify prostate cancer patients into low- and high-GGs.
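The 2D joint histogram between two DCE phases can be sketched as follows. This is a minimal NumPy illustration on synthetic data, not the authors' implementation; the bin count and the toy "early/late enhancement" volumes are assumptions.

```python
import numpy as np

def joint_histogram(phase_a, phase_b, bins=8):
    """2D joint histogram of voxel intensities from two DCE phases,
    normalized so the entries sum to 1. Off-diagonal mass reflects
    voxels whose intensity changed between the phases."""
    h, _, _ = np.histogram2d(phase_a.ravel(), phase_b.ravel(), bins=bins)
    return h / h.sum()

rng = np.random.default_rng(0)
early = rng.random((16, 16))                               # toy early phase
late = np.clip(early + 0.1 * rng.random((16, 16)), 0, 1)   # mild enhancement
h = joint_histogram(early, late)
print(h.shape)  # (8, 8)
```

Scalar features (e.g., GLCM-style statistics) computed from such tables between phase pairs form the joint-histogram features described above.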
Affiliation(s)
- Akimasa Urakami
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Hidetaka Arimura
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Yukihisa Takayama
- Department of Radiology, Faculty of Medicine, Fukuoka University, Fukuoka, Japan
- Fumio Kinoshita
- Department of Anatomic Pathology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Kenta Ninomiya
- Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Kenjiro Imada
- Department of Urology, Prostate, Kidney, Adrenal Surgery, Kyushu University Hospital, Fukuoka, Japan
- Sumiko Watanabe
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Akihiro Nishie
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Yoshinao Oda
- Department of Anatomic Pathology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Kousei Ishigami
- Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
Collapse
|
20
|
Feng F, Ashton-Miller JA, DeLancey JOL, Luo J. Three-dimensional self super-resolution for pelvic floor MRI using a convolutional neural network with multi-orientation data training. Med Phys 2022; 49:1083-1096. [PMID: 34967014 PMCID: PMC9013299 DOI: 10.1002/mp.15438] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Revised: 12/01/2021] [Accepted: 12/07/2021] [Indexed: 02/03/2023] Open
Abstract
PURPOSE High-resolution pelvic magnetic resonance (MR) imaging is important for the high-resolution and high-precision evaluation of pelvic floor disorders (PFDs), but the data acquisition time is long. Because high-resolution three-dimensional (3D) MR data of the pelvic floor are difficult to obtain, MR images are usually obtained in three orthogonal planes: axial, sagittal, and coronal. The in-plane resolution of the MR data in each plane is high, but the through-plane resolution is low. Thus, we aimed to achieve 3D super-resolution using a convolutional neural network (CNN) approach to capture the intrinsic similarity of low-resolution 3D MR data from three orientations. METHODS We used a two-dimensional (2D) super-resolution CNN model to solve the 3D super-resolution problem. The residual-in-residual dense block network (RRDBNet) was used as our CNN backbone. For a given set of low through-plane resolution pelvic floor MR data in the axial or coronal or sagittal scan plane, we applied the RRDBNet sequentially to perform super-resolution on its two projected low-resolution views. Three datasets were used in the experiments, including two private datasets and one public dataset. In the first dataset (dataset 1), MR data acquired from 34 subjects in three planes were used to train our super-resolution model, and low-resolution MR data from nine subjects were used for testing. The second dataset (dataset 2) included a sequence of relatively high-resolution MR data acquired in the coronal plane. The public MR dataset (dataset 3) was used to demonstrate the generalization ability of our model. To show the effectiveness of RRDBNet, we used datasets 1 and 2 to compare RRDBNet with interpolation and enhanced deep super-resolution (EDSR) methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index. As 3D MR data from one view have two projected low-resolution views, different super-resolution orders were compared in terms of PSNR and SSIM. 
Finally, to demonstrate the impact of super-resolution on the image analysis task, we used datasets 2 and 3 to compare the performance of our method with interpolation on the 3D geometric model reconstruction of the urinary bladder. RESULTS A CNN-based method was used to learn the intrinsic similarity among MR acquisitions from different scan planes. Through-plane super-resolution for pelvic MR images was achieved without using high-resolution 3D data, which is useful for the analysis of PFDs.
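The three-orientation idea above can be sketched in a few lines: a 2D operator is applied slice-by-slice along the two views that see the low-resolution through-plane axis, and the two reconstructions are fused. A linear interpolator stands in for the trained RRDBNet, so this illustrates only the data flow, not the network.

```python
# Data-flow sketch of through-plane SR from two projected views: the z axis
# is low resolution, so each x-z slice and each y-z slice is upsampled by a
# 2D operator and the two restored volumes are fused by averaging. Linear
# interpolation stands in for the trained RRDBNet; a CNN would also use
# in-plane context, so its two views would genuinely differ.
import numpy as np

def sr_2d(img, factor):
    """Placeholder 2D SR operator: upsample the last (through-plane) axis."""
    h, w = img.shape
    zi = np.linspace(0, w - 1, w * factor)
    return np.stack([np.interp(zi, np.arange(w), row) for row in img])

def through_plane_sr(volume, factor):
    x, y, z = volume.shape                     # z is the coarse direction
    # View 1: x-z slices (fix y); view 2: y-z slices (fix x).
    va = np.stack([sr_2d(volume[:, j, :], factor) for j in range(y)], axis=1)
    vb = np.stack([sr_2d(volume[i, :, :], factor) for i in range(x)], axis=0)
    return 0.5 * (va + vb)                     # fuse the two reconstructions

vol = np.zeros((4, 5, 8))
vol[2, 3, 4] = 1.0
hr = through_plane_sr(vol, 2)                  # shape (4, 5, 16)
```

The paper's comparison of "super-resolution orders" corresponds to choosing whether view 1 or view 2 is processed first (or, as here, fusing both).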
Affiliation(s)
- Fei Feng, University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, China
- James A Ashton-Miller, Department of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan, USA
- John O L DeLancey, Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, Michigan, USA
- Jiajia Luo, Biomedical Engineering Department, Peking University, Beijing, China

21
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:diagnostics12020289. [PMID: 35204380 PMCID: PMC8870978 DOI: 10.3390/diagnostics12020289] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 12/31/2021] [Accepted: 01/14/2022] [Indexed: 02/04/2023] Open
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI, with real-time anatomical depiction using ultrasound or computed tomography. This allows the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation, and improve co-registration across imaging modalities to enhance diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee, Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia, Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Zhiping Lin, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Weimin Huang, Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan, Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore

22
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889 PMCID: PMC9554123 DOI: 10.1177/17562872221128791] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 08/30/2022] [Indexed: 11/07/2022] Open
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya, Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala, Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal, Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao, Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang, Centre for Medical Image Computing, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen, Department of Urology, Stanford University School of Medicine, Stanford, CA, USA; Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan, Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni, Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder, Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks, Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu, Centre for Medical Image Computing, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu, Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn, Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA

23
Xie H, Lei Y, Wang T, Roper J, Dhabaan AH, Bradley JD, Liu T, Mao H, Yang X. Synthesizing high-resolution magnetic resonance imaging using parallel cycle-consistent generative adversarial networks for fast magnetic resonance imaging. Med Phys 2022; 49:357-369. [PMID: 34821395 DOI: 10.1002/mp.15380] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Revised: 11/07/2021] [Accepted: 11/09/2021] [Indexed: 11/08/2022] Open
Abstract
PURPOSE The common practice in acquiring magnetic resonance (MR) images is to obtain two-dimensional (2D) slices at coarse locations while keeping high in-plane resolution, ensuring sufficient body coverage while shortening the MR scan time. The aim of this study is to propose a novel method to generate high-resolution (HR) MR images from low-resolution MR images along the longitudinal direction. To address the difficulty of collecting paired low- and high-resolution MR images in clinical settings, and to exploit the strength of parallel cycle-consistent generative adversarial networks (CycleGANs) in synthesizing realistic medical images, we developed a parallel-CycleGAN-based method using a self-supervised strategy. METHODS AND MATERIALS The proposed workflow consists of two CycleGANs, trained in parallel, that independently predict the HR MR images in the two planes orthogonal to the longitudinal MR scan direction. The final synthetic HR MR images are then generated by fusing the two predicted images. MR images, including T1-weighted (T1), contrast-enhanced T1-weighted (T1CE), T2-weighted (T2), and T2 Fluid-Attenuated Inversion Recovery (FLAIR) images, from the multimodal brain tumor segmentation challenge 2020 (BraTS2020) dataset were processed to evaluate the proposed workflow along the cranial-caudal (CC), lateral, and anterior-posterior directions. Institutionally collected MR images were also processed for evaluation. Performance was investigated via both qualitative and quantitative evaluations, using the metrics of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), edge keeping index (EKI), structural similarity index measurement (SSIM), information fidelity criterion (IFC), and visual information fidelity in pixel domain (VIFP).
RESULTS It is shown that the proposed method can generate HR MR images visually indistinguishable from the ground truth in the investigations on the BraTS2020 dataset. The intensity profiles, difference images, and SSIM maps also confirm the feasibility of the proposed method for synthesizing HR MR images. Quantitative evaluations on the BraTS2020 dataset show that the calculated metrics of the synthetic HR MR images are all enhanced for the T1, T1CE, T2, and FLAIR images. The improvements in the numerical metrics over the low-resolution and bicubic-interpolated MR images, as well as over those generated with a comparative deep learning method, are statistically significant. Qualitative evaluation of the synthetic HR MR images of the clinically collected dataset also confirms the feasibility of the proposed method. CONCLUSIONS The proposed method is feasible for synthesizing HR MR images using self-supervised parallel CycleGANs, and can be expected to shorten MR acquisition time in clinical practice.
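Two of the metrics reported above can be written compactly. These follow common conventions (NMAE normalized by the reference intensity range, PSNR in dB against the reference peak), which may differ in detail from this paper's exact implementation.

```python
# Common-convention forms of two reported metrics: normalized mean absolute
# error (NMAE, normalized by the reference intensity range) and peak
# signal-to-noise ratio (PSNR, in dB against the reference peak). The paper's
# exact normalization may differ.
import numpy as np

def nmae(ref, img):
    return float(np.abs(ref - img).mean() / (ref.max() - ref.min()))

def psnr(ref, img):
    mse = float(((ref - img) ** 2).mean())
    peak = float(ref.max())
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

A perfect reconstruction gives NMAE 0 and infinite PSNR; smaller NMAE and larger PSNR indicate a better synthesis.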
Affiliation(s)
- Huiqiao Xie, Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Yang Lei, Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Tonghe Wang, Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA; Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper, Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA; Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Anees H Dhabaan, Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA; Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley, Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA; Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu, Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA; Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Hui Mao, Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang, Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA; Winship Cancer Institute, Emory University, Atlanta, Georgia, USA

24
Wang S, Celebi ME, Zhang YD, Yu X, Lu S, Yao X, Zhou Q, Miguel MG, Tian Y, Gorriz JM, Tyukin I. Advances in Data Preprocessing for Biomedical Data Fusion: An Overview of the Methods, Challenges, and Prospects. INFORMATION FUSION 2021; 76:376-421. [DOI: 10.1016/j.inffus.2021.07.001] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
25
Zimmerman BE, Johnson SL, Odéen HA, Shea JE, Factor RE, Joshi SC, Payne AH. Histology to 3D in vivo MR registration for volumetric evaluation of MRgFUS treatment assessment biomarkers. Sci Rep 2021; 11:18923. [PMID: 34556678 PMCID: PMC8460731 DOI: 10.1038/s41598-021-97309-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Accepted: 08/24/2021] [Indexed: 11/09/2022] Open
Abstract
Advances in imaging and early cancer detection have increased interest in magnetic resonance (MR) guided focused ultrasound (MRgFUS) technologies for cancer treatment. MRgFUS ablation treatments could reduce surgical risks, preserve organ tissue and function, and improve patient quality of life. However, surgical resection and histological analysis remain the gold standard to assess cancer treatment response. For non-invasive ablation therapies such as MRgFUS, the treatment response must be determined through MR imaging biomarkers. However, current MR biomarkers are inconclusive and have not been rigorously evaluated against histology via accurate registration. Existing registration methods rely on anatomical features to directly register in vivo MR and histology. For MRgFUS applications in anatomies such as liver, kidney, or breast, anatomical features that are not caused by the treatment are often insufficient to drive direct registration. We present a novel MR to histology registration workflow that utilizes intermediate imaging and does not rely on anatomical MR features being visible in histology. The presented workflow yields an overall registration accuracy of 1.00 ± 0.13 mm. The developed registration pipeline is used to evaluate a common MRgFUS treatment assessment biomarker against histology. Evaluating MR biomarkers against histology using this registration pipeline will facilitate validating novel MRgFUS biomarkers to improve treatment assessment without surgical intervention. While the presented registration technique has been evaluated in an MRgFUS ablation treatment model, it could potentially be applied in any tissue to evaluate a variety of therapeutic options.
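A registration accuracy quoted as "mean ± SD mm", as above, is typically a summary of landmark distances after registration (target registration error). A hedged sketch of that computation, on synthetic points:

```python
# Illustration of how a "mean ± SD mm" registration accuracy is typically
# summarized: the target registration error (TRE) is the Euclidean distance
# between corresponding landmarks after registration. Points are synthetic;
# this is not the authors' evaluation code.
import numpy as np

def tre_summary(registered_pts, reference_pts):
    """Mean and standard deviation of per-landmark distances (input units)."""
    d = np.linalg.norm(np.asarray(registered_pts) - np.asarray(reference_pts), axis=1)
    return float(d.mean()), float(d.std())
```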
Affiliation(s)
- Blake E Zimmerman, Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
- Sara L Johnson, Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
- Henrik A Odéen, Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
- Jill E Shea, Department of Surgery, University of Utah, Salt Lake City, UT, USA
- Rachel E Factor, Department of Pathology, University of Utah, Salt Lake City, UT, USA
- Sarang C Joshi, Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
- Allison H Payne, Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA

26
Seetharaman A, Bhattacharya I, Chen LC, Kunder CA, Shao W, Soerensen SJC, Wang JB, Teslovich NC, Fan RE, Ghanouni P, Brooks JD, Too KJ, Sonn GA, Rusu M. Automated detection of aggressive and indolent prostate cancer on magnetic resonance imaging. Med Phys 2021; 48:2960-2972. [PMID: 33760269 PMCID: PMC8360053 DOI: 10.1002/mp.14855] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Revised: 01/31/2021] [Accepted: 03/16/2021] [Indexed: 01/05/2023] Open
Abstract
PURPOSE While multi-parametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including six patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached that of radiologists in detecting clinically significant cancer. CONCLUSIONS Our SPCNet model accurately detected aggressive prostate cancer.
Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
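Lesion-level evaluation of the kind described above (one score per lesion, ROC analysis against ground-truth labels) can be sketched as follows. Reducing a per-pixel probability map to a lesion score by taking its maximum is an illustrative assumption, not necessarily the authors' rule.

```python
# Sketch of lesion-level ROC evaluation: each lesion mask is reduced to one
# score from the per-pixel probability map (here: its maximum, an illustrative
# choice), then a rank-based AUC compares scores against lesion labels.
import numpy as np

def lesion_scores(prob_map, lesion_masks):
    """One score per lesion: the maximum per-pixel probability inside its mask."""
    return [float(prob_map[mask].max()) for mask in lesion_masks]

def roc_auc(scores, labels):
    """Rank-based AUC: probability a positive lesion outscores a negative one."""
    s, y = np.asarray(scores, float), np.asarray(labels)
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(greater + 0.5 * ties)
```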
Affiliation(s)
- Arun Seetharaman, Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Indrani Bhattacharya, Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Leo C Chen, Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Christian A Kunder, Department of Pathology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Wei Shao, Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Simon J C Soerensen, Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Aarhus University Hospital, Aarhus, Denmark
- Jeffrey B Wang, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Nikola C Teslovich, Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Richard E Fan, Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Pejman Ghanouni, Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- James D Brooks, Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Katherine J Too, Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA, 94304, USA
- Geoffrey A Sonn, Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Mirabela Rusu, Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA