1
Golestani N, Wang A, Moallem G, Bean GR, Rusu M. PViT-AIR: Puzzling vision transformer-based affine image registration for multi histopathology and faxitron images of breast tissue. Med Image Anal 2024; 99:103356. [PMID: 39378568] [DOI: 10.1016/j.media.2024.103356]
Abstract
Breast cancer is a significant global public health concern, with various treatment options available based on tumor characteristics. Pathological examination of excision specimens after surgery provides essential information for treatment decisions. However, the manual selection of representative sections for histological examination is laborious and subjective, leading to potential sampling errors and variability, especially in carcinomas previously treated with chemotherapy. Accurate identification of residual tumors also presents significant challenges, emphasizing the need for systematic or assisted methods. Developing deep-learning algorithms for automated cancer detection on radiology images requires radiology-pathology registration, which establishes accurately labeled ground truth data for training. Aligning these images, however, is challenging due to their differences in content and resolution, tissue deformation, artifacts, and imprecise correspondence. We present a novel deep learning-based pipeline for the affine registration of faxitron images, the x-ray representations of macrosections of ex-vivo breast tissue, and their corresponding histopathology images of tissue segments. The proposed model combines convolutional neural networks and vision transformers, allowing it to effectively capture both local and global information from the entire tissue macrosection as well as its segments. This integrated approach enables simultaneous registration and stitching of image segments, facilitating segment-to-macrosection registration through a puzzling-based mechanism.
To address the limitations of multi-modal ground truth data, we tackle the problem by training the model using synthetic mono-modal data in a weakly supervised manner. The trained model demonstrated successful performance in multi-modal registration, yielding registration results with an average landmark error of 1.51 mm (±2.40), and stitching distance of 1.15 mm (±0.94). The results indicate that the model performs significantly better than existing baselines, including both deep learning-based and iterative models, and it is also approximately 200 times faster than the iterative approach. This work bridges the gap in the current research and clinical workflow and has the potential to improve efficiency and accuracy in breast cancer evaluation and streamline pathology workflow.
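The mean landmark error reported above is a standard way to score affine registrations. A minimal numpy sketch (illustrative only, not the PViT-AIR implementation; the function names are chosen here) of applying a 2D affine transform to landmark coordinates and scoring the result:

```python
import numpy as np

def apply_affine(points, A, t):
    """Apply a 2D affine transform (2x2 matrix A, translation t) to Nx2 points."""
    return points @ A.T + t

def mean_landmark_error(moved, fixed):
    """Mean Euclidean distance between corresponding landmarks (e.g., in mm)."""
    return float(np.mean(np.linalg.norm(moved - fixed, axis=1)))

# Identity transform: landmarks coincide, so the error is zero.
pts = np.array([[0.0, 0.0], [10.0, 5.0], [3.0, 8.0]])
err = mean_landmark_error(apply_affine(pts, np.eye(2), np.zeros(2)), pts)
```

A pure translation by (3, 4) moves every landmark by exactly 5 units, so the mean landmark error becomes 5.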
Affiliation(s)
- Aihui Wang
- Department of Pathology, Stanford University, USA
- Mirabela Rusu
- Department of Radiology, Stanford University, USA; Department of Urology, Stanford University, USA; Department of Biomedical Data Science, Stanford University, USA.
2
Schmidt B, Soerensen SJC, Bhambhvani HP, Fan RE, Bhattacharya I, Choi MH, Kunder CA, Kao CS, Higgins J, Rusu M, Sonn GA. External validation of an artificial intelligence model for Gleason grading of prostate cancer on prostatectomy specimens. BJU Int 2024. [PMID: 38989669] [DOI: 10.1111/bju.16464]
Abstract
OBJECTIVES To externally validate the performance of the DeepDx Prostate artificial intelligence (AI) algorithm (Deep Bio Inc., Seoul, South Korea) for Gleason grading on whole-mount prostate histopathology, considering potential variations observed when applying AI models trained on biopsy samples to radical prostatectomy (RP) specimens due to inherent differences in tissue representation and sample size. MATERIALS AND METHODS The commercially available DeepDx Prostate AI algorithm is an automated Gleason grading system that was previously trained using 1133 prostate core biopsy images and validated on 700 biopsy images from two institutions. We assessed the AI algorithm's performance, which outputs Gleason patterns (3, 4, or 5), on 500 1-mm2 tiles created from 150 whole-mount RP specimens from a third institution. These patterns were then grouped into grade groups (GGs) for comparison with expert pathologist assessments. The reference standard was the International Society of Urological Pathology GG as established by two experienced uropathologists with a third expert to adjudicate discordant cases. We defined the main metric as the agreement with the reference standard, using Cohen's kappa. RESULTS The agreement between the two experienced pathologists in determining GGs at the tile level had a quadratically weighted Cohen's kappa of 0.94. The agreement between the AI algorithm and the reference standard in differentiating cancerous vs non-cancerous tissue had an unweighted Cohen's kappa of 0.91. Additionally, the AI algorithm's agreement with the reference standard in classifying tiles into GGs had a quadratically weighted Cohen's kappa of 0.89. In distinguishing cancerous vs non-cancerous tissue, the AI algorithm achieved a sensitivity of 0.997 and specificity of 0.88; in classifying GG ≥2 vs GG 1 and non-cancerous tissue, it demonstrated a sensitivity of 0.98 and specificity of 0.85. 
CONCLUSION The DeepDx Prostate AI algorithm had excellent agreement with expert uropathologists and performance in cancer identification and grading on RP specimens, despite being trained on biopsy specimens from an entirely different patient population.
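Quadratically weighted Cohen's kappa, the agreement metric used throughout this validation, penalizes disagreements by the squared distance between ordinal grades. A self-contained sketch (not the study's code; classes are assumed to be integers 0..k-1):

```python
import numpy as np

def quadratic_weighted_kappa(a, b, k):
    """Quadratically weighted Cohen's kappa for two raters over k ordinal classes."""
    O = np.zeros((k, k))                       # observed confusion counts
    for i, j in zip(a, b):
        O[i, j] += 1
    n = O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / n   # chance-expected counts
    idx = np.arange(k)
    W = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2  # quadratic penalty
    return float(1 - (W * O).sum() / (W * E).sum())

# Perfect agreement between the two raters yields kappa = 1.
kappa = quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], k=4)
```

With complete disagreement on two classes the statistic drops to -1, the other extreme of the scale.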
Affiliation(s)
- Bogdana Schmidt
- Division of Urology, Department of Surgery, Huntsman Cancer Hospital, University of Utah, Salt Lake City, UT, USA
- Simon John Christoph Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Hriday P Bhambhvani
- Department of Urology, Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA
- Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Moon Hyung Choi
- Department of Radiology, College of Medicine, Eunpyeong St. Mary's Hospital, The Catholic University of Korea, Seoul, Korea
- Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- Chia-Sui Kao
- Department of Pathology and Laboratory Medicine, Cleveland Clinic, Cleveland, OH, USA
- John Higgins
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- Mirabela Rusu
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Geoffrey A Sonn
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
3
Chen Y, Liu J, Jiang P, Jin Y. A novel multilevel iterative training strategy for the ResNet50 based mitotic cell classifier. Comput Biol Chem 2024; 110:108092. [PMID: 38754259] [DOI: 10.1016/j.compbiolchem.2024.108092]
Abstract
The number of mitotic cells is an important indicator for grading invasive breast cancer. It is very challenging for pathologists to identify and count mitotic cells in pathological sections with the naked eye under the microscope. Therefore, many computational models for the automatic identification of mitotic cells based on machine learning, especially deep learning, have been proposed. However, convergence to a locally optimal solution is one of the main problems in model training. In this paper, we propose a novel multilevel iterative training strategy to address this problem. To evaluate the proposed training strategy, we constructed a mitotic cell classification model with ResNet50 and trained it with different training strategies. The results showed that the models trained with the proposed strategy performed better on the independent test set than those trained with the conventional strategy, illustrating the effectiveness of the new training strategy. Furthermore, after training with our proposed strategy, the ResNet50 model with the Adam optimizer achieved an 89.26% F1 score on the public MITOSI14 dataset, which is higher than that of the state-of-the-art methods reported in the literature.
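The F1 score cited above is the harmonic mean of precision and recall for the mitosis class. A minimal illustration (not the authors' evaluation code):

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall (mitosis = class 1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # false alarms
    fn = np.sum((y_pred == 0) & (y_true == 1))   # missed mitoses
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return float(2 * precision * recall / (precision + recall))

# 3 true mitoses; the model finds 2 of them plus 1 false alarm.
f1 = f1_score([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

Here precision = recall = 2/3, so F1 = 2/3 as well.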
Affiliation(s)
- Yuqi Chen
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Juan Liu
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China.
- Peng Jiang
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
- Yu Jin
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072, China
4
Strittmatter A, Schad LR, Zöllner FG. Deep learning-based affine medical image registration for multimodal minimal-invasive image-guided interventions - A comparative study on generalizability. Z Med Phys 2024; 34:291-317. [PMID: 37355435] [PMCID: PMC11156775] [DOI: 10.1016/j.zemedi.2023.05.003]
Abstract
Multimodal image registration is applied in medical image analysis as it allows the integration of complementary data from multiple imaging modalities. In recent years, various neural network-based approaches for medical image registration have been presented, but because they use different datasets, a fair comparison is not possible. In this study, 20 different neural networks for affine registration of medical images were implemented. The networks' performance and generalizability to new datasets were evaluated using two multimodal datasets - a synthetic and a real patient dataset - of three-dimensional CT and MR images of the liver. The networks were first trained semi-supervised using the synthetic dataset and then evaluated on the synthetic dataset and the unseen patient dataset. Afterwards, the networks were finetuned on the patient dataset and subsequently evaluated on the patient dataset. The networks were compared against our own CNN as a benchmark and a conventional affine registration with SimpleElastix as a baseline. Six networks significantly improved the pre-registration Dice coefficient on the synthetic dataset (p-value < 0.05), and nine networks significantly improved the pre-registration Dice coefficient on the patient dataset and are therefore able to generalize to the new datasets used in our experiments. Many different machine learning-based methods have been proposed for affine multimodal medical image registration, but few are generalizable to new data and applications. It is therefore necessary to conduct further research in order to develop medical image registration techniques that can be applied more widely.
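The Dice coefficient used for evaluation here measures volumetric overlap between a warped segmentation and its target. A minimal sketch on binary masks (illustrative only):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return float(2 * np.logical_and(a, b).sum() / denom) if denom else 1.0

m1 = np.zeros((4, 4), int); m1[:2, :] = 1    # rows 0-1: 8 voxels
m2 = np.zeros((4, 4), int); m2[1:3, :] = 1   # rows 1-2: 8 voxels, 4 shared
d = dice_coefficient(m1, m2)                 # 2*4 / (8+8) = 0.5
```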
Affiliation(s)
- Anika Strittmatter
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany.
- Lothar R Schad
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Frank G Zöllner
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
5
Shao W, Vesal S, Soerensen SJC, Bhattacharya I, Golestani N, Yamashita R, Kunder CA, Fan RE, Ghanouni P, Brooks JD, Sonn GA, Rusu M. RAPHIA: A deep learning pipeline for the registration of MRI and whole-mount histopathology images of the prostate. Comput Biol Med 2024; 173:108318. [PMID: 38522253] [DOI: 10.1016/j.compbiomed.2024.108318]
Abstract
Image registration can map the ground truth extent of prostate cancer from histopathology images onto MRI, facilitating the development of machine learning methods for early prostate cancer detection. Here, we present RAdiology PatHology Image Alignment (RAPHIA), an end-to-end pipeline for efficient and accurate registration of MRI and histopathology images. RAPHIA automates several time-consuming manual steps in existing approaches including prostate segmentation, estimation of the rotation angle and horizontal flipping in histopathology images, and estimation of MRI-histopathology slice correspondences. By utilizing deep learning registration networks, RAPHIA substantially reduces computational time. Furthermore, RAPHIA obviates the need for a multimodal image similarity metric by transferring histopathology image representations to MRI image representations and vice versa. With the assistance of RAPHIA, novice users achieved expert-level performance, and their mean error in estimating histopathology rotation angle was reduced by 51% (12 degrees vs 8 degrees), their mean accuracy of estimating histopathology flipping was increased by 5% (95.3% vs 100%), and their mean error in estimating MRI-histopathology slice correspondences was reduced by 45% (1.12 slices vs 0.62 slices). When compared to a recent conventional registration approach and a deep learning registration approach, RAPHIA achieved better mapping of histopathology cancer labels, with an improved mean Dice coefficient of cancer regions outlined on MRI and the deformed histopathology (0.44 vs 0.48 vs 0.50), and a reduced mean per-case processing time (51 vs 11 vs 4.5 min). The improved performance by RAPHIA allows efficient processing of large datasets for the development of machine learning models for prostate cancer detection on MRI. Our code is publicly available at: https://github.com/pimed/RAPHIA.
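At the core of any affine registration pipeline such as RAPHIA is resampling one image under an estimated affine transform. A toy nearest-neighbour inverse-mapping warp in numpy (an illustrative stand-in, not RAPHIA's deep-network components; `warp_affine_nn`, `A`, and `t` are names chosen here):

```python
import numpy as np

def warp_affine_nn(img, A, t):
    """Nearest-neighbour affine warp by inverse mapping: each output pixel p
    samples the input at A^{-1}(p - t); out-of-bounds samples stay zero."""
    H, W = img.shape
    out = np.zeros_like(img)
    grid = np.stack(np.mgrid[0:H, 0:W], axis=-1).reshape(-1, 2)   # (y, x) pairs
    src = np.rint((grid - t) @ np.linalg.inv(A).T).astype(int)
    valid = (src[:, 0] >= 0) & (src[:, 0] < H) & (src[:, 1] >= 0) & (src[:, 1] < W)
    out.reshape(-1)[valid] = img[src[valid, 0], src[valid, 1]]
    return out

# Pure translation one pixel down: the single bright pixel moves from (2,2) to (3,2).
img = np.zeros((5, 5)); img[2, 2] = 1.0
shifted = warp_affine_nn(img, np.eye(2), np.array([1.0, 0.0]))
```

Inverse mapping (looping over output pixels rather than input pixels) is the standard choice because it leaves no holes in the warped image.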
Affiliation(s)
- Wei Shao
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States; Department of Medicine, University of Florida, Gainesville, FL, 32610, United States.
- Sulaiman Vesal
- Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Simon J C Soerensen
- Department of Urology, Stanford University, Stanford, CA, 94305, United States; Department of Epidemiology and Population Health, Stanford University, Stanford, CA, 94305, United States
- Indrani Bhattacharya
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- Negar Golestani
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- Rikiya Yamashita
- Department of Biomedical Data Science, Stanford University, Stanford, CA, 94305, United States
- Christian A Kunder
- Department of Pathology, Stanford University, Stanford, CA, 94305, United States
- Richard E Fan
- Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Pejman Ghanouni
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- James D Brooks
- Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Geoffrey A Sonn
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States; Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Mirabela Rusu
- Department of Radiology, Stanford University, Stanford, CA, 94305, United States.
6
Li L, Shiradkar R, Gottlieb N, Buzzy C, Hiremath A, Viswanathan VS, MacLennan GT, Omil Lima D, Gupta K, Shen DL, Tirumani SH, Magi-Galluzzi C, Purysko A, Madabhushi A. Multi-scale statistical deformation based co-registration of prostate MRI and post-surgical whole mount histopathology. Med Phys 2024; 51:2549-2562. [PMID: 37742344] [PMCID: PMC10960735] [DOI: 10.1002/mp.16753]
Abstract
BACKGROUND Accurate delineations of regions of interest (ROIs) on multi-parametric magnetic resonance imaging (mpMRI) are crucial for development of automated, machine learning-based prostate cancer (PCa) detection and segmentation models. However, manual ROI delineations are labor-intensive and susceptible to inter-reader variability. Histopathology images from radical prostatectomy (RP) represent the "gold standard" in terms of the delineation of disease extents, for example, PCa, prostatitis, and benign prostatic hyperplasia (BPH). Co-registering digitized histopathology images onto pre-operative mpMRI enables automated mapping of the ground truth disease extents onto mpMRI, thus enabling the development of machine learning tools for PCa detection and risk stratification. Still, MRI-histopathology co-registration is challenging due to various artifacts and large deformation between in vivo MRI and ex vivo whole-mount histopathology images (WMHs). Furthermore, the artifacts on WMHs, such as tissue loss, may introduce unrealistic deformation during co-registration. PURPOSE This study presents a new registration pipeline, MSERgSDM, a multi-scale feature-based registration (MSERg) with a statistical deformation model (SDM) constraint, which aims to improve accuracy of MRI-histopathology co-registration. METHODS In this study, we collected 85 pairs of MRI and WMHs from 48 patients across three cohorts. Cohort 1 (D1), comprising a unique set of 3D printed mold data from six patients, facilitated the generation of ground truth deformations between ex vivo WMHs and in vivo MRI. The other two clinically acquired cohorts (D2 and D3) included 42 patients. Affine and nonrigid registrations were employed to minimize the deformation between ex vivo WMH and ex vivo T2-weighted MRI (T2WI) in D1. Subsequently, ground truth deformation between in vivo T2WI and ex vivo WMH was approximated as the deformation between in vivo T2WI and ex vivo T2WI.
In D2 and D3, the prostate anatomical annotations, for example, tumor and urethra, were made by a pathologist and a radiologist in collaboration. These annotations included ROI boundary contours and landmark points. Before applying the registration, manual corrections were made for flipping and rotation of WMHs. MSERgSDM comprises two main components: (1) multi-scale representation construction, and (2) SDM construction. For the SDM construction, we collected N = 200 reasonable deformation fields generated using MSERg, verified through visual inspection. Three additional methods, including intensity-based registration, ProsRegNet, and MSERg, were also employed for comparison against MSERgSDM. RESULTS Our results suggest that MSERgSDM performed comparably to the ground truth (p > 0.05). Additionally, MSERgSDM (ROI Dice ratio = 0.61, landmark distance = 3.26 mm) exhibited significant improvement over MSERg (ROI Dice ratio = 0.59, landmark distance = 3.69 mm) and ProsRegNet (ROI Dice ratio = 0.56, landmark distance = 4.00 mm) in local alignment. CONCLUSIONS This study presents a novel registration method, MSERgSDM, for mapping ex vivo WMH onto in vivo prostate MRI. Our preliminary results demonstrate that MSERgSDM can serve as a valuable tool to map ground truth disease annotations from histopathology images onto MRI, thereby assisting in the development of machine learning models for PCa detection on MRI.
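A statistical deformation model of the kind used as a constraint here is typically a PCA over training deformation fields: new deformations are projected onto the mean plus a few principal modes. A hedged sketch with random stand-in data (not MSERgSDM's actual SDM construction; field size and mode count are arbitrary):

```python
import numpy as np

def build_sdm(fields, n_modes):
    """Build a statistical deformation model: mean deformation plus the top
    principal modes of N flattened deformation fields (one per row)."""
    X = np.asarray(fields, float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]          # modes are orthonormal rows

def project_to_sdm(field, mean, modes):
    """Constrain an arbitrary field to the SDM subspace, i.e. the closest
    deformation expressible by the learned modes."""
    return mean + ((field - mean) @ modes.T) @ modes

rng = np.random.default_rng(0)
fields = rng.normal(size=(200, 10))    # stand-in for 200 training deformations
mean, modes = build_sdm(fields, n_modes=3)
plausible = project_to_sdm(rng.normal(size=10), mean, modes)
```

The projection is idempotent: a field already in the model subspace is left unchanged, which is what makes the SDM usable as a regularizing constraint.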
Affiliation(s)
- Lin Li
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH
- Rakesh Shiradkar
- Wallace H Coulter Department of Biomedical Engineering at Emory University and Georgia Institute of Technology
- Noah Gottlieb
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH
- Christina Buzzy
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH
- Amogh Hiremath
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH
- Vidya Sankar Viswanathan
- Wallace H Coulter Department of Biomedical Engineering at Emory University and Georgia Institute of Technology
- Gregory T. MacLennan
- Department of Pathology and Urology, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Danly Omil Lima
- Department of Pathology and Urology, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Karishma Gupta
- Department of Pathology and Urology, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Daniel Lee Shen
- Department of Pathology and Urology, Case Western Reserve University, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Andrei Purysko
- Glickman Urological and Kidney Institute, Cleveland Clinic, Cleveland, OH, USA
- Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Anant Madabhushi
- Wallace H Coulter Department of Biomedical Engineering at Emory University and Georgia Institute of Technology
- Atlanta Veterans Administration Medical Center
7
Schouten D, van der Laak J, van Ginneken B, Litjens G. Full resolution reconstruction of whole-mount sections from digitized individual tissue fragments. Sci Rep 2024; 14:1497. [PMID: 38233535] [PMCID: PMC10794243] [DOI: 10.1038/s41598-024-52007-5]
Abstract
Whole-mount sectioning is a technique in histopathology where a full slice of tissue, such as a transversal cross-section of a prostate specimen, is prepared on a large microscope slide without further sectioning into smaller fragments. Although this technique can offer improved correlation with pre-operative imaging and is paramount for multimodal research, it is not commonly employed due to its technical difficulty, associated cost and cumbersome integration in (digital) pathology workflows. In this work, we present a computational tool named PythoStitcher which reconstructs artificial whole-mount sections from digitized tissue fragments, thereby bringing the benefits of whole-mount sections to pathology labs currently unable to employ this technique. Our proposed algorithm consists of a multi-step approach where it (i) automatically determines how fragments need to be reassembled, (ii) iteratively optimizes the stitch using a genetic algorithm and (iii) efficiently reconstructs the final artificial whole-mount section on full resolution (0.25 µm/pixel). PythoStitcher was validated on a total of 198 cases spanning five datasets with a varying number of tissue fragments originating from different organs from multiple centers. PythoStitcher successfully reconstructed the whole-mount section in 86-100% of cases for a given dataset with a residual registration mismatch of 0.65-2.76 mm on automatically selected landmarks. It is expected that our algorithm can aid pathology labs unable to employ whole-mount sectioning through faster clinical case evaluation and improved radiology-pathology correlation workflows.
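Step (ii) of the pipeline iteratively optimizes the stitch with a genetic algorithm. A toy elitist genetic optimizer on a hypothetical two-parameter stitch cost (the cost function and all parameters below are illustrative assumptions, not PythoStitcher's objective):

```python
import numpy as np

def genetic_minimize(cost, dim, pop=50, gens=100, sigma=0.5, seed=0):
    """Toy elitist genetic optimizer: keep the fittest half of the population,
    refill with Gaussian-mutated copies of the survivors, repeat."""
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=5.0, size=(pop, dim))          # random initial population
    for _ in range(gens):
        P = P[np.argsort([cost(x) for x in P])]         # rank by fitness
        parents = P[: pop // 2]                         # elitist selection
        P = np.vstack([parents, parents + rng.normal(scale=sigma, size=parents.shape)])
    return min(P, key=cost)

# Hypothetical stitch cost: squared mismatch to an ideal fragment offset (3, -2).
target = np.array([3.0, -2.0])
best = genetic_minimize(lambda x: float(np.sum((x - target) ** 2)), dim=2)
```

Because the fittest individuals are always retained, the best cost is non-increasing across generations; the real algorithm applies the same idea to fragment rotations and translations.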
Affiliation(s)
- Daan Schouten
- Department of Pathology, Radboud University Medical Centre, Nijmegen, The Netherlands.
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Centre, Nijmegen, The Netherlands
- Bram van Ginneken
- Department of Radiology, Radboud University Medical Centre, Nijmegen, The Netherlands
- Geert Litjens
- Department of Pathology, Radboud University Medical Centre, Nijmegen, The Netherlands
8
Wang X, Song Z, Zhu J, Li Z. Correlation Attention Registration Based on Deep Learning from Histopathology to MRI of Prostate. Crit Rev Biomed Eng 2024; 52:39-50. [PMID: 38305277] [DOI: 10.1615/critrevbiomedeng.2023050566]
Abstract
Deep learning offers a promising methodology for the registration of prostate cancer images from histopathology to MRI. We explored how to effectively leverage key information from images to achieve improved end-to-end registration. We developed an approach based on a correlation attention registration framework to register segmentation labels of histopathology onto MRI. The network was trained using paired prostate datasets of histopathology and MRI from the Cancer Imaging Archive. We introduced an L2-Pearson correlation layer to enhance feature matching. Furthermore, our model employed an enhanced attention regression network to distinguish between key and nonkey features. For data analysis, we used the Kolmogorov-Smirnov test and a one-sample t-test, with the statistical significance level for the one-sample t-test set at 0.001. Compared with two other models (ProsRegNet and CNNGeo), our model exhibited improved performance in Dice coefficient, with increases of 9.893% and 2.753%, respectively. The Hausdorff distance was reduced by approximately 50% relative to each model, while the average label error (ALE) was reduced by 0.389% and 15.021%, respectively. The proposed improved multimodal prostate registration framework demonstrated high performance in statistical analysis. The results indicate that our enhanced strategy significantly improves registration performance and enables faster registration of histopathological images of patients undergoing radical prostatectomy to preoperative MRI. More accurate registration can prevent over-diagnosing low-risk cancers and frequent false positives due to observer differences.
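An L2-Pearson correlation layer compares feature vectors across spatial locations via normalized correlation. A numpy sketch of a dense Pearson correlation map between two feature maps (an interpretation of the idea, not the authors' layer):

```python
import numpy as np

def pearson_correlation_map(f1, f2):
    """Dense Pearson correlation between two CxN feature maps: entry (i, j)
    correlates the channel vector at site i of f1 with site j of f2."""
    def standardize(f):
        f = f - f.mean(axis=0, keepdims=True)            # center over channels
        return f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    return standardize(f1).T @ standardize(f2)           # N1 x N2, values in [-1, 1]

rng = np.random.default_rng(1)
feat = rng.normal(size=(16, 5))                 # 16 channels, 5 spatial sites
corr = pearson_correlation_map(feat, feat)      # self-correlation: diagonal ≈ 1
```

The resulting correlation volume is what a downstream regression network can consume to predict transform parameters.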
Affiliation(s)
- Xue Wang
- Shanghai Institute of Technology
- Zhili Song
- School of Computer Science and Information Engineering, Shanghai Institute of Technology, Shanghai, 201418, China
- Jianlin Zhu
- School of Computer Science and Information Engineering, Shanghai Institute of Technology, Shanghai, 201418, China
- Zhixiang Li
- School of Computer Science and Information Engineering, Shanghai Institute of Technology, Shanghai, 201418, China
9
Lin Y, Liang Z, He Y, Huang W, Guan T. End-to-end affine registration framework for histopathological images with weak annotations. Comput Methods Programs Biomed 2023; 241:107763. [PMID: 37634308] [DOI: 10.1016/j.cmpb.2023.107763]
Abstract
BACKGROUND AND OBJECTIVE Histopathological image registration is an essential component in digital pathology and biomedical image analysis. Deep-learning-based algorithms have been proposed to achieve fast and accurate affine registration. Some previous studies assume that the pairs are free from sizeable initial position misalignment and large rotation angles before performing the affine transformation. However, large rotation angles are often introduced into image pairs during the production process in real-world pathology images. Reliable initial alignment is important for registration performance. The existing deep-learning-based approaches often use a two-step affine registration pipeline because convolutional neural networks (CNNs) cannot correct large-angle rotations. METHODS In this manuscript, a general framework ARoNet is developed to achieve end-to-end affine registration for histopathological images. We use CNNs to extract global features of images and fuse them to construct correspondent information for affine transformation. In ARoNet, a rotation recognition network is implemented to eliminate large rotation misalignment. In addition, a self-supervised learning task is proposed to assist the learning of image representations in an unsupervised manner. RESULTS We applied our model to four datasets, and the results indicate that ARoNet surpasses existing affine registration algorithms in alignment accuracy when large angular misalignments (e.g., 180° rotation) are present, providing accurate affine initialization for subsequent non-rigid alignments. Besides, ARoNet shows advantages in execution time (0.05 s per pair), registration accuracy, and robustness. CONCLUSION We believe that the proposed general framework promises to simplify and speed up the registration process and has the potential for clinical applications.
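The rotation recognition step can be pictured, in its simplest non-learned form, as testing candidate rotations and keeping the best match. A brute-force stand-in over the four right-angle rotations (ARoNet itself uses a trained network; this sketch is only illustrative):

```python
import numpy as np

def recognize_rotation(moving, fixed):
    """Brute-force stand-in for rotation recognition: try the four right-angle
    rotations of `moving` and return the angle (degrees, counter-clockwise)
    whose rotation best matches `fixed` in sum-of-squares error."""
    best_angle, best_err = 0, np.inf
    for k in range(4):
        err = float(np.sum((np.rot90(moving, k) - fixed) ** 2))
        if err < best_err:
            best_angle, best_err = 90 * k, err
    return best_angle

img = np.arange(16.0).reshape(4, 4)
angle = recognize_rotation(img, np.rot90(img, 2))  # fixed = moving rotated 180°
```

Undoing the recognized rotation first leaves only a small residual misalignment, which a CNN-based affine regressor can then correct.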
Affiliation(s)
- Yuanhua Lin
- Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China
- Zhendong Liang
- Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China
- Yonghong He
- Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China
- Wenting Huang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, 518116, Shenzhen, China
- Tian Guan
- Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China.
10
Wu M, He X, Li F, Zhu J, Wang S, Burstein PD. Weakly supervised volumetric prostate registration for MRI-TRUS image driven by signed distance map. Comput Biol Med 2023; 163:107150. [PMID: 37321103] [DOI: 10.1016/j.compbiomed.2023.107150]
Abstract
Image registration is a fundamental step for MRI-TRUS fusion targeted biopsy. Due to the inherent representational differences between these two image modalities, though, intensity-based similarity losses for registration tend to result in poor performance. To mitigate this, comparison of organ segmentations, functioning as a weak proxy measure of image similarity, has been proposed. Segmentations, though, are limited in their information encoding capabilities. Signed distance maps (SDMs), on the other hand, encode these segmentations into a higher dimensional space where shape and boundary information are implicitly captured, and which, in addition, yield high gradients even for slight mismatches, thus preventing vanishing gradients during deep-network training. Based on these advantages, this study proposes a weakly-supervised deep learning volumetric registration approach driven by a mixed loss that operates both on segmentations and their corresponding SDMs, and which is not only robust to outliers, but also encourages optimal global alignment. Our experimental results, performed on a public prostate MRI-TRUS biopsy dataset, demonstrate that our method outperforms other weakly-supervised registration approaches with a dice similarity coefficient (DSC), Hausdorff distance (HD) and mean surface distance (MSD) of 87.3 ± 11.3, 4.56 ± 1.95 mm, and 0.053 ± 0.026 mm, respectively. We also show that the proposed method effectively preserves the prostate gland's internal structure.
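The signed distance map construction the abstract describes can be sketched with a standard Euclidean distance transform. This is a minimal sketch using `scipy.ndimage`; the sign convention (negative inside the organ, positive outside) is one common choice and is assumed here:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance map of a binary segmentation:
    negative inside the object, positive outside, ~0 at the boundary."""
    mask = np.asarray(mask, bool)
    outside = distance_transform_edt(~mask)  # distance to the object, for background pixels
    inside = distance_transform_edt(mask)    # distance to the background, for object pixels
    return outside - inside

# Toy prostate-like segmentation: a 20x20 square in a 64x64 image.
seg = np.zeros((64, 64))
seg[20:40, 20:40] = 1
sdm = signed_distance_map(seg)
```

Unlike the binary mask itself, the SDM changes value at every pixel when the shapes shift slightly, which is what yields useful gradients for a registration loss even under small misalignments.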
Affiliation(s)
- Menglin Wu
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Xuchen He
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Fan Li
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Jie Zhu
- Senior Department of Urology, The Third Medical Center of Chinese People's Liberation Army General Hospital, Beijing, China
11
Ghezzo S, Neri I, Mapelli P, Savi A, Samanes Gajate AM, Brembilla G, Bezzi C, Maghini B, Villa T, Briganti A, Montorsi F, De Cobelli F, Freschi M, Chiti A, Picchio M, Scifo P. [ 68Ga]Ga-PSMA and [ 68Ga]Ga-RM2 PET/MRI vs. Histopathological Images in Prostate Cancer: A New Workflow for Spatial Co-Registration. Bioengineering (Basel) 2023; 10:953. [PMID: 37627838 PMCID: PMC10451901 DOI: 10.3390/bioengineering10080953] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 08/05/2023] [Accepted: 08/09/2023] [Indexed: 08/27/2023] Open
Abstract
This study proposed a new workflow for co-registering prostate PET images from a dual-tracer PET/MRI study with histopathological images of resected prostate specimens. The method aims to establish an accurate correspondence between PET/MRI findings and histology, facilitating a deeper understanding of PET tracer distribution and enabling advanced analyses like radiomics. To achieve this, images derived by three patients who underwent both [68Ga]Ga-PSMA and [68Ga]Ga-RM2 PET/MRI before radical prostatectomy were selected. After surgery, in the resected fresh specimens, fiducial markers visible on both histology and MR images were inserted. An ex vivo MRI of the prostate served as an intermediate step for co-registration between histological specimens and in vivo MRI examinations. The co-registration workflow involved five steps, ensuring alignment between histopathological images and PET/MRI data. The target registration error (TRE) was calculated to assess the precision of the co-registration. Furthermore, the DICE score was computed between the dominant intraprostatic tumor lesions delineated by the pathologist and the nuclear medicine physician. The TRE for the co-registration of histopathology and in vivo images was 1.59 mm, while the DICE score related to the site of increased intraprostatic uptake on [68Ga]Ga-PSMA and [68Ga]Ga-RM2 PET images was 0.54 and 0.75, respectively. This work shows an accurate co-registration method for histopathological and in vivo PET/MRI prostate examinations that allows the quantitative assessment of dual-tracer PET/MRI diagnostic accuracy at a millimetric scale. This approach may unveil radiotracer uptake mechanisms and identify new PET/MRI biomarkers, thus establishing the basis for precision medicine and future analyses, such as radiomics.
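The two evaluation metrics reported here, target registration error (TRE) over fiducial landmarks and the DICE overlap of delineated lesions, are straightforward to compute. A minimal NumPy sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def target_registration_error(fixed_pts, warped_pts):
    """Mean Euclidean distance between corresponding landmarks
    after registration (same units as the coordinates, e.g., mm)."""
    d = np.linalg.norm(np.asarray(fixed_pts) - np.asarray(warped_pts), axis=1)
    return d.mean()

def dice_score(a, b):
    """DICE overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two landmarks off by 1 mm and 2 mm -> TRE = 1.5 mm.
pts_fixed = np.array([[0.0, 0.0], [10.0, 0.0]])
pts_warped = np.array([[1.0, 0.0], [10.0, 2.0]])
tre = target_registration_error(pts_fixed, pts_warped)

# Two half-overlapping masks -> DICE = 0.5.
m1 = np.zeros((4, 4)); m1[:2] = 1
m2 = np.zeros((4, 4)); m2[1:3] = 1
d = dice_score(m1, m2)
```

TRE measures geometric alignment at specific anatomical points, while DICE measures volumetric agreement of whole delineations; reporting both, as this study does, covers complementary failure modes.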
Affiliation(s)
- Samuele Ghezzo
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Ilaria Neri
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Paola Mapelli
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Annarita Savi
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Ana Maria Samanes Gajate
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Giorgio Brembilla
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Radiology, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Carolina Bezzi
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Beatrice Maghini
- Department of Pathology, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Tommaso Villa
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Alberto Briganti
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Urology, Division of Experimental Oncology, Urological Research Institute, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Francesco Montorsi
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Urology, Division of Experimental Oncology, Urological Research Institute, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Francesco De Cobelli
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Radiology, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Massimo Freschi
- Department of Pathology, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Arturo Chiti
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Maria Picchio
- Faculty of Medicine and Surgery, Vita-Salute San Raffaele University, Via Olgettina 58, 20132 Milan, Italy
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
- Paola Scifo
- Department of Nuclear Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina 60, 20132 Milan, Italy
12
Xu M, Cao L, Lu D, Hu Z, Yue Y. Application of Swarm Intelligence Optimization Algorithms in Image Processing: A Comprehensive Review of Analysis, Synthesis, and Optimization. Biomimetics (Basel) 2023; 8:235. [PMID: 37366829 DOI: 10.3390/biomimetics8020235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Revised: 05/27/2023] [Accepted: 06/01/2023] [Indexed: 06/28/2023] Open
Abstract
Image processing has long been a challenging and active topic in artificial intelligence. With the rise of machine learning and deep learning methods, swarm intelligence algorithms have become a popular research direction, and combining image processing techniques with swarm intelligence algorithms has proven an effective route to improvement. Swarm intelligence algorithms are intelligent computing methods formed by simulating the evolutionary laws, behavioral characteristics, and collective patterns of insects, birds, natural phenomena, and other biological populations; they offer efficient, parallel global optimization capabilities and strong optimization performance. In this paper, the ant colony algorithm, particle swarm optimization algorithm, sparrow search algorithm, bat algorithm, thimble colony algorithm, and other swarm intelligence optimization algorithms are studied in depth. The models, features, improvement strategies, and application fields of these algorithms in image processing, such as image segmentation, image matching, image classification, image feature extraction, and image edge detection, are comprehensively reviewed, and their theoretical research, improvement strategies, and applications are analyzed and compared. Drawing on the current literature, the improvement methods of the above algorithms and their combined application to image processing technology are summarized, and representative algorithms combining swarm intelligence with image segmentation are extracted for tabular analysis. Finally, the unified framework, common characteristics, and differences of swarm intelligence algorithms are summarized, open problems are raised, and future trends are projected.
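As an illustration of the family of methods this review covers, particle swarm optimization can be written in a few lines. This is a minimal textbook-style sketch with commonly used inertia and acceleration constants, not any specific variant discussed in the review:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization: each particle is pulled toward
    its own best-seen position and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))  # positions
    v = np.zeros_like(x)                        # velocities
    pbest = x.copy()                            # per-particle best positions
    pcost = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pcost.argmin()].copy()        # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        cost = np.apply_along_axis(objective, 1, x)
        improved = cost < pcost
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

# Minimize the sphere function f(p) = sum(p^2); optimum at the origin.
best, best_cost = pso(lambda p: np.sum(p ** 2), dim=2)
```

In image applications, the objective would instead score a candidate solution such as a set of segmentation thresholds or match parameters, which is what makes these derivative-free optimizers attractive there.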
Affiliation(s)
- Minghai Xu
- School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
- Li Cao
- School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
- Dongwan Lu
- Intelligent Information Systems Institute, Wenzhou University, Wenzhou 325035, China
- Zhongyi Hu
- Intelligent Information Systems Institute, Wenzhou University, Wenzhou 325035, China
- Yinggao Yue
- School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
- Intelligent Information Systems Institute, Wenzhou University, Wenzhou 325035, China
13
Zou J, Gao B, Song Y, Qin J. A review of deep learning-based deformable medical image registration. Front Oncol 2022; 12:1047215. [PMID: 36568171 PMCID: PMC9768226 DOI: 10.3389/fonc.2022.1047215] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Accepted: 11/08/2022] [Indexed: 12/12/2022] Open
Abstract
The alignment of images through deformable image registration is vital to clinical applications (e.g., atlas creation, image fusion, and tumor targeting in image-guided navigation systems) and is still a challenging problem. Recent progress in the field of deep learning has significantly advanced the performance of medical image registration. In this review, we present a comprehensive survey on deep learning-based deformable medical image registration methods. These methods are classified into five categories: Deep Iterative Methods, Supervised Methods, Unsupervised Methods, Weakly Supervised Methods, and Latest Methods. A detailed review of each category is provided with discussions about contributions, tasks, and inadequacies. We also provide statistical analysis for the selected papers from the point of view of image modality, the region of interest (ROI), evaluation metrics, and method categories. In addition, we summarize 33 publicly available datasets that are used for benchmarking the registration algorithms. Finally, the remaining challenges, future directions, and potential trends are discussed in our review.
Affiliation(s)
- Jing Zou
- Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR, China
14
Lu X, Zhang S, Liu Z, Liu S, Huang J, Kong G, Li M, Liang Y, Cui Y, Yang C, Zhao S. Ultrasonographic pathological grading of prostate cancer using automatic region-based Gleason grading network. Comput Med Imaging Graph 2022; 102:102125. [PMID: 36257091 DOI: 10.1016/j.compmedimag.2022.102125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 08/26/2022] [Accepted: 09/20/2022] [Indexed: 11/05/2022]
Abstract
The Gleason scoring system is a reliable method for quantifying the aggressiveness of prostate cancer, providing an important reference for clinical decisions on therapeutic strategies. However, to the best of our knowledge, no study has addressed the pathological grading of prostate cancer from single ultrasound images. In this work, a novel Automatic Region-based Gleason Grading (ARGG) network for prostate cancer based on deep learning is proposed. ARGG consists of two stages: (1) a region labeling object detection (RLOD) network labels the prostate cancer lesion region; (2) a Gleason grading network (GNet) performs pathological grading of prostate ultrasound images. In RLOD, a new feature fusion structure, the Skip-connected Feature Pyramid Network (CFPN), is proposed as an auxiliary branch that enhances the fusion of high-level and low-level features, helping to detect small lesions and extract image detail. In GNet, we designed a synchronized pulse enhancement module (SPEM) based on pulse-coupled neural networks to enhance the RLOD detection results for use as training samples; the enhanced results and the originals are then fed into the channel attention classification network (CACN), which introduces an attention mechanism to benefit the prediction of cancer grading. Experiments on a dataset of prostate ultrasound images collected from hospitals show that the proposed Gleason grading model outperforms manual diagnosis by physicians, with a precision of 0.830. In addition, we evaluated the lesion detection performance of RLOD, which achieves a mean Dice metric of 0.815.
Affiliation(s)
- Xu Lu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China; Pazhou Lab, Guangzhou 510330, China
- Shulian Zhang
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Zhiyong Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Shaopeng Liu
- Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Jun Huang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Guoquan Kong
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Mingzhu Li
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yinying Liang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Yunneng Cui
- Department of Radiology, Foshan Maternity and Children's Healthcare Hospital Affiliated to Southern Medical University, Foshan 528000, China
- Chuan Yang
- Department of Ultrasonography, The First Affiliated Hospital of Jinan University, Guangzhou 510630, China
- Shen Zhao
- Department of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China
15
Ruchti A, Neuwirth A, Lowman AK, Duenweg SR, LaViolette PS, Bukowy JD. Homologous point transformer for multi-modality prostate image registration. PeerJ Comput Sci 2022; 8:e1155. [PMID: 36532813 PMCID: PMC9748842 DOI: 10.7717/peerj-cs.1155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Accepted: 10/24/2022] [Indexed: 06/17/2023]
Abstract
Registration is the process of transforming images so they are aligned in the same coordinate space. In the medical field, image registration is often used to align multi-modal or multi-parametric images of the same organ. A uniquely challenging subset of medical image registration is cross-modality registration: the task of aligning images captured with different scanning methodologies. In this study, we present a transformer-based deep learning pipeline for performing cross-modality, radiology-pathology image registration of human prostate samples. While existing solutions for multi-modality prostate image registration focus on predicting transform parameters, our pipeline predicts a set of homologous points on the two image modalities. The homologous point registration pipeline achieves better average control point deviation than the current state-of-the-art automatic registration pipeline. It reaches this accuracy without requiring masked MR images, which may enable the approach to achieve similar results in other organ systems and for partial tissue samples.
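Once homologous point pairs are predicted on the two modalities, an aligning affine transform can be recovered from them by least squares. A minimal NumPy sketch of that final step (the authors' pipeline may estimate or apply the transform differently):

```python
import numpy as np

def affine_from_points(src, dst):
    """Least-squares 2D affine transform (2x3 matrix) mapping
    src homologous points onto dst homologous points."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                    # (n, 3) homogeneous source points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)   # solve X @ A ≈ dst, A is (3, 2)
    return A.T                                    # (2, 3) affine matrix

# Synthetic check: points related by a known scale of 2 and translation (3, -1).
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
dst = src * 2.0 + np.array([3.0, -1.0])
M = affine_from_points(src, dst)
```

With more than three non-collinear point pairs the system is overdetermined, so the least-squares solution averages out per-point prediction noise rather than fitting any single pair exactly.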
Affiliation(s)
- Alexander Ruchti
- Department of Electrical Engineering and Computer Science, Milwaukee School of Engineering, Milwaukee, WI, United States
- Alexander Neuwirth
- Department of Electrical Engineering and Computer Science, Milwaukee School of Engineering, Milwaukee, WI, United States
- Allison K. Lowman
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States
- Savannah R. Duenweg
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, WI, United States
- Peter S. LaViolette
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI, United States
- Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, WI, United States
- John D. Bukowy
- Department of Electrical Engineering and Computer Science, Milwaukee School of Engineering, Milwaukee, WI, United States
16
Ji J, Wan T, Chen D, Wang H, Zheng M, Qin Z. A deep learning method for automatic evaluation of diagnostic information from multi-stained histopathological images. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
17
Shao L, Liu Z, Liu J, Yan Y, Sun K, Liu X, Lu J, Tian J. Patient-level grading prediction of prostate cancer from mp-MRI via GMINet. Comput Biol Med 2022; 150:106168. [PMID: 36240594 DOI: 10.1016/j.compbiomed.2022.106168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 09/21/2022] [Accepted: 10/01/2022] [Indexed: 11/03/2022]
Abstract
Magnetic resonance imaging (MRI) is considered the best imaging modality for non-invasive observation of prostate cancer. However, existing quantitative analysis methods still face challenges in patient-level prediction, including accuracy, interpretability, context understanding, dependence on tumor delineation, and multi-sequence fusion. We therefore propose a topological graph-guided multi-instance network (GMINet) to capture the global contextual information of multi-parametric MRI for patient-level prediction. We integrate visual information from multi-slice MRI with slice-to-slice correlations for a more complete context, and propose a novel attention-flowing strategy to fuse the different MRI-based network branches of mp-MRI. Our method achieves state-of-the-art performance for prostate cancer on a multi-center dataset (N = 478) and a public dataset (N = 204): the five-class Grade Group accuracy is 81.1 ± 1.8% (multi-center dataset) and the area under the curve for detecting clinically significant prostate cancer is 0.801 ± 0.018 (public dataset), each on the test set of five-fold cross-validation. The model also achieves tumor detection via attention analysis, which improves its interpretability. The proposed method promises to further improve the accuracy of MRI-based prediction in the diagnosis and treatment of prostate cancer.
Affiliation(s)
- Lizhi Shao
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- Jiangang Liu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China and Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China
- Ye Yan
- Department of Urology, Peking University Third Hospital, Beijing, 100191, China
- Kai Sun
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Xiangyu Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Jian Lu
- Department of Urology, Peking University Third Hospital, Beijing, 100191, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China and Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China
18
Naik N, Tokas T, Shetty DK, Hameed BZ, Shastri S, Shah MJ, Ibrahim S, Rai BP, Chłosta P, Somani BK. Role of Deep Learning in Prostate Cancer Management: Past, Present and Future Based on a Comprehensive Literature Review. J Clin Med 2022; 11:jcm11133575. [PMID: 35806859 PMCID: PMC9267773 DOI: 10.3390/jcm11133575] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2022] [Revised: 06/07/2022] [Accepted: 06/18/2022] [Indexed: 11/16/2022] Open
Abstract
This review aims to present the applications of deep learning (DL) in prostate cancer diagnosis and treatment. Computer vision is becoming an increasingly large part of our daily lives due to advancements in technology. These advancements in computational power have allowed more extensive and more complex DL models to be trained on large datasets. Urologists have found these technologies help them in their work, and many such models have been developed to aid in the identification, treatment and surgical practices in prostate cancer. This review will present a systematic outline and summary of these deep learning models and technologies used for prostate cancer management. A literature search was carried out for English language articles over the last two decades from 2000–2021, and present in Scopus, MEDLINE, Clinicaltrials.gov, Science Direct, Web of Science and Google Scholar. A total of 224 articles were identified on the initial search. After screening, 64 articles were identified as related to applications in urology, from which 24 articles were identified to be solely related to the diagnosis and treatment of prostate cancer. The constant improvement in DL models should drive more research focusing on deep learning applications. The focus should be on improving models to the stage where they are ready to be implemented in clinical practice. Future research should prioritize developing models that can train on encrypted images, allowing increased data sharing and accessibility.
Affiliation(s)
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
- Theodoros Tokas
- Department of Urology and Andrology, General Hospital Hall i.T., Milser Str. 10, 6060 Hall in Tirol, Austria
- Dasharathraj K. Shetty
- Department of Humanities and Management, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- B.M. Zeeshan Hameed
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
- Department of Urology, Father Muller Medical College, Mangalore 575002, Karnataka, India
- Sarthak Shastri
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Milap J. Shah
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
- Robotics and Urooncology, Max Hospital and Max Institute of Cancer Care, New Delhi 110024, India
- Sufyan Ibrahim
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
- Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Bhavan Prasad Rai
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Piotr Chłosta
- Department of Urology, Jagiellonian University in Krakow, Gołębia 24, 31-007 Kraków, Poland
- Bhaskar K. Somani
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
19
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:diagnostics12061489. [PMID: 35741298 PMCID: PMC9222056 DOI: 10.3390/diagnostics12061489] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 06/14/2022] [Accepted: 06/14/2022] [Indexed: 12/27/2022] Open
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has rapidly developed and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes the recent deep-learning approaches relevant to precision oncology and reviews over 150 articles within the last six years. First, we survey the deep-learning approaches categorized by various precision oncology tasks, including the estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Secondly, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, gastric, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
20
Khawaled S, Freiman M. NPBDREG: Uncertainty Assessment in Diffeomorphic Brain MRI Registration using a Non-parametric Bayesian Deep-Learning Based Approach. Comput Med Imaging Graph 2022; 99:102087. [DOI: 10.1016/j.compmedimag.2022.102087] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 04/28/2022] [Accepted: 05/31/2022] [Indexed: 10/18/2022]
21
Bhattacharya I, Lim DS, Aung HL, Liu X, Seetharaman A, Kunder CA, Shao W, Soerensen SJC, Fan RE, Ghanouni P, To'o KJ, Brooks JD, Sonn GA, Rusu M. Bridging the gap between prostate radiology and pathology through machine learning. Med Phys 2022; 49:5160-5181. [PMID: 35633505 PMCID: PMC9543295 DOI: 10.1002/mp.15777] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 05/10/2022] [Accepted: 05/10/2022] [Indexed: 11/27/2022] Open
Abstract
Background Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, magnetic resonance imaging (MRI) is considered the most sensitive non-invasive imaging modality that enables visualization, detection, and localization of prostate cancer, and is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited due to high rates of false positives and false negatives as well as low inter-reader agreement. Purpose Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture, but also in the ground truth labeling strategies used for model training. We compare different labeling strategies and the effects they have on the performance of different machine learning models for prostate cancer detection on MRI. Methods Four different deep learning models (SPCNet, U-Net, branched U-Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients with radical prostatectomy, and evaluated using 40 patients with radical prostatectomy and 275 patients with targeted biopsy. Each deep learning model was trained with four different label types: pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels (from a previously validated deep learning algorithm that predicts pixel-level Gleason patterns on histopathology images) on whole-mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre-operative MRI using an automated MRI-histopathology registration platform. Results Radiologist labels missed cancers (ROC-AUC: 0.75-0.84), had lower lesion volumes (~68% of pathology lesions), and lower Dice overlaps (0.24-0.28) when compared with pathology labels.
Consequently, machine learning models trained with radiologist labels also showed inferior performance compared to models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC-AUC: 0.97-1, lesion Dice: 0.75-0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC-AUC: 0.91-0.94), and had generalizable and comparable performance to pathologist-label-trained models in the targeted biopsy cohort (aggressive lesion ROC-AUC: 0.87-0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel-level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human-annotated label type. Conclusions Machine learning models for prostate MRI interpretation that were trained with digital pathologist labels showed higher or comparable performance relative to pathologist-label-trained models in both the radical prostatectomy and targeted biopsy cohorts. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, and inter- and intra-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- David S Lim
- Department of Computer Science, Stanford University, Stanford, CA 94305
- Han Lin Aung
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
- Xingchen Liu
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
- Arun Seetharaman
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305
- Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA 94305
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Simon J C Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA 94305
- Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Katherine J To'o
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA 94304
- James D Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Geoffrey A Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
22
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:diagnostics12020289. [PMID: 35204380 PMCID: PMC8870978 DOI: 10.3390/diagnostics12020289] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 12/31/2021] [Accepted: 01/14/2022] [Indexed: 02/04/2023] Open
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI with real-time anatomical depiction using ultrasound or computed tomography, enabling the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation and improve co-registration across imaging modalities, enhancing diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
23
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889 PMCID: PMC9554123 DOI: 10.1177/17562872221128791] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 08/30/2022] [Indexed: 11/07/2022] Open
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis and their current limitations, including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
24
Xiao H, Teng X, Liu C, Li T, Ren G, Yang R, Shen D, Cai J. A review of deep learning-based three-dimensional medical image registration methods. Quant Imaging Med Surg 2021; 11:4895-4916. [PMID: 34888197 PMCID: PMC8611468 DOI: 10.21037/qims-21-175] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Accepted: 07/15/2021] [Indexed: 01/10/2023]
Abstract
Medical image registration is a vital component of many medical procedures, such as image-guided radiotherapy (IGRT), as it allows for more accurate dose delivery and better management of side effects. Recently, the successful implementation of deep learning (DL) in various fields has prompted many research groups to apply DL to three-dimensional (3D) medical image registration, and several of these efforts have led to promising results. This review summarizes the progress made in DL-based 3D image registration over the past 5 years and identifies existing challenges and potential avenues for further research. The collected studies were statistically analyzed based on the region of interest (ROI), image modality, supervision method, and registration evaluation metrics, and were classified into three categories: deep iterative registration, supervised registration, and unsupervised registration. The studies are thoroughly reviewed and their unique contributions highlighted. Each category is followed by a summary discussing its advantages, challenges, and trends. Finally, the challenges common to all categories are discussed, and potential future research topics are identified.
Affiliation(s)
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Xinzhi Teng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
25
Gassenmaier S, Küstner T, Nickel D, Herrmann J, Hoffmann R, Almansour H, Afat S, Nikolaou K, Othman AE. Deep Learning Applications in Magnetic Resonance Imaging: Has the Future Become Present? Diagnostics (Basel) 2021; 11:2181. [PMID: 34943418 PMCID: PMC8700442 DOI: 10.3390/diagnostics11122181] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 11/18/2021] [Accepted: 11/22/2021] [Indexed: 12/11/2022] Open
Abstract
Deep learning technologies and applications represent one of the most important emerging developments in radiology, and their impact on image acquisition and reporting may change daily clinical practice. The aim of this review was to present current deep learning technologies, with a focus on magnetic resonance image reconstruction. The first part of this manuscript concentrates on the basic technical principles that are necessary for deep learning image reconstruction. The second part highlights the translation of these techniques into clinical practice. The third part outlines the different aspects of image reconstruction techniques and presents a review of the current literature regarding image reconstruction and image post-processing in MRI. The promising results of the most recent studies indicate that deep learning will be a major player in radiology in the upcoming years. Apart from decision and diagnosis support, the major advantages of deep learning magnetic resonance imaging reconstruction techniques are related to acquisition time reduction and the improvement of image quality. The implementation of these techniques may alleviate limited scanner availability via workflow acceleration. It can be assumed that this disruptive technology will change daily routines and workflows permanently.
Affiliation(s)
- Sebastian Gassenmaier
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Thomas Küstner
- Department of Diagnostic and Interventional Radiology, Medical Image and Data Analysis (MIDAS.lab), Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Dominik Nickel
- MR Applications Predevelopment, Siemens Healthcare GmbH, Allee am Roethelheimpark 2, 91052 Erlangen, Germany
- Judith Herrmann
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Rüdiger Hoffmann
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Haidara Almansour
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Saif Afat
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Konstantin Nikolaou
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Ahmed E. Othman
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany
- Department of Neuroradiology, University Medical Center, 55131 Mainz, Germany
26
Connolly L, Jamzad A, Kaufmann M, Farquharson CE, Ren K, Rudan JF, Fichtinger G, Mousavi P. Combined Mass Spectrometry and Histopathology Imaging for Perioperative Tissue Assessment in Cancer Surgery. J Imaging 2021; 7:203. [PMID: 34677289 PMCID: PMC8539093 DOI: 10.3390/jimaging7100203] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 09/28/2021] [Accepted: 09/30/2021] [Indexed: 12/16/2022] Open
Abstract
Mass spectrometry is an effective imaging tool for evaluating biological tissue to detect cancer. With the assistance of deep learning, this technology can be used as a perioperative tissue assessment tool that will facilitate informed surgical decisions. Achieving such a system requires the development of a database of mass spectrometry signals and their corresponding pathology labels. Assigning correct labels, in turn, necessitates precise spatial registration of histopathology and mass spectrometry data. This is a challenging task due to the domain differences and noisy nature of the images. In this study, we create a registration framework for mass spectrometry and pathology images as a contribution to the development of perioperative tissue assessment. In doing so, we explore two opportunities in deep learning for medical image registration, namely, unsupervised multi-modal deformable image registration and evaluation of the registration. We test this system on prostate needle biopsy cores that were imaged with desorption electrospray ionization mass spectrometry (DESI) and show that we can successfully register DESI and histology images to achieve accurate alignment and, consequently, labelling for future training. This automation is expected to improve the efficiency and development of a deep learning architecture that will benefit the use of mass spectrometry imaging for cancer diagnosis.
Affiliation(s)
- Laura Connolly
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Amoon Jamzad
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Martin Kaufmann
- Department of Surgery, Queen’s University, Kingston, ON K7L 3N6, Canada
- Catriona E. Farquharson
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Kevin Ren
- Department of Pathology and Molecular Medicine, Queen’s University, Kingston, ON K7L 3N6, Canada
- John F. Rudan
- Department of Surgery, Queen’s University, Kingston, ON K7L 3N6, Canada
- Gabor Fichtinger
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Parvin Mousavi
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
27
Zimmerman BE, Johnson SL, Odéen HA, Shea JE, Factor RE, Joshi SC, Payne AH. Histology to 3D in vivo MR registration for volumetric evaluation of MRgFUS treatment assessment biomarkers. Sci Rep 2021; 11:18923. [PMID: 34556678 PMCID: PMC8460731 DOI: 10.1038/s41598-021-97309-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Accepted: 08/24/2021] [Indexed: 11/09/2022] Open
Abstract
Advances in imaging and early cancer detection have increased interest in magnetic resonance (MR) guided focused ultrasound (MRgFUS) technologies for cancer treatment. MRgFUS ablation treatments could reduce surgical risks, preserve organ tissue and function, and improve patient quality of life. However, surgical resection and histological analysis remain the gold standard to assess cancer treatment response. For non-invasive ablation therapies such as MRgFUS, the treatment response must be determined through MR imaging biomarkers. However, current MR biomarkers are inconclusive and have not been rigorously evaluated against histology via accurate registration. Existing registration methods rely on anatomical features to directly register in vivo MR and histology. For MRgFUS applications in anatomies such as liver, kidney, or breast, anatomical features that are not caused by the treatment are often insufficient to drive direct registration. We present a novel MR to histology registration workflow that utilizes intermediate imaging and does not rely on anatomical MR features being visible in histology. The presented workflow yields an overall registration accuracy of 1.00 ± 0.13 mm. The developed registration pipeline is used to evaluate a common MRgFUS treatment assessment biomarker against histology. Evaluating MR biomarkers against histology using this registration pipeline will facilitate validating novel MRgFUS biomarkers to improve treatment assessment without surgical intervention. While the presented registration technique has been evaluated in a MRgFUS ablation treatment model, this technique could be potentially applied in any tissue to evaluate a variety of therapeutic options.
Affiliation(s)
- Blake E Zimmerman
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
- Sara L Johnson
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA
- Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
- Henrik A Odéen
- Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
- Jill E Shea
- Department of Surgery, University of Utah, Salt Lake City, UT, USA
- Rachel E Factor
- Department of Pathology, University of Utah, Salt Lake City, UT, USA
- Sarang C Joshi
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
- Allison H Payne
- Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
28
Gassenmaier S, Afat S, Nickel MD, Mostapha M, Herrmann J, Almansour H, Nikolaou K, Othman AE. Accelerated T2-Weighted TSE Imaging of the Prostate Using Deep Learning Image Reconstruction: A Prospective Comparison with Standard T2-Weighted TSE Imaging. Cancers (Basel) 2021; 13:cancers13143593. [PMID: 34298806 PMCID: PMC8303682 DOI: 10.3390/cancers13143593] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 07/07/2021] [Accepted: 07/15/2021] [Indexed: 12/22/2022] Open
Abstract
Multiparametric MRI (mpMRI) of the prostate has become the standard of care in prostate cancer evaluation. Recently, deep learning image reconstruction (DLR) methods have been introduced with promising results regarding scan acceleration. Therefore, the aim of this study was to investigate the impact of DLR in a shortened acquisition process of T2-weighted TSE imaging, regarding image quality and diagnostic confidence, as well as PI-RADS and T2 scoring, compared to standard T2-weighted TSE imaging. Sixty patients undergoing 3T mpMRI for the evaluation of prostate cancer were prospectively enrolled in this institutional review board-approved study between October 2020 and March 2021. After the acquisition of standard T2 TSE imaging (T2S), the novel T2 TSE sequence with DLR (T2DLR) was applied in three planes. The acquisition time was 10:21 min for T2S versus 3:50 min for T2DLR. Image evaluation was performed by two radiologists independently using a Likert scale ranging from 1 to 4 (4 = best) applying the following criteria: noise levels, artifacts, overall image quality, diagnostic confidence, and lesion conspicuity. Additionally, T2 and PI-RADS scoring were performed. The mean patient age was 69 ± 9 years (range, 49-85 years). Noise levels and the extent of artifacts were rated significantly better in T2DLR versus T2S by both readers (p < 0.05). Overall image quality was also rated superior for T2DLR versus T2S in all three acquisition planes (p = 0.005 to <0.001). Both readers rated lesion conspicuity as superior in T2DLR, with a median of 4 versus a median of 3 in T2S (p = 0.001 and <0.001, respectively). T2-weighted TSE imaging of the prostate in three planes with an acquisition time reduction of more than 60%, including DLR, is feasible with a significant improvement of image quality.
Affiliation(s)
- Sebastian Gassenmaier
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany
- Saif Afat
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany
- Mahmoud Mostapha
- Digital Technology & Innovation, Siemens Medical Solutions USA, Inc., Princeton, NJ 08540, USA
- Judith Herrmann
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany
- Haidara Almansour
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany
- Konstantin Nikolaou
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany
- Cluster of Excellence iFIT (EXC 2180) “Image Guided and Functionally Instructed Tumor Therapies”, University of Tuebingen, 72076 Tuebingen, Germany
- Ahmed E. Othman
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tuebingen, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany
- Department of Neuroradiology, University Medical Centre, Johannes Gutenberg University Mainz, 55131 Mainz, Germany
- Correspondence: Tel.: +49-7071-29-68624; Fax: +49-7071-29-5845
29
Seetharaman A, Bhattacharya I, Chen LC, Kunder CA, Shao W, Soerensen SJC, Wang JB, Teslovich NC, Fan RE, Ghanouni P, Brooks JD, Too KJ, Sonn GA, Rusu M. Automated detection of aggressive and indolent prostate cancer on magnetic resonance imaging. Med Phys 2021; 48:2960-2972. [PMID: 33760269 PMCID: PMC8360053 DOI: 10.1002/mp.14855] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Revised: 01/31/2021] [Accepted: 03/16/2021] [Indexed: 01/05/2023] Open
Abstract
PURPOSE While multi-parametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including six patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity approaching those of radiologists in detecting clinically significant cancer. CONCLUSIONS Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
Affiliation(s)
- Arun Seetharaman
- Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Leo C Chen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Simon J C Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Department of Urology, Aarhus University Hospital, Aarhus, Denmark
- Jeffrey B Wang
- Stanford University School of Medicine, Stanford, CA, 94305, USA
- Nikola C Teslovich
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- James D Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Katherine J Too
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA, 94304, USA
- Geoffrey A Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
30
The impact of the co-registration technique and analysis methodology in comparison studies between advanced imaging modalities and whole-mount-histology reference in primary prostate cancer. Sci Rep 2021; 11:5836. [PMID: 33712662 PMCID: PMC7954803 DOI: 10.1038/s41598-021-85028-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2020] [Accepted: 02/24/2021] [Indexed: 12/17/2022] Open
Abstract
Comparison studies using histopathology as the standard of reference enable validation of the diagnostic performance of imaging methods. This study analysed (1) the impact of different image-histopathology co-registration pathways, (2) the impact of the applied data analysis method, and (3) intraindividually compared multiparametric magnetic resonance imaging (mpMRI) and prostate-specific membrane antigen positron emission tomography (PSMA-PET) using the different approaches. Ten patients with primary prostate cancer (PCa) who underwent mpMRI and [18F]PSMA-1007 PET/CT followed by prostatectomy were prospectively enrolled. We demonstrate that the choice of the intermediate registration step [(1) via ex-vivo CT or (2) mpMRI] does not significantly affect the performance of the registration framework. Comparison of analysis methods revealed that methods with high spatial resolution, e.g. quadrant-based slice-by-slice analysis, are beneficial for a differentiated analysis of performance compared to methods with lower resolution (segment-based analysis with 6 or 18 segments, and lesion-based analysis). Furthermore, PSMA-PET outperformed mpMRI for intraprostatic PCa detection in terms of sensitivity (median %: 83-85 vs. 60-69, p < 0.04) with similar specificity (median %: 74-93.8 vs. 100) using both registration pathways. To conclude, the choice of an intermediate registration pathway does not significantly affect registration performance, analysis methods with high spatial resolution are preferable, and PSMA-PET outperformed mpMRI in terms of sensitivity in our cohort.
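The quadrant-based analysis above reduces each modality to per-region tumor calls that are compared against the co-registered histology reference, after which sensitivity and specificity follow directly from the confusion counts. A minimal sketch of that computation (function name and inputs are illustrative, not from the paper):

```python
def sensitivity_specificity(pred, truth):
    """Per-region tumor calls -> (sensitivity, specificity).

    pred, truth: equal-length sequences of booleans, one entry per
    analysed region (e.g. one per prostate quadrant per slice).
    """
    tp = sum(1 for p, t in zip(pred, truth) if p and t)        # true positives
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)  # true negatives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)    # false positives
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)    # false negatives
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec
```

A finer spatial resolution (more, smaller regions) yields more confusion-matrix entries per patient, which is why the quadrant-based scheme permits a more differentiated performance analysis than 6- or 18-segment schemes.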
31
Sood RR, Shao W, Kunder C, Teslovich NC, Wang JB, Soerensen SJC, Madhuripan N, Jawahar A, Brooks JD, Ghanouni P, Fan RE, Sonn GA, Rusu M. 3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction. Med Image Anal 2021; 69:101957. [PMID: 33550008 DOI: 10.1016/j.media.2021.101957] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Received: 04/22/2020] [Revised: 12/23/2020] [Accepted: 01/04/2021] [Indexed: 12/15/2022]
Abstract
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. 
The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
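PSNR, the metric used above to compare each reconstruction against clinical 3D MRI, is a log-scale function of the mean squared error between the two volumes. A minimal sketch (assuming intensities normalized to a known data range; names are illustrative):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray,
         data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means a closer match."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical volumes
    return 10.0 * np.log10((data_range ** 2) / mse)
```

On a dB scale, the reported gap (32.15 dB vs 30.16 dB) corresponds to roughly a 1.6x lower mean squared error for the super-resolution reconstruction than for BSpline interpolation.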
Affiliation(s)
- Rewa R Sood
- Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305, USA
- Wei Shao
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Christian Kunder
- Department of Pathology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Nikola C Teslovich
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Jeffrey B Wang
- Stanford School of Medicine, 291 Campus Drive, Stanford, CA 94305, USA
- Simon J C Soerensen
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Urology, Aarhus University Hospital, Aarhus, Denmark
- Nikhil Madhuripan
- Department of Radiology, University of Colorado, Aurora, CO 80045, USA
- James D Brooks
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Richard E Fan
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Geoffrey A Sonn
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Mirabela Rusu
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
32
Haryanto T, Suhartanto H, Arymurthy AM, Kusmardi K. Conditional sliding windows: An approach for handling data limitation in colorectal histopathology image classification. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100565] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Indexed: 11/16/2022] Open
33
Rusu M, Shao W, Kunder CA, Wang JB, Soerensen SJC, Teslovich NC, Sood RR, Chen LC, Fan RE, Ghanouni P, Brooks JD, Sonn GA. Registration of presurgical MRI and histopathology images from radical prostatectomy via RAPSODI. Med Phys 2020; 47:4177-4188. [PMID: 32564359 PMCID: PMC7586964 DOI: 10.1002/mp.14337] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Received: 03/12/2020] [Revised: 05/17/2020] [Accepted: 06/08/2020] [Indexed: 01/29/2023] Open
Abstract
PURPOSE Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis; however, subtle differences between cancer and confounding conditions render prostate MRI interpretation challenging. The tissue collected from patients who undergo radical prostatectomy provides a unique opportunity to correlate histopathology images of the prostate with preoperative MRI to accurately map the extent of cancer from histopathology images onto MRI. We seek to develop an open-source, easy-to-use platform to align presurgical MRI and histopathology images of resected prostates in patients who underwent radical prostatectomy to create accurate cancer labels on MRI. METHODS Here, we introduce RAdiology Pathology Spatial Open-Source multi-Dimensional Integration (RAPSODI), the first open-source framework for the registration of radiology and pathology images. RAPSODI relies on three steps. First, it creates a three-dimensional (3D) reconstruction of the histopathology specimen as a digital representation of the tissue before gross sectioning. Second, RAPSODI registers corresponding histopathology and MRI slices. Third, the optimized transforms are applied to the cancer regions outlined on the histopathology images to project those labels onto the preoperative MRI. RESULTS We tested RAPSODI in a phantom study in which we simulated various conditions, for example, tissue shrinkage during fixation. Our experiments showed that RAPSODI can reliably correct multiple artifacts. We also evaluated RAPSODI in 157 patients who underwent radical prostatectomy at three institutions with very different pathology processing and scanning protocols. RAPSODI was evaluated on 907 corresponding histopathology-MRI slices and achieved a Dice coefficient of 0.97 ± 0.01 for the prostate, a Hausdorff distance of 1.99 ± 0.70 mm for the prostate boundary, a urethra deviation of 3.09 ± 1.45 mm, and a landmark deviation of 2.80 ± 0.59 mm between registered histopathology images and MRI.
CONCLUSION Our robust framework successfully mapped the extent of cancer from histopathology slices onto MRI, providing labels for training machine learning methods to detect cancer on MRI.
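The Dice coefficient and Hausdorff distance reported above measure volumetric overlap and worst-case boundary disagreement, respectively, between the registered histopathology and MRI prostate masks. A minimal sketch of both metrics (brute-force Hausdorff over boundary points; names are illustrative, not RAPSODI's implementation):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return float(2.0 * np.logical_and(a, b).sum() / denom) if denom else 1.0

def hausdorff(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, dim) point sets,
    e.g. boundary voxel coordinates converted to mm."""
    # pairwise distances: d[i, j] = ||points_a[i] - points_b[j]||
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    # farthest point of each set from its nearest neighbor in the other
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Dice near 1 with a small Hausdorff distance, as reported for RAPSODI, indicates that the registered prostate outlines agree both in bulk and along the boundary.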
Affiliation(s)
- Mirabela Rusu
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Wei Shao
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Christian A. Kunder
- Department of Pathology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Simon J. C. Soerensen
- Department of Urology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Department of Urology, Aarhus University Hospital, Aarhus, Denmark
- Nikola C. Teslovich
- Department of Urology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Rewa R. Sood
- Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Leo C. Chen
- Department of Urology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Richard E. Fan
- Department of Urology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Pejman Ghanouni
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- James D. Brooks
- Department of Urology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Geoffrey A. Sonn
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Department of Urology, School of Medicine, Stanford University, Stanford, CA, 94305, USA
- Mirabela Rusu
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, 94305, USA