1
Stirling CE, Neeteson NJ, Walker REA, Boyd SK. Deep learning-based automated detection and segmentation of bone and traumatic bone marrow lesions from MRI following an acute ACL tear. Comput Biol Med 2024; 178:108791. [PMID: 38905892] [DOI: 10.1016/j.compbiomed.2024.108791]
Abstract
INTRODUCTION Traumatic bone marrow lesions (BML) are frequently identified on knee MRI scans in patients following an acute, complete ACL tear. BMLs coincide with regions of elevated localized bone loss, and studies suggest they may act as a precursor to the development of post-traumatic osteoarthritis. This study addresses the labour-intensive manual assessment of BMLs by using a 3D U-Net for automated identification and segmentation from MRI scans. METHODS A multi-task learning approach was used to segment both bone and BML from T2 fat-suppressed (FS) fast spin echo (FSE) MRI sequences for BML assessment. The model was trained and tested on datasets from individuals with complete ACL tears using five-fold cross-validation; pre-processing involved image intensity normalization and data augmentation. A post-processing algorithm was developed to improve segmentation and remove outliers. The training and testing datasets were acquired from different studies with similar imaging protocols to assess the model's robustness across populations and acquisition conditions. RESULTS The 3D U-Net model was effective for semantic segmentation, and post-processing further improved accuracy and precision through morphological operations. On testing data, the trained model with post-processing achieved a Dice similarity coefficient (DSC) of 0.75 ± 0.08 (mean ± std) and a precision of 0.87 ± 0.07 for BML segmentation, and a DSC of 0.93 ± 0.02 and a precision of 0.92 ± 0.02 for bone segmentation. This demonstrates the approach's high accuracy in capturing true positives while effectively minimizing false positives in the identification and segmentation of bone structures. CONCLUSION Automated segmentation methods are a valuable tool for clinicians and researchers, streamlining the assessment of BMLs and enabling longitudinal assessment. This study presents a model with promising clinical efficacy and provides a quantitative approach for bone-related pathology research and diagnostics.
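For readers unfamiliar with the reported metrics, the following is a minimal sketch (our illustration, not the authors' code) of how the Dice similarity coefficient and precision are typically computed from binary masks; the function names are ours.

```python
# Minimal sketch of the two reported metrics; not the authors' code.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def precision(pred: np.ndarray, truth: np.ndarray) -> float:
    """Precision: TP / (TP + FP), the fraction of predicted voxels that are true."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(pred, truth).sum() / pred.sum() if pred.sum() else 1.0
```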
Affiliation(s)
- Callie E Stirling
- Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, Calgary, Canada; McCaig Institute for Bone and Joint Health, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Nathan J Neeteson
- Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, Calgary, Canada; McCaig Institute for Bone and Joint Health, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Richard E A Walker
- McCaig Institute for Bone and Joint Health, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; Department of Radiology, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
- Steven K Boyd
- Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, Calgary, Canada; McCaig Institute for Bone and Joint Health, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada; Department of Radiology, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada.
2
Chen M, Xing J, Guo L. MRI-based Deep Learning Models for Preoperative Breast Volume and Density Assessment Assisting Breast Reconstruction. Aesthetic Plast Surg 2024. [PMID: 38806828] [DOI: 10.1007/s00266-024-04074-2]
Abstract
BACKGROUND The volume of the implant is the most critical element of breast reconstruction, so it is necessary to accurately assess the preoperative volumes of the healthy and affected breasts and select an appropriate implant for placement. Accurate, automated methods for quantitative assessment of breast volume can optimize breast reconstruction surgery and assist physicians in clinical decision making. The aim of this study was to develop artificial intelligence models for automated segmentation of the breast and measurement of its volume. MATERIAL AND METHODS A total of 249 subjects undergoing breast reconstruction surgery were enrolled in this study. Subjects underwent preoperative breast MRI, and the breast region manually outlined by an imaging physician served as the gold standard for the automated volume measurements. We developed three algorithms for automatic segmentation of breast regions: a simple registration model, a dynamic programming model, and a deep learning model. Volumetric agreement between the three automated segmentation algorithms and the manually segmented breast regions was evaluated by calculating the mean squared error (MSE) and the intraclass correlation coefficient (ICC), and the reproducibility of the automated segmentation was assessed in a test-retest step. RESULTS The three automated segmentation models showed strong agreement with manual segmentation of the breast region, with MSEs of 1.124, 0.693, and 0.781, and ICCs of 0.975 (95% CI, 0.869-0.991), 0.986 (95% CI, 0.967-0.996), and 0.983 (95% CI, 0.961-0.992) for the simple registration, dynamic programming, and deep learning models, respectively. Regarding the test-retest results for breast volume, the dynamic programming model performed best with an MSE of 0.370 and an ICC of 0.993 (95% CI, 0.982-0.997), followed by the deep learning algorithm (MSE 0.741; ICC 0.983, 95% CI, 0.956-0.993) and the simple registration algorithm (MSE 0.763; ICC 0.982, 95% CI, 0.949-0.993). The reproducibility of the breast regions segmented by the three automated algorithms was higher than that of manual segmentation by different radiologists. CONCLUSION The three automated breast segmentation algorithms developed in this study generate accurate and reliable breast regions, enable highly reproducible segmentation and automated volume measurement, and provide a valuable tool for the surgical selection of appropriate prostheses. NO LEVEL ASSIGNED This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
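To make the volumetric comparison concrete, here is a small sketch (an illustration under our own assumptions, not the study's code) of converting a binary breast mask into a volume and scoring paired automated and manual measurements with the MSE.

```python
# Illustrative only: breast volume from a binary mask, plus MSE between methods.
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm) -> float:
    """Volume in millilitres from a binary mask and per-axis voxel spacing (mm)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # 1000 mm^3 = 1 mL

def mse(auto_vols, manual_vols) -> float:
    """Mean squared error between paired volume measurements."""
    a, m = np.asarray(auto_vols, float), np.asarray(manual_vols, float)
    return float(np.mean((a - m) ** 2))
```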
Affiliation(s)
- Muzi Chen
- Department of Plastic and Reconstructive Surgery, The First Medical Center, Chinese PLA General Hospital, Beijing, 100853, China
- Jiahua Xing
- Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, 33 Badachu Road, Shijingshan District, Beijing, 100144, China
- Lingli Guo
- Department of Plastic and Reconstructive Surgery, The First Medical Center, Chinese PLA General Hospital, Beijing, 100853, China.
3
Hausmann D, Lerch A, Hitziger S, Farkas M, Weiland E, Lemke A, Grimm M, Kubik-Huch RA. AI-Supported Autonomous Uterus Reconstructions: First Application in MRI Using 3D SPACE with Iterative Denoising. Acad Radiol 2024; 31:1400-1409. [PMID: 37925344] [DOI: 10.1016/j.acra.2023.09.035]
Abstract
RATIONALE AND OBJECTIVES T2-weighted imaging in at least two orthogonal planes is recommended for assessment of the uterus. The objective was to determine whether a convolutional neural network-based algorithm could be used to reconstruct the uterine axes from a 3D SPACE acquisition with iterative denoising. MATERIALS AND METHODS 50 patients aged 18-81 (mean: 42) years who underwent an MRI examination of the uterus participated voluntarily in this prospective study after informed consent. In addition to a standard pelvic MRI protocol, a 3D SPACE research application sequence was acquired in sagittal orientation. Reconstructions of both the cervix and the cavum in the short and long axes were performed by a research trainee (T), an experienced radiologist (E), and the prototype software (P). The reconstructions were then evaluated anonymously by two experienced readers on 5-point Likert scales. In addition, the length of the cervical canal, the length of the cavum, and the distance between the tubal angles were measured on all reconstructions. Interobserver agreement was assessed for all ratings. RESULTS For all axes, significant differences were found between the scores of the reconstructions by T, E, and P. P received higher scores and was preferred significantly more often, with the exception of the comparison with E's short-axis cervix reconstruction (Cervix short: P vs. T: p = 0.02; P vs. E: p = 0.26; Cervix long: P vs. T: p = 0.01; P vs. E: p < 0.01; Cavum short: P vs. T: p = 0.01; P vs. E: p = 0.02; Cavum long: P vs. T: p < 0.01; P vs. E: p < 0.01). Regarding the measured diameters (length of the cervical canal, length of the cavum, and distance between the tubal angles), significantly larger values were recorded for P than for E and T (Cervix long (mm): T: 25.43; E: 25.65; P: 26.65; Cavum short (mm): T: 26.24; E: 25.04; P: 27.33; Cavum long (mm): T: 31.98; E: 32.91; P: 34.41; P vs. T: p < 0.01; P vs. E: p = 0.04). Moderate to substantial agreement was found between Reader 1 and Reader 2 (range: 0.39-0.67). CONCLUSION P reconstructed the axes at least as well as, or better than, E and T. P could thereby facilitate the workflow and enable more efficient reporting of uterine MRI.
Affiliation(s)
- Daniel Hausmann
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.); Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany (D.H.).
- Aline Lerch
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.); Institute for Translational Medicine, ETH Zurich, Zurich, Switzerland (A.L.); ETH, Department of Health Sciences and Technology (A.L.)
- Monika Farkas
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.)
- Elisabeth Weiland
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany (E.W.)
- Maximilian Grimm
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.)
- Rahel A Kubik-Huch
- Department of Radiology, Kantonsspital Baden, Im Ergel 1, Baden, 5404, Switzerland (D.H., A.L., M.F., M.G., K.H.)
4
Lew CO, Harouni M, Kirksey ER, Kang EJ, Dong H, Gu H, Grimm LJ, Walsh R, Lowell DA, Mazurowski MA. A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast MRI. Sci Rep 2024; 14:5383. [PMID: 38443410] [PMCID: PMC10915139] [DOI: 10.1038/s41598-024-54048-2]
Abstract
Breast density, or the amount of fibroglandular tissue (FGT) relative to the overall breast volume, increases the risk of developing breast cancer. Although previous studies have utilized deep learning to assess breast density, the limited public availability of data and quantitative tools hinders the development of better assessment tools. Our objective was to (1) create and share a large dataset of pixel-wise annotations according to well-defined criteria, and (2) develop, evaluate, and share an automated segmentation method for breast, FGT, and blood vessels using convolutional neural networks. We used the Duke Breast Cancer MRI dataset to randomly select 100 MRI studies and manually annotated the breast, FGT, and blood vessels for each study. Model performance was evaluated using the Dice similarity coefficient (DSC). The model achieved DSC values of 0.92 for breast, 0.86 for FGT, and 0.65 for blood vessels on the test set. The correlation between our model's predicted breast density and the manually generated masks was 0.95. The correlation between the predicted breast density and qualitative radiologist assessment was 0.75. Our automated models can accurately segment breast, FGT, and blood vessels using pre-contrast breast MRI data. The data and the models were made publicly available.
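Once breast and FGT masks exist, the density measure described here reduces to a voxel ratio; the sketch below is our own illustration of that final step, not the released model code.

```python
# Illustration of percent breast density from two binary masks on the same grid.
import numpy as np

def percent_breast_density(fgt_mask: np.ndarray, breast_mask: np.ndarray) -> float:
    """Breast density (%) = FGT voxels / whole-breast voxels."""
    breast_voxels = breast_mask.astype(bool).sum()
    return 100.0 * fgt_mask.astype(bool).sum() / breast_voxels if breast_voxels else 0.0
```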
Affiliation(s)
- Christopher O Lew
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA.
- Majid Harouni
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Ella R Kirksey
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Elianne J Kang
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Haoyu Dong
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Hanxue Gu
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Lars J Grimm
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Ruth Walsh
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Dorothy A Lowell
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
- Maciej A Mazurowski
- Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC, 27710, USA
5
Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114] [PMCID: PMC10894909] [DOI: 10.3389/fonc.2024.1281922]
Abstract
X-ray mammography is currently considered the gold-standard method for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancement of deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements in deep learning-based mammography for breast cancer detection and classification, and highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, it is essential to address the challenges associated with implementing this technology in clinical settings. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that offer accurate diagnosis with high sensitivity and specificity.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
6
Nowakowska S, Borkowski K, Ruppert CM, Landsmann A, Marcon M, Berger N, Boss A, Ciritsis A, Rossi C. Generalizable attention U-Net for segmentation of fibroglandular tissue and background parenchymal enhancement in breast DCE-MRI. Insights Imaging 2023; 14:185. [PMID: 37932462] [PMCID: PMC10628070] [DOI: 10.1186/s13244-023-01531-5]
Abstract
OBJECTIVES To develop automated segmentation models enabling standardized volumetric quantification of fibroglandular tissue (FGT) from native volumes and of background parenchymal enhancement (BPE) from subtraction volumes of dynamic contrast-enhanced breast MRI, and to assess the developed models in the context of FGT and BPE Breast Imaging Reporting and Data System (BI-RADS)-compliant classification. METHODS For the training and validation of attention U-Net models, data from a single 3.0-T scanner were used. For testing, additional data from a 1.5-T scanner and data acquired at a different institution with a 3.0-T scanner were used. The developed models were applied to quantify the amount of FGT and BPE in 80 DCE-MRI examinations, and the correlation between these volumetric measures and the classes assigned by radiologists was assessed. RESULTS To assess model performance with application-relevant metrics, the correlation between the volumes of breast, FGT, and BPE calculated from ground-truth masks and from predicted masks was examined; Pearson correlation coefficients ranging from 0.963 ± 0.004 to 0.999 ± 0.001 were achieved. The Spearman correlation coefficient between the quantitative and qualitative (radiologist-assigned) assessments was 0.70 (p < 0.0001) for FGT, whereas for BPE it was 0.37 (p = 0.0006). CONCLUSIONS Generalizable algorithms for FGT and BPE segmentation were developed and tested. Our results suggest that when assessing FGT, it is sufficient to use volumetric measures alone; however, for the evaluation of BPE, additional models considering the voxels' intensity distribution and morphology are required. CRITICAL RELEVANCE STATEMENT A standardized assessment of FGT density can rely on volumetric measures, whereas in the case of BPE, the volumetric measures constitute, along with the voxels' intensity distribution and morphology, an important factor. KEY POINTS • Our work contributes to the standardization of FGT and BPE assessment. • Attention U-Net can reliably segment intricately shaped FGT and BPE structures. • The developed models were robust to domain shift.
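As a sketch of the two correlation analyses reported here, the snippet below uses SciPy; all numbers are hypothetical placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical values; replace with volumes computed from real masks.
gt_fgt_ml = np.array([112.4, 201.7, 88.9, 154.2, 310.5])    # from ground-truth masks
pred_fgt_ml = np.array([110.8, 205.3, 90.1, 150.9, 305.7])  # from predicted masks
radiologist_class = np.array([2, 3, 1, 2, 4])               # qualitative FGT classes

r, _ = pearsonr(gt_fgt_ml, pred_fgt_ml)             # volumetric agreement
rho, p = spearmanr(pred_fgt_ml, radiologist_class)  # quantitative vs. qualitative
print(f"Pearson r = {r:.3f}; Spearman rho = {rho:.3f} (p = {p:.4f})")
```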
Affiliation(s)
- Sylwia Nowakowska
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland.
- Carlotta M Ruppert
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Anna Landsmann
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Magda Marcon
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Nicole Berger
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Present address: Institut Radiologie, Spital Lachen, Oberdorfstrasse 41, 8853, Lachen, Switzerland
- Andreas Boss
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Present address: GZO AG Spital Wetzikon, Spitalstrasse 66, 8620, Wetzikon, Switzerland
- Alexander Ciritsis
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- b-rayZ AG, Wagistrasse 21, 8952, Schlieren, Switzerland
- Cristina Rossi
- Diagnostic and interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- b-rayZ AG, Wagistrasse 21, 8952, Schlieren, Switzerland
7
Yan S, Li J, Wu W. Artificial intelligence in breast cancer: application and future perspectives. J Cancer Res Clin Oncol 2023; 149:16179-16190. [PMID: 37656245] [DOI: 10.1007/s00432-023-05337-2]
Abstract
Breast cancer is one of the most common cancers and a leading cause of cancer-related death in women worldwide. Early diagnosis and treatment are key to a favorable prognosis. The application of artificial intelligence (AI) technology in the medical field is increasingly extensive, including image analysis, automated diagnosis, intelligent pharmaceutical systems, and personalized treatment. AI-based breast cancer imaging, pathology, and adjuvant therapy technologies can not only reduce the workload of clinicians but also continuously improve the accuracy and sensitivity of breast cancer diagnosis and treatment. This paper reviews the application of AI in breast cancer and looks ahead to the challenges facing the future development of AI for breast cancer detection and therapy, so as to provide directions for future research.
Affiliation(s)
- Shuixin Yan
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Jiadi Li
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Weizhu Wu
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China.
8
Müller-Franzes G, Müller-Franzes F, Huck L, Raaff V, Kemmer E, Khader F, Arasteh ST, Lemainque T, Kather JN, Nebelung S, Kuhl C, Truhn D. Fibroglandular tissue segmentation in breast MRI using vision transformers: a multi-institutional evaluation. Sci Rep 2023; 13:14207. [PMID: 37648728] [PMCID: PMC10468506] [DOI: 10.1038/s41598-023-41331-x]
Abstract
Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) on multi-institutional MRI data and compared its performance to the well-established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal test set (0.909 ± 0.069 versus 0.916 ± 0.067, P < 0.001) and on the external test set (0.824 ± 0.144 versus 0.864 ± 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (i.e., worse) for nnUNet than for TraBS on the internal (0.657 ± 2.856 versus 0.548 ± 2.195, P = 0.001) and the external test set (0.727 ± 0.620 versus 0.584 ± 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolution-based models like nnUNet. These findings may help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
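The average symmetric surface distance used here alongside the Dice score can be computed with Euclidean distance transforms; the sketch below shows one standard formulation (our illustration, not the paper's code) and assumes non-empty masks and known voxel spacing.

```python
# One standard way to compute the average symmetric surface distance (ASSD).
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def assd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Mean distance from each mask's surface to the other mask's surface."""
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    # Distance of every voxel to the nearest surface voxel of the *other* mask.
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return float((da.sum() + db.sum()) / (da.size + db.size))
```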
Affiliation(s)
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Fritz Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Luisa Huck
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Vanessa Raaff
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Eva Kemmer
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Firas Khader
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Soroosh Tayebi Arasteh
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Teresa Lemainque
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University, Dresden, Germany
- Department of Medicine III, University Hospital RWTH, Aachen, Germany
- Sven Nebelung
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Christiane Kuhl
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany.
9
Kuang S, Woodruff HC, Granzier R, van Nijnatten TJA, Lobbes MBI, Smidt ML, Lambin P, Mehrkanoon S. MSCDA: Multi-level semantic-guided contrast improves unsupervised domain adaptation for breast MRI segmentation in small datasets. Neural Netw 2023; 165:119-134. [PMID: 37285729] [DOI: 10.1016/j.neunet.2023.05.014]
Abstract
Deep learning (DL) applied to breast tissue segmentation in magnetic resonance imaging (MRI) has received increased attention in the last decade; however, the domain shift arising from different vendors, acquisition protocols, and biological heterogeneity remains an important but challenging obstacle on the path towards clinical implementation. In this paper, we propose a novel Multi-level Semantic-guided Contrastive Domain Adaptation (MSCDA) framework to address this issue in an unsupervised manner. Our approach incorporates self-training with contrastive learning to align feature representations between domains. In particular, we extend the contrastive loss with pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid contrasts to better exploit the underlying semantic information of the image at different levels. To resolve the data imbalance problem, we use a category-wise cross-domain sampling strategy to sample anchors from target images and build a hybrid memory bank to store samples from source images. We validated MSCDA on the challenging task of cross-domain breast MRI segmentation between datasets of healthy volunteers and invasive breast cancer patients. Extensive experiments show that MSCDA effectively improves the model's feature alignment between domains, outperforming state-of-the-art methods. Furthermore, the framework is shown to be label-efficient, achieving good performance with a smaller source dataset. The code is publicly available at https://github.com/ShengKuangCN/MSCDA.
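To make the pixel-to-centroid contrast concrete, here is a minimal InfoNCE-style sketch of that single term; it is our simplified reading rather than the released MSCDA code (see the authors' repository above for the real implementation), and it assumes both classes are present in the batch.

```python
# Simplified pixel-to-centroid contrastive term (illustrative, not MSCDA itself).
import torch
import torch.nn.functional as F

def pixel_to_centroid_loss(feats: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    """feats: (N, C) pixel embeddings; labels: (N,) int64 in {0, 1}.
    Pulls each pixel toward its own class centroid and away from the other's."""
    feats = F.normalize(feats, dim=1)
    cents = torch.stack([feats[labels == k].mean(dim=0) for k in (0, 1)])
    cents = F.normalize(cents, dim=1)
    logits = feats @ cents.t() / tau  # (N, 2) cosine similarity / temperature
    return F.cross_entropy(logits, labels)
```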
Affiliation(s)
- Sheng Kuang
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Henry C Woodruff
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Renee Granzier
- Department of Surgery, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Thiemo J A van Nijnatten
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Marc B I Lobbes
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Medical Imaging, Zuyderland Medical Center, Sittard-Geleen, The Netherlands
- Marjolein L Smidt
- Department of Surgery, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Siamak Mehrkanoon
- Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands.
10
Yang KB, Lee J, Yang J. Multi-class semantic segmentation of breast tissues from MRI images using U-Net based on Haar wavelet pooling. Sci Rep 2023; 13:11704. [PMID: 37474633] [PMCID: PMC10359288] [DOI: 10.1038/s41598-023-38557-0]
Abstract
MRI scans used in breast cancer diagnosis are acquired in a lying position and are therefore unsuitable for reconstructing the natural breast shape in a standing position. Some studies have proposed methods to recover the standing breast shape using ordinary differential equations or the finite element method. However, it is difficult to obtain meaningful results because breast tissues have different elastic moduli. This study proposed a multi-class semantic segmentation method for breast tissues to support breast shape reconstruction, using a U-Net based on Haar wavelet pooling. First, a dataset was constructed by labeling the skin, fat, and fibro-glandular tissues and the background in MRI scans acquired in a lying position. Next, multi-class semantic segmentation was performed using the U-Net based on Haar wavelet pooling to improve segmentation accuracy for breast tissues. The network effectively extracted breast tissue features while reducing information loss in the subsampling stage by using multiple wavelet sub-bands, and it is robust to overfitting. The proposed network achieved an mIoU of 87.48 for segmenting breast tissues, demonstrating high-accuracy segmentation of tissues with different elastic moduli and thereby supporting reconstruction of the natural breast shape.
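Haar wavelet pooling replaces max or average pooling with a fixed Haar transform, so the high-frequency sub-bands are kept as extra channels instead of being discarded; below is a compact PyTorch sketch of such a layer (our illustration of the general technique, not the paper's exact implementation).

```python
# Haar wavelet pooling sketch: (B, C, H, W) -> (B, 4C, H/2, W/2); H and W even.
import torch

def haar_pool(x: torch.Tensor) -> torch.Tensor:
    a = x[:, :, 0::2, 0::2]  # top-left of each 2×2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # horizontal detail
    hl = (a - b + c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return torch.cat([ll, lh, hl, hh], dim=1)
```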
Affiliation(s)
- Kwang Bin Yang
- Division of Memory - Memory FAB Team 1, Samsung Electronics, 1 Samsungjeonja-ro, Hwaseong, Gyeonggi, 18448, Republic of Korea
- Jinwon Lee
- Department of Industrial and Management Engineering, Gangneung-Wonju National University, 150 Namwon-ro, Wonju, Gangwon, 26403, Republic of Korea
- Jeongsam Yang
- Department of Industrial Engineering, Ajou University, 206 Worldcup-ro, Suwon, Gyeonggi, 16499, Republic of Korea.
11
Tang W, Zhang M, Xu C, Shao Y, Tang J, Gong S, Dong H, Sheng M. Diagnostic efficiency of multi-modal MRI based deep learning with Sobel operator in differentiating benign and malignant breast mass lesions-a retrospective study. PeerJ Comput Sci 2023; 9:e1460. [PMID: 37547396] [PMCID: PMC10403185] [DOI: 10.7717/peerj-cs.1460]
Abstract
Purpose To compare the diagnostic efficiency of single-modal and multi-modal deep learning models for the classification of benign and malignant breast mass lesions. Methods We retrospectively collected data from 203 patients (207 lesions; 101 benign, 106 malignant) with breast tumors who underwent breast magnetic resonance imaging (MRI) before surgery or biopsy between January 2014 and October 2020. Mass segmentation was performed based on a three-dimensional region of interest (3D-ROI): the minimum bounding cube at the lesion margin. We established single-modal models based on a convolutional neural network (CNN) for T2WI, non-fs T1WI, and dynamic contrast-enhanced MRI (DCE-MRI), where the first phase was pre-contrast T1WI (d1) and phases 2, 4, and 6 were post-contrast T1WI (d2, d4, d6), as well as multi-modal fusion models with a Sobel operator (four modalities: T2WI, non-fs T1WI, d1, d2). The data were split into a training set (n = 145), a validation set (n = 22), and a test set (n = 40), and five-fold cross-validation was performed. Accuracy, sensitivity, specificity, negative predictive value, positive predictive value, and area under the ROC curve (AUC) were used as evaluation indicators. DeLong's test was used to compare the diagnostic performance of the multi-modal and single-modal models. Results All models showed good performance, with AUC values all greater than 0.750. Among the single-modal models, T2WI, non-fs-T1WI, d1, and d2 had specificities of 77.1%, 77.2%, 80.2%, and 78.2%, respectively; d2 had the highest accuracy (78.5%) and showed the best diagnostic performance, with an AUC of 0.827. The multi-modal model with the Sobel operator performed better than the single-modal models, with an AUC of 0.887, sensitivity of 79.8%, specificity of 86.1%, and positive predictive value of 85.6%. DeLong's test showed that the diagnostic performance of the multi-modal fusion model was higher than that of each of the six single-modal models (T2WI, non-fs-T1WI, d1, d2, d4, d6), and the differences were statistically significant (p = 0.043, 0.017, 0.006, 0.017, 0.020, and 0.004, respectively; all < 0.05). Conclusions Multi-modal fusion deep learning models with a Sobel operator had excellent diagnostic value for the classification of breast masses and may further increase diagnostic efficiency.
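The Sobel operator contributes an explicit edge map to the fusion input; the snippet below shows one plausible way to build such an extra channel with SciPy (an assumption on our part, since the paper does not publish its preprocessing code).

```python
# Illustrative Sobel edge channel for a 2-D MRI slice.
import numpy as np
from scipy import ndimage

def add_sobel_channel(img: np.ndarray) -> np.ndarray:
    """Stack a Sobel gradient-magnitude map onto a 2-D slice -> (2, H, W)."""
    gx = ndimage.sobel(img, axis=0, mode="reflect")
    gy = ndimage.sobel(img, axis=1, mode="reflect")
    edges = np.hypot(gx, gy)  # gradient magnitude
    return np.stack([img, edges], axis=0)
```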
Affiliation(s)
- Weixia Tang
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, Nantong, Jiangsu, China
- Ming Zhang
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, Nantong, Jiangsu, China
- Changyan Xu
- School of Transportation and Civil Engineering, Nantong University, Nantong, China
- Yeqin Shao
- School of Transportation and Civil Engineering, Nantong University, Nantong, China
- Jiahuan Tang
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, Nantong, Jiangsu, China
- Shenchu Gong
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, Nantong, Jiangsu, China
- Hao Dong
- Department of Research Collaboration, R&D Center, Beijing Deepwise & League of PHD Technology Co., Ltd., Beijing, China
- Meihong Sheng
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, Nantong, Jiangsu, China
12
Ham S, Kim M, Lee S, Wang CB, Ko B, Kim N. Improvement of semantic segmentation through transfer learning of multi-class regions with convolutional neural networks on supine and prone breast MRI images. Sci Rep 2023; 13:6877. [PMID: 37106024] [PMCID: PMC10140273] [DOI: 10.1038/s41598-023-33900-x]
Abstract
Semantic segmentation of the breast and surrounding tissues in supine and prone breast magnetic resonance imaging (MRI) is required for various computer-assisted diagnoses and surgical applications. Variability of breast shape between the supine and prone poses, along with various MRI artifacts, makes robust segmentation of the breast and surrounding tissue difficult. We therefore evaluated semantic segmentation with transfer learning of convolutional neural networks to achieve robust breast segmentation regardless of supine or prone positioning. In total, 29 patients with T1-weighted contrast-enhanced images were collected at Asan Medical Center, and breast MRI was performed in both the prone and the supine positions. Four classes (lungs and heart, muscles and bones, parenchyma with cancer, and skin and fat) were manually annotated by an expert. Semantic segmentation models were trained and compared on supine, prone, prone-to-supine transferred, and pooled supine-and-prone MRI using 2D U-Net, 3D U-Net, 2D nnU-Net, and 3D nnU-Net. The best performance was achieved by the 2D models with transfer learning. Our results showed excellent performance and could be used for clinical purposes such as breast registration and computer-aided diagnosis.
Affiliation(s)
- Sungwon Ham
- Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-ro, Danwon-gu, Ansan city, Gyeonggi-do, Republic of Korea
- Minjee Kim
- Promedius Inc., 4 Songpa-daero 49-gil, Songpa-gu, Seoul, South Korea
- Sangwook Lee
- ANYMEDI Inc., 388-1 Pungnap-dong, Songpa-gu, Seoul, South Korea
- Chuan-Bing Wang
- Department of Radiology, First Affiliated Hospital of Nanjing Medical University, 300, Guangzhou Road, Nanjing, Jiangsu, China
- BeomSeok Ko
- Department of Breast Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Namkug Kim
- Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea.
13
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377] [DOI: 10.1016/j.bbcan.2023.188864]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China.
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China.
14
Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy. J Imaging 2023; 9:59. [PMID: 36976110] [PMCID: PMC10058680] [DOI: 10.3390/jimaging9030059]
Abstract
This paper investigates the impact of the amount of training data and of shape variability on the segmentation provided by the deep learning architecture U-Net; the correctness of the ground truth (GT) was also evaluated. The input data consisted of a three-dimensional set of images of HeLa cells observed with an electron microscope with dimensions 8192×8192×517. From there, a smaller region of interest (ROI) of 2000×2000×300 was cropped and manually delineated to obtain the ground truth necessary for a quantitative evaluation. A qualitative evaluation was performed on the 8192×8192 slices due to the lack of ground truth. Pairs of patches of data and labels for the classes nucleus, nuclear envelope, cell and background were generated to train U-Net architectures from scratch. Several training strategies were followed, and the results were compared against a traditional image processing algorithm. The correctness of the GT, that is, the inclusion of one or more nuclei within the region of interest, was also evaluated. The impact of the extent of training data was evaluated by comparing results from 36,000 pairs of data and label patches extracted from the odd slices in the central region to 135,000 patches obtained from every other slice in the set. Then, 135,000 patches from several cells from the 8192×8192 slices were generated automatically using the image processing algorithm. Finally, the two sets of 135,000 pairs were combined to train once more with 270,000 pairs. As would be expected, the accuracy and Jaccard similarity index improved as the number of pairs increased for the ROI. This was also observed qualitatively for the 8192×8192 slices. When the 8192×8192 slices were segmented with U-Nets trained with 135,000 pairs, the architecture trained with automatically generated pairs provided better results than the architecture trained with the pairs from the manually segmented ground truths. This suggests that the pairs extracted automatically from many cells provided a better representation of the four classes in the various cells of the 8192×8192 slice than the pairs manually segmented from a single cell. Finally, the two sets of 135,000 pairs were combined, and the U-Net trained with these provided the best results.
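The Jaccard similarity index used above is related to the Dice coefficient by J = D / (2 - D); a minimal sketch of its computation for one class (our illustration, not the paper's code):

```python
import numpy as np

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B| for binary masks of one class."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0
```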
15
Alsharif WM. The utilization of artificial intelligence applications to improve breast cancer detection and prognosis. Saudi Med J 2023; 44:119-127. [PMID: 36773967] [PMCID: PMC9987701] [DOI: 10.15537/smj.2023.44.2.20220611]
Abstract
Breast imaging faces challenges from the current increase in medical imaging requests and from lesions that breast screening programs can miss. Solutions to these challenges are being sought through the recent advancement and adoption of artificial intelligence (AI)-based applications to enhance workflow efficiency and patient healthcare outcomes. In most published studies, AI tools have been proposed and used to analyze different modes of breast imaging, mainly for the detection and classification of breast lesions, breast lesion segmentation, breast density evaluation, and breast cancer risk assessment. This article reviews the background of conventional computer-aided detection systems and AI, and AI-based applications in breast medical imaging for the identification, segmentation, and categorization of lesions, breast density evaluation, and cancer risk assessment. The challenges and limitations of AI-based applications in breast imaging are also discussed.
Affiliation(s)
- Walaa M. Alsharif
- From the Diagnostic Radiology Technology Department, College of Applied Medical Sciences, Taibah University, Al Madinah Al Munawwarah; and from the Society of Artificial Intelligence in Healthcare, Riyadh, Kingdom of Saudi Arabia.
- Address correspondence and reprint request to: Dr. Walaa M. Alsharif, Diagnostic Radiology Technology Department, College of Applied Medical Sciences, Taibah University, Al Madinah Al Munawwarah, Kingdom of Saudi Arabia. E-mail: ORCID ID: https://orcid.org/0000-0001-7607-3255
16
Liu TJ, Wang H, Christian M, Chang CW, Lai F, Tai HC. Automatic segmentation and measurement of pressure injuries using deep learning models and a LiDAR camera. Sci Rep 2023; 13:680. [PMID: 36639395] [PMCID: PMC9839689] [DOI: 10.1038/s41598-022-26812-9]
Abstract
Pressure injuries are a common problem resulting in poor prognosis, long-term hospitalization, and increased medical costs in an aging society. This study developed a method for automatic segmentation and area measurement of pressure injuries using deep learning models and a light detection and ranging (LiDAR) camera. We selected 528 high-quality photographs of patients with pressure injuries taken at National Taiwan University Hospital from 2016 to 2020. The margins of the pressure injuries were labeled by three board-certified plastic surgeons, and Mask R-CNN and U-Net segmentation models were trained on the labeled photos. After the segmentation models were constructed, we performed automatic wound area measurement using a LiDAR camera and conducted a prospective clinical study to test the accuracy of this system. For automatic wound segmentation, the performance of U-Net (Dice coefficient (DC): 0.8448) was better than that of Mask R-CNN (DC: 0.5006) in the external validation. In the prospective clinical study, we incorporated U-Net into our automatic wound area measurement system and obtained a mean relative error of 26.2% compared with the traditional manual method. Our segmentation model, U-Net, and the area measurement system achieved acceptable accuracy, making them applicable in clinical circumstances.
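Once the wound is segmented, the LiDAR depth data fixes the physical size of each pixel and the area follows from a pixel count; the sketch below illustrates that last step under the simplifying assumption of a single per-image scale (not the authors' pipeline).

```python
# Illustrative area computation; assumes one uniform LiDAR-derived pixel scale.
import numpy as np

def wound_area_cm2(mask: np.ndarray, mm_per_pixel: float) -> float:
    """Wound area (cm²) from a binary mask; 100 mm² = 1 cm²."""
    return mask.astype(bool).sum() * (mm_per_pixel ** 2) / 100.0
```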
Affiliation(s)
- Tom J. Liu
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Division of Plastic Surgery, Department of Surgery, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
- Hanwei Wang
- Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
- Mesakh Christian
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Che-Wei Chang
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Division of Plastic Reconstructive and Aesthetic Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei City, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Hao-Chih Tai
- National Taiwan University Hospital and College of Medicine, National Taiwan University, Taipei, Taiwan.
17
Applying Deep Learning for Breast Cancer Detection in Radiology. Curr Oncol 2022; 29:8767-8793. [PMID: 36421343] [PMCID: PMC9689782] [DOI: 10.3390/curroncol29110690]
Abstract
Recent advances in deep learning have enhanced medical imaging research. Breast cancer is the most prevalent cancer among women, and many applications have been developed to improve its early detection. The purpose of this review is to examine how various deep learning methods can be applied to breast cancer screening workflows. We summarize deep learning methods, data availability, and the different screening modalities for breast cancer, including mammography, thermography, ultrasound, and magnetic resonance imaging. We explore deep learning in diagnostic breast imaging and survey the relevant literature. In conclusion, we discuss the limitations and opportunities of integrating artificial intelligence into breast cancer clinical practice.
18
Ying J, Cattell R, Zhao T, Lei L, Jiang Z, Hussain SM, Gao Y, Chow HHS, Stopeck AT, Thompson PA, Huang C. Two fully automated data-driven 3D whole-breast segmentation strategies in MRI for MR-based breast density using image registration and U-Net with a focus on reproducibility. Vis Comput Ind Biomed Art 2022; 5:25. [PMID: 36219359] [PMCID: PMC9554077] [DOI: 10.1186/s42492-022-00121-4]
Abstract
The presence of higher breast density (BD) and its persistence over time are risk factors for breast cancer. A quantitatively accurate and highly reproducible BD measure that relies on precise and reproducible whole-breast segmentation is desirable. In this study, we aimed to develop a highly reproducible and accurate whole-breast segmentation algorithm for the generation of reproducible BD measures. Three datasets of volunteers from two clinical trials were included. Breast MR images were acquired on 3 T Siemens Biograph mMR, Prisma, and Skyra scanners using 3D Cartesian six-echo GRE sequences with a fat-water separation technique. Two whole-breast segmentation strategies, based on image registration and on a 3D U-Net, were developed, and manual segmentation was performed as a reference. A task-based analysis was carried out: a previously developed MR-based BD measure, MagDensity, was calculated and assessed using both automated and manual segmentation. The mean squared error (MSE) and intraclass correlation coefficient (ICC) of MagDensity were evaluated with the manual segmentation as reference. The test-retest reproducibility of MagDensity derived from the different breast segmentation methods was assessed using the difference between the test and retest measures (Δ2-1), the MSE, and the ICC. The results showed that MagDensity derived by the registration and deep learning segmentation methods exhibited high concordance with manual segmentation, with ICCs of 0.986 (95%CI: 0.974-0.993) and 0.983 (95%CI: 0.961-0.992), respectively. In the test-retest analysis, MagDensity derived using the registration algorithm achieved the smallest MSE of 0.370 and the highest ICC of 0.993 (95%CI: 0.982-0.997) when compared to the other segmentation methods. In conclusion, the proposed registration and deep learning whole-breast segmentation methods are accurate and reliable for estimating BD. Both methods outperformed a previously developed algorithm and manual segmentation in the test-retest assessment, with the registration method exhibiting superior performance for highly reproducible BD measurements.
Affiliation(s)
- Jia Ying
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
- Renee Cattell
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
- Department of Radiation Oncology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Tianyun Zhao
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
- Lan Lei
- Department of Medicine, Northside Hospital Gwinnett, Lawrenceville, GA, 30046, USA
- Program of Public Health, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Zhao Jiang
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Shahid M Hussain
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Yi Gao
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
- Alison T Stopeck
- Department of Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA
- Patricia A Thompson
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA
- Department of Medicine, Cedar Sinai Cancer, Cedars Sinai Medical Center, Los Angeles, CA, 90048, USA
- Chuan Huang
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA.
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA.
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA.
| |
Collapse
|
19
|
Breast MRI Segmentation and Ki-67 High- and Low-Expression Prediction Algorithm Based on Deep Learning. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:1770531. [PMID: 36238476 PMCID: PMC9553330 DOI: 10.1155/2022/1770531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 08/11/2022] [Accepted: 09/08/2022] [Indexed: 11/17/2022]
Abstract
Background and Objective. Breast cancer is a common malignant tumor that seriously threatens the health of women worldwide. The proliferation marker Ki-67 has been utilized to distinguish luminal B from luminal A tumors and is a reliable indicator of more aggressive breast cancer growth. A reliable, non-invasive method for predicting Ki-67 expression before pathological examination would help doctors formulate later treatment plans and provide more useful treatment options. Methodology. This paper proposes a tumor segmentation and prediction framework based on the combination of an improved attention U-Net and an SVM. The framework first improves on attention U-Net by introducing coefficients for learning multidimensional attention, making the attention mechanism more responsive to the salient regions during segmentation. The segmented breast MRI results and corresponding labels are then input into the SVM classifier to predict the expression of Ki-67. Results. The DSC, PPV, and sensitivity of our combined model are 0.94, 0.93, and 0.94, respectively, indicating good segmentation performance. Compared with the segmentation frameworks of other papers, our combined model segments breast tumors accurately. Conclusion. Our method can adapt to the variability of breast tumors and segment them accurately and efficiently. In the future, it could be widely used in clinical practice to help clinicians formulate reasonable diagnosis and treatment plans for breast cancer patients.
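The two-stage design described above (segmentation network feeding an SVM) can be illustrated with a hedged sketch: hand-crafted features are extracted from each segmented tumor region and passed to a scikit-learn SVM. Everything here - the feature set, data, and labels - is a hypothetical stand-in, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def mask_features(image, mask):
    """Simple hand-crafted features from a segmented tumor region."""
    region = image[mask > 0]
    return np.array([mask.sum(),                    # tumor area in pixels
                     region.mean(),                 # mean intensity
                     region.std(),                  # intensity heterogeneity
                     region.max() - region.min()])  # dynamic range

rng = np.random.default_rng(0)
# Hypothetical data: 40 images with segmentation masks and Ki-67 labels
images = rng.random((40, 64, 64))
masks = (rng.random((40, 64, 64)) > 0.7).astype(np.uint8)
labels = rng.integers(0, 2, size=40)  # 0 = Ki-67 low, 1 = Ki-67 high

X = np.stack([mask_features(im, m) for im, m in zip(images, masks)])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, labels)
print(clf.predict(X[:5]))
```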
Collapse
|
20
|
Deep Learning for the Automatic Segmentation of Extracranial Venous Malformations of the Head and Neck from MR Images Using 3D U-Net. J Clin Med 2022; 11:jcm11195593. [PMID: 36233460 PMCID: PMC9573069 DOI: 10.3390/jcm11195593] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2022] [Revised: 09/08/2022] [Accepted: 09/22/2022] [Indexed: 11/16/2022] Open
Abstract
Background: It is difficult to characterize extracranial venous malformations (VMs) of the head and neck region from magnetic resonance imaging (MRI) manually, one case at a time. We attempted to perform automatic segmentation of lesions from MRI of extracranial VMs using a convolutional neural network as a deep learning tool. Methods: T2-weighted MRI from 53 patients with extracranial VMs in the head and neck region was used for annotations. Preprocessing was performed before training, and a three-dimensional U-Net was used as the segmentation model. Dice similarity coefficients were evaluated along with other indicators. Results: The Dice similarity coefficient of the 3D U-Net was 99.75% on the training set but only 60.62% on the test set. The model showed overfitting, which may be mitigated with a larger number of training objects, i.e., VM MR images. Conclusions: Our pilot study showed sufficient potential for the automatic segmentation of extracranial VMs through deep learning using MR images from VM patients. The observed overfitting is expected to diminish as more MRI VM images become available.
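The Dice similarity coefficient driving these results can be computed directly from binary masks; a minimal sketch follows (variable names and toy masks are assumptions, not the study's data). The large train-test gap reported above corresponds to evaluating this metric on seen versus unseen cases.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(1)
truth = rng.random((32, 64, 64)) > 0.6   # hypothetical 3D ground-truth mask
pred = truth.copy()
pred[:, :8] = ~pred[:, :8]               # corrupt a slab to mimic test error
print(f"DSC: {dice_coefficient(pred, truth):.4f}")
```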
Collapse
|
21
|
Bermudez A, Gonzalez Z, Zhao B, Salter E, Liu X, Ma L, Jawed MK, Hsieh CJ, Lin NYC. Supracellular measurement of spatially varying mechanical heterogeneities in live monolayers. Biophys J 2022; 121:3358-3369. [PMID: 36028999 PMCID: PMC9515370 DOI: 10.1016/j.bpj.2022.08.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2022] [Revised: 07/10/2022] [Accepted: 08/19/2022] [Indexed: 11/29/2022] Open
Abstract
The mechanical properties of tissues have profound impacts on a wide range of biological processes such as embryo development (1,2), wound healing (3-6), and disease progression (7). Specifically, the spatially varying moduli of cells largely influence the local tissue deformation and intercellular interaction. Despite the importance of characterizing such a heterogeneous mechanical property, it has remained difficult to measure the supracellular modulus field in live cell layers with high throughput and minimal perturbation. In this work, we developed a monolayer effective modulus measurement by integrating a custom cell stretcher, light microscopy, and AI-based inference. Our approach first quantifies the heterogeneous deformation of a slightly stretched cell layer and then converts the measured strain fields into an effective modulus field using AI-based inference. This method allowed us to directly visualize the effective modulus distribution of thousands of cells virtually instantly. We characterized the mean value, SD, and correlation length of the effective cell modulus for epithelial cells and fibroblasts, all in agreement with previous results. We also observed a mild correlation between cell area and stiffness in jammed epithelia, suggesting the influence of cell modulus on packing. Overall, our reported experimental platform provides a valuable alternative cell mechanics measurement tool that can be integrated with microscopy-based characterizations.
Collapse
Affiliation(s)
- Alexandra Bermudez
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, California 90095, USA; Department of Bioengineering, University of California, Los Angeles, California.
| | - Zachary Gonzalez
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, California 90095, USA; Department of Physics and Astronomy, University of California, Los Angeles, California
| | - Bao Zhao
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, California 90095, USA
| | - Ethan Salter
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, California 90095, USA; Department of Bioengineering, University of California, Los Angeles, California
| | - Xuanqing Liu
- Department of Computer Science, University of California, Los Angeles, California
| | - Leixin Ma
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, California 90095, USA
| | - Mohammad Khalid Jawed
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, California 90095, USA
| | - Cho-Jui Hsieh
- Department of Computer Science, University of California, Los Angeles, California
| | - Neil Y C Lin
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, California 90095, USA; Department of Bioengineering, University of California, Los Angeles, California; Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, Los Angeles, California.
| |
Collapse
|
22
|
Samperna R, Moriakov N, Karssemeijer N, Teuwen J, Mann RM. Exploiting the Dixon Method for a Robust Breast and Fibro-Glandular Tissue Segmentation in Breast MRI. Diagnostics (Basel) 2022; 12:diagnostics12071690. [PMID: 35885594 PMCID: PMC9324146 DOI: 10.3390/diagnostics12071690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 07/07/2022] [Accepted: 07/09/2022] [Indexed: 11/26/2022] Open
Abstract
Automatic breast and fibro-glandular tissue (FGT) segmentation in breast MRI allows for the efficient and accurate calculation of breast density. The U-Net architecture, either 2D or 3D, has already been shown to be effective at addressing the segmentation problem in breast MRI. However, the lack of publicly available datasets for this task has forced several authors to rely on internal datasets composed of either acquisitions without fat suppression (WOFS) or with fat suppression (FS), limiting the generalization of the approach. To solve this problem, we propose a data-centric approach, efficiently using the data available. By collecting a dataset of T1-weighted breast MRI acquisitions acquired with the use of the Dixon method, we train a network on both T1 WOFS and FS acquisitions while utilizing the same ground truth segmentation. Using the “plug-and-play” framework nnUNet, we achieve, on our internal test set, a Dice Similarity Coefficient (DSC) of 0.96 and 0.91 for WOFS breast and FGT segmentation and 0.95 and 0.86 for FS breast and FGT segmentation, respectively. On an external, publicly available dataset, a panel of breast radiologists rated the quality of our automatic segmentation with an average of 3.73 on a four-point scale, with an average percentage agreement of 67.5%.
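A hedged sketch of the data-centric idea described above: because the Dixon reconstruction yields spatially aligned WOFS and FS images from one acquisition, both can share a single ground-truth mask in the training set. The PyTorch dataset below is an illustrative assumption, not the authors' nnUNet configuration.

```python
import torch
from torch.utils.data import Dataset

class DixonBreastDataset(Dataset):
    """Pairs WOFS and FS reconstructions of the same Dixon acquisition
    with a single shared ground-truth segmentation, doubling the
    effective training data without extra annotation. Hypothetical
    tensors stand in for real MRI volumes."""

    def __init__(self, wofs_volumes, fs_volumes, masks):
        # Each WOFS/FS pair shares one mask because both are derived
        # from the same Dixon scan and are spatially aligned.
        self.images = wofs_volumes + fs_volumes
        self.masks = masks + masks

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.masks[idx]

# Toy example: two subjects, 1-channel 16x64x64 volumes
wofs = [torch.rand(1, 16, 64, 64) for _ in range(2)]
fs = [torch.rand(1, 16, 64, 64) for _ in range(2)]
masks = [(torch.rand(1, 16, 64, 64) > 0.5).float() for _ in range(2)]
ds = DixonBreastDataset(wofs, fs, masks)
print(len(ds), ds[0][0].shape)
```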
Collapse
Affiliation(s)
- Riccardo Samperna
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
- Correspondence:
| | - Nikita Moriakov
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiation Oncology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
| | - Nico Karssemeijer
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- ScreenPoint Medical BV, 6525 EC Nijmegen, The Netherlands
| | - Jonas Teuwen
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiation Oncology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
| | - Ritse M. Mann
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
| |
Collapse
|
23
|
Alqaoud M, Plemmons J, Feliberti E, Dong S, Kaipa K, Fichtinger G, Xiao Y, Audette MA. nnUNet-based Multi-modality Breast MRI Segmentation and Tissue-Delineating Phantom for Robotic Tumor Surgery Planning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:3495-3501. [PMID: 36086096 DOI: 10.1109/embc48229.2022.9871109] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Segmentation of the thoracic region and breast tissues is crucial for analyzing and diagnosing the presence of breast masses. This paper introduces a medical image segmentation architecture that aggregates two neural networks based on the state-of-the-art nnU-Net. Additionally, this study proposes a polyvinyl alcohol cryogel (PVA-C) breast phantom, based on its automated segmentation approach, to enable planning and navigation experiments for robotic breast surgery. The dataset consists of multimodality breast MRI of T2W and STIR images obtained from 10 patients. A statistical analysis of the segmentation tasks emphasizes the Dice Similarity Coefficient (DSC), segmentation accuracy, sensitivity, and specificity. We first use single-class labeling to segment the breast region and then exploit it as an input for three-class labeling to segment fatty, fibroglandular (FGT), and tumorous tissues. The first network achieves a DSC of 0.95, while the second achieves DSCs of 0.95, 0.83, and 0.41 for the fat, FGT, and tumor classes, respectively. Clinical Relevance: This research is relevant to the breast surgery community as it establishes a deep learning-based (DL) algorithmic and phantom foundation for surgical planning and navigation that will exploit preoperative multimodal MRI and intraoperative ultrasound to achieve highly cosmetic breast surgery. In addition, the planning and navigation will guide a robot that can cut, resect, bag, and grasp a tissue mass that encapsulates breast tumors and positive tissue margins. This image-guided robotic approach promises to potentiate the accuracy of breast surgeons and improve patient outcomes.
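The cascaded labeling strategy (single-class breast mask feeding a three-class tissue network) can be sketched as follows; the stand-in networks and the 0.5 threshold are assumptions for illustration, not the paper's trained nnU-Net models.

```python
import numpy as np

def cascade_inference(volume, breast_net, tissue_net):
    """Two-stage cascade: stage 1 segments the breast region; stage 2
    labels fat, FGT and tumor only inside that region. breast_net and
    tissue_net are assumed callables returning per-voxel probabilities."""
    breast_prob = breast_net(volume)          # (D, H, W)
    breast_mask = breast_prob > 0.5
    masked = volume * breast_mask             # restrict stage-2 input
    tissue_prob = tissue_net(masked)          # (3, D, H, W)
    labels = tissue_prob.argmax(axis=0) + 1   # 1=fat, 2=FGT, 3=tumor
    labels[~breast_mask] = 0                  # background outside breast
    return labels

# Stand-in "networks" so the sketch runs end to end
rng = np.random.default_rng(2)
vol = rng.random((8, 32, 32))
breast_net = lambda v: (v > 0.3).astype(float)
tissue_net = lambda v: rng.random((3, *v.shape))
print(np.unique(cascade_inference(vol, breast_net, tissue_net)))
```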
Collapse
|
24
|
Bhowmik A, Eskreis-Winkler S. Deep learning in breast imaging. BJR Open 2022; 4:20210060. [PMID: 36105427 PMCID: PMC9459862 DOI: 10.1259/bjro.20210060] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 04/04/2022] [Accepted: 04/21/2022] [Indexed: 11/22/2022] Open
Abstract
Millions of breast imaging exams are performed each year in an effort to reduce the morbidity and mortality of breast cancer. Breast imaging exams are performed for cancer screening, diagnostic work-up of suspicious findings, evaluating extent of disease in recently diagnosed breast cancer patients, and determining treatment response. Yet, the interpretation of breast imaging can be subjective, tedious, time-consuming, and prone to human error. Retrospective and small reader studies suggest that deep learning (DL) has great potential to perform medical imaging tasks at or above human-level performance, and may be used to automate aspects of the breast cancer screening process, improve cancer detection rates, decrease unnecessary callbacks and biopsies, optimize patient risk assessment, and open up new possibilities for disease prognostication. Prospective trials are urgently needed to validate these proposed tools, paving the way for real-world clinical use. New regulatory frameworks must also be developed to address the unique ethical, medicolegal, and quality control issues that DL algorithms present. In this article, we review the basics of DL, describe recent DL breast imaging applications including cancer detection and risk prediction, and discuss the challenges and future directions of artificial intelligence-based systems in the field of breast cancer.
Collapse
Affiliation(s)
- Arka Bhowmik
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| | - Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| |
Collapse
|
25
|
Bae J, Huang Z, Knoll F, Geras K, Sood TP, Feng L, Heacock L, Moy L, Kim SG. Estimation of the capillary level input function for dynamic contrast-enhanced MRI of the breast using a deep learning approach. Magn Reson Med 2022; 87:2536-2550. [PMID: 35001423 PMCID: PMC8852816 DOI: 10.1002/mrm.29148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Revised: 12/09/2021] [Accepted: 12/16/2021] [Indexed: 11/06/2022]
Abstract
PURPOSE To develop a deep learning approach to estimate the local capillary-level input function (CIF) for pharmacokinetic model analysis of DCE-MRI. METHODS A deep convolutional network was trained with numerically simulated data to estimate the CIF. The trained network was tested using simulated lesion data and used to estimate the voxel-wise CIF for pharmacokinetic model analysis of breast DCE-MRI data acquired with an abbreviated protocol in women with malignant (n = 25) and benign (n = 28) lesions. The estimated parameters were used to build a logistic regression model to detect malignancy. RESULTS All pharmacokinetic parameters estimated using the network-predicted CIF from our breast DCE data showed significant differences between the malignant and benign groups. Testing the diagnostic performance with the estimated parameters, the conventional approach with the arterial input function (AIF) showed an area under the curve (AUC) between 0.76 and 0.87, and the proposed approach with the CIF demonstrated similar performance with an AUC between 0.79 and 0.81. CONCLUSION This study shows the feasibility of estimating the voxel-wise CIF using a deep neural network. The proposed approach could eliminate the need to measure the AIF manually without compromising the diagnostic performance for detecting malignancy in the clinical setting.
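As an illustration of the final step above, a logistic regression over estimated pharmacokinetic parameters with AUC as the readout; the parameter distributions below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
# Hypothetical pharmacokinetic parameters (e.g., Ktrans, ve, vp)
# for 25 malignant and 28 benign lesions, matching the study's group sizes
n_mal, n_ben = 25, 28
X = np.vstack([rng.normal(0.4, 0.15, (n_mal, 3)),   # malignant: higher values
               rng.normal(0.2, 0.15, (n_ben, 3))])
y = np.r_[np.ones(n_mal), np.zeros(n_ben)]

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
print(f"AUC: {roc_auc_score(y, scores):.2f}")  # in-sample, illustration only
```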
Collapse
Affiliation(s)
- Jonghyun Bae
- Vilcek Institute of Graduate Biomedical Science, New York University School of Medicine
- Center for Biomedical Imaging, Radiology, New York University School of Medicine
- Center for Advanced Imaging Innovation and Research, Radiology, New York University School of Medicine
- Department of Radiology, Weill Cornell Medical College
| | - Zhengnan Huang
- Vilcek Institute of Graduate Biomedical Science, New York University School of Medicine
- Center for Biomedical Imaging, Radiology, New York University School of Medicine
- Center for Advanced Imaging Innovation and Research, Radiology, New York University School of Medicine
| | - Florian Knoll
- Center for Biomedical Imaging, Radiology, New York University School of Medicine
- Center for Advanced Imaging Innovation and Research, Radiology, New York University School of Medicine
| | - Krzysztof Geras
- Center for Biomedical Imaging, Radiology, New York University School of Medicine
- Center for Advanced Imaging Innovation and Research, Radiology, New York University School of Medicine
- Center for Data Science, New York University
| | - Terlika Pandit Sood
- Center for Biomedical Imaging, Radiology, New York University School of Medicine
- Center for Advanced Imaging Innovation and Research, Radiology, New York University School of Medicine
| | - Li Feng
- Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai
| | - Laura Heacock
- Center for Biomedical Imaging, Radiology, New York University School of Medicine
- Center for Advanced Imaging Innovation and Research, Radiology, New York University School of Medicine
| | - Linda Moy
- Vilcek Institute of Graduate Biomedical Science, New York University School of Medicine
- Center for Biomedical Imaging, Radiology, New York University School of Medicine
- Center for Advanced Imaging Innovation and Research, Radiology, New York University School of Medicine
| | | |
Collapse
|
26
|
Punn NS, Agarwal S. Modality specific U-Net variants for biomedical image segmentation: a survey. Artif Intell Rev 2022; 55:5845-5889. [PMID: 35250146 PMCID: PMC8886195 DOI: 10.1007/s10462-022-10152-1] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/09/2022] [Indexed: 02/06/2023]
Abstract
With the advent of advancements in deep learning approaches, such as deep convolutional neural networks, residual neural networks, and adversarial networks, U-Net architectures are the most widely utilized in biomedical image segmentation to automate the identification and detection of target regions or sub-regions. In recent studies, U-Net based approaches have illustrated state-of-the-art performance in different applications for the development of computer-aided diagnosis systems for the early diagnosis and treatment of diseases such as brain tumors, lung cancer, Alzheimer's disease, breast cancer, etc., using various modalities. This article presents the success of these approaches by describing the U-Net framework, followed by a comprehensive analysis of U-Net variants through (1) inter-modality and (2) intra-modality categorization, to establish better insights into the associated challenges and solutions. Besides, this article also highlights the contribution of U-Net based frameworks in the ongoing COVID-19 pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Finally, the strengths and similarities of these U-Net variants are analysed along with the challenges involved in biomedical image segmentation to uncover promising future research directions in this area.
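For readers new to the framework this survey analyzes, a minimal two-level U-Net in PyTorch shows the encoder-bottleneck-decoder pattern with a skip connection that the surveyed variants extend; the layer widths and depth here are illustrative choices, not any particular published variant.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    """Two 3x3 conv + ReLU layers: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """A two-level U-Net: contracting path, bottleneck, expanding path
    with a skip connection."""
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc = double_conv(in_ch, base)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(base, base * 2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = double_conv(base * 2, base)   # base*2: skip concat
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))  # skip connection
        return self.head(d)

x = torch.rand(1, 1, 64, 64)
print(TinyUNet()(x).shape)   # torch.Size([1, 2, 64, 64])
```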
Collapse
|
27
|
Ye G, He S, Pan R, Zhu L, Zhou D, Lu R. Research on DCE-MRI Images Based on Deep Transfer Learning in Breast Cancer Adjuvant Curative Effect Prediction. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:4477099. [PMID: 35251566 PMCID: PMC8890845 DOI: 10.1155/2022/4477099] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 01/21/2022] [Accepted: 01/27/2022] [Indexed: 11/17/2022]
Abstract
Breast cancer is a serious threat to women's physical and mental health. In recent years, its incidence has been on the rise, and it has become the most common female malignant tumor in China. Adjuvant chemotherapy has become a standard mode of breast cancer treatment, but the pathological response usually cannot be assessed until after the chemotherapy has been administered, while optimizing the treatment plan and deciding on breast-conserving therapy both depend on an accurate estimate of that response. Predicting the efficacy of adjuvant chemotherapy for breast cancer patients therefore requires a predictive method that supports the individualized choice of chemotherapy regimens. This article presents research on DCE-MRI images based on deep transfer learning for predicting the efficacy of adjuvant therapy for breast cancer. Deep transfer learning algorithms are used to process the images, features reflecting the response to adjuvant chemotherapy are then extracted, and predictions are made from them; the results show that the prediction accuracy reaches 70%.
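A hedged sketch of the transfer learning step described above: an ImageNet-pretrained backbone is frozen and its classifier head is replaced for a binary response-prediction task. This is a generic illustration (torchvision >= 0.13 API, weights downloaded at first use), not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone, freeze its transferred layers,
# and replace the classifier head for a response/no-response task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # freeze transferred features
model.fc = nn.Linear(model.fc.in_features, 2)      # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
x = torch.rand(4, 3, 224, 224)                     # hypothetical DCE-MRI slices
logits = model(x)
print(logits.shape)                                # torch.Size([4, 2])
```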
Collapse
Affiliation(s)
- Guolin Ye
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
| | - Suqun He
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
| | - Ruilin Pan
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
| | - Lewei Zhu
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
| | - Dan Zhou
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
| | - RuiLiang Lu
- MRI Room, The First People's Hospital of Foshan, Foshan 528000, China
| |
Collapse
|
28
|
Chang CW, Lai F, Christian M, Chen YC, Hsu C, Chen YS, Chang DH, Roan TL, Yu YC. Deep Learning-Assisted Burn Wound Diagnosis: Diagnostic Model Development Study. JMIR Med Inform 2021; 9:e22798. [PMID: 34860674 PMCID: PMC8686480 DOI: 10.2196/22798] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Revised: 12/19/2020] [Accepted: 10/15/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Accurate assessment of the percentage total body surface area (%TBSA) of burn wounds is crucial in the management of burn patients. The resuscitation fluid and nutritional needs of burn patients, their need for intensive unit care, and probability of mortality are all directly related to %TBSA. It is difficult to estimate a burn area of irregular shape by inspection. Many articles have reported discrepancies in estimating %TBSA by different doctors. OBJECTIVE We propose a method, based on deep learning, for burn wound detection, segmentation, and calculation of %TBSA on a pixel-to-pixel basis. METHODS A 2-step procedure was used to convert burn wound diagnosis into %TBSA. In the first step, images of burn wounds were collected from medical records and labeled by burn surgeons, and the data set was then input into 2 deep learning architectures, U-Net and Mask R-CNN, each configured with 2 different backbones, to segment the burn wounds. In the second step, we collected and labeled images of hands to create another data set, which was also input into U-Net and Mask R-CNN to segment the hands. The %TBSA of burn wounds was then calculated by comparing the pixels of mask areas on images of the burn wound and hand of the same patient according to the rule of hand, which states that one's hand accounts for 0.8% of TBSA. RESULTS A total of 2591 images of burn wounds were collected and labeled to form the burn wound data set. The data set was randomly split into training, validation, and testing sets in a ratio of 8:1:1. Four hundred images of volar hands were collected and labeled to form the hand data set, which was also split into 3 sets using the same method. For the images of burn wounds, Mask R-CNN with ResNet101 had the best segmentation result with a Dice coefficient (DC) of 0.9496, while U-Net with ResNet101 had a DC of 0.8545. For the hand images, U-Net and Mask R-CNN had similar performance with DC values of 0.9920 and 0.9910, respectively. Lastly, we conducted a test diagnosis in a burn patient. Mask R-CNN with ResNet101 had on average less deviation (0.115% TBSA) from the ground truth than burn surgeons. CONCLUSIONS This is one of the first studies to diagnose all depths of burn wounds and convert the segmentation results into %TBSA using different deep learning models. We aimed to assist medical staff in estimating burn size more accurately, thereby helping to provide precise care to burn victims.
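The conversion from masks to %TBSA follows directly from the rule of hand as stated above: scale the burn-to-hand pixel ratio by 0.8%. A minimal sketch, assuming both masks come from photographs at a comparable scale; the function and data are illustrative, not the study's code.

```python
import numpy as np

def percent_tbsa(burn_mask, hand_mask, hand_fraction=0.8):
    """%TBSA via the rule of hand: the hand is taken as 0.8% of total
    body surface area, so the burn area scales by the pixel ratio of
    the two masks. Assumes both photos share a comparable camera
    distance so that pixel size is consistent."""
    return hand_fraction * burn_mask.sum() / hand_mask.sum()

rng = np.random.default_rng(4)
burn = rng.random((512, 512)) > 0.9    # hypothetical burn segmentation
hand = rng.random((512, 512)) > 0.95   # hypothetical hand segmentation
print(f"estimated burn size: {percent_tbsa(burn, hand):.2f} %TBSA")
```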
Collapse
Affiliation(s)
- Che Wei Chang
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan.,Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
| | - Feipei Lai
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
| | - Mesakh Christian
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Yu Chun Chen
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
| | - Ching Hsu
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
| | - Yo Shen Chen
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
| | - Dun Hao Chang
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan.,Department of Information Management, Yuan Ze University, Chung-Li, Taiwan
| | - Tyng Luen Roan
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
| | - Yen Che Yu
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
| |
Collapse
|
29
|
Yu X, Zhou Q, Wang S, Zhang Y. A systematic survey of deep learning in breast cancer. INT J INTELL SYST 2021. [DOI: 10.1002/int.22622] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Affiliation(s)
- Xiang Yu
- School of Computing and Mathematical Sciences University of Leicester Leicester, Leicestershire UK
| | - Qinghua Zhou
- School of Computing and Mathematical Sciences University of Leicester Leicester, Leicestershire UK
| | - Shuihua Wang
- School of Computing and Mathematical Sciences University of Leicester Leicester, Leicestershire UK
| | - Yu‐Dong Zhang
- School of Computing and Mathematical Sciences University of Leicester Leicester, Leicestershire UK
| |
Collapse
|
30
|
Kurata Y, Nishio M, Moribata Y, Kido A, Himoto Y, Otani S, Fujimoto K, Yakami M, Minamiguchi S, Mandai M, Nakamoto Y. Automatic segmentation of uterine endometrial cancer on multi-sequence MRI using a convolutional neural network. Sci Rep 2021; 11:14440. [PMID: 34262088 PMCID: PMC8280152 DOI: 10.1038/s41598-021-93792-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Accepted: 06/29/2021] [Indexed: 12/29/2022] Open
Abstract
Endometrial cancer (EC) is the most common gynecological tumor in developed countries, and preoperative risk stratification is essential for personalized medicine. There have been several radiomics studies for noninvasive risk stratification of EC using MRI. Although tumor segmentation is usually necessary for these studies, manual segmentation is not only labor-intensive but may also be subjective. Therefore, our study aimed to perform automatic segmentation of EC on MRI with a convolutional neural network. The effect of the input image sequence and batch size on the segmentation performance was also investigated. Of 200 patients with EC, 180 were used for training the modified U-net model and 20 for testing the segmentation performance and the robustness of automatically extracted radiomics features. Using multi-sequence images and a larger batch size was effective for improving segmentation accuracy. The mean Dice similarity coefficient, sensitivity, and positive predictive value of our model for the test set were 0.806, 0.816, and 0.834, respectively. The robustness of automatically extracted first-order and shape-based features was high (median ICC = 0.86 and 0.96, respectively), and other higher-order features presented moderate-to-high robustness (median ICC = 0.57-0.93). Our model could automatically segment EC on MRI and extract radiomics features with high reliability.
Collapse
Affiliation(s)
- Yasuhisa Kurata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| | - Mizuho Nishio
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan.
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe, 650-0017, Japan.
| | - Yusaku Moribata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| | - Aki Kido
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| | - Yuki Himoto
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| | - Satoshi Otani
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| | - Koji Fujimoto
- Department of Real World Data Research and Development, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| | - Masahiro Yakami
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| | - Sachiko Minamiguchi
- Department of Diagnostic Pathology, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| | - Masaki Mandai
- Department of Gynecology and Obstetrics, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| | - Yuji Nakamoto
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
| |
Collapse
|
31
|
Development of U-Net Breast Density Segmentation Method for Fat-Sat MR Images Using Transfer Learning Based on Non-Fat-Sat Model. J Digit Imaging 2021; 34:877-887. [PMID: 34244879 PMCID: PMC8455741 DOI: 10.1007/s10278-021-00472-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Revised: 05/27/2021] [Accepted: 06/09/2021] [Indexed: 12/11/2022] Open
Abstract
To develop a U-net deep learning method for breast tissue segmentation on fat-sat T1-weighted (T1W) MRI using transfer learning (TL) from a model developed for non-fat-sat images. The training dataset (N = 126) was imaged on a 1.5 T MR scanner, and the independent testing dataset (N = 40) was imaged on a 3 T scanner, both using fat-sat T1W pulse sequence. Pre-contrast images acquired in the dynamic-contrast-enhanced (DCE) MRI sequence were used for analysis. All patients had unilateral cancer, and the segmentation was performed using the contralateral normal breast. The ground truth of breast and fibroglandular tissue (FGT) segmentation was generated using a template-based segmentation method with a clustering algorithm. The deep learning segmentation was performed using U-net models trained with and without TL, by using initial values of trainable parameters taken from the previous model for non-fat-sat images. The ground truth of each case was used to evaluate the segmentation performance of the U-net models by calculating the dice similarity coefficient (DSC) and the overall accuracy based on all pixels. Pearson’s correlation was used to evaluate the correlation of breast volume and FGT volume between the U-net prediction output and the ground truth. In the training dataset, the evaluation was performed using tenfold cross-validation, and the mean DSC with and without TL was 0.97 vs. 0.95 for breast and 0.86 vs. 0.80 for FGT. When the final model developed with and without TL from the training dataset was applied to the testing dataset, the mean DSC was 0.89 vs. 0.83 for breast and 0.81 vs. 0.81 for FGT, respectively. Application of TL not only improved the DSC, but also decreased the required training case number. Lastly, there was a high correlation (R2 > 0.90) for both the training and testing datasets between the U-net prediction output and ground truth for breast volume and FGT volume. U-net can be applied to perform breast tissue segmentation on fat-sat images, and TL is an efficient strategy to develop a specific model for each different dataset.
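The TL strategy above - initializing the fat-sat model with parameters from the non-fat-sat model - amounts to loading a saved state dict before fine-tuning. A generic PyTorch sketch with a stand-in network, not the paper's U-net:

```python
import torch
import torch.nn as nn

def make_segnet():
    """Stand-in segmentation network; a real U-Net slots in here."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 2, 1))

# 1) "Previous" model trained on non-fat-sat images
source = make_segnet()
torch.save(source.state_dict(), "nonfatsat_model.pt")

# 2) New fat-sat model initialized from those weights, then fine-tuned
target = make_segnet()
target.load_state_dict(torch.load("nonfatsat_model.pt"), strict=False)
optimizer = torch.optim.Adam(target.parameters(), lr=1e-4)  # fine-tune all
print(sum(p.numel() for p in target.parameters()), "parameters transferred")
```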
Collapse
|
32
|
Wang H, Cao J, Feng J, Xie Y, Yang D, Chen B. Mixed 2D and 3D convolutional network with multi-scale context for lesion segmentation in breast DCE-MRI. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102607] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
33
|
Huo L, Hu X, Xiao Q, Gu Y, Chu X, Jiang L. Segmentation of whole breast and fibroglandular tissue using nnU-Net in dynamic contrast enhanced MR images. Magn Reson Imaging 2021; 82:31-41. [PMID: 34147598 DOI: 10.1016/j.mri.2021.06.017] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 05/14/2021] [Accepted: 06/15/2021] [Indexed: 10/21/2022]
Abstract
PURPOSE Segmentation of the whole breast and fibroglandular tissue (FGT) is important for quantitatively analyzing breast cancer risk in dynamic contrast-enhanced magnetic resonance (DCE-MR) images. The purpose of this study is to improve the accuracy and efficiency of whole breast and FGT segmentation in 3-D fat-suppressed DCE-MR images with a versatile deep learning (DL) framework. METHODS We randomly collected 100 breast DCE-MR scans from Shanghai Cancer Hospital of Fudan University. The MR scans in the dataset differed in both spatial resolution and the MR scanner employed. Furthermore, four breast density categories were assessed by radiologists based on the American College of Radiology Breast Imaging Reporting and Data System (BI-RADS). The dataset was separated into training and testing sets while keeping a balanced distribution of scans with different imaging parameters and density categories. nnU-Net has recently been proposed to automatically adapt preprocessing strategies and network architectures to a given medical image dataset, showing great potential for the systematic adaptation of DL methods to different datasets. In this study, we applied nnU-Net to segment the whole breast and FGT in 3-D fat-suppressed DCE-MR images. Five-fold cross validation was employed to train and validate the segmentation method. RESULTS The segmentation performance was evaluated with volume and surface agreement metrics between the DL-based automatic and the manually delineated masks, quantified with the following measures: the average Dice volume overlap (0.968 ± 0.017 and 0.877 ± 0.081), the average surface distance (0.201 ± 0.080 mm and 0.310 ± 0.043 mm), and the Pearson correlation coefficient of the masks (0.995 and 0.972), as calculated for the whole breast and the FGT segmentation, respectively. The correlation coefficient between the breast densities obtained with the DL-based segmentation and the manual delineation was 0.981, with a positive bias of 0.8% (DL-based relative to manual) on the Bland-Altman plot. The execution time of the DL-based segmentation was approximately 20 s for the whole breast and 15 s for the FGT. CONCLUSIONS Our DL-based segmentation framework using nnU-Net robustly achieved high accuracy and efficiency across variable MR imaging settings without extra pre- or post-processing procedures. It would be useful for developing DCE-MR-based CAD systems to quantify breast cancer risk and could be integrated into the clinical workflow.
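The Bland-Altman bias reported above is computed from paired automatic and manual density values; a short sketch with hypothetical numbers (not the study's data):

```python
import numpy as np

def bland_altman(auto, manual):
    """Bias and 95% limits of agreement between automatic and manual
    breast-density measurements (Bland-Altman analysis)."""
    auto, manual = np.asarray(auto, float), np.asarray(manual, float)
    diff = auto - manual
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical paired density values (%) for ten scans
manual = np.array([12.1, 25.4, 33.0, 8.7, 41.2, 19.9, 27.3, 15.5, 36.8, 22.0])
auto = manual + np.random.default_rng(5).normal(0.8, 1.0, manual.size)
bias, lo, hi = bland_altman(auto, manual)
print(f"bias {bias:+.2f}%  LoA [{lo:+.2f}%, {hi:+.2f}%]")
```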
Collapse
Affiliation(s)
- Lu Huo
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; University of Chinese Academy of Sciences, No.19 Yuquan Road, Beijing 100049, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
| | - Xiaoxin Hu
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
| | - Qin Xiao
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
| | - Yajia Gu
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
| | - Xu Chu
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
| | - Luan Jiang
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China.
| |
Collapse
|
34
|
Eskreis-Winkler S, Onishi N, Pinker K, Reiner JS, Kaplan J, Morris EA, Sutton EJ. Using Deep Learning to Improve Nonsystematic Viewing of Breast Cancer on MRI. JOURNAL OF BREAST IMAGING 2021; 3:201-207. [PMID: 38424820 DOI: 10.1093/jbi/wbaa102] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2020] [Indexed: 03/02/2024]
Abstract
OBJECTIVE To investigate the feasibility of using deep learning to identify tumor-containing axial slices on breast MRI images. METHODS This IRB-approved retrospective study included consecutive patients with operable invasive breast cancer undergoing pretreatment breast MRI between January 1, 2014, and December 31, 2017. Axial tumor-containing slices from the first postcontrast phase were extracted. Each axial image was subdivided into two subimages: one of the ipsilateral cancer-containing breast and one of the contralateral healthy breast. Cases were randomly divided into training, validation, and testing sets. A convolutional neural network was trained to classify subimages into "cancer" and "no cancer" categories. Accuracy, sensitivity, and specificity of the classification system were determined using pathology as the reference standard. A two-reader study was performed to measure the time savings of the deep learning algorithm using descriptive statistics. RESULTS Two hundred and seventy-three patients with unilateral breast cancer met study criteria. On the held-out test set, accuracy of the deep learning system for tumor detection was 92.8% (648/706; 95% confidence interval: 89.7%-93.8%). Sensitivity and specificity were 89.5% and 94.3%, respectively. Readers spent 3 to 45 seconds to scroll to the tumor-containing slices without use of the deep learning algorithm. CONCLUSION In breast MR exams containing breast cancer, deep learning can be used to identify the tumor-containing slices. This technology may be integrated into the picture archiving and communication system to bypass scrolling when viewing stacked images, which can be helpful during nonsystematic image viewing, such as during interdisciplinary tumor board meetings.
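Accuracy, sensitivity, and specificity against a pathology reference reduce to a confusion-matrix computation; a sketch with simulated labels (not the study's data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical binary slice labels: pathology reference vs. CNN output,
# with roughly 93% agreement to mimic the reported operating point.
rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, 706)
y_pred = np.where(rng.random(706) < 0.93, y_true, 1 - y_true)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy    {(tp + tn) / (tp + tn + fp + fn):.3f}")
print(f"sensitivity {tp / (tp + fn):.3f}")
print(f"specificity {tn / (tn + fp):.3f}")
```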
Collapse
Affiliation(s)
| | - Natsuko Onishi
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
- University of California, Department of Radiology, San Francisco, CA
| | - Katja Pinker
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Jeffrey S Reiner
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Jennifer Kaplan
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Elizabeth A Morris
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Elizabeth J Sutton
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| |
Collapse
|
35
|
Hesse LS, Kuling G, Veta M, Martel AL. Intensity Augmentation to Improve Generalizability of Breast Segmentation Across Different MRI Scan Protocols. IEEE Trans Biomed Eng 2021; 68:759-770. [DOI: 10.1109/tbme.2020.3016602] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
36
|
Zhang Z, Gao S, Huang Z. An Automatic Glioma Segmentation System Using a Multilevel Attention Pyramid Scene Parsing Network. Curr Med Imaging 2020; 17:751-761. [PMID: 33390119 DOI: 10.2174/1573405616666201231100623] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Revised: 09/15/2020] [Accepted: 10/15/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND Due to the significant variance in their shape and size, gliomas are challenging to segment automatically. To improve the performance of glioma segmentation tasks, this paper proposes a multilevel attention pyramid scene parsing network (MLAPSPNet) that aggregates multiscale context and multilevel features. METHODS First, the T1 pre-contrast, T2-weighted fluid-attenuated inversion recovery (FLAIR) and T1 post-contrast sequences of each slice are combined to form the input. Afterwards, image normalization and augmentation techniques are applied to accelerate the training process and avoid overfitting, respectively. Furthermore, the proposed MLAPSPNet, which introduces multilevel pyramid pooling modules (PPMs) and attention gates, is constructed. Finally, the proposed network is compared with several existing networks. RESULTS The Dice similarity coefficient (DSC), sensitivity and Jaccard score of the proposed system reach 0.885, 0.933 and 0.8, respectively. The introduction of multilevel pyramid pooling modules and attention gates improves the DSC by 0.029 and 0.022, respectively. Moreover, compared with Res-UNet, Dense-UNet, residual channel attention UNet (RCA-UNet), DeepLab V3+ and UNet++, the DSC is improved by 0.032, 0.026, 0.014, 0.041 and 0.011, respectively. CONCLUSION The proposed multilevel attention pyramid scene parsing network achieves state-of-the-art performance, and the introduction of multilevel pyramid pooling modules and attention gates improves the performance of glioma segmentation tasks.
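A hedged PyTorch sketch of the pyramid pooling module at the core of this architecture (PSPNet-style; the bin sizes and channel widths are illustrative, and the paper's multilevel stacking and attention gates are omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPoolingModule(nn.Module):
    """Pool the feature map at several grid sizes, project each pooled
    map with a 1x1 conv, upsample back, and concatenate with the input
    to aggregate multiscale context."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        out_ch = in_ch // len(bins)
        self.branches = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, out_ch, 1)) for b in bins)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x] + [F.interpolate(branch(x), size=(h, w),
                                     mode="bilinear", align_corners=False)
                       for branch in self.branches]
        return torch.cat(feats, dim=1)

x = torch.rand(1, 64, 32, 32)
print(PyramidPoolingModule(64)(x).shape)  # torch.Size([1, 128, 32, 32])
```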
Collapse
Affiliation(s)
- Zhenyu Zhang
- School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
| | - Shouwei Gao
- School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
| | - Zheng Huang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
| |
Collapse
|
37
|
Pathak P, Jalal AS, Rai R. Breast Cancer Image Classification: A Review. Curr Med Imaging 2020; 17:720-740. [PMID: 33371857 DOI: 10.2174/0929867328666201228125208] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 09/23/2020] [Accepted: 10/14/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND Breast cancer represents uncontrolled breast cell growth and is the most commonly diagnosed cancer in women worldwide. Early detection of breast cancer improves the chances of survival and increases treatment options. There are various methods for screening breast cancer, such as mammography, ultrasound, computed tomography and Magnetic Resonance Imaging (MRI). MRI is gaining prominence as an alternative screening tool for early detection and breast cancer diagnosis. Nevertheless, MRI can hardly be examined without the use of a Computer-Aided Diagnosis (CAD) framework, due to the vast amount of data. OBJECTIVE This paper aims to cover the approaches used in CAD systems for the detection of breast cancer. METHODS In this paper, the methods used in CAD systems are categorized into two classes: the conventional approach and the artificial intelligence (AI) approach. RESULTS The conventional approach covers the basic steps of image processing, such as preprocessing, segmentation, feature extraction and classification. The AI approach covers the various convolutional and deep learning networks used for diagnosis. CONCLUSION This review discusses some of the core concepts used in breast cancer imaging and presents a comprehensive review of past efforts to address this problem.
Collapse
Affiliation(s)
- Pooja Pathak
- Department of Mathematics, GLA University, Mathura, India
| | - Anand Singh Jalal
- Department of Computer Engineering & Applications, GLA University, Mathura, India
| | - Ritu Rai
- Department of Computer Engineering & Applications, GLA University, Mathura, India
| |
Collapse
|
38
|
Nag MK, Gupta A, Hariharasudhan AS, Sadhu AK, Das A, Ghosh N. Quantitative analysis of brain herniation from non-contrast CT images using deep learning. J Neurosci Methods 2020; 349:109033. [PMID: 33316319 DOI: 10.1016/j.jneumeth.2020.109033] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 12/06/2020] [Accepted: 12/08/2020] [Indexed: 02/08/2023]
Abstract
BACKGROUND Brain herniation is one of the fatal outcomes of increased intracranial pressure (ICP). It is caused by the presence of a hematoma or tumor mass in the brain. The ideal midline (iML) divides the healthy brain into two (right and left) nearly equal hemispheres. In the presence of a hematoma, the midline tends to shift from its original position to the contralateral side of the mass, developing into a deformed midline (dML). NEW METHOD In this study, a convolutional neural network (CNN) was used to predict the deformed left and right hemispheres. The proposed algorithm was validated with non-contrast computed tomography (NCCT) of (n = 45) subjects with two types of brain hemorrhage: epidural hemorrhage (EDH; n = 5) and intra-parenchymal hemorrhage (IPH; n = 40). RESULTS The method demonstrated excellent potential in automatically predicting midline shift (MLS), with average errors of 1.29 mm by location, 66.4 mm2 by 2D area, and 253.73 mm3 by 3D volume. The estimated MLS could be well correlated with other clinical markers, including hematoma volume - R2 = 0.86 (EDH); 0.48 (IPH) - and a Radiologist-defined severity score (RSS) - R2 = 0.62 (EDH); 0.57 (IPH). RSS was found to be even better correlated (R2 = 0.98 (EDH); 0.70 (IPH)), hence better predictable, by a joint correlation between hematoma volume, midline pixel- or voxel-shift, and the minimum distance of the (ideal or deformed) midline from the hematoma (boundary or centroid). CONCLUSION All these predictors were computed automatically, highlighting the excellent clinical potential of the proposed automated method for MLS estimation and severity prediction in hematoma decision support systems.
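Given per-row coordinates of the ideal and deformed midlines, the location- and area-based MLS measures reduce to simple array operations; a sketch with synthetic midlines and an assumed pixel spacing, not the paper's implementation:

```python
import numpy as np

def midline_shift_metrics(ideal_x, deformed_x, spacing_mm=0.5):
    """Two simple MLS measures from per-row x coordinates of the ideal
    and deformed midlines on one slice: maximum shift by location (mm)
    and the 2D area enclosed between the curves (mm^2)."""
    shift = np.abs(np.asarray(deformed_x, float) - np.asarray(ideal_x, float))
    max_shift_mm = shift.max() * spacing_mm
    area_mm2 = shift.sum() * spacing_mm ** 2   # one pixel row per sample
    return max_shift_mm, area_mm2

rows = np.arange(100)
ideal = np.full(100, 256.0)                                 # straight iML
deformed = ideal + 8 * np.exp(-((rows - 50) / 15.0) ** 2)   # local bulge
print(midline_shift_metrics(ideal, deformed))
```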
Collapse
Affiliation(s)
- Manas Kumar Nag
- School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, India.
| | - Akshat Gupta
- Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, India
| | - A S Hariharasudhan
- Department of Computer Science, Delhi Technological University, New Delhi, India
| | - Anup Kumar Sadhu
- EKO Diagnostics, Medical College and Hospitals Campus, Kolkata, India
| | - Abir Das
- Department of Computer Science, Indian Institute of Technology, Kharagpur, India
| | - Nirmalya Ghosh
- Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, India
| |
Collapse
|
39
|
Nam Y, Park GE, Kang J, Kim SH. Fully Automatic Assessment of Background Parenchymal Enhancement on Breast MRI Using Machine-Learning Models. J Magn Reson Imaging 2020; 53:818-826. [PMID: 33219624 DOI: 10.1002/jmri.27429] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/15/2020] [Accepted: 10/16/2020] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Automated measurement and classification models with objectivity and reproducibility are required for accurate evaluation of the breast cancer risk associated with fibroglandular tissue (FGT) and background parenchymal enhancement (BPE). PURPOSE To develop and evaluate a machine-learning algorithm for breast FGT segmentation and BPE classification. STUDY TYPE Retrospective. POPULATION A total of 794 patients with breast cancer, with 594 patients assigned to the development set and 200 patients to the test set. FIELD STRENGTH/SEQUENCE 3T and 1.5T; T2-weighted, fat-saturated T1-weighted (T1W) with dynamic contrast enhancement (DCE). ASSESSMENT Manual segmentation was performed for the whole breast and FGT regions in the contralateral breast. The BPE region was determined by thresholding the subtraction of the pre- and postcontrast T1W images within the segmented FGT mask. Two radiologists independently assessed the categories of FGT and BPE. A deep-learning-based algorithm was designed to segment and measure the volume of the whole breast and FGT and to classify the grade of BPE. STATISTICAL TESTS Dice similarity coefficients (DSC) and Spearman correlation analysis were used to compare the volumes from the manual and deep-learning-based segmentations. Kappa statistics were used for agreement analysis. The area under the receiver operating characteristic (ROC) curve (AUC) and F1 scores were calculated to evaluate the performance of BPE classification. RESULTS The mean (±SD) DSC between manual and deep-learning segmentations was 0.85 ± 0.11. The correlation coefficient for FGT volume from the manual- and deep-learning-based segmentations was 0.93. The overall accuracy of manual segmentation and deep-learning segmentation in the BPE classification task was 66% and 67%, respectively. For binary categorization of BPE grade (minimal/mild vs. moderate/marked), overall accuracy increased to 91.5% for manual segmentation and 90.5% for deep-learning segmentation; the AUC was 0.93 for both methods. DATA CONCLUSION This deep-learning-based algorithm can provide reliable segmentation and classification results for BPE. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY STAGE: 2.
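The BPE thresholding step described under ASSESSMENT can be sketched as a relative-enhancement threshold applied inside the FGT mask; the threshold value and arrays below are assumptions for illustration, not the study's parameters:

```python
import numpy as np

def bpe_mask(pre, post, fgt_mask, rel_threshold=0.2):
    """BPE region: voxels inside the FGT mask whose relative
    post-contrast enhancement exceeds an assumed threshold."""
    enhancement = (post - pre) / np.clip(pre, 1e-6, None)
    return (enhancement > rel_threshold) & (fgt_mask > 0)

rng = np.random.default_rng(6)
pre = rng.random((16, 64, 64)) + 0.5             # hypothetical pre-contrast T1W
post = pre * (1 + rng.random(pre.shape) * 0.5)   # enhanced post-contrast
fgt = rng.random(pre.shape) > 0.7                # hypothetical FGT mask
mask = bpe_mask(pre, post, fgt)
print(f"BPE fraction of FGT: {mask.sum() / (fgt.sum() + 1e-9):.2%}")
```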
Collapse
Affiliation(s)
- Yoonho Nam
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea.,Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
| | - Ga Eun Park
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Junghwa Kang
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
| | - Sung Hun Kim
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| |
Collapse
|
40
|
Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging 2020; 6:121. [PMID: 34460565 PMCID: PMC8321208 DOI: 10.3390/jimaging6110121] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/19/2020] [Accepted: 10/26/2020] [Indexed: 02/08/2023] Open
Abstract
Deep learning algorithms have become the first choice as an approach to medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumor, colon and lung cancers are studied and reviewed. Deep learning has been applied in almost all of the imaging modalities used for cervical and breast cancers and MRIs for the brain tumor. The result of the review process indicated that deep learning methods have achieved state-of-the-art in tumor detection, segmentation, feature extraction and classification. As presented in this paper, the deep learning approaches were used in three different modes that include training from scratch, transfer learning through freezing some layers of the deep learning network and modifying the architecture to reduce the number of parameters existing in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancer cases has been studied by researchers affiliated to academic and medical institutes in economically developed countries; while, the study has not had much attention in Africa despite the dramatic soar of cancer risks in the continent.
Collapse
Affiliation(s)
- Taye Girma Debelee
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, 120611 Addis Ababa, Ethiopia
| | - Samuel Rahimeto Kebede
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia
- Department of Electrical and Computer Engineering, Debreberhan University, 445 Debre Berhan, Ethiopia
| | - Friedhelm Schwenker
- Institute of Neural Information Processing, University of Ulm, 89081 Ulm, Germany
| | | |
Collapse
|
41
|
Volumetric breast density estimation on MRI using explainable deep learning regression. Sci Rep 2020; 10:18095. [PMID: 33093572 PMCID: PMC7581772 DOI: 10.1038/s41598-020-75167-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 10/12/2020] [Indexed: 01/10/2023] Open
Abstract
The purpose of this paper was to assess the feasibility of volumetric breast density estimation on MRI without segmentation, accompanied by an explainability step. A total of 615 patients with breast cancer were included for volumetric breast density estimation. A 3-dimensional regression convolutional neural network (CNN) was used to estimate the volumetric breast density. Patients were split into training (N = 400), validation (N = 50), and hold-out test (N = 165) sets. Hyperparameters were optimized using Neural Network Intelligence, and augmentations consisted of translations and rotations. The estimated densities were compared with the ground truth using Spearman's correlation and Bland–Altman plots, and the output of the CNN was visually analyzed using SHapley Additive exPlanations (SHAP). Spearman's correlation between estimated and ground-truth density was ρ = 0.81 (N = 165, P < 0.001) in the hold-out test set. The estimated density had a median bias of 0.70% (95% limits of agreement = −6.8% to 5.0%) relative to the ground truth. SHAP showed that in correct density estimations the algorithm based its decision on fibroglandular and fatty tissue, whereas in incorrect estimations other structures, such as the pectoral muscle or the heart, were included. To conclude, it is feasible to automatically estimate volumetric breast density on MRI without segmentation and to provide accompanying explanations.
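A minimal sketch of the evaluation described above (Spearman correlation plus Bland–Altman bias and limits of agreement), using simulated densities as placeholders for the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
true_density = rng.uniform(5.0, 40.0, size=165)              # placeholder % densities
est_density = true_density + rng.normal(0.0, 3.0, size=165)  # simulated CNN outputs

rho, p = spearmanr(est_density, true_density)

diff = est_density - true_density
bias = np.median(diff)              # the paper reports a median bias
loa = 1.96 * np.std(diff, ddof=1)   # half-width of the 95% limits of agreement
print(f"rho={rho:.2f}  bias={bias:+.2f}%  LoA=[{bias - loa:.1f}%, {bias + loa:.1f}%]")
```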
Collapse
|
42
|
Verde F, Romeo V, Stanzione A, Maurea S. Current trends of artificial intelligence in cancer imaging. Artif Intell Med Imaging 2020; 1:87-93. [DOI: 10.35711/aimi.v1.i3.87] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 09/22/2020] [Accepted: 09/23/2020] [Indexed: 02/06/2023] Open
Abstract
In this editorial, we discuss the current research status of artificial intelligence (AI) in oncology, reviewing the basics of machine learning (ML) and deep learning (DL) techniques and their emerging applications in the clinical and imaging cancer workflow. The growing amount of available "big data", coupled with increasing computational power, has enabled the development of computer-based systems capable of performing advanced tasks in many areas of clinical care, especially in medical imaging. ML is a branch of data science that allows the creation of computer algorithms that can learn and make predictions without prior instructions. DL is a subgroup of artificial neural network algorithms configured to automatically extract features and perform high-level tasks; convolutional neural networks are the most common DL models used in medical image analysis. AI methods have been proposed in many areas of oncology, yielding promising results in radiology-based clinical applications. In detail, we explore the emerging applications of AI in oncological risk assessment, lesion detection, characterization, staging, and therapy response. Critical issues such as the lack of reproducibility and generalizability need to be addressed before AI systems can be fully implemented in clinical practice. Nevertheless, the impact of AI on cancer imaging is driving the shift of oncology toward precision diagnostics and personalized cancer treatment.
Collapse
Affiliation(s)
- Francesco Verde
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Napoli 80131, Italy
| | - Valeria Romeo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Napoli 80131, Italy
| | - Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Napoli 80131, Italy
| | - Simone Maurea
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Napoli 80131, Italy
| |
Collapse
|
43
|
Liu M, Vanguri R, Mutasa S, Ha R, Liu YC, Button T, Jambawalikar S. Channel width optimized neural networks for liver and vessel segmentation in liver iron quantification. Comput Biol Med 2020; 122:103798. [PMID: 32658724 DOI: 10.1016/j.compbiomed.2020.103798] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Revised: 04/27/2020] [Accepted: 04/29/2020] [Indexed: 12/19/2022]
Abstract
INTRODUCTION MRI T2* relaxometry protocols are often used for liver iron quantification in patients with hemochromatosis. Several methods exist to semi-automatically segment parenchyma and exclude vessels for this calculation. PURPOSE To determine whether including multiple echoes as inputs to convolutional neural networks (CNN) improves automated liver and vessel segmentation in MRI T2* relaxometry protocols, and whether the resulting segmentations agree with manual segmentations for liver iron quantification analysis. METHODS A multi-echo gradient recalled echo (GRE) MRI sequence for T2* relaxometry was performed in 79 exams on 31 patients with hemochromatosis for iron quantification analysis. In total, 275 axial liver slices were manually segmented as ground-truth masks. A batch-normalized U-Net with variable input width, incorporating multiple echoes, was used for segmentation, with DICE as the accuracy metric. ANOVA was used to evaluate the significance of channel-width changes in segmentation accuracy, and linear regression was used to model the relationship between channel width and segmentation accuracy. Liver segmentations were applied to the relaxometry data to calculate liver T2*, yielding liver iron concentration (LIC) derived from literature-based calibration curves. Manual and CNN-based LIC values were compared using Pearson correlation, and Bland–Altman plots were used to visualize differences between them. RESULTS Performance metrics were tested on 55 hold-out slices. Linear regression indicates a monotonic increase of DICE with increasing channel depth (p = 0.001), with a slope of 3.61e-3. ANOVA indicates a significant increase in segmentation accuracy over a single channel starting at three channels. Incorporating all channels results in an average DICE of 0.86, an average increase of 0.07 over a single channel. The LIC calculated from CNN-segmented livers agrees well with manual segmentation (R = 0.998, slope = 0.914, p << 0.001), with an average absolute difference of 0.27 ± 0.99 mg Fe/g, or 1.34 ± 4.3%. CONCLUSION More input echoes yield higher model accuracy up to the noise floor; echoes beyond the first three echo times in GRE-based T2* relaxometry do not contribute significant information for segmenting the liver for LIC calculation. Deep learning models with a three-channel width therefore generalize to protocols of more than three echoes, effectively a universal requirement for relaxometry. The deep learning segmentations achieve good accuracy compared with manual segmentations with minimal preprocessing, and liver iron values calculated from hand-segmented and neural-network-segmented livers were not statistically different from each other.
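The variable-input-width idea, stacking GRE echoes as input channels of a batch-normalized U-Net, could be sketched as below; only the first stage is shown, and the 64-feature width and 2D convolutions are assumptions not stated in the abstract.

```python
import torch
import torch.nn as nn

class MultiEchoStem(nn.Module):
    """First stage of a batch-normalized U-Net whose input width equals
    the number of GRE echoes stacked as channels; the remaining encoder
    and decoder stages are omitted from this sketch."""
    def __init__(self, n_echoes: int, features: int = 64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(n_echoes, features, kernel_size=3, padding=1),
            nn.BatchNorm2d(features),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: (batch, n_echoes, H, W)
        return self.stem(x)

echoes = torch.randn(1, 3, 256, 256)  # one slice, three echo images as channels
print(MultiEchoStem(n_echoes=3)(echoes).shape)  # torch.Size([1, 64, 256, 256])
```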
Collapse
Affiliation(s)
- Michael Liu
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA.
| | - Rami Vanguri
- Department of Pathology & Cell Biology, Columbia University, New York, NY, USA
| | - Simukayi Mutasa
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
| | - Richard Ha
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
| | - Yu-Cheng Liu
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
| | - Terry Button
- Department of Radiology, Stony Brook University, Stony Brook, NY, USA
| | - Sachin Jambawalikar
- Department of Radiology, Columbia University Irving Medical Center, 622 W 168th Street, New York, NY, 10032, USA
| |
Collapse
|
44
|
Chhetri A, Li X, Rispoli JV. Current and Emerging Magnetic Resonance-Based Techniques for Breast Cancer. Front Med (Lausanne) 2020; 7:175. [PMID: 32478083 PMCID: PMC7235971 DOI: 10.3389/fmed.2020.00175] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Accepted: 04/15/2020] [Indexed: 01/10/2023] Open
Abstract
Breast cancer is the most commonly diagnosed cancer among women worldwide, and early detection remains a principal factor in improved patient outcomes and reduced mortality. Clinically, magnetic resonance imaging (MRI) techniques are routinely used to determine benign and malignant tumor phenotypes and to monitor treatment outcomes. Static MRI techniques enable superior structural contrast between adipose and fibroglandular tissues, while dynamic MRI techniques can elucidate functional characteristics of malignant tumors. The preferred clinical procedure, dynamic contrast-enhanced MRI, illuminates the hypervascularity of breast tumors through a gadolinium-based contrast agent; however, accumulation of the potentially toxic contrast agent remains a major limitation of the technique, propelling MRI research toward finding alternative, noninvasive methods. Three such techniques are magnetic resonance spectroscopy, chemical exchange saturation transfer, and non-contrast diffusion-weighted imaging. These methods shed light on underlying chemical composition, provide snapshots of tissue metabolism, and more pronouncedly characterize microstructural heterogeneity. This review article outlines the present state of clinical MRI for breast cancer and examines several research techniques that demonstrate capacity for clinical translation. Ultimately, multi-parametric MRI, incorporating one or more of these emerging methods, presently holds the best potential to afford improved specificity and deliver excellent accuracy to clinics for the prediction, detection, and monitoring of breast cancer.
Collapse
Affiliation(s)
- Apekshya Chhetri
- Magnetic Resonance Biomedical Engineering Laboratory, Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States
- Basic Medical Sciences, College of Veterinary Medicine, Purdue University, West Lafayette, IN, United States
| | - Xin Li
- Magnetic Resonance Biomedical Engineering Laboratory, Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States
| | - Joseph V. Rispoli
- Magnetic Resonance Biomedical Engineering Laboratory, Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States
- Center for Cancer Research, Purdue University, West Lafayette, IN, United States
- School of Electrical & Computer Engineering, Purdue University, West Lafayette, IN, United States
| |
Collapse
|
45
|
Retson TA, Eghtedari M. Computer-Aided Detection/Diagnosis in Breast Imaging: A Focus on the Evolving FDA Regulations for Using Software as a Medical Device. CURRENT RADIOLOGY REPORTS 2020. [DOI: 10.1007/s40134-020-00350-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
|
46
|
Lai X, Yang W, Li R. DBT Masses Automatic Segmentation Using U-Net Neural Networks. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:7156165. [PMID: 32411285 PMCID: PMC7204342 DOI: 10.1155/2020/7156165] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 12/17/2019] [Accepted: 12/18/2019] [Indexed: 12/02/2022]
Abstract
To improve the accuracy of automatic segmentation of breast masses in digital breast tomosynthesis (DBT) images, we propose an automatic DBT mass segmentation algorithm based on a U-Net architecture. First, to suppress background tissue noise and enhance the contrast of candidate mass regions, a constraint matrix is constructed after a top-hat transform of the DBT image and multiplied with the image. Second, an efficient U-Net neural network is built, and image patches are extracted and augmented to establish the training dataset used to train the U-Net model. A presegmentation of the DBT tumors is then performed, initially classifying each pixel into one of two labels. Finally, all regions smaller than 50 voxels are considered false positives and removed, and a median filter smooths the mass boundaries to obtain the final segmentation results. The proposed method effectively improves the performance of automatic mass segmentation in DBT images. Using detection accuracy (Acc), sensitivity (Sen), specificity (Spe), and area under the curve (AUC) as evaluation indexes, the Acc, Sen, Spe, and AUC for DBT mass segmentation in the entire experimental dataset were 0.871, 0.869, 0.882, and 0.859, respectively. Our proposed U-Net-based automatic DBT mass segmentation system obtains promising results, is superior to some classical architectures, and may be expected to have clinical application prospects.
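The post-processing step, removing components under 50 voxels and median-filtering the boundaries, might be implemented along these lines; the filter size is an assumed parameter not given in the abstract.

```python
import numpy as np
from scipy.ndimage import label, median_filter

def postprocess(pred_mask, min_size=50, filter_size=3):
    """Remove connected components smaller than min_size voxels, then
    smooth the remaining mass boundaries with a median filter."""
    labeled, _ = label(pred_mask)
    sizes = np.bincount(labeled.ravel())
    keep = np.flatnonzero(sizes >= min_size)
    keep = keep[keep != 0]                   # never keep the background label
    cleaned = np.isin(labeled, keep)
    return median_filter(cleaned.astype(np.uint8), size=filter_size)
```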
Collapse
Affiliation(s)
- Xiaobo Lai
- College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou 310053, China
| | - Weiji Yang
- College of Life Science, Zhejiang Chinese Medical University, Hangzhou 310053, China
| | - Ruipeng Li
- Hangzhou Third People's Hospital, Hangzhou 310009, China
| |
Collapse
|
47
|
Machine learning with autophagy-related proteins for discriminating renal cell carcinoma subtypes. Sci Rep 2020; 10:720. [PMID: 31959887 PMCID: PMC6971298 DOI: 10.1038/s41598-020-57670-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Accepted: 12/18/2019] [Indexed: 12/15/2022] Open
Abstract
Machine learning techniques have previously been applied to the classification of tumors, based largely on morphological features of tumor cells recognized in H&E images. Here, we tested the possibility of using numeric data acquired from software-based quantification of certain marker proteins, i.e., key autophagy proteins (ATGs), obtained from immunohistochemical (IHC) images of renal cell carcinomas (RCC). Using IHC staining and automated image quantification with a tissue microarray (TMA) of RCC, we found that ATG1, ATG5, and microtubule-associated protein 1A/1B light chain 3B (LC3B) were significantly reduced, suggesting a reduction in the basal level of autophagy in RCC. Notably, the levels of the ATG proteins expressed did not correspond to the mRNA levels expressed in these tissues. Applying a supervised machine learning algorithm, the K-Nearest Neighbor (KNN), to our quantified numeric data revealed that LC3B provided a strong measure for discriminating clear cell RCC (ccRCC), while ATG5 and sequestosome-1 (SQSTM1/p62) could be used to classify chromophobe RCC (crRCC). The quantification of particular combinations of ATG1, ATG16L1, ATG5, LC3B, and p62, all of which measure the basal level of autophagy, was able to discriminate among normal tissue, crRCC, and ccRCC, suggesting that the basal level of autophagy is a potentially useful parameter for RCC discrimination. In addition to our observation that the basal level of autophagy is reduced in RCC, our workflow from quantitative IHC analysis to machine learning could be considered a complementary tool for the classification of RCC subtypes, and also for other tumor types for which precision medicine requires characterization.
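A minimal sketch of the KNN classification step on quantified marker data, with random placeholder values standing in for the real IHC intensities and k = 5 as an assumed neighbor count:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Placeholder feature table: one row per TMA core, one column per
# quantified marker (ATG1, ATG16L1, ATG5, LC3B, p62).
X = rng.random((120, 5))
y = rng.choice(["normal", "ccRCC", "crRCC"], size=120)

knn = KNeighborsClassifier(n_neighbors=5)    # k = 5 is an assumed choice
print(cross_val_score(knn, X, y, cv=5).mean())
```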
Collapse
|
48
|
Yang T, Zhou Y, Li L, Zhu C. DCU-Net: Multi-scale U-Net for brain tumor segmentation. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2020; 28:709-726. [PMID: 32444591 DOI: 10.3233/xst-200650] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
BACKGROUND Brain tumor segmentation plays an important role in assisting disease diagnosis, treatment planning, and surgical navigation. OBJECTIVE This study aims to improve the accuracy of tumor boundary segmentation using a multi-scale U-Net network. METHODS A novel U-Net with dilated convolution (DCU-Net) structure is proposed for brain tumor segmentation, based on the classic U-Net structure. First, the MR brain tumor images are pre-processed to alleviate the class-imbalance problem by reducing the input of background pixels. Then, multi-scale spatial pyramid pooling is used to replace the max pooling at the end of the down-sampling path, expanding the feature receptive field while maintaining image resolution. Finally, a dilated convolution residual block is incorporated to improve the skip connections in the training networks, strengthening the network's ability to recognize tumor details. RESULTS Evaluated on the Brain Tumor Segmentation (BRATS) 2018 Challenge training dataset, the proposed model achieved Dice similarity coefficient (DSC) scores of 0.91, 0.78, and 0.83 for whole tumor, core tumor, and enhancing tumor segmentation, respectively. CONCLUSIONS The experimental results indicate that the proposed model yields promising performance in automated brain tumor segmentation.
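A dilated-convolution residual block of the kind used to refine DCU-Net's skip connections might look like this sketch; the dilation rate and channel count are assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    """Residual block built from dilated 3x3 convolutions; with
    padding equal to the dilation rate, spatial resolution is kept."""
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Identity skip: the dilated path widens the receptive field
        # while the residual sum preserves fine detail.
        return torch.relu(x + self.body(x))

x = torch.randn(1, 64, 120, 120)
print(DilatedResBlock(64)(x).shape)  # torch.Size([1, 64, 120, 120])
```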
Collapse
Affiliation(s)
- Tiejun Yang
- Key Laboratory of Grain Information Processing and Control (Henan University of Technology), Ministry of Education, China
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, China
| | - Yudan Zhou
- School of Information Science and Technology, Henan University of Technology, Zhengzhou, China
| | - Lei Li
- School of Information Science and Technology, Henan University of Technology, Zhengzhou, China
| | - Chunhua Zhu
- School of Information Science and Technology, Henan University of Technology, Zhengzhou, China
| |
Collapse
|
49
|
Parekh VS, Macura KJ, Harvey SC, Kamel IR, El-Khouli R, Bluemke DA, Jacobs MA. Multiparametric deep learning tissue signatures for a radiological biomarker of breast cancer: Preliminary results. Med Phys 2020; 47:75-88. [PMID: 31598978 PMCID: PMC7003775 DOI: 10.1002/mp.13849] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Revised: 09/09/2019] [Accepted: 09/13/2019] [Indexed: 12/18/2022] Open
Abstract
PURPOSE Deep learning is emerging in radiology due to the increased computational capabilities available in reading rooms. These computational developments can mimic the radiologist and may allow more accurate characterization of normal and pathological lesion tissue, assisting radiologists in defining different diseases. We introduce a novel tissue-signature model based on tissue characteristics of breast tissue from multiparametric magnetic resonance imaging (mpMRI). The breast tissue signatures are used as inputs to a stacked sparse autoencoder (SSAE) multiparametric deep learning (MPDL) network for segmentation of breast mpMRI. METHODS We constructed the MPDL network from an SSAE with five layers of 10 nodes each. A total cohort of 195 breast cancer subjects was used for training and testing of the MPDL network, consisting of a training dataset of 145 subjects and an independent validation set of 50 subjects. After segmentation, we used a combined SAE-support vector machine (SAE-SVM) learning method for classification. Dice similarity (DS) metrics were calculated between the MPDL-segmented lesions and the dynamic contrast-enhanced (DCE) MRI-defined lesions. Sensitivity, specificity, and area under the curve (AUC) metrics were used to classify benign versus malignant lesions. RESULTS The MPDL segmentation resulted in a high DS of 0.87 ± 0.05 for malignant lesions and 0.84 ± 0.07 for benign lesions. The MPDL had excellent sensitivity and specificity of 86% and 86%, with positive and negative predictive values of 92% and 73%, respectively, and an AUC of 0.90. CONCLUSIONS Using the new tissue-signature model as input to the MPDL algorithm, we successfully validated MPDL in a large cohort of subjects and achieved results similar to those of radiologists.
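A bare-bones sketch of the SSAE encoder described in the methods (five layers of 10 nodes each); the four-feature voxel signature (one intensity per assumed mpMRI sequence) and the omission of sparsity penalties and decoders are simplifications of the paper's design.

```python
import torch
import torch.nn as nn

class SSAEEncoder(nn.Module):
    """Five stacked layers of 10 nodes, taking a voxel-wise mpMRI
    'tissue signature' as input; decoders and sparsity terms that a
    full SSAE would train with are omitted from this sketch."""
    def __init__(self, in_features: int = 4, width: int = 10, depth: int = 5):
        super().__init__()
        layers, prev = [], in_features
        for _ in range(depth):
            layers += [nn.Linear(prev, width), nn.Sigmoid()]
            prev = width
        self.encoder = nn.Sequential(*layers)

    def forward(self, x):  # x: (n_voxels, in_features)
        return self.encoder(x)

signatures = torch.rand(1000, 4)      # placeholder voxel-wise signatures
codes = SSAEEncoder()(signatures)     # (1000, 10) encoded tissue features
```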
Collapse
Affiliation(s)
- Vishwa S. Parekh
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21208, USA
| | - Katarzyna J. Macura
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
| | - Susan C. Harvey
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Hologic Inc., 36 Apple Ridge Rd, Danbury, CT 06810, USA
| | - Ihab R. Kamel
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
| | - Riham El-Khouli
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Radiology and Radiological Sciences, University of Kentucky, Lexington, KY 40536, USA
| | - David A. Bluemke
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI 53726, USA
| | - Michael A. Jacobs
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
| |
Collapse
|
50
|
Lenchik L, Heacock L, Weaver AA, Boutin RD, Cook TS, Itri J, Filippi CG, Gullapalli RP, Lee J, Zagurovskaya M, Retson T, Godwin K, Nicholson J, Narayana PA. Automated Segmentation of Tissues Using CT and MRI: A Systematic Review. Acad Radiol 2019; 26:1695-1706. [PMID: 31405724 PMCID: PMC6878163 DOI: 10.1016/j.acra.2019.07.006] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Revised: 07/17/2019] [Accepted: 07/17/2019] [Indexed: 01/10/2023]
Abstract
RATIONALE AND OBJECTIVES The automated segmentation of organs and tissues throughout the body using computed tomography and magnetic resonance imaging has been rapidly increasing. Research into many medical conditions has benefited greatly from these approaches by allowing the development of more rapid and reproducible quantitative imaging markers. These markers have been used to help diagnose disease, determine prognosis, select patients for therapy, and follow responses to therapy. Because some of these tools are now transitioning from research environments to clinical practice, it is important for radiologists to become familiar with various methods used for automated segmentation. MATERIALS AND METHODS The Radiology Research Alliance of the Association of University Radiologists convened an Automated Segmentation Task Force to conduct a systematic review of the peer-reviewed literature on this topic. RESULTS The systematic review presented here includes 408 studies and discusses various approaches to automated segmentation using computed tomography and magnetic resonance imaging for neurologic, thoracic, abdominal, musculoskeletal, and breast imaging applications. CONCLUSION These insights should help prepare radiologists to better evaluate automated segmentation tools and apply them not only to research, but eventually to clinical practice.
Collapse
Affiliation(s)
- Leon Lenchik
- Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157.
| | - Laura Heacock
- Department of Radiology, NYU Langone, New York, New York
| | - Ashley A Weaver
- Department of Biomedical Engineering, Wake Forest School of Medicine, Winston-Salem, North Carolina
| | - Robert D Boutin
- Department of Radiology, University of California Davis School of Medicine, Sacramento, California
| | - Tessa S Cook
- Department of Radiology, University of Pennsylvania, Philadelphia Pennsylvania
| | - Jason Itri
- Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157
| | - Christopher G Filippi
- Department of Radiology, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Lenox Hill Hospital, New York, New York
| | - Rao P Gullapalli
- Department of Radiology, University of Maryland School of Medicine, Baltimore, Maryland
| | - James Lee
- Department of Radiology, University of Kentucky, Lexington, Kentucky
| | | | - Tara Retson
- Department of Radiology, University of California San Diego, San Diego, California
| | - Kendra Godwin
- Medical Library, Memorial Sloan Kettering Cancer Center, New York, New York
| | - Joey Nicholson
- NYU Health Sciences Library, NYU School of Medicine, NYU Langone Health, New York, New York
| | - Ponnada A Narayana
- Department of Diagnostic and Interventional Imaging, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas
| |
Collapse
|