1. Hizukuri A, Nakayama R, Goto M, Sakai K. Computerized Segmentation Method for Nonmasses on Breast DCE-MRI Images Using ResUNet++ with Slice Sequence Learning and Cross-Phase Convolution. Journal of Imaging Informatics in Medicine 2024; 37:1567-1578. [PMID: 38441702] [DOI: 10.1007/s10278-024-01053-6]
Abstract
The purpose of this study was to develop a computerized segmentation method for nonmasses using ResUNet++ with slice sequence learning and cross-phase convolution to analyze temporal information in breast dynamic contrast material-enhanced magnetic resonance imaging (DCE-MRI) images. The dataset consisted of a series of DCE-MRI examinations from 54 patients, each containing three-phase images: one image acquired before contrast injection and two images acquired after contrast injection. In the proposed method, the region of interest (ROI) slice images are first extracted from each phase image. The slice images at the same position in each ROI are stacked to generate a three-dimensional (3D) tensor. A cross-phase convolution generates feature maps with the 3D tensor to incorporate the temporal information. Subsequently, the feature maps are used as the input layers for ResUNet++. New feature maps are extracted from the input data using the ResUNet++ encoders, following which the nonmass regions are segmented by a decoder. A convolutional long short-term memory layer is introduced into the decoder to analyze a sequence of slice images. With the proposed method, the average detection accuracy of nonmasses, number of false positives, Jaccard coefficient, Dice similarity coefficient, positive predictive value, and sensitivity were 90.5%, 1.91, 0.563, 0.712, 0.714, and 0.727, respectively, which were larger than the corresponding values obtained using 3D U-Net, V-Net, and nnFormer. The proposed method achieves high detection and shape accuracies and will be useful in the differential diagnosis of nonmasses.
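
The overlap metrics reported above (Jaccard coefficient, Dice similarity coefficient, positive predictive value, and sensitivity) follow their standard voxel-wise definitions. A minimal sketch of computing them from binary masks, assuming NumPy arrays of predicted and reference labels (illustrative only, not the authors' implementation):

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Standard voxel-wise overlap metrics for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true-positive voxels
    fp = np.logical_and(pred, ~truth).sum()   # false-positive voxels
    fn = np.logical_and(~pred, truth).sum()   # false-negative voxels
    return {
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "ppv": tp / (tp + fp),                # positive predictive value
        "sensitivity": tp / (tp + fn),
    }
```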
Affiliations
- Akiyoshi Hizukuri: Department of Electronic and Computer Engineering, Ritsumeikan University, 1-1-1 Noji-Higashi, Kusatsu, Shiga 525-8577, Japan
- Ryohei Nakayama: Department of Electronic and Computer Engineering, Ritsumeikan University, 1-1-1 Noji-Higashi, Kusatsu, Shiga 525-8577, Japan
- Mariko Goto: Department of Radiology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, 465 Kajiicho, Kawaramachi Hirokoji, Kamigyo-ku, Kyoto 602-8566, Japan
- Koji Sakai: Department of Radiology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, 465 Kajiicho, Kawaramachi Hirokoji, Kamigyo-ku, Kyoto 602-8566, Japan
2. Jing X, Wielema M, Monroy-Gonzalez AG, Stams TRG, Mahesh SVK, Oudkerk M, Sijens PE, Dorrius MD, van Ooijen PMA. Automated Breast Density Assessment in MRI Using Deep Learning and Radiomics: Strategies for Reducing Inter-Observer Variability. J Magn Reson Imaging 2024; 60:80-91. [PMID: 37846440] [DOI: 10.1002/jmri.29058]
Abstract
BACKGROUND: Accurate breast density evaluation allows for more precise risk estimation but suffers from high inter-observer variability.
PURPOSE: To evaluate the feasibility of reducing inter-observer variability of breast density assessment through artificial intelligence (AI) assisted interpretation.
STUDY TYPE: Retrospective.
POPULATION: Six hundred and twenty-one patients without breast prosthesis or reconstructions were randomly divided into training (N = 377), validation (N = 98), and independent test (N = 146) datasets.
FIELD STRENGTH/SEQUENCE: 1.5 T and 3.0 T; T1-weighted spectral attenuated inversion recovery.
ASSESSMENT: Five radiologists independently assessed each scan in the independent test set to establish the inter-observer variability baseline and to reach a reference standard. Deep learning and three radiomics models were developed for three classification tasks: (i) four Breast Imaging-Reporting and Data System (BI-RADS) breast composition categories (A-D), (ii) dense (categories C, D) vs. non-dense (categories A, B), and (iii) extremely dense (category D) vs. moderately dense (categories A-C). The models were tested against the reference standard on the independent test set. AI-assisted interpretation was performed by majority voting between the models and each radiologist's assessment.
STATISTICAL TESTS: Inter-observer variability was assessed using linear-weighted kappa (κ) statistics. Kappa statistics, accuracy, and area under the receiver operating characteristic curve (AUC) were used to assess the models against the reference standard.
RESULTS: In the independent test set, five readers showed an overall substantial agreement on tasks (i) and (ii), but moderate agreement for task (iii). The best-performing model showed substantial agreement with the reference standard for tasks (i) and (ii), but moderate agreement for task (iii). With the assistance of the AI models, almost perfect inter-observer agreement was obtained for tasks (i) (mean κ = 0.86), (ii) (mean κ = 0.94), and (iii) (mean κ = 0.94).
DATA CONCLUSION: Deep learning and radiomics models have the potential to help reduce inter-observer variability of breast density assessment.
LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 1.
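
The agreement statistic used throughout this study is the linearly weighted kappa. A minimal sketch of computing it for ordinal BI-RADS density categories with scikit-learn, using hypothetical reader data rather than the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical BI-RADS composition categories (A-D encoded as 0-3) from two readers.
reader_1 = [0, 1, 2, 3, 2, 1, 0, 3, 2, 2]
reader_2 = [0, 1, 1, 3, 2, 2, 0, 3, 3, 2]

# Linear weights penalize disagreements in proportion to their distance on the
# ordinal scale, so an A-vs-B disagreement costs less than an A-vs-D disagreement.
kappa = cohen_kappa_score(reader_1, reader_2, weights="linear")
print(f"linear-weighted kappa = {kappa:.2f}")
```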
Affiliations
- Xueping Jing: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Machine Learning Lab, Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Mirjam Wielema: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Andrea G Monroy-Gonzalez: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Thom R G Stams: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Shekar V K Mahesh: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Matthijs Oudkerk: Faculty of Medical Sciences, University of Groningen, Groningen, The Netherlands; Institute of Diagnostic Accuracy Research B.V., Groningen, The Netherlands
- Paul E Sijens: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Monique D Dorrius: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Peter M A van Ooijen: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Machine Learning Lab, Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
3. Müller-Franzes G, Khader F, Tayebi Arasteh S, Huck L, Bode M, Han T, Lemainque T, Kather JN, Nebelung S, Kuhl C, Truhn D. Intraindividual Comparison of Different Methods for Automated BPE Assessment at Breast MRI: A Call for Standardization. Radiology 2024; 312:e232304. [PMID: 39012249] [DOI: 10.1148/radiol.232304]
Abstract
Background: The level of background parenchymal enhancement (BPE) at breast MRI provides predictive and prognostic information and can have diagnostic implications. However, there is a lack of standardization regarding BPE assessment.
Purpose: To investigate how well results of quantitative BPE assessment methods correlate among themselves and with assessments made by radiologists experienced in breast MRI.
Materials and Methods: In this pseudoprospective analysis of 5773 breast MRI examinations from 3207 patients (mean age, 60 years ± 10 [SD]), the level of BPE was prospectively categorized according to the Breast Imaging Reporting and Data System by radiologists experienced in breast MRI. For automated extraction of BPE, fibroglandular tissue (FGT) was segmented in an automated pipeline. Four different published methods for automated quantitative BPE extractions were used: two methods (A and B) based on enhancement intensity and two methods (C and D) based on the volume of enhanced FGT. The results from all methods were correlated, and agreement was investigated in comparison with the respective radiologist-based categorization. For surrogate validation of BPE assessment, how accurately the methods distinguished premenopausal women with (n = 50) versus without (n = 896) antihormonal treatment was determined.
Results: Intensity-based methods (A and B) exhibited a correlation with radiologist-based categorization of 0.56 ± 0.01 and 0.55 ± 0.01, respectively, and volume-based methods (C and D) had a correlation of 0.52 ± 0.01 and 0.50 ± 0.01 (P < .001). There were notable correlation differences (P < .001) between the BPE determined with the four methods. Among the four quantitation methods, method D offered the highest accuracy for distinguishing women with versus without antihormonal therapy (P = .01).
Conclusion: Results of different methods for quantitative BPE assessment agree only moderately among themselves or with visual categories reported by experienced radiologists; intensity-based methods correlate more closely with radiologists' ratings than volume-based methods.
© RSNA, 2024. Supplemental material is available for this article. See also the editorial by Mann in this issue.
Affiliations
- All authors: Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Pauwelsstr 30, 52074 Aachen, Germany (G.M.F., F.K., S.T.A., L.H., M.B., T.H., T.L., S.N., C.K., D.T.); National Center for Tumor Diseases, Heidelberg University Hospital, Heidelberg, Germany (J.N.K.); Department of Medical Oncology, Heidelberg University Hospital, Heidelberg, Germany (J.N.K.); Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany (J.N.K.); and Department of Medicine I, University Hospital Dresden, Dresden, Germany (J.N.K.)
4. Chen M, Xing J, Guo L. MRI-based Deep Learning Models for Preoperative Breast Volume and Density Assessment Assisting Breast Reconstruction. Aesthetic Plast Surg 2024. [PMID: 38806828] [DOI: 10.1007/s00266-024-04074-2]
Abstract
BACKGROUND: The volume of the implant is the most critical element of breast reconstruction, so it is necessary to accurately assess the preoperative volume of the healthy and affected breasts and select the appropriate implant for placement. Accurate and automated methods for quantitative assessment of breast volume can optimize breast reconstruction surgery and assist physicians in clinical decision making. The aim of this study was to develop an artificial intelligence model for automated segmentation of the breast and measurement of volume.
MATERIAL AND METHODS: A total of 249 subjects undergoing breast reconstruction surgery were enrolled in this study. Subjects underwent preoperative breast MRI, and the breast region manually outlined by the imaging physician served as the gold standard for volume measurement by the automated segmentation model. In this study, we developed three automated algorithms for automatic segmentation of breast regions, including a simple alignment model, an alignment dynamic encoding model, and a deep learning model. The volumetric agreement between the three automated segmentation algorithms and the breast regions manually segmented by imaging physicians was evaluated by calculating the mean square error (MSE) and intraclass correlation coefficient (ICC), and the reproducibility of the automated segmentation of the breast regions was assessed by the test-retest step.
RESULTS: The three automated breast segmentation models developed in this study (simple registration model, dynamic programming model, and deep learning model) showed strong ICC with manual segmentation of the breast region, with MSEs of 1.124, 0.693, and 0.781, and ICCs of 0.975 (95% CI, 0.869-0.991), 0.986 (95% CI, 0.967-0.996), and 0.983 (95% CI, 0.961-0.992), respectively. Regarding the test-retest results of breast volume, the dynamic programming model performed the best with an MSE of 0.370 and an ICC of 0.993 (95% CI, 0.982-0.997), followed by the deep learning algorithm with an MSE of 0.741 and an ICC of 0.983 (95% CI, 0.956-0.993), and the simple registration algorithm with an MSE of 0.763 and an ICC of 0.982 (95% CI, 0.949-0.993). The reproducibility of the breast region segmented by the three automated algorithms was higher than that of manual segmentation by different radiologists.
CONCLUSION: The three automated breast segmentation algorithms developed in this study generate accurate and reliable breast regions, enable highly reproducible breast region segmentation and automated volume measurements, and provide a valuable tool for surgical selection of appropriate prostheses.
NO LEVEL ASSIGNED: This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
Affiliations
- Muzi Chen: Department of Plastic and Reconstructive Surgery, The First Medical Center, Chinese PLA General Hospital, Beijing 100853, China
- Jiahua Xing: Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, 33 Badachu Road, Shijingshan District, Beijing 100144, China
- Lingli Guo: Department of Plastic and Reconstructive Surgery, The First Medical Center, Chinese PLA General Hospital, Beijing 100853, China
5. Yan R, Murakami W, Mortazavi S, Yu T, Chu FI, Lee-Felker S, Sung K. Quantitative assessment of background parenchymal enhancement is associated with lifetime breast cancer risk in screening MRI. Eur Radiol 2024. [PMID: 38683385] [DOI: 10.1007/s00330-024-10758-9]
Abstract
OBJECTIVES: To compare the quantitative background parenchymal enhancement (BPE) in women with different lifetime risks and BRCA mutation status of breast cancer using screening MRI.
MATERIALS AND METHODS: This study included screening MRI of 535 women divided into three groups based on lifetime risk: nonhigh-risk women, high-risk women without BRCA mutation, and BRCA1/2 mutation carriers. Six quantitative BPE measurements, including percent enhancement (PE) and signal enhancement ratio (SER), were calculated on DCE-MRI after segmentation of the whole breast and fibroglandular tissue (FGT). The associations between lifetime risk factors and BPE were analyzed via linear regression analysis. We adjusted for risk factors influencing BPE using propensity score matching (PSM) and compared the BPE between different groups. A two-sided Mann-Whitney U-test was used to compare the BPE, with a threshold of 0.1 for multiple testing issue-adjusted p values.
RESULTS: Age, BMI, menopausal status, and FGT level were significantly correlated with quantitative BPE based on the univariate and multivariable linear regression analyses. After adjusting for age, BMI, menopausal status, hormonal treatment history, and FGT level using PSM, significant differences were observed between high-risk non-BRCA and BRCA groups in PEFGT (11.5 vs. 8.0%, adjusted p = 0.018) and SERFGT (7.2 vs. 9.3%, adjusted p = 0.066).
CONCLUSION: Quantitative BPE varies in women with different lifetime breast cancer risks and BRCA mutation status. These differences may be due to the influence of multiple lifetime risk factors. Quantitative BPE differences remained between groups with and without BRCA mutations after adjusting for known risk factors associated with BPE.
CLINICAL RELEVANCE STATEMENT: BRCA germline mutations may be associated with quantitative background parenchymal enhancement, excluding the effects of known confounding factors. This finding can provide potential insights into the cancer pathophysiological mechanisms behind lifetime risk models.
KEY POINTS:
- Expanding understanding of breast cancer pathophysiology allows for improved risk stratification and optimized screening protocols.
- Quantitative BPE is significantly associated with lifetime risk factors and differs between BRCA mutation carriers and noncarriers.
- This research offers a possible understanding of the physiological mechanisms underlying quantitative BPE and BRCA germline mutations.
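
Percent enhancement and signal enhancement ratio are derived from the signal intensities of the pre-contrast and post-contrast phases. A minimal sketch using definitions commonly adopted in the DCE-MRI literature (the paper's exact formulas and variable names may differ):

```python
import numpy as np

def percent_enhancement(s_pre: float, s_early: float) -> float:
    """PE: relative signal increase from pre-contrast to early post-contrast, in %."""
    return 100.0 * (s_early - s_pre) / s_pre

def signal_enhancement_ratio(s_pre: float, s_early: float, s_delayed: float) -> float:
    """SER: early enhancement relative to delayed enhancement (a washout measure)."""
    return (s_early - s_pre) / (s_delayed - s_pre)

# Hypothetical mean signal intensities within an FGT mask at three time points.
s0, s1, s2 = 310.0, 420.0, 455.0
print(percent_enhancement(s0, s1))            # about 35.5 %
print(signal_enhancement_ratio(s0, s1, s2))   # about 0.76
```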
Affiliations
- Ran Yan: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA; Department of Bioengineering, Henry Samueli School of Engineering, University of California, Los Angeles, CA, USA
- Wakana Murakami: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA; Department of Radiology, Showa University Graduate School of Medicine, Tokyo, Japan
- Shabnam Mortazavi: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Tiffany Yu: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Fang-I Chu: Department of Radiation Oncology, University of California, Los Angeles, CA, USA
- Stephanie Lee-Felker: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Kyunghyun Sung: Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA; Department of Bioengineering, Henry Samueli School of Engineering, University of California, Los Angeles, CA, USA
6. Zhong Y, Piao Y, Tan B, Liu J. A multi-task fusion model based on a residual-Multi-layer perceptron network for mammographic breast cancer screening. Computer Methods and Programs in Biomedicine 2024; 247:108101. [PMID: 38432087] [DOI: 10.1016/j.cmpb.2024.108101]
Abstract
BACKGROUND AND OBJECTIVE: Deep learning approaches are being increasingly applied for medical computer-aided diagnosis (CAD). However, these methods generally target only specific image-processing tasks, such as lesion segmentation or benign state prediction. For the breast cancer screening task, single feature extraction models are generally used, which directly extract only those potential features from the input mammogram that are relevant to the target task. This can lead to the neglect of other important morphological features of the lesion as well as other auxiliary information from the internal breast tissue. To obtain more comprehensive and objective diagnostic results, in this study, we developed a multi-task fusion model that combines multiple specific tasks for CAD of mammograms.
METHODS: We first trained a set of separate, task-specific models, including a density classification model, a mass segmentation model, and a lesion benignity-malignancy classification model, and then developed a multi-task fusion model that incorporates all of the mammographic features from these different tasks to yield comprehensive and refined prediction results for breast cancer diagnosis.
RESULTS: The experimental results showed that our proposed multi-task fusion model outperformed other related state-of-the-art models in both breast cancer screening tasks in the publicly available datasets CBIS-DDSM and INbreast, achieving a competitive screening performance with area-under-the-curve scores of 0.92 and 0.95, respectively.
CONCLUSIONS: Our model not only allows an overall assessment of lesion types in mammography but also provides intermediate results related to radiological features and potential cancer risk factors, indicating its potential to offer comprehensive workflow support to radiologists.
Affiliations
- Yutong Zhong: School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Yan Piao: School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Baolin Tan: Technology Co. LTD, Shenzhen 518000, PR China
- Jingxin Liu: Department of Radiology, China-Japan Union Hospital, Jilin University, Changchun 130033, PR China
7. Lew CO, Harouni M, Kirksey ER, Kang EJ, Dong H, Gu H, Grimm LJ, Walsh R, Lowell DA, Mazurowski MA. A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast MRI. Sci Rep 2024; 14:5383. [PMID: 38443410] [PMCID: PMC10915139] [DOI: 10.1038/s41598-024-54048-2]
Abstract
Breast density, or the amount of fibroglandular tissue (FGT) relative to the overall breast volume, increases the risk of developing breast cancer. Although previous studies have utilized deep learning to assess breast density, the limited public availability of data and quantitative tools hinders the development of better assessment tools. Our objective was to (1) create and share a large dataset of pixel-wise annotations according to well-defined criteria, and (2) develop, evaluate, and share an automated segmentation method for breast, FGT, and blood vessels using convolutional neural networks. We used the Duke Breast Cancer MRI dataset to randomly select 100 MRI studies and manually annotated the breast, FGT, and blood vessels for each study. Model performance was evaluated using the Dice similarity coefficient (DSC). The model achieved DSC values of 0.92 for breast, 0.86 for FGT, and 0.65 for blood vessels on the test set. The correlation between our model's predicted breast density and the manually generated masks was 0.95. The correlation between the predicted breast density and qualitative radiologist assessment was 0.75. Our automated models can accurately segment breast, FGT, and blood vessels using pre-contrast breast MRI data. The data and the models were made publicly available.
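
Breast density as defined here is the FGT volume relative to the whole-breast volume, so it can be read directly off the predicted masks. A minimal sketch with hypothetical mask shapes and voxel size (not the authors' code):

```python
import numpy as np

def breast_density(fgt_mask: np.ndarray, breast_mask: np.ndarray,
                   voxel_volume_mm3: float = 1.0) -> float:
    """Breast density: FGT volume divided by whole-breast volume."""
    fgt_volume = fgt_mask.astype(bool).sum() * voxel_volume_mm3
    breast_volume = breast_mask.astype(bool).sum() * voxel_volume_mm3
    return fgt_volume / breast_volume

# Hypothetical predicted masks (slices x rows x columns).
breast = np.zeros((64, 256, 256), dtype=bool)
breast[10:50, 40:210, 40:210] = True
fgt = np.zeros_like(breast)
fgt[20:40, 80:160, 80:160] = True
print(f"density = {breast_density(fgt, breast):.2f}")
```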
Affiliations
- All authors: Department of Radiology, Duke University Medical Center, Box 2731, Durham, NC 27710, USA
8. Sadeghi Pour E, Esmaeili M, Romoozi M. Employing Atrous Pyramid Convolutional Deep Learning Approach for Detection to Diagnose Breast Cancer Tumors. Computational Intelligence and Neuroscience 2023; 2023:7201479. [PMID: 38025486] [PMCID: PMC10663704] [DOI: 10.1155/2023/7201479]
Abstract
Breast cancer is among the most common diseases and one of the most common causes of death in the female population worldwide. Early identification of breast cancer improves survival, and radiologists can make more accurate diagnoses if a computerized system for detecting breast cancer is available. Computer-aided detection techniques have the potential to help medical professionals determine the specific location of breast tumors and manage this disease more rapidly and accurately. The MIAS dataset was used in this study. The aim of this study was to evaluate noise reduction for mammographic images and to handle salt-and-pepper, Gaussian, and Poisson noise so that precise mass detection can be performed. The study provides a noise-reduction method known as quantum wavelet transform (QWT) filtering and an image morphology operator for precise mass segmentation in mammographic images, using an Atrous pyramid convolutional neural network as the deep learning model for classification of mammographic images. The hybrid methodology, dubbed QWT-APCNN, is compared with earlier methods in terms of peak signal-to-noise ratio (PSNR) and mean square error (MSE) for noise reduction, and in terms of detection accuracy for mass-area recognition. Compared with state-of-the-art approaches, the proposed method performed better at noise reduction and segmentation according to different evaluation criteria, achieving an accuracy of 98.57%, a sensitivity of 92%, a specificity of 88%, a DSS of 90%, and an area under the ROC curve (AUC) of 88.77.
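
PSNR and MSE are the criteria used to compare the denoising performance. A minimal sketch of both metrics for 8-bit images, following their standard definitions rather than the paper's code:

```python
import numpy as np

def mse(reference: np.ndarray, denoised: np.ndarray) -> float:
    """Mean squared error between a reference image and its denoised version."""
    diff = reference.astype(np.float64) - denoised.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, denoised: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for images with the given dynamic range."""
    error = mse(reference, denoised)
    return float("inf") if error == 0 else 10.0 * np.log10(max_value ** 2 / error)
```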
Affiliations
- All authors: Department of Electrical and Computer Engineering, Kashan Branch, Islamic Azad University, Kashan 8715998151, Iran
9. Nowakowska S, Borkowski K, Ruppert CM, Landsmann A, Marcon M, Berger N, Boss A, Ciritsis A, Rossi C. Generalizable attention U-Net for segmentation of fibroglandular tissue and background parenchymal enhancement in breast DCE-MRI. Insights Imaging 2023; 14:185. [PMID: 37932462] [PMCID: PMC10628070] [DOI: 10.1186/s13244-023-01531-5]
Abstract
OBJECTIVES: Development of automated segmentation models enabling standardized volumetric quantification of fibroglandular tissue (FGT) from native volumes and background parenchymal enhancement (BPE) from subtraction volumes of dynamic contrast-enhanced breast MRI. Subsequent assessment of the developed models in the context of FGT and BPE Breast Imaging Reporting and Data System (BI-RADS)-compliant classification.
METHODS: For the training and validation of attention U-Net models, data coming from a single 3.0-T scanner was used. For testing, additional data from a 1.5-T scanner and data acquired in a different institution with a 3.0-T scanner were utilized. The developed models were used to quantify the amount of FGT and BPE in 80 DCE-MRI examinations, and a correlation between these volumetric measures and the classes assigned by radiologists was performed.
RESULTS: To assess the model performance using application-relevant metrics, the correlation between the volumes of breast, FGT, and BPE calculated from ground truth masks and predicted masks was checked. Pearson correlation coefficients ranging from 0.963 ± 0.004 to 0.999 ± 0.001 were achieved. The Spearman correlation coefficient for the quantitative and qualitative assessment, i.e., classification by radiologist, of FGT amounted to 0.70 (p < 0.0001), whereas BPE amounted to 0.37 (p = 0.0006).
CONCLUSIONS: Generalizable algorithms for FGT and BPE segmentation were developed and tested. Our results suggest that when assessing FGT, it is sufficient to use volumetric measures alone. However, for the evaluation of BPE, additional models considering voxels' intensity distribution and morphology are required.
CRITICAL RELEVANCE STATEMENT: A standardized assessment of FGT density can rely on volumetric measures, whereas in the case of BPE, the volumetric measures constitute, along with voxels' intensity distribution and morphology, an important factor.
KEY POINTS:
- Our work contributes to the standardization of FGT and BPE assessment.
- Attention U-Net can reliably segment intricately shaped FGT and BPE structures.
- The developed models were robust to domain shift.
Affiliations
- Sylwia Nowakowska: Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
- Carlotta M Ruppert: Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
- Anna Landsmann: Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
- Magda Marcon: Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
- Nicole Berger: Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091 Zurich, Switzerland; present address: Institut Radiologie, Spital Lachen, Oberdorfstrasse 41, 8853 Lachen, Switzerland
- Andreas Boss: Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091 Zurich, Switzerland; present address: GZO AG Spital Wetzikon, Spitalstrasse 66, 8620 Wetzikon, Switzerland
- Alexander Ciritsis: Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091 Zurich, Switzerland; b-rayZ AG, Wagistrasse 21, 8952 Schlieren, Switzerland
- Cristina Rossi: Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, Rämistrasse 100, 8091 Zurich, Switzerland; b-rayZ AG, Wagistrasse 21, 8952 Schlieren, Switzerland
10. Shim S, Unkelbach J, Landsmann A, Boss A. Quantitative Study on the Breast Density and the Volume of the Mammary Gland According to the Patient's Age and Breast Quadrant. Diagnostics (Basel) 2023; 13:3343. [PMID: 37958239] [PMCID: PMC10648521] [DOI: 10.3390/diagnostics13213343]
Abstract
OBJECTIVES: Breast density is considered an independent risk factor for the development of breast cancer. This study aimed to quantitatively assess the percent breast density (PBD) and the mammary gland volume (MGV) according to the patient's age and breast quadrant. We propose a regression model to estimate PBD and MGV as a function of the patient's age.
METHODS: The breast composition in 1027 spiral breast CT (BCT) datasets without soft tissue masses, calcifications, or implants from 517 women (57 ± 8 years) was segmented. The breast tissue volume (BTV), MGV, and PBD of the breasts were measured in the entire breast and in each of the four quadrants. The three breast composition features were analyzed in seven age groups, from 40 to 74 years in 5-year intervals. A logarithmic model was fitted to the BTV, and a multiplicative inverse model to the MGV and PBD as a function of age was established using a least-squares method.
RESULTS: The BTV increased from 545 ± 345 to 676 ± 412 cm3, and the MGV and PBD decreased from 111 ± 164 to 57 ± 43 cm3 and from 21 ± 21 to 11 ± 9%, respectively, from the youngest to the oldest group (p < 0.05). The average PBD over all ages was 14 ± 13%. The regression models could predict the BTV, MGV, and PBD based on the patient's age with residual standard errors of 386 cm3, 67 cm3, and 13%, respectively. The reduction in MGV and PBD in each quadrant followed that in the entire breast.
CONCLUSIONS: The PBD and MGV computed from BCT examinations provide important information for breast cancer risk assessment in women. The study quantified the mammary gland reduction and density decrease over the entire breast and established mathematical models to estimate the breast composition features (BTV, MGV, and PBD) as a function of the patient's age.
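
The age regressions described above can be fitted with ordinary least squares. A minimal sketch assuming a logarithmic form for BTV and a multiplicative-inverse form for PBD (the paper's exact parameterization may differ), with hypothetical data points loosely based on the reported group means:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_model(age, a, b):
    """Assumed logarithmic form: volume = a + b * ln(age)."""
    return a + b * np.log(age)

def inverse_model(age, a, b):
    """Assumed multiplicative-inverse form: density = a + b / age."""
    return a + b / age

# Hypothetical per-age-group values (40-74 years in 5-year steps).
age = np.array([42.0, 47.0, 52.0, 57.0, 62.0, 67.0, 72.0])
btv = np.array([545.0, 570.0, 600.0, 625.0, 650.0, 665.0, 676.0])  # cm^3
pbd = np.array([21.0, 18.0, 16.0, 14.0, 13.0, 12.0, 11.0])         # %

btv_params, _ = curve_fit(log_model, age, btv)
pbd_params, _ = curve_fit(inverse_model, age, pbd)
print("BTV fit:", btv_params, "PBD fit:", pbd_params)
```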
Affiliations
- Sojin Shim: Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Raemistrasse 100, 8091 Zurich, Switzerland
- Jan Unkelbach: Department of Radiation Oncology, University Hospital Zurich, 8091 Zurich, Switzerland
- Anna Landsmann: Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Raemistrasse 100, 8091 Zurich, Switzerland
- Andreas Boss: Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Raemistrasse 100, 8091 Zurich, Switzerland
11. Xu Z, Rauch DE, Mohamed RM, Pashapoor S, Zhou Z, Panthi B, Son JB, Hwang KP, Musall BC, Adrada BE, Candelaria RP, Leung JWT, Le-Petross HTC, Lane DL, Perez F, White J, Clayborn A, Reed B, Chen H, Sun J, Wei P, Thompson A, Korkut A, Huo L, Hunt KK, Litton JK, Valero V, Tripathy D, Yang W, Yam C, Ma J. Deep Learning for Fully Automatic Tumor Segmentation on Serially Acquired Dynamic Contrast-Enhanced MRI Images of Triple-Negative Breast Cancer. Cancers (Basel) 2023; 15:4829. [PMID: 37835523] [PMCID: PMC10571741] [DOI: 10.3390/cancers15194829]
Abstract
Accurate tumor segmentation is required for quantitative image analyses, which are increasingly used for evaluation of tumors. We developed a fully automated and high-performance segmentation model of triple-negative breast cancer using a self-configurable deep learning framework and a large set of dynamic contrast-enhanced MRI images acquired serially over the patients' treatment course. Among all models, the top-performing one that was trained with the images across different time points of a treatment course yielded a Dice similarity coefficient of 93% and a sensitivity of 96% on baseline images. The top-performing model also produced accurate tumor size measurements, which is valuable for practical clinical applications.
Affiliations
- Zhan Xu, David E. Rauch, Zijian Zhou, Bikash Panthi, Jong Bum Son, Ken-Pin Hwang, Benjamin C. Musall, Jingfei Ma: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rania M. Mohamed, Sanaz Pashapoor, Beatriz E. Adrada, Rosalind P. Candelaria, Jessica W. T. Leung, Huong T. C. Le-Petross, Deanna L. Lane, Frances Perez, Wei Yang: Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jason White, Alyson Clayborn, Jennifer K. Litton, Vicente Valero, Debu Tripathy, Clinton Yam: Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Brandy Reed: Department of Clinical Research Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Huiqin Chen, Jia Sun, Peng Wei: Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Alastair Thompson: Section of Breast Surgery, Baylor College of Medicine, Houston, TX 77030, USA
- Anil Korkut: Department of Bioinformatics & Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Lei Huo: Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Kelly K. Hunt: Department of Breast Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
12. Bhowmik A, Monga N, Belen K, Varela K, Sevilimedu V, Thakur SB, Martinez DF, Sutton EJ, Pinker K, Eskreis-Winkler S. Automated Triage of Screening Breast MRI Examinations in High-Risk Women Using an Ensemble Deep Learning Model. Invest Radiol 2023; 58:710-719. [PMID: 37058323] [DOI: 10.1097/rli.0000000000000976]
Abstract
OBJECTIVES: The aim of the study is to develop and evaluate the performance of a deep learning (DL) model to triage breast magnetic resonance imaging (MRI) findings in high-risk patients without missing any cancers.
MATERIALS AND METHODS: In this retrospective study, 16,535 consecutive contrast-enhanced MRIs performed in 8354 women from January 2013 to January 2019 were collected. From 3 New York imaging sites, 14,768 MRIs were used for the training and validation data set, and 80 randomly selected MRIs were used for a reader study test data set. From 3 New Jersey imaging sites, 1687 MRIs (1441 screening MRIs and 246 MRIs performed in recently diagnosed breast cancer patients) were used for an external validation data set. The DL model was trained to classify maximum intensity projection images as "extremely low suspicion" or "possibly suspicious." Deep learning model evaluation (workload reduction, sensitivity, specificity) was performed on the external validation data set, using a histopathology reference standard. A reader study was performed to compare DL model performance to fellowship-trained breast imaging radiologists.
RESULTS: In the external validation data set, the DL model triaged 159/1441 of screening MRIs as "extremely low suspicion" without missing a single cancer, yielding a workload reduction of 11%, a specificity of 11.5%, and a sensitivity of 100%. The model correctly triaged 246/246 (100% sensitivity) of MRIs in recently diagnosed patients as "possibly suspicious." In the reader study, 2 readers classified MRIs with a specificity of 93.62% and 91.49%, respectively, and missed 0 and 1 cancer, respectively. On the other hand, the DL model classified MRIs with a specificity of 19.15% and missed 0 cancers, highlighting its potential use not as an independent reader but as a triage tool.
CONCLUSIONS: Our automated DL model triages a subset of screening breast MRIs as "extremely low suspicion" without misclassifying any cancer cases. This tool may be used to reduce workload in standalone mode, to shunt low suspicion cases to designated radiologists or to the end of the workday, or to serve as base model for other downstream AI tools.
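
The triage figures reported here (workload reduction, sensitivity, specificity) follow directly from the per-examination triage decisions and the histopathology reference. A minimal sketch with assumed boolean arrays (not the study's code):

```python
import numpy as np

def triage_summary(is_low_suspicion: np.ndarray, has_cancer: np.ndarray) -> dict:
    """Summary statistics for a binary triage rule.

    is_low_suspicion: True where the model labels an exam "extremely low suspicion".
    has_cancer:       True where histopathology confirmed a cancer.
    """
    flagged = ~is_low_suspicion                          # exams kept for radiologist review
    sensitivity = flagged[has_cancer].mean()             # cancers kept in the review stream
    specificity = is_low_suspicion[~has_cancer].mean()   # benign exams triaged away
    workload_reduction = is_low_suspicion.mean()         # fraction of exams removed
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "workload_reduction": workload_reduction}
```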
13. Müller-Franzes G, Müller-Franzes F, Huck L, Raaff V, Kemmer E, Khader F, Arasteh ST, Lemainque T, Kather JN, Nebelung S, Kuhl C, Truhn D. Fibroglandular tissue segmentation in breast MRI using vision transformers: a multi-institutional evaluation. Sci Rep 2023; 13:14207. [PMID: 37648728] [PMCID: PMC10468506] [DOI: 10.1038/s41598-023-41331-x]
Abstract
Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) in multi-institutional MRI data, and compared its performance to the well established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal testset (0.909 ± 0.069 versus 0.916 ± 0.067, P < 0.001) and on the external testset (0.824 ± 0.144 versus 0.864 ± 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (= worse) for nnUNet than for TraBS on the internal (0.657 ± 2.856 versus 0.548 ± 2.195, P = 0.001) and on the external testset (0.727 ± 0.620 versus 0.584 ± 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolutional-based models like nnUNet. These findings might help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
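
The average symmetric surface distance used alongside the Dice score can be computed from the mask boundaries with distance transforms. A minimal sketch following the standard definition (not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask: the mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def assd(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Average symmetric surface distance between two binary masks (in mm)."""
    pred_surf = surface(pred.astype(bool))
    truth_surf = surface(truth.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_truth = ndimage.distance_transform_edt(~truth_surf, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    distances = np.concatenate([dist_to_truth[pred_surf], dist_to_pred[truth_surf]])
    return float(distances.mean())
```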
Affiliations
- Gustav Müller-Franzes, Fritz Müller-Franzes, Luisa Huck, Vanessa Raaff, Eva Kemmer, Firas Khader, Soroosh Tayebi Arasteh, Teresa Lemainque, Sven Nebelung, Christiane Kuhl, Daniel Truhn: Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Jakob Nikolas Kather: Else Kroener Fresenius Center for Digital Health, Technical University, Dresden, Germany; Department of Medicine III, University Hospital RWTH, Aachen, Germany
14. Kuang S, Woodruff HC, Granzier R, van Nijnatten TJA, Lobbes MBI, Smidt ML, Lambin P, Mehrkanoon S. MSCDA: Multi-level semantic-guided contrast improves unsupervised domain adaptation for breast MRI segmentation in small datasets. Neural Netw 2023; 165:119-134. [PMID: 37285729] [DOI: 10.1016/j.neunet.2023.05.014]
Abstract
Deep learning (DL) applied to breast tissue segmentation in magnetic resonance imaging (MRI) has received increased attention in the last decade, however, the domain shift which arises from different vendors, acquisition protocols, and biological heterogeneity, remains an important but challenging obstacle on the path towards clinical implementation. In this paper, we propose a novel Multi-level Semantic-guided Contrastive Domain Adaptation (MSCDA) framework to address this issue in an unsupervised manner. Our approach incorporates self-training with contrastive learning to align feature representations between domains. In particular, we extend the contrastive loss by incorporating pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid contrasts to better exploit the underlying semantic information of the image at different levels. To resolve the data imbalance problem, we utilize a category-wise cross-domain sampling strategy to sample anchors from target images and build a hybrid memory bank to store samples from source images. We have validated MSCDA with a challenging task of cross-domain breast MRI segmentation between datasets of healthy volunteers and invasive breast cancer patients. Extensive experiments show that MSCDA effectively improves the model's feature alignment capabilities between domains, outperforming state-of-the-art methods. Furthermore, the framework is shown to be label-efficient, achieving good performance with a smaller source dataset. The code is publicly available at https://github.com/ShengKuangCN/MSCDA.
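
The centroid-to-centroid contrast mentioned above can be illustrated, in a much-simplified form, as an InfoNCE-style loss over per-class feature centroids of the two domains. The sketch below is an assumption-laden illustration of the idea only, not the MSCDA implementation (which is available at the linked repository):

```python
import torch
import torch.nn.functional as F

def class_centroids(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """L2-normalized mean feature vector per class. features: (N, C); labels: (N,)."""
    centroids = torch.zeros(num_classes, features.shape[1], device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centroids[c] = features[mask].mean(dim=0)
    return F.normalize(centroids, dim=1)

def centroid_contrast_loss(src_centroids: torch.Tensor, tgt_centroids: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Pull same-class centroids of source and target together, push other classes apart."""
    logits = src_centroids @ tgt_centroids.t() / temperature  # (K, K) cosine similarities
    targets = torch.arange(src_centroids.shape[0], device=logits.device)
    return F.cross_entropy(logits, targets)
```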
Affiliations
- Sheng Kuang: The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Henry C Woodruff: The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Renee Granzier: Department of Surgery, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Thiemo J A van Nijnatten: Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Marc B I Lobbes: Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Medical Imaging, Zuyderland Medical Center, Sittard-Geleen, The Netherlands
- Marjolein L Smidt: Department of Surgery, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Philippe Lambin: The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Siamak Mehrkanoon: Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
15
|
Chaki J. An automatic system for extracting figure-caption pair from medical documents: a six-fold approach. PeerJ Comput Sci 2023; 9:e1452. [PMID: 37547417 PMCID: PMC10403167 DOI: 10.7717/peerj-cs.1452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 06/01/2023] [Indexed: 08/08/2023]
Abstract
Background Figures and captions in medical documentation contain important information. As a result, researchers are becoming more interested in obtaining published medical figures from medical papers and utilizing the captions as a knowledge source. Methods This work introduces a unique and successful six-fold methodology for extracting figure-caption pairs. The A-torus wavelet transform is used to retrieve the first edge from the scanned page. Then, using the maximally stable extremal regions connected component feature, text and graphical contents are isolated from the edge document, and a multi-layer perceptron is used to successfully detect and retrieve figures and captions from medical records. The figure-caption pair is then extracted using the bounding box approach. The files that contain the figures and captions are saved separately and supplied to the end user as the output of any investigation. The proposed approach is evaluated using a self-created database based on pages collected from five open access books: Sergey Makarov, Gregory Noetscher and Aapo Nummenmaa's book "Brain and Human Body Modelling 2021", "Healthcare and Disease Burden in Africa" by Ilha Niohuru, "All-Optical Methods to Study Neuronal Function" by Eirini Papagiakoumou, "RNA, the Epicenter of Genetic Information" by John Mattick and Paulo Amaral and "Illustrated Manual of Pediatric Dermatology" by Susan Bayliss Mallory, Alanna Bree and Peggy Chern. Results Experiments comparing the new method to earlier systems reveal a significant increase in efficiency, demonstrating the suggested technique's robustness.
Collapse
Affiliation(s)
- Jyotismita Chaki
- Department of Computational Intelligence, School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
| |
Collapse
|
16
|
Ham S, Kim M, Lee S, Wang CB, Ko B, Kim N. Improvement of semantic segmentation through transfer learning of multi-class regions with convolutional neural networks on supine and prone breast MRI images. Sci Rep 2023; 13:6877. [PMID: 37106024 PMCID: PMC10140273 DOI: 10.1038/s41598-023-33900-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Accepted: 04/20/2023] [Indexed: 04/29/2023] Open
Abstract
Semantic segmentation of the breast and surrounding tissues in supine and prone breast magnetic resonance imaging (MRI) is required for various kinds of computer-assisted diagnoses for surgical applications. Variability of breast shape between supine and prone poses, along with various MRI artifacts, makes it difficult to obtain robust breast and surrounding tissue segmentation. Therefore, we evaluated semantic segmentation with transfer learning of convolutional neural networks to create robust breast segmentation in supine breast MRI regardless of supine or prone position. A total of 29 patients with T1-weighted contrast-enhanced images were collected at Asan Medical Center, and two types of breast MRI were performed, in the prone position and the supine position. Four classes, including lungs and heart, muscles and bones, parenchyma with cancer, and skin and fat, were manually drawn by an expert. Semantic segmentation on breast MRI scans with supine, prone, transferred from prone to supine, and pooled supine and prone MRI were trained and compared using 2D U-Net, 3D U-Net, 2D nnU-Net, and 3D nnU-Net. The best performance was achieved by 2D models with transfer learning. Our results showed excellent performance and could be used for clinical purposes such as breast registration and computer-aided diagnosis.
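A minimal transfer-learning sketch in the spirit of the prone-to-supine experiment above: pre-train a 2D U-Net on prone slices, then fine-tune the same weights on supine slices with a smaller learning rate. The library choice (segmentation_models_pytorch), encoder, file name, and learning rate are assumptions, not details from the paper.

```python
import torch
import segmentation_models_pytorch as smp

def build_model(num_classes=4):
    # 4 classes as in the paper: lungs/heart, muscles/bones, parenchyma, skin/fat
    return smp.Unet(encoder_name="resnet34", encoder_weights=None,
                    in_channels=1, classes=num_classes)

# 1) Train on prone slices (training loop elided here), then save the weights.
prone_model = build_model()
# ... train prone_model on prone-position slices ...
torch.save(prone_model.state_dict(), "unet_prone.pt")

# 2) Transfer: initialise the supine model from the prone weights and fine-tune
#    with a smaller learning rate so prone-learned features are preserved.
supine_model = build_model()
supine_model.load_state_dict(torch.load("unet_prone.pt"))
optimizer = torch.optim.Adam(supine_model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

def fine_tune_step(images, masks):
    """images: (B, 1, H, W) float tensor; masks: (B, H, W) long tensor."""
    optimizer.zero_grad()
    loss = loss_fn(supine_model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```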
Collapse
Affiliation(s)
- Sungwon Ham
- Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-ro, Danwon-gu, Ansan city, Gyeonggi-do, Republic of Korea
| | - Minjee Kim
- Promedius Inc., 4 Songpa-daero 49-gil, Songpa-gu, Seoul, South Korea
| | - Sangwook Lee
- ANYMEDI Inc., 388-1 Pungnap-dong, Songpa-gu, Seoul, South Korea
| | - Chuan-Bing Wang
- Department of Radiology, First Affiliated Hospital of Nanjing Medical University, 300, Guangzhou Road, Nanjing, Jiangsu, China
| | - BeomSeok Ko
- Department of Breast Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
| | - Namkug Kim
- Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea.
| |
Collapse
|
17
|
Jia X, Li X, Shen T, Zhou L, Yang G, Wang F, Zhu X, Wan M, Li S, Zhang S. Monitoring of thermal lesions in ultrasound using fully convolutional neural networks: A preclinical study. ULTRASONICS 2023; 130:106929. [PMID: 36669371 DOI: 10.1016/j.ultras.2023.106929] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 11/15/2022] [Accepted: 01/12/2023] [Indexed: 06/17/2023]
Abstract
Accurate monitoring of thermal ablation regions is an important guarantee for successful ablation treatment; in current clinical practice, it depends mainly on the subjective judgment of radiologists. This work innovatively applied fully convolutional neural networks (FCNs) for detection and monitoring of thermal ablation regions in ultrasound (US) and comprehensively compared the performance of VGG16-FCN, U-Net, UNet++, Attention U-Net, MultiResUNet, and ResUNet, which have shown outstanding performance in medical image segmentation. The input of the models was US echo envelope data backscattered from the ablated regions. An excised porcine liver ablation dataset and a clinical liver tumor ablation dataset were respectively used to evaluate the prediction ability of the models. With 1000 excised porcine liver ablation samples for training and 200 samples for testing, the UNet++ achieves both the highest Dice score (DSC) of 0.7824 ± 0.1098 and the best Hausdorff distance (HD) of 2.70 ± 1.38 mm. Additionally, considering potential clinical usage, we also tested the model generalizability by training on the excised dataset and testing on the clinical data, in which the highest DSC was obtained by the ResUNet and the best HD by the UNet++. Our comparative study suggests that both UNet++ and ResUNet have relatively outstanding segmentation performance among all compared models, and both are potential candidates for automatic segmentation of thermal ablation regions in US during clinical ablation treatment.
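The two reported metrics can be sketched as follows; this is a generic illustration of the Dice similarity coefficient and a symmetric Hausdorff distance on binary masks, with the pixel-to-millimetre spacing value an assumed placeholder rather than a value from the study.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hausdorff_mm(pred, gt, spacing_mm=0.1):
    """Symmetric Hausdorff distance between the two mask point sets,
    converted to millimetres with an assumed isotropic pixel spacing."""
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    hd_px = max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
    return hd_px * spacing_mm

# Example usage with toy masks:
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
gt = np.zeros((64, 64), dtype=np.uint8); gt[22:42, 18:38] = 1
print(dice(pred, gt), hausdorff_mm(pred, gt))
```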
Collapse
Affiliation(s)
- Xin Jia
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China.
| | - Xiejing Li
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China.
| | - Ting Shen
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China.
| | - Ling Zhou
- Department of Ultrasound, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Zhejiang 310016, China.
| | - Guang Yang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China.
| | - Fan Wang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China.
| | - Xingguang Zhu
- Department of Medical Engineering, Beijing Huilongguan Hospital, Beijing 100096, China.
| | - Mingxi Wan
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China.
| | - Shiyan Li
- Department of Ultrasound, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Zhejiang 310016, China.
| | - Siyuan Zhang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China; State Key Laboratory of Acoustics, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China; Sichuan Digital Economy Industry Development Research Institute, Sichuan 610000, China.
| |
Collapse
|
18
|
Machine learning on MRI radiomic features: identification of molecular subtype alteration in breast cancer after neoadjuvant therapy. Eur Radiol 2023; 33:2965-2974. [PMID: 36418622 DOI: 10.1007/s00330-022-09264-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 09/03/2022] [Accepted: 10/22/2022] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Recent studies have revealed the change of molecular subtypes in breast cancer (BC) after neoadjuvant therapy (NAT). This study aims to construct a non-invasive model for predicting molecular subtype alteration in breast cancer after NAT. METHODS Eighty-two estrogen receptor (ER)-negative/human epidermal growth factor receptor 2 (HER2)-negative or ER-low-positive/HER2-negative breast cancer patients who underwent NAT and completed baseline MRI were retrospectively recruited between July 2010 and November 2020. Subtype alteration was observed in 21 cases after NAT. A 2D-DenseUNet machine-learning model was built to perform automatic segmentation of breast cancer. 851 radiomic features were extracted from each MRI sequence (T2-weighted imaging, ADC, DCE, and contrast-enhanced T1-weighted imaging), both in the manual and auto-segmentation masks. All samples were divided into a training set (n = 66) and a test set (n = 16). An XGBoost model with 5-fold cross-validation was performed to predict molecular subtype alterations in breast cancer patients after NAT. The predictive ability of these models was subsequently evaluated by the AUC of the ROC curve, sensitivity, and specificity. RESULTS A model consisting of three radiomics features from the manual segmentation of multi-sequence MRI achieved favorable predictive efficacy in identifying molecular subtype alteration in BC after NAT (cross-validation set: AUC = 0.908, independent test set: AUC = 0.864); whereas an automatic segmentation approach of BC lesions on the DCE sequence produced good segmentation results (Dice similarity coefficient = 0.720). CONCLUSIONS A machine learning model based on baseline MRI is proven useful for predicting molecular subtype alterations in breast cancer after NAT. KEY POINTS • Machine learning models using MRI-based radiomics signature have the ability to predict molecular subtype alterations in breast cancer after neoadjuvant therapy, which subsequently affect treatment protocols. • The application of deep learning in the automatic segmentation of breast cancer lesions from MRI images shows the potential to replace manual segmentation.
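A hedged sketch of the prediction stage described above: an XGBoost classifier evaluated with 5-fold cross-validation over a radiomic feature table. The random feature matrix, label vector, and hyperparameters are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(82, 3))        # e.g. three selected radiomic features per patient
y = rng.integers(0, 2, size=82)     # 1 = molecular subtype alteration after NAT

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print("cross-validated AUC: %.3f +/- %.3f" % (auc.mean(), auc.std()))
```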
Collapse
|
19
|
Zhang J, Liu Y, Chen L, Ma S, Zhong Y, He Z, Li C, Xiao Z, Zheng Y, Lv F. DARU‐Net: A dual attention residual U‐Net for uterine fibroids segmentation on MRI. J Appl Clin Med Phys 2023:e13937. [PMID: 36992637 DOI: 10.1002/acm2.13937] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 12/12/2022] [Accepted: 02/01/2023] [Indexed: 03/31/2023] Open
Abstract
PURPOSE Uterine fibroids are the most common benign tumors of the female reproductive organs. To guide treatment, it is crucial to detect the location, shape, and size of the tumor. This study proposed a deep learning approach based on attention mechanisms to segment uterine fibroids automatically on preoperative Magnetic Resonance (MR) images. METHODS The proposed method is based on the U-Net architecture and integrates two attention mechanisms: channel attention from squeeze-and-excitation (SE) blocks with residual connections, and spatial attention from a pyramid pooling module (PPM). We performed an ablation study to verify the contribution of these two attention modules and compared DARU-Net with other deep learning methods. All experiments were performed on a clinical dataset consisting of 150 cases collected from our hospital. Among them, 120 cases were used as the training set and 30 cases as the test set. After preprocessing and data augmentation, we trained the network and tested it on the test dataset. We evaluated segmentation performance through the Dice similarity coefficient (DSC), precision, recall, and Jaccard index (JI). RESULTS The average DSC, precision, recall, and JI of DARU-Net reached 0.8066 ± 0.0956, 0.8233 ± 0.1255, 0.7913 ± 0.1304, and 0.6743 ± 0.1317, respectively. Compared with U-Net and other deep learning methods, DARU-Net was more accurate and stable. CONCLUSION This work proposed an optimized U-Net with channel and spatial attention mechanisms to segment uterine fibroids on preoperative MR images. Results showed that DARU-Net was able to accurately segment uterine fibroids from MR images.
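For illustration, a squeeze-and-excitation channel-attention block with a residual connection, one of the two attention mechanisms named in this abstract, might look like the PyTorch sketch below; the layer layout and reduction ratio are assumptions rather than the DARU-Net definition.

```python
import torch
import torch.nn as nn

class SEResidualBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # squeeze: global pooling
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.conv(x)
        y = y * self.se(y)           # excitation: per-channel reweighting
        return torch.relu(x + y)     # residual connection

feat = torch.randn(2, 64, 32, 32)
print(SEResidualBlock(64)(feat).shape)    # torch.Size([2, 64, 32, 32])
```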
Collapse
Affiliation(s)
- Jian Zhang
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, China
| | - Yang Liu
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Liping Chen
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Si Ma
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, China
| | - Yuqing Zhong
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, China
| | - Zhimin He
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, China
| | - Chengwei Li
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, China
| | - Zhibo Xiao
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Yineng Zheng
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
| | - Fajin Lv
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing, China
- Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Institute of Medical Data, Chongqing Medical University, Chongqing, China
| |
Collapse
|
20
|
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377 DOI: 10.1016/j.bbcan.2023.188864] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Revised: 01/05/2023] [Accepted: 01/17/2023] [Indexed: 02/25/2023]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Collapse
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
| | - Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
| | - Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
| | - Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China.
| | - Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China.
| |
Collapse
|
21
|
Xiao Z, Zhang X, Liu Y, Geng L, Wu J, Wang W, Zhang F. RNN-combined graph convolutional network with multi-feature fusion for tuberculosis cavity segmentation. SIGNAL, IMAGE AND VIDEO PROCESSING 2023; 17:2297-2303. [PMID: 36624826 PMCID: PMC9813881 DOI: 10.1007/s11760-022-02446-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 11/16/2022] [Accepted: 12/10/2022] [Indexed: 05/20/2023]
Abstract
Tuberculosis is a common infectious disease worldwide. Tuberculosis cavities are common and important imaging signs of tuberculosis. Accurate segmentation of tuberculosis cavities has practical significance for indicating the activity of lesions and guiding clinical treatment. However, this task faces challenges such as blurred boundaries, irregular shapes, varying lesion locations and sizes, and structures on computed tomography (CT) that resemble other lung diseases or tissues. To overcome these problems, we propose a novel RNN-combined graph convolutional network (R2GCN) method, which integrates bidirectional recurrent network (BRN) and graph convolution network (GCN) modules. First, feature extraction is performed on the input image by VGG-16 or ResNet-50 to obtain the feature map. The feature map is then used as the input of the two modules. On the one hand, we adopt the BRN to retrieve contextual information from the feature map. On the other hand, we take the vector at each location in the feature map as an input node and utilize the GCN to extract node topology information. Finally, the two types of features obtained are fused. Our strategy can not only make full use of node correlations and differences, but also obtain more precise segmentation boundaries. Extensive experiments on CT images of patients with cavitary tuberculosis show that our proposed method achieves better segmentation accuracy than the compared segmentation methods. Our method can be used for the diagnosis of tuberculosis cavities and the evaluation of their treatment.
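The idea of treating feature-map locations as graph nodes can be sketched with a plain graph convolution over a 4-connected grid adjacency, as below; this is a simplified stand-in for the R2GCN module, and the adjacency construction and feature sizes are assumptions.

```python
import torch
import torch.nn as nn

def grid_adjacency(h, w):
    """Normalized adjacency (with self-loops) for an h x w 4-connected grid."""
    n = h * w
    a = torch.eye(n)
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w: a[i, i + 1] = a[i + 1, i] = 1.0
            if r + 1 < h: a[i, i + w] = a[i + w, i] = 1.0
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):        # x: (N, in_dim), a_hat: (N, N)
        return torch.relu(a_hat @ self.lin(x))

# Example: a (C=32, H=8, W=8) feature map becomes 64 nodes with 32-dim features.
feat = torch.randn(32, 8, 8)
nodes = feat.reshape(32, -1).t()        # (64, 32) one node per spatial location
a_hat = grid_adjacency(8, 8)
out = GraphConv(32, 16)(nodes, a_hat)   # (64, 16) topology-aware node features
print(out.shape)
```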
Collapse
Affiliation(s)
- Zhitao Xiao
- School of life Sciences, Tiangong University, Tianjin, 300387 China
- Tianjin Key Laboratory of Optoelectronic Detection Technology and Systems, Tianjin, 300387 China
| | - Xiaomeng Zhang
- School of Artificial Intelligence, Tiangong University, Tianjin, 300387 China
| | - Yanbei Liu
- School of life Sciences, Tiangong University, Tianjin, 300387 China
| | - Lei Geng
- School of life Sciences, Tiangong University, Tianjin, 300387 China
| | - Jun Wu
- School of Electronic and Information Engineering, Tiangong University, Tianjin, 300387 China
| | - Wen Wang
- School of life Sciences, Tiangong University, Tianjin, 300387 China
| | - Fang Zhang
- School of life Sciences, Tiangong University, Tianjin, 300387 China
| |
Collapse
|
22
|
Gao Y, Fu X, Chen Y, Guo C, Wu J. Post-pandemic healthcare for COVID-19 vaccine: Tissue-aware diagnosis of cervical lymphadenopathy via multi-modal ultrasound semantic segmentation. Appl Soft Comput 2023; 133:109947. [PMID: 36570119 PMCID: PMC9762098 DOI: 10.1016/j.asoc.2022.109947] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 11/30/2022] [Accepted: 12/15/2022] [Indexed: 12/24/2022]
Abstract
With the widespread deployment of COVID-19 vaccines all around the world, billions of people have benefited from vaccination and thereby avoided infection. However, a large number of clinical cases have revealed diverse side effects of COVID-19 vaccines, among which cervical lymphadenopathy is one of the most frequent local reactions. Therefore, rapid detection of cervical lymph nodes (LNs) is essential in terms of vaccine recipients' healthcare and avoidance of misdiagnosis in the post-pandemic era. This paper focuses on a novel deep learning-based framework for the rapid diagnosis of cervical lymphadenopathy in COVID-19 vaccine recipients. Existing deep learning-based computer-aided diagnosis (CAD) methods for cervical LN enlargement mostly depend only on single-modality images, e.g., grayscale ultrasound (US), color Doppler ultrasound, and CT, while failing to effectively integrate information from multi-source medical images. Meanwhile, both the surrounding tissue objects of the cervical LNs and different regions inside the cervical LNs may imply valuable diagnostic knowledge which is pending for mining. In this paper, we propose a Tissue-Aware Cervical Lymph Node Diagnosis method (TACLND) via multi-modal ultrasound semantic segmentation. The method effectively integrates grayscale and color Doppler US images and realizes pixel-level localization of different tissue objects, i.e., lymph, muscle, and blood vessels. With inter-tissue and intra-tissue attention mechanisms applied, our proposed method can enhance the implicit tissue-level diagnostic knowledge in both spatial and channel dimensions, and realize diagnosis of cervical LNs as normal, benign, or malignant. Extensive experiments conducted on our collected cervical LN US dataset demonstrate the effectiveness of our method on both tissue detection and cervical lymphadenopathy diagnosis. Therefore, our proposed framework can guarantee efficient diagnosis of vaccine recipients' cervical LNs and assist doctors in discriminating between COVID-related reactive lymphadenopathy and metastatic lymphadenopathy.
Collapse
Affiliation(s)
- Yue Gao
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
| | - Xiangling Fu
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China; Corresponding author at: School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China
| | - Yuepeng Chen
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
| | - Chenyi Guo
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
| | - Ji Wu
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
| |
Collapse
|
23
|
Nepal P, Bagga B, Feng L, Chandarana H. Respiratory Motion Management in Abdominal MRI: Radiology In Training. Radiology 2023; 306:47-53. [PMID: 35997609 PMCID: PMC9792710 DOI: 10.1148/radiol.220448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
A 96-year-old woman had a suboptimal evaluation of liver observations at abdominal MRI due to significant respiratory motion. State-of-the-art strategies to minimize respiratory motion during clinical abdominal MRI are discussed.
Collapse
Affiliation(s)
- Pankaj Nepal
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114 (P.N.); Department of Radiology, New York University School of Medicine, New York, NY (B.B., H.C.); and Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY (L.F.)
| | - Barun Bagga
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114 (P.N.); Department of Radiology, New York University School of Medicine, New York, NY (B.B., H.C.); and Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY (L.F.)
| | - Li Feng
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114 (P.N.); Department of Radiology, New York University School of Medicine, New York, NY (B.B., H.C.); and Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY (L.F.)
| | - Hersh Chandarana
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114 (P.N.); Department of Radiology, New York University School of Medicine, New York, NY (B.B., H.C.); and Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY (L.F.)
| |
Collapse
|
24
|
SkiNet: A deep learning framework for skin lesion diagnosis with uncertainty estimation and explainability. PLoS One 2022; 17:e0276836. [PMID: 36315487 PMCID: PMC9621459 DOI: 10.1371/journal.pone.0276836] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 10/14/2022] [Indexed: 11/05/2022] Open
Abstract
Skin cancer is considered to be the most common human malignancy. Around 5 million new cases of skin cancer are recorded in the United States annually. Early identification and evaluation of skin lesions are of great clinical significance, but the disproportionate dermatologist-patient ratio poses a significant problem in most developing nations. Therefore, a novel deep architecture, named SkiNet, is proposed to provide a faster screening solution and assistance to newly trained physicians in the process of clinical diagnosis of skin cancer. The main motive behind SkiNet's design and development is to provide a white-box solution, addressing the critical problem of trust and interpretability, which is crucial for the wider adoption of computer-aided diagnosis systems by medical practitioners. The proposed SkiNet is a two-stage pipeline wherein lesion segmentation is followed by lesion classification. Monte Carlo dropout and test-time augmentation techniques have been employed in the proposed method to estimate epistemic and aleatoric uncertainty. A novel segmentation model named Bayesian MultiResUNet is used to estimate the uncertainty on the predicted segmentation map. Saliency-based methods like XRAI, Grad-CAM and Guided Backprop are explored to provide post-hoc explanations of the deep learning models. The ISIC-2018 dataset is used to perform the experimentation and ablation studies. The results establish the robustness of the proposed model on the traditional benchmarks while addressing the black-box nature of such models to alleviate the skepticism of medical practitioners by incorporating transparency and confidence into the model's predictions.
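A brief sketch of the Monte Carlo dropout idea mentioned above: dropout is kept active at test time, several stochastic forward passes are run, and the per-pixel mean and variance are reported. The toy network and the number of passes are assumptions, not SkiNet's architecture.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Dropout2d(0.5),                        # stays stochastic during MC sampling
            nn.Conv2d(16, 1, 1))

    def forward(self, x):
        return torch.sigmoid(self.body(x))

def mc_dropout_predict(model, image, passes=20):
    model.train()          # keep dropout layers active at inference time
    with torch.no_grad():
        samples = torch.stack([model(image) for _ in range(passes)])
    return samples.mean(dim=0), samples.var(dim=0)    # prediction, epistemic uncertainty map

img = torch.randn(1, 3, 128, 128)
mean_mask, uncertainty = mc_dropout_predict(TinySegNet(), img)
print(mean_mask.shape, uncertainty.shape)
```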
Collapse
|
25
|
Ying J, Cattell R, Zhao T, Lei L, Jiang Z, Hussain SM, Gao Y, Chow HHS, Stopeck AT, Thompson PA, Huang C. Two fully automated data-driven 3D whole-breast segmentation strategies in MRI for MR-based breast density using image registration and U-Net with a focus on reproducibility. Vis Comput Ind Biomed Art 2022; 5:25. [PMID: 36219359 PMCID: PMC9554077 DOI: 10.1186/s42492-022-00121-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Accepted: 09/21/2022] [Indexed: 11/07/2022] Open
Abstract
Presence of higher breast density (BD) and persistence over time are risk factors for breast cancer. A quantitatively accurate and highly reproducible BD measure that relies on precise and reproducible whole-breast segmentation is desirable. In this study, we aimed to develop a highly reproducible and accurate whole-breast segmentation algorithm for the generation of reproducible BD measures. Three datasets of volunteers from two clinical trials were included. Breast MR images were acquired on 3 T Siemens Biograph mMR, Prisma, and Skyra using 3D Cartesian six-echo GRE sequences with a fat-water separation technique. Two whole-breast segmentation strategies, utilizing image registration and 3D U-Net, were developed. Manual segmentation was performed. A task-based analysis was performed: a previously developed MR-based BD measure, MagDensity, was calculated and assessed using automated and manual segmentation. The mean squared error (MSE) and intraclass correlation coefficient (ICC) between MagDensity were evaluated using the manual segmentation as a reference. The test-retest reproducibility of MagDensity derived from different breast segmentation methods was assessed using the difference between the test and retest measures (Δ2-1), MSE, and ICC. The results showed that MagDensity derived by the registration and deep learning segmentation methods exhibited high concordance with manual segmentation, with ICCs of 0.986 (95%CI: 0.974-0.993) and 0.983 (95%CI: 0.961-0.992), respectively. For test-retest analysis, MagDensity derived using the registration algorithm achieved the smallest MSE of 0.370 and highest ICC of 0.993 (95%CI: 0.982-0.997) when compared to other segmentation methods. In conclusion, the proposed registration and deep learning whole-breast segmentation methods are accurate and reliable for estimating BD. Both methods outperformed a previously developed algorithm and manual segmentation in the test-retest assessment, with the registration exhibiting superior performance for highly reproducible BD measurements.
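The test-retest quantities reported above (Δ2-1, MSE, and ICC) can be illustrated with a small sketch; the ICC(2,1) formula below is the standard two-way random-effects, absolute-agreement, single-measures form, and the toy MagDensity values are placeholders rather than study data.

```python
import numpy as np

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    y: array of shape (subjects, sessions)."""
    n, k = y.shape
    grand = y.mean()
    row_means, col_means = y.mean(axis=1), y.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)           # between-subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)           # between-sessions
    sse = ((y - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                                 # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
test = rng.uniform(5, 40, size=30)              # MagDensity at test session (%)
retest = test + rng.normal(0, 0.5, size=30)     # retest session with small noise
delta = retest - test
print("mean delta(2-1): %.3f" % delta.mean())
print("MSE: %.3f" % np.mean(delta ** 2))
print("ICC(2,1): %.3f" % icc_2_1(np.column_stack([test, retest])))
```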
Collapse
Affiliation(s)
- Jia Ying
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Renee Cattell
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
- Department of Radiation Oncology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Tianyun Zhao
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Lan Lei
- Department of Medicine, Northside Hospital Gwinnett, Lawrenceville, GA, 30046, USA
- Program of Public Health, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Zhao Jiang
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Shahid M Hussain
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Yi Gao
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China
| | | | - Alison T Stopeck
- Department of Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Patricia A Thompson
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA
- Department of Medicine, Cedar Sinai Cancer, Cedars Sinai Medical Center, Los Angeles, CA, 90048, USA
| | - Chuan Huang
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794, USA.
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, 11794, USA.
- Stony Brook Cancer Center, Stony Brook University, Stony Brook, NY, 11794, USA.
| |
Collapse
|
26
|
Breast MRI Segmentation and Ki-67 High- and Low-Expression Prediction Algorithm Based on Deep Learning. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:1770531. [PMID: 36238476 PMCID: PMC9553330 DOI: 10.1155/2022/1770531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 08/11/2022] [Accepted: 09/08/2022] [Indexed: 11/17/2022]
Abstract
Background and Objective. Breast cancer is a common malignant tumor that seriously threatens the health of women worldwide. The proliferation marker Ki-67 has been utilized to distinguish luminal B from luminal A tumors and is a reliable indicator of more aggressive breast cancer growth. A reliable, non-invasive method of predicting Ki-67 expression before pathological examination would be very beneficial for doctors formulating later treatment plans and would provide more useful treatment options. Methodology. This paper proposes a tumor segmentation and prediction framework based on the combination of an improved attention U-Net and an SVM. The framework first improves on attention U-Net by introducing coefficients for learning multidimensional attention, making the attention mechanism more responsive to the salient regions during segmentation. At the same time, the segmented breast MRI results and corresponding labels are input into an SVM classifier to accurately predict the expression of Ki-67. Results. The DSC, PPV, and sensitivity of our combined model are 0.94, 0.93, and 0.94, respectively, indicating good segmentation performance. Compared with the segmentation frameworks of other papers, our combined model achieves accurate segmentation of breast tumors. Conclusion. Our method can adapt to the variability of breast tumors and segment breast tumors accurately and efficiently. In the future, it can be widely used in clinical practice to help clinicians formulate reasonable diagnosis and treatment plans for breast cancer patients.
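As a hedged illustration of the second stage, simple features computed from a segmented tumour region can be fed to an SVM to predict high versus low Ki-67 expression; the feature choice, toy data, and kernel settings below are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def tumour_features(image, mask):
    """image: 2D MRI slice; mask: binary tumour segmentation of the same slice."""
    vals = image[mask > 0]
    area = float(mask.sum())
    return np.array([area, vals.mean(), vals.std(), vals.max() - vals.min()])

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(60):                                   # toy cohort
    img = rng.normal(100, 20, size=(64, 64))
    m = np.zeros((64, 64)); m[20:20 + rng.integers(5, 25), 20:40] = 1
    X.append(tumour_features(img, m))
    y.append(rng.integers(0, 2))                      # 1 = Ki-67 high expression

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```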
Collapse
|
27
|
Shim S, Cester D, Ruby L, Bluethgen C, Marcon M, Berger N, Unkelbach J, Boss A. Fully automated breast segmentation on spiral breast computed tomography images. J Appl Clin Med Phys 2022; 23:e13726. [PMID: 35946049 PMCID: PMC9588268 DOI: 10.1002/acm2.13726] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 03/10/2022] [Accepted: 06/24/2022] [Indexed: 11/10/2022] Open
Abstract
Introduction The quantification of the amount of glandular tissue and breast density is important to assess breast cancer risk. Novel photon-counting breast computed tomography (CT) technology has the potential to quantify them. For accurate analysis, a dedicated method is required to segment the breast components: the adipose and glandular tissue, skin, pectoralis muscle, skinfold section, rib, and implant. We propose a fully automated breast segmentation method for breast CT images. Methods The framework consists of four parts: (1) investigate, (2) segment the components excluding adipose and glandular tissue, (3) assess the breast density, and (4) iteratively segment the glandular tissue according to the estimated density. For the method, adapted seeded watershed and region-growing algorithms were developed specifically for breast CT images and optimized on 68 breast images. The segmentation performance was demonstrated qualitatively (five-point Likert scale) and quantitatively (Dice similarity coefficient [DSC] and difference coefficient [DC]) against human reading by experienced radiologists. Results The performance evaluation on each component and the overall segmentation for 17 breast CT images resulted in DSCs ranging 0.90–0.97 and DCs of 0.01–0.08. The readers rated the segmentations 4.5–4.8 (5 being the highest score) with excellent inter-reader agreement. The breast density varied by 3.7%–7.1% when mis-segmented muscle or skin was included. Conclusion The automatic segmentation results coincided with the human experts' reading. Accurate segmentation is important to avoid significant bias in breast density analysis. Our method enables accurate quantification of breast density and the amount of glandular tissue that is directly related to breast cancer risk.
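A compact sketch of the seeded-watershed idea that the pipeline adapts: intensity-threshold-derived markers seed a watershed on the image gradient. The synthetic image and threshold values below are arbitrary placeholders rather than the optimized parameters from the study.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
image = rng.normal(0.2, 0.02, size=(128, 128))                      # background/air
image[32:96, 32:96] = 0.55 + rng.normal(0, 0.02, size=(64, 64))     # adipose-like region
image[56:72, 56:72] = 0.85                                          # glandular-like region

markers = np.zeros_like(image, dtype=np.int32)
markers[image < 0.3] = 1                          # background / air seed
markers[(image > 0.45) & (image < 0.7)] = 2       # adipose-like seed
markers[image > 0.8] = 3                          # glandular-like seed

labels = watershed(sobel(image), markers)         # flood the gradient map from the seeds
print(np.unique(labels))                          # array([1, 2, 3])
```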
Collapse
Affiliation(s)
- Sojin Shim
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
| | - Davide Cester
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
| | - Lisa Ruby
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
| | - Christian Bluethgen
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
| | - Magda Marcon
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
| | - Nicole Berger
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
| | - Jan Unkelbach
- Department of Radiation Oncology, University Hospital of Zurich, Zurich, Switzerland
| | - Andreas Boss
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
| |
Collapse
|
28
|
din NMU, Dar RA, Rasool M, Assad A. Breast cancer detection using deep learning: Datasets, methods, and challenges ahead. Comput Biol Med 2022; 149:106073. [DOI: 10.1016/j.compbiomed.2022.106073] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 08/21/2022] [Accepted: 08/27/2022] [Indexed: 12/22/2022]
|
29
|
Samperna R, Moriakov N, Karssemeijer N, Teuwen J, Mann RM. Exploiting the Dixon Method for a Robust Breast and Fibro-Glandular Tissue Segmentation in Breast MRI. Diagnostics (Basel) 2022; 12:diagnostics12071690. [PMID: 35885594 PMCID: PMC9324146 DOI: 10.3390/diagnostics12071690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 07/07/2022] [Accepted: 07/09/2022] [Indexed: 11/26/2022] Open
Abstract
Automatic breast and fibro-glandular tissue (FGT) segmentation in breast MRI allows for the efficient and accurate calculation of breast density. The U-Net architecture, either 2D or 3D, has already been shown to be effective at addressing the segmentation problem in breast MRI. However, the lack of publicly available datasets for this task has forced several authors to rely on internal datasets composed of either acquisitions without fat suppression (WOFS) or with fat suppression (FS), limiting the generalization of the approach. To solve this problem, we propose a data-centric approach that uses the available data efficiently. By collecting a dataset of T1-weighted breast MRI acquisitions obtained with the Dixon method, we train a network on both T1 WOFS and FS acquisitions while utilizing the same ground truth segmentation. Using the “plug-and-play” framework nnUNet, we achieve, on our internal test set, a Dice Similarity Coefficient (DSC) of 0.96 and 0.91 for WOFS breast and FGT segmentation and 0.95 and 0.86 for FS breast and FGT segmentation, respectively. On an external, publicly available dataset, a panel of breast radiologists rated the quality of our automatic segmentation with an average of 3.73 on a four-point scale, with an average percentage agreement of 67.5%.
Collapse
Affiliation(s)
- Riccardo Samperna
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
- Correspondence:
| | - Nikita Moriakov
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiation Oncology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
| | - Nico Karssemeijer
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- ScreenPoint Medical BV, 6525 EC Nijmegen, The Netherlands
| | - Jonas Teuwen
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiation Oncology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
| | - Ritse M. Mann
- Department of Medical Imaging, Radboudumc, 6525 GA Nijmegen, The Netherlands; (N.M.); (N.K.); (J.T.); (R.M.M.)
- Department of Radiology, The Netherlands Cancer Institute (NKI), 1066 CX Amsterdam, The Netherlands
| |
Collapse
|
30
|
Alqaoud M, Plemmons J, Feliberti E, Dong S, Kaipa K, Fichtinger G, Xiao Y, Audette MA. nnUNet-based Multi-modality Breast MRI Segmentation and Tissue-Delineating Phantom for Robotic Tumor Surgery Planning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:3495-3501. [PMID: 36086096 DOI: 10.1109/embc48229.2022.9871109] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Segmentation of the thoracic region and breast tissues is crucial for analyzing and diagnosing the presence of breast masses. This paper introduces a medical image segmentation architecture that aggregates two neural networks based on the state-of-the-art nnU-Net. Additionally, this study proposes a polyvinyl alcohol cryogel (PVA-C) breast phantom, based on its automated segmentation approach, to enable planning and navigation experiments for robotic breast surgery. The dataset consists of multimodality breast MRI of T2W and STIR images obtained from 10 patients. A statistical analysis of segmentation tasks emphasizes the Dice Similarity Coefficient (DSC), segmentation accuracy, sensitivity, and specificity. We first use single-class labeling to segment the breast region and then exploit it as an input for three-class labeling to segment fatty, fibroglandular (FGT), and tumorous tissues. The first network achieves a DSC of 0.95, while the second network achieves DSCs of 0.95, 0.83, and 0.41 for the fat, FGT, and tumor classes, respectively. Clinical Relevance-This research is relevant to the breast surgery community as it establishes a deep learning-based (DL) algorithmic and phantomic foundation for surgical planning and navigation that will exploit preoperative multimodal MRI and intraoperative ultrasound to achieve highly cosmetic breast surgery. In addition, the planning and navigation will guide a robot that can cut, resect, bag, and grasp a tissue mass that encapsulates breast tumors and positive tissue margins. This image-guided robotic approach promises to potentiate the accuracy of breast surgeons and improve patient outcomes.
Collapse
|
31
|
A Synopsis of Machine and Deep Learning in Medical Physics and Radiology. JOURNAL OF BASIC AND CLINICAL HEALTH SCIENCES 2022. [DOI: 10.30621/jbachs.960154] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Machine learning (ML) and deep learning (DL) technologies introduced in the fields of medical physics, radiology, and oncology have made great strides in the past few years. Many applications have proven to be efficacious automated diagnosis and radiotherapy systems. This paper outlines DL's general concepts and principles, key computational methods, and resources, as well as the implementation of automated models in diagnostic radiology and radiation oncology research. In addition, the potential challenges of DL technology and their solutions are also discussed.
Collapse
|
32
|
Bhowmik A, Eskreis-Winkler S. Deep learning in breast imaging. BJR Open 2022; 4:20210060. [PMID: 36105427 PMCID: PMC9459862 DOI: 10.1259/bjro.20210060] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 04/04/2022] [Accepted: 04/21/2022] [Indexed: 11/22/2022] Open
Abstract
Millions of breast imaging exams are performed each year in an effort to reduce the morbidity and mortality of breast cancer. Breast imaging exams are performed for cancer screening, diagnostic work-up of suspicious findings, evaluating extent of disease in recently diagnosed breast cancer patients, and determining treatment response. Yet, the interpretation of breast imaging can be subjective, tedious, time-consuming, and prone to human error. Retrospective and small reader studies suggest that deep learning (DL) has great potential to perform medical imaging tasks at or above human-level performance, and may be used to automate aspects of the breast cancer screening process, improve cancer detection rates, decrease unnecessary callbacks and biopsies, optimize patient risk assessment, and open up new possibilities for disease prognostication. Prospective trials are urgently needed to validate these proposed tools, paving the way for real-world clinical use. New regulatory frameworks must also be developed to address the unique ethical, medicolegal, and quality control issues that DL algorithms present. In this article, we review the basics of DL, describe recent DL breast imaging applications including cancer detection and risk prediction, and discuss the challenges and future directions of artificial intelligence-based systems in the field of breast cancer.
Collapse
Affiliation(s)
- Arka Bhowmik
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| | - Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| |
Collapse
|
33
|
Robust Detection and Modeling of the Major Temporal Arcade in Retinal Fundus Images. MATHEMATICS 2022. [DOI: 10.3390/math10081334] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
The Major Temporal Arcade (MTA) is a critical component of the retinal structure that facilitates clinical diagnosis and monitoring of various ocular pathologies. Although recent works have addressed the quantitative analysis of the MTA through parametric modeling, their efforts are strongly based on an assumption of symmetry in the MTA shape. This work presents a robust method for the detection and piecewise parametric modeling of the MTA in fundus images. The model consists of a piecewise parametric curve with the ability to consider both symmetric and asymmetric scenarios. In an initial stage, multiple models are built from random blood vessel points taken from the blood-vessel segmented retinal image, following a weighted-RANSAC strategy. To choose the final model, the algorithm extracts blood-vessel width and grayscale-intensity features and merges them to obtain a coarse MTA probability function, which is used to weight the percentage of inlier points for each model. This procedure promotes selecting a model based on points with high MTA probability. Experimental results in the public benchmark dataset Digital Retinal Images for Vessel Extraction (DRIVE), for which manual MTA delineations have been prepared, indicate that the proposed method outperforms existing approaches with a balanced Accuracy of 0.7067, Mean Distance to Closest Point of 7.40 pixels, and Hausdorff Distance of 27.96 pixels, while demonstrating competitive results in terms of execution time (9.93 s per image).
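A schematic, weighted-RANSAC fit of a single parabola to candidate vessel points echoes the model-selection idea described above; the real method fits a piecewise curve and derives weights from vessel width and intensity, so the simplified weights, thresholds, and synthetic points below are assumptions.

```python
import numpy as np

def weighted_ransac_parabola(x, y, w, iters=500, thresh=3.0, rng=None):
    """Fit y = ax^2 + bx + c, sampling minimal sets with probability
    proportional to the MTA weight and scoring by weighted inlier count."""
    rng = np.random.default_rng(0) if rng is None else rng
    p = w / w.sum()
    best_coeffs, best_score = None, -1.0
    for _ in range(iters):
        idx = rng.choice(len(x), size=3, replace=False, p=p)
        coeffs = np.polyfit(x[idx], y[idx], deg=2)        # candidate model from 3 points
        resid = np.abs(np.polyval(coeffs, x) - y)
        score = (w * (resid < thresh)).sum()               # weighted inlier count
        if score > best_score:
            best_coeffs, best_score = coeffs, score
    return best_coeffs

rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 400)
y = 0.02 * (x - 50) ** 2 + 10 + rng.normal(0, 1.5, 400)    # arcade-like parabola
y[:80] += rng.uniform(-40, 40, 80)                          # non-MTA vessel clutter
w = np.ones_like(x); w[:80] = 0.2                           # low MTA probability for clutter
print(weighted_ransac_parabola(x, y, w))
```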
Collapse
|
34
|
Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. [PMID: 35339259 DOI: 10.1053/j.semnuclmed.2022.02.003] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/15/2022] [Accepted: 02/16/2022] [Indexed: 11/11/2022]
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an earlier stage, as well as monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound and magnetic resonance imaging. Nuclear medicine imaging techniques are used for detection and classification of axillary lymph nodes and distant staging in breast cancer imaging. All of these techniques are currently digitized, enabling the possibility to implement deep learning (DL), a subset of Artificial intelligence, in breast imaging. DL is nowadays embedded in a plethora of different tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and prediction and assessment of therapy response. Studies show similar and even better performances of DL algorithms compared to radiologists, although it is clear that large trials are needed, especially for ultrasound and magnetic resonance imaging, to exactly determine the added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are only sparsely available and further research is mandatory. Legal and ethical issues need to be considered before the role of DL can expand to its full potential in clinical breast care practice.
Collapse
Affiliation(s)
- Luuk Balkenende
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
| | - Ritse M Mann
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands.
| |
Collapse
|
35
|
Ouyang Z, Zhang P, Pan W, Li Q. Deep learning-based body part recognition algorithm for three-dimensional medical images. Med Phys 2022; 49:3067-3079. [PMID: 35157332 DOI: 10.1002/mp.15536] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 01/24/2022] [Accepted: 01/25/2022] [Indexed: 11/12/2022] Open
Abstract
BACKGROUND The automatic recognition of human body parts in three-dimensional (3D) medical images is important in many clinical applications. However, methods presented in prior studies have mainly classified each two-dimensional (2D) slice independently rather than recognizing a batch of consecutive slices as a specific body part. PURPOSE In this study, we aim to develop a deep-learning-based method designed to automatically divide computed tomography (CT) and magnetic resonance imaging (MRI) scans into five consecutive body parts: head, neck, chest, abdomen, and pelvis. METHODS A deep learning framework was developed to recognize body parts in two stages. In the first pre-classification stage, a convolutional neural network (CNN) using the GoogLeNet Inception v3 architecture and a long short-term memory (LSTM) network were combined to classify each 2D slice; the CNN extracted information from a single slice, whereas the LSTM employed rich contextual information among consecutive slices. In the second post-processing stage, the input scan was further partitioned into consecutive body parts by identifying the optimal boundaries between them based on the slice classification results of the first stage. To evaluate the performance of the proposed method, 662 CT and 1434 MRI scans were used. RESULTS Our method achieved a very good performance in 2D slice classification compared with state-of-the-art methods, with overall classification accuracies of 97.3% and 98.2% for CT and MRI scans, respectively. Moreover, our method further divided whole scans into consecutive body parts with mean boundary errors of 8.9 mm and 3.5 mm for CT and MRI data, respectively. CONCLUSIONS The proposed method significantly improved the slice classification accuracy compared with state-of-the-art methods, and further accurately divided CT and MRI scans into consecutive body parts based on the results of slice classification. The developed method can be employed as an important step in various computer-aided diagnosis and medical image analysis schemes.
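A toy PyTorch sketch of the first-stage idea: a small CNN encodes each 2D slice and an LSTM propagates context across consecutive slices before per-slice classification into the five body parts. The tiny encoder stands in for the GoogLeNet Inception v3 backbone used in the paper, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    NUM_PARTS = 5          # head, neck, chest, abdomen, pelvis

    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(                      # per-slice CNN features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, self.NUM_PARTS)

    def forward(self, scan):                               # scan: (B, S, 1, H, W)
        b, s = scan.shape[:2]
        feats = self.encoder(scan.flatten(0, 1)).view(b, s, -1)
        context, _ = self.lstm(feats)                      # contextual slice features
        return self.head(context)                          # (B, S, 5) per-slice logits

scan = torch.randn(2, 40, 1, 64, 64)                       # 40 consecutive slices
print(SliceSequenceClassifier()(scan).shape)               # torch.Size([2, 40, 5])
```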
Collapse
Affiliation(s)
- Zihui Ouyang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Peng Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| | - Weifan Pan
- Zhejiang Taimei Medical Technology Co., Ltd, Jiaxing, Zhejiang, 314001, China
| | - Qiang Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
| |
Collapse
|
36
|
Yin XX, Hadjiloucas S, Zhang Y, Tian Z. MRI radiogenomics for intelligent diagnosis of breast tumors and accurate prediction of neoadjuvant chemotherapy responses-a review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 214:106510. [PMID: 34852935 DOI: 10.1016/j.cmpb.2021.106510] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 11/01/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE This paper provides an overview of multidimensional mining algorithms in relation to Magnetic Resonance Imaging (MRI) radiogenomics for computer-aided detection and diagnosis of breast tumours. The work also addresses a new problem in radiogenomics mining: how to combine structural radiomics information with non-structural genomics information to improve the accuracy and efficacy of Neoadjuvant Chemotherapy (NAC). METHODS This requires the automated extraction of parameters from non-structural breast radiomics data and the identification of feature vectors with diagnostic value, which are then combined with genomics data. To address the problem of weakly labelled tumour images, a Generative Adversarial Network (GAN) based deep learning strategy is proposed for the classification of tumour types; this has significant potential for providing accurate real-time identification of tumorous regions from MRI scans. To efficiently integrate, within a deep learning framework, different features from radiogenomics datasets at multiple spatio-temporal resolutions, pyramid-structured and multi-scale densely connected U-Nets are proposed. A bidirectional gated recurrent unit (BiGRU) combined with an attention-based deep learning approach is also proposed. RESULTS The aim is to accurately predict NAC responses by combining imaging and genomic datasets. The approaches discussed incorporate some of the latest developments in signal processing and artificial intelligence, and have significant potential to advance the field and to provide a development platform for future cutting-edge biomedical radiogenomics analysis. CONCLUSIONS The association of genotypic and phenotypic features is at the core of the emergent field of Precision Medicine. It makes use of advances in biomedical big data analysis, which enable the correlations between disease-associated phenotypic characteristics, genetic polymorphisms, and gene activation to be revealed.
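The BiGRU-with-attention block mentioned above can be sketched as follows (an illustrative Python example under assumed dimensions, not the review's code): a bidirectional GRU encodes a combined imaging-genomic feature sequence, and an additive attention layer pools the sequence before a classification head predicts, for example, NAC response.

import torch
import torch.nn as nn

class AttentiveBiGRU(nn.Module):
    # Bidirectional GRU over a feature sequence with additive attention
    # pooling, followed by a small classification head.
    def __init__(self, in_dim=64, hidden=32, num_classes=2):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (batch, seq_len, in_dim)
        h, _ = self.gru(x)                       # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weight per step
        pooled = (w * h).sum(dim=1)              # weighted sum over the sequence
        return self.fc(pooled)

scores = AttentiveBiGRU()(torch.randn(4, 10, 64))   # e.g., NAC-response logits
print(scores.shape)                                  # torch.Size([4, 2])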
Collapse
Affiliation(s)
- Xiao-Xia Yin
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China.
| | - Sillas Hadjiloucas
- Department of Biomedical Engineering, The University of Reading, RG6 6AY, UK
| | - Yanchun Zhang
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
| | - Zhihong Tian
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
| |
Collapse
|
37
|
Frankhouser DE, Dietze E, Mahabal A, Seewaldt VL. Vascularity and Dynamic Contrast-Enhanced Breast Magnetic Resonance Imaging. FRONTIERS IN RADIOLOGY 2021; 1:735567. [PMID: 37492179 PMCID: PMC10364989 DOI: 10.3389/fradi.2021.735567] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Accepted: 11/11/2021] [Indexed: 07/27/2023]
Abstract
Angiogenesis is a key step in the initiation and progression of invasive breast cancer. High microvessel density on morphological characterization predicts metastasis and poor survival in women with invasive breast cancers. However, morphologic characterization is subject to variability and can evaluate only a limited portion of an invasive breast cancer. Consequently, breast Magnetic Resonance Imaging (MRI) is currently being evaluated as a means to assess vascularity. Recently, through the new field of radiomics, dynamic contrast-enhanced (DCE)-MRI has been used to evaluate vascular density and vascular morphology and to detect aggressive breast cancer biology. While DCE-MRI is a highly sensitive tool, specific features limit the computational evaluation of blood vessels: (1) DCE-MRI measures gadolinium contrast enhancement and does not directly evaluate biology, (2) the resolution of DCE-MRI is insufficient for imaging small blood vessels, and (3) DCE-MRI images are very difficult to co-register. Here we review computational approaches for the detection and analysis of blood vessels in DCE-MRI images and present some of the strategies we have developed for co-registration of DCE-MRI images and early detection of vascularization.
Collapse
Affiliation(s)
- David E. Frankhouser
- Department of Population Sciences, City of Hope National Medical Center, Duarte, CA, United States
| | - Eric Dietze
- Department of Population Sciences, City of Hope National Medical Center, Duarte, CA, United States
| | - Ashish Mahabal
- Department of Astronomy, Division of Physics, Mathematics, and Astronomy, California Institute of Technology (Caltech), Pasadena, CA, United States
| | - Victoria L. Seewaldt
- Department of Population Sciences, City of Hope National Medical Center, Duarte, CA, United States
| |
Collapse
|
38
|
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Encouragingly, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have developed deep-learning-based automated methods because of their efficiency and accuracy in predicting the growth of cancer cells from medical imaging modalities. To date, only a few review studies on breast cancer diagnosis are available, and these summarize a subset of existing work without addressing emerging architectures and modalities. This review focuses on the evolving deep learning architectures for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of the existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Collapse
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh;
| | - Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea
| |
Collapse
|
39
|
Abstract
This article gives a brief overview of the development of artificial intelligence in clinical breast imaging. For multiple decades, artificial intelligence (AI) methods have been developed and translated for breast imaging tasks such as detection, diagnosis, and assessment of response to therapy. As imaging modalities arise to support breast cancer screening programs and diagnostic examinations, including full-field digital mammography, breast tomosynthesis, ultrasound, and MRI, AI techniques have paralleled these efforts with more complex algorithms, faster computers, and larger data sets. AI methods include human-engineered radiomics algorithms and deep learning methods. Examples of these AI-supported clinical tasks are given along with commentary on the future.
Collapse
Affiliation(s)
- Qiyuan Hu
- Committee on Medical Physics, Department of Radiology, The University of Chicago, 5841 S Maryland Avenue, MC2026, Chicago, IL 60637, USA
| | - Maryellen L Giger
- Committee on Medical Physics, Department of Radiology, The University of Chicago, 5841 S Maryland Avenue, MC2026, Chicago, IL 60637, USA.
| |
Collapse
|
40
|
Li X, Zhao Y, Jiang J, Cheng J, Zhu W, Wu Z, Jing J, Zhang Z, Wen W, Sachdev PS, Wang Y, Liu T, Li Z. White matter hyperintensities segmentation using an ensemble of neural networks. Hum Brain Mapp 2021; 43:929-939. [PMID: 34704337 PMCID: PMC8764480 DOI: 10.1002/hbm.25695] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Accepted: 10/08/2021] [Indexed: 11/30/2022] Open
Abstract
White matter hyperintensities (WMHs) represent the most common neuroimaging marker of cerebral small vessel disease (CSVD). The volume and location of WMHs are important clinical measures. We present a pipeline using deep fully convolutional networks and ensemble models, combining U‐Net, SE‐Net, and multi‐scale features, to automatically segment WMHs and estimate their volumes and locations. We evaluated our method on two datasets: a clinical routine dataset comprising 60 patients (selected from the Chinese National Stroke Registry, CNSR) and a research dataset composed of 60 patients (selected from the MICCAI WMH Challenge, MWC). The performance of our pipeline was compared with four freely available methods, LGA, LPA, UBO detector, and U‐Net, in terms of a variety of metrics. Additionally, to assess the model's generalization ability, another research dataset comprising 40 patients (from the Older Australian Twins Study and Sydney Memory and Aging Study, OSM) was selected and tested. The pipeline achieved the best performance on both the research dataset and the clinical routine dataset, with DSC significantly higher than the other methods (p < .001), reaching .833 and .783, respectively. The generalization experiments showed that the model trained on the research dataset (DSC = 0.736) performed better than the model trained on the clinical dataset (DSC = 0.622). Our method outperformed widely used pipelines in WMH segmentation. The system can generate both image and text outputs for whole-brain, lobar, and anatomically labeled WMHs. Additionally, the software and models of our method are made publicly available at https://www.nitrc.org/projects/what_v1.
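Two of the building blocks reported here, the Dice similarity coefficient and a simple fusion of ensemble member probability maps, can be expressed compactly. The Python sketch below uses mean-probability fusion as an assumed fusion rule and toy data; it is not the authors' pipeline.

import numpy as np

def dice_coefficient(pred, truth):
    # Dice similarity coefficient between two binary masks:
    # DSC = 2 * |P intersect T| / (|P| + |T|).
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def ensemble_mask(prob_maps, threshold=0.5):
    # Average per-model probability maps and threshold the mean, a simple
    # way to fuse U-Net / SE-Net style ensemble members into one WMH mask.
    return np.mean(prob_maps, axis=0) >= threshold

pred = ensemble_mask(np.random.rand(3, 64, 64))        # 3 toy member outputs
truth = np.random.rand(64, 64) > 0.5                   # toy reference mask
print(dice_coefficient(pred, truth))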
Collapse
Affiliation(s)
- Xinxin Li
- Key Laboratory of Biomechanics and Mechanobiology, Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,BioMind Technology AI Center, China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Beijing, China
| | - Yu Zhao
- Key Laboratory of Biomechanics and Mechanobiology, Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
| | - Jiyang Jiang
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, UNSW, Sydney, New South Wales, Australia
| | - Jian Cheng
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Computer Science and Engineering, Beihang University, Beijing, China
| | - Wanlin Zhu
- Neuroimaging Center of Excellence, China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
| | - Zhenzhou Wu
- BioMind Technology AI Center, China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Beijing, China
| | - Jing Jing
- Neuroimaging Center of Excellence, China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
| | - Zhe Zhang
- Neuroimaging Center of Excellence, China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
| | - Wei Wen
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, UNSW, Sydney, New South Wales, Australia.,Neuropsychiatric Institute, Prince of Wales Hospital, Sydney, New South Wales, Australia
| | - Perminder S Sachdev
- Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, UNSW, Sydney, New South Wales, Australia.,Neuropsychiatric Institute, Prince of Wales Hospital, Sydney, New South Wales, Australia
| | - Yongjun Wang
- Neuroimaging Center of Excellence, China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
| | - Tao Liu
- Key Laboratory of Biomechanics and Mechanobiology, Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China.,Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Computer Science and Engineering, Beihang University, Beijing, China
| | - Zixiao Li
- Neuroimaging Center of Excellence, China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China.,Vascular Neurology, Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China.,Chinese Institute for Brain Research, Beijing, China.,Research Unit of Artificial Intelligence in Cerebrovascular Disease, Chinese Academy of Medical Sciences, Beijing, China
| |
Collapse
|
41
|
Satake H, Ishigaki S, Ito R, Naganawa S. Radiomics in breast MRI: current progress toward clinical application in the era of artificial intelligence. Radiol Med 2021; 127:39-56. [PMID: 34704213 DOI: 10.1007/s11547-021-01423-y] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 10/14/2021] [Indexed: 12/11/2022]
Abstract
Breast magnetic resonance imaging (MRI) is the most sensitive imaging modality for breast cancer diagnosis and is widely used clinically. Dynamic contrast-enhanced MRI is the basis of breast MRI, but ultrafast, T2-weighted, and diffusion-weighted images are also acquired to improve lesion characterization. Such multiparametric MRI, with its numerous morphological and functional data, poses new challenges to radiologists, and new tools for reliable, reproducible, and high-volume quantitative assessments are therefore warranted. In this context, radiomics, an emerging field of research involving the conversion of digital medical images into mineable data for clinical decision-making and outcome prediction, has been gaining ground in oncology. Recent developments in artificial intelligence have promoted radiomics studies in various fields, including breast cancer treatment, and numerous studies have been conducted. However, a translational gap remains between radiomics research and clinical practice, and many issues are still to be solved. In this review, we outline the steps of the radiomics workflow and survey the clinical applications of radiomics in breast MRI based on the published literature, together with a discussion of current limitations and challenges.
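A minimal example of the feature-extraction step in such a radiomics workflow is sketched below in Python; it computes a handful of first-order features inside a lesion mask on synthetic data, whereas full pipelines (e.g., pyradiomics) add shape and texture feature families and careful preprocessing.

import numpy as np

def first_order_features(image, mask):
    # A few hand-engineered first-order radiomic features computed inside a
    # lesion mask; full pipelines add shape and texture feature families.
    voxels = image[mask].astype(float)
    hist, _ = np.histogram(voxels, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "skewness": float(((voxels - voxels.mean()) ** 3).mean()
                          / (voxels.std() ** 3 + 1e-8)),
        "entropy": float(-(p * np.log2(p)).sum()),
        "energy": float((voxels ** 2).sum()),
    }

image = np.random.rand(32, 32, 32)                 # toy MR volume
mask = np.zeros_like(image, dtype=bool)
mask[10:20, 10:20, 10:20] = True                   # toy lesion ROI
print(first_order_features(image, mask))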
Collapse
Affiliation(s)
- Hiroko Satake
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan.
| | - Satoko Ishigaki
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| | - Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| | - Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
| |
Collapse
|
42
|
Yu X, Zhou Q, Wang S, Zhang Y. A systematic survey of deep learning in breast cancer. INT J INTELL SYST 2021. [DOI: 10.1002/int.22622] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Affiliation(s)
- Xiang Yu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
| | - Qinghua Zhou
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
| | - Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
| | - Yu‐Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
| |
Collapse
|
43
|
Development of U-Net Breast Density Segmentation Method for Fat-Sat MR Images Using Transfer Learning Based on Non-Fat-Sat Model. J Digit Imaging 2021; 34:877-887. [PMID: 34244879 PMCID: PMC8455741 DOI: 10.1007/s10278-021-00472-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Revised: 05/27/2021] [Accepted: 06/09/2021] [Indexed: 12/11/2022] Open
Abstract
The aim of this study was to develop a U-Net deep learning method for breast tissue segmentation on fat-sat T1-weighted (T1W) MRI using transfer learning (TL) from a model developed for non-fat-sat images. The training dataset (N = 126) was imaged on a 1.5 T MR scanner, and the independent testing dataset (N = 40) was imaged on a 3 T scanner, both using a fat-sat T1W pulse sequence. Pre-contrast images acquired in the dynamic contrast-enhanced (DCE) MRI sequence were used for analysis. All patients had unilateral cancer, and the segmentation was performed using the contralateral normal breast. The ground truth of breast and fibroglandular tissue (FGT) segmentation was generated using a template-based segmentation method with a clustering algorithm. The deep learning segmentation was performed using U-Net models trained with and without TL, where TL used the trainable parameters of the previous non-fat-sat model as initial values. The ground truth of each case was used to evaluate the segmentation performance of the U-Net models by calculating the Dice similarity coefficient (DSC) and the overall accuracy based on all pixels. Pearson's correlation was used to evaluate the correlation of breast volume and FGT volume between the U-Net prediction and the ground truth. In the training dataset, the evaluation was performed using tenfold cross-validation, and the mean DSC with and without TL was 0.97 vs. 0.95 for breast and 0.86 vs. 0.80 for FGT. When the final models developed with and without TL from the training dataset were applied to the testing dataset, the mean DSC was 0.89 vs. 0.83 for breast and 0.81 vs. 0.81 for FGT, respectively. Application of TL not only improved the DSC but also decreased the required number of training cases. Lastly, there was a high correlation (R2 > 0.90) in both the training and testing datasets between the U-Net prediction and the ground truth for breast volume and FGT volume. U-Net can be applied to perform breast tissue segmentation on fat-sat images, and TL is an efficient strategy for developing a specific model for each different dataset.
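The transfer-learning step described here, initializing the fat-sat model with the trainable parameters of the non-fat-sat model before fine-tuning, can be sketched as follows in Python; the tiny stand-in network and the checkpoint handling are illustrative assumptions, not the study's actual U-Net or data.

import torch
import torch.nn as nn

def build_segmenter():
    # Tiny stand-in network; the real models in the study are full U-Nets.
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 2, kernel_size=1),
    )

# Pretend this state dict was loaded from the previously trained
# non-fat-sat model (in practice something like torch.load on a checkpoint).
source_state = build_segmenter().state_dict()

# Transfer learning: start the fat-sat model from the source weights wherever
# the parameter name and shape match; everything else keeps its fresh init.
model = build_segmenter()
target_state = model.state_dict()
matched = {k: v for k, v in source_state.items()
           if k in target_state and v.shape == target_state[k].shape}
target_state.update(matched)
model.load_state_dict(target_state)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # then fine-tune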
Collapse
|
44
|
Wang H, Cao J, Feng J, Xie Y, Yang D, Chen B. Mixed 2D and 3D convolutional network with multi-scale context for lesion segmentation in breast DCE-MRI. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102607] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
45
|
Huo L, Hu X, Xiao Q, Gu Y, Chu X, Jiang L. Segmentation of whole breast and fibroglandular tissue using nnU-Net in dynamic contrast enhanced MR images. Magn Reson Imaging 2021; 82:31-41. [PMID: 34147598 DOI: 10.1016/j.mri.2021.06.017] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 05/14/2021] [Accepted: 06/15/2021] [Indexed: 10/21/2022]
Abstract
PURPOSE Segmentation of the whole breast and fibroglandular tissue (FGT) is important for quantitatively analyzing breast cancer risk in dynamic contrast-enhanced magnetic resonance (DCE-MR) images. The purpose of this study was to improve the accuracy and efficiency of whole-breast and FGT segmentation in 3-D fat-suppressed DCE-MR images with a versatile deep learning (DL) framework. METHODS We randomly collected 100 breast DCE-MR scans from Shanghai Cancer Hospital of Fudan University. The MR scans in the dataset differed in both spatial resolution and the MR scanners employed. Furthermore, four breast density categories were assessed by radiologists based on the Breast Imaging Reporting and Data System (BI-RADS) of the American College of Radiology. The dataset was separated into training and testing sets while keeping a balanced distribution of scans with different imaging parameters and density categories. The nnU-Net has recently been proposed to automatically adapt preprocessing strategies and network architectures to a given medical image dataset, thus showing great potential for the systematic adaptation of DL methods to different datasets. In this study, we applied the nnU-Net to segment the whole breast and FGT in 3-D fat-suppressed DCE-MR images. Five-fold cross-validation was employed to train and validate the segmentation method. RESULTS The segmentation performance was evaluated with volume and surface agreement metrics between the DL-based automatic and the manually delineated masks: the average Dice volume overlap (0.968 ± 0.017 and 0.877 ± 0.081), the average surface distance (0.201 ± 0.080 mm and 0.310 ± 0.043 mm), and the Pearson correlation coefficient of the masks (0.995 and 0.972), calculated for the whole-breast and FGT segmentations, respectively. The correlation coefficient between the breast densities obtained with the DL-based segmentation and the manual delineation was 0.981. The Bland-Altman plot showed a positive bias of 0.8% (DL-based relative to manual) in breast density measurement. The execution time of the DL-based segmentation was approximately 20 s for the whole-breast segmentation and 15 s for the FGT segmentation. CONCLUSIONS Our DL-based segmentation framework using nnU-Net robustly achieved high accuracy and efficiency across variable MR imaging settings without extra pre- or post-processing procedures. It would be useful for developing DCE-MR-based CAD systems that quantify breast cancer risk and can be integrated into the clinical workflow.
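The agreement analysis reported above (Pearson correlation of breast-density values plus a Bland-Altman bias) can be reproduced on synthetic numbers with a few lines of Python; the values below are placeholders, not study data.

import numpy as np
from scipy import stats

# Synthetic breast-density fractions standing in for the automatic and
# manual measurements compared in the study.
manual = np.array([0.12, 0.25, 0.33, 0.41, 0.55, 0.62, 0.18, 0.47])
automatic = manual + np.random.normal(loc=0.008, scale=0.01, size=manual.size)

r, _ = stats.pearsonr(automatic, manual)         # correlation between readers
diff = automatic - manual
bias = diff.mean()                               # Bland-Altman mean difference
loa = 1.96 * diff.std()                          # half-width of limits of agreement
print(f"r = {r:.3f}, bias = {bias:+.3f}, limits of agreement = +/- {loa:.3f}")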
Collapse
Affiliation(s)
- Lu Huo
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; University of Chinese Academy of Sciences, No.19 Yuquan Road, Beijing 100049, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
| | - Xiaoxin Hu
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
| | - Qin Xiao
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
| | - Yajia Gu
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
| | - Xu Chu
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
| | - Luan Jiang
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China.
| |
Collapse
|
46
|
Sugimori H, Shimizu K, Makita H, Suzuki M, Konno S. A Comparative Evaluation of Computed Tomography Images for the Classification of Spirometric Severity of the Chronic Obstructive Pulmonary Disease with Deep Learning. Diagnostics (Basel) 2021; 11:diagnostics11060929. [PMID: 34064240 PMCID: PMC8224354 DOI: 10.3390/diagnostics11060929] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Revised: 05/17/2021] [Accepted: 05/19/2021] [Indexed: 12/03/2022] Open
Abstract
Deep learning has recently been widely applied in medical imaging. However, whether it is sufficient to simply input the entire image, or whether preprocessing of the supervised images is necessary, has not been sufficiently studied. This study aimed to create classifiers, trained with and without preprocessing, for the Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification using CT images, and to evaluate the classification accuracy of the GOLD classification with confusion matrices. Eighty patients were divided into four groups (n = 20 each) according to former GOLD 0, GOLD 1, GOLD 2, and GOLD 3 or 4. The classification models were created by transfer learning of the ResNet50 network architecture. The created models were evaluated by confusion matrix and AUC. Moreover, the confusion matrix rearranged for former stage 0 versus stage ≥1 was evaluated by the same procedure. The AUCs of the original and threshold images for the four-class analysis were 0.61 ± 0.13 and 0.64 ± 0.10, respectively, and the AUCs for the two-class classification of former GOLD 0 and GOLD ≥ 1 were 0.64 ± 0.06 and 0.68 ± 0.12, respectively. In the two-class classification with threshold images, recall and precision were over 0.8 for GOLD ≥ 1, and the McNemar–Bowker test indicated some symmetry. The results suggest that the preprocessed threshold image could possibly be used as a screening tool for GOLD classification without pulmonary function tests, rather than inputting the unprocessed image into the convolutional neural network (CNN) for CT image learning.
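The evaluation step, a four-class confusion matrix that is then rearranged into the binary former GOLD 0 versus GOLD ≥ 1 task, can be sketched in Python as follows; the labels are randomly generated placeholders rather than study predictions.

import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=80)     # 0: former GOLD 0; 1-3: GOLD 1, 2, 3/4
y_pred = rng.integers(0, 4, size=80)     # stand-in for CNN predictions

print(confusion_matrix(y_true, y_pred))  # four-class confusion matrix

# Rearrange into the binary task: former GOLD 0 versus GOLD >= 1.
y_true_bin = (y_true >= 1).astype(int)
y_pred_bin = (y_pred >= 1).astype(int)
print("recall:", recall_score(y_true_bin, y_pred_bin),
      "precision:", precision_score(y_true_bin, y_pred_bin))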
Collapse
Affiliation(s)
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan;
| | - Kaoruko Shimizu
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan; (H.M.); (M.S.); (S.K.)
- Correspondence: ; Tel.: +81-11-706-5911
| | - Hironi Makita
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan; (H.M.); (M.S.); (S.K.)
- Hokkaido Medical Research Institute for Respiratory Diseases, Sapporo 064-0807, Japan
| | - Masaru Suzuki
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan; (H.M.); (M.S.); (S.K.)
| | - Satoshi Konno
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan; (H.M.); (M.S.); (S.K.)
| |
Collapse
|
47
|
Humpire-Mamani GE, Bukala J, Scholten ET, Prokop M, van Ginneken B, Jacobs C. Fully Automatic Volume Measurement of the Spleen at CT Using Deep Learning. Radiol Artif Intell 2021; 2:e190102. [PMID: 33937830 DOI: 10.1148/ryai.2020190102] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 04/26/2020] [Accepted: 05/01/2020] [Indexed: 12/15/2022]
Abstract
Purpose To develop a fully automated algorithm for spleen segmentation and to assess the performance of this algorithm in a large dataset. Materials and Methods In this retrospective study, a three-dimensional deep learning network was developed to segment the spleen on thorax-abdomen CT scans. Scans were extracted from patients undergoing oncologic treatment from 2014 to 2017. A total of 1100 scans from 1100 patients were used in this study, and 400 were selected for development of the algorithm. For testing, a dataset of 50 scans was annotated to assess the segmentation accuracy and was compared against the splenic index equation. In a qualitative observer experiment, an enriched set of 100 scan pairs was used to evaluate whether the algorithm could aid a radiologist in assessing splenic volume change. The reference standard was set by the consensus of two other independent radiologists. A Mann-Whitney U test was conducted to test whether there was a performance difference between the algorithm and the independent observer. Results On the test set of 50 scans, the algorithm and the independent observer obtained comparable Dice scores of 0.962 and 0.964, respectively (P = .834). After visual classification of volume change, the radiologist agreed with the reference standard in 81% (81 of 100) of the cases, which increased to 92% (92 of 100) when aided by the algorithm. Conclusion A segmentation method based on deep learning can accurately segment the spleen on CT scans and may help radiologists to detect abnormal splenic volumes and splenic volume changes.
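The Mann-Whitney U comparison of per-case Dice scores between the algorithm and the independent observer can be sketched as follows in Python, using synthetic scores in place of the study's measurements.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
dice_algorithm = np.clip(rng.normal(0.962, 0.01, size=50), 0, 1)  # synthetic
dice_observer = np.clip(rng.normal(0.964, 0.01, size=50), 0, 1)   # synthetic

stat, p = mannwhitneyu(dice_algorithm, dice_observer, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")    # large p -> no detectable difference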
Collapse
Affiliation(s)
- Gabriel E Humpire-Mamani
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands (G.E.H.M., J.B., E.T.S., M.P., B.v.G., C.J.); and Fraunhofer MEVIS, Bremen, Germany (B.v.G.)
| | - Joris Bukala
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands (G.E.H.M., J.B., E.T.S., M.P., B.v.G., C.J.); and Fraunhofer MEVIS, Bremen, Germany (B.v.G.)
| | - Ernst T Scholten
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands (G.E.H.M., J.B., E.T.S., M.P., B.v.G., C.J.); and Fraunhofer MEVIS, Bremen, Germany (B.v.G.)
| | - Mathias Prokop
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands (G.E.H.M., J.B., E.T.S., M.P., B.v.G., C.J.); and Fraunhofer MEVIS, Bremen, Germany (B.v.G.)
| | - Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands (G.E.H.M., J.B., E.T.S., M.P., B.v.G., C.J.); and Fraunhofer MEVIS, Bremen, Germany (B.v.G.)
| | - Colin Jacobs
- Diagnostic Image Analysis Group, Radboud University Medical Center, Geert Grooteplein 10 (Route 767), 6525 GA, Nijmegen, the Netherlands (G.E.H.M., J.B., E.T.S., M.P., B.v.G., C.J.); and Fraunhofer MEVIS, Bremen, Germany (B.v.G.)
| |
Collapse
|
48
|
Fernandes FE, Yen GG. Pruning of generative adversarial neural networks for medical imaging diagnostics with evolution strategy. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2020.12.086] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
49
|
Sugimori H, Hamaguchi H, Fujiwara T, Ishizaka K. Classification of type of brain magnetic resonance images with deep learning technique. Magn Reson Imaging 2021; 77:180-185. [PMID: 33359426 DOI: 10.1016/j.mri.2020.12.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2020] [Revised: 11/01/2020] [Accepted: 12/20/2020] [Indexed: 11/19/2022]
Affiliation(s)
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, North- 12, West- 5, Kita- ku, Sapporo, Hokkaido 060-0812, Japan.
| | - Hiroyuki Hamaguchi
- Department of Radiological Technology, Hokkaido University Hospital, North- 14, West- 5, Kita- ku, Sapporo, Hokkaido 060-8648, Japan.
| | - Taro Fujiwara
- Department of Radiological Technology, Hokkaido University Hospital, North- 14, West- 5, Kita- ku, Sapporo, Hokkaido 060-8648, Japan.
| | - Kinya Ishizaka
- Department of Radiological Technology, Hokkaido University Hospital, North- 14, West- 5, Kita- ku, Sapporo, Hokkaido 060-8648, Japan.
| |
Collapse
|
50
|
Tian X, Li C, Liu H, Li P, He J, Gao W. Applications of artificial intelligence in radiophysics. J Cancer Res Ther 2021; 17:1603-1607. [DOI: 10.4103/jcrt.jcrt_1438_21] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
|