1. Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs. J Imaging 2022; 8:231. [PMID: 36135397] [PMCID: PMC9503015] [DOI: 10.3390/jimaging8090231]
Abstract
Microcalcification clusters (MCs) are among the most important biomarkers for breast cancer, especially in cases of nonpalpable lesions. The vast majority of deep learning studies on digital breast tomosynthesis (DBT) focus on detecting and classifying lesions, especially soft-tissue lesions, in small, previously selected regions of interest. Only about 25% of the studies are specific to MCs, and all of them are based on the classification of small preselected regions. Classifying the whole image according to the presence or absence of MCs is a difficult task because of the small size of MCs and the amount of information present in an entire image. A completely automatic and direct classification, which receives the entire image without prior identification of any regions, is crucial for the usefulness of these techniques in a real clinical and screening environment. The main purpose of this work is to implement and evaluate the performance of convolutional neural networks (CNNs) at automatically classifying a complete DBT image for the presence or absence of MCs (without any prior identification of regions). Four popular deep CNNs were trained and compared with a new architecture proposed by us, the task being the classification of DBT cases by absence or presence of MCs. A public database of realistic simulated data was used, and the whole DBT image was taken as input. DBT data were considered both without and with preprocessing, to study the impact of noise reduction and contrast enhancement methods on the evaluation of MCs with CNNs. The area under the receiver operating characteristic curve (AUC) was used to evaluate performance. Very promising results were achieved, with a maximum AUC of 94.19% for GoogLeNet. The second-best AUC, 91.17%, was obtained with a newly implemented network, CNN-a. This CNN was also the fastest, making it a very interesting model for other studies. With this work, encouraging outcomes were achieved, with results similar to those reported for the detection of larger lesions such as masses. Moreover, given the difficulty of visualizing MCs, which are often spread over several slices, this work may have an important impact on the clinical analysis of DBT images.
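The AUC figures quoted in this abstract can be computed directly from the ranking of classifier scores, without a full ROC curve. A minimal NumPy sketch using the Mann-Whitney identity (toy labels and scores below are illustrative, not data from the study):

```python
import numpy as np

def auc_score(labels, scores):
    # AUC equals the probability that a randomly chosen positive case is
    # scored above a randomly chosen negative one (Mann-Whitney U divided
    # by n_pos * n_neg), with ties counted as half a concordant pair.
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # concordant pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: 4 MC-present (label 1) and 4 MC-absent (label 0) cases
y = [1, 1, 1, 1, 0, 0, 0, 0]
s = [0.9, 0.8, 0.75, 0.3, 0.6, 0.4, 0.2, 0.1]
print(auc_score(y, s))  # 0.875
```

An AUC of 1.0 corresponds to perfect separation of positives and negatives; 0.5 corresponds to chance-level ranking.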
2. Malliori A, Pallikarakis N. Breast cancer detection using machine learning in digital mammography and breast tomosynthesis: A systematic review. Health Technol 2022. [DOI: 10.1007/s12553-022-00693-4]
3. Ricciardi R, Mettivier G, Staffa M, Sarno A, Acampora G, Minelli S, Santoro A, Antignani E, Orientale A, Pilotti I, Santangelo V, D'Andria P, Russo P. A deep learning classifier for digital breast tomosynthesis. Phys Med 2021; 83:184-193. [DOI: 10.1016/j.ejmp.2021.03.021]
4. Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer: recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424] [PMCID: PMC7362917] [DOI: 10.1259/bjr.20190580]
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze patient information, and the results can be used to assist clinicians in their decision-making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of past cases in the population. CAD systems can be developed to provide decision support for many applications in patient care, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, and recurrence and prognosis prediction. The new state-of-the-art machine learning technique known as deep learning (DL) has revolutionized speech and text recognition as well as computer vision. The potential for a major breakthrough by DL in medical image analysis and other CAD applications for patient care has brought about unprecedented excitement in applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we provide an overview of recent developments in CAD using DL in breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into clinical workflow.
Affiliation(s)
- Heang-Ping Chan: Department of Radiology, University of Michigan, Ann Arbor, MI, United States
- Ravi K. Samala: Department of Radiology, University of Michigan, Ann Arbor, MI, United States
5. Yang B, Wu Y, Zhou Z, Li S, Qin G, Chen L, Wang J. A collection input based support tensor machine for lesion malignancy classification in digital breast tomosynthesis. Phys Med Biol 2019; 64:235007. [PMID: 31698349] [PMCID: PMC7103089] [DOI: 10.1088/1361-6560/ab553d]
Abstract
Digital breast tomosynthesis (DBT), with improved lesion conspicuity and characterization, has been adopted in screening practice. DBT-based diagnosis depends strongly on the physician's experience, so an automatic lesion malignancy classification model for DBT could improve the consistency of diagnosis among physicians. Tensor-based approaches that use the original imaging data as input have shown promising results for many classification tasks. However, DBT data are pseudo-3D volumetric images, as the slice spacing of DBT is much coarser than the in-plane resolution. Thus, directly constructing DBT as a third-order tensor in a conventional tensor-based classifier, which introduces additional information along the slice-spacing dimension, leads to inconsistency across the three dimensions. To avoid this inconsistency, we introduce a collection input based support tensor machine (CISTM) classifier that uses a tensor collection as input for classifying lesion malignancy in DBT. In CISTM, instead of introducing the third dimension directly into the geometry construction, the third-dimension structural relationship is captured by weight parameters in the decision function, which are constructed dynamically and automatically during classifier training and are more consistent with the pseudo-3D nature of DBT. We tested our method on a DBT dataset of 926 images, of which 262 were malignant and 664 benign. We compared our method with the latest tensor-based method, the kernelled support tensor machine (KSTM), which does not consider the unique non-uniform resolution of DBT. Experimental results show that the CISTM classifier is effective for classifying breast lesion malignancy in DBT and that it outperforms the KSTM classifier.
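The core idea of relating the slice dimension through weights in the decision function, rather than stacking slices into one third-order tensor, can be illustrated with a toy linear sketch. This is a simplified stand-in, not the CISTM formulation itself: all arrays, shapes, and the uniform slice weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One pseudo-3D DBT case: n_slices 2D slices with fine in-plane resolution.
n_slices, h, w = 5, 8, 8
volume = rng.normal(size=(n_slices, h, w))

# Shared in-plane weights score each slice; separate per-slice weights
# (learned during training in the real method) combine the slice scores,
# so the coarse slice dimension never enters the tensor geometry.
W = rng.normal(size=(h, w))
slice_weights = np.full(n_slices, 1.0 / n_slices)
bias = 0.0

slice_scores = np.tensordot(volume, W, axes=([1, 2], [0, 1]))  # shape (n_slices,)
decision = float(slice_weights @ slice_scores + bias)
label = 1 if decision > 0 else 0   # 1 = malignant, 0 = benign
print(slice_scores.shape, label)
```

The point of the sketch is structural: the in-plane dimensions share one weight template, while the slice dimension contributes only through the learned combination weights.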
Affiliation(s)
- Benjuan Yang: School of Mathematics and Sciences, Guizhou Normal University, Guiyang, 50001, PR China; Medical Artificial Intelligence and Automation (MAIA) Lab, University of Texas Southwestern Medical Center, Dallas, TX 75390, US
- Yingjiang Wu: School of Information Engineering, Guangdong Medical University, Dongguan, 523808, PR China
- Zhiguo Zhou: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75235, US; MAIA Lab, University of Texas Southwestern Medical Center, Dallas, TX 75390, US
- Shulong Li: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, PR China
- Genggeng Qin: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, PR China
- Liyuan Chen: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75235, US; MAIA Lab, University of Texas Southwestern Medical Center, Dallas, TX 75390, US
- Jing Wang: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75235, US; MAIA Lab, University of Texas Southwestern Medical Center, Dallas, TX 75390, US
6. Li X, Qin G, He Q, Sun L, Zeng H, He Z, Chen W, Zhen X, Zhou L. Digital breast tomosynthesis versus digital mammography: integration of image modalities enhances deep learning-based breast mass classification. Eur Radiol 2019; 30:778-788. [DOI: 10.1007/s00330-019-06457-5]
Affiliation(s)
- Xin Li: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Genggeng Qin: Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Qiang He: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Lei Sun: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Hui Zeng: Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Zilong He: Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Weiguo Chen: Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Xin Zhen: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Linghong Zhou: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
7. Geras KJ, Mann RM, Moy L. Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives. Radiology 2019; 293:246-259. [PMID: 31549948] [DOI: 10.1148/radiol.2019182627]
Abstract
Although computer-aided diagnosis (CAD) is widely used in mammography, conventional CAD programs that use prompts to indicate potential cancers on mammograms have not led to an improvement in diagnostic accuracy. Because of advances in machine learning, especially the use of deep (multilayered) convolutional neural networks, artificial intelligence has undergone a transformation that has improved the quality of model predictions. Recently, such deep learning algorithms have been applied to mammography and digital breast tomosynthesis (DBT). In this review, the authors explain how deep learning works in the context of mammography and DBT and define the important technical challenges. Subsequently, they discuss the current status and future perspectives of artificial intelligence-based clinical applications for mammography, DBT, and radiomics. Available algorithms are advanced and approach the performance of radiologists, especially for cancer detection and risk prediction at mammography. However, clinical validation is largely lacking, and it is not clear how the power of deep learning should be used to optimize practice. Further development of deep learning models is necessary for DBT, and this requires collection of larger databases. It is expected that deep learning will eventually have an important role in DBT, including the generation of synthetic images.
Affiliation(s)
- Krzysztof J Geras, Ritse M Mann, Linda Moy: From the Center for Biomedical Imaging (K.J.G., L.M.), Center for Data Science (K.J.G.), Center for Advanced Imaging Innovation and Research (L.M.), and Laura and Isaac Perlmutter Cancer Center (L.M.), New York University School of Medicine, 160 E 34th St, 3rd Floor, New York, NY 10016; Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Nijmegen, the Netherlands (R.M.M.); and Department of Radiology, the Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands (R.M.M.)
8. Park H, Lee HJ, Kim HG, Ro YM, Shin D, Lee SR, Kim SH, Kong M. Endometrium segmentation on transvaginal ultrasound image using key-point discriminator. Med Phys 2019; 46:3974-3984. [PMID: 31230366] [DOI: 10.1002/mp.13677]
Abstract
PURPOSE Transvaginal ultrasound imaging provides useful information for diagnosing endometrial pathologies and reproductive health. Endometrium segmentation in transvaginal ultrasound (TVUS) images is very challenging due to ambiguous boundaries and heterogeneous textures. In this study, we developed a new segmentation framework that provides robust segmentation against the ambiguous boundaries and heterogeneous textures of TVUS images. METHODS To achieve endometrium segmentation from TVUS images, we propose a new segmentation framework with a discriminator guided by four key points of the endometrium (namely, the endometrial cavity tip, the internal os of the cervix, and the two thickest points between the two basal layers on the anterior and posterior uterine walls). The key points of the endometrium are defined as meaningful points related to the characteristics of the endometrial morphology, namely the length and thickness of the endometrium. In the proposed segmentation framework, the key-point discriminator distinguishes a predicted segmentation map from a ground-truth segmentation map according to the key-point maps. Meanwhile, the endometrium segmentation network predicts accurate segmentation results that the key-point discriminator cannot discriminate. In this adversarial way, the key-point information containing endometrial morphology characteristics is effectively incorporated into the segmentation network. The segmentation network can accurately find the segmentation boundary while the key-point discriminator learns the shape distribution of the endometrium. Moreover, the endometrium segmentation can be robust to the heterogeneous texture of the endometrium. We conducted an experiment on a TVUS dataset containing 3,372 sagittal TVUS images and the corresponding key points. The dataset was collected by three hospitals (Ewha Womans University School of Medicine, Asan Medical Center, and Yonsei University College of Medicine) with the approval of each hospital's Institutional Review Board. For verification, fivefold cross-validation was performed. RESULTS The proposed key-point discriminator improved the performance of the endometrium segmentation, achieving 82.67% for the Dice coefficient and 70.46% for the Jaccard coefficient. In comparison, on the same TVUS images, U-Net showed 58.69% for the Dice coefficient and 41.59% for the Jaccard coefficient. The qualitative performance of the endometrium segmentation was also improved over conventional deep learning segmentation networks. Our experimental results indicated robust segmentation by the proposed method on TVUS images with heterogeneous texture and unclear boundaries. In addition, the effect of the key-point discriminator was verified by an ablation study. CONCLUSION We proposed a key-point discriminator to train a segmentation network for robust segmentation of the endometrium in TVUS images. By utilizing the key-point information, the proposed method showed more reliable and accurate segmentation performance and outperformed conventional segmentation networks in both qualitative and quantitative comparisons.
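The Dice and Jaccard coefficients reported above are standard overlap measures between a predicted mask and a ground-truth mask. A minimal NumPy sketch with toy 4x4 masks (not data from the study):

```python
import numpy as np

def dice(pred, gt):
    # Dice = 2|A ∩ B| / (|A| + |B|)
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    # Jaccard = |A ∩ B| / |A ∪ B|; related to Dice by J = D / (2 - D)
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy binary masks: predicted segmentation vs ground truth
gt   = np.array([[0,0,0,0],[0,1,1,0],[0,1,1,0],[0,0,0,0]])
pred = np.array([[0,0,0,0],[0,1,1,1],[0,1,0,0],[0,0,0,0]])
print(dice(pred, gt), jaccard(pred, gt))  # 0.75 0.6
```

Because J = D / (2 - D), Dice is always at least as large as Jaccard, which matches the ordering of the percentages quoted in the abstract.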
Affiliation(s)
- Hyenok Park: School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea
- Hong Joo Lee: School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea
- Hak Gu Kim: School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea
- Yong Man Ro: School of Electrical Engineering, KAIST, Daejeon, 34141, Republic of Korea
- Dongkuk Shin: Medical Image Development Group, R&D Center, Samsung Medison, Seongnam, 13530, Republic of Korea
- Sa Ra Lee: Department of Obstetrics and Gynecology, Ewha Womans University School of Medicine, Seoul, 07985, Republic of Korea
- Sung Hoon Kim: Department of Obstetrics and Gynecology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, 05505, Republic of Korea
- Mikyung Kong: Department of Obstetrics and Gynecology, Yonsei University College of Medicine, Seoul, 03722, Republic of Korea
9. Mendel K, Li H, Sheth D, Giger M. Transfer Learning From Convolutional Neural Networks for Computer-Aided Diagnosis: A Comparison of Digital Breast Tomosynthesis and Full-Field Digital Mammography. Acad Radiol 2019; 26:735-743. [PMID: 30076083] [DOI: 10.1016/j.acra.2018.06.019]
Abstract
RATIONALE AND OBJECTIVES With the growing adoption of digital breast tomosynthesis (DBT) in breast cancer screening, we compare the performance of deep learning computer-aided diagnosis on DBT images to that of conventional full-field digital mammography (FFDM). MATERIALS AND METHODS In this study, we retrospectively collected FFDM and DBT images of 78 biopsy-proven lesions from 76 patients. A region of interest was selected for each lesion on FFDM, synthesized 2D, and DBT key slice images. Features were extracted from each lesion using a pretrained convolutional neural network (CNN) and served as input to a support vector machine classifier trained in the task of predicting likelihood of malignancy. RESULTS From receiver operating characteristic (ROC) analysis of all 78 lesions, the synthesized 2D image performed best in both the craniocaudal view (area under the ROC curve [AUC] = 0.81, SE = 0.05) and mediolateral oblique view (AUC = 0.88, SE = 0.04) in the task of lesion characterization. When craniocaudal and mediolateral oblique data of each lesion were merged through soft voting, the DBT key slice image performed best (AUC = 0.89, SE = 0.04). When only masses and architectural distortions (ARDs) were considered, DBT performed significantly better than FFDM (p = 0.024). CONCLUSION DBT performed significantly better than FFDM in the merged-view classification of mass and ARD lesions. The increased performance suggests that the information extracted by the CNN from DBT images may be more relevant to lesion malignancy status than the information extracted from FFDM images. Therefore, this study provides supporting evidence for the efficacy of computer-aided diagnosis on DBT in the evaluation of mass and ARD lesions.
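The soft voting used to merge the two mammographic views of each lesion amounts to averaging the per-view malignancy probabilities before thresholding. A minimal NumPy sketch (the probabilities below are invented, not the study's outputs):

```python
import numpy as np

def soft_vote(prob_cc, prob_mlo):
    # Soft voting: element-wise mean of the craniocaudal (CC) and
    # mediolateral oblique (MLO) classifier probabilities per lesion.
    return (np.asarray(prob_cc) + np.asarray(prob_mlo)) / 2.0

prob_cc  = np.array([0.80, 0.30, 0.55])   # per-lesion scores, CC view
prob_mlo = np.array([0.60, 0.20, 0.95])   # per-lesion scores, MLO view
merged = soft_vote(prob_cc, prob_mlo)
pred = (merged >= 0.5).astype(int)        # 1 = malignant, 0 = benign
print(merged, pred)  # [0.7 0.25 0.75] [1 0 1]
```

Averaging the calibrated scores (rather than hard majority voting on binary labels) preserves each view's confidence, which is what allows the merged-view AUC to exceed either single view.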
Affiliation(s)
- Kayla Mendel: The University of Chicago, 5801 S Ellis Ave, Chicago, Illinois
- Hui Li: The University of Chicago, 5801 S Ellis Ave, Chicago, Illinois
- Deepa Sheth: The University of Chicago, 5801 S Ellis Ave, Chicago, Illinois
- Maryellen Giger: The University of Chicago, 5801 S Ellis Ave, Chicago, Illinois
10. de Oliveira HC, Mencattini A, Casti P, Catani JH, de Barros N, Gonzaga A, Martinelli E, da Costa Vieira MA. A cross-cutting approach for tracking architectural distortion locii on digital breast tomosynthesis slices. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.01.001]
11. Samala RK, Hadjiiski L, Helvie MA, Richter CD, Cha KH. Breast Cancer Diagnosis in Digital Breast Tomosynthesis: Effects of Training Sample Size on Multi-Stage Transfer Learning Using Deep Neural Nets. IEEE Trans Med Imaging 2019; 38:686-696. [PMID: 31622238] [PMCID: PMC6812655] [DOI: 10.1109/tmi.2018.2870343]
Abstract
In this paper, we developed a deep convolutional neural network (CNN) for the classification of malignant and benign masses in digital breast tomosynthesis (DBT) using a multi-stage transfer learning approach that utilized data from similar auxiliary domains for intermediate-stage fine-tuning. Breast imaging data from DBT, digitized screen-film mammography, and digital mammography, totaling 4039 unique regions of interest (1797 malignant and 2242 benign), were collected. Using cross-validation, we selected the best transfer network from six transfer networks by varying the level up to which the convolutional layers were frozen. In a single-stage transfer learning approach, knowledge from a CNN trained on ImageNet data was fine-tuned directly with the DBT data. In a multi-stage transfer learning approach, knowledge learned from ImageNet was first fine-tuned with the mammography data and then fine-tuned with the DBT data. Two transfer networks were compared for the second-stage transfer learning: freezing most of the CNN structures versus freezing only the first convolutional layer. We studied the dependence of the classification performance on training sample size for various transfer learning and fine-tuning schemes by varying the training data from 1% to 100% of the available sets. The area under the receiver operating characteristic curve (AUC) was used as the performance measure. The view-based AUC on the test set for single-stage transfer learning was 0.85 ± 0.05 and improved significantly (p < 0.05) to 0.91 ± 0.03 for multi-stage learning. This paper demonstrated that, when the training sample size from the target domain is limited, an additional stage of transfer learning using data from a similar auxiliary domain is advantageous.
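The layer-freezing scheme central to this transfer-learning study can be illustrated with a toy update loop: frozen parameters simply receive no gradient step, so only the later, task-specific layers adapt to the new domain. The parameter names, shapes, and dummy gradients below are invented stand-ins for a real CNN and backpropagation:

```python
import numpy as np

rng = np.random.default_rng(1)
params = {
    "conv1": rng.normal(size=(3, 3)),   # early layer: frozen during transfer
    "fc":    rng.normal(size=(3, 2)),   # late layer: fine-tuned on target data
}
frozen = {"conv1"}
grads = {name: np.ones_like(p) for name, p in params.items()}  # dummy grads

lr = 0.1
before = {name: p.copy() for name, p in params.items()}
for name, p in params.items():
    if name in frozen:
        continue              # frozen layer: skip the update entirely
    p -= lr * grads[name]     # SGD step only on trainable layers

print(np.allclose(params["conv1"], before["conv1"]))  # True: unchanged
print(np.allclose(params["fc"], before["fc"]))        # False: updated
```

Varying which entries go into the `frozen` set corresponds to the study's comparison of freezing most of the network versus freezing only the first convolutional layer.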
12. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. [PMID: 30694159] [DOI: 10.1148/radiol.2018180547]
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning, specifically convolutional neural networks, to radiologic imaging, focused on five major organ systems: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion of current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Affiliation(s)
- Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, Eyal Klang: From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
13. Kim ST, Lee JH, Lee H, Ro YM. Visually interpretable deep network for diagnosis of breast masses on mammograms. Phys Med Biol 2018; 63:235025. [PMID: 30511660] [DOI: 10.1088/1361-6560/aaef0a]
Abstract
Recently, deep learning technology has achieved various successes in medical image analysis studies, including computer-aided diagnosis (CADx). However, current CADx approaches based on deep learning are limited in their ability to interpret diagnostic decisions. This limited interpretability is a major challenge for the practical use of current deep learning approaches. In this paper, a novel visually interpretable deep network framework is proposed to provide diagnostic decisions with visual interpretation. The proposed method is motivated by the fact that radiologists characterize breast masses according to the Breast Imaging Reporting and Data System (BIRADS). The proposed deep network framework consists of a BIRADS guided diagnosis network and a BIRADS critic network. A 2D map, named the BIRADS guide map, is generated in the inference process of the deep network. The visual features extracted from the breast masses can be refined by the BIRADS guide map, which helps the deep network focus on more informative areas. The BIRADS critic network makes the BIRADS guide map relevant to the characterization of masses in terms of the BIRADS description. To verify the proposed method, comparative experiments were conducted on a public mammogram database. On the independent test set (170 malignant masses and 170 benign masses), the proposed method showed significantly higher performance compared to the deep network approach without the BIRADS guide map (p < 0.05). Moreover, visualization was conducted to show the locations from which the deep network exploited more information. This study demonstrated that the proposed visually interpretable CADx framework could be a promising approach for visually interpreting the diagnostic decisions of a deep network.
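The refinement step at the heart of this framework, a 2D guide map re-weighting the spatial feature maps, can be sketched in a few lines. This is a generic spatial-attention illustration under invented shapes and random arrays, not the paper's BIRADS-supervised construction:

```python
import numpy as np

rng = np.random.default_rng(2)
features = rng.normal(size=(4, 6, 6))        # (channels, H, W) feature maps
guide_logits = rng.normal(size=(6, 6))       # raw 2D guide map, one value per pixel

# Squash the guide map into (0, 1), then re-weight every feature channel
# element-wise so spatial locations with high guide values dominate.
guide = 1.0 / (1.0 + np.exp(-guide_logits))  # sigmoid
refined = features * guide[None, :, :]       # broadcast across channels

print(refined.shape)  # (4, 6, 6)
```

In the actual method, the guide map is produced inside the network and pushed toward BIRADS-relevant regions by the critic network; the sketch shows only the element-wise refinement mechanics.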
Affiliation(s)
- Seong Tae Kim: School of Electrical Engineering, KAIST, 291, Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea