1. Gómez-Flores W, Gregorio-Calas MJ, Coelho de Albuquerque Pereira W. BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems. Med Phys 2024; 51:3110-3123. [PMID: 37937827] [DOI: 10.1002/mp.16812]
Abstract
PURPOSE Computer-aided diagnosis (CAD) systems on breast ultrasound (BUS) aim to increase the efficiency and effectiveness of breast screening, helping specialists to detect and classify breast lesions. CAD system development requires a set of annotated images, including lesion segmentations, biopsy results to distinguish benign from malignant cases, and BI-RADS categories to indicate the likelihood of malignancy. In addition, standardized partitions of training, validation, and test sets promote reproducibility and fair comparisons between different approaches. Thus, we present a publicly available BUS dataset whose novelty lies in the substantially larger number of cases with the above-mentioned annotations and in the inclusion of standardized partitions to objectively assess and compare CAD systems. ACQUISITION AND VALIDATION METHODS The BUS dataset comprises 1875 anonymized images from 1064 female patients acquired with four ultrasound scanners during systematic studies at the National Institute of Cancer (Rio de Janeiro, Brazil). The dataset includes biopsy-proven tumors divided into 722 benign and 342 malignant cases. A senior ultrasonographer performed BI-RADS assessments in categories 2 to 5 and manually outlined the breast lesions to obtain ground-truth segmentations. Furthermore, 5- and 10-fold cross-validation partitions are provided to standardize the training and test sets used to evaluate and reproduce CAD systems. Finally, to validate the utility of the BUS dataset, an evaluation framework is implemented to assess the performance of deep neural networks for segmenting and classifying breast lesions. DATA FORMAT AND USAGE NOTES The BUS dataset is publicly available for academic and research purposes through an open-access repository under the name BUS-BRA: A Breast Ultrasound Dataset for Assessing CAD Systems.
BUS images and reference segmentations are saved in Portable Network Graphics (PNG) files, and the dataset information is stored in separate Comma-Separated Values (CSV) files. POTENTIAL APPLICATIONS The BUS-BRA dataset can be used to develop and assess artificial intelligence-based lesion detection and segmentation methods, as well as the classification of BUS images into pathological classes and BI-RADS categories. Other potential applications include developing image processing methods, such as despeckle filtering and contrast enhancement, to improve image quality, and feature engineering for image description.
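The standardized CSV partitions described above are the key to reproducible comparisons: a CAD study trains on the cases outside a given fold and tests on the cases inside it. The sketch below illustrates that workflow with stdlib Python only; the column names (ID, Pathology, BIRADS, Fold5) are illustrative assumptions, not the released dataset's actual schema.

```python
import csv, io

# Hypothetical metadata mimicking a BUS-BRA-style CSV; the real file
# accompanies the PNG images in the open-access repository.
csv_text = """ID,Pathology,BIRADS,Fold5
bus_0001,benign,2,0
bus_0002,malignant,5,1
bus_0003,benign,3,2
bus_0004,malignant,4,0
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

def split_by_fold(rows, fold_col, test_fold):
    """Use a precomputed CV partition instead of a random split, so
    results are comparable across studies."""
    train = [r for r in rows if r[fold_col] != str(test_fold)]
    test = [r for r in rows if r[fold_col] == str(test_fold)]
    return train, test

train, test = split_by_fold(rows, "Fold5", 0)
print(len(train), len(test))  # → 2 2
```

Iterating `test_fold` over 0..4 (or 0..9 for the 10-fold partition) reproduces the full cross-validation loop.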
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Tamaulipas, Mexico
2. Kwon MR, Youn I, Lee MY, Lee HA. Diagnostic Performance of Artificial Intelligence-Based Computer-Aided Detection Software for Automated Breast Ultrasound. Acad Radiol 2024; 31:480-491. [PMID: 37813703] [DOI: 10.1016/j.acra.2023.09.013]
Abstract
RATIONALE AND OBJECTIVES This study aimed to evaluate the diagnostic performance of radiologists using artificial intelligence (AI)-based computer-aided detection (CAD) software to detect suspicious lesions on automated breast ultrasound (ABUS). MATERIALS AND METHODS A total of 262 ABUS-detected breast lesions with histopathological verification (January 2020 to December 2022) were included. Two radiologists reviewed the images and assigned a Breast Imaging Reporting and Data System (BI-RADS) category. ABUS images were classified as positive or negative by the AI-CAD software. The BI-RADS category was then readjusted in four ways: the radiologists modified the category using the AI results (AI-aided 1), upgraded or downgraded it according to the AI results (AI-aided 2), only upgraded it for AI-positive results (AI-aided 3), or only downgraded it for AI-negative results (AI-aided 4). The AI-aided diagnostic performances were compared with that of the radiologists alone, and the characteristics of AI-CAD-positive and AI-CAD-negative cancers were compared. RESULTS For 262 lesions (145 malignant and 117 benign) in 231 women (mean age, 52.2 years), the area under the receiver operating characteristic curve (AUC) of the radiologists was 0.870 (95% confidence interval [CI], 0.832-0.908). The AUC improved significantly to 0.919 (95% CI, 0.890-0.947; P = 0.001) with AI-aided 1, whereas it improved without significance to 0.884 (95% CI, 0.844-0.923), 0.890 (95% CI, 0.852-0.929), and 0.890 (95% CI, 0.853-0.928) with AI-aided 2, 3, and 4, respectively. AI-CAD-negative cancers were smaller, less frequently exhibited the retraction phenomenon, and had lower BI-RADS categories. Among nonmass lesions, AI-CAD-negative cancers showed no posterior shadowing. CONCLUSION AI-CAD implementation significantly improved the radiologists' diagnostic performance and may serve as a valuable diagnostic tool.
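The four readjustment policies can be made concrete with a small helper. The rules below are a simplified sketch of what the abstract describes; a one-step upgrade/downgrade bounded to categories 2-5 is an assumption, since the abstract does not state the exact adjustment magnitude, and AI-aided 1 (free modification by the reader) cannot be captured by a fixed rule.

```python
def adjust(birads, ai_positive, policy):
    """Return an adjusted BI-RADS category (bounded to 2..5) under one of
    the rule-based AI-aided reading policies. Step size of 1 is assumed."""
    if policy == "upgrade_only":    # AI-aided 3: act only on positives
        return min(birads + 1, 5) if ai_positive else birads
    if policy == "downgrade_only":  # AI-aided 4: act only on negatives
        return birads if ai_positive else max(birads - 1, 2)
    if policy == "both":            # AI-aided 2: follow AI either way
        return min(birads + 1, 5) if ai_positive else max(birads - 1, 2)
    return birads                   # AI-aided 1 is reader discretion

print(adjust(4, False, "downgrade_only"))  # → 3
```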
Affiliation(s)
- Mi-Ri Kwon
- Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, 29 Saemunan-ro, Jongno-gu, Seoul, 03181, Republic of Korea
- Inyoung Youn
- Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, 29 Saemunan-ro, Jongno-gu, Seoul, 03181, Republic of Korea
- Mi Yeon Lee
- Division of Biostatistics, Department of R&D Management, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Hyun-Ah Lee
- Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, 29 Saemunan-ro, Jongno-gu, Seoul, 03181, Republic of Korea
3. Eida S, Fukuda M, Katayama I, Takagi Y, Sasaki M, Mori H, Kawakami M, Nishino T, Ariji Y, Sumi M. Metastatic Lymph Node Detection on Ultrasound Images Using YOLOv7 in Patients with Head and Neck Squamous Cell Carcinoma. Cancers (Basel) 2024; 16:274. [PMID: 38254765] [PMCID: PMC10813890] [DOI: 10.3390/cancers16020274]
Abstract
Ultrasonography is the preferred modality for detailed evaluation of enlarged lymph nodes (LNs) identified on computed tomography and/or magnetic resonance imaging, owing to its high spatial resolution. However, the diagnostic performance of ultrasonography depends on the examiner's expertise. To support ultrasonographic diagnosis, we developed YOLOv7-based deep learning models for metastatic LN detection on ultrasonography and compared their detection performance with that of highly experienced radiologists and less experienced residents. We enrolled 462 B- and D-mode ultrasound images of 261 metastatic and 279 non-metastatic histopathologically confirmed LNs from 126 patients with head and neck squamous cell carcinoma. The YOLOv7-based B- and D-mode models were optimized using B- and D-mode training and validation images, and their detection performance for metastatic LNs was evaluated using B- and D-mode test images, respectively. The D-mode model's performance was comparable to that of the radiologists and superior to that of the residents on D-mode images, whereas the B-mode model's performance was higher than that of the residents but lower than that of the radiologists on B-mode images. Thus, YOLOv7-based B- and D-mode models can assist less experienced residents in ultrasonographic diagnosis. The D-mode model could raise the diagnostic performance of residents to the level of experienced radiologists.
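Detection models such as the YOLOv7 networks above are typically scored by matching predicted bounding boxes to ground-truth lymph-node boxes via intersection over union (IoU); a prediction counts as a hit when IoU exceeds a threshold such as 0.5. The generic helper below illustrates that metric; it is not the authors' code.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # → 0.14285714285714285 (1/7)
```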
Affiliation(s)
- Sato Eida
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Motoki Fukuda
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan
- Ikuo Katayama
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Yukinori Takagi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Miho Sasaki
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Hiroki Mori
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Maki Kawakami
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Tatsuyoshi Nishino
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
- Yoshiko Ariji
- Department of Oral Radiology, Osaka Dental University, 1-5-17 Otemae, Chuo-ku, Osaka 540-0008, Japan
- Misa Sumi
- Department of Radiology and Biomedical Informatics, Nagasaki University Graduate School of Biomedical Sciences, 1-7-1 Sakamoto, Nagasaki 852-8588, Japan
4. Li JW, Sheng DL, Chen JG, You C, Liu S, Xu HX, Chang C. Artificial intelligence in breast imaging: potentials and challenges. Phys Med Biol 2023; 68:23TR01. [PMID: 37722385] [DOI: 10.1088/1361-6560/acfade]
Abstract
Breast cancer, which is the most common type of malignant tumor among humans, is a leading cause of death in females. Standard treatment strategies, including neoadjuvant chemotherapy, surgery, postoperative chemotherapy, targeted therapy, endocrine therapy, and radiotherapy, are tailored for individual patients. Such personalized therapies have tremendously reduced the threat of breast cancer in females. Furthermore, early imaging screening plays an important role in reducing the treatment cycle and improving breast cancer prognosis. The recent innovative revolution in artificial intelligence (AI) has aided radiologists in the early and accurate diagnosis of breast cancer. In this review, we introduce the necessity of incorporating AI into breast imaging and the applications of AI in mammography, ultrasonography, magnetic resonance imaging, and positron emission tomography/computed tomography based on published articles since 1994. Moreover, the challenges of AI in breast imaging are discussed.
Affiliation(s)
- Jia-Wei Li
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Dan-Li Sheng
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Jian-Gang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication & Electronic Engineering, East China Normal University, People's Republic of China
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Shuai Liu
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Hui-Xiong Xu
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai 200032, People's Republic of China
- Cai Chang
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
5. Tang S, Jing C, Jiang Y, Yang K, Huang Z, Wu H, Cui C, Shi S, Ye X, Tian H, Song D, Xu J, Dong F. The effect of image resolution on convolutional neural networks in breast ultrasound. Heliyon 2023; 9:e19253. [PMID: 37664701] [PMCID: PMC10469557] [DOI: 10.1016/j.heliyon.2023.e19253]
Abstract
Purpose The objective of this research was to investigate the efficacy of two convolutional neural network (CNN) models, MobileNet and DenseNet121, across input image resolutions ranging from 64×64 to 512×512 pixels, for diagnosing breast cancer. Materials and methods From June 2015 to November 2020, two hospitals collected two-dimensional ultrasound breast images for this retrospective multicenter study. The diagnostic performance of MobileNet and DenseNet121 was compared at the different resolutions. Results MobileNet achieved its best breast cancer diagnosis performance at a 320×320-pixel resolution and DenseNet121 at a 448×448-pixel resolution. Conclusion Our study reveals a significant correlation between image resolution and breast cancer diagnosis accuracy. The comparison of MobileNet and DenseNet121 highlights that lightweight CNNs (LW-CNNs) can achieve performance similar to, or even slightly better than, heavyweight CNNs (HW-CNNs) on ultrasound images, with a lower prediction time per image.
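The resolution sweep above amounts to resizing the same image to each candidate input size before feeding it to the CNN. The pure-Python nearest-neighbor resize below stands in for the library resizers used in practice (the study's actual preprocessing is not specified here).

```python
def resize_nn(img, new_h, new_w):
    """Nearest-neighbor resampling of a 2D list-of-lists image; a stand-in
    for the resizers used to produce each candidate input resolution
    (64x64 up to 512x512 in the study)."""
    h, w = len(img), len(img[0])
    return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

# Tiny 2x2 example upsampled to 4x4; each source pixel becomes a 2x2 block.
print(resize_nn([[1, 2], [3, 4]], 4, 4))
# → [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```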
Affiliation(s)
- Shuzhen Tang
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Chen Jing
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Yitao Jiang
- Research and Development Department, Illuminate, LLC, Shenzhen, Guangdong 518000, China
- Keen Yang
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Zhibin Huang
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Huaiyu Wu
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Chen Cui
- Research and Development Department, Illuminate, LLC, Shenzhen, Guangdong 518000, China
- Siyuan Shi
- Research and Development Department, Illuminate, LLC, Shenzhen, Guangdong 518000, China
- Xiuqin Ye
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Hongtian Tian
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Di Song
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Shenzhen People's Hospital, Shenzhen 518020, Guangdong, China
- Jinfeng Xu
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
- Fajin Dong
- Second Clinical College of Jinan University, Shenzhen 518020, Guangdong, China
6. Sexauer R, Hejduk P, Borkowski K, Ruppert C, Weikert T, Dellas S, Schmidt N. Diagnostic accuracy of automated ACR BI-RADS breast density classification using deep convolutional neural networks. Eur Radiol 2023; 33:4589-4596. [PMID: 36856841] [PMCID: PMC10289992] [DOI: 10.1007/s00330-023-09474-7]
Abstract
OBJECTIVES High breast density is a well-known risk factor for breast cancer. This study aimed to develop and adapt two (MLO, CC) deep convolutional neural networks (DCNNs) for automatic breast density classification on synthetic 2D tomosynthesis reconstructions. METHODS In total, 4605 synthetic 2D images (1665 patients, age: 57 ± 37 years) were labeled according to ACR (American College of Radiology) density categories (A-D). Two DCNNs, each with 11 convolutional layers and 3 fully connected layers, were trained with 70% of the data, whereas 20% was used for validation. The remaining 10% served as a separate test dataset of 460 images (380 patients). All mammograms in the test dataset were read blinded by two radiologists (reader 1 with two and reader 2 with 11 years of dedicated experience in breast imaging), and their consensus was used as the reference standard. Inter- and intra-reader reliabilities were assessed by calculating Cohen's kappa coefficients, and the diagnostic accuracy of the automated classification was evaluated. RESULTS The two models for MLO and CC projections had a mean sensitivity of 80.4% (95% CI 72.2-86.9), a specificity of 89.3% (95% CI 85.4-92.3), and an accuracy of 89.6% (95% CI 88.1-90.9) in differentiating ACR A/B from ACR C/D. DCNN-versus-human and inter-reader agreement were both "substantial" (Cohen's kappa: 0.61 versus 0.63). CONCLUSION The DCNN allows accurate, standardized, and observer-independent classification of breast density based on the ACR BI-RADS system. KEY POINTS • A DCNN performs on par with human experts in breast density assessment for synthetic 2D tomosynthesis reconstructions. • The proposed technique may be useful for accurate, standardized, and observer-independent breast density evaluation of tomosynthesis.
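Cohen's kappa, the agreement statistic reported above, corrects observed agreement for the agreement expected by chance from each rater's label frequencies. A generic stdlib implementation (not the study's code):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two raters' label sequences of equal length:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance
    return (po - pe) / (1 - pe)

print(round(cohens_kappa(list("AABBC"), list("AABBB")), 3))  # → 0.667
```

Values above 0.61 are conventionally read as "substantial" agreement, matching the study's wording.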
Affiliation(s)
- Raphael Sexauer
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
- Patryk Hejduk
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- Karol Borkowski
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- Carlotta Ruppert
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- Thomas Weikert
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
- Sophie Dellas
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
- Noemi Schmidt
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
7. Retson TA, Eghtedari M. Expanding Horizons: The Realities of CAD, the Promise of Artificial Intelligence, and Machine Learning's Role in Breast Imaging beyond Screening Mammography. Diagnostics (Basel) 2023; 13:2133. [PMID: 37443526] [DOI: 10.3390/diagnostics13132133]
Abstract
Artificial intelligence (AI) applications in mammography have gained significant popular attention; however, AI has the potential to revolutionize other aspects of breast imaging beyond simple lesion detection. AI has the potential to enhance risk assessment by combining conventional factors with imaging and improve lesion detection through a comparison with prior studies and considerations of symmetry. It also holds promise in ultrasound analysis and automated whole breast ultrasound, areas marked by unique challenges. AI's potential utility also extends to administrative tasks such as MQSA compliance, scheduling, and protocoling, which can reduce the radiologists' workload. However, adoption in breast imaging faces limitations in terms of data quality and standardization, generalizability, benchmarking performance, and integration into clinical workflows. Developing methods for radiologists to interpret AI decisions, and understanding patient perspectives to build trust in AI results, will be key future endeavors, with the ultimate aim of fostering more efficient radiology practices and better patient care.
Affiliation(s)
- Tara A Retson
- Department of Radiology, University of California, San Diego, CA 92093, USA
- Mohammad Eghtedari
- Department of Radiology, University of California, San Diego, CA 92093, USA
8. Hejduk P, Sexauer R, Ruppert C, Borkowski K, Unkelbach J, Schmidt N. Automatic and standardized quality assurance of digital mammography and tomosynthesis with deep convolutional neural networks. Insights Imaging 2023; 14:90. [PMID: 37199794] [DOI: 10.1186/s13244-023-01396-8]
Abstract
OBJECTIVES The aim of this study was to develop and validate a commercially available AI platform for the automatic determination of image quality in mammography and tomosynthesis based on a standardized set of features. MATERIALS AND METHODS In this retrospective study, 11,733 mammograms and synthetic 2D reconstructions from tomosynthesis of 4200 patients from two institutions were analyzed by assessing the presence of seven features that affect image quality with regard to breast positioning. Deep learning was applied to train five dCNN models on features detecting the presence of anatomical landmarks and three dCNN models on localization features. The validity of the models was assessed by calculating the mean squared error on a test dataset and comparing the results with readings by experienced radiologists. RESULTS Accuracies of the dCNN models ranged between 93.0% for nipple visualization and 98.5% for depiction of the pectoralis muscle in the CC view. Calculations based on regression models allow for precise measurements of distances and angles of breast positioning on mammograms and synthetic 2D reconstructions from tomosynthesis. All models showed almost perfect agreement with human reading, with Cohen's kappa scores above 0.9. CONCLUSIONS An AI-based quality assessment system using a dCNN allows for precise, consistent, and observer-independent rating of digital mammography and synthetic 2D reconstructions from tomosynthesis. Automation and standardization of quality assessment enable real-time feedback to technicians and radiologists, which should reduce the number of inadequate examinations according to PGMI (Perfect, Good, Moderate, Inadequate) criteria, reduce the number of recalls, and provide a dependable training platform for inexperienced technicians.
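The regression models above are validated with mean squared error on predicted positioning distances and angles. A minimal helper illustrating the metric (generic, not the platform's code):

```python
def mse(pred, true):
    """Mean squared error between predicted and reference measurements,
    e.g. nipple-to-pectoralis distances or positioning angles."""
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)

# One prediction is off by 2 units; MSE = (0 + 0 + 4) / 3.
print(mse([1.0, 2.0, 4.0], [1.0, 2.0, 2.0]))  # → 1.3333333333333333
```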
Affiliation(s)
- Patryk Hejduk
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
- Raphael Sexauer
- Breast Imaging, Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland
- Carlotta Ruppert
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
- Karol Borkowski
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
- Jan Unkelbach
- Department of Radiation Oncology, University Hospital Zurich, Zurich, Switzerland
- Noemi Schmidt
- Breast Imaging, Radiology and Nuclear Medicine, University Hospital Basel, Basel, Switzerland
9. Lan Z, Peng Y. Artificial intelligence diagnosis based on breast ultrasound imaging. Zhong Nan Da Xue Xue Bao Yi Xue Ban (Journal of Central South University, Medical Sciences) 2022; 47:1009-1015. [PMID: 36097768] [PMCID: PMC10950100] [DOI: 10.11817/j.issn.1672-7347.2022.220110]
Abstract
Breast cancer has become the leading cancer in women. The development of breast ultrasound artificial intelligence (AI) diagnostic technology is conducive to promoting the precise diagnosis and treatment of breast cancer and alleviating the heavy medical burden caused by unbalanced regional development in China. In recent years, while improving diagnostic efficiency, AI technology has been continuously combined with various clinical application scenarios, thereby providing more comprehensive and reliable evidence-based suggestions for clinical decision-making. Although AI diagnostic technologies based on conventional breast ultrasound gray-scale images and cutting-edge techniques such as three-dimensional (3D) imaging and elastography have developed to some extent, technical pain points, barriers to dissemination, and ethical dilemmas remain in the development of AI diagnostic technologies for breast ultrasound.
Affiliation(s)
- Zihan Lan
- Department of Ultrasound, West China Hospital, Sichuan University, Chengdu 610000, China
- Yulan Peng
- Department of Ultrasound, West China Hospital, Sichuan University, Chengdu 610000, China
10. Abel F, Landsmann A, Hejduk P, Ruppert C, Borkowski K, Ciritsis A, Rossi C, Boss A. Detecting Abnormal Axillary Lymph Nodes on Mammograms Using a Deep Convolutional Neural Network. Diagnostics (Basel) 2022; 12:1347. [PMID: 35741157] [PMCID: PMC9221636] [DOI: 10.3390/diagnostics12061347]
Abstract
The purpose of this study was to determine the feasibility of a deep convolutional neural network (dCNN) for accurately detecting abnormal axillary lymph nodes on mammograms. In this retrospective study, 107 mammographic images in mediolateral oblique projection from 74 patients were labeled into three classes: (1) "breast tissue", (2) "benign lymph nodes", and (3) "suspicious lymph nodes". Following data preprocessing, a dCNN model was trained and validated with 5385 images. Subsequently, the trained dCNN was tested on a "real-world" dataset and its performance compared with that of human readers. For visualization, colored probability maps of the classification were calculated using a sliding-window approach. The accuracy was 98% for the training and 99% for the validation set. Confusion matrices of the "real-world" dataset for the three classes, with radiological reports as ground truth, yielded an accuracy of 98.51% for breast tissue, 98.63% for benign lymph nodes, and 95.96% for suspicious lymph nodes. Intraclass correlation between the dCNN and the readers was excellent (0.98), and kappa values were nearly perfect (0.93-0.97). The colormaps successfully detected abnormal lymph nodes with excellent image quality. In this proof-of-principle study in a small patient cohort from a single institution, we found that deep convolutional networks can be trained with high accuracy and reliability to detect abnormal axillary lymph nodes on mammograms.
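The sliding-window probability maps above can be sketched generically: slide a window over the image, score each patch with the classifier, and average the scores per pixel to obtain a map that can be rendered as a colormap. The patch classifier below is a trivial stand-in for the trained dCNN, and window size and stride are illustrative.

```python
def probability_map(img, win, stride, classify):
    """Accumulate per-pixel classifier scores from overlapping windows,
    then average; `classify` maps a win x win patch to a probability."""
    h, w = len(img), len(img[0])
    pmap = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            p = classify([row[c:c + win] for row in img[r:r + win]])
            for rr in range(r, r + win):
                for cc in range(c, c + win):
                    pmap[rr][cc] += p
                    cnt[rr][cc] += 1
    return [[pmap[r][c] / cnt[r][c] if cnt[r][c] else 0.0
             for c in range(w)] for r in range(h)]

# Stand-in classifier: mean intensity of the patch as a "probability".
mean_patch = lambda patch: sum(map(sum, patch)) / (len(patch) ** 2)

img = [[0, 0, 1, 1]] * 4   # left half "background", right half "lesion"
pm = probability_map(img, 2, 2, mean_patch)
print(pm[0])  # → [0.0, 0.0, 1.0, 1.0]
```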