1. Yan P, Gong W, Li M, Zhang J, Li X, Jiang Y, Luo H, Zhou H. TDF-Net: Trusted Dynamic Feature Fusion Network for breast cancer diagnosis using incomplete multimodal ultrasound. Information Fusion 2024;112:102592. DOI: 10.1016/j.inffus.2024.102592.
2. Jiménez-Gaona Y, Álvarez MJR, Castillo-Malla D, García-Jaen S, Carrión-Figueroa D, Corral-Domínguez P, Lakshminarayanan V. BraNet: a mobile application for breast image classification based on deep learning algorithms. Med Biol Eng Comput 2024;62:2737-2756. PMID: 38693328; PMCID: PMC11330402; DOI: 10.1007/s11517-024-03084-1.
Abstract
Mobile health apps are widely used for breast cancer detection with artificial intelligence algorithms, providing radiologists with second opinions and reducing false diagnoses. This study aimed to develop an open-source mobile app named "BraNet" for 2D breast imaging segmentation and classification using deep learning algorithms. During the offline phase, an SNGAN model was trained for synthetic image generation, and the generated images were then used to pre-train the SAM and ResNet18 segmentation and classification models. During the online phase, the BraNet app was developed using the React Native framework, offering a modular deep learning pipeline for mammography (DM) and ultrasound (US) breast image classification. The application operates on a client-server architecture and was implemented in Python for iOS and Android devices. Two diagnostic radiologists were then given a reading test of 290 original ROI images to assign the perceived breast tissue type, and inter-reader agreement was assessed using the kappa coefficient. The BraNet mobile app exhibited higher accuracy for benign and malignant US images (94.7%/93.6%) than for DM images during training I (80.9%/76.9%) and training II (73.7%/72.3%). These results contrast with the radiologists' accuracy, which was 29% for DM classification and 70% for US classification for both readers; that is, the readers also achieved higher accuracy on US ROI classification than on DM images. The kappa values indicate fair agreement (0.3) for DM images and moderate agreement (0.4) for US images for both readers. This suggests that the amount of data is not the only essential factor in training deep learning algorithms; the variety of abnormalities must also be considered, especially in mammography data, where several BI-RADS categories (microcalcifications, nodules, masses, asymmetry, and dense breasts) are present and can affect the accuracy of the app's model.
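The inter-reader agreement figures above are Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A minimal, self-contained sketch of the statistic (the function name and toy labels are illustrative, not taken from the paper):

```python
from collections import Counter

def cohen_kappa(reader1, reader2):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the chance agreement implied by each rater's label frequencies.
    """
    assert len(reader1) == len(reader2)
    n = len(reader1)
    p_o = sum(a == b for a, b in zip(reader1, reader2)) / n
    c1, c2 = Counter(reader1), Counter(reader2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two readers assigning benign (B) / malignant (M) to four ROIs:
kappa = cohen_kappa(["B", "B", "M", "M"], ["B", "M", "M", "M"])
# kappa == 0.5: observed agreement 0.75, chance agreement 0.5
```

On the conventional Landis-Koch scale, 0.21-0.40 is "fair" and 0.41-0.60 "moderate" agreement, which is how the 0.3 (DM) and 0.4 (US) values above are interpreted.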
Affiliation(s)
- Yuliana Jiménez-Gaona
- Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador.
- Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain.
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.
- María José Rodríguez Álvarez
- Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain.
- Darwin Castillo-Malla
- Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador.
- Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain.
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.
- Santiago García-Jaen
- Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador.
- Patricio Corral-Domínguez
- Corporación Médica Monte Sinaí-CIPAM (Centro Integral de Patología Mamaria), Facultad de Ciencias Médicas, Universidad de Cuenca, Cuenca, 010203, Ecuador.
- Vasudevan Lakshminarayanan
- Department of Systems Design Engineering, Physics, and Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.
3. Lu G, Tian R, Yang W, Liu R, Liu D, Xiang Z, Zhang G. Deep learning radiomics based on multimodal imaging for distinguishing benign and malignant breast tumours. Front Med (Lausanne) 2024;11:1402967. PMID: 39036101; PMCID: PMC11257849; DOI: 10.3389/fmed.2024.1402967.
Abstract
Objectives: This study aimed to develop a deep learning radiomics model using multimodal imaging to differentiate benign and malignant breast tumours.
Methods: Multimodal imaging data, including ultrasonography (US), mammography (MG), and magnetic resonance imaging (MRI), from 322 patients (112 with benign and 210 with malignant breast tumours) with histopathologically confirmed breast tumours were retrospectively collected between December 2018 and May 2023. Based on the multimodal imaging, the experiments comprised three parts: traditional radiomics, deep learning radiomics, and feature fusion. We tested the performance of seven classifiers, namely SVM, KNN, random forest, extra trees, XGBoost, LightGBM, and logistic regression (LR), on the different feature models. Through feature fusion with ensemble and stacking strategies, we obtained the optimal classification model for benign and malignant breast tumours.
Results: For traditional radiomics, the ensemble fusion strategy achieved the highest accuracy, AUC, and specificity: 0.892, 0.942 [0.886-0.996], and 0.956 [0.873-1.000], respectively. The early fusion strategy with US, MG, and MRI achieved the highest sensitivity of 0.952 [0.887-1.000]. For deep learning radiomics, the stacking fusion strategy achieved the highest accuracy, AUC, and sensitivity: 0.937, 0.947 [0.887-1.000], and 1.000 [0.999-1.000], respectively. The early fusion strategies US+MRI and US+MG achieved the highest specificity of 0.954 [0.867-1.000]. For feature fusion, the ensemble and stacking approaches of the late fusion strategy achieved the highest accuracy of 0.968. In addition, stacking achieved the highest AUC and specificity, 0.997 [0.990-1.000] and 1.000 [0.999-1.000], respectively. The traditional radiomic and deep features of US+MG+MRI achieved the highest sensitivity of 1.000 [0.999-1.000] under the early fusion strategy.
Conclusion: This study demonstrates the potential of integrating deep learning and radiomic features with multimodal images. As a single modality, MRI with radiomic features achieved greater accuracy than US or MG. The US and MG models achieved higher accuracy with transfer learning than the single-modality or radiomic models. The traditional radiomic and deep features of US+MG+MRI achieved the highest sensitivity under the early fusion strategy, showed higher diagnostic performance, and provide more valuable information for differentiating benign from malignant breast tumours.
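The late-fusion "ensemble" idea above combines per-modality malignancy probabilities rather than raw features. A minimal soft-voting sketch of that step (the modality names, weights, and scores are hypothetical, not values from the study):

```python
def soft_vote(probs_by_modality, weights=None):
    """Late fusion: weighted average of per-modality malignancy probabilities.

    probs_by_modality maps a modality name (e.g. 'US') to a list of
    per-case probabilities; all lists must have the same length.
    """
    mods = list(probs_by_modality)
    weights = weights or {m: 1.0 for m in mods}
    total = sum(weights[m] for m in mods)
    n_cases = len(probs_by_modality[mods[0]])
    return [
        sum(weights[m] * probs_by_modality[m][i] for m in mods) / total
        for i in range(n_cases)
    ]

fused = soft_vote({"US": [0.9, 0.2], "MG": [0.7, 0.4], "MRI": [0.8, 0.3]})
# fused is approximately [0.8, 0.3]; thresholding at 0.5 labels the cases
# malignant and benign, respectively
```

A stacking fusion would instead feed these per-modality probabilities into a trained meta-classifier (e.g. logistic regression) rather than averaging them.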
Affiliation(s)
- Guoxiu Lu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Wei Yang
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital and Institute, Shenyang, Liaoning, China
- Ruibo Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Dongmei Liu
- Department of Ultrasound, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Zijie Xiang
- Biomedical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
- Guoxu Zhang
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
4. Iacob R, Iacob ER, Stoicescu ER, Ghenciu DM, Cocolea DM, Constantinescu A, Ghenciu LA, Manolescu DL. Evaluating the Role of Breast Ultrasound in Early Detection of Breast Cancer in Low- and Middle-Income Countries: A Comprehensive Narrative Review. Bioengineering (Basel) 2024;11:262. PMID: 38534536; DOI: 10.3390/bioengineering11030262.
Abstract
Breast cancer affects both sexes but predominantly females, and it exhibits shifting demographic patterns, with increasing incidence in younger age groups. Early identification through mammography, clinical examinations, and breast self-exams enhances treatment efficacy, but challenges persist in low- and middle-income countries due to limited imaging resources. This review assesses the feasibility of employing breast ultrasound as the primary breast cancer screening method, particularly in resource-constrained regions. Following the PRISMA guidelines, this study examines 52 publications from the last five years. Breast ultrasound, unlike mammography, offers advantages such as radiation-free imaging, suitability for repeated screenings and for younger populations, real-time imaging, and better evaluation of dense breast tissue, enhancing sensitivity, accessibility, and cost-effectiveness. Its limitations include reduced specificity, operator dependence, and difficulty in detecting microcalcifications. Automated breast ultrasound (ABUS) addresses some of these issues but faces its own constraints, such as potential inaccuracies and limited microcalcification detection. The analysis underscores the need for a comprehensive approach to breast cancer screening, emphasizing international collaboration and the mitigation of these limitations, especially in resource-constrained settings. Despite advancements, notably ABUS, the primary goal is to contribute insights for optimizing breast cancer screening globally, improving outcomes, and mitigating the impact of this debilitating disease.
Affiliation(s)
- Roxana Iacob
- Department of Anatomy and Embryology, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Doctoral School, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Faculty of Mechanics, Field of Applied Engineering Sciences, Specialization Statistical Methods and Techniques in Health and Clinical Research, 'Politehnica' University Timișoara, Mihai Viteazul Boulevard No. 1, 300222 Timișoara, Romania
- Emil Radu Iacob
- Department of Pediatric Surgery, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Emil Robert Stoicescu
- Doctoral School, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Faculty of Mechanics, Field of Applied Engineering Sciences, Specialization Statistical Methods and Techniques in Health and Clinical Research, 'Politehnica' University Timișoara, Mihai Viteazul Boulevard No. 1, 300222 Timișoara, Romania
- Department of Radiology and Medical Imaging, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Research Center for Pharmaco-Toxicological Evaluations, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Delius Mario Ghenciu
- Doctoral School, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Daiana Marina Cocolea
- Doctoral School, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Amalia Constantinescu
- Department of Radiology and Medical Imaging, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Laura Andreea Ghenciu
- Discipline of Pathophysiology, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Diana Luminita Manolescu
- Department of Radiology and Medical Imaging, 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
- Center for Research and Innovation in Precision Medicine of Respiratory Diseases (CRIPMRD), 'Victor Babeș' University of Medicine and Pharmacy, 300041 Timișoara, Romania
5. Yao S, Dai F, Sun P, Zhang W, Qian B, Lu H. Enhancing the fairness of AI prediction models by Quasi-Pareto improvement among heterogeneous thyroid nodule population. Nat Commun 2024;15:1958. PMID: 38438371; PMCID: PMC10912763; DOI: 10.1038/s41467-024-44906-y.
Abstract
Artificial intelligence (AI) models for medical diagnosis often face challenges of generalizability and fairness. We highlight algorithmic unfairness in a large thyroid ultrasound dataset, where significant diagnostic performance disparities across subgroups are causally linked to sample-size imbalances. To address this, we introduce the Quasi-Pareto Improvement (QPI) approach and a deep learning implementation (QP-Net) that combines multi-task learning and domain adaptation to improve model performance for disadvantaged subgroups without compromising overall population performance. On the thyroid ultrasound dataset, our method significantly reduced the area under the curve (AUC) disparity for three less-prevalent subgroups by 0.213, 0.112, and 0.173 while maintaining the AUC for dominant subgroups; we further confirmed the generalizability of our approach on two public datasets: the ISIC2019 skin disease dataset and the CheXpert chest radiograph dataset. Here we show that the QPI approach is widely applicable in promoting AI for equitable healthcare outcomes.
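The subgroup AUC disparity discussed above can be made concrete with a rank-based AUC and a max-minus-min gap across subgroups. This is a generic sketch of the fairness metric, not the paper's QP-Net code, and the subgroup data below are invented for illustration:

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored above a random negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def auc_gap(groups):
    """groups maps subgroup name -> (labels, scores); returns the
    max-minus-min AUC across subgroups (0 means perfect parity)."""
    aucs = {g: auc(y, s) for g, (y, s) in groups.items()}
    return max(aucs.values()) - min(aucs.values())

gap = auc_gap({
    "dominant": ([0, 0, 1, 1], [0.1, 0.4, 0.6, 0.9]),   # AUC 1.0
    "minority": ([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]),  # AUC 0.75
})
# gap == 0.25
```

Reducing this gap for less-prevalent subgroups while holding the dominant subgroups' AUC steady is exactly the Quasi-Pareto criterion the abstract describes.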
Affiliation(s)
- Siqiong Yao
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, 200240, PR China
- SJTU-Yale Joint Center of Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, 200240, PR China
- Fang Dai
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, 200240, PR China
- Peng Sun
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, 200240, PR China
- Weituo Zhang
- Hongqiao International Institute of Medicine, Shanghai Tong Ren Hospital and School of Public Health, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, PR China
- Biyun Qian
- Hongqiao International Institute of Medicine, Shanghai Tong Ren Hospital and School of Public Health, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, PR China
- Hui Lu
- State Key Laboratory of Microbial Metabolism, Joint International Research Laboratory of Metabolic and Developmental Sciences, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, 200240, PR China
- SJTU-Yale Joint Center of Biostatistics and Data Science, National Center for Translational Medicine, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, 200240, PR China
- Shanghai Engineering Research Center for Big Data in Pediatric Precision Medicine, NHC Key Laboratory of Medical Embryogenesis and Developmental Molecular Biology & Shanghai Key Laboratory of Embryo and Reproduction Engineering, Shanghai, 200020, PR China
6. Gómez-Flores W, Pereira WCDA. Gray-to-color image conversion in the classification of breast lesions on ultrasound using pre-trained deep neural networks. Med Biol Eng Comput 2023;61:3193-3207. PMID: 37713158; DOI: 10.1007/s11517-023-02928-6.
Abstract
Breast ultrasound (BUS) image classification into benign and malignant classes is often based on pre-trained convolutional neural networks (CNNs) to cope with small training datasets. However, BUS images are single-channel gray-level images, whereas pre-trained CNNs were trained on color images with red, green, and blue (RGB) components; a gray-to-color conversion method is therefore applied to fit the BUS image to the CNN's input layer. This paper evaluates 13 gray-to-color conversion methods proposed in the literature that follow three strategies: replicating the gray-level image to all RGB channels, decomposing the image to enhance inherent information such as the lesion's texture and morphology, and learning a matching layer. In addition, we introduce an image decomposition method based on the lesion's structural information to describe its inner and outer complexity. These gray-to-color conversion methods are evaluated in the same experimental framework using a pre-trained CNN architecture, ResNet-18, and a BUS dataset with more than 3000 images. Classification performance is measured with the Matthews correlation coefficient (MCC), sensitivity (SEN), and specificity (SPE). The experimental results show that decomposition methods outperform replication and learning-based methods when they use information from the lesion's binary mask (obtained from a segmentation method), reaching an MCC greater than 0.70 and specificity up to 0.92, although sensitivity is about 0.80. The proposed method achieves a better balance between sensitivity and specificity, obtaining about 0.88 for both indices and an MCC of 0.73. This study contributes an objective assessment of different gray-to-color conversion approaches for classifying breast lesions, revealing that mask-based decomposition methods improve classification performance. Moreover, the proposed method based on structural information improves sensitivity, yielding more reliable classification of malignant cases and potentially benefiting clinical practice.
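The simplest of the strategies evaluated above, replication, copies the single gray channel into all three RGB channels so a BUS image fits a pre-trained CNN's 3-channel input. A minimal sketch in plain Python (a list-of-rows image representation is assumed for illustration; a real pipeline would use NumPy or a tensor library):

```python
def gray_to_rgb(image):
    """Replicate a single-channel gray image into identical R, G, B channels.

    `image` is a list of rows of gray levels (0-255); the result stores an
    (r, g, b) triple per pixel, matching an RGB-pretrained CNN's input layout.
    """
    return [[(p, p, p) for p in row] for row in image]

rgb = gray_to_rgb([[0, 128], [255, 7]])
# rgb[0][1] == (128, 128, 128)
```

Decomposition-based conversions instead place complementary views of the lesion in the three channels (e.g. the original image plus texture and mask-derived morphology maps), which is the family of methods the abstract finds most effective.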
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del IPN, Unidad Tamaulipas, Ciudad Victoria, 87138, Tamaulipas, Mexico.