1. Ogundipe O, Kurt Z, Woo WL. Deep neural networks integrating genomics and histopathological images for predicting stages and survival time-to-event in colon cancer. PLoS One 2024; 19:e0305268. [PMID: 39226289; PMCID: PMC11371203; DOI: 10.1371/journal.pone.0305268]
Abstract
MOTIVATION There exists an unexplained diverse variation within the predefined colon cancer stages when only features from either genomics or histopathological whole slide images are used as prognostic factors. Unraveling this variation would improve staging and treatment outcomes. Hence, motivated by the advancement of Deep Neural Network (DNN) libraries and complementary factors within some genomics datasets, we aggregate atypia patterns in histopathological images with diverse carcinogenic expression from mRNA, miRNA and DNA methylation as an integrative input source into a deep neural network for colon cancer stage classification and for stratifying samples into low- or high-risk survival groups. RESULTS The genomics-only and integrated input features return an Area Under the Receiver Operating Characteristic curve (AUC-ROC) of 0.97, compared with an AUC-ROC of 0.78 when only image features are used for stage classification. A further analysis of prediction accuracy using the confusion matrix shows that the integrated features yield a marginal accuracy improvement of 0.08% over the genomics features. The extracted features were also used to split patients into low- or high-risk survival groups: among the 2,700 fused features, 1,836 (68%) showed statistically significant differences in survival probability between the two risk groups. AVAILABILITY AND IMPLEMENTATION https://github.com/Ogundipe-L/EDCNN.
Affiliation(s)
- Olalekan Ogundipe: Department of Computer and Information Sciences, University of Northumbria, Newcastle Upon Tyne, United Kingdom
- Zeyneb Kurt: Information School, University of Sheffield, Sheffield, United Kingdom
- Wai Lok Woo: Department of Computer and Information Sciences, University of Northumbria, Newcastle Upon Tyne, United Kingdom
2. Kaur J, Kaur P. A systematic literature analysis of multi-organ cancer diagnosis using deep learning techniques. Comput Biol Med 2024; 179:108910. [PMID: 39032244; DOI: 10.1016/j.compbiomed.2024.108910]
Abstract
Cancer has become one of the deadliest illnesses identified among individuals worldwide. The mortality rate has been rising rapidly every year, driving progress in the diagnostic technologies used to handle this illness. Manual segmentation and classification across a large set of data modalities can be a challenging task, so there is a crucial need to develop computer-assisted diagnostic systems for initial cancer identification. This article offers a systematic review of deep learning approaches using various image modalities to detect multi-organ cancers from 2012 to 2023. It emphasizes the detection of the five most predominant tumors, i.e., breast, brain, lung, skin, and liver. An extensive review was carried out by collecting research and conference articles and book chapters from reputed international databases, i.e., Springer Link, IEEE Xplore, Science Direct, PubMed, and Wiley, that fulfill the criteria for quality evaluation. This systematic review summarizes the convolutional neural network architectures and datasets used for identifying and classifying the diverse categories of cancer, and gives an inclusive account of ensemble deep learning models that have achieved better evaluation results for classifying images into cancer or healthy cases. The paper provides research scientists in medical imaging with a broad understanding of which deep learning techniques perform best over which types of dataset, which feature extraction methods are used, the different challenges, and anticipated solutions for these complex problems. Lastly, some challenges and issues that affect this health emergency are discussed.
Affiliation(s)
- Jaspreet Kaur: Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
- Prabhpreet Kaur: Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
3. Ye RZ, Lipatov K, Diedrich D, Bhattacharyya A, Erickson BJ, Pickering BW, Herasevich V. Automatic ARDS surveillance with chest X-ray recognition using convolutional neural networks. J Crit Care 2024; 82:154794. [PMID: 38552452; DOI: 10.1016/j.jcrc.2024.154794]
Abstract
OBJECTIVE This study aims to design, validate, and assess the accuracy of a deep learning model capable of differentiating chest X-rays showing pneumonia, acute respiratory distress syndrome (ARDS), or normal lungs. MATERIALS AND METHODS A diagnostic performance study was conducted using chest X-ray images from adult patients admitted to a medical intensive care unit between January 2003 and November 2014. X-ray images from 15,899 patients were assigned one of three prespecified categories: "ARDS", "Pneumonia", or "Normal". RESULTS A two-step convolutional neural network (CNN) pipeline was developed and tested to distinguish between the three patterns, with sensitivity ranging from 91.8% to 97.8% and specificity ranging from 96.6% to 98.8%. The CNN model was validated with a sensitivity of 96.3% and specificity of 96.6% using a previous dataset of patients with Acute Lung Injury (ALI)/ARDS. DISCUSSION The results suggest that a deep learning model based on chest X-ray pattern recognition can be a useful tool for distinguishing patients with ARDS from patients with normal lungs, providing faster results than digital surveillance tools based on text reports. CONCLUSION A CNN-based deep learning model showed clinically significant performance, with potential for faster ARDS identification. Future research should prospectively evaluate these tools in a clinical setting.
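For readers unfamiliar with the sensitivity and specificity figures quoted in this abstract, they reduce to two ratios over confusion-matrix counts. The counts below are invented for illustration, not taken from the study:

```python
# Toy sensitivity/specificity computation for a single class ("ARDS"),
# mirroring the metrics reported above; all counts are invented.
tp, fn = 175, 7    # true ARDS X-rays flagged / missed by the model
tn, fp = 480, 12   # non-ARDS X-rays cleared / wrongly flagged

sensitivity = tp / (tp + fn)  # share of true ARDS cases detected
specificity = tn / (tn + fp)  # share of non-ARDS cases correctly cleared
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```

For a three-class problem like this study's, each class gets its own one-vs-rest pair of these numbers, which is why the abstract reports ranges rather than single values.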
Affiliation(s)
- Run Zhou Ye: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA; Division of Endocrinology, Department of Medicine, Centre de Recherche du CHUS, Sherbrooke, QC J1H 5N4, Canada
- Kirill Lipatov: Critical Care Medicine, Mayo Clinic, Eau Claire, WI, United States
- Daniel Diedrich: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Bradley J Erickson: Department of Diagnostic Radiology, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Brian W Pickering: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Vitaly Herasevich: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
4. Fenta HM, Zewotir TT, Naidoo S, Naidoo RN, Mwambi H. Factors of acute respiratory infection among under-five children across sub-Saharan African countries using machine learning approaches. Sci Rep 2024; 14:15801. [PMID: 38982206; PMCID: PMC11233665; DOI: 10.1038/s41598-024-65620-1]
Abstract
Symptoms of acute respiratory infections (ARIs) among under-five children are a global health challenge. We aimed to train and evaluate ten machine learning (ML) classification approaches in predicting symptoms of ARIs reported by mothers among children younger than 5 years in sub-Saharan African (sSA) countries. We used the most recent (2012-2022) nationally representative Demographic and Health Surveys data of 33 sSA countries. Air pollution covariates such as global annual surface particulate matter (PM2.5) and nitrogen dioxide, available in the form of raster images, were obtained from the National Aeronautics and Space Administration (NASA). The ML algorithms were used to predict symptoms of ARIs among under-five children. We randomly split the dataset in two: 80% was used to train the models and the remaining 20% to test them. Model performance was evaluated using sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC). A total of 327,507 under-five children were included in the study. About 7.10%, 4.19%, 20.61%, and 21.02% of children reported symptoms of ARI, severe ARI, cough, and fever, respectively, in the 2 weeks preceding the survey. The prevalence of ARI was highest in Mozambique (15.3%), Uganda (15.05%), Togo (14.27%), and Namibia (13.65%), whereas Uganda (40.10%), Burundi (38.18%), Zimbabwe (36.95%), and Namibia (31.2%) had the highest prevalence of cough. The random forest variable-importance plot revealed that spatial location (longitude, latitude), particulate matter, land surface temperature, nitrogen dioxide, and the number of cattle in the house are the most important features in predicting symptoms of ARIs among under-five children in sSA. The random forest algorithm was selected as the best ML model (AUC = 0.77, accuracy = 0.72) to predict the symptoms of ARIs among children under five.
Overall, the ML algorithms performed well in predicting the symptoms of ARIs and associated predictors among under-five children across the sSA countries, with random forest identified as the best classifier for this task.
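The evaluation protocol this abstract describes (an 80/20 random split, a random-forest classifier, AUC and accuracy as metrics) can be sketched with scikit-learn. The synthetic data below merely stand in for the survey covariates (e.g., PM2.5, NO2, location) and are not the authors' dataset:

```python
# Sketch of the study's evaluation protocol: an 80/20 random split and a
# random-forest classifier scored by AUC and accuracy, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the covariate matrix and ARI-symptom labels.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# 80% of samples train the model; the held-out 20% test it.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"AUC={auc:.2f}, accuracy={acc:.2f}")
```

The variable-importance ranking the abstract mentions is available from the fitted model as `clf.feature_importances_`.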
Affiliation(s)
- Haile Mekonnen Fenta: Discipline of Public Health Medicine, School of Nursing and Public Health, College of Health Sciences, University of KwaZulu-Natal, Durban, South Africa; Department of Statistics, College of Science, Bahir Dar University, Bahir Dar, Ethiopia
- Temesgen T Zewotir: School of Mathematics, Statistics and Computer Science, College of Agriculture, Engineering and Science, University of KwaZulu-Natal, Durban, South Africa
- Saloshni Naidoo: Discipline of Public Health Medicine, School of Nursing and Public Health, College of Health Sciences, University of KwaZulu-Natal, Durban, South Africa
- Rajen N Naidoo: Discipline of Occupational and Environmental Health, School of Nursing and Public Health, College of Health Sciences, University of KwaZulu-Natal, Durban, South Africa
- Henry Mwambi: School of Mathematics, Statistics and Computer Science, College of Agriculture, Engineering and Science, University of KwaZulu-Natal, Durban, South Africa
5. Vellan CJ, Islam T, De Silva S, Mohd Taib NA, Prasanna G, Jayapalan JJ. Exploring novel protein-based biomarkers for advancing breast cancer diagnosis: A review. Clin Biochem 2024; 129:110776. [PMID: 38823558; DOI: 10.1016/j.clinbiochem.2024.110776]
Abstract
This review provides a contemporary examination of the evolving landscape of breast cancer (BC) diagnosis, focusing on the pivotal role of novel protein-based biomarkers. The overview begins by elucidating the multifaceted nature of BC, exploring its prevalence, subtypes, and clinical complexities. A critical emphasis is placed on the transformative impact of proteomics, dissecting the proteome to unravel the molecular intricacies of BC. Navigating through various sources of samples crucial for biomarker investigations, the review underscores the significance of robust sample processing methods and their validation in ensuring reliable outcomes. The central theme of the review revolves around the identification and evaluation of novel protein-based biomarkers. Cutting-edge discoveries are summarised, shedding light on emerging biomarkers poised for clinical application. Nevertheless, the review candidly addresses the challenges inherent in biomarker discovery, including issues of standardisation, reproducibility, and the complex heterogeneity of BC. The future direction section envisions innovative strategies and technologies to overcome existing challenges. In conclusion, the review summarises the current state of BC biomarker research, offering insights into the intricacies of proteomic investigations. As precision medicine gains momentum, the integration of novel protein-based biomarkers emerges as a promising avenue for enhancing the accuracy and efficacy of BC diagnosis. This review serves as a compass for researchers and clinicians navigating the evolving landscape of BC biomarker discovery, guiding them toward transformative advancements in diagnostic precision and personalised patient care.
Affiliation(s)
- Christina Jane Vellan: Department of Molecular Medicine, Faculty of Medicine, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Tania Islam: Department of Surgery, Faculty of Medicine, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Sumadee De Silva: Institute of Biochemistry, Molecular Biology and Biotechnology, University of Colombo, Colombo 03, Sri Lanka
- Nur Aishah Mohd Taib: Department of Surgery, Faculty of Medicine, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Galhena Prasanna: Institute of Biochemistry, Molecular Biology and Biotechnology, University of Colombo, Colombo 03, Sri Lanka
- Jaime Jacqueline Jayapalan: Department of Molecular Medicine, Faculty of Medicine, Universiti Malaya, 50603 Kuala Lumpur, Malaysia; Universiti Malaya Centre for Proteomics Research (UMCPR), Universiti Malaya, 50603 Kuala Lumpur, Malaysia
6. Tadege M, Tegegne AS, Dessie ZG. Cardiac patients' surgery outcome and associated factors in Ethiopia: application of machine learning. BMC Pediatr 2024; 24:395. [PMID: 38886745; PMCID: PMC11184771; DOI: 10.1186/s12887-024-04870-4]
Abstract
INTRODUCTION Cardiovascular diseases are a class of heart and blood vessel-related illnesses. In Sub-Saharan Africa, including Ethiopia, preventable heart disease remains a significant burden, in contrast with developed nations. The objective of this study was therefore to assess the prevalence of death due to cardiac disease and its risk factors among heart patients in Ethiopia. METHODS The investigation included all cardiac patients who had cardiac surgery in the country between 2012 and 2023; a total of 1,520 individuals participated in the study. Data collection took place between February 2022 and January 2023. The design was a retrospective cohort, since patient charts were traced back to 2012. Machine learning algorithms were applied for data analysis, and lift and AUC were used to compare them. RESULTS Of all candidate algorithms, the logistic regression algorithm with a 90%/10% split was the best fit, producing the maximum AUC value. In addition, its lift value of 3.33 indicates that the logistic regression algorithm performed well, providing substantial improvement over random selection. From the logistic regression model, age, oxygen saturation, ejection fraction, duration of cardiac center stay after surgery, waiting time to surgery, hemoglobin, and creatinine were significant predictors of death. CONCLUSION Several predictors of death among cardiac disease patients were identified; special attention should therefore be given to older patients, patients waiting long periods for surgery, and those with lower oxygen saturation, higher creatinine, lower ejection fraction, or lower hemoglobin values.
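The lift figure quoted in this abstract has a simple meaning: the event rate among the model's highest-ranked patients divided by the overall event rate, so a lift of 3.33 means the model's top picks contain 3.33 times as many events as a random sample would. A toy computation on invented scores and outcomes illustrates it:

```python
# Toy illustration of "lift": the event rate among the model's top-ranked
# patients divided by the overall event rate; scores and outcomes invented.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(1000)                 # model-assigned risk scores
deaths = rng.random(1000) < scores ** 2   # outcome correlates with score

top = np.argsort(scores)[-100:]           # top 10% highest-risk patients
lift = deaths[top].mean() / deaths.mean() # >1 means better than random
print(f"lift = {lift:.2f}")
```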
Affiliation(s)
- Melaku Tadege: College of Science, Bahir Dar University, Bahir Dar, Ethiopia; Department of Statistics, Injibara University, Injibara, Amhara, Ethiopia; Regional Data Management Center for Health (RDMC), Amhara Public Health Institute (APHI), Bahir Dar, Ethiopia
- Zelalem G Dessie: College of Science, Bahir Dar University, Bahir Dar, Ethiopia; School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
7. Liu W, Zeng P, Jiang J, Chen J, Chen L, Hu C, Jian W, Diao X, Wang X. Improved PAA algorithm for breast mass detection in mammograms. Comput Methods Programs Biomed 2024; 251:108211. [PMID: 38744058; DOI: 10.1016/j.cmpb.2024.108211]
Abstract
Mammography screening is instrumental in the early detection and diagnosis of breast cancer by identifying masses in mammograms. With the rapid development of deep learning, numerous deep learning-based object detection algorithms have been explored for mass detection. However, these methods often yield a high number of false positives per image (FPPI) while achieving a high true positive rate (TPR). To maintain a high TPR while ensuring a low FPPI, and building on our previous work, we improved the Probability Anchor Assignment (PAA) algorithm to enhance its detection capability for mammographic characteristics along three dimensions: the backbone network, the feature fusion module, and the dense detection heads. The final experiments showed the effectiveness of the proposed method, with the improved PAA algorithm reaching TPR/FPPI values of 0.96/0.56 on the INbreast dataset. Compared with other methods, ours stands out for its effectiveness in addressing the imbalance between positive and negative classes in single-lesion detection.
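TPR and FPPI, the two figures of merit reported in this abstract, can be computed directly from per-image detection counts. The counts below are invented for illustration, not taken from the INbreast experiments:

```python
# Toy TPR and FPPI computation from per-image detection counts.
detections = [  # per image: (true positives, false positives, missed masses)
    (1, 0, 0), (1, 1, 0), (0, 0, 1), (1, 2, 0), (1, 0, 0),
]
tp = sum(d[0] for d in detections)
fp = sum(d[1] for d in detections)
fn = sum(d[2] for d in detections)

tpr = tp / (tp + fn)         # fraction of all true masses that were detected
fppi = fp / len(detections)  # false positives averaged over images
print(f"TPR={tpr:.2f}, FPPI={fppi:.2f}")
```

Unlike precision, FPPI is normalized by image count rather than detection count, which is why detection papers in mammography report TPR/FPPI pairs (or FROC curves sweeping the detection threshold).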
Affiliation(s)
- Weixiang Liu: College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen 518060, Guangdong, China
- Pengcheng Zeng: School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen 518060, Guangdong, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, Guangdong, China
- Jiale Jiang: Department of Medical Imaging, the First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou 510000, Guangdong, China
- Jingyang Chen: School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen 518060, Guangdong, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, Guangdong, China
- Linghao Chen: The Seventh Affiliated Hospital, Sun Yat-Sen University, 628 Zhenyuan Road, Xinhu Street, Guangming New District, Shenzhen 518107, Guangdong, China
- Chuting Hu: Department of Breast and Thyroid Surgery, The Second People's Hospital of Shenzhen, Shenzhen 518035, Guangdong, China
- Wenjing Jian: Department of Breast and Thyroid Surgery, The Second People's Hospital of Shenzhen, Shenzhen 518035, Guangdong, China
- Xianfen Diao: School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen 518060, Guangdong, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, Guangdong, China
- Xianming Wang: Department of Breast Surgery, Shenzhen Futian District Maternity & Child Healthcare Hospital, Shenzhen 518000, Guangdong, China
8. Al-Karawi D, Al-Zaidi S, Helael KA, Obeidat N, Mouhsen AM, Ajam T, Alshalabi BA, Salman M, Ahmed MH. A Review of Artificial Intelligence in Breast Imaging. Tomography 2024; 10:705-726. [PMID: 38787015; PMCID: PMC11125819; DOI: 10.3390/tomography10050055]
Abstract
As artificial intelligence (AI) techniques have grown increasingly dominant, their prospective applications have extended to various medical fields, including in vitro diagnosis, intelligent rehabilitation, medical imaging, and prognosis. Breast cancer is a common malignancy that critically affects women's physical and mental health. Early breast cancer screening, through mammography, ultrasound, or magnetic resonance imaging (MRI), can substantially improve the prognosis for breast cancer patients. AI applications have shown excellent performance in various image recognition tasks, and their use in breast cancer screening has been explored in numerous studies. This paper introduces relevant AI techniques and their applications in medical imaging of the breast (mammography and ultrasound), specifically for identifying, segmenting, and classifying lesions; assessing breast cancer risk; and improving image quality. Focusing on medical imaging for breast cancer, the paper also reviews related challenges and prospects for AI.
Affiliation(s)
- Dhurgham Al-Karawi: Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Shakir Al-Zaidi: Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Khaled Ahmad Helael: Royal Medical Services, King Hussein Medical Hospital, King Abdullah II Ben Al-Hussein Street, Amman 11855, Jordan
- Naser Obeidat: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Abdulmajeed Mounzer Mouhsen: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Tarek Ajam: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Bashar A. Alshalabi: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohamed Salman: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohammed H. Ahmed: School of Computing, Coventry University, 3 Gulson Road, Coventry CV1 5FB, UK
9. Africano G, Arponen O, Rinta-Kiikka I, Pertuz S. Transfer learning for the generalization of artificial intelligence in breast cancer detection: a case-control study. Acta Radiol 2024; 65:334-340. [PMID: 38115699; DOI: 10.1177/02841851231218960]
Abstract
BACKGROUND Some researchers have questioned whether artificial intelligence (AI) systems maintain their performance when used for women from populations not considered during the development of the system. PURPOSE To evaluate the impact of transfer learning as a way of improving the generalization of AI systems in the detection of breast cancer. MATERIAL AND METHODS This retrospective case-control Finnish study involved 191 women diagnosed with breast cancer and 191 matched healthy controls. We selected a state-of-the-art AI system for breast cancer detection trained using a large US dataset. The selected baseline system was evaluated in two experimental settings. First, we examined our private Finnish sample as an independent test set that had not been considered in the development of the system (unseen population). Second, the baseline system was retrained to attempt to improve its performance in the unseen population by means of transfer learning. To analyze performance, we used areas under the receiver operating characteristic curve (AUCs) with DeLong's test. RESULTS Two versions of the baseline system were considered: ImageOnly and Heatmaps. The ImageOnly and Heatmaps versions yielded mean AUC values of 0.82±0.008 and 0.88±0.003 in the US dataset and 0.56 (95% CI=0.50-0.62) and 0.72 (95% CI=0.67-0.77) when evaluated in the unseen population, respectively. The retrained systems achieved AUC values of 0.61 (95% CI=0.55-0.66) and 0.69 (95% CI=0.64-0.75), respectively. There was no statistically significant difference between the baseline and retrained systems. CONCLUSION Transfer learning with a small study sample did not yield a significant improvement in the generalization of the system.
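The core idea this abstract tests, namely taking a model fitted on a large "source" population and retraining it on a small sample from the "target" population, can be sketched in miniature with a warm-started linear classifier. Everything below (data, sizes, model) is illustrative; the actual study retrained a deep mammography model, not a logistic regression:

```python
# Crude transfer-learning sketch: a classifier fitted on a large "source"
# population is warm-started on a small "target" sample and re-evaluated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
Xs = rng.normal(size=(5000, 10))                        # source population
ys = (Xs[:, 0] + 0.5 * rng.normal(size=5000) > 0).astype(int)
Xt = rng.normal(loc=0.3, size=(300, 10))                # shifted target pop.
yt = (Xt[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)

clf = LogisticRegression(max_iter=200, warm_start=True)
clf.fit(Xs, ys)                                         # "baseline" system
base = roc_auc_score(yt[150:], clf.decision_function(Xt[150:]))

clf.fit(Xt[:150], yt[:150])  # warm-started retraining on half the target
tuned = roc_auc_score(yt[150:], clf.decision_function(Xt[150:]))
print(f"baseline AUC={base:.2f}, retrained AUC={tuned:.2f}")
```

As the study found, with only a few hundred target samples the retrained AUC need not beat the baseline; whether a difference between two AUCs on the same test set is significant is what DeLong's test evaluates.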
Affiliation(s)
- Gerson Africano: School of Electrical, Electronics and Telecommunications Engineering, Universidad Industrial de Santander, Bucaramanga, Colombia
- Otso Arponen: Department of Radiology, Tampere University Hospital, Tampere, Finland; Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Irina Rinta-Kiikka: Department of Radiology, Tampere University Hospital, Tampere, Finland; Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Said Pertuz: School of Electrical, Electronics and Telecommunications Engineering, Universidad Industrial de Santander, Bucaramanga, Colombia
10. Hajim WI, Zainudin S, Mohd Daud K, Alheeti K. Optimized models and deep learning methods for drug response prediction in cancer treatments: a review. PeerJ Comput Sci 2024; 10:e1903. [PMID: 38660174; PMCID: PMC11042005; DOI: 10.7717/peerj-cs.1903]
Abstract
Recent advancements in deep learning (DL) have played a crucial role in helping experts develop personalized healthcare services, particularly drug response prediction (DRP) for cancer patients. DL techniques have contributed significantly to this field and proven indispensable in medicine. This review analyzes the effectiveness of various DL models in making these predictions, drawing on research published from 2017 to 2023. We used the VOSviewer 1.6.18 software to create a word cloud from the titles and abstracts of the selected studies, offering insight into the focus areas of DL models used for drug response. The word cloud revealed a strong link between certain keywords and grouped themes, highlighting terms such as deep learning, machine learning, precision medicine, precision oncology, drug response prediction, and personalized medicine. To advance DRP using DL, researchers need to improve the models' generalizability and interoperability. It is also crucial to develop models that accurately represent various architectures while simplifying them, balancing complexity against predictive capability. In the future, researchers should try to combine methods that make DL models easier to understand; this will make DRP studies more transparent and help clinicians trust the decisions made by DL models in cancer DRP.
Affiliation(s)
- Wesam Ibrahim Hajim: Department of Applied Geology, College of Sciences, Tikrit University, Tikrit, Salah ad Din, Iraq; Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Selangor, Malaysia
- Suhaila Zainudin: Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Selangor, Malaysia
- Kauthar Mohd Daud: Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Selangor, Malaysia
- Khattab Alheeti: Department of Computer Networking Systems, College of Computer Sciences and Information Technology, University of Anbar, Ramadi, Al Anbar, Iraq
11. Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114; PMCID: PMC10894909; DOI: 10.3389/fonc.2024.1281922]
Abstract
X-ray mammography is currently considered the gold standard for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancement of deep learning techniques, it is becoming possible to tailor mammography to each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper surveys recent achievements of deep learning-based mammography for breast cancer detection and classification, highlighting the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, the challenges of implementing this technology in clinical settings must be addressed. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability in order to integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis with high sensitivity and specificity.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
12
Jiang W, Chen Z, Chen C, Wang L, Han T, Wen L. Machine learning algorithms being an auxiliary tool to predict the overall survival of patients with renal cell carcinoma using the SEER database. Transl Androl Urol 2024; 13:53-63. [PMID: 38404544 PMCID: PMC10891382 DOI: 10.21037/tau-23-319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/31/2023] [Accepted: 11/22/2023] [Indexed: 02/27/2024]
Abstract
Background The clinical prognostic assessment of renal cell carcinoma (RCC) still relies on nuclear grading and scoring performed by eye under the microscope, which is time-consuming, inefficient, and subject to uneven evaluation criteria. Few machine learning (ML) studies in the RCC literature investigate prognosis in a way that could also quantify the risk of postoperative recurrence and guide individualized postoperative clinical management. This study evaluated the suitability of ML algorithms for survival prediction in patients with RCC. Methods A total of 192,912 RCC patients were obtained from the Surveillance, Epidemiology, and End Results (SEER) database for 2004 to 2015. Six ML algorithms, including support vector machine (SVM), Bayesian methods, decision tree, random forest, neural network, and Extreme Gradient Boosting (XGBoost), were applied to predict overall survival (OS) of RCC. Results The patients had a median age of 62 years, and the pathological types were clear cell RCC (47.6%), papillary RCC (9.5%), chromophobe RCC (4.0%) and others (4.1%). After deleting patients with missing data, the most accurate model was XGBoost [area under the curve (AUC) 67.0%]. After deleting patients with missing data and survival time <5 years, random forest, neural network and XGBoost were all highly accurate, with AUCs of 80.8%, 81.5% and 81.8%, respectively. When only records missing tumor diameter were deleted and the missing values were imputed with missForest, the most accurate model was random forest (AUC: 71.9%). Overall, the accuracy of the SVM model was not high, except in the population in which records missing tumor diameter were deleted, survival time was <5 years, and the remaining missing data were imputed with missForest; in that population, random forest, neural network and XGBoost had high accuracy, with AUCs of 84.1%, 84.7% and 84.8%, respectively.
Conclusions ML algorithms can be used to predict the prognosis of RCC, quantifying the likelihood of recurrence and supporting more individualized postoperative clinical management. Given the limitations and complexity of the datasets, ML may serve as an auxiliary tool for analyzing and processing larger and more complex data.
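The models in this study are compared by AUC-ROC. As a reminder of what that metric measures, here is a minimal, dependency-free sketch (not the authors' code): AUC-ROC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case, the Mann-Whitney U formulation. The scores and labels below are illustrative toy data.

```python
def auc_roc(scores, labels):
    """AUC-ROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: predicted risk scores for 3 negatives and 3 positives.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]
labels = [0, 1, 0, 1, 0, 1]
print(auc_roc(scores, labels))  # → 0.8888888888888888 (8 of 9 pairs ranked correctly)
```

A threshold-sweeping ROC curve gives the same area; the pairwise form is simply the shortest correct implementation.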
Affiliation(s)
- Weixing Jiang
- Department of Urology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Zhenghao Chen
- Department of Urology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Department of Urology, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Cancan Chen
- Digital Health China Technologies Co., LTD., Beijing, China
- Lei Wang
- Department of Urology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Tiandong Han
- Department of Urology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Li Wen
- Department of Urology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
13
Shankari N, Kudva V, Hegde RB. Breast Mass Detection and Classification Using Machine Learning Approaches on Two-Dimensional Mammogram: A Review. Crit Rev Biomed Eng 2024; 52:41-60. [PMID: 38780105 DOI: 10.1615/critrevbiomedeng.2024051166] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/25/2024]
Abstract
Breast cancer is a leading cause of mortality among women, both in India and globally. Breast masses are notably common in women aged 20 to 60. These masses are classified, according to the Breast Imaging-Reporting and Data System (BI-RADS) standard, into categories such as fibroadenoma, breast cysts, and benign and malignant masses. Imaging plays a vital role in the diagnosis of breast disorders, with mammography being the most widely used modality for detecting breast abnormalities over the years. However, identifying breast diseases on mammograms can be time-consuming, requiring experienced radiologists to review a significant volume of images. Early detection of breast masses is crucial for effective disease management, ultimately reducing mortality rates. To address this challenge, advancements in image processing techniques, specifically artificial intelligence (AI) and machine learning (ML), have paved the way for the development of decision support systems that assist radiologists in the accurate identification and classification of breast disorders. This paper reviews studies in which diverse machine learning approaches have been applied to digital mammograms to identify breast masses and classify them into distinct subclasses such as normal, benign and malignant. Additionally, the paper highlights both the advantages and limitations of existing techniques, offering valuable insights for future research in this critical area of medical imaging and breast health.
Affiliation(s)
- N Shankari
- NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
- Vidya Kudva
- School of Information Sciences, Manipal Academy of Higher Education, Manipal 576104, India; Nitte Mahalinga Adyanthaya Memorial Institute of Technology, Nitte 574110, India
- Roopa B Hegde
- NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
14
Li JW, Sheng DL, Chen JG, You C, Liu S, Xu HX, Chang C. Artificial intelligence in breast imaging: potentials and challenges. Phys Med Biol 2023; 68:23TR01. [PMID: 37722385 DOI: 10.1088/1361-6560/acfade] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/15/2023] [Accepted: 09/18/2023] [Indexed: 09/20/2023]
Abstract
Breast cancer, which is the most common type of malignant tumor among humans, is a leading cause of death in females. Standard treatment strategies, including neoadjuvant chemotherapy, surgery, postoperative chemotherapy, targeted therapy, endocrine therapy, and radiotherapy, are tailored for individual patients. Such personalized therapies have tremendously reduced the threat of breast cancer in females. Furthermore, early imaging screening plays an important role in reducing the treatment cycle and improving breast cancer prognosis. The recent innovative revolution in artificial intelligence (AI) has aided radiologists in the early and accurate diagnosis of breast cancer. In this review, we introduce the necessity of incorporating AI into breast imaging and the applications of AI in mammography, ultrasonography, magnetic resonance imaging, and positron emission tomography/computed tomography based on published articles since 1994. Moreover, the challenges of AI in breast imaging are discussed.
Affiliation(s)
- Jia-Wei Li
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Dan-Li Sheng
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Jian-Gang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication & Electronic Engineering, East China Normal University, People's Republic of China
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Shuai Liu
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Hui-Xiong Xu
- Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, 200032, People's Republic of China
- Cai Chang
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
15
Rangarajan K, Aggarwal P, Gupta DK, Dhanakshirur R, Baby A, Pal C, Gupta AK, Hari S, Banerjee S, Arora C. Deep learning for detection of iso-dense, obscure masses in mammographically dense breasts. Eur Radiol 2023; 33:8112-8121. [PMID: 37209125 DOI: 10.1007/s00330-023-09717-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/06/2022] [Revised: 02/11/2023] [Accepted: 03/06/2023] [Indexed: 05/22/2023]
Abstract
OBJECTIVES To analyze the performance of deep learning on isodense/obscure masses in dense breasts; to build and validate a deep learning (DL) model using core radiology principles and analyze its performance on isodense/obscure masses; and to show performance on screening as well as diagnostic mammography distributions. METHODS This was a retrospective, single-institution, multi-centre study with external validation. For model building, we took a 3-pronged approach. First, we explicitly taught the network to learn features other than density differences, such as spiculations and architectural distortion. Second, we used the opposite breast to enable the detection of asymmetries. Third, we systematically enhanced each image by piece-wise-linear transformation. We tested the network on a diagnostic mammography dataset (2569 images with 243 cancers, January to June 2018) and a screening mammography dataset (2146 images with 59 cancers, patient recruitment from January to April 2021) from a different centre (external validation). RESULTS When trained with our proposed technique (and compared with the baseline network), sensitivity for malignancy increased from 82.7 to 84.7% at 0.2 false positives per image (FPI) in the diagnostic mammography dataset, from 67.9 to 73.8% in the subset of patients with dense breasts, from 74.6 to 85.3% in the subset of patients with isodense/obscure cancers, and from 84.9 to 88.7% in an external validation test set with a screening mammography distribution. Our sensitivity exceeded currently reported values (0.90 at 0.2 FPI) on the public benchmark dataset INbreast. CONCLUSION Modelling traditional mammographic teaching into a DL framework can help improve cancer detection accuracy in dense breasts. CLINICAL RELEVANCE STATEMENT Incorporating medical knowledge into neural network design can help us overcome some limitations associated with specific modalities.
In this paper, we show how one such deep neural network can help improve performance on mammographically dense breasts. KEY POINTS
• Although state-of-the-art deep learning networks achieve good results in cancer detection in mammography in general, isodense and obscure masses in mammographically dense breasts pose a challenge to deep learning networks.
• Collaborative network design and the incorporation of traditional radiology teaching into the deep learning approach helped mitigate the problem.
• The accuracy of deep learning networks may be translatable to different patient distributions: we show the results of our network on screening as well as diagnostic mammography datasets.
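The third prong of the method above, piece-wise-linear intensity transformation, can be sketched in a few lines of NumPy. The breakpoints below are illustrative assumptions, not the values used in the paper: mid-range intensities are stretched while the extremes are compressed, which emphasizes subtle density differences.

```python
import numpy as np

def piecewise_linear_enhance(img, xp=(0.0, 0.3, 0.7, 1.0), fp=(0.0, 0.15, 0.85, 1.0)):
    """Contrast-stretch a grayscale image with a piece-wise-linear map.

    The image is first normalized to [0, 1]; np.interp then applies the
    map defined by the breakpoints xp -> fp. The breakpoints here are
    illustrative, not the paper's settings.
    """
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    return np.interp(norm, xp, fp)

img = np.array([[0.0, 0.3], [0.7, 1.0]])
print(piecewise_linear_enhance(img))  # breakpoints map to 0, 0.15, 0.85, 1.0
```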
Affiliation(s)
- Krithika Rangarajan
- All India Institute of Medical Sciences, Ansari Nagar New Delhi, 110029, India.
- Indian Institute of Technology, Delhi, Hauz Khas, Delhi, 110016, India.
- Pranjal Aggarwal
- Indian Institute of Technology, Delhi, Hauz Khas, Delhi, 110016, India
- Dhruv Kumar Gupta
- Indian Institute of Technology, Delhi, Hauz Khas, Delhi, 110016, India
- Akhil Baby
- All India Institute of Medical Sciences, Ansari Nagar New Delhi, 110029, India
- Chandan Pal
- All India Institute of Medical Sciences, Ansari Nagar New Delhi, 110029, India
- Arun Kumar Gupta
- All India Institute of Medical Sciences, Ansari Nagar New Delhi, 110029, India
- Smriti Hari
- All India Institute of Medical Sciences, Ansari Nagar New Delhi, 110029, India
- Chetan Arora
- Indian Institute of Technology, Delhi, Hauz Khas, Delhi, 110016, India
16
Prisilla AA, Guo YL, Jan YK, Lin CY, Lin FY, Liau BY, Tsai JY, Ardhianto P, Pusparani Y, Lung CW. An approach to the diagnosis of lumbar disc herniation using deep learning models. Front Bioeng Biotechnol 2023; 11:1247112. [PMID: 37731760 PMCID: PMC10507264 DOI: 10.3389/fbioe.2023.1247112] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/27/2023] [Accepted: 08/09/2023] [Indexed: 09/22/2023]
Abstract
Background: In magnetic resonance imaging (MRI), lumbar disc herniation (LDH) detection is challenging due to the various shapes, sizes, angles, and regions associated with bulges, protrusions, extrusions, and sequestrations. Lumbar abnormalities in MRI can be detected automatically using deep learning methods. As deep learning models gain recognition, they may assist in diagnosing LDH from MRI images and provide an initial interpretation in clinical settings. The YOU ONLY LOOK ONCE (YOLO) model series is often used to train deep learning algorithms for real-time biomedical image detection and prediction. This study aims to determine which YOLO models (YOLOv5, YOLOv6, and YOLOv7) perform well in detecting LDH in different regions of the lumbar intervertebral disc. Materials and methods: The methodology involves several steps, including converting DICOM images to JPEG, reviewing and selecting MRI slices for labeling and augmentation using ROBOFLOW, and constructing YOLOv5x, YOLOv6, and YOLOv7 models from the dataset. The training dataset was combined with the radiologist's labeling and annotation, and the deep learning models were then trained on the training/validation dataset. Results: Our results showed that the 550-image dataset, with augmentation (AUG) or without augmentation (non-AUG), gives satisfactory training performance for LDH detection with YOLOv5x. The AUG dataset provided slightly higher overall accuracy than the non-AUG dataset. YOLOv5x showed the highest performance, with 89.30% mAP, compared to YOLOv6 and YOLOv7. YOLOv5x on the non-AUG dataset also showed balanced LDH detection across the L2-L3, L3-L4, L4-L5, and L5-S1 regions, each above 90%, illustrating the competitiveness of the non-AUG dataset for detecting LDH. Conclusion: Using YOLOv5x and the 550-image dataset, LDH can be detected with promising performance on both the non-AUG and AUG datasets.
By utilizing the most appropriate YOLO model, clinicians have a greater chance of diagnosing LDH early and preventing adverse effects for their patients.
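Detection scores such as the 89.30% mAP above are computed by matching predicted boxes to ground-truth annotations at an intersection-over-union (IoU) threshold. A minimal sketch of the IoU computation (not the authors' code; box coordinates are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)   # clamp to 0 if disjoint
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A hypothetical predicted LDH box vs. a ground-truth annotation:
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # → 0.14285714285714285 (400 / 2800)
```

mAP then averages precision over recall levels, counting a prediction as correct when its IoU with an unmatched ground-truth box exceeds the chosen threshold (commonly 0.5).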
Affiliation(s)
- Ardha Ardea Prisilla
- Department of Fashion Design, LaSalle College Jakarta, Jakarta, Indonesia
- Department of Digital Media Design, Asia University, Taichung, Taiwan
- Yue Leon Guo
- Environmental and Occupational Medicine, College of Medicine, National Taiwan University (NTU) and NTU Hospital, Taipei, Taiwan
- Graduate Institute of Environmental and Occupational Health Sciences, College of Public Health, National Taiwan University, Taipei, Taiwan
- National Institute of Environmental Health Sciences, National Health Research Institutes, Miaoli, Taiwan
- Yih-Kuen Jan
- Rehabilitation Engineering Lab, Department of Kinesiology and Community Health, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Chih-Yang Lin
- Department of Mechanical Engineering, National Central University, Taoyuan, Taiwan
- Fu-Yu Lin
- Department of Neurology, China Medical University Hospital, Taichung, Taiwan
- Ben-Yi Liau
- Department of Automatic Control Engineering, Feng Chia University, Taichung, Taiwan
- Jen-Yung Tsai
- Department of Digital Media Design, Asia University, Taichung, Taiwan
- Peter Ardhianto
- Department of Visual Communication Design, Soegijapranata Catholic University, Semarang, Indonesia
- Yori Pusparani
- Department of Digital Media Design, Asia University, Taichung, Taiwan
- Department of Visual Communication Design, Budi Luhur University, Jakarta, Indonesia
- Chi-Wen Lung
- Rehabilitation Engineering Lab, Department of Kinesiology and Community Health, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Creative Product Design, Asia University, Taichung, Taiwan
17
Ghosh A, Patton D, Bose S, Henry MK, Ouyang M, Huang H, Vossough A, Sze R, Sotardi S, Francavilla M. A Patch-Based Deep Learning Approach for Detecting Rib Fractures on Frontal Radiographs in Young Children. J Digit Imaging 2023; 36:1302-1313. [PMID: 36897422 PMCID: PMC10406785 DOI: 10.1007/s10278-023-00793-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/07/2022] [Revised: 01/30/2023] [Accepted: 02/08/2023] [Indexed: 03/11/2023]
Abstract
Chest radiography is the modality of choice for the identification of rib fractures in young children, and there is value in developing computer-aided rib fracture detection for this age group. However, automated identification of rib fractures on chest radiographs can be challenging due to the need for high spatial resolution in deep learning frameworks. A patch-based deep learning algorithm was developed to automatically detect rib fractures on frontal chest radiographs in children under 2 years old. A total of 845 chest radiographs of children 0-2 years old (median: 4 months old) were manually segmented for rib fractures by radiologists and served as the ground-truth labels. Image analysis utilized a patch-based sliding-window technique to meet the high-resolution requirements for fracture detection. Standard transfer learning techniques were applied to ResNet-50 and ResNet-18 architectures. Area under the curve for precision-recall (AUC-PR) and receiver operating characteristic (AUC-ROC), along with patch- and whole-image classification metrics, are reported. On the test patches, the ResNet-50 model showed an AUC-PR and AUC-ROC of 0.25 and 0.77, respectively, and the ResNet-18 showed an AUC-PR of 0.32 and AUC-ROC of 0.76. At the whole-radiograph level, the ResNet-50 had an AUC-ROC of 0.74 with 88% sensitivity and 43% specificity in identifying rib fractures, and the ResNet-18 had an AUC-ROC of 0.75 with 75% sensitivity and 60% specificity. This work demonstrates the utility of patch-based analysis for the detection of rib fractures in children under 2 years old. Future work with large, multi-institutional cohorts will improve the generalizability of these findings to patients with suspicion of child abuse.
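The patch-based sliding-window step can be sketched as follows; the patch and stride sizes are illustrative assumptions, not the paper's settings. Each overlapping window keeps the radiograph's native resolution, which is the point of the technique: the classifier sees full-detail crops instead of a downsampled whole image.

```python
import numpy as np

def sliding_patches(image, patch=64, stride=32):
    """Extract overlapping square patches from a 2-D image array.

    A stride smaller than the patch size yields overlapping windows, so
    a fracture near a patch border still appears centered in a neighbor.
    Patch/stride values here are illustrative.
    """
    h, w = image.shape
    out = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            out.append(image[y:y + patch, x:x + patch])
    return np.stack(out)

img = np.zeros((128, 128))          # stand-in for a radiograph
patches = sliding_patches(img)
print(patches.shape)                # → (9, 64, 64): a 3x3 grid of windows
```

Each patch is then fed to the CNN, and patch-level scores are aggregated (e.g., by maximum) into a whole-image prediction.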
Affiliation(s)
- Adarsh Ghosh
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA.
- Department of Radiology, Cincinnati Children's Hospital and Medical Center, Cincinnati, OH, USA.
- Cincinnati Children's Burnet Campus, 3333 Burnet Avenue, Cincinnati, OH, 45229, USA.
- Daniella Patton
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Saurav Bose
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- M Katherine Henry
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Safe Place: Center for Child Protection and Health, Division of General Pediatrics, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Minhui Ouyang
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Hao Huang
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Arastoo Vossough
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Raymond Sze
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Susan Sotardi
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Cincinnati Children's Hospital and Medical Center, Cincinnati, OH, USA
- Michael Francavilla
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Cincinnati Children's Hospital and Medical Center, Cincinnati, OH, USA
18
GadAllah MT, Mohamed AENA, Hefnawy AA, Zidan HE, El-Banby GM, Mohamed Badawy S. Convolutional Neural Networks Based Classification of Segmented Breast Ultrasound Images – A Comparative Preliminary Study. 2023 Intelligent Methods, Systems, and Applications (IMSA) 2023. [DOI: 10.1109/imsa58542.2023.10217585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 09/02/2023]
Affiliation(s)
- Abd El-Naser A. Mohamed
- Menoufia University, Faculty of Electronic Engineering, Electronics and Electrical Communications Engineering Department, Menoufia, Egypt
- Alaa A. Hefnawy
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Hassan E. Zidan
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Ghada M. El-Banby
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
- Samir Mohamed Badawy
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
19
Muchuchuti S, Viriri S. Retinal Disease Detection Using Deep Learning Techniques: A Comprehensive Review. J Imaging 2023; 9:84. [PMID: 37103235 PMCID: PMC10145952 DOI: 10.3390/jimaging9040084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/28/2023] [Revised: 04/02/2023] [Accepted: 04/07/2023] [Indexed: 04/28/2023]
Abstract
Millions of people are affected by retinal abnormalities worldwide. Early detection and treatment of these abnormalities could arrest further progression, saving multitudes from avoidable blindness. Manual disease detection is time-consuming, tedious and lacks repeatability. There have been efforts to automate ocular disease detection, riding on the successes of the application of Deep Convolutional Neural Networks (DCNNs) and vision transformers (ViTs) for Computer-Aided Diagnosis (CAD). These models have performed well; however, challenges remain owing to the complex nature of retinal lesions. This work reviews the most common retinal pathologies, provides an overview of prevalent imaging modalities and presents a critical evaluation of current deep-learning research for the detection and grading of glaucoma, diabetic retinopathy, age-related macular degeneration and multiple retinal diseases. The work concludes that CAD, through deep learning, will be increasingly vital as an assistive technology. As future work, there is a need to explore the potential impact of using ensemble CNN architectures in multiclass, multilabel tasks. Efforts should also be expended on improving model explainability to win the trust of clinicians and patients.
Affiliation(s)
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4001, South Africa
20
Zhou B, Rao X, Xing H, Ma Y, Wang F, Rong L. A convolutional neural network-based system for detecting early gastric cancer in white-light endoscopy. Scand J Gastroenterol 2023; 58:157-162. [PMID: 36000979 DOI: 10.1080/00365521.2022.2113427] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Indexed: 02/04/2023]
Abstract
BACKGROUND White-light endoscopy (WLE) is the main and standard modality for the detection of early gastric cancer (EGC), but the detection rate of EGC is not yet satisfactory. In this single-center retrospective study, we developed a convolutional neural network (CNN)-based system to automatically detect EGC in WLE images. METHODS An EGC detection system was constructed based on the CNN architecture EfficientDet. We trained the system with a data set of 4527 images from 130 cases (cancerous images, 1737; noncancerous images, 2790), then tested its performance with a data set of 1243 images from 64 cases (cancerous images, 445; noncancerous images, 798). RESULTS In the case-based analysis, the system detected EGC in 63 of 64 cases, a sensitivity of 98.4%. In the image-based analysis, the accuracy was 88.3%; the sensitivity, specificity, positive predictive value and negative predictive value were 84.5%, 90.5%, 83.2% and 91.3%, respectively. The most common cause of false positives was gastritis (57.9%); the most common cause of false negatives was a lesion diameter of 10 mm or less (44.9%). CONCLUSION Our CNN-based EGC detection system achieved satisfactory sensitivity for detecting EGC in WLE images and shows great potential for assisting endoscopists in the detection of EGC.
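The image-level metrics above are internally consistent, and a short sketch can reconstruct them from the reported class sizes. The TP/FN/TN/FP counts below are derived from the quoted percentages and the 445/798 image split; they are an illustration, not the authors' raw confusion matrix.

```python
# Confusion-matrix counts reconstructed from the reported test set
# (445 cancerous / 798 noncancerous images) and the quoted sensitivity
# and specificity — illustrative, not taken from the paper directly.
tp, fn = 376, 69    # cancerous images: detected vs. missed
tn, fp = 722, 76    # noncancerous images: correctly cleared vs. flagged

sensitivity = tp / (tp + fn)              # recall on cancerous images
specificity = tn / (tn + fp)              # recall on noncancerous images
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(f"sens={sensitivity:.1%} spec={specificity:.1%} "
      f"ppv={ppv:.1%} npv={npv:.1%} acc={accuracy:.1%}")
# → sens=84.5% spec=90.5% ppv=83.2% npv=91.3% acc=88.3%
```

All five values match the abstract to one decimal place, which is a useful sanity check when reading reported metrics.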
Affiliation(s)
- Bin Zhou
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
- Xiaolong Rao
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
- Haoqiang Xing
- Thunder Software Technology Co., Ltd, Beijing, China
- Yongchen Ma
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
- Feng Wang
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
- Long Rong
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
21
Cantone M, Marrocco C, Tortorella F, Bria A. Convolutional Networks and Transformers for Mammography Classification: An Experimental Study. Sensors (Basel) 2023; 23:1229. [PMID: 36772268 PMCID: PMC9921468 DOI: 10.3390/s23031229] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 11/30/2022] [Revised: 01/13/2023] [Accepted: 01/18/2023] [Indexed: 05/31/2023]
Abstract
Convolutional Neural Networks (CNNs) have received a large share of research in mammography image analysis due to their capability of extracting hierarchical features directly from raw data. Recently, vision transformers have emerged as a viable alternative to CNNs in medical imaging, in some cases performing on par with or better than their convolutional counterparts. In this work, we conduct an extensive experimental study comparing the most recent CNN and vision transformer architectures for whole-mammogram classification. We selected, trained and tested 33 different models, 19 convolutional- and 14 transformer-based, on the largest publicly available mammography image database, OMI-DB. We also analyzed performance at eight different image resolutions and considered each lesion category in isolation (masses, calcifications, focal asymmetries, architectural distortions). Our findings confirm the potential of vision transformers, which performed on par with traditional CNNs like ResNet, but at the same time show the superiority of modern convolutional networks like EfficientNet.
Affiliation(s)
- Marco Cantone
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Claudio Marrocco
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Francesco Tortorella
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, SA, Italy
- Alessandro Bria
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
22
Neural Network in the Analysis of the MR Signal as an Image Segmentation Tool for the Determination of T1 and T2 Relaxation Times with Application to Cancer Cell Culture. Int J Mol Sci 2023; 24:1554. [PMID: 36675075 PMCID: PMC9861169 DOI: 10.3390/ijms24021554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/05/2022] [Revised: 12/31/2022] [Accepted: 01/03/2023] [Indexed: 01/14/2023]
Abstract
Artificial intelligence is entering medical research, and manufacturers of diagnostic instruments now include algorithms based on neural networks. Neural networks are quickly spreading through all branches of medical research and beyond: a search of the PubMed database over the last 5 years (2017 to 2021) returns more than 10,500 papers for the query "neural network in medicine". Deep learning algorithms are of particular importance in oncology. This paper presents the use of neural networks to analyze the magnetic resonance imaging (MRI) images used to determine MRI relaxometry of the samples. Relaxometry is becoming an increasingly common tool in diagnostics. The aim of this work was to optimize the processing time of DICOM images by using a neural network implemented in the MATLAB package by The MathWorks with the patternnet function. The neural network helps to eliminate regions containing no objects whose characteristics match the phenomenon of longitudinal or transverse MRI relaxation; the result is the elimination of aerated spaces from the MRI images. The whole algorithm was implemented as an application in the MATLAB package.
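The study itself works in MATLAB with patternnet; as a language-neutral illustration of the signal model behind relaxometry, here is a sketch in Python/NumPy of recovering T2 from a mono-exponential transverse decay. The echo times and T2 value are synthetic, noise-free assumptions, not the paper's data.

```python
import numpy as np

# Mono-exponential T2 decay, S(t) = S0 * exp(-t / T2): the signal model
# underlying transverse-relaxation (T2) mapping in MRI relaxometry.
t = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # echo times in ms (illustrative)
T2_true, S0 = 50.0, 1000.0
signal = S0 * np.exp(-t / T2_true)               # synthetic noise-free signal

# Log-linearize: ln S = ln S0 - t / T2, then fit a straight line;
# the slope is -1/T2, so T2 is recovered from the fitted slope.
slope, intercept = np.polyfit(t, np.log(signal), 1)
T2_est = -1.0 / slope
print(round(T2_est, 1))   # → 50.0 on noise-free data
```

On noisy data a nonlinear least-squares fit of the exponential itself is usually preferred, since the log transform distorts the noise distribution.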
23
Ayana G, Dese K, Dereje Y, Kebede Y, Barki H, Amdissa D, Husen N, Mulugeta F, Habtamu B, Choe SW. Vision-Transformer-Based Transfer Learning for Mammogram Classification. Diagnostics (Basel) 2023; 13:diagnostics13020178. [PMID: 36672988 PMCID: PMC9857963 DOI: 10.3390/diagnostics13020178] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 12/27/2022] [Accepted: 12/27/2022] [Indexed: 01/06/2023] Open
Abstract
Breast mass identification is a crucial procedure during mammogram-based early breast cancer diagnosis. However, it is difficult to determine whether a breast lump is benign or cancerous at early stages. Convolutional neural networks (CNNs) have been used to solve this problem and have provided useful advancements. However, CNNs focus only on a certain portion of the mammogram while ignoring the rest, and they incur computational complexity because of multiple convolutions. Recently, vision transformers have been developed as a technique to overcome such limitations of CNNs, ensuring better or comparable performance in natural image classification. However, the utility of this technique has not been thoroughly investigated in the medical image domain. In this study, we developed a transfer learning technique based on vision transformers to classify breast mass mammograms. The area under the receiver operating characteristic curve of the new model was estimated as 1 ± 0, thus outperforming the CNN-based transfer-learning models and vision transformer models trained from scratch. The technique can, hence, be applied in a clinical setting to improve the early diagnosis of breast cancer.
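For readers unfamiliar with vision transformers, the tokenization step that lets them attend over the whole mammogram (rather than a local receptive field) is easy to sketch. This NumPy fragment is a generic illustration of patch embedding, not the authors' model; the patch size and toy image are assumptions.

```python
import numpy as np

# Core ViT preprocessing: split an image into non-overlapping patches and
# flatten each patch into a token. A real ViT then linearly projects the
# tokens and feeds them to self-attention layers.
def image_to_patches(img, patch=16):
    """Split an HxW image into patch tokens of shape (num_patches, patch*patch)."""
    h, w = img.shape
    assert h % patch == 0 and w % patch == 0
    return (img.reshape(h // patch, patch, w // patch, patch)
               .transpose(0, 2, 1, 3)
               .reshape(-1, patch * patch))

mammogram = np.arange(64 * 64, dtype=float).reshape(64, 64)  # toy grayscale ROI
tokens = image_to_patches(mammogram, patch=16)               # 16 tokens of length 256
```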
Collapse
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Yisak Dereje
- Department of Information Engineering, Marche Polytechnic University, 60121 Ancona, Italy
- Yonas Kebede
- Biomedical Engineering Unit, Black Lion Hospital, Addis Ababa University, Addis Ababa 1000, Ethiopia
- Hika Barki
- Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Republic of Korea
- Dechassa Amdissa
- Department of Basic and Applied Science for Engineering, Sapienza University of Rome, 00161 Roma, Italy
- Nahimiya Husen
- Department of Bioengineering and Robotics, Campus Bio-Medico University of Rome, 00128 Roma, Italy
- Fikadu Mulugeta
- Center of Biomedical Engineering, Addis Ababa Institute of Technology, Addis Ababa University, Addis Ababa 1000, Ethiopia
- Bontu Habtamu
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Se-Woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
24
Walsh R, Tardy M. A Comparison of Techniques for Class Imbalance in Deep Learning Classification of Breast Cancer. Diagnostics (Basel) 2022; 13:67. [PMID: 36611358 PMCID: PMC9818528 DOI: 10.3390/diagnostics13010067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 12/19/2022] [Accepted: 12/20/2022] [Indexed: 12/28/2022] Open
Abstract
Tools based on deep learning models have been created in recent years to aid radiologists in the diagnosis of breast cancer from mammograms. However, the datasets used to train these models may suffer from class imbalance, i.e., there are often fewer malignant samples than benign or healthy cases, which can bias the model towards the healthy class. In this study, we systematically evaluate several popular techniques to deal with this class imbalance, namely, class weighting, over-sampling, and under-sampling, as well as a synthetic lesion generation approach to increase the number of malignant samples. These techniques are applied when training on three diverse Full-Field Digital Mammography datasets, and tested on in-distribution and out-of-distribution samples. The experiments show that a greater imbalance is associated with a greater bias towards the majority class, which can be counteracted by any of the standard class imbalance techniques. On the other hand, these methods provide no benefit to model performance with respect to the Area Under the Receiver Operating Characteristic curve (AUC-ROC), and indeed under-sampling leads to a reduction of 0.066 in AUC in the case of a 19:1 benign to malignant imbalance. Our synthetic lesion methodology leads to better performance in most cases, with increases of up to 0.07 in AUC on out-of-distribution test sets over the next best experiment.
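The standard remedies compared in this abstract are easy to make concrete. A minimal NumPy sketch of inverse-frequency class weighting and random over-sampling, using a hypothetical 19:1 imbalance to mirror the paper's worst case (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(42)
labels = np.array([0] * 190 + [1] * 10)  # 19:1 benign:malignant imbalance

def class_weights(y):
    """w_c = n_samples / (n_classes * n_c), the usual 'balanced' weighting."""
    classes, counts = np.unique(y, return_counts=True)
    return {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}

def oversample(y):
    """Indices that resample each class (with replacement) up to the majority count."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = [rng.choice(np.flatnonzero(y == c), size=target, replace=True)
           for c in classes]
    return np.concatenate(idx)

w = class_weights(labels)          # malignant class weighted ~19x heavier
balanced = labels[oversample(labels)]
```

Under-sampling is the mirror image (sample each class down to the minority count); the paper's finding is that none of these move AUC-ROC much, unlike its synthetic-lesion generation.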
Collapse
Affiliation(s)
- Ricky Walsh
- ISTIC, Campus Beaulieu, Université de Rennes 1, 35700 Rennes, France
- Hera-MI SAS, 44800 Saint-Herblain, France
- Mickael Tardy
- Hera-MI SAS, 44800 Saint-Herblain, France
- Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, 44000 Nantes, France
25
Yang Y. Application of LSTM Neural Network Technology Embedded in English Intelligent Translation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1085577. [PMID: 36203717 PMCID: PMC9532074 DOI: 10.1155/2022/1085577] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 08/23/2022] [Accepted: 09/01/2022] [Indexed: 11/30/2022]
Abstract
With the rapid development of computer technology, the loss of long-distance information during transmission is a prominent problem in English machine translation. A self-attention (SA) mechanism is combined with a convolutional neural network (CNN) and a long short-term memory (LSTM) network. An English intelligent translation model based on LSTM-SA is proposed, and its performance is compared with other deep neural network models. The study adds SA to the LSTM neural network model and constructs an English translation model with LSTM-SA attention embedding. Compared with other deep learning algorithms such as 3RNN and GRU, the LSTM-SA algorithm has a faster convergence speed and a lower loss value, with the loss finally stabilizing at about 8.6. Under the three adaptability values, the accuracy of the LSTM-SA network structure is higher than that of LSTM, and when the adaptability is 1, the accuracy of the LSTM-SA network improves the fastest, reaching nearly 20%. Compared with other deep learning algorithms, the LSTM-SA algorithm achieves a better translation level map under the three hidden layers. The proposed LSTM-SA model can better carry out English intelligent translation, enhance the representation of source-language context information, and improve the performance and quality of the English machine translation model.
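The SA component grafted onto the LSTM here is, at its core, scaled dot-product self-attention. A minimal single-head NumPy sketch (no learned query/key/value projections, which a real model such as the paper's would add):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """x: (seq_len, d_model). Uses x itself as queries, keys, and values."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)        # (seq_len, seq_len) pairwise similarities
    weights = softmax(scores, axis=-1)   # each row is a distribution over positions
    return weights @ x, weights          # context-enhanced token representations

x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy 3-token sequence
out, attn = self_attention(x)
```

In an LSTM-SA model, `x` would be the LSTM's hidden states, so every output position can draw on the whole source sentence rather than only the recurrent summary — the "long-distance information" the abstract is concerned with.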
Collapse
Affiliation(s)
- Yifang Yang
- School of Humanities, Hunan City University, Yiyang, Hunan 413000, China
26
Mahant SS, Varma AR. Artificial Intelligence in Breast Ultrasound: The Emerging Future of Modern Medicine. Cureus 2022; 14:e28945. [PMID: 36237807 PMCID: PMC9547651 DOI: 10.7759/cureus.28945] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 09/08/2022] [Indexed: 11/25/2022] Open
Abstract
Artificial intelligence (AI) enjoys enormous and growing popularity in today's world, particularly in the identification of various kinds of images, and it has therefore been widely used in ultrasound of the breast. AI can also perform quantitative evaluation, which further helps maintain diagnostic accuracy. Moreover, breast cancer is the most common cancer in women, posing a severe threat to women's health, and its early detection is usually associated with a better prognosis. As a result, using AI in breast cancer screening and detection is highly crucial. This brief review article highlights the concept of AI from the perspective of breast ultrasound. It focuses on early AI, i.e., traditional machine learning, and on deep learning algorithms. The use of AI in ultrasound, mammography, magnetic resonance imaging, and nuclear medicine imaging, and in the classification of breast lesions, is broadly explained, along with the challenges faced in bringing AI into daily practice.
27
Tang Z, Zhu Y, Lu X, Wu D, Fan X, Shen J, Xiao L, Zhao J, Xie R, Xiao L. Deep Learning-Based Prediction of Hematoma Expansion Using a Single Brain Computed Tomographic Slice in Patients With Spontaneous Intracerebral Hemorrhages. World Neurosurg 2022; 165:e128-e136. [PMID: 35680084 DOI: 10.1016/j.wneu.2022.05.109] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Revised: 05/24/2022] [Accepted: 05/24/2022] [Indexed: 12/14/2022]
Abstract
OBJECTIVES We aimed to predict hematoma expansion in intracerebral hemorrhage (ICH) patients by using deep learning techniques. METHODS We retrospectively collected data from ICH patients treated between May 2015 and May 2019. Head computed tomography (CT) scans were performed at admission, and at 6 hours, 24 hours, and 72 hours after admission; CT scans were mandatory when neurologic deficits occurred. Univariate and multivariate analyses were conducted to illustrate the association between clinical variables and hematoma expansion. A convolutional neural network (CNN) was adopted to predict hematoma expansion based on brain CT slices. In addition, 5 machine learning methods, including support vector machine, multi-layer perceptron, naive Bayes, decision tree, and random forest, were also applied to predict hematoma expansion based on clinical variables for comparison. RESULTS A total of 223 patients were included. Patients' older age (odds ratio [95% confidence interval]: 1.783 [1.417-1.924]), cerebral hemorrhage breaking into the ventricle (2.524 [1.291-1.778]), coagulopathy (2.341 [1.677-3.454]), and baseline National Institutes of Health Stroke Scale (1.545 [1.132-3.203]) and Glasgow Coma Scale scores (0.782 [0.432-0.918]) were independently associated with hematoma expansion. After 4-5 epochs, the CNN framework was well trained; its average sensitivity, specificity, and accuracy were 0.9197, 0.8837, and 0.9058, respectively. Compared with the 5 machine learning methods based on clinical variables, the CNN achieved better performance. CONCLUSIONS More than 90% of hematomas with or without expansion were precisely classified by deep learning in this study, better than other methods based on clinical variables only. Deep learning could favorably predict hematoma expansion from non-contrast CT scan images.
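The three figures reported for the CNN come straight from a binary confusion matrix. A plain-Python sketch with hypothetical counts (not the study's data):

```python
# Sensitivity, specificity, and accuracy from a binary confusion matrix,
# the three metrics the paper reports for its hematoma-expansion CNN.
def confusion_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)               # true-positive rate: expansions caught
    specificity = tn / (tn + fp)               # true-negative rate: stable cases cleared
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

# Hypothetical example counts chosen to land near the paper's reported range:
sens, spec, acc = confusion_metrics(tp=46, fn=4, fp=6, tn=44)
```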
Collapse
Affiliation(s)
- Zhiri Tang
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Jiangxi, P.R. China; Department of Electronic Science and Technology, School of Physics and Technology, Wuhan University, Wuhan, P.R. China
- Yiqin Zhu
- Department of Neurosurgery, National Center for Neurological Disorders, Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Neurosurgical Institute of Fudan University, Shanghai Clinical Medical Center of Neurosurgery, Fudan University Huashan Hospital, Shanghai Medical College-Fudan University, Shanghai, China; Department of Nursing, Huashan Hospital, Fudan University, Shanghai, P.R. China
- Xin Lu
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Jiangxi, P.R. China; Graduate School of Jiangxi Medical College, Nanchang University, Jiangxi, P.R. China
- Dengjun Wu
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Jiangxi, P.R. China; Graduate School of Jiangxi Medical College, Nanchang University, Jiangxi, P.R. China
- Xinlin Fan
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Jiangxi, P.R. China; Graduate School of Jiangxi Medical College, Nanchang University, Jiangxi, P.R. China
- Junjun Shen
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Jiangxi, P.R. China; Graduate School of Jiangxi Medical College, Nanchang University, Jiangxi, P.R. China
- Limin Xiao
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Jiangxi, P.R. China
- Jianlan Zhao
- Department of Neurosurgery, National Center for Neurological Disorders, Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Neurosurgical Institute of Fudan University, Shanghai Clinical Medical Center of Neurosurgery, Fudan University Huashan Hospital, Shanghai Medical College-Fudan University, 12 Wulumuqi Zhong Rd., Shanghai 200040, China
- Rong Xie
- Department of Neurosurgery, National Center for Neurological Disorders, Shanghai Key Laboratory of Brain Function and Restoration and Neural Regeneration, Neurosurgical Institute of Fudan University, Shanghai Clinical Medical Center of Neurosurgery, Fudan University Huashan Hospital, Shanghai Medical College-Fudan University, 12 Wulumuqi Zhong Rd., Shanghai 200040, China
- Limin Xiao
- Department of Neurosurgery, the First Affiliated Hospital of Nanchang University, Jiangxi, P.R. China
28
Ramachandran A, Bhalla D, Rangarajan K, Pramanik R, Banerjee S, Arora C. Building and evaluating an artificial intelligence algorithm: A practical guide for practicing oncologists. Artif Intell Cancer 2022; 3:42-53. [DOI: 10.35713/aic.v3.i3.42] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 04/09/2022] [Accepted: 06/17/2022] [Indexed: 02/06/2023] Open
Abstract
The use of machine learning and deep learning has enabled many applications previously thought impossible. Among all medical fields, cancer care is arguably the most significantly impacted, with precision medicine now truly being a possibility. The effect of these technologies, loosely known as artificial intelligence, is particularly striking in fields involving images (such as radiology and pathology) and fields involving large amounts of data (such as genomics). Practicing oncologists are often confronted with new technologies claiming to predict response to therapy or the genomic make-up of patients. Understanding these new claims and technologies requires a deep understanding of the field. In this review, we provide an overview of the basis of deep learning. We describe various common tasks and their data requirements so that oncologists are equipped to start such projects and to evaluate algorithms presented to them.
Collapse
Affiliation(s)
- Anupama Ramachandran
- Department of Radiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Deeksha Bhalla
- Department of Radiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Krithika Rangarajan
- Department of Radiology, All India Institute of Medical Sciences New Delhi, New Delhi 110029, India
- School of Information Technology, Indian Institute of Technology, Delhi 110016, India
- Raja Pramanik
- Department of Medical Oncology, Dr. B.R.A. Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi 110029, India
- Subhashis Banerjee
- Department of Computer Science, Indian Institute of Technology, Delhi 110016, India
- Chetan Arora
- Department of Computer Science, Indian Institute of Technology, Delhi 110016, India
29
An Efficient Method for Breast Mass Classification Using Pre-Trained Deep Convolutional Networks. MATHEMATICS 2022. [DOI: 10.3390/math10142539] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
Abstract
Masses are early indicators of breast cancer, and distinguishing between benign and malignant masses is a challenging problem. Many machine learning- and deep learning-based methods have been proposed to distinguish benign masses from malignant ones on mammograms, but their performance is not satisfactory. Though deep learning has been shown to be effective in a variety of applications, it is challenging to apply to mass classification, since it requires a large dataset for training and the number of available annotated mammograms is limited. A common approach to overcome this issue is to employ a pre-trained model and fine-tune it on mammograms. Though this works well, it still involves fine-tuning a huge number of learnable parameters with a small number of annotated mammograms. To tackle this small-sample problem in the training or fine-tuning of CNN models, we introduce a new method that uses a pre-trained CNN, without any modification or fine-tuning of its learnable parameters, as an end-to-end model for mass classification. The training phase only identifies the neurons in the classification layer that yield the highest activation for each class, and the activations of these neurons are then used to classify an unknown mass ROI. We evaluated the proposed approach using different CNN models on public-domain benchmark datasets such as DDSM and INbreast. The results show that it outperforms state-of-the-art deep learning-based methods.
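The training-free idea — pick, per class, the frozen classification-layer neuron that fires most strongly on that class's training ROIs, then classify new ROIs by comparing those neurons — can be sketched with simulated activations. Everything below (dimensions, which neurons are boosted) is an assumption for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def select_neurons(train_acts, train_labels, n_classes=2):
    """For each class, the index of the neuron with highest mean activation."""
    return [int(train_acts[train_labels == c].mean(axis=0).argmax())
            for c in range(n_classes)]

def classify(acts, neurons):
    """Predict the class whose selected neuron is most active on each sample."""
    return np.argmax(acts[:, neurons], axis=1)

# Simulated 1000-dim frozen-classifier activations: neuron 7 fires on
# "benign" ROIs, neuron 42 on "malignant" ROIs (hypothetical indices).
train_labels = np.repeat([0, 1], 50)
train_acts = rng.normal(0, 1, (100, 1000))
train_acts[train_labels == 0, 7] += 10
train_acts[train_labels == 1, 42] += 10

neurons = select_neurons(train_acts, train_labels)  # should recover [7, 42]
test_acts = rng.normal(0, 1, (4, 1000))
test_acts[:2, 7] += 10    # two "benign" ROIs
test_acts[2:, 42] += 10   # two "malignant" ROIs
preds = classify(test_acts, neurons)
```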
30
Samee NA, Alhussan AA, Ghoneim VF, Atteia G, Alkanhel R, Al-antari MA, Kadah YM. A Hybrid Deep Transfer Learning of CNN-Based LR-PCA for Breast Lesion Diagnosis via Medical Breast Mammograms. SENSORS 2022; 22:s22134938. [PMID: 35808433 PMCID: PMC9269713 DOI: 10.3390/s22134938] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Revised: 06/22/2022] [Accepted: 06/27/2022] [Indexed: 12/16/2022]
Abstract
One of the most promising research directions in the healthcare industry and the scientific community is AI-based applications for real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recent emerging AI-based techniques that allow rapid learning progress and improve medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain in investigating the independence among the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale input images. To achieve this goal, two different image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by deep learning models. A new hybrid processing technique based on Logistic Regression (LR) and Principal Components Analysis (PCA), called LR-PCA, is presented. This process helps to select the significant principal components (PCs) for further use in classification. The proposed CAD system was examined using two public benchmark datasets, INbreast and mini-MIAS, and achieved the highest performance accuracies of 98.60% and 98.80%, respectively. Such a CAD system appears useful and reliable for breast cancer diagnosis.
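The first of the two ideas — a three-channel pseudo-colored input with the untouched grayscale image in channel 0 — can be sketched generically. Real CLAHE would come from OpenCV or scikit-image; here a simple percentile-based contrast stretch stands in for both enhancement channels (an assumption for the sketch, not the paper's exact preprocessing):

```python
import numpy as np

def stretch(img, lo_pct, hi_pct):
    """Percentile-based intensity stretch of a grayscale image to [0, 1]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def pseudo_color(gray):
    """(H, W) grayscale in [0,1] -> (H, W, 3): original + two enhanced channels."""
    return np.stack([gray,                      # channel 0: original, preserved
                     stretch(gray, 2, 98),      # channel 1: stand-in for CLAHE
                     stretch(gray, 10, 90)],    # channel 2: stand-in for intensity adjustment
                    axis=-1)

gray = np.linspace(0, 1, 64 * 64).reshape(64, 64)  # toy mammogram patch
rgb = pseudo_color(gray)                           # ready for a 3-channel CNN backbone
```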
Collapse
Affiliation(s)
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia; (N.A.S.); (G.A.); (R.A.)
- Amel A. Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Correspondence:
- Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Reem Alkanhel
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Korea
- Yasser M. Kadah
- Electrical and Computer Engineering Department, King Abdulaziz University, Jeddah 22254, Saudi Arabia
- Biomedical Engineering Department, Cairo University, Giza 12613, Egypt
31
Sreenivasu SVN, Gomathi S, Kumar MJ, Prathap L, Madduri A, Almutairi KMA, Alonazi WB, Kali D, Jayadhas SA. Dense Convolutional Neural Network for Detection of Cancer from CT Images. BIOMED RESEARCH INTERNATIONAL 2022; 2022:1293548. [PMID: 35769667 PMCID: PMC9236787 DOI: 10.1155/2022/1293548] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 04/17/2022] [Accepted: 04/23/2022] [Indexed: 11/17/2022]
Abstract
In this paper, we develop a detection module with strong training and testing to build a dense convolutional neural network model. The model is designed so that it is trained with the features necessary for optimal modelling of cancer detection. The method involves preprocessing of computerized tomography (CT) images for optimal classification at the testing stage. A 10-fold cross-validation is conducted to test the reliability of the model for cancer detection, and the experimental validation is conducted in Python. The results show that the model offers robust detection of cancer instances on large image datasets, providing 94% accuracy, higher than other methods, and helps to reduce detection errors when classifying cancer instances compared with several existing methods.
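The 10-fold cross-validation protocol mentioned here can be sketched in a few lines of NumPy; the key property is that each sample appears in exactly one validation fold.

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, val

# Example: 100 samples, 10 folds of 10; a model would be trained/scored per split.
splits = list(k_fold_indices(100, k=10))
```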
Collapse
Affiliation(s)
- S. V. N. Sreenivasu
- Department of Computer Science and Engineering, Narasaraopeta Engineering College, Narasaraopeta, Andhra Pradesh 522601, India
- S. Gomathi
- Department of Information Technology, Sri Sairam Engineering College, Chennai, Tamil Nadu 602109, India
- M. Jogendra Kumar
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh 522502, India
- Lavanya Prathap
- Department of Anatomy, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu 600077, India
- Abhishek Madduri
- Department of Engineering Management, Duke University, North Carolina 27708, USA
- Khalid M. A. Almutairi
- Department of Community Health Sciences, College of Applied Medical Sciences, King Saud University, P. O. Box: 10219, Riyadh-11433, Saudi Arabia
- Wadi B. Alonazi
- Health Administration Department, College of Business Administration, King Saud University, PO Box: 71115, Riyadh-11587, Saudi Arabia
- D. Kali
- Department of Mechanical Engineering, Ryerson University, Canada
32
Huang Z, Ni Y, Yu Q, Li J, Fan L, Eskin NAM. Deep learning in food science: An insight in evaluating Pickering emulsion properties by droplets classification and quantification via object detection algorithm. Adv Colloid Interface Sci 2022; 304:102663. [PMID: 35430426 DOI: 10.1016/j.cis.2022.102663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Revised: 04/03/2022] [Accepted: 04/03/2022] [Indexed: 11/18/2022]
Abstract
Understanding complicated emulsion microstructures from microscopic images helps to further elaborate their mechanisms and relevance. The classification and quantification of emulsion microstructure is a formidable goal that appears difficult to achieve; however, object detection algorithms from deep learning make it feasible. This paper reports a new technique for evaluating Pickering emulsion properties through classification and quantification of the emulsion microstructure by an object detection algorithm. The trained neural network models characterize the emulsion droplets by distinguishing between different individual emulsion droplets and morphological mechanisms across numerous microscopic images. The quantified results of the emulsion droplets presented in this study provide details of statistical changes at different concentrations of the Pickering interface and storage temperatures, enabling elucidation of the mechanisms involved. This methodology provides a new quantitative and statistical analysis of emulsion microstructure and properties.
Collapse
Affiliation(s)
- Zongyu Huang
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- Yang Ni
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- Qun Yu
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- Jinwei Li
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- Liuping Fan
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- N A Michael Eskin
- Department of Food and Human Nutritional Sciences, University of Manitoba, Winnipeg, Manitoba R3T 2N, Canada
33
Leong YS, Hasikin K, Lai KW, Mohd Zain N, Azizan MM. Microcalcification Discrimination in Mammography Using Deep Convolutional Neural Network: Towards Rapid and Early Breast Cancer Diagnosis. Front Public Health 2022; 10:875305. [PMID: 35570962 PMCID: PMC9096221 DOI: 10.3389/fpubh.2022.875305] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Accepted: 04/04/2022] [Indexed: 11/30/2022] Open
Abstract
Breast cancer is among the most common types of cancer in women, and when it is misdiagnosed or treatment is delayed, the mortality risk is high. Breast microcalcifications are common in breast cancer patients and are an effective early indicator of breast cancer. However, microcalcifications are often missed and wrongly classified during screening due to their small size and indirect scattering in mammogram images. Motivated by this issue, this project proposes an adaptive transfer-learning deep convolutional neural network for segmenting breast mammogram images with calcification cases, for early breast cancer diagnosis and intervention. Mammogram images of breast microcalcifications are used to train several deep neural network models, and their performance is compared. The region-of-interest images were filtered to remove possible artifacts and noise and enhance image quality before training. Different hyperparameters, such as epoch and batch size, were tuned to obtain the best possible result. In addition, the performance of the proposed fine-tuned ResNet50 is compared with other state-of-the-art networks such as ResNet34, VGG16, and AlexNet, using confusion matrices. The results show that the proposed ResNet50 achieves the highest accuracy at 97.58%, followed by ResNet34 at 97.35%, VGG16 at 96.97%, and AlexNet at 83.06%.
Collapse
Affiliation(s)
- Yew Sum Leong
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Department of Biomedical Engineering, Center for Image and Signal Processing (CISIP), Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Norita Mohd Zain
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Muhammad Mokhzaini Azizan
- Department of Electrical and Electronic Engineering, Faculty of Engineering and Built Environment, Universiti Sains Islam Malaysia, Nilai, Malaysia
34
Kalake L, Dong Y, Wan W, Hou L. Enhancing Detection Quality Rate with a Combined HOG and CNN for Real-Time Multiple Object Tracking across Non-Overlapping Multiple Cameras. SENSORS 2022; 22:s22062123. [PMID: 35336294 PMCID: PMC8949134 DOI: 10.3390/s22062123] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Revised: 03/02/2022] [Accepted: 03/05/2022] [Indexed: 12/04/2022]
Abstract
Multi-object tracking in video surveillance is subject to illumination variation, blurring, motion, and similarity variations during the identification process in real-world practice. Previously proposed applications have difficulty learning appearances and differentiating objects among sundry detections. They rely heavily on local features and tend to lose vital global structured features such as contour features, which contributes to their inability to accurately detect, classify, or distinguish fooling images. In this paper, we propose a paradigm aimed at eliminating these tracking difficulties by enhancing the detection quality rate through the combination of a convolutional neural network (CNN) and a histogram of oriented gradients (HOG) descriptor. We trained the algorithm with input images of size 120 × 32, cleaned and converted to binary to reduce the number of false positives. In testing, we eliminated the background from the frames and applied morphological operations and a Laplacian of Gaussian (LoG) mixture model after blob extraction. The images then underwent feature extraction and computation with the HOG descriptor to simplify the structural information of the objects in the captured video images. We stored the appearance features in an array and passed them into the CNN for further processing. We applied and evaluated our algorithm for real-time multiple object tracking on various city streets using the EPFL multi-camera pedestrian datasets. The experimental results illustrate that our proposed technique improves the detection rate and data associations, and our algorithm outperformed the online state-of-the-art approach, recording the highest precision and specificity rates.
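The HOG half of the HOG+CNN combination reduces each detection window to per-cell orientation histograms of gradient magnitude. A simplified NumPy sketch — no block normalization, with sizes chosen to mirror the 120 × 32 input; this is an illustration of the descriptor, not the authors' exact pipeline:

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of unsigned gradient orientation,
    weighted by gradient magnitude. Returns a flat feature vector."""
    gy, gx = np.gradient(img.astype(float))           # image gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned orientation
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

img = np.tile(np.arange(32, dtype=float), (120, 1))   # 120x32 toy crop, horizontal ramp
descriptor = hog_cells(img, cell=8, bins=9)           # 15*4 cells * 9 bins = 540 features
```

Because the toy image is a pure horizontal ramp, every pixel has gradient orientation 0°, so all the histogram mass lands in the first bin of each cell.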
Affiliation(s)
- Lesole Kalake
- School of Communications and Information Engineering, Institute of Smart City, Shanghai University, Shanghai 200444, China
- Correspondence: Tel.: +86-198-2121-4680
- Yanqiu Dong
- School of Communications and Information Engineering, Institute of Smart City, Shanghai University, Shanghai 200444, China
- Wanggen Wan
- School of Communications and Information Engineering, Institute of Smart City, Shanghai University, Shanghai 200444, China
- Li Hou
- School of Information Engineering, Huangshan University, Huangshan 245041, China
35. Muduli D, Dash R, Majhi B. Automated diagnosis of breast cancer using multi-modal datasets: A deep convolution neural network based approach. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.102825] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5]
36. Rangarajan AK, Ramachandran HK. A preliminary analysis of AI based smartphone application for diagnosis of COVID-19 using chest X-ray images. Expert Systems with Applications 2021; 183:115401. [PMID: 34149202 PMCID: PMC8196480 DOI: 10.1016/j.eswa.2021.115401] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7]
Abstract
The COVID-19 outbreak has catastrophically affected both public health system and world economy. Swift diagnosis of the positive cases will help in providing proper medical attention to the infected individuals and will also aid in effective tracing of their contacts to break the chain of transmission. Blending Artificial Intelligence (AI) with chest X-ray images and incorporating these models in a smartphone can be handy for the accelerated diagnosis of COVID-19. In this study, publicly available datasets of chest X-ray images have been utilized for training and testing of five pre-trained Convolutional Neural Network (CNN) models namely VGG16, MobileNetV2, Xception, NASNetMobile and InceptionResNetV2. Prior to the training of the selected models, the number of images in COVID-19 category has been increased employing traditional augmentation and Generative Adversarial Network (GAN). The performance of the five pre-trained CNN models utilizing the images generated with the two strategies has been compared. In the case of models trained using augmented images, Xception (98%) and MobileNetV2 (97.9%) turned out to be the ones with highest validation accuracy. Xception (98.1%) and VGG16 (98.6%) emerged as models with the highest validation accuracy in the models trained with synthetic GAN images. The best performing models have been further deployed in a smartphone and evaluated. The overall results suggest that VGG16 and Xception, trained with the synthetic images created using GAN, performed better compared to models trained with augmented images. Among these two models VGG16 produced an encouraging Diagnostic Odd Ratio (DOR) with higher positive likelihood and lower negative likelihood for the prediction of COVID-19.
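The abstract highlights VGG16's Diagnostic Odds Ratio (DOR) together with the positive and negative likelihood ratios but does not define them. A minimal sketch of the standard textbook definitions (the sensitivity/specificity values in the test are hypothetical, not the paper's):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    lr_pos = sensitivity / (1.0 - specificity)   # how much a positive test raises the odds
    lr_neg = (1.0 - sensitivity) / specificity   # how much a negative test lowers the odds
    return lr_pos, lr_neg

def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = LR+ / LR-; higher values indicate stronger overall discrimination."""
    lr_pos, lr_neg = likelihood_ratios(sensitivity, specificity)
    return lr_pos / lr_neg
```

For example, a classifier with sensitivity 0.98 and specificity 0.95 would have LR+ of 19.6, LR- of about 0.021, and a DOR of 931; an "encouraging DOR with higher positive likelihood and lower negative likelihood" corresponds to exactly this pattern.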
37. Detection of Bone Metastases on Bone Scans through Image Classification with Contrastive Learning. J Pers Med 2021; 11:jpm11121248. [PMID: 34945720 PMCID: PMC8708961 DOI: 10.3390/jpm11121248] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7]
Abstract
Patients with bone metastases have poor prognoses. A bone scan is a commonly applied diagnostic tool for this condition. However, its accuracy is limited by the nonspecific character of radiopharmaceutical accumulation, which indicates all-cause bone remodeling. The current study evaluated deep learning techniques to improve the efficacy of bone metastasis detection on bone scans, retrospectively examining 19,041 patients aged 22 to 92 years who underwent bone scans between May 2011 and December 2019. We developed several functional imaging binary classification deep learning algorithms suitable for bone scans. The presence or absence of bone metastases as a reference standard was determined through a review of image reports by nuclear medicine physicians. Classification was conducted with convolutional neural network-based (CNN-based), residual neural network (ResNet), and densely connected convolutional networks (DenseNet) models, with and without contrastive learning. Each set of bone scans contained anterior and posterior images with resolutions of 1024 × 256 pixels. A total of 37,427 image sets were analyzed. The overall performance of all models improved with contrastive learning. The accuracy, precision, recall, F1 score, area under the receiver operating characteristic curve, and negative predictive value (NPV) for the optimal model were 0.961, 0.878, 0.599, 0.712, 0.92 and 0.965, respectively. In particular, the high NPV may help physicians safely exclude bone metastases, decreasing physician workload, and improving patient care.
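The accuracy, precision, recall, F1 score, and negative predictive value reported for the optimal model all derive from the binary confusion matrix. A minimal sketch of those standard formulas (the counts in the test are illustrative, not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard classification metrics from binary confusion-matrix counts.

    Returns accuracy, precision, recall (sensitivity), F1 and negative
    predictive value (NPV), the quantities reported for the bone-scan models.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    npv = tn / (tn + fn)     # confidence that a negative call is truly negative
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "npv": npv}
```

The study's combination of high accuracy and NPV with modest recall is consistent with a heavily imbalanced dataset in which negatives dominate, which is why the authors emphasize NPV for safely excluding metastases.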
38. Busaleh M, Hussain M, Aboalsamh HA, Amin FE. Breast Mass Classification Using Diverse Contextual Information and Convolutional Neural Network. Biosensors 2021; 11:419. [PMID: 34821634 PMCID: PMC8615673 DOI: 10.3390/bios11110419] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7]
Abstract
Masses are one of the early signs of breast cancer, and the survival rate of women suffering from breast cancer can be improved if masses can be correctly identified as benign or malignant. However, their classification is challenging due to the similarity in texture patterns of both types of mass. The existing methods for this problem have low sensitivity and specificity. Based on the hypothesis that diverse contextual information of a mass region forms a strong indicator for discriminating benign and malignant masses and the idea of the ensemble classifier, we introduce a computer-aided system for this problem. The system uses multiple regions of interest (ROIs) encompassing a mass region for modeling diverse contextual information, a single ResNet-50 model (or its density-specific modification) as a backbone for local decisions, and stacking with SVM as a base model to predict the final decision. A data augmentation technique is introduced for fine-tuning the backbone model. The system was thoroughly evaluated on the benchmark CBIS-DDSM dataset using its provided data split protocol, and it achieved a sensitivity of 98.48% and a specificity of 92.31%. Furthermore, it was found that the system gives higher performance if it is trained and tested using the data from a specific breast density BI-RADS class. The system does not need to fine-tune/train multiple CNN models; it introduces diverse contextual information by multiple ROIs. The comparison shows that the method outperforms the state-of-the-art methods for classifying mass regions into benign and malignant. It will help radiologists reduce their burden and enhance their sensitivity in the prediction of malignant masses.
Affiliation(s)
- Mariam Busaleh
- Department of Computer Science, CCIS, King Saud University, Riyadh 11451, Saudi Arabia
- Muhammad Hussain
- Department of Computer Science, CCIS, King Saud University, Riyadh 11451, Saudi Arabia
- Hatim A. Aboalsamh
- Department of Computer Science, CCIS, King Saud University, Riyadh 11451, Saudi Arabia
- Fazal-e-Amin
- Department of Software Engineering, CCIS, King Saud University, Riyadh 11543, Saudi Arabia
39. Fenta HM, Zewotir T, Muluneh EK. A machine learning classifier approach for identifying the determinants of under-five child undernutrition in Ethiopian administrative zones. BMC Med Inform Decis Mak 2021; 21:291. [PMID: 34689769 PMCID: PMC8542294 DOI: 10.1186/s12911-021-01652-1] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3]
Abstract
Background Undernutrition is the main cause of child death in developing countries. This paper aimed to explore the efficacy of machine learning (ML) approaches in predicting under-five undernutrition in Ethiopian administrative zones and to identify the most important predictors.
Method The study employed ML techniques using retrospective cross-sectional survey data from Ethiopia, nationally representative data collected in 2000, 2005, 2011, and 2016. We explored six commonly used ML algorithms: logistic regression, Least Absolute Shrinkage and Selection Operator (L1-regularized logistic regression), L2 regularization (ridge), elastic net, neural network, and random forest (RF). Sensitivity, specificity, accuracy, and area under the curve were used to evaluate the performance of these models.
Results Based on the different performance evaluations, the RF algorithm was selected as the best ML model. In order of importance, urban-rural settlement, literacy rate of parents, and place of residence were the major determinants of disparities in nutritional status among under-five children across Ethiopian administrative zones.
Conclusion Our results showed that the considered machine learning classification algorithms can effectively predict under-five undernutrition status in Ethiopian administrative zones. Persistent under-five undernutrition was found in the northern part of Ethiopia. The identification of such high-risk zones could provide useful information to decision-makers trying to reduce child undernutrition.
Supplementary Information The online version contains supplementary material available at 10.1186/s12911-021-01652-1.
Affiliation(s)
- Haile Mekonnen Fenta
- Department of Statistics, College of Science, Bahir Dar University, Bahir Dar, Ethiopia
- Temesgen Zewotir
- School of Mathematics, Statistics and Computer Science, College of Agriculture Engineering and Science, University of KwaZulu-Natal, Durban, South Africa
- Essey Kebede Muluneh
- School of Public Health, College of Medicine and Health Sciences, Bahir Dar University, Bahir Dar, Ethiopia
40. Nica RE, Șerbănescu MS, Florescu LM, Camen GC, Streba CT, Gheonea IA. Deep Learning: a Promising Method for Histological Class Prediction of Breast Tumors in Mammography. J Digit Imaging 2021; 34:1190-1198. [PMID: 34505960 PMCID: PMC8554900 DOI: 10.1007/s10278-021-00508-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3]
Abstract
The objective of the study was to determine whether the pathology depicted on a mammogram is benign or malignant (ductal or non-ductal carcinoma) using deep learning and artificial intelligence techniques. A total of 559 patients underwent breast ultrasound, mammography, and ultrasound-guided breast biopsy. Based on the histopathological results, the patients were divided into three categories: benign, ductal carcinomas, and non-ductal carcinomas. The mammograms in the cranio-caudal view underwent pre-processing and segmentation. Given the large variability of the areola, an algorithm was used to remove it and the adjacent skin; therefore, patients with breast lesions close to the skin were excluded. The remaining breast image was resized on the Y axis to a square image and then resized to 512 × 512 pixels. A variable square of 322,622 pixels was searched inside every image to identify the lesion. Each image could be rotated with no information loss. For data augmentation, each image was rotated 360 times and a crop of 227 × 227 pixels was saved each time, resulting in a total of 201,240 images. The images were cropped to this size because the transfer-learning approach used the AlexNet network, whose input image size is 227 × 227. The mean accuracy was 95.8344% ± 6.3720% and the mean AUC 0.9910 ± 0.0366, computed over 100 runs of the algorithm. Based on these results, the proposed solution can be used as a non-invasive and highly accurate computer-aided system based on deep learning that can classify breast lesions based on changes identified on mammograms in the cranio-caudal view.
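The rotate-then-crop augmentation described above (512 × 512 input, 227 × 227 crop for AlexNet) can be sketched with NumPy alone. Note the simplification: the study rotates each image in 1-degree steps 360 times, while this dependency-free stand-in uses only the four lossless right-angle rotations provided by `np.rot90`:

```python
import numpy as np

def augment_rotations(image, crop=227):
    """Rotation-based augmentation sketch (simplified stand-in).

    Yields a central `crop` x `crop` patch from each of the four
    right-angle rotations of a square image; the paper instead uses
    360 one-degree rotations, which would need an interpolating rotate.
    """
    crops = []
    for k in range(4):
        rot = np.rot90(image, k)            # lossless 90-degree rotation
        h, w = rot.shape
        top, left = (h - crop) // 2, (w - crop) // 2
        crops.append(rot[top:top + crop, left:left + crop])
    return crops
```

Swapping `np.rot90` for an interpolating rotation (e.g. `scipy.ndimage.rotate`) over 360 angles reproduces the paper's 360-fold expansion of each image.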
Affiliation(s)
- Raluca-Elena Nica
- Doctoral School, University of Medicine and Pharmacy of Craiova, 2-4 Petru Rareș Street, 200349, Craiova, Romania
- Department of Radiology and Medical Imaging, Emergency Clinical County Hospital of Craiova, 1 Tabaci Street, 200642, Craiova, Romania
- Mircea-Sebastian Șerbănescu
- Department of Medical Informatics and Biostatistics, University of Medicine and Pharmacy of Craiova, 2-4 Petru Rareș Street, 200349, Craiova, Romania
- Lucian-Mihai Florescu
- Department of Radiology and Medical Imaging, Emergency Clinical County Hospital of Craiova, 1 Tabaci Street, 200642, Craiova, Romania
- Department of Radiology and Medical Imaging, University of Medicine and Pharmacy of Craiova, 2-4 Petru Rareș Street, 200349, Craiova, Romania
- Georgiana-Cristiana Camen
- Department of Radiology and Medical Imaging, Emergency Clinical County Hospital of Craiova, 1 Tabaci Street, 200642, Craiova, Romania
- Department of Radiology and Medical Imaging, University of Medicine and Pharmacy of Craiova, 2-4 Petru Rareș Street, 200349, Craiova, Romania
- Costin Teodor Streba
- Department of Research Methodology, University of Medicine and Pharmacy of Craiova, 2-4 Petru Rareș Street, 200349, Craiova, Romania
- Department of Pulmonology, University of Medicine and Pharmacy of Craiova, 2-4 Petru Rareș Street, 200349, Craiova, Romania
- Ioana-Andreea Gheonea
- Department of Radiology and Medical Imaging, Emergency Clinical County Hospital of Craiova, 1 Tabaci Street, 200642, Craiova, Romania
- Department of Radiology and Medical Imaging, University of Medicine and Pharmacy of Craiova, 2-4 Petru Rareș Street, 200349, Craiova, Romania
41. Yuan H, Fan Z, Wu Y, Cheng J. An efficient multi-path 3D convolutional neural network for false-positive reduction of pulmonary nodule detection. Int J Comput Assist Radiol Surg 2021; 16:2269-2277. [PMID: 34449037 DOI: 10.1007/s11548-021-02478-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3]
Abstract
PURPOSE Considering that false-positive and true pulmonary nodules are highly similar in shape and size in lung computed tomography scans, we develop and evaluate a false-positive nodule reduction method for computer-aided diagnosis systems. METHODS To improve pulmonary nodule diagnosis quality, a 3D convolutional neural network (CNN) model is constructed to effectively extract spatial information from candidate nodule features through a hierarchical architecture. Furthermore, three paths corresponding to three receptive field sizes are adopted and concatenated in the network model, so that feature information is fully extracted and fused to actively adapt to changes in shape, size, and contextual information between pulmonary nodules. In this way, false-positive reduction is well implemented in pulmonary nodule detection. RESULTS The multi-path 3D CNN, evaluated on the LUNA16 dataset, achieves an average competitive performance metric score of 0.881, with excellent sensitivities of 0.952 and 0.962 at 4 and 8 FP/scan, respectively. CONCLUSION By constructing a multi-path 3D CNN to fully extract candidate target features, the method accurately identifies pulmonary nodules of different sizes, shapes, and background information. In addition, the proposed general framework is also suitable for similar 3D medical image classification tasks.
Affiliation(s)
- Haiying Yuan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
- Zhongwei Fan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
- Yanrui Wu
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
- Junpeng Cheng
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
42. Tsai JY, Hung IYJ, Guo YL, Jan YK, Lin CY, Shih TTF, Chen BB, Lung CW. Lumbar Disc Herniation Automatic Detection in Magnetic Resonance Imaging Based on Deep Learning. Front Bioeng Biotechnol 2021; 9:708137. [PMID: 34490222 PMCID: PMC8416668 DOI: 10.3389/fbioe.2021.708137] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0]
Abstract
Background: Lumbar disc herniation (LDH) is among the most common causes of lower back pain and sciatica. The causes of LDH have not been fully elucidated but most likely involve a complex combination of mechanical and biological processes. Magnetic resonance imaging (MRI) is a tool most frequently used for LDH because it can show abnormal soft tissue areas around the spine. Deep learning models may be trained to recognize images with high speed and accuracy to diagnose LDH. Although the deep learning model requires huge numbers of image datasets to train and establish the best model, this study processed enhanced medical image features for training the small-scale deep learning dataset. Methods: We propose automatic detection to assist the initial LDH exam for lower back pain. The subjects were between 20 and 65 years old with at least 6 months of work experience. The deep learning method employed the YOLOv3 model to train and detect small object changes such as LDH on MRI. The dataset images were processed and combined with labeling and annotation from the radiologist's diagnosis record. Results: Our method proves the possibility of using deep learning with a small-scale dataset with limited medical images. The highest mean average precision (mAP) was 92.4% at 550 images with data augmentation (550-aug), and the YOLOv3 LDH training was 100% with the best average precision at 550-aug among all datasets. This study used data augmentation to prevent under- or overfitting in an object detection model that was trained with the small-scale dataset. Conclusions: The data augmentation technique plays a crucial role in YOLOv3 training and detection results. This method displays a high possibility for rapid initial tests and auto-detection for a limited clinical dataset.
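The mean average precision (mAP) reported for the YOLOv3 detector rests on intersection-over-union (IoU) matching: a predicted box counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (commonly 0.5). A minimal sketch of that computation for axis-aligned boxes (the coordinate convention `(x1, y1, x2, y2)` is an assumption, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    Detection metrics such as mAP count a prediction as a true positive
    when its IoU with a ground-truth box exceeds a chosen threshold.
    """
    # Overlap rectangle (empty if the boxes are disjoint).
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Average precision is then the area under the precision-recall curve built by ranking detections by confidence and applying this IoU test; mAP averages it over classes.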
Affiliation(s)
- Jen-Yung Tsai
- Department of Digital Media Design, Asia University, Taichung, Taiwan
- Isabella Yu-Ju Hung
- Department of Nursing, Chung Hwa University of Medical Technology, Tainan, Taiwan
- Yue Leon Guo
- Environmental and Occupational Medicine, College of Medicine, National Taiwan University (NTU) and NTU Hospital, Taipei, Taiwan
- Graduate Institute of Environmental and Occupational Health Sciences, College of Public Health, National Taiwan University, Taipei, Taiwan
- National Institute of Environmental Health Sciences, National Health Research Institutes, Miaoli, Taiwan
- Yih-Kuen Jan
- Rehabilitation Engineering Lab, Department of Kinesiology and Community Health, University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Chih-Yang Lin
- Department of Electrical Engineering, Yuan Ze University, Chung-Li, Taiwan
- Tiffany Ting-Fang Shih
- Department of Medical Imaging and Radiology, National Taiwan University (NTU) Hospital and NTU College of Medicine, Taipei, Taiwan
- Bang-Bin Chen
- Department of Medical Imaging and Radiology, National Taiwan University (NTU) Hospital and NTU College of Medicine, Taipei, Taiwan
- Chi-Wen Lung
- Rehabilitation Engineering Lab, Department of Kinesiology and Community Health, University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Department of Creative Product Design, Asia University, Taichung, Taiwan
43. Lei YM, Yin M, Yu MH, Yu J, Zeng SE, Lv WZ, Li J, Ye HR, Cui XW, Dietrich CF. Artificial Intelligence in Medical Imaging of the Breast. Front Oncol 2021; 11:600557. [PMID: 34367938 PMCID: PMC8339920 DOI: 10.3389/fonc.2021.600557] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0]
Abstract
Artificial intelligence (AI) has invaded our daily lives, and in the last decade, there have been very promising applications of AI in the field of medicine, including medical imaging, in vitro diagnosis, intelligent rehabilitation, and prognosis. Breast cancer is one of the common malignant tumors in women and seriously threatens women’s physical and mental health. Early screening for breast cancer via mammography, ultrasound and magnetic resonance imaging (MRI) can significantly improve the prognosis of patients. AI has shown excellent performance in image recognition tasks and has been widely studied in breast cancer screening. This paper introduces the background of AI and its application in breast medical imaging (mammography, ultrasound and MRI), such as in the identification, segmentation and classification of lesions; breast density assessment; and breast cancer risk assessment. In addition, we also discuss the challenges and future perspectives of the application of AI in medical imaging of the breast.
Affiliation(s)
- Yu-Meng Lei
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Miao Yin
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Mei-Hui Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Jing Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Shu-E Zeng
- Department of Medical Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wen-Zhi Lv
- Department of Artificial Intelligence, Julei Technology, Wuhan, China
- Jun Li
- Department of Medical Ultrasound, The First Affiliated Hospital of Medical College, Shihezi University, Xinjiang, China
- Hua-Rong Ye
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Christoph F Dietrich
- Department Allgemeine Innere Medizin (DAIM), Kliniken Beau Site, Salem und Permanence, Bern, Switzerland
44. Badawy SM, Mohamed AENA, Hefnawy AA, Zidan HE, GadAllah MT, El-Banby GM. Classification of Breast Ultrasound Images Based on Convolutional Neural Networks - A Comparative Study. 2021 International Telecommunications Conference (ITC-Egypt). [DOI: 10.1109/itc-egypt52936.2021.9513972] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3]
45. Automatic Diagnosis of Coronary Artery Disease in SPECT Myocardial Perfusion Imaging Employing Deep Learning. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11146362] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0]
Abstract
Focusing on coronary artery disease (CAD) patients, this research paper addresses the problem of automatic diagnosis of ischemia or infarction using single-photon emission computed tomography (SPECT) (Siemens Symbia S Series) myocardial perfusion imaging (MPI) scans and investigates the capabilities of deep learning and convolutional neural networks. Considering the wide applicability of deep learning in medical image classification, a robust CNN model whose architecture was previously determined in nuclear image analysis is introduced to recognize myocardial perfusion images by extracting the insightful features of an image and using them to classify it correctly. In addition, a deep learning classification approach using transfer learning is implemented to classify cardiovascular images as normal or abnormal (ischemia or infarction) from SPECT MPI scans. The present work is differentiated from other studies in nuclear cardiology in that it utilizes SPECT MPI images. To address the two-class classification problem of CAD diagnosis with adequate accuracy, simple, fast, and efficient CNN architectures were built through a CNN exploration process and then employed to identify the category of CAD diagnosis, demonstrating their generalization capabilities. The results revealed that the applied methods are sufficiently accurate and able to differentiate infarction or ischemia from healthy patients (overall classification accuracy = 93.47% ± 2.81%, AUC score = 0.936). To strengthen the findings of this study, the proposed deep learning approaches were compared with other popular state-of-the-art CNN architectures on the same dataset. The prediction results show the efficacy of the new deep learning architecture applied to CAD diagnosis using SPECT MPI scans over existing approaches in nuclear medicine.
46. Reid S, Schousboe JT, Kimelman D, Monchka BA, Jafari Jozani M, Leslie WD. Machine learning for automated abdominal aortic calcification scoring of DXA vertebral fracture assessment images: A pilot study. Bone 2021; 148:115943. [PMID: 33836309 DOI: 10.1016/j.bone.2021.115943] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3]
Abstract
BACKGROUND Abdominal aortic calcification (AAC) identified on dual-energy x-ray absorptiometry (DXA) vertebral fracture assessment (VFA) lateral spine images is predictive of cardiovascular outcomes, but is time-consuming to perform manually. Whether this procedure can be automated using convolutional neural networks (CNNs), a class of machine learning algorithms used for image processing, has not been widely investigated. METHODS Using the Province of Manitoba Bone Density Program DXA database, we selected a random sample of 1100 VFA images from individuals qualifying for VFA as part of their osteoporosis assessment. For each scan, AAC was manually scored using the 24-point semi-quantitative scale and categorized as low (score < 2), moderate (score 2 to <6), or high (score ≥ 6). An ensemble consisting of two CNNs was developed, by training and evaluating separately on single-energy and dual-energy images. AAC prediction was performed using the mean AAC score of the two models. RESULTS Mean (SD) age of the cohort was 75.5 (6.7) years, 95.5% were female. Training (N = 770, 70%), validation (N = 110, 10%) and test sets (N = 220, 20%) were well-balanced with respect to baseline characteristics and AAC scores. For the test set, the Pearson correlation between the CNN-predicted and human-labelled scores was 0.93 with intraclass correlation coefficient for absolute agreement 0.91 (95% CI 0.89-0.93). Kappa for AAC category agreement (prevalence- and bias-adjusted, ordinal scale) was 0.71 (95% CI 0.65-0.78). There was complete separation of the low and high categories, without any low AAC score scans predicted to be high and vice versa. CONCLUSIONS CNNs are capable of detecting AAC in VFA images, with high correlation between the human and predicted scores. These preliminary results suggest CNNs are a promising method for automatically detecting and quantifying AAC.
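The agreement statistics above (kappa 0.71 for AAC category agreement) are computed from the cross-tabulation of human-labelled versus CNN-predicted categories. A minimal sketch of plain Cohen's kappa (note the paper reports a prevalence- and bias-adjusted, ordinal-weighted variant; the unweighted statistic shown here is the base case, and the matrices in the test are illustrative):

```python
def cohens_kappa(confusion):
    """Cohen's kappa for an n x n rater-agreement (confusion) matrix.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    (diagonal mass) and p_e the agreement expected by chance from the
    row/column marginals.
    """
    n = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / n
    row_marg = [sum(r) for r in confusion]
    col_marg = [sum(c) for c in zip(*confusion)]
    p_e = sum(row_marg[i] * col_marg[i] for i in range(len(confusion))) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For the three AAC categories (low/moderate/high) the matrix would be 3 x 3; the study's observation that no low-scored scan was predicted high (and vice versa) corresponds to zero counts in the two extreme off-diagonal corners.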
Affiliation(s)
- John T Schousboe
- Park Nicollet Clinic and HealthPartners Institute, Bloomington, MN, USA; University of Minnesota, Minneapolis, MN, USA
- Douglas Kimelman
- University of Manitoba, Winnipeg, Canada; St. Boniface Hospital Albrechtsen Research Centre, Winnipeg, Manitoba, Canada
47. Zhou L, Yin X, Zhang T, Feng Y, Zhao Y, Jin M, Peng M, Xing C, Li F, Wang Z, Wei G, Jia X, Liu Y, Wu X, Lu L. Detection and Semiquantitative Analysis of Cardiomegaly, Pneumothorax, and Pleural Effusion on Chest Radiographs. Radiol Artif Intell 2021; 3:e200172. [PMID: 34350406 PMCID: PMC8328111 DOI: 10.1148/ryai.2021200172] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0]
Abstract
PURPOSE To develop and evaluate deep learning models for the detection and semiquantitative analysis of cardiomegaly, pneumothorax, and pleural effusion on chest radiographs. MATERIALS AND METHODS In this retrospective study, models were trained for lesion detection or for lung segmentation. The first dataset for lesion detection consisted of 2838 chest radiographs from 2638 patients (obtained between November 2018 and January 2020) containing findings positive for cardiomegaly, pneumothorax, and pleural effusion that were used in developing Mask region-based convolutional neural networks plus Point-based Rendering models. Separate detection models were trained for each disease. The second dataset was from two public datasets, which included 704 chest radiographs for training and testing a U-Net for lung segmentation. Based on accurate detection and segmentation, semiquantitative indexes were calculated for cardiomegaly (cardiothoracic ratio), pneumothorax (lung compression degree), and pleural effusion (grade of pleural effusion). Detection performance was evaluated by average precision (AP) and free-response receiver operating characteristic (FROC) curve score with the intersection over union greater than 75% (AP75; FROC score75). Segmentation performance was evaluated by Dice similarity coefficient. RESULTS The detection models achieved high accuracy for detecting cardiomegaly (AP75, 98.0%; FROC score75, 0.985), pneumothorax (AP75, 71.2%; FROC score75, 0.728), and pleural effusion (AP75, 78.2%; FROC score75, 0.802), and they also weakened boundary aliasing. The segmentation effect of the lung field (Dice, 0.960), cardiomegaly (Dice, 0.935), pneumothorax (Dice, 0.827), and pleural effusion (Dice, 0.826) was good, which provided important support for semiquantitative analysis. 
CONCLUSION The developed models could detect cardiomegaly, pneumothorax, and pleural effusion, and semiquantitative indexes could be calculated from the segmentations.
Keywords: Computer-Aided Diagnosis (CAD), Thorax, Cardiac
Supplemental material is available for this article. © RSNA, 2021.
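The cardiothoracic ratio index named in the abstract can be sketched from binary segmentation masks. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and it assumes the conventional definition of the ratio as maximal horizontal cardiac width divided by maximal horizontal thoracic width.

```python
import numpy as np

def max_horizontal_width(mask: np.ndarray) -> int:
    # Widest horizontal extent (in pixels) of a 2D binary mask.
    cols = np.where(mask.any(axis=0))[0]
    return int(cols[-1] - cols[0] + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    # CTR = maximal cardiac width / maximal thoracic (lung-field) width.
    thoracic = max_horizontal_width(lung_mask)
    cardiac = max_horizontal_width(heart_mask)
    return cardiac / thoracic if thoracic else float("nan")
```

On a radiograph segmented by the detection and lung-segmentation models, a CTR above roughly 0.5 is the usual screening threshold for cardiomegaly.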
Affiliation(s)
- Leilei Zhou, Xindao Yin, Tao Zhang, Yuan Feng, Ying Zhao, Mingxu Jin, Mingyang Peng, Chunhua Xing, Fengfang Li, Xinying Wu, Lingquan Lu
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing 210006, China (L.Z., X.Y., T.Z., Y.F., Y.Z., M.J., M.P., C.X., F.L., X.W., L.L.)
- Ziteng Wang, Guoliang Wei, Xiao Jia, Yujun Liu
- Yizhun Medical AI Co., Ltd., Beijing, China (Z.W., G.W., X.J., Y.L.)
|
48
|
Bamba Y, Ogawa S, Itabashi M, Shindo H, Kameoka S, Okamoto T, Yamamoto M. Object and anatomical feature recognition in surgical video images based on a convolutional neural network. Int J Comput Assist Radiol Surg 2021; 16:2045-2054. [PMID: 34169465 PMCID: PMC8224261 DOI: 10.1007/s11548-021-02434-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Accepted: 06/17/2021] [Indexed: 12/14/2022]
Abstract
Purpose Artificial intelligence-enabled techniques can process large amounts of surgical data and may be used for clinical decision support to recognize or forecast adverse events in an actual intraoperative scenario. To develop an image-guided navigation technology that could support surgical education, we explored the performance of a convolutional neural network (CNN)-based computer vision system in detecting intraoperative objects. Methods The surgical videos used for annotation were recorded during surgeries performed in the Department of Surgery of Tokyo Women’s Medical University from 2019 to 2020. Abdominal endoscopic images were cut out from manually captured surgical videos. An open-source CNN programming framework, accessed through IBM Visual Insights, was used to design a model that could recognize and segment objects in real time. The model was used to detect the GI tract, blood, vessels, uterus, forceps, ports, gauze and clips in the surgical images. Results The accuracy, precision and recall of the model were 83%, 80% and 92%, respectively. The mean average precision (mAP), computed as the mean of the precision for each object, was 91%. Among surgical tools, forceps achieved the highest recall and precision, at 96.3% and 97.9%, respectively. Among anatomical structures, the GI tract achieved the highest recall and precision, at 92.9% and 91.3%, respectively. Conclusion The proposed model could detect objects in operative images with high accuracy, highlighting the possibility of using AI-based object recognition techniques for intraoperative navigation. Real-time object recognition will play a major role in navigation surgery and surgical education. Supplementary Information The online version contains supplementary material available at 10.1007/s11548-021-02434-w.
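The metrics quoted in this abstract follow the standard per-class definitions, with mAP taken as the mean of the per-object precisions. A minimal sketch of those formulas (function names are illustrative, not from the paper):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    # Precision = TP / (TP + FP); recall = TP / (TP + FN).
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def mean_average_precision(per_class_precision: dict[str, float]) -> float:
    # mAP as defined in the abstract: the mean of the precision for each object class.
    return sum(per_class_precision.values()) / len(per_class_precision)
```

For example, averaging the reported forceps precision (97.9%) with the GI-tract precision (91.3%) over just those two classes would give a two-class mAP of 94.6%.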
Affiliation(s)
- Yoshiko Bamba, Shimpei Ogawa, Michio Itabashi, Masakazu Yamamoto
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
- Takahiro Okamoto
- Department of Breast Endocrinology Surgery, Tokyo Women's Medical University, Tokyo, Japan
|
49
|
Development of machine learning model for diagnostic disease prediction based on laboratory tests. Sci Rep 2021; 11:7567. [PMID: 33828178 PMCID: PMC8026627 DOI: 10.1038/s41598-021-87171-5] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 03/19/2021] [Indexed: 01/16/2023] Open
Abstract
The use of deep learning and machine learning (ML) in medical science is increasing, particularly in the visual, audio, and language data fields. We aimed to build a new optimized ensemble model by blending a deep neural network (DNN) model with two ML models for disease prediction using laboratory test results. Eighty-six attributes (laboratory tests) were selected from the datasets based on value counts, clinically important features, and missing values. We collected sample datasets on 5145 cases, comprising 326,686 laboratory test results, and investigated a total of 39 specific diseases based on International Classification of Diseases, 10th revision (ICD-10) codes. These datasets were used to construct light gradient boosting machine (LightGBM) and extreme gradient boosting (XGBoost) ML models and a DNN model using TensorFlow. The optimized ensemble model achieved an F1-score of 81% and a prediction accuracy of 92% for the five most common diseases. The deep learning and ML models showed differences in predictive power and disease classification patterns. We used a confusion matrix and analyzed feature importance using the SHAP value method. Our new ML model achieved highly efficient disease prediction through classification of diseases. This study will be useful in the prediction and diagnosis of diseases.
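Blending a DNN with gradient-boosted models, as described above, is commonly done by a weighted soft vote over the models' predicted class probabilities. The sketch below assumes that scheme; the exact blending used in the paper is not specified here, and the function names are hypothetical.

```python
import numpy as np

def blend_probabilities(prob_list, weights=None):
    # Weighted soft vote over per-model probability arrays of shape (n_samples, n_classes).
    probs = np.stack(prob_list)               # (n_models, n_samples, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalize weights to sum to 1
    return np.tensordot(w, probs, axes=1)     # contract over the model axis -> (n_samples, n_classes)

def predict_classes(blended: np.ndarray) -> np.ndarray:
    # Final prediction: the class with the highest blended probability.
    return blended.argmax(axis=1)
```

With equal weights this reduces to simple probability averaging; weights could instead be tuned on a validation split to favor the stronger of the DNN, LightGBM, and XGBoost components.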
|
50
|
Kumar R, Khan FU, Sharma A, Aziz IB, Poddar NK. Recent Applications of Artificial Intelligence in detection of Gastrointestinal, Hepatic and Pancreatic Diseases. Curr Med Chem 2021; 29:66-85. [PMID: 33820515 DOI: 10.2174/0929867328666210405114938] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 02/25/2021] [Accepted: 03/06/2021] [Indexed: 11/22/2022]
Abstract
There has been substantial progress in artificial intelligence (AI) algorithms and their applications in the medical sciences over the last two decades. AI-assisted programs have already been established for remote health monitoring using sensors and smartphones. A variety of AI-based prediction models are available for gastrointestinal inflammatory and non-malignant diseases and bowel bleeding using wireless capsule endoscopy, for hepatitis-associated fibrosis using electronic medical records, and for pancreatic carcinoma using endoscopic ultrasound. AI-based models may be of immense help to healthcare professionals in identification, analysis, and decision support using endoscopic images to establish prognosis and assess patient risk from multiple factors, although sufficient randomized clinical trials are warranted to establish the efficacy of AI-assisted versus non-AI-based treatments before such techniques are approved by medical regulatory authorities. In this article, available AI approaches and AI-based prediction models for detecting gastrointestinal, hepatic, and pancreatic diseases are reviewed, and the limitations of AI techniques in disease prognosis, risk assessment, and decision support are discussed.
Affiliation(s)
- Rajnish Kumar
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Uttar Pradesh, India
- Farhat Ullah Khan, Izzatdin Ba Aziz
- Computer and Information Sciences Department, Universiti Teknologi Petronas, 32610, Seri Iskander, Perak, Malaysia
- Anju Sharma
- Department of Applied Science, Indian Institute of Information Technology, Allahabad, Uttar Pradesh, India
|