1
Caratsch L, Lechtenboehmer C, Caorsi M, Oung K, Zanchi F, Aleman Y, Walker UA, Omoumi P, Hügle T. Detection and Grading of Radiographic Hand Osteoarthritis Using an Automated Machine Learning Platform. ACR Open Rheumatol 2024;6:388-395. PMID: 38576187; PMCID: PMC11168904; DOI: 10.1002/acr2.11665.
Abstract
OBJECTIVE Automated machine learning (autoML) platforms allow health care professionals to play an active role in the development of machine learning (ML) algorithms according to scientific or clinical needs. The aim of this study was to develop and evaluate such a model for automated detection and grading of distal hand osteoarthritis (OA). METHODS A total of 13,690 hand radiographs from 2,863 patients within the Swiss Cohort of Quality Management (SCQM) and an external control data set of 346 non-SCQM patients were collected and scored for distal interphalangeal OA (DIP-OA) using the modified Kellgren/Lawrence (K/L) score. Giotto (Learn to Forecast [L2F]) was used as an autoML platform for training two convolutional neural networks for DIP joint extraction and subsequent classification according to the K/L scores. A total of 48,892 DIP joints were extracted and then used to train the classification model. Heatmaps were generated independently of the platform. User experience of a web application as a provisional user interface was investigated by rheumatologists and radiologists. RESULTS The sensitivity and specificity of this model for detecting DIP-OA were 79% and 86%, respectively. The accuracy for grading the correct K/L score was 75%, with a κ score of 0.76. The accuracy per DIP-OA class differed, with 86% for no OA (defined as K/L scores 0 and 1), 71% for a K/L score of 2, 46% for a K/L score of 3, and 67% for a K/L score of 4. Similar values were obtained in an independent external test set. Qualitative and quantitative user experience testing of the web application revealed a moderate to high demand for automated DIP-OA scoring among rheumatologists. Conversely, radiologists expressed a low demand, except for the use of heatmaps. CONCLUSION AutoML platforms are an opportunity to develop clinical end-to-end ML algorithms. 
Here, automated radiographic DIP-OA detection is both feasible and usable, whereas grading among individual K/L scores (eg, for clinical trials) remains challenging.
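The detection and grading figures quoted in this abstract follow directly from a confusion matrix. As a minimal illustration (the counts below are made up, not the SCQM data), sensitivity, specificity, and Cohen's kappa can be computed as:

```python
def sensitivity(tp, fn):
    # True-positive rate: the share of truly affected joints flagged as OA.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: the share of unaffected joints correctly cleared.
    return tn / (tn + fp)

def cohen_kappa(confusion):
    """Chance-corrected agreement for a k-class grading task.

    confusion[i][j] = number of joints with true grade i predicted as grade j.
    """
    k = len(confusion)
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(k)) / total
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    expected = sum(row_tot[i] * col_tot[i] for i in range(k)) / total ** 2
    return (observed - expected) / (1 - expected)

# Illustrative counts only (not the study's data):
print(sensitivity(tp=79, fn=21))                   # 0.79
print(specificity(tn=86, fp=14))                   # 0.86
print(round(cohen_kappa([[45, 5], [10, 40]]), 3))  # 0.7
```

Kappa corrects raw accuracy for agreement expected by chance, which is why the paper reports it alongside per-class accuracy for the K/L grades.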
Affiliation(s)
- Leo Caratsch
- Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- City Hospital Waid, Zurich, Switzerland
- L2F (Learn to Forecast), Lausanne, Switzerland
- Christian Lechtenboehmer
- Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- City Hospital Waid, Zurich, Switzerland
- L2F (Learn to Forecast), Lausanne, Switzerland
- University Hospital of Basel, Basel, Switzerland
- Karine Oung
- Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Fabio Zanchi
- Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Yasser Aleman
- Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Patrick Omoumi
- Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Thomas Hügle
- Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
2
Elangovan K, Lim G, Ting D. A comparative study of an on premise AutoML solution for medical image classification. Sci Rep 2024;14:10483. PMID: 38714764; PMCID: PMC11076477; DOI: 10.1038/s41598-024-60429-4.
Abstract
Automated machine learning (AutoML) allows for the simplified application of machine learning to real-world problems, by the implicit handling of necessary steps such as data pre-processing, feature engineering, model selection and hyperparameter optimization. This has encouraged its use in medical applications such as imaging. However, the impact of common parameter choices, such as the number of trials allowed and the resolution of the input images, has not been comprehensively explored in the existing literature. We therefore benchmark AutoKeras (AK), an open-source AutoML framework, against several bespoke deep learning architectures, on five public medical datasets representing a wide range of imaging modalities. We found that AK generally outperformed the bespoke models, although at the cost of increased training time. Moreover, our experiments suggest that a large number of trials and higher resolutions may not be necessary for optimal performance to be achieved.
Affiliation(s)
- Kabilan Elangovan
- Artificial Intelligence and Digital Health Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Artificial Intelligence Office, Singapore Health Service, Singapore, Singapore
- Gilbert Lim
- Artificial Intelligence and Digital Health Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Artificial Intelligence Office, Singapore Health Service, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Daniel Ting
- Artificial Intelligence and Digital Health Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Artificial Intelligence Office, Singapore Health Service, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore National Eye Centre, Singapore General Hospital, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Byers Eye Institute, Stanford University, Stanford, USA
3
Guo Y, Zhang H, Yuan L, Chen W, Zhao H, Yu QQ, Shi W. Machine learning and new insights for breast cancer diagnosis. J Int Med Res 2024;52:3000605241237867. PMID: 38663911; PMCID: PMC11047257; DOI: 10.1177/03000605241237867.
Abstract
Breast cancer (BC) is the most prominent form of cancer among females all over the world. Current methods of BC detection include X-ray mammography, ultrasound, computed tomography, magnetic resonance imaging, positron emission tomography and breast thermographic techniques. More recently, machine learning (ML) tools have been increasingly employed in diagnostic medicine for their high efficiency in detection and intervention. The subsequent imaging features and mathematical analyses can then be used to generate ML models, which stratify, differentiate and detect benign and malignant breast lesions. Given its marked advantages, radiomics is a frequently used tool in recent research and clinics. Artificial neural networks and deep learning (DL) are novel forms of ML that evaluate data using computer simulation of the human brain. DL directly processes unstructured information, such as images, sounds and language, and performs precise clinical image stratification, medical record analyses and tumour diagnosis. Herein, this review thoroughly summarizes prior investigations on the application of medical images for the detection and intervention of BC using radiomics, DL and ML. The aim was to provide guidance to scientists regarding the use of artificial intelligence and ML in research and the clinic.
Affiliation(s)
- Ya Guo
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Heng Zhang
- Department of Laboratory Medicine, Shandong Daizhuang Hospital, Jining, Shandong Province, China
- Leilei Yuan
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Weidong Chen
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Haibo Zhao
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Qing-Qing Yu
- Phase I Clinical Research Centre, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Wenjie Shi
- Molecular and Experimental Surgery, University Clinic for General-, Visceral-, Vascular- and Transplantation Surgery, Medical Faculty University Hospital Magdeburg, Otto von Guericke University, Magdeburg, Germany
4
Han J, Hua H, Fei J, Liu J, Guo Y, Ma W, Chen J. Prediction of Disease-Free Survival in Breast Cancer using Deep Learning with Ultrasound and Mammography: A Multicenter Study. Clin Breast Cancer 2024;24:215-226. PMID: 38281863; DOI: 10.1016/j.clbc.2024.01.005.
Abstract
BACKGROUND Breast cancer is a leading cause of cancer morbidity and mortality in women. The possibility of overtreatment or inappropriate treatment exists, and methods for evaluating prognosis need to be improved. MATERIALS AND METHODS Patients (from January 2013 to December 2018) were recruited and divided into a training group and a testing group. All patients were followed for more than 3 years and were divided into a disease-free group and a recurrence group based on follow-up results at 3 years. Ultrasound (US) and mammography (MG) images were collected to establish deep learning models (DLMs) using ResNet50. Clinical data and MG and US characteristics were collected, and independent prognostic factors were selected using a Cox proportional hazards model to establish a clinical model. The DLM and independent prognostic factors were combined to establish a combined model. RESULTS In total, 1242 patients were included. Independent prognostic factors included age, neoadjuvant chemotherapy, HER2, orientation, blood flow, dubious calcification, and size. We established 5 models: the US DLM, MG DLM, US + MG DLM, clinical, and combined models. The combined model using US images, MG images, and pathological, clinical, and radiographic characteristics had the highest predictive performance (AUC = 0.882 in the training group, AUC = 0.739 in the testing group). CONCLUSION DLMs based on the combination of US, MG, and clinical data have potential as predictive tools for breast cancer prognosis.
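The AUCs reported in this abstract measure discrimination: the probability that a randomly chosen recurrence case is assigned a higher predicted risk than a randomly chosen disease-free case. A minimal rank-based sketch of that interpretation, using made-up scores rather than the study's predictions:

```python
def auc(pos_scores, neg_scores):
    # Probability that a random positive outranks a random negative; ties
    # count as half. Equivalent to the Mann-Whitney U statistic / (n_pos * n_neg).
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Predicted risks for 3 recurrence and 3 disease-free patients (illustrative):
print(round(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]), 3))  # 0.889
```

An AUC of 0.5 corresponds to random ranking, which is why the drop from 0.882 (training) to 0.739 (testing) indicates some overfitting but still useful discrimination.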
Affiliation(s)
- Junqi Han
- Department of Breast Imaging, The Affiliated Hospital of Qingdao University, Qingdao, People's Republic of China
- Hui Hua
- Department of Thyroid Surgery, The Affiliated Hospital of Qingdao University, Qingdao, People's Republic of China
- Jie Fei
- Department of Breast Imaging, The Affiliated Hospital of Qingdao University, Qingdao, People's Republic of China
- Jingjing Liu
- Department of Breast Imaging, The Affiliated Hospital of Qingdao University, Qingdao, People's Republic of China
- Yijun Guo
- Department of Breast Imaging Diagnosis, National Clinical Research Center for Cancer, Tianjin Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Ministry of Education, Tianjin Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, Tianjin, People's Republic of China
- Wenjuan Ma
- Department of Breast Imaging Diagnosis, National Clinical Research Center for Cancer, Tianjin Clinical Research Center for Cancer, Key Laboratory of Breast Cancer Prevention and Therapy, Ministry of Education, Tianjin Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, Tianjin, People's Republic of China
- Jingjing Chen
- Department of Breast Imaging, The Affiliated Hospital of Qingdao University, Qingdao, People's Republic of China
5
Wong CYT, O'Byrne C, Taribagil P, Liu T, Antaki F, Keane PA. Comparing code-free and bespoke deep learning approaches in ophthalmology. Graefes Arch Clin Exp Ophthalmol 2024 (epub ahead of print). PMID: 38446200; DOI: 10.1007/s00417-024-06432-x.
Abstract
AIM Code-free deep learning (CFDL) allows clinicians without coding expertise to build high-quality artificial intelligence (AI) models without writing code. In this review, we comprehensively review the advantages that CFDL offers over bespoke expert-designed deep learning (DL). As exemplars, we use the following tasks: (1) diabetic retinopathy screening, (2) retinal multi-disease classification, (3) surgical video classification, (4) oculomics and (5) resource management. METHODS We performed a search for studies reporting CFDL applications in ophthalmology in MEDLINE (through PubMed) from inception to June 25, 2023, using the keywords 'autoML' AND 'ophthalmology'. After identifying 5 CFDL studies looking at our target tasks, we performed a subsequent search to find corresponding bespoke DL studies focused on the same tasks. Only English-language articles with full text available were included. Reviews, editorials, protocols and case reports or case series were excluded. We identified ten relevant studies for this review. RESULTS Overall, studies were optimistic about CFDL's advantages over bespoke DL in the five ophthalmological tasks. However, much of this discussion was mono-dimensional and left wide applicability gaps. A high-quality assessment of whether CFDL is more applicable than bespoke DL warrants a context-specific, weighted assessment of clinician intent, patient acceptance and cost-effectiveness. We conclude that CFDL and bespoke DL each have unique strengths and cannot replace each other; their benefits are weighed differently on a case-by-case basis. Future studies are warranted to perform a multidimensional analysis of both techniques and to address the limitations of suboptimal dataset quality, poor applicability implications and non-regulated study designs. CONCLUSION For clinicians without DL expertise and easy access to AI experts, CFDL allows the prototyping of novel clinical AI systems. CFDL models complement bespoke models, depending on the task at hand. A multidimensional, weighted evaluation of the factors involved in implementing those models for a designated task is warranted.
Affiliation(s)
- Carolyn Yu Tung Wong
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Ciara O'Byrne
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Priyal Taribagil
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Timing Liu
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Fares Antaki
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- The CHUM School of Artificial Intelligence in Healthcare, Montreal, QC, Canada
- Pearse Andrew Keane
- Institute of Ophthalmology, University College London, 11-43 Bath St, London, EC1V 9EL, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- NIHR Moorfields Biomedical Research Centre, London, UK
6
Wang HY, Lin WY, Zhou C, Yang ZA, Kalpana S, Lebowitz MS. Integrating Artificial Intelligence for Advancing Multiple-Cancer Early Detection via Serum Biomarkers: A Narrative Review. Cancers (Basel) 2024;16:862. PMID: 38473224; DOI: 10.3390/cancers16050862.
Abstract
The concept and policies of multicancer early detection (MCED) have gained significant attention from governments worldwide in recent years. In the era of burgeoning artificial intelligence (AI) technology, the integration of MCED with AI has become a prevailing trend, giving rise to a plethora of MCED AI products. However, due to the heterogeneity of both the detection targets and the AI technologies, the overall diversity of MCED AI products remains considerable. The types of detection targets encompass protein biomarkers, cell-free DNA, or combinations of these biomarkers. In the development of AI models, different model training approaches are employed, including datasets of case-control studies or real-world cancer screening datasets. Various validation techniques, such as cross-validation, location-wise validation, and time-wise validation, are used. All of these factors show significant impacts on the predictive efficacy of MCED AIs. After the completion of AI model development, deploying the MCED AIs in clinical practice presents numerous challenges, including presenting the predictive reports, identifying the potential locations and types of tumors, and addressing cancer-related information, such as clinical follow-up and treatment. This study reviews several mature MCED AI products currently available on the market, examining their composing factors across serum biomarker detection, MCED AI training/validation, and clinical application. This review illuminates the challenges encountered by existing MCED AI products across these stages, offering insights into the continued development of, and obstacles within, the field of MCED AI.
Affiliation(s)
- Hsin-Yao Wang
- Department of Laboratory Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 33343, Taiwan
- School of Medicine, National Tsing Hua University, Hsinchu 300044, Taiwan
- 20/20 GeneSystems, Gaithersburg, MD 20877, USA
- Wan-Ying Lin
- Department of Laboratory Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 33343, Taiwan
- Zih-Ang Yang
- Department of Laboratory Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 33343, Taiwan
- Sriram Kalpana
- Department of Laboratory Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 33343, Taiwan
7
Truong B, Zapala M, Kammen B, Luu K. Automated Detection of Pediatric Foreign Body Aspiration from Chest X-rays Using Machine Learning. Laryngoscope 2024 (epub ahead of print). PMID: 38366768; DOI: 10.1002/lary.31338.
Abstract
OBJECTIVE/HYPOTHESIS Standard chest radiographs are a poor diagnostic tool for pediatric foreign body aspiration. Machine learning may improve upon the diagnostic capabilities of chest radiographs. The objective was to develop a machine learning algorithm that improves the diagnostic capabilities of chest radiographs in pediatric foreign body aspiration. METHOD This retrospective diagnostic study included a chart review of patients with a potential diagnosis of FBA from 2010 to 2020. Frontal-view chest radiographs were extracted, processed, and uploaded to Google AutoML Vision. The developed algorithm was then evaluated against a pediatric radiologist. RESULTS The study selected 566 patients who presented with a suspected diagnosis of foreign body aspiration, and 1688 chest radiograph images were collected. The sensitivity and specificity of the radiologist interpretation were 50.6% (43.1-58.0) and 88.7% (85.3-91.5), respectively. The sensitivity and specificity of the algorithm were 66.7% (43.0-85.4) and 95.3% (90.6-98.1), respectively. The precision and recall of the algorithm were both 91.8%, with an AUPRC of 98.3%. CONCLUSION Chest radiograph analysis augmented with machine learning can diagnose foreign body aspiration in pediatric patients at a level similar to a read performed by a pediatric radiologist, despite using only single-view, fixed images. Overall, this study highlights the potential and capabilities of machine learning in diagnosing conditions with a wide range of clinical presentations. LEVEL OF EVIDENCE 3. Laryngoscope, 2024.
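The parenthesized ranges in this abstract are 95% confidence intervals for binomial proportions. One standard construction is the Wilson score interval; the paper does not state which method it used, so the counts and method below are purely illustrative:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# e.g. 12 of 18 cases detected -> point estimate 66.7%
lo, hi = wilson_interval(12, 18)
print(f"66.7% (95% CI {lo:.1%}-{hi:.1%})")
```

The wide interval around the algorithm's 66.7% sensitivity reflects the small number of positive cases, which is why it overlaps the radiologist's interval despite the higher point estimate.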
Affiliation(s)
- Brandon Truong
- School of Medicine, University of California, San Francisco, California, USA
- Matthew Zapala
- Division of Pediatric Radiology, Department of Radiology and Biomedical Imaging, University of California, San Francisco, California, USA
- Bamidele Kammen
- Division of Pediatric Radiology, Department of Radiology and Biomedical Imaging, University of California, San Francisco, California, USA
- Kimberly Luu
- Division of Pediatric Otolaryngology, Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, USA
8
Xu FWX, Choo AMH, Ting PLM, Ong SJ, Khoo D. Leveraging AI in Postgraduate Medical Education for Rapid Skill Acquisition in Ultrasound-Guided Procedural Techniques. J Imaging 2023;9:225. PMID: 37888332; PMCID: PMC10607244; DOI: 10.3390/jimaging9100225.
Abstract
Ultrasound-guided techniques are increasingly prevalent and represent a gold standard of care. Skills such as needle visualisation, optimising the target image and directing the needle require deliberate practice. However, training opportunities remain limited by patient case load and safety considerations. Hence, there is a genuine and urgent need for trainees to attain accelerated skill acquisition in a time- and cost-efficient manner that minimises risk to patients. We propose a two-step solution: First, we have created an agar phantom model that simulates human tissue and structures like vessels and nerve bundles. Moreover, we have adopted deep learning techniques to provide trainees with live visualisation of target structures and automate assessment of their user speed and accuracy. Key structures like the needle tip, needle body, target blood vessels, and nerve bundles, are delineated in colour on the processed image, providing an opportunity for real-time guidance of needle positioning and target structure penetration. Quantitative feedback on user speed (time taken for target penetration), accuracy (penetration of correct target), and efficacy in needle positioning (percentage of frames where the full needle is visualised in a longitudinal plane) are also assessable using our model. Our program was able to demonstrate a sensitivity of 99.31%, specificity of 69.23%, accuracy of 91.33%, precision of 89.94%, recall of 99.31%, and F1 score of 0.94 in automated image labelling.
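The F1 score quoted in this abstract is the harmonic mean of precision and recall, so it can be checked directly from the reported figures:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Reported: precision 89.94%, recall 99.31% -> F1 of 0.94
print(round(f1_score(0.8994, 0.9931), 2))  # 0.94
```

The gap between the high recall (99.31%) and lower specificity (69.23%) indicates the labeller rarely misses target structures but produces a fair number of false positives.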
Affiliation(s)
- Shao Jin Ong
- National University Hospital, National University Health Systems, Singapore 119074, Singapore
9
Nosrati H, Nosrati M. Artificial Intelligence in Regenerative Medicine: Applications and Implications. Biomimetics (Basel) 2023;8:442. PMID: 37754193; PMCID: PMC10526210; DOI: 10.3390/biomimetics8050442.
Abstract
The field of regenerative medicine is constantly advancing and aims to repair, regenerate, or substitute impaired or unhealthy tissues and organs using cutting-edge approaches such as stem cell-based therapies, gene therapy, and tissue engineering. Nevertheless, incorporating artificial intelligence (AI) technologies has opened new doors for research in this field. AI refers to the ability of machines to perform tasks that typically require human intelligence in ways such as learning the patterns in the data and applying that to the new data without being explicitly programmed. AI has the potential to improve and accelerate various aspects of regenerative medicine research and development, particularly, although not exclusively, when complex patterns are involved. This review paper provides an overview of AI in the context of regenerative medicine, discusses its potential applications with a focus on personalized medicine, and highlights the challenges and opportunities in this field.
Affiliation(s)
- Hamed Nosrati
- Biosensor Research Center, Isfahan University of Medical Sciences, Isfahan 81746-73461, Iran
- Masoud Nosrati
- Department of Computer Science, Iowa State University, Ames, IA 50011, USA
10
Wibaek R, Andersen GS, Dahm CC, Witte DR, Hulman A. Large Language Models for Epidemiological Research via Automated Machine Learning: Case Study Using Data From the British National Child Development Study. JMIR Med Inform 2023;11:e43638. PMID: 37787655; PMCID: PMC10547934; DOI: 10.2196/43638.
Abstract
Background Large language models have had a huge impact on natural language processing (NLP) in recent years. However, their application in epidemiological research is still limited to the analysis of electronic health records and social media data. Objectives To demonstrate the potential of NLP beyond these domains, we aimed to develop prediction models based on texts collected from an epidemiological cohort and compare their performance to classical regression methods. Methods We used data from the British National Child Development Study, where 10,567 children aged 11 years wrote essays about how they imagined themselves as 25-year-olds. Overall, 15% of the data set was set aside as a test set for performance evaluation. Pretrained language models were fine-tuned using AutoTrain (Hugging Face) to predict current reading comprehension score (range: 0-35) and future BMI and physical activity (active vs inactive) at the age of 33 years. We then compared their predictive performance (accuracy or discrimination) with linear and logistic regression models, including demographic and lifestyle factors of the parents and children from birth to the age of 11 years as predictors. Results NLP clearly outperformed linear regression when predicting reading comprehension scores (root mean square error: 3.89, 95% CI 3.74-4.05 for NLP vs 4.14, 95% CI 3.98-4.30 and 5.41, 95% CI 5.23-5.58 for regression models with and without general ability score as a predictor, respectively). Predictive performance for physical activity was similarly poor for the 2 methods (area under the receiver operating characteristic curve: 0.55, 95% CI 0.52-0.60 for both) but was slightly better than random assignment, whereas linear regression clearly outperformed the NLP approach when predicting BMI (root mean square error: 4.38, 95% CI 4.02-4.74 for NLP vs 3.85, 95% CI 3.54-4.16 for regression).
The NLP approach did not perform better than simply assigning the mean BMI from the training set as a predictor. Conclusions Our study demonstrated the potential of using large language models on text collected from epidemiological studies. The performance of the approach appeared to depend on how directly the topic of the text was related to the outcome. Open-ended questions specifically designed to capture certain health concepts and lived experiences in combination with NLP methods should receive more attention in future epidemiological studies.
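The mean-of-training-set baseline mentioned in this abstract is the simplest possible regressor: every test subject is assigned the training-set mean, and the resulting root mean square error sets the floor any useful model must beat. A sketch with hypothetical BMI values (not the NCDS data):

```python
import math

def rmse(preds, actual):
    # Root mean square error between predictions and observed values.
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(preds, actual)) / len(actual))

train_bmi = [22.1, 25.3, 27.8, 21.5, 24.9]  # hypothetical training outcomes
test_bmi = [23.0, 26.5, 20.8]               # hypothetical test outcomes

baseline = sum(train_bmi) / len(train_bmi)  # predict the training mean for everyone
preds = [baseline] * len(test_bmi)
print(f"baseline RMSE: {rmse(preds, test_bmi):.2f}")
```

A model whose RMSE matches this baseline, as the NLP approach did for BMI, has extracted no predictive signal from its inputs.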
Affiliation(s)
- Daniel R Witte
- Department of Public Health, Aarhus University, Aarhus, Denmark
- Steno Diabetes Center Aarhus, Aarhus University Hospital, Aarhus, Denmark
- Adam Hulman
- Department of Public Health, Aarhus University, Aarhus, Denmark
- Steno Diabetes Center Aarhus, Aarhus University Hospital, Aarhus, Denmark
11
Choi W, Choi T, Heo S. A Comparative Study of Automated Machine Learning Platforms for Exercise Anthropometry-Based Typology Analysis: Performance Evaluation of AWS SageMaker, GCP VertexAI, and MS Azure. Bioengineering (Basel) 2023;10:891. PMID: 37627775; PMCID: PMC10451891; DOI: 10.3390/bioengineering10080891.
Abstract
The increasing prevalence of machine learning (ML) and automated machine learning (AutoML) applications across diverse industries necessitates rigorous comparative evaluations of their predictive accuracies under various computational environments. The purpose of this research was to compare and analyze the predictive accuracy of several machine learning algorithms, including RNNs, LSTMs, GRUs, XGBoost, and LightGBM, when implemented on different platforms such as Google Colab Pro, AWS SageMaker, GCP Vertex AI, and MS Azure. The predictive performance of each model within its respective environment was assessed using performance metrics such as accuracy, precision, recall, F1-score, and log loss. All algorithms were trained on the same dataset and implemented on their specified platforms to ensure consistent comparisons. The dataset used in this study comprised fitness images, encompassing 41 exercise types and totaling 6 million samples. These images were acquired from AI-hub, and joint coordinate values (x, y, z) were extracted utilizing the Mediapipe library. The extracted values were then stored in a CSV format. Among the ML algorithms, LSTM demonstrated the highest performance, achieving an accuracy of 73.75%, precision of 74.55%, recall of 73.68%, F1-score of 73.11%, and a log loss of 0.71. Conversely, among the AutoML algorithms, XGBoost performed exceptionally well on AWS SageMaker, boasting an accuracy of 99.6%, precision of 99.8%, recall of 99.2%, F1-score of 99.5%, and a log loss of 0.014. On the other hand, LightGBM exhibited the poorest performance on MS Azure, achieving an accuracy of 84.2%, precision of 82.2%, recall of 81.8%, F1-score of 81.5%, and a log loss of 1.176. The unnamed algorithm implemented on GCP Vertex AI showcased relatively favorable results, with an accuracy of 89.9%, precision of 94.2%, recall of 88.4%, F1-score of 91.2%, and a log loss of 0.268. 
Despite LightGBM's lackluster performance on MS Azure, the GRU implemented in Google Colab Pro displayed encouraging results, yielding an accuracy of 88.2%, precision of 88.5%, recall of 88.1%, F1-score of 88.4%, and a log loss of 0.44. Overall, this study revealed significant variations in performance across different algorithms and platforms. Particularly, AWS SageMaker's implementation of XGBoost outperformed other configurations, highlighting the importance of carefully considering the choice of algorithm and computational environment in predictive tasks. To gain a comprehensive understanding of the factors contributing to these performance discrepancies, further investigations are recommended.
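The comparison above reports accuracy, precision, recall, F1-score, and log loss for each model/platform pairing. As an illustrative sketch of how these standard metrics are computed from predictions and class probabilities (this is not the study's code; macro averaging is assumed here, which the abstract does not specify):

```python
import math

def classification_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1-score."""
    labels = sorted(set(y_true) | set(y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precs, recs, f1s = [], [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1)
    n = len(labels)
    return acc, sum(precs) / n, sum(recs) / n, sum(f1s) / n

def log_loss(y_true, probs, eps=1e-15):
    """Mean negative log-probability assigned to the true class.

    `probs[i]` is the predicted probability vector for sample i;
    `y_true[i]` is the true class index. Probabilities are clamped
    to [eps, 1] to avoid log(0)."""
    return -sum(math.log(min(max(p[t], eps), 1.0))
                for t, p in zip(y_true, probs)) / len(y_true)
```

A lower log loss indicates better-calibrated probabilities, which is why the abstract pairs it with the threshold-based metrics.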
Affiliation(s)
- Wansuk Choi
  - Department of Physical Therapy, International University of Korea, Jinju 17731, Republic of Korea
- Taeseok Choi
  - Department of Medical Performance Center (MPC), Sejong Sports Medicine and Performance Hospital, Seoul 05006, Republic of Korea
- Seoyoon Heo
  - Department of Occupational Therapy, College of Medical and Health Sciences, Kyungbok University, Namyangju 12051, Republic of Korea
12
Afrin H, Larson NB, Fatemi M, Alizad A. Deep Learning in Different Ultrasound Methods for Breast Cancer, from Diagnosis to Prognosis: Current Trends, Challenges, and an Analysis. Cancers (Basel) 2023; 15:3139. [PMID: 37370748 PMCID: PMC10296633 DOI: 10.3390/cancers15123139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 06/02/2023] [Accepted: 06/08/2023] [Indexed: 06/29/2023] Open
Abstract
Breast cancer is the second-leading cause of mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but it shows increased false negativity due to its high operator dependency. Underserved areas do not have sufficient US expertise to diagnose breast lesions, resulting in delayed management of breast lesions. Deep learning neural networks may have the potential to facilitate early decision-making by physicians by rapidly yet accurately diagnosing and monitoring their prognosis. This article reviews the recent research trends on neural networks for breast mass ultrasound, including and beyond diagnosis. We discussed original research recently conducted to analyze which modes of ultrasound and which models have been used for which purposes, and where they show the best performance. Our analysis reveals that lesion classification showed the highest performance compared to those used for other purposes. We also found that fewer studies were performed for prognosis than diagnosis. We also discussed the limitations and future directions of ongoing research on neural networks for breast ultrasound.
Affiliation(s)
- Humayra Afrin
  - Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Nicholas B. Larson
  - Department of Quantitative Health Sciences, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Mostafa Fatemi
  - Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad
  - Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
  - Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
13
Lee P, Tahmasebi A, Dave JK, Parekh MR, Kumaran M, Wang S, Eisenbrey JR, Donuru A. Comparison of Gray-scale Inversion to Improve Detection of Pulmonary Nodules on Chest X-rays Between Radiologists and a Deep Convolutional Neural Network. Curr Probl Diagn Radiol 2023; 52:180-186. [PMID: 36470698 DOI: 10.1067/j.cpradiol.2022.11.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2022] [Revised: 10/08/2022] [Accepted: 11/14/2022] [Indexed: 11/19/2022]
Abstract
Detection of pulmonary nodules on chest x-rays is an important task for radiologists. Previous studies have shown improved detection rates using gray-scale inversion. The purpose of our study was to compare the efficacy of gray-scale inversion in improving the detection of pulmonary nodules on chest x-rays for radiologists and machine learning models (ML). We created a mixed dataset consisting of 60 two-view (posteroanterior [PA] and lateral) chest x-rays with computed tomography confirmed nodule(s) and 62 normal chest x-rays. Twenty percent of the cases were separated for a testing dataset (24 total images). Data augmentation through mirroring and transfer learning was used for the remaining cases (784 total images) for supervised training of 4 ML models (grayscale PA, grayscale lateral, gray-scale inversion PA, and gray-scale inversion lateral) on Google's cloud-based AutoML platform. Three cardiothoracic radiologists analyzed the complete 2-view dataset (n=120) and, for comparison to the ML, the single-view testing subsets (12 images each). Gray-scale inversion (area under the curve (AUC) 0.80, 95% confidence interval (CI) 0.75-0.85) did not improve diagnostic performance for radiologists compared to grayscale (AUC 0.84, 95% CI 0.79-0.88). Gray-scale inversion also did not improve diagnostic performance for the ML. In the limited testing dataset, the ML did demonstrate higher sensitivity and negative predictive value for grayscale PA (72.7% and 75.0%), grayscale lateral (63.6% and 66.6%), and gray-scale inversion lateral views (72.7% and 76.9%), comparing favorably to the radiologists (63.9% and 72.3%, 27.8% and 58.3%, and 19.5% and 50.5%, respectively).
Further investigation of other post-processing algorithms to improve diagnostic performance of ML is warranted.
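The study above compares readers and models by AUC with 95% confidence intervals. As a hedged sketch of how such figures are commonly obtained (the paper does not state its exact method, so the percentile bootstrap below is an assumption): the AUC equals the Mann-Whitney probability that a random positive case scores higher than a random negative one.

```python
import random

def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of ROC AUC: P(pos score > neg score),
    with ties counting one half."""
    wins = sum((sp > sn) + 0.5 * (sp == sn)
               for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def bootstrap_auc_ci(scores_pos, scores_neg, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUC,
    resampling cases with replacement within each class."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        bp = rng.choices(scores_pos, k=len(scores_pos))
        bn = rng.choices(scores_neg, k=len(scores_neg))
        stats.append(auc(bp, bn))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

With only 12 single-view test images per subset, such intervals would be wide, which is consistent with the paper's cautious conclusion.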
Affiliation(s)
- Patrick Lee
  - Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Aylin Tahmasebi
  - Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Jaydev K Dave
  - Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Maansi R Parekh
  - Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Maruti Kumaran
  - Department of Radiology, Temple University Hospital, Philadelphia, PA
- Shuo Wang
  - Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- John R Eisenbrey
  - Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Achala Donuru
  - Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
14
Tahmasebi A, Wang S, Wessner CE, Vu T, Liu JB, Forsberg F, Civan J, Guglielmo FF, Eisenbrey JR. Ultrasound-Based Machine Learning Approach for Detection of Nonalcoholic Fatty Liver Disease. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2023. [PMID: 36807314 DOI: 10.1002/jum.16194] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 12/05/2022] [Accepted: 01/25/2023] [Indexed: 06/18/2023]
Abstract
OBJECTIVES Current diagnosis of nonalcoholic fatty liver disease (NAFLD) relies on biopsy or MR-based fat quantification. This prospective study explored the use of ultrasound with artificial intelligence for the detection of NAFLD. METHODS One hundred and twenty subjects with clinical suspicion of NAFLD and 10 healthy volunteers consented to participate in this institutional review board-approved study. Subjects were categorized as NAFLD and non-NAFLD according to MR proton density fat fraction (PDFF) findings. Ultrasound images from 10 different locations in the right and left hepatic lobes were collected following a standard protocol. MRI-based liver fat quantification was used as the reference standard with >6.4% indicative of NAFLD. A supervised machine learning model was developed for assessment of NAFLD. To validate model performance, a balanced testing dataset of 24 subjects was used. Sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy with 95% confidence intervals were calculated. RESULTS A total of 1119 images from 106 participants were used for model development. The internal evaluation achieved an average precision of 0.941, recall of 88.2%, and precision of 89.0%. In the testing set, AutoML achieved a sensitivity of 72.2% (63.1%-80.1%), specificity of 94.6% (88.7%-98.0%), positive predictive value (PPV) of 93.1% (86.0%-96.7%), negative predictive value of 77.3% (71.6%-82.1%), and accuracy of 83.4% (77.9%-88.0%). The average agreement for an individual subject was 92%. CONCLUSIONS An ultrasound-based machine learning model for identification of NAFLD showed high specificity and PPV in this prospective trial. This approach may in the future be used as an inexpensive and noninvasive screening tool for identifying NAFLD in high-risk patients.
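The abstract above reports sensitivity, specificity, PPV, NPV, and accuracy, each with a 95% confidence interval. All five derive from a 2x2 confusion matrix; as an illustrative sketch (the paper does not state which interval formula it used, so the Wilson score interval below is an assumption):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion k/n."""
    if n == 0:
        return (0.0, 0.0)
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and accuracy, each returned
    as (point estimate, Wilson 95% CI), from confusion-matrix counts."""
    def stat(k, n):
        return (k / n if n else float("nan"), wilson_ci(k, n))
    return {
        "sensitivity": stat(tp, tp + fn),
        "specificity": stat(tn, tn + fp),
        "ppv": stat(tp, tp + fp),
        "npv": stat(tn, tn + fn),
        "accuracy": stat(tp + tn, tp + fp + fn + tn),
    }
```

Note that PPV and NPV computed this way depend on the disease prevalence in the test set, which is why the authors used a balanced testing dataset.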
Affiliation(s)
- Aylin Tahmasebi
  - Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Shuo Wang
  - Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Corinne E Wessner
  - Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Trang Vu
  - Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Ji-Bin Liu
  - Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Flemming Forsberg
  - Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Jesse Civan
  - Department of Medicine, Division of Gastroenterology and Hepatology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Flavius F Guglielmo
  - Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- John R Eisenbrey
  - Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
15
Artificial Intelligence in Breast Ultrasound: From Diagnosis to Prognosis-A Rapid Review. Diagnostics (Basel) 2022; 13:diagnostics13010058. [PMID: 36611350 PMCID: PMC9818181 DOI: 10.3390/diagnostics13010058] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Revised: 12/19/2022] [Accepted: 12/20/2022] [Indexed: 12/28/2022] Open
Abstract
BACKGROUND Ultrasound (US) is a fundamental diagnostic tool in breast imaging. However, US remains an operator-dependent examination. Research into and the application of artificial intelligence (AI) in breast US are increasing. The aim of this rapid review was to assess the current development of US-based artificial intelligence in the field of breast cancer. METHODS Two investigators with experience in medical research performed literature searching and data extraction on PubMed. The studies included in this rapid review evaluated the role of artificial intelligence concerning breast cancer (BC) diagnosis, prognosis, molecular subtypes of breast cancer, axillary lymph node status, and the response to neoadjuvant chemotherapy. The mean values of sensitivity, specificity, and AUC were calculated for the main study categories with a meta-analytical approach. RESULTS A total of 58 main studies, all published after 2017, were included. Only 9/58 studies were prospective (15.5%); 13/58 studies (22.4%) used an ML approach. The vast majority (77.6%) used DL systems. Most studies were conducted for the diagnosis or classification of BC (55.1%). At present, all the included studies showed that AI has excellent performance in breast cancer diagnosis, prognosis, and treatment strategy. CONCLUSIONS US-based AI has great potential and research value in the field of breast cancer diagnosis, treatment, and prognosis. More prospective and multicenter studies are needed to assess the potential impact of AI in breast ultrasound.
16
Kabir SM, Bhuiyan MIH. Correlated-Weighted Statistically Modeled Contourlet and Curvelet Coefficient Image-Based Breast Tumor Classification Using Deep Learning. Diagnostics (Basel) 2022; 13:diagnostics13010069. [PMID: 36611361 PMCID: PMC9818942 DOI: 10.3390/diagnostics13010069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 12/14/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022] Open
Abstract
Deep learning-based automatic classification of breast tumors using parametric imaging techniques from ultrasound (US) B-mode images is still an exciting research area. The Rician inverse Gaussian (RiIG) distribution is currently emerging as an appropriate choice for statistical modeling. This study presents a new approach of correlated-weighted contourlet-transformed RiIG (CWCtr-RiIG) and curvelet-transformed RiIG (CWCrv-RiIG) image-based deep convolutional neural network (CNN) architecture for breast tumor classification from B-mode ultrasound images. A comparative study with other statistical models, such as Nakagami and normal inverse Gaussian (NIG) distributions, is also presented here. The term "weighted" refers to weighting the contourlet and curvelet sub-band coefficient images by their correlation with the corresponding RiIG statistically modeled images. By taking into account three freely accessible datasets (Mendeley, UDIAT, and BUSI), it is demonstrated that the proposed approach can provide more than 98 percent accuracy, sensitivity, specificity, NPV, and PPV values using the CWCtr-RiIG images. On the same datasets, the suggested method offers superior classification performance to several other existing strategies.
Affiliation(s)
- Shahriar M. Kabir
  - Department of Electrical and Electronic Engineering, Green University of Bangladesh, Dhaka 1207, Bangladesh
  - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh
  - Correspondence: ; Tel.: +88-017-6461-0728
- Mohammed I. H. Bhuiyan
  - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh
17
Gu Y, Xu W, Lin B, An X, Tian J, Ran H, Ren W, Chang C, Yuan J, Kang C, Deng Y, Wang H, Luo B, Guo S, Zhou Q, Xue E, Zhan W, Zhou Q, Li J, Zhou P, Chen M, Gu Y, Chen W, Zhang Y, Li J, Cong L, Zhu L, Wang H, Jiang Y. Deep learning based on ultrasound images assists breast lesion diagnosis in China: a multicenter diagnostic study. Insights Imaging 2022; 13:124. [PMID: 35900608 PMCID: PMC9334487 DOI: 10.1186/s13244-022-01259-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Accepted: 06/25/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Studies on deep learning (DL)-based models in breast ultrasound (US) remain at the early stage due to a lack of large datasets for training and independent test sets for verification. We aimed to develop a DL model for differentiating benign from malignant breast lesions on US using a large multicenter dataset and explore the model's ability to assist the radiologists. METHODS A total of 14,043 US images from 5012 women were prospectively collected from 32 hospitals. To develop the DL model, the patients from 30 hospitals were randomly divided into a training cohort (n = 4149) and an internal test cohort (n = 466). The remaining 2 hospitals (n = 397) were used as the external test cohorts (ETC). We compared the model with the prospective Breast Imaging Reporting and Data System assessment and five radiologists. We also explored the model's ability to assist the radiologists using two different methods. RESULTS The model demonstrated excellent diagnostic performance with the ETC, with a high area under the receiver operating characteristic curve (AUC, 0.913), sensitivity (88.84%), specificity (83.77%), and accuracy (86.40%). In the comparison set, the AUC was similar to that of the expert (p = 0.5629) and one experienced radiologist (p = 0.2112) and significantly higher than that of three inexperienced radiologists (p < 0.01). After model assistance, the accuracies and specificities of the radiologists were substantially improved without loss in sensitivities. CONCLUSIONS The DL model yielded satisfactory predictions in distinguishing benign from malignant breast lesions. The model showed the potential value in improving the diagnosis of breast lesions by radiologists.
Affiliation(s)
- Yang Gu
  - Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Wen Xu
  - Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Bin Lin
  - Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Xing An
  - Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Jiawei Tian
  - Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Haitao Ran
  - Department of Ultrasound, The Second Affiliated Hospital of Chongqing Medical University and Chongqing Key Laboratory of Ultrasound Molecular Imaging, Chongqing, China
- Weidong Ren
  - Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Cai Chang
  - Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, Shanghai, China
- Jianjun Yuan
  - Department of Ultrasonography, Henan Provincial People's Hospital, Zhengzhou, China
- Chunsong Kang
  - Department of Ultrasound, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Taiyuan, China
- Youbin Deng
  - Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College of Huazhong University of Science and Technology, Wuhan, China
- Hui Wang
  - Department of Ultrasound, China-Japan Union Hospital of Jilin University, Changchun, China
- Baoming Luo
  - Department of Ultrasound, The Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Shenglan Guo
  - Department of Ultrasonography, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Qi Zhou
  - Department of Medical Ultrasound, The Second Affiliated Hospital, School of Medicine, Xi'an Jiaotong University, Xi'an, China
- Ensheng Xue
  - Department of Ultrasound, Union Hospital of Fujian Medical University, Fujian Institute of Ultrasound Medicine, Fuzhou, China
- Weiwei Zhan
  - Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University, School of Medicine, Shanghai, China
- Qing Zhou
  - Department of Ultrasonography, Renmin Hospital of Wuhan University, Wuhan, China
- Jie Li
  - Department of Ultrasound, Qilu Hospital, Shandong University, Jinan, 250012, China
- Ping Zhou
  - Department of Ultrasound, The Third Xiangya Hospital of Central South University, Changsha, China
- Man Chen
  - Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ying Gu
  - Department of Ultrasonography, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Wu Chen
  - Department of Ultrasound, The First Hospital of Shanxi Medical University, Taiyuan, China
- Yuhong Zhang
  - Department of Ultrasound, The Second Hospital of Dalian Medical University, Dalian, China
- Jianchu Li
  - Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Longfei Cong
  - Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Lei Zhu
  - Department of Medical Imaging Advanced Research, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Shenzhen, China
- Hongyan Wang
  - Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Yuxin Jiang
  - Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
18
Image Moment-Based Features for Mass Detection in Breast US Images via Machine Learning and Neural Network Classification Models. INVENTIONS 2022. [DOI: 10.3390/inventions7020042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Differentiating between malignant and benign masses using machine learning in the recognition of breast ultrasound (BUS) images is a technique with good accuracy and precision, which helps doctors make a correct diagnosis. The method proposed in this paper integrates Hu's moments in the analysis of the breast tumor. The extracted features feed a k-nearest neighbor (k-NN) classifier and a radial basis function neural network (RBFNN) to classify breast tumors into benign and malignant. The raw images and the tumor masks provided as ground-truth images belong to the public digital BUS images database. Certain metrics such as accuracy, sensitivity, precision, and F1-score were used to evaluate the segmentation results and to select Hu's moments showing the best capacity to discriminate between malignant and benign breast tissues in BUS images. Regarding the selection of Hu's moments, the k-NN classifier reached 85% accuracy for moment M1 and 80% for moment M5, whilst RBFNN reached an accuracy of 76% for M1. The proposed method might be used to assist the clinical identification of breast cancer by providing a good combination of segmentation and Hu's moments.
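Hu's first moment M1, the most discriminative feature in the study above, is the sum of the second-order normalized central moments, M1 = η20 + η02, and is invariant to translation, scale, and rotation. A minimal illustrative sketch for a binary segmentation mask (not the paper's implementation, which also uses higher-order moments such as M5):

```python
def hu_m1(mask):
    """Hu's first invariant M1 = eta20 + eta02 for a binary mask.

    `mask` is a list of rows of 0/1 values. For second-order moments
    the normalization is mu_pq / mu00**2, with mu00 equal to the
    number of foreground pixels."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    m00 = len(pts)
    cx = sum(x for x, _ in pts) / m00          # centroid x
    cy = sum(y for _, y in pts) / m00          # centroid y
    mu20 = sum((x - cx) ** 2 for x, _ in pts)  # central moment (2,0)
    mu02 = sum((y - cy) ** 2 for _, y in pts)  # central moment (0,2)
    return (mu20 + mu02) / m00 ** 2
```

In practice a library routine such as OpenCV's `cv2.HuMoments` would be used on real masks; the sketch only makes the definition concrete.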
19
Abstract
Machine learning (ML) methods are pervading an increasing number of fields of application because of their capacity to effectively solve a wide variety of challenging problems. The employment of ML techniques in ultrasound imaging applications started several years ago but the scientific interest in this issue has increased exponentially in the last few years. The present work reviews the most recent (2019 onwards) implementations of machine learning techniques for two of the most popular ultrasound imaging fields, medical diagnostics and non-destructive evaluation. The former, which covers the major part of the review, was analyzed by classifying studies according to the human organ investigated and the methodology (e.g., detection, segmentation, and/or classification) adopted, while for the latter, some solutions to the detection/classification of material defects or particular patterns are reported. Finally, the main merits of machine learning that emerged from the study analysis are summarized and discussed.
20
Lan X, Wang X, Qi J, Chen H, Zeng X, Shi J, Liu D, Shen H, Zhang J. Application of machine learning with multiparametric dual-energy computed tomography of the breast to differentiate between benign and malignant lesions. Quant Imaging Med Surg 2022; 12:810-822. [PMID: 34993120 DOI: 10.21037/qims-21-39] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Accepted: 07/30/2021] [Indexed: 11/06/2022]
Abstract
BACKGROUND Multiparametric dual-energy computed tomography (mpDECT) is widely used to differentiate various kinds of tumors; however, the data regarding its diagnostic performance with machine learning to diagnose breast tumors are limited. We evaluated univariate analysis and machine learning performance with mpDECT to distinguish between benign and malignant breast lesions. METHODS In total, 172 patients with 214 breast lesions (55 benign and 159 malignant) who underwent preoperative dual-phase contrast-enhanced DECT were included in this retrospective study. Twelve quantitative features were extracted for each lesion, including CT attenuation (precontrast, arterial, and venous phases), the arterial-venous phase difference in normalized effective atomic number (nZeff), normalized iodine concentration (NIC), and slope of the spectral Hounsfield unit (HU) curve (λHu). Predictive models were developed using univariate analysis and eight machine learning methods [logistic regression, extreme gradient boosting (XGBoost), stochastic gradient descent (SGD), linear discriminant analysis (LDA), adaptive boosting (AdaBoost), random forest (RF), decision tree, and linear support vector machine (SVM)]. Classification performances were assessed based on the area under the receiver operating characteristic curve (AUROC). The best performances of the conventional univariate analysis and machine learning methods were compared using the DeLong test. RESULTS The univariate analysis showed that the venous phase λHu had the highest AUROC (0.88). Machine learning with mpDECT achieved an excellent and stable diagnostic performance, as shown by the mean classification performances in the training (AUROC, 0.88-0.99) and testing (AUROC, 0.83-0.96) datasets. The performance of the AdaBoost model based on mpDECT was more stable than the other machine learning models and superior to the univariate analysis (AUROC, 0.96 vs. 0.88; P<0.001).
CONCLUSIONS The performance of the AdaBoost classifier based on mpDECT data achieved the highest mean accuracy compared to the other machine learning models and univariate analysis in differentiating between benign and malignant breast lesions.
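AdaBoost, the best performer among the eight classifiers above, combines many weak learners into a weighted vote. As a generic illustration of the boosting principle only (not the authors' pipeline, which used twelve mpDECT features), a minimal binary AdaBoost over decision stumps with labels coded as +1/-1:

```python
import math

def stump_fit(X, y, w):
    """Best weighted decision stump: (feature, threshold, polarity, error)."""
    best = None
    for j in range(len(X[0])):
        values = sorted({x[j] for x in X})
        for thr in [(a + b) / 2 for a, b in zip(values, values[1:])]:
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[j] > thr else -pol) != yi)
                if best is None or err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost_fit(X, y, rounds=10):
    """Classic binary AdaBoost: reweight samples toward past mistakes."""
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        j, thr, pol, err = stump_fit(X, y, w)
        err = max(err, 1e-12)        # avoid log(0) on a perfect stump
        if err >= 0.5:               # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        preds = [(pol if xi[j] > thr else -pol) for xi in X]
        w = [wi * math.exp(-alpha * yi * pi)
             for wi, yi, pi in zip(w, y, preds)]
        s = sum(w)
        w = [wi / s for wi in w]     # renormalize the weights
        model.append((alpha, j, thr, pol))
    return model

def adaboost_predict(model, x):
    """Sign of the alpha-weighted vote of the fitted stumps."""
    score = sum(alpha * (pol if x[j] > thr else -pol)
                for alpha, j, thr, pol in model)
    return 1 if score >= 0 else -1
```

In practice one would use `sklearn.ensemble.AdaBoostClassifier`; the sketch only shows why the ensemble can be more stable than any single-feature (univariate) threshold.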
Affiliation(s)
- Xiaosong Lan
  - Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing, China
- Xiaoxia Wang
  - Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing, China
- Jun Qi
  - Department of Thoracic Surgery, Chongqing University Cancer Hospital, School of Medicine, Chongqing University, Chongqing, China
- Huifang Chen
  - Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing, China
- Xiangfei Zeng
  - Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing, China
- Jinfang Shi
  - Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing, China
- Daihong Liu
  - Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing, China
- Hesong Shen
  - Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing, China
- Jiuquan Zhang
  - Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing, China
21
22
RiIG Modeled WCP Image-Based CNN Architecture and Feature-Based Approach in Breast Tumor Classification from B-Mode Ultrasound. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app112412138] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
This study presents two new approaches based on Weighted Contourlet Parametric (WCP) images for the classification of breast tumors from B-mode ultrasound images. The Rician Inverse Gaussian (RiIG) distribution is considered for modeling the statistics of ultrasound images in the Contourlet transform domain. The WCP images are obtained by weighting the RiIG modeled Contourlet sub-band coefficient images. In the feature-based approach, various geometrical, statistical, and texture features are shown to have low ANOVA p-value, thus indicating a good capacity for class discrimination. Using three publicly available datasets (Mendeley, UDIAT, and BUSI), it is shown that the classical feature-based approach can yield more than 97% accuracy across the datasets for breast tumor classification using WCP images while the custom-made convolutional neural network (CNN) can deliver more than 98% accuracy, sensitivity, specificity, NPV, and PPV values utilizing the same WCP images. Both methods provide superior classification performance, better than those of several existing techniques on the same datasets.
23
Meraj T, Alosaimi W, Alouffi B, Rauf HT, Kumar SA, Damaševičius R, Alyami H. A quantization assisted U-Net study with ICA and deep features fusion for breast cancer identification using ultrasonic data. PeerJ Comput Sci 2021; 7:e805. [PMID: 35036531 PMCID: PMC8725669 DOI: 10.7717/peerj-cs.805] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 11/12/2021] [Indexed: 06/14/2023]
Abstract
Breast cancer is one of the leading causes of death in women worldwide. The rapid increase in breast cancer has brought about more accessible diagnosis resources. The ultrasonic breast cancer modality for diagnosis is relatively cost-effective and valuable. Lesion isolation in ultrasonic images is a challenging task due to its robustness and intensity similarity. Accurate detection of breast lesions using ultrasonic breast cancer images can reduce death rates. In this research, a quantization-assisted U-Net approach for segmentation of breast lesions is proposed. It comprises two steps: (1) U-Net segmentation and (2) quantization. The quantization assists the U-Net-based segmentation in isolating exact lesion areas from sonography images. The Independent Component Analysis (ICA) method then uses the isolated lesions to extract features, which are then fused with deep automatic features. Public ultrasonic-modality-based datasets such as the Breast Ultrasound Images Dataset (BUSI) and the Open Access Database of Raw Ultrasonic Signals (OASBUD) are used for evaluation comparison. The same features were extracted from the OASBUD data; however, classification was performed after feature regularization using the lasso method. The obtained results allow us to propose a computer-aided diagnosis (CAD) system for breast cancer identification using ultrasonic modalities.
Affiliation(s)
- Talha Meraj: Department of Computer Science, COMSATS University Islamabad-Wah Campus, Wah Cantt, Pakistan
- Wael Alosaimi: Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Bader Alouffi: Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Hafiz Tayyab Rauf: Department of Computer Science, Faculty of Engineering & Informatics, University of Bradford, Bradford, United Kingdom
- Swarn Avinash Kumar: Department of Information Technology, Indian Institute of Information Technology, Jhalwa, Prayagraj, Uttar Pradesh, India
- Hashem Alyami: Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
|
24
|
Tahmasebi A, Qu E, Sevrukov A, Liu JB, Wang S, Lyshchik A, Yu J, Eisenbrey JR. Assessment of Axillary Lymph Nodes for Metastasis on Ultrasound Using Artificial Intelligence. ULTRASONIC IMAGING 2021; 43:329-336. [PMID: 34416827 DOI: 10.1177/01617346211035315] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The purpose of this study was to evaluate an artificial intelligence (AI) system for the classification of axillary lymph nodes on ultrasound compared to radiologists. Ultrasound images of 317 axillary lymph nodes from patients referred for ultrasound-guided fine needle aspiration or core needle biopsy, together with the corresponding pathology findings, were collected. Lymph nodes were classified into benign and malignant groups, with the histopathological result serving as the reference. Google Cloud AutoML Vision (Mountain View, CA) was used for AI image classification. Three experienced radiologists also classified the images and gave a level-of-suspicion score (1-5). To test the accuracy of AI, an external testing dataset of 64 images from 64 independent patients was evaluated by three AI models and the three readers. The diagnostic performance of the AI and the human readers was then quantified using receiver operating characteristic curves. In the complete set of 317 images, AutoML achieved a sensitivity of 77.1%, positive predictive value (PPV) of 77.1%, and an area under the precision-recall curve of 0.78, while the three radiologists showed a sensitivity of 87.8% ± 8.5%, specificity of 50.3% ± 16.4%, PPV of 61.1% ± 5.4%, negative predictive value (NPV) of 84.1% ± 6.6%, and accuracy of 67.7% ± 5.7%. In the three external independent test sets, AI and human readers achieved a sensitivity of 74.0% ± 0.14% versus 89.9% ± 0.06% (p = .25), specificity of 64.4% ± 0.11% versus 50.1% ± 0.20% (p = .22), PPV of 68.3% ± 0.04% versus 65.4% ± 0.07% (p = .50), NPV of 72.6% ± 0.11% versus 82.1% ± 0.08% (p = .33), and accuracy of 69.5% ± 0.06% versus 70.1% ± 0.07% (p = .90), respectively. These preliminary results indicate that AI has performance comparable to trained radiologists and could be used to predict the presence of metastasis in ultrasound images of axillary lymph nodes.
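The operating-point metrics reported above (sensitivity, specificity, PPV, NPV, accuracy) all follow from the binary confusion matrix against the histopathological reference. A minimal sketch of that computation, with malignant coded as 1:

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV, and accuracy from binary
    reference labels (1 = malignant) and binary predictions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(y_true),
    }

# Toy example (not the study's data)
m = diagnostic_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

Thresholding a continuous AI confidence score at different cutoffs and recomputing these values traces out the ROC curve used for the comparison.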
Affiliation(s)
- Aylin Tahmasebi: Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Enze Qu: Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA; Department of Ultrasound, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Alexander Sevrukov: Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Ji-Bin Liu: Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Shuo Wang: Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Andrej Lyshchik: Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Joshua Yu: Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- John R Eisenbrey: Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
|