201
Zolnoori M, Zolnour A, Topaz M. ADscreen: A speech processing-based screening system for automatic identification of patients with Alzheimer's disease and related dementia. Artif Intell Med 2023; 143:102624. [PMID: 37673583 PMCID: PMC10483114 DOI: 10.1016/j.artmed.2023.102624]
Abstract
Alzheimer's disease and related dementias (ADRD) present a looming public health crisis, affecting roughly 5 million people and 11% of older adults in the United States. Despite nationwide efforts toward timely diagnosis, more than 50% of patients with ADRD are undiagnosed and unaware of their disease. To address this challenge, we developed ADscreen, an innovative speech-processing-based screening algorithm for the proactive identification of patients with ADRD. ADscreen consists of five major components: (i) noise reduction to remove background noise from audio-recorded patient speech, (ii) modeling the patient's phonetic motor planning ability using acoustic parameters of the patient's voice, (iii) modeling the patient's semantic and syntactic levels of language organization using linguistic parameters of the patient's speech, (iv) extracting vocal and semantic psycholinguistic cues from the patient's speech, and (v) building and evaluating the screening algorithm. To identify important speech parameters (features) associated with ADRD, we used Joint Mutual Information Maximization (JMIM), an effective feature selection method for high-dimensional, small-sample-size datasets. The relationship between speech parameters and the outcome variable (presence/absence of ADRD) was modeled using three machine learning (ML) architectures capable of joining informative acoustic and linguistic parameters with contextual word embedding vectors obtained from DistilBERT, a distilled variant of BERT (Bidirectional Encoder Representations from Transformers). We evaluated the performance of ADscreen on audio-recorded patient speech (verbal descriptions) for the Cookie Theft picture description task, which is publicly available in DementiaBank.
The joint fusion of acoustic and linguistic parameters with contextual word embedding vectors from DistilBERT achieved an F1-score of 84.64 (standard deviation [SD] = ±3.58) and AUC-ROC of 92.53 (SD = ±3.34) on the training dataset, and an F1-score of 89.55 and AUC-ROC of 93.89 on the test dataset. In summary, ADscreen has strong potential to be integrated into clinical workflows to address the need for an ADRD screening tool, so that patients with cognitive impairment can receive appropriate and timely care.
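For readers unfamiliar with the JMIM criterion named above, the greedy selection rule can be sketched in a few lines of NumPy. This is an illustrative toy version, not the authors' implementation; it assumes features have already been discretized to small non-negative integers.

```python
import numpy as np

def mutual_info(x, y):
    """Discrete mutual information I(x; y) estimated from empirical counts."""
    xi = np.unique(x, return_inverse=True)[1]
    yi = np.unique(y, return_inverse=True)[1]
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def jmim(X, y, k):
    """Greedy JMIM: start from the most relevant feature, then repeatedly add
    the feature f maximizing min over selected s of I(f, s; y), where the
    pair (f, s) is encoded as a single joint discrete variable."""
    n_feat = X.shape[1]
    relevance = [mutual_info(X[:, j], y) for j in range(n_feat)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        scores = {}
        for j in range(n_feat):
            if j in selected:
                continue
            scores[j] = min(
                mutual_info(X[:, j] * (X[:, s].max() + 1) + X[:, s], y)
                for s in selected
            )
        selected.append(max(scores, key=scores.get))
    return selected
```

The min-over-selected criterion is what distinguishes JMIM from plain relevance ranking: a candidate feature must carry information about the outcome jointly with *every* already-selected feature, which penalizes redundant features.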
Affiliation(s)
- Maryam Zolnoori
- Columbia University Medical Center, New York, NY, United States of America; School of Nursing, Columbia University, New York, NY, United States of America.
- Ali Zolnour
- School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran.
- Maxim Topaz
- Columbia University Medical Center, New York, NY, United States of America; School of Nursing, Columbia University, New York, NY, United States of America.
202
García-Ramó KB, Sanchez-Catasus CA, Winston GP. Deep learning in neuroimaging of epilepsy. Clin Neurol Neurosurg 2023; 232:107879. [PMID: 37473486 DOI: 10.1016/j.clineuro.2023.107879]
Abstract
In recent years, artificial intelligence, particularly deep learning (DL), has demonstrated utility in diverse areas of medicine. Unlike conventional machine learning, DL uses neural networks to learn features automatically from raw data. It is helpful for the assessment of patients with epilepsy: whilst most published studies have aimed at the automatic detection and prediction of seizures from electroencephalographic records, a growing number of investigations use neuroimaging modalities (structural and functional magnetic resonance imaging, diffusion-weighted imaging, and positron emission tomography) as input data. We review the application of DL to neuroimaging (sMRI, fMRI, DWI, and PET) of focal epilepsy, specifically the presurgical evaluation of drug-refractory epilepsy. First, a brief theoretical overview of artificial neural networks and deep learning is presented. Next, we review applications of deep learning to neuroimaging of epilepsy: diagnosis and lateralization, automated lesion detection, presurgical evaluation, and prediction of postsurgical outcome. Finally, the limitations, challenges, and possible future directions of these methods in the study of the epilepsies are discussed. This approach could become an essential tool in clinical practice, particularly in the evaluation of images considered negative on visual inspection, in individualized treatment, and in the approach to epilepsy as a network disorder. However, greater multicenter collaboration is required to collect sufficient data of the required quality, together with open-access availability of the developed code and tools.
Affiliation(s)
- Karla Batista García-Ramó
- Group of Neuroimaging Processing, International Center for Neurological Restoration, Cuba; Department of Clinical Investigations, Center of Isotopes, Cuba.
- Carlos A Sanchez-Catasus
- Department of Neurology, Clínica Universidad de Navarra, Spain; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, 9713 GZ Groningen, the Netherlands.
- Gavin P Winston
- Division of Neurology, Department of Medicine, Queen's University, Canada; Centre for Neuroscience Studies, Queen's University, Canada.
203
Chen W, Li M. Standardized motion detection and real time heart rate monitoring of aerobics training based on convolution neural network. Prev Med 2023; 174:107642. [PMID: 37481166 DOI: 10.1016/j.ypmed.2023.107642]
Abstract
To make the teaching and training of aerobics more standardized, scientific means are needed to detect the standardization of movements and to monitor changes in the trainee's heart rate during training; at present, both detection and monitoring remain difficult. This paper therefore proposes to exploit the advantages of convolutional neural networks to address motion detection and heart rate monitoring in aerobics teaching. In operation, a complete aerobics video is first divided into individual frames, the background of each standardized-action image is eliminated, and visual errors introduced by difficult movements are corrected. After this image preprocessing, a convolutional neural network is pretrained on the images, and a skeleton map of the human body is constructed. Using a convolutional neural network for heart rate monitoring also offers practical advantages: it avoids contact with the body, integrates information across the time dimension, and reduces the number of computing steps, saving substantial computing resources and improving the quality of the system's output signal. Experimental results confirm that the convolutional neural network can improve the accuracy of both movement detection and heart rate monitoring in aerobics teaching and training.
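The paper does not publish code, but the background-elimination step it describes can be illustrated with the simplest classical approach: estimate a static background as the per-pixel median over time and mark deviating pixels as moving foreground. A generic sketch, not the authors' method:

```python
import numpy as np

def foreground_masks(frames, thresh=30):
    """frames: (T, H, W) grayscale array. The static background is estimated
    as the per-pixel median over time; pixels deviating from it by more than
    `thresh` intensity levels are flagged as moving foreground."""
    background = np.median(frames, axis=0)
    return np.abs(frames.astype(float) - background) > thresh
```

A pose-estimation CNN would then run on the foreground regions to build the skeleton map mentioned in the abstract.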
Affiliation(s)
- Wenying Chen
- School of Physical Education, Guizhou University of Finance and Economics, Guiyang, Guizhou 550025, China.
- Min Li
- School of Physical Education, Nanchang University, Nanchang, Jiangxi 330031, China.
204
Zhu Y, Li H, Huang Y, Fu W, Wang S, Sun N, Dong D, Tian J, Peng Y. CT-based identification of pediatric non-Wilms tumors using convolutional neural networks at a single center. Pediatr Res 2023; 94:1104-1110. [PMID: 36959318 DOI: 10.1038/s41390-023-02553-x]
Abstract
BACKGROUND Deep learning (DL) is increasingly used in pediatric medicine. In this study, we developed a computed tomography (CT)-based DL model for identifying undiagnosed non-Wilms tumors (nWTs) among pediatric renal tumors. METHODS We collected and analyzed the preoperative clinical data and CT images of pediatric renal tumor patients diagnosed at our center from 2008 to 2020, and established a DL model to identify nWTs noninvasively. RESULTS A total of 364 children with histopathologically confirmed renal tumors were enrolled, including 269 Wilms tumors (WTs) and 95 nWTs. For model development, all cases were randomly allocated to a training set (218 cases), validation set (73 cases), and test set (73 cases). In the test set, the DL model achieved an area under the curve of 0.831 (95% CI: 0.712-0.951) in discriminating WTs from nWTs, with accuracy, sensitivity, and specificity of 0.781, 0.563, and 0.842, respectively. The sensitivity of our model was higher than that of a radiologist with 15 years of experience. CONCLUSIONS We present a DL model for identifying undiagnosed nWTs among pediatric renal tumors, with the potential to improve image-based diagnosis. IMPACT A deep learning model was used for the first time to identify pediatric renal tumors in this study. The model can identify non-Wilms tumors among pediatric renal tumors and, based on computed tomography images, can improve the tumor diagnosis rate.
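The accuracy, sensitivity, specificity, and AUC figures reported in studies like this are all derived from per-case predictions; an illustrative sketch (not the study's pipeline) with AUC computed as the Mann-Whitney rank statistic:

```python
import numpy as np

def binary_metrics(y_true, y_prob, threshold=0.5):
    """Accuracy, sensitivity, and specificity at a decision threshold,
    plus threshold-free AUC via the Mann-Whitney rank statistic."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    y_pred = (y_prob >= threshold).astype(int)
    tp = int(((y_pred == 1) & (y_true == 1)).sum())
    tn = int(((y_pred == 0) & (y_true == 0)).sum())
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # AUC = P(score of a random positive > score of a random negative)
    pos, neg = y_prob[y_true == 1], y_prob[y_true == 0]
    auc = float(np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg]))
    return accuracy, sensitivity, specificity, auc
```

Note that accuracy depends on the threshold and the class mix (here 269 WTs vs 95 nWTs), which is why AUC is usually reported alongside it.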
Affiliation(s)
- Yupeng Zhu
- Department of Radiology, MOE Key Laboratory of Major Diseases in Children, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, 100045, China
- Department of Radiology, Peking University Third Hospital, Beijing, 100191, China
- Hailin Li
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, 100191, China
- CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Yangyue Huang
- Department of Pediatric Urology, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, 100045, China
- Wangxing Fu
- Department of Radiology, MOE Key Laboratory of Major Diseases in Children, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, 100045, China
- Siwen Wang
- CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- Ning Sun
- Department of Pediatric Urology, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, 100045, China
- Di Dong
- CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- Jie Tian
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, 100191, China
- CAS Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China
- Zhuhai Precision Medical Center, Zhuhai People's Hospital (affiliated with Jinan University), Zhuhai, 519000, China
- Yun Peng
- Department of Radiology, MOE Key Laboratory of Major Diseases in Children, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing, 100045, China
205
Ayubcha C, Singh SB, Patel KH, Rahmim A, Hasan J, Liu L, Werner T, Alavi A. Machine learning in the positron emission tomography imaging of Alzheimer's disease. Nucl Med Commun 2023; 44:751-766. [PMID: 37395538 DOI: 10.1097/mnm.0000000000001723]
Abstract
The utilization of machine learning techniques in medicine has increased exponentially over the last decades owing to innovations in computer processing, algorithm development, and access to big data. Applied to neuroimaging, machine learning techniques have unveiled hidden interactions, structures, and mechanisms underlying various neurological disorders. One application of interest is the imaging of Alzheimer's disease, the most common cause of progressive dementia. Diagnosing Alzheimer's disease, mild cognitive impairment, and preclinical Alzheimer's disease remains difficult. Molecular imaging, particularly via PET, holds tremendous value in imaging Alzheimer's disease, and many novel algorithms leveraging machine learning have been developed with great success in this context. This review article provides an overview of the diverse applications of machine learning to PET imaging of Alzheimer's disease.
Affiliation(s)
- Cyrus Ayubcha
- Harvard Medical School
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Shashi B Singh
- Department of Radiology, Stanford University School of Medicine, Stanford, California
- Krishna H Patel
- Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, British Columbia, Canada
- Jareed Hasan
- Department of Radiology, Stanford University School of Medicine, Stanford, California
- Litian Liu
- Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Thomas Werner
- Department of Radiology, Stanford University School of Medicine, Stanford, California
- Abass Alavi
- Department of Radiology, Stanford University School of Medicine, Stanford, California
206
Tomihama RT, Dass S, Chen S, Kiang SC. Machine learning and image analysis in vascular surgery. Semin Vasc Surg 2023; 36:413-418. [PMID: 37863613 DOI: 10.1053/j.semvascsurg.2023.07.001]
Abstract
Deep learning, a subset of machine learning within artificial intelligence, has been successful in medical image analysis in vascular surgery. Unlike traditional computer-based segmentation methods that rely on manually extracted features from input images, deep learning methods learn image features and classify data without prior assumptions. Convolutional neural networks, the main type of deep learning for computer vision, are neural networks with multilevel architectures and weighted connections between nodes that "auto-learn" through repeated exposure to training data, without manual input or supervision. These networks have numerous applications in vascular surgery image analysis, particularly disease classification, object identification, semantic segmentation, and instance segmentation. This review article covers the relevant concepts of machine learning image analysis and their application to the field of vascular surgery.
Affiliation(s)
- Roger T Tomihama
- Department of Radiology, Section of Vascular and Interventional Radiology, Loma Linda University School of Medicine, 11234 Anderson Street, Suite MC-2605E, Loma Linda, CA 92354.
- Saharsh Dass
- Department of Radiology, Section of Vascular and Interventional Radiology, Loma Linda University School of Medicine, 11234 Anderson Street, Suite MC-2605E, Loma Linda, CA 92354.
- Sally Chen
- Department of Surgery, Division of Vascular Surgery, Loma Linda University School of Medicine, Loma Linda, CA.
- Sharon C Kiang
- Department of Surgery, Division of Vascular Surgery, Loma Linda University School of Medicine, Loma Linda, CA; Department of Surgery, Division of Vascular Surgery, Veterans Affairs Loma Linda Healthcare System, Loma Linda, CA.
207
Kim JY, Kahm SH, Yoo S, Bae SM, Kang JE, Lee SH. The efficacy of supervised learning and semi-supervised learning in diagnosis of impacted third molar on panoramic radiographs through artificial intelligence model. Dentomaxillofac Radiol 2023; 52:20230030. [PMID: 37192043 PMCID: PMC10461259 DOI: 10.1259/dmfr.20230030]
Abstract
OBJECTIVES The aim of this study was to evaluate the efficacy of traditional supervised learning (SL) and semi-supervised learning (SSL) for the classification of mandibular third molars (Mn3s) on panoramic images, analyzing both the simplicity of the preprocessing step and the resulting performance of SL and SSL. METHODS A total of 1625 cropped Mn3 images from 1000 panoramic images were labeled for classification of the depth of impaction (D class), the spatial relation to the adjacent second molar (S class), and the relationship to the inferior alveolar nerve canal (N class). WideResNet (WRN) was applied as the SL model and LaplaceNet (LN) as the SSL model. RESULTS For the WRN model, 300 labeled images for the D and S classes and 360 labeled images for the N class were used for training and validation; the LN model used only 40 labeled images for each of the D, S, and N classes. The F1 scores for the D, S, and N classes were 0.87, 0.87, and 0.83 in the WRN model and 0.84, 0.94, and 0.80 in the LN model, respectively. CONCLUSIONS These results confirm that the LN model, applied as SSL with only a small number of labeled images, achieved prediction accuracy comparable to that of the WRN model trained as SL.
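Graph-based semi-supervised learning of the kind LaplaceNet builds on can be illustrated with classic label propagation over a similarity graph: a handful of labeled nodes spread their labels to unlabeled neighbors through the normalized graph. A toy sketch (not the study's model; assumes a connected graph with positive similarities):

```python
import numpy as np

def label_propagation(W, y_init, alpha=0.9, n_iter=100):
    """W: (n, n) symmetric similarity matrix. y_init: (n, c) one-hot rows
    for labeled nodes, zero rows for unlabeled ones. Iterates
    F <- alpha * S @ F + (1 - alpha) * y_init with the symmetrically
    normalized graph S = D^-1/2 W D^-1/2, then returns hard labels."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    F = y_init.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * y_init
    return F.argmax(axis=1)
```

With alpha < 1 the iteration converges, balancing smoothness over the graph against fidelity to the few given labels, which is how useful classifiers can be trained from as few as 40 labeled images.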
Affiliation(s)
- Ji-Youn Kim
- Division of Oral & Maxillofacial Surgery, Department of Dentistry, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Se Hoon Kahm
- Department of Dentistry, Eunpyeong St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seok Yoo
- AI Business Headquarters, Unidocs Inc., Seoul, South Korea
- Soo-Mi Bae
- Department of Artificial Intelligence, Graduate School, Korea University, Seoul, South Korea
- Sang Hwa Lee
- Department of Dentistry, Eunpyeong St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
208
Pham TD. Prediction of Five-Year Survival Rate for Rectal Cancer Using Markov Models of Convolutional Features of RhoB Expression on Tissue Microarray. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:3195-3204. [PMID: 37155403 DOI: 10.1109/tcbb.2023.3274211]
Abstract
The ability to predict survival in cancer is clinically important because such predictions help patients and physicians make optimal treatment decisions. Artificial intelligence, in the form of deep learning, has increasingly been recognized by the informatics-oriented medical community as a powerful machine-learning technology for cancer research, diagnosis, prediction, and treatment. This paper presents a combination of deep learning, data coding, and probabilistic modeling for predicting five-year survival in a cohort of patients with rectal cancer using images of RhoB expression on biopsies. Using about one-third of the patients' data for testing, the proposed approach achieved 90% prediction accuracy, much higher than the direct use of the best pretrained convolutional neural network (70%) or the best coupling of a pretrained model with support vector machines (70%).
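The probabilistic-modeling step, once convolutional features have been coded into discrete state sequences, can be sketched as fitting one Markov chain per outcome class and classifying a new sequence by likelihood. An illustrative toy version with hypothetical two-state codes, not the paper's implementation:

```python
import numpy as np

def transition_matrix(seqs, n_states, eps=1e-3):
    """Estimate a row-stochastic transition matrix from coded sequences,
    with additive smoothing for transitions never observed."""
    T = np.full((n_states, n_states), eps)
    for seq in seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    """Log-probability of a coded sequence under a Markov chain T."""
    return float(sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:])))

def classify(seq, models):
    """Assign the class whose Markov model gives the highest likelihood."""
    return max(models, key=lambda c: log_likelihood(seq, models[c]))
```

In the paper's setting the states would come from coding deep features of tissue-microarray patches; here the codes and class names are made up purely for illustration.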
209
Ghafoor A, Imran AS, Daudpota SM, Kastrati Z, Shaikh S, Batra R. SentiUrdu-1M: A large-scale tweet dataset for Urdu text sentiment analysis using weakly supervised learning. PLoS One 2023; 18:e0290779. [PMID: 37647318 PMCID: PMC10468080 DOI: 10.1371/journal.pone.0290779]
Abstract
Low-resource languages are gaining much-needed attention with the advent of deep learning models and pre-trained word embeddings. Though spoken by more than 230 million people worldwide, Urdu is one such low-resource language; it has recently gained popularity online and is attracting considerable attention and support from the research community. One challenge faced by such resource-constrained languages is the scarcity of publicly available large-scale datasets for conducting meaningful studies. In this paper, we address this challenge by collecting the first-ever large-scale Urdu tweet dataset for sentiment analysis and emotion recognition, consisting of 1,140,821 tweets. Since manual labeling of so many tweets would be tedious, error-prone, and practically impossible, the paper also proposes a weakly supervised approach to label tweets automatically: emoticons used within the tweets, together with SentiWordNet, are used to categorize extracted tweets into positive, negative, and neutral classes. Baseline deep learning models are implemented to compare the accuracy of three labeling approaches, i.e., VADER, TextBlob, and our proposed weakly supervised approach. Unlike the weakly supervised approach, VADER and TextBlob label most tweets as neutral and show a high correlation with each other, largely because these models do not consider emoticons when assigning polarity.
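The core of emoticon-based weak labeling is a simple vote between polarity lexicons. A minimal sketch with tiny hypothetical lexicons (the paper combines tweet emoticons with SentiWordNet scores over a far larger vocabulary):

```python
# Hypothetical miniature lexicons for illustration only.
POSITIVE = {"\U0001F60A", "\U0001F602", ":)", ":-)"}   # smiling/laughing
NEGATIVE = {"\U0001F620", "\U0001F622", ":(", ":-("}   # angry/crying

def weak_label(tweet):
    """Weakly supervised polarity label from emoticon counts:
    whichever lexicon matches more often wins; ties are neutral."""
    pos = sum(tweet.count(e) for e in POSITIVE)
    neg = sum(tweet.count(e) for e in NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

Labels produced this way are noisy, which is why the paper validates them by training baseline deep models and comparing against VADER and TextBlob.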
Affiliation(s)
- Abdul Ghafoor
- Dept. of Computer Science, Sukkur IBA University, Sukkur, Pakistan
- Ali Shariq Imran
- Dept. of Computer Science (IDI), Norwegian University of Science and Technology (NTNU), Gjøvik, Norway
- Zenun Kastrati
- Department of Informatics, Linnaeus University, Växjö, Sweden
- Sarang Shaikh
- Dept. of Computer Science (IDI), Norwegian University of Science and Technology (NTNU), Gjøvik, Norway
- Rakhi Batra
- Dept. of Computer Science, Sukkur IBA University, Sukkur, Pakistan
210
Sattari M, Mohammadi M. Using Data Mining Techniques to Predict Chronic Kidney Disease: A Review Study. Int J Prev Med 2023; 14:110. [PMID: 37855011 PMCID: PMC10580203 DOI: 10.4103/ijpvm.ijpvm_482_21]
Abstract
Chronic kidney disease (CKD) is a growing global health problem, and its early diagnosis, control, and management are very important. This study reviews articles published in English between 2016 and 2021 that use classification methods to predict kidney disease, as data mining models play a vital role in disease prediction. Across the reviewed studies, support vector machines, naive Bayes, and k-nearest neighbors were the most frequently used techniques, followed by random forest, neural networks, and decision trees. Among the risk factors associated with CKD, albumin, age, red blood cells, pus cells, and serum creatinine appeared most frequently. Random forest most often achieved the best performance. Reviewing larger databases in the field of kidney disease could support better analysis of the disease and validation of the extracted risk factors.
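As an illustration of the simplest of the techniques named above, a k-nearest-neighbor classifier over numeric risk-factor vectors fits in a few lines. This is a generic sketch with made-up feature values, not any reviewed study's model:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k training points closest to x
    (Euclidean distance over the feature vector)."""
    dists = np.linalg.norm(np.asarray(X_train, dtype=float) - np.asarray(x, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(np.asarray(y_train)[nearest]).most_common(1)[0][0]
```

In practice the reviewed studies standardize features first, since distance-based methods like kNN are sensitive to the very different scales of risk factors such as age and serum creatinine.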
Affiliation(s)
- Mohammad Sattari
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Maryam Mohammadi
- Department of Management and Health Information Technology, School of Management and Medical Information Sciences, Isfahan University of Medical Sciences, Isfahan, Iran
211
Kniep I, Mieling R, Gerling M, Schlaefer A, Heinemann A, Ondruschka B. Bayesian Reconstruction Algorithms for Low-Dose Computed Tomography Are Not Yet Suitable in Clinical Context. J Imaging 2023; 9:170. [PMID: 37754934 PMCID: PMC10532172 DOI: 10.3390/jimaging9090170]
Abstract
Computed tomography (CT) is a widely used examination technique that usually requires a compromise between image quality and radiation exposure. Reconstruction algorithms aim to reduce radiation exposure while maintaining comparable image quality, and unsupervised deep learning methods have recently been proposed for this purpose. In this study, a promising sparse-view reconstruction method (posterior temperature optimized Bayesian inverse model; POTOBIM) is tested for its clinical applicability. Seventeen whole-body CT examinations of deceased individuals were performed. In addition to POTOBIM, reconstruction was performed using filtered back projection (FBP). For each case, sinograms were simulated and the reconstruction was compared with the original CT slice. Quantitative analysis used the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM); quality was assessed visually using a modified Ludewig's scale. In the qualitative evaluation, POTOBIM was rated worse than the reference images in most cases, and partially equivalent image quality was achieved only with 80 projections per rotation. Quantitatively, POTOBIM does not seem to benefit from more than 60 projections. Although deep learning methods appear capable of producing better image quality, the investigated algorithm (POTOBIM) is not yet suitable for clinical routine.
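Both quantitative metrics used here are standard and easy to state. A compact sketch; note that the full SSIM averages the formula below over local windows, whereas this version applies it globally for brevity:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB of img against a reference slice."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

def ssim_global(x, y, max_val=255.0):
    """Single-window SSIM with the usual stabilizing constants c1, c2."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

PSNR is purely pixel-wise, which is why studies such as this pair it with SSIM and a visual rating scale: a reconstruction can score well on PSNR while still looking implausible to a radiologist.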
Affiliation(s)
- Inga Kniep
- Institute of Legal Medicine, University Medical Center Hamburg-Eppendorf, 22529 Hamburg, Germany
- Robin Mieling
- Institute for Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073 Hamburg, Germany
- Moritz Gerling
- Institute of Legal Medicine, University Medical Center Hamburg-Eppendorf, 22529 Hamburg, Germany
- Alexander Schlaefer
- Institute for Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073 Hamburg, Germany
- Axel Heinemann
- Institute of Legal Medicine, University Medical Center Hamburg-Eppendorf, 22529 Hamburg, Germany
- Benjamin Ondruschka
- Institute of Legal Medicine, University Medical Center Hamburg-Eppendorf, 22529 Hamburg, Germany
212
Sinha K, Ghosh N, Sil PC. A Review on the Recent Applications of Deep Learning in Predictive Drug Toxicological Studies. Chem Res Toxicol 2023; 36:1174-1205. [PMID: 37561655 DOI: 10.1021/acs.chemrestox.2c00375]
Abstract
Drug toxicity prediction is an important step in ensuring patient safety during drug design studies. While traditional preclinical studies have historically relied on animal models to evaluate toxicity, recent advances in deep-learning approaches have shown great promise in advancing drug safety science and reducing animal use in preclinical studies. However, deep-learning-based approaches also face challenges in handling large biological data sets, model interpretability, and regulatory acceptance. In this review, we provide an overview of recent developments in deep-learning-based approaches for predicting drug toxicity, highlighting their potential advantages over traditional methods and the need to address their limitations. Deep-learning models have demonstrated excellent performance in predicting toxicity outcomes from various data sources such as chemical structures, genomic data, and high-throughput screening assays. The potential of deep learning for automated feature engineering is also discussed. This review emphasizes the need to address ethical concerns related to the use of deep learning in drug toxicity studies, including the reduction of animal use and ensuring regulatory acceptance. Furthermore, emerging applications of deep learning in drug toxicity prediction, such as predicting drug-drug interactions and toxicity in rare subpopulations, are highlighted. The integration of deep-learning-based approaches with traditional methods is discussed as a way to develop more reliable and efficient predictive models for drug safety assessment, paving the way for safer and more effective drug discovery and development. Overall, this review highlights the critical role of deep learning in predictive toxicology and drug safety evaluation, emphasizing the need for continued research and development in this rapidly evolving field. By addressing the limitations of traditional methods, leveraging the potential of deep learning for automated feature engineering, and addressing ethical concerns, deep-learning-based approaches have the potential to revolutionize drug toxicity prediction and improve patient safety in drug discovery and development.
Affiliation(s)
- Krishnendu Sinha
- Department of Zoology, Jhargram Raj College, Jhargram 721507, West Bengal, India
- Nabanita Ghosh
- Department of Zoology, Maulana Azad College, Kolkata 700013, West Bengal, India
- Parames C Sil
- Division of Molecular Medicine, Bose Institute, Kolkata 700054, West Bengal, India
213
|
Al-Nabulsi J, Turab N, Owida HA, Al-Naami B, De Fazio R, Visconti P. IoT Solutions and AI-Based Frameworks for Masked-Face and Face Recognition to Fight the COVID-19 Pandemic. SENSORS (BASEL, SWITZERLAND) 2023; 23:7193. [PMID: 37631730 PMCID: PMC10458933 DOI: 10.3390/s23167193] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/17/2023] [Revised: 08/04/2023] [Accepted: 08/10/2023] [Indexed: 08/27/2023]
Abstract
A global health emergency resulted from the COVID-19 epidemic. Image recognition techniques are a useful tool for limiting the spread of the pandemic; indeed, the World Health Organization (WHO) recommends the use of face masks in public places as a form of protection against contagion. Hence, innovative systems and algorithms were deployed to rapidly screen large numbers of people whose faces are covered by masks. In this article, we analyze the current state of research and future directions in algorithms and systems for masked-face recognition. First, the paper discusses the importance and applications of facial and face-mask recognition, introducing the main approaches. Afterward, we review recent facial recognition frameworks and systems based on Convolutional Neural Networks (CNNs), deep learning, machine learning, and MobileNet techniques. In detail, we analyze and critically discuss recent scientific works and systems that employ machine learning (ML) and deep learning (DL) tools for promptly recognizing masked faces. We also describe Internet of Things (IoT)-based sensors, implementing ML and DL algorithms, that keep track of the number of people wearing face masks and notify the proper authorities. Afterward, the main challenges and open issues to be solved in future studies and systems are discussed. Finally, a comparative analysis and discussion are reported, providing useful insights for outlining the next generation of face recognition systems.
Affiliation(s)
- Jamal Al-Nabulsi
- Medical Engineering Department, Faculty of Engineering, Al-Ahliyya Amman University, Amman 19328, Jordan
- Nidal Turab
- Department of Networks and Cyber Security, Faculty of Information Technology, Al-Ahliyya Amman University, Amman 19328, Jordan
- Hamza Abu Owida
- Medical Engineering Department, Faculty of Engineering, Al-Ahliyya Amman University, Amman 19328, Jordan
- Bassam Al-Naami
- Department of Biomedical Engineering, Faculty of Engineering, The Hashemite University, Zarqa 13133, Jordan
- Roberto De Fazio
- Department of Innovation Engineering, University of Salento, 73100 Lecce, Italy
- Paolo Visconti
- Department of Innovation Engineering, University of Salento, 73100 Lecce, Italy

214
Salih M, Austin C, Warty RR, Tiktin C, Rolnik DL, Momeni M, Rezatofighi H, Reddy S, Smith V, Vollenhoven B, Horta F. Embryo selection through artificial intelligence versus embryologists: a systematic review. Hum Reprod Open 2023; 2023:hoad031. [PMID: 37588797] [PMCID: PMC10426717] [DOI: 10.1093/hropen/hoad031] [Received: 03/08/2023] [Revised: 07/17/2023] [Indexed: 08/18/2023]
Abstract
STUDY QUESTION What is the present performance of artificial intelligence (AI) decision support during embryo selection compared to the standard embryo selection by embryologists? SUMMARY ANSWER AI consistently outperformed the clinical teams in all the studies focused on embryo morphology and clinical outcome prediction during embryo selection assessment. WHAT IS KNOWN ALREADY The ART success rate is ∼30%, and outcomes worsen considerably with increasing female age. As such, there have been ongoing efforts to address this low success rate through the development of new technologies. With the advent of AI, machine learning could be applied so that areas limited by human subjectivity, such as embryo selection, are enhanced through increased objectivity. Given the potential of AI to improve IVF success rates, it remains crucial to review the performance of AI against that of embryologists during embryo selection. STUDY DESIGN SIZE DURATION The search was conducted across PubMed, EMBASE, Ovid Medline, and IEEE Xplore from 1 June 2005 up to and including 7 January 2022. Included articles were restricted to those written in English. Search terms used across all databases were: ('Artificial intelligence' OR 'Machine Learning' OR 'Deep learning' OR 'Neural network') AND ('IVF' OR 'in vitro fertili*' OR 'assisted reproductive techn*' OR 'embryo'), where the character '*' instructs the search engine to include any completion of the truncated term. PARTICIPANTS/MATERIALS SETTING METHODS A literature search was conducted for literature relating to AI applications in IVF. Primary outcomes of interest were the accuracy, sensitivity, and specificity of embryo morphology grade assessments and of the predicted likelihood of clinical outcomes, such as clinical pregnancy after IVF treatment. Risk of bias was assessed using the modified Downs and Black checklist.
MAIN RESULTS AND THE ROLE OF CHANCE Twenty articles were included in this review. There was no single embryo assessment day across the studies; Day 1 through Day 5/6 of embryo development was investigated. The types of input used to train the AI algorithms were images and time-lapse recordings (10/20), clinical information (6/20), and both images and clinical information (4/20). Each AI model demonstrated promise when compared to an embryologist's visual assessment. On average, the models predicted the likelihood of successful clinical pregnancy with greater accuracy than clinical embryologists, signifying greater reliability than human prediction. The AI models achieved a median accuracy of 75.5% (range 59-94%) in predicting embryo morphology grade. The correct prediction (ground truth) was defined from embryo images according to embryologists' assessments following the respective local guidelines. On blind test datasets, the embryologists' prediction accuracy was 65.4% (range 47-75%) against the same ground truth provided by the original local assessment. Similarly, AI models had a median accuracy of 77.8% (range 68-90%) in predicting clinical pregnancy from patient clinical treatment information, compared to 64% (range 58-76%) for embryologists. When images/time-lapse and clinical information inputs were combined, the median accuracy of the AI models was higher, at 81.5% (range 67-98%), while clinical embryologists had a median accuracy of 51% (range 43-59%). LIMITATIONS REASONS FOR CAUTION The findings of this review are based on studies that have not been prospectively evaluated in a clinical setting. Additionally, a fair comparison of all the studies was deemed unfeasible owing to heterogeneity in study design and quality, development of the AI models, and the databases employed. WIDER IMPLICATIONS OF THE FINDINGS AI holds considerable promise for the IVF field and embryo selection.
However, there needs to be a shift in developers' perception of the clinical outcome from successful implantation towards ongoing pregnancy or live birth. Additionally, existing models focus on locally generated databases and many lack external validation. STUDY FUNDING/COMPETING INTERESTS This study was funded by Monash Data Future Institute. All authors have no conflicts of interest to declare. REGISTRATION NUMBER CRD42021256333.
Affiliation(s)
- M Salih
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- C Austin
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- R R Warty
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- C Tiktin
- School of Engineering, RMIT University, Melbourne, Victoria, Australia
- D L Rolnik
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Women’s and Newborn Program, Monash Health, Melbourne, Victoria, Australia
- M Momeni
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- H Rezatofighi
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- Monash Data Future Institute, Monash University, Clayton, Victoria, Australia
- S Reddy
- School of Medicine, Deakin University, Geelong, Victoria, Australia
- V Smith
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- B Vollenhoven
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Women’s and Newborn Program, Monash Health, Melbourne, Victoria, Australia
- Monash IVF, Melbourne, Victoria, Australia
- F Horta
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Monash Data Future Institute, Monash University, Clayton, Victoria, Australia
- City Fertility, Melbourne, Victoria, Australia

215
Ao SI, Fayek H. Continual Deep Learning for Time Series Modeling. Sensors (Basel) 2023; 23:7167. [PMID: 37631703] [PMCID: PMC10457853] [DOI: 10.3390/s23167167] [Received: 06/12/2023] [Revised: 07/31/2023] [Accepted: 08/10/2023] [Indexed: 08/27/2023]
Abstract
The multi-layer structures of Deep Learning facilitate the processing of higher-level abstractions from data, leading to improved generalization and widespread applications in diverse domains with various types of data, each of which presents its own set of challenges. Real-world time series data may have a non-stationary distribution, which can cause Deep Learning models to suffer catastrophic forgetting: the abrupt loss of previously learned knowledge. Continual learning is a machine learning paradigm for situations in which the stationarity of the datasets can no longer be assumed or required. This paper presents a systematic review of recent Deep Learning applications to sensor time series, the need for advanced preprocessing techniques in some sensor environments, and summaries of how to deploy Deep Learning in time series modeling while alleviating catastrophic forgetting with continual learning methods. The selected case studies cover a wide collection of sensor time series applications and illustrate how to deploy tailor-made Deep Learning, advanced preprocessing techniques, and continual learning algorithms from a practical, real-world application perspective.
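Catastrophic forgetting, as described in this abstract, is commonly alleviated by rehearsal-style continual learning: a small bounded memory of past samples is replayed alongside the incoming non-stationary stream. A minimal, framework-free sketch of such a buffer (the class and names are illustrative, not from the reviewed paper):

```python
import random

class ReplayBuffer:
    """Bounded memory of past (x, y) samples, filled by reservoir sampling
    so every sample seen so far has an equal chance of being retained."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.seen = 0
        self.data = []
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = self.rng.randrange(self.seen)   # Algorithm R replacement test
            if j < self.capacity:
                self.data[j] = sample

    def replay_batch(self, k):
        # Old samples to interleave with the current task's training batch.
        return self.rng.sample(self.data, min(k, len(self.data)))

buffer = ReplayBuffer(capacity=100)
for t in range(1000):           # stream of non-stationary samples
    buffer.add((t, t % 3))
old = buffer.replay_batch(16)   # replayed alongside each new batch
```

Mixing `old` into every gradient step keeps earlier distributions represented even after the stream drifts.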
Affiliation(s)
- Sio-Iong Ao
- International Association of Engineers, Unit 1, 1/F, Hung To Road, Hong Kong
- Haytham Fayek
- School of Computing Technologies, RMIT University, Building 14, Melbourne, VIC 3000, Australia

216
Chun JW, Kim HS. The Present and Future of Artificial Intelligence-Based Medical Image in Diabetes Mellitus: Focus on Analytical Methods and Limitations of Clinical Use. J Korean Med Sci 2023; 38:e253. [PMID: 37550811] [PMCID: PMC10412032] [DOI: 10.3346/jkms.2023.38.e253] [Received: 04/13/2023] [Accepted: 07/12/2023] [Indexed: 08/09/2023]
Abstract
Artificial intelligence (AI)-based diagnostic technology using medical images can increase examination accessibility and support clinical decision-making for screening and diagnosis. To survey machine learning algorithms for diabetes complications, a literature review of studies using medical image-based AI technology was conducted using the National Library of Medicine PubMed and the Excerpta Medica databases, combining searches for diabetes diagnostic images and AI as keywords. In total, 227 appropriate studies were selected. Diabetic retinopathy studies using an AI model were the most frequent (85.0%, 193/227 cases), followed by diabetic foot (7.9%, 18/227 cases) and diabetic neuropathy (2.7%, 6/227 cases). The studies used open datasets (42.3%, 96/227 cases) or data constructed directly from fundoscopy or optical coherence tomography (57.7%, 131/227 cases). Major limitations in AI-based detection of diabetes complications using medical images were the lack of datasets (36.1%, 82/227 cases) and severity misclassification (26.4%, 60/227 cases). Although it remains difficult to use and fully trust AI-based imaging analysis technology clinically, it reduces clinicians' time and labor, and expectations for its decision-support role are high. Development of data collection and synthetic-data technologies tailored to disease severity is required to address data imbalance.
Affiliation(s)
- Ji-Won Chun
- Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Hun-Sung Kim
- Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Division of Endocrinology and Metabolism, Department of Internal Medicine, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea

217
Bhimavarapu U, Chintalapudi N, Battineni G. Automatic Detection and Classification of Diabetic Retinopathy Using the Improved Pooling Function in the Convolution Neural Network. Diagnostics (Basel) 2023; 13:2606. [PMID: 37568969] [PMCID: PMC10416913] [DOI: 10.3390/diagnostics13152606] [Received: 07/07/2023] [Revised: 07/30/2023] [Accepted: 08/02/2023] [Indexed: 08/13/2023]
Abstract
Diabetic retinopathy (DR) is an eye disease associated with diabetes that can lead to blindness. Early diagnosis is critical to ensure that patients with diabetes are not affected by blindness. Deep learning plays an important role in diagnosing diabetes, reducing the human effort needed to diagnose and classify diabetic and non-diabetic patients. The main objective of this study was to provide an improved convolutional neural network (CNN) model for automatic DR diagnosis from fundus images. The pooling function increases the receptive field of convolution kernels over layers and reduces computational complexity and memory requirements, because it lowers the resolution of feature maps while preserving the essential characteristics required for subsequent layer processing. In this study, an improved pooling function combined with an activation function in the ResNet-50 model was applied to retina images for autonomous lesion detection with reduced loss and processing time. The improved ResNet-50 model was trained and tested on two datasets (APTOS and Kaggle), achieving an accuracy of 98.32% on APTOS and 98.71% on Kaggle. These results show that the proposed model achieves greater accuracy than state-of-the-art work on diagnosing DR from retinal fundus images.
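The abstract does not spell out the improved pooling function itself, but the role of pooling it describes (downsampling feature maps while keeping their essential characteristics) can be illustrated with standard 2x2 max pooling, a common baseline rather than the authors' variant:

```python
import numpy as np

def max_pool2d(fmap, size=2):
    """2x2 max pooling: halve each spatial dimension of a feature map,
    keeping the strongest activation in each window (stride == window)."""
    h, w = fmap.shape
    assert h % size == 0 and w % size == 0
    # Row-major reshape groups the map into non-overlapping size x size blocks.
    view = fmap.reshape(h // size, size, w // size, size)
    return view.max(axis=(1, 3))

fmap = np.array([[1., 2., 5., 0.],
                 [3., 4., 1., 1.],
                 [0., 0., 2., 2.],
                 [1., 0., 3., 7.]])
pooled = max_pool2d(fmap)   # shape (2, 2): [[4, 5], [1, 7]]
```

Each output cell is the maximum of one 2x2 block, which is why the receptive field of any kernel applied afterwards covers a larger region of the input image.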
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522302, India
- Nalini Chintalapudi
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- Gopi Battineni
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- The Research Centre of the ECE Department, V. R. Siddhartha Engineering College, Vijayawada 520007, India

218
Lin TM, Lee HY, Chang CK, Lin KH, Chang CC, Wu BF, Peng SJ. Identification of tophi in ultrasound imaging based on transfer learning and clinical practice. Sci Rep 2023; 13:12507. [PMID: 37532752] [PMCID: PMC10397312] [DOI: 10.1038/s41598-023-39508-5] [Received: 03/16/2023] [Accepted: 07/26/2023] [Indexed: 08/04/2023]
Abstract
Gout is a common metabolic disorder characterized by deposits of monosodium urate monohydrate crystals (tophi) in soft tissue, triggering intense, acute arthritis with intolerable pain as well as articular and periarticular inflammation. Tophi can also promote chronic inflammatory and erosive arthritis. The 2015 ACR/EULAR Gout Classification criteria include clinical, laboratory, and imaging findings, with cases of gout indicated by a threshold score of ≥ 8. Some imaging findings, such as a double contour sign on ultrasound, urate on dual-energy computed tomography, or radiographic gout-related erosion, generate a score of up to 4. Clearly, the diagnosis of gout is largely assisted by imaging; however, dual-energy computed tomography is expensive and exposes the patient to high levels of radiation. Although musculoskeletal ultrasound is non-invasive and inexpensive, the reliability of its results depends on operator experience. In the current study, we applied transfer learning to train convolutional neural networks to identify tophi in ultrasound images. The accuracy of predictions varied with the convolutional neural network model: InceptionV3 (0.871 ± 0.020), ResNet101 (0.913 ± 0.015), and VGG19 (0.918 ± 0.020). The sensitivity was: InceptionV3 (0.507 ± 0.060), ResNet101 (0.680 ± 0.056), and VGG19 (0.747 ± 0.056). The precision was: InceptionV3 (0.767 ± 0.091), ResNet101 (0.863 ± 0.098), and VGG19 (0.825 ± 0.062). Our results demonstrate that deep convolutional neural networks can be retrained to identify the patterns of tophi in ultrasound images with a high degree of accuracy.
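The accuracy, sensitivity, and precision figures quoted in this abstract follow directly from a binary confusion matrix; a small helper makes the definitions explicit (the counts below are hypothetical, chosen only to exercise the formulas):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and precision, as reported for the
    tophi detectors, computed from binary confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,      # all correct / all cases
        "sensitivity": tp / (tp + fn),      # found tophi / actual tophi
        "precision": tp / (tp + fp),        # found tophi / predicted tophi
    }

# Hypothetical counts for one cross-validation fold.
m = classification_metrics(tp=60, fp=12, tn=110, fn=18)
```

The spread between the three models in the abstract (high accuracy, lower sensitivity for InceptionV3) is exactly the behavior these formulas expose: a model can score well on accuracy while still missing many positive cases.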
Affiliation(s)
- Tzu-Min Lin
- Division of Allergy, Immunology and Rheumatology, Department of Internal Medicine, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Division of Rheumatology, Immunology and Allergy, Department of Internal Medicine, Taipei Medical University Hospital, Taipei, Taiwan
- Hsiang-Yen Lee
- Division of Rheumatology, Immunology and Allergy, Department of Internal Medicine, Taipei Medical University Hospital, Taipei, Taiwan
- Ching-Kuei Chang
- Division of Rheumatology, Immunology and Allergy, Department of Internal Medicine, Taipei Medical University Hospital, Taipei, Taiwan
- Ke-Hung Lin
- Division of Rheumatology, Immunology and Allergy, Department of Internal Medicine, Taipei Medical University Hospital, Taipei, Taiwan
- Chi-Ching Chang
- Division of Allergy, Immunology and Rheumatology, Department of Internal Medicine, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Division of Rheumatology, Immunology and Allergy, Department of Internal Medicine, Taipei Medical University Hospital, Taipei, Taiwan
- Bing-Fei Wu
- Institute of Electrical and Control Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Syu-Jyun Peng
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, No. 250, Wuxing St., Xinyi Dist., Taipei City, 110, Taiwan
- Clinical Big Data Research Center, Taipei Medical University Hospital, Taipei Medical University, Taipei, Taiwan

219
Najaran MHT. A genetic programming-based convolutional deep learning algorithm for identifying COVID-19 cases via X-ray images. Artif Intell Med 2023; 142:102571. [PMID: 37316095] [DOI: 10.1016/j.artmed.2023.102571] [Received: 09/15/2022] [Revised: 03/07/2023] [Accepted: 04/27/2023] [Indexed: 06/16/2023]
Abstract
Evolutionary algorithms have been successfully employed to find the best structure for many learning algorithms, including neural networks. Owing to their flexibility and promising results, Convolutional Neural Networks (CNNs) have found application in many image processing tasks. The structure of a CNN greatly affects its performance in terms of both accuracy and computational cost; thus, finding the best architecture is a crucial task before these networks are employed. In this paper, we develop a genetic programming approach to optimizing CNN structure for diagnosing COVID-19 cases via X-ray images. A graph representation for the CNN architecture is proposed, and evolutionary operators, including crossover and mutation, are designed specifically for this representation. The proposed CNN architecture is defined by two sets of parameters: the first is the skeleton, which determines the arrangement of the convolutional and pooling operators and their connections, and the second is the numerical parameters of the operators, which determine their properties, such as filter size and kernel size. The proposed algorithm optimizes the skeleton and the numerical parameters of the CNN architectures in a co-evolutionary scheme and is used to identify COVID-19 cases via X-ray images.
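The two-part encoding this abstract describes (a skeleton of operators plus their numeric parameters) lends itself to simple evolutionary operators. The sketch below is a simplified linear-sequence illustration of that idea, not the paper's exact graph representation:

```python
import random

# Simplified encoding: a "skeleton" of operator types plus one numeric
# parameter dict per operator (the paper also evolves connections).
OPS = ["conv", "pool"]

def random_individual(rng, length=4):
    skeleton = [rng.choice(OPS) for _ in range(length)]
    params = [{"kernel": rng.choice([3, 5, 7])} for _ in skeleton]
    return skeleton, params

def mutate(individual, rng, p=0.3):
    skeleton, params = individual
    skeleton = list(skeleton)
    params = [dict(d) for d in params]
    for i in range(len(skeleton)):
        if rng.random() < p:                    # mutate the skeleton
            skeleton[i] = rng.choice(OPS)
        if rng.random() < p:                    # mutate numeric parameters
            params[i]["kernel"] = rng.choice([3, 5, 7])
    return skeleton, params

def crossover(a, b, rng):
    cut = rng.randrange(1, len(a[0]))           # single-point crossover
    return (a[0][:cut] + b[0][cut:], a[1][:cut] + b[1][cut:])

rng = random.Random(42)
parent1, parent2 = random_individual(rng), random_individual(rng)
child = crossover(parent1, parent2, rng)
mutant = mutate(child, rng)
```

In a full run, each individual would be decoded into a CNN, trained briefly, and its validation accuracy used as the fitness that drives selection.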
220
Lu Y, Li K. Multistation collaborative prediction of air pollutants based on the CNN-BiLSTM model. Environ Sci Pollut Res Int 2023; 30:92417-92435. [PMID: 37490250] [DOI: 10.1007/s11356-023-28877-z] [Received: 02/02/2023] [Accepted: 07/16/2023] [Indexed: 07/26/2023]
Abstract
The development of industry has led to serious air pollution problems, so it is very important to establish high-precision, high-performance air quality prediction models and take corresponding control measures. In this paper, based on 4 years of air quality and meteorological data from Tianjin, China, the relationships between various meteorological factors and air pollutant concentrations are analyzed. A hybrid deep learning model consisting of a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) is proposed to predict pollutant concentrations. In addition, a Bayesian optimization algorithm is applied to obtain the optimal combination of hyperparameters for the proposed deep learning model, which enhances its generalization ability. Furthermore, based on air quality data from multiple stations in the region, a multistation collaborative prediction method is designed, and the concept of a strongly correlated station (SCS) is defined. The predictive model is modified using the idea of the SCS and is used to predict pollutant concentrations in Tianjin. The coefficients of determination (R²) for PM2.5, PM10, SO2, NO2, CO, and O3 are 0.89, 0.84, 0.69, 0.83, 0.92, and 0.84, respectively. The results show that our model is capable of predicting air pollutant concentrations with satisfactory accuracy.
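The coefficient of determination used to score these per-pollutant forecasts can be computed directly; the series below are toy values standing in for observed and predicted concentrations:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus the ratio of residual error
    to the variance of the observations around their mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Toy series standing in for observed vs. predicted PM2.5 concentrations.
obs = [35.0, 42.0, 50.0, 38.0, 60.0]
pred = [33.0, 45.0, 48.0, 40.0, 57.0]
score = r_squared(obs, pred)
```

An R² near 1 (as reported for CO at 0.92) means the model explains nearly all of the variance in the measured series; the lower SO2 value of 0.69 leaves roughly a third of the variance unexplained.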
Affiliation(s)
- Yanan Lu
- School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, 200433, China
- Kun Li
- School of Economics and Management, Tiangong University, Tianjin, 300387, China

221
Hussein HI, Mohammed AO, Hassan MM, Mstafa RJ. Lightweight deep CNN-based models for early detection of COVID-19 patients from chest X-ray images. Expert Syst Appl 2023; 223:119900. [PMID: 36969370] [PMCID: PMC10023206] [DOI: 10.1016/j.eswa.2023.119900] [Received: 10/27/2022] [Revised: 03/05/2023] [Accepted: 03/15/2023] [Indexed: 06/18/2023]
Abstract
Hundreds of millions of people worldwide have recently been infected by the novel coronavirus disease (COVID-19), causing significant damage to the health, economy, and welfare of the world's population. Moreover, the unprecedented number of patients with COVID-19 has placed a massive burden on healthcare centers, making timely and rapid diagnosis challenging. A crucial step in minimizing the impact of such problems is to automatically detect infected patients and place them under special care as quickly as possible. Deep learning algorithms, such as Convolutional Neural Networks (CNNs), can be used to meet this need. Despite the desired results, most existing deep learning-based models are built on millions of parameters (weights), which makes them inapplicable to devices with limited resources. Motivated by this, in this research we developed two new lightweight CNN-based diagnostic models for the automatic and early detection of COVID-19 from chest X-ray images. The first model was built for binary classification (COVID-19 or normal), whereas the second was built for multiclass classification (COVID-19, viral pneumonia, or normal). The proposed models were tested on a relatively large dataset of chest X-ray images, and the results showed that the accuracy rates of the 2- and 3-class classification models are 98.55% and 96.83%, respectively. The results also revealed that our models achieve competitive performance compared with existing heavyweight models while significantly reducing the cost and memory requirements of computing resources. These findings indicate that our models can help clinicians make insightful COVID-19 diagnoses and are potentially easy to deploy on devices with limited computational power and resources.
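What makes a CNN "lightweight" is largely its trainable-parameter count, which for a convolutional layer follows a simple formula. The layer widths below are hypothetical and serve only to show how narrower stacks shrink the parameter budget for resource-limited devices:

```python
def conv2d_params(in_ch, out_ch, k, bias=True):
    """Trainable weights in a 2-D convolution layer:
    out_ch filters, each of shape in_ch x k x k, plus out_ch biases."""
    return out_ch * in_ch * k * k + (out_ch if bias else 0)

# Hypothetical two-layer stacks: a wide "heavyweight" front end vs. a
# narrow "lightweight" one, both taking 3-channel X-ray input.
heavy = conv2d_params(3, 64, 3) + conv2d_params(64, 128, 3)
light = conv2d_params(3, 16, 3) + conv2d_params(16, 32, 3)
```

Halving the channel widths shrinks the count roughly quadratically (here `light` is under a tenth of `heavy`), which is why narrowing layers is the first lever for on-device deployment.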
Affiliation(s)
- Haval I Hussein
- Department of Computer Science, Faculty of Science, University of Zakho, Zakho, Kurdistan Region, Iraq
- Abdulhakeem O Mohammed
- Department of Information Technology Management, Technical College of Administration, Duhok Polytechnic University, Duhok, Iraq
- Masoud M Hassan
- Department of Computer Science, Faculty of Science, University of Zakho, Zakho, Kurdistan Region, Iraq
- Ramadhan J Mstafa
- Department of Computer Science, Faculty of Science, University of Zakho, Zakho, Kurdistan Region, Iraq
- Department of Computer Science, College of Science, Nawroz University, Duhok, Kurdistan Region, Iraq

222
Kalidindi S, Gandhi S. Workforce Crisis in Radiology in the UK and the Strategies to Deal With It: Is Artificial Intelligence the Saviour? Cureus 2023; 15:e43866. [PMID: 37608900] [PMCID: PMC10441819] [DOI: 10.7759/cureus.43866] [Accepted: 08/21/2023] [Indexed: 08/24/2023]
Abstract
Radiology has seen rapid growth over the last few decades. Technological advances in equipment and computing have resulted in an explosion of new modalities and applications. However, this rapid expansion of capability and capacity has not been matched by a parallel growth in the number of radiologists, resulting in global workforce shortages, with the UK being one of the most affected countries. The UK National Health Service has been employing several conventional strategies to deal with the workforce situation, with mixed success. The emergence of artificial intelligence (AI) tools with the potential to increase efficiency and efficacy at various stages of the radiology workflow has made it possible for radiology departments to use new strategies and workflows that can offset workforce shortages to some extent. This review article discusses the current and projected radiology workforce situation in the UK and the various strategies to deal with it, including applications of AI in radiology. We highlight the benefits of AI tools in improving efficiency and patient safety. AI has a role along the patient's entire journey: helping the clinician request the appropriate radiological investigation, supporting safe image acquisition, alerting radiologists and clinicians to critical and life-threatening situations, supporting cancer-screening follow-up, and generating meaningful radiology reports more efficiently. It has great potential to ease the workforce crisis and merits rapid adoption by radiology departments.
223
Hu M, Wu B, Lu D, Xie J, Chen Y, Yang Z, Dai W. Two-step hierarchical neural network for classification of dry age-related macular degeneration using optical coherence tomography images. Front Med (Lausanne) 2023; 10:1221453. [PMID: 37547613] [PMCID: PMC10403700] [DOI: 10.3389/fmed.2023.1221453] [Received: 05/12/2023] [Accepted: 07/03/2023] [Indexed: 08/08/2023]
Abstract
Purpose The aim of this study was to apply deep learning techniques to develop and validate a system that categorizes the phases of dry age-related macular degeneration (AMD), including nascent geographic atrophy (nGA), through the analysis of optical coherence tomography (OCT) images. Methods A total of 3,401 OCT macular images obtained from 338 patients admitted to Shenyang Aier Eye Hospital in 2019-2021 were collected for the development of the classification model. We adopted a convolutional neural network (CNN) model and introduced a hierarchical structure, along with image enhancement techniques, to train a two-step CNN model to detect and classify normal scans and three phases of dry AMD: atrophy-associated drusen regression, nGA, and geographic atrophy (GA). Five-fold cross-validation was used to evaluate the performance of the multi-label classification model. Results Experimental results obtained from five-fold cross-validation with different dry AMD classification models show that the proposed two-step hierarchical model with image enhancement achieves the best classification performance, with an F1-score of 91.32% and a kappa coefficient of 96.09%, compared to state-of-the-art models. The ablation study demonstrates that the proposed method not only improves accuracy across all categories in comparison to a traditional flat CNN model, but also substantially enhances the classification performance for nGA, from 66.79% to 81.65%. Conclusion This study introduces a novel two-step hierarchical deep learning approach to categorizing dry AMD progression phases and demonstrates its efficacy. The high classification performance suggests its potential for guiding individualized treatment plans for patients with macular degeneration.
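The two-step hierarchy this abstract describes (first normal vs. abnormal, then one of three dry-AMD phases) can be expressed as a simple routing function; the stand-in rules below replace the paper's two trained CNNs for illustration only:

```python
# Step 1 separates normal from dry-AMD scans; step 2 assigns abnormal
# scans to one of three phases. Both classifiers are injected callables,
# so toy rules can stand in for the paper's CNNs.
DRY_AMD_PHASES = ["drusen_regression", "nGA", "GA"]

def hierarchical_classify(image, step1, step2):
    if step1(image) == "normal":
        return "normal"
    return step2(image)          # one of the three dry-AMD phases

# Toy threshold rules on a made-up "atrophy_score" feature.
step1 = lambda img: "normal" if img["atrophy_score"] < 0.2 else "abnormal"
step2 = lambda img: DRY_AMD_PHASES[min(2, int(img["atrophy_score"] * 3))]

label = hierarchical_classify({"atrophy_score": 0.75}, step1, step2)
```

Splitting the decision this way lets each stage train on a simpler problem, which is one plausible reason the hierarchical model improves the hard nGA category more than the flat baseline does.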
Affiliation(s)
- Min Hu: Changsha Aier Eye Hospital, Changsha, China
- Bin Wu: Department of Retina, Shenyang Aier Excellence Eye Hospital, Shenyang, China
- Di Lu: Department of Retina, Shenyang Aier Optometry Hospital, Shenyang, China
- Jing Xie: Changsha Aier Eye Hospital, Changsha, China
- Yiqiang Chen: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Zhikuan Yang: Aier Institute of Optometry and Vision Science, Changsha, China
- Weiwei Dai: Changsha Aier Eye Hospital, Changsha, China; Anhui Aier Eye Hospital, Anhui Medical University, Hefei, China
|
224
|
Hou Y, Navarro-Cía M. A computationally-inexpensive strategy in CT image data augmentation for robust deep learning classification in the early stages of an outbreak. Biomed Phys Eng Express 2023; 9:055003. [PMID: 37413977 DOI: 10.1088/2057-1976/ace4cf] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2023] [Accepted: 07/06/2023] [Indexed: 07/08/2023]
Abstract
Coronavirus disease 2019 (COVID-19) has spread globally for over three years, and chest computed tomography (CT) has been used to diagnose COVID-19 and identify lung damage in COVID-19 patients. Given its widespread use, CT will remain a common diagnostic tool in future pandemics, but its effectiveness at the beginning of any pandemic will depend strongly on the ability to classify CT scans quickly and correctly when only limited resources are available, as will inevitably happen again. Here, we resort to transfer learning and limited hyperparameter tuning to use as few computing resources as possible for COVID-19 CT image classification. Advanced Normalisation Tools (ANTs) are used to synthesise images as augmented/independent data, which are then used to train an EfficientNet model to investigate the effect of synthetic images. On the COVID-CT dataset, classification accuracy increases from 91.15% to 95.50% and the Area Under the Receiver Operating Characteristic curve (AUC) from 96.40% to 98.54%. We also customise a small dataset to simulate data collected in the early stages of an outbreak and report an improvement in accuracy from 85.95% to 94.32% and in AUC from 93.21% to 98.61%. This study provides a feasible low-threshold, easy-to-deploy and ready-to-use solution with a relatively low computational cost for medical image classification at an early stage of an outbreak, when scarce data are available and traditional data augmentation may fail. Hence, it would be most suitable for low-resource settings.
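The paper's synthesis step uses ANTs; as a simpler illustration of the general idea of inexpensively multiplying a small scan dataset, the sketch below applies basic geometric transforms with NumPy. This is a generic augmentation baseline under stated assumptions, not the ANTs-based procedure itself:

```python
import numpy as np

def augment(image):
    """Cheap geometric augmentations for a 2-D scan: flips and a 90-degree
    rotation. Returns the original plus three transformed copies."""
    return [image,
            np.fliplr(image),   # horizontal flip
            np.flipud(image),   # vertical flip
            np.rot90(image)]    # 90-degree rotation

rng = np.random.default_rng(0)
scans = [rng.random((64, 64)) for _ in range(10)]   # toy stand-in "CT" slices
augmented = [aug for scan in scans for aug in augment(scan)]
print(len(scans), "->", len(augmented))             # 10 -> 40
```

A 4x expansion like this costs almost nothing computationally, which is the property that matters in the early-outbreak, low-resource scenario the abstract targets.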
Affiliation(s)
- Yikun Hou: Department of Electronic, Electrical and Systems Engineering, University of Birmingham, Birmingham B15 2TT, United Kingdom
- Miguel Navarro-Cía: Department of Electronic, Electrical and Systems Engineering, University of Birmingham, Birmingham B15 2TT, United Kingdom; School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, United Kingdom
|
225
|
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272 PMCID: PMC10377683 DOI: 10.3390/cancers15143608] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 07/10/2023] [Accepted: 07/10/2023] [Indexed: 07/30/2023] Open
Abstract
(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in the fields of artificial intelligence and computer vision. Given the rapid development of deep learning methods, the very high accuracy and timeliness that cancer diagnosis demands, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), together with histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural networks emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Several overfitting prevention methods are summarized: batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology to medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, which also faces challenges in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: There is a need for more public standard databases for cancer. Pre-trained models based on deep neural networks have the potential to be improved, and special attention should be paid to research on multimodal data fusion and supervised paradigms. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
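Of the overfitting-prevention methods the review lists, dropout is the easiest to make concrete. The sketch below is a minimal NumPy implementation of inverted dropout (the variant common in modern frameworks), not any specific library's API:

```python
import numpy as np

def dropout(x, p, training, rng):
    """Inverted dropout: zero each activation with probability p during
    training and rescale survivors by 1/(1-p), so the expected activation
    is unchanged and inference needs no correction."""
    if not training or p == 0.0:
        return x                          # identity at inference time
    mask = rng.random(x.shape) >= p       # keep each unit with prob. 1-p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(42)
x = np.ones(1000)
y = dropout(x, p=0.3, training=True, rng=rng)
print(round(y.mean(), 2))                         # stays near 1.0 due to rescaling
print(dropout(x, 0.3, training=False, rng=rng) is x)  # True: no-op at eval
```

Randomly silencing units this way prevents co-adaptation of features, which is why it is effective against overfitting on small medical datasets.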
Grants
- RM32G0178B8 BBSRC, UK
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang: School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu: School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
|
226
|
Wu P, Cao B, Liang Z, Wu M. The advantages of artificial intelligence-based gait assessment in detecting, predicting, and managing Parkinson's disease. Front Aging Neurosci 2023; 15:1191378. [PMID: 37502426 PMCID: PMC10368956 DOI: 10.3389/fnagi.2023.1191378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Accepted: 06/05/2023] [Indexed: 07/29/2023] Open
Abstract
Background Parkinson's disease is a neurological disorder that can cause gait disturbance, leading to mobility issues and falls. Early diagnosis and prediction of freeze episodes are essential for mitigating symptoms and monitoring the disease. Objective This review aims to evaluate the use of artificial intelligence (AI)-based gait evaluation in diagnosing and managing Parkinson's disease, and to explore the potential benefits of this technology for clinical decision-making and treatment support. Methods A thorough review of published literature was conducted to identify studies, articles, and research related to AI-based gait evaluation in Parkinson's disease. Results AI-based gait evaluation has shown promise in preventing freeze episodes, improving diagnosis, and increasing motor independence in patients with Parkinson's disease. Its advantages include higher diagnostic accuracy, continuous monitoring, and personalized therapeutic interventions. Conclusion AI-based gait evaluation systems hold great promise for managing Parkinson's disease and improving patient outcomes. They offer the potential to transform clinical decision-making and inform personalized therapies, but further research is needed to determine their effectiveness and refine their use.
Affiliation(s)
- Peng Wu: College of Acupuncture and Orthopedics, Hubei University of Chinese Medicine, Wuhan, Hubei, China
- Biwei Cao: Hubei Provincial Hospital of Traditional Chinese Medicine, Wuhan, China; Affiliated Hospital of Hubei University of Chinese Medicine, Wuhan, Hubei, China; Hubei Academy of Traditional Chinese Medicine, Wuhan, Hubei, China
- Zhendong Liang: College of Acupuncture and Orthopedics, Hubei University of Chinese Medicine, Wuhan, Hubei, China
- Miao Wu: Hubei Provincial Hospital of Traditional Chinese Medicine, Wuhan, China; Affiliated Hospital of Hubei University of Chinese Medicine, Wuhan, Hubei, China; Hubei Academy of Traditional Chinese Medicine, Wuhan, Hubei, China
|
227
|
Dou B, Zhu Z, Merkurjev E, Ke L, Chen L, Jiang J, Zhu Y, Liu J, Zhang B, Wei GW. Machine Learning Methods for Small Data Challenges in Molecular Science. Chem Rev 2023; 123:8736-8780. [PMID: 37384816 PMCID: PMC10999174 DOI: 10.1021/acs.chemrev.3c00189] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/01/2023]
Abstract
Small data are often used in scientific and engineering research due to various constraints, such as time, cost, ethics, privacy, security, and technical limitations in data acquisition. However, while big data have been the focus for the past decade, small data and their challenges have received little attention, even though the challenges they pose are technically more severe in machine learning (ML) and deep learning (DL) studies. Overall, the small data challenge is often compounded by issues such as data diversity, imputation, noise, imbalance, and high dimensionality. Fortunately, the current big data era is characterized by technological breakthroughs in ML, DL, and artificial intelligence (AI), which enable data-driven scientific discovery, and many advanced ML and DL technologies developed for big data have inadvertently provided solutions for small data problems. As a result, significant progress has been made in ML and DL for small data challenges in the past decade. In this review, we summarize and analyze several emerging potential solutions to small data challenges in molecular science, including the chemical and biological sciences. We review both basic machine learning algorithms, such as linear regression, logistic regression (LR), k-nearest neighbor (KNN), support vector machine (SVM), kernel learning (KL), random forest (RF), and gradient boosting trees (GBT), and more advanced techniques, including artificial neural networks (ANN), convolutional neural networks (CNN), U-Net, graph neural networks (GNN), generative adversarial networks (GAN), long short-term memory (LSTM), autoencoders, transformers, transfer learning, active learning, graph-based semi-supervised learning, the combination of deep learning with traditional machine learning, and physical model-based data augmentation. We also briefly discuss the latest advances in these methods. Finally, we conclude the survey with a discussion of promising trends in small data challenges in molecular science.
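Among the basic algorithms this review lists, k-nearest neighbor is the most directly suited to a small-data sketch, since it has no fitting step at all. A minimal NumPy implementation on a hypothetical toy dataset:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest
    training points (Euclidean distance). No parameters are estimated,
    which is one reason KNN stays usable on very small datasets."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])     # majority vote
    return np.array(preds)

# Toy two-cluster "small data" problem (six labeled samples).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
y = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X, y, np.array([[0.5, 0.5], [5.5, 5.5]])))  # [0 1]
```

With only six training points, model-free methods like this avoid the variance that a high-capacity network would suffer; the more advanced techniques in the review (transfer learning, augmentation) attack the same problem from the opposite direction, by importing or manufacturing extra data.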
Affiliation(s)
- Bozheng Dou: Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, P. R. China
- Zailiang Zhu: Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, P. R. China
- Ekaterina Merkurjev: Department of Mathematics, Michigan State University, East Lansing, Michigan 48824, United States
- Lu Ke: Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, P. R. China
- Long Chen: Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, P. R. China
- Jian Jiang: Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, P. R. China; Department of Mathematics, Michigan State University, East Lansing, Michigan 48824, United States
- Yueying Zhu: Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, P. R. China
- Jie Liu: Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, P. R. China
- Bengong Zhang: Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, P. R. China
- Guo-Wei Wei: Department of Mathematics, Michigan State University, East Lansing, Michigan 48824, United States; Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan 48824, United States; Department of Biochemistry and Molecular Biology, Michigan State University, East Lansing, Michigan 48824, United States
|
228
|
Lisacek-Kiosoglous AB, Powling AS, Fontalis A, Gabr A, Mazomenos E, Haddad FS. Artificial intelligence in orthopaedic surgery. Bone Joint Res 2023; 12:447-454. [PMID: 37423607 DOI: 10.1302/2046-3758.127.bjr-2023-0111.r1] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 07/11/2023] Open
Abstract
The use of artificial intelligence (AI) is rapidly growing across many domains, and the medical field is no exception. AI is an umbrella term for the practical application of algorithms to generate useful output without the need for human cognition. Owing to the expanding volume of patient information collected, known as 'big data', AI is showing promise as a useful tool in healthcare research and across all aspects of patient care pathways. Practical applications in orthopaedic surgery include: diagnostics, such as fracture recognition and tumour detection; predictive models of clinical and patient-reported outcome measures, such as calculating mortality rates and length of hospital stay; and real-time rehabilitation monitoring and surgical training. However, clinicians should remain cognizant of AI's limitations, as the development of robust reporting and validation frameworks is of paramount importance to prevent avoidable errors and biases. The aim of this review article is to provide a comprehensive understanding of AI and its subfields, as well as to delineate its existing clinical applications in trauma and orthopaedic surgery. Furthermore, this narrative review expands upon the limitations of AI and future directions.
Affiliation(s)
- Anthony B Lisacek-Kiosoglous: Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK
- Amber S Powling: Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK; Barts and The London School of Medicine and Dentistry, School of Medicine London, London, UK
- Andreas Fontalis: Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK; Division of Surgery and Interventional Science, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Ayman Gabr: Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK
- Evangelos Mazomenos: Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Fares S Haddad: Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK; Division of Surgery and Interventional Science, University College London, London, UK
|
229
|
Alsubai S, Alqahtani A, Sha M, Almadhor A, Abbas S, Mughal H, Gregus M. Privacy Preserved Cervical Cancer Detection Using Convolutional Neural Networks Applied to Pap Smear Images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2023; 2023:9676206. [PMID: 37455684 PMCID: PMC10349677 DOI: 10.1155/2023/9676206] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Revised: 09/28/2022] [Accepted: 10/11/2022] [Indexed: 07/18/2023]
Abstract
Image processing has enabled faster and more accurate image classification and has been of great benefit to the health industry. Manually examining medical images like MRI and X-rays can be very time-consuming, more prone to human error, and far more costly. One such examination is the Pap smear exam, where cervical cells are examined in laboratory settings to distinguish healthy cervical cells from abnormal cells, thus indicating early signs of cervical cancer. In this paper, we propose a convolutional neural network- (CNN-) based cervical cell classification using the publicly available SIPaKMeD dataset, which has five cell categories: superficial-intermediate, parabasal, koilocytotic, metaplastic, and dyskeratotic. The CNN distinguishes between healthy cervical cells, cells with precancerous abnormalities, and benign cells. Pap smear images were segmented, and a deep CNN using four convolutional layers was applied to the augmented images of cervical cells obtained from Pap smear slides. A simple yet efficient CNN is proposed that yields an accuracy of 91.13% and can be successfully used to classify cervical cells. A simple architecture that yields reasonably good accuracy can increase the speed of diagnosis, decrease the response time, and reduce the computation cost. Future researchers can build upon this model to improve its accuracy and obtain a faster and more accurate prediction.
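The convolutional layer at the heart of such a four-layer CNN reduces to one operation: sliding a small learned kernel over the image. A minimal single-channel NumPy version of that operation (not the paper's network, whose kernels are learned from the Pap smear data) looks like this:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2-D 'valid' convolution (strictly, cross-correlation,
    as CNN layers compute): slide the kernel over the image and take the
    elementwise product-sum at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.array([[1., 2., 3.],
                [4., 5., 6.],
                [7., 8., 9.]])
edge = np.array([[1., -1.],
                 [1., -1.]])   # a crude hand-set vertical-edge detector
print(conv2d_valid(img, edge))
```

Stacking a few such layers, with learned kernels and nonlinearities between them, is what lets even a small CNN pick up the cell-morphology cues that separate the five SIPaKMeD categories.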
Affiliation(s)
- Shtwai Alsubai: College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Abdullah Alqahtani: College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Mohemmed Sha: College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Ahmad Almadhor: Department of Computer Engineering and Networks, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Sidra Abbas: Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Huma Mughal: Department of Computer Science, Kinnaird College for Women, Lahore 54000, Pakistan
- Michal Gregus: Information Systems Department, Faculty of Management, Comenius University in Bratislava, Odbojárov 10, 82005 Bratislava 25, Slovakia
|
230
|
Singh A, Velagala VR, Kumar T, Dutta RR, Sontakke T. The Application of Deep Learning to Electroencephalograms, Magnetic Resonance Imaging, and Implants for the Detection of Epileptic Seizures: A Narrative Review. Cureus 2023; 15:e42460. [PMID: 37637568 PMCID: PMC10457132 DOI: 10.7759/cureus.42460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Accepted: 07/25/2023] [Indexed: 08/29/2023] Open
Abstract
Epilepsy is a neurological disorder characterized by recurrent seizures affecting millions worldwide. Medically intractable seizures in epilepsy patients are not only detrimental to the quality of life but also pose a significant threat to their safety. Outcomes of epilepsy therapy can be improved by early detection and intervention during the interictal window period. Electroencephalography is the primary diagnostic tool for epilepsy, but accurate interpretation of seizure activity is challenging and highly time-consuming. Machine learning (ML) and deep learning (DL) algorithms enable us to analyze complex EEG data, which can not only help us diagnose but also locate epileptogenic zones and predict medical and surgical treatment outcomes. DL models such as convolutional neural networks (CNNs), inspired by visual processing, can be used to classify EEG activity. By applying preprocessing techniques, signal quality can be enhanced by denoising and artifact removal. DL can also be incorporated into the analysis of magnetic resonance imaging (MRI) data, which can help in the localization of epileptogenic zones in the brain. Proper detection of these zones can help in good neurosurgical outcomes. Recent advancements in DL have facilitated the implementation of these systems in neural implants and wearable devices, allowing for real-time seizure detection. This has the potential to transform the management of drug-refractory epilepsy. This review explores the application of ML and DL techniques to Electroencephalograms (EEGs), MRI, and wearable devices for epileptic seizure detection. This review briefly explains the fundamentals of both artificial intelligence (AI) and DL, highlighting these systems' potential advantages and undeniable limitations.
Affiliation(s)
- Arihant Singh: Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Vivek R Velagala: Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Tanishq Kumar: Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Rajoshee R Dutta: Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Tushar Sontakke: Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
|
231
|
Pierre K, Gupta M, Raviprasad A, Sadat Razavi SM, Patel A, Peters K, Hochhegger B, Mancuso A, Forghani R. Medical imaging and multimodal artificial intelligence models for streamlining and enhancing cancer care: opportunities and challenges. Expert Rev Anticancer Ther 2023; 23:1265-1279. [PMID: 38032181 DOI: 10.1080/14737140.2023.2286001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 11/16/2023] [Indexed: 12/01/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) has the potential to transform oncologic care. There have been significant developments in AI applications in medical imaging and increasing interest in multimodal models. These are likely to enable improved oncologic care through more precise diagnosis, increasingly in a more personalized and less invasive manner. In this review, we provide an overview of the current state and challenges that clinicians, administrative personnel and policy makers need to be aware of and mitigate for the technology to reach its full potential. AREAS COVERED The article provides a brief targeted overview of AI, a high-level review of the current state and future potential AI applications in diagnostic radiology and to a lesser extent digital pathology, focusing on oncologic applications. This is followed by a discussion of emerging approaches, including multimodal models. The article concludes with a discussion of technical, regulatory challenges and infrastructure needs for AI to realize its full potential. EXPERT OPINION There is a large volume of promising research, and steadily increasing commercially available tools using AI. For the most advanced and promising precision diagnostic applications of AI to be used clinically, robust and comprehensive quality monitoring systems and informatics platforms will likely be required.
Affiliation(s)
- Kevin Pierre: Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA; Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Manas Gupta: Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA
- Abheek Raviprasad: Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA; University of Florida College of Medicine, Gainesville, FL, USA
- Seyedeh Mehrsa Sadat Razavi: Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA; University of Florida College of Medicine, Gainesville, FL, USA
- Anjali Patel: Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA; University of Florida College of Medicine, Gainesville, FL, USA
- Keith Peters: Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA; Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Bruno Hochhegger: Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA; Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Anthony Mancuso: Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA; Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Reza Forghani: Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL, USA; Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA; Division of Medical Physics, University of Florida College of Medicine, Gainesville, FL, USA; Department of Neurology, Division of Movement Disorders, University of Florida College of Medicine, Gainesville, FL, USA
|
232
|
Anil S, Porwal P, Porwal A. Transforming Dental Caries Diagnosis Through Artificial Intelligence-Based Techniques. Cureus 2023; 15:e41694. [PMID: 37575741 PMCID: PMC10413921 DOI: 10.7759/cureus.41694] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/11/2023] [Indexed: 08/15/2023] Open
Abstract
Diagnosing dental caries plays a pivotal role in preventing and treating tooth decay. However, traditional methods of diagnosing caries often fall short in accuracy and efficiency. Despite the endorsement of radiography as a diagnostic tool, the identification of dental caries through radiographic images can be influenced by individual interpretation. Incorporating artificial intelligence (AI) into diagnosing dental caries holds significant promise, potentially enhancing the precision and efficiency of diagnoses. This review introduces the fundamental concepts of AI, including machine learning and deep learning algorithms, and emphasizes their relevance and potential contributions to the diagnosis of dental caries. It further explains the process of gathering and pre-processing radiography data for AI examination. Additionally, AI techniques for dental caries diagnosis are explored, focusing on image processing, analysis, and classification models for predicting caries risk and severity. Deep learning applications in dental caries diagnosis using convolutional neural networks are presented. Furthermore, the integration of AI systems into dental practice is discussed, including the challenges and considerations for implementation as well as ethical and legal aspects. The breadth of AI technologies and their prospective utility in clinical scenarios for diagnosing dental caries from dental radiographs is presented. This review outlines the advancements of AI and its potential in revolutionizing dental caries diagnosis, encouraging further research and development in this rapidly evolving field.
Affiliation(s)
- Priyanka Porwal: Dentistry, Pushpagiri Institute of Medical Sciences and Research Centre, Tiruvalla, IND
- Amit Porwal: Prosthetic Dental Sciences, College of Dentistry, Jazan University, Jazan, SAU
|
233
|
Kheir AM, Elnashar A, Mosad A, Govind A. An improved deep learning procedure for statistical downscaling of climate data. Heliyon 2023; 9:e18200. [PMID: 37539241 PMCID: PMC10393634 DOI: 10.1016/j.heliyon.2023.e18200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 07/11/2023] [Accepted: 07/11/2023] [Indexed: 08/05/2023] Open
Abstract
Recent climate change (CC) scenarios from the Coupled Model Intercomparison Project Phase 6 (CMIP6) have just been released at coarse resolution. Deep learning (DL)-based statistical downscaling has recently been used, but more research is needed, particularly in arid regions, because little is known about its suitability for extrapolating future CC scenarios. Here we analyzed this issue by downscaling maximum and minimum temperature over the Egyptian domain based on one General Circulation Model (GCM), CanESM5, and two Shared Socioeconomic Pathways (SSPs), SSP4.5 and SSP8.5, from CMIP6, using a Convolutional Neural Network (CNN), hereinafter called CNNSD. The CNNSD-downscaled maximum and minimum temperatures reproduced the observed climate over historical and future periods at a finer resolution (0.1°), reducing the biases exhibited by the original scenario. To the best of our knowledge, this is the first time CNN has been used to downscale CMIP6 scenarios, particularly in arid regions. The downscaled analysis showed that maximum and minimum temperatures are expected to rise by 4.8 °C and 4.0 °C, respectively, in the future (2015-2100), compared to the historical period, under the moderate scenario (SSP4.5); under the fossil-fueled development scenario (SSP8.5), the CNNSD analysis shows these values rising by 6.3 °C and 4.2 °C, respectively. The developed approach could be used not only in Egypt but also in other developing countries, which are especially vulnerable to climate change and have a scarcity of related research. The established downscaling approach can be used to provide climate services, drive impact studies and adaptation decisions, and inform policy development. More research is needed, however, to include multiple GCMs in order to quantify the uncertainties between GCMs and SSPs, improving the outputs for use in climate change impacts and adaptations for food and nutrition security.
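Statistical downscaling starts from the problem of turning a coarse GCM field into a finer grid. The sketch below shows only the trivial regridding baseline (nearest-neighbour upsampling with NumPy) that a learned downscaler such as CNNSD improves on by adding bias correction and fine-scale structure; the grid sizes and temperature values are illustrative assumptions:

```python
import numpy as np

def upsample_nearest(coarse, factor):
    """Nearest-neighbour regridding of a coarse climate field to a finer
    grid: every coarse cell is copied into a factor x factor block. This is
    the naive baseline a learned downscaler refines."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

# A toy 2x2 "GCM cell" temperature field (degrees C), refined 5x
# (e.g. a 0.5-degree grid mapped onto a 0.1-degree grid).
coarse = np.array([[30.0, 32.0],
                   [28.0, 31.0]])
fine = upsample_nearest(coarse, 5)
print(coarse.shape, "->", fine.shape)   # (2, 2) -> (10, 10)
```

A CNN downscaler can be viewed as replacing the blocky copies this baseline produces with values predicted from the surrounding coarse cells and learned local climate patterns.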
Affiliation(s)
- Ahmed M.S. Kheir
- International Center for Agricultural Research in the Dry Areas (ICARDA), Maadi 11728, Egypt
- Soils, Water and Environment Research Institute, Agricultural Research Center, 9 Cairo University Street, Giza 12112, Egypt
- Abdelrazek Elnashar
- Department of Natural Resources, Faculty of African Postgraduate Studies, Cairo University, Giza 12613, Egypt
- Alaa Mosad
- International Center for Agricultural Research in the Dry Areas (ICARDA), Maadi 11728, Egypt
- Soils, Water and Environment Research Institute, Agricultural Research Center, 9 Cairo University Street, Giza 12112, Egypt
- Ajit Govind
- International Center for Agricultural Research in the Dry Areas (ICARDA), Maadi 11728, Egypt

234
Haja SA, Mahadevappa V. Advancing glaucoma detection with convolutional neural networks: a paradigm shift in ophthalmology. Rom J Ophthalmol 2023; 67:222-237. [PMID: 37876506 PMCID: PMC10591431 DOI: 10.22336/rjo.2023.39] [Accepted: 09/24/2023] [Indexed: 10/26/2023]
Abstract
A leading cause of irreversible vision loss, glaucoma requires early detection for effective management. Intraocular Pressure (IOP) is a significant risk factor for glaucoma. Convolutional Neural Networks (CNN) demonstrate exceptional capabilities in analyzing retinal fundus images, a non-invasive and cost-effective imaging technique widely used in glaucoma diagnosis. By learning from large datasets of annotated images, CNN can identify subtle changes in the optic nerve head and retinal structures indicative of glaucoma. This enables early and precise glaucoma diagnosis, empowering clinicians to implement timely interventions. Optical Coherence Tomography (OCT), another valuable diagnostic tool for glaucoma evaluation, provides high-resolution cross-sectional images of the retina. CNN can effectively analyze OCT scans and extract meaningful features, facilitating the identification of structural abnormalities associated with glaucoma. Visual field testing, performed using devices such as the Humphrey Field Analyzer, is crucial for assessing functional vision loss in glaucoma. The integration of CNN with retinal fundus images, OCT scans, visual field testing, and IOP measurements represents a transformative approach to glaucoma detection. These advanced technologies have the potential to revolutionize ophthalmology by enabling early detection, personalized management, and improved patient outcomes. CNNs also facilitate remote expert opinions and enhance treatment monitoring. Overcoming challenges such as data scarcity and limited interpretability can optimize CNN utilization in glaucoma diagnosis. Measuring retinal nerve fiber layer thickness has proven valuable as a diagnostic marker. CNN implementation reduces healthcare costs and improves access to quality eye care. Future research should focus on optimizing architectures and incorporating novel biomarkers. CNN integration in glaucoma detection revolutionizes ophthalmology, improving patient outcomes and access to care. This review paves the way for innovative CNN-based glaucoma detection methods. Abbreviations: CNN = Convolutional Neural Networks, AI = Artificial Intelligence, IOP = Intraocular Pressure, OCT = Optical Coherence Tomography, CSLO = Confocal Scanning Laser Ophthalmoscopy, AUC-ROC = Area Under the Receiver Operating Characteristic Curve, RNFL = Retinal Nerve Fiber Layer, RNN = Recurrent Neural Networks, VF = Visual Field, AP = Average Precision, MD = Mean Defect, sLV = square root of Loss Variance, NN = Neural Network, WHO = World Health Organization.
Affiliation(s)
- Shafeeq Ahmed Haja
- Department of Ophthalmology, Bangalore Medical College and Research Institute, India
- Vidyadevi Mahadevappa
- Department of Ophthalmology, Bangalore Medical College and Research Institute, India

235
Anastasiadis A, Koudonas A, Langas G, Tsiakaras S, Memmos D, Mykoniatis I, Symeonidis EN, Tsiptsios D, Savvides E, Vakalopoulos I, Dimitriadis G, de la Rosette J. Transforming urinary stone disease management by artificial intelligence-based methods: A comprehensive review. Asian J Urol 2023; 10:258-274. [PMID: 37538159 PMCID: PMC10394286 DOI: 10.1016/j.ajur.2023.02.002] [Received: 09/12/2022] [Revised: 10/23/2022] [Accepted: 02/10/2023] [Indexed: 08/05/2023]
Abstract
Objective To provide a comprehensive review of the existing research and evidence regarding artificial intelligence (AI) applications in the assessment and management of urinary stone disease. Methods A comprehensive literature review was performed using the PubMed, Scopus, and Google Scholar databases to identify publications describing innovative concepts or supporting applications of AI in the improvement of every medical procedure relating to stone disease. The terms "endourology", "artificial intelligence", "machine learning", and "urolithiasis" were used to search for eligible reports, while review articles, articles referring to automated procedures without AI application, and editorial comments were excluded from the final set of publications. The search covered January 2000 to September 2023 and included manuscripts in the English language. Results A total of 69 studies were identified. The main subjects related to the detection of urinary stones, the prediction of the outcome of conservative or operative management, the optimization of operative procedures, and the elucidation of the relation of urinary stone chemistry to various factors. Conclusion AI is a useful tool that provides urologists with numerous capabilities, which explains why it has gained ground in the pursuit of optimal stone disease management. The effectiveness of diagnosis and therapy can be increased by using it as an alternative or adjunct to already existing approaches. However, little is known concerning the potential of this vast field. Electronic patient records, containing big data, offer AI the opportunity to develop and analyze more precise and efficient diagnostic and treatment algorithms. Nevertheless, the existing applications are not generalizable to real-life practice, and high-quality studies are needed to establish the integration of AI in the management of urinary stone disease.
Affiliation(s)
- Anastasios Anastasiadis
- 1st Department of Urology, Aristotle University of Thessaloniki, School of Medicine, “G.Gennimatas” General Hospital, Thessaloniki, Greece
- Antonios Koudonas
- 1st Department of Urology, Aristotle University of Thessaloniki, School of Medicine, “G.Gennimatas” General Hospital, Thessaloniki, Greece
- Georgios Langas
- 1st Department of Urology, Aristotle University of Thessaloniki, School of Medicine, “G.Gennimatas” General Hospital, Thessaloniki, Greece
- Stavros Tsiakaras
- 1st Department of Urology, Aristotle University of Thessaloniki, School of Medicine, “G.Gennimatas” General Hospital, Thessaloniki, Greece
- Dimitrios Memmos
- 1st Department of Urology, Aristotle University of Thessaloniki, School of Medicine, “G.Gennimatas” General Hospital, Thessaloniki, Greece
- Ioannis Mykoniatis
- 1st Department of Urology, Aristotle University of Thessaloniki, School of Medicine, “G.Gennimatas” General Hospital, Thessaloniki, Greece
- Evangelos N. Symeonidis
- 1st Department of Urology, Aristotle University of Thessaloniki, School of Medicine, “G.Gennimatas” General Hospital, Thessaloniki, Greece
- Dimitrios Tsiptsios
- Neurology Department, Democritus University of Thrace, Alexandroupolis, Greece
- Ioannis Vakalopoulos
- 1st Department of Urology, Aristotle University of Thessaloniki, School of Medicine, “G.Gennimatas” General Hospital, Thessaloniki, Greece
- Georgios Dimitriadis
- 1st Department of Urology, Aristotle University of Thessaloniki, School of Medicine, “G.Gennimatas” General Hospital, Thessaloniki, Greece
- Jean de la Rosette
- Department of Urology, Istanbul Medipol Mega University Hospital, Istanbul, Turkey

236
TCNN: A Transformer Convolutional Neural Network for artifact classification in whole slide images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104812] [Indexed: 03/14/2023]
237
Diraco G, Rescio G, Caroppo A, Manni A, Leone A. Human Action Recognition in Smart Living Services and Applications: Context Awareness, Data Availability, Personalization, and Privacy. Sensors (Basel) 2023; 23:6040. [PMID: 37447889 DOI: 10.3390/s23136040] [Received: 06/08/2023] [Revised: 06/20/2023] [Accepted: 06/26/2023] [Indexed: 07/15/2023]
Abstract
Smart living, an increasingly prominent concept, entails incorporating sophisticated technologies in homes and urban environments to elevate the quality of life for citizens. A critical success factor for smart living services and applications, from energy management to healthcare and transportation, is the efficacy of human action recognition (HAR). HAR, rooted in computer vision, seeks to identify human actions and activities using visual data and various sensor modalities. This paper extensively reviews the literature on HAR in smart living services and applications, amalgamating key contributions and challenges while providing insights into future research directions. The review delves into the essential aspects of smart living, the state of the art in HAR, and the potential societal implications of this technology. Moreover, the paper meticulously examines the primary application sectors in smart living that stand to gain from HAR, such as smart homes, smart healthcare, and smart cities. By underscoring the significance of the four dimensions of context awareness, data availability, personalization, and privacy in HAR, this paper offers a comprehensive resource for researchers and practitioners striving to advance smart living services and applications. The methodology for this literature review involved conducting targeted Scopus queries to ensure a comprehensive coverage of relevant publications in the field. Efforts have been made to thoroughly evaluate the existing literature, identify research gaps, and propose future research directions. The comparative advantages of this review lie in its comprehensive coverage of the dimensions essential for smart living services and applications, addressing the limitations of previous reviews and offering valuable insights for researchers and practitioners in the field.
Affiliation(s)
- Giovanni Diraco
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Gabriele Rescio
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Andrea Caroppo
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Andrea Manni
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
- Alessandro Leone
- National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy

238
Barrera-Naranjo A, Marin-Castrillon DM, Decourselle T, Lin S, Leclerc S, Morgant MC, Bernard C, De Oliveira S, Boucher A, Presles B, Bouchot O, Christophe JJ, Lalande A. Segmentation of 4D Flow MRI: Comparison between 3D Deep Learning and Velocity-Based Level Sets. J Imaging 2023; 9:123. [PMID: 37367471 DOI: 10.3390/jimaging9060123] [Received: 04/18/2023] [Revised: 05/30/2023] [Accepted: 06/15/2023] [Indexed: 06/28/2023]
Abstract
A thoracic aortic aneurysm is an abnormal dilatation of the aorta that can progress and lead to rupture. The decision to perform surgery is made by considering the maximum diameter, but it is now well known that this metric alone is not completely reliable. The advent of 4D flow magnetic resonance imaging (MRI) has allowed the calculation of new biomarkers for the study of aortic diseases, such as wall shear stress. However, the calculation of these biomarkers requires precise segmentation of the aorta during all phases of the cardiac cycle. The objective of this work was to compare two different methods for automatically segmenting the thoracic aorta in the systolic phase using 4D flow MRI. The first method is based on a level set framework and uses the velocity field in addition to 3D phase contrast magnetic resonance imaging. The second method is a U-Net-like approach that is applied only to magnitude images from 4D flow MRI. The dataset comprised 36 exams from different patients, with ground truth data for the systolic phase of the cardiac cycle. The comparison was performed based on selected metrics, such as the Dice similarity coefficient (DSC) and Hausdorff distance (HD), for the whole aorta and also three aortic regions. Wall shear stress was also assessed, and the maximum wall shear stress values were used for comparison. The U-Net-based approach provided statistically better results for the 3D segmentation of the aorta, with a DSC of 0.92 ± 0.02 vs. 0.86 ± 0.5 and an HD of 21.49 ± 24.8 mm vs. 35.79 ± 31.33 mm for the whole aorta. The absolute difference between the wall shear stress and ground truth slightly favored the level set method, but not significantly (0.754 ± 1.07 Pa vs. 0.737 ± 0.79 Pa). The results showed that the deep learning-based method should be considered for the segmentation of all time steps in order to evaluate biomarkers based on 4D flow MRI.
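The Dice similarity coefficient used above is a standard overlap measure between a predicted segmentation and its ground truth; a minimal sketch on flat binary masks (illustrative reference, not the authors' evaluation code; the function name is an assumption):

```python
def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for flat binary (0/1) masks."""
    # Intersection: positions where both masks are 1.
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks overlap perfectly by convention.
    return 1.0 if total == 0 else 2.0 * inter / total
```

A DSC of 0.92, as reported for the U-Net, means the predicted and reference aortic masks share 92% of their combined volume in this sense; the Hausdorff distance complements it by measuring the worst-case boundary error.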
Affiliation(s)
- Siyu Lin
- IFTIM, ICMUB Laboratory, University of Burgundy, 21078 Dijon, France
- Sarah Leclerc
- IFTIM, ICMUB Laboratory, University of Burgundy, 21078 Dijon, France
- Marie-Catherine Morgant
- IFTIM, ICMUB Laboratory, University of Burgundy, 21078 Dijon, France
- Department of Cardio-Vascular and Thoracic Surgery, University Hospital of Dijon, 21078 Dijon, France
- Chloé Bernard
- IFTIM, ICMUB Laboratory, University of Burgundy, 21078 Dijon, France
- Department of Cardio-Vascular and Thoracic Surgery, University Hospital of Dijon, 21078 Dijon, France
- Arnaud Boucher
- IFTIM, ICMUB Laboratory, University of Burgundy, 21078 Dijon, France
- Benoit Presles
- IFTIM, ICMUB Laboratory, University of Burgundy, 21078 Dijon, France
- Olivier Bouchot
- IFTIM, ICMUB Laboratory, University of Burgundy, 21078 Dijon, France
- Department of Cardio-Vascular and Thoracic Surgery, University Hospital of Dijon, 21078 Dijon, France
- Alain Lalande
- IFTIM, ICMUB Laboratory, University of Burgundy, 21078 Dijon, France
- Department of Medical Imaging, University Hospital of Dijon, 21078 Dijon, France

239
Cirrincione G, Cannata S, Cicceri G, Prinzi F, Currieri T, Lovino M, Militello C, Pasero E, Vitabile S. Transformer-Based Approach to Melanoma Detection. Sensors (Basel) 2023; 23:5677. [PMID: 37420843 DOI: 10.3390/s23125677] [Received: 05/18/2023] [Revised: 06/09/2023] [Accepted: 06/15/2023] [Indexed: 07/09/2023]
Abstract
Melanoma is a malignant cancer that develops when DNA damage occurs (mainly due to environmental factors such as ultraviolet rays). Melanoma often results in intense and aggressive cell growth that, if not caught in time, can be fatal. Thus, early identification at the initial stage is fundamental to stopping the spread of the cancer. In this paper, a Vision Transformer (ViT)-based architecture able to classify melanoma versus non-cancerous lesions is presented. The proposed predictive model is trained and tested on public skin cancer data from the ISIC challenge, and the obtained results are highly promising. Different classifier configurations are considered and analyzed in order to find the most discriminating one. The best configuration reached an accuracy of 0.948, sensitivity of 0.928, specificity of 0.967, and AUROC of 0.948.
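The reported accuracy, sensitivity, and specificity all derive from the same binary confusion counts; as a reference, a minimal sketch of how they are computed from labels (illustrative only, not the authors' pipeline; the function name and dict layout are assumptions):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) from binary 0/1 labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == 1 and p == 1 for t, p in pairs)  # true positives
    tn = sum(t == 0 and p == 0 for t, p in pairs)  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in pairs)  # false positives
    fn = sum(t == 1 and p == 0 for t, p in pairs)  # false negatives
    return {
        "accuracy": (tp + tn) / len(pairs),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }
```

For melanoma screening the asymmetry matters: sensitivity (0.928 here) bounds how many true melanomas the classifier misses, while specificity (0.967) bounds the rate of false alarms on benign lesions.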
Affiliation(s)
- Giansalvo Cirrincione
- Département Electronique-Electrotechnique-Automatique (EEA), University of Picardie Jules Verne, 80000 Amiens, France
- Sergio Cannata
- Department of Electronics and Telecommunications, Politecnico di Torino, 10129 Turin, Italy
- Giovanni Cicceri
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, 90127 Palermo, Italy
- Francesco Prinzi
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, 90127 Palermo, Italy
- Tiziana Currieri
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, 90127 Palermo, Italy
- Marta Lovino
- Department of Engineering Enzo Ferrari, University of Modena and Reggio Emilia, 41125 Modena, Italy
- Carmelo Militello
- Institute for High-Performance Computing and Networking (ICAR-CNR), Italian National Research Council, 90146 Palermo, Italy
- Eros Pasero
- Department of Electronics and Telecommunications, Politecnico di Torino, 10129 Turin, Italy
- Salvatore Vitabile
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, 90127 Palermo, Italy

240
Zafar S, Nizami IF, Rehman MU, Majid M, Ryu J. NISQE: Non-Intrusive Speech Quality Evaluator Based on Natural Statistics of Mean Subtracted Contrast Normalized Coefficients of Spectrogram. Sensors (Basel) 2023; 23:5652. [PMID: 37420818 DOI: 10.3390/s23125652] [Received: 04/19/2023] [Revised: 06/01/2023] [Accepted: 06/14/2023] [Indexed: 07/09/2023]
Abstract
With the evolution of technology, voice-based communication has gained importance in applications such as online conferencing, online meetings, voice-over internet protocol (VoIP), etc. Limiting factors such as environmental noise, encoding and decoding of the speech signal, and limitations of technology may degrade the quality of the speech signal. Therefore, there is a requirement for continuous quality assessment of the speech signal. Speech quality assessment (SQA) enables a system to automatically tune network parameters to improve speech quality. Furthermore, many speech transmitters and receivers used for voice processing, including mobile devices and high-performance computers, can benefit from SQA. SQA plays a significant role in the evaluation of speech-processing systems. Non-intrusive speech quality assessment (NI-SQA) is a challenging task due to the unavailability of pristine speech signals in real-world scenarios. The success of NI-SQA techniques relies heavily on the features used to assess speech quality. Various NI-SQA methods extract features from speech signals in different domains, but they do not take into account the natural structure of the speech signals when assessing speech quality. This work proposes a method for NI-SQA based on the natural structure of speech signals, which is approximated using natural spectrogram statistical (NSS) properties derived from the speech signal spectrogram. The pristine version of a speech signal follows a structured natural pattern that is disrupted when distortion is introduced. The deviation of NSS properties between the pristine and distorted speech signals is utilized to predict speech quality. The proposed methodology shows better performance than state-of-the-art NI-SQA methods on the Centre for Speech Technology Voice Cloning Toolkit corpus (VCTK-Corpus), with a Spearman's rank-ordered correlation coefficient (SRC) of 0.902, Pearson correlation coefficient (PCC) of 0.960, and root mean squared error (RMSE) of 0.206. On the NOIZEUS-960 database, the proposed methodology shows an SRC of 0.958, PCC of 0.960, and RMSE of 0.114.
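Mean-subtracted contrast-normalized (MSCN) coefficients, from which the NSS features above are derived, normalize each spectrogram element by a local mean and local deviation. A minimal sketch with a uniform 3×3 neighborhood follows (natural-scene-statistics methods typically use a Gaussian-weighted window; the uniform window, function name, and list-of-lists input are assumptions of this illustration, not the authors' implementation):

```python
def mscn(spec, eps=1.0):
    """Mean-subtracted contrast-normalized coefficients of a 2-D
    spectrogram: (x - local_mean) / (local_std + eps), computed over
    a uniform 3x3 window clipped at the borders."""
    rows, cols = len(spec), len(spec[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Gather the 3x3 neighborhood, truncated at the edges.
            patch = [spec[u][v]
                     for u in range(max(0, i - 1), min(rows, i + 2))
                     for v in range(max(0, j - 1), min(cols, j + 2))]
            mu = sum(patch) / len(patch)
            sigma = (sum((p - mu) ** 2 for p in patch) / len(patch)) ** 0.5
            out[i][j] = (spec[i][j] - mu) / (sigma + eps)
    return out
```

For undistorted signals the histogram of these coefficients is close to Gaussian; distortions skew it, and that deviation is what statistics-based quality predictors such as NISQE exploit.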
Affiliation(s)
- Shakeel Zafar
- Department of Computer Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Imran Fareed Nizami
- Department of Electrical Engineering, Bahria University, Islamabad 44000, Pakistan
- Mobeen Ur Rehman
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
- Muhammad Majid
- Department of Computer Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Jihyoung Ryu
- Electronics and Telecommunications Research Institute (ETRI), Gwangju 61012, Republic of Korea

241
He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. [PMID: 37546423 PMCID: PMC10400334 DOI: 10.3389/fonc.2023.1189370] [Received: 03/19/2023] [Accepted: 05/30/2023] [Indexed: 08/08/2023]
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems offer automatic operation, rapid processing, and high accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. Thus, they have become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making them understandable not only for radiologists but also for general physicians without specialized imaging interpretation training. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
- Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Liqun Zhang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China

242
Lagua EB, Mun HS, Ampode KMB, Chem V, Kim YH, Yang CJ. Artificial Intelligence for Automatic Monitoring of Respiratory Health Conditions in Smart Swine Farming. Animals (Basel) 2023; 13:1860. [PMID: 37889795 PMCID: PMC10251864 DOI: 10.3390/ani13111860] [Received: 03/27/2023] [Revised: 05/31/2023] [Accepted: 05/31/2023] [Indexed: 10/29/2023]
Abstract
Porcine respiratory disease complex is an economically important disease in the swine industry. Early detection of the disease is crucial for immediate response to the disease at the farm level to prevent and minimize the potential damage that it may cause. In this paper, recent studies on the application of artificial intelligence (AI) in the early detection and monitoring of respiratory disease in swine have been reviewed. Most of the studies used coughing sounds as a feature of respiratory disease. The performance of different models and the methodologies used for cough recognition using AI were reviewed and compared. An AI technology available in the market was also reviewed. The device uses audio technology that can monitor and evaluate the herd's respiratory health status through cough-sound recognition and quantification. The device also has temperature and humidity sensors to monitor environmental conditions. It has an alarm system based on variations in coughing patterns and abrupt temperature changes. However, some limitations of the existing technology were identified. Substantial effort must be exerted to surmount the limitations to have a smarter AI technology for monitoring respiratory health status in swine.
Affiliation(s)
- Eddiemar B. Lagua
- Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea
- Interdisciplinary Program in IT-Bio Convergence System (BK21 Plus), Sunchon National University, 255 Jungangno, Suncheon 57922, Republic of Korea
- Hong-Seok Mun
- Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea
- Department of Multimedia Engineering, Sunchon National University, Suncheon 57922, Republic of Korea
- Keiven Mark B. Ampode
- Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea
- Department of Animal Science, College of Agriculture, Sultan Kudarat State University, Tacurong City 9800, Philippines
- Veasna Chem
- Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea
- Young-Hwa Kim
- Interdisciplinary Program in IT-Bio Convergence System (BK21 Plus), Chonnam National University, Gwangju 61186, Republic of Korea
- Chul-Ju Yang
- Animal Nutrition and Feed Science Laboratory, Department of Animal Science and Technology, Sunchon National University, Suncheon 57922, Republic of Korea
- Interdisciplinary Program in IT-Bio Convergence System (BK21 Plus), Sunchon National University, 255 Jungangno, Suncheon 57922, Republic of Korea

243
Choo YJ, Chang MC. Use of machine learning in the field of prosthetics and orthotics: A systematic narrative review. Prosthet Orthot Int 2023; 47:226-240. [PMID: 36811961 DOI: 10.1097/pxr.0000000000000199] [Received: 10/03/2021] [Accepted: 09/08/2022] [Indexed: 02/24/2023]
Abstract
Although machine learning is not yet used in clinical practice within the fields of prosthetics and orthotics, several studies on its use in these fields have been conducted. We intend to provide relevant knowledge by conducting a systematic review of prior studies on the use of machine learning in the fields of prosthetics and orthotics. We searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), Cochrane, Embase, and Scopus databases and retrieved studies published until July 18, 2021. The review included applications of machine learning algorithms to upper-limb and lower-limb prostheses and orthoses. The criteria of the Quality in Prognosis Studies tool were used to assess the methodological quality of the studies. A total of 13 studies were included in this systematic review. In the realm of prostheses, machine learning has been used to identify prostheses, select an appropriate prosthesis, support training after fitting, detect falls, and manage the temperature in the socket. In the field of orthotics, machine learning has been used to control real-time movement while wearing an orthosis and to predict the need for an orthosis. The studies included in this systematic review are limited to the algorithm development stage. However, if the developed algorithms are applied in clinical practice, they are expected to be useful for medical staff and users handling prostheses and orthoses.
Affiliation(s)
- Yoo Jin Choo
- Production R&D Division Advanced Interdisciplinary Team, Medical Device Development Center, Daegu-Gyeongbuk Medical Innovation Foundation, Daegu, South Korea
- Min Cheol Chang
- Department of Rehabilitation Medicine, College of Medicine, Yeungnam University, Daegu, South Korea

244
Said Y, Atri M, Albahar MA, Ben Atitallah A, Alsariera YA. Obstacle Detection System for Navigation Assistance of Visually Impaired People Based on Deep Learning Techniques. Sensors (Basel) 2023; 23:5262. [PMID: 37299996 DOI: 10.3390/s23115262] [Received: 04/09/2023] [Revised: 05/30/2023] [Accepted: 05/31/2023] [Indexed: 06/12/2023]
Abstract
Visually impaired people seek social integration, yet their mobility is restricted. They need a personal navigation system that can provide privacy and increase their confidence, improving their quality of life. In this paper, we propose an intelligent navigation assistance system for visually impaired people based on deep learning and neural architecture search (NAS). Deep learning models have achieved significant success through well-designed architectures, and NAS has since proved to be a promising technique for automatically searching for an optimal architecture and reducing the human effort of architecture design. However, this technique requires extensive computation, which limits its wide use; as a result, NAS has been little investigated for computer vision tasks, especially object detection. Therefore, we propose a fast NAS that searches for an object detection framework with efficiency in mind. The NAS is used to explore the feature pyramid network and the prediction stage of an anchor-free object detection model, and is based on a tailored reinforcement learning technique. The searched model was evaluated on a combination of the COCO dataset and the Indoor Object Detection and Recognition (IODR) dataset. The resulting model outperformed the original model by 2.6% in average precision (AP) with acceptable computational complexity. These results demonstrate the efficiency of the proposed NAS for custom object detection.
Affiliation(s)
- Yahia Said
- Remote Sensing Unit, College of Engineering, Northern Border University, Arar 91431, Saudi Arabia
- King Salman Center for Disability Research, Riyadh 11614, Saudi Arabia
- Laboratory of Electronics and Microelectronics (LR99ES30), University of Monastir, Monastir 5019, Tunisia
- Mohamed Atri
- College of Computer Sciences, King Khalid University, Abha 62529, Saudi Arabia
- Marwan Ali Albahar
- School of Computer Science, Umm Al-Qura University, Mecca 24382, Saudi Arabia
- Ahmed Ben Atitallah
- Department of Electrical Engineering, College of Engineering, Jouf University, Sakaka 72388, Saudi Arabia
245
Khalaf K, Terrin M, Jovani M, Rizkala T, Spadaccini M, Pawlak KM, Colombo M, Andreozzi M, Fugazza A, Facciorusso A, Grizzi F, Hassan C, Repici A, Carrara S. A Comprehensive Guide to Artificial Intelligence in Endoscopic Ultrasound. J Clin Med 2023; 12:3757. PMID: 37297953. DOI: 10.3390/jcm12113757.
Abstract
BACKGROUND Endoscopic ultrasound (EUS) is widely used for the diagnosis of bilio-pancreatic and gastrointestinal (GI) tract diseases, for the evaluation of subepithelial lesions, and for sampling of lymph nodes and solid masses located next to the GI tract. The role of artificial intelligence (AI) in healthcare is growing. This review aimed to provide an overview of the current state of AI in EUS, from imaging to pathological diagnosis and training. METHODS AI algorithms can assist in lesion detection and characterization in EUS by analyzing EUS images and identifying suspicious areas that may require further clinical evaluation or biopsy sampling. Deep learning techniques, such as convolutional neural networks (CNNs), have shown great potential for tumor identification and subepithelial lesion (SEL) evaluation by extracting important features from EUS images and using them to classify or segment the images. RESULTS AI models with new features can increase diagnostic accuracy, provide faster diagnoses, identify subtle differences in disease presentation that may be missed by the human eye, and provide more information and insight into disease pathology. CONCLUSIONS The integration of AI in EUS imaging and biopsy has the potential to improve diagnostic accuracy, leading to better patient outcomes and fewer repeated procedures in cases of non-diagnostic biopsies.
Affiliation(s)
- Kareem Khalaf
- Division of Gastroenterology, St. Michael's Hospital, University of Toronto, Toronto, ON M5S 1A1, Canada
- Maria Terrin
- Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Manol Jovani
- Division of Gastroenterology, Maimonides Medical Center, SUNY Downstate University, Brooklyn, NY 11219, USA
- Tommy Rizkala
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, 20089 Milan, Italy
- Marco Spadaccini
- Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Katarzyna M Pawlak
- Division of Gastroenterology, St. Michael's Hospital, University of Toronto, Toronto, ON M5S 1A1, Canada
- Matteo Colombo
- Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Marta Andreozzi
- Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Alessandro Fugazza
- Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Antonio Facciorusso
- Section of Gastroenterology, Department of Medical and Surgical Sciences, University of Foggia, 71122 Foggia, Italy
- Fabio Grizzi
- Department of Immunology and Inflammation, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Cesare Hassan
- Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, 20089 Milan, Italy
- Alessandro Repici
- Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, 20089 Milan, Italy
- Silvia Carrara
- Division of Gastroenterology and Digestive Endoscopy, Humanitas Research Hospital IRCCS, Rozzano, 20089 Milan, Italy
246
Sistaninejhad B, Rasi H, Nayeri P. A Review Paper about Deep Learning for Medical Image Analysis. Comput Math Methods Med 2023; 2023:7091301. PMID: 37284172. PMCID: PMC10241570. DOI: 10.1155/2023/7091301.
Abstract
Medical imaging refers to the process of obtaining images of internal organs for therapeutic purposes such as discovering or studying diseases. The primary objective of medical image analysis is to improve the efficacy of clinical research and treatment options. Deep learning has revamped medical image analysis, yielding excellent results in image processing tasks such as registration, segmentation, feature extraction, and classification. The prime motivations for this are the availability of computational resources and the resurgence of deep convolutional neural networks. Deep learning techniques are good at observing hidden patterns in images and supporting clinicians in achieving diagnostic precision, and have proven to be the most effective methods for organ segmentation, cancer detection, disease categorization, and computer-assisted diagnosis. Many deep learning approaches have been published to analyze medical images for various diagnostic purposes. In this paper, we review work exploiting current state-of-the-art deep learning approaches in medical image processing. We begin the survey with a synopsis of research on medical imaging based on convolutional neural networks. Second, we discuss popular pretrained models and generative adversarial networks that help improve convolutional networks' performance. Finally, to ease direct evaluation, we compile the performance metrics of deep learning models focusing on COVID-19 detection and child bone age prediction.
Affiliation(s)
- Habib Rasi
- Sahand University of Technology, East Azerbaijan, New City of Sahand, Iran
- Parisa Nayeri
- Khoy University of Medical Sciences, West Azerbaijan, Khoy, Iran
247
Inneci T, Badem H. Detection of Corneal Ulcer Using a Genetic Algorithm-Based Image Selection and Residual Neural Network. Bioengineering (Basel) 2023; 10:639. PMID: 37370570. DOI: 10.3390/bioengineering10060639.
Abstract
Corneal ulcer is one of the most devastating eye diseases, causing permanent damage. Few soft-computing techniques are available for detecting this disease. In recent years, deep neural networks (DNNs) have successfully solved numerous classification problems. However, many samples are needed to obtain reasonable classification performance with a DNN that has a huge number of layers and weights. Since collecting a dataset with a large number of samples is usually a difficult and time-consuming process, very large-scale pre-trained DNNs, such as AlexNet, ResNet, and DenseNet, can be adapted to classify a dataset with a small number of samples through transfer learning techniques. Although such pre-trained DNNs produce successful results in some cases, their classification performance can be low because of the many parameters and weights and the emergence of redundant features that repeat themselves across many layers. The proposed technique removes these unnecessary features by systematically selecting images in the layers using a genetic algorithm (GA). The proposed method was tested with ResNet on a small-scale dataset for classifying corneal ulcers. According to the results, the proposed method significantly increased classification performance compared with classical approaches.
Affiliation(s)
- Tugba Inneci
- Department of Informatics System, Kahramanmaras Sutcu Imam University, Kahramanmaras 46050, Türkiye
- Hasan Badem
- Department of Computer Engineering, Kahramanmaras Sutcu Imam University, Kahramanmaras 46050, Türkiye
248
Park SH, Kim YJ, Kim KG, Chung JW, Kim HC, Choi IY, You MW, Lee GP, Hwang JH. Comparison between single and serial computed tomography images in classification of acute appendicitis, acute right-sided diverticulitis, and normal appendix using EfficientNet. PLoS One 2023; 18:e0281498. PMID: 37224137. PMCID: PMC10208462. DOI: 10.1371/journal.pone.0281498.
Abstract
This study aimed to develop a convolutional neural network (CNN) using the EfficientNet algorithm for the automated classification of acute appendicitis, acute diverticulitis, and normal appendix, and to evaluate its diagnostic performance. We retrospectively enrolled 715 patients who underwent contrast-enhanced abdominopelvic computed tomography (CT). Of these, 246 patients had acute appendicitis, 254 had acute diverticulitis, and 215 had a normal appendix. Training, validation, and test data were obtained from 4,078 CT images (1,959 acute appendicitis, 823 acute diverticulitis, and 1,296 normal appendix cases) using both single and serial (RGB [red, green, blue]) image methods. We augmented the training dataset to avoid training disturbances caused by unbalanced CT datasets. For classification of the normal appendix, the RGB serial image method showed slightly higher sensitivity (89.66% vs. 87.89%; p = 0.244), accuracy (93.62% vs. 92.35%), and specificity (95.47% vs. 94.43%) than the single image method. For classification of acute diverticulitis, the RGB serial image method also yielded slightly higher sensitivity (83.35% vs. 80.44%; p = 0.019), accuracy (93.48% vs. 92.15%), and specificity (96.04% vs. 95.12%) than the single image method. Moreover, the mean areas under the receiver operating characteristic curve (AUCs) were significantly higher for acute appendicitis (0.951 vs. 0.937; p < 0.0001), acute diverticulitis (0.972 vs. 0.963; p = 0.0025), and normal appendix (0.979 vs. 0.972; p = 0.0101) with the RGB serial image method than with the single image method. Thus, acute appendicitis, acute diverticulitis, and normal appendix could be accurately distinguished on CT images by our model, particularly when using the RGB serial image method.
Affiliation(s)
- So Hyun Park
- Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, South Korea
- Young Jae Kim
- Department of Biomedical Engineering, Gachon University, Gil Medical Center, Incheon, South Korea
- Kwang Gi Kim
- Department of Biomedical Engineering, Gachon University, Gil Medical Center, Incheon, South Korea
- Jun-Won Chung
- Division of Gastroenterology, Department of Internal Medicine, Gil Medical Center, Gachon University College of Medicine, Incheon, South Korea
- Hyun Cheol Kim
- Department of Radiology, Kyung Hee University Hospital at Gangdong, Seoul, South Korea
- In Young Choi
- Department of Radiology, Korea University Ansan Hospital, Ansan, South Korea
- Myung-Won You
- Department of Radiology, Kyung Hee University Hospital, Seoul, South Korea
- Gi Pyo Lee
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon, South Korea
- Jung Han Hwang
- Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, South Korea
249
Hu W, Li X, Li C, Li R, Jiang T, Sun H, Huang X, Grzegorzek M, Li X. A state-of-the-art survey of artificial neural networks for whole-slide image analysis: From popular convolutional neural networks to potential visual transformers. Comput Biol Med 2023; 161:107034. PMID: 37230019. DOI: 10.1016/j.compbiomed.2023.107034.
Abstract
In recent years, with the advancement of computer-aided diagnosis (CAD) technology and whole-slide imaging (WSI), histopathological WSI has gradually come to play a crucial role in the diagnosis and analysis of diseases. To increase the objectivity and accuracy of pathologists' work, artificial neural network (ANN) methods are generally needed for the segmentation, classification, and detection of histopathological WSIs. However, existing review papers focus only on equipment hardware and on development status and trends, and do not summarize in detail the state-of-the-art neural networks used for whole-slide image analysis. In this paper, WSI analysis methods based on ANNs are reviewed. First, the development status of WSI and ANN methods is introduced. Second, we summarize the common ANN methods. Next, we discuss publicly available WSI datasets and evaluation metrics. The ANN architectures for WSI processing are then divided into classical neural networks and deep neural networks (DNNs) and analyzed. Finally, the application prospects of these analytical methods in the field are discussed; one important potential method is visual transformers.
Affiliation(s)
- Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xintong Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Rui Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
- Xinyu Huang
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li
- Cancer Hospital of China Medical University, Shenyang, China
250
Umer M, Sadiq S, Alhebshi RM, Sabir MF, Alsubai S, Al Hejaili A, Khayyat MM, Eshmawi AA, Mohamed A. IoT based smart home automation using blockchain and deep learning models. PeerJ Comput Sci 2023; 9:e1332. PMID: 37346725. PMCID: PMC10280418. DOI: 10.7717/peerj-cs.1332.
Abstract
For the past few years, the concept of the smart home has gained popularity. The major challenges concerning a smart home include data security, privacy, authentication, secure identification, and automated decision-making by Internet of Things (IoT) devices. Existing home automation systems address only some of these challenges; home automation that is reliable and safe while also providing automated decision-making is an absolute necessity. The current study proposes a deep learning-driven smart home system that integrates a convolutional neural network (CNN) for automated decision-making, such as classifying a device as "ON" or "OFF" based on its utilization at home. Additionally, to provide a decentralized, secure, and reliable mechanism for the authentication and identification of IoT devices, we integrated the emerging blockchain technology into this study. The proposed system is fundamentally comprised of a variety of sensors, a 5 V relay circuit, and a Raspberry Pi, which operates as a server and maintains a database of each device in use. Moreover, an Android application was developed that communicates with the Raspberry Pi interface using an Apache server and an HTTP web interface. The practicality of the proposed system for home automation was tested and evaluated in the lab and in real time to ensure its efficacy. The study also ensures that the technology and hardware used in the proposed smart home system are inexpensive, widely available, and scalable. Furthermore, a discussion of the risk analysis's implications, including cyber threats, hardware security, and cyber attacks, highlights the need for a more comprehensive security and privacy model to be incorporated into the design phase of smart homes. The experimental results emphasize the significance of the proposed system and validate its usability in the real world.
Affiliation(s)
- Muhammad Umer
- Department of Computer Science & Information Technology, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
- Saima Sadiq
- Department of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan, Pakistan
- Reemah M. Alhebshi
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdul Aziz University, Jeddah, Saudi Arabia
- Maha Farouk Sabir
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdul Aziz University, Jeddah, Saudi Arabia
- Shtwai Alsubai
- Department of Computer Science, College of Computer Engineering and Sciences in Al-Kharj, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Abdullah Al Hejaili
- Faculty of Computers & Information Technology, Computer Science Department, University of Tabuk, Tabuk, Saudi Arabia
- Mashael M. Khayyat
- Department of Information Systems and Technology, Faculty of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
- Ala’ Abdulmajid Eshmawi
- Department of Cybersecurity, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
- Abdullah Mohamed
- University Research Centre, Future University in Egypt, New Cairo, Egypt