1. Bacon EJ, He D, Achi NAD, Wang L, Li H, Yao-Digba PDZ, Monkam P, Qi S. Neuroimage analysis using artificial intelligence approaches: a systematic review. Med Biol Eng Comput 2024; 62:2599-2627. PMID: 38664348. DOI: 10.1007/s11517-024-03097-w.
Abstract
In the contemporary era, artificial intelligence (AI) has undergone a transformative evolution, exerting a profound influence on neuroimaging data analysis and significantly elevating our comprehension of intricate brain functions. This study investigates the ramifications of employing AI techniques on neuroimaging data, with the specific objective of improving diagnostic capabilities and contributing to the overall progress of the field. A systematic search was conducted in prominent scientific databases, including PubMed, IEEE Xplore, and Scopus, curating 456 relevant articles on AI-driven neuroimaging analysis spanning 2013 to 2023. To maintain rigor and credibility, stringent inclusion criteria, quality assessments, and precise data extraction protocols were consistently enforced throughout this review. Following a rigorous selection process, 104 studies were selected for review, focusing on diverse neuroimaging modalities with an emphasis on mental and neurological disorders. Among these, 19.2% addressed mental illness and 80.7% focused on neurological disorders. We found that the prevailing clinical tasks were disease classification (58.7%) and lesion segmentation (28.9%), whereas image reconstruction constituted 7.3% and image regression and prediction tasks represented 9.6%. AI-driven neuroimaging analysis holds tremendous potential, transforming both research and clinical applications. Machine learning and deep learning algorithms outperform traditional methods, reshaping the field significantly.
Affiliation(s)
- Eric Jacob Bacon
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Dianning He
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Lanbo Wang
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Han Li
- Department of Neurosurgery, Shengjing Hospital of China Medical University, Shenyang, China
- Patrice Monkam
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
2. Wang R, Chen ZS. Large-scale foundation models and generative AI for BigData neuroscience. Neurosci Res 2024; S0168-0102(24)00075-0. PMID: 38897235. DOI: 10.1016/j.neures.2024.06.003.
Abstract
Recent advances in machine learning have led to revolutionary breakthroughs in computer games, image and natural language understanding, and scientific discovery. Foundation models and large-scale language models (LLMs) have recently achieved human-like intelligence thanks to BigData. With the help of self-supervised learning (SSL) and transfer learning, these models may potentially reshape the landscapes of neuroscience research and make a significant impact on the future. Here we present a mini-review on recent advances in foundation models and generative AI models as well as their applications in neuroscience, including natural language and speech, semantic memory, brain-machine interfaces (BMIs), and data augmentation. We argue that this paradigm-shift framework will open new avenues for many neuroscience research directions and discuss the accompanying challenges and opportunities.
Affiliation(s)
- Ran Wang
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- Zhe Sage Chen
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Neuroscience and Physiology, Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA
3. Hajim WI, Zainudin S, Mohd Daud K, Alheeti K. Optimized models and deep learning methods for drug response prediction in cancer treatments: a review. PeerJ Comput Sci 2024; 10:e1903. PMID: 38660174. PMCID: PMC11042005. DOI: 10.7717/peerj-cs.1903.
Abstract
Recent advancements in deep learning (DL) have played a crucial role in helping experts develop personalized healthcare services, particularly in drug response prediction (DRP) for cancer patients. DL techniques have contributed significantly to this field and proven indispensable in medicine. This review analyzes the effectiveness of various DL models in making these predictions, drawing on research published from 2017 to 2023. We used the VOS-Viewer 1.6.18 software to create a word cloud from the titles and abstracts of the selected studies. This study offers insights into the focus areas within DL models used for drug response. The word cloud revealed a strong link between certain keywords and grouped themes, highlighting terms such as deep learning, machine learning, precision medicine, precision oncology, drug response prediction, and personalized medicine. To advance DRP using DL, researchers need to enhance the models' generalizability and interoperability. It is also crucial to develop models that not only accurately represent various architectures but also simplify them, balancing complexity with predictive capability. In the future, researchers should combine methods that make DL models easier to understand; this will make DRP more transparent and help clinicians trust the decisions made by DL models in cancer DRP.
Affiliation(s)
- Wesam Ibrahim Hajim
- Department of Applied Geology, College of Sciences, Tikrit University, Tikrit, Salah ad Din, Iraq
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Selangor, Malaysia
- Suhaila Zainudin
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Selangor, Malaysia
- Kauthar Mohd Daud
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Selangor, Malaysia
- Khattab Alheeti
- Department of Computer Networking Systems, College of Computer Sciences and Information Technology, University of Anbar, Al Anbar, Ramadi, Iraq
4. Wood DA, Townend M, Guilhem E, Kafiabadi S, Hammam A, Wei Y, Al Busaidi A, Mazumder A, Sasieni P, Barker GJ, Ourselin S, Cole JH, Booth TC. Optimising brain age estimation through transfer learning: A suite of pre-trained foundation models for improved performance and generalisability in a clinical setting. Hum Brain Mapp 2024; 45:e26625. PMID: 38433665. PMCID: PMC10910262. DOI: 10.1002/hbm.26625.
Abstract
Estimated age from brain MRI data has emerged as a promising biomarker of neurological health. However, the absence of large, diverse, and clinically representative training datasets, along with the complexity of managing heterogeneous MRI data, presents significant barriers to the development of accurate and generalisable models appropriate for clinical use. Here, we present a deep learning framework trained on routine clinical data (N up to 18,890, age range 18-96 years). We trained five separate models for accurate brain age prediction (all with mean absolute error ≤4.0 years, R2 ≥ 0.86) across five different MRI sequences (T2-weighted, T2-FLAIR, T1-weighted, diffusion-weighted, and gradient-recalled echo T2*-weighted). Our trained models offer dual functionality. First, they have the potential to be directly employed on clinical data. Second, they can be used as foundation models for further refinement to accommodate a range of other MRI sequences (and therefore a range of clinical scenarios which employ such sequences). This adaptation process, enabled by transfer learning, proved effective in our study across a range of MRI sequences and scan orientations, including those which differed considerably from the original training datasets. Crucially, our findings suggest that this approach remains viable even with limited data availability (as low as N = 25 for fine-tuning), thus broadening the application of brain age estimation to more diverse clinical contexts and patient populations. By making these models publicly available, we aim to provide the scientific community with a versatile toolkit, promoting further research in brain age prediction and related areas.
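The low-data adaptation this abstract describes can be sketched schematically. The code below is not the authors' released pipeline; it assumes a hypothetical frozen feature extractor (a fixed random projection standing in for a pre-trained backbone) and fits only a new linear head on a tiny fine-tuning set (N = 25), mirroring the idea that transfer learning keeps most weights fixed when data are scarce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen, pre-trained backbone:
# a fixed random projection from image features to an embedding.
W_backbone = rng.normal(size=(256, 32))

def extract_features(x):
    """Frozen backbone: no weights are updated during fine-tuning."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU embedding

# Tiny fine-tuning set (N = 25), as in the low-data regime reported.
X = rng.normal(size=(25, 256))
age = rng.uniform(18, 96, size=25)  # target: chronological age

# "Fine-tuning" here = fitting only a new ridge-regression head
# on the frozen features.
Z = extract_features(X)
lam = 1.0
head = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ age)

pred = extract_features(X) @ head
mae = np.abs(pred - age).mean()  # mean absolute error in years
```

In a real setting the backbone would be one of the published sequence-specific models and the head (or the last few layers) would be optimized by gradient descent; the ridge solution is used here only to keep the sketch self-contained.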
Affiliation(s)
- David A. Wood
- School of Biomedical Engineering and Imaging Sciences, Rayne Institute, King's College London, London, UK
- Matthew Townend
- School of Biomedical Engineering and Imaging Sciences, Rayne Institute, King's College London, London, UK
- Emily Guilhem
- King's College Hospital NHS Foundation Trust, London, UK
- Ahmed Hammam
- King's College Hospital NHS Foundation Trust, London, UK
- Yiran Wei
- School of Biomedical Engineering and Imaging Sciences, Rayne Institute, King's College London, London, UK
- Peter Sasieni
- School of Biomedical Engineering and Imaging Sciences, Rayne Institute, King's College London, London, UK
- Gareth J. Barker
- Department of Neuroimaging, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK
- Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, Rayne Institute, King's College London, London, UK
- James H. Cole
- Dementia Research Centre, Institute of Neurology, University College London, London, UK
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Thomas C. Booth
- School of Biomedical Engineering and Imaging Sciences, Rayne Institute, King's College London, London, UK
- King's College Hospital NHS Foundation Trust, London, UK
5. Guzmán Chacón E, Ovando-Tellez M, Thiebaut de Schotten M, Forkel SJ. Embracing digital innovation in neuroscience: 2023 in review at NEUROCCINO. Brain Struct Funct 2024; 229:251-255. PMID: 38386031. PMCID: PMC10917830. DOI: 10.1007/s00429-024-02768-6.
Affiliation(s)
- Eva Guzmán Chacón
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marcela Ovando-Tellez
- University Bordeaux, CNRS, CEA, IMN, UMR 5293, GIN, 33000, Bordeaux, France
- Brain Connectivity and Behaviour Laboratory, Paris, France
- Michel Thiebaut de Schotten
- University Bordeaux, CNRS, CEA, IMN, UMR 5293, GIN, 33000, Bordeaux, France
- Brain Connectivity and Behaviour Laboratory, Paris, France
- Stephanie J Forkel
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Brain Connectivity and Behaviour Laboratory, Paris, France
- Centre for Neuroimaging Sciences, Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
6. Oliveira M, Wilming R, Clark B, Budding C, Eitel F, Ritter K, Haufe S. Benchmarking the influence of pre-training on explanation performance in MR image classification. Front Artif Intell 2024; 7:1330919. PMID: 38469161. PMCID: PMC10925627. DOI: 10.3389/frai.2024.1330919.
Abstract
Convolutional Neural Networks (CNNs) are frequently and successfully used in medical prediction tasks. They are often used in combination with transfer learning, leading to improved performance when training data for the task are scarce. The resulting models are highly complex and typically do not provide any insight into their predictive mechanisms, motivating the field of "explainable" artificial intelligence (XAI). However, previous studies have rarely quantitatively evaluated the "explanation performance" of XAI methods against ground-truth data, and transfer learning and its influence on objective measures of explanation performance have not been investigated. Here, we propose a benchmark dataset that allows for quantifying explanation performance in a realistic magnetic resonance imaging (MRI) classification task. We employ this benchmark to understand the influence of transfer learning on the quality of explanations. Experimental results show that popular XAI methods applied to the same underlying model differ vastly in performance, even when considering only correctly classified examples. We further observe that explanation performance strongly depends on the task used for pre-training and the number of CNN layers pre-trained. These results hold after correcting for a substantial correlation between explanation and classification performance.
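The notion of scoring an explanation against ground truth can be illustrated with a toy example. Everything below (the linear "model", the synthetic lesion mask, the overlap metric) is an illustrative assumption, not the paper's benchmark: a saliency map is computed for a model whose informative weights are known to lie in a masked region, and explanation performance is taken as the fraction of attribution mass falling inside that region:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 8x8 "image" with a known ground-truth lesion mask (top-left block).
mask = np.zeros((8, 8), dtype=bool)
mask[:3, :3] = True

# A linear classifier whose weights concentrate on the lesion region.
weights = rng.normal(scale=0.1, size=(8, 8))
weights[mask] += 2.0

x = rng.normal(size=(8, 8))

# Gradient-style attribution for a linear model: |w * x|.
saliency = np.abs(weights * x)

# Explanation performance: share of attribution mass inside the
# ground-truth mask (one of many possible overlap metrics).
precision = saliency[mask].sum() / saliency.sum()
```

Comparing this kind of score across XAI methods and pre-training regimes is the evaluation pattern the study formalizes with realistic MRI data.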
Affiliation(s)
- Marta Oliveira
- Division 8.44, Physikalisch-Technische Bundesanstalt, Berlin, Germany
- Rick Wilming
- Computer Science Department, Technische Universität Berlin, Berlin, Germany
- Benedict Clark
- Division 8.44, Physikalisch-Technische Bundesanstalt, Berlin, Germany
- Céline Budding
- Berlin Center for Advanced Neuroimaging (BCAN), Charité – Universitätsmedizin Berlin, Berlin, Germany
- Fabian Eitel
- Berlin Center for Advanced Neuroimaging (BCAN), Charité – Universitätsmedizin Berlin, Berlin, Germany
- Kerstin Ritter
- Berlin Center for Advanced Neuroimaging (BCAN), Charité – Universitätsmedizin Berlin, Berlin, Germany
- Stefan Haufe
- Division 8.44, Physikalisch-Technische Bundesanstalt, Berlin, Germany
- Computer Science Department, Technische Universität Berlin, Berlin, Germany
- Berlin Center for Advanced Neuroimaging (BCAN), Charité – Universitätsmedizin Berlin, Berlin, Germany
7. Kim J, Wang SG, Lee JC, Cheon YI, Shin SC, Lim DW, Jang DI, Bhattacharjee S, Hwang YB, Choi HK, Kwon I, Kim SJ, Kwon SB. Evaluation of Vertical Level Differences Between Left and Right Vocal Folds Using Artificial Intelligence System in Excised Canine Larynx. J Voice 2024; S0892-1997(23)00385-5. PMID: 38216386. DOI: 10.1016/j.jvoice.2023.11.025.
Abstract
Objectives: This study aimed to establish an artificial intelligence (AI) system to classify vertical level differences between vocal folds during vocalization and to evaluate the accuracy of the classification. Methods: We designed models with different depths between the right and left vocal folds using an excised canine larynx. Video files for the dataset were obtained using a high-speed camera system and a color complementary metal oxide semiconductor camera with global shutter. The datasets were divided into training, validation, and testing. We used 20,000 images for building the model and 8,000 images for testing. To perform deep-learning multiclass classification and to estimate the vertical level difference, we introduced DenseNet121-ConvLSTM. Results: The model was trained several times using different numbers of epochs. We achieved the best results at 100 epochs, with a batch size of 16 during training. The proposed DenseNet121-ConvLSTM model achieved classification accuracies of 99.5% and 88.0% for training and testing, respectively. After verification using an external dataset, the overall accuracy, precision, recall, and F1-score were 90.8%, 91.6%, 90.9%, and 91.2%, respectively. Conclusions: The newly developed AI system may be an easy and accurate method for classifying superior and inferior vertical level differences between vocal folds. Thus, this AI system can be applied and may help in the assessment of vertical level differences in patients with unilateral vocal fold paralysis.
Affiliation(s)
- Jaewon Kim
- Department of Cognitive Science, Pusan National University, Doctor's Course, Busan, South Korea; Department of Otorhinolaryngology, Head and Neck Surgery, Pusan National University Yangsan Hospital, Yangsan, Gyeongsangnam-do, South Korea
- Soo-Geun Wang
- Department of Otorhinolaryngology, Head and Neck Surgery, College of Medicine, Pusan National University and Medical Research Institute, Pusan National University Hospital, Busan, South Korea
- Jin-Choon Lee
- Department of Otorhinolaryngology, Head and Neck Surgery, Pusan National University School of Medicine, Pusan National University Yangsan Hospital, Yangsan, Gyeongsangnam-do, South Korea
- Yong-Il Cheon
- Department of Otorhinolaryngology, Head and Neck Surgery, Biomedical Research Institute, Pusan National University School of Medicine, Pusan National University Hospital, Busan, South Korea
- Sung-Chan Shin
- Department of Otorhinolaryngology, Head and Neck Surgery, Biomedical Research Institute, Pusan National University School of Medicine, Pusan National University Hospital, Busan, South Korea
- Dong-Won Lim
- Department of Otorhinolaryngology, Head and Neck Surgery, Pusan National University Hospital, Busan, South Korea
- Dae-Ik Jang
- Department of Otorhinolaryngology, Head and Neck Surgery, Kosin University Gospel Hospital, Kosin University College of Medicine, Busan, South Korea
- Yeong-Byn Hwang
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae, South Korea
- Heung-Kook Choi
- Department of Digital Anti-Aging Healthcare, u-AHRC, Inje University, Gimhae, South Korea; Artificial Intelligence Research Center, JLK Inc., Seoul, South Korea
- Ickhwan Kwon
- Platform Development Headquarters, Autonomous A2Z, Daegu, South Korea
- Seon-Jong Kim
- Department of Applied IT and Engineering, Pusan National University, Miryang, Gyeongsangnam-do, South Korea
- Soon-Bok Kwon
- Department of Humanities, Language and Information, Pusan National University, Busan, South Korea
8. Cabrera-León Y, Báez PG, Fernández-López P, Suárez-Araujo CP. Neural Computation-Based Methods for the Early Diagnosis and Prognosis of Alzheimer's Disease Not Using Neuroimaging Biomarkers: A Systematic Review. J Alzheimers Dis 2024; 98:793-823. PMID: 38489188. PMCID: PMC11091566. DOI: 10.3233/jad-231271.
Abstract
Background: The growing number of older adults in recent decades has led to more prevalent geriatric diseases, such as strokes and dementia. Alzheimer's disease (AD), the most common type of dementia, has therefore become more frequent too. Objective: The goals of this work are to present state-of-the-art studies focused on the automatic diagnosis and prognosis of AD and its early stages, mainly mild cognitive impairment, and to predict how research on this topic may change in the future. Methods: Articles found in the existing literature needed to fulfill several selection criteria. Among others, their classification methods were based on artificial neural networks (ANNs), including deep learning, and used data not derived from brain signals or neuroimaging techniques. Considering our selection criteria, 42 articles published in the last decade were finally selected. Results: The most medically significant results are shown. Similar quantities of articles based on shallow and deep ANNs were found. Recurrent neural networks and transformers were common with speech or in longitudinal studies. Convolutional neural networks (CNNs) were popular with gait or combined with others in modular approaches. More than one third of the cross-sectional studies utilized multimodal data. Non-public datasets were frequently used in cross-sectional studies, whereas the opposite was true in longitudinal ones. The most popular databases are indicated, which will be helpful for future researchers in this field. Conclusions: The introduction of CNNs in the last decade and their superb results with neuroimaging data did not negatively affect the usage of other modalities. In fact, new ones emerged.
Affiliation(s)
- Ylermi Cabrera-León
- Instituto Universitario de Cibernética, Empresa y Sociedad, Universidad de Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Canary Islands, Spain
- Patricio García Báez
- Departamento de Ingeniería Informática y de Sistemas, Escuela Superior de Ingeniería y Tecnología, Universidad de La Laguna, San Cristóbal de La Laguna, Canary Islands, Spain
- Pablo Fernández-López
- Instituto Universitario de Cibernética, Empresa y Sociedad, Universidad de Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Canary Islands, Spain
- Carmen Paz Suárez-Araujo
- Instituto Universitario de Cibernética, Empresa y Sociedad, Universidad de Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Canary Islands, Spain
9. Kang DW, Park GH, Ryu WS, Schellingerhout D, Kim M, Kim YS, Park CY, Lee KJ, Han MK, Jeong HG, Kim DE. Strengthening deep-learning models for intracranial hemorrhage detection: strongly annotated computed tomography images and model ensembles. Front Neurol 2023; 14:1321964. PMID: 38221995. PMCID: PMC10784380. DOI: 10.3389/fneur.2023.1321964.
Abstract
Background and purpose: Multiple attempts at intracranial hemorrhage (ICH) detection using deep-learning techniques have been plagued by clinical failures. We aimed to compare the performance of a deep-learning algorithm for ICH detection trained on strongly and weakly annotated datasets, and to assess whether a weighted ensemble model that integrates separate models trained on datasets with different ICH subtypes improves performance. Methods: We used brain CT scans from the Radiological Society of North America (27,861 CT scans, 3,528 ICHs) and AI-Hub (53,045 CT scans, 7,013 ICHs) for training. DenseNet121, InceptionResNetV2, MobileNetV2, and VGG19 were trained on strongly and weakly annotated datasets and compared using independent external test datasets. We then developed a weighted ensemble model combining separate models trained on all ICH, subdural hemorrhage (SDH), subarachnoid hemorrhage (SAH), and small-lesion ICH cases. The final weighted ensemble model was compared to four well-known deep-learning models. After external testing, six neurologists reviewed 91 ICH cases that were difficult for AI and humans. Results: InceptionResNetV2, MobileNetV2, and VGG19 performed better when trained on strongly annotated datasets. A weighted ensemble model combining models trained on SDH, SAH, and small-lesion ICH cases had a higher AUC than a model trained on all ICH cases only. This model outperformed four deep-learning models (AUC [95% CI]: ensemble model, 0.953 [0.938-0.965]; InceptionResNetV2, 0.852 [0.828-0.873]; DenseNet121, 0.875 [0.852-0.895]; VGG19, 0.796 [0.770-0.821]; MobileNetV2, 0.650 [0.620-0.680]; p < 0.0001). In addition, the case review showed that a better understanding and management of difficult cases may facilitate clinical use of ICH detection algorithms. Conclusion: We propose a weighted ensemble model for ICH detection, trained on large-scale, strongly annotated CT scans, as no single model can capture all aspects of complex tasks.
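The weighted-ensemble idea, combining subtype-specific detectors into one score, can be sketched as follows. The probabilities and weights below are made up for illustration and are not taken from the paper; in practice the weights would be tuned on a validation set:

```python
import numpy as np

# Per-scan ICH probabilities from three hypothetical subtype-specific
# models (e.g. SDH-, SAH-, and small-lesion-focused), for 5 scans.
p_sdh   = np.array([0.95, 0.10, 0.40, 0.80, 0.05])
p_sah   = np.array([0.20, 0.85, 0.30, 0.70, 0.10])
p_small = np.array([0.60, 0.30, 0.90, 0.60, 0.15])

# Assumed ensemble weights (one per sub-model).
w = np.array([0.4, 0.3, 0.3])

probs = np.vstack([p_sdh, p_sah, p_small])  # shape (3, 5)
p_ensemble = w @ probs                      # weighted average per scan

is_ich = p_ensemble >= 0.5                  # final detection decision
```

The benefit reported in the study comes from each sub-model specializing in a hemorrhage pattern that a single all-ICH model tends to under-fit, with the weighted average recovering a single calibrated score.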
Affiliation(s)
- Dong-Wan Kang
- Department of Public Health, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Department of Neurology, Gyeonggi Provincial Medical Center, Icheon Hospital, Icheon, Republic of Korea
- Department of Neurology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Gi-Hun Park
- JLK Inc., Artificial Intelligence Research Center, Seoul, Republic of Korea
- Wi-Sun Ryu
- JLK Inc., Artificial Intelligence Research Center, Seoul, Republic of Korea
- Dawid Schellingerhout
- Department of Neuroradiology and Imaging Physics, The University of Texas M.D. Anderson Cancer Center, Houston, TX, United States
- Museong Kim
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Hospital Medicine Center, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Yong Soo Kim
- Department of Neurology, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul, Republic of Korea
- Chan-Young Park
- Department of Neurology, Chung-Ang University Hospital, Seoul, Republic of Korea
- Keon-Joo Lee
- Department of Neurology, Korea University Guro Hospital, Seoul, Republic of Korea
- Moon-Ku Han
- Department of Neurology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Han-Gil Jeong
- Department of Neurology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Dong-Eog Kim
- Department of Neurology, Dongguk University Ilsan Hospital, Goyang, Republic of Korea
- National Priority Research Center for Stroke, Goyang, Republic of Korea
10. Mudeng V, Farid MN, Ayana G, Choe SW. Domain and Histopathology Adaptations-Based Classification for Malignancy Grading System. Am J Pathol 2023; 193:2080-2098. PMID: 37673327. DOI: 10.1016/j.ajpath.2023.07.007.
Abstract
Accurate proliferation rate quantification can be used to devise an appropriate treatment for breast cancer. Pathologists use breast tissue biopsy glass slides stained with hematoxylin and eosin to obtain grading information. However, this manual evaluation may lead to high costs and be ineffective because diagnosis depends on the facility and the pathologists' insights and experiences. A convolutional neural network can act as a computer-based observer to improve clinicians' capacity in grading breast cancer. Therefore, this study proposes a novel scheme for automatic breast cancer malignancy grading from invasive ductal carcinoma. The proposed classifiers implement multistage transfer learning incorporating domain and histopathologic transformations. Domain adaptation using pretrained models, such as InceptionResNetV2, InceptionV3, NASNet-Large, ResNet50, ResNet101, VGG19, and Xception, was applied to classify the ×40 magnification BreaKHis dataset into eight classes. Subsequently, InceptionV3 and Xception, which contain the domain and histopathology pretrained weights, were determined to be the best for this study and used to categorize the Databiox database into grades 1, 2, or 3. To provide a comprehensive report, this study offers a patchless automated grading system for magnification-dependent and magnification-independent classifications. With an overall accuracy (mean ± SD) of 90.17% ± 3.08% to 97.67% ± 1.09% and an F1 score of 0.9013 to 0.9760 for magnification-dependent classification, the classifiers in this work achieved outstanding performance. The proposed approach could be used for breast cancer grading systems in clinical settings.
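The multistage idea, first adapting a generic pretrained model to histopathology data and then reusing those weights for the grading task, can be sketched in a simplified form. The two linear "stages" below are illustrative stand-ins for the paper's InceptionV3/Xception pipeline, with made-up data sizes echoing the 8-class BreaKHis step followed by the 3-grade Databiox step:

```python
import numpy as np

rng = np.random.default_rng(3)

def train_linear_head(Z, y_onehot, lam=1.0):
    """Ridge-regression head on features Z for one-hot targets."""
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]),
                           Z.T @ y_onehot)

def one_hot(labels, k):
    return np.eye(k)[labels]

# Stage 1 (histopathology adaptation): an 8-class task, standing in
# for the BreaKHis classification step.
X_stage1 = rng.normal(size=(200, 16))
y_stage1 = rng.integers(0, 8, size=200)
W_stage1 = train_linear_head(X_stage1, one_hot(y_stage1, 8))

# Stage 2 (grading): reuse stage-1 outputs as transferred features
# for the 3-grade task, standing in for the Databiox step.
X_stage2 = rng.normal(size=(60, 16))
y_stage2 = rng.integers(0, 3, size=60)
Z_stage2 = X_stage2 @ W_stage1          # transferred representation
W_grade = train_linear_head(Z_stage2, one_hot(y_stage2, 3))

grades = (Z_stage2 @ W_grade).argmax(axis=1)  # predicted grade 0-2
```

In the actual study each stage fine-tunes a deep CNN rather than fitting a linear map; the point of the sketch is only the chaining of representations across the two adaptation stages.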
Affiliation(s)
- Vicky Mudeng
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Mifta Nur Farid
- Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
- Se-Woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
11. Alwuthaynani MM, Abdallah ZS, Santos-Rodriguez R. A robust class decomposition-based approach for detecting Alzheimer's progression. Exp Biol Med (Maywood) 2023; 248:2514-2525. PMID: 38059336. PMCID: PMC10854473. DOI: 10.1177/15353702231211880.
Abstract
Computer-aided diagnosis of Alzheimer's disease (AD) is a rapidly growing field with the possibility to be utilized in practice. Deep learning has received much attention in detecting AD from structural magnetic resonance imaging (sMRI). However, training a convolutional neural network from scratch is problematic because it requires a lot of annotated data and additional computational time. Transfer learning can offer a promising and practical solution by transferring information learned from other image recognition tasks to medical image classification. Another issue is irregularity in the dataset distribution. A common classification issue in datasets is class imbalance, where the distribution of samples among the classes is biased; for example, a dataset may contain more instances of some classes than others. Class imbalance is challenging because most machine learning algorithms assume that each class should have an equal number of samples, and models consequently perform poorly in prediction. Class decomposition can address this problem by making a dataset's class boundaries easier to learn. Motivated by these approaches, we propose a class decomposition transfer learning (CDTL) approach that employs VGG19, AlexNet, and an entropy-based technique to detect AD from sMRI. This study aims to assess the robustness of the CDTL approach in detecting the cognitive decline of AD using data from various ADNI cohorts, to determine whether comparable classification accuracy would be obtained across cohorts. Furthermore, the proposed model achieved state-of-the-art performance in predicting mild cognitive impairment (MCI)-to-AD conversion, with an accuracy of 91.45%.
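The class-decomposition step described here, splitting each parent class into clustered subclasses so that class boundaries are easier to learn, can be sketched with scikit-learn. The toy data, cluster count, and logistic-regression classifier below are illustrative assumptions, not the paper's CDTL pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy features for two imbalanced parent classes (e.g. AD vs. control).
X0 = rng.normal(loc=0.0, size=(120, 4))   # majority class
X1 = rng.normal(loc=3.0, size=(30, 4))    # minority class
X = np.vstack([X0, X1])
y = np.array([0] * 120 + [1] * 30)

# Class decomposition: cluster each parent class into k subclasses
# and relabel samples with subclass ids before training.
k = 2
y_sub = np.empty_like(y)
for cls in np.unique(y):
    idx = np.where(y == cls)[0]
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[idx])
    y_sub[idx] = cls * k + km.labels_   # unique id per (class, cluster)

clf = LogisticRegression(max_iter=1000).fit(X, y_sub)

# At prediction time, map subclass ids back to parent classes.
y_pred = clf.predict(X) // k
accuracy = (y_pred == y).mean()
```

Training on the finer subclass labels lets the classifier carve each parent class into simpler regions; collapsing the predictions back with integer division recovers the original diagnosis labels.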
Affiliation(s)
- Maha M Alwuthaynani
- University of Bristol, Bristol BS8 1TH, UK
- College of Computer Science & Information Systems, Najran University, Najran 61441, Saudi Arabia
12
Tsang B, Gupta A, Takahashi MS, Baffi H, Ola T, Doria AS. Applications of artificial intelligence in magnetic resonance imaging of primary pediatric cancers: a scoping review and CLAIM score assessment. Jpn J Radiol 2023; 41:1127-1147. [PMID: 37395982 DOI: 10.1007/s11604-023-01437-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Accepted: 04/18/2023] [Indexed: 07/04/2023]
Abstract
PURPOSES To review the uses of AI for magnetic resonance (MR) imaging assessment of primary pediatric cancers and identify common literature topics and knowledge gaps. To assess the adherence of the existing literature to the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines. MATERIALS AND METHODS A scoping literature search of the MEDLINE, EMBASE, and Cochrane databases was performed, including studies of > 10 subjects with a mean age of < 21 years. Relevant data were summarized into three categories based on AI application: detection; characterization; and treatment and monitoring. Readers independently scored each study using the CLAIM guidelines, and inter-rater reproducibility was assessed using intraclass correlation coefficients. RESULTS Twenty-one studies were included. The most common AI application for pediatric cancer MR imaging was tumor diagnosis and detection (13/21 [62%] studies). The most commonly studied tumors were posterior fossa tumors (14/21 [67%] studies). Knowledge gaps included a lack of research in AI-driven tumor staging (0/21 [0%] studies), imaging genomics (1/21 [5%] studies), and tumor segmentation (2/21 [10%] studies). Adherence to the CLAIM guidelines was moderate in the primary studies, with an average (range) of 55% (34%-73%) of CLAIM items reported, and has improved over time by publication year. CONCLUSION The literature on AI applications of MR imaging in pediatric cancers is limited. The existing literature shows moderate adherence to the CLAIM guidelines, suggesting that better adherence is required in future studies.
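Inter-rater reproducibility of checklist scores is typically quantified with an intraclass correlation coefficient. The abstract does not specify which form was used, so the sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater) on hypothetical adherence fractions invented for illustration.

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` is an (n_targets, k_raters) matrix of ratings."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-targets
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-raters
    sse = ((scores - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters scoring the fraction of CLAIM items reported in five studies
# (hypothetical numbers, chosen only to resemble the reported 34%-73% range).
ratings = np.array([[0.34, 0.38],
                    [0.55, 0.52],
                    [0.61, 0.63],
                    [0.48, 0.50],
                    [0.73, 0.70]])
print("ICC(2,1) =", round(icc2_1(ratings), 3))
```

Values near 1 indicate the two readers' CLAIM scores agree almost perfectly; values near 0 indicate no agreement beyond chance.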
Affiliation(s)
- Brian Tsang
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
- Aaryan Gupta
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
- Marcelo Straus Takahashi
- Instituto de Radiologia do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (InRad/HC-FMUSP), São Paulo, SP, Brazil
- Instituto da Criança do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (ICr/HC-FMUSP), São Paulo, SP, Brazil
- DasaInova, Diagnósticos da América SA (Dasa), São Paulo, SP, Brazil
- Tolulope Ola
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
- Andrea S Doria
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Research Institute, The Hospital for Sick Children, Toronto, ON, Canada
13
Yang C, Dai W, Qin B, He X, Zhao W. A real-time automated bone age assessment system based on the RUS-CHN method. Front Endocrinol (Lausanne) 2023; 14:1073219. [PMID: 37008947 PMCID: PMC10050736 DOI: 10.3389/fendo.2023.1073219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Accepted: 02/27/2023] [Indexed: 03/17/2023] Open
Abstract
Background Bone age is the age of skeletal development and a direct indicator of physical growth and development in children. Most bone age assessment (BAA) systems either regress directly on the entire hand radiograph or first segment the region of interest (ROI) using clinical a priori knowledge and then derive the bone age from the characteristics of the ROI, which takes more time and requires more computation. Materials and methods Key bone grades and locations were determined using three real-time target detection models with Key Bone Search (KBS) post-processing following the RUS-CHN approach, and bone age was then predicted using a LightGBM regression model. Intersection over Union (IOU) was used to evaluate the precision of the key bone locations, while the mean absolute error (MAE), root mean square error (RMSE), and root mean squared percentage error (RMSPE) were used to evaluate the discrepancy between predicted and true bone age. The model was finally converted to the Open Neural Network Exchange (ONNX) format and its inference speed was tested on a GPU (RTX 3060). Results The three real-time models achieved good results, with an average IOU of no less than 0.9 for all key bones. The most accurate inference results using KBS were an MAE of 0.35 years, an RMSE of 0.46 years, and an RMSPE of 0.11. Using the RTX 3060 GPU, inference of the key bone grades and locations took 26 ms, and bone age inference took 2 ms. Conclusions We developed an automated end-to-end BAA system based on real-time target detection that obtains key bone developmental grades and locations in a single pass with the aid of KBS and uses LightGBM to estimate bone age. The system outputs results in real time with good accuracy and stability and requires no hand-shape segmentation.
The BAA system automatically implements the entire RUS-CHN process, outputting the locations and developmental grades of the 13 key bones of the RUS-CHN method along with the bone age to assist physicians in making judgments, making full use of clinical a priori knowledge.
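As a rough illustration of the final regression stage, the sketch below fits a gradient-boosted regressor to synthetic key-bone grades and reports the same three metrics the paper uses. scikit-learn's GradientBoostingRegressor stands in for the paper's LightGBM model, and the 13-feature data are invented; only the metric definitions match the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in features: developmental grades of the 13 RUS-CHN key
# bones for 500 hands; the "true" bone age rises with the mean grade.
n_samples, n_bones = 500, 13
grades = rng.integers(0, 13, size=(n_samples, n_bones)).astype(float)
bone_age = 1.5 * grades.mean(axis=1) + rng.normal(0, 0.3, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(grades, bone_age, random_state=0)

# Gradient boosting regressor as a stand-in for the paper's LightGBM model.
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

mae = mean_absolute_error(y_te, pred)
rmse = np.sqrt(mean_squared_error(y_te, pred))
rmspe = np.sqrt(np.mean(((y_te - pred) / y_te) ** 2))
print(f"MAE={mae:.2f} y  RMSE={rmse:.2f} y  RMSPE={rmspe:.2f}")
```

The split into a detection stage (producing grades) and a lightweight tabular regressor is what keeps the paper's bone-age step down to milliseconds.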
Affiliation(s)
- Chen Yang
- College of Medical Informatics, Chongqing Medical University, Chongqing, China
- Medical Data Science Academy, Chongqing Medical University, Chongqing, China
- Chongqing Engineering Research Center for Clinical Big-Data and Drug Evaluation, Chongqing, China
- Wei Dai
- College of Medical Informatics, Chongqing Medical University, Chongqing, China
- Medical Data Science Academy, Chongqing Medical University, Chongqing, China
- Chongqing Engineering Research Center for Clinical Big-Data and Drug Evaluation, Chongqing, China
- Bin Qin
- Department of Radiology, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Translational Medical Research in Cognitive Development and Learning and Memory Disorders, Children’s Hospital of Chongqing Medical University, Chongqing, China
- Xiangqian He
- College of Medical Informatics, Chongqing Medical University, Chongqing, China
- Medical Data Science Academy, Chongqing Medical University, Chongqing, China
- Chongqing Engineering Research Center for Clinical Big-Data and Drug Evaluation, Chongqing, China
- Wenlong Zhao
- College of Medical Informatics, Chongqing Medical University, Chongqing, China
- Medical Data Science Academy, Chongqing Medical University, Chongqing, China
- Chongqing Engineering Research Center for Clinical Big-Data and Drug Evaluation, Chongqing, China
14
Yang CY, Chen PC, Huang WC. Cross-Domain Transfer of EEG to EEG or ECG Learning for CNN Classification Models. Sensors (Basel) 2023; 23:2458. [PMID: 36904661 PMCID: PMC10007254 DOI: 10.3390/s23052458] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Revised: 02/19/2023] [Accepted: 02/20/2023] [Indexed: 06/18/2023]
Abstract
Electroencephalography (EEG) is often used to evaluate several types of neurological brain disorders because of its noninvasiveness and high temporal resolution. In contrast to electrocardiography (ECG), however, EEG can be uncomfortable and inconvenient for patients. Moreover, deep-learning techniques require a large dataset and a long training time when trained from scratch. Therefore, in this study, EEG-EEG and EEG-ECG transfer learning strategies were applied to explore their effectiveness for training simple cross-domain convolutional neural networks (CNNs) used in seizure prediction and sleep staging systems, respectively. The seizure model detected interictal and preictal periods, whereas the sleep staging model classified signals into five stages. The patient-specific seizure prediction model with six frozen layers achieved 100% accuracy for seven out of nine patients and required only 40 s of training time for personalization. Moreover, the cross-signal transfer learning EEG-ECG model for sleep staging achieved an accuracy approximately 2.5% higher than that of the ECG-only model, while its training time was reduced by more than 50%. In summary, transfer learning from an EEG model to produce personalized models for a more convenient signal can both reduce training time and increase accuracy, effectively overcoming challenges such as data insufficiency, variability, and inefficiency.
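The frozen-layer fine-tuning strategy can be sketched with a toy two-layer network in NumPy: the first weight matrix plays the role of the pretrained, frozen EEG feature extractor, and only the task head is updated on the new data. This is a conceptual illustration with invented data and dimensions, not the paper's CNN.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0)

# W1 stands in for the pretrained (and now frozen) EEG feature layers;
# W2 is the small task head that gets fine-tuned on the new signal.
W1 = rng.normal(0, 0.25, size=(16, 32))   # frozen "pretrained" weights
W2 = rng.normal(0, 0.1, size=(32, 1))     # trainable head

X = rng.normal(size=(200, 16))            # toy input windows
H = relu(X @ W1)                          # frozen features: computed once
w_true = rng.normal(size=32)
y = (H @ w_true > 0).astype(float).reshape(-1, 1)  # toy labels, learnable from H

lr = 0.5
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(H @ W2)))   # sigmoid head
    W2 -= lr * H.T @ (p - y) / len(X)     # logistic-loss gradient step
    # W1 is never updated: that is the "frozen layers" strategy.

p = 1.0 / (1.0 + np.exp(-(H @ W2)))
acc = float(((p > 0.5) == (y > 0.5)).mean())
print("fine-tuned head accuracy:", acc)
```

Because the frozen features are computed once and only the small head is optimized, personalization is fast, which is the same reason the paper's six-frozen-layer models train in seconds.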
15
Zhao Z, Chuah JH, Lai KW, Chow CO, Gochoo M, Dhanalakshmi S, Wang N, Bao W, Wu X. Conventional machine learning and deep learning in Alzheimer's disease diagnosis using neuroimaging: A review. Front Comput Neurosci 2023; 17:1038636. [PMID: 36814932 PMCID: PMC9939698 DOI: 10.3389/fncom.2023.1038636] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 01/13/2023] [Indexed: 02/08/2023] Open
Abstract
Alzheimer's disease (AD) is a neurodegenerative disorder that causes memory degradation and cognitive impairment in elderly people. The irreversible and devastating cognitive decline places a large burden on patients and society. So far, there is no effective treatment that can cure AD, but the progression of early-stage AD can be slowed, so early and accurate detection is critical for treatment. In recent years, deep-learning-based approaches have achieved great success in AD diagnosis. The main objective of this paper is to review popular conventional machine learning and deep learning methods used for the classification and prediction of AD from Magnetic Resonance Imaging (MRI). The methods reviewed include support vector machine (SVM), random forest (RF), convolutional neural network (CNN), autoencoder, deep learning, and transformer. This paper also reviews commonly used feature extractors and the different input forms of convolutional neural networks. Finally, this review discusses challenges such as class imbalance and data leakage, along with trade-offs and suggestions regarding pre-processing techniques, deep learning versus conventional machine learning methods, new techniques, and input type selection.
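One data-leakage pitfall in this setting arises when multiple scans from the same subject end up in both the training and test sets. A subject-level (grouped) split avoids this; here is a minimal sketch with scikit-learn's GroupShuffleSplit on invented subject IDs and placeholder features.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(1)

# 30 subjects, 4 MRI scans each: splitting at the scan level would let
# scans from one subject appear in both train and test (data leakage).
n_subjects, scans_per_subject = 30, 4
subject_ids = np.repeat(np.arange(n_subjects), scans_per_subject)
X = rng.normal(size=(len(subject_ids), 10))   # placeholder scan features

# Group-aware split: all scans of a subject stay on one side of the split.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, groups=subject_ids))

train_subjects = set(subject_ids[train_idx])
test_subjects = set(subject_ids[test_idx])
print("overlapping subjects:", train_subjects & test_subjects)  # set()
```

Evaluating on held-out subjects rather than held-out scans gives the honest generalization estimate that the review argues is often missing.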
Affiliation(s)
- Zhen Zhao
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Joon Huang Chuah
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia (corresponding author)
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia (corresponding author)
- Chee-Onn Chow
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Munkhjargal Gochoo
- Department of Computer Science and Software Engineering, United Arab Emirates University, Al Ain, United Arab Emirates
- Samiappan Dhanalakshmi
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Chennai, India
- Na Wang
- School of Automation, Guangdong Polytechnic Normal University, Guangzhou, China
- Wei Bao
- China Electronics Standardization Institute, Beijing, China (corresponding author)
- Xiang Wu
- School of Medical Information Engineering, Xuzhou Medical University, Xuzhou, China