1. Ham S, Lee C. Classification of land lot shapes in real estate sector using a convolutional neural network. PLoS One 2024; 19:e0308788. PMID: 39298502. DOI: 10.1371/journal.pone.0308788.
Abstract
In the agriculture and real estate industries, land lot shapes have mostly been classified by visual inspection or hand-crafted rules. These conventional methods are time-consuming, resource-intensive, and subject to human bias. This study aims to alleviate the problems inherent in traditional lot classification by classifying lot shapes automatically with a convolutional neural network. A study area was chosen, image data of the lots in the area were collected and preprocessed, and an Xception neural network was specified to classify land lots according to their shapes. A test on a different area adjacent to the study area achieved an accuracy of 90.1% and an area under the curve (AUC) of 0.96. Additionally, this study demonstrated that shape regularity can be quantified using the output scores from the neural network. This is the first attempt to employ a deep learning algorithm for land management on a micro-spatial scale. The classification approach proposed in this study is expected to enable the rapid and accurate classification of various lot shapes.
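The shape-regularity idea in this abstract, scoring how "regular" a lot looks from the classifier's output scores, can be sketched in a few lines. A minimal, hypothetical illustration (the class ordering and logit values are assumptions, not taken from the paper):

```python
import math

def softmax(logits):
    """Convert raw class logits into probabilities."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def regularity_index(logits, regular_class=0):
    """Probability mass the classifier assigns to the 'regular' shape class,
    usable as a continuous regularity score in [0, 1]."""
    return softmax(logits)[regular_class]

# Hypothetical logits for classes [regular, flag-shaped, irregular]
score = regularity_index([2.0, 0.5, -1.0])
```

Lots whose scores sit near 1.0 would be confidently regular; intermediate scores flag borderline shapes for human review.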
Affiliation(s)
- Subin Ham: Department of Real Estate, Kangwon National University, Chuncheon, Gangwon-do, Republic of Korea
- Changro Lee: Department of Real Estate, Kangwon National University, Chuncheon, Gangwon-do, Republic of Korea
2. Liu J, Jiao G. Cross-domain additive learning of new knowledge rather than replacement. Biomed Eng Lett 2024; 14:1137-1146. PMID: 39220031. PMCID: PMC11362399. DOI: 10.1007/s13534-024-00399-8.
Abstract
In medical clinical scenarios, for reasons such as patient privacy, information protection, and data migration, the source-domain data is often inaccessible when domain adaptation is needed, and only the model pre-trained on the source domain is available. Existing solutions to this type of problem tend to forget, after adapting, the rich task experience previously learned on the source domain: the model simply overfits the target-domain data and does not learn robust features that facilitate real task decisions. We address this problem by exploring source-free domain adaptation in medical image segmentation and propose a two-stage additive source-free adaptation framework. We generalize domain-invariant features by constraining the core pathological structure and semantic consistency between different perspectives, and we reduce segmentation errors by locating and filtering potentially erroneous elements through Monte Carlo uncertainty estimation. We conduct comparison experiments with other methods on a cross-device polyp segmentation dataset and a cross-modal brain tumor segmentation dataset; the results in both the target and source domains verify that the proposed method effectively solves the domain shift problem and that the model retains its performance on the source domain after learning new knowledge of the target domain. This work provides a valuable exploration of additive learning on the target and source domains in the absence of source data and offers new ideas and methods for adaptation research in medical image segmentation.
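The Monte Carlo uncertainty filtering mentioned above can be sketched with NumPy: run several stochastic forward passes (e.g. with dropout kept active), compute per-pixel variance, and mask out high-variance pixels before trusting the prediction as a pseudo-label. The threshold and array shapes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def uncertainty_mask(mc_predictions, threshold=0.05):
    """Given T stochastic forward passes of shape (T, H, W) containing
    per-pixel foreground probabilities, flag pixels whose predictive
    variance exceeds `threshold` so they can be filtered out."""
    variance = np.var(mc_predictions, axis=0)
    return variance > threshold

rng = np.random.default_rng(0)
stable = np.full((10, 4, 4), 0.9)                  # model is consistent here
noisy = rng.uniform(0.0, 1.0, size=(10, 4, 4))     # model disagrees with itself
mask_stable = uncertainty_mask(stable)             # nothing flagged
mask_noisy = uncertainty_mask(noisy)               # high-variance pixels flagged
```

Pixels where the mask is True would be excluded when adapting to the target domain, which is one common way to keep noisy self-supervision from degrading the model.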
Affiliation(s)
- Jiahao Liu: College of Computer Science, Hengyang Normal University, Hengyang, 421008 China
- Ge Jiao: College of Computer Science, Hengyang Normal University, Hengyang, 421008 China
3. Nguyen CV, Duong HM, Do CD. MELEP: A Novel Predictive Measure of Transferability in Multi-label ECG Diagnosis. Journal of Healthcare Informatics Research 2024; 8:506-522. PMID: 39131101. PMCID: PMC11310184. DOI: 10.1007/s41666-024-00168-3.
Abstract
In practical electrocardiography (ECG) interpretation, the scarcity of well-annotated data is a common challenge. Transfer learning techniques are valuable in such situations, yet the assessment of transferability has received limited attention. To tackle this issue, we introduce MELEP, the Multi-label Expected Log of Empirical Predictions, a measure designed to estimate the effectiveness of knowledge transfer from a pre-trained model to a downstream multi-label ECG diagnosis task. MELEP is generic, working with new target data with different label sets, and computationally efficient, requiring only a single forward pass through the pre-trained model. To the best of our knowledge, MELEP is the first transferability metric specifically designed for multi-label ECG classification problems. Our experiments show that MELEP can predict the performance of pre-trained convolutional and recurrent deep neural networks on small and imbalanced ECG data. Specifically, we observed strong correlation coefficients (with absolute values exceeding 0.6 in most cases) between MELEP and the actual average F1 scores of the fine-tuned models. Our work highlights the potential of MELEP to expedite the selection of suitable pre-trained models for ECG diagnosis tasks, saving the time and effort that would otherwise be spent on fine-tuning these models.
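MELEP builds on the LEEP (Log Expected Empirical Prediction) family of transferability scores. As a hedged illustration of the underlying idea in its single-label form (the paper's multi-label extension is more involved), assuming dummy source-model probabilities and target labels:

```python
import numpy as np

def leep(source_probs, target_labels):
    """Log Expected Empirical Prediction: estimates transferability from
    source-model outputs without any fine-tuning.
    source_probs: (n, z) array of source-label probabilities per sample.
    target_labels: (n,) array of integer target labels."""
    n, _ = source_probs.shape
    num_targets = int(target_labels.max()) + 1
    # Empirical joint distribution P(y, z) over target and source labels
    joint = np.stack([source_probs[target_labels == y].sum(axis=0) / n
                      for y in range(num_targets)])
    cond = joint / (source_probs.sum(axis=0) / n)   # conditional P(y | z)
    # Expected empirical prediction of each sample's true target label
    eep = source_probs @ cond.T
    return float(np.mean(np.log(eep[np.arange(n), target_labels])))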
Affiliation(s)
- Cuong V. Nguyen: College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam
- Hieu Minh Duong: College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam
- Cuong D. Do: College of Engineering and Computer Science, VinUniversity, Hanoi, Vietnam; VinUni-Illinois Smart Health Center, VinUniversity, Hanoi, Vietnam
4. Abdollahifard S, Farrokhi A, Mowla A, Liebeskind DS. Performance Metrics, Algorithms, and Applications of Artificial Intelligence in Vascular and Interventional Neurology: A Review of Basic Elements. Neurol Clin 2024; 42:633-650. PMID: 38937033. DOI: 10.1016/j.ncl.2024.03.001.
Abstract
Artificial intelligence (AI) is now used as a routine tool in day-to-day activity, and medicine is no exception to its growing usage across scientific fields. Vascular and interventional neurology deal with diseases in which early diagnosis and appropriate intervention are crucial to saving patients' lives. In these settings, AI can serve as an extra pair of hands for physicians, particularly where clinical experts are in short supply. In this article, the authors review the metrics commonly used to interpret model performance and the algorithms commonly used in this field.
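As a companion to the review's discussion of performance metrics, here is a minimal sketch of three staples, sensitivity, specificity, and AUC (via the rank-based Mann-Whitney formulation), in plain Python:

```python
def sensitivity(tp, fn):
    """True-positive rate: share of diseased cases the model flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: share of healthy cases the model clears."""
    return tn / (tn + fp)

def auc(scores, labels):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count half) -- the Mann-Whitney U formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The rank-based form makes explicit why AUC is threshold-free: it depends only on how positives and negatives are ordered by the model's scores.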
Affiliation(s)
- Saeed Abdollahifard: School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran; Research Center for Neuromodulation and Pain, Shiraz, Iran
- Ashkan Mowla: Division of Stroke and Endovascular Neurosurgery, Department of Neurological Surgery, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA, USA
- David S Liebeskind: UCLA Department of Neurology, Neurovascular Imaging Research Core, UCLA Comprehensive Stroke Center, University of California Los Angeles (UCLA), CA, USA
5. Peng Y, Huang X, Gan M, Zhang K, Chen Y. Radiograph-based rheumatoid arthritis diagnosis via convolutional neural network. BMC Med Imaging 2024; 24:180. PMID: 39039460. PMCID: PMC11265088. DOI: 10.1186/s12880-024-01362-w.
Abstract
OBJECTIVES Rheumatoid arthritis (RA) is a severe and common autoimmune disease. Conventional diagnostic methods are often subjective, error-prone, and repetitive, and there is an urgent need for a method to detect RA accurately. This study therefore aims to develop an automatic deep learning-based diagnostic system for recognizing and staging RA from radiographs, to help physicians diagnose RA quickly and accurately. METHODS We develop a CNN-based fully automated RA diagnostic model, exploring five popular CNN architectures on two clinical applications. The model is trained on a dataset of 240 hand radiographs, of which 39 are normal and 201 are RA across five stages. For evaluation, we use 104 hand radiographs, of which 13 are normal and 91 are RA across five stages. RESULTS The CNN model achieves good performance in RA diagnosis from hand radiographs. For RA recognition, all models achieve an AUC above 90% with a sensitivity over 98%; in particular, the GoogLeNet-based model reaches an AUC of 97.80% and a sensitivity of 100.0%. For RA staging, all models achieve over 77% AUC with a sensitivity over 80%; specifically, the VGG16-based model achieves 83.36% AUC with 92.67% sensitivity. CONCLUSION The GoogLeNet-based and VGG16-based models have the best AUC and sensitivity for RA recognition and staging, respectively. The experimental results demonstrate the feasibility and applicability of CNNs in radiograph-based RA diagnosis. This model therefore has important clinical significance, especially for resource-limited areas and inexperienced physicians.
Affiliation(s)
- Yong Peng: Department of Rheumatology, Ningbo No.2 Hospital, Ningbo, Zhejiang, China
- Xianqian Huang: Department of Rheumatology, Ningbo No.2 Hospital, Ningbo, Zhejiang, China
- Minzhi Gan: Department of Rheumatology, Ningbo No.2 Hospital, Ningbo, Zhejiang, China
- Keyue Zhang: Department of Rheumatology, Ningbo No.2 Hospital, Ningbo, Zhejiang, China
- Yong Chen: Department of Rheumatology, Ningbo No.2 Hospital, Ningbo, Zhejiang, China
6. White A, Saranti M, d'Avila Garcez A, Hope TMH, Price CJ, Bowman H. Predicting recovery following stroke: Deep learning, multimodal data and feature selection using explainable AI. Neuroimage Clin 2024; 43:103638. PMID: 39002223. PMCID: PMC11299565. DOI: 10.1016/j.nicl.2024.103638.
Abstract
Machine learning offers great potential for automated prediction of post-stroke symptoms and their response to rehabilitation. Major challenges for this endeavour include the very high dimensionality of neuroimaging data, the relatively small size of the datasets available for learning and interpreting the predictive features, and how to effectively combine neuroimaging and tabular data (e.g. demographic information and clinical characteristics). This paper evaluates several solutions based on two strategies. The first is to use 2D images that summarise MRI scans. The second is to select key features that improve classification accuracy. Additionally, we introduce the novel approach of training a convolutional neural network (CNN) on images that combine regions of interest (ROIs) extracted from MRIs with symbolic representations of tabular data. We evaluate a series of CNN architectures (both 2D and 3D) trained on different representations of MRI and tabular data to predict whether a composite measure of post-stroke spoken picture description ability is in the aphasic or non-aphasic range. MRI and tabular data were acquired from 758 English-speaking stroke survivors who participated in the PLORAS study. Each participant was assigned to one of five groups matched for initial severity of symptoms, recovery time, left lesion size, and the months or years post-stroke at which spoken description scores were collected. Training and validation were carried out on the first four groups; the fifth (lock-box/test set) group was used to test how well model accuracy generalises to new (unseen) data. The classification accuracy for a baseline logistic regression was 0.678 based on lesion size alone, rising to 0.757 and 0.813 when initial symptom severity and recovery time were successively added. The highest classification accuracy (0.854), area under the curve (0.899) and F1 score (0.901) were observed when 8 regions of interest were extracted from each MRI scan and combined with lesion size, initial severity and recovery time in a 2D Residual Neural Network (ResNet). This was also the best model when data were limited to the 286 participants with moderate or severe initial aphasia (area under curve = 0.865), a group that would be considered more difficult to classify. Our findings demonstrate how imaging and tabular data can be combined to achieve high post-stroke classification accuracy, even when the dataset is small in machine learning terms. We conclude by proposing how the current models could be improved to achieve even higher levels of accuracy using images from hospital scanners.
Affiliation(s)
- Adam White: Department of Computer Science, City, University of London, UK
- Thomas M H Hope: Wellcome Centre for Human Neuroimaging, University College London, UK
- Cathy J Price: Wellcome Centre for Human Neuroimaging, University College London, UK
- Howard Bowman: School of Psychology, University of Birmingham, UK; School of Computer Science, University of Birmingham, UK
7. Huang TY, Yu JCC. Assessment of artificial intelligence to detect gasoline in fire debris using HS-SPME-GC/MS and transfer learning. J Forensic Sci 2024; 69:1222-1234. PMID: 38798027. DOI: 10.1111/1556-4029.15550.
Abstract
Due to the complex nature of the chemical compositions of ignitable liquids (IL) and the interferences from fire debris matrices, interpreting chromatographic data poses challenges to analysts. In this work, artificial intelligence (AI) was developed by transfer learning in a convolutional neural network (CNN), GoogLeNet. The image classification AI was fine-tuned to create intelligent classification systems to discriminate samples containing gasoline residues from burned substrates. All ground truth samples were analyzed by headspace solid-phase microextraction (HS-SPME) coupled with a gas chromatograph and mass spectrometer (GC/MS). The HS-SPME-GC/MS data were transformed into three types of image presentations, that is, heatmaps, extracted ion heatmaps, and total ion chromatograms. The abundance and mass-to-charge ratios of each scan were converted into image patterns that are characteristic of the chemical profiles of gasoline. The transfer learning data were labeled as "gasoline present" and "gasoline absent" classes. The assessment results demonstrated that all AI models achieved 100 ± 0% accuracy in identifying neat gasoline. When the models were assessed using the spiked samples, the AI model developed using the extracted ion heatmap obtained the highest accuracy rate (95.9 ± 0.4%), which was greater than those obtained by other machine learning models, ranging from 17.3 ± 0.7% to 78.7 ± 0.7%. The proposed work demonstrated that the heatmaps created from GC/MS data can represent chemical features from the samples. Additionally, the pretrained CNN models are readily available in the transfer learning workflow to develop AI for GC/MS data interpretation in fire debris analysis.
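The heatmap transformation described, turning each GC/MS scan's mass-to-charge ratios and abundances into rows of an image, can be sketched as a binning step. Everything here (input layout, bin count, m/z range) is an assumed illustration, not the authors' pipeline:

```python
import numpy as np

def gcms_heatmap(scans, mz_bins=50, mz_range=(40, 240)):
    """Render GC/MS data as a 2D intensity image: one row per scan
    (retention time axis), one column per m/z bin. `scans` is a list of
    (mz_values, abundances) pairs -- a hypothetical input layout."""
    image = np.zeros((len(scans), mz_bins))
    lo, hi = mz_range
    for row, (mz, abundance) in enumerate(scans):
        cols = ((np.asarray(mz, float) - lo) / (hi - lo) * mz_bins).astype(int)
        cols = np.clip(cols, 0, mz_bins - 1)
        np.add.at(image[row], cols, abundance)   # sum abundances per m/z bin
    return image
```

An image like this can then be fed to a pretrained CNN such as GoogLeNet via transfer learning, exactly because the chemical profile now looks like a texture the network can classify.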
Affiliation(s)
- Ting-Yu Huang: Department of Forensic Science, College of Criminal Justice, Sam Houston State University, Huntsville, Texas, USA; Department of Criminal Justice, School of Social Sciences, Ming Chuan University, Taipei, Taiwan
- Jorn Chi Chung Yu: Department of Forensic Science, College of Criminal Justice, Sam Houston State University, Huntsville, Texas, USA
8. Aversano L, Bernardi ML, Cimitile M, Montano D, Pecori R. Characterization of Heart Diseases per Single Lead Using ECG Images and CNN-2D. Sensors (Basel) 2024; 24:3485. PMID: 38894275. PMCID: PMC11174772. DOI: 10.3390/s24113485.
Abstract
Cardiopathy has become one of the predominant global causes of death. The timely identification of different types of heart disease significantly diminishes mortality risk and enhances the efficacy of treatment. However, fast and efficient recognition necessitates continuous monitoring, encompassing not only specific clinical conditions but also diverse lifestyles. Consequently, a growing number of studies strive to automate and advance the identification of different cardiopathies. Notably, the assessment of electrocardiograms (ECGs) is crucial, as it serves as the initial diagnostic test for patients and is both the simplest and the most cost-effective tool. This research employs a customized Convolutional Neural Network (CNN) architecture to predict heart diseases by analyzing images of both three-electrode bands and each single-electrode ECG signal, derived from four distinct patient categories representing three heart-related conditions as well as a spectrum of healthy controls. The analyses are conducted on a real dataset and provide noteworthy performance (recall greater than 80% for the majority of the considered diseases, and sometimes even 100%) as well as a degree of interpretability, through an understanding of the importance that a band of electrodes, or even a single ECG electrode, can have in detecting a specific heart-related pathology.
Affiliation(s)
- Lerina Aversano: Department of Agricultural Science, Food, Natural Resources and Engineering, University of Foggia, 71122 Foggia, FG, Italy
- Mario Luca Bernardi: Department of Engineering, University of Sannio, 82100 Benevento, BN, Italy
- Marta Cimitile: Department of Law and Digital Society, Unitelma Sapienza University, 00161 Rome, RM, Italy
- Debora Montano: CeRICT scrl, Regional Center Information Communication Technology, 82100 Benevento, BN, Italy
- Riccardo Pecori: Institute of Materials for Electronics and Magnetism, National Research Council of Italy, 43124 Parma, PR, Italy; SMARTEST Research Centre, eCampus University, 22060 Novedrate, CO, Italy
9. Isinkaye FO, Olusanya MO, Singh PK. Deep learning and content-based filtering techniques for improving plant disease identification and treatment recommendations: A comprehensive review. Heliyon 2024; 10:e29583. PMID: 38737274. PMCID: PMC11088271. DOI: 10.1016/j.heliyon.2024.e29583.
Abstract
The importance of identifying plant diseases has risen recently due to the adverse effect they have on agricultural production. Plant diseases are a major concern in agriculture: they reduce crop production and constitute a serious threat to global food security. In modern agriculture, effective plant disease management is vital to ensure healthy crop yields and sustainable practices. Traditional means of identifying plant disease face many challenges, and the need for better, more efficient detection methods cannot be overemphasized. Advanced technologies, particularly deep learning and content-based filtering techniques, could, if integrated, change the way plant diseases are identified and treated, enabling the speedy and accurate identification of plant diseases and efficient treatment recommendations that are key to sustainable food production. In this work, we investigate the current state of research, identify gaps and limitations in knowledge, and suggest future directions for researchers, experts and farmers that could help provide better ways of mitigating plant disease problems.
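The content-based filtering side of this review can be illustrated with a toy recommender: represent the detected disease and each candidate treatment as feature vectors, and rank treatments by cosine similarity. The treatment names and feature vectors below are entirely hypothetical placeholders:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(disease_profile, treatments):
    """Rank (name, features) treatment pairs by similarity to the
    detected disease's feature profile, most similar first."""
    return sorted(treatments, key=lambda t: cosine(disease_profile, t[1]),
                  reverse=True)

# Hypothetical feature axes: [fungal, insect-borne, foliar]
treatments = [("copper fungicide", [1, 0, 1]),
              ("neem oil", [0, 1, 0]),
              ("crop rotation", [1, 1, 0])]
ranked = recommend([1, 0, 1], treatments)   # disease profile: fungal, foliar
```

In a full pipeline, the disease profile would come from the deep learning classifier's output and the treatment vectors from a curated knowledge base, which is the pairing the review surveys.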
Affiliation(s)
- Folasade Olubusola Isinkaye: Department of Computer Science and Information Technology, Sol Plaatje University, Kimberley, 8301, South Africa
- Michael Olusoji Olusanya: Department of Computer Science and Information Technology, Sol Plaatje University, Kimberley, 8301, South Africa
- Pramod Kumar Singh: Department of Computer Science and Engineering, ABV-Indian Institute of Information Technology and Management Gwalior, Gwalior, 474015, MP, India
10. Frewing A, Gibson AB, Robertson R, Urie PM, Della Corte D. Don't Fear the Artificial Intelligence: A Systematic Review of Machine Learning for Prostate Cancer Detection in Pathology. Arch Pathol Lab Med 2024; 148:603-612. PMID: 37594900. DOI: 10.5858/arpa.2022-0460-ra.
Abstract
CONTEXT Automated prostate cancer detection using machine learning technology has led to speculation that pathologists will soon be replaced by algorithms. This review covers the development of machine learning algorithms and their reported effectiveness for prostate cancer detection and Gleason grading. OBJECTIVE To examine current algorithms regarding their accuracy and classification abilities. We provide a general explanation of the technology and how it is being used in clinical practice. The challenges to the application of machine learning algorithms in clinical practice are also discussed. DATA SOURCES The literature for this review was identified and collected using a systematic search. Criteria were established prior to the sorting process to effectively direct the selection of studies. A 4-point system was implemented to rank the papers according to their relevancy. For papers accepted as relevant to our metrics, all cited and citing studies were also reviewed. Studies were then categorized based on whether they implemented binary or multi-class classification methods. Data were extracted from papers that reported accuracy, area under the curve (AUC), or κ values in the context of prostate cancer detection. The results were visually summarized to present accuracy trends across classification abilities. CONCLUSIONS It is more difficult to achieve high accuracy on multi-class classification tasks than on binary tasks. The clinical implementation of an algorithm that can assign a Gleason grade to clinical whole slide images (WSIs) remains elusive. Machine learning technology currently cannot replace pathologists but can serve as an important safeguard against misdiagnosis.
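The κ values mentioned under DATA SOURCES measure agreement between raters (or between an algorithm and a pathologist) beyond what chance alone would produce. A minimal Cohen's kappa in plain Python, using toy rating lists rather than data from the review:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's label frequencies.
    1.0 = perfect agreement, 0.0 = chance-level, negative = worse."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)
```

Chance correction is why kappa, rather than raw percent agreement, is the standard way to compare an algorithm's Gleason grades against a pathologist's.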
Affiliation(s)
- Aaryn Frewing: Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Alexander B Gibson: Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Richard Robertson: Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Paul M Urie: Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Dennis Della Corte: Department of Physics and Astronomy, Brigham Young University, Provo, Utah
11. Carriero A, Groenhoff L, Vologina E, Basile P, Albera M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics (Basel) 2024; 14:848. PMID: 38667493. PMCID: PMC11048882. DOI: 10.3390/diagnostics14080848.
Abstract
The rapid advancement of artificial intelligence (AI) has significantly impacted various aspects of healthcare, particularly in the medical imaging field. This review focuses on recent developments in the application of deep learning (DL) techniques to breast cancer imaging. DL models, a subset of AI algorithms inspired by human brain architecture, have demonstrated remarkable success in analyzing complex medical images, enhancing diagnostic precision, and streamlining workflows. DL models have been applied to breast cancer diagnosis via mammography, ultrasonography, and magnetic resonance imaging. Furthermore, DL-based radiomic approaches may play a role in breast cancer risk assessment, prognosis prediction, and therapeutic response monitoring. Nevertheless, several challenges have limited the widespread adoption of AI techniques in clinical practice, emphasizing the importance of rigorous validation, interpretability, and technical considerations when implementing DL solutions. By examining fundamental concepts in DL techniques applied to medical imaging and synthesizing the latest advancements and trends, this narrative review aims to provide valuable and up-to-date insights for radiologists seeking to harness the power of AI in breast cancer care.
Affiliation(s)
- Léon Groenhoff: Radiology Department, Maggiore della Carità Hospital, 28100 Novara, Italy (A.C.; E.V.; P.B.; M.A.)
12. Verhoeven R, Hulscher JBF. Editorial: Artificial intelligence and machine learning in pediatric surgery. Front Pediatr 2024; 12:1404600. PMID: 38659697. PMCID: PMC11042026. DOI: 10.3389/fped.2024.1404600.
Affiliation(s)
- Rosa Verhoeven: Department of Surgery, Division of Pediatric Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Department of Neonatology, Beatrix Children's Hospital, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Jan B. F. Hulscher: Department of Surgery, Division of Pediatric Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
13. Smith A, Carroll PW, Aravamuthan S, Walleser E, Lin H, Anklam K, Döpfer D, Apostolopoulos N. Computer vision model for the detection of canine pododermatitis and neoplasia of the paw. Vet Dermatol 2024; 35:138-147. PMID: 38057947. DOI: 10.1111/vde.13221.
Abstract
BACKGROUND Artificial intelligence (AI) has been used successfully in human dermatology. AI utilises convolutional neural networks (CNN) to accomplish tasks such as image classification, object detection and segmentation, facilitating early diagnosis. Computer vision (CV), a field of AI, has shown great results in detecting signs of human skin diseases. Canine paw skin diseases are a common problem in general veterinary practice, and computer vision tools could facilitate the detection and monitoring of disease processes. Currently, no such tool is available in veterinary dermatology. ANIMALS Digital images of paws from healthy dogs and paws with pododermatitis or neoplasia were used. OBJECTIVES We tested the novel object detection model Pawgnosis, a Tiny YOLOv4 image analysis model deployed on a microcomputer with a camera for the rapid detection of canine pododermatitis and neoplasia. MATERIALS AND METHODS The prediction performance metrics used to evaluate the models included mean average precision (mAP), precision, recall, average precision (AP) for accuracy and frames per second (FPS) for speed. RESULTS A large dataset labelled by a single individual (Dataset A) used to train a Tiny YOLOv4 model provided the best results with a mean mAP of 0.95, precision of 0.86, recall of 0.93 and 20 FPS. CONCLUSIONS AND CLINICAL RELEVANCE This novel object detection model has the potential for application in the field of veterinary dermatology.
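The detection metrics reported here (mAP, precision, recall) all rest on matching predicted boxes to ground truth via intersection-over-union. A small sketch of the building blocks, illustrative only and not the Pawgnosis evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes.
    A prediction commonly counts as a true positive when IoU >= 0.5."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union

def precision(tp, fp):
    """Share of predicted boxes that match a ground-truth lesion."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Share of ground-truth lesions the detector finds."""
    return tp / (tp + fn)
```

Average precision then summarizes the precision-recall trade-off across confidence thresholds, and mAP averages it over classes, which is how a single number like 0.95 summarizes a detector.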
Affiliation(s)
- Andrew Smith, Patrick W Carroll, Srikanth Aravamuthan, Emil Walleser, Haley Lin, Kelly Anklam, Dörte Döpfer, Neoklis Apostolopoulos: School of Veterinary Medicine, Department of Medical Sciences, University of Wisconsin-Madison, Madison, Wisconsin, USA
14. Yokote A, Umeno J, Kawasaki K, Fujioka S, Fuyuno Y, Matsuno Y, Yoshida Y, Imazu N, Miyazono S, Moriyama T, Kitazono T, Torisu T. Small bowel capsule endoscopy examination and open access database with artificial intelligence: The SEE-artificial intelligence project. DEN Open 2024; 4:e258. PMID: 37359150. PMCID: PMC10288072. DOI: 10.1002/deo2.258.
Abstract
OBJECTIVES Artificial intelligence (AI) may be practical for image classification in small bowel capsule endoscopy (CE). However, creating a functional AI model is challenging. We created a dataset and an object-detection CE AI model to explore modeling problems in assisting the reading of small bowel CE. METHODS We extracted 18,481 images from 523 small bowel CE procedures performed at Kyushu University Hospital from September 2014 to June 2021. We annotated 12,320 images with 23,033 disease lesions, combined them with 6161 normal images to form the dataset, and examined its characteristics. Based on the dataset, we created an object-detection AI model using YOLOv5 and performed test validation. RESULTS We annotated the dataset with 12 types of annotations; multiple annotation types were observed in the same image. We validated our AI model on 1396 test images: sensitivity across all 12 annotation types was about 91%, with 1375 true positives, 659 false positives, and 120 false negatives detected. The highest sensitivity for an individual annotation type was 97%, and the highest area under the receiver operating characteristic curve was 0.98, though detection quality varied by annotation type. CONCLUSIONS An object-detection AI model for small bowel CE using YOLOv5 may provide effective and easy-to-understand reading assistance. In this SEE-AI project, we openly release our dataset, the AI model weights, and a demonstration for experiencing the AI. We look forward to further improving the AI model in the future.
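The roughly 91% sensitivity reported above can be reproduced directly from the stated counts (1375 true positives, 120 false negatives); the same counts also give the detector's positive predictive value, a figure the abstract does not state. A quick check in Python:

```python
def sensitivity(tp, fn):
    """Share of annotated lesions the model detected: TP / (TP + FN)."""
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    """Share of the model's detections that were real: TP / (TP + FP)."""
    return tp / (tp + fp)

# Counts reported in the abstract
overall_sensitivity = sensitivity(1375, 120)     # about 0.92
ppv = positive_predictive_value(1375, 659)       # about 0.68
```

The gap between high sensitivity and a lower PPV reflects the 659 false positives, a typical trade-off when a reading-assistance tool is tuned to miss as few lesions as possible.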
Affiliation(s)
- Akihito Yokote: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Junji Umeno: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Keisuke Kawasaki: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Shin Fujioka: Department of Endoscopic Diagnostics and Therapeutics, Kyushu University Hospital, Fukuoka, Japan
- Yuta Fuyuno: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Yuichi Matsuno: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Yuichiro Yoshida: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Noriyuki Imazu: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Satoshi Miyazono: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Tomohiko Moriyama: International Medical Department, Kyushu University Hospital, Fukuoka, Japan
- Takanari Kitazono: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
- Takehiro Torisu: Department of Medicine and Clinical Science, Graduate School of Medical Science, Kyushu University, Fukuoka, Japan
15
Wang H, Liu Q, Gui D, Liu Y, Feng X, Qu J, Zhao J, Wei G. Automatedly identify dryland threatened species at large scale by using deep learning. Sci Total Environ 2024; 917:170375. [PMID: 38280598] [DOI: 10.1016/j.scitotenv.2024.170375]
Abstract
Dryland biodiversity is decreasing at an alarming rate. Advanced intelligent tools are urgently needed to rapidly, automatically, and precisely detect dryland threatened species at large scale for biological conservation. Here, we explored the performance of three deep convolutional neural networks (Deeplabv3+, Unet, and Pspnet) for the intelligent recognition of rare species based on high-resolution (0.3 m) images taken by an unmanned aerial vehicle (UAV). We focused on a threatened species, Populus euphratica, in the Tarim River Basin (China), where the population declined severely in the 1970s and restoration has been carried out since 2000. Testing showed that Unet outperforms Deeplabv3+ and Pspnet when training samples are scarce, while Deeplabv3+ performs best as the dataset grows. Overall, with 80 training samples, Deeplabv3+ had the best overall performance for Populus euphratica identification, with mean pixel accuracy (MPA) between 87.31% and 90.2%, on average 3.74% and 11.29% higher than Unet and Pspnet, respectively. Deeplabv3+ can accurately delineate the boundaries of Populus euphratica even in densely vegetated areas, with lower per-pixel identification uncertainty than the other models. This study developed a high-resolution UAV imagery-based identification framework using deep learning for large-scale regions. This approach can accurately capture the variation in dryland threatened species, especially in inaccessible areas, thereby fostering rapid and efficient conservation actions.
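Mean pixel accuracy (MPA), the headline metric above, is the per-class pixel accuracy averaged over classes. A toy sketch with hypothetical 4x4 masks (class 1 standing in for target-species pixels; the masks are illustrative, not the paper's data):

```python
import numpy as np

# Mean pixel accuracy (MPA): per-class pixel accuracy averaged over the
# classes present in the ground truth (0 = background, 1 = target species).

def mean_pixel_accuracy(pred, truth, n_classes):
    accs = []
    for c in range(n_classes):
        mask = truth == c
        if mask.any():  # skip classes absent from the ground truth
            accs.append((pred[mask] == c).mean())
    return float(np.mean(accs))

truth = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0]])
pred  = np.array([[0, 0, 1, 1], [0, 1, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
print(round(mean_pixel_accuracy(pred, truth, 2), 3))  # → 0.867
```

Because MPA averages over classes, it is not dominated by the (usually much larger) background class, which matters for sparse targets such as scattered trees.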
Affiliation(s)
- Haolin Wang: State Key Laboratory of Desert and Oasis Ecology, Key Laboratory of Ecological Safety and Sustainable Development in Arid Lands, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China; College of Mathematics and System Sciences, Xinjiang University, Urumqi 830017, China
- Qi Liu: State Key Laboratory of Desert and Oasis Ecology, Key Laboratory of Ecological Safety and Sustainable Development in Arid Lands, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China; Cele National Station of Observation & Research for Desert Grassland Ecosystem in Xinjiang, Cele 848300, China
- Dongwei Gui: State Key Laboratory of Desert and Oasis Ecology, Key Laboratory of Ecological Safety and Sustainable Development in Arid Lands, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China; Cele National Station of Observation & Research for Desert Grassland Ecosystem in Xinjiang, Cele 848300, China; University of Chinese Academy of Sciences, Beijing 101408, China
- Yunfei Liu: State Key Laboratory of Desert and Oasis Ecology, Key Laboratory of Ecological Safety and Sustainable Development in Arid Lands, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China; Cele National Station of Observation & Research for Desert Grassland Ecosystem in Xinjiang, Cele 848300, China
- Xinlong Feng: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830017, China
- Jia Qu: State Key Laboratory of Desert and Oasis Ecology, Key Laboratory of Ecological Safety and Sustainable Development in Arid Lands, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China; College of Mathematics and System Sciences, Xinjiang University, Urumqi 830017, China
- Jianping Zhao: College of Mathematics and System Sciences, Xinjiang University, Urumqi 830017, China
- Guanghui Wei: Xinjiang Tarim River Basin Management Bureau, Korla 841000, China
16
Prinzi F, Currieri T, Gaglio S, Vitabile S. Shallow and deep learning classifiers in medical image analysis. Eur Radiol Exp 2024; 8:26. [PMID: 38438821] [PMCID: PMC10912073] [DOI: 10.1186/s41747-024-00428-2]
Abstract
An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which has nevertheless been its most cited and used sub-branch in the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to give primary educational insights into the most accessible and widely employed classifiers in the radiology field, distinguishing between "shallow" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forest and XGBoost, and "deep" learning architectures, including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps for classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithm interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence.
Relevance statement: The growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. Machine learning classifiers, from shallow learning to deep learning, are offering crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice.
Key points
• Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics).
• Deep classifiers implement automatic feature extraction and classification.
• Classifier selection is based on data and computational resource availability, the task, and explanation needs.
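The shallow pipeline in the first key point, hand-crafted features followed by a classical classifier, can be caricatured in a few lines. The features here (mean and variance per patch) and the nearest-centroid classifier are illustrative stand-ins for radiomics features and, e.g., an SVM; the data are synthetic:

```python
import numpy as np

# A minimal "shallow learning" pipeline: hand-crafted features extracted
# from regions of interest, then a simple classifier (nearest centroid).
rng = np.random.default_rng(0)
bright = rng.normal(0.8, 0.1, size=(20, 8, 8))  # class 0 patches
dark   = rng.normal(0.2, 0.1, size=(20, 8, 8))  # class 1 patches

def features(patches):
    # Two hand-crafted features per patch: mean and variance of intensity.
    return np.stack([patches.mean(axis=(1, 2)), patches.var(axis=(1, 2))], axis=1)

X = np.vstack([features(bright), features(dark)])
y = np.array([0] * 20 + [1] * 20)
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

query = features(rng.normal(0.8, 0.1, size=(1, 8, 8)))  # unseen bright patch
pred = int(np.argmin(np.linalg.norm(centroids - query, axis=1)))
print(pred)  # → 0
```

A deep classifier, by contrast, would learn the `features` step itself from pixels, which is exactly the distinction the review draws.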
Affiliation(s)
- Francesco Prinzi: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy; Department of Computer Science and Technology, University of Cambridge, Cambridge CB2 1TN, UK
- Tiziana Currieri: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
- Salvatore Gaglio: Department of Engineering, University of Palermo, Palermo, Italy; Institute for High-Performance Computing and Networking, National Research Council (ICAR-CNR), Palermo, Italy
- Salvatore Vitabile: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo, Italy
17
Ao Y, Shi W, Ji B, Miao Y, He W, Jiang Z. MS-TCNet: An effective Transformer-CNN combined network using multi-scale feature learning for 3D medical image segmentation. Comput Biol Med 2024; 170:108057. [PMID: 38301516] [DOI: 10.1016/j.compbiomed.2024.108057]
Abstract
Medical image segmentation is a fundamental research problem in medical image processing. Recently, Transformers have achieved highly competitive performance in computer vision, and many methods combining Transformers with convolutional neural networks (CNNs) have emerged for segmenting medical images. However, these methods cannot effectively capture multi-scale features in medical images, even though the texture and contextual information embedded in multi-scale features is extremely beneficial for segmentation. To alleviate this limitation, we propose MS-TCNet, a novel Transformer-CNN combined network using multi-scale feature learning for three-dimensional (3D) medical image segmentation. The proposed model uses a shunted Transformer and a CNN to construct an encoder and a pyramid decoder, allowing feature learning at six different scale levels, and captures multi-scale features with refinement at each level. Additionally, we propose a novel lightweight multi-scale feature fusion (MSFF) module that fully fuses the different-scale semantic features generated by the pyramid decoder for each segmentation class, yielding more accurate segmentation output. We conducted experiments on three widely used 3D medical image segmentation datasets. The experimental results indicated that our method outperformed state-of-the-art medical image segmentation methods, suggesting its effectiveness, robustness, and superiority. Meanwhile, our model has fewer parameters and lower computational complexity than conventional 3D segmentation networks. The results confirmed that the model is capable of effective multi-scale feature learning and that the learned multi-scale features are useful for improving segmentation performance. Our code is open source and can be found at https://github.com/AustinYuAo/MS-TCNet.
Affiliation(s)
- Yu Ao: School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China
- Weili Shi: School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
- Bai Ji: Department of Hepatobiliary and Pancreatic Surgery, The First Hospital of Jilin University, Changchun 130061, China
- Yu Miao: School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
- Wei He: School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
- Zhengang Jiang: School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China; Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
18
Fuentes AM, Milligan K, Wiebe M, Narayan A, Lum JJ, Brolo AG, Andrews JL, Jirasek A. Stratification of tumour cell radiation response and metabolic signatures visualization with Raman spectroscopy and explainable convolutional neural network. Analyst 2024; 149:1645-1657. [PMID: 38312026] [DOI: 10.1039/d3an01797d]
Abstract
Reprogramming of cellular metabolism is a driving factor of tumour progression and radiation therapy resistance. Identifying biochemical signatures associated with tumour radioresistance may assist the development of targeted treatment strategies to improve clinical outcomes. Raman spectroscopy (RS) can monitor post-irradiation biomolecular changes and signatures of radiation response in tumour cells in a label-free manner. Convolutional neural networks (CNNs) perform feature extraction directly from data in an end-to-end learning manner, with high classification performance. Furthermore, recently developed CNN explainability techniques help visualize the critical discriminative features captured by the model. In this work, a CNN is developed to characterize tumour response to radiotherapy based on its degree of radioresistance. The model was trained to classify Raman spectra of three human tumour cell lines as radiosensitive (LNCaP) or radioresistant (MCF7, H460) over a range of treatment doses and data collection time points. Additionally, a method based on Gradient-Weighted Class Activation Mapping (Grad-CAM) was used to determine the response-specific salient Raman peaks influencing the CNN predictions. The CNN effectively classified the cell spectra, with accuracy, sensitivity, specificity, and F1 score all exceeding 99.8%. Grad-CAM heatmaps of H460 and MCF7 (radioresistant) cell spectra exhibited high contributions from Raman bands tentatively assigned to glycogen, amino acids, and nucleic acids. Conversely, heatmaps of LNCaP (radiosensitive) cells revealed activations at lipid and phospholipid bands. Finally, Grad-CAM variable importance scores were derived for glycogen, asparagine, and phosphatidylcholine, and their trends over cell line, dose, and acquisition time agreed with previously established models. Thus, the CNN can accurately detect biomolecular differences in the Raman spectra of tumour cells of varying radiosensitivity without requiring manual feature extraction, and Grad-CAM may help identify metabolic signatures associated with the observed categories, offering the potential for automated clinical characterization of tumour radiation response.
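For a 1D spectrum, the Grad-CAM combination step used above reduces to weighting each channel's activation map by its average gradient and rectifying the weighted sum. A sketch with random stand-ins for the activations and gradients that a trained CNN would actually supply:

```python
import numpy as np

# Grad-CAM combination step (1D case): weights are gradients of the class
# score pooled over the spectral axis; the map is ReLU(weighted sum).
rng = np.random.default_rng(1)
activations = rng.random((8, 100))      # 8 channels x 100 spectral positions
gradients = rng.normal(size=(8, 100))   # d(class score)/d(activations)

weights = gradients.mean(axis=1)              # global-average-pooled gradients
cam = np.maximum(weights @ activations, 0.0)  # ReLU of the weighted sum
cam /= cam.max() if cam.max() > 0 else 1.0    # normalise to [0, 1]
print(cam.shape)
```

Peaks in `cam` mark the spectral positions (Raman bands, in the paper's setting) that most increased the predicted class score.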
Affiliation(s)
- Alejandra M Fuentes: Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Kirsty Milligan: Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Mitchell Wiebe: Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Apurva Narayan: Department of Computer Science, Western University, London, Canada; Department of Computer Science, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Julian J Lum: Department of Biochemistry and Microbiology, The University of Victoria, Victoria, Canada; Trev and Joyce Deeley Research Centre, BC Cancer, Victoria, Canada
- Alexandre G Brolo: Department of Chemistry, The University of Victoria, Victoria, Canada
- Jeffrey L Andrews: Department of Statistics, The University of British Columbia Okanagan Campus, Kelowna, Canada
- Andrew Jirasek: Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
19
Franco A, Murray J, Heng D, Lygate A, Moreira D, Ferreira J, Miranda E Paulo D, Machado CP, Bueno J, Mânica S, Porto L, Abade A, Paranhos LR. Binary decisions of artificial intelligence to classify third molar development around the legal age thresholds of 14, 16 and 18 years. Sci Rep 2024; 14:4668. [PMID: 38409354] [PMCID: PMC10897208] [DOI: 10.1038/s41598-024-55497-5]
Abstract
Third molar development is used for dental age estimation when all the other teeth are fully mature. In most medicolegal facilities, dental age estimation is an operator-dependent procedure. During the examination of unaccompanied and undocumented minors, this procedure may require binary decisions around age thresholds of legal interest, namely 14, 16 and 18 years. This study tested the performance of artificial intelligence in classifying individuals as below or above the legal age thresholds of 14, 16 and 18 years using third molar development. The sample consisted of 11,640 panoramic radiographs (9680 for training and 1960 for validation) of males (n = 5400) and females (n = 6240) aged between 6 and 22.9 years. Computer-based image annotation was performed with V7 software (V7labs, London, UK). The region of interest was the mandibular left third molar (T38), outlined with a semi-automated contour. DenseNet121 was the Convolutional Neural Network (CNN) of choice and was used with transfer learning. Based on receiver operating characteristic curves, the area under the curve (AUC) was 0.87 and 0.86 for classifying males and females below and above the age of 14, respectively. For the age threshold of 16, the AUC values were 0.88 (males) and 0.83 (females), while for the age of 18, the AUC values were 0.94 (males) and 0.83 (females). Specificity rates were always between 0.80 and 0.92. Artificial intelligence was able to classify males and females as below or above the legal age thresholds of 14, 16 and 18 years with high accuracy.
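For reference, the AUC values quoted above have a simple probabilistic reading: the chance that a randomly chosen case above the threshold receives a higher model score than one below it. A stdlib sketch of that Mann-Whitney formulation, with illustrative scores and labels:

```python
# ROC AUC via the Mann-Whitney statistic: the fraction of (positive,
# negative) pairs where the positive case scores higher; ties count half.
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]   # model scores (illustrative)
labels = [1,   1,   0,   1,   0,   0]     # 1 = above threshold, 0 = below
print(auc(scores, labels))  # → 0.888...
```

An AUC of 0.94, as reported for males at the age-18 cutoff, therefore means the model ranks an over-18 case above an under-18 case 94% of the time.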
Affiliation(s)
- Ademir Franco: Division of Forensic Dentistry, Faculdade São Leopoldo Mandic, Campinas, Brazil; Department of Therapeutic Stomatology, Institute of Dentistry, Sechenov University, Moscow, Russia
- Jared Murray: Centre of Forensic and Legal Medicine and Dentistry, University of Dundee, Dundee, UK
- Dennis Heng: Centre of Forensic and Legal Medicine and Dentistry, University of Dundee, Dundee, UK
- Anna Lygate: Centre of Forensic and Legal Medicine and Dentistry, University of Dundee, Dundee, UK
- Debora Moreira: Division of Oral Radiology, Faculdade São Leopoldo Mandic, Campinas, Brazil
- Jaqueline Ferreira: Division of Forensic Dentistry, Faculdade São Leopoldo Mandic, Campinas, Brazil
- D. Miranda E Paulo
- C. P. Machado
- Juliano Bueno: Division of Oral Radiology, Faculdade São Leopoldo Mandic, Campinas, Brazil
- Scheila Mânica: Centre of Forensic and Legal Medicine and Dentistry, University of Dundee, Dundee, UK
- Lucas Porto: Computer Vision Solutions, Rumina S.A., Belo Horizonte, Minas Gerais, Brazil
- André Abade: Computer Science, Federal Institute of Science and Technology, Barra do Garças, Brazil
- Luiz Renato Paranhos: Department of Preventive and Social Dentistry, Federal University of Uberlandia, Av. Pará-1720, Bairro Umuarama, Uberlândia, MG, 38405-320, Brazil
20
Shafique A, Gonzalez R, Pantanowitz L, Tan PH, Machado A, Cree IA, Tizhoosh HR. A Preliminary Investigation into Search and Matching for Tumor Discrimination in World Health Organization Breast Taxonomy Using Deep Networks. Mod Pathol 2024; 37:100381. [PMID: 37939901] [PMCID: PMC10891482] [DOI: 10.1016/j.modpat.2023.100381]
Abstract
Breast cancer is one of the most common cancers affecting women worldwide. It includes a group of malignant neoplasms with a variety of biological, clinical, and histopathologic characteristics. More than 35 different histologic forms of breast lesions can be classified and diagnosed histologically according to cell morphology, growth, and architectural patterns. Recently, deep learning, a field of artificial intelligence, has drawn considerable attention for the computerized representation of medical images. Searchable digital atlases can provide pathologists with patch-matching tools, allowing them to search among definitively diagnosed and treated archival cases, a technology that may be regarded as a computational second opinion. In this study, we indexed and analyzed the World Health Organization (WHO) breast taxonomy (Classification of Tumours, 5th ed.) spanning 35 tumor types. We visualized all tumor types using deep features extracted from a state-of-the-art deep learning model pretrained on millions of diagnostic histopathology images from The Cancer Genome Atlas repository. Furthermore, we tested the concept of a digital "atlas" as a reference for search and matching with rare test cases. Patch similarity search within the WHO breast taxonomy data reached >88% accuracy when validated through majority vote and >91% accuracy when validated using the top-n tumor types. These results show for the first time that complex relationships among common and rare breast lesions can be investigated using an indexed digital archive.
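The "majority vote" validation above amounts to plurality voting over the labels of the top-n retrieved patches. A minimal stdlib sketch with made-up tumour-type labels (the retrieval itself, i.e. nearest-neighbour search over deep features, is assumed to have already produced the neighbour list):

```python
from collections import Counter

# Plurality vote over the labels of the retrieved nearest-neighbour patches.
def majority_vote(neighbour_labels):
    return Counter(neighbour_labels).most_common(1)[0][0]

top5 = ["tubular adenoma", "tubular adenoma", "mucinous carcinoma",
        "tubular adenoma", "lobular carcinoma"]
print(majority_vote(top5))  # → tubular adenoma
```

The looser "top-n" criterion, by contrast, counts a query as correct if its true type appears anywhere among the retrieved labels, which is why it yields the higher accuracy figure.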
Affiliation(s)
- Abubakr Shafique: Rhazes Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota; Kimia Lab, University of Waterloo, Waterloo, Ontario, Canada
- Ricardo Gonzalez: Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Liron Pantanowitz: Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania
- Puay Hoon Tan: Women's Imaging Centre, Luma Medical Centre, Singapore
- Alberto Machado: WHO Classification of Tumours Group, International Agency for Research on Cancer, Lyon, France
- Ian A Cree: WHO Classification of Tumours Group, International Agency for Research on Cancer, Lyon, France
- Hamid R Tizhoosh: Rhazes Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota; Kimia Lab, University of Waterloo, Waterloo, Ontario, Canada
21
Borges P, Shaw R, Varsavsky T, Kläser K, Thomas D, Drobnjak I, Ourselin S, Cardoso MJ. Acquisition-invariant brain MRI segmentation with informative uncertainties. Med Image Anal 2024; 92:103058. [PMID: 38104403] [DOI: 10.1016/j.media.2023.103058]
Abstract
Combining multi-site data can strengthen and uncover trends, but the task is marred by site-specific covariates that can bias the data and, therefore, any downstream analyses. Post-hoc multi-site correction methods exist but rest on strong assumptions that often do not hold in real-world scenarios. Algorithms should be designed to account for site-specific effects, such as those arising from sequence parameter choices, and, where generalisation fails, should be able to identify that failure through explicit uncertainty modelling. This work showcases such an algorithm, which becomes robust to the physics of acquisition in segmentation tasks while simultaneously modelling uncertainty. We demonstrate that our method not only generalises to complete holdout datasets, preserving segmentation quality, but also accounts for site-specific sequence choices, allowing it to serve as a harmonisation tool.
Affiliation(s)
- Pedro Borges: Department of Medical Physics and Biomedical Engineering, UCL, UK; School of Biomedical Engineering and Imaging Sciences, KCL, UK
- Richard Shaw: Department of Medical Physics and Biomedical Engineering, UCL, UK; School of Biomedical Engineering and Imaging Sciences, KCL, UK
- Thomas Varsavsky: Department of Medical Physics and Biomedical Engineering, UCL, UK; School of Biomedical Engineering and Imaging Sciences, KCL, UK
- Kerstin Kläser: School of Biomedical Engineering and Imaging Sciences, KCL, UK
- D. Thomas
- Ivana Drobnjak: Department of Medical Physics and Biomedical Engineering, UCL, UK
- S. Ourselin
- M Jorge Cardoso: School of Biomedical Engineering and Imaging Sciences, KCL, UK
22
Sellin J, Pantel JT, Börsch N, Conrad R, Mücke M. [Short paths to diagnosis with artificial intelligence: systematic literature review on diagnostic decision support systems]. Schmerz 2024; 38:19-27. [PMID: 38165492] [DOI: 10.1007/s00482-023-00777-8]
Abstract
BACKGROUND Rare diseases are often recognized late. Their diagnosis is particularly challenging due to the diversity, complexity and heterogeneity of clinical symptoms. Computer-aided diagnostic tools, often referred to as diagnostic decision support systems (DDSS), are promising for shortening the time to diagnosis. Despite initial positive evaluations, DDSS are not yet widely used, partly due to a lack of integration with existing clinical or practice information systems. OBJECTIVE This article provides an insight into currently existing diagnostic support systems that function without access to electronic patient records and require only information that is easily obtainable. MATERIALS AND METHODS A systematic literature search identified eight articles on DDSS that can assist in the diagnosis of rare diseases without needing access to electronic patient records or other information systems in practices and hospitals. The main advantages and disadvantages of the identified systems were extracted and summarized. RESULTS Symptom checkers and DDSS based on portrait photos and pain drawings already exist, with varying degrees of maturity. CONCLUSION DDSS still face a number of challenges, such as concerns about data protection and accuracy, and both acceptance and awareness remain rather low. On the other hand, there is great potential for faster diagnosis, especially for rare diseases, which are easily overlooked due to their large number and low awareness. The use of DDSS should therefore be carefully considered by doctors on a case-by-case basis.
Affiliation(s)
- Julia Sellin: Institut für Digitale Allgemeinmedizin, Universitätsklinikum RWTH Aachen, Aachen, Germany; Zentrum für Seltene Erkrankungen Aachen (ZSEA), Universitätsklinikum RWTH Aachen, Aachen, Germany
- Jean Tori Pantel: Institut für Digitale Allgemeinmedizin, Universitätsklinikum RWTH Aachen, Aachen, Germany; Zentrum für Seltene Erkrankungen Aachen (ZSEA), Universitätsklinikum RWTH Aachen, Aachen, Germany
- Natalie Börsch: Institut für Digitale Allgemeinmedizin, Universitätsklinikum RWTH Aachen, Aachen, Germany; Zentrum für Seltene Erkrankungen Aachen (ZSEA), Universitätsklinikum RWTH Aachen, Aachen, Germany
- Rupert Conrad: Klinik für Psychosomatische Medizin und Psychotherapie, Universitätsklinikum Münster, Münster, Germany
- Martin Mücke: Institut für Digitale Allgemeinmedizin, Universitätsklinikum RWTH Aachen, Aachen, Germany; Zentrum für Seltene Erkrankungen Aachen (ZSEA), Universitätsklinikum RWTH Aachen, Aachen, Germany
23
Guo Z, Ao S, Ao B. Few-shot learning based oral cancer diagnosis using a dual feature extractor prototypical network. J Biomed Inform 2024; 150:104584. [PMID: 38199300] [DOI: 10.1016/j.jbi.2024.104584]
Abstract
Cancer is a major global health issue for which early diagnosis and treatment have proven to be life-saving. This holds true for oral cancer, emphasizing the significance of timely intervention. Deep learning techniques have gained traction in early cancer detection, exhibiting promising diagnostic accuracy. However, collecting a substantial amount of training data poses a challenge for deep learning models in cancer diagnosis. To address this limitation, this study proposes an oral cancer diagnosis approach based on a few-shot learning framework that circumvents the need for extensive training data. Specifically, a prototypical network is employed to construct the diagnostic model, in which two feature extractors are used to extract prototypical features and query features respectively, departing from the conventional use of a single feature extraction function in prototypical networks. Moreover, a customized loss function is designed for the proposed method. Rigorous experimentation on a histopathological image dataset demonstrates the superior performance of the proposed approach over comparison methods.
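The prototypical-network decision rule the authors build on can be sketched briefly: each class prototype is the mean of its support embeddings, and a query is assigned to the nearest prototype. The synthetic 2-D embeddings and class names below are illustrative stand-ins for the outputs of the paper's dual feature extractors:

```python
import numpy as np

# Prototypical-network classification: prototype = mean support embedding,
# query goes to the nearest prototype (Euclidean distance).
rng = np.random.default_rng(2)
support = {  # few-shot support set: class -> 5 embeddings of dimension 2
    "normal": rng.normal([0, 0], 0.1, size=(5, 2)),
    "cancer": rng.normal([3, 3], 0.1, size=(5, 2)),
}
prototypes = {c: e.mean(axis=0) for c, e in support.items()}

def classify(query):
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

print(classify(np.array([2.8, 3.1])))  # → cancer
```

The paper's twist is that the support (prototype) and query embeddings come from two different feature extractors rather than one shared network; the nearest-prototype rule itself is unchanged.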
Affiliation(s)
- Zijun Guo: Department of Stomatology, Daping Hospital, Army Medical Center of PLA, Chongqing 400042, China
- Sha Ao: The People's Hospital of Rongchang District in Chongqing, Chongqing 402460, China
- Bo Ao: Traditional Chinese Medicine Hospital of Jiulongpo District in Chongqing, Chongqing 400080, China
24
Liu W, Shen N, Zhang L, Wang X, Chen B, Liu Z, Yang C. Research in the application of artificial intelligence to lung cancer diagnosis. Front Med (Lausanne) 2024; 11:1343485. [PMID: 38352145] [PMCID: PMC10861801] [DOI: 10.3389/fmed.2024.1343485]
Abstract
The morbidity and mortality rates of lung cancer are high worldwide. Early diagnosis and personalized treatment are important for managing this public health issue. In recent years, artificial intelligence (AI) has played increasingly important roles in early screening, auxiliary diagnosis, and prognostic assessment. AI uses algorithms to extract quantitative feature information from high-volume, high-dimensional data and learns from existing data to predict disease outcomes. In this review, we describe the current uses of AI in lung cancer-focused pathomics, imageomics, and genomics applications.
Affiliation(s)
- Wenjuan Liu
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- Nan Shen
- Department of Nephrology, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- Limin Zhang
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- Xiaoxi Wang
- Department of Clinical Laboratory, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- Bainan Chen
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- Zhuo Liu
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, China
- Chao Yang
- Department of Radiology, The First Affiliated Hospital of Dalian Medical University, Dalian, China

25
Wu KY, Kulbay M, Daigle P, Nguyen BH, Tran SD. Nonspecific Orbital Inflammation (NSOI): Unraveling the Molecular Pathogenesis, Diagnostic Modalities, and Therapeutic Interventions. Int J Mol Sci 2024; 25:1553. [PMID: 38338832 PMCID: PMC10855920 DOI: 10.3390/ijms25031553] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2023] [Revised: 01/21/2024] [Accepted: 01/23/2024] [Indexed: 02/12/2024] Open
Abstract
Nonspecific orbital inflammation (NSOI), colloquially known as orbital pseudotumor, sometimes presents a diagnostic and therapeutic challenge in ophthalmology. This review aims to dissect NSOI through a molecular lens, offering a comprehensive overview of its pathogenesis, clinical presentation, diagnostic methods, and management strategies. The article delves into the underpinnings of NSOI, examining immunological and environmental factors alongside intricate molecular mechanisms involving signaling pathways, cytokines, and mediators. Special emphasis is placed on emerging molecular discoveries and approaches, highlighting the significance of understanding molecular mechanisms in NSOI for the development of novel diagnostic and therapeutic tools. Various diagnostic modalities are scrutinized for their utility and limitations. Therapeutic interventions encompass medical treatments with corticosteroids and immunomodulatory agents, all discussed in light of current molecular understanding. More importantly, this review offers a novel molecular perspective on NSOI, dissecting its pathogenesis and management with an emphasis on the latest molecular discoveries. It introduces an integrated approach combining advanced molecular diagnostics with current clinical assessments and explores emerging targeted therapies. By synthesizing these facets, the review aims to inform clinicians and researchers alike, paving the way for molecularly informed, precision-based strategies for managing NSOI.
Affiliation(s)
- Kevin Y. Wu
- Department of Surgery, Division of Ophthalmology, University of Sherbrooke, Sherbrooke, QC J1G 2E8, Canada
- Merve Kulbay
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 0A4, Canada
- Patrick Daigle
- Department of Surgery, Division of Ophthalmology, University of Sherbrooke, Sherbrooke, QC J1G 2E8, Canada
- Bich H. Nguyen
- CHU Sainte Justine Hospital, Montreal, QC H3T 1C5, Canada
- Simon D. Tran
- Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, QC H3A 1G1, Canada

26
Park S, Kim JH, Ahn Y, Lee CH, Kim YG, Yuh WT, Hyun SJ, Kim CH, Kim KJ, Chung CK. Multi-pose-based convolutional neural network model for diagnosis of patients with central lumbar spinal stenosis. Sci Rep 2024; 14:203. [PMID: 38168665 PMCID: PMC10761871 DOI: 10.1038/s41598-023-50885-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 12/27/2023] [Indexed: 01/05/2024] Open
Abstract
Although the role of plain radiographs in diagnosing lumbar spinal stenosis (LSS) has declined in importance since the advent of magnetic resonance imaging (MRI), the diagnostic ability of plain radiographs improves dramatically when they are combined with deep learning. Previously, we developed a convolutional neural network (CNN) model that diagnoses LSS from a radiograph. In this study, we aimed to improve and generalize the performance of CNN models and overcome the limitation of the single-pose-based CNN (SP-CNN) model by using multi-pose radiographs. Individuals with severe or no LSS, confirmed using MRI, were enrolled. Lateral radiographs of patients in three postures were collected. We developed a multi-pose-based CNN (MP-CNN) model using the encoders of the three SP-CNN models (extension, flexion, and neutral postures). We compared the validation results of the MP-CNN model across four backbone algorithms pretrained on ImageNet. The MP-CNN model underwent additional internal and external validation to measure generalization performance. The ResNet50-based MP-CNN model achieved the largest area under the receiver operating characteristic curve (AUROC), 91.4% (95% confidence interval [CI] 90.9-91.8%), for internal validation. The AUROCs of the MP-CNN model were 91.3% (95% CI 90.7-91.9%) and 79.5% (95% CI 78.2-80.8%) for the extra-internal and external validations, respectively. The MP-CNN-based heatmap offered a logical decision-making direction through optimized visualization. This model holds potential as a screening tool for LSS diagnosis, offering an explainable rationale for its predictions.
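The fusion of three pose-specific encoders into one multi-pose model can be sketched schematically. This is a NumPy toy, not the authors' architecture: each "encoder" here is a fixed random projection with ReLU standing in for a pretrained CNN backbone, and fusion is plain concatenation of the per-pose embeddings; all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the three pose-specific encoders (extension, flexion, neutral).
# A real MP-CNN would use pretrained CNN backbones; here each "encoder" is a
# fixed random projection followed by ReLU, mapping a 64-d input to 16-d.
POSES = ("extension", "flexion", "neutral")
pose_weights = {pose: rng.normal(size=(64, 16)) for pose in POSES}

def encode(x, W):
    return np.maximum(x @ W, 0.0)

def fuse_poses(views):
    """Concatenate the three per-pose embeddings into one feature vector
    that a shared classification head could consume."""
    return np.concatenate([encode(views[p], pose_weights[p]) for p in POSES])

views = {p: rng.normal(size=64) for p in POSES}
fused = fuse_poses(views)
print(fused.shape)  # (48,)
```

The design point is that the downstream head sees all three postures at once, so posture-dependent cues (e.g. dynamic changes between flexion and extension) survive into the shared representation.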
Affiliation(s)
- Seyeon Park
- Transdisciplinary Department of Medicine & Advanced Technology, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Jun-Hoe Kim
- Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Youngbin Ahn
- Transdisciplinary Department of Medicine & Advanced Technology, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Chang-Hyun Lee
- Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea
- Young-Gon Kim
- Transdisciplinary Department of Medicine & Advanced Technology, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Woon Tak Yuh
- Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Seung-Jae Hyun
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Chi Heon Kim
- Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea
- Ki-Jeong Kim
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Chun Kee Chung
- Department of Neurosurgery, Seoul National University Hospital, 101 Daehak-Ro, Jongro-Gu, Seoul, 03080, Republic of Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Brain and Cognitive Sciences, Seoul National University College of Natural Sciences, Seoul, Republic of Korea

27
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/21/2023] [Accepted: 11/04/2023] [Indexed: 11/19/2023]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues; hence, detecting cancer at an early stage is essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNN) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The ultimate goal of this paper is to provide comprehensive and insightful information to researchers with a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India

28
Hadzic A, Urschler M, Press JNA, Riedl R, Rugani P, Štern D, Kirnbauer B. Evaluating a Periapical Lesion Detection CNN on a Clinically Representative CBCT Dataset-A Validation Study. J Clin Med 2023; 13:197. [PMID: 38202204 PMCID: PMC10779652 DOI: 10.3390/jcm13010197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2023] [Revised: 12/20/2023] [Accepted: 12/25/2023] [Indexed: 01/12/2024] Open
Abstract
The aim of this validation study was to comprehensively evaluate the performance and generalization capability of a deep learning-based periapical lesion detection algorithm on a clinically representative cone-beam computed tomography (CBCT) dataset and test for non-inferiority. The evaluation involved 195 CBCT images of adult upper and lower jaws, where sensitivity and specificity metrics were calculated for all teeth, stratified by jaw, and stratified by tooth type. Furthermore, each lesion was assigned a periapical index score based on its size to enable a score-based evaluation. Non-inferiority tests were conducted with proportions of 90% for sensitivity and 82% for specificity. The algorithm achieved an overall sensitivity of 86.7% and a specificity of 84.3%. The non-inferiority test indicated the rejection of the null hypothesis for specificity but not for sensitivity. However, when excluding lesions with a periapical index score of one (i.e., very small lesions), the sensitivity improved to 90.4%. Despite the challenges posed by the dataset, the algorithm demonstrated promising results. Nevertheless, further improvements are needed to enhance the algorithm's robustness, particularly in detecting very small lesions and the handling of artifacts and outliers commonly encountered in real-world clinical scenarios.
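The reported metrics and the non-inferiority comparison can be reproduced schematically. The sketch below is not the authors' statistical code (the abstract does not specify their exact test); it computes sensitivity and specificity from a confusion matrix and a standard one-sided one-sample z statistic against a fixed margin (90% for sensitivity, 82% for specificity). The counts are illustrative, chosen only to land near the reported 86.7%/84.3%.

```python
import math

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def noninferiority_z(p_hat, n, margin):
    """One-sided one-sample z statistic for H1: true proportion > margin.
    A standard textbook formulation; the paper's actual test may differ."""
    se = math.sqrt(margin * (1 - margin) / n)
    return (p_hat - margin) / se

# illustrative counts, not the study's data
sens, spec = sensitivity_specificity(tp=130, fn=20, tn=135, fp=25)
print(round(sens, 3), round(spec, 3))  # 0.867 0.844

# z above the one-sided 5% critical value (about 1.645) would reject H0
z_spec = noninferiority_z(spec, n=160, margin=0.82)
```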
Affiliation(s)
- Arnela Hadzic
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, 8036 Graz, Austria
- Martin Urschler
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, 8036 Graz, Austria
- Jan-Niclas Aaron Press
- Division of Oral Surgery and Orthodontics, Medical University of Graz, 8010 Graz, Austria
- Regina Riedl
- Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, 8036 Graz, Austria
- Petra Rugani
- Division of Oral Surgery and Orthodontics, Medical University of Graz, 8010 Graz, Austria
- Darko Štern
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria
- Barbara Kirnbauer
- Division of Oral Surgery and Orthodontics, Medical University of Graz, 8010 Graz, Austria

29
Mascarenhas M, Ribeiro T, Afonso J, Mendes F, Cardoso P, Martins M, Ferreira J, Macedo G. Smart Endoscopy Is Greener Endoscopy: Leveraging Artificial Intelligence and Blockchain Technologies to Drive Sustainability in Digestive Health Care. Diagnostics (Basel) 2023; 13:3625. [PMID: 38132209 PMCID: PMC10743290 DOI: 10.3390/diagnostics13243625] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Revised: 11/14/2023] [Accepted: 11/25/2023] [Indexed: 12/23/2023] Open
Abstract
The surge in the implementation of artificial intelligence (AI) in recent years has permeated many aspects of our lives, and health care is no exception. While this technology can offer clear benefits, some of the problems associated with its use have also been recognised, for example, its environmental impact. Health care likewise has a significant environmental footprint and is a considerable source of greenhouse gas emissions. Whereas efforts are being made to reduce the footprint of AI tools, here we were specifically interested in how employing AI tools in gastroenterology departments, in particular in conjunction with capsule endoscopy, can reduce the carbon footprint associated with digestive health care while offering improvements, particularly in diagnostic accuracy. We address the different ways in which leveraging AI applications can reduce the carbon footprint associated with all types of capsule endoscopy examinations. Moreover, we consider how incorporating other technologies, such as blockchain, into digestive health care can help ensure the sustainability of this clinical speciality and, by extension, health care in general.
Affiliation(s)
- Miguel Mascarenhas
- Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Precision Medicine Unit, Department of Gastroenterology, Hospital São João, 4200-437 Porto, Portugal
- WGO Training Center, 4200-437 Porto, Portugal
- Tiago Ribeiro
- Precision Medicine Unit, Department of Gastroenterology, Hospital São João, 4200-437 Porto, Portugal
- WGO Training Center, 4200-437 Porto, Portugal
- João Afonso
- Precision Medicine Unit, Department of Gastroenterology, Hospital São João, 4200-437 Porto, Portugal
- WGO Training Center, 4200-437 Porto, Portugal
- Francisco Mendes
- Precision Medicine Unit, Department of Gastroenterology, Hospital São João, 4200-437 Porto, Portugal
- WGO Training Center, 4200-437 Porto, Portugal
- Pedro Cardoso
- Precision Medicine Unit, Department of Gastroenterology, Hospital São João, 4200-437 Porto, Portugal
- WGO Training Center, 4200-437 Porto, Portugal
- Miguel Martins
- Precision Medicine Unit, Department of Gastroenterology, Hospital São João, 4200-437 Porto, Portugal
- WGO Training Center, 4200-437 Porto, Portugal
- João Ferreira
- Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Guilherme Macedo
- Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Precision Medicine Unit, Department of Gastroenterology, Hospital São João, 4200-437 Porto, Portugal
- WGO Training Center, 4200-437 Porto, Portugal

30
Angthong C, Rungrattanawilai N, Pundee C. Artificial intelligence assistance in deciding management strategies for polytrauma and trauma patients. POLISH JOURNAL OF SURGERY 2023; 96:114-117. [PMID: 38348980 DOI: 10.5604/01.3001.0053.9857] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/15/2024]
Abstract
Introduction: Artificial intelligence (AI) is an emerging technology with vast potential for use in several fields of medicine. However, little is known about the application of AI in treatment decisions for patients with polytrauma. In this systematic review, we investigated the benefits and performance of AI in predicting the management of patients with polytrauma and trauma.
Methods: This systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Studies were extracted from the PubMed and Google Scholar databases from their inception until November 2022, using the search terms "Artificial intelligence," "polytrauma," and "decision." Seventeen articles were identified and screened for eligibility. Animal studies, review articles, systematic reviews, meta-analyses, and studies that did not involve polytrauma or severe trauma management decisions were excluded. Eight studies were eligible for final review.
Results: Eight studies focusing on patients with trauma, including two on military trauma, were included. The AI applications were mainly implemented for predictions and/or decisions on shock, bleeding, and blood transfusion. Few studies predicted death/survival. The identification of trauma patients using AI was proposed in a previous study. The overall performance of AI was good (six studies), excellent (one study), and acceptable (one study).
Discussion: AI demonstrated satisfactory performance in decision-making and management prediction in patients with polytrauma/severe trauma, especially in situations of shock/bleeding.
Importance: The present study serves as a basis for further research to develop practical AI applications for the management of patients with trauma.
Affiliation(s)
- Chayanin Angthong
- Division of Digital and Innovative Medicine, Faculty of Medicine, King Mongkut's Institute of Technology Ladkrabang (KMITL), Bangkok, Thailand
- Chaiyapruk Pundee
- Department of Orthopaedics, Samitivej Srinakarin Hospital, Bangkok Dusit Medical Services (BDMS), Bangkok, Thailand

31
Lim Y, Choi S, Oh HJ, Kim C, Song S, Kim S, Song H, Park S, Kim JW, Kim JW, Kim JH, Kang M, Kang SB, Kim DW, Oh HK, Lee HS, Lee KW. Artificial intelligence-powered spatial analysis of tumor-infiltrating lymphocytes for prediction of prognosis in resected colon cancer. NPJ Precis Oncol 2023; 7:124. [PMID: 37985785 PMCID: PMC10662481 DOI: 10.1038/s41698-023-00470-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Accepted: 10/24/2023] [Indexed: 11/22/2023] Open
Abstract
Tumor-infiltrating lymphocytes (TIL) have been suggested as an important prognostic marker in colorectal cancer, but their assessment usually requires additional tissue processing and interpretive effort. The aim of this study was to assess the clinical significance of artificial intelligence (AI)-powered spatial TIL analysis, using only a hematoxylin and eosin (H&E)-stained whole-slide image (WSI), for predicting prognosis in stage II-III colon cancer treated with surgery and adjuvant therapy. In this retrospective study, we used Lunit SCOPE IO, an AI-powered H&E WSI analyzer, to assess intratumoral TIL (iTIL) and tumor-related stromal TIL (sTIL) densities from WSIs of 289 patients. Patients with confirmed recurrence had significantly lower sTIL densities (mean 630.2/mm2 with recurrence vs. 1021.3/mm2 without, p < 0.001). Additionally, significantly higher recurrence rates were observed in patients whose sTIL or iTIL densities fell in the lowest quartile. Risk groups defined as high-risk (both iTIL and sTIL in the lowest quartile), low-risk (sTIL above the median), or intermediate-risk (neither high- nor low-risk) were predictive of recurrence and were independently associated with clinical outcomes after adjusting for other clinical factors. AI-powered TIL analysis can provide prognostic information in stage II/III colon cancer in a practical manner.
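The three-tier risk grouping used in this study can be written down directly. A minimal NumPy sketch, assuming per-patient iTIL and sTIL densities are already available; the cut-offs come from the cohort's own quartiles, and the inclusive "≤ quartile" boundary handling is an assumption, since the abstract does not specify it. The density values below are hypothetical.

```python
import numpy as np

def til_risk_groups(itil, stil):
    """High: both iTIL and sTIL in the lowest quartile.
    Low: sTIL above the median.  Intermediate: everything else."""
    itil, stil = np.asarray(itil, float), np.asarray(stil, float)
    i_q1 = np.quantile(itil, 0.25)
    s_q1 = np.quantile(stil, 0.25)
    s_med = np.median(stil)
    groups = np.full(itil.shape, "intermediate", dtype=object)
    groups[(itil <= i_q1) & (stil <= s_q1)] = "high"
    groups[stil > s_med] = "low"   # cannot overlap "high", since s_q1 <= s_med
    return groups

itil = [50, 120, 300, 420, 510, 640, 700, 880]      # hypothetical densities /mm2
stil = [200, 310, 630, 820, 1021, 1300, 1500, 1800]
print(til_risk_groups(itil, stil))
# ['high' 'high' 'intermediate' 'intermediate' 'low' 'low' 'low' 'low']
```

Note that "high" and "low" can never collide for the same patient: a sTIL value at or below the first quartile cannot also lie above the median.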
Affiliation(s)
- Songji Choi
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Hyeon Jeong Oh
- Department of Pathology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Chanyoung Kim
- Department of Pathology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Ji-Won Kim
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Jin Won Kim
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Jee Hyun Kim
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Minsu Kang
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Sung-Bum Kang
- Department of Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Korea
- Duck-Woo Kim
- Department of Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Korea
- Heung-Kwon Oh
- Department of Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Korea
- Hye Seung Lee
- Department of Pathology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Keun-Wook Lee
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea

32
Yousefpour Shahrivar R, Karami F, Karami E. Enhancing Fetal Anomaly Detection in Ultrasonography Images: A Review of Machine Learning-Based Approaches. Biomimetics (Basel) 2023; 8:519. [PMID: 37999160 PMCID: PMC10669151 DOI: 10.3390/biomimetics8070519] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Revised: 10/05/2023] [Accepted: 10/26/2023] [Indexed: 11/25/2023] Open
Abstract
Fetal development is a critical phase of prenatal care, demanding timely identification of anomalies in ultrasound images to safeguard the well-being of both the unborn child and the mother. Medical imaging has played a pivotal role in detecting fetal abnormalities and malformations. However, despite significant advances in ultrasound technology, accurate identification of irregularities in prenatal images continues to pose considerable challenges, often demanding substantial time and expertise from medical professionals. In this review, we survey recent developments in machine learning (ML) methods applied to fetal ultrasound images, focusing on algorithms employed for tasks such as image classification, object recognition, and segmentation. We highlight how these approaches can enhance ultrasound-based fetal anomaly detection, provide insights for future research and clinical implementation, and emphasize the need for further work in this domain to enable more effective ultrasound-based fetal anomaly detection.
Affiliation(s)
- Ramin Yousefpour Shahrivar
- Department of Biology, College of Convergent Sciences and Technologies, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Fatemeh Karami
- Department of Medical Genetics, Applied Biophotonics Research Center, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Ebrahim Karami
- Department of Engineering and Applied Sciences, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada

33
Rankovic N, Rankovic D, Lukic I, Savic N, Jovanovic V. Ensemble model for predicting chronic non-communicable diseases using Latin square extraction and fuzzy-artificial neural networks from 2013 to 2019. Heliyon 2023; 9:e22561. [PMID: 38034797 PMCID: PMC10687296 DOI: 10.1016/j.heliyon.2023.e22561] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Revised: 11/13/2023] [Accepted: 11/15/2023] [Indexed: 12/02/2023] Open
Abstract
Background: The presented study tracks the increase or decrease in the prevalence of seventeen different chronic non-communicable diseases in Serbia. This analysis considers factors such as region, age, and gender and is based on data from two national cross-sectional studies conducted in 2013 and 2019. The research aims to accurately identify the regions with the highest percentage of affected individuals, as well as their respective age and gender groups. The ultimate goal is to facilitate organized, free preventive screenings for these population categories within a very short time-frame in the future.
Materials and methods: The study analyzed two cross-sectional studies conducted between 2013 and 2019, using data obtained from the Institute of Public Health of Serbia. Both studies involved a total of 27,801 participants. The study compared the performance of Decision Tree and Support Vector Regressor models with artificial neural network (ANN) models that employed two encoding functions. The new methodology for the ANN-L36 model was based on artificial neural networks constructed using a Latin square (L36) design, incorporating Taguchi's robust design optimization.
Results: The results of the analysis from three different models have shown that cardiovascular diseases are the most prevalent illnesses among the population in Serbia, with hypertension as the leading condition in all regions, particularly among individuals aged 64 to 75 years, and more prevalent among females. In 2019, there was a decrease in the percentage of the leading disease, hypertension, compared to 2013, from 34.0% to 32.2%. The ANN-L36 model with a Fuzzy encoding function demonstrated the highest precision, achieving the smallest relative error of 0.1%.
Conclusion: To date, no studies have been conducted at the national level in Serbia to comprehensively track and identify chronic diseases in the manner proposed by this study. The model presented in this research will be implemented in practice and is set to contribute significantly to the future healthcare framework in Serbia, shaping and advancing the approach towards addressing these conditions. Furthermore, experimental evidence has shown that Taguchi's optimization approach yields the best results for identifying various chronic non-communicable diseases.
Affiliation(s)
- Nevena Rankovic
- Department of Cognitive Science and Artificial Intelligence, Tilburg School of Humanities and Digital Sciences, Tilburg University, Warandelaan 2, Tilburg, 5037 AB, Netherlands
- Dragica Rankovic
- Department of Mathematics, Statistics and Informatics, Faculty of Applied Sciences, Union University “Nikola Tesla”, Dusana Popovica 22, Nis, 18000, Serbia
- Igor Lukic
- Department of Preventive Medicine, Faculty of Medical Sciences, University of Kragujevac, Svetozara Markovica 69, Kragujevac, 34000, Serbia
- Nikola Savic
- Faculty of Business Valjevo, Singidunum University, Zeleznicka 5, Valjevo, 14000, Serbia
- Verica Jovanovic
- Institute of the Public Health “Dr. Milan Jovanovic Batut”, dr Subotica starijeg 5, Belgrade, 11000, Serbia

34
Mangileva D, Kursanov A, Katsnelson L, Solovyova O. Unsupervised deep network for image texture transformation: Improving the quality of cross-correlation analysis and mechanical vortex visualisation during cardiac fibrillation. Heliyon 2023; 9:e22207. [PMID: 38053873 PMCID: PMC10694166 DOI: 10.1016/j.heliyon.2023.e22207] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Revised: 10/26/2023] [Accepted: 11/06/2023] [Indexed: 12/07/2023] Open
Abstract
Visualisation of cardiac fibrillation plays a considerable role in cardiophysiological research and clinical applications. One way to image these phenomena is to register the mechanical displacement fields that reflect the trace of electrical activity. In this work, we extracted these fields using cross-correlation analysis from video of an exposed pig epicardium at the onset of fibrillation, recorded together with an electrocardiogram. However, the quality of the obtained displacement fields remains low because of the weak pixel heterogeneity of the frames, which prevents a clear view of phenomena such as the mechanical vortices that underlie the mechanical dysfunction of fibrillation. Applying chemical or mechanical markers to solve this problem can affect the course of natural processes and falsify the results. Therefore, we developed a novel unsupervised deep neural network scheme based on state-of-the-art positional encoding for a multilayer perceptron. From two consecutive frames, this network generates a pair of frames with a more heterogeneous pixel texture that is better suited to cross-correlation analysis. The novel network scheme was tested on synthetic image pairs with varying texture heterogeneity and displacement-field frequency, and it was compared with different filters on our cardiac tissue image dataset. The tests showed that the displacement fields obtained with our method are closer to the ground truth than those computed with cross-correlation analysis alone in the low-contrast case where filtering is impossible. Moreover, our model achieved the best results on our dataset compared with the popular CLAHE filter. As a result, using our approach, we were able for the first time to register a mechanical vortex on the epicardium clearly and continuously for several milliseconds at the onset of fibrillation.
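The displacement-field registration step relies on cross-correlation between consecutive frames. The following is a generic block-matching sketch in NumPy, not the authors' pipeline: for a single block it searches a small window in the next frame and returns the shift that maximises the zero-mean normalised cross-correlation. This is exactly the operation that degrades when the pixel texture is too homogeneous, since many candidate windows then score almost identically.

```python
import numpy as np

def block_displacement(frame_a, frame_b, y, x, block=8, search=4):
    """Shift (dy, dx) of the block at (y, x) between frame_a and frame_b,
    found by maximising zero-mean normalised cross-correlation."""
    tpl = frame_a[y:y + block, x:x + block].astype(float)
    tpl -= tpl.mean()
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = frame_b[y + dy:y + dy + block, x + dx:x + dx + block].astype(float)
            win -= win.mean()
            denom = np.linalg.norm(tpl) * np.linalg.norm(win)
            score = (tpl * win).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# synthetic textured frame shifted by (2, 1): the true displacement is recovered
rng = np.random.default_rng(1)
a = rng.normal(size=(32, 32))
b = np.roll(np.roll(a, 2, axis=0), 1, axis=1)
print(block_displacement(a, b, 10, 10))  # (2, 1)
```

Repeating this over a grid of blocks yields a dense displacement field; in a full PIV-style implementation the correlation would be computed via FFT rather than the explicit loop used here for clarity.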
Collapse
Affiliation(s)
- Daria Mangileva
- Department of Computational Mathematics and Computer Science, Ural Federal University, Ekaterinburg, 620002, Russia
| | - Alexander Kursanov
- Department of Computational Mathematics and Computer Science, Ural Federal University, Ekaterinburg, 620002, Russia
- Institute of Immunology and Physiology, Ural Branch of Russian Sciences Academy, Ekaterinburg, 620049, Russia
| | - Leonid Katsnelson
- Department of Computational Mathematics and Computer Science, Ural Federal University, Ekaterinburg, 620002, Russia
- Institute of Immunology and Physiology, Ural Branch of Russian Sciences Academy, Ekaterinburg, 620049, Russia
| | - Olga Solovyova
- Department of Computational Mathematics and Computer Science, Ural Federal University, Ekaterinburg, 620002, Russia
- Institute of Immunology and Physiology, Ural Branch of Russian Sciences Academy, Ekaterinburg, 620049, Russia
| |
Collapse
|
35
|
Nader R, Bourcier R, Autrusseau F. Using deep learning for an automatic detection and classification of the vascular bifurcations along the Circle of Willis. Med Image Anal 2023; 89:102919. [PMID: 37619447 DOI: 10.1016/j.media.2023.102919] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 06/01/2023] [Accepted: 07/31/2023] [Indexed: 08/26/2023]
Abstract
Most intracranial aneurysms (ICAs) occur on a specific portion of the cerebral vascular tree named the Circle of Willis (CoW); in particular, they mainly arise at fifteen of the major arterial bifurcations constituting this circular structure. Hence, for an efficient and timely diagnosis, it is critical to develop methods able to accurately recognize each Bifurcation of Interest (BoI): an automatic extraction of the bifurcations presenting the highest risk of developing an ICA would offer neuroradiologists a quick glance at the most alarming areas. Thanks to recent advances in artificial intelligence, deep learning has become the best-performing technology for many pattern recognition tasks, and various methods have been designed specifically for medical image analysis. This study intends to help neuroradiologists promptly locate any bifurcation presenting a high risk of ICA occurrence. It can be seen as a computer-aided diagnosis scheme in which artificial intelligence facilitates access to the regions of interest within the MRI. In this work, we propose a method for the fully automatic detection and recognition of the bifurcations of interest forming the Circle of Willis. Several neural network architectures were tested, and we thoroughly evaluate the bifurcation recognition rate.
Collapse
Affiliation(s)
- Rafic Nader
- Nantes Université, CHU Nantes, CNRS, INSERM, l'institut du thorax, F-44000 Nantes, France
| | - Romain Bourcier
- Nantes Université, CHU Nantes, CNRS, INSERM, l'institut du thorax, F-44000 Nantes, France
| | - Florent Autrusseau
- Nantes Université, CHU Nantes, CNRS, INSERM, l'institut du thorax, F-44000 Nantes, France; Nantes Université, Polytech'Nantes, LTeN, U-6607, Rue Ch. Pauc, 44306, Nantes, France.
| |
Collapse
|
36
|
Gong R, Shi J, Wang J, Wang J, Zhou J, Lu X, Du J, Shi J. Hybrid-supervised bidirectional transfer networks for computer-aided diagnosis. Comput Biol Med 2023; 165:107409. [PMID: 37672923 DOI: 10.1016/j.compbiomed.2023.107409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Revised: 08/10/2023] [Accepted: 08/27/2023] [Indexed: 09/08/2023]
Abstract
Medical imaging techniques are widely used for the diagnosis of various diseases. However, imaging-based diagnosis generally depends on the clinical skill of radiologists. Computer-aided diagnosis (CAD) can help radiologists improve diagnostic accuracy as well as consistency and reproducibility. Although the convolutional neural network (CNN) has shown its feasibility and effectiveness in CAD, it generally suffers from small sample sizes when training CAD models. Self-supervised learning (SSL) has shown its effectiveness in medical image analysis, especially when only limited training samples are available. However, the backbone of the downstream task sometimes cannot be well pre-trained in the conventional SSL framework due to limitations of the pretext task and the fine-tuning mechanism. In this work, an improved SSL framework, named Hybrid-supervised Bidirectional Transfer Networks (HBTN), is proposed to improve the performance of CAD models. Specifically, a novel Gray-Scale Image Mapping (GSIM) task is developed, which still takes the widely used image restoration task in SSL as the pretext task, but further embeds the class label information into it to improve the discriminative feature learning of the corresponding network model. HBTN then integrates two different network architectures, i.e., the image restoration network for the pretext task and the classification network for the downstream task, into a unified hybrid-supervised learning (HSL) framework. It trains both networks jointly and collaboratively transfers knowledge between them, thereby improving the performance of the downstream network. HBTN is evaluated on two medical image datasets for CAD tasks, and the experimental results indicate that it outperforms conventional SSL algorithms for CAD with limited training samples.
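The joint-training idea behind hybrid-supervised learning, optimising a weighted sum of a restoration (pretext) loss and a classification (downstream) loss, can be illustrated with a minimal framework-free sketch. The function name, the MSE/cross-entropy choice, and the weighting parameter `alpha` are illustrative assumptions, not details from the paper:

```python
import numpy as np

def hybrid_loss(recon, target_img, logits, label, alpha=0.5):
    """Weighted sum of an image-restoration (MSE) term and a
    classification (softmax cross-entropy) term for one example."""
    mse = np.mean((recon - target_img) ** 2)
    z = logits - logits.max()                 # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[label]
    return alpha * mse + (1 - alpha) * ce
```

Minimising such a combined objective trains the shared backbone on both tasks at once, rather than pre-training on the pretext task and fine-tuning afterwards.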
Collapse
Affiliation(s)
- Ronglin Gong
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
| | - Jing Shi
- Department of Radiology, Shanghai Children's Medical Center, Shanghai Jiao Tong University School of Medicine, China
| | - Jian Wang
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
| | - Jun Wang
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
| | - Jianwei Zhou
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
| | - Xiaofeng Lu
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
| | - Jun Du
- Department of Radiology, Shanghai Children's Medical Center, Shanghai Jiao Tong University School of Medicine, China.
| | - Jun Shi
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China.
| |
Collapse
|
37
|
Ahalya RK, Almutairi FM, Snekhalatha U, Dhanraj V, Aslam SM. RANet: a custom CNN model and quanvolutional neural network for the automated detection of rheumatoid arthritis in hand thermal images. Sci Rep 2023; 13:15638. [PMID: 37730717 PMCID: PMC10511741 DOI: 10.1038/s41598-023-42111-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 09/05/2023] [Indexed: 09/22/2023] Open
Abstract
Rheumatoid arthritis (RA) is an autoimmune disease that affects the small joints, and early prediction of RA is necessary for the treatment and management of the disease. The current work presents a deep learning and quantum computing-based automated diagnostic approach for RA in hand thermal imaging. The study's goals are (i) to develop a custom RANet model and compare its performance with pretrained models and a quanvolutional neural network (QNN) in distinguishing healthy subjects from RA patients, and (ii) to validate the performance of the custom model using a feature selection method and classification with machine learning (ML) classifiers. The study developed the custom RANet model and employed pre-trained models such as ResNet101V2, InceptionResNetV2, and DenseNet201 to classify RA patients and normal subjects. The deep features extracted from the RANet model were fed into the ML classifiers after the feature selection process. The RANet model, RANet + SVM, and the QNN model produced accuracies of 95%, 97%, and 93.33%, respectively, in classifying the healthy and RA groups. The developed RANet and QNN models based on thermal imaging could serve as an accurate automated diagnostic tool to differentiate between the RA and control groups.
Collapse
Affiliation(s)
- R K Ahalya
- Department of Biomedical Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, 603203, India
- Department of Biomedical Engineering, Easwari Engineering College, Ramapuram, Chennai, Tamil Nadu, India
| | - Fadiyah M Almutairi
- Department of Information Systems, College of Computer and Information Sciences, Majmaah University, 11952, Al Majmaah, Saudi Arabia
| | - U Snekhalatha
- Department of Biomedical Engineering, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, 603203, India.
| | - Varun Dhanraj
- Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, Canada
| | - Shabnam M Aslam
- Department of Information Technology, College of Computer and Information Sciences, Majmaah University, 11952, Al Majmaah, Saudi Arabia
| |
Collapse
|
38
|
Yu CY, Chammas M, Gurden H, Lin HH, Pain F. Design and validation of a convolutional neural network for fast, model-free blood flow imaging with multiple exposure speckle imaging. BIOMEDICAL OPTICS EXPRESS 2023; 14:4439-4454. [PMID: 37791260 PMCID: PMC10545206 DOI: 10.1364/boe.492739] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 06/15/2023] [Accepted: 07/10/2023] [Indexed: 10/05/2023]
Abstract
Multiple exposure speckle imaging has demonstrated improved accuracy over single exposure speckle imaging for the relative quantitation of blood flow in vivo. However, the calculation of blood flow maps relies on a pixelwise non-linear fit of a multi-parametric model to the speckle contrasts. This approach has two major drawbacks: first, it is computationally intensive and prevents real-time imaging; second, the mathematical model is not universal and should in principle be adapted to the type of blood vessel. We evaluated a model-free machine learning approach based on a convolutional neural network as an alternative to the non-linear fit. A network was designed and trained with annotated speckle contrast data from microfluidic experiments, and its performance was then compared to the non-linear fit approach on in vitro and in vivo data. The study demonstrates the potential of convolutional networks to provide relative blood flow maps from multiple exposure speckle data in real time.
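Speckle contrast itself is the ratio of the local standard deviation to the local mean of intensity, computed in a sliding window. A minimal NumPy sketch follows; the window size and implementation details are our own assumptions, not taken from the paper:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    """Local speckle contrast K = sigma / mean over a sliding window."""
    img = np.asarray(img, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')   # keep output same size as input
    windows = sliding_window_view(padded, (win, win))
    mean = windows.mean(axis=(-1, -2))
    std = windows.std(axis=(-1, -2))
    return std / np.maximum(mean, 1e-12)        # guard against zero mean
```

The multi-exposure method fits a flow model to such contrast maps acquired at several exposure times; the paper's network replaces that pixelwise fit.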
Collapse
Affiliation(s)
- Chao-Yueh Yu
- Chang-Gung University, Department of Medical Imaging and Radiological Sciences, Taoyuan City, Taiwan
| | - Marc Chammas
- Université Paris-Saclay, Institut d'Optique Graduate School, CNRS, Laboratoire Charles Fabry, 91127, Palaiseau, France
| | - Hirac Gurden
- Université Paris Cité, CNRS, Laboratoire Biologie Fonctionnelle et Adaptative, 75013, Paris, France
| | - Hsin-Hon Lin
- Chang-Gung University, Department of Medical Imaging and Radiological Sciences, Taoyuan City, Taiwan
- Department of Nuclear Medicine, Chang Gung Memorial Hospital, Linkou, Taiwan
| | - Frédéric Pain
- Université Paris-Saclay, Institut d'Optique Graduate School, CNRS, Laboratoire Charles Fabry, 91127, Palaiseau, France
| |
Collapse
|
39
|
Talwar V, Singh P, Mukhia N, Shetty A, Birur P, Desai KM, Sunkavalli C, Varma KS, Sethuraman R, Jawahar CV, Vinod PK. AI-Assisted Screening of Oral Potentially Malignant Disorders Using Smartphone-Based Photographic Images. Cancers (Basel) 2023; 15:4120. [PMID: 37627148 PMCID: PMC10452422 DOI: 10.3390/cancers15164120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 08/07/2023] [Accepted: 08/09/2023] [Indexed: 08/27/2023] Open
Abstract
The prevalence of oral potentially malignant disorders (OPMDs) and oral cancer is surging in low- and middle-income countries. A lack of resources for population screening in remote locations delays the detection of these lesions in the early stages and contributes to higher mortality and a poor quality of life. Digital imaging and artificial intelligence (AI) are promising tools for cancer screening. This study aimed to evaluate the utility of AI-based techniques for detecting OPMDs in the Indian population using photographic images of oral cavities captured with a smartphone. A dataset comprising 1120 suspicious and 1058 non-suspicious oral cavity photographs taken by trained front-line healthcare workers (FHWs) was used to evaluate deep learning models based on convolutional (DenseNet) and Transformer (Swin) architectures. The best-performing model was also tested on an additional independent test set comprising 440 photographs taken by untrained FHWs (set I). The DenseNet201 and Swin Transformer (base) models showed high classification performance, with F1-scores of 0.84 (CI 0.79-0.89) and 0.83 (CI 0.78-0.88) on the internal test set, respectively. However, model performance decreased on test set I, which has considerable variation in image quality, with the best F1-score of 0.73 (CI 0.67-0.78) obtained using DenseNet201. The proposed AI model has the potential to identify suspicious and non-suspicious oral lesions from photographic images. This simplified image-based AI solution can assist in screening, early detection, and prompt referral for OPMDs.
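The F1-score reported here is the harmonic mean of precision and recall, computed from true-positive, false-positive, and false-negative counts. A minimal illustration (the counts in the example are hypothetical, chosen only to reproduce an F1 of 0.84):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts giving precision = recall = 0.84:
print(round(f1_score(84, 16, 16), 2))  # 0.84
```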
Collapse
Affiliation(s)
- Vivek Talwar
- CVIT, International Institute of Information Technology, Hyderabad 500032, India; (V.T.); (C.V.J.)
| | - Pragya Singh
- INAI, International Institute of Information Technology, Hyderabad 500032, India; (P.S.); (K.S.V.)
| | - Nirza Mukhia
- Department of Oral Medicine and Radiology, KLE Society’s Institute of Dental Sciences, Bengaluru 560022, India; (N.M.); (P.B.)
| | | | - Praveen Birur
- Department of Oral Medicine and Radiology, KLE Society’s Institute of Dental Sciences, Bengaluru 560022, India; (N.M.); (P.B.)
| | - Karishma M. Desai
- iHUB-Data, International Institute of Information Technology, Hyderabad 500032, India;
| | | | - Konala S. Varma
- INAI, International Institute of Information Technology, Hyderabad 500032, India; (P.S.); (K.S.V.)
- Intel Technology India Private Limited, Bengaluru, India;
| | | | - C. V. Jawahar
- CVIT, International Institute of Information Technology, Hyderabad 500032, India; (V.T.); (C.V.J.)
| | - P. K. Vinod
- CCNSB, International Institute of Information Technology, Hyderabad 500032, India
| |
Collapse
|
40
|
Yu X, Ren J, Cui Y, Zeng R, Long H, Ma C. DRSN4mCPred: accurately predicting sites of DNA N4-methylcytosine using deep residual shrinkage network for diagnosis and treatment of gastrointestinal cancer in the precision medicine era. Front Med (Lausanne) 2023; 10:1187430. [PMID: 37215722 PMCID: PMC10192687 DOI: 10.3389/fmed.2023.1187430] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 04/05/2023] [Indexed: 05/24/2023] Open
Abstract
Introduction: DNA N4-methylcytosine (4mC) levels are elevated in patients with digestive system cancers, and changes in DNA 4mC levels may be related to the pathogenesis of these cancers. Identifying DNA 4mC sites is therefore an important step in analysing biological function and predicting cancer, and extracting accurate features from DNA sequences is the key to building an effective 4mC site prediction model. This study sought to develop a new predictive model, DRSN4mCPred, to improve the performance of DNA 4mC site prediction.
Methods: The model adopts multi-scale channel attention to extract features and attention feature fusion (AFF) to fuse them. To capture feature information more accurately and effectively, it uses a Deep Residual Shrinkage Network with channel-wise thresholds (DRSN-CW) to suppress noise-related features and obtain a more precise feature representation, thereby distinguishing 4mC from non-4mC sites in DNA. Overall, the predictive model incorporates an inverted residual block, a Multi-Scale Channel Attention Module (MS-CAM), a bi-directional long short-term memory network (Bi-LSTM), AFF, and DRSN-CW.
Results and Discussion: The results indicate that DRSN4mCPred predicts DNA 4mC sites across different species with very good performance. This work may support artificial-intelligence-based diagnosis and treatment of gastrointestinal cancer in the precision medicine era.
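Sequence models of this kind operate on numerically encoded DNA; a standard first step is one-hot encoding of each nucleotide. A minimal sketch, not code from the paper (mapping unknown symbols such as 'N' to an all-zero row is our own assumption):

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA sequence into a (len(seq), 4) matrix;
    unknown symbols are left as all-zero rows."""
    idx = {b: i for i, b in enumerate(BASES)}
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in idx:
            out[i, idx[base]] = 1.0
    return out
```

The resulting matrix is what attention and Bi-LSTM layers consume as their input channels.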
Collapse
Affiliation(s)
- Xia Yu
- School of Information and Communication Engineering, Hainan University, Haikou, Hainan, China
- School of Information Science and Technology, Hainan Normal University, Haikou, Hainan, China
| | - Jia Ren
- Industrial Design School, Shandong University of ART and Design, Jinan, Shandong, China
| | - Yani Cui
- School of Information and Communication Engineering, Hainan University, Haikou, Hainan, China
| | - Rao Zeng
- School of Information Science and Technology, Hainan Normal University, Haikou, Hainan, China
| | - Haixia Long
- School of Information Science and Technology, Hainan Normal University, Haikou, Hainan, China
| | - Cuihua Ma
- School of Information Science and Technology, Hainan Normal University, Haikou, Hainan, China
| |
Collapse
|
41
|
Pan H, Zhang M, Bai W, Li B, Wang H, Geng H, Zhao X, Zhang D, Li Y, Chen M. An Instance Segmentation Model Based on Deep Learning for Intelligent Diagnosis of Uterine Myomas in MRI. Diagnostics (Basel) 2023; 13:diagnostics13091525. [PMID: 37174917 PMCID: PMC10177878 DOI: 10.3390/diagnostics13091525] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2023] [Revised: 04/16/2023] [Accepted: 04/20/2023] [Indexed: 05/15/2023] Open
Abstract
Uterine myomas affect 70% of women of reproductive age, potentially impacting their fertility and health. Manual film reading is commonly used to identify uterine myomas, but it is time-consuming, laborious, and subjective. Clinical treatment requires the consideration of the positional relationship among the uterine wall, uterine cavity, and uterine myomas. However, due to their complex and variable shapes, the low contrast of adjacent tissues or organs, and indistinguishable edges, accurately identifying them in MRI is difficult. Our work addresses these challenges by proposing an instance segmentation network capable of automatically outputting the location, category, and masks of each organ and lesion. Specifically, we designed a new backbone that facilitates learning the shape features of object diversity, and filters out background noise interference. We optimized the anchor box generation strategy to provide better priors in order to enhance the process of bounding box prediction and regression. An adaptive iterative subdivision strategy ensures that the mask boundary details of objects are more realistic and accurate. We conducted extensive experiments to validate our network, which achieved better average precision (AP) results than those of state-of-the-art instance segmentation models. Compared to the baseline network, our model improved AP on the uterine wall, uterine cavity, and myomas by 8.8%, 8.4%, and 3.2%, respectively. Our work is the first to realize multiclass instance segmentation in uterine MRI, providing a convenient and objective reference for the clinical development of appropriate surgical plans, and has significant value in improving diagnostic efficiency and realizing the automatic auxiliary diagnosis of uterine myomas.
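The average precision (AP) figures cited above are built on the mask intersection-over-union (IoU) between a predicted instance mask and its ground truth: a prediction counts as correct only when the IoU exceeds a chosen threshold. A minimal NumPy sketch of that building block (generic, not the authors' code):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union between two boolean masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return np.logical_and(a, b).sum() / union
```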
Collapse
Affiliation(s)
- Haixia Pan
- College of Software, Beihang University, Beijing 100191, China
| | - Meng Zhang
- College of Software, Beihang University, Beijing 100191, China
| | - Wenpei Bai
- Department of Obstetrics and Gynecology, Beijing Shijitan Hospital, Capital Medical University, Beijing 100038, China
| | - Bin Li
- Department of MRI, Beijing Shijitan Hospital, Capital Medical University, Beijing 100038, China
| | - Hongqiang Wang
- College of Software, Beihang University, Beijing 100191, China
| | - Haotian Geng
- College of Software, Beihang University, Beijing 100191, China
| | - Xiaoran Zhao
- College of Software, Beihang University, Beijing 100191, China
| | - Dongdong Zhang
- College of Software, Beihang University, Beijing 100191, China
| | - Yanan Li
- College of Software, Beihang University, Beijing 100191, China
| | - Minghuang Chen
- Department of Obstetrics and Gynecology, Beijing Shijitan Hospital, Capital Medical University, Beijing 100038, China
| |
Collapse
|
42
|
Seo SY, Oh JS, Chung J, Kim SY, Kim JS. MR Template-Based Individual Brain PET Volumes-of-Interest Generation Neither Using MR nor Using Spatial Normalization. Nucl Med Mol Imaging 2023; 57:73-85. [PMID: 36998592 PMCID: PMC10043100 DOI: 10.1007/s13139-022-00772-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Revised: 07/01/2022] [Accepted: 08/29/2022] [Indexed: 10/10/2022] Open
Abstract
For more anatomically precise quantitation of mouse brain PET, spatial normalization (SN) of PET onto an MR template and subsequent template volumes-of-interest (VOI)-based analysis are commonly used. This creates a dependency on the corresponding MR and on the SN process, yet routine preclinical/clinical PET images cannot always be accompanied by a corresponding MR and relevant VOIs. To resolve this issue, we propose deep learning (DL)-based individual-brain-specific VOIs (cortex, hippocampus, striatum, thalamus, and cerebellum) generated directly from PET images, using inverse-spatial-normalization (iSN)-based VOI labels and a deep convolutional neural network (deep CNN). Our technique was applied to a mutated amyloid precursor protein and presenilin-1 mouse model of Alzheimer's disease. Eighteen mice underwent T2-weighted MRI and 18F-FDG PET scans before and after the administration of human immunoglobulin or antibody-based treatments. To train the CNN, PET images were used as inputs and MR iSN-based target VOIs as labels. Our method achieved good performance not only in VOI agreement (Dice similarity coefficient) but also in the correlation of mean counts and SUVR, and the CNN-based VOIs were highly concordant with the ground truth (the corresponding MR and MR template-based VOIs). Moreover, the performance metrics were comparable to those of VOIs generated by an MR-based deep CNN. In conclusion, we established a novel quantitative analysis method that generates individual-brain-space VOIs from MR template-based VOIs for PET image quantification, in both an MR-less and SN-less fashion. Supplementary Information The online version contains supplementary material available at 10.1007/s13139-022-00772-4.
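The Dice similarity coefficient used above to measure VOI agreement is defined as twice the overlap divided by the total size of the two masks. A minimal NumPy sketch (generic, not the authors' implementation; returning 1.0 for two empty masks is our own convention):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom
```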
Collapse
Affiliation(s)
- Seung Yeon Seo
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
- Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
| | - Jungsu S. Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
| | - Jinwha Chung
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
| | - Seog-Young Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
| | - Jae Seung Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympicro-43 Rd, Songpa-gu, Seoul, 05505 South Korea
| |
Collapse
|
43
|
Wen G, Shim V, Holdsworth SJ, Fernandez J, Qiao M, Kasabov N, Wang A. Machine Learning for Brain MRI Data Harmonisation: A Systematic Review. Bioengineering (Basel) 2023; 10:bioengineering10040397. [PMID: 37106584 PMCID: PMC10135601 DOI: 10.3390/bioengineering10040397] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 03/16/2023] [Accepted: 03/21/2023] [Indexed: 04/29/2023] Open
Abstract
BACKGROUND: Magnetic Resonance Imaging (MRI) data collected from multiple centres can be heterogeneous due to factors such as the scanner used and the site location. To reduce this heterogeneity, the data need to be harmonised. In recent years, machine learning (ML) has been used to solve different types of problems related to MRI data, showing great promise.
OBJECTIVE: This study explores how well various ML algorithms perform in harmonising MRI data, both implicitly and explicitly, by summarising the findings of relevant peer-reviewed articles. Furthermore, it provides guidelines for the use of current methods and identifies potential future research directions.
METHOD: This review covers articles published in the PubMed, Web of Science, and IEEE databases up to June 2022. Data from the studies were analysed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria, and quality assessment questions were derived to assess the quality of the included publications.
RESULTS: A total of 41 articles published between 2015 and 2022 were identified and analysed. MRI data were found to be harmonised either implicitly (n = 21) or explicitly (n = 20). Three MRI modalities were identified: structural MRI (n = 28), diffusion MRI (n = 7), and functional MRI (n = 6).
CONCLUSION: Various ML techniques have been employed to harmonise different types of MRI data. There is currently a lack of consistent evaluation methods and metrics across studies, and it is recommended that this issue be addressed in future work. Harmonisation of MRI data using ML shows promise in improving performance on downstream ML tasks, while caution should be exercised when using ML-harmonised data for direct interpretation.
Collapse
Affiliation(s)
- Grace Wen
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
| | - Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
| | - Samantha Jane Holdsworth
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
- Mātai Medical Research Institute, Tairāwhiti-Gisborne 4010, New Zealand
- Department of Anatomy & Medical Imaging, Faculty of Medical and Health Sciences, University of Auckland, Auckland 1142, New Zealand
| | - Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
| | - Miao Qiao
- Department of Computer Science, University of Auckland, Auckland 1142, New Zealand
| | - Nikola Kasabov
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland 1010, New Zealand
- Intelligent Systems Research Centre, Ulster University, Londonderry BT52 1SA, UK
- Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
| | - Alan Wang
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
- Department of Anatomy & Medical Imaging, Faculty of Medical and Health Sciences, University of Auckland, Auckland 1142, New Zealand
| |
Collapse
|
44
|
Goceri E. Medical image data augmentation: techniques, comparisons and interpretations. Artif Intell Rev 2023; 56:1-45. [PMID: 37362888 PMCID: PMC10027281 DOI: 10.1007/s10462-023-10453-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/27/2023] [Indexed: 03/29/2023]
Abstract
Designing deep learning-based methods for medical images has always been an attractive research area for assisting clinicians in rapid examination and accurate diagnosis. Such methods need large datasets covering all relevant variations during training. Medical images, however, are scarce for several reasons: too few patients with certain diseases, patients declining to allow their images to be used, a lack of equipment, or the inability to obtain images meeting the desired criteria. This leads to biased datasets, overfitting, and inaccurate results. Data augmentation is a common remedy, and various augmentation techniques have been applied to different image types in the literature. However, it is unclear which augmentation technique is most effective for which image type, since the literature handles different diseases, uses different network architectures, and trains and tests those architectures on different numbers of samples. Therefore, this work examines the augmentation techniques used to improve deep learning-based diagnosis of diseases in different organs (brain, lung, breast, and eye) from different imaging modalities (MR, CT, mammography, and fundoscopy). The most commonly used augmentation methods are also implemented, and their effectiveness in classification with a deep network is discussed based on quantitative performance evaluations. Experiments indicate that augmentation techniques should be chosen carefully according to image type.
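Among the most common augmentations are flips and 90-degree rotations, which only rearrange pixels. A generic NumPy illustration, not code from the paper (and, as the paper's conclusion suggests, whether a given transform is safe must be checked per image type):

```python
import numpy as np

def augment(img, rng):
    """Randomly flip and rotate an image by multiples of 90 degrees;
    pixel values are only rearranged, never altered."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    if rng.random() < 0.5:
        img = np.flipud(img)
    return np.rot90(img, k=int(rng.integers(0, 4)))
```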
Collapse
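The geometric and noise augmentations this review compares can be sketched in a few lines of numpy; the specific operations and parameters below (flip probability, rotation angles, noise level) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> np.ndarray:
    """Apply a random combination of simple augmentations to a 2-D image in [0, 1]."""
    out = image
    if rng.random() < 0.5:                      # random horizontal flip
        out = np.fliplr(out)
    out = np.rot90(out, rng.integers(0, 4))     # rotation by 0/90/180/270 degrees
    noise = rng.normal(0.0, 0.01, out.shape)    # mild additive Gaussian noise
    return np.clip(out + noise, 0.0, 1.0)

def expand(images, copies=4):
    """Enlarge a small dataset by generating several augmented copies per image."""
    return [augment(img) for img in images for _ in range(copies)]
```

Whether such transforms preserve the diagnostic content depends on the modality (e.g., left-right flips may be invalid for organs with fixed laterality), which is the review's central caution.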
Affiliation(s)
- Evgin Goceri
- Department of Biomedical Engineering, Engineering Faculty, Akdeniz University, Antalya, Turkey
| |
Collapse
|
45
|
Innovation in Hyperinsulinemia Diagnostics with ANN-L(atin square) Models. Diagnostics (Basel) 2023; 13:diagnostics13040798. [PMID: 36832286 PMCID: PMC9955502 DOI: 10.3390/diagnostics13040798] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Revised: 02/11/2023] [Accepted: 02/12/2023] [Indexed: 02/22/2023] Open
Abstract
Hyperinsulinemia is a condition characterized by excessively high levels of insulin in the bloodstream. It can exist for many years without any symptoms. The research presented in this paper was conducted from 2019 to 2022 in cooperation with a health center in Serbia as a large cross-sectional observational study of adolescents of both genders, using datasets collected in the field. Previously used analytical approaches that integrated relevant clinical, hematological, biochemical, and other variables could not identify potential risk factors for developing hyperinsulinemia. This paper presents several models using machine learning (ML) algorithms such as naive Bayes, decision tree, and random forest, and compares them with a new methodology based on artificial neural networks using Taguchi's orthogonal vector plans (ANN-L), a special selection of Latin squares. The experimental part of this study showed that the ANN-L models achieved an accuracy of 99.5% in fewer than seven iterations. Furthermore, the study provides valuable insights into how much each risk factor contributes to the occurrence of hyperinsulinemia in adolescents, which is important for more precise and straightforward medical diagnoses. Preventing the risk of hyperinsulinemia in this age group is crucial for the well-being of the adolescents and of society as a whole.
Collapse
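The Latin squares underlying Taguchi-style orthogonal plans can be built with a simple cyclic construction; this is a generic sketch of the combinatorial object, not a reproduction of the paper's ANN-L training procedure.

```python
def latin_square(n: int):
    """Cyclic n x n Latin square: each symbol 0..n-1 appears exactly once
    in every row and every column."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin(square):
    """Verify the Latin-square property (a design-of-experiments sanity check)."""
    n = len(square)
    symbols = list(range(n))
    rows_ok = all(sorted(row) == symbols for row in square)
    cols_ok = all(sorted(square[i][j] for i in range(n)) == symbols for j in range(n))
    return rows_ok and cols_ok
```

In a Taguchi-style screening, each row of such a square indexes one combination of factor levels, so n factors can be screened in n runs instead of n^n.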
|
46
|
Oh S, Kang SR, Oh IJ, Kim MS. Deep learning model integrating positron emission tomography and clinical data for prognosis prediction in non-small cell lung cancer patients. BMC Bioinformatics 2023; 24:39. [PMID: 36747153 PMCID: PMC9903435 DOI: 10.1186/s12859-023-05160-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Accepted: 01/25/2023] [Indexed: 02/08/2023] Open
Abstract
BACKGROUND Lung cancer is the leading cause of cancer-related deaths worldwide. The majority of lung cancers are non-small cell lung cancer (NSCLC), accounting for approximately 85% of all lung cancer types. The Cox proportional hazards model (CPH), the standard method for survival analysis, has several limitations. The purpose of our study was to improve survival prediction in patients with NSCLC by incorporating prognostic information from F-18 fluorodeoxyglucose positron emission tomography (FDG PET) images into a traditional survival prediction model using clinical data. RESULTS The multimodal deep learning model showed the best performance, with a C-index of 0.756 and a mean absolute error of 399 days under five-fold cross-validation, followed by ResNet3D for PET (0.749 and 405 days) and CPH for clinical data (0.747 and 583 days). CONCLUSION The proposed deep learning-based integrative model combining the two modalities improved survival prediction in patients with NSCLC.
Collapse
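The C-index reported above can be computed with Harrell's pairwise definition; the sketch below is a plain-Python illustration of the metric (function name and tie handling are our own, not the authors' implementation).

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable pairs in which the
    subject with the shorter survival time also has the higher risk score.

    A pair is comparable only if the subject with the shorter observed
    time actually had an event (events[i] == 1, i.e. was not censored).
    """
    concordant = 0.0
    comparable = 0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:          # order so that i has the shorter time
            i, j = j, i
        if times[i] == times[j] or not events[i]:
            continue                     # tied times or censored short time
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5            # ties in risk count as half
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering, which puts the reported 0.756 in context.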
Affiliation(s)
- Seungwon Oh
- Department of Mathematics and Statistics, Chonnam National University, Gwangju, Republic of Korea
| | - Sae-Ryung Kang
- Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, Hwasun, Jeonnam, Republic of Korea
| | - In-Jae Oh
- Department of Internal Medicine, Chonnam National University Medical School and Hwasun Hospital, Hwasun, Jeonnam, Republic of Korea.
| | - Min-Soo Kim
- Department of Mathematics and Statistics, Chonnam National University, Gwangju, Republic of Korea.
| |
Collapse
|
47
|
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 12/06/2022] [Accepted: 12/27/2022] [Indexed: 12/29/2022]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which can accomplish two or more tasks simultaneously, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits: improved performance, enhanced generalizability, and reduced overall computational cost. This review focuses on the advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review the representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has flourished and demonstrated outstanding performance in many tasks, performance gaps remain in others; accordingly, we outline the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top dice score of 0.51 and top recall of 0.55 achieved by the cascaded MTDL model indicate that further research is needed to improve the performance of current models.
Collapse
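The parallel MTDL layout the review describes (a shared encoder feeding several task-specific heads) can be sketched with a plain numpy forward pass; the layer sizes, task names, and equal loss weighting below are illustrative assumptions, not taken from any reviewed network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared encoder weights and two task-specific heads (parallel MTDL layout).
W_shared = rng.normal(size=(16, 8))
W_seg = rng.normal(size=(8, 4))     # head 1: e.g. segmentation logits
W_cls = rng.normal(size=(8, 2))     # head 2: e.g. classification logits

def forward(x):
    """Both heads read the same shared representation h."""
    h = np.maximum(x @ W_shared, 0.0)   # shared ReLU features
    return h @ W_seg, h @ W_cls

def multitask_loss(y_seg, y_cls, t_seg, t_cls, w=0.5):
    """Weighted sum of per-task losses; w balances the two tasks."""
    return w * np.mean((y_seg - t_seg) ** 2) + (1 - w) * np.mean((y_cls - t_cls) ** 2)
```

Because the encoder gradients receive signal from both task losses, correlated tasks regularize each other, which is the mechanism behind the reciprocal benefits the review describes.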
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia.
| | - Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
| | - Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.
| |
Collapse
|
48
|
Fuentes AM, Narayan A, Milligan K, Lum JJ, Brolo AG, Andrews JL, Jirasek A. Raman spectroscopy and convolutional neural networks for monitoring biochemical radiation response in breast tumour xenografts. Sci Rep 2023; 13:1530. [PMID: 36707535 PMCID: PMC9883395 DOI: 10.1038/s41598-023-28479-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Accepted: 01/19/2023] [Indexed: 01/29/2023] Open
Abstract
Tumour cells exhibit altered metabolic pathways that lead to radiation resistance and disease progression. Raman spectroscopy (RS) is a label-free optical modality that can monitor post-irradiation biomolecular signatures in tumour cells and tissues. Convolutional neural networks (CNNs) perform automated feature extraction directly from data, with classification accuracy exceeding that of traditional machine learning in cases where data are abundant and feature extraction is challenging. We are interested in developing a CNN-based predictive model to characterize clinical tumour response to radiation therapy based on the degree of radiosensitivity or radioresistance. In this work, a CNN architecture is built for identifying post-irradiation spectral changes in Raman spectra of tumour tissue. The model was trained to classify irradiated versus non-irradiated tissue using Raman spectra of breast tumour xenografts. The CNN effectively classified the tissue spectra, with accuracies exceeding 92.1% for data collected 3 days post-irradiation, and 85.0% at day 1 post-irradiation. Furthermore, the CNN was evaluated using a leave-one-out (mouse, section, or Raman map) validation approach to investigate its generalization to new test subjects. The CNN retained good predictive accuracy (average accuracies 83.7%, 91.4%, and 92.7%, respectively) when little to no information for a specific subject was given during training. Finally, the classification performance of the CNN was compared to that of a previously developed model based on group and basis restricted non-negative matrix factorization and random forest (GBR-NMF-RF) classification. We found that the CNN yielded higher classification accuracy, sensitivity, and specificity in mice assessed 3 days post-irradiation, as compared with the GBR-NMF-RF approach. Overall, the CNN can detect biochemical spectral changes in tumour tissue at an early time point following irradiation, without the need for previous manual feature extraction. This study lays the foundation for developing a predictive framework for patient radiation response monitoring.
Collapse
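The leave-one-out (mouse, section, or Raman map) evaluation described above is a leave-one-group-out split; a minimal sketch, with the grouping labels below as illustrative assumptions rather than the study's actual identifiers:

```python
def leave_one_group_out(groups):
    """Yield (train_idx, test_idx) pairs, holding out one whole group at a time.

    Keeping all spectra from one mouse/section/map together in the test fold
    prevents leakage of subject-specific signal into training.
    """
    for g in sorted(set(groups)):
        test = [i for i, grp in enumerate(groups) if grp == g]
        train = [i for i, grp in enumerate(groups) if grp != g]
        yield train, test
```

Splitting at the spectrum level instead would place near-duplicate spectra from the same animal in both folds and inflate the reported accuracy, which is why the authors evaluate per mouse, section, and map.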
Affiliation(s)
- Alejandra M Fuentes
- Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
| | - Apurva Narayan
- Department of Computer Science, Western University, London, Canada
- Department of Computer Science, The University of British Columbia Okanagan Campus, Kelowna, Canada
| | - Kirsty Milligan
- Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada
| | - Julian J Lum
- Department of Biochemistry and Microbiology, The University of Victoria, Victoria, Canada
| | - Alex G Brolo
- Department of Chemistry, The University of Victoria, Victoria, Canada
| | - Jeffrey L Andrews
- Department of Statistics, The University of British Columbia Okanagan Campus, Kelowna, Canada
| | - Andrew Jirasek
- Department of Physics, The University of British Columbia Okanagan Campus, Kelowna, Canada.
| |
Collapse
|
49
|
Wang S, Wang S, Wang Z. A survey on multi-omics-based cancer diagnosis using machine learning with the potential application in gastrointestinal cancer. Front Med (Lausanne) 2023; 9:1109365. [PMID: 36703893 PMCID: PMC9871466 DOI: 10.3389/fmed.2022.1109365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2022] [Accepted: 12/28/2022] [Indexed: 01/12/2023] Open
Abstract
Gastrointestinal cancer is becoming increasingly common and leads to over 3 million deaths every year. No typical symptoms appear in the early stage of gastrointestinal cancer, posing a significant challenge to the diagnosis and treatment of patients. Many patients are already in the middle or late stages of the disease by the time they feel discomfort, and unfortunately most of them will die of it. Recently, various artificial intelligence techniques, such as machine learning based on multi-omics, have been presented for cancer diagnosis and treatment in the era of precision medicine. This paper provides a survey of multi-omics-based cancer diagnosis using machine learning, with potential application in gastrointestinal cancer. In particular, we make a comprehensive summary and analysis from the perspective of multi-omics datasets, task types, and multi-omics-based integration methods. Furthermore, this paper points out the remaining challenges of multi-omics-based cancer diagnosis using machine learning and discusses future topics.
Collapse
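One integration strategy commonly covered by such surveys is early fusion: scaling each omics block per feature and concatenating them into one matrix. The numpy sketch below is a generic illustration, not a specific method from this survey.

```python
import numpy as np

def early_fusion(omics_blocks):
    """Z-score each omics matrix per feature, then concatenate along features.

    Each block is (n_samples, n_features_k); per-block scaling keeps one
    high-variance omics layer from dominating the fused representation.
    """
    scaled = []
    for X in omics_blocks:
        mu = X.mean(axis=0)
        sd = X.std(axis=0) + 1e-8   # avoid division by zero for constant features
        scaled.append((X - mu) / sd)
    return np.concatenate(scaled, axis=1)
```

The fused matrix can then be passed to any single-view classifier; intermediate and late fusion instead combine per-omics representations or per-omics predictions, respectively.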
Affiliation(s)
- Suixue Wang
- School of Information and Communication Engineering, Hainan University, Haikou, China
| | - Shuling Wang
- Department of Neurology, Affiliated Haikou Hospital of Xiangya School of Medicine, Central South University, Haikou, China
| | - Zhengxia Wang
- School of Computer Science and Technology, Hainan University, Haikou, China
| |
Collapse
|
50
|
Zhang X, Shams SP, Yu H, Wang Z, Zhang Q. A Similarity Measure-Based Approach Using RS-fMRI Data for Autism Spectrum Disorder Diagnosis. Diagnostics (Basel) 2023; 13:diagnostics13020218. [PMID: 36673028 PMCID: PMC9858445 DOI: 10.3390/diagnostics13020218] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 12/24/2022] [Accepted: 12/28/2022] [Indexed: 01/11/2023] Open
Abstract
Autism spectrum disorder (ASD) is a lifelong neurological condition that seriously reduces patients' quality of life. Early diagnosis generally improves the quality of life of children with ASD. Current methods for ASD diagnosis based on samples from multiple sites generalize poorly due to the heterogeneity of the multi-site data. To address this problem, this paper presents a similarity measure-based approach for ASD diagnosis. Specifically, a few-shot learning strategy is used to measure potential similarities in the RS-fMRI data distributions, and a similarity function for samples from multiple sites is trained to enhance generalization. On the ABIDE database, the presented approach is compared with representative methods, such as SVM and random forest, in terms of accuracy, precision, and F1 score. The experimental results show that the proposed method outperforms the comparison methods on these indicators to varying degrees. For example, its accuracy on the TRINITY site is more than 5% higher than that of the comparison methods, demonstrating that the presented approach achieves better generalization than the compared methods.
Collapse
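The similarity-measure idea can be illustrated with a prototypical nearest-class-mean classifier using cosine similarity, a standard few-shot building block; this generic sketch is not the paper's trained similarity function, and the toy feature vectors below are assumptions.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def prototype_predict(support_x, support_y, query):
    """Classify a query by its cosine similarity to per-class mean embeddings.

    support_x: list of feature vectors, support_y: their class labels.
    """
    classes = sorted(set(support_y))
    protos = {c: np.mean([x for x, y in zip(support_x, support_y) if y == c], axis=0)
              for c in classes}
    return max(classes, key=lambda c: cosine(query, protos[c]))
```

Because prediction depends only on relative similarity rather than absolute feature values, such measures can be less sensitive to site-specific shifts, which motivates similarity-based approaches to multi-site heterogeneity.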
Affiliation(s)
- Xiangfei Zhang
- School of Cyberspace Security, Hainan University, Haikou 570228, China
| | - Shayel Parvez Shams
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
| | - Hang Yu
- School of Computer Science and Technology, Hainan University, Haikou 570228, China
| | - Zhengxia Wang
- School of Computer Science and Technology, Hainan University, Haikou 570228, China
| | - Qingchen Zhang
- School of Computer Science and Technology, Hainan University, Haikou 570228, China
| |
Collapse
|