1
Liu X, Shi J, Jiao Y, An J, Tian J, Yang Y, Zhuo L. Integrated multi-omics with machine learning to uncover the intricacies of kidney disease. Brief Bioinform 2024; 25:bbae364. [PMID: 39082652] [PMCID: PMC11289682] [DOI: 10.1093/bib/bbae364] [Received: 03/22/2024] [Revised: 06/20/2024] [Accepted: 07/17/2024] [Indexed: 08/03/2024] Open Access
Abstract
The development of omics technologies has driven a profound expansion in the scale of biological data and an increase in its internal complexity, prompting the use of machine learning (ML) as a powerful toolkit for extracting knowledge and understanding underlying biological patterns. Kidney disease represents one of the major growing global health threats, with intricate pathogenic mechanisms and a lack of precise molecular pathology-based therapeutic modalities. Accordingly, there is a need for advanced high-throughput approaches to capture implicit molecular features and complement current experiments and statistics. This review delineates strategies for integrating multi-omics data with appropriate ML methods, highlighting key clinical translational scenarios: predicting disease progression risks to improve medical decision-making, comprehensively understanding disease molecular mechanisms, and applying image recognition in renal digital pathology. Examining the benefits and challenges of current integration efforts is expected to shed light on the complexity of kidney disease and advance clinical practice.
Affiliation(s)
- Li Zhuo
- Corresponding author. Department of Nephrology, China-Japan Friendship Hospital, Beijing 100029, China; China-Japan Friendship Clinic Medical College, Beijing University of Chinese Medicine, 100029 Beijing, China.

2
Rong R, Denton K, Jin KW, Quan P, Wen Z, Kozlitina J, Lyon S, Wang A, Wise CA, Beutler B, Yang DM, Li Q, Rios JJ, Xiao G. Deep Learning-Based Automated Measurement of Murine Bone Length in Radiographs. Bioengineering (Basel) 2024; 11:670. [PMID: 39061752] [PMCID: PMC11273961] [DOI: 10.3390/bioengineering11070670] [Received: 04/30/2024] [Revised: 06/18/2024] [Accepted: 06/18/2024] [Indexed: 07/28/2024] Open Access
Abstract
Genetic mouse models of skeletal abnormalities have demonstrated promise in the identification of phenotypes relevant to human skeletal diseases. Traditionally, phenotypes are assessed by manually examining radiographs, a tedious and potentially error-prone process. In response, this study developed a deep learning-based model that streamlines the measurement of murine bone lengths from radiographs in an accurate and reproducible manner. A bone detection and measurement pipeline utilizing the Keypoint R-CNN algorithm with an EfficientNet-B3 feature extraction backbone was developed to detect murine bone positions and measure their lengths. The pipeline was developed using 94 X-ray images with expert annotations on the start and end position of each murine bone. The accuracy of the pipeline was evaluated on an independent test dataset of 592 images and further validated on a previously published dataset of 21,300 mouse radiographs. The results showed that the model performed comparably to humans in measuring tibia and femur lengths (R2 > 0.92, p-value = 0) and significantly outperformed humans in measuring pelvic lengths in terms of precision and consistency. Furthermore, the model improved the precision and consistency of genetic association mapping results, identifying significant associations between genetic mutations and skeletal phenotypes with reduced variability. This study demonstrates the feasibility and efficiency of automated murine bone length measurement in the identification of mouse models of abnormal skeletal phenotypes.
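The measurement step such a pipeline automates reduces to simple geometry once keypoints are predicted: a bone's length is the Euclidean distance between its detected start and end points, and agreement with human readers can be summarized with R². A minimal pure-Python sketch; the keypoint coordinates and reader measurements below are hypothetical, not from the paper's dataset:

```python
import math

def bone_length(start, end):
    """Length in pixels between the predicted start/end keypoints of a bone."""
    return math.hypot(end[0] - start[0], end[1] - start[1])

def r_squared(reference, predicted):
    """Coefficient of determination between reader and model measurements."""
    mean = sum(reference) / len(reference)
    ss_tot = sum((r - mean) ** 2 for r in reference)
    ss_res = sum((r - p) ** 2 for r, p in zip(reference, predicted))
    return 1.0 - ss_res / ss_tot

# Hypothetical keypoint pairs for three bones (pixel coordinates)
keypoints = [((10.0, 12.0), (10.0, 112.0)),
             ((5.0, 8.0), (5.0, 103.0)),
             ((0.0, 0.0), (60.0, 80.0))]
model_lengths = [bone_length(s, e) for s, e in keypoints]
reader_lengths = [101.0, 94.0, 99.0]  # hypothetical human measurements
print(model_lengths)  # [100.0, 95.0, 100.0]
print(round(r_squared(reader_lengths, model_lengths), 3))
```

In the actual pipeline the keypoints come from the Keypoint R-CNN head; pixel lengths would additionally be converted to millimeters using the radiograph's calibration.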
Affiliation(s)
- Ruichen Rong
- Quantitative Biomedical Research Center, Peter O’Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Kristin Denton
- Center for Pediatric Bone Biology and Translational Research, Scottish Rite for Children, Dallas, TX 75219, USA
- Kevin W. Jin
- Quantitative Biomedical Research Center, Peter O’Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Peiran Quan
- Quantitative Biomedical Research Center, Peter O’Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Zhuoyu Wen
- Quantitative Biomedical Research Center, Peter O’Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Julia Kozlitina
- McDermott Center for Human Growth and Development, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Stephen Lyon
- Center for the Genetics of Host Defense, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Aileen Wang
- Quantitative Biomedical Research Center, Peter O’Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Carol A. Wise
- Center for Pediatric Bone Biology and Translational Research, Scottish Rite for Children, Dallas, TX 75219, USA
- McDermott Center for Human Growth and Development, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Department of Orthopaedic Surgery, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Bruce Beutler
- Center for the Genetics of Host Defense, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Donghan M. Yang
- Quantitative Biomedical Research Center, Peter O’Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Qiwei Li
- Department of Mathematical Sciences, The University of Texas at Dallas, Richardson, TX 75083, USA
- Jonathan J. Rios
- Center for Pediatric Bone Biology and Translational Research, Scottish Rite for Children, Dallas, TX 75219, USA
- McDermott Center for Human Growth and Development, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Department of Orthopaedic Surgery, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Simmons Comprehensive Cancer Center, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Guanghua Xiao
- Quantitative Biomedical Research Center, Peter O’Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Simmons Comprehensive Cancer Center, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Department of Bioinformatics, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA

3
Yıldız Potter İ, Yeritsyan D, Rodriguez EK, Wu JS, Nazarian A, Vaziri A. Detection and Localization of Spine Disorders from Plain Radiography. J Imaging Inform Med 2024:10.1007/s10278-024-01175-x. [PMID: 38937344] [DOI: 10.1007/s10278-024-01175-x] [Received: 04/08/2024] [Revised: 05/16/2024] [Accepted: 06/09/2024] [Indexed: 06/29/2024]
Abstract
Spine disorders can cause severe functional limitations, including back pain, decreased pulmonary function, and increased mortality risk. Plain radiography is the first-line imaging modality for diagnosing suspected spine disorders. Nevertheless, radiographic appearance is not always sufficient due to highly variable patient and imaging parameters, which can lead to misdiagnosis or delayed diagnosis. Employing an accurate automated detection model can alleviate the workload of clinical experts, thereby reducing human errors, facilitating earlier detection, and improving diagnostic accuracy. To this end, deep learning-based computer-aided diagnosis (CAD) tools have significantly outperformed the accuracy of traditional CAD software. Motivated by these observations, we proposed a deep learning-based approach for end-to-end detection and localization of spine disorders from plain radiographs. In doing so, we took the first steps in employing state-of-the-art transformer networks to differentiate images of multiple spine disorders from healthy counterparts and localize the identified disorders, focusing on vertebral compression fractures (VCF) and spondylolisthesis due to their high prevalence and potential severity. The VCF dataset comprised 337 images with VCFs collected from 138 subjects and 624 normal images collected from 337 subjects. The spondylolisthesis dataset comprised 413 images with spondylolisthesis collected from 336 subjects and 782 normal images collected from 413 subjects. Transformer-based models exhibited 0.97 Area Under the Receiver Operating Characteristic Curve (AUC) in VCF detection and 0.95 AUC in spondylolisthesis detection. Further, transformers demonstrated significant performance improvements over existing end-to-end approaches, by 4-14% AUC (p-values < 10⁻¹³) for VCF detection and by 14-20% AUC (p-values < 10⁻⁹) for spondylolisthesis detection.
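AUC, the headline metric in these detection results, has a simple rank interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one (ties count half). A small self-contained sketch; the labels and scores below are illustrative, not the study's data:

```python
def roc_auc(labels, scores):
    """Probability that a positive case outranks a negative one (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative scores: a classifier that ranks every disorder image above every normal one
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2]))  # 1.0
# One positive ranked below a negative: 3 of 4 pairs ordered correctly
print(roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.2]))  # 0.75
```

This pairwise formulation is exact but O(P×N); production code typically uses a sorted-rank (Mann-Whitney) computation instead.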
Affiliation(s)
- Diana Yeritsyan
- Beth Israel Deaconess Medical Center (BIDMC), Carl J. Shapiro Department of Orthopedic Surgery, Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, RN123, USA
- Edward K Rodriguez
- Beth Israel Deaconess Medical Center (BIDMC), Carl J. Shapiro Department of Orthopedic Surgery, Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, RN123, USA
- Jim S Wu
- Department of Radiology, Massachusetts General Brigham (MGB), Harvard Medical School, 75 Francis Street, Boston, MA, 02215, USA
- Ara Nazarian
- Beth Israel Deaconess Medical Center (BIDMC), Carl J. Shapiro Department of Orthopedic Surgery, Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, RN123, USA
- Department of Orthopaedics Surgery, Yerevan State University, 0025, Yerevan, Armenia
- Ashkan Vaziri
- BioSensics, LLC, 57 Chapel Street, Newton, MA, 02458, USA

4
Zaman A, Hassan H, Zeng X, Khan R, Lu J, Yang H, Miao X, Cao A, Yang Y, Huang B, Guo Y, Kang Y. Adaptive Feature Medical Segmentation Network: an adaptable deep learning paradigm for high-performance 3D brain lesion segmentation in medical imaging. Front Neurosci 2024; 18:1363930. [PMID: 38680446] [PMCID: PMC11047127] [DOI: 10.3389/fnins.2024.1363930] [Received: 12/31/2023] [Accepted: 03/04/2024] [Indexed: 05/01/2024] Open Access
Abstract
Introduction In neurological diagnostics, accurate detection and segmentation of brain lesions is crucial. Identifying these lesions is challenging due to their complex morphology, especially with traditional methods, which are either computationally demanding with only marginal gains or sacrifice fine detail for computational efficiency. Balancing performance and precision in compute-intensive medical imaging therefore remains an active research topic. Methods We introduce a novel encoder-decoder network architecture named the Adaptive Feature Medical Segmentation Network (AFMS-Net) with two encoder variants: the Single Adaptive Encoder Block (SAEB) and the Dual Adaptive Encoder Block (DAEB). A squeeze-and-excite mechanism is employed in the SAEB to identify significant features while disregarding peripheral details; this variant is best suited for scenarios requiring quick and efficient segmentation with an emphasis on identifying key lesion areas. In contrast, the DAEB utilizes an advanced channel-spatial attention strategy for fine-grained delineation and multi-class classification. Additionally, both architectures incorporate a Segmentation Path (SegPath) module between the encoder and decoder, refining segmentation, enhancing feature extraction, and improving model performance and stability. Results AFMS-Net demonstrates exceptional performance across several notable datasets, including BraTS 2021, ATLAS 2021, and ISLES 2022. Its design yields a lightweight architecture capable of handling complex segmentation challenges with high precision. Discussion The proposed AFMS-Net addresses the critical balance between performance and computational efficiency in brain lesion segmentation. By introducing two tailored encoder variants, the network adapts to varying requirements for speed and feature detail. This approach not only advances the state of the art in lesion segmentation but also provides a scalable framework for future research in medical image processing.
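The squeeze-and-excite mechanism used in the SAEB can be pictured in a few lines: each channel is "squeezed" to a single global average, a small bottleneck turns those averages into per-channel gates, and each channel map is rescaled by its gate, so busier channels are suppressed less. A toy pure-Python sketch with hypothetical 2x2 feature maps and hand-picked identity weights (a real implementation learns the bottleneck weights and operates on GPU tensors):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excite(channels, w1, w2):
    """channels: list of 2-D maps; w1/w2: bottleneck weight matrices (rows = outputs)."""
    # Squeeze: global average pool each channel to one scalar
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in channels]
    # Excite: tiny ReLU bottleneck followed by sigmoid gates, one gate per channel
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Reweight: scale every value in a channel by that channel's gate
    return [[[v * g for v in row] for row in ch] for g, ch in zip(gates, channels)]

maps = [[[1.0, 1.0], [1.0, 1.0]],   # low-activity channel
        [[2.0, 2.0], [2.0, 2.0]]]   # high-activity channel
identity = [[1.0, 0.0], [0.0, 1.0]]
out = squeeze_excite(maps, identity, identity)
# With these weights the busier channel receives the larger gate (sigmoid(2) > sigmoid(1))
```

The gating is the whole point: it lets the network emphasize informative channels, which is what the abstract means by "identifying significant data while disregarding peripheral details."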
Affiliation(s)
- Asim Zaman
- School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- Haseeb Hassan
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Xueqiang Zeng
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Rashid Khan
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Jiaxi Lu
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Huihui Yang
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Xiaoqiang Miao
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Anbo Cao
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Yingjian Yang
- Shenzhen Lanmage Medical Technology Co., Ltd, Shenzhen, China
- Bingding Huang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Yingwei Guo
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- School of Electrical and Information Engineering, Northeast Petroleum University, Daqing, China
- Yan Kang
- School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- School of Applied Technology, Shenzhen University, Shenzhen, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China

5
Yuan Y, Pan B, Mo H, Wu X, Long Z, Yang Z, Zhu J, Ming J, Qiu L, Sun Y, Yin S, Zhang F. Deep learning-based computer-aided diagnosis system for the automatic detection and classification of lateral cervical lymph nodes on original ultrasound images of papillary thyroid carcinoma: a prospective diagnostic study. Endocrine 2024:10.1007/s12020-024-03808-1. [PMID: 38570388] [DOI: 10.1007/s12020-024-03808-1] [Received: 11/25/2023] [Accepted: 03/26/2024] [Indexed: 04/05/2024]
Abstract
PURPOSE This study aims to develop a deep learning-based computer-aided diagnosis (CAD) system for the automatic detection and classification of lateral cervical lymph nodes (LNs) on original ultrasound images of papillary thyroid carcinoma (PTC) patients. METHODS A retrospective data set of 1801 cervical LN ultrasound images from 1675 patients with PTC and a prospective test set including 185 images from 160 patients were collected. Four different deep learning models were trained and validated on the retrospective data set. The best model was selected for CAD system development and compared with three sonographers in the retrospective and prospective test sets. RESULTS The Deformable Detection Transformer (DETR) model showed the highest diagnostic efficacy, with a mean average precision score of 86.3% in the retrospective test set, and was therefore used in constructing the CAD system. The detection performance of the CAD system was superior to that of the junior and intermediate sonographers, with accuracies of 86.3% and 92.4% in the retrospective and prospective test sets, respectively. The classification performance of the CAD system was better than that of all sonographers, with areas under the curve (AUCs) of 94.4% and 95.2% in the retrospective and prospective test sets, respectively. CONCLUSIONS This study developed a Deformable DETR-based CAD system for automatically detecting and classifying lateral cervical LNs on original ultrasound images, which showed excellent diagnostic efficacy and clinical utility. It can be an important tool for assisting sonographers in the diagnostic process.
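The detection metric behind a mean-average-precision score is grounded in intersection-over-union (IoU): a predicted lymph-node box counts as a hit only if its overlap with a ground-truth box exceeds a threshold (0.5 is the common default). A minimal sketch with illustrative boxes, not the study's annotations:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, truth = (0, 0, 2, 2), (1, 1, 3, 3)
print(round(iou(pred, truth), 3))  # 0.143 (1/7): too little overlap
print(iou(pred, truth) >= 0.5)     # False, so this prediction would not count as a detection
```

Average precision then sweeps the detector's confidence threshold, accumulating precision over recall for boxes matched by this IoU rule, and mAP averages that over classes.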
Affiliation(s)
- Yuquan Yuan
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Bin Pan
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Hongbiao Mo
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Xing Wu
- College of Computer Science, Chongqing University, Chongqing, China
- Zhaoxin Long
- College of Computer Science, Chongqing University, Chongqing, China
- Zeyu Yang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Junping Zhu
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Jing Ming
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Lin Qiu
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Yiceng Sun
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Supeng Yin
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China
- Fan Zhang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China

6
Binh LN, Nhu NT, Vy VPT, Son DLH, Hung TNK, Bach N, Huy HQ, Tuan LV, Le NQK, Kang JH. Multi-Class Deep Learning Model for Detecting Pediatric Distal Forearm Fractures Based on the AO/OTA Classification. J Imaging Inform Med 2024; 37:725-733. [PMID: 38308069] [PMCID: PMC11031555] [DOI: 10.1007/s10278-024-00968-4] [Received: 10/17/2023] [Revised: 12/19/2023] [Accepted: 01/11/2024] [Indexed: 02/04/2024]
Abstract
Common pediatric distal forearm fractures necessitate precise detection. To support prompt treatment planning by clinicians, our study aimed to create a multi-class convolutional neural network (CNN) model for pediatric distal forearm fractures, guided by the AO Foundation/Orthopaedic Trauma Association (AO/OTA) classification system for pediatric fractures. The GRAZPEDWRI-DX dataset (2008-2018) of wrist X-ray images was used. We labeled images into four fracture classes (FRM, FUM, FRE, and FUE, with F, fracture; R, radius; U, ulna; M, metaphysis; and E, epiphysis) based on the pediatric AO/OTA classification. We performed multi-class classification by training a YOLOv4-based CNN object detection model with 7006 images from 1809 patients (80% for training and 20% for validation). An 88-image test set from 34 patients was used to evaluate the model performance, which was then compared to the diagnostic performance of two readers: an orthopedist and a radiologist. The overall mean average precision levels on the validation set for the four classes were 0.97, 0.92, 0.95, and 0.94, respectively. On the test set, the model's performance included sensitivities of 0.86, 0.71, 0.88, and 0.89; specificities of 0.88, 0.94, 0.97, and 0.98; and area under the curve (AUC) values of 0.87, 0.83, 0.93, and 0.94, respectively. The best performance among the three readers belonged to the radiologist, with a mean AUC of 0.922, followed by our model (0.892) and the orthopedist (0.830). Therefore, using the AO/OTA concept, our multi-class fracture detection model excelled in identifying pediatric distal forearm fractures.
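Per-class sensitivities and specificities like those reported above come straight from a one-vs-rest confusion matrix. A small sketch; the counts below are made up for illustration, not taken from the study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical one-vs-rest counts for a single fracture class on a small test set
sens, spec = sensitivity_specificity(tp=19, fn=3, tn=58, fp=8)
print(round(sens, 2), round(spec, 2))  # 0.86 0.88
```

For a four-class model, each class is scored this way in turn, treating its images as positives and everything else as negatives.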
Affiliation(s)
- Le Nguyen Binh
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- AIBioMed Research Group, Taipei Medical University, Taipei, 11031, Taiwan
- Nguyen Thanh Nhu
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Faculty of Medicine, Can Tho University of Medicine and Pharmacy, Can Tho 94117, Can Tho, Vietnam
- Vu Pham Thao Vy
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Do Le Hoang Son
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- Nguyen Bach
- Department of Orthopedics, University Medical Center Ho Chi Minh City, 201 Nguyen Chi Thanh Street, District 5, Ho Chi Minh City, Vietnam
- Hoang Quoc Huy
- Department of Orthopedics, University Medical Center Ho Chi Minh City, 201 Nguyen Chi Thanh Street, District 5, Ho Chi Minh City, Vietnam
- Le Van Tuan
- Department of Orthopedics and Trauma, Cho Ray Hospital, Ho Chi Minh City, Vietnam
- Nguyen Quoc Khanh Le
- AIBioMed Research Group, Taipei Medical University, Taipei, 11031, Taiwan
- Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Jiunn-Horng Kang
- International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Physical Medicine and Rehabilitation, School of Medicine, College of Medicine, Taipei Medical University, Taipei, 11031, Taiwan
- Department of Physical Medicine and Rehabilitation, Taipei Medical University Hospital, Taipei, 11031, Taiwan
- Graduate Institute of Nanomedicine and Medical Engineering, College of Biomedical Engineering, Taipei Medical University, Xinyi District, No.250, Wuxing Street, Taipei, 11031, Taiwan

7
Chhillar I, Singh A. A feature engineering-based machine learning technique to detect and classify lung and colon cancer from histopathological images. Med Biol Eng Comput 2024; 62:913-924. [PMID: 38091162] [DOI: 10.1007/s11517-023-02984-y] [Received: 07/25/2023] [Accepted: 11/29/2023] [Indexed: 02/22/2024]
Abstract
Globally, lung and colon cancers are among the most prevalent and lethal tumors. Early cancer identification is essential to increase the likelihood of survival. Histopathological images are considered an appropriate tool for diagnosing cancer, but manual examination is tedious and error-prone. Recently, machine learning methods based on feature engineering have gained prominence in automatic histopathological image classification. Furthermore, these methods are more interpretable than deep learning, which operates in a "black box" manner; in the medical profession, the interpretability of a technique is critical to gaining the trust of end users. In view of the above, this work aims to create an accurate and interpretable machine-learning technique for the automated classification of lung and colon cancers from histopathology images. In the proposed approach, following the preprocessing steps, texture and color features are retrieved using the Haralick and color histogram feature extraction algorithms, respectively, and the obtained features are concatenated to form a single combined feature set. The three feature sets (texture, color, and combined features) are passed to the Light Gradient Boosting Machine (LightGBM) classifier, and their performance is evaluated on the LC25000 dataset using hold-out and stratified 10-fold cross-validation (stratified 10-FCV) techniques. With a hold-out test set, LightGBM with texture, color, and combined features classifies the lung and colon cancer images with 97.72%, 99.92%, and 100% accuracy, respectively. Stratified 10-fold cross-validation likewise showed that LightGBM's combined or color features performed well, with an excellent mean auc_mu score and a low mean multi_logloss value. This technique can therefore help histologists detect and classify lung and colon histopathology images more efficiently, effectively, and economically, resulting in greater productivity.
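Of the two hand-crafted descriptors, the color histogram is the simpler: each RGB channel is binned and the normalized bin counts are concatenated into a fixed-length feature vector (Haralick texture features are computed analogously from a gray-level co-occurrence matrix). A pure-Python sketch with a hypothetical bin count of 4 per channel; the paper does not state these parameters:

```python
def color_histogram(pixels, bins=4):
    """pixels: list of (r, g, b) tuples in 0..255 -> normalized feature vector of 3*bins values."""
    hist = [[0] * bins for _ in range(3)]
    for px in pixels:
        for channel, value in enumerate(px):
            # Map a 0..255 intensity to one of `bins` equal-width buckets
            hist[channel][min(value * bins // 256, bins - 1)] += 1
    n = len(pixels)
    # Normalize by pixel count and concatenate the three channel histograms
    return [count / n for channel in hist for count in channel]

# Two hypothetical pixels: pure black and pure white
features = color_histogram([(0, 0, 0), (255, 255, 255)])
print(len(features))              # 12 (3 channels x 4 bins)
print(features[0], features[3])   # 0.5 0.5: half the pixels in the lowest and highest red bins
```

Vectors like this, one per image, are what gets fed to the LightGBM classifier.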
Affiliation(s)
- Indu Chhillar
- Department of Computer Science and Engineering, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Haryana, India.
- Ajmer Singh
- Department of Computer Science and Engineering, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Haryana, India

8
Wang MX, Kim JK, Choi JW, Park D, Chang MC. Deep learning algorithm for automatically measuring Cobb angle in patients with idiopathic scoliosis. Eur Spine J 2024:10.1007/s00586-023-08024-5. [PMID: 38367024] [DOI: 10.1007/s00586-023-08024-5] [Received: 06/19/2023] [Revised: 08/30/2023] [Accepted: 10/25/2023] [Indexed: 02/19/2024]
Abstract
PURPOSE The Cobb angle is the standard measurement for quantifying and tracking the progression of scoliosis. However, it has high inter- and intra-observer variability: measurements vary with the vertebrae selected and may differ even when the same vertebra is measured. This study aimed to develop a deep learning model that automatically measures the Cobb angle by identifying vertebrae on spine radiographs. METHODS The dataset consisted of 297 images divided into two subsets: 227 images (76.4%) were used to train the model, while 70 images (23.6%) were used as the validation dataset. The absolute error between the measurements made by the observer and those of the developed deep learning model, as well as the intraclass correlation coefficient (ICC), were evaluated. RESULTS The average absolute error between the measurements was 1.97° with a standard deviation of 1.57°, and 95.9% of the angles had an absolute error of less than 5°. The ICC, calculated to further assess the model's reliability, was 0.981, indicating excellent reliability. CONCLUSIONS The authors believe the model will be useful in clinical practice by relieving clinicians of the burden of manually computing the Cobb angle. Further studies are needed to enhance the accuracy and versatility of this deep learning model.
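Once the vertebrae (and hence their endplate landmarks) are identified, the Cobb angle itself is elementary trigonometry: it is the angle between the superior endplate of the upper-end vertebra and the inferior endplate of the lower-end vertebra. A sketch with hypothetical landmark coordinates, not the study's data:

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Angle in degrees between two endplates, each given as two (x, y) landmarks."""
    def slope_deg(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    angle = abs(slope_deg(*upper_endplate) - slope_deg(*lower_endplate))
    return min(angle, 360.0 - angle)

# Hypothetical landmarks: a horizontal upper endplate and a ~20-degree tilted lower one
upper = ((100.0, 200.0), (160.0, 200.0))
lower = ((105.0, 420.0), (165.0, 441.8))
print(round(cobb_angle(upper, lower), 1))  # ~20.0
```

The hard part the paper's model solves is producing those landmarks reliably; the angle computation afterward is deterministic, which is exactly why automating landmark detection removes the inter-observer variability.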
Affiliation(s)
- Ming Xing Wang
- Department of Business Administration, School of Business, Yeungnam University, Gyeongsan-Si, Republic of Korea
- Jeoung Kun Kim
- Department of Business Administration, School of Business, Yeungnam University, Gyeongsan-Si, Republic of Korea
- Jin-Woo Choi
- Department of Physical Medicine and Rehabilitation, Ulsan University Hospital, University of Ulsan College of Medicine, Ulsan, Republic of Korea
- Donghwi Park
- Department of Rehabilitation Medicine, Daegu Fatima Hospital, Ayangro 99, Dong Gu, Daegu, 41199, Republic of Korea
- Min Cheol Chang
- Department of Physical Medicine and Rehabilitation, College of Medicine, Yeungnam University, 317-1, Daemyungdong, Namku, Daegu, 705-717, Republic of Korea

9
Park J, Cho H, Ji Y, Lee K, Yoon H. Detection of spondylosis deformans in thoracolumbar and lumbar lateral X-ray images of dogs using a deep learning network. Front Vet Sci 2024; 11:1334438. [PMID: 38425836] [PMCID: PMC10902442] [DOI: 10.3389/fvets.2024.1334438] [Received: 11/07/2023] [Accepted: 01/30/2024] [Indexed: 03/02/2024] Open Access
Abstract
Introduction Spondylosis deformans is a non-inflammatory osteophytic reaction that develops to re-establish the stability of weakened joints between intervertebral discs. However, assessing these changes using radiography is subjective and difficult. In human medicine, attempts have been made to use artificial intelligence to accurately diagnose difficult and ambiguous diseases in medical imaging. Deep learning, a form of artificial intelligence, is most commonly used in medical imaging data analysis. It is a technique that utilizes neural networks to self-learn and extract features from data to diagnose diseases. However, no deep learning model has been developed to detect vertebral diseases in canine thoracolumbar and lumbar lateral X-ray images. Therefore, this study aimed to establish a segmentation model that automatically recognizes the vertebral body and spondylosis deformans in the thoracolumbar and lumbar lateral radiographs of dogs. Methods A total of 265 thoracolumbar and lumbar lateral radiographic images from 162 dogs were used to develop and evaluate the deep learning model based on the attention U-Net algorithm to segment the vertebral body and detect spondylosis deformans. Results When comparing the ability of the deep learning model and veterinary clinicians to recognize spondylosis deformans in the test dataset, the kappa value was 0.839, indicating an almost perfect agreement. Conclusions The deep learning model developed in this study is expected to automatically detect spondylosis deformans on thoracolumbar and lumbar lateral radiographs of dogs, helping to quickly and accurately identify unstable intervertebral disc space sites. Furthermore, the segmentation model developed in this study is expected to be useful for developing models that automatically recognize various vertebral and disc diseases.
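The model-versus-clinician agreement above is reported as Cohen's kappa (0.839, "almost perfect" on the usual Landis-Koch scale). A minimal sketch of how that statistic is computed from paired binary ratings (the data below are invented, not the study's):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters giving binary labels (1 = lesion present)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # raw agreement
    pa, pb = sum(rater_a) / n, sum(rater_b) / n                    # positive rates
    expected = pa * pb + (1 - pa) * (1 - pb)                       # chance agreement
    return (observed - expected) / (1 - expected)

# Toy example: six intervertebral sites scored by a model and a clinician.
kappa = cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0])
```

Kappa corrects the raw agreement rate for agreement expected by chance, which is why it is preferred over simple accuracy for reader-agreement studies like this one.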
Affiliation(s)
- Junseol Park
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Biosafety Research Institute and College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Hyunwoo Cho
- Department of Electronic Engineering, Sogang University, Seoul, Republic of Korea
- Yewon Ji
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Kichang Lee
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Hakyoung Yoon
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea
- Biosafety Research Institute and College of Veterinary Medicine, Jeonbuk National University, Iksan, Republic of Korea

10
Gende M, Castelo L, de Moura J, Novo J, Ortega M. Intra- and Inter-expert Validation of an Automatic Segmentation Method for Fluid Regions Associated with Central Serous Chorioretinopathy in OCT Images. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:107-122. [PMID: 38343245 DOI: 10.1007/s10278-023-00926-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Revised: 10/16/2023] [Accepted: 10/16/2023] [Indexed: 03/02/2024]
Abstract
Central Serous Chorioretinopathy (CSC) is a retinal disorder caused by the accumulation of fluid, resulting in vision distortion. The diagnosis of this disease is typically performed through Optical Coherence Tomography (OCT) imaging, which displays any fluid buildup between the retinal layers. Currently, these fluid regions are detected manually by visual inspection, a time-consuming and subjective process that can be prone to errors. A series of six deep learning-based automatic segmentation architectural configurations of different levels of complexity were trained and compared in order to determine the best model for the automatic segmentation of CSC-related lesions in OCT images. The best-performing models were then evaluated in an external validation study, and an intra- and inter-expert analysis was conducted to compare the manual segmentations performed by expert ophthalmologists with the automatic segmentations provided by the models. The best-performing configuration achieved a mean Dice of 0.868 ± 0.056 on the internal test dataset. In the external validation set, these models reached a level of agreement with human experts of up to 0.960 in terms of the Kappa coefficient, compared with a value of 0.951 for agreement between the human experts themselves. Overall, the models agreed better with either human expert than the experts did with each other, suggesting that automatic segmentation models for detecting CSC-related lesions in OCT imaging can be useful tools for assessing this disease, reducing the workload of manual inspection and leading to a more robust and objective diagnosis method.
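The Dice coefficient used above is the standard overlap measure between a predicted and a reference segmentation mask. A minimal, library-free sketch (the toy masks are invented, not from the study):

```python
def dice(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) for flat binary masks (lists of 0/1)."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0  # both empty: perfect score

# Toy 1-D "masks": three fluid pixels predicted, two of them correct.
score = dice([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])  # 2*2 / (3+2)
```

Dice ranges from 0 (no overlap) to 1 (identical masks); in practice it is computed per image over the flattened 2-D or 3-D masks and then averaged over the test set, as in the 0.868 ± 0.056 figure above.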
Affiliation(s)
- Mateo Gende
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba 84, 15006 A Coruña, Spain
- Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
- Lúa Castelo
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba 84, 15006 A Coruña, Spain
- Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
- Joaquim de Moura
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba 84, 15006 A Coruña, Spain
- Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
- Jorge Novo
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba 84, 15006 A Coruña, Spain
- Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
- Marcos Ortega
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba 84, 15006 A Coruña, Spain
- Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain

11
Zeng J, Gao X, Gao L, Yu Y, Shen L, Pan X. Recognition of rare antinuclear antibody patterns based on a novel attention-based enhancement framework. Brief Bioinform 2024; 25:bbad531. [PMID: 38279651 PMCID: PMC10818137 DOI: 10.1093/bib/bbad531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2023] [Revised: 12/17/2023] [Accepted: 12/19/2023] [Indexed: 01/28/2024] Open
Abstract
Antinuclear antibody (ANA) pattern recognition is a widely applied technology for routine ANA screening in clinical laboratories. In recent years, the application of deep learning methods to recognizing ANA patterns has witnessed remarkable advancements. However, the majority of studies in this field have focused on classifying the most common ANA patterns, while another subset has concentrated on detecting mitotic metaphase cells. To date, no prior research has been specifically dedicated to identifying rare ANA patterns. In the present paper, we introduce a novel attention-based enhancement framework designed for recognizing rare ANA patterns in ANA indirect immunofluorescence images. More specifically, we selected the best-performing algorithm as our target detection network by conducting comparative experiments, and further enhanced the chosen algorithm through a series of optimizations. An attention mechanism was then introduced to help the network expedite learning and extract more essential and distinctive features of the specific patterns. The proposed approach achieved a precision of 86.40%, a recall of 82.75%, an F1 score of 84.24%, and a mean average precision of 84.64% on a 9-category rare ANA pattern detection task on our dataset. Finally, we evaluated the model's potential as a medical technologist assistant and observed that technologists' performance improved after referring to the model's predictions. These promising results highlight its potential as an efficient and reliable tool to assist medical technologists in their clinical practice.
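For context, the precision, recall, and F1 figures reported above follow the standard definitions over true-positive, false-positive, and false-negative counts. A small sketch with invented counts (the study's per-class counts are not public here):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true-positive, false-positive,
    and false-negative counts (per class or pooled)."""
    precision = tp / (tp + fp)                          # of flagged, how many real
    recall = tp / (tp + fn)                             # of real, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Invented counts for one rare pattern: 80 hits, 20 false alarms, 20 misses.
p, r, f1 = precision_recall_f1(80, 20, 20)
```

Note that the F1 reported in multi-class detection work is typically averaged over classes, so it need not equal the harmonic mean of the averaged precision and recall exactly.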
Affiliation(s)
- Junxiang Zeng
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Faculty of Medical Laboratory Science, College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute of Artificial Intelligence Medicine, Shanghai Academy of Experimental Medicine, Shanghai, China
- Xiupan Gao
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Limei Gao
- Department of Immunology and Rheumatology, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Youyou Yu
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Lisong Shen
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Faculty of Medical Laboratory Science, College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute of Artificial Intelligence Medicine, Shanghai Academy of Experimental Medicine, Shanghai, China
- Xiujun Pan
- Department of Clinical Laboratory, Xinhua Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China

12
Yang J, Huang J, Han D, Ma X. Artificial Intelligence Applications in the Treatment of Colorectal Cancer: A Narrative Review. Clin Med Insights Oncol 2024; 18:11795549231220320. [PMID: 38187459 PMCID: PMC10771756 DOI: 10.1177/11795549231220320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2023] [Accepted: 11/26/2023] [Indexed: 01/09/2024] Open
Abstract
Colorectal cancer is the third most prevalent cancer worldwide, and its treatment has been a demanding clinical problem. Beyond traditional surgical therapy and chemotherapy, newly revealed molecular mechanisms diversify therapeutic approaches for colorectal cancer. However, the selection of personalized treatment among multiple treatment options has become another challenge in the era of precision medicine. Artificial intelligence has recently been increasingly investigated in the treatment of colorectal cancer. This narrative review mainly discusses the applications of artificial intelligence in the treatment of colorectal cancer patients. A comprehensive literature search was conducted in MEDLINE, EMBASE, and Web of Science to identify relevant papers, resulting in 49 articles being included. The results showed that, based on different categories of data, artificial intelligence can predict treatment outcomes and essential guidance information of traditional and novel therapies, thus enabling individualized treatment strategy selection for colorectal cancer patients. Some frequently implemented machine learning algorithms and deep learning frameworks have also been employed for long-term prognosis prediction in patients with colorectal cancer. Overall, artificial intelligence shows encouraging results in treatment strategy selection and prognosis evaluation for colorectal cancer patients.
Affiliation(s)
- Jiaqing Yang
- Department of Biotherapy, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, Chengdu, China
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Jing Huang
- Department of Ultrasound, West China Hospital, Sichuan University, Chengdu, China
- Deqian Han
- Department of Oncology, West China School of Public Health and West China Fourth Hospital, Sichuan University, Chengdu, China
- Xuelei Ma
- Department of Biotherapy, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, Chengdu, China

13
Kim JK, Chang MC, Park WT, Lee GW. Identification of L5 vertebra on lumbar spine radiographs using deep learning. J Int Med Res 2024; 52:3000605231223881. [PMID: 38206194 PMCID: PMC10785730 DOI: 10.1177/03000605231223881] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Accepted: 12/14/2023] [Indexed: 01/12/2024] Open
Abstract
OBJECTIVE Deep learning is an advanced machine-learning approach that is used in several medical fields. Here, we developed a deep learning model using an object detection algorithm to identify the L5 vertebra on anteroposterior lumbar spine radiographs, and assessed its detection accuracy. METHODS We retrospectively recruited 150 participants for whom both anteroposterior whole-spine and lumbar spine radiographs were available. The anteroposterior lumbar spine radiographs of these patients were used as the input data. Of the 150 images, 105 (70%) were randomly selected as the training set, and the remaining 45 (30%) were assigned to the validation set. YOLOv5x, a member of the YOLOv5 model family, was used to detect the L5 vertebral area. RESULTS The mean average precisions of the trained L5 detection model at IoU thresholds of 0.5 and 0.75 (mAP@0.5 and mAP@0.75) were 99.2% and 96.9%, respectively. The model's precision was 95.7% and its recall was 97.8%. Furthermore, 93.3% of the validation data were correctly detected. CONCLUSION Our deep learning model showed an outstanding ability to identify the L5 vertebra.
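The mAP figures above are computed at fixed intersection-over-union (IoU) thresholds between predicted and ground-truth boxes; a detection only counts as a hit when its IoU exceeds the threshold. A minimal IoU sketch (the boxes are invented for illustration):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes, each given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero when disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted L5 box shifted right by half its width against the ground truth:
iou = box_iou((0, 0, 10, 10), (5, 0, 15, 10))  # overlap 50, union 150
hit_at_05 = iou >= 0.5                         # would not count at mAP@0.5
```

mAP then averages the area under the precision-recall curve over classes (here a single class, L5) at the chosen IoU threshold.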
Affiliation(s)
- Jeoung Kun Kim
- Department of Business Administration, School of Business, Yeungnam University, Gyeongsan-si, Republic of Korea
- Min Cheol Chang
- Department of Physical Medicine and Rehabilitation, Yeungnam University College of Medicine, Daegu, Republic of Korea
- Wook Tae Park
- Department of Orthopaedic Surgery, Yeungnam University College of Medicine, Daegu, Republic of Korea
- Gun Woo Lee
- Department of Orthopaedic Surgery, Yeungnam University College of Medicine, Daegu, Republic of Korea

14
Tatar OC, Akay MA, Metin S. DraiNet: AI-driven decision support in pneumothorax and pleural effusion management. Pediatr Surg Int 2023; 40:30. [PMID: 38151565 DOI: 10.1007/s00383-023-05609-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 11/24/2023] [Indexed: 12/29/2023]
Abstract
OBJECTIVE This study presents DraiNet, a deep learning model developed to detect pneumothorax and pleural effusion in pediatric patients and aid in assessing the necessity for tube thoracostomy. The primary goal is to utilize DraiNet as a decision support tool to enhance clinical decision-making in the management of these conditions. METHODS DraiNet was trained on a diverse dataset of pediatric CT scans, carefully annotated by experienced surgeons. The model incorporated advanced object detection techniques and underwent evaluation using standard metrics, such as mean Average Precision (mAP), to assess its performance. RESULTS DraiNet achieved an impressive mAP score of 0.964, demonstrating high accuracy in detecting and precisely localizing abnormalities associated with pneumothorax and pleural effusion. The model's precision and recall further confirmed its ability to effectively predict positive cases. CONCLUSION The integration of DraiNet as an AI-driven decision support system marks a significant advancement in pediatric healthcare. By combining deep learning algorithms with clinical expertise, DraiNet provides a valuable tool for non-surgical teams and emergency room doctors, aiding them in making informed decisions about surgical interventions. With its remarkable mAP score of 0.964, DraiNet has the potential to enhance patient outcomes and optimize the management of critical conditions, including pneumothorax and pleural effusion.
Affiliation(s)
- Ozan Can Tatar
- Department of General Surgery, School of Medicine, Kocaeli University, 41000, Kocaeli, Turkey
- Information Systems Engineering, Faculty of Technology, Kocaeli University, Kocaeli, Turkey
- Mustafa Alper Akay
- Department of Pediatric Surgery, School of Medicine, Kocaeli University, Kocaeli, Turkey
- Semih Metin
- Department of Pediatric Surgery, School of Medicine, Kocaeli University, Kocaeli, Turkey

15
Goh GL, Goh GD, Pan JW, Teng PSP, Kong PW. Automated Service Height Fault Detection Using Computer Vision and Machine Learning for Badminton Matches. SENSORS (BASEL, SWITZERLAND) 2023; 23:9759. [PMID: 38139605 PMCID: PMC10747833 DOI: 10.3390/s23249759] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Revised: 12/05/2023] [Accepted: 12/09/2023] [Indexed: 12/24/2023]
Abstract
In badminton, accurate service height detection is critical for ensuring fairness. We developed an automated service fault detection system that employs computer vision and machine learning, specifically the YOLOv5 object detection model. Comprising two cameras and a workstation, our system identifies elements such as shuttlecocks, rackets, players, and players' shoes. We developed an algorithm that pinpoints the shuttlecock hitting event to capture its height information. To assess the accuracy of the new system, we benchmarked the results against a high-sample-rate motion capture system and conducted a comparative analysis with eight human judges who used a fixed-height service tool in a backhand low service situation. Our findings revealed a substantial enhancement in accuracy compared with human judgement; the system outperformed the human judges by a factor of 3.5, achieving a 58% accuracy rate for detecting service heights between 1.150 and 1.155 m, as opposed to a 16% accuracy rate for the humans. The system we have developed offers a highly reliable solution, substantially enhancing the consistency and accuracy of service judgement calls in badminton matches and ensuring fairness in the sport. Its development signifies a meaningful step towards leveraging technology for precision and integrity in sports officiation.
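The decision logic can be caricatured as a threshold test at the hit instant. The sketch below is a drastically simplified stand-in for the paper's pipeline (no computer vision; the trajectory values are invented): it approximates the hit as the frame where the shuttle's descent reverses, then compares that height with the 1.15 m service limit.

```python
SERVICE_LIMIT_M = 1.15  # fixed service height under current badminton rules

def service_fault(heights):
    """heights: shuttle height (m) per video frame around the serve.
    Approximates the hit as the local minimum where descent turns to ascent,
    then flags a fault if the shuttle was struck above the legal limit."""
    for i in range(1, len(heights) - 1):
        if heights[i] <= heights[i - 1] and heights[i] < heights[i + 1]:
            return heights[i] > SERVICE_LIMIT_M, heights[i]
    return False, None  # no hit event found in this window

# Invented trajectory: shuttle drops to 1.20 m, is struck, then rises.
fault, hit_height = service_fault([1.40, 1.30, 1.20, 1.32, 1.45])
```

The actual system instead localizes the shuttlecock and racket with YOLOv5 in synchronized camera views and recovers a metric height at the detected contact event, which is where the reported accuracy figures come from.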
Affiliation(s)
- Guo Liang Goh
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Guo Dong Goh
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Jing Wen Pan
- Physical Education and Sports Science Academic Group, National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616, Singapore
- Rehabilitation Research Institute of Singapore, Nanyang Technological University, 11 Mandalay Road, Singapore 308232, Singapore
- Phillis Soek Po Teng
- Physical Education and Sports Science Academic Group, National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616, Singapore
- Pui Wah Kong
- Physical Education and Sports Science Academic Group, National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 637616, Singapore

16
Kim H, Jeon YD, Park KB, Cha H, Kim MS, You J, Lee SW, Shin SH, Chung YG, Kang SB, Jang WS, Yoon DK. Automatic segmentation of inconstant fractured fragments for tibia/fibula from CT images using deep learning. Sci Rep 2023; 13:20431. [PMID: 37993627 PMCID: PMC10665312 DOI: 10.1038/s41598-023-47706-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Accepted: 11/17/2023] [Indexed: 11/24/2023] Open
Abstract
Orthopaedic surgeons need to correctly identify bone fragments using 2D/3D CT images before trauma surgery, and advances in deep learning technology offer advantages over manual diagnosis for this task. This study demonstrates the application of a DeepLab v3+-based deep learning model for the automatic segmentation of fragments of the fractured tibia and fibula from CT images, and reports an evaluation of the automatic segmentation's performance. The deep learning model, which was trained using over 11 million images, showed good performance, with a global accuracy of 98.92%, a weighted intersection over union of 0.9841, and a mean boundary F1 score of 0.8921. Moreover, the model performed 5-8 times faster than the comparatively inefficient manual recognition by experts, while achieving nearly equivalent results. This study will play an important role in making preoperative surgical planning for trauma surgery convenient and fast.
Affiliation(s)
- Hyeonjoo Kim
- Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Young Dae Jeon
- Department of Orthopedic Surgery, University of Ulsan College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea
- Ki Bong Park
- Department of Orthopedic Surgery, University of Ulsan College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea
- Hayeong Cha
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Moo-Sub Kim
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Juyeon You
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Se-Won Lee
- Department of Orthopedic Surgery, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seung-Han Shin
- Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yang-Guk Chung
- Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Sung Bin Kang
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Won Seuk Jang
- Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea
- Do-Kun Yoon
- Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea

17
Liu H, Zhao Y, Wang M, Ma M, Chen Z. Activation extending based on long-range dependencies for weakly supervised semantic segmentation. PLoS One 2023; 18:e0288596. [PMID: 37988337 PMCID: PMC10662704 DOI: 10.1371/journal.pone.0288596] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Accepted: 06/28/2023] [Indexed: 11/23/2023] Open
Abstract
Weakly supervised semantic segmentation (WSSS) typically obtains pseudo-labels from class activation maps (CAM) to avoid expensive annotation. However, CAM is prone to false and merely local activation owing to the lack of annotation information. This paper treats weakly supervised learning as semantic information mining to extend object masks, and proposes a novel architecture that mines semantic information by modeling long-range dependencies both within a sample and across samples. Because long-range dependencies can introduce confusion, the images are divided into blocks and self-attention is performed over fewer classes per block to capture long-range dependencies while reducing false predictions. Moreover, global-to-local weighted self-supervised contrastive learning is performed among image blocks, transferring the local activation of CAM to different foreground areas. Experiments verified that the suggested modules capture superior semantic details and more reliable pseudo-labels. On PASCAL VOC 2012, the proposed model achieves 76.6% and 77.4% mIoU on the val and test sets, respectively, which is superior to the comparison baselines.
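The mIoU reported above is the per-class intersection-over-union averaged over classes, conventionally computed from a pixel-level confusion matrix. A compact sketch with a toy 2-class matrix (values invented):

```python
def mean_iou(confusion):
    """confusion[i][j] = count of pixels with true class i predicted as class j."""
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # predicted c, wrong
        fn = sum(confusion[c]) - tp                       # true c, missed
        denom = tp + fp + fn
        if denom:  # skip classes absent from both prediction and ground truth
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# Toy 2-class case (background vs. object); pixel counts are invented.
miou = mean_iou([[3, 1],
                 [1, 5]])
```

On PASCAL VOC 2012 the matrix is 21×21 (20 object classes plus background), accumulated over the whole evaluation set before averaging.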
Affiliation(s)
- Haipeng Liu
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- Yibo Zhao
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- Meng Wang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming, China
- Meiyan Ma
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- Zhaoyu Chen
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China

18
Wei HL, Wei C, Feng Y, Yan W, Yu YS, Chen YC, Yin X, Li J, Zhang H. Predicting the efficacy of non-steroidal anti-inflammatory drugs in migraine using deep learning and three-dimensional T1-weighted images. iScience 2023; 26:108107. [PMID: 37867961 PMCID: PMC10585394 DOI: 10.1016/j.isci.2023.108107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 07/19/2023] [Accepted: 09/27/2023] [Indexed: 10/24/2023] Open
Abstract
Deep learning (DL) models based on individual images could contribute to tailored therapies and personalized treatment strategies. We aimed to construct a DL model using individual 3D structural images for predicting the efficacy of non-steroidal anti-inflammatory drugs (NSAIDs) in migraine. A 3D convolutional neural network model was constructed, with ResNet18 as the classification backbone, to link structural images to predict the efficacy of NSAIDs. In total, 111 patients were included and allocated to the training and testing sets in a 4:1 ratio. The prediction accuracies of the ResNet34, ResNet50, ResNeXt50, DenseNet121, and 3D ResNet18 models were 0.65, 0.74, 0.65, 0.70, and 0.78, respectively. This model, based on individual 3D structural images, demonstrated better predictive performance in comparison to conventional models. Our study highlights the feasibility of the DL algorithm based on brain structural images and suggests that it can be applied to predict the efficacy of NSAIDs in migraine treatment.
Affiliation(s)
- Heng-Le Wei
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China
- Cunsheng Wei
- Department of Neurology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China
- Yibo Feng
- Infervision Medical Technology Co., Ltd, Beijing, China
- Wanying Yan
- Infervision Medical Technology Co., Ltd, Beijing, China
- Yu-Sheng Yu
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China
- Yu-Chen Chen
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, Jiangsu 210006, China
- Xindao Yin
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, Jiangsu 210006, China
- Junrong Li
- Department of Neurology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China
- Hong Zhang
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China

19
Hossain MSA, Gul S, Chowdhury MEH, Khan MS, Sumon MSI, Bhuiyan EH, Khandakar A, Hossain M, Sadique A, Al-Hashimi I, Ayari MA, Mahmud S, Alqahtani A. Deep Learning Framework for Liver Segmentation from T1-Weighted MRI Images. SENSORS (BASEL, SWITZERLAND) 2023; 23:8890. [PMID: 37960589 PMCID: PMC10650219 DOI: 10.3390/s23218890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 08/08/2023] [Accepted: 08/15/2023] [Indexed: 11/15/2023]
Abstract
The human liver exhibits variable characteristics and anatomical information, which is often ambiguous in radiological images. Machine learning can be of great assistance in automatically segmenting the liver in radiological images, which can be further processed for computer-aided diagnosis. Magnetic resonance imaging (MRI) is preferred by clinicians for liver pathology diagnosis over volumetric abdominal computerized tomography (CT) scans, due to its superior representation of soft tissues. The convenience of Hounsfield unit (HU)-based preprocessing in CT scans is not available in MRI, making automatic segmentation challenging for MR images. This study investigates multiple state-of-the-art segmentation networks for liver segmentation from volumetric MRI images. Here, T1-weighted (in-phase) scans are investigated using expert-labeled liver masks from a public dataset of 20 patients (647 MR slices) from the Combined Healthy Abdominal Organ Segmentation (CHAOS) grand challenge. T1-weighted images were chosen because they demonstrate brighter fat content, providing enhanced images for the segmentation task. Twenty-four different state-of-the-art segmentation networks with varying depths of dense, residual, and inception encoder and decoder backbones were investigated for the task. A novel cascaded network is proposed to segment axial liver slices. The proposed framework outperforms existing approaches reported in the literature for the liver segmentation task (on the same test set), with a Dice similarity coefficient (DSC) of 95.15% and an intersection over union (IoU) of 92.10%.
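For a single segmentation, the DSC and IoU used above are deterministically related (DSC = 2·IoU / (1 + IoU)), although set-level averages, such as the 95.15%/92.10% pair reported here, need not satisfy the identity exactly. A two-function sketch of the per-image conversion:

```python
def iou_from_dsc(dsc):
    """Per-image identity: IoU = DSC / (2 - DSC)."""
    return dsc / (2.0 - dsc)

def dsc_from_iou(iou):
    """Inverse identity: DSC = 2*IoU / (1 + IoU)."""
    return 2.0 * iou / (1.0 + iou)

# Round-tripping a score recovers it (up to floating-point error),
# so per image only one of the two metrics carries information.
roundtrip = dsc_from_iou(iou_from_dsc(0.95))
```

This is why papers usually report both metrics only as dataset averages: averaging DSC and averaging IoU weight hard cases differently, so the pair together says more than either alone.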
Collapse
Affiliation(s)
- Md. Sakib Abrar Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Sidra Gul
- Department of Computer Systems Engineering, University of Engineering and Technology Peshawar, Peshawar 25000, Pakistan
- Artificial Intelligence in Healthcare, IIPL, National Center of Artificial Intelligence, Peshawar 25000, Pakistan
- Enamul Haque Bhuiyan
- Center for Magnetic Resonance Research, University of Illinois Chicago, Chicago, IL 60607, USA
- Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Maqsud Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Abdus Sadique
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Sakib Mahmud
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Abdulrahman Alqahtani
- Department of Medical Equipment Technology, College of Applied Medical Science, Majmaah University, Majmaah City 11952, Saudi Arabia
- Department of Biomedical Technology, College of Applied Medical Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
20
Zhang C, Li M, Luo Z, Xiao R, Li B, Shi J, Zeng C, Sun B, Xu X, Yang H. Deep learning-driven MRI trigeminal nerve segmentation with SEVB-net. Front Neurosci 2023; 17:1265032. [PMID: 37920295 PMCID: PMC10618361 DOI: 10.3389/fnins.2023.1265032]
Abstract
Purpose Trigeminal neuralgia (TN) poses significant challenges in diagnosis and treatment due to the extreme pain it causes. Magnetic resonance imaging (MRI) plays a crucial role in diagnosing TN and understanding its pathogenesis. Manual delineation of the trigeminal nerve in volumetric images is time-consuming and subjective. This study introduces the Squeeze and Excitation with BottleNeck V-Net (SEVB-Net), a novel approach for the automatic segmentation of the trigeminal nerve in three-dimensional T2 MRI volumes. Methods We enrolled 88 patients with trigeminal neuralgia and 99 healthy volunteers, dividing them into training and testing groups. SEVB-Net was designed for end-to-end training, taking three-dimensional T2 images as input and producing a segmentation volume of the same size. We assessed the performance of the basic V-Net, nnUNet, and SEVB-Net models by calculating the Dice similarity coefficient (DSC), sensitivity, precision, and network complexity. Additionally, we used the Mann-Whitney U test to compare the time required for manual segmentation with that for automatic segmentation followed by manual modification. Results In the testing group, the proposed method achieved state-of-the-art performance. SEVB-Net combined with the ωDoubleLoss loss function achieved a DSC ranging from 0.6070 to 0.7923. Both SEVB-Net with ωDoubleLoss and nnUNet with DoubleLoss achieved DSC, sensitivity, and precision values exceeding 0.7. However, SEVB-Net substantially reduced the number of parameters (2.20 M), memory consumption (11.41 MB), and model size (17.02 MB), resulting in improved computation and forward time compared with nnUNet. The difference in average time between manual segmentation and automatic segmentation with manual modification was statistically significant for both radiologists (p < 0.001).
Conclusion The experimental results demonstrate that the proposed method can automatically segment the root and three main branches of the trigeminal nerve in three-dimensional T2 images. Compared with the basic V-Net model, SEVB-Net showed improved segmentation performance, reaching a level similar to nnUNet. The segmentation volumes of both SEVB-Net and nnUNet aligned with expert annotations, but SEVB-Net was considerably more lightweight.
Affiliation(s)
- Chuan Zhang
- The First Affiliated Hospital, Jinan University, Guangzhou, China
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Man Li
- Shanghai United Imaging Intelligence, Co., Ltd., Shanghai, China
- Zheng Luo
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Ruhui Xiao
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Bing Li
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Jing Shi
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Chen Zeng
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- BaiJinTao Sun
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Xiaoxue Xu
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Hanfeng Yang
- The First Affiliated Hospital, Jinan University, Guangzhou, China
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
21
Lupariello F, Sussetto L, Di Trani S, Di Vella G. Artificial Intelligence and Child Abuse and Neglect: A Systematic Review. Children (Basel) 2023; 10:1659. [PMID: 37892322 PMCID: PMC10605696 DOI: 10.3390/children10101659]
Abstract
All societies should carefully address the child abuse and neglect phenomenon due to its acute and chronic sequelae. Although artificial intelligence (AI) could be helpful in this field, the state of the art of its implementation is not known: no studies have comprehensively reviewed the types of AI models that have been developed/validated, and no indications about the risk of bias in these studies are available. For these reasons, the authors conducted a systematic review of the PubMed database to answer the following questions: "what is the state of the art of the development and/or validation of AI predictive models useful for countering the child abuse and neglect phenomenon?"; "what is the risk of bias of the included articles?". The inclusion criteria were: articles written in English and dated from January 1985 to 31 March 2023; publications that used a medical and/or protective service dataset to develop and/or validate AI prediction models. The reviewers screened 413 articles, of which seven were included. Their analysis showed that: the types of input data were heterogeneous; artificial neural networks, convolutional neural networks, and natural language processing were used; the datasets had a median size of 2600 cases; and the risk of bias was high for all studies. The results of the review indicate that the implementation of AI in the child abuse and neglect field lags behind that in other medical fields. Furthermore, the evaluation of the risk of bias suggests that future studies should provide an appropriate sample size, validation, and management of overfitting, optimism, and missing data.
Affiliation(s)
- Francesco Lupariello
- Dipartimento di Scienze della Sanità Pubblica e Pediatriche, Sezione di Medicina Legale, Università degli Studi di Torino, 10126 Torino, Italy
22
Sakashita S, Sakamoto N, Kojima M, Taki T, Miyazaki S, Minakata N, Sasabe M, Kinoshita T, Ishii G, Ochiai A. Requirement of image standardization for AI-based macroscopic diagnosis for surgical specimens of gastric cancer. J Cancer Res Clin Oncol 2023; 149:6467-6477. [PMID: 36773090 DOI: 10.1007/s00432-022-04570-5]
Abstract
PURPOSE The pathological diagnosis of surgically resected gastric cancer involves both a macroscopic diagnosis by gross observation and a microscopic diagnosis by microscopy. Macroscopic diagnosis determines the location and stage of the disease, the involvement of other organs, and the surgical margins. Lesion recognition is, thus, an important diagnostic step that requires a skilled pathologist. Nonetheless, artificial intelligence (AI) technologies could allow even inexperienced doctors and laboratory technicians to examine surgically resected specimens without the need for pathologists. However, organ imaging conditions vary across hospitals, and an AI algorithm created in one setting may not work properly in another. Thus, we identified and standardized factors affecting the quality of pathological macroscopic images, which could in turn affect lesion identification by AI. METHODS We examined the image standardization necessary for developing cancer detection AI for surgically resected gastric cancer by varying the following imaging conditions: focus, resolution, brightness, and contrast. RESULTS For focus, brightness, and contrast, the farther the test data deviated from the training macro-images, the less likely the inference was to be correct. Little change was observed for resolution, even when conditions differed between the training and test data. For focus, brightness, and contrast, there were imaging conditions best suited to the AI; the optimal contrast, in particular, was far from the conditions appropriate for human observers. CONCLUSION Standardizing focus, brightness, and contrast is important in the development of AI methodologies for lesion detection in surgically resected gastric cancer. This standardization is essential for AI to be implemented across hospitals.
Affiliation(s)
- Shingo Sakashita
- Division of Pathology, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center, Kashiwa, Chiba, Japan.
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan.
- Naoya Sakamoto
- Division of Pathology, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center, Kashiwa, Chiba, Japan
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Motohiro Kojima
- Division of Pathology, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center, Kashiwa, Chiba, Japan
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Tetsuro Taki
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Saori Miyazaki
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Nobuhisa Minakata
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Maasa Sasabe
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Takahiro Kinoshita
- Department of Gastric Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Genichiro Ishii
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Atsushi Ochiai
- National Cancer Center, Kashiwa, Chiba, Japan
- Research Institute for Biomedical Sciences, Tokyo University of Science, Noda, Chiba, Japan
23
Johnson DR, Vaidhyanathan RU. Detection and localization of multi-scale and oriented objects using an enhanced feature refinement algorithm. Math Biosci Eng 2023; 20:15219-15243. [PMID: 37679178 DOI: 10.3934/mbe.2023681]
Abstract
Object detection is a fundamental aspect of computer vision, and numerous generic object detectors have been proposed. The proposed work presents a novel single-stage rotation detector that can accurately detect oriented and multi-scale objects in diverse scenarios. This detector addresses the challenges faced by current rotation detectors, such as detecting arbitrary orientations and densely arranged objects, and the issue of loss discontinuity. First, the detector adopts a progressive, coarse-to-fine regression scheme that uses both horizontal anchors (for speed and higher recall) and rotating anchors (for oriented objects) in cluttered backgrounds. Second, the proposed detector includes a feature refinement module that helps minimize problems related to feature angulation and reduces the number of bounding boxes generated. Finally, to address the issue of loss discontinuity, the proposed detector utilizes a newly formulated adjustable loss function that can be extended to both single-stage and two-stage detectors. The proposed detector shows outstanding performance on benchmark datasets and significantly outperforms other state-of-the-art methods in terms of speed and accuracy.
Affiliation(s)
- Deepika Roselind Johnson
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamilnadu, India
24
Yang R, Du Y, Kwan W, Yan R, Shi Q, Zang L, Zhu Z, Zhang J, Li C, Yu Y. A quick and reliable image-based AI algorithm for evaluating cellular senescence of gastric organoids. Cancer Biol Med 2023; 20:j.issn.2095-3941.2023.0099. [PMID: 37417294 PMCID: PMC10466441 DOI: 10.20892/j.issn.2095-3941.2023.0099]
Abstract
OBJECTIVE Organoids are a powerful tool with broad application prospects in biomedicine. Notably, they provide alternatives to animal models for testing potential drugs before clinical trials. However, the number of passages for which organoids maintain cellular vitality ex vivo remains unclear. METHODS Herein, we constructed 55 gastric organoids from 35 individuals, serially passaged the organoids, and captured microscopic images for phenotypic evaluation. Senescence-associated β-galactosidase (SA-β-Gal), cell diameter in suspension, and the expression of genes involved in cell cycle regulation were examined. The YOLOv3 object detection algorithm integrated with a convolutional block attention module (CBAM) was used to evaluate organoid vitality. RESULTS SA-β-Gal staining intensity; single-cell diameter; and the expression of p15, p16, p21, CCNA2, CCNE2, and LMNB1 reflected the progression of aging in organoids during passaging. The CBAM-YOLOv3 algorithm precisely evaluated aging organoids on the basis of average organoid diameter, organoid number, and number × diameter, and the findings correlated positively with SA-β-Gal staining and single-cell diameter. Organoids derived from normal gastric mucosa had limited passaging ability (passages 1-5) before aging, whereas tumor organoids showed unlimited passaging potential, exceeding 45 passages (511 days) without clear senescence. CONCLUSIONS Given the lack of indicators for evaluating organoid growth status, we established a reliable approach for integrated analysis of phenotypic parameters that uses an artificial intelligence algorithm to indicate organoid vitality. This method enables precise evaluation of organoid status in biomedical studies and monitoring of living biobanks.
Affiliation(s)
- Ruixin Yang
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
- Yutong Du
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
- Wingyan Kwan
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
- Ranlin Yan
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
- Qimeng Shi
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
- Lu Zang
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
- Zhenggang Zhu
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
- Jianming Zhang
- Institute of Translational Medicine, Zhangjiang Institute for Advanced Study, Shanghai Jiao Tong University, Shanghai 201210, China
- Chen Li
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
- Yingyan Yu
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiaotong University School of Medicine, Shanghai 200025, China
25
He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. [PMID: 37546423 PMCID: PMC10400334 DOI: 10.3389/fonc.2023.1189370]
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to the detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems offer automatic operation, rapid processing, and high accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. Thus, they have become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making it understandable not only for radiologists but also for general physicians without specialized imaging interpretation training. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become a key method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
- Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Liqun Zhang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
26
Su X, Liu Q, Gao X, Ma L. Evaluation of deep learning methods for early gastric cancer detection using gastroscopic images. Technol Health Care 2023; 31:313-322. [PMID: 37066932 DOI: 10.3233/thc-236027]
Abstract
BACKGROUND A timely diagnosis of early gastric cancer (EGC) can greatly reduce the death rate of patients. However, manual detection of EGC is a costly and low-accuracy task. Artificial intelligence (AI) methods based on deep learning are considered a potential means of detecting EGC, and have outperformed endoscopists in EGC detection, especially with the region-based convolutional neural network (RCNN) models reported recently. However, no studies have compared the performances of different RCNN-series models. OBJECTIVE This study aimed to compare the performances of different RCNN-series models for EGC detection. METHODS Three typical RCNN models were used to detect gastric cancer in 3659 gastroscopic images, including 1434 images of EGC: Faster RCNN, Cascade RCNN, and Mask RCNN. RESULTS The models were evaluated in terms of specificity, accuracy, precision, recall, and average precision (AP). Faster RCNN, Cascade RCNN, and Mask RCNN had similar accuracy (0.935, 0.938, and 0.935). The specificity of Cascade RCNN was 0.946, slightly higher than the 0.908 of Faster RCNN and the 0.908 of Mask RCNN. CONCLUSION Faster RCNN and Mask RCNN place more emphasis on positive detection, whereas Cascade RCNN places more emphasis on negative detection. These deep learning-based methods are conducive to early cancer diagnosis from endoscopic images.
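The metrics this abstract compares all derive from the confusion matrix of a detector's positive/negative calls. A small illustrative helper (the counts below are made up, not the study's data) makes the definitions concrete:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used to compare classifiers.
    tp/fp/tn/fn are true/false positive/negative counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)            # quality of positive detections
    recall = tp / (tp + fn)               # sensitivity to true lesions
    specificity = tn / (tn + fp)          # quality of negative calls
    return accuracy, precision, recall, specificity

# Hypothetical counts for illustration only:
acc, prec, rec, spec = detection_metrics(tp=90, fp=10, tn=85, fn=15)
# acc = 175/200 = 0.875, prec = 0.9, rec = 90/105, spec = 85/95
```

A model that "emphasizes positive detection" trades specificity for recall (fewer missed lesions, more false alarms); one that emphasizes negative detection does the reverse, which is the distinction the conclusion draws between the RCNN variants.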
Affiliation(s)
- Xiufeng Su
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
- Qingshan Liu
- School of Information Science and Engineering, Harbin Institute of Technology, Weihai, Shandong, China
- Xiaozhong Gao
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
- Liyong Ma
- School of Information Science and Engineering, Harbin Institute of Technology, Weihai, Shandong, China
27
Anttila TT, Karjalainen TV, Mäkelä TO, Waris EM, Lindfors NC, Leminen MM, Ryhänen JO. Detecting Distal Radius Fractures Using a Segmentation-Based Deep Learning Model. J Digit Imaging 2023; 36:679-687. [PMID: 36542269 PMCID: PMC10039188 DOI: 10.1007/s10278-022-00741-5]
Abstract
Deep learning algorithms can be used to classify medical images. In distal radius fracture treatment, fracture detection and radiographic assessment of fracture displacement are critical steps. The aim of this study was to use pixel-level annotations of fractures to develop a deep learning model for precise distal radius fracture detection. We randomly divided 3785 consecutive emergency wrist radiograph examinations from six hospitals into a training set (3399 examinations) and a test set (386 examinations). The training set was used to develop the deep learning model and the test set to assess its validity. The consensus of three hand surgeons served as the gold standard for the test set. The area under the ROC curve was 0.97 (CI 0.95-0.98) and 0.95 (CI 0.92-0.98) for examinations without a cast. Fractures were identified with higher accuracy in the postero-anterior radiographs than in the lateral radiographs. Our deep learning model performed well in our multi-hospital setting with radiographs from multiple system manufacturers. Thus, segmentation-based deep learning models may provide additional benefit. Further research is needed for algorithm comparison and external validation.
Affiliation(s)
- Turkka T Anttila
- Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Topeliuksenkatu 5B, Helsinki, 00260, Finland.
- Teemu V Karjalainen
- Department of Orthopedics, Traumatology and Hand Surgery, Central Finland Hospital, Jyvaskyla, Finland
- Teemu O Mäkelä
- Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Department of Physics, University of Helsinki, Helsinki, Finland
- Eero M Waris
- Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Topeliuksenkatu 5B, Helsinki, 00260, Finland
- Nina C Lindfors
- Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Topeliuksenkatu 5B, Helsinki, 00260, Finland
- Miika M Leminen
- Analytics and AI Development Services, IT Department, Helsinki University Hospital, Helsinki, Finland
- Department of Otorhinolaryngology and Phoniatrics, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Jorma O Ryhänen
- Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Topeliuksenkatu 5B, Helsinki, 00260, Finland
28
De Santi LA, Meloni A, Santarelli MF, Pistoia L, Spasiano A, Casini T, Putti MC, Cuccia L, Cademartiri F, Positano V. Left Ventricle Detection from Cardiac Magnetic Resonance Relaxometry Images Using Visual Transformer. Sensors (Basel) 2023; 23:3321. [PMID: 36992032 PMCID: PMC10052975 DOI: 10.3390/s23063321]
Abstract
Left ventricle (LV) detection from Cardiac Magnetic Resonance (CMR) imaging is a fundamental step, preliminary to myocardium segmentation and characterization. This paper focuses on the application of a Visual Transformer (ViT), a novel neural network architecture, to automatically detect the LV in CMR relaxometry sequences. We implemented an object detector based on the ViT model to identify the LV in CMR multi-echo T2* sequences. We evaluated performance by slice location according to the American Heart Association model, using 5-fold cross-validation and an independent dataset of CMR T2*, T2, and T1 acquisitions. To the best of our knowledge, this is the first attempt to localize the LV from relaxometry sequences and the first application of ViT to LV detection. We obtained an Intersection over Union (IoU) index of 0.68 and a Correct Identification Rate (CIR) of the blood pool centroid of 0.99, comparable with other state-of-the-art methods. IoU and CIR values were significantly lower in apical slices. No significant differences in performance were observed on the independent T2* dataset (IoU = 0.68, p = 0.405; CIR = 0.94, p = 0.066). Performance was significantly worse on the independent T2 and T1 datasets (T2: IoU = 0.62, CIR = 0.95; T1: IoU = 0.67, CIR = 0.98), but still encouraging considering the different acquisition types. This study confirms the feasibility of applying ViT architectures to LV detection and defines a benchmark for relaxometry imaging.
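For readers unfamiliar with the two detection indices used here, the following is a hypothetical sketch (the paper does not publish code): box-level IoU from two axis-aligned bounding boxes, and a CIR-style decision that counts a detection as correct when the reference blood-pool centroid falls inside the predicted box.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def centroid_identified(pred_box, centroid):
    """One CIR trial: is the reference centroid inside the predicted box?"""
    x, y = centroid
    return pred_box[0] <= x <= pred_box[2] and pred_box[1] <= y <= pred_box[3]

# Toy values (not from the study):
iou = box_iou((0, 0, 10, 10), (5, 5, 15, 15))     # 25 / 175
ok = centroid_identified((0, 0, 10, 10), (7, 7))  # True
```

Averaging the boolean outcome over all test slices yields the CIR; this explains how CIR can stay high (0.94-0.99) even where IoU is modest, since a loosely placed box can still contain the centroid.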
Affiliation(s)
- Lisa Anita De Santi
- Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
- U.O.C. Bioingegneria, Fondazione G. Monasterio CNR-Regione Toscana, 56124 Pisa, Italy
- Antonella Meloni
- U.O.C. Bioingegneria, Fondazione G. Monasterio CNR-Regione Toscana, 56124 Pisa, Italy
- Department of Radiology, Fondazione G. Monasterio CNR-Regione Toscana, 56124 Pisa, Italy
- Laura Pistoia
- Department of Radiology, Fondazione G. Monasterio CNR-Regione Toscana, 56124 Pisa, Italy
- Anna Spasiano
- Unità Operativa Semplice Dipartimentale Malattie Rare del Globulo Rosso, Azienda Ospedaliera di Rilievo Nazionale “A. Cardarelli”, 80131 Napoli, Italy
- Tommaso Casini
- Centro Talassemie ed Emoglobinopatie, Ospedale “Meyer”, 50139 Firenze, Italy
- Maria Caterina Putti
- Clinica di Emato-Oncologia Pediatrica, Dipartimento di Salute della Donna e del Bambino, Azienda Ospedale Università, 35128 Padova, Italy
- Liana Cuccia
- Unità Operativa Complessa Ematologia con Talassemia, ARNAS Civico “Benfratelli-Di Cristina”, 90127 Palermo, Italy
- Filippo Cademartiri
- Department of Radiology, Fondazione G. Monasterio CNR-Regione Toscana, 56124 Pisa, Italy
- Vincenzo Positano
- U.O.C. Bioingegneria, Fondazione G. Monasterio CNR-Regione Toscana, 56124 Pisa, Italy
- Department of Radiology, Fondazione G. Monasterio CNR-Regione Toscana, 56124 Pisa, Italy
29
Bakrania A, Joshi N, Zhao X, Zheng G, Bhat M. Artificial intelligence in liver cancers: Decoding the impact of machine learning models in clinical diagnosis of primary liver cancers and liver cancer metastases. Pharmacol Res 2023; 189:106706. [PMID: 36813095 DOI: 10.1016/j.phrs.2023.106706]
Abstract
Liver cancers are the fourth leading cause of cancer-related mortality worldwide. In the past decade, breakthroughs in the field of artificial intelligence (AI) have inspired the development of algorithms in the cancer setting. A growing body of recent studies has evaluated machine learning (ML) and deep learning (DL) algorithms for pre-screening, diagnosis, and management of liver cancer patients through diagnostic image analysis, biomarker discovery, and prediction of personalized clinical outcomes. Despite the promise of these early AI tools, there is a significant need to explain the 'black box' of AI and work towards deployment to enable ultimate clinical translatability. Certain emerging fields, such as RNA nanomedicine for targeted liver cancer therapy, may also benefit from AI, specifically in nano-formulation research and development, given that they are still largely reliant on lengthy trial-and-error experiments. In this paper, we present the current landscape of AI in liver cancers along with the challenges of AI in liver cancer diagnosis and management. Finally, we discuss future perspectives on AI application in liver cancer and how a multidisciplinary approach using AI in nanomedicine could accelerate the transition of personalized liver cancer medicine from the bench to the clinic.
Affiliation(s)
- Anita Bakrania
- Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada; Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada.
- Xun Zhao
- Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada
- Gang Zheng
- Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Mamatha Bhat
- Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada; Division of Gastroenterology, Department of Medicine, University Health Network and University of Toronto, Toronto, ON, Canada; Department of Medical Sciences, Toronto, ON, Canada.
30
Valeri F, Bartolucci M, Cantoni E, Carpi R, Cisbani E, Cupparo I, Doria S, Gori C, Grigioni M, Lasagni L, Marconi A, Mazzoni LN, Miele V, Pradella S, Risaliti G, Sanguineti V, Sona D, Vannucchi L, Taddeucci A. UNet and MobileNet CNN-based model observers for CT protocol optimization: comparative performance evaluation by means of phantom CT images. J Med Imaging (Bellingham) 2023; 10:S11904. [PMID: 36895439 PMCID: PMC9989681 DOI: 10.1117/1.jmi.10.s1.s11904]
Abstract
Purpose The aim of this work is the development and characterization of a model observer (MO) based on convolutional neural networks (CNNs), trained to mimic human observers in image evaluation in terms of detection and localization of low-contrast objects in CT scans acquired on a reference phantom. The final goal is automatic image quality evaluation and CT protocol optimization to fulfill the ALARA principle. Approach Preliminary work was carried out to collect localization confidence ratings of human observers for signal presence/absence from a dataset of 30,000 CT images acquired on a polymethyl methacrylate (PMMA) phantom containing inserts filled with iodinated contrast media at different concentrations. The collected data were used to generate the labels for training the artificial neural networks. We developed and compared two CNN architectures, based respectively on UNet and MobileNetV2, specifically adapted to achieve the dual tasks of classification and localization. The CNN evaluation was performed by computing the area under the localization-ROC curve (LAUC) and accuracy metrics on the test dataset. Results The mean absolute percentage error between the LAUC of the human observer and the MO was found to be below 5% for the most significant test data subsets. A high inter-rater agreement was achieved in terms of S-statistics and other common statistical indices. Conclusions Very good agreement was measured between the human observer and the MO, as well as between the performance of the two algorithms. Therefore, this work strongly supports the feasibility of employing a CNN-MO combined with a specifically designed phantom in CT protocol optimization programs.
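The agreement criterion quoted above, the mean absolute percentage error between human-observer and model-observer LAUC values, is straightforward to compute; a minimal sketch in plain Python (the function name and sample LAUC values are illustrative, not taken from the study):

```python
def mape(reference, predicted):
    """Mean absolute percentage error (%) between paired measurements,
    e.g. human-observer vs. model-observer LAUC values per test subset."""
    if len(reference) != len(predicted):
        raise ValueError("inputs must be paired")
    return 100.0 * sum(abs(r - p) / abs(r)
                       for r, p in zip(reference, predicted)) / len(reference)

# Illustrative LAUC values for two test subsets
print(mape([0.80, 0.90], [0.84, 0.90]))  # ~2.5, below the 5% threshold
```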
Affiliation(s)
- Federico Valeri
- Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy; Università degli Studi di Firenze, Scuola di Scienze della Salute Umana, Florence, Italy
- Maurizio Bartolucci
- Ospedale S. Stefano, Azienda USL Toscana Centro, SOC Radiodiagnostica, Prato, Italy
- Elena Cantoni
- Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy
- Roberto Carpi
- Ospedale Santa Maria Nuova, Azienda USL Toscana Centro, SOC Radiologia, Florence, Italy
- Evaristo Cisbani
- Istituto Superiore di Sanità, Centro Nazionale Tecnologie Innovative in Sanità Pubblica, Rome, Italy
- Ilaria Cupparo
- Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy; Università degli Studi di Firenze, Scuola di Scienze della Salute Umana, Florence, Italy
- Sandra Doria
- Istituto di Chimica dei Composti OrganoMetallici, Consiglio Nazionale delle Ricerche, Florence, Italy; Università degli Studi di Firenze, European Laboratory for Nonlinear Spectroscopy, Florence, Italy
- Cesare Gori
- Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy
- Mauro Grigioni
- Istituto Superiore di Sanità, Centro Nazionale Tecnologie Innovative in Sanità Pubblica, Rome, Italy
- Lorenzo Lasagni
- Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy; Università degli Studi di Firenze, Scuola di Scienze della Salute Umana, Florence, Italy
- Alessandro Marconi
- Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy
- Lorenzo Nicola Mazzoni
- Ospedale San Jacopo, Azienda USL Toscana Centro, UO Fisica Sanitaria Prato e Pistoia, Pistoia, Italy
- Vittorio Miele
- Azienda Ospedaliero-Universitaria Careggi, SOD Radiodiagnostica di Emergenza-Urgenza, Florence, Italy
- Silvia Pradella
- Azienda Ospedaliero-Universitaria Careggi, SOD Radiodiagnostica di Emergenza-Urgenza, Florence, Italy
- Guido Risaliti
- Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy
- Valentina Sanguineti
- Istituto Italiano di Tecnologia, Pattern Analysis & Computer Vision, Genoa, Italy
- Diego Sona
- Fondazione Bruno Kessler, Data Science for Health Unit, Trento, Italy
- Letizia Vannucchi
- Ospedale S. Jacopo, AUSL Toscana Centro, SOC Radiodiagnostica, Pistoia, Italy
- Adriana Taddeucci
- Azienda Ospedaliero-Universitaria Careggi, UO Fisica Sanitaria, Florence, Italy; Istituto Nazionale di Fisica Nucleare - Sezione di Firenze, Sesto Fiorentino, Italy
31
Seo JW, Park S, Kim YJ, Hwang JH, Yu SH, Kim JH, Kim KG. Artificial intelligence-based iliofemoral deep venous thrombosis detection using a clinical approach. Sci Rep 2023; 13:967. [PMID: 36653367 PMCID: PMC9849339 DOI: 10.1038/s41598-022-25849-0]
Abstract
Early diagnosis of deep venous thrombosis is essential for reducing complications, such as recurrent pulmonary embolism and venous thromboembolism. There are numerous studies on enhancing the efficiency of computer-aided diagnosis, but clinical diagnostic approaches have rarely been considered. In this study, we evaluated the performance of an artificial intelligence (AI) algorithm in the detection of iliofemoral deep venous thrombosis on computed tomography angiography of the lower extremities, to investigate the effectiveness of using the clinical approach during the feature extraction process of the AI algorithm. To investigate the effectiveness of the proposed method, we created synthesized images that reflect practical diagnostic procedures and applied them to the convolutional neural network-based RetinaNet model. We compared and analyzed the performances based on the model's backbone and data. The performance of the model was as follows: ResNet50 backbone: sensitivity = 0.843 (± 0.037), false positives per image = 0.608 (± 0.139); ResNet152 backbone: sensitivity = 0.839 (± 0.031), false positives per image = 0.503 (± 0.079). The results demonstrate the effectiveness of the suggested method on computed tomography angiography of the lower extremities and its potential to improve the reporting efficiency of critical iliofemoral deep venous thrombosis cases.
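The two figures of merit reported per backbone, lesion-level sensitivity and false positives per image, follow directly from per-image counts of matched detections; a minimal pure-Python sketch (the function name and example counts are illustrative, not from the study):

```python
def detection_metrics(per_image_counts):
    """Compute sensitivity and false positives per image.

    per_image_counts: list of (true_positives, false_negatives, false_positives)
    tuples, one per image, obtained after matching detections to ground truth.
    """
    tp = sum(c[0] for c in per_image_counts)
    fn = sum(c[1] for c in per_image_counts)
    fp = sum(c[2] for c in per_image_counts)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    fp_per_image = fp / len(per_image_counts)
    return sensitivity, fp_per_image

# Example: three images with (TP, FN, FP) counts
sens, fppi = detection_metrics([(2, 0, 1), (1, 1, 0), (3, 0, 1)])
# sensitivity = 6/7, false positives per image = 2/3
```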
Affiliation(s)
- Jae Won Seo
- Department of Health Sciences and Technology, GAIHST, Gachon University, Incheon, 21999, Republic of Korea
- Suyoung Park
- Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, 21565, Republic of Korea
- Young Jae Kim
- Department of Biomedical Engineering, Gil Medical Center, Gachon University, Incheon, 21565, Republic of Korea
- Jung Han Hwang
- Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, 21565, Republic of Korea
- Sung Hyun Yu
- Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, 21565, Republic of Korea
- Jeong Ho Kim
- Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, 21565, Republic of Korea
- Kwang Gi Kim
- Department of Health Sciences and Technology, GAIHST, Gachon University, Incheon, 21999, Republic of Korea; Department of Biomedical Engineering, Gil Medical Center, Gachon University, Incheon, 21565, Republic of Korea
32
Cui L, Fan Z, Yang Y, Liu R, Wang D, Feng Y, Lu J, Fan Y. Deep Learning in Ischemic Stroke Imaging Analysis: A Comprehensive Review. Biomed Res Int 2022; 2022:2456550. [PMID: 36420096 PMCID: PMC9678444 DOI: 10.1155/2022/2456550]
Abstract
Ischemic stroke is a cerebrovascular disease with high morbidity and mortality rates, which poses a serious challenge to human health and life. Meanwhile, the management of ischemic stroke remains highly dependent on manual visual analysis of noncontrast computed tomography (CT) or magnetic resonance imaging (MRI). However, equipment artifacts and noise, as well as the radiologist's experience, play a significant role in diagnostic accuracy. To overcome these limitations, the number of computer-aided diagnosis (CAD) methods for ischemic stroke has increased substantially over the past decade. In particular, deep learning models with massive data-learning capabilities are recognized as powerful auxiliary tools for acute intervention and prognostic guidance in ischemic stroke. To help select appropriate interventions, facilitate clinical practice, and improve patients' clinical outcomes, this review first surveys the current state-of-the-art deep learning technology. We then summarize its major applications in acute ischemic stroke imaging, particularly its potential functions in stroke diagnosis and multimodal prognostication. Finally, we sketch out current problems and future prospects.
Affiliation(s)
- Liyuan Cui
- School of Medical Imaging, Hangzhou Medical College, Hangzhou, Zhejiang, China
- Zhiyuan Fan
- Centre of Intelligent Medical Technology and Equipment, Binjiang Institute of Zhejiang University, Hangzhou, Zhejiang, China
- Yingjian Yang
- School of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Rui Liu
- School of Medical Imaging, Hangzhou Medical College, Hangzhou, Zhejiang, China
- Dajiang Wang
- School of Medical Imaging, Hangzhou Medical College, Hangzhou, Zhejiang, China
- Yingying Feng
- School of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Jiahui Lu
- School of Medical Imaging, Hangzhou Medical College, Hangzhou, Zhejiang, China
- Yifeng Fan
- School of Medical Imaging, Hangzhou Medical College, Hangzhou, Zhejiang, China
33
Liu Y, Lian L, Zhang E, Xu L, Xiao C, Zhong X, Li F, Jiang B, Dong Y, Ma L, Huang Q, Xu M, Zhang Y, Yu D, Yan C, Qin P. Mixed-UNet: Refined class activation mapping for weakly-supervised semantic segmentation with multi-scale inference. Front Comput Sci 2022. [DOI: 10.3389/fcomp.2022.1036934]
Abstract
Deep learning techniques have shown great potential in medical image processing, particularly through accurate and reliable image segmentation of magnetic resonance imaging (MRI) or computed tomography (CT) scans, which allows the localization and diagnosis of lesions. However, training these segmentation models requires a large number of manually annotated pixel-level labels, which are time-consuming and labor-intensive to produce, in contrast to image-level labels, which are easier to obtain. It is therefore attractive to address this problem with weakly-supervised semantic segmentation models that use image-level labels as supervision, since this can significantly reduce human annotation effort. Most advanced solutions exploit class activation mapping (CAM). However, original CAMs rarely capture the precise boundaries of lesions. In this study, we propose a multi-scale inference strategy to refine CAMs by reducing the detail loss inherent in single-scale reasoning. For segmentation, we develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase; the final result is obtained by fusing the features extracted from the two branches. We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on a dataset collected from a local hospital and on public datasets. The validation results demonstrate that our model surpasses available methods under the same supervision level in the segmentation of various lesions from brain imaging.
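The core refinement idea, averaging class activation maps computed at several input scales after resizing them to a common resolution, can be sketched in a few lines of pure Python with nearest-neighbour resizing (function names are ours, and the real pipeline operates on CNN activations rather than toy arrays):

```python
def upsample_nearest(cam, out_h, out_w):
    """Nearest-neighbour resize of a 2-D activation map (list of lists)."""
    in_h, in_w = len(cam), len(cam[0])
    return [[cam[i * in_h // out_h][j * in_w // out_w] for j in range(out_w)]
            for i in range(out_h)]

def fuse_multiscale_cams(cams, out_h, out_w):
    """Average CAMs computed at different input scales after resizing each
    to a common resolution (the multi-scale inference idea), then
    normalise to [0, 1] so a threshold can produce a pseudo-mask."""
    resized = [upsample_nearest(c, out_h, out_w) for c in cams]
    n = len(resized)
    fused = [[sum(r[i][j] for r in resized) / n for j in range(out_w)]
             for i in range(out_h)]
    lo = min(min(row) for row in fused)
    hi = max(max(row) for row in fused)
    return [[(v - lo) / (hi - lo + 1e-8) for v in row] for row in fused]
```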
34
Ma X, Lu X, Huang Y, Yang X, Xu Z, Mo G, Ren Y, Li L. An Advanced Chicken Face Detection Network Based on GAN and MAE. Animals (Basel) 2022; 12:ani12213055. [PMID: 36359179 PMCID: PMC9655765 DOI: 10.3390/ani12213055]
Abstract
Simple Summary Chicken face detection is a fundamental task for accurate poultry management. Satisfactory chicken face detection is necessary for downstream tasks such as day-age detection, behavior recognition, and health monitoring. Nonetheless, existing chicken face image datasets are small-scale, and there are few related studies. Moreover, chicken heads and facial features are smaller than those of other livestock, making recognition tricky. Motivated by these needs and obstacles, this paper proposes a chicken face detection network with an augmentation module. Based on the YOLOv4 backbone, our model achieved 0.91 F1, 0.84 mAP, and 37 FPS, far surpassing the two-stage RCNN and EfficientDet baselines. This model can be applied to an actual chicken coop, and its performance is adequate for conducting downstream tasks. Abstract Achieving high-accuracy chicken face detection is a significant breakthrough for smart poultry agriculture in large-scale farming and precision management. However, accurate chicken face datasets are currently scarce, existing detection models have low accuracy and slow speed, and related detection algorithms are ineffective for small-object detection. To tackle these problems, this paper proposes an object detection network based on GAN-MAE (generative adversarial network-masked autoencoders) data augmentation for detecting chickens of different ages. First, images were generated using GAN and MAE to augment the dataset. Afterward, CSPDarknet53 was used as the backbone network to enhance the receptive field of the object detection network so as to detect objects of different sizes in the same image. A 128×128 feature map output was added to the network's three existing feature map outputs, changing the finest output from eightfold to fourfold downsampling and providing smaller object features for subsequent feature fusion.
Second, the feature fusion module was improved based on the idea of dense connection, achieving feature reuse so that the YOLO head classifier could combine features from different levels of feature layers for better classification and detection results. Ultimately, comparison experiments showed that the mAP (mean average precision) of the suggested method reached 0.84, which was 29.2% higher than that of other networks at the same detection speed of up to 37 frames per second. Better detection accuracy can thus be obtained while meeting real-world detection requirements. Additionally, an end-to-end web system was designed to apply the algorithm in practical applications.
Affiliation(s)
- Xiaoxiao Ma
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Xinai Lu
- International College Beijing, China Agricultural University, Beijing 100083, China
- Yihong Huang
- College of Animal Science and Technology, China Agricultural University, Beijing 100083, China
- Xinyi Yang
- College of Economics and Management, China Agricultural University, Beijing 100083, China
- Ziyin Xu
- College of Economics and Management, China Agricultural University, Beijing 100083, China
- Guozhao Mo
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Yufei Ren
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Lin Li
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Correspondence:
35
Pasquini L, Napolitano A, Pignatelli M, Tagliente E, Parrillo C, Nasta F, Romano A, Bozzao A, Di Napoli A. Synthetic Post-Contrast Imaging through Artificial Intelligence: Clinical Applications of Virtual and Augmented Contrast Media. Pharmaceutics 2022; 14:pharmaceutics14112378. [PMID: 36365197 PMCID: PMC9695136 DOI: 10.3390/pharmaceutics14112378]
Abstract
Contrast media are widely used in biomedical imaging, due to their relevance in the diagnosis of numerous disorders. However, the risk of adverse reactions, concern about potential damage to sensitive organs, and the recently described brain deposition of gadolinium salts limit the use of contrast media in clinical practice. In recent years, the application of artificial intelligence (AI) techniques to biomedical imaging has led to the development of 'virtual' and 'augmented' contrasts. The idea behind these applications is to generate synthetic post-contrast images through AI computational modeling, starting from the information available in other images acquired during the same scan. In these AI models, non-contrast images (virtual contrast) or low-dose post-contrast images (augmented contrast) are used as input data to generate synthetic post-contrast images, which are often indistinguishable from the native ones. In this review, we discuss the most recent advances in AI applications to biomedical imaging concerning synthetic contrast media.
Affiliation(s)
- Luca Pasquini
- Neuroradiology Unit, Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Antonio Napolitano
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Correspondence:
- Matteo Pignatelli
- Radiology Department, Castelli Hospital, Via Nettunense Km 11.5, 00040 Ariccia, Italy
- Emanuela Tagliente
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Chiara Parrillo
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Francesco Nasta
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Andrea Romano
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Alessandro Bozzao
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Alberto Di Napoli
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Neuroimaging Lab, IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
36
Jin J, Zhang Q, Dong B, Ma T, Mei X, Wang X, Song S, Peng J, Wu A, Dong L, Kong D. Automatic detection of early gastric cancer in endoscopy based on Mask region-based convolutional neural networks (Mask R-CNN) (with video). Front Oncol 2022; 12:927868. [PMID: 36338757 PMCID: PMC9630732 DOI: 10.3389/fonc.2022.927868]
Abstract
The artificial intelligence (AI)-assisted endoscopic detection of early gastric cancer (EGC) has been preliminarily developed. Currently used algorithms still exhibit limitations of heavy computation and low-precision expression. The present study aimed to develop an automatic endoscopic detection system for EGC based on a mask region-based convolutional neural network (Mask R-CNN) and to evaluate its performance in controlled trials. For this purpose, a total of 4,471 white light images (WLIs) and 2,662 narrow band images (NBIs) of EGC were obtained for training and testing. Ten WLI videos were obtained prospectively to examine the performance of the Mask R-CNN system. Furthermore, 400 WLIs were randomly selected for comparison between the Mask R-CNN system and doctors. The evaluation criteria included accuracy, sensitivity, specificity, positive predictive value, and negative predictive value. The results revealed no significant differences from the pathological diagnosis with the Mask R-CNN system in the WLI test (χ2 = 0.189, P=0.664; accuracy, 90.25%; sensitivity, 91.06%; specificity, 89.01%) or in the NBI test (χ2 = 0.063, P=0.802; accuracy, 95.12%; sensitivity, 97.59%). Among the 10 real-time WLI videos, the processing speed was up to 35 frames/sec, with an accuracy of 90.27%. In a controlled experiment of 400 WLIs, the sensitivity of the Mask R-CNN system was significantly higher than that of experts (χ2 = 7.059, P=0.000; 93.00% vs. 80.20%), its specificity was higher than that of junior doctors (χ2 = 9.955, P=0.000; 82.67% vs. 71.87%), and its overall accuracy was higher than that of senior doctors (χ2 = 7.009, P=0.000; 85.25% vs. 78.00%). On the whole, the present study demonstrates that the Mask R-CNN system exhibited excellent performance in the detection of EGC, particularly for real-time analysis of WLIs, and may thus be effectively applied in clinical settings.
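The evaluation criteria listed (accuracy, sensitivity, specificity, positive predictive value, negative predictive value) all derive from a 2×2 confusion matrix; a minimal sketch with made-up counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for a 200-image test set
m = diagnostic_metrics(tp=90, fp=10, tn=85, fn=15)
```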
Affiliation(s)
- Jing Jin
- Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Qianqian Zhang
- Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Bill Dong
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
- Tao Ma
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
- Xuecan Mei
- Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xi Wang
- Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Shaofang Song
- Research and Development Department, Hefei Zhongna Medical Instrument Co. LTD, Hefei, China
- Jie Peng
- Research and Development Department, Hefei Zhongna Medical Instrument Co. LTD, Hefei, China
- Aijiu Wu
- Research and Development Department, Hefei Zhongna Medical Instrument Co. LTD, Hefei, China
- Lanfang Dong
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
- Derun Kong
- Key Laboratory of Digestive Diseases of Anhui Province, Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- *Correspondence: Derun Kong,
37
Yang C, Guo H. A Lightweight Semantic Segmentation Algorithm Based on Deep Convolutional Neural Networks. Comput Intell Neurosci 2022; 2022:5339664. [PMID: 36110913 PMCID: PMC9470329 DOI: 10.1155/2022/5339664]
Abstract
With the development of deep learning theory and the decreasing cost of acquiring massive data, image semantic segmentation algorithms based on Convolutional Neural Networks (CNNs) are gradually replacing conventional segmentation algorithms owing to their highly accurate segmentation performance. By increasing the amount of training data and stacking more convolutional layers to form Deep Convolutional Neural Networks (DCNNs), a neural network model with higher segmentation accuracy can be obtained, but such models suffer from serious memory consumption and long latency. For some application scenarios, such as augmented reality and mobile interaction, real-time processing is therefore not possible. To improve the speed of semantic segmentation while keeping the results as accurate as possible, this paper proposes a semantic segmentation algorithm based on lightweight convolutional neural networks. Taking both computational complexity and segmentation accuracy into account, the algorithm starts from the perspective of extracting high-level semantic features and introduces a position-attention mechanism with richer contextual information to model the relationships between different pixels, compensating for the limited local receptive field of convolutions. To recover clearer target boundaries, a channel attention mechanism is introduced in the decoding part of the model to mine more useful feature-channel information and effectively improve the fusion of low-level with high-level features. Verified on a publicly available dataset and compared with popular semantic segmentation methods, the model proposed in this paper achieves higher semantic segmentation accuracy and shows clear advantages in objective evaluation.
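The channel attention mechanism described for the decoding stage can be illustrated with a squeeze-and-excitation style gate: global average pooling per channel, a small two-layer bottleneck, and a sigmoid that reweights each channel. This is a generic pure-Python sketch of that pattern (names and weights are illustrative, not the paper's exact module):

```python
import math

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation style channel attention.

    feature_maps: list of C 2-D maps (lists of lists);
    w1: weights of the C -> C/r bottleneck layer (rows = hidden units);
    w2: weights of the C/r -> C expansion layer (rows = output channels).
    """
    # Squeeze: global average pooling per channel
    squeezed = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
                for fm in feature_maps]
    # Excitation: bottleneck FC with ReLU, then expansion FC with sigmoid
    hidden = [max(0.0, sum(w * s for w, s in zip(ws, squeezed))) for ws in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(ws, hidden))))
             for ws in w2]
    # Reweight each channel's map by its gate in (0, 1)
    return [[[v * g for v in row] for row in fm]
            for fm, g in zip(feature_maps, gates)]
```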
Affiliation(s)
- Chengzhi Yang
- Laboratory of Intelligent Information Processing, Suzhou University, Suzhou 234000, Anhui, China
- Hongjun Guo
- Laboratory of Intelligent Information Processing, Suzhou University, Suzhou 234000, Anhui, China
38
Lyu J, Shi H, Zhang J, Norvilitis J. Prediction model for suicide based on back propagation neural network and multilayer perceptron. Front Neuroinform 2022; 16:961588. [PMID: 36059864 PMCID: PMC9435582 DOI: 10.3389/fninf.2022.961588]
Abstract
Introduction The aim was to explore neural network prediction models for suicide based on back propagation (BP) and the multilayer perceptron, in order to establish an accessible, non-invasive, brief, and more precise suicide prediction model. Materials and methods Data were collected by psychological autopsy (PA) in 16 rural counties from three provinces in China. The questionnaire was designed to investigate risk factors for suicide. Univariate statistical methods were used to preliminarily filter factors, and a BP neural network and a multilayer perceptron were employed to establish the prediction models. Results The overall percentage of correctly classified samples was 80.9% in the logistic regression model. The total coincidence rate for all samples was 82.9%, and the area under the ROC curve was about 82.0% in the back propagation neural network (BPNN) prediction model. The AUC of the optimal multilayer perceptron prediction model was above 90%. The discrimination efficiency of the multilayer perceptron model was superior to that of the BPNN model. Conclusions The neural network prediction models showed greater accuracy than traditional methods, and the multilayer perceptron was the best prediction model of suicide. Such neural network prediction models have significance for clinical diagnosis and for developing artificial intelligence (AI)-assisted clinical systems.
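The AUC values reported for both models measure the probability that a randomly chosen case receives a higher predicted risk score than a randomly chosen control; a minimal sketch of that rank-based computation (Mann-Whitney form, with illustrative data rather than the study's):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of positive/negative pairs ranked correctly,
    counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative risk scores: three of four positive/negative pairs ranked correctly
print(roc_auc([1, 0, 1, 0], [0.8, 0.7, 0.3, 0.2]))  # 0.75
```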
Affiliation(s)
- Juncheng Lyu
- School of Public Health, Weifang Medical University, Weifang, China
- Hong Shi
- Shandong Ikang Group, Weifang Ikang Guobin Medical Examination Center, Weifang, China
- Jie Zhang
- Department of Sociology, Central University of Finance and Economics, Beijing, China
- Department of Sociology, State University of New York Buffalo State, Buffalo, NY, United States
- Jill Norvilitis
- Department of Sociology, State University of New York Buffalo State, Buffalo, NY, United States
39
Olaniyi OM, Alfa AA, Umar BU. Artificial Intelligence for Demystifying Blockchain Technology Challenges: A Survey of Recent Advances. Front Blockchain 2022. [DOI: 10.3389/fbloc.2022.927006]
Abstract
Blockchain technology has gained considerable traction in the past five years due to the innovations introduced by the digital currency Bitcoin. The technology is powered by distributed ledger technology, a distributed database system. It is renowned for its decentralized, attack-resistant, and tamper-proof attributes, making it a top choice in several non-monetary applications; indeed, the privacy and security problems of the Internet of Things have been tackled aggressively with blockchain. Several problems have been identified with blockchain technology, such as large delays and lack of support for real-time transaction processing, authorization, node verification, and consensus mechanisms. This article provides a comprehensive survey of recent advances and solutions to the problems of blockchain technology that leverage artificial intelligence approaches. The outcomes of this study provide valuable information and guidance for the design of blockchain-based systems that support time-sensitive and real-time applications and processes.
40
Konnaris MA, Brendel M, Fontana MA, Otero M, Ivashkiv LB, Wang F, Bell RD. Computational pathology for musculoskeletal conditions using machine learning: advances, trends, and challenges. Arthritis Res Ther 2022; 24:68. [PMID: 35277196 PMCID: PMC8915507 DOI: 10.1186/s13075-021-02716-3]
Abstract
Histopathology is widely used to analyze clinical biopsy specimens and tissues from pre-clinical models of a variety of musculoskeletal conditions. Histological assessment relies on scoring systems that require expertise, time, and resources, which can create an analysis bottleneck. Recent advancements in digital imaging and image processing provide an opportunity to automate histological analyses by implementing advanced statistical models such as machine learning and deep learning, which would greatly benefit the musculoskeletal field. This review provides a high-level overview of machine learning applications, a general pipeline from tissue collection to model selection, and highlights the development of image analysis methods, including some machine learning applications, to solve musculoskeletal problems. We discuss the optimization steps for tissue processing, sectioning, staining, and imaging that are critical for the successful generalizability of an automated image analysis model. We also comment on the considerations that should be taken into account during model selection and the considerable advances in the field of computer vision outside of histopathology, which can be leveraged for image analysis. Finally, we provide a historical perspective on previously used histopathological image analysis applications for musculoskeletal diseases and contrast it with the advantages of implementing state-of-the-art computational pathology approaches. While some deep learning approaches have been used, there is a significant opportunity to expand the use of such approaches to solve musculoskeletal problems.
Collapse
Affiliation(s)
- Maxwell A Konnaris
- Research Institute, Hospital for Special Surgery, New York, USA.,Orthopedic Soft Tissue Research Program, Hospital for Special Surgery, New York, USA
| | - Matthew Brendel
- Department of Population Health Sciences, Weill Cornell Medical College, New York, USA.,Institute for Computational Biomedicine, Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY, USA
| | - Mark Alan Fontana
- Department of Population Health Sciences, Weill Cornell Medical College, New York, USA.,Center for Analytics, Modeling, & Performance, Hospital for Special Surgery, New York, USA
| | - Miguel Otero
- Research Institute, Hospital for Special Surgery, New York, USA.,Orthopedic Soft Tissue Research Program, Hospital for Special Surgery, New York, USA
| | - Lionel B Ivashkiv
- Research Institute, Hospital for Special Surgery, New York, USA.,Arthritis and Tissue Degeneration Program, Hospital for Special Surgery, New York, USA.,Rosenweig Genomics Center, Hospital for Special Surgery, New York, USA
| | - Fei Wang
- Department of Population Health Sciences, Weill Cornell Medical College, New York, USA
| | - Richard D Bell
- Research Institute, Hospital for Special Surgery, New York, USA. .,Center for Analytics, Modeling, & Performance, Hospital for Special Surgery, New York, USA. .,Rosenweig Genomics Center, Hospital for Special Surgery, New York, USA.
| |
Collapse
|
41
|
Revailler W, Cottereau AS, Rossi C, Noyelle R, Trouillard T, Morschhauser F, Casasnovas O, Thieblemont C, Le Gouill S, André M, Ghesquieres H, Ricci R, Meignan M, Kanoun S. Deep Learning Approach to Automatize TMTV Calculations Regardless of Segmentation Methodology for Major FDG-Avid Lymphomas. Diagnostics (Basel) 2022; 12:diagnostics12020417. [PMID: 35204515 PMCID: PMC8870809 DOI: 10.3390/diagnostics12020417] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 01/28/2022] [Accepted: 02/01/2022] [Indexed: 11/16/2022] Open
Abstract
The total metabolic tumor volume (TMTV) is a new prognostic factor in lymphomas that could benefit from automation with deep learning convolutional neural networks (CNN). Manual TMTV segmentations of 1218 baseline 18FDG-PET/CT scans were used for training. A 3D V-NET model was trained to generate segmentations with a soft dice loss. Ground truth segmentations were generated using a combination of different thresholds (TMTVprob) applied to the manual region of interest (Otsu, relative 41%, and SUV 2.5 and 4 cutoffs). In total, 407 and 405 PET/CT scans were used for the test and validation datasets, respectively. The training was completed in 93 h. In comparison with the TMTVprob, the mean dice reached 0.84 in the training set, 0.84 in the validation set, and 0.76 in the test set. The median dice scores for each TMTV methodology were 0.77, 0.70, and 0.90 for the 41%, 2.5, and 4 cutoffs, respectively. Differences in the median TMTV between manual and predicted TMTV were 32, 147, and 5 mL. Spearman's correlations between manual and predicted TMTV were 0.92, 0.95, and 0.98. This generic deep learning model for computing TMTV in lymphomas can drastically reduce the computation time of TMTV.
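For context, the dice scores reported above measure overlap between predicted and reference masks. A minimal illustrative sketch (not the authors' code) of the Dice similarity coefficient for binary masks:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity: 2|A intersect B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps avoids division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 1-D "volumes": 3 voxels overlap, 4 voxels in each mask -> 2*3 / (4+4) = 0.75
pred  = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
score = dice_coefficient(pred, truth)
```

The soft dice loss used for training is the differentiable analogue of this score, computed on network probabilities rather than thresholded masks.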
Collapse
Affiliation(s)
- Wendy Revailler
- Centre de Recherche Clinique de Toulouse, Team 9, 31100 Toulouse, France; (W.R.); (T.T.)
- Institut Universitaire du Cancer de Toulouse, Institut Claudius Regaud, Nuclear Medicine, 1 avenue Joliot Curie, 31000 Toulouse, France
| | - Anne Ségolène Cottereau
- Assistance Publique-Hôpitaux de Paris, Hôpital Cochin, Nuclear Medicine, René Descartes University, 75014 Paris, France;
| | - Cedric Rossi
- CHU Dijon, Hematology, 10 Boulevard Maréchal De Lattre De Tassigny, 21000 Dijon, France; (C.R.); (O.C.)
| | | | - Thomas Trouillard
- Centre de Recherche Clinique de Toulouse, Team 9, 31100 Toulouse, France; (W.R.); (T.T.)
- Institut Universitaire du Cancer de Toulouse, Institut Claudius Regaud, Nuclear Medicine, 1 avenue Joliot Curie, 31000 Toulouse, France
| | - Franck Morschhauser
- ULR 7365—GRITA—Groupe de Recherche sur les formes Injectables et les Technologies Associées, University of Lille, CHU Lille, 59000 Lille, France;
| | - Olivier Casasnovas
- CHU Dijon, Hematology, 10 Boulevard Maréchal De Lattre De Tassigny, 21000 Dijon, France; (C.R.); (O.C.)
| | - Catherine Thieblemont
- Hemato-Oncology Unit, Saint-Louis University Hospital Center, Public Hospital Network of Paris, 75010 Paris, France;
| | - Steven Le Gouill
- Department of Hematology, Nantes University Hospital, INSERM CRCINA Nantes-Angers, NeXT Université de Nantes, 44000 Nantes, France;
| | - Marc André
- Department of Hematology, Université catholique de Louvain, CHU UcL Namur, 5530 Yvoir, Belgium;
| | - Herve Ghesquieres
- Department of Hematology, Hôpital Lyon Sud, Hospices Civils de Lyon, 69310 Pierre-Bénite, France;
| | - Romain Ricci
- LYSARC, Centre Hospitalier Lyon-Sud, 165 Chemin du Grand Revoyet Bâtiment 2D, 69310 Pierre-Bénite, France;
| | - Michel Meignan
- LYSA Imaging, Henri Mondor University Hospital, AP-HP, University Paris East, 94000 Créteil, France;
| | - Salim Kanoun
- Centre de Recherche Clinique de Toulouse, Team 9, 31100 Toulouse, France; (W.R.); (T.T.)
- Institut Universitaire du Cancer de Toulouse, Institut Claudius Regaud, Nuclear Medicine, 1 avenue Joliot Curie, 31000 Toulouse, France
- Correspondence: ; Tel.: +33-6-88-62-81-18
| |
Collapse
|
42
|
Zhang J, Qiu Y, Peng L, Zhou Q, Wang Z, Qi M. A comprehensive review of methods based on deep learning for diabetes-related foot ulcers. Front Endocrinol (Lausanne) 2022; 13:945020. [PMID: 36004341 PMCID: PMC9394750 DOI: 10.3389/fendo.2022.945020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 07/04/2022] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND Diabetes mellitus (DM) is a chronic disease characterized by hyperglycemia. If not treated in time, it may lead to lower limb amputation. At the initial stage, detection of a diabetes-related foot ulcer (DFU) is very difficult. Deep learning has demonstrated state-of-the-art performance in various fields and has been used to analyze images of DFUs. OBJECTIVE This article reviewed current applications of deep learning to the early detection of DFU to avoid limb amputation or infection. METHODS Relevant literature on deep learning models, including classification, object detection, and semantic segmentation for images of DFU, published during the past 10 years, was analyzed. RESULTS Currently, the primary uses of deep learning in early DFU detection are related to different algorithms. For classification tasks, improved classification models were all based on convolutional neural networks (CNNs). The model with parallel convolutional layers based on GoogLeNet and the ensemble model outperformed the other models in classification accuracy. For object detection tasks, the models were based on architectures such as faster R-CNN, You-Only-Look-Once (YOLO) v3, YOLO v5, or EfficientDet. The refinements on YOLO v3 models achieved an accuracy of 91.95%, and the model with an adaptive faster R-CNN architecture achieved a mean average precision (mAP) of 91.4%, which outperformed the other models. For semantic segmentation tasks, the models were based on architectures such as fully convolutional networks (FCNs), U-Net, V-Net, or SegNet. The model with U-Net outperformed the other models with an accuracy of 94.96%. For instance segmentation tasks, the models were based on architectures such as mask R-CNN. The model with mask R-CNN obtained a precision of 0.8632 and a mAP of 0.5084.
CONCLUSION Although current research is promising in the ability of deep learning to improve a patient's quality of life, further research is required to better understand the mechanisms of deep learning for DFUs.
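The mAP figures quoted in the abstract above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal illustrative sketch of box IoU (assumed standard definition, not taken from any of the reviewed models):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes overlapping in a 1x1 square: inter = 1, union = 7 -> IoU = 1/7
iou = box_iou((0, 0, 2, 2), (1, 1, 3, 3))
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a chosen threshold (often 0.5); mAP then averages precision over recall levels and classes.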
Collapse
Affiliation(s)
- Jianglin Zhang
- Department of Dermatology, Shenzhen People's Hospital, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen, China
| | - Yue Qiu
- Dermatology Department of Xiangya Hospital, Central South University, Changsha, China
| | - Li Peng
- School of Computer Science, Hunan First Normal University, Changsha, China
| | - Qiuhong Zhou
- Teaching and Research Section of Clinical Nursing, Xiangya Hospital of Central South University, Changsha, China
| | - Zheng Wang
- School of Computer Science, Hunan First Normal University, Changsha, China
- *Correspondence: Zheng Wang, ; Min Qi,
| | - Min Qi
- Department of Plastic Surgery, Xiangya Hospital, Central South University, Changsha, China
- *Correspondence: Zheng Wang, ; Min Qi,
| |
Collapse
|
43
|
Lim HK, Jung SK, Kim SH, Cho Y, Song IS. Deep semi-supervised learning for automatic segmentation of inferior alveolar nerve using a convolutional neural network. BMC Oral Health 2021; 21:630. [PMID: 34876105 PMCID: PMC8650351 DOI: 10.1186/s12903-021-01983-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2021] [Accepted: 11/22/2021] [Indexed: 11/10/2022] Open
Abstract
Background The inferior alveolar nerve (IAN) innervates and regulates the sensation of the mandibular teeth and lower lip. The position of the IAN should be monitored prior to surgery. Therefore, a study using artificial intelligence (AI) was planned to image and track the position of the IAN automatically for quicker and safer surgery. Methods A total of 138 cone-beam computed tomography datasets (internal: 98, external: 40) collected from multiple centers (three hospitals) were used in the study. A customized 3D nnU-Net was used for image segmentation. Active learning, which consists of three steps, was carried out in iterations for 83 datasets with cumulative additions after each step. Subsequently, the accuracy of the model for IAN segmentation was evaluated using the 50 datasets. The accuracy, derived as the dice similarity coefficient (DSC) value, and the segmentation time for each learning step were compared. In addition, visual scoring was used to comparatively evaluate the manual and automatic segmentation. Results After learning, the DSC gradually increased from 0.48 ± 0.11 to 0.50 ± 0.11, and then to 0.58 ± 0.08. The DSC for the external dataset was 0.49 ± 0.12. The times required for segmentation were 124.8, 143.4, and 86.4 s, showing a large decrease at the final stage. In visual scoring, the accuracy of manual segmentation was found to be higher than that of automatic segmentation. Conclusions The deep active learning framework can serve as a fast, accurate, and robust clinical tool for demarcating IAN location.
Collapse
Affiliation(s)
- Ho-Kyung Lim
- Department of Oral and Maxillofacial Surgery, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
| | - Seok-Ki Jung
- Department of Orthodontics, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
| | - Seung-Hyun Kim
- Department of Medical Humanities, Korea University College of Medicine, 46, Gaeunsa 2-gil, Seongbuk-gu, Seoul, 02842, Republic of Korea
| | - Yongwon Cho
- Department of Radiology and AI Center, Korea University College of Medicine, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea.
| | - In-Seok Song
- Department of Oral and Maxillofacial Surgery, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea.
| |
Collapse
|
44
|
Hasani N, Farhadi F, Morris MA, Nikpanah M, Rhamim A, Xu Y, Pariser A, Collins MT, Summers RM, Jones E, Siegel E, Saboury B. Artificial Intelligence in Medical Imaging and its Impact on the Rare Disease Community: Threats, Challenges and Opportunities. PET Clin 2021; 17:13-29. [PMID: 34809862 DOI: 10.1016/j.cpet.2021.09.009] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Almost 1 in 10 individuals suffers from one of many rare diseases (RDs). The average time to diagnosis for an RD patient is as high as 7 years. Artificial intelligence (AI)-based positron emission tomography (PET), if implemented appropriately, has tremendous potential to advance the diagnosis of RDs. Patient advocacy groups must be active stakeholders in the AI ecosystem if we are to avoid potential issues related to the implementation of AI into health care. AI medical devices must not only be RD-aware at each stage of their conceptualization and life cycle but should also be trained on diverse and augmented datasets representative of the end-user population, including RDs. Failure to do so leads to potential harm and unsustainable deployment of AI-based medical devices (AIMDs) into clinical practice.
Collapse
Affiliation(s)
- Navid Hasani
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; University of Queensland Faculty of Medicine, Ochsner Clinical School, New Orleans, LA 70121, USA
| | - Faraz Farhadi
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
| | - Michael A Morris
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland-Baltimore County, Baltimore, MD, USA
| | - Moozhan Nikpanah
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
| | - Arman Rhamim
- Department of Radiology, BC Cancer Research Institute, University of British Columbia, 675 West 10th Avenue, Vancouver, British Columbia, V5Z 1L3, Canada; Department of Physics, BC Cancer Research Institute, University of British Columbia, Vancouver, British Columbia, Canada
| | - Yanji Xu
- Office of Rare Diseases Research, National Center for Advancing Translational Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, USA
| | - Anne Pariser
- Office of Rare Diseases Research, National Center for Advancing Translational Sciences, National Institutes of Health (NIH), Bethesda, MD 20892, USA
| | - Michael T Collins
- Skeletal Disorders and Mineral Homeostasis Section, National Institute of Dental and Craniofacial Research, National Institutes of Health (NIH), Bethesda, MD, USA
| | - Ronald M Summers
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
| | - Elizabeth Jones
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
| | - Eliot Siegel
- Department of Radiology and Nuclear Medicine, University of Maryland Medical Center, 655 W. Baltimore Street, Baltimore, MD 21201, USA
| | - Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland-Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA.
| |
Collapse
|
45
|
Song Q, Li S, Bai Q, Yang J, Zhang X, Li Z, Duan Z. Object Detection Method for Grasping Robot Based on Improved YOLOv5. MICROMACHINES 2021; 12:mi12111273. [PMID: 34832685 PMCID: PMC8625549 DOI: 10.3390/mi12111273] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 10/10/2021] [Accepted: 10/18/2021] [Indexed: 01/24/2023]
Abstract
In the industrial field, the anthropomorphism of grasping robots is the trend of future development; however, the basic vision technology adopted by grasping robots at this stage suffers from problems such as inaccurate positioning and low recognition efficiency. To address this practical problem and achieve more accurate positioning and recognition of objects, an object detection method for grasping robots based on an improved YOLOv5 was proposed in this paper. Firstly, the robot object detection platform was designed and a wooden-block image dataset was constructed. Secondly, the Eye-In-Hand calibration method was used to obtain the relative three-dimensional pose of the object. Then the network pruning method was used to optimize the YOLOv5 model along the two dimensions of network depth and network width. Finally, hyperparameter optimization was carried out. The simulation results show that the improved YOLOv5 network proposed in this paper has better object detection performance. Specifically, the recognition precision, recall, mAP value, and F1 score are 99.35%, 99.38%, 99.43%, and 99.41%, respectively. Compared with the original YOLOv5s, YOLOv5m, and YOLOv5l models, the mAP of the YOLOv5_ours model increased by 1.12%, 1.2%, and 1.27%, respectively, while the scale of the model was reduced by 10.71%, 70.93%, and 86.84%, respectively. The object detection experiment verified the feasibility of the proposed method.
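The precision, recall, and F1 figures reported above follow the standard definitions from true-positive, false-positive, and false-negative counts. An illustrative sketch (assumed standard formulas, not the authors' evaluation code):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from detection counts."""
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of ground-truth objects found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# 90 correct detections, 10 spurious, 10 missed -> all three metrics = 0.9
p, r, f1 = detection_metrics(tp=90, fp=10, fn=10)
```

mAP additionally sweeps the detector's confidence threshold, averaging precision over recall levels per class, which is why it is reported separately from single-threshold precision and recall.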
Collapse
Affiliation(s)
- Qisong Song
- College of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (Q.S.); (Q.B.); (J.Y.); (Z.L.)
| | - Shaobo Li
- College of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (Q.S.); (Q.B.); (J.Y.); (Z.L.)
- State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China;
- Correspondence:
| | - Qiang Bai
- College of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (Q.S.); (Q.B.); (J.Y.); (Z.L.)
| | - Jing Yang
- College of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (Q.S.); (Q.B.); (J.Y.); (Z.L.)
- State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China;
| | - Xingxing Zhang
- State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China;
| | - Zhiang Li
- College of Mechanical Engineering, Guizhou University, Guiyang 550025, China; (Q.S.); (Q.B.); (J.Y.); (Z.L.)
| | - Zhongjing Duan
- Key Laboratory of Advanced Manufacturing Technology of Ministry of Education, Guizhou University, Guiyang 550025, China;
| |
Collapse
|
46
|
Yang R, Yan C, Lu S, Li J, Ji J, Yan R, Yuan F, Zhu Z, Yu Y. Tracking cancer lesions on surgical samples of gastric cancer by artificial intelligent algorithms. J Cancer 2021; 12:6473-6483. [PMID: 34659538 PMCID: PMC8489126 DOI: 10.7150/jca.63879] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 08/29/2021] [Indexed: 01/10/2023] Open
Abstract
To quickly locate cancer lesions, especially suspected metastatic lesions after gastrectomy, AI algorithms for object detection and semantic segmentation were established. A total of 509 macroscopic images from 381 patients were collected. The RFB-SSD object detection algorithm and the ResNet50-PSPNet semantic segmentation algorithm were used. Another 57 macroscopic images from 48 patients were collected for prospective verification. We used mAP as the metric for object detection. The best mAP was 95.90%, with an average of 89.89%, in the test set. The mAP reached 92.60% in the validation set. We used mIoU to evaluate semantic segmentation. The best mIoU was 80.97%, with an average of 79.26%, in the test set. In addition, 81 out of 92 (88.04%) gastric specimens with cancer lesions located at the serosa were accurately predicted by the ResNet50-PSPNet semantic segmentation model. The positive rate and accuracy of AI prediction differed according to cancer invasion depth. Metastatic lymph nodes were predicted in 24 cases by the semantic segmentation model; among them, 18 cases were confirmed by pathology, for a predictive accuracy of 75.00%. Our well-trained AI algorithms effectively identified subtle features of gastric cancer in resected specimens that may be missed by the naked eye. Taken together, AI algorithms could assist clinicians in quickly locating cancer lesions and improve their work efficiency.
Collapse
Affiliation(s)
- Ruixin Yang
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Chao Yan
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Sheng Lu
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Jun Li
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Jun Ji
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Ranlin Yan
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Fei Yuan
- Department of Pathology of Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Zhenggang Zhu
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| | - Yingyan Yu
- Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery, and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, 200025, Shanghai, China
| |
Collapse
|
47
|
Badawy SM, Mohamed AENA, Hefnawy AA, Zidan HE, GadAllah MT, El-Banby GM. Automatic semantic segmentation of breast tumors in ultrasound images based on combining fuzzy logic and deep learning-A feasibility study. PLoS One 2021; 16:e0251899. [PMID: 34014987 PMCID: PMC8136850 DOI: 10.1371/journal.pone.0251899] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Accepted: 05/05/2021] [Indexed: 11/29/2022] Open
Abstract
Computer-aided diagnosis (CAD) of biomedical images assists physicians with fast, facilitated tissue characterization. A scheme combining fuzzy logic (FL) and deep learning (DL) for automatic semantic segmentation (SS) of tumors in breast ultrasound (BUS) images is proposed. The proposed scheme consists of two steps: the first is FL-based preprocessing, and the second is convolutional neural network (CNN)-based SS. Eight well-known CNN-based SS models were utilized in the study. The scheme was evaluated on a dataset of 400 cancerous BUS images and their corresponding 400 ground truth images. The SS process was applied in two modes: batch and one-by-one image processing. Three quantitative performance evaluation metrics were utilized: global accuracy (GA), mean Jaccard index (mean intersection over union (IoU)), and mean BF (Boundary F1) score. In the batch processing mode, the metrics' average results over the eight CNN-based SS models on the 400 cancerous BUS images were: 95.45% GA instead of 86.08% without the fuzzy preprocessing step, 78.70% mean IoU instead of 49.61%, and 68.08% mean BF score instead of 42.63%. Moreover, the resulting segmented images showed tumor regions more accurately than CNN-based SS alone. In the one-by-one image processing mode, there was no enhancement, either qualitatively or quantitatively. Therefore, the proposed scheme may be helpful in enhancing automatic SS of tumors in BUS images only when batch processing is needed; applying it in one-by-one image mode will reduce segmentation efficiency. The proposed batch processing scheme may be generalized for enhanced CNN-based SS of a targeted region of interest (ROI) in any batch of digital images. A modified small dataset is available: https://www.kaggle.com/mohammedtgadallah/mt-small-dataset (S1 Data).
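The mean Jaccard index (mean IoU) used above averages per-image mask overlap across a set of images. A minimal sketch, illustrative only (standard Jaccard definition, not the study's code):

```python
import numpy as np

def mask_iou(pred, truth):
    """Jaccard index |A intersect B| / |A union B| for two binary masks."""
    pred, truth = np.asarray(pred, dtype=bool), np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return np.logical_and(pred, truth).sum() / union

def mean_iou(preds, truths):
    """Average mask IoU over a batch of (prediction, ground-truth) pairs."""
    return float(np.mean([mask_iou(p, t) for p, t in zip(preds, truths)]))

# Pair 1 overlaps in 1 of 2 pixels (IoU 0.5), pair 2 matches exactly (IoU 1.0)
batch_score = mean_iou([[1, 1, 0], [0, 1, 0]], [[1, 0, 0], [0, 1, 0]])
```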
Collapse
Affiliation(s)
- Samir M. Badawy
- Industrial Electronics and Control Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menoufia, Egypt
| | - Abd El-Naser A. Mohamed
- Electronics and Electrical Communications Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menoufia, Egypt
| | - Alaa A. Hefnawy
- Computers and Systems Department, Electronics Research Institute (ERI), Cairo, Egypt
| | - Hassan E. Zidan
- Computers and Systems Department, Electronics Research Institute (ERI), Cairo, Egypt
| | - Mohammed T. GadAllah
- Computers and Systems Department, Electronics Research Institute (ERI), Cairo, Egypt
- * E-mail: , ,
| | - Ghada M. El-Banby
- Industrial Electronics and Control Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menoufia, Egypt
| |
Collapse
|