1
Li P, Hu Y. Deep magnetic resonance fingerprinting based on Local and Global Vision Transformer. Med Image Anal 2024; 95:103198. [PMID: 38759259] [DOI: 10.1016/j.media.2024.103198]
Abstract
To mitigate systematic errors in magnetic resonance fingerprinting (MRF), the precomputed dictionary is usually computed with minimal granularity across the entire range of tissue parameters. However, the dictionary grows exponentially as the number of parameters increases, posing significant challenges to the computational efficiency and matching accuracy of pattern-matching algorithms. Existing works, primarily based on convolutional neural networks (CNN), focus solely on local information to reconstruct multiple parameter maps, lacking in-depth investigation of the MRF mechanism. These methods may not exploit long-distance redundancies and the contextual information within voxel fingerprints introduced by the Bloch equation dynamics, leading to limited reconstruction speed and accuracy. To overcome these limitations, we propose a novel end-to-end neural network called the Local and Global Vision Transformer (LG-ViT) for MRF parameter reconstruction. Our proposed LG-ViT employs a multi-stage architecture that effectively reduces the computational overhead associated with the high-dimensional MRF data and the transformer model. Specifically, a local Transformer encoder is proposed to capture contextual information embedded within voxel fingerprints and local correlations introduced by the interconnected human tissues. Additionally, a global Transformer encoder is proposed to leverage long-distance dependencies arising from shared characteristics among different tissues across various spatial regions. By incorporating MRF physics-based data priors and effectively capturing local and global correlations, our proposed LG-ViT can achieve fast and accurate MRF parameter reconstruction. Experiments on both simulated and in vivo data demonstrate that the proposed method enables faster and more accurate MRF parameter reconstruction compared to state-of-the-art deep learning-based methods.
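The local/global distinction above can be illustrated with plain scaled dot-product attention: a local encoder restricts attention to windows of the sequence, while a global encoder lets every token attend to all others. This is a bare numpy sketch under simplifying assumptions, not the LG-ViT architecture: learned query/key/value projections, multi-head structure, and the multi-stage design are all omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Plain scaled dot-product self-attention; projection weights omitted
    # for brevity, so queries = keys = values = x.
    d = x.shape[-1]
    return softmax(x @ x.swapaxes(-1, -2) / np.sqrt(d)) @ x

def local_attention(x, window):
    # Attention restricted to non-overlapping windows along the sequence:
    # tokens in one window never attend to tokens in another.
    n, _ = x.shape
    out = np.empty_like(x)
    for s in range(0, n, window):
        out[s:s + window] = self_attention(x[s:s + window])
    return out

x = np.random.default_rng(0).normal(size=(8, 4))  # 8 tokens, dim 4
local_out = local_attention(x, window=4)          # two independent windows
global_out = self_attention(x)                    # every token attends to all
```

By construction, the first window of `local_out` depends only on the first four tokens, which is the sense in which local attention is cheaper but blind to long-distance dependencies.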
Affiliation(s)
- Peng Li
- The School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, China
- Yue Hu
- The School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, China.
2
Wang D, He X, Huang C, Li W, Li H, Huang C, Hu C. Magnetic resonance imaging-based radiomics and deep learning models for predicting lymph node metastasis of squamous cell carcinoma of the tongue. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:214-224. [PMID: 38378316] [DOI: 10.1016/j.oooo.2024.01.016]
Abstract
OBJECTIVE This study aimed to establish a combined method of radiomics and deep learning (DL) in magnetic resonance imaging (MRI) to predict lymph node metastasis (LNM) preoperatively in patients with squamous cell carcinoma of the tongue. STUDY DESIGN In total, MR images of 196 patients with lingual squamous cell carcinoma were divided into training (n = 156) and test (n = 40) cohorts. Radiomics and DL features were extracted from MR images and selected to construct machine learning models. A DL radiomics nomogram was established via multivariate logistic regression by incorporating the radiomics signature, the DL signature, and MRI-reported LN status. RESULTS Nine radiomics and 3 DL features were selected. In the radiomics test cohort, the multilayer perceptron model performed best with an area under the receiver operating characteristic curve (AUC) of 0.747, but in the DL cohort, the best model (logistic regression) performed less well (AUC = 0.655). The DL radiomics nomogram showed good calibration and performance with an AUC of 0.934 (outstanding discrimination ability) in the training cohort and 0.757 (acceptable discrimination ability) in the test cohort. The decision curve analysis demonstrated that the nomogram could offer more net benefit than a single radiomics or DL signature. CONCLUSION The DL radiomics nomogram exhibited promising performance in predicting LNM, which facilitates personalized treatment of tongue cancer.
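A DL radiomics nomogram of this kind is, at its core, a multivariate logistic regression over the three predictors. The sketch below is a hedged illustration only: the toy cohort, scores, and learning-rate settings are invented for demonstration and are not the study's fitted model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, iters=2000):
    # Plain gradient-descent logistic regression combining the predictors
    # (radiomics signature, DL signature, MRI-reported LN status) into one score.
    Xb = np.hstack([np.ones((len(X), 1)), X])  # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
    return w

# toy cohort: columns = radiomics score, DL score, MRI-reported LN status (0/1)
X = np.array([[0.8, 0.7, 1], [0.6, 0.9, 1], [0.2, 0.1, 0], [0.3, 0.2, 0]])
y = np.array([1, 1, 0, 0])
w = fit_logistic(X, y)
risk = sigmoid(np.hstack([1.0, [0.7, 0.8, 1]]) @ w)  # predicted LNM probability
```

A printed nomogram simply visualizes these fitted coefficients as point scales, so each predictor's contribution can be read off without computing the logit by hand.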
Affiliation(s)
- Dawei Wang
- Department of Plastic Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiao He
- Department of Plastic Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Chunming Huang
- Department of Stomatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wenqiang Li
- Department of Stomatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Haosen Li
- Department of Stomatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cicheng Huang
- Department of Stomatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Chuanyu Hu
- Department of Stomatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
3
Wang Y, Guo Y, Wang Z, Yu L, Yan Y, Gu Z. Enhancing semantic segmentation in chest X-ray images through image preprocessing: ps-KDE for pixel-wise substitution by kernel density estimation. PLoS One 2024; 19:e0299623. [PMID: 38913621] [PMCID: PMC11195943] [DOI: 10.1371/journal.pone.0299623]
Abstract
BACKGROUND In medical imaging, the integration of deep-learning-based semantic segmentation algorithms with preprocessing techniques can reduce the need for human annotation and advance disease classification. Among established preprocessing techniques, Contrast Limited Adaptive Histogram Equalization (CLAHE) has demonstrated efficacy in improving segmentation algorithms across various modalities, such as X-rays and CT. However, there remains a demand for improved contrast enhancement methods considering the heterogeneity of datasets and the various contrasts across different anatomic structures. METHOD This study proposes a novel preprocessing technique, ps-KDE, to investigate its impact on deep learning algorithms to segment major organs in posterior-anterior chest X-rays. ps-KDE augments image contrast by substituting pixel values based on their normalized frequency across all images. We evaluate our approach on a U-Net architecture with a ResNet34 backbone pre-trained on ImageNet. Five separate models are trained to segment the heart, left lung, right lung, left clavicle, and right clavicle. RESULTS The model trained to segment the left lung using ps-KDE achieved a Dice score of 0.780 (SD = 0.13), while the model trained on CLAHE achieved a Dice score of 0.717 (SD = 0.19), p<0.01. ps-KDE also appears to be more robust, as CLAHE-based models misclassified right lungs in select test images for the left lung model. The algorithm for performing ps-KDE is available at https://github.com/wyc79/ps-KDE. DISCUSSION Our results suggest that ps-KDE offers advantages over current preprocessing techniques when segmenting certain lung regions. This could be beneficial in subsequent analyses such as disease classification and risk stratification.
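As the abstract describes it, ps-KDE substitutes each pixel value with its normalized frequency estimated across all images. The sketch below is an illustrative approximation only: a histogram stands in for the kernel density estimate, and the function name `ps_kde_like`, the bin count, and the toy images are assumptions rather than the authors' implementation (their code is at the linked repository).

```python
import numpy as np

def ps_kde_like(images, bins=256):
    # Estimate the intensity density over the whole image set (a histogram
    # here as a stand-in for a kernel density estimate), then substitute
    # each pixel value with its normalized density.
    stack = np.stack(images).astype(np.float64)
    density, edges = np.histogram(stack.ravel(), bins=bins,
                                  range=(0.0, 1.0), density=True)
    density = density / density.max()  # rescale densities to [0, 1]
    idx = np.clip(np.digitize(stack, edges[1:-1]), 0, bins - 1)
    return density[idx]

# toy demo on two small "images" with intensities in [0, 1]
rng = np.random.default_rng(0)
imgs = [rng.random((4, 4)) for _ in range(2)]
out = ps_kde_like(imgs)
```

Frequent intensities are mapped toward 1 and rare intensities toward 0, which is one plausible reading of how such a substitution re-weights contrast dataset-wide rather than per image (as CLAHE does).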
Affiliation(s)
- Yuanchen Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Yujie Guo
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Ziqi Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Linzi Yu
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Yujie Yan
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
- Zifan Gu
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
4
Yang X, Zhang B, Liu Y, Lv Q, Guo J. Automatic Quantitative Assessment of Muscle Strength Based on Deep Learning and Ultrasound. Ultrasonic Imaging 2024:1617346241255590. [PMID: 38881032] [DOI: 10.1177/01617346241255590]
Abstract
Skeletal muscle is a vital organ that promotes human movement and maintains posture. Accurate assessment of muscle strength provides valuable insights for athletes' rehabilitation and strength training. However, traditional techniques rely heavily on the operator's expertise, which may affect the accuracy of the results. In this study, we propose an automated method to evaluate muscle strength using ultrasound and deep learning techniques. B-mode ultrasound data of the biceps brachii of multiple athletes at different strength levels were collected and then used to train our deep learning model. To evaluate the effectiveness of this method, this study tested the contraction of the biceps brachii under different force levels. The classification accuracy of this method for grade 4 and grade 6 muscle strength reached 98% and 96%, respectively, with overall average accuracies of 93% and 87%. The experimental results confirm that the proposed method can accurately and effectively evaluate and classify muscle strength.
Affiliation(s)
- Xiao Yang
- Key Laboratory of Ultrasound of Shaanxi Province, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Beilei Zhang
- Key Laboratory of Ultrasound of Shaanxi Province, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Ying Liu
- Key Laboratory of Ultrasound of Shaanxi Province, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Qian Lv
- Key Laboratory of Ultrasound of Shaanxi Province, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Jianzhong Guo
- Key Laboratory of Ultrasound of Shaanxi Province, School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
5
Yin R, Chen H, Wang C, Qin C, Tao T, Hao Y, Wu R, Jiang Y, Gui J. Transformer-based Multi-label Deep Learning Model is Efficient for Detecting Ankle Lateral and Medial Ligament Injuries on MRI and Improving Clinicians' Diagnostic Accuracy for Rotational Chronic Ankle Instability. Arthroscopy 2024:S0749-8063(24)00409-2. [PMID: 38876447] [DOI: 10.1016/j.arthro.2024.05.027]
Abstract
PURPOSE To develop a deep learning (DL) model that can simultaneously detect lateral and medial collateral ligament injuries of the ankle, aiding in the diagnosis of chronic ankle instability (CAI), and to assess its impact on clinicians' diagnostic performance. METHODS DL models were developed and externally validated on ankle MRIs retrospectively collected between April 2016 and March 2022 at three centers. Included patients had diagnoses of CAI confirmed through arthroscopy, along with individuals whose MRI and physical examinations ruled out ligament injuries. DL models were constructed based on a multi-label paradigm. A transformer-based multi-label DL model (AnkleNet) was developed and compared with four convolutional neural network (CNN) models. Subsequently, a reader study was conducted to evaluate the impact of model assistance on clinicians when diagnosing challenging cases: identifying rotational CAI (RCAI). Diagnostic performance was assessed using the area under the receiver operating characteristic curve (AUC). RESULTS Our transformer-based model achieved AUCs of 0.910 and 0.892 for detecting lateral and medial collateral ligament injury, respectively, both significantly higher than those of the CNN-based models (all P < 0.001). For further CAI diagnosis, it exhibited a macro-average AUC of 0.870 and a balanced accuracy of 0.805. The reader study indicated that assistance from our model significantly enhanced the diagnostic accuracy of clinicians (P = 0.042), particularly junior clinicians, and led to a reduction in diagnostic variability. The code of the model can be accessed at https://github.com/ChiariRay/AnkleNet. CONCLUSION Our transformer-based model was able to detect lateral and medial collateral ligament injuries based on MRI and outperformed CNN-based models, demonstrating promising performance in diagnosing CAI, especially in RCAI patients.
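The macro-average AUC reported for the multi-label task is the unweighted mean of per-label AUCs. A minimal sketch of that computation, using the rank (Mann-Whitney) formulation of AUC; the function names and the toy labels/scores are illustrative, not from the paper:

```python
import numpy as np

def auc_binary(y_true, scores):
    # AUC via the Mann-Whitney U statistic: the probability that a random
    # positive example scores higher than a random negative (ties count half).
    y_true = np.asarray(y_true, dtype=bool)
    pos, neg = scores[y_true], scores[~y_true]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def macro_auc(y_true, scores):
    # Macro average: unweighted mean of per-label (per-column) AUCs.
    return float(np.mean([auc_binary(y_true[:, k], scores[:, k])
                          for k in range(y_true.shape[1])]))

# toy multi-label example: 2 labels (e.g. lateral / medial ligament injury)
y = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])
s = np.array([[0.9, 0.2], [0.3, 0.8], [0.8, 0.7], [0.1, 0.4]])
```

Macro averaging weights each ligament equally regardless of prevalence, which is why it is often paired with balanced accuracy when classes are imbalanced.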
Affiliation(s)
- Rui Yin
- Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Hao Chen
- Department of Clinical Neuroscience, Cambridge University, Cambridge, UK; School of Computer Science, University of Birmingham, Birmingham, UK
- Changjiang Wang
- Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Chaoren Qin
- Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Tianqi Tao
- Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Yunjia Hao
- Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China; Department of Hand and Foot Microsurgery, Xuzhou Central Hospital
- Rui Wu
- Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China; Department of Orthopedics, The Second People's Hospital of Lianyungang
- Yiqiu Jiang
- Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Jianchao Gui
- Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China.
6
Gan K, Liu Y, Zhang T, Xu D, Lian L, Luo Z, Li J, Lu L. Deep Learning Model for Automatic Identification and Classification of Distal Radius Fracture. Journal of Imaging Informatics in Medicine 2024. [PMID: 38862852] [DOI: 10.1007/s10278-024-01144-4]
Abstract
Distal radius fracture (DRF) is one of the most common types of wrist fractures. We aimed to construct a model for the automatic segmentation of wrist radiographs using a deep learning approach and further perform automatic identification and classification of DRF. A total of 2240 participants with anteroposterior wrist radiographs from one hospital between January 2015 and October 2021 were included. The outcomes were automatic segmentation of wrist radiographs, identification of DRF, and classification of DRF (type A, type B, type C). The Unet model and Fast-RCNN model were used for automatic segmentation. The DenseNet121 and ResNet50 models were applied to DRF identification. The DenseNet121, ResNet50, VGG-19, and InceptionV3 models were used for DRF classification. The area under the curve (AUC) with 95% confidence interval (CI), accuracy, precision, and F1-score were utilized to assess the effectiveness of the identification and classification models. Of these 2240 participants, 1440 (64.3%) had DRF, of which 701 (48.7%) were type A, 278 (19.3%) were type B, and 461 (32.0%) were type C. Both the Unet model and the Fast-RCNN model showed good segmentation of wrist radiographs. For DRF identification, the AUCs of the DenseNet121 model and the ResNet50 model in the testing set were 0.941 (95%CI: 0.926-0.965) and 0.936 (95%CI: 0.913-0.955), respectively. The AUCs of the DenseNet121 model (testing set) for classifying type A, type B, and type C were 0.96, 0.96, and 0.96, respectively. The DenseNet121 model may provide clinicians with a tool for interpreting wrist radiographs.
Affiliation(s)
- Kaifeng Gan
- Department of Orthopaedics, the Affiliated LiHuiLi Hospital of Ningbo University, No. 57 Xingning Road, Yinzhou District, Ningbo, 315211, Zhejiang, China
- Yunpeng Liu
- Ningbo University of Technology, Ningbo, 315100, Zhejiang, China
- Ting Zhang
- Department of Orthopaedics, the Affiliated LiHuiLi Hospital of Ningbo University, No. 57 Xingning Road, Yinzhou District, Ningbo, 315211, Zhejiang, China
- Dingli Xu
- Health Science Center, Ningbo University, Ningbo, 315000, Zhejiang, China
- Leidong Lian
- Health Science Center, Ningbo University, Ningbo, 315000, Zhejiang, China
- Zhe Luo
- Health Science Center, Ningbo University, Ningbo, 315000, Zhejiang, China
- Jin Li
- Department of Orthopaedics, the Affiliated LiHuiLi Hospital of Ningbo University, No. 57 Xingning Road, Yinzhou District, Ningbo, 315211, Zhejiang, China
- Liangjie Lu
- Department of Orthopaedics, the Affiliated LiHuiLi Hospital of Ningbo University, No. 57 Xingning Road, Yinzhou District, Ningbo, 315211, Zhejiang, China.
7
Fang X, Zhang S, Wei Z, Wang K, Yang G, Li C, Han M, Du M. Automatic detection of the third molar and mandibular canal on panoramic radiographs based on deep learning. Journal of Stomatology, Oral and Maxillofacial Surgery 2024:101946. [PMID: 38857691] [DOI: 10.1016/j.jormas.2024.101946]
Abstract
PURPOSE This study aims to develop a deep learning framework for the automatic detection of the positional relationship between the mandibular third molar (M3) and the mandibular canal (MC) on panoramic radiographs (PRs), to assist doctors in assessing and planning appropriate surgical interventions. METHODS Datasets D1 and D2 were obtained by collecting 253 PRs from a hospital and 197 PRs from online platforms. The RPIFormer model proposed in this study was trained and validated on D1 to create a segmentation model. The CycleGAN model was trained and validated on both D1 and D2 to develop an image enhancement model. Ultimately, the segmentation and enhancement models were integrated with an object detection model to create a fully automated framework for M3 and MC detection in PRs. Experimental evaluation included calculating the Dice coefficient, IoU, Recall, and Precision. RESULTS The RPIFormer model proposed in this study achieved an average Dice coefficient of 92.56% for segmenting M3 and MC, representing a 3.06% improvement over the previous best study. The deep learning framework developed in this research enables automatic detection of M3 and MC in PRs without manual cropping, demonstrating superior detection accuracy and generalization capability. CONCLUSION The framework developed in this study can be applied to PRs captured in different hospitals without the need for model fine-tuning. This feature is significant for aiding doctors in accurately assessing the spatial relationship between M3 and MC, thereby determining the optimal treatment plan to ensure patients' oral health and surgical safety.
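The Dice coefficient reported here (and in the chest X-ray study above) measures overlap between a predicted and a reference binary mask: 2|A∩B| / (|A| + |B|). A minimal sketch with toy masks; the epsilon term is a common convention for empty masks, not something the paper specifies:

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-9):
    # Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    # eps guards against division by zero when both masks are empty.
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

# toy prediction vs ground truth: 2 overlapping pixels out of 3 each
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
```

Here `dice(a, b)` is 2·2/(3+3) = 2/3, while identical masks score 1, which is why Dice is the standard headline metric for segmentation quality.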
Affiliation(s)
- Xinle Fang
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Shengben Zhang
- Department of Implantology, School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University, Jinan, China
- Zhiyuan Wei
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Kaixin Wang
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Guanghui Yang
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Chengliang Li
- School of Information Science and Engineering, Shandong University, Qingdao, China
- Min Han
- School of Information Science and Engineering, Shandong University, Qingdao, China.
- Mi Du
- Department of Implantology, School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University, Jinan, China; Shandong Key Laboratory of Oral Tissue Regeneration, Jinan, China; Shandong Engineering Laboratory for Dental Materials and Oral Tissue Regeneration, Jinan, China; Shandong Provincial Clinical Research Center for Oral Diseases, Jinan, China.
8
Zhang AD, Shi QL, Zhang HT, Duan WH, Li Y, Ruan L, Han YF, Liu ZK, Li HF, Xiao JS, Shi GF, Wan X, Wang RZ. Pairwise machine learning-based automatic diagnostic platform utilizing CT images and clinical information for predicting radiotherapy locoregional recurrence in elderly esophageal cancer patients. Abdom Radiol (NY) 2024. [PMID: 38831075] [DOI: 10.1007/s00261-024-04377-7]
Abstract
OBJECTIVE To investigate the feasibility and accuracy of predicting locoregional recurrence (LR) in elderly patients with esophageal squamous cell cancer (ESCC) who underwent radical radiotherapy using a pairwise machine learning algorithm. METHODS The 130 enrolled datasets were randomly divided into a training set and a testing set in a 7:3 ratio. Clinical factors were included, and radiomics features were extracted from pretreatment CT scans using pyradiomics-based software; a pairwise naive Bayes (NB) model was then developed. The performance of the model was evaluated using receiver operating characteristic (ROC) curves and decision curve analysis (DCA). To facilitate practical application, we attempted to construct an automated esophageal cancer diagnosis system based on the trained models. RESULTS By the follow-up date, 64 patients (49.23%) had experienced LR. Ten radiomics features and two clinical factors were selected for modeling. The model demonstrated good prediction performance, with areas under the ROC curve of 0.903 (0.829-0.958) for the training cohort and 0.944 (0.849-1.000) for the testing cohort. The corresponding accuracies were 0.852 and 0.914, respectively. Calibration curves showed good agreement, and the DCA curve confirmed the clinical validity of the model. The model accurately predicted LR in elderly patients, with a positive predictive value of 85.71% for the testing cohort. CONCLUSIONS The pairwise NB model, based on pre-treatment enhanced chest CT-based radiomics and clinical factors, can accurately predict LR in elderly patients with ESCC. The esophageal cancer automated diagnostic system embedded with the pairwise NB model holds significant potential for application in clinical practice.
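The abstract does not detail the pairwise construction, but the underlying naive Bayes idea, class-conditional Gaussians per feature combined with class priors via Bayes' rule, can be sketched as follows. The toy "radiomics features" and labels below are invented for illustration, and this plain Gaussian NB is a stand-in, not the authors' pairwise variant.

```python
import numpy as np

class GaussianNB:
    # Minimal Gaussian naive Bayes: per-class feature means/variances,
    # prediction by maximum log-posterior. Illustrates the NB principle only.
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_, self.var_, self.prior_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.theta_.append(Xc.mean(axis=0))
            self.var_.append(Xc.var(axis=0) + 1e-9)  # variance smoothing
            self.prior_.append(len(Xc) / len(X))
        return self

    def predict(self, X):
        log_post = []
        for m, v, p in zip(self.theta_, self.var_, self.prior_):
            # log-likelihood under independent per-feature Gaussians + log prior
            ll = -0.5 * np.sum(np.log(2 * np.pi * v) + (X - m) ** 2 / v, axis=1)
            log_post.append(ll + np.log(p))
        return self.classes_[np.argmax(np.array(log_post), axis=0)]

# toy "radiomics features": two well-separated clusters (recurrence vs none)
X = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
pred = GaussianNB().fit(X, y).predict(np.array([[0.05, 0.05], [1.05, 1.0]]))
```

The "naive" independence assumption is what keeps the model tractable on a small cohort like the 130 datasets here, at the cost of ignoring feature correlations.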
Affiliation(s)
- An-du Zhang
- Department of Radiotherapy, Hebei Medical University Fourth Affiliated Hospital and Hebei Provincial Tumor Hospital, 12 Jiankang Road, Shijiazhuang, Hebei, 050011, People's Republic of China
- Qing-Lei Shi
- School of Medicine, Chinese University of Hong Kong (Shenzhen), No. 2001, Longxiang Avenue, Longgang District, Shenzhen, 518172, People's Republic of China
- Medical Big Data Laboratory, Shenzhen Research Institute of Big Data, Daoyuan Building, No. 2001, Longxiang Avenue, Longgang District, Shenzhen, 518172, People's Republic of China
- Hong-Tao Zhang
- Department of Oncology, Hebei General Hospital, No. 348 Heping West Road, Xinhua District, Shijiazhuang, Hebei, 050051, People's Republic of China
- Wen-Han Duan
- School of Computer Science and Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100191, People's Republic of China
- Yang Li
- Department of Radiotherapy, Hebei Medical University Fourth Affiliated Hospital and Hebei Provincial Tumor Hospital, 12 Jiankang Road, Shijiazhuang, Hebei, 050011, People's Republic of China
- Li Ruan
- School of Computer Science and Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100191, People's Republic of China
- Yi-Fan Han
- School of Computer Science and Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100191, People's Republic of China
- Zhi-Kun Liu
- Department of Radiotherapy, Hebei Medical University Fourth Affiliated Hospital and Hebei Provincial Tumor Hospital, 12 Jiankang Road, Shijiazhuang, Hebei, 050011, People's Republic of China
- Hao-Feng Li
- Medical Big Data Laboratory, Shenzhen Research Institute of Big Data, Daoyuan Building, No. 2001, Longxiang Avenue, Longgang District, Shenzhen, 518172, People's Republic of China
- Jia-Shun Xiao
- Medical Big Data Laboratory, Shenzhen Research Institute of Big Data, Daoyuan Building, No. 2001, Longxiang Avenue, Longgang District, Shenzhen, 518172, People's Republic of China
- Gao-Feng Shi
- Department of Radiotherapy, Hebei Medical University Fourth Affiliated Hospital and Hebei Provincial Tumor Hospital, 12 Jiankang Road, Shijiazhuang, Hebei, 050011, People's Republic of China.
- Xiang Wan
- Medical Big Data Laboratory, Shenzhen Research Institute of Big Data, Daoyuan Building, No. 2001, Longxiang Avenue, Longgang District, Shenzhen, 518172, People's Republic of China.
- Ren-Zhi Wang
- School of Medicine, Chinese University of Hong Kong (Shenzhen), No. 2001, Longxiang Avenue, Longgang District, Shenzhen, 518172, People's Republic of China.
9
Cao P, Derhaag J, Coonen E, Brunner H, Acharya G, Salumets A, Zamani Esteki M. Generative artificial intelligence to produce high-fidelity blastocyst-stage embryo images. Hum Reprod 2024; 39:1197-1207. [PMID: 38600621] [PMCID: PMC11145014] [DOI: 10.1093/humrep/deae064]
Abstract
STUDY QUESTION Can generative artificial intelligence (AI) models produce high-fidelity images of human blastocysts? SUMMARY ANSWER Generative AI models exhibit the capability to generate high-fidelity human blastocyst images, thereby providing substantial training datasets crucial for the development of robust AI models. WHAT IS KNOWN ALREADY The integration of AI into IVF procedures holds the potential to enhance objectivity and automate embryo selection for transfer. However, the effectiveness of AI is limited by data scarcity and ethical concerns related to patient data privacy. Generative adversarial networks (GAN) have emerged as a promising approach to alleviate data limitations by generating synthetic data that closely approximate real images. STUDY DESIGN, SIZE, DURATION Blastocyst images were included as training data from a public dataset of time-lapse microscopy (TLM) videos (n = 136). A style-based GAN was fine-tuned as the generative model. PARTICIPANTS/MATERIALS, SETTING, METHODS We curated a total of 972 blastocyst images as training data, where frames were captured within the time window of 110-120 h post-insemination at 1-h intervals from TLM videos. We configured the style-based GAN model with data augmentation (AUG) and pretrained weights (Pretrained-T: with translation equivariance; Pretrained-R: with translation and rotation equivariance) to compare their optimization on image synthesis. We then applied quantitative metrics including Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) to assess the quality and fidelity of the generated images. Subsequently, we evaluated qualitative performance by measuring the intelligence behavior of the model through the visual Turing test. To this end, 60 individuals with diverse backgrounds and expertise in clinical embryology and IVF evaluated the quality of synthetic embryo images. 
MAIN RESULTS AND THE ROLE OF CHANCE During the training process, we observed consistent improvement in image quality as measured by FID and KID scores. The Pretrained and AUG + Pretrained models started with remarkably lower FID and KID values compared to both the Baseline and AUG + Baseline models. Following 5000 training iterations, the AUG + Pretrained-R model showed the highest performance of the five evaluated configurations, with FID and KID scores of 15.2 and 0.004, respectively. Subsequently, we carried out the visual Turing test, in which IVF embryologists, IVF laboratory technicians, and non-experts evaluated the synthetic blastocyst-stage embryo images and obtained similar performance in specificity with marginal differences in accuracy and sensitivity. LIMITATIONS, REASONS FOR CAUTION In this study, we primarily focused the training data on blastocyst images, as IVF embryos are primarily assessed at the blastocyst stage. However, generation of an array of images at different preimplantation stages would offer further insights into the development of preimplantation embryos and IVF success. In addition, we resized training images to a resolution of 256 × 256 pixels to moderate the computational costs of training the style-based GAN models. Further research is needed to involve a more extensive and diverse dataset from the formation of the zygote to the blastocyst stage, e.g. video generation, and the use of improved image resolution to facilitate the development of comprehensive AI algorithms and to produce higher-quality images. WIDER IMPLICATIONS OF THE FINDINGS Generative AI models hold promising potential in generating high-fidelity human blastocyst images, which allows the development of robust AI models, as they can provide sufficient training datasets while safeguarding patient data privacy.
Additionally, this may help to produce sufficient embryo imaging training data with different (rare) abnormal features, such as embryonic arrest and tripolar cell division, to avoid class imbalance and achieve balanced datasets. Thus, generative models may offer a compelling opportunity to transform embryo selection procedures and substantially enhance IVF outcomes. STUDY FUNDING/COMPETING INTEREST(S) This study was supported by a Horizon 2020 innovation grant (ERIN, grant no. EU952516) and a Horizon Europe grant (NESTOR, grant no. 101120075) of the European Commission to A.S. and M.Z.E., the Estonian Research Council (grant no. PRG1076) to A.S., and the EVA (Erfelijkheid Voortplanting & Aanleg) specialty program (grant no. KP111513) of Maastricht University Medical Centre (MUMC+) to M.Z.E. TRIAL REGISTRATION NUMBER Not applicable.
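The FID metric used to score the generated images is the Fréchet distance between Gaussians fitted to real and generated feature sets: FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_rΣ_g)^{1/2}). A minimal sketch on synthetic feature vectors; in practice the features come from an Inception network, which is omitted here.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_gen):
    # Frechet distance between Gaussian fits to the two feature sets:
    # ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2}).
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    s_r = np.cov(feats_real, rowvar=False)
    s_g = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(s_r @ s_g)
    if np.iscomplexobj(covmean):  # discard tiny imaginary residue from sqrtm
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(s_r + s_g - 2 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(500, 4))
b = rng.normal(0.0, 1.0, size=(500, 4))  # same distribution: FID near 0
c = a + 3.0                              # mean shift of 3 per dim: FID = 4 * 9
```

Lower is better; identical feature distributions give a score near zero, which is why the drop to 15.2 for the AUG + Pretrained-R configuration indicates closer agreement with real blastocyst images.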
Affiliation(s)
- Ping Cao
- Department of Clinical Genetics, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Department of Genetics and Cell Biology, GROW Research Institute for Oncology and Reproduction, Faculty of Health, Medicine and Life Sciences (FHML), Maastricht University, Maastricht, The Netherlands
- Josien Derhaag
- Department of Reproductive Medicine, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Edith Coonen
- Department of Clinical Genetics, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Department of Reproductive Medicine, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Han Brunner
- Department of Clinical Genetics, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Department of Genetics and Cell Biology, GROW Research Institute for Oncology and Reproduction, Faculty of Health, Medicine and Life Sciences (FHML), Maastricht University, Maastricht, The Netherlands
- Department of Human Genetics, Radboud University Medical Center, Nijmegen, The Netherlands
- Ganesh Acharya
- Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, and Karolinska University Hospital, Stockholm, Sweden
- Women’s Health and Perinatology Research Group, Department of Clinical Medicine, UiT—The Arctic University of Norway, Tromsø, Norway
- Andres Salumets
- Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, and Karolinska University Hospital, Stockholm, Sweden
- Competence Centre on Health Technologies, Tartu, Estonia
- Department of Obstetrics and Gynecology, Institute of Clinical Medicine, University of Tartu, Tartu, Estonia
- Masoud Zamani Esteki
- Department of Clinical Genetics, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Department of Genetics and Cell Biology, GROW Research Institute for Oncology and Reproduction, Faculty of Health, Medicine and Life Sciences (FHML), Maastricht University, Maastricht, The Netherlands
- Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, and Karolinska University Hospital, Stockholm, Sweden
10
Özbay E, Özbay FA, Gharehchopogh FS. Kidney Tumor Classification on CT images using Self-supervised Learning. Comput Biol Med 2024; 176:108554. [PMID: 38744013 DOI: 10.1016/j.compbiomed.2024.108554] [Received: 01/05/2024] [Revised: 04/06/2024] [Accepted: 04/30/2024] [Indexed: 05/16/2024]
Abstract
Kidney tumors are among the most common diseases worldwide. The risk of kidney disease is increased by factors such as processed-food consumption and unhealthy habits. Early diagnosis of kidney tumors is essential for effective treatment, reducing side effects, and reducing mortality. With the development of computer-aided diagnostic methods, the need for accurate renal tumor classification is also increasing. Because traditional methods based on manual detection are time-consuming, tedious, and costly, deep learning (DL) methods allow high-accuracy kidney tumor detection (KTD) to be performed faster and at lower cost. Among the current challenges in artificial intelligence-assisted KTD, obtaining more precise imaging information and the capacity to classify with high accuracy are vital for clinical decision-making and current treatment in KTD prediction. This motivates us to propose a more effective DL model that can assist specialist physicians in the diagnosis of kidney tumors. In this way, the workload of radiologists can be alleviated and errors in clinical diagnoses that may arise from the complex structure of the kidney can be prevented. A large amount of data is needed to train the developed methods. Although various studies have attempted to reduce the amount of data with feature selection techniques, these techniques provide little improvement in classification accuracy. In this paper, a masked autoencoder (MAE) is proposed for KTD that can produce effective results on datasets containing few samples and can be directly pre-trained and fine-tuned. Self-supervised learning (SSL) is achieved through self-distillation (SD), which can be reintroduced into the reconstruction loss calculation using masked patches. In SSLSD-KTD, the SD loss is calculated on the latent representations of the encoder and decoder outputs.
The encoder applies local attention, while the decoder contributes global attention to the loss calculation. The SSLSD-KTD method reached 98.04% classification accuracy on the KAUH-kidney dataset, comprising 8400 samples, and 82.14% on the CT-kidney dataset, containing 840 samples. By adding more external information to the SSLSD-KTD method through transfer learning, accuracies of 99.82% and 95.24% were obtained on the same datasets. Experimental results show that the SSLSD-KTD method can effectively extract kidney tumor features from limited data and can aid, or even serve as an alternative to, radiologists in diagnostic decision-making.
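MAE-style pre-training as described above hinges on hiding most image patches from the encoder and computing losses only through the masked ones. A minimal sketch of the random patch-masking step (the 75% ratio and the 14 × 14 patch grid below are illustrative defaults, not values from the paper):

```python
import numpy as np

def random_patch_mask(n_patches, mask_ratio, rng):
    """Return a boolean mask: True = patch hidden from the encoder (MAE-style)."""
    n_mask = int(n_patches * mask_ratio)
    idx = rng.permutation(n_patches)[:n_mask]
    mask = np.zeros(n_patches, dtype=bool)
    mask[idx] = True
    return mask

rng = np.random.default_rng(42)
patches = rng.normal(size=(196, 64))   # e.g. a 14x14 grid of 64-dim patch embeddings
mask = random_patch_mask(len(patches), 0.75, rng)
visible = patches[~mask]               # only the visible 25% reaches the encoder
print(mask.sum(), visible.shape)       # 147 patches masked, (49, 64) visible
```

The reconstruction (or, here, self-distillation) loss is then evaluated on the masked positions, which is what lets the model learn useful features from few labeled samples.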
Affiliation(s)
- Erdal Özbay
- Department of Computer Engineering, Firat University, 23119, Elazig, Turkey.
11
Rousta F, Esteki A, Shalbaf A, Sadeghi A, Moghadam PK, Voshagh A. Application of artificial intelligence in pancreas endoscopic ultrasound imaging- A systematic review. Comput Methods Programs Biomed 2024; 250:108205. [PMID: 38703435 DOI: 10.1016/j.cmpb.2024.108205] [Received: 02/04/2024] [Revised: 04/13/2024] [Accepted: 04/24/2024] [Indexed: 05/06/2024]
Abstract
The pancreas is a vital organ in the digestive system with significant health implications. Given the high mortality rate linked to malignant pancreatic lesions, it is imperative to evaluate and identify them promptly. Endoscopic ultrasound (EUS) is a non-invasive, precise technique for detecting pancreatic disorders, but it is highly operator dependent. Artificial intelligence (AI), including traditional machine learning (ML) and deep learning (DL) techniques, can play a pivotal role in enhancing the performance of EUS regardless of operator. AI performs a critical function in the detection, classification, and segmentation of medical images. The utilization of AI-assisted systems has improved the accuracy and productivity of pancreatic analysis, including the detection of diverse pancreatic disorders (e.g., pancreatitis, masses, and cysts) as well as landmarks and parenchyma. This systematic review examines the rapidly developing domain of AI-assisted systems in EUS of the pancreas. Its objective is to present a thorough study of the current research status and developments in this area. The paper explores the significant challenges of AI-assisted systems in pancreatic EUS imaging, highlights the potential of AI techniques in addressing them, and suggests directions for future research on AI-assisted EUS systems.
Affiliation(s)
- Fatemeh Rousta
- Department of Biomedical Engineering and Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ali Esteki
- Department of Biomedical Engineering and Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ahmad Shalbaf
- Department of Biomedical Engineering and Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
- Amir Sadeghi
- Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Pardis Ketabi Moghadam
- Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ardalan Voshagh
- Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, Iran
12
Fang T, Jiang Z, Zhou Y, Jia S, Zhao J, Nie S. Automatic assessment of DWI-ASPECTS for acute ischemic stroke based on deep learning. Med Phys 2024; 51:4351-4364. [PMID: 38687043 DOI: 10.1002/mp.17101] [Received: 09/15/2023] [Revised: 04/04/2024] [Accepted: 04/12/2024] [Indexed: 05/02/2024] Open
Abstract
BACKGROUND The Alberta Stroke Program Early Computed Tomography Score (ASPECTS) is a standardized semi-quantitative method for assessing early ischemic changes in acute ischemic stroke. PURPOSE ASPECTS remains affected by expert experience, with inconsistent results between readers in clinical practice. This study aims to propose an automatic ASPECTS scoring model based on diffusion-weighted imaging (DWI) to help clinicians make accurate treatment plans. METHODS Eighty-two patients with stroke were included in the study. First, we designed a new deep learning network for segmenting the ASPECTS scoring brain regions; the network improves on U-Net by integrating multiple modules. Second, we proposed hybrid classifiers for classifying the brain regions: for regions with larger areas, we trained machine learning classifiers using a brain grayscale comparison algorithm, while for regions with smaller areas we used hybrid feature training. RESULTS The average Dice coefficient of the segmented brain regions reached 0.864. With the proposed hybrid classifier, our method performs strongly on both region-level and dichotomous ASPECTS. The sensitivity and accuracy on the test set were 95.51% and 93.43%, respectively. For dichotomous ASPECTS, the intraclass correlation coefficient (ICC) between our automated ASPECTS score and the expert reading was 0.87. CONCLUSIONS This study proposed an automated model for ASPECTS scoring of patients with acute ischemic stroke based on DWI images. Experimental results show that the segmentation-then-classification approach is feasible. Our method has the potential to assist physicians with early ASPECTS scoring and clinical stroke diagnosis.
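The Dice coefficient reported for the segmentation stage measures the overlap between a predicted mask and the reference mask. A minimal sketch of the computation on binary masks (toy data, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = np.asarray(pred, dtype=bool), np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Two 2x2 masks, each with 2 positive pixels, overlapping in 1 -> Dice = 0.5
print(dice_coefficient([[1, 1], [0, 0]], [[1, 0], [1, 0]]))  # 0.5
```

A value of 0.864, as reported, means the predicted and expert-drawn regions share roughly 86% of their combined area.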
Affiliation(s)
- Ting Fang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Zhuoyun Jiang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Yuxi Zhou
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Shouqiang Jia
- Department of Imaging, Jinan People's Hospital affiliated to Shandong First Medical University, Shandong, China
- Jiaqi Zhao
- Department of Ultrasound, Shanghai Fourth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
13
Zeng M, Wang X, Chen W. Worldwide research landscape of artificial intelligence in lung disease: A scientometric study. Heliyon 2024; 10:e31129. [PMID: 38826704 PMCID: PMC11141367 DOI: 10.1016/j.heliyon.2024.e31129] [Received: 08/02/2023] [Revised: 05/09/2024] [Accepted: 05/10/2024] [Indexed: 06/04/2024] Open
Abstract
Purpose To perform a comprehensive bibliometric analysis of the application of artificial intelligence (AI) in lung disease to understand the current status and emerging trends of this field. Materials and methods AI-based lung disease research publications were selected from the Web of Science Core Collection. CiteSpace, VOSviewer and Excel were used to analyze and visualize co-authorship, co-citation, and co-occurrence of authors, keywords, countries/regions, references and institutions in this field. Results Our study included a total of 5210 papers. The number of publications on AI in lung disease has shown explosive growth since 2017. China and the United States lead in publication numbers. The most productive authors were Li Weimin and Qian Wei, with Shanghai Jiaotong University as the most productive institution. Radiology was the most co-cited journal. Lung cancer and COVID-19 emerged as the most studied diseases. Deep learning, convolutional neural networks, lung cancer, and radiomics will be the focus of future research. Conclusions AI-based diagnosis and treatment of lung disease has become a research hotspot in recent years, yielding significant results. Future work should focus on establishing multimodal AI models that incorporate clinical, imaging and laboratory information. Enhanced visualization of deep learning, AI-driven differential diagnosis models for lung disease, and the creation of international large-scale lung disease databases should also be considered.
Affiliation(s)
- Wei Chen
- Department of Radiology, Southwest Hospital, Third Military Medical University, Chongqing, China
14
Wang Z, Cao N, Sun J, Zhang H, Zhang S, Ding J, Xie K, Gao L, Ni X. Uncertainty estimation- and attention-based semi-supervised models for automatically delineate clinical target volume in CBCT images of breast cancer. Radiat Oncol 2024; 19:66. [PMID: 38811994 DOI: 10.1186/s13014-024-02455-0] [Received: 10/31/2023] [Accepted: 05/14/2024] [Indexed: 05/31/2024] Open
Abstract
OBJECTIVES Accurate segmentation of the clinical target volume (CTV) in CBCT images makes it possible to observe changes in the CTV during radiotherapy and lays a foundation for the subsequent implementation of adaptive radiotherapy (ART). However, segmentation is challenging due to the poor quality of CBCT images and the difficulty of obtaining target volumes. An uncertainty estimation- and attention-based semi-supervised model, residual convolutional block attention-uncertainty aware mean teacher (RCBA-UAMT), was proposed to automatically delineate the CTV in cone-beam computed tomography (CBCT) images of breast cancer. METHODS A total of 60 patients who underwent radiotherapy after breast-conserving surgery were enrolled in this study, which involved 60 planning CTs and 380 CBCTs. RCBA-UAMT was built by integrating residual and attention modules into the backbone network 3D UNet. The attention module adjusts the channel and spatial weights of the extracted image features. The proposed design can train the model and segment CBCT images with a small amount of labeled data (5%, 10%, or 20%) and a large amount of unlabeled data. Four evaluation metrics, namely the Dice similarity coefficient (DSC), Jaccard index, average surface distance (ASD), and 95% Hausdorff distance (95HD), were used to quantitatively assess segmentation performance. RESULTS The proposed method achieved average DSC, Jaccard, 95HD, and ASD of 82%, 70%, 8.93 mm, and 1.49 mm, respectively, for CTV delineation on CBCT images of breast cancer. Compared with three classical methods (mean teacher, uncertainty-aware mean teacher, and uncertainty rectified pyramid consistency), DSC and Jaccard increased by 7.89-9.33% and 14.75-16.67%, respectively, while 95HD and ASD decreased by 33.16-67.81% and 36.05-75.57%, respectively.
Comparative experiments with different proportions of labeled data (5%, 10%, and 20%) showed significant differences in the DSC, Jaccard, and 95HD metrics between 5% and 10% and between 5% and 20% labeled data, whereas no significant differences were observed between 10% and 20% for any metric. Therefore, only 10% labeled data are needed to achieve the experimental objective. CONCLUSIONS Using the proposed RCBA-UAMT, the CTV in breast cancer CBCT images can be delineated reliably with a small amount of labeled data. These delineations can be used to observe changes in the CTV and lay the foundation for the follow-up implementation of ART.
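The DSC and Jaccard values reported above are tied together by the identity J = D / (2 − D) for the same pair of masks; the paper's 82% DSC and 70% Jaccard are roughly consistent with it (0.82 / 1.18 ≈ 0.69). A short sketch verifying the identity on toy masks (not the study data):

```python
import numpy as np

def overlap_metrics(pred, target):
    """Return (Dice, Jaccard) for two binary masks."""
    pred, target = np.asarray(pred, dtype=bool), np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dsc = 2.0 * inter / (pred.sum() + target.sum())
    jac = inter / union
    return dsc, jac

pred = np.array([1, 1, 1, 0, 0], dtype=bool)
target = np.array([1, 1, 0, 1, 0], dtype=bool)
dsc, jac = overlap_metrics(pred, target)
# Jaccard is always recoverable from Dice: J = D / (2 - D)
assert abs(jac - dsc / (2.0 - dsc)) < 1e-12
print(dsc, jac)  # 0.666..., 0.5
```

Because Jaccard penalizes disagreement more heavily, the paper's larger relative gains in Jaccard (14.75-16.67%) than in DSC (7.89-9.33%) are the expected pattern for the same underlying overlap improvement.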
Affiliation(s)
- Ziyi Wang
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Nannan Cao
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Jiawei Sun
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Heng Zhang
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Sai Zhang
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Jiangyi Ding
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Kai Xie
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Liugang Gao
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Xinye Ni
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China.
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China.
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China.
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China.
15
Yıldız Potter İ, Rodriguez EK, Wu J, Nazarian A, Vaziri A. An Automated Vertebrae Localization, Segmentation, and Osteoporotic Compression Fracture Detection Pipeline for Computed Tomographic Imaging. J Imaging Inform Med 2024:10.1007/s10278-024-01135-5. [PMID: 38717516 DOI: 10.1007/s10278-024-01135-5] [Received: 01/23/2024] [Revised: 04/30/2024] [Accepted: 05/01/2024] [Indexed: 06/29/2024]
Abstract
Osteoporosis is the most common chronic metabolic bone disease worldwide. Vertebral compression fracture (VCF) is the most common type of osteoporotic fracture. Approximately 700,000 osteoporotic VCFs are diagnosed annually in the USA alone, resulting in an annual economic burden of ~$13.8B. With an aging population, the rate of osteoporotic VCFs and their associated burdens, including pain, functional impairment, and increased medical expenditure, are expected to rise. It is therefore of utmost importance to develop an analytical tool to aid in the identification of VCFs. Computed tomography (CT) imaging is commonly used to detect occult injuries. Unlike existing CT-based VCF detection approaches, the standard clinical criteria for determining VCF rely on the shape of the vertebrae, such as loss of vertebral body height. To bridge this gap, we developed a novel automated vertebrae localization, segmentation, and osteoporotic VCF detection pipeline for CT scans using state-of-the-art deep learning models. We employed a publicly available dataset of spine CT scans with 325 scans annotated for segmentation, 126 of which were also graded for VCF (81 with VCFs and 45 without). Our approach attained 96% sensitivity and 81% specificity in detecting VCF at the vertebral level, and 100% accuracy at the subject level, outperforming deep learning counterparts tested for VCF detection without segmentation. Crucially, we showed that adding predicted vertebrae segments as inputs significantly improved VCF detection at both the vertebral and subject levels, by up to 14% in sensitivity and 20% in specificity (p = 0.028).
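The vertebral-level sensitivity and specificity quoted above follow from the standard confusion-matrix definitions. A minimal sketch (the counts below are illustrative, chosen only to reproduce 96%/81% rates, and are not the study's actual confusion matrix):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 96 of 100 fractured vertebrae flagged,
# 81 of 100 intact vertebrae correctly cleared.
sens, spec = sensitivity_specificity(tp=96, fn=4, tn=81, fp=19)
print(sens, spec)  # 0.96 0.81
```

High sensitivity is the priority in a screening pipeline like this one, since a missed fracture (false negative) is costlier than a flagged intact vertebra that a radiologist then reviews.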
Affiliation(s)
- Edward K Rodriguez
- Carl J. Shapiro Department of Orthopedic Surgery, Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, RN123, Boston, MA, 02215, USA
- Jim Wu
- Department of Radiology, Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Shapiro 4, Boston, MA, 02215, USA
- Ara Nazarian
- Carl J. Shapiro Department of Orthopedic Surgery, Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, RN123, Boston, MA, 02215, USA
- Department of Orthopaedics Surgery, Yerevan State University, Yerevan, Armenia
- Ashkan Vaziri
- BioSensics, LLC, 57 Chapel Street, Newton, MA, 02458, USA
16
Thinggaard BS, Frederiksen K, Subhi Y, Möller S, Sørensen TL, Kawasaki R, Grauslund J, Stokholm L. VEGF Inhibition Associates With Decreased Risk of Mortality in Patients With Neovascular Age-related Macular Degeneration. Ophthalmol Sci 2024; 4:100446. [PMID: 38313400 PMCID: PMC10837639 DOI: 10.1016/j.xops.2023.100446] [Received: 08/03/2023] [Revised: 11/22/2023] [Accepted: 12/04/2023] [Indexed: 02/06/2024]
Abstract
Purpose Controversy exists regarding the systemic safety of intravitreal VEGF inhibitors in the treatment of neovascular age-related macular degeneration (nAMD). We aimed to investigate the potential impact of VEGF inhibitor treatment on the risk of all-cause mortality and cardiovascular disease (CVD) among patients with nAMD. Design A nationwide register-based cohort study with 16 years of follow-up. Participants Patients with nAMD exposed to VEGF inhibitors (n = 37 733) and unexposed individuals without nAMD (n = 1 897 073) aged ≥ 65 years residing in Denmark between January 1, 2007, and December 31, 2022. Methods Cox proportional hazards analysis was conducted to assess the effect of intravitreal VEGF inhibitor treatment on all-cause mortality and incident CVD. Main Outcome Measures In a predefined analysis plan, we defined the primary outcomes as hazard ratios (HRs) of all-cause mortality and a composite CVD endpoint in patients with nAMD treated with VEGF inhibitors compared with individuals without nAMD. The secondary outcomes encompassed analyses that explored the impact of the number of doses and the association between exposure and outcome over a specific time period. Results Overall, 63.7% of patients with nAMD were women, with an average age of 69.9 years (interquartile range 65.0-76.0 years). Patients exposed to VEGF inhibitors demonstrated a reduced risk of all-cause mortality compared with individuals without nAMD (HR, 0.79; 95% confidence interval [CI], 0.78-0.81) and an increased risk of composite CVD (HR, 1.04; 95% CI, 1.01-1.07). The decreased risk of all-cause mortality persisted, but there was no significant association between VEGF inhibitor treatment and CVD when patients with nAMD were grouped by the number of doses or considered exposed within 60 days postinjection. Conclusions Our study revealed a decreased risk of all-cause mortality and a 4% increased risk of CVD among patients with nAMD exposed to VEGF inhibitors.
The decreased risk of mortality is unlikely to be directly pathophysiologically related to VEGF inhibitor treatment. Instead, we speculate that patients undergoing VEGF inhibitor treatment are, on average, individuals in good health with adequate personal resources, and therefore have a higher likelihood of overall survival. These findings strongly support the safety of VEGF inhibitor treatment with respect to all-cause mortality and CVD among patients with nAMD. Financial Disclosures The author(s) have no proprietary or commercial interest in any materials discussed in this article.
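The hazard ratios and confidence intervals above come from a Cox proportional hazards fit; the model estimates a coefficient β (log hazard ratio) with a standard error, and the reported HR and 95% CI are obtained by exponentiation. A minimal sketch of that back-transformation (the β and SE values below are illustrative, not the study's fitted values):

```python
import math

def hazard_ratio_ci(beta, se, z=1.959964):
    """Hazard ratio with 95% CI from a Cox log-hazard coefficient and its SE."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative: a coefficient corresponding to HR ~0.79 with a small SE,
# as would arise from a very large cohort.
hr, lo, hi = hazard_ratio_ci(beta=math.log(0.79), se=0.01)
print(round(hr, 2), round(lo, 2), round(hi, 2))
```

The narrowness of the study's intervals (e.g. 0.78-0.81) reflects the very large denominators (~1.9 million unexposed individuals), which is why even the modest 4% CVD excess reaches significance.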
Affiliation(s)
- Benjamin Sommer Thinggaard
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- OPEN, Open Patient Data Explorative Network, Odense University Hospital, Odense, Denmark
- Katrine Frederiksen
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Yousif Subhi
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Department of Ophthalmology, Rigshospitalet, Glostrup, Denmark
- Department of Ophthalmology, Zealand University Hospital, Roskilde, Denmark
- Sören Möller
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- OPEN, Open Patient Data Explorative Network, Odense University Hospital, Odense, Denmark
- Torben Lykke Sørensen
- Department of Ophthalmology, Zealand University Hospital, Roskilde, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Ryo Kawasaki
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Division of Public Health, Department of Social Medicine, Graduate School of Medicine, Osaka University, Osaka, Japan
- Jakob Grauslund
- Department of Ophthalmology, Odense University Hospital, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Lonny Stokholm
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- OPEN, Open Patient Data Explorative Network, Odense University Hospital, Odense, Denmark
17
Ni M, He M, Yang Y, Wen X, Zhao Y, Gao L, Yan R, Xu J, Zhang Y, Chen W, Jiang C, Li Y, Zhao Q, Wu P, Li C, Qu J, Yuan H. Application research of AI-assisted compressed sensing technology in MRI scanning of the knee joint: 3D-MRI perspective. Eur Radiol 2024; 34:3046-3058. [PMID: 37932390 DOI: 10.1007/s00330-023-10368-x] [Received: 07/13/2023] [Revised: 08/29/2023] [Accepted: 09/04/2023] [Indexed: 11/08/2023]
Abstract
OBJECTIVE To investigate the potential applicability of AI-assisted compressed sensing (ACS) in knee MRI to enhance and optimize the scanning process. METHODS Volunteers and patients with sports-related injuries underwent prospective MRI scans with a range of acceleration techniques. The volunteers were scanned at varied ACS acceleration levels to ascertain the most effective level. Patients were then scanned at the determined optimal 3D-ACS acceleration level, and 3D compressed sensing (CS) and 2D parallel acquisition technology (PAT) scans were also performed. The resulting 3D-ACS images underwent 3.5 mm/2.0 mm multiplanar reconstruction (MPR). Experienced radiologists evaluated and compared the quality of the 3D-ACS-MRI and 3D-CS-MRI images and of the 3.5 mm/2.0 mm MPR and 2D-PAT-MRI images, diagnosed diseases, and compared the results with the arthroscopic findings. Diagnostic agreement was evaluated using Cohen's kappa coefficient, and both absolute and relative evaluation methods were utilized for objective assessment. RESULTS The study involved 15 volunteers and 53 patients. An acceleration factor of 10.69× was identified as optimal. In the quality evaluation, compared with 2D-PAT, 3D-ACS with 3.5 mm/2.0 mm MPR provided poorer visualization of bone structure, improved visualization of cartilage, and less satisfactory axial images. In the objective evaluation, the relative method yielded satisfactory results across the different groups, while the absolute method revealed significant variance for most features. Nevertheless, high levels of diagnostic agreement (κ: 0.81-0.94) and accuracy (0.83-0.98) were observed across all diagnoses. CONCLUSION ACS technology presents significant potential as a replacement for traditional CS in 3D knee MRI, allowing thinner MPRs and markedly faster scans without sacrificing diagnostic accuracy.
CLINICAL RELEVANCE STATEMENT 3D-ACS-MRI of the knee can be completed in 160 s with good diagnostic consistency and image quality. 3D-MRI MPR can replace 2D-MRI and reconstruct images with thinner slices, which helps optimize the current MRI examination process and shorten scanning time. KEY POINTS • AI-assisted compressed sensing technology can reduce knee MRI scan time by over 50%. • 3D AI-assisted compressed sensing MRI and the associated multiplanar reconstruction can replace traditional accelerated MRI and yield thinner 2D multiplanar reconstructions. • Successful application of 3D AI-assisted compressed sensing MRI can help optimize the current knee MRI process.
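Diagnostic agreement in this study is quantified with Cohen's kappa, which corrects raw agreement between two readers for the agreement expected by chance. A minimal sketch for two raters' categorical labels (the labels below are toy data, not the study's readings):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    p_chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in labels)
    return (p_observed - p_chance) / (1.0 - p_chance)

print(cohens_kappa(["tear", "tear", "normal", "normal"],
                   ["tear", "tear", "normal", "normal"]))  # perfect agreement -> 1.0
```

By the usual interpretation scale, the reported κ of 0.81-0.94 falls in the "almost perfect agreement" band, supporting the claim that the accelerated scans do not sacrifice diagnostic consistency.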
Affiliation(s)
- Ming Ni: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Miao He: School of Biomedical Engineering, Capital Medical University, Beijing, 100069, People's Republic of China; Beijing Key Laboratory of Fundamental Research On Biomechanics in Clinical Application, Capital Medical University, Beijing, People's Republic of China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing, People's Republic of China
- Yuxin Yang: United Imaging Research Institute of Intelligent Imaging, Beijing, People's Republic of China
- Xiaoyi Wen: Institute of Statistics and Big Data, Renmin University of China, Beijing, People's Republic of China
- Yuqing Zhao: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Lixiang Gao: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Ruixin Yan: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Jiajia Xu: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Yarui Zhang: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Wen Chen: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Chenyu Jiang: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Yali Li: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Qiang Zhao: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
- Peng Wu: United Imaging Healthcare Co, Shanghai, People's Republic of China
- Chunlin Li: School of Biomedical Engineering, Capital Medical University, Beijing, 100069, People's Republic of China; Beijing Key Laboratory of Fundamental Research On Biomechanics in Clinical Application, Capital Medical University, Beijing, People's Republic of China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing, People's Republic of China
- Junda Qu: School of Biomedical Engineering, Capital Medical University, Beijing, 100069, People's Republic of China; Beijing Key Laboratory of Fundamental Research On Biomechanics in Clinical Application, Capital Medical University, Beijing, People's Republic of China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing, People's Republic of China
- Huishu Yuan: Department of Radiology, Peking University Third Hospital, Beijing, People's Republic of China
18
Jiang Q, Ye H, Yang B, Cao F. Label-Decoupled Medical Image Segmentation With Spatial-Channel Graph Convolution and Dual Attention Enhancement. IEEE J Biomed Health Inform 2024; 28:2830-2841. [PMID: 38376972 DOI: 10.1109/jbhi.2024.3367756] [Indexed: 02/22/2024]
Abstract
Deep learning-based methods have recently been widely used in medical image segmentation. However, existing works usually struggle to simultaneously capture global long-range information from images and topological correlations among feature maps, and medical images often suffer from blurred target edges. Accordingly, this paper proposes a novel medical image segmentation framework, a label-decoupled network with spatial-channel graph convolution and a dual attention enhancement mechanism (LADENet for short). It constructs learnable adjacency matrices and applies graph convolutions to capture global long-range information across spatial locations and topological dependencies between different channels in an image. A label-decoupled strategy based on distance transformation is then introduced to decouple an original segmentation label into a body label and an edge label for supervising the body branch and edge branch. In addition, a dual attention enhancement mechanism, with a body attention block in the body branch and an edge attention block in the edge branch, is built to strengthen the learning of spatial region and boundary features. Furthermore, a feature interactor is devised to fully exploit the information interaction between the body and edge branches to improve segmentation performance. Experiments on benchmark datasets demonstrate the superiority of LADENet over state-of-the-art approaches.
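The label-decoupling idea, splitting a segmentation mask into a body label (deep interior) and an edge label (boundary band) via a distance transform, can be sketched in pure Python. The BFS distance transform and the `margin` threshold below are illustrative assumptions, not LADENet's exact formulation:

```python
from collections import deque

def distance_to_background(mask):
    """4-connected BFS distance from each pixel to the nearest background pixel."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for i in range(h):
        for j in range(w):
            if mask[i][j] == 0:          # background pixels seed the BFS at distance 0
                dist[i][j] = 0
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] is None:
                dist[ni][nj] = dist[i][j] + 1
                q.append((ni, nj))
    return dist

def decouple_label(mask, margin=1):
    """Split a binary mask into a body label (interior) and an edge label (boundary)."""
    h, w = len(mask), len(mask[0])
    dist = distance_to_background(mask)
    body = [[int(mask[i][j] == 1 and dist[i][j] > margin) for j in range(w)]
            for i in range(h)]
    edge = [[mask[i][j] - body[i][j] for j in range(w)] for i in range(h)]
    return body, edge
```

By construction the two decoupled labels sum back to the original mask, so the body and edge branches are supervised on complementary regions.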
19
Xu Z, Dai Y, Liu F, Li S, Liu S, Shi L, Fu J. Parotid Gland Segmentation Using Purely Transformer-Based U-Shaped Network and Multimodal MRI. Ann Biomed Eng 2024:10.1007/s10439-024-03510-3. [PMID: 38691234 DOI: 10.1007/s10439-024-03510-3] [Received: 09/29/2023] [Accepted: 04/03/2024] [Indexed: 05/03/2024]
Abstract
Parotid gland tumors account for approximately 2% to 10% of head and neck tumors. Segmentation of parotid glands and tumors on magnetic resonance images is essential for accurate diagnosis and selection of appropriate surgical plans. However, segmentation of parotid glands is particularly challenging due to their variable shape and low contrast with surrounding structures. Deep learning has developed rapidly in recent years, and Transformer-based networks have performed well on many computer vision tasks; however, they have not yet been well explored for parotid gland segmentation. We collected a multi-center multimodal parotid gland MRI dataset and implemented parotid gland segmentation using a purely Transformer-based U-shaped segmentation network. We used both absolute and relative positional encoding to improve parotid gland segmentation and achieved multimodal information fusion without increasing the network computation. In addition, our novel training approach reduces the clinician's labeling workload by nearly half. Our method achieved good segmentation of both parotid glands and tumors. On the test set, our model achieved a Dice-Similarity Coefficient of 86.99%, a Pixel Accuracy of 99.19%, a Mean Intersection over Union of 81.79%, and a Hausdorff Distance of 3.87. The purely Transformer-based U-shaped segmentation network outperforms other convolutional neural networks, and our method effectively fuses the information from the multi-center multimodal MRI dataset, thus improving parotid gland segmentation.
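The overlap metrics reported above (Dice-Similarity Coefficient, Mean IoU, Pixel Accuracy) all derive from the same confusion counts between prediction and ground truth. A small sketch for binary masks flattened to 0/1 lists, not the authors' evaluation code:

```python
def overlap_metrics(pred, target):
    """Dice, IoU, and pixel accuracy for two flat binary (0/1) masks."""
    assert len(pred) == len(target)
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    tn = len(pred) - tp - fp - fn
    dice = 2 * tp / (2 * tp + fp + fn)   # overlap, weighted toward agreement
    iou = tp / (tp + fp + fn)            # intersection over union
    acc = (tp + tn) / len(pred)          # fraction of correctly labeled pixels
    return dice, iou, acc
```

For a single mask pair, Dice = 2·IoU/(1+IoU), so the two always move together; a reported mean IoU averaged over classes need not satisfy the identity against the foreground Dice exactly.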
Affiliation(s)
- Zi'an Xu: Northeastern University, Shenyang, China
- Yin Dai: Northeastern University, Shenyang, China
- Fayu Liu: China Medical University, Shenyang, China
- Siqi Li: China Medical University, Shenyang, China
- Sheng Liu: China Medical University, Shenyang, China
- Lifu Shi: Liaoning Jiayin Medical Technology Co., Shenyang, China
- Jun Fu: Northeastern University, Shenyang, China
20
Wu Z, Zhang X, Li F, Wang S, Li J. A feature-enhanced network for stroke lesion segmentation from brain MRI images. Comput Biol Med 2024; 174:108326. [PMID: 38599066 DOI: 10.1016/j.compbiomed.2024.108326] [Received: 01/02/2024] [Revised: 03/14/2024] [Accepted: 03/15/2024] [Indexed: 04/12/2024]
Abstract
Accurate and expeditious segmentation of stroke lesions can greatly assist physicians in making accurate diagnoses and administering timely treatments. However, current deep learning methods have two limitations: the attention structure utilizes only local features, which misleads subsequent segmentation, and simple downsampling discards task-relevant detailed semantic information. To address these challenges, we propose a novel feature refinement and protection network (FRPNet) for stroke lesion segmentation. FRPNet employs a symmetric encoding-decoding structure and incorporates twin attention gate (TAG) and multi-dimension attention pooling (MAP) modules. The TAG module leverages the self-attention mechanism and bi-directional attention to extract both global and local features of the lesion, while the MAP module establishes multi-dimensional pooling attention to mitigate the loss of features during encoding. Extensive comparative experiments show that our method significantly outperforms state-of-the-art approaches, with 60.16% DSC and 36.20 px HD, and 85.72% DSC and 27.02 px HD, on two ischemic stroke datasets that contain all stroke stages and several sequences of stroke images. These results illustrate the efficacy and generalizability of the proposed method. The source code is released at https://github.com/wu2ze2lin2/FRPNet.
Affiliation(s)
- Zelin Wu: College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China
- Xueying Zhang: College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China
- Fenglian Li: College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China
- Suzhe Wang: College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China
- Jiaying Li: The First Clinical Medical College, Shanxi Medical University, Taiyuan, 030024, China
21
Cheng CT, Ooyang CH, Kang SC, Liao CH. Applications of Deep Learning in Trauma Radiology: A Narrative Review. Biomed J 2024:100743. [PMID: 38679199 DOI: 10.1016/j.bj.2024.100743] [Received: 11/13/2023] [Revised: 03/26/2024] [Accepted: 04/24/2024] [Indexed: 05/01/2024]
Abstract
Diagnostic imaging is essential in modern trauma care for initial evaluation and for identifying injuries requiring intervention. Deep learning (DL) has become mainstream in medical image analysis and has shown promising efficacy for classification, segmentation, and lesion detection. This narrative review provides the fundamental concepts for developing DL algorithms in trauma imaging and presents an overview of current progress in each modality. DL has been applied to detect free fluid on Focused Assessment with Sonography for Trauma (FAST), to detect traumatic findings on chest and pelvic X-rays and computed tomography (CT) scans, to identify intracranial hemorrhage on head CT, to detect vertebral fractures, and to identify injuries to organs such as the spleen, liver, and lungs on abdominal and chest CT. Future directions involve expanding dataset size and diversity through federated learning, enhancing model explainability and transparency to build clinician trust, and integrating multimodal data to provide more meaningful insights into traumatic injuries. Although some commercial artificial intelligence products are Food and Drug Administration-approved for clinical use in the trauma field, adoption remains limited, highlighting the need for multi-disciplinary teams to engineer practical, real-world solutions. Overall, DL shows immense potential to improve the efficiency and accuracy of trauma imaging, but thoughtful development and validation are critical to ensure these technologies positively impact patient care.
Affiliation(s)
- Chi-Tung Cheng: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chun-Hsiang Ooyang: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Shih-Ching Kang: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chien-Hung Liao: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
22
Yamada D, Kojima F, Otsuka Y, Kawakami K, Koishi N, Oba K, Bando T, Matsusako M, Kurihara Y. Multimodal modeling with low-dose CT and clinical information for diagnostic artificial intelligence on mediastinal tumors: a preliminary study. BMJ Open Respir Res 2024; 11:e002249. [PMID: 38589197 PMCID: PMC11015206 DOI: 10.1136/bmjresp-2023-002249] [Received: 12/12/2023] [Accepted: 03/22/2024] [Indexed: 04/10/2024]
Abstract
BACKGROUND Diagnosing mediastinal tumours, including incidental lesions, using low-dose CT (LDCT) performed for lung cancer screening, is challenging. It often requires additional invasive and costly tests for proper characterisation and surgical planning. This indicates the need for a more efficient and patient-centred approach, suggesting a gap in the existing diagnostic methods and the potential for artificial intelligence technologies to address this gap. This study aimed to create a multimodal hybrid transformer model using the Vision Transformer that leverages LDCT features and clinical data to improve surgical decision-making for patients with incidentally detected mediastinal tumours. METHODS This retrospective study analysed patients with mediastinal tumours between 2010 and 2021. Patients eligible for surgery (n=30) were considered 'positive,' whereas those without tumour enlargement (n=32) were considered 'negative.' We developed a hybrid model combining a convolutional neural network with a transformer to integrate imaging and clinical data. The dataset was split in a 5:3:2 ratio for training, validation and testing. The model's efficacy was evaluated using a receiver operating characteristic (ROC) analysis across 25 iterations of random assignments and compared against conventional radiomics models and models excluding clinical data. RESULTS The multimodal hybrid model demonstrated a mean area under the curve (AUC) of 0.90, significantly outperforming the non-clinical data model (AUC=0.86, p=0.04) and radiomics models (random forest AUC=0.81, p=0.008; logistic regression AUC=0.77, p=0.004). CONCLUSION Integrating clinical and LDCT data using a hybrid transformer model can improve surgical decision-making for mediastinal tumours, showing superiority over models lacking clinical data integration.
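The AUC figures compared above have a direct probabilistic reading: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one, with ties counted as half. A pure-Python sketch of this empirical (Mann-Whitney) estimator, with illustrative scores rather than the study's data:

```python
def empirical_auc(pos_scores, neg_scores):
    """Empirical AUC: P(score of a positive > score of a negative), ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5      # a tie is credited half a win
    return wins / (len(pos_scores) * len(neg_scores))
```

Under this reading, the hybrid model's AUC of 0.90 means a surgery-eligible case outscores a non-enlarging case about 90% of the time.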
Affiliation(s)
- Daisuke Yamada: Department of Radiology, Saint Luke's International Hospital, Chuo-ku, Japan
- Fumitsugu Kojima: Department of Thoracic Surgery, Saint Luke's International Hospital, Chuo-ku, Japan
- Yujiro Otsuka: Department of Radiology, Juntendo University, Bunkyo-ku, Japan; Plusman LLC, Tokyo, Japan
- Kouhei Kawakami: Department of Radiology, Saint Luke's International Hospital, Chuo-ku, Japan
- Naoki Koishi: Department of Radiology, Saint Luke's International Hospital, Chuo-ku, Japan
- Ken Oba: Department of Radiology, Saint Luke's International Hospital, Chuo-ku, Japan
- Toru Bando: Department of Thoracic Surgery, Saint Luke's International Hospital, Chuo-ku, Japan
- Masaki Matsusako: Department of Radiology, Saint Luke's International Hospital, Chuo-ku, Japan
- Yasuyuki Kurihara: Department of Radiology, Saint Luke's International Hospital, Chuo-ku, Japan
23
Zhu Q, Zhuang H, Zhao M, Xu S, Meng R. A study on expression recognition based on improved MobileNetV2 network. Sci Rep 2024; 14:8121. [PMID: 38582772 PMCID: PMC10998880 DOI: 10.1038/s41598-024-58736-x] [Received: 01/20/2024] [Accepted: 04/02/2024] [Indexed: 04/08/2024]
Abstract
This paper proposes an improved version of the MobileNetV2 neural network (I-MobileNetV2) to address the large parameter counts of existing deep convolutional neural networks and the shortcomings of the lightweight MobileNetV2 in facial emotion recognition tasks, such as easy loss of feature information, poor real-time performance, and low accuracy. The network inherits MobileNetV2's depthwise separable convolutions, reducing computational load while maintaining a lightweight profile. It utilizes a reverse fusion mechanism to retain negative features, making information less likely to be lost, and replaces the ReLU6 activation function with SELU to avoid vanishing gradients. Meanwhile, to improve feature recognition capability, the channel attention mechanism Squeeze-and-Excitation Networks (SE-Net) is integrated into the MobileNetV2 network. Experiments conducted on the facial expression datasets FER2013 and CK+ showed that the proposed network model achieved facial expression recognition accuracies of 68.62% and 95.96%, improving upon the MobileNetV2 model by 0.72% and 6.14% respectively, while the parameter count decreased by 83.8%. These results empirically verify the effectiveness of the improvements made to the network model.
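The activation swap described above is easy to state concretely: ReLU6 clips activations to [0, 6] and has zero output (and zero gradient) for negative inputs, whereas SELU keeps a nonzero response below zero, which is the vanishing-gradient argument. Reference definitions, with the fixed constants from Klambauer et al. (2017):

```python
import math

# Fixed SELU constants chosen so activations self-normalize (Klambauer et al., 2017)
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def relu6(x):
    """ReLU clipped at 6, as used in the original MobileNetV2."""
    return min(max(0.0, x), 6.0)

def selu(x):
    """Scaled exponential linear unit: smooth and nonzero for x < 0."""
    return SELU_SCALE * (x if x > 0 else SELU_ALPHA * (math.exp(x) - 1.0))
```

Note that `relu6(-1.0)` is exactly 0 while `selu(-1.0)` is negative, so gradient information from negative pre-activations survives the SELU path.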
Affiliation(s)
- Qiming Zhu: College of Equipment Support and Management, Engineering University of PAP, Xi'an, 710086, China
- Hongwei Zhuang: College of Equipment Support and Management, Engineering University of PAP, Xi'an, 710086, China
- Mi Zhao: Basic Education, Engineering University of PAP, Xi'an, 710086, China
- Shuangchao Xu: College of Equipment Support and Management, Engineering University of PAP, Xi'an, 710086, China
- Rui Meng: College of Military Basic Education, Engineering University of PAP, Xi'an, 710086, China
24
Yao Z, Wo J, Zheng E, Yang J, Li H, Li X, Li J, Luo Y, Wang T, Fan Z, Zhan Y, Yang Y, Wu Z, Yin L, Meng F. A deep learning-based approach for fully automated segmentation and quantitative analysis of muscle fibers in pig skeletal muscle. Meat Sci 2024; 213:109506. [PMID: 38603965 DOI: 10.1016/j.meatsci.2024.109506] [Received: 10/05/2023] [Revised: 02/06/2024] [Accepted: 04/01/2024] [Indexed: 04/13/2024]
Abstract
Muscle fiber properties exert a significant influence on pork quality, with cross-sectional area (CSA) being a crucial parameter closely associated with various meat quality indicators, such as shear force. Robustly identifying and segmenting muscle fibers is a vital first step in determining CSA, yet it is intricate and time-consuming, necessitating an accurate and automated analytical approach. One limitation of existing methods is that they perform well on high signal-to-noise ratio images of intact, healthy muscle fibers but lack validation on more complex image datasets featuring significant morphological changes, such as the presence of ice crystals. In this study, we undertake the fully automatic segmentation of muscle fiber microscopic images stained for myosin adenosine triphosphatase (mATPase) activity using a deep learning architecture known as SOLOv2. Our objective is to efficiently derive accurate measurements of muscle fiber size and distribution. Tests conducted on actual images demonstrate that our method adeptly handles the intricate task of muscle fiber segmentation, yielding quantitative results amenable to statistical analysis and displaying reliability comparable to manual analysis.
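Once fibers are segmented, CSA reduces to counting pixels per connected component (scaled by the physical pixel area). A pure-Python flood-fill sketch of that measurement step, independent of the SOLOv2 pipeline described above:

```python
def fiber_areas(mask):
    """Per-fiber cross-sectional areas (pixel counts) via 4-connected labeling."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # flood-fill one fiber and count its pixels
                stack, area = [(i, j)], 0
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas
```

Multiplying each count by the microns-per-pixel squared gives CSA in physical units; the per-fiber list then feeds directly into the kind of distribution statistics the study reports.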
Affiliation(s)
- Zekai Yao: State Key Laboratory of Swine and Poultry Breeding Industry/Guangdong Key Laboratory of Animal Breeding and Nutrition, Institute of Animal Science, Guangdong Academy of Agricultural Sciences, Guangzhou 510640, PR China; College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China
- Jingjie Wo: College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, PR China
- Enqin Zheng: College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China; Guangdong Provincial Key Laboratory of Agro-animal Genomics and Molecular Breeding, South China Agricultural University, Guangzhou 510642, PR China
- Jie Yang: College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China; Guangdong Provincial Key Laboratory of Agro-animal Genomics and Molecular Breeding, South China Agricultural University, Guangzhou 510642, PR China
- Hao Li: State Key Laboratory of Swine and Poultry Breeding Industry/Guangdong Key Laboratory of Animal Breeding and Nutrition, Institute of Animal Science, Guangdong Academy of Agricultural Sciences, Guangzhou 510640, PR China; College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China
- Xinxin Li: State Key Laboratory of Swine and Poultry Breeding Industry/Guangdong Key Laboratory of Animal Breeding and Nutrition, Institute of Animal Science, Guangdong Academy of Agricultural Sciences, Guangzhou 510640, PR China; College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China
- Jianhao Li: State Key Laboratory of Swine and Poultry Breeding Industry/Guangdong Key Laboratory of Animal Breeding and Nutrition, Institute of Animal Science, Guangdong Academy of Agricultural Sciences, Guangzhou 510640, PR China
- Yizhi Luo: State Key Laboratory of Swine and Poultry Breeding Industry/Guangdong Key Laboratory of Animal Breeding and Nutrition, Institute of Animal Science, Guangdong Academy of Agricultural Sciences, Guangzhou 510640, PR China; Institute of Facility Agriculture, Guangdong Academy of Agricultural Sciences, Guangzhou 510640, PR China
- Ting Wang: College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China
- Zhenfei Fan: College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China
- Yuexin Zhan: College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China
- Yingshan Yang: College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China
- Zhenfang Wu: College of Animal Science and National Engineering Research Center for Breeding Swine Industry, South China Agricultural University, Guangzhou 510642, PR China; Yunfu Subcenter of Guangdong Laboratory for Lingnan Modern Agriculture, Yunfu 527400, PR China; Guangdong Provincial Key Laboratory of Agro-animal Genomics and Molecular Breeding, South China Agricultural University, Guangzhou 510642, PR China
- Ling Yin: College of Mathematics and Informatics, South China Agricultural University, Guangzhou 510642, PR China
- Fanming Meng: State Key Laboratory of Swine and Poultry Breeding Industry/Guangdong Key Laboratory of Animal Breeding and Nutrition, Institute of Animal Science, Guangdong Academy of Agricultural Sciences, Guangzhou 510640, PR China
25
Park SG, Park J, Choi HR, Lee JH, Cho ST, Lee YG, Ahn H, Pak S. Deep Learning Model for Real-time Semantic Segmentation During Intraoperative Robotic Prostatectomy. Eur Urol Suppl 2024; 62:47-53. [PMID: 38585210 PMCID: PMC10998267 DOI: 10.1016/j.euros.2024.02.005] [Accepted: 02/11/2024] [Indexed: 04/09/2024]
Abstract
Background and objective Recently, deep learning algorithms, including convolutional neural networks (CNNs), have shown remarkable progress in medical imaging analysis. Semantic segmentation, which partitions an image into different parts and objects, has potential applications in robotic surgery, such as AI-assisted surgery, surgeon training, and skill assessment. We aimed to investigate the performance of a CNN-based deep learning model in real-time segmentation during robot-assisted radical prostatectomy (RALP). Methods Intraoperative videos of RALP procedures were obtained. The reinforcement U-Net model was used to segment the images of instruments, bladder, prostate, and seminal vesicle-vas deferens. The dataset was preprocessed and split randomly into training, validation, and test data in a 7:2:1 ratio. The Dice coefficient, intersection over union (IoU), and accuracy by class, which are commonly used in medical image segmentation, were calculated to evaluate the performance of the model. Key findings and limitations From videos of 120 patients, 2400 images of RALP procedures were selected. The mean Dice scores for the identification of the instruments, bladder, prostate, and seminal vesicle-vas deferens were 0.96, 0.74, 0.85, and 0.84, respectively. Overall, when applied to the test data, the model had a mean Dice coefficient of 0.85, IoU of 0.77, and accuracy of 0.85. Limitations included the sample size, lack of diversity in surgical methods, incomplete surgical processes, and lack of external validation. Conclusions and clinical implications The CNN-based segmentation provides accurate real-time recognition of surgical instruments and anatomy in RALP. Deep learning algorithms can be used to identify anatomy within the surgical field and could potentially provide real-time guidance in robotic surgery.
Patient summary We demonstrate the potential effectiveness of deep learning segmentation in robotic prostatectomy procedures. Deep learning algorithms could be used to identify anatomical structures within the surgical field and may provide real-time guidance in robotic surgery.
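The 7:2:1 random split described in the methods can be sketched as follows; the seed, helper name, and integer-floor partitioning are illustrative assumptions, since the authors specify only the ratio:

```python
import random

def split_dataset(items, ratios=(7, 2, 1), seed=0):
    """Shuffle and partition items into train/val/test by the given ratio."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    items = list(items)
    rng.shuffle(items)
    total = sum(ratios)
    n_train = len(items) * ratios[0] // total
    n_val = len(items) * ratios[1] // total
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

For the study's 2400 images, a 7:2:1 split yields 1680 training, 480 validation, and 240 test images.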
Affiliation(s)
- Sung Gon Park: Department of Urology, Hallym University College of Medicine, Kangnam Sacred Heart Hospital, Seoul, South Korea
- Sung Tae Cho: Department of Urology, Hallym University College of Medicine, Kangnam Sacred Heart Hospital, Seoul, South Korea
- Young Goo Lee: Department of Urology, Hallym University College of Medicine, Kangnam Sacred Heart Hospital, Seoul, South Korea
- Hanjong Ahn: Department of Urology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
- Sahyun Pak: Department of Urology, Hallym University College of Medicine, Kangnam Sacred Heart Hospital, Seoul, South Korea
26
Chen J, Fan X, Chen Z, Peng Y, Liang L, Su C, Chen Y, Yao J. Enhancing YOLO5 for the Assessment of Irregular Pelvic Radiographs with Multimodal Information. J Imaging Inform Med 2024; 37:744-755. [PMID: 38315343 PMCID: PMC11031542 DOI: 10.1007/s10278-024-00986-2] [Received: 09/20/2023] [Revised: 12/09/2023] [Accepted: 12/12/2023] [Indexed: 02/07/2024]
Abstract
Developmental dysplasia of the hip (DDH) is one of the most common orthopedic disorders in infants and young children. Accurate identification and localization of anatomical landmarks are prerequisites for the diagnosis of DDH. In recent years, various works have applied deep learning algorithms to radiographs for DDH diagnosis, but none have considered incorporating multimodal information. The pelvis exhibits distinct structures at different developmental stages, and there are also gender-based differences. In light of this, this study proposes a method to enhance the performance of deep learning models in diagnosing DDH by incorporating age and gender information into the channels. The study utilizes YOLO5 to construct a deep learning network for detecting hip joint landmarks. Moreover, a comprehensive dataset of 7750 pelvic X-ray images is established, covering ages from 4 months to 16 years and encompassing various conditions, such as deformities and post-operative cases, which authentically capture the temporal diversity and pathological complexity of DDH. Experimental results show that the YOLO5 model with integrated multimodal information achieves an mAP0.5-0.95 of 83.1% and a diagnostic accuracy of 86.7% on the test dataset. The F1 scores for diagnosing cases of normal (NM), suspected dislocation (SD), mild dislocation (MD), and heavy dislocation (HD) are 90.9%, 79.8%, 63.5%, and 97.4%, respectively. Furthermore, experiments conducted on datasets of different sizes and networks of different sizes demonstrate the beneficial impact of multimodal information on the effectiveness of deep learning in diagnosing DDH.
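"Incorporating age and gender information into the channels" can be read as appending constant-valued metadata planes to the image tensor before the backbone. A hedged sketch of one such encoding; the normalization constant, the sex coding, and the function name are assumptions for illustration, not the paper's specification:

```python
def add_metadata_channels(image_chw, age_months, sex):
    """Append constant age and sex planes to a channels-first image (nested lists)."""
    h, w = len(image_chw[0]), len(image_chw[0][0])
    # age normalized to [0, 1] over the study's 4-month-to-16-year range
    age_plane = [[age_months / 192.0] * w for _ in range(h)]
    sex_plane = [[1.0 if sex == "M" else 0.0] * w for _ in range(h)]
    return image_chw + [age_plane, sex_plane]
```

Because each plane is constant, every convolutional filter sees the metadata at every spatial location, letting the detector condition its predictions on developmental stage and sex.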
Affiliation(s)
- Jing Chen: School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
- Xiaoyou Fan: School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
- Zhen Chen: School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
- Yichao Peng: Department of Pediatric Orthopedics, Center for Orthopaedic Surgery, Hospital of Guangdong Province, The Third School of Clinical Medicine, Southern Medical University, The Third Affiliated Hospital of Southern Medical University, Guangzhou, 510630, Guangdong, China; Department of Orthopedics, Academy of Orthopedics Guangdong Province, Guangdong Provincial Key Laboratory of Bone and Joint Degeneration Diseases, Guangzhou, 510630, Guangdong, China
- Lichong Liang: School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
- Chengyue Su: School of Physics and Optoelectronic Engineering, Guangdong University of Technology, Guangzhou, 510006, Guangdong, China
- Yun Chen: Department of Pediatric Orthopedics, Center for Orthopaedic Surgery, Hospital of Guangdong Province, The Third School of Clinical Medicine, Southern Medical University, The Third Affiliated Hospital of Southern Medical University, Guangzhou, 510630, Guangdong, China; School of Nursing, Southern Medical University, The Third Affiliated Hospital of Southern Medical University, Guangzhou, 510630, Guangdong, China
- Jinghui Yao: Department of Pediatric Orthopedics, Center for Orthopaedic Surgery, Hospital of Guangdong Province, The Third School of Clinical Medicine, Southern Medical University, The Third Affiliated Hospital of Southern Medical University, Guangzhou, 510630, Guangdong, China; Department of Orthopedics, Academy of Orthopedics Guangdong Province, Guangdong Provincial Key Laboratory of Bone and Joint Degeneration Diseases, Guangzhou, 510630, Guangdong, China
Collapse
|
27
|
Chen H, Xue P, Xi H, Gu C, He S, Sun G, Pan K, Du B, Liu X. A Deep-Learning Model for Predicting the Efficacy of Non-vascularized Fibular Grafting Using Digital Radiography. Acad Radiol 2024; 31:1501-1507. [PMID: 37935609 DOI: 10.1016/j.acra.2023.10.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Revised: 09/30/2023] [Accepted: 10/10/2023] [Indexed: 11/09/2023]
Abstract
RATIONALE AND OBJECTIVES To develop a fully automated deep-learning (DL) model using digital radiography (DR) with relatively high accuracy for predicting the efficacy of non-vascularized fibular grafting (NVFG) and identifying suitable patients for this procedure. MATERIALS AND METHODS A retrospective analysis was conducted on osteonecrosis of the femoral head patients who underwent NVFG between June 2009 and June 2021. All patients underwent standard preoperative anteroposterior (AP) and frog-lateral (FL) DR. Subsequently, the radiographs were pre-processed and labeled based on the follow-up results. The dataset was randomly divided into training and testing datasets. The DL-based prediction model was developed on the training dataset and its diagnostic performance was evaluated on the testing dataset. RESULTS A total of 339 patients with 432 hips were included in this study, with a hip preservation success rate of 71.52% as of June 2023. The hips were randomly divided into a training dataset (n = 324) and a testing dataset (n = 108). The ensemble model predicted the efficacy of NVFG with an accuracy of 78.9%, a precision of 78.7%, a recall of 96.0%, an F1-score of 86.5%, and an area under the curve (AUC) of 0.780. FL views (AUC, 0.71) exhibited better performance than AP views (AUC, 0.66). CONCLUSION The proposed DL model using DR enables automatic and efficient prediction of NVFG efficacy without additional clinical and financial burden. It can be seamlessly integrated into various clinical scenarios, serving as a practical tool to identify suitable patients for NVFG.
Affiliation(s)
- Hao Chen
- Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, 210029, Jiangsu, China (H.C., P.X., H.X., C.G., S.H., G.S., B.D., X.L.)
- Peng Xue
- Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, 210029, Jiangsu, China (H.C., P.X., H.X., C.G., S.H., G.S., B.D., X.L.)
- Hongzhong Xi
- Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, 210029, Jiangsu, China (H.C., P.X., H.X., C.G., S.H., G.S., B.D., X.L.)
- Changyuan Gu
- Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, 210029, Jiangsu, China (H.C., P.X., H.X., C.G., S.H., G.S., B.D., X.L.)
- Shuai He
- Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, 210029, Jiangsu, China (H.C., P.X., H.X., C.G., S.H., G.S., B.D., X.L.)
- Guangquan Sun
- Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, 210029, Jiangsu, China (H.C., P.X., H.X., C.G., S.H., G.S., B.D., X.L.)
- Ke Pan
- Liyang Branch of Jiangsu Provincial Hospital of Chinese Medicine, Changzhou, 213300, Jiangsu, China (K.P.)
- Bin Du
- Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, 210029, Jiangsu, China (H.C., P.X., H.X., C.G., S.H., G.S., B.D., X.L.)
- Xin Liu
- Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing, 210029, Jiangsu, China (H.C., P.X., H.X., C.G., S.H., G.S., B.D., X.L.)

28
Russo C, Bria A, Marrocco C. GravityNet for end-to-end small lesion detection. Artif Intell Med 2024; 150:102842. [PMID: 38553147 DOI: 10.1016/j.artmed.2024.102842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Revised: 03/01/2024] [Accepted: 03/11/2024] [Indexed: 04/02/2024]
Abstract
This paper introduces a novel one-stage end-to-end detector specifically designed to detect small lesions in medical images. Precise localization of small lesions presents challenges due to their appearance and the diverse contextual backgrounds in which they are found. To address this, our approach introduces a new type of pixel-based anchor that dynamically moves towards the targeted lesion for detection. We refer to this new architecture as GravityNet, and the novel anchors as gravity points since they appear to be "attracted" by the lesions. We conducted experiments on two well-established medical problems involving small lesions to evaluate the performance of the proposed approach: microcalcification detection in digital mammograms and microaneurysm detection in digital fundus images. Our method demonstrates promising results in effectively detecting small lesions in these medical imaging tasks.
Affiliation(s)
- Ciro Russo
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- Alessandro Bria
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- Claudio Marrocco
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy

29
Zhang K, Abdoli N, Gilley P, Sadri Y, Chen X, Thai TC, Dockery L, Moore K, Mannel RS, Qiu Y. Developing a novel image marker to predict the clinical outcome of neoadjuvant chemotherapy (NACT) for ovarian cancer patients. Comput Biol Med 2024; 172:108240. [PMID: 38460312 DOI: 10.1016/j.compbiomed.2024.108240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Revised: 02/13/2024] [Accepted: 02/26/2024] [Indexed: 03/11/2024]
Abstract
OBJECTIVE Neoadjuvant chemotherapy (NACT) is one treatment option for advanced-stage ovarian cancer patients. However, due to tumor heterogeneity, the clinical outcomes of NACT vary significantly among different subgroups. Partial response to NACT may lead to suboptimal debulking surgery, which results in an adverse prognosis. To address this clinical challenge, the purpose of this study is to develop a novel image marker to achieve accurate prognosis prediction of NACT at an early stage. METHODS For this purpose, we first computed a total of 1373 radiomics features to quantify the tumor characteristics, which can be grouped into three categories: geometric, intensity, and texture features. Second, all these features were optimized by a principal component analysis algorithm to generate a compact and informative feature cluster. This cluster was used as the input for developing and optimizing support vector machine (SVM) based classifiers, which indicate the likelihood of receiving suboptimal cytoreduction after NACT treatment. Two different kernels for the SVM algorithm were explored and compared. A total of 42 ovarian cancer cases were retrospectively collected to validate the scheme. A nested leave-one-out cross-validation framework was adopted for model performance assessment. RESULTS The results demonstrated that the model with a Gaussian radial basis function kernel SVM yielded an area under the receiver operating characteristic (ROC) curve (AUC) of 0.806 ± 0.078. Meanwhile, this model achieved an overall accuracy (ACC) of 83.3%, a positive predictive value (PPV) of 81.8%, and a negative predictive value (NPV) of 83.9%. CONCLUSION This study provides meaningful information for the development of radiomics-based image markers for NACT treatment outcome prediction.
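The pipeline described in METHODS (radiomics features, PCA compression, RBF-kernel SVM, leave-one-out evaluation) can be sketched as follows on synthetic stand-in data; note this is a plain rather than nested cross-validation loop, and the choice of 10 principal components is an arbitrary assumption, not the study's setting:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(42, 1373))   # 42 cases x 1373 radiomics features (synthetic)
y = rng.integers(0, 2, size=42)   # 1 = suboptimal cytoreduction (synthetic labels)

# Standardize -> compress with PCA -> classify with a Gaussian RBF-kernel SVM.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),
                      SVC(kernel="rbf", probability=True, random_state=0))
scores = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
```

With random features the held-out probabilities are uninformative; the point is only the shape of the pipeline.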
Affiliation(s)
- Ke Zhang
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK, USA, 73019; School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, USA, 73019
- Neman Abdoli
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, USA, 73019
- Patrik Gilley
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, USA, 73019
- Youkabed Sadri
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, USA, 73019
- Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, USA, 73019
- Theresa C Thai
- Department of Radiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA, 73104
- Lauren Dockery
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA, 73104
- Kathleen Moore
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA, 73104
- Robert S Mannel
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA, 73104
- Yuchen Qiu
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK, USA, 73019; School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, USA, 73019

30
Sadikine A, Badic B, Tasu JP, Noblet V, Ballet P, Visvikis D, Conze PH. Improving abdominal image segmentation with overcomplete shape priors. Comput Med Imaging Graph 2024; 113:102356. [PMID: 38340573 DOI: 10.1016/j.compmedimag.2024.102356] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 12/11/2023] [Accepted: 02/06/2024] [Indexed: 02/12/2024]
Abstract
The extraction of abdominal structures using deep learning has recently attracted widespread interest in medical image analysis. Automatic abdominal organ and vessel segmentation is highly desirable to guide clinicians in computer-assisted diagnosis, therapy, or surgical planning. Despite a good ability to extract large organs, the capacity of U-Net-inspired architectures to automatically delineate smaller structures remains a major issue, especially given the increase in receptive field size as we go deeper into the network. To deal with various abdominal structure sizes while exploiting efficient geometric constraints, we present a novel approach that integrates shape priors from a semi-overcomplete convolutional auto-encoder (S-OCAE) embedding into deep segmentation. Compared to standard convolutional auto-encoders (CAE), it exploits an overcomplete branch that projects data onto higher dimensions to better characterize anatomical structures with a small spatial extent. Experiments on abdominal organ and vessel delineation performed on various publicly available datasets highlight the effectiveness of our method compared to the state-of-the-art, including U-Net trained without and with shape priors from a traditional CAE. Exploiting a semi-overcomplete convolutional auto-encoder embedding as shape priors improves the ability of deep segmentation models to provide realistic and accurate abdominal structure contours.
Affiliation(s)
- Amine Sadikine
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University of Western Brittany, Brest, 29200, France
- Bogdan Badic
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University Hospital of Brest, Brest, 29200, France
- Jean-Pierre Tasu
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University Hospital of Poitiers, Poitiers, 86000, France
- Pascal Ballet
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University of Western Brittany, Brest, 29200, France
- Pierre-Henri Conze
- LaTIM UMR 1101, Inserm, Brest, 29200, France; IMT Atlantique, Brest, 29200, France

31
Yin Z, Li GY, Zhang Z, Zheng Y, Cao Y. SWENet: A Physics-Informed Deep Neural Network (PINN) for Shear Wave Elastography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:1434-1448. [PMID: 38032772 DOI: 10.1109/tmi.2023.3338178] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2023]
Abstract
Shear wave elastography (SWE) enables the measurement of elastic properties of soft materials in a non-invasive manner and finds broad applications in various disciplines. The state-of-the-art SWE methods rely on the measurement of local shear wave speeds to infer material parameters and suffer from wave diffraction when applied to soft materials with strong heterogeneity. In the present study, we overcome this challenge by proposing a physics-informed neural network (PINN)-based SWE method (SWENet). The spatial variation of the elastic properties of inhomogeneous materials is introduced in the governing equations, which are encoded in SWENet as loss functions. Snapshots of wave motions are used to train the neural networks, and during this process, the elastic properties within a region of interest illuminated by shear waves are inferred simultaneously. We performed finite element simulations, tissue-mimicking phantom experiments, and ex vivo experiments to validate the method. Our results show that the shear moduli of soft composites consisting of a matrix and inclusions a few millimeters in cross-sectional dimension, with either regular or irregular geometries, can be identified with excellent accuracy. The advantages of SWENet over conventional SWE methods are that it uses more features of the wave motions and enables seamless integration of multi-source data in the inverse analysis. Given these advantages, SWENet may find broad applications where full wave fields are involved in inferring heterogeneous mechanical properties, such as identifying small solid tumors with ultrasound SWE and differentiating gray and white matter of the brain with magnetic resonance elastography.
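The core PINN idea of encoding the governing equations as a loss can be illustrated without a neural network: evaluate the PDE residual on wave-motion snapshots and check that it vanishes only for the correct shear modulus. The sketch below uses finite differences on a 1D, constant-density, constant-modulus toy wave field; it is an illustrative assumption, not SWENet's implementation:

```python
import numpy as np

# Residual of rho * u_tt - mu * u_xx, the quantity a PINN would drive to zero,
# evaluated here with second-order finite differences on interior grid points.
def wave_residual_mse(u, mu, dx, dt, rho=1.0):
    u_tt = (u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]) / dt**2   # time axis = 1
    u_xx = (u[2:, :] - 2 * u[1:-1, :] + u[:-2, :]) / dx**2   # space axis = 0
    res = rho * u_tt[1:-1, :] - mu * u_xx[:, 1:-1]
    return float(np.mean(res**2))

# Analytic shear wave u = sin(kx - wt); the residual vanishes when mu = rho*(w/k)^2.
k, w = 2 * np.pi, 4 * np.pi
x, t = np.linspace(0, 1, 201), np.linspace(0, 1, 401)
u = np.sin(k * x[:, None] - w * t[None, :])
dx, dt = x[1] - x[0], t[1] - t[0]

mse_true = wave_residual_mse(u, mu=(w / k) ** 2, dx=dx, dt=dt)      # ~0 (discretization error)
mse_wrong = wave_residual_mse(u, mu=4 * (w / k) ** 2, dx=dx, dt=dt) # large misfit
```

In SWENet the residual is taken against network-predicted fields with a spatially varying mu(x), so the same misfit principle drives the inversion.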
32
Liu R, Xu R, Yan S, Li P, Jia C, Sun H, Sheng K, Wang Y, Zhang Q, Guo J, Xin X, Li X, Guo D. Hi-C, a chromatin 3D structure technique advancing the functional genomics of immune cells. Front Genet 2024; 15:1377238. [PMID: 38586584 PMCID: PMC10995239 DOI: 10.3389/fgene.2024.1377238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2024] [Accepted: 03/13/2024] [Indexed: 04/09/2024] Open
Abstract
The functional performance of immune cells relies on a complex transcriptional regulatory network. The three-dimensional structure of chromatin can affect chromatin status and gene expression patterns, and plays an important regulatory role in gene transcription. Currently available techniques for studying chromatin spatial structure include chromatin conformation capture techniques and their derivatives, chromatin accessibility sequencing techniques, and others. Additionally, recently emerged deep learning methods can be used to enhance the analysis of such data. In this review, we elucidate the definition and significance of the three-dimensional chromatin structure, summarize the technologies available for studying it, and describe the research progress on the chromatin spatial structure of dendritic cells, macrophages, T cells, B cells, and neutrophils.
Affiliation(s)
- Dianhao Guo
- School of Clinical and Basic Medical Sciences, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China

33
Karunanayake N, Makhanov SS. When deep learning is not enough: artificial life as a supplementary tool for segmentation of ultrasound images of breast cancer. Med Biol Eng Comput 2024:10.1007/s11517-024-03026-x. [PMID: 38498125 DOI: 10.1007/s11517-024-03026-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Accepted: 01/16/2024] [Indexed: 03/20/2024]
Abstract
Segmentation of tumors in ultrasound (US) images of the breast is a critical issue in medical imaging. Due to the poor quality of US images and the varying specifications of US machines, segmentation and classification of abnormalities present difficulties even for trained radiologists. The paper aims to introduce a novel AI-based hybrid model for US segmentation that offers high accuracy, requires relatively small datasets, and is capable of handling previously unseen data. The software can be used for diagnostics and US-guided biopsies. A unique and robust hybrid approach that combines deep learning (DL) and multi-agent artificial life (AL) has been introduced. The algorithms are verified on three US datasets. The method outperforms 14 selected state-of-the-art algorithms applied to US images characterized by complex geometry and a high level of noise. The paper offers an original classification of the images and tests to analyze the limits of the DL. The model has been trained and verified on 1264 ultrasound images in JPEG and PNG formats. The age of the patients ranges from 22 to 73 years. The 14 benchmark algorithms include deformable shapes, edge linking, superpixels, machine learning, and DL methods. The tests use eight region- (shape-) and contour-based evaluation metrics. The proposed method (DL-AL) produces excellent results in terms of the Dice coefficient (region-based) and the relative Hausdorff distance H3 (contour-based), as follows: the easiest image complexity level, Dice = 0.96 and H3 = 0.26; the medium complexity level, Dice = 0.91 and H3 = 0.82; and the hardest complexity level, Dice = 0.90 and H3 = 0.84. All other metrics follow the same pattern. DL-AL outperforms the second-best (U-Net-based) method by 10-20%. The method has also been tested in a series of unconventional tests, in which the model was trained on subsets of the images and applied to the entire set. These results are summarized below. (1) Only the low-complexity images were used for training (68% unknown images): Dice = 0.80 and H3 = 2.01. (2) The low- and medium-complexity images were used for training (51% unknown images): Dice = 0.86 and H3 = 1.32. (3) The low-, medium-, and hard-complexity images were used for training (35% unknown images): Dice = 0.92 and H3 = 0.76. These tests show a significant advantage for DL-AL of over 30%. A video demo illustrating the algorithm is at http://tinyurl.com/mr4ah687.
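The region-based Dice coefficient reported above has a compact definition; a minimal sketch on toy binary masks (the relative Hausdorff distance H3 is not reproduced here):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * float(np.logical_and(a, b).sum()) / denom if denom else 1.0

# Two overlapping 4x4 squares on an 8x8 grid: overlap 9 pixels, sizes 16 + 16.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True
print(dice(pred, gt))  # 0.5625
```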
Affiliation(s)
- Nalan Karunanayake
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand
- Stanislav S Makhanov
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand

34
Chen JW, Ting M, Chang PY, Jung CJ, Chang CH, Fang SY, Liu LW, Yang KJ, Yu SH, Chen YS, Chi NH, Hsu RB, Wang CH, Wu IH, Yu HY, Chan CY. Computer-assisted image analysis of preexisting histological patterns of the cephalic vein to predict wrist arteriovenous fistula non-maturation. J Formos Med Assoc 2024:S0929-6646(24)00149-9. [PMID: 38492985 DOI: 10.1016/j.jfma.2024.03.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 01/07/2024] [Accepted: 03/06/2024] [Indexed: 03/18/2024] Open
Abstract
BACKGROUND We used computer-assisted image analysis to determine whether preexisting histological features of the cephalic vein influence the risk of non-maturation of wrist fistulas. METHODS This study focused on patients aged 20-80 years who underwent their first wrist fistula creation. A total of 206 patients participated, and vein samples for Masson's trichrome staining were collected from 134 patients. Of these, 94 patients provided a complete girth of the venous specimen for automatic image analysis. Maturation was assessed using ultrasound within 90 days after surgery. RESULTS The collagen-to-muscle ratio in the target vein, measured by computer-assisted imaging, was a strong predictor of non-maturation in wrist fistulas. Receiver operating characteristic analysis revealed an area under the curve of 0.864 (95% confidence interval 0.782-0.946, p < 0.001). The optimal cut-off value for the ratio was 1.138, as determined by the Youden index maximum method, with a sensitivity of 89.0% and a specificity of 71.4%. For ease of application, we used a cutoff value of 1.0; the non-maturation rates for patients with ratios >1 and ≤1 were 51.7% (15 of 29 patients) and 9.2% (6 of 65 patients), respectively. Chi-square testing revealed significantly different non-maturation rates between the two groups (χ²(1, N = 94) = 20.9, p < 0.01). CONCLUSION Computer-assisted image interpretation can help quantify the preexisting histological patterns of the cephalic vein, and the collagen-to-muscle ratio can predict non-maturation of wrist fistulas at an early stage.
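The Youden index maximum method mentioned above picks the threshold that maximizes sensitivity + specificity - 1; a small sketch with made-up collagen-to-muscle ratios (not the study's data):

```python
import numpy as np

def youden_cutoff(scores: np.ndarray, labels: np.ndarray):
    """Return (threshold, J) maximizing J = sensitivity + specificity - 1.
    A case is called positive when its score >= threshold."""
    pos, neg = labels == 1, labels == 0
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        sens = float(np.mean(scores[pos] >= t))
        spec = float(np.mean(scores[neg] < t))
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = float(t), j
    return best_t, best_j

# Hypothetical collagen-to-muscle ratios; 1 = fistula non-maturation
ratios = np.array([0.6, 0.8, 0.9, 1.0, 1.2, 1.3, 1.5, 1.8])
failed = np.array([0, 0, 0, 0, 1, 1, 0, 1])
cutoff, j = youden_cutoff(ratios, failed)  # picks the threshold with the best J
```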
Affiliation(s)
- Jeng-Wei Chen
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan; Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Mao Ting
- Department of Surgery, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
- Po-Ya Chang
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Chiau-Jing Jung
- Department of Microbiology and Immunology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Chin-Hao Chang
- Department of Medical Research, National Taiwan University Hospital, Taipei, Taiwan
- Shi-Yu Fang
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Li-Wei Liu
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Kelvin Jeason Yang
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Sz-Han Yu
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Yih-Sharng Chen
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Nai-Hsin Chi
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Ron-Bin Hsu
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Chih-Hsien Wang
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- I-Hui Wu
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Hsi-Yu Yu
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan
- Chih-Yang Chan
- Department of Surgery, National Taiwan University Hospital, Taipei, Taiwan

35
Farrahi V, Collings PJ, Oussalah M. Deep learning of movement behavior profiles and their association with markers of cardiometabolic health. BMC Med Inform Decis Mak 2024; 24:74. [PMID: 38481262 PMCID: PMC10936042 DOI: 10.1186/s12911-024-02474-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2023] [Accepted: 03/04/2024] [Indexed: 03/17/2024] Open
Abstract
BACKGROUND Traditionally, studies assessing the health associations of accelerometer-measured movement behaviors have relied on a few averaged values, mainly representing the duration of physical activities and sedentary behaviors. Such averaged values cannot naturally capture the complex interplay between the duration, timing, and patterns of accumulation of movement behaviors, which altogether may be codependently related to health outcomes in adults. In this study, we introduce a novel approach to visually represent recorded movement behaviors as images using original accelerometer outputs. Subsequently, we utilize these images for cluster analysis employing deep convolutional autoencoders. METHODS Our method involves converting minute-by-minute accelerometer outputs (activity counts) into a 2D image format, capturing the entire spectrum of movement behaviors performed by each participant. By utilizing convolutional autoencoders, we enable the learning of these image-based representations. Subsequently, we apply the K-means algorithm to cluster the learned representations. We used data from 1812 adult (20-65 years) participants in the National Health and Nutrition Examination Survey (NHANES, 2003-2006 cycles) who wore a hip-worn accelerometer for seven consecutive days and provided valid accelerometer data. RESULTS Deep convolutional autoencoders were able to learn the image representations, encompassing the entire spectrum of movement behaviors. The images were encoded into 32 latent variables, and cluster analysis based on these learned representations resulted in the identification of four distinct movement behavior profiles characterized by varying levels, timing, and patterns of accumulation of movement behaviors. After adjusting for potential covariates, the profile characterized as "Early-morning movers" and the profile characterized as "Highest activity" both had lower levels of insulin (P < 0.01 for both), triglycerides (P < 0.05 and P < 0.01, respectively), HOMA-IR (P < 0.01 for both), and plasma glucose (P < 0.05 and P < 0.1, respectively) compared to the "Lowest activity" profile. No significant differences were observed for the "Least sedentary movers" profile compared to the "Lowest activity" profile. CONCLUSIONS Deep learning of movement behavior profiles revealed that, in addition to the duration and patterns of movement behaviors, the timing of physical activity may also be crucial for gaining additional health benefits.
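The clustering stage (K-means with four clusters on the 32 learned latent variables) can be sketched as follows; the convolutional autoencoder itself is omitted, and random vectors stand in for the learned codes:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for 32-D autoencoder codes, one row per NHANES participant (synthetic)
codes = rng.normal(size=(1812, 32))

# Four movement-behavior profiles, as in the study
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(codes)
profiles = km.labels_  # cluster assignment per participant
```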
Affiliation(s)
- Vahid Farrahi
- Institute for Sport and Sport Science, TU Dortmund University, Dortmund, Germany
- Paul J Collings
- Physical Activity, Sport and Health Research Group, Department of Precision Health, Luxembourg Institute of Health, Strassen, Luxembourg
- Mourad Oussalah
- Centre of Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Oulu, Finland

36
Chen H, Li Q, Zhou L, Li F. Deep learning-based algorithms for low-dose CT imaging: A review. Eur J Radiol 2024; 172:111355. [PMID: 38325188 DOI: 10.1016/j.ejrad.2024.111355] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2023] [Revised: 01/05/2024] [Accepted: 01/31/2024] [Indexed: 02/09/2024]
Abstract
The computed tomography (CT) technique is extensively employed as an imaging modality in clinical settings. The radiation dose of CT, however, is significantly high, raising concerns regarding the potential radiation damage it may cause. Reducing the X-ray exposure dose in CT scanning may result in a significant decline in imaging quality, thereby elevating the risk of missed diagnosis and misdiagnosis. The reduction of CT radiation dose and the acquisition of high-quality images that meet clinical diagnostic requirements have always been a critical research focus and challenge in the field of CT. Over the years, scholars have conducted extensive research on enhancing low-dose CT (LDCT) imaging algorithms, among which deep learning-based algorithms have demonstrated superior performance. In this review, we first introduce the conventional algorithms for CT image reconstruction along with their respective advantages and disadvantages. Subsequently, we provide a detailed description of four aspects of the application of deep neural networks in the LDCT imaging process: preprocessing in the projection domain, post-processing in the image domain, dual-domain processing, and direct deep learning-based reconstruction (DLR). Furthermore, we evaluate the merits and demerits of each method. The commercial and clinical applications of LDCT-DLR algorithms are also presented in an overview. Finally, we summarize the existing issues pertaining to LDCT-DLR and conclude the paper by outlining prospective trends for algorithmic advancement.
Affiliation(s)
- Hongchi Chen
- School of Medical Information Engineering, Gannan Medical University, Ganzhou 341000, China
- Qiuxia Li
- School of Medical Information Engineering, Gannan Medical University, Ganzhou 341000, China
- Lazhen Zhou
- School of Medical Information Engineering, Gannan Medical University, Ganzhou 341000, China
- Fangzuo Li
- School of Medical Information Engineering, Gannan Medical University, Ganzhou 341000, China; Key Laboratory of Prevention and Treatment of Cardiovascular and Cerebrovascular Diseases, Ministry of Education, Gannan Medical University, Ganzhou 341000, China

37
Yin R, Chen H, Tao T, Zhang K, Yang G, Shi F, Jiang Y, Gui J. Expanding from unilateral to bilateral: A robust deep learning-based approach for predicting radiographic osteoarthritis progression. Osteoarthritis Cartilage 2024; 32:338-347. [PMID: 38113994 DOI: 10.1016/j.joca.2023.11.022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 10/31/2023] [Accepted: 11/29/2023] [Indexed: 12/21/2023]
Abstract
OBJECTIVE To develop and validate a deep learning (DL) model for predicting osteoarthritis (OA) progression based on bilateral knee joint views. METHODS In this retrospective study, knee joints from bilateral posteroanterior knee radiographs of participants in the Osteoarthritis Initiative were analyzed. At baseline, participants were divided into testing set 1 and a development set according to their enrollment sites. The development set was further divided into a training set and a validation set in an 8:2 ratio for model development. At the 48-month follow-up, eligible patients formed testing set 2. The Bilateral Knee Neural Network (BikNet) was developed using bilateral views, with the knee to be predicted as the main view and the contralateral knee as the auxiliary view. DenseNet and ResNext were also trained and compared as unilateral models. Two reader tests were conducted to evaluate the model's value in predicting incident OA. RESULTS In total, 3583 participants were evaluated. The proposed BikNet outperformed ResNext and DenseNet (all area under the curve [AUC] < 0.71, P < 0.001) with AUC values of 0.761 and 0.745 in testing sets 1 and 2, respectively. Assistance from BikNet increased clinicians' sensitivity (from 28.1-63.2% to 42.1-68.4%) and specificity (from 57.4-83.4% to 64.1-87.5%) for incident OA prediction and improved inter-observer reliability. CONCLUSION The DL model, constructed from bilateral knee views, holds promise for enhancing the assessment of OA and demonstrates greater robustness in subsequent follow-up evaluations than unilateral models. BikNet represents a potential tool or imaging biomarker for predicting OA progression.
Collapse
Affiliation(s)
- Rui Yin: Nanjing Medical University, Nanjing, China; Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing, China.
- Hao Chen: School of Computer Science, University of Birmingham, Birmingham, UK.
- Tianqi Tao: Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing, China.
- Kaibin Zhang: Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing, China.
- Guangxu Yang: Department of Orthopedic Surgery, Nanjing Pukou Hospital, Nanjing, China.
- Fajian Shi: Department of Orthopedic Surgery, Nanjing Pukou Hospital, Nanjing, China.
- Yiqiu Jiang: Nanjing Medical University, Nanjing, China; Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing, China.
- Jianchao Gui: Nanjing Medical University, Nanjing, China; Department of Sports Medicine and Joint Surgery, Nanjing First Hospital, Nanjing, China.
38
Yang K, Song J, Liu M, Xue L, Liu S, Yin X, Liu K. TBACkp: HER2 expression status classification network focusing on intrinsic subenvironmental characteristics of breast cancer liver metastases. Comput Biol Med 2024; 170:108002. [PMID: 38277921 DOI: 10.1016/j.compbiomed.2024.108002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 12/24/2023] [Accepted: 01/13/2024] [Indexed: 01/28/2024]
Abstract
The HER2 expression status of breast cancer liver metastases is a crucial indicator for diagnosis, treatment, and prognosis assessment. HER2 status is typically assessed through invasive procedures such as biopsy, which have drawbacks: tissue samples can be difficult to obtain and examination periods are long. To address these limitations, we propose an AI-aided diagnostic model that enables rapid diagnosis: it predicts a patient's HER2 expression status from a preprocessed image, namely the lesion region extracted from a CT scan, rather than from an actual tissue sample. The model adopts a parallel structure comprising a Branch Block and a Trunk Block. The Branch Block extracts gradient characteristics between tumor sub-environments, and the Trunk Block fuses the characteristics extracted by the Branch Block. The Branch Block combines a CNN with self-attention, exploiting the advantages of both to extract more detailed and comprehensive image features, while the Trunk Block is designed to fuse the extracted feature information without disrupting the transmission of the original image features. Attention in the Trunk Block is computed by a Conv-Attention module, which uses a kernel dot product to supply the weights for self-attention while incorporating a convolutional inductive bias. Reflecting this structure and method, we refer to the model as TBACkp. The dataset comprises enhanced abdominal CT images of 151 patients with liver metastases from breast cancer, together with each patient's HER2 expression level. The experimental results are as follows: AUC 0.915, ACC 0.854, specificity 0.809, precision 0.863, recall 0.881, F1-score 0.872. These results demonstrate that the method can accurately assess HER2 expression status compared with other advanced deep learning models.
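For reference, the ACC, specificity, precision, recall, and F1 figures above all derive from a single binary confusion matrix; a minimal sketch with hypothetical counts (not the paper's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return {"acc": acc, "specificity": specificity,
            "precision": precision, "recall": recall, "f1": f1}

# hypothetical counts, not the paper's confusion matrix
m = binary_metrics(tp=81, fp=13, tn=55, fn=11)
```

As a sanity check on the reported numbers: F1 is the harmonic mean of precision and recall, and 2 x 0.863 x 0.881 / (0.863 + 0.881) is approximately 0.872, matching the reported F1-score.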
Affiliation(s)
- Kun Yang: College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
- Jie Song: College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
- Meng Liu: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Linyan Xue: College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
- Shuang Liu: College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China
- Xiaoping Yin: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China; Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Hebei University, Baoding, China; The Outstanding Young Scientific Research and Innovation Team of Hebei University, Baoding, China.
- Kun Liu: College of Quality and Technical Supervision, Hebei University, Baoding, China; Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, Baoding, China; Scientific Research and Innovation Team of Hebei University, Baoding, China.
39
Zhang J, Qing C, Li Y, Wang Y. BCSwinReg: A cross-modal attention network for CBCT-to-CT multimodal image registration. Comput Biol Med 2024; 171:107990. [PMID: 38377717 DOI: 10.1016/j.compbiomed.2024.107990] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Revised: 12/26/2023] [Accepted: 01/13/2024] [Indexed: 02/22/2024]
Abstract
Computed tomography (CT) and cone beam computed tomography (CBCT) registration plays an important role in radiotherapy. However, the poor quality of CBCT makes CBCT-CT multimodal registration challenging. Effective feature fusion and mapping often lead to better multimodal registration results. We therefore propose a new backbone network, BCSwinReg, and a cross-modal attention module, CrossSwin. CrossSwin is designed to promote multimodal feature fusion and map the multimodal domains to a common domain, helping the network better learn the correspondence between images. BCSwinReg discovers correspondences by exchanging information through cross-attention, obtains multi-level semantic information through a multi-resolution strategy, and finally integrates the multi-resolution deformations with a divide-and-conquer cascade. We performed experiments on the publicly available 4D-Lung dataset to demonstrate the effectiveness of CrossSwin and BCSwinReg. Compared with VoxelMorph, BCSwinReg obtained performance improvements of 3.3% in Dice Similarity Coefficient (DSC) and 0.19 in average 95% Hausdorff distance (HD95).
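The DSC used to benchmark the registration is a plain overlap ratio between the warped and target segmentations; a minimal sketch on toy binary masks represented as sets of voxel coordinates (HD95, the other metric, instead measures the 95th-percentile boundary distance and is omitted here):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks given as sets of voxel coordinates."""
    inter = len(a & b)
    return 2 * inter / (len(a) + len(b))

# toy masks: foreground voxels of a structure after warping vs. in the target scan
warped = {(0, 0), (0, 1), (1, 0), (1, 1)}
target = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(warped, target)  # 2*3 / (4+4) = 0.75
```

In registration papers the warped mask is the moving image's segmentation pushed through the predicted deformation field, so a higher DSC indicates a better-aligned field.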
Affiliation(s)
- Jieming Zhang: The East China University of Science and Technology, Shanghai, 200237, China
- Chang Qing: The East China University of Science and Technology, Shanghai, 200237, China.
- Yu Li: The East China University of Science and Technology, Shanghai, 200237, China
- Yaqi Wang: The East China University of Science and Technology, Shanghai, 200237, China
40
Dong C, Hayashi S. Deep learning applications in vascular dementia using neuroimaging. Curr Opin Psychiatry 2024; 37:101-106. [PMID: 38226547 DOI: 10.1097/yco.0000000000000920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/17/2024]
Abstract
PURPOSE OF REVIEW Vascular dementia (VaD) is the second most common cause of dementia after Alzheimer's disease, and deep learning has emerged as a critical tool in dementia research. The aim of this article is to highlight current deep learning applications in VaD-related imaging biomarkers and diagnosis. RECENT FINDINGS The main deep learning technology applied to VaD neuroimaging data is the convolutional neural network (CNN). CNN models have been widely used for lesion detection and segmentation, including white matter hyperintensities (WMH), cerebral microbleeds (CMBs), perivascular spaces (PVS), lacunes, cortical superficial siderosis, and brain atrophy. Applications to VaD subtype classification have also shown excellent results. CNN-based deep learning models have potential for further diagnosis and prognosis of VaD. SUMMARY Deep learning with neuroimaging data holds significant promise for advancing early diagnosis and treatment strategies in VaD. Ongoing research and collaboration between clinicians, data scientists, and neuroimaging experts are essential to address challenges and unlock the full potential of deep learning in VaD diagnosis and management.
Affiliation(s)
- Chao Dong: Centre for Healthy Brain Ageing (CHeBA), Discipline of Psychiatry & Mental Health, School of Clinical Medicine, UNSW Sydney, NSW, Australia
41
Xu T, Zhang XY, Yang N, Jiang F, Chen GQ, Pan XF, Peng YX, Cui XW. A narrative review on the application of artificial intelligence in renal ultrasound. Front Oncol 2024; 13:1252630. [PMID: 38495082 PMCID: PMC10943690 DOI: 10.3389/fonc.2023.1252630] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Accepted: 12/12/2023] [Indexed: 03/19/2024] Open
Abstract
Kidney disease is a serious public health problem, and various kidney diseases can progress to end-stage renal disease, whose many complications have a significant impact on patients' physical and mental health. Ultrasound can be the test of choice for evaluating the kidney and perirenal tissue, as it is real-time, widely available, and free of ionizing radiation. To overcome substantial interobserver variability in renal ultrasound interpretation, artificial intelligence (AI) has the potential to help radiologists make clinical decisions. This review introduces the applications of AI in renal ultrasound, including automatic segmentation of the kidney, measurement of renal volume, prediction of kidney function, and diagnosis of kidney diseases. The advantages and disadvantages of these applications are also presented to guide clinicians conducting research. Finally, the challenges and future perspectives of AI are discussed.
Affiliation(s)
- Tong Xu: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xian-Ya Zhang: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Na Yang: Department of Ultrasound, Affiliated Hospital of Jilin Medical College, Jilin, China
- Fan Jiang: Department of Medical Ultrasound, The Second Hospital of Anhui Medical University, Hefei, China
- Gong-Quan Chen: Department of Medical Ultrasound, Minda Hospital of Hubei Minzu University, Enshi, China
- Xiao-Fang Pan: Health Medical Department, Dalian Municipal Central Hospital, Dalian, China
- Yue-Xiang Peng: Department of Ultrasound, Wuhan Third Hospital, Tongren Hospital of Wuhan University, Wuhan, China
- Xin-Wu Cui: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
42
Xin H, Zhang Y, Lai Q, Liao N, Zhang J, Liu Y, Chen Z, He P, He J, Liu J, Zhou Y, Yang W, Zhou Y. Automatic origin prediction of liver metastases via hierarchical artificial-intelligence system trained on multiphasic CT data: a retrospective, multicentre study. EClinicalMedicine 2024; 69:102464. [PMID: 38333364 PMCID: PMC10847157 DOI: 10.1016/j.eclinm.2024.102464] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Revised: 12/22/2023] [Accepted: 01/17/2024] [Indexed: 02/10/2024] Open
Abstract
Background Currently, diagnostic testing for the primary origin of liver metastases (LMs) can be laborious, complicating clinical decision-making. Directly classifying the primary origin of LMs from computed tomography (CT) images has proven challenging, despite its potential to streamline the entire diagnostic workflow. Methods We developed ALMSS, an artificial intelligence (AI)-based LMs screening system that provides automated analysis of liver contrast-enhanced CT (CECT) to distinguish LMs from hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC), and to subtype the primary origin of LMs into six organ systems. We processed a CECT dataset acquired between January 1, 2013 and June 30, 2022 (n = 3105: 840 HCC, 354 ICC, and 1911 LMs) for training and internally testing ALMSS, and two additional cohorts (n = 622) for external validation of its diagnostic performance. The performance of radiologists with and without the assistance of ALMSS in diagnosing and subtyping LMs was assessed. Findings ALMSS achieved average areas under the curve (AUC) of 0.917 (95% confidence interval [CI]: 0.899-0.931) and 0.923 (95% CI: 0.905-0.937) for differentiating LMs, HCC, and ICC on the internal and external testing sets, respectively, outperforming both radiologists. Moreover, ALMSS yielded average AUCs of 0.815 (95% CI: 0.794-0.836) and 0.818 (95% CI: 0.790-0.842) for predicting the six primary origins on the two testing sets. Interestingly, ALMSS assigned origin diagnoses for LMs with pathological phenotypes beyond the training categories with an average AUC of 0.761 (95% CI: 0.657-0.842), verifying the model's diagnostic expandability. Interpretation Our study established an AI-based diagnostic system that effectively identifies and characterizes LMs directly from multiphasic CT images. Funding National Natural Science Foundation of China, Guangdong Provincial Key Laboratory of Medical Image Processing.
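The hierarchical design can be pictured as a two-stage routing rule: a first classifier separates HCC, ICC, and LMs, and only scans classified as LMs receive a second, origin-level prediction. A toy sketch with hypothetical class probabilities (the class names below are illustrative, not the paper's exact labels):

```python
def hierarchical_predict(stage1_probs, stage2_probs):
    """Two-stage routing: pick the tumour class first; only LMs get an origin prediction."""
    tumour = max(stage1_probs, key=stage1_probs.get)
    if tumour != "LM":
        return tumour, None          # HCC/ICC: no origin to predict
    origin = max(stage2_probs, key=stage2_probs.get)
    return tumour, origin

# hypothetical softmax outputs for one scan
t, o = hierarchical_predict(
    {"HCC": 0.1, "ICC": 0.2, "LM": 0.7},
    {"gastrointestinal": 0.5, "lung": 0.2, "breast": 0.3},
)
# t == "LM", o == "gastrointestinal"
```

Splitting the problem this way keeps the harder six-way origin decision conditioned on the easier three-way tumour-type decision, which is consistent with the different AUC levels reported for the two stages.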
Affiliation(s)
- Hongjie Xin: Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Yiwen Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Qianwei Lai: Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Naying Liao: Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Jing Zhang: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Yanping Liu: Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China; Department of Gastroenterology, The Second Affiliated Hospital, University of South China, Hengyang, China
- Zhihua Chen: Department of Radiology, The Second Affiliated Hospital, University of South China, Hengyang, China
- Pengyuan He: Department of Infectious Diseases, The Fifth Affiliated Hospital, Sun Yat-sen University, Zhuhai, China
- Jian He: Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Junwei Liu: Department of Infectious Diseases and Hepatology Unit, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Yuchen Zhou: Department of General Surgery, Cancer Center, Integrated Hospital of Traditional Chinese Medicine, Southern Medical University, Guangzhou, China
- Wei Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Yuanping Zhou: Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
43
Charters JA, Luximon D, Petragallo R, Neylon J, Low DA, Lamb JM. Automated detection of vertebral body misalignments in orthogonal kV and MV guided radiotherapy: application to a comprehensive retrospective dataset. Biomed Phys Eng Express 2024; 10:025039. [PMID: 38382110 DOI: 10.1088/2057-1976/ad2baa] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Accepted: 02/21/2024] [Indexed: 02/23/2024]
Abstract
Objective. In image-guided radiotherapy (IGRT), off-by-one vertebral body misalignments are rare but potentially catastrophic. In this study, a novel detection method for such misalignments in IGRT was investigated using densely-connected convolutional networks (DenseNets) for applications towards real-time error prevention and retrospective error auditing. Approach. A total of 4213 images acquired from 527 radiotherapy patients aligned with planar kV or MV radiographs were used to develop and test error-detection software modules. Digitally reconstructed radiographs (DRRs) and setup images were retrieved and co-registered according to the clinically applied alignment contained in the DICOM REG files. A semi-automated algorithm was developed to simulate patient positioning errors on anterior-posterior (AP) and lateral (LAT) images shifted by one vertebral body. A DenseNet architecture was designed to classify either AP images individually or AP and LAT image pairs. Receiver-operator characteristic (ROC) curves and areas under the curves (AUC) were computed to evaluate the classifiers on test subsets. Subsequently, the algorithm was applied to the entire dataset to retrospectively determine the absolute off-by-one vertebral body error rate for planar radiograph guided RT at our institution from 2011-2021. Main results. The AUCs for the kV models were 0.98 for unpaired AP and 0.99 for paired AP-LAT. The AUC for the MV AP model was 0.92. For a specificity of 95%, the paired kV model achieved a sensitivity of 99%. Application of the model to the entire dataset yielded a per-fraction off-by-one vertebral body error rate of 0.044% [0.0022%, 0.21%] for paired kV IGRT, including one previously unreported error. Significance. Our error detection algorithm classified vertebral body positioning errors with sufficient accuracy for retrospective quality control and real-time error prevention. The reported positioning error rate for planar radiograph IGRT is unique in being determined independently of an error reporting system.
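Reporting sensitivity at a fixed 95% specificity amounts to picking the decision threshold from the negatives' score distribution and then scoring the positives against it; a sketch with synthetic scores (not the study's data):

```python
import math

def sensitivity_at_specificity(pos, neg, target_spec=0.95):
    """Threshold such that >= target_spec of negatives score below it; return (sensitivity, threshold)."""
    s = sorted(neg)
    k = math.ceil(target_spec * len(s))  # negatives that must score below the threshold
    thr = s[k - 1] + 1e-9                # just above the k-th smallest negative score
    return sum(p >= thr for p in pos) / len(pos), thr

# synthetic scores (hypothetical, not the study's data)
neg = [i / 100 for i in range(100)]      # 100 negative-class scores spread over [0, 0.99]
pos = [0.5, 0.9, 0.95, 0.97, 0.99]       # 5 positive-class scores
sens, thr = sensitivity_at_specificity(pos, neg)  # sens == 0.6 with these scores
```

With ties and finite test sets, the achieved specificity is the smallest value at or above the target, which is why published operating points are usually quoted as "at 95% specificity" rather than exactly 95%.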
Affiliation(s)
- John A Charters: Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- Dishane Luximon: Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- Rachel Petragallo: Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- Jack Neylon: Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- Daniel A Low: Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- James M Lamb: Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
44
Cai F, Wen J, He F, Xia Y, Xu W, Zhang Y, Jiang L, Li J. SC-Unext: A Lightweight Image Segmentation Model with Cellular Mechanism for Breast Ultrasound Tumor Diagnosis. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01042-9. [PMID: 38424276 DOI: 10.1007/s10278-024-01042-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Revised: 01/13/2024] [Accepted: 02/05/2024] [Indexed: 03/02/2024]
Abstract
Automatic breast ultrasound image segmentation plays an important role in medical image processing. However, current methods for breast ultrasound segmentation suffer from high computational complexity and large model parameters, particularly when dealing with complex images. In this paper, we take the Unext network as a basis and utilize its encoder-decoder features. Taking inspiration from the mechanisms of cellular apoptosis and division, we design apoptosis and division algorithms to improve model performance, and propose a novel segmentation model that integrates these algorithms and introduces spatial and channel convolution blocks. Our proposed model not only improves the segmentation of breast ultrasound tumors but also reduces model parameters and computation time. The model was evaluated on a public breast ultrasound image dataset and on our collected dataset. Experiments show that the SC-Unext model achieved a Dice score of 75.29% and an accuracy of 97.09% on the BUSI dataset, and a Dice score of 90.62% and an accuracy of 98.37% on the collected dataset. We also compared the model's inference speed on CPUs to verify its efficiency in resource-constrained environments: SC-Unext achieved an inference speed of 92.72 ms per instance on devices equipped only with CPUs. The model has 1.46M parameters and requires 2.13 GFLOPs, both lower than comparable network models. Due to its lightweight nature, the model holds significant value for practical applications in the medical field.
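The per-instance CPU latency quoted above is the kind of figure a small timing harness yields; a generic sketch that times a placeholder function standing in for the model's forward pass (not SC-Unext itself):

```python
import time

def ms_per_instance(fn, inputs, warmup=2, repeats=5):
    """Average wall-clock milliseconds per call, after warm-up runs to amortise caches/JIT."""
    for x in inputs[:warmup]:
        fn(x)
    t0 = time.perf_counter()
    n = 0
    for _ in range(repeats):
        for x in inputs:
            fn(x)
            n += 1
    return (time.perf_counter() - t0) * 1000 / n

# placeholder "model": a sum of squares stands in for a forward pass
ms = ms_per_instance(lambda v: sum(x * x for x in v), [list(range(1000))] * 4)
```

Warm-up runs and averaging over repeats matter on CPUs, where the first calls can be dominated by memory allocation and cache effects rather than steady-state compute.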
Affiliation(s)
- Fenglin Cai: Department of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, People's Republic of China
- Jiaying Wen: Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Fangzhou He: Department of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, People's Republic of China
- Yulong Xia: Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Weijun Xu: Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Yong Zhang: Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Li Jiang: Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China.
- Jie Li: Department of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, People's Republic of China.
45
Li Y, Pillar N, Li J, Liu T, Wu D, Sun S, Ma G, de Haan K, Huang L, Zhang Y, Hamidi S, Urisman A, Keidar Haran T, Wallace WD, Zuckerman JE, Ozcan A. Virtual histological staining of unlabeled autopsy tissue. Nat Commun 2024; 15:1684. [PMID: 38396004 PMCID: PMC10891155 DOI: 10.1038/s41467-024-46077-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Accepted: 02/09/2024] [Indexed: 02/25/2024] Open
Abstract
Traditional histochemical staining of post-mortem samples often suffers from inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, and chemical staining procedures covering large tissue areas demand substantial labor, cost, and time. Here, we demonstrate virtual staining of autopsy tissue using a trained neural network to rapidly transform autofluorescence images of label-free autopsy tissue sections into brightfield equivalent images, matching hematoxylin and eosin (H&E) stained versions of the same samples. The trained model can effectively accentuate nuclear, cytoplasmic and extracellular features in new autopsy tissue samples that experienced severe autolysis, such as previously unseen COVID-19 samples, where traditional histochemical staining fails to provide consistent staining quality. This virtual autopsy staining technique provides a rapid and resource-efficient solution for generating artifact-free H&E stains despite severe autolysis and cell death, while reducing the labor, cost, and infrastructure requirements associated with standard histochemical staining.
Affiliation(s)
- Yuzhu Li: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA; Bioengineering Department, University of California, Los Angeles, CA, 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Nir Pillar: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA; Bioengineering Department, University of California, Los Angeles, CA, 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingxi Li: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA; Bioengineering Department, University of California, Los Angeles, CA, 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Tairan Liu: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA; Bioengineering Department, University of California, Los Angeles, CA, 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Di Wu: Computer Science Department, University of California, Los Angeles, CA, 90095, USA
- Songyu Sun: Computer Science Department, University of California, Los Angeles, CA, 90095, USA
- Guangdong Ma: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA; School of Physics, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Kevin de Haan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA; Bioengineering Department, University of California, Los Angeles, CA, 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Luzhe Huang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA; Bioengineering Department, University of California, Los Angeles, CA, 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Yijie Zhang: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA; Bioengineering Department, University of California, Los Angeles, CA, 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Sepehr Hamidi: Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
- Anatoly Urisman: Department of Pathology, University of California, San Francisco, CA, 94143, USA
- Tal Keidar Haran: Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, 91120, Israel
- William Dean Wallace: Department of Pathology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
- Jonathan E Zuckerman: Department of Pathology and Laboratory Medicine, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan: Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA; Bioengineering Department, University of California, Los Angeles, CA, 90095, USA; California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA; Department of Surgery, University of California, Los Angeles, CA, 90095, USA.
46
Zeineldin RA, Karar ME, Burgert O, Mathis-Ullrich F. NeuroIGN: Explainable Multimodal Image-Guided System for Precise Brain Tumor Surgery. J Med Syst 2024; 48:25. [PMID: 38393660 DOI: 10.1007/s10916-024-02037-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Accepted: 02/03/2024] [Indexed: 02/25/2024]
Abstract
Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring they are displayed with high precision relative to a virtual patient model. This work therefore focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also rated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper not only describes the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential of deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.
Affiliation(s)
- Ramy A Zeineldin: Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany; Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany; Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt.
- Mohamed E Karar: Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Oliver Burgert: Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich: Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany
47
Xiao H, Fang W, Lin M, Zhou Z, Fei H, Chen C. [A multiscale carotid plaque detection method based on two-stage analysis]. NAN FANG YI KE DA XUE XUE BAO = JOURNAL OF SOUTHERN MEDICAL UNIVERSITY 2024; 44:387-396. [PMID: 38501425 PMCID: PMC10954526 DOI: 10.12122/j.issn.1673-4254.2024.02.22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 03/20/2024]
Abstract
OBJECTIVE To develop a method for accurate identification of multiscale carotid plaques in ultrasound images. METHODS We proposed a two-stage carotid plaque detection method based on a deep convolutional neural network (SM-YOLO). A series of preprocessing algorithms, including median filtering, histogram equalization, and gamma transformation, were applied to the dataset to improve image quality. In the first stage of model construction, a candidate plaque set was built based on the YOLOX_l object detection network, using multiscale image training and multiscale image prediction strategies to accommodate carotid plaques of different shapes and sizes. In the second stage, Histogram of Oriented Gradients (HOG) features and Local Binary Pattern (LBP) features were extracted and fused, and a Support Vector Machine (SVM) classifier was used to screen the candidate plaque set and obtain the final detection results. The model was compared quantitatively and visually with several object detection models (YOLOX_l, SSD, EfficientDet, YOLOv5_l, Faster R-CNN). RESULTS SM-YOLO achieved a recall of 89.44%, an accuracy of 90.96%, an F1-score of 90.19%, and an AP of 92.70% on the test set, outperforming the other models in all performance indicators and in visual comparison. The constructed model had a much shorter detection time than the Faster R-CNN model (only one third of that of the latter), thus meeting the requirements of real-time detection. CONCLUSION The proposed carotid plaque detection method performs well for accurate identification of carotid plaques in ultrasound images.
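The second-stage screening described in this abstract (fused HOG and LBP descriptors fed to an SVM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size, HOG/LBP parameters, and the synthetic striped "candidate" patches are all assumptions made for demonstration.

```python
# Illustrative sketch of a second-stage candidate screener: concatenate HOG and
# uniform-LBP histogram features from a grayscale patch and classify with an SVM.
# All parameters and the synthetic patches are assumptions for demonstration.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def hog_lbp_features(patch, lbp_points=8, lbp_radius=1):
    """Extract and fuse HOG and LBP-histogram descriptors from a uint8 patch."""
    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    lbp = local_binary_pattern(patch, lbp_points, lbp_radius, method="uniform")
    n_bins = lbp_points + 2  # uniform LBP yields P + 2 distinct codes
    lbp_hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return np.concatenate([hog_vec, lbp_hist])

def make_patch(vertical, rng):
    """Synthetic 64x64 stand-in for a candidate region (striped texture + noise)."""
    img = np.zeros((64, 64))
    if vertical:
        img[:, ::4] = 1.0
    else:
        img[::4, :] = 1.0
    img = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)
    return (img * 255).astype(np.uint8)

rng = np.random.default_rng(0)
patches = [make_patch(v, rng) for v in [True] * 10 + [False] * 10]
X = np.array([hog_lbp_features(p) for p in patches])
y = np.array([1] * 10 + [0] * 10)  # 1 = keep candidate, 0 = reject

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:1])[0], clf.predict(X[-1:])[0])
```

The design choice mirrored here is late hand-crafted-feature verification: the expensive detector proposes regions, and a cheap classical classifier prunes false positives.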
Affiliation(s)
- H Xiao
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- W Fang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- M Lin
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Z Zhou
- Guangzhou Shangyi Network Information Technology Co., Ltd., Guangzhou 510515, China
- H Fei
- Guangdong Provincial People's Hospital Affiliated to Southern Medical University, Guangzhou 510180, China
- C Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
48
Zhang M, Ye Z, Yuan E, Lv X, Zhang Y, Tan Y, Xia C, Tang J, Huang J, Li Z. Imaging-based deep learning in kidney diseases: recent progress and future prospects. Insights Imaging 2024; 15:50. [PMID: 38360904 PMCID: PMC10869329 DOI: 10.1186/s13244-024-01636-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2023] [Accepted: 01/27/2024] [Indexed: 02/17/2024] Open
Abstract
Kidney diseases result from various causes, which can generally be divided into neoplastic and non-neoplastic diseases. Deep learning based on medical imaging is an established methodology for further data mining and an evolving field of expertise, which provides the possibility for precise management of kidney diseases. Recently, imaging-based deep learning has been widely applied to many clinical scenarios of kidney diseases including organ segmentation, lesion detection, differential diagnosis, surgical planning, and prognosis prediction, which can provide support for disease diagnosis and management. In this review, we introduce the basic methodology of imaging-based deep learning and its recent clinical applications in neoplastic and non-neoplastic kidney diseases. Additionally, we discuss its current challenges and future prospects and conclude that achieving data balance, addressing heterogeneity, and managing data size remain challenges for imaging-based deep learning. Meanwhile, the interpretability of algorithms, ethical risks, and barriers to bias assessment are also issues that require consideration in future development. We hope to provide urologists, nephrologists, and radiologists with clear ideas about imaging-based deep learning and to reveal its great potential in clinical practice.
Critical relevance statement: The wide clinical applications of imaging-based deep learning in kidney diseases can help doctors to diagnose, treat, and manage patients with neoplastic or non-neoplastic renal diseases.
Key points:
• Imaging-based deep learning is widely applied to neoplastic and non-neoplastic renal diseases.
• Imaging-based deep learning improves the accuracy of the delineation, diagnosis, and evaluation of kidney diseases.
• Small datasets, variable lesion sizes, and related factors remain challenges for deep learning.
Affiliation(s)
- Meng Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Medical Equipment Innovation Research Center, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Med+X Center for Manufacturing, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Zheng Ye
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Enyu Yuan
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Xinyang Lv
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Yiteng Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Yuqi Tan
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Chunchao Xia
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Jing Tang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Jin Huang
- Medical Equipment Innovation Research Center, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Med+X Center for Manufacturing, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Zhenlin Li
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
49
Takahashi K, Ozawa E, Shimakura A, Mori T, Miyaaki H, Nakao K. Recent Advances in Endoscopic Ultrasound for Gallbladder Disease Diagnosis. Diagnostics (Basel) 2024; 14:374. [PMID: 38396413 PMCID: PMC10887964 DOI: 10.3390/diagnostics14040374] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Revised: 02/01/2024] [Accepted: 02/05/2024] [Indexed: 02/25/2024] Open
Abstract
Gallbladder (GB) disease is classified into two broad categories: GB wall-thickening and protuberant lesions, which include various entities such as adenomyomatosis, cholecystitis, GB polyps, and GB carcinoma. This review summarizes recent advances in the differential diagnosis of GB lesions, focusing primarily on endoscopic ultrasound (EUS) and related technologies. Fundamental B-mode EUS and contrast-enhanced harmonic EUS (CH-EUS) have been reported to be useful for the diagnosis of GB diseases because they can evaluate the thickening of the GB wall and protuberant lesions in detail. We also outline the current status of EUS-guided fine-needle aspiration (EUS-FNA) for GB lesions, as there have been scattered reports on EUS-FNA in recent years. Furthermore, artificial intelligence (AI) technologies, ranging from machine learning to deep learning, have become popular in healthcare for disease diagnosis, drug discovery, drug development, and patient risk identification. In this review, we outline the current status of AI in the diagnosis of GB diseases.
Affiliation(s)
- Kosuke Takahashi
- Department of Gastroenterology and Hepatology, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki 852-8501, Japan; (E.O.); (T.M.); (H.M.); (K.N.)
50
Tan Y, Feng LJ, Huang YH, Xue JW, Long LL, Feng ZB. A comprehensive radiopathological nomogram for the prediction of pathological staging in gastric cancer using CT-derived and WSI-based features. Transl Oncol 2024; 40:101864. [PMID: 38141376 PMCID: PMC10788295 DOI: 10.1016/j.tranon.2023.101864] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2023] [Revised: 12/08/2023] [Accepted: 12/12/2023] [Indexed: 12/25/2023] Open
Abstract
OBJECTIVE This study aims to develop and validate an innovative radiopathomics model that combines radiomics and pathomics features to effectively differentiate between stages I-II and stage III gastric cancer (pathological staging). METHODS Our study included 200 patients with well-defined stages of gastric cancer, divided into a training cohort (n = 140) and a test cohort (n = 60). Radiomics features were extracted from contrast-enhanced CT images using PyRadiomics, while pathomics features were obtained from whole slide images of pathological specimens through a fine-tuned deep learning model (ResNet-18). After rigorous feature dimensionality reduction and selection, we constructed radiomics models (SVM_rad, LR_rad, and MLP_rad) and pathomics models (SVM_path, LR_path, and MLP_path) utilizing support vector machine (SVM), logistic regression (LR), and multilayer perceptron (MLP) algorithms. The optimal radiomics and pathomics models were chosen based on comprehensive evaluation criteria such as ROC curves, Hosmer-Lemeshow tests, and calibration curve tests. Feature patterns extracted from the best-performing radiomics model (MLP_rad) and pathomics model (SVM_path) were integrated to create a powerful radiopathomics nomogram. RESULTS From a pool of 1834 radiomics features extracted from CT images, 14 were selected to construct the radiomics models. Among these, the MLP_rad model exhibited the most robust predictive performance (AUC, training cohort: 0.843; test cohort: 0.797). Likewise, 10 pathomics features were chosen from the 512 extracted from whole slide images to build the pathomics models, with the SVM_path model demonstrating the highest predictive efficiency (AUC, training cohort: 0.937; test cohort: 0.792). The combined radiopathomics nomogram model exhibited optimal discriminative ability (AUC, training cohort: 0.951; test cohort: 0.837), as confirmed by decision curve analysis (DCA), which indicated superior clinical effectiveness.
CONCLUSION This study presents a cutting-edge radiopathomics nomogram model designed to predict pathological staging in gastric cancer, distinguishing between stages I-II and stage III. Our research leverages preoperative CT images and histopathological slides to forecast gastric cancer staging accurately, potentially facilitating the estimation of staging before radical gastric cancer surgery in the future.
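The fusion idea behind the nomogram described above (a radiomics MLP and a pathomics SVM whose outputs feed a combined model) can be sketched as a simple late-fusion stack. The feature counts below match those reported in the abstract (14 radiomics, 10 pathomics features; 140/60 train-test split), but the synthetic data, hyperparameters, and probability-stacking scheme are assumptions for illustration, not the authors' pipeline.

```python
# Minimal late-fusion sketch: train a radiomics MLP and a pathomics SVM, then
# combine their predicted probabilities with logistic regression. Synthetic
# data only; the real study used PyRadiomics and ResNet-18 features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# 24 synthetic features standing in for 14 radiomics + 10 pathomics features.
X, y = make_classification(n_samples=200, n_features=24, n_informative=12,
                           random_state=0)
X_rad, X_path = X[:, :14], X[:, 14:]
idx_train, idx_test = train_test_split(np.arange(200), test_size=60,
                                       stratify=y, random_state=0)

mlp_rad = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0).fit(X_rad[idx_train], y[idx_train])
svm_path = SVC(probability=True, random_state=0).fit(X_path[idx_train],
                                                     y[idx_train])

def fused(idx):
    """Stack the two models' positive-class probabilities as meta-features."""
    return np.column_stack([mlp_rad.predict_proba(X_rad[idx])[:, 1],
                            svm_path.predict_proba(X_path[idx])[:, 1]])

nomogram = LogisticRegression().fit(fused(idx_train), y[idx_train])
auc = roc_auc_score(y[idx_test], nomogram.predict_proba(fused(idx_test))[:, 1])
print(f"test AUC: {auc:.3f}")
```

Training each modality-specific model separately and fusing only their outputs keeps the combined model interpretable as a two-variable logistic score, which is what makes a nomogram presentation possible.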
Affiliation(s)
- Yang Tan
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, PR China
- Li-Juan Feng
- Department of Radiology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, PR China
- Ying-He Huang
- Department of Pathology, The First Affiliated Hospital of Guangxi University of Chinese Medicine, Nanning, Guangxi, PR China
- Jia-Wen Xue
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, PR China
- Li-Ling Long
- Department of Radiology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, PR China; Key Laboratory of Early Prevention and Treatment for Regional High Frequency Tumor, Guangxi Medical University, Ministry of Education, Nanning, Guangxi, PR China; Guangxi Key Laboratory of Immunology and Metabolism for Liver Diseases, Nanning, Guangxi, PR China
- Zhen-Bo Feng
- Department of Pathology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, PR China