1
Fallahpoor M, Nguyen D, Montahaei E, Hosseini A, Nikbakhtian S, Naseri M, Salahshour F, Farzanefar S, Abbasi M. Segmentation of liver and liver lesions using deep learning. Phys Eng Sci Med 2024; 47:611-619. [PMID: 38381270] [DOI: 10.1007/s13246-024-01390-4]
Abstract
Segmentation of organs and lesions can be employed for dosimetry in nuclear medicine, assisted image interpretation, and mass image-processing studies. Deep learning-based liver and liver lesion segmentation on clinical 3D MRI data has not been fully addressed in previous work. To this end, the required data were collected from 128 patients, including their T1w and T2w MRI images, and ground-truth labels of the liver and liver lesions were generated. The collection of 110 T1w-T2w MRI image sets was divided, with 94 designated for training and 16 for validation. A further 18 datasets were separately allocated as hold-out test data. The T1w and T2w MRI images were preprocessed into a two-channel format to serve as inputs to a deep learning model based on the Isensee 2017 network. To calculate the final Dice coefficient of the network performance on the test datasets, the binary average of the T1w and T2w predicted images was used. The deep learning model could segment all 18 test cases, with an average Dice coefficient of 88% for the liver and 53% for the liver tumor. Liver segmentation was carried out with rather high accuracy; this could be useful for liver dosimetry during systemic or selective radiation therapies as well as for attenuation correction in PET/MRI scanners. Nevertheless, the delineation of liver lesions was not optimal; therefore, tumor detection on clinical data was not practical with the proposed method.
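The fusion step described above (binarizing the average of the T1w and T2w predictions before Dice scoring) can be sketched as follows; the 0.5 fusion threshold and the toy 1-D masks are illustrative assumptions, not the authors' code:

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def fuse_binary_average(pred_t1w, pred_t2w, threshold=0.5):
    """Average the two channel predictions, then re-binarize (threshold assumed)."""
    return (pred_t1w.astype(float) + pred_t2w.astype(float)) / 2.0 >= threshold

# Toy 1-D "masks" standing in for 3-D volumes
truth = np.array([0, 1, 1, 1, 0])
p1 = np.array([0, 1, 1, 0, 0])
p2 = np.array([0, 1, 1, 1, 1])
fused = fuse_binary_average(p1, p2)   # voxel kept where the averaged vote >= 0.5
print(round(dice(fused, truth), 3))   # prints 0.857
```

The same function evaluated per organ yields the per-structure averages quoted in the abstract.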
Affiliation(s)
- Maryam Fallahpoor
- Department of Nuclear Medicine, Vali-Asr Hospital, Tehran University of Medical Sciences, 1419731351, Tehran, Iran
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 75390, Dallas, TX, USA
- Ehsan Montahaei
- Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
- Ali Hosseini
- Department of Nuclear Medicine, Vali-Asr Hospital, Tehran University of Medical Sciences, 1419731351, Tehran, Iran
- Shahram Nikbakhtian
- Department of Artificial Intelligence and Machine Learning, Human Digital Healthcare, London, UK
- Maryam Naseri
- Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA, USA
- Faeze Salahshour
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Liver Transplantation Research Center, Imam-Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Saeed Farzanefar
- Department of Nuclear Medicine, Vali-Asr Hospital, Tehran University of Medical Sciences, 1419731351, Tehran, Iran
- Mehrshad Abbasi
- Department of Nuclear Medicine, Vali-Asr Hospital, Tehran University of Medical Sciences, 1419731351, Tehran, Iran
2
Ding X, Huang Y, Zhao Y, Tian X, Feng G, Gao Z. Transfer learning for anatomical structure segmentation in otorhinolaryngology microsurgery. Int J Med Robot 2024; 20:e2634. [PMID: 38767083] [DOI: 10.1002/rcs.2634]
Abstract
BACKGROUND Reducing the annotation burden is an active and meaningful area of artificial intelligence (AI) research. METHODS Multiple datasets for the segmentation of two landmarks were constructed based on 41 257 labelled images from 6 different microsurgical scenarios. These datasets were trained using a multi-stage transfer learning (TL) methodology. RESULTS Multi-stage TL enhanced segmentation performance over the baseline (mIoU 0.6892 vs. 0.8869). Moreover, Convolutional Neural Networks (CNNs) achieved robust performance (mIoU 0.8917 vs. 0.8603) even when the training dataset size was reduced from 90% (30 078 images) to 10% (3342 images). When the weights from one surgical scenario were applied directly to recognise the same target in images of other scenarios without training, CNNs still obtained a best mIoU of 0.6190 ± 0.0789. CONCLUSIONS Model performance can be improved with TL in datasets of reduced size and increased complexity. Data-based domain adaptation among different microsurgical fields is feasible.
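For reference, the mIoU metric quoted throughout this abstract is conventionally computed per class and then averaged; a minimal sketch (the toy label maps are invented for illustration):

```python
import numpy as np

def mean_iou(pred, truth, num_classes):
    """Mean intersection-over-union across classes present in pred or truth."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent from both masks: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred  = np.array([[0, 0, 1], [1, 1, 2]])
truth = np.array([[0, 0, 1], [1, 2, 2]])
print(round(mean_iou(pred, truth, 3), 4))   # prints 0.7222
```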
Affiliation(s)
- Xin Ding
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Yu Huang
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Yang Zhao
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Xu Tian
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Guodong Feng
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Zhiqiang Gao
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
3
Zhou Z, Wang S, Zhang S, Pan X, Yang H, Zhuang Y, Lu Z. Deep learning-based spinal canal segmentation of computed tomography image for disease diagnosis: A proposed system for spinal stenosis diagnosis. Medicine (Baltimore) 2024; 103:e37943. [PMID: 38701305] [PMCID: PMC11062721] [DOI: 10.1097/md.0000000000037943]
Abstract
BACKGROUND Lumbar disc herniation has long been regarded as an age-related degenerative disease. Nevertheless, emerging reports highlight a discernible shift, illustrating the prevalence of these conditions among younger individuals. METHODS This study introduces a novel deep learning methodology tailored for spinal canal segmentation and disease diagnosis, emphasizing image-processing techniques that exploit essential image attributes such as gray levels, texture, and statistical structure to refine segmentation accuracy. RESULTS Analysis reveals a progressive increase in the size of vertebrae and intervertebral discs from the cervical to the lumbar region. Vertebrae, which bear weight and safeguard the spinal cord and nerves, are interconnected by intervertebral discs, resilient structures that counteract spinal pressure. Experimental findings demonstrate a lack of pronounced anteroposterior bending during flexion and extension, with displacement and rotation angles remaining consistently close to zero. This consistency maintains uniform anterior and posterior vertebral heights, coupled with parallel intervertebral disc heights, in line with theoretical expectations. CONCLUSIONS Accuracy was assessed with two metrics, IoU and Dice: the average IoU is 88% and the average Dice is 96.4%. The proposed deep learning-based system shows promising results in spinal canal segmentation, laying a foundation for precise stenosis diagnosis in computed tomography images and contributing to advancements in the understanding and treatment of spinal pathology.
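The gap between the two reported averages (IoU 88% vs. Dice 96.4%) is expected: for any single pair of masks, Dice and IoU (Jaccard index J) are monotonically related by Dice = 2J/(1+J), so Dice is never the smaller of the two. A minimal sketch with invented masks:

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union (Jaccard index) of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

def dice_from_iou(j):
    # For the same pair of masks, Dice = 2J / (1 + J)
    return 2 * j / (1 + j)

pred  = np.array([1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 0])
j = iou(pred, truth)          # intersection 2, union 4 -> J = 0.5
print(j, dice_from_iou(j))    # Dice = 2 * 0.5 / 1.5, i.e. about 0.667
```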
Affiliation(s)
- Zhiyi Zhou
- Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Shenjun Wang
- Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Shujun Zhang
- Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Xiang Pan
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China
- Haoxia Yang
- Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Yin Zhuang
- Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
- Zhengfeng Lu
- Department of Orthopaedics, Wuxi The Ninth People’s Hospital Affiliated to Soochow University, Wuxi, China
4
Li Y, Lai C, Wang M, Wu J, Li Y, Peng H, Qu L. Automated segmentation and recognition of C. elegans whole-body cells. Bioinformatics 2024; 40:btae324. [PMID: 38775410] [PMCID: PMC11139520] [DOI: 10.1093/bioinformatics/btae324]
Abstract
MOTIVATION Accurate segmentation and recognition of C. elegans cells are critical for various biological studies, including gene expression, cell lineage, and cell fate analysis at the single-cell level. However, the highly dense distribution, similar shapes, and inhomogeneous intensity profiles of whole-body cells in 3D fluorescence microscopy images make automatic cell segmentation and recognition a challenging task. Existing methods either rely on additional fiducial markers or handle only a subset of cells. Given the difficulty or expense of generating fiducial features in many experimental settings, a marker-free approach capable of reliably segmenting and recognizing C. elegans whole-body cells is highly desirable. RESULTS We report a new pipeline, called automated segmentation and recognition (ASR) of cells, and apply it to 3D fluorescence microscopy images of L1-stage C. elegans with 558 whole-body cells. A novel displacement-vector-field-based deep learning model is proposed to address the problem of reliably segmenting highly crowded cells with blurred boundaries. We then realize cell recognition by encoding and exploiting statistical priors on cell positions and structural similarities of neighboring cells. To the best of our knowledge, this is the first method successfully applied to the segmentation and recognition of C. elegans whole-body cells. The ASR segmentation module achieves an F1-score of 0.8956 on a dataset of 116 C. elegans image stacks with 64 728 cells (accuracy 0.9880, AJI 0.7813). Based on the segmentation results, the ASR recognition module achieved an average accuracy of 0.8879. We also show ASR's applicability to other cell types, e.g. Platynereis and rat kidney cells. AVAILABILITY AND IMPLEMENTATION The code is available at https://github.com/reaneyli/ASR.
Affiliation(s)
- Yuanyuan Li
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China
- Chuxiao Lai
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China
- Meng Wang
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China
- Jun Wu
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China
- Yongbin Li
- College of Life Sciences, Capital Normal University, Beijing 100048, China
- Hanchuan Peng
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Lei Qu
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230039, China
- Hefei National Laboratory, University of Science and Technology of China, Hefei, Anhui 230039, China
5
Bhimavarapu U, Chintalapudi N, Battineni G. Brain Tumor Detection and Categorization with Segmentation of Improved Unsupervised Clustering Approach and Machine Learning Classifier. Bioengineering (Basel) 2024; 11:266. [PMID: 38534540] [DOI: 10.3390/bioengineering11030266]
Abstract
Brain tumors are undoubtedly one of the leading causes of death in the world. A biopsy is considered the most important procedure in cancer diagnosis, but it comes with drawbacks, including low sensitivity, risks during the procedure, and a lengthy wait for results. Early identification provides patients with a better prognosis and reduces treatment costs. Conventional methods of identifying brain tumors depend on the skill of medical professionals, so there is a possibility of human error, and their labor-intensive nature makes healthcare resources expensive. A variety of imaging methods are available to detect brain tumors, including magnetic resonance imaging (MRI) and computed tomography (CT). Medical imaging research is being advanced by computer-aided diagnostic processes that enable visualization. Using clustering, automatic tumor segmentation leads to accurate tumor detection that reduces risk and supports effective treatment. This study proposes an improved Fuzzy C-Means (FCM) segmentation algorithm for MRI images. To reduce complexity, the most relevant shape, texture, and color features are selected. An improved Extreme Learning Machine classifies the tumors with 98.56% accuracy, 99.14% precision, and 99.25% recall. The proposed classifier consistently demonstrates higher accuracy across all tumor classes than existing models, with improvements ranging from 1.21% to 6.23%. This consistent enhancement emphasizes the robust performance of the proposed classifier and suggests its potential for more accurate and reliable brain tumor classification. The improved algorithm achieved accuracy, precision, and recall of 98.47%, 98.59%, and 98.74% on the Figshare dataset and 99.42%, 99.75%, and 99.28% on the Kaggle dataset, respectively, surpassing competing algorithms, particularly in detecting glioma grades. Compared to existing models, the proposed algorithm improves accuracy by approximately 5.39% on the Figshare dataset and 6.22% on the Kaggle dataset. Despite challenges, including artifacts and computational complexity, the study's commitment to refining the technique and addressing limitations positions the improved FCM model as a noteworthy advance in precise and efficient brain tumor identification.
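The paper's improved FCM variant is not public, but the textbook Fuzzy C-Means baseline it builds on alternates weighted-centroid and membership updates as sketched below; the 1-D intensities, cluster count, fuzzifier m, and iteration budget are all illustrative assumptions:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Textbook Fuzzy C-Means on 1-D intensities (not the paper's variant)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.size))
    U /= U.sum(axis=0)                        # memberships sum to 1 per pixel
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1)     # fuzzily weighted centroids
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centers, U

X = np.array([1.0, 1.1, 0.9, 8.0, 8.2, 7.9])  # two well-separated intensity clusters
centers, U = fcm(X)
print(sorted(np.round(centers, 1)))
```

On real MRI slices X would be the flattened intensity (or feature) vector, and the hardened memberships give the tumor/background partition.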
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522302, India
- Nalini Chintalapudi
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- Gopi Battineni
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
6
Cai F, Wen J, He F, Xia Y, Xu W, Zhang Y, Jiang L, Li J. SC-Unext: A Lightweight Image Segmentation Model with Cellular Mechanism for Breast Ultrasound Tumor Diagnosis. J Imaging Inform Med 2024. [PMID: 38424276] [DOI: 10.1007/s10278-024-01042-9]
Abstract
Automatic breast ultrasound image segmentation plays an important role in medical image processing. However, current methods for breast ultrasound segmentation suffer from high computational complexity and large model parameter counts, particularly when dealing with complex images. In this paper, we take the Unext network as a basis and utilize its encoder-decoder features. Taking inspiration from the mechanisms of cellular apoptosis and division, we design apoptosis and division algorithms to improve model performance. We propose a novel segmentation model that integrates the division and apoptosis algorithms and introduces spatial and channel convolution blocks. Our proposed model not only improves the segmentation of breast ultrasound tumors, but also reduces model parameters and computational resource consumption. The model was evaluated on a public breast ultrasound image dataset and on our collected dataset. The experiments show that the SC-Unext model achieved a Dice score of 75.29% and an accuracy of 97.09% on the BUSI dataset, and a Dice score of 90.62% and an accuracy of 98.37% on the collected dataset. Meanwhile, we compared the model's inference speed on CPUs to verify its efficiency in resource-constrained environments: SC-Unext achieved an inference time of 92.72 ms per instance on devices equipped only with CPUs. The model has 1.46 M parameters and requires 2.13 GFLOPs, both lower than competing network models. Due to its lightweight nature, the model holds significant value for various practical applications in the medical field.
Affiliation(s)
- Fenglin Cai
- Department of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, People's Republic of China
- Jiaying Wen
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Fangzhou He
- Department of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, People's Republic of China
- Yulong Xia
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Weijun Xu
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Yong Zhang
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Li Jiang
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Jie Li
- Department of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, People's Republic of China
7
Liu L, Wu K, Wang K, Han Z, Qiu J, Zhan Q, Wu T, Xu J, Zeng Z. SEU2-Net: multi-scale U2-Net with SE attention mechanism for liver occupying lesion CT image segmentation. PeerJ Comput Sci 2024; 10:e1751. [PMID: 38435550] [PMCID: PMC10909188] [DOI: 10.7717/peerj-cs.1751]
Abstract
Liver occupying lesions can profoundly impact an individual's health and well-being. To assist physicians in the diagnosis and treatment of abnormal areas in the liver, we propose a novel network named SEU2-Net, which introduces a channel attention mechanism into U2-Net for accurate and automatic segmentation of liver occupying lesions. We design the Residual U-block with Squeeze-and-Excitation (SE-RSU), which adds the Squeeze-and-Excitation (SE) attention mechanism at the residual connections of the Residual U-blocks (RSU, the component unit of U2-Net). SEU2-Net not only retains the advantages of U2-Net in capturing contextual information at multiple scales, but can also adaptively recalibrate channel feature responses to emphasize useful feature information via the channel attention mechanism. In addition, we present a new abdominal CT dataset for liver occupying lesion segmentation drawn from Peking University First Hospital's clinical data (PUFH dataset). We evaluate the proposed method against eight deep learning networks on the PUFH and Liver Tumor Segmentation Challenge (LiTS) datasets. The experimental results show that SEU2-Net delivers state-of-the-art performance and good robustness in liver occupying lesion segmentation.
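The SE recalibration that SE-RSU inserts at the residual connections follows the standard squeeze-excite pattern: global-average-pool each channel, pass the channel descriptor through a small bottleneck, and rescale the feature map channel-wise. A minimal numpy sketch (the layer sizes, random weights, and reduction ratio r are assumptions for illustration, not the paper's configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation on a feature map x of shape (C, H, W)."""
    z = x.mean(axis=(1, 2))                     # squeeze: per-channel descriptor, (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0))     # excitation: gates in (0, 1), (C,)
    return x * s[:, None, None]                 # channel-wise recalibration

C, H, W, r = 4, 3, 3, 2
rng = np.random.default_rng(0)
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))           # bottleneck reduces C by ratio r
w2 = rng.standard_normal((C, C // r))           # then restores it
y = se_block(x, w1, w2)
print(y.shape)                                  # same shape as the input
```

In SEU2-Net this gating is applied to the residual branch of each RSU rather than to a standalone tensor as here.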
Affiliation(s)
- Lizhuang Liu
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, China
- Kun Wu
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, China
- Ke Wang
- Radiology Department, Peking University First Hospital, Beijing, China
- Zhenqi Han
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, China
- Jianxing Qiu
- Radiology Department, Peking University First Hospital, Beijing, China
- Qiao Zhan
- Department of Infectious Diseases, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Tian Wu
- Department of Infectious Diseases, Peking University First Hospital, Beijing, China
- Jinghang Xu
- Department of Infectious Diseases, Peking University First Hospital, Beijing, China
- Zheng Zeng
- Department of Infectious Diseases, Peking University First Hospital, Beijing, China
8
Vanmathi P, Jose D. An ensemble-based serial cascaded attention network and improved variational auto encoder for breast cancer prognosis prediction using data. Comput Methods Biomech Biomed Engin 2024; 27:98-115. [PMID: 38006210] [DOI: 10.1080/10255842.2023.2280883]
Abstract
Breast cancer is one of the most common cancers in women and accounts for a large share of cancer deaths worldwide. Early recognition lessens its impact: it can convince patients to receive surgical therapy, which significantly improves the chance of recovery, whereas late recognition can lead to death. Machine learning techniques use patient data to find links between variables and to refine predictions for new cases, so an accurate predictive framework for breast cancer prognosis is urgently needed. To accomplish this objective, an adaptive ensemble model is proposed for breast cancer prognosis prediction using data. At the initial stage, the raw data are fetched from benchmark datasets, followed by data cleaning and preprocessing. Subsequently, the preprocessed data are fed into the Improved Variational Autoencoder (IVAE), where deep features are extracted. Finally, the resulting features are given as input to the Ensemble-based Serial Cascaded Attention Network (ESCANet), which is built with a Deep Temporal Convolution Network (DTCN), Bi-directional Long Short-Term Memory (BiLSTM), and a Recurrent Neural Network (RNN). The effectiveness of the model is validated and compared with conventional methodologies; the results show that the proposed methodology achieves strong performance and increases the system's efficiency.
Affiliation(s)
- P Vanmathi
- Full-time Research Scholar, Department of ECE, KCG College of Technology, Karapakkam, Chennai, Tamil Nadu, India
- Deepa Jose
- Professor, Department of ECE, KCG College of Technology, Karapakkam, Chennai, Tamil Nadu, India
9
Khoshkhabar M, Meshgini S, Afrouzian R, Danishvar S. Automatic Liver Tumor Segmentation from CT Images Using Graph Convolutional Network. Sensors (Basel) 2023; 23:7561. [PMID: 37688038] [PMCID: PMC10490641] [DOI: 10.3390/s23177561]
Abstract
Segmenting the liver and liver tumors in computed tomography (CT) images is an important step toward quantifiable biomarkers for computer-aided decision-making and precise medical diagnosis. Radiologists and specialized physicians use CT images to diagnose and classify the liver and its tumors. Because these structures have similar form, texture, and intensity values, other internal organs such as the heart, spleen, stomach, and kidneys complicate visual recognition of the liver and tumor delineation. Furthermore, visual identification of liver tumors is time-consuming, complicated, and error-prone, and incorrect diagnosis and segmentation can endanger the patient's life. Many automatic and semi-automatic methods based on machine learning algorithms have recently been suggested for liver recognition and tumor segmentation, but difficulties remain due to poor precision and speed and a lack of dependability. This paper presents a novel deep learning-based technique for segmenting liver tumors and identifying the liver in CT images. Based on the LiTS17 database, the suggested technique comprises four Chebyshev graph convolution layers and a fully connected layer that can accurately segment the liver and liver tumors. The accuracy, Dice coefficient, mean IoU, sensitivity, precision, and recall obtained by the proposed method on the LiTS17 dataset are around 99.1%, 91.1%, 90.8%, 99.4%, 99.4%, and 91.2%, respectively. In addition, the effectiveness of the proposed method was evaluated in a noisy environment, and the proposed network withstood a wide range of signal-to-noise ratios (SNRs); at SNR = -4 dB, the accuracy of the proposed method for liver segmentation remained around 90%. The proposed model obtained satisfactory and favorable results compared to previous research and is expected to assist radiologists and specialist doctors in the near future.
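A Chebyshev graph-convolution layer of the kind this abstract describes filters node features with polynomials of the rescaled graph Laplacian, so no explicit eigendecomposition of the graph is needed at inference time. A minimal sketch on an invented 3-node path graph (K = 3 and the filter coefficients theta are assumptions; the paper stacks four such layers with learned weights):

```python
import numpy as np

def cheb_conv(X, L_norm, theta):
    """One Chebyshev graph convolution: sum_k theta_k * T_k(L~) X,
    using the recurrence T_0 = I, T_1 = L~, T_k = 2 L~ T_{k-1} - T_{k-2}."""
    K = len(theta)
    Tx = [X, L_norm @ X]
    for _ in range(2, K):
        Tx.append(2 * L_norm @ Tx[-1] - Tx[-2])
    return sum(t * term for t, term in zip(theta, Tx[:K]))

# Tiny 3-node path graph
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                # combinatorial Laplacian
lmax = np.linalg.eigvalsh(L).max()
L_norm = 2 * L / lmax - np.eye(3)        # rescale eigenvalues into [-1, 1]
X = np.array([[1.0], [0.0], [0.0]])      # one feature per node
theta = [0.5, 0.3, 0.2]                  # K = 3 filter coefficients (assumed)
print(cheb_conv(X, L_norm, theta).ravel())
```

In the segmentation setting each CT voxel (or superpixel) is a node and X holds its intensity features; the learned theta play the role of convolution kernels.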
Affiliation(s)
- Maryam Khoshkhabar
- Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 51666-16471, Iran
- Saeed Meshgini
- Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz 51666-16471, Iran
- Reza Afrouzian
- Miyaneh Faculty of Engineering, University of Tabriz, Miyaneh 51666-16471, Iran
- Sebelan Danishvar
- College of Engineering, Design, and Physical Sciences, Brunel University London, Uxbridge UB8 3PH, UK
10
Göker H. Automatic detection of Parkinson's disease from power spectral density of electroencephalography (EEG) signals using deep learning model. Phys Eng Sci Med 2023; 46:1163-1174. [PMID: 37245195] [DOI: 10.1007/s13246-023-01284-x]
Abstract
Parkinson's disease (PD) is characterized by slowed movements, speech disorders, an inability to control muscle movements, and tremors in the hands and feet. In the early stages of PD, changes in these motor signs are very subtle, so an objective and accurate diagnosis is difficult. The disease is complex, progressive, and very common: more than 10 million people worldwide suffer from PD. In this study, an EEG-based deep learning model is proposed for the automatic detection of PD to support experts. The EEG dataset comprises signals recorded by the University of Iowa from 14 PD patients and 14 healthy controls. First, the power spectral density (PSD) values of the frequencies between 1 and 49 Hz of the EEG signals were calculated separately using the periodogram, Welch, and multitaper spectral analysis methods, yielding 49-element feature vectors for each of the three experiments. Then, the performance of support vector machine, random forest, k-nearest neighbor, and bidirectional long short-term memory (BiLSTM) algorithms was compared on the PSD feature vectors. The model integrating Welch spectral analysis and the BiLSTM algorithm showed the highest performance, achieving 0.965 specificity, 0.994 sensitivity, 0.964 precision, 0.978 F1-score, 0.958 Matthews correlation coefficient, and 97.92% accuracy. The study is a promising attempt to detect PD from EEG signals, and it also provides evidence that deep learning algorithms are more effective than classical machine learning algorithms for EEG signal analysis.
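The 1-49 Hz PSD features described above can be illustrated with a hand-rolled Welch estimate, which averages windowed periodograms over overlapping segments (real analyses would normally use scipy.signal.welch; the 250 Hz sampling rate, segment length, and synthetic 10 Hz sinusoid below are assumptions, not the paper's data):

```python
import numpy as np

def welch_psd(x, fs, nperseg, overlap=0.5):
    """Welch's method: average modified periodograms of overlapping,
    Hann-windowed segments. Returns one-sided frequencies and PSD."""
    step = int(nperseg * (1 - overlap))
    win = np.hanning(nperseg)
    scale = fs * (win ** 2).sum()
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(win * s)) ** 2 for s in segs], axis=0) / scale
    psd[1:-1] *= 2                          # fold negative frequencies (one-sided)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, psd

fs = 250                                    # assumed EEG sampling rate
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)              # synthetic 10 Hz "alpha" rhythm
freqs, psd = welch_psd(x, fs, nperseg=fs)   # 1 Hz resolution -> 49 bins in 1-49 Hz
band = (freqs >= 1) & (freqs <= 49)         # the 1-49 Hz features used in the paper
print(freqs[band][np.argmax(psd[band])])
```

With nperseg equal to the sampling rate, the 1-49 Hz band yields exactly the 49-element feature vector the study feeds to its classifiers.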
Affiliation(s)
- Hanife Göker
- Health Services Vocational College, Gazi University, 06830, Ankara, Turkey
11
AlMulla J, Islam MT, Al-Absi HRH, Alam T. SoccerNet: A Gated Recurrent Unit-based model to predict soccer match winners. PLoS One 2023; 18:e0288933. [PMID: 37527260] [PMCID: PMC10393150] [DOI: 10.1371/journal.pone.0288933]
Abstract
Winning football matches is the major goal of all football clubs in the world. As football is the most popular game in the world, many studies have been conducted to analyze and predict match winners based on players' physical and technical performance. In this study, we analyzed matches from the professional football league of Qatar, the Qatar Stars League (QSL), covering the last ten seasons (2011 to 2022). Incorporating this large set of professional matches, we propose SoccerNet, a Gated Recurrent Unit (GRU)-based deep learning model that predicts match winners with over 80% accuracy. We considered match- and player-related information captured by the STATS platform in 15-minute time slots and analyzed players' performance at different positions on the field at different stages of the match. Our results indicate that in QSL the defenders' role is more dominant than that of midfielders and forwards. Moreover, our analysis suggests that the final 15-30 minutes of QSL matches have a more significant impact on the match result than other match segments. To the best of our knowledge, the proposed model is the first DL-based model to predict match winners in any professional football league in the Middle East and North Africa (MENA) region. We believe the results will support QSL coaching staff and team management in designing game strategies and improving the overall quality of the players' performance.
Affiliation(s)
- Jassim AlMulla
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mohammad Tariqul Islam
- Computer Science Department, Southern Connecticut State University, New Haven, CT, United States of America
- Hamada R H Al-Absi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Tanvir Alam
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
|
12
|
Ferreira ACBH, Ferreira DD, Barbosa BHG, Aline de Oliveira U, Aparecida Padua E, Oliveira Chiarini F, Baena de Moraes Lopes MH. Neural network-based method to stratify people at risk for developing diabetic foot: A support system for health professionals. PLoS One 2023; 18:e0288466. [PMID: 37440514 DOI: 10.1371/journal.pone.0288466] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 06/27/2023] [Indexed: 07/15/2023] Open
Abstract
BACKGROUND AND OBJECTIVE Diabetes Mellitus (DM) is a chronic disease with a high worldwide prevalence. Diabetic foot is one of the DM complications and compromises health and quality of life due to the risk of lower limb amputation. This work aimed to build a risk classification system for the evolution of diabetic foot using Artificial Neural Networks (ANN). METHODS This methodological study used two databases: one for system design (training and validation) containing 250 participants with DM, and another for testing, containing 141 participants. Each subject answered a questionnaire with 54 questions about foot care and sociodemographic information. Participants from both databases were classified by specialists as at high or low risk for diabetic foot. Supervised ANN models (multi-layer perceptron, MLP) were explored and a smartphone app was built. The app returns a personalized report indicating self-care for each user. The System Usability Scale (SUS) was used for the usability evaluation. RESULTS MLP models were built and, based on the principle of parsimony, the simplest model was chosen for implementation in the application. The model achieved accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 85%, 76%, 91%, 89%, and 79%, respectively, on the test data. The app presented good usability (93.33 points on a scale from 0 to 100). CONCLUSIONS The study showed that the proposed model performs satisfactorily and is simple, requiring only 10 variables. This simplicity facilitates its use by health professionals and patients with diabetes.
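The performance measures this abstract reports (accuracy, sensitivity, specificity, PPV, NPV) all derive from the binary confusion matrix. A minimal sketch of how they are computed, using made-up risk labels rather than the study's data:

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for a binary classifier (labels 0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy":    (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # recall for the high-risk class
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Made-up labels (1 = high risk, 0 = low risk), not the study's data:
truth      = [1, 1, 0, 0, 0]
prediction = [1, 0, 0, 0, 1]
print(binary_metrics(truth, prediction))
```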
Affiliation(s)
- Ana Cláudia Barbosa Honório Ferreira
- School of Nursing, Universidade Estadual de Campinas, Campinas, São Paulo, Brazil
- University Center of Lavras, Unilavras, Lavras, Minas Gerais, Brazil
|
13
|
Feng Q, Liu S, Peng JX, Yan T, Zhu H, Zheng ZJ, Feng HC. Deep learning-based automatic sella turcica segmentation and morphology measurement in X-ray images. BMC Med Imaging 2023; 23:41. [PMID: 36964517 PMCID: PMC10039601 DOI: 10.1186/s12880-023-00998-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 03/14/2023] [Indexed: 03/26/2023] Open
Abstract
BACKGROUND Although the morphological changes of the sella turcica have been drawing increasing attention, the acquisition of its linear parameters relies on manual measurement, which is laborious, time-consuming, and may introduce subjective bias. This paper aims to develop and evaluate a deep learning-based model for automatic segmentation and measurement of the sella turcica in cephalometric radiographs. METHODS 1129 images were used to develop a deep learning-based segmentation network for automatic sella turcica segmentation, and a further 50 images were used to test the generalization ability of the model. The performance of the segmentation network was evaluated by the Dice coefficient. Images in the test datasets were segmented by the trained network, and the segmentation results were saved as binary images. Then the extremum and corner points were detected using functions in the OpenCV library to obtain the coordinates of the four landmarks of the sella turcica. Finally, the length, diameter, and depth of the sella turcica were obtained by calculating the distance between two points and the distance from a point to a straight line. Meanwhile, the images were measured manually using Digimizer. Intraclass correlation coefficients (ICCs) and Bland-Altman plots were used to analyze the consistency between automatic and manual measurements and thus evaluate the reliability of the proposed methodology. RESULTS The Dice coefficient of the segmentation network is 92.84%. For the measurement of the sella turcica, there is excellent agreement between the automatic and manual measurements. In Test1, the ICCs of length, diameter, and depth are 0.954, 0.953, and 0.912, respectively; in Test2 they are 0.906, 0.921, and 0.915, respectively. In addition, Bland-Altman plots showed the excellent reliability of the automated measurement method, with the majority of measurement differences falling within the ±1.96 SD interval around the mean difference and no apparent bias. CONCLUSIONS Our experimental results indicated that the proposed methodology can complete the automatic segmentation of the sella turcica efficiently and reliably predict its length, diameter, and depth. Moreover, the proposed method generalizes well, as shown by its excellent performance on Test2.
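The final measurement step the abstract describes (point-to-point distance and point-to-line distance between detected landmarks) reduces to two elementary geometric formulas. A sketch with hypothetical landmark coordinates, not the paper's actual landmark definitions:

```python
import math

def euclidean(p, q):
    """Distance between two landmark points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def point_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    # magnitude of the 2-D cross product, normalised by the segment length
    return abs((bx - ax) * (py - ay) - (by - ay) * (px - ax)) / euclidean(a, b)

# Hypothetical landmark coordinates in pixels (illustrative only):
anterior  = (0.0, 0.0)   # e.g. an anterior rim landmark
posterior = (8.0, 0.0)   # e.g. a posterior rim landmark
floor_pt  = (4.0, 3.0)   # e.g. the deepest point of the sella floor

length = euclidean(anterior, posterior)          # distance between two points
depth  = point_to_line(floor_pt, anterior, posterior)  # point-to-line distance
print(length, depth)  # 8.0 3.0
```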
Affiliation(s)
- Qi Feng
- College of Medicine, Guizhou University, Guiyang, 550025, China
- Shu Liu
- Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Ju-Xiang Peng
- Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Ting Yan
- Department of Radiology, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Hong Zhu
- Department of Medical Information, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Zhi-Jun Zheng
- Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Hong-Chao Feng
- College of Medicine, Guizhou University, Guiyang, 550025, China
- Department of Oral and Maxillofacial Surgery, Guiyang Hospital of Stomatology, Guiyang, 550002, China
|
14
|
Cardone D, Trevisi G, Perpetuini D, Filippini C, Merla A, Mangiola A. Intraoperative thermal infrared imaging in neurosurgery: machine learning approaches for advanced segmentation of tumors. Phys Eng Sci Med 2023; 46:325-337. [PMID: 36715852 PMCID: PMC10030394 DOI: 10.1007/s13246-023-01222-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Accepted: 01/17/2023] [Indexed: 01/31/2023]
Abstract
Surgical resection is one of the most relevant practices in neurosurgery. Determining the correct surgical extent of the tumor is a key question, and several techniques have so far been employed to assist the neurosurgeon in preserving the maximum amount of healthy tissue. Some of these methods are invasive for patients and do not always allow high precision in the detection of the tumor area. The aim of this study is to overcome these limitations by developing machine learning-based models relying on features obtained from a contactless and non-invasive technique: thermal infrared (IR) imaging. Thermal IR videos of thirteen patients with heterogeneous tumors were recorded in the intraoperative context. Time-domain (TD) and frequency-domain (FD) features were extracted and fed into different machine learning models. Models relying on FD features proved to be the best solutions for detection of the tumor area (average accuracy = 90.45%; average sensitivity = 84.64%; average specificity = 93.74%). The obtained results highlight the possibility of accurately detecting the tumor lesion boundary with a completely non-invasive, contactless, and portable technology, revealing thermal IR imaging as a very promising tool for the neurosurgeon.
Affiliation(s)
- Daniela Cardone
- Department of Engineering and Geology, University G. d'Annunzio Chieti-Pescara, Pescara, Italy.
- Gianluca Trevisi
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- David Perpetuini
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- Chiara Filippini
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- Arcangelo Merla
- Department of Engineering and Geology, University G. d'Annunzio Chieti-Pescara, Pescara, Italy
- Annunziato Mangiola
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
|
15
|
Ahmad M, Sanawar S, Alfandi O, Qadri SF, Saeed IA, Khan S, Hayat B, Ahmad A. Facial expression recognition using lightweight deep learning modeling. Math Biosci Eng 2023; 20:8208-8225. [PMID: 37161193 DOI: 10.3934/mbe.2023357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Facial expression is a type of communication and is useful in many areas of computer vision, including intelligent visual surveillance, human-robot interaction, and human behavior analysis. A deep learning approach is presented to classify happy, sad, angry, fearful, contemptuous, surprised, and disgusted expressions. Accurate detection and classification of human facial expression is a critical task in image processing due to complicating factors such as changes in illumination, occlusion, noise, and the over-fitting problem. A stacked sparse auto-encoder for facial expression recognition (SSAE-FER) is used for unsupervised pre-training and supervised fine-tuning. SSAE-FER automatically extracts features from input images, and a softmax classifier is used to classify the expressions. Our method achieved an accuracy of 92.50% on the JAFFE dataset and 99.30% on the CK+ dataset. SSAE-FER performs well compared to other methods in the same domain.
Affiliation(s)
- Mubashir Ahmad
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Tobe Camp, Abbottabad-22060, Pakistan
- Department of Computer Science, the University of Lahore, Sargodha Campus 40100, Pakistan
- Saira Sanawar
- Department of Computer Science, The University of Lahore, Sargodha Campus 40100, Pakistan
- Omar Alfandi
- College of Technological Innovation, Zayed University, Abu Dhabi, UAE
- Syed Furqan Qadri
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, China
- Iftikhar Ahmed Saeed
- Department of Computer Science, The University of Lahore, Sargodha Campus 40100, Pakistan
- Salabat Khan
- College of Computer Science & Software Engineering, Shenzhen University, Shenzhen 518060, China
- Bashir Hayat
- Department of Computer Science, Institute of Management Sciences, Peshawar, Pakistan
- Arshad Ahmad
- Department of IT & CS, Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology (PAF-IAST), Haripur 22620, Pakistan
|
16
|
Balasubramanian PK, Lai WC, Seng GH, C K, Selvaraj J. APESTNet with Mask R-CNN for Liver Tumor Segmentation and Classification. Cancers (Basel) 2023; 15:cancers15020330. [PMID: 36672281 PMCID: PMC9857237 DOI: 10.3390/cancers15020330] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 12/21/2022] [Accepted: 12/30/2022] [Indexed: 01/06/2023] Open
Abstract
Diagnosis and treatment of hepatocellular carcinoma or metastases rely heavily on accurate segmentation and classification of liver tumours. However, due to the liver tumour's hazy borders and wide range of possible shapes, sizes, and positions, accurate and automatic tumour segmentation and classification remain a difficult challenge. With the advancement of computing, new models in artificial intelligence have evolved; following its success in natural language processing (NLP), the transformer paradigm has been adopted by the computer vision (CV) community. While there are already accepted approaches to classifying the liver, especially in clinical settings, there is room for improvement in their precision. This paper applies a novel deep learning-based model for segmenting and classifying liver tumours. The model follows a three-stage procedure consisting of (a) pre-processing, (b) liver segmentation, and (c) classification. In the first phase, the collected computed tomography (CT) images undergo pre-processing, including contrast improvement via histogram equalization and noise reduction via a median filter. Next, an enhanced mask region-based convolutional neural network (Mask R-CNN) model is used to separate the liver from the CT abdominal image. To prevent overfitting, the segmented image is fed into an Enhanced Swin Transformer Network with Adversarial Propagation (APESTNet). The experimental results prove the superior performance of the proposed model on a wide variety of CT images, as well as its efficiency and low sensitivity to noise.
Affiliation(s)
- Prabhu Kavin Balasubramanian
- Department of Data Science and Business System, Kattankulathur Campus, SRM Institute of Science and Technology, Chennai 603203, Tamil Nadu, India
- Wen-Cheng Lai
- Bachelor Program in Industrial Projects, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
- Department of Electronic Engineering, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
- Gan Hong Seng
- Department of Data Science, UMK City Campus, University Malaysia Kelantan, Pengkalan Chepa, Kelantan 16100, Malaysia
- Correspondence: (G.H.S.); (K.C.)
- Kavitha C
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai 600119, Tamil Nadu, India
- Correspondence: (G.H.S.); (K.C.)
- Jeeva Selvaraj
- Department of Data Science and Business System, Kattankulathur Campus, SRM Institute of Science and Technology, Chennai 603203, Tamil Nadu, India
- Department of Data Science, UMK City Campus, University Malaysia Kelantan, Pengkalan Chepa, Kelantan 16100, Malaysia
|
17
|
Deep Learning Model for Grading Metastatic Epidural Spinal Cord Compression on Staging CT. Cancers (Basel) 2022; 14:cancers14133219. [PMID: 35804990 PMCID: PMC9264856 DOI: 10.3390/cancers14133219] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 06/21/2022] [Accepted: 06/24/2022] [Indexed: 02/02/2023] Open
Abstract
Background: Metastatic epidural spinal cord compression (MESCC) is a disastrous complication of advanced malignancy. Deep learning (DL) models for automatic MESCC classification on staging CT were developed to aid earlier diagnosis. Methods: This retrospective study included 444 CT staging studies from 185 patients with suspected MESCC who underwent MRI spine studies within 60 days of the CT studies. The DL model training/validation dataset consisted of 316/358 (88%) and the test set of 42/358 (12%) CT studies. Training/validation and test datasets were labeled in consensus by two subspecialized radiologists (6 and 11 years of experience) using the MRI studies as the reference standard. Test sets were labeled by the developed DL models and four radiologists (2-7 years of experience) for comparison. Results: DL models showed almost-perfect interobserver agreement for classification of CT spine images into normal, low-grade, and high-grade MESCC, with kappas ranging from 0.873 to 0.911 (p < 0.001). The DL models (lowest κ = 0.873, 95% CI 0.858-0.887) also showed superior interobserver agreement compared to two of the four radiologists for three-class classification, including a specialist (κ = 0.820, 95% CI 0.803-0.837) and a general radiologist (κ = 0.726, 95% CI 0.706-0.747), both p < 0.001. Conclusion: DL models for MESCC classification on staging CT showed comparable to superior interobserver agreement relative to radiologists and could be used to aid earlier diagnosis.
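The interobserver-agreement kappas quoted above are Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal sketch with made-up three-class gradings, not the study's data:

```python
def cohens_kappa(rater1, rater2, labels):
    """Cohen's kappa for agreement between two sets of categorical ratings."""
    n = len(rater1)
    observed = sum(1 for a, b in zip(rater1, rater2) if a == b) / n
    # chance agreement from each rater's marginal label frequencies
    expected = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Made-up gradings for four studies (illustrative only):
grades      = ["normal", "low", "high"]
model       = ["normal", "low", "high", "normal"]
radiologist = ["normal", "low", "low",  "normal"]
print(cohens_kappa(model, radiologist, grades))  # 0.6
```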
|
18
|
Ara RK, Matiolański A, Dziech A, Baran R, Domin P, Wieczorkiewicz A. Fast and Efficient Method for Optical Coherence Tomography Images Classification Using Deep Learning Approach. Sensors (Basel) 2022; 22:s22134675. [PMID: 35808169 PMCID: PMC9269557 DOI: 10.3390/s22134675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Revised: 06/13/2022] [Accepted: 06/16/2022] [Indexed: 05/18/2023]
Abstract
The use of optical coherence tomography (OCT) in medical diagnostics is now common. The growing amount of data leads us to propose an automated support system for medical staff. The key part of the system is a classification algorithm developed with modern machine learning techniques. The main contribution is a new approach for the classification of eye diseases using a convolutional neural network model. The research concerns the classification of patients on the basis of OCT B-scans into one of four categories: Diabetic Macular Edema (DME), Choroidal Neovascularization (CNV), Drusen, and Normal. These categories are available in a publicly available dataset of over 84,000 images, which was utilized for the research. After testing several architectures, our 5-layer neural network gives promising results; we compared it to the other available solutions, which confirms the high quality of our algorithm. Equally important for the application of the algorithm is the computational time, which is reduced by the limited size of the model. In addition, the article presents a detailed method of image data augmentation and its impact on the classification results. Results are also presented for several derived convolutional network architectures that were tested during the research. Improving processes in medical treatment is important: the algorithm cannot replace a doctor but can, for example, be a valuable tool for speeding up diagnosis during screening tests.
Affiliation(s)
- Rouhollah Kian Ara
- Institute of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland; (R.K.A.); (A.D.)
- Andrzej Matiolański
- Institute of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland; (R.K.A.); (A.D.)
- Correspondence:
- Andrzej Dziech
- Institute of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland; (R.K.A.); (A.D.)
- Remigiusz Baran
- Faculty of Electrical Engineering, Automatic Control and Computer Science, Kielce University of Technology, 25-314 Kielce, Poland
- Paweł Domin
- Consultronix S.A., 32-083 Balice, Poland; (P.D.); (A.W.)
|
19
|
Deep Learning to Measure the Intensity of Indocyanine Green in Endometriosis Surgeries with Intestinal Resection. J Pers Med 2022; 12:jpm12060982. [PMID: 35743768 PMCID: PMC9224804 DOI: 10.3390/jpm12060982] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Revised: 06/11/2022] [Accepted: 06/15/2022] [Indexed: 01/03/2023] Open
Abstract
Endometriosis is a gynecological pathology that affects between 6 and 15% of women of childbearing age. One of its manifestations is intestinal deep infiltrating endometriosis. This condition may force patients to resort to surgical treatment, often ending in resection. The level of blood perfusion at the anastomosis is crucial for its outcome; for this reason, indocyanine green (ICG), a fluorochrome that stains green the structures where it is present, is injected during surgery. This study proposes a novel method based on deep learning algorithms for quantifying the level of blood perfusion in the anastomosis. Firstly, with a deep learning algorithm based on the U-Net, models capable of automatically segmenting the intestine from the surgical videos were generated. Secondly, the blood perfusion level was quantified from the already segmented video frames. The frames were characterized using textures, specifically nine first- and second-order statistics, and two experiments were then carried out. In the first experiment, the differences in perfusion between the two anastomosis parts were determined; in the second, it was verified that the ICG variation could be captured through the textures. The best segmentation model has an accuracy of 0.92 and a Dice coefficient of 0.96. It is concluded that segmentation of the bowel using the U-Net was successful, and that the textures are appropriate descriptors for characterizing blood perfusion in images where ICG is present. This might help predict during surgery whether postoperative complications will occur, enabling clinicians to act on this information.
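First-order texture statistics of the kind this abstract mentions are simple moments of the intensity histogram of a region. A sketch of a few of them (the paper uses nine first- and second-order statistics; only illustrative first-order ones are shown here, on made-up intensity values):

```python
import math

def first_order_stats(pixels):
    """A few first-order texture statistics of a grayscale region,
    given as a flat list of intensity values."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n      # population variance
    std = math.sqrt(var)
    skew = (sum((p - mean) ** 3 for p in pixels) / n) / std ** 3 if std else 0.0
    return {"mean": mean, "variance": var, "skewness": skew}

# Made-up ICG intensity values for one segmented region (illustrative only):
region = [10, 12, 11, 13, 40, 12, 11, 10]
print(first_order_stats(region))
```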
|
20
|
Ahmad M, Qadri SF, Ashraf MU, Subhi K, Khan S, Zareen SS, Qadri S. Efficient Liver Segmentation from Computed Tomography Images Using Deep Learning. Comput Intell Neurosci 2022; 2022:2665283. [PMID: 35634046 PMCID: PMC9132625 DOI: 10.1155/2022/2665283] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Accepted: 04/06/2022] [Indexed: 12/11/2022]
Abstract
Segmentation of the liver in computed tomography (CT) images is an important step toward quantitative biomarkers for a computer-aided decision support system and precise medical diagnosis. To overcome the difficulties in liver segmentation caused by fuzzy boundaries, a stacked autoencoder (SAE) is applied to learn the most discriminative features of the liver among other tissues in abdominal images. In this paper, we propose a patch-based deep learning method for the segmentation of the liver from CT images using an SAE. Unlike traditional machine learning methods, instead of pixel-by-pixel learning, our algorithm utilizes patches to learn the representations and identify the liver area. We preprocessed the whole dataset to obtain enhanced images and converted each image into many overlapping patches. These patches are given as input to the SAE for unsupervised feature learning. Finally, the learned features are fine-tuned with the image labels, and classification is performed to develop the probability map in a supervised way. Experimental results demonstrate that our proposed algorithm shows satisfactory results on test images. Our method achieved a 96.47% Dice similarity coefficient (DSC), which is better than that of other methods in the same domain.
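The Dice similarity coefficient (DSC) reported by this and several other entries in this listing is twice the overlap of two binary masks divided by their total size. A minimal sketch on toy masks:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0s and 1s of equal length."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: define DSC as perfect agreement
    return 2.0 * intersection / total

# Toy masks: each has 3 foreground pixels, 2 of which overlap
a = [1, 1, 1, 0, 0, 0]
b = [1, 1, 0, 0, 0, 1]
print(dice_coefficient(a, b))  # 2*2 / (3+3) = 0.666...
```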
Affiliation(s)
- Mubashir Ahmad
- College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
- Department of Computer Science and IT, The University of Lahore, Sargodha Campus, 40100, Lahore, Pakistan
- Syed Furqan Qadri
- College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
- M. Usman Ashraf
- Department of Computer Science, GC Women University, Sialkot 51310, Pakistan
- Khalid Subhi
- Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Salabat Khan
- College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
- Syeda Shamaila Zareen
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Salman Qadri
- Department of Computer Science, MNS University of Agriculture, Multan 60650, Pakistan
|
21
|
An Attention-Preserving Network-Based Method for Assisted Segmentation of Osteosarcoma MRI Images. Mathematics 2022. [DOI: 10.3390/math10101665] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Osteosarcoma is a malignant bone tumor that is extremely dangerous to human health. Outlining the lesion area in an image manually using traditional methods is not only a large amount of work but also a complicated task. With the development of computer-aided diagnostic techniques, more and more researchers are focusing on automatic segmentation techniques for osteosarcoma analysis. However, existing methods ignore the size of osteosarcomas, making it difficult to identify and segment smaller tumors, which is very detrimental to the early diagnosis of osteosarcoma. Therefore, this paper proposes a Contextual Axial-Preserving Attention Network (CaPaN)-based MRI image-assisted segmentation method for osteosarcoma detection. Based on Res2Net, a parallel decoder is added to aggregate high-level features, effectively combining the local and global features of osteosarcoma. In addition, channel feature pyramid (CFP) and axial attention (A-RA) mechanisms are used. The lightweight CFP can extract feature maps and contextual information of different sizes, while A-RA uses axial attention to distinguish tumor tissue, which reduces computational costs and thus improves the generalization performance of the model. We conducted experiments using a real dataset provided by the Second Xiangya Affiliated Hospital, and the results showed that our proposed method achieves better segmentation results than alternative models. In particular, our method shows significant advantages with respect to small-target segmentation: its precision is about 2% higher than the average of other models, and for the segmentation of small objects, the DSC value of CaPaN is 0.021 higher than that of the commonly used U-Net method.
|
22
|
Stadlbauer A, Marhold F, Oberndorfer S, Heinz G, Buchfelder M, Kinfe TM, Meyer-Bäse A. Radiophysiomics: Brain Tumors Classification by Machine Learning and Physiological MRI Data. Cancers (Basel) 2022; 14:cancers14102363. [PMID: 35625967 PMCID: PMC9139355 DOI: 10.3390/cancers14102363] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 05/04/2022] [Accepted: 05/09/2022] [Indexed: 01/06/2023] Open
Abstract
Simple Summary The pretreatment diagnosis of contrast-enhancing brain tumors is still challenging in clinical neuro-oncology due to their very similar appearance on conventional MRI. A precise initial characterization, however, is essential to initiate appropriate treatment management, which can differ substantially between brain tumor entities. To overcome the low specificity of conventional MRI, several new neuroimaging methods have been developed and validated over the past decades. This increasing amount of diagnostic information makes a timely evaluation without computational support impossible in a clinical setting. Artificial intelligence methods such as machine learning offer new options to support clinicians. In this study, we combined nine common machine learning algorithms with a physiological MRI technique (an approach we named "radiophysiomics") to investigate the effectiveness of multiclass classification of contrast-enhancing brain tumors in a clinical setting. We were able to demonstrate that radiophysiomics could be helpful in the routine diagnostics of contrast-enhancing brain tumors, but further automation using deep neural networks is required. Abstract The precise initial characterization of contrast-enhancing brain tumors has significant consequences for clinical outcomes. Various novel neuroimaging methods have been developed to increase the specificity of conventional magnetic resonance imaging (cMRI), but they have also increased the complexity of data analysis. Artificial intelligence offers new options to manage this challenge in clinical settings. Here, we investigated whether multiclass machine learning (ML) algorithms applied to a high-dimensional panel of radiomic features from advanced MRI (advMRI) and physiological MRI (phyMRI; thus, radiophysiomics) could reliably classify contrast-enhancing brain tumors. The recently developed phyMRI technique enables the quantitative assessment of microvascular architecture, neovascularization, oxygen metabolism, and tissue hypoxia. A training cohort of 167 patients suffering from one of the five most common brain tumor entities (glioblastoma, anaplastic glioma, meningioma, primary CNS lymphoma, or brain metastasis), combined with nine common ML algorithms, was used to develop a total of 135 classifiers. Multiclass classification performance was investigated using tenfold cross-validation and an independent test cohort. Adaptive boosting and random forest in combination with advMRI and phyMRI data were superior to human reading in accuracy (0.875 vs. 0.850), precision (0.862 vs. 0.798), F-score (0.774 vs. 0.740), AUROC (0.886 vs. 0.813), and classification error (5 vs. 6). The radiologists, however, showed higher sensitivity (0.767 vs. 0.750) and specificity (0.925 vs. 0.902). We demonstrated that ML-based radiophysiomics could be helpful in the clinical routine diagnosis of contrast-enhancing brain tumors; however, the high expenditure of time and work for data preprocessing requires the inclusion of deep neural networks.
Affiliation(s)
- Andreas Stadlbauer
- Institute of Medical Radiology, University Clinic St. Pölten, Karl Landsteiner University of Health Sciences, A-3100 St. Pölten, Austria;
- Department of Neurosurgery, Friedrich-Alexander University (FAU) Erlangen-Nürnberg, D-91054 Erlangen, Germany; (M.B.); (T.M.K.)
- Correspondence:
- Franz Marhold
- Department of Neurosurgery, University Clinic of St. Pölten, Karl Landsteiner University of Health Sciences, A-3100 St. Pölten, Austria
- Stefan Oberndorfer
- Department of Neurology, University Clinic of St. Pölten, Karl Landsteiner University of Health Sciences, A-3100 St. Pölten, Austria
- Gertraud Heinz
- Institute of Medical Radiology, University Clinic St. Pölten, Karl Landsteiner University of Health Sciences, A-3100 St. Pölten, Austria
- Michael Buchfelder
- Department of Neurosurgery, Friedrich-Alexander University (FAU) Erlangen-Nürnberg, D-91054 Erlangen, Germany; (M.B.); (T.M.K.)
- Thomas M. Kinfe
- Department of Neurosurgery, Friedrich-Alexander University (FAU) Erlangen-Nürnberg, D-91054 Erlangen, Germany; (M.B.); (T.M.K.)
- Division of Functional Neurosurgery and Stereotaxy, Friedrich-Alexander University (FAU) Erlangen-Nürnberg, D-91054 Erlangen, Germany
- Anke Meyer-Bäse
- Department of Scientific Computing, Florida State University, 400 Dirac Science Library, Tallahassee, FL 32306-4120, USA
|