1.
De La Hoz EC, Verstockt J, Verspeek S, Clarys W, Thiessen FEF, Tondu T, Tjalma WAA, Steenackers G, Vanlanduit S. Automated thermographic detection of blood vessels for DIEP flap reconstructive surgery. Int J Comput Assist Radiol Surg 2024. [PMID: 39014178] [DOI: 10.1007/s11548-024-03199-8] [Received: 12/06/2023] [Accepted: 05/27/2024] [Indexed: 07/18/2024]
Abstract
PURPOSE: Inadequate perfusion is the most common cause of partial flap loss in tissue transfer for post-mastectomy breast reconstruction. The current state of the art uses computed tomography angiography (CTA) to locate the best perforators. Unfortunately, these techniques are expensive, time-consuming, and not performed during surgery. Dynamic infrared thermography (DIRT) can overcome these disadvantages. METHODS: The presented research couples thermographic examination during DIEP flap breast reconstruction with an automatic segmentation approach using a convolutional neural network. Traditional segmentation techniques and annotations by surgeons are used to create automatic labels for training. RESULTS: The network used for image annotation can label in real time on minimal hardware, and the labels created can be used to locate and quantify perforator candidates for selection, with a Dice score of 0.8 after 2 min and 0.9 after 4 min. CONCLUSIONS: These results enable a computational system that can be used intraoperatively to improve surgical success. The ability to track and measure perforators and their perfused area yields less subjective results and helps the surgeon select the most suitable perforator for DIEP flap breast reconstruction.
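The Dice score reported above measures the overlap between a predicted segmentation mask and a reference mask. A generic sketch of the metric (not the authors' implementation; masks are assumed to be flat binary sequences):

```python
def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values."""
    assert len(pred) == len(target)
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```

A score of 1.0 indicates identical masks; 0.8 to 0.9, as in the results above, indicates substantial but imperfect overlap with the surgeon-derived labels.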
Affiliation(s)
- Edgar Cardenas De La Hoz: InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
- Jan Verstockt: InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
- Simon Verspeek: InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
- Warre Clarys: InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
- Filip E F Thiessen: Department of Plastic, Reconstructive and Aesthetic Surgery, Multidisciplinary Breast Clinic, Antwerp University Hospital, Wilrijkstraat 10, 2650, Antwerp, Belgium; Department of Plastic, Reconstructive and Aesthetic Surgery, Ziekenhuis Netwerk Antwerpen, Lindendreef 1, 2020, Antwerp, Belgium
- Thierry Tondu: Department of Plastic, Reconstructive and Aesthetic Surgery, Multidisciplinary Breast Clinic, Antwerp University Hospital, Wilrijkstraat 10, 2650, Antwerp, Belgium; Department of Plastic, Reconstructive and Aesthetic Surgery, Ziekenhuis Netwerk Antwerpen, Lindendreef 1, 2020, Antwerp, Belgium
- Wiebren A A Tjalma: Gynaecological Oncology Unit, Department of Obstetrics and Gynaecology, Multidisciplinary Breast Clinic, Antwerp University Hospital, Wilrijkstraat 10, 2650, Antwerp, Belgium
- Gunther Steenackers: InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
- Steve Vanlanduit: InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
2.
Dai S, Guo X, Liu S, Tu L, Hu X, Cui J, Ruan Q, Tan X, Lu H, Jiang T, Xu J. Application of intelligent tongue image analysis in conjunction with microbiomes in the diagnosis of MAFLD. Heliyon 2024; 10:e29269. [PMID: 38617943] [PMCID: PMC11015139] [DOI: 10.1016/j.heliyon.2024.e29269] [Received: 12/15/2023] [Revised: 03/22/2024] [Accepted: 04/03/2024] [Indexed: 04/16/2024]
Abstract
Background: Metabolic associated fatty liver disease (MAFLD) is a widespread liver disease that can lead to liver fibrosis and cirrhosis, so it is essential to develop early diagnostic and screening methods. Methods: We performed a cross-sectional observational study. Based on data from 92 patients with MAFLD and 74 healthy individuals, we observed the characteristics of tongue images, tongue coating, and intestinal flora. A generative adversarial network was used to extract tongue image features, and 16S rRNA sequencing was performed on the tongue coating and intestinal flora. We then combined tongue image analysis with microbiome technology to obtain a more accurate MAFLD early screening model. In addition, we compared different modelling methods, including Extreme Gradient Boosting (XGBoost), random forest, neural networks (MLP), stochastic gradient descent (SGD), and support vector machines (SVM). Results: The results show that tongue-coating Streptococcus and Rothia and intestinal Blautia and Streptococcus are potential biomarkers for MAFLD. The diagnostic model jointly incorporating tongue image features, basic information (gender, age, BMI), and tongue-coating marker flora (Streptococcus, Rothia) reached an accuracy of 96.39%, higher than models that omit the bacterial markers. Conclusion: Combining computer-intelligent tongue diagnosis with microbiome technology enhances MAFLD diagnostic accuracy and provides a convenient early screening reference.
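The joint model described above fuses three heterogeneous feature blocks into a single classifier input. A minimal sketch of that fusion step (field names, encodings, and dimensions are illustrative assumptions, not the authors'):

```python
def fuse_features(image_feats, basic_info, flora_abundances):
    """Concatenate heterogeneous feature blocks into a single
    numeric vector for a downstream classifier."""
    # basic_info: dict with illustrative keys, encoded as numbers
    demographics = [
        1.0 if basic_info["gender"] == "F" else 0.0,  # one-hot gender
        float(basic_info["age"]),
        float(basic_info["bmi"]),
    ]
    return list(image_feats) + demographics + list(flora_abundances)

# Example: 4 image features + 3 demographic values + 2 genus abundances
x = fuse_features([0.12, -0.4, 0.88, 0.05],
                  {"gender": "F", "age": 47, "bmi": 26.1},
                  [0.31, 0.07])  # e.g. Streptococcus, Rothia
```

Any of the compared classifiers (XGBoost, random forest, MLP, SGD, SVM) could then be trained on such fused vectors.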
Affiliation(s)
- Shixuan Dai: College of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, 1200 Road, Shanghai, 201203, China
- Xiaojing Guo: Department of Anesthesiology, Naval Medical University, No. 800, Xiangyin Road, Shanghai, 200433, China
- Shi Liu: College of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, 1200 Road, Shanghai, 201203, China
- Liping Tu: College of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, 1200 Road, Shanghai, 201203, China
- Xiaojuan Hu: College of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, 1200 Road, Shanghai, 201203, China
- Ji Cui: College of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, 1200 Road, Shanghai, 201203, China
- QunSheng Ruan: Department of Software, Xiamen University, No. 422, Siming South Road, Siming District, Xiamen City, Fujian Province, 361005, China
- Xin Tan: Department of Computer Science and Technology, East China Normal University, No. 3663, Zhongshan North Road, Shanghai, 200062, China
- Hao Lu: Department of Endocrinology, Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, No. 528, Zhangheng Road, Shanghai, 200021, China
- Tao Jiang: College of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, 1200 Road, Shanghai, 201203, China
- Jiatuo Xu: College of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, 1200 Road, Shanghai, 201203, China
3.
Esposito C, Janneh M, Spaziani S, Calcagno V, Bernardi ML, Iammarino M, Verdone C, Tagliamonte M, Buonaguro L, Pisco M, Aversano L, Cusano A. Assessment of Primary Human Liver Cancer Cells by Artificial Intelligence-Assisted Raman Spectroscopy. Cells 2023; 12:2645. [PMID: 37998378] [PMCID: PMC10670489] [DOI: 10.3390/cells12222645] [Received: 09/20/2023] [Revised: 11/08/2023] [Accepted: 11/09/2023] [Indexed: 11/25/2023]
Abstract
We investigated the possibility of using Raman spectroscopy assisted by artificial intelligence methods to identify liver cancer cells and distinguish them from their non-tumor counterparts. To this aim, primary liver cells (40 Tumor and 40 Non-Tumor cells) obtained from resected hepatocellular carcinoma (HCC) tumor tissue and the adjacent non-tumor area (negative control) were analyzed by Raman micro-spectroscopy. Preliminarily, the cells were analyzed morphologically and spectrally. Then, three machine learning approaches, including multivariate models and neural networks, were investigated in parallel and successfully used to analyze the cells' Raman data. The results clearly demonstrate the effectiveness of artificial intelligence (AI)-assisted Raman spectroscopy for Tumor cell classification and prediction, with nearly 90% correct predictions on a single spectrum.
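To illustrate the multivariate flavor of classifying cells from whole spectra, here is a nearest-centroid classifier over labeled spectra: a deliberately simple stand-in, not one of the paper's three approaches, with toy labels and spectra:

```python
def train_centroids(spectra, labels):
    """Mean spectrum per class: a minimal multivariate baseline."""
    sums, counts = {}, {}
    for spec, lab in zip(spectra, labels):
        acc = sums.setdefault(lab, [0.0] * len(spec))
        for i, v in enumerate(spec):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(spectrum, centroids):
    """Assign the class whose mean spectrum is closest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(spectrum, centroids[lab]))
```

Each unseen spectrum is assigned to the class whose average spectrum it most resembles; the paper's models refine this basic idea with learned, rather than fixed, decision boundaries.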
Affiliation(s)
- Concetta Esposito: Optoelectronic Division-Engineering Department, University of Sannio, 82100 Benevento, Italy; Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy
- Mohammed Janneh: Optoelectronic Division-Engineering Department, University of Sannio, 82100 Benevento, Italy; Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy
- Sara Spaziani: Optoelectronic Division-Engineering Department, University of Sannio, 82100 Benevento, Italy; Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy
- Vincenzo Calcagno: Optoelectronic Division-Engineering Department, University of Sannio, 82100 Benevento, Italy; Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy
- Mario Luca Bernardi: Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy; Informatics Group, Engineering Department, University of Sannio, 82100 Benevento, Italy
- Martina Iammarino: Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy; Informatics Group, Engineering Department, University of Sannio, 82100 Benevento, Italy
- Chiara Verdone: Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy; Informatics Group, Engineering Department, University of Sannio, 82100 Benevento, Italy
- Maria Tagliamonte: Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy; National Cancer Institute-IRCCS “Pascale”, Via Mariano Semmola, 52, 80131 Napoli, Italy
- Luigi Buonaguro: Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy; National Cancer Institute-IRCCS “Pascale”, Via Mariano Semmola, 52, 80131 Napoli, Italy
- Marco Pisco: Optoelectronic Division-Engineering Department, University of Sannio, 82100 Benevento, Italy; Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy
- Lerina Aversano: Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy; Informatics Group, Engineering Department, University of Sannio, 82100 Benevento, Italy
- Andrea Cusano: Optoelectronic Division-Engineering Department, University of Sannio, 82100 Benevento, Italy; Centro Regionale Information Communication Technology (CeRICT Scrl), 82100 Benevento, Italy
4.
Huang S, Lu Z, Shi Y, Dong J, Hu L, Yang W, Huang C. A Novel Method for Filled/Unfilled Grain Classification Based on Structured Light Imaging and Improved PointNet++. Sensors (Basel) 2023; 23:6331. [PMID: 37514625] [PMCID: PMC10384795] [DOI: 10.3390/s23146331] [Received: 06/02/2023] [Revised: 07/01/2023] [Accepted: 07/07/2023] [Indexed: 07/30/2023]
Abstract
China is the largest producer and consumer of rice, and the classification of filled/unfilled rice grains is of great significance for rice breeding and genetic analysis. The traditional method for filled/unfilled rice grain identification was generally manual, with the disadvantages of low efficiency, poor repeatability, and low precision. In this study, we propose a novel method for filled/unfilled grain classification based on structured light imaging and Improved PointNet++. First, the 3D point cloud data of rice grains were obtained by structured light imaging. Then, dedicated processing algorithms were developed for single-grain segmentation and for data enhancement with normal vectors. Finally, the PointNet++ network was improved by adding an additional set abstraction layer and combining the maximum pooling of normal vectors to classify filled/unfilled rice grain point clouds. To verify the model performance, Improved PointNet++ was compared with six machine learning methods, PointNet, and PointConv. The best machine learning model was XGBoost, with a classification accuracy of 91.99%, while Improved PointNet++ reached 98.50%, outperforming PointNet (93.75%) and PointConv (92.25%). In conclusion, this study demonstrates a novel and effective method for filled/unfilled grain recognition.
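The set abstraction layers mentioned above subsample the point cloud around centroids chosen by farthest point sampling. A pure-Python sketch of that sampling step (generic PointNet++ machinery for small clouds, not the authors' code):

```python
def farthest_point_sampling(points, k):
    """Greedy FPS: repeatedly pick the point farthest from the set
    already selected. Used in PointNet++ set abstraction to choose
    centroids that cover the cloud evenly."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    selected = [0]  # start from the first point
    min_d = [d2(p, points[0]) for p in points]  # distance to nearest chosen
    while len(selected) < k:
        far = max(range(len(points)), key=lambda i: min_d[i])
        selected.append(far)
        for i, p in enumerate(points):
            min_d[i] = min(min_d[i], d2(p, points[far]))
    return selected
```

Local neighborhoods around each sampled centroid are then encoded by a shared network, which is where an additional layer and normal-vector pooling can be inserted, as in the study's improvement.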
Affiliation(s)
- Shihao Huang: College of Engineering, Huazhong Agricultural University, Wuhan 430070, China; Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan 430070, China; Shenzhen Branch, Guangdong Laboratory for Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, Shenzhen 518000, China
- Zhihao Lu: College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Yuxuan Shi: College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Jiale Dong: College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Lin Hu: College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
- Wanneng Yang: National Key Laboratory of Crop Genetic Improvement, National Center of Plant Gene Research (Wuhan), Huazhong Agricultural University, Wuhan 430070, China
- Chenglong Huang: College of Engineering, Huazhong Agricultural University, Wuhan 430070, China; Shenzhen Institute of Nutrition and Health, Huazhong Agricultural University, Wuhan 430070, China; Shenzhen Branch, Guangdong Laboratory for Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, Shenzhen 518000, China
5.
Macías-Macías JM, Ramírez-Quintana JA, Chacón-Murguía MI, Torres-García AA, Corral-Martínez LF. Interpretation of a deep analysis of speech imagery features extracted by a capsule neural network. Comput Biol Med 2023; 159:106909. [PMID: 37071937] [DOI: 10.1016/j.compbiomed.2023.106909] [Received: 07/05/2022] [Revised: 03/28/2023] [Accepted: 04/10/2023] [Indexed: 04/20/2023]
Abstract
Speech imagery has been successfully employed in developing Brain-Computer Interfaces because it is a novel mental strategy that generates brain activity more intuitively than evoked potentials or motor imagery. There are many methods for analyzing speech imagery signals, but those based on deep neural networks achieve the best results. However, more research is necessary to understand the properties and features that describe imagined phonemes and words. In this paper, we analyze the statistical properties of speech imagery EEG signals from the KaraOne dataset to design a method that classifies imagined phonemes and words. Based on this analysis, we propose a Capsule Neural Network that categorizes speech imagery patterns into bilabial, nasal, consonant-vowel, and the vowels /iy/ and /uw/. The method is called Capsules for Speech Imagery Analysis (CapsK-SI). The input of CapsK-SI is a set of statistical features of EEG speech imagery signals. The architecture of the Capsule Neural Network is composed of a convolution layer, a primary capsule layer, and a class capsule layer. The average accuracy reached is 90.88% ± 7 for bilabial, 90.15% ± 8 for nasal, 94.02% ± 6 for consonant-vowel, 89.70% ± 8 for word-phoneme, 94.33% for /iy/ vowel, and 94.21% ± 3 for /uw/ vowel detection. Finally, with the activity vectors of the CapsK-SI capsules, we generated brain maps to represent the brain activity involved in the production of bilabial, nasal, and consonant-vowel signals.
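In capsule networks such as CapsK-SI, each class capsule outputs an activity vector whose length encodes class probability and whose direction encodes pattern properties, which is what makes the brain maps above possible. The standard squash nonlinearity from the original capsule network literature (not necessarily the exact variant used here) can be sketched as:

```python
import math

def squash(s):
    """Squash a capsule's raw vector s so its length lies in [0, 1)
    while preserving its direction: v = (|s|^2 / (1 + |s|^2)) * s / |s|."""
    norm_sq = sum(x * x for x in s)
    norm = math.sqrt(norm_sq)
    if norm == 0:
        return [0.0] * len(s)
    scale = norm_sq / (1.0 + norm_sq) / norm
    return [scale * x for x in s]
```

Long input vectors squash to lengths near 1 (confident detection) and short ones to lengths near 0, so the vector length can be read directly as the capsule's activation.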
Affiliation(s)
- José M Macías-Macías: Tecnológico Nacional de México/IT Chihuahua, Av. Tecnológico 2909, Chihuahua, 31310, Chihuahua, Mexico
- Juan A Ramírez-Quintana: Tecnológico Nacional de México/IT Chihuahua, Av. Tecnológico 2909, Chihuahua, 31310, Chihuahua, Mexico
- Mario I Chacón-Murguía: Tecnológico Nacional de México/IT Chihuahua, Av. Tecnológico 2909, Chihuahua, 31310, Chihuahua, Mexico
- Alejandro A Torres-García: Instituto Nacional de Astrofísica Óptica y Electrónica, Luis Enrique Erro No 1, Tonanzintla, 72840, Puebla, Mexico
- Luis F Corral-Martínez: Tecnológico Nacional de México/IT Chihuahua, Av. Tecnológico 2909, Chihuahua, 31310, Chihuahua, Mexico
6.
Computer-Aided Multiclass Classification of Corn from Corn Images Integrating Deep Feature Extraction. Comput Intell Neurosci 2022; 2022:2062944. [PMID: 35990122] [PMCID: PMC9385333] [DOI: 10.1155/2022/2062944] [Received: 04/27/2022] [Revised: 06/16/2022] [Accepted: 06/28/2022] [Indexed: 12/24/2022]
Abstract
Corn is of great importance in agricultural production and animal feed. Obtaining pure corn seeds is quite significant for seed quality, so distinguishing the numerous corn seed varieties plays an essential role in marketing. This study was conducted with 14,469 images of the BT6470, Calipso, Es_Armandi, and Hiva corn types licensed by BIOTEK. The classification of images was carried out in three stages. In the first stage, deep feature extraction of the four types of corn images was performed with the pretrained CNN model SqueezeNet, yielding 1000 deep features for each image. In the second stage, to reduce the features obtained from deep feature extraction with SqueezeNet, separate feature selection processes were performed with the Bat Optimization (BA), Whale Optimization (WOA), and Gray Wolf Optimization (GWO) algorithms. Finally, in the last stage, the features obtained from the first and second stages were classified using the machine learning methods Decision Tree (DT), Naive Bayes (NB), multi-class Support Vector Machine (mSVM), k-Nearest Neighbor (KNN), and Neural Network (NN). On the features from the first stage, the mSVM model achieved the highest classification success, 89.40%. In the second stage, classification over the active features selected by the three feature selection algorithms (BA, WOA, GWO) gave mSVM accuracies of 88.82%, 88.72%, and 88.95%, respectively. These accuracies are close to those obtained in the first stage, but the feature selection algorithms enabled successful classification with fewer features and in a shorter time.
The results of this study, in which corn types were classified inexpensively, objectively, and with a shorter processing time, offer a different perspective on classification performance.
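The metaheuristics above (BA, WOA, GWO) all search over subsets of the 1000 deep features, scoring each candidate subset by classifier accuracy. As a much simpler stand-in for that wrapper search, a greedy forward selection over the same kind of fitness function (the fitness here is a hypothetical toy, not the study's classifier):

```python
def greedy_feature_selection(feature_idx, fitness, k):
    """Forward selection: a simple stand-in for metaheuristic search
    (BA/WOA/GWO) over feature subsets. `fitness` maps a set of
    feature indices to a validation score to be maximized."""
    chosen = set()
    while len(chosen) < k:
        # add the single feature that most improves the score
        best = max((f for f in feature_idx if f not in chosen),
                   key=lambda f: fitness(chosen | {f}))
        chosen.add(best)
    return sorted(chosen)
```

The population-based algorithms explore many subsets in parallel and can escape the local optima this greedy loop gets stuck in, at the cost of more fitness evaluations.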
7.
Advances in Machine Learning. Electronics 2022. [DOI: 10.3390/electronics11091428] [Indexed: 11/16/2022]
Abstract
Since its inception as a branch of Artificial Intelligence, Machine Learning (ML) has flourished in recent years [...]