1
Jing Q, Dai X, Wang Z, Zhou Y, Shi Y, Yang S, Wang D. Fully automated deep learning model for detecting proximity of mandibular third molar root to inferior alveolar canal using panoramic radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:671-678. [PMID: 38614873] [DOI: 10.1016/j.oooo.2024.02.011]
Abstract
OBJECTIVE This study aimed to develop a novel, fully automated deep-learning model to determine the topographic relationship between mandibular third molar (MM3) roots and the inferior alveolar canal (IAC) using panoramic radiographs (PRs). STUDY DESIGN A total of 1570 eligible subjects with MM3s who had paired PR and cone beam computed tomography (CBCT) from January 2019 to December 2020 were retrospectively collected and randomly grouped into training (80%), validation (10%), and testing (10%) cohorts. The spatial relationship of MM3/IAC was assessed by CBCT and set as the ground truth. MM3-IACnet, a modified deep-learning network based on YOLOv5 (You Only Look Once version 5), was trained to detect MM3/IAC proximity on PRs. Its diagnostic performance was further compared with that of dentists, AlexNet, GoogleNet, VGG-16, ResNet-50, and YOLOv5 in another independent cohort of 100 high-risk MM3s, defined as roots overlapping the IAC on PR. RESULTS MM3-IACnet performed best in predicting MM3/IAC proximity, as evidenced by the highest accuracy (0.885), precision (0.899), and area under the curve (0.95), together with the shortest inference time among the compared models. Moreover, MM3-IACnet outperformed the other models in MM3/IAC risk prediction in high-risk cases. CONCLUSION The MM3-IACnet model can assist clinicians in MM3 risk assessment and treatment planning by detecting the MM3/IAC topographic relationship on PRs.
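The 80%/10%/10% cohort split described above is a standard random partition; a minimal sketch follows. The seed and the use of integer subject IDs are illustrative assumptions, not details from the paper.

```python
import random

def split_cohort(subjects, train=0.8, val=0.1, seed=42):
    """Randomly partition subjects into training/validation/testing cohorts."""
    shuffled = subjects[:]                 # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for reproducibility
    n_train = int(len(shuffled) * train)
    n_val = int(len(shuffled) * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# 1570 subjects, as in the study, yields cohorts of 1256 / 157 / 157.
train_set, val_set, test_set = split_cohort(list(range(1570)))
print(len(train_set), len(val_set), len(test_set))  # 1256 157 157
```

Splitting at the subject level (rather than the image level) avoids leaking paired images of one patient across cohorts.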
Affiliation(s)
- Qiuping Jing
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China PRC; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China PRC
- Xiubin Dai
- School of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, China; Smart Health Big Data Analysis and Location Services Engineering Research Center of Jiangsu Province, Nanjing University of Posts and Telecommunications, Nanjing, China
- Zhifan Wang
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China PRC; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China PRC
- Yanqi Zhou
- School of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, China; Smart Health Big Data Analysis and Location Services Engineering Research Center of Jiangsu Province, Nanjing University of Posts and Telecommunications, Nanjing, China
- Yijin Shi
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China PRC; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China PRC
- Shengjun Yang
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China PRC; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China PRC
- Dongmiao Wang
- Department of Oral and Maxillofacial Surgery, Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing, China PRC; Jiangsu Province Key Laboratory of Oral Disease, Nanjing Medical University, Jiangsu, China PRC; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Jiangsu, China PRC.
2
Tarimo SA, Jang MA, Ngasa EE, Shin HB, Shin H, Woo J. WBC YOLO-ViT: 2 Way - 2 stage white blood cell detection and classification with a combination of YOLOv5 and vision transformer. Comput Biol Med 2024; 169:107875. [PMID: 38154163] [DOI: 10.1016/j.compbiomed.2023.107875]
Abstract
Accurate detection and classification of white blood cells, otherwise known as leukocytes, play a critical role in diagnosing and monitoring various illnesses. However, conventional approaches, such as manual classification by trained professionals, are limited in accuracy and efficiency and are prone to bias. Moreover, applying deep-learning techniques to detect and classify white blood cells in microscopic images is challenging owing to limited data, resolution noise, irregular shapes, and varying colors across sources. This study presents a novel approach that integrates object detection and classification for numerous white blood cell types. We designed a 2-way approach that uses two types of images: whole WBC and nucleus. YOLO (fast object detection) and ViT (powerful image representation) are integrated to detect and classify 16 classes. The proposed model achieves an exceptional 96.449% classification accuracy.
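The two-stage design described above pairs a detector with a per-crop classifier. The sketch below shows only that plumbing; `detector` and `classifier` are toy stand-ins (assumptions for illustration), not the authors' YOLOv5 or ViT models.

```python
def two_stage_pipeline(image, detector, classifier):
    """Stage 1: locate candidate cells; stage 2: classify each cropped region."""
    results = []
    for box in detector(image):                  # boxes as (x0, y0, x1, y1)
        x0, y0, x1, y1 = box
        crop = [row[x0:x1] for row in image[y0:y1]]
        results.append((box, classifier(crop)))  # keep location + predicted class
    return results

# Toy stand-ins: one detection over the bright region, a threshold "classifier".
image = [[0, 0, 9, 9],
         [0, 0, 9, 9]]
detector = lambda img: [(2, 0, 4, 2)]
classifier = lambda crop: "neutrophil" if sum(map(sum, crop)) > 20 else "lymphocyte"
print(two_stage_pipeline(image, detector, classifier))
```

Decoupling the stages lets the detector stay class-agnostic while the classifier handles the fine-grained 16-way decision.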
Affiliation(s)
- Servas Adolph Tarimo
- Department of Future Convergence Technology, Soonchunhyang University, Asan, South Korea
- Mi-Ae Jang
- Department of Laboratory Medicine and Genetics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
- Emmanuel Edward Ngasa
- Department of Future Convergence Technology, Soonchunhyang University, Asan, South Korea
- Hee Bong Shin
- Department of Laboratory Medicine, Soonchunhyang University Bucheon Hospital, Bucheon, South Korea
- HyoJin Shin
- Department of ICT Convergence, Soonchunhyang University, Asan, South Korea
- Jiyoung Woo
- Department of ICT Convergence, Soonchunhyang University, Asan, South Korea
3
Dong F, Song J, Chen B, Xie X, Cheng J, Song J, Huang Q. Improved detection of aortic dissection in non-contrast-enhanced chest CT using an attention-based deep learning model. Heliyon 2024; 10:e24547. [PMID: 38304839] [PMCID: PMC10831773] [DOI: 10.1016/j.heliyon.2024.e24547]
Abstract
Rationale and objectives This study investigated the effects of implementing an attention-based deep learning model for the detection of aortic dissection (AD) using non-contrast-enhanced chest computed tomography (CT). Materials and methods We analysed the records of 1300 patients who underwent contrast-enhanced chest CT at 2 medical centres between January 2015 and February 2023. We considered an internal cohort of 200 patients with AD and 200 patients without AD and an external test cohort of 40 patients with AD and 40 patients without AD. The internal cohort was divided into training and test sets, and a deep learning model was trained using 9600 CT images. A convolutional block attention module (CBAM) and a traditional deep learning architecture (namely, You Only Look Once version 5 [YOLOv5]) were combined into an attention-based model (i.e., YOLOv5-CBAM). Its performance was measured against the unmodified YOLOv5 model, and the accuracy, sensitivity, and specificity of the algorithm were evaluated by two independent radiologists. Results The CBAM-based model outperformed the traditional deep learning model. In the external testing set, YOLOv5-CBAM achieved an area under the curve (AUC) of 0.938, accuracy of 91.5 %, sensitivity of 90.0 %, and specificity of 92.9 %, whereas the unmodified model achieved an AUC of 0.844, accuracy of 83.6 %, sensitivity of 71.2 %, and specificity of 96.0 %. The sensitivity results of the unmodified algorithms were not significantly different from those of the radiologists; however, the proposed YOLOv5-CBAM algorithm outperformed the unmodified algorithms in terms of detection. Conclusions Incorporating the CBAM attention mechanism into a deep learning model can significantly improve AD detection in non-contrast-enhanced chest CT. This approach may aid radiologists in the timely and accurate diagnosis of AD, which is important for improving patient outcomes.
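The accuracy, sensitivity, and specificity quoted above are standard confusion-matrix ratios. The helper below is a generic illustration, and the confusion counts are a hypothetical example chosen to approximately reproduce the external-cohort numbers (40 AD / 40 non-AD), not figures from the paper.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true-positive rate), and specificity (true-negative rate)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical external-cohort outcome: 36/40 dissections caught, 37/40 controls cleared.
acc, sens, spec = diagnostic_metrics(tp=36, fp=3, tn=37, fn=4)
print(f"acc={acc:.3f} sens={sens:.3f} spec={spec:.3f}")
```

For a screening task such as AD detection, sensitivity is usually the metric to protect, which is the trade-off the CBAM variant improves here (71.2% to 90.0%) at a small cost in specificity.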
Affiliation(s)
- Fenglei Dong
- Department of Radiology, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, 1111 East Section of Wenzhou Avenue, Longwan District, Wenzhou, China
- Jiao Song
- Department of Radiology, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, 1111 East Section of Wenzhou Avenue, Longwan District, Wenzhou, China
- Bo Chen
- Department of Radiology, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, 1111 East Section of Wenzhou Avenue, Longwan District, Wenzhou, China
- Xiaoxiao Xie
- Department of Radiology, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, 1111 East Section of Wenzhou Avenue, Longwan District, Wenzhou, China
- Jianmin Cheng
- Department of Radiology, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, 1111 East Section of Wenzhou Avenue, Longwan District, Wenzhou, China
- Jiawen Song
- Department of Radiology, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, 1111 East Section of Wenzhou Avenue, Longwan District, Wenzhou, China
- Qun Huang
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, No. 1 Fanhai West Road, Ouhai District, Wenzhou, China
4
Luo G, Li Z, Ge W, Ji Z, Qiao S, Pan S. Residual networks models detection of atrial septal defect from chest radiographs. La Radiologia Medica 2024; 129:48-55. [PMID: 38082195] [PMCID: PMC10808252] [DOI: 10.1007/s11547-023-01744-0]
Abstract
OBJECTIVE The purpose of this study was to explore a machine learning-based residual network (ResNet) model to detect atrial septal defect (ASD) on chest radiographs. METHODS This retrospective study included chest radiographs consecutively collected at our hospital from June 2017 to May 2022. Qualified chest radiographs were obtained from patients who had completed echocardiography. These chest radiographs were labeled as positive or negative for ASD based on the echocardiographic reports and were divided into training, validation, and test datasets. Six ResNet models were trained and compared on the training dataset and tuned on the validation dataset. The area under the curve, recall, precision, and F1-score were taken as the evaluation metrics for classification on the test dataset, and regions of interest for the ResNet models were visualized using heat maps. RESULTS This study included a total of 2105 chest radiographs of children with ASD (mean age 4.14 ± 2.73 years, 54% male); patients were randomly assigned to training, validation, and test datasets at an 8:1:1 ratio. Healthy children's images were added to the three datasets at a 1:1 ratio with ASD patients. After training, ResNet-10t and ResNet-18D had the best performance, with precision, recall, accuracy, F1-score, and area under the curve of (0.92, 0.93), (0.91, 0.91), (0.90, 0.90), (0.91, 0.91), and (0.97, 0.96), respectively. Compared with ResNet-18D, ResNet-10t produced heat maps more focused on the region of interest for most chest radiographs from ASD patients. CONCLUSION The ResNet model is feasible for identifying ASD from children's chest radiographs. ResNet-10t stands out as the preferable model, providing exceptional performance and clear interpretability.
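The F1-score reported above is the harmonic mean of precision and recall; with ResNet-10t's reported precision (0.92) and recall (0.91) it reproduces the stated 0.91. A one-line helper:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.92, 0.91), 2))  # 0.91
```

The harmonic mean penalizes imbalance, so a model cannot inflate F1 by trading all of one metric for the other.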
Affiliation(s)
- Gang Luo
- Heart Center, Women and Children's Hospital, Qingdao University, 6, Tongfu Road, Qingdao, 266034, China
- Zhixin Li
- Heart Center, Women and Children's Hospital, Qingdao University, 6, Tongfu Road, Qingdao, 266034, China
- Wen Ge
- Department of Radiology, Women and Children's Hospital, Qingdao University, Qingdao, 266034, China
- Zhixian Ji
- Heart Center, Women and Children's Hospital, Qingdao University, 6, Tongfu Road, Qingdao, 266034, China
- Sibo Qiao
- The School of Computer Science and Technology, China University of Petroleum, Qingdao, 266580, China
- Silin Pan
- Heart Center, Women and Children's Hospital, Qingdao University, 6, Tongfu Road, Qingdao, 266034, China
5
Healthcare Engineering JO. Retracted: Intelligent Solutions in Chest Abnormality Detection Based on YOLOv5 and ResNet50. Journal of Healthcare Engineering 2023; 2023:9813092. [PMID: 37946893] [PMCID: PMC10631970] [DOI: 10.1155/2023/9813092]
Abstract
[This retracts the article DOI: 10.1155/2021/2267635.].
6
Yang Y, Wu B, Wu H, Xu W, Lyu G, Liu P, He S. Classification of normal and abnormal fetal heart ultrasound images and identification of ventricular septal defects based on deep learning. J Perinat Med 2023; 51:1052-1058. [PMID: 37161929] [DOI: 10.1515/jpm-2023-0041]
Abstract
OBJECTIVES Congenital heart defects (CHDs) are the most common birth defects. Recently, artificial intelligence (AI) has been used to assist in CHD diagnosis, but no comparison has been made among the various types of algorithms that can assist in prenatal diagnosis. METHODS Normal and abnormal fetal ultrasound heart images, including five standard views, were collected according to the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) practice guidelines. You Only Look Once version 5 (YOLOv5) models were trained and tested, and the best model was selected after comparing YOLOv5 with other classic detection methods. RESULTS On the training set, YOLOv5n performed slightly better than the others. On the validation set, YOLOv5n attained the highest overall accuracy (90.67 %). On the CHD test set, YOLOv5n, which needed only 0.007 s to recognize each image, had the highest overall accuracy (82.93 %), and YOLOv5l achieved the best accuracy on the abnormal dataset (71.93 %). On the VSD test set, YOLOv5l had the best performance, with a 92.79 % overall accuracy rate and 92.59 % accuracy on the abnormal dataset. The YOLOv5 models achieved better performance than the Fast region-based convolutional neural network (RCNN) & ResNet50 model and the Fast RCNN & MobileNetv2 model on the CHD test set (p<0.05) and the VSD test set (p<0.01). CONCLUSIONS YOLOv5 models can accurately distinguish normal and abnormal fetal heart ultrasound images, especially with respect to the identification of VSD, and have the potential to assist ultrasound-based prenatal diagnosis.
Affiliation(s)
- Yiru Yang
- The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China
- Bingzheng Wu
- College of Engineering, Huaqiao University, Quanzhou, Fujian, P.R. China
- Huiling Wu
- The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China
- Wu Xu
- The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China
- Guorong Lyu
- The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China
- Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou, Fujian, P.R. China
- Peizhong Liu
- College of Engineering, Huaqiao University, Quanzhou, Fujian, P.R. China
- Shaozheng He
- The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian, P.R. China
7
Souid A, Alsubaie N, Soufiene BO, Alqahtani MS, Abbas M, Jambi LK, Sakli H. Improving diagnosis accuracy with an intelligent image retrieval system for lung pathologies detection: a features extractor approach. Sci Rep 2023; 13:16619. [PMID: 37789095] [PMCID: PMC10547797] [DOI: 10.1038/s41598-023-42366-w]
Abstract
Detecting lung pathologies is critical for precise medical diagnosis. In the realm of diagnostic methods, various approaches, including imaging tests, physical examinations, and laboratory tests, contribute to this process. Of particular note, imaging techniques like X-rays, CT scans, and MRI scans play a pivotal role in identifying lung pathologies with their non-invasive insights. Deep learning, a subset of artificial intelligence, holds significant promise in revolutionizing the detection and diagnosis of lung pathologies. By leveraging expansive datasets, deep-learning algorithms autonomously discern intricate patterns and features within medical images, such as chest X-rays and CT scans, and exhibit an exceptional capacity to recognize subtle markers of lung disease. Yet, while their potential is evident, inherent limitations persist: the demand for abundant labeled training data and the susceptibility to data biases challenge their accuracy. To address these challenges, this research introduces a tailored computer-assisted system designed for the automatic retrieval of annotated medical images that share similar content. At its core lies an intelligent deep learning-based feature extractor, adept at simplifying the retrieval of analogous images from an extensive chest radiograph database. The crux of the innovation rests upon the fusion of YOLOv5 and EfficientNet within the feature-extractor module, combining YOLOv5's rapid and efficient object detection with EfficientNet's robustness against noisy predictions. Through experimentation on an extensive and diverse dataset, the proposed solution surpasses conventional methodologies. The model's mean average precision of 0.488 at a threshold of 0.9 stands as a testament to its effectiveness, overshadowing YOLOv5 + ResNet and EfficientDet, which achieved 0.234 and 0.257, respectively. Furthermore, the model demonstrates a marked precision improvement, attaining 0.864 across all pathologies, a leap of approximately 0.352 over YOLOv5 + ResNet and EfficientDet. This research presents a significant stride toward enhancing radiologists' workflow efficiency, offering a refined tool for retrieving analogous annotated medical images.
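The mean-average-precision figures quoted above rest on an intersection-over-union (IoU) criterion for matching predicted boxes to ground truth. A minimal IoU helper (a generic illustration, not the authors' code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)  # zero if boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 4))  # 0.1429
```

A detection counts as a true positive only when its IoU with a ground-truth box clears the chosen threshold, which is why mAP drops sharply at strict thresholds such as 0.9.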
Affiliation(s)
- Abdelbaki Souid
- MACS Research Laboratory RL16ES22, National Engineering School of Gabes, Gabes, Tunisia
- Najah Alsubaie
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Ben Othman Soufiene
- PRINCE Laboratory Research, ISITcom, University of Sousse, Hammam Sousse, Tunisia
- Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, 61421, Abha, Saudi Arabia
- BioImaging Unit, Space Research Centre, Michael Atiyah Building, University of Leicester, Leicester, LE17RH, UK
- Mohamed Abbas
- Electrical Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
- Layal K Jambi
- Radiological Sciences Department, College of Applied Medical Sciences, King Saud University, P.O. Box 10219, 11433, Riyadh, Saudi Arabia
- Hedi Sakli
- MACS Research Laboratory RL16ES22, National Engineering School of Gabes, Gabes, Tunisia
- EITA Consulting, 5 Rue Du Chant Des Oiseaux, 78360, Montesson, France
8
Sakashita S, Sakamoto N, Kojima M, Taki T, Miyazaki S, Minakata N, Sasabe M, Kinoshita T, Ishii G, Ochiai A. Requirement of image standardization for AI-based macroscopic diagnosis for surgical specimens of gastric cancer. J Cancer Res Clin Oncol 2023; 149:6467-6477. [PMID: 36773090] [DOI: 10.1007/s00432-022-04570-5]
Abstract
PURPOSE The pathological diagnosis of surgically resected gastric cancer involves both a macroscopic diagnosis by gross observation and a microscopic diagnosis by microscopy. Macroscopic diagnosis determines the location and stage of the disease and the involvement of other organs and surgical margin. Lesion recognition is, thus, an important diagnostic step that requires a skilled pathologist. Nonetheless, artificial intelligence (AI) technologies could allow even inexperienced doctors and laboratory technicians to examine surgically resected specimens without the need for pathologists. However, organ imaging conditions vary across hospitals, and an AI algorithm created in one setting may not work properly in another. Thus, we identified and standardized factors affecting the quality of pathological macroscopic images, which could further affect lesion identification using AI. METHODS We examined necessary image standardization for developing cancer detection AI for surgically resected gastric cancer by changing the following imaging conditions: focus, resolution, brightness, and contrast. RESULTS Regarding focus, brightness, and contrast, the farther away the test data were from the training macro-image, the less likely the inference was to be correct. Little change was observed for resolution, even with differing conditions for the training and test data. Regarding focus, brightness, and contrast, there were conditions appropriate for AI. Contrast, in particular, was far from the conditions appropriate for humans. CONCLUSION Standardizing focus, brightness, and contrast is important in the development of AI methodologies for lesion detection in surgically resected gastric cancer. This standardization is essential for AI to be implemented across hospitals.
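A minimal version of the brightness/contrast standardization the authors call for might look like the sketch below. The linear gain/offset parameterization and the target mean/contrast values are assumptions for illustration, not the study's protocol.

```python
def standardize(pixels, target_mean=128.0, target_std=48.0):
    """Linearly rescale 8-bit pixel values to a fixed mean (brightness)
    and standard deviation (contrast), clamping to the valid range."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5 or 1.0  # avoid /0
    gain = target_std / std
    return [min(255, max(0, round(target_mean + gain * (p - mean)))) for p in pixels]

out = standardize([10, 20, 30])
print(out)  # [69, 128, 187]
```

Applying one fixed target across hospitals is the point: images acquired under different lighting then land in the same intensity regime before the model sees them.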
Affiliation(s)
- Shingo Sakashita
- Division of Pathology, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center, Kashiwa, Chiba, Japan
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Naoya Sakamoto
- Division of Pathology, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center, Kashiwa, Chiba, Japan
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Motohiro Kojima
- Division of Pathology, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center, Kashiwa, Chiba, Japan
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Tetsuro Taki
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Saori Miyazaki
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Nobuhisa Minakata
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Maasa Sasabe
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Takahiro Kinoshita
- Department of Gastric Surgery, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Genichiro Ishii
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Chiba, Japan
- Atsushi Ochiai
- National Cancer Center, Kashiwa, Chiba, Japan
- Research Institute for Biomedical Sciences, Tokyo University of Science, Noda, Chiba, Japan
9
Zhou LX, Xia Y, Dai R, Liu AR, Zhu SW, Shi P, Song W, Yuan XC. Non-uniform image reconstruction for fast photoacoustic microscopy of histology imaging. Biomedical Optics Express 2023; 14:2080-2090. [PMID: 37206133] [PMCID: PMC10191656] [DOI: 10.1364/boe.487622]
Abstract
Photoacoustic microscopic imaging utilizes the characteristic optical absorption properties of pigmented materials in tissues to enable label-free observation of fine morphological and structural features. Since DNA/RNA can strongly absorb ultraviolet light, ultraviolet photoacoustic microscopy can highlight the cell nucleus without complicated sample preparations such as staining, which is comparable to the standard pathological images. Further improvements in the imaging acquisition speed are critical to advancing the clinical translation of photoacoustic histology imaging technology. However, improving the imaging speed with additional hardware is hampered by considerable costs and complex design. In this work, considering heavy redundancy in the biological photoacoustic images that overconsume the computing power, we propose an image reconstruction framework called non-uniform image reconstruction (NFSR), which exploits an object detection network to reconstruct low-sampled photoacoustic histology images into high-resolution images. The sampling speed of photoacoustic histology imaging is significantly improved, saving 90% of the time cost. Furthermore, NFSR focuses on the reconstruction of the region of interest while maintaining high PSNR and SSIM evaluation indicators of more than 99% but reducing the overall computation by 60%.
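PSNR, one of the two fidelity indicators quoted above, compares reconstruction error against the peak pixel value. A minimal implementation (a generic sketch over flat pixel lists, not the authors' code):

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# A uniform error of 16 gray levels on an 8-bit image gives roughly 24 dB.
print(round(psnr([100, 120, 140], [116, 104, 156]), 2))  # 24.05
```

Because PSNR is dominated by mean-squared error, concentrating reconstruction effort on the region of interest, as NFSR does, preserves the score where it matters while skipping redundant background.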
Affiliation(s)
- Ling Xiao Zhou
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Yu Xia
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Renxiang Dai
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- An Ran Liu
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Si Wei Zhu
- The Institute of Translational Medicine, Tianjin Union Medical Center of Nankai University, Tianjin, 300121, China
- Peng Shi
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Wei Song
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Xiao Cong Yuan
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
10
Application of Artificial Intelligence in Anatomical Structure Recognition of Standard Section of Fetal Heart. Computational and Mathematical Methods in Medicine 2023; 2023:5650378. [PMID: 36733613] [PMCID: PMC9889146] [DOI: 10.1155/2023/5650378]
Abstract
Congenital heart defect (CHD) refers to an overall structural abnormality of the heart or the large blood vessels in the chest cavity. It is the most common type of fetal congenital defect, and prenatal diagnosis of congenital heart disease can improve the prognosis of the fetus to a certain extent. At present, prenatal diagnosis of CHD mainly uses 2D ultrasound to directly evaluate the development and function of the fetal heart and its main structures in the second trimester of pregnancy. Manual recognition of fetal heart 2D ultrasound is a highly complex and tedious task, which requires a long period of training and practical experience. Compared with manual scanning, computer-based automatic identification and classification can significantly save time, ensure efficiency, and improve the accuracy of diagnosis. In this paper, an effective artificial intelligence recognition model is established by combining ultrasound images with artificial intelligence technology to assist ultrasound doctors in recognizing standard sections of the prenatal fetal heart. The data were obtained from the Second Affiliated Hospital of Fujian Medical University. The fetal apical four-chamber heart section, three-vessel ductus section, three-vessel trachea section, right ventricular outflow tract section, and left ventricular outflow tract section were collected at 20-24 weeks of gestation; 2687 images were used for model establishment, and 673 images were used for model validation. The experiment shows that the mean average precision (mAP) of this method in identifying different anatomical structures reaches 94.30%, the average precision rate reaches 94.60%, the average recall rate reaches 91.0%, and the average F1-score reaches 93.40%. The experimental results show that this method can effectively identify the anatomical structures of different fetal heart sections and judge standard sections from these anatomical structures, which can provide an auxiliary diagnostic basis for ultrasound doctors during scanning and lay a solid foundation for the diagnosis of congenital heart disease.
11
Bandari E, Beuzen T, Habashy L, Raza J, Yang X, Kapeluto J, Meneilly G, Madden K. Machine Learning Decision Support for Bedside Ultrasound to Detect Lipohypertrophy. JMIR Form Res 2022; 6:e34830. [PMID: 35404833] [PMCID: PMC9123536] [DOI: 10.2196/34830]
Abstract
BACKGROUND The most common dermatological complication of insulin therapy is lipohypertrophy. OBJECTIVE As a proof-of-concept, we built and tested an automated model using a convolutional neural network (CNN) to detect the presence of lipohypertrophy in ultrasound images. METHODS Ultrasound images were obtained in a blinded fashion using a portable GE LOGIQe machine with an L8-18i-D probe (5-18 MHz; GE Healthcare, Frankfurt, Germany). The data was split into train, validation and test splits of 70%, 15%, and 15% respectively. Given the small size of the dataset, image augmentation techniques were used to expand the size of the training set and improve the model's generalizability. To compare the performance of the different architectures, the team considered the accuracy and recall of the models when tested on our test set. RESULTS The DenseNet CNN architecture was found to have the highest accuracy (76%) and recall (76%) in detecting lipohypertrophy in ultrasound images, when compared to other CNN architectures. Additional work showed that the YOLOv5m object detection model could be used to help identify the approximate location of lipohypertrophy in ultrasound images identified as containing lipohypertrophy by the DenseNet CNN. CONCLUSIONS We were able to demonstrate the ability of machine learning approaches to automate the process of detecting and locating lipohypertrophy.
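Image augmentation of the kind described above expands a small training set with label-preserving transforms. The toy horizontal-flip helper below (operating on images stored as lists of pixel rows) is an illustrative assumption, not the study's augmentation pipeline.

```python
def hflip(image):
    """Horizontally mirror an image stored as a list of pixel rows."""
    return [row[::-1] for row in image]

def augment(images):
    """Double a dataset by adding a mirrored copy of every image."""
    return images + [hflip(img) for img in images]

dataset = [[[1, 2], [3, 4]]]
print(augment(dataset))  # [[[1, 2], [3, 4]], [[2, 1], [4, 3]]]
```

Crucially, augmentation is applied only to the training split; augmenting validation or test images would leak near-duplicates across the evaluation boundary.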
Affiliation(s)
- Ela Bandari
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Tomas Beuzen
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Lara Habashy
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Javairia Raza
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Xudong Yang
- Master's in Data Science Program, University of British Columbia, Vancouver, CA
- Jordanna Kapeluto
- Gerontology and Diabetes Research Laboratory, University of British Columbia, 828 West 10th Avenue, Vancouver, CA; Division of Endocrinology, Department of Medicine, University of British Columbia, Vancouver, CA
- Graydon Meneilly
- Gerontology and Diabetes Research Laboratory, University of British Columbia, 828 West 10th Avenue, Vancouver, CA; Division of Geriatric Medicine, Department of Medicine, University of British Columbia, Gordon and Leslie Diamond Health Care Centre, 2775 Laurel Street, Vancouver, CA
- Kenneth Madden
- Gerontology and Diabetes Research Laboratory, University of British Columbia, 828 West 10th Avenue, Vancouver, CA; Division of Geriatric Medicine, Department of Medicine, University of British Columbia, Gordon and Leslie Diamond Health Care Centre, 2775 Laurel Street, Vancouver, CA; Centre for Hip Health and Mobility, Vancouver, CA
|
12
|
Detection and Characterization of Stressed Sweet Cherry Tissues Using Machine Learning. DRONES 2021. [DOI: 10.3390/drones6010003] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
Recent technological developments in the primary sector and machine learning algorithms allow the combined application of many promising solutions in precision agriculture. For example, the YOLOv5 (You Only Look Once) and ResNet deep learning architectures provide high-precision, real-time identification of objects. The advent of datasets captured from different perspectives provides multiple benefits, such as a spherical view of objects, increased information, and inference results from the detection of multiple objects per image. However, it also raises crucial obstacles, such as concerns about total identifications (ground truths) and processing, that can lead to devastating consequences, including false-positive detections, other erroneous conclusions, or even the inability to extract results. This paper presents experimental results from the machine learning algorithm (YOLOv5) on a novel dataset of perennial fruit crops, such as sweet cherries, aiming to enhance the resiliency of precision agriculture. Detection is oriented toward two points of interest: (a) infected leaves and (b) infected branches. It is noteworthy that infected leaves or branches indicate stress, which may be due either to a disease (e.g., Armillaria for sweet cherry trees) or to other factors (e.g., water shortage). Correspondingly, the foliage of a tree shows symptoms that indicate the stage of the disease.
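Evaluating a detector like YOLOv5 against ground-truth annotations for the two classes above hinges on intersection-over-union (IoU) box matching. The sketch below shows the standard IoU computation and a minimal class-aware matching loop; the class labels, threshold, and matching strategy are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of IoU-based matching of YOLO-style detections
# ("infected_leaf" / "infected_branch") against ground-truth boxes.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, the
    standard overlap measure for matching detections to ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def match_detections(preds, truths, threshold=0.5):
    """Count true positives: each ground-truth (class, box) may be
    claimed by at most one same-class prediction with IoU >= threshold."""
    matched = set()
    tp = 0
    for cls, box in preds:
        for i, (t_cls, t_box) in enumerate(truths):
            if i in matched or cls != t_cls:
                continue
            if iou(box, t_box) >= threshold:
                matched.add(i)
                tp += 1
                break
    return tp
```

Requiring the class to match as well as the box is what separates a correct "infected branch" detection from a false positive that merely overlaps an infected leaf.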
|