1
Yang J, Tian J, Miao J, Chen Y. Adaptive loss-guided multi-stage residual ASPP for lesion segmentation and disease detection in cucumber under complex backgrounds. BMC Bioinformatics 2024; 25:262. [PMID: 39118026] [PMCID: PMC11312732] [DOI: 10.1186/s12859-024-05890-8]
Abstract
BACKGROUND In complex agricultural environments, shadows, leaf debris, and uneven illumination can hinder the performance of leaf segmentation models for cucumber disease detection. This is further exacerbated by the imbalance in pixel ratios between background and lesion areas, which affects the accuracy of lesion extraction. RESULTS We propose an original image segmentation framework, the LS-ASPP model, which uses a two-stage Atrous Spatial Pyramid Pooling (ASPP) approach combined with an adaptive loss to address these challenges. The Leaf-ASPP stage employs attention modules and residual structures to capture multi-scale semantic information and enhance edge perception, allowing precise extraction of leaf contours from complex backgrounds. In the Spot-ASPP stage, we adjust the dilation rate of the ASPP and introduce a Convolutional Attention Block Module (CABM) to accurately segment lesion areas. CONCLUSIONS The LS-ASPP model demonstrates improved semantic segmentation accuracy under complex conditions, providing a robust solution for precise cucumber lesion segmentation. By focusing on challenging pixels and adapting to the specific requirements of agricultural image analysis, our framework has the potential to improve disease detection accuracy and support timely and effective crop management decisions.
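As background on the ASPP building block named in this abstract: an atrous (dilated) convolution reads its input with taps spaced by the dilation rate, so raising the rate widens the receptive field without adding parameters. A minimal 1-D sketch in plain Python (illustrative only; the authors' networks operate on 2-D feature maps):

```python
def dilated_conv1d(signal, kernel, rate):
    """1-D atrous convolution with 'valid' padding: kernel taps are
    spaced `rate` samples apart, widening the receptive field."""
    span = (len(kernel) - 1) * rate + 1   # effective receptive field
    out = []
    for i in range(len(signal) - span + 1):
        taps = signal[i : i + span : rate]
        out.append(sum(t * w for t, w in zip(taps, kernel)))
    return out

signal = list(range(8))                        # [0, 1, ..., 7]
kernel = [1, 1, 1]
print(dilated_conv1d(signal, kernel, rate=1))  # [3, 6, 9, 12, 15, 18]
print(dilated_conv1d(signal, kernel, rate=2))  # [6, 9, 12, 15]
```

An ASPP head runs several such convolutions with different rates in parallel and fuses the results, which is why adjusting the rates (as in the Spot-ASPP stage) changes how much context each branch sees.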
Affiliation(s)
- Jie Yang: School of Information Engineering, Xinjiang University of Technology, Xinjiang, 843100, China
- Jiya Tian: School of Information Engineering, Xinjiang University of Technology, Xinjiang, 843100, China
- Jinchao Miao: School of Information Engineering, Xinjiang University of Technology, Xinjiang, 843100, China
- Yunsheng Chen: School of Information Engineering, Xinjiang University of Technology, Xinjiang, 843100, China
2
Li H, Chen G, Zhang L, Xu C, Wen J. A review of psoriasis image analysis based on machine learning. Front Med (Lausanne) 2024; 11:1414582. [PMID: 39170035] [PMCID: PMC11337201] [DOI: 10.3389/fmed.2024.1414582]
Abstract
Machine Learning (ML), an Artificial Intelligence (AI) technique that includes both Traditional Machine Learning (TML) and Deep Learning (DL), aims to teach machines to automatically learn tasks by inferring patterns from data. It holds significant promise in aiding medical care and has become increasingly important in improving professional processes, particularly in the diagnosis of psoriasis. This paper presents the findings of a systematic literature review focusing on the research and application of ML in psoriasis analysis over the past decade. We summarized 53 publications by searching the Web of Science, PubMed and IEEE Xplore databases and classified them into three categories: (i) lesion localization and segmentation; (ii) lesion recognition; (iii) lesion severity and area scoring. We have presented the most common models and datasets for psoriasis analysis, discussed the key challenges, and explored future trends in ML within this field. Our aim is to suggest directions for subsequent research.
Affiliation(s)
- Huihui Li: School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Guangjie Chen: School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Li Zhang: The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China; Department of Dermatology, Guangdong Second Provincial General Hospital, Guangzhou, China
- Chunlin Xu: School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Ju Wen: The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China; Department of Dermatology, Guangdong Second Provincial General Hospital, Guangzhou, China
3
Liao AH, Wang CH, Wang CY, Liu HL, Chuang HC, Tseng WJ, Weng WC, Shih CP, Tsui PH. Computer-Aided Diagnosis of Duchenne Muscular Dystrophy Based on Texture Pattern Recognition on Ultrasound Images Using Unsupervised Clustering Algorithms and Deep Learning. Ultrasound Med Biol 2024; 50:1058-1068. [PMID: 38637169] [DOI: 10.1016/j.ultrasmedbio.2024.03.022]
Abstract
OBJECTIVE The feasibility of using deep learning on ultrasound images to predict the ambulatory status of patients with Duchenne muscular dystrophy (DMD) was explored for the first time in a previous study. The present study further used clustering algorithms for the texture reconstruction of ultrasound images in DMD data sets and analyzed the difference in echo intensity between disease stages. METHODS k-means (Kms) and fuzzy c-means (FCM) clustering algorithms were used to reconstruct the DMD data-set textures. Each image was reconstructed using seven texture-feature categories, six of which were used as the primary analysis items. Machine-learning models were established to automatically identify ambulatory function and DMD severity. RESULTS The experimental results indicated that the Gaussian Naïve Bayes and k-nearest neighbors classification models achieved an accuracy of 86.78% in ambulatory-function classification. The decision-tree model achieved an accuracy of 83.80% in severity classification. A deep convolutional neural network was then established as the main structure of the deep-learning model for the same auxiliary interpretation tasks, and data augmentation was used to improve the recognition performance of the trained model. Both the visual geometry group (VGG)-16 and VGG-19 models achieved 98.53% accuracy in ambulatory-function classification. The VGG-19 model achieved 92.64% accuracy in severity classification. CONCLUSION Overall, the Kms and FCM clustering algorithms used in this study to reconstruct the characteristic texture of the gastrocnemius muscle group in DMD were indeed helpful in quantitatively analyzing its deterioration in patients at different stages. Combining machine-learning and deep-learning technologies can automatically and accurately assist in identifying DMD symptoms and tracking deterioration for long-term observation.
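For orientation, the k-means step described above groups pixels by echo intensity. A toy 1-D version on made-up intensity values (an illustration of the algorithm only, not the study's implementation):

```python
def kmeans_1d(values, centers, iters=10):
    """Plain k-means on scalar intensities: assign each value to the
    nearest center, then move each center to its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated echo-intensity groups converge to their means:
echo = [10, 12, 11, 200, 198, 202]
centers, clusters = kmeans_1d(echo, centers=[0.0, 255.0])
print(centers)    # [11.0, 200.0]
```

Fuzzy c-means, the other algorithm used in the study, differs mainly in replacing these hard assignments with soft (weighted) memberships.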
Affiliation(s)
- Ai-Ho Liao: Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan; Department of Biomedical Engineering, National Defense Medical Center, Taipei, Taiwan
- Chih-Hung Wang: Division of Otolaryngology, Taipei Veterans General Hospital, Taoyuan Branch, Taoyuan, Taiwan; Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei, Taiwan; Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Chong-Yu Wang: Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hao-Li Liu: Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
- Ho-Chiao Chuang: Department of Mechanical Engineering, National Taipei University of Technology, Taipei, Taiwan
- Wei-Jye Tseng: Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Wen-Chin Weng: Department of Pediatrics, National Taiwan University Hospital, and College of Medicine, National Taiwan University, Taipei, Taiwan; Department of Pediatric Neurology, National Taiwan University Children's Hospital, Taipei, Taiwan
- Cheng-Ping Shih: Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Po-Hsiang Tsui: Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan; Research Center for Radiation Medicine, Chang Gung University, Taoyuan, Taiwan
4
Yu H, Yang Z, Zhang Z, Wang T, Ran M, Wang Z, Liu L, Liu Y, Zhang Y. Multiple organ segmentation framework for brain metastasis radiotherapy. Comput Biol Med 2024; 177:108637. [PMID: 38824789] [DOI: 10.1016/j.compbiomed.2024.108637]
Abstract
Radiotherapy is a preferred treatment for brain metastases: it kills cancer cells with high doses of radiation while sparing surrounding healthy cells as far as possible. Therefore, the delineation of organs-at-risk (OARs) is vital in treatment planning to minimize radiation-induced toxicity. However, the following aspects make OAR delineation challenging: extremely imbalanced organ sizes, ambiguous boundaries, and complex anatomical structures. To alleviate these challenges, we imitate how specialized clinicians delineate OARs and present a novel cascaded multi-OAR segmentation framework, called OAR-SegNet. OAR-SegNet comprises two distinct levels of segmentation networks: an Anatomical-Prior-Guided network (APG-Net) and a Point-Cloud-Guided network (PCG-Net). Specifically, APG-Net handles segmentation for all organs, with multi-view segmentation modules and a deep prior loss designed under the guidance of prior knowledge. After APG-Net, PCG-Net refines small organs through mini-segmentation and point-cloud alignment heads. The mini-segmentation head is further equipped with the deep prior feature. Extensive experiments demonstrate the superior performance of the proposed method compared to other state-of-the-art medical segmentation methods.
Affiliation(s)
- Hui Yu: College of Computer Science, Sichuan University, China
- Ziyuan Yang: College of Computer Science, Sichuan University, China
- Tao Wang: College of Computer Science, Sichuan University, China
- Maoson Ran: College of Computer Science, Sichuan University, China
- Zhiwen Wang: College of Computer Science, Sichuan University, China
- Lunxin Liu: Department of Neurosurgery, West China Hospital of Sichuan University, China
- Yan Liu: College of Electrical Engineering, Sichuan University, China
- Yi Zhang: School of Cyber Science and Engineering, Sichuan University, China
5
Li Pomi F, Papa V, Borgia F, Vaccaro M, Pioggia G, Gangemi S. Artificial Intelligence: A Snapshot of Its Application in Chronic Inflammatory and Autoimmune Skin Diseases. Life (Basel) 2024; 14:516. [PMID: 38672786] [PMCID: PMC11051135] [DOI: 10.3390/life14040516]
Abstract
Immuno-correlated dermatological pathologies are skin disorders closely associated with immune system dysfunction or abnormal immune responses. Advancements in artificial intelligence (AI) have shown promise in enhancing the diagnosis, management, and assessment of these pathologies. This intersection of dermatology and immunology plays a pivotal role in understanding and addressing complex skin disorders with immune system involvement. The paper explores the current state of knowledge and the evolution and achievements of AI in diagnosis; discusses the segmentation and classification of medical images; and reviews existing challenges in immunology-related skin diseases. From our review, the role of AI has emerged especially in the analysis of images for both diagnostic and severity-assessment purposes. Furthermore, the possibility of predicting patients' response to therapies is emerging, opening the way to tailored therapies.
Affiliation(s)
- Federica Li Pomi: Department of Precision Medicine in Medical, Surgical and Critical Care (Me.Pre.C.C.), University of Palermo, 90127 Palermo, Italy
- Vincenzo Papa: Department of Clinical and Experimental Medicine, School and Operative Unit of Allergy and Clinical Immunology, University of Messina, 98125 Messina, Italy
- Francesco Borgia: Department of Clinical and Experimental Medicine, Section of Dermatology, University of Messina, 98125 Messina, Italy
- Mario Vaccaro: Department of Clinical and Experimental Medicine, Section of Dermatology, University of Messina, 98125 Messina, Italy
- Giovanni Pioggia: Institute for Biomedical Research and Innovation (IRIB), National Research Council of Italy (CNR), 98164 Messina, Italy
- Sebastiano Gangemi: Department of Clinical and Experimental Medicine, School and Operative Unit of Allergy and Clinical Immunology, University of Messina, 98125 Messina, Italy
6
Zhao Y, Zhou X, Pan T, Gao S, Zhang W. Correspondence-based Generative Bayesian Deep Learning for semi-supervised volumetric medical image segmentation. Comput Med Imaging Graph 2024; 113:102352. [PMID: 38341947] [DOI: 10.1016/j.compmedimag.2024.102352]
Abstract
Automated medical image segmentation plays a crucial role in diverse clinical applications. The high annotation costs of fully-supervised medical segmentation methods have spurred a growing interest in semi-supervised methods. Existing semi-supervised medical segmentation methods train the teacher segmentation network using labeled data to establish pseudo labels for unlabeled data. The quality of these pseudo labels is constrained as these methods fail to effectively address the significant bias in the data distribution learned from the limited labeled data. To address these challenges, this paper introduces an innovative Correspondence-based Generative Bayesian Deep Learning (C-GBDL) model. Built upon the teacher-student architecture, we design a multi-scale semantic correspondence method to aid the teacher model in generating high-quality pseudo labels. Specifically, our teacher model, embedded with the multi-scale semantic correspondence, learns a better-generalized data distribution from input volumes by feature matching with the reference volumes. Additionally, a double uncertainty estimation schema is proposed to further rectify the noisy pseudo labels. The double uncertainty estimation takes the predictive entropy as the first uncertainty estimation and takes the structural similarity between the input volume and its corresponding reference volumes as the second uncertainty estimation. Four groups of comparative experiments conducted on two public medical datasets demonstrate the effectiveness and the superior performance of our proposed model. Our code is available on https://github.com/yumjoo/C-GBDL.
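As a side note on the first of the two uncertainty terms above: predictive entropy is the Shannon entropy of the per-voxel class probabilities, highest when the classes are equiprobable. A minimal sketch (illustrative; not the released C-GBDL code):

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a class-probability vector:
    0 for a confident prediction, log(K) for a uniform one over K classes."""
    return sum(-p * math.log(p) for p in probs if p > 0.0)

print(predictive_entropy([1.0, 0.0]))   # 0.0 (certain voxel)
print(predictive_entropy([0.5, 0.5]))   # log(2) ~ 0.693 (maximally uncertain)
```

Voxels whose entropy exceeds a threshold can then be down-weighted or masked out of the pseudo label, which is the role this term plays in rectifying noisy pseudo labels.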
Affiliation(s)
- Yuzhou Zhao: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Xinyu Zhou: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Tongxin Pan: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Shuyong Gao: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China
- Wenqiang Zhang: Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai, China; Shanghai Engineering Research Center of AI & Robotics, Academy for Engineering and Technology, Fudan University, Shanghai, China
7
Czajkowska J, Juszczyk J, Bugdol MN, Glenc-Ambroży M, Polak A, Piejko L, Pietka E. High-frequency ultrasound in anti-aging skin therapy monitoring. Sci Rep 2023; 13:17799. [PMID: 37853086] [PMCID: PMC10584894] [DOI: 10.1038/s41598-023-45126-y]
Abstract
Over the last few decades, high-frequency ultrasound (HFUS) has found multiple applications in various diagnostic fields. The fast development of this imaging technique opens up new diagnostic paths in dermatology, allergology, cosmetology, and aesthetic medicine. In this paper, the first in this area, we discuss the usability of HFUS in anti-aging skin therapy assessment. The fully automated algorithm, combining high-quality image selection and entry echo layer segmentation steps followed by dermal parameter estimation, enables qualitative and quantitative evaluation of the effectiveness of anti-aging products. Considering the parameters of subcutaneous layers, the proposed framework provides a reliable tool for TCA-peel therapy assessment; however, it can also be applied to other skin-condition-related problems. In this randomized controlled clinical trial, forty-six postmenopausal women were randomly assigned to experimental and control groups. Women were treated four times at one-week intervals and applied skin cream daily between visits. A three-month follow-up study enabled measurement of the long-term effect of the therapy. According to the results, the TCA-based therapy increased epidermal (entry echo layer) thickness, indicating that the thinning process has slowed down and the skin's condition has improved. An interesting outcome is the observed increase in the intensity of the upper dermis in the experimental group, which might suggest a reduced photo-aging effect of the TCA-peel and increased water content. The same conclusions about the anti-aging effect of the TCA-peel can be drawn by observing the parameters describing the shares of low- and medium-intensity pixels in the upper dermis: the decreased share of low-intensity pixels and increased share of medium-intensity pixels suggest a significant increase in local protein synthesis.
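The pixel-share parameters mentioned above reduce to counting pixels per intensity band inside the segmented upper dermis. A schematic sketch with assumed 8-bit band edges (the band thresholds here are illustrative assumptions, not the paper's values):

```python
def intensity_shares(region, low_max=50, medium_max=150):
    """Shares of low-, medium-, and high-intensity pixels in a region.
    `low_max` and `medium_max` are illustrative 8-bit band edges."""
    n = len(region)
    low = sum(1 for p in region if p <= low_max)
    medium = sum(1 for p in region if low_max < p <= medium_max)
    return low / n, medium / n, (n - low - medium) / n

upper_dermis = [10, 40, 90, 120, 160, 200, 30, 140]   # toy pixel intensities
print(intensity_shares(upper_dermis))   # (0.375, 0.375, 0.25)
```

Tracking how these three shares shift between visits is then enough to express the low-to-medium intensity migration reported for the experimental group.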
Affiliation(s)
- Joanna Czajkowska: Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
- Jan Juszczyk: Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
- Monika Natalia Bugdol: Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
- Anna Polak: Jerzy Kukuczka Academy of Physical Education, Institute of Physiotherapy and Health Sciences, 40-065 Katowice, Poland
- Laura Piejko: Jerzy Kukuczka Academy of Physical Education, Institute of Physiotherapy and Health Sciences, 40-065 Katowice, Poland
- Ewa Pietka: Faculty of Biomedical Engineering, Silesian University of Technology, 41-800 Zabrze, Poland
8
Hoque MZ, Keskinarkaus A, Nyberg P, Xu H, Seppänen T. Invasion depth estimation of carcinoma cells using adaptive stain normalization to improve epidermis segmentation accuracy. Comput Med Imaging Graph 2023; 108:102276. [PMID: 37611486] [DOI: 10.1016/j.compmedimag.2023.102276]
Abstract
Submucosal invasion depth is a significant prognostic factor when assessing lymph node metastasis and the cancer itself to plan proper treatment for the patient. Conventionally, oncologists measure the invasion depth by hand, which is a laborious, subjective, and time-consuming process; manual pathological examination of carcinoma cell invasion still suffers from considerable inter-observer and intra-observer variation. The increasing use of medical imaging and artificial intelligence reveals a significant role in clinical medicine and pathology. In this paper, we propose an approach to study invasive behavior and measure the invasion depth of carcinoma from stained histopathology images. Specifically, our model includes adaptive stain normalization, color decomposition, and morphological reconstruction with adaptive thresholding to separate the epithelium using a blue-ratio image. Our method splits the image into multiple non-overlapping meaningful segments and finds the homogeneous segments needed to measure invasion depth accurately. The invasion depths are measured from the inner epithelium edge to the outermost pixels of the deepest part of the particles in the image. We conduct our experiments on skin melanoma tissue samples as well as on an organotypic invasion model utilizing myoma tissue and oral squamous cell carcinoma. The performance is experimentally compared to three closely related reference methods, and our method provides superior results in measuring invasion depth. This computational technique will be beneficial for the segmentation of epithelium and other particles in the development of novel computer-aided diagnostic tools for biobank applications.
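Once epithelium and carcinoma pixels are segmented, the depth measurement itself is a geometric maximum. A simplified sketch that treats the inner epithelium edge as a horizontal reference line (an assumption for brevity; the paper measures from the traced edge itself):

```python
def invasion_depth(edge_row, particle_pixels, pixel_size_um=1.0):
    """Distance (in micrometres) from a horizontal epithelium edge at
    `edge_row` to the deepest segmented carcinoma pixel below it."""
    depths = [row - edge_row for row, col in particle_pixels if row > edge_row]
    return max(depths, default=0) * pixel_size_um

# (row, col) coordinates of toy carcinoma pixels; rows grow downward.
pixels = [(12, 4), (30, 9), (27, 5)]
print(invasion_depth(edge_row=10, particle_pixels=pixels, pixel_size_um=2.5))  # 50.0
```

The `pixel_size_um` scale factor stands in for the slide's physical calibration, which converts the pixel count into a clinically meaningful depth.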
Affiliation(s)
- Md Ziaul Hoque: Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland; Division of Nephrology and Intelligent Critical Care, Department of Medicine, University of Florida, Gainesville, USA
- Anja Keskinarkaus: Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Pia Nyberg: Biobank Borealis of Northern Finland, Oulu University Hospital, Finland; Translational Medicine Research Unit, Medical Research Center Oulu, Faculty of Medicine, University of Oulu, Finland
- Hongming Xu: Department of Electrical and Computer Engineering, University of Alberta, Canada; School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Tapio Seppänen: Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
9
Zheng T, Chen W, Li S, Quan H, Zou M, Zheng S, Zhao Y, Gao X, Cui X. Learning how to detect: A deep reinforcement learning method for whole-slide melanoma histopathology images. Comput Med Imaging Graph 2023; 108:102275. [PMID: 37567046] [DOI: 10.1016/j.compmedimag.2023.102275]
Abstract
Cutaneous melanoma represents one of the most life-threatening malignancies. Histopathological image analysis serves as a vital tool for early melanoma detection. Deep neural network (DNN) models are frequently employed to aid pathologists in enhancing the efficiency and accuracy of diagnoses. However, due to the paucity of well-annotated, high-resolution, whole-slide histopathology image (WSI) datasets, WSIs are typically fragmented into numerous patches during the model training and testing stages. This process disregards the inherent interconnectedness among patches, potentially impeding the models' performance. Additionally, the presence of excess, non-contributing patches extends processing times and introduces substantial computational burdens. To mitigate these issues, we draw inspiration from the clinical decision-making processes of dermatopathologists to propose an innovative, weakly supervised deep reinforcement learning framework, titled Fast medical decision-making in melanoma histopathology images (FastMDP-RL). This framework expedites model inference by reducing the number of irrelevant patches identified within WSIs. FastMDP-RL integrates two DNN-based agents: the search agent (SeAgent) and the decision agent (DeAgent). The SeAgent initiates actions, steered by the image features observed in the current viewing field at various magnifications. Simultaneously, the DeAgent provides labeling probabilities for each patch. We utilize multi-instance learning (MIL) to construct a teacher-guided model (MILTG), serving a dual purpose: rewarding the SeAgent and guiding the DeAgent. Our evaluations were conducted using two melanoma datasets: the publicly accessible TCIA-CM dataset and the proprietary MELSC dataset. Our experimental findings affirm FastMDP-RL's ability to expedite inference and accurately predict WSIs, even in the absence of pixel-level annotations. Moreover, our research investigates the WSI-based interactive environment, encompassing the design of agents, state and reward functions, and feature extractors suitable for melanoma tissue images. This investigation offers valuable insights and references for researchers engaged in related studies. The code is available at: https://github.com/titizheng/FastMDP-RL.
Affiliation(s)
- Tingting Zheng: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Weixing Chen: Shenzhen College of Advanced Technology, University of the Chinese Academy of Sciences, Beijing, China
- Shuqin Li: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Hao Quan: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Mingchen Zou: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Song Zheng: National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Yue Zhao: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xinghua Gao: National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
- Xiaoyu Cui: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
10
Wang J, Qin L, Chen D, Wang J, Han BW, Zhu Z, Qiao G. An improved Hover-net for nuclear segmentation and classification in histopathology images. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08394-3]
11
Lee SH, Lee S, Lee J, Lee JK, Moon NJ. Effective encoder-decoder neural network for segmentation of orbital tissue in computed tomography images of Graves' orbitopathy patients. PLoS One 2023; 18:e0285488. [PMID: 37163543] [PMCID: PMC10171592] [DOI: 10.1371/journal.pone.0285488]
Abstract
PURPOSE To propose a neural network (NN) that can effectively segment orbital tissue in computed tomography (CT) images of Graves' orbitopathy (GO) patients. METHODS We analyzed orbital CT scans from 701 GO patients diagnosed between 2010 and 2019 and devised an NN specializing in semantic orbital tissue segmentation in GO patients' CT images. After training four conventional NNs (Attention U-Net, DeepLab V3+, SegNet, and HarDNet-MSEG) and the proposed NN on the manual orbital tissue segmentations, we calculated the Dice coefficient and the Intersection over Union for comparison. RESULTS CT images of the eyeball, the four rectus muscles, the optic nerve, and the lacrimal gland from all 701 patients were analyzed. In the axial image with the largest eyeball area, the proposed NN achieved the best performance, with Dice coefficients of 98.2% for the eyeball, 94.1% for the optic nerve, 93.0% for the medial rectus muscle, and 91.1% for the lateral rectus muscle. The proposed NN also gave the best performance on the coronal image. Our qualitative analysis demonstrated that the proposed NN produced more sophisticated orbital tissue segmentations for GO patients than the conventional NNs. CONCLUSION Our proposed NN exhibited improved CT image segmentation for GO patients over conventional NNs designed for semantic segmentation tasks.
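For reference, the two overlap metrics used in this comparison can be computed from binary masks represented as sets of pixel coordinates (a generic sketch, not the authors' evaluation code):

```python
def dice_and_iou(pred, truth):
    """Dice = 2|A∩B|/(|A|+|B|) and IoU = |A∩B|/|A∪B| for pixel-coordinate sets."""
    a, b = set(pred), set(truth)
    inter = len(a & b)
    return 2 * inter / (len(a) + len(b)), inter / len(a | b)

pred = {(0, 0), (0, 1), (1, 0)}    # predicted mask pixels
truth = {(0, 0), (0, 1), (1, 1)}   # ground-truth mask pixels
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 4), iou)   # 0.6667 0.5
```

The two scores are monotonically related (Dice = 2·IoU / (1 + IoU)), so they rank methods identically but differ in scale, which is why papers often report both.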
Affiliation(s)
- Seung Hyeun Lee: Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
- Sanghyuck Lee: Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
- Jaesung Lee: Department of Artificial Intelligence, Chung-Ang University, Seoul, Korea
- Jeong Kyu Lee: Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
- Nam Ju Moon: Department of Ophthalmology, Chung-Ang University College of Medicine, Chung-Ang University Hospital, Seoul, Korea
12
Czajkowska J, Borak M. Computer-Aided Diagnosis Methods for High-Frequency Ultrasound Data Analysis: A Review. Sensors (Basel) 2022; 22:8326. [PMID: 36366024] [PMCID: PMC9653964] [DOI: 10.3390/s22218326]
Abstract
Over the last few decades, computer-aided diagnosis systems have become a part of clinical practice. They have the potential to assist clinicians in daily diagnostic tasks. The image processing techniques are fast, repeatable, and robust, which helps physicians to detect, classify, segment, and measure various structures. The recent rapid development of computer methods for high-frequency ultrasound image analysis opens up new diagnostic paths in dermatology, allergology, cosmetology, and aesthetic medicine. This paper, the first in this area, presents a research overview of high-frequency ultrasound image processing techniques that have the potential to become part of computer-aided diagnosis systems. The reviewed methods are categorized by application, ultrasound device, and type of image-data processing. We present the bridge between diagnostic needs and already developed solutions and discuss their limitations and future directions in high-frequency ultrasound image analysis. A search of the technical literature from 2005 to September 2022 was conducted, and in total, 31 studies describing image processing methods were reviewed. The quantitative and qualitative analysis covered 39 algorithms, selected as the most effective in this field. They were complemented by 20 medical papers that define the needs and opportunities for high-frequency ultrasound application and CAD development.
Collapse
Affiliation(s)
- Joanna Czajkowska
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
Collapse
|
13
|
MVI-Mind: A Novel Deep-Learning Strategy Using Computed Tomography (CT)-Based Radiomics for End-to-End High Efficiency Prediction of Microvascular Invasion in Hepatocellular Carcinoma. Cancers (Basel) 2022; 14:cancers14122956. [PMID: 35740620 PMCID: PMC9221272 DOI: 10.3390/cancers14122956] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Revised: 05/24/2022] [Accepted: 06/09/2022] [Indexed: 12/12/2022] Open
Abstract
Simple Summary Microvascular invasion is an important indicator of prognosis in hepatocellular carcinoma, but its traditional diagnosis requires a postoperative pathological examination. This study is the first to propose an end-to-end deep learning architecture for predicting microvascular invasion in hepatocellular carcinoma using retrospectively collected data. The method achieves noninvasive, accurate, and efficient preoperative prediction from the patient's radiomic data alone, which can support clinical decision making in HCC patients. Abstract Microvascular invasion (MVI) in hepatocellular carcinoma (HCC) directly affects a patient's prognosis. The development of preoperative noninvasive diagnostic methods is significant for guiding optimal treatment plans. In this study, we investigated 138 patients with HCC and presented a novel end-to-end deep learning strategy based on computed tomography (CT) radiomics (MVI-Mind), which integrates data preprocessing, automatic segmentation of lesions and other regions, automatic feature extraction, and MVI prediction. A lightweight transformer and a convolutional neural network (CNN) were proposed for the segmentation and prediction modules, respectively. To demonstrate the superiority of MVI-Mind, we compared the framework's performance with that of current mainstream segmentation and classification models. The test results showed that MVI-Mind returned the best performance in both segmentation and prediction: the mean intersection over union (mIoU) of the segmentation module was 0.9006, and the area under the receiver operating characteristic curve (AUC) of the prediction module reached 0.9223. Additionally, it took only approximately 1 min to output an end-to-end prediction for each patient on our computing device, indicating that MVI-Mind can noninvasively, efficiently, and accurately predict the presence of MVI in HCC patients before surgery. This result will help doctors make rational clinical decisions.
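The mIoU reported for the segmentation module follows the standard definition: per-class intersection over union, averaged across classes. A minimal NumPy sketch of that metric (the function name and class handling are illustrative, not taken from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Standard mean intersection over union across classes.

    pred, target: integer label arrays of the same shape.
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

On a toy pair such as `pred = [0, 1, 1, 0]` vs. `target = [0, 1, 0, 0]`, class 0 gives IoU 2/3 and class 1 gives 1/2, so the mean is 7/12.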
Collapse
|
14
|
A Study on the Dynamic Effects and Ecological Stress of Eco-Environment in the Headwaters of the Yangtze River Based on Improved DeepLab V3+ Network. REMOTE SENSING 2022. [DOI: 10.3390/rs14092225] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
The headwaters of the Yangtze River are a complicated system composed of different eco-environment elements. The abnormal moisture and energy exchanges between the atmosphere and earth systems caused by global climate change are predicted to produce drastic changes in these eco-environment elements. In order to study the dynamic effects and ecological stress in the eco-environment, we adopted the Double Attention Mechanism (DAM) to improve the performance of the DeepLab V3+ network in large-scale semantic segmentation. We proposed Elements Fragmentation (EF) and Elements Information Content (EIC) to quantitatively analyze the spatial distribution characteristics and spatial relationships of eco-environment elements. In this paper, the following conclusions were drawn: (1) we established sample sets based on Sentinel-2 remote sensing images using the interpretation signs of eco-environment elements; (2) the mAP, mIoU, and Kappa of the improved DeepLab V3+ method were 0.639, 0.778, and 0.825, respectively, which demonstrates a good ability to distinguish the eco-environment elements; (3) between 2015 and 2021, EF gradually increased from 0.2234 to 0.2394, and EIC increased from 23.80 to 25.32, which shows that the eco-environment is trending toward complex, heterogeneous, and discontinuous processes; (4) the headwaters of the Yangtze River are a community of life, and thus we should build a multifunctional ecological management system with which to implement well-organized and efficient scientific ecological rehabilitation projects.
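The Kappa score cited in conclusion (2) is conventionally Cohen's kappa computed from the class confusion matrix of the segmentation output. A minimal sketch under that standard definition (assuming rows index the true class and columns the predicted class):

```python
import numpy as np

def cohens_kappa(conf):
    """Cohen's kappa from a square confusion matrix.

    po: observed agreement (diagonal mass).
    pe: agreement expected by chance from the marginals.
    """
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    po = np.trace(conf) / total
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total ** 2
    return (po - pe) / (1 - pe)
```

For example, the two-class matrix `[[20, 5], [10, 15]]` has observed agreement 0.7 and chance agreement 0.5, giving kappa 0.4.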
Collapse
|
15
|
High-Frequency Ultrasound Dataset for Deep Learning-Based Image Quality Assessment. SENSORS 2022; 22:s22041478. [PMID: 35214381 PMCID: PMC8875486 DOI: 10.3390/s22041478] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 02/09/2022] [Accepted: 02/12/2022] [Indexed: 12/04/2022]
Abstract
This study aims at high-frequency ultrasound image quality assessment for computer-aided diagnosis of skin. In recent decades, high-frequency ultrasound imaging has opened up new opportunities in dermatology, utilizing the most recent deep learning-based algorithms for automated image analysis. An individual dermatological examination contains either a single image, a few images, or an image series acquired during probe movement. The estimated skin parameters may depend on the probe position, orientation, or acquisition setup; consequently, the more images analyzed, the more precise the obtained measurements. Therefore, for automated measurements, the best choice is to acquire an image series and then analyze its parameters statistically. However, besides correctly acquired images, the resulting series contains plenty of non-informative data: images affected by artifacts or noise, or images acquired while the ultrasound probe had no contact with the patient's skin. All of these degrade further analysis, leading to misclassification or incorrect image segmentation. Therefore, an automated image selection step is crucial. To meet this need, we collected and shared 17,425 high-frequency images of the facial skin from 516 measurements of 44 patients. Two experts annotated each image as correct or not. The proposed framework utilizes a deep convolutional neural network followed by a fuzzy reasoning system to automatically assess the quality of the acquired data. Different approaches to binary and multi-class image analysis, based on the VGG-16 model, were developed and compared. The best classification results reached 91.7% accuracy for the binary analysis and 82.3% for the multi-class analysis.
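The paper describes a CNN followed by a fuzzy reasoning stage that decides whether each frame is usable; the exact rule base is not given in the abstract. As a minimal sketch of the general idea, per-frame CNN scores can be mapped through overlapping fuzzy membership functions and compared (the function names, set shapes, and thresholds below are illustrative assumptions, not the paper's system):

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

def frame_quality(score):
    """Label a frame from a CNN 'correct-image' score in [0, 1].

    Two overlapping fuzzy sets ('low quality' / 'high quality') compete;
    the frame is kept when the high-quality membership dominates.
    """
    low = triangular(score, -0.5, 0.0, 0.6)   # peaks at score 0
    high = triangular(score, 0.4, 1.0, 1.5)   # peaks at score 1
    return "correct" if high > low else "reject"
```

In a series-based workflow, frames labeled "reject" would simply be dropped before the statistical analysis of skin parameters.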
Collapse
|