1. Cui R, Liang S, Zhao W, Liu Z, Lin Z, He W, He Y, Du C, Peng J, Huang H. A Shape-Consistent Deep-Learning Segmentation Architecture for Low-Quality and High-Interference Myocardial Contrast Echocardiography. Ultrasound in Medicine & Biology 2024:S0301-5629(24)00235-7. PMID: 39147622. DOI: 10.1016/j.ultrasmedbio.2024.06.001.
Abstract
OBJECTIVE Myocardial contrast echocardiography (MCE) plays a crucial role in diagnosing ischemia, infarction, masses and other cardiac conditions. In MCE image analysis, accurate and consistent myocardial segmentation results are essential for enabling automated analysis of various heart diseases. However, current manual diagnostic methods in MCE suffer from poor repeatability and limited clinical applicability. MCE images often exhibit low quality and high noise due to the instability of ultrasound signals, while interfering structures can further disrupt segmentation consistency. METHODS To overcome these challenges, we propose a deep-learning network for MCE segmentation. The architecture leverages dilated convolutions to capture large-scale information without sacrificing positional accuracy and modifies multi-head self-attention to enhance global context and ensure consistency, effectively overcoming issues related to low image quality and interference. We also adapt the cascaded application of transformers with convolutional neural networks for improved segmentation in MCE. RESULTS In our experiments, the architecture achieved the best Dice score of 84.35% for standard MCE views compared with several state-of-the-art segmentation models. For non-standard views and for frames with interfering structures (mass), our models also attained the best Dice scores, of 83.33% and 83.97%, respectively. CONCLUSION These results demonstrate that the architecture offers excellent shape consistency and robustness, allowing it to handle segmentation of various types of MCE. The relatively precise and consistent myocardial segmentation results provide a foundation for the automated analysis of various heart diseases, with the potential to reveal underlying pathological features and reduce healthcare costs.
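The abstract names two mechanisms (dilated convolutions and multi-head self-attention) without giving the exact layers; the PyTorch sketch below only illustrates how such a block could combine them. Channel counts, dilation rates, and the way the two branches are fused are assumptions, not the authors' design.

```python
# Illustrative sketch only: dilated convolutions enlarge the receptive field
# without downsampling, while multi-head self-attention supplies global context.
# All hyperparameters and the fusion scheme are assumed for demonstration.
import torch
import torch.nn as nn

class DilatedAttentionBlock(nn.Module):
    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        # Dilated convolution branch: large receptive field, full resolution.
        self.dilated = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=4, dilation=4),
            nn.ReLU(inplace=True),
        )
        # Self-attention branch: global context over all spatial positions.
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.dilated(x)
        tokens = x.flatten(2).transpose(1, 2)           # (B, H*W, C)
        ctx, _ = self.attn(tokens, tokens, tokens)      # global context
        ctx = self.norm(tokens + ctx).transpose(1, 2).reshape(b, c, h, w)
        return local + ctx                              # fuse local and global branches

feat = torch.randn(1, 64, 32, 32)
print(DilatedAttentionBlock()(feat).shape)              # torch.Size([1, 64, 32, 32])
```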
Affiliation(s)
- Rongpu Cui: College of Computer Science, Sichuan University, Chengdu, China
- Shichu Liang: Department of Cardiology, West China Hospital, Sichuan University, Chengdu, China
- Weixin Zhao: College of Computer Science, Sichuan University, Chengdu, China
- Zhiyue Liu: Department of Cardiology, West China Hospital, Sichuan University, Chengdu, China
- Zhicheng Lin: College of Computer Science, Sichuan University, Chengdu, China
- Wenfeng He: Department of Cardiology, West China Hospital, Sichuan University, Chengdu, China
- Yujun He: College of Computer Science, Sichuan University, Chengdu, China
- Chaohui Du: Department of Cardiology, West China Hospital, Sichuan University, Chengdu, China
- Jian Peng: College of Computer Science, Sichuan University, Chengdu, China
- He Huang: Department of Cardiology, West China Hospital, Sichuan University, Chengdu, China
2. Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024; 34:180-196. PMID: 36376203. PMCID: PMC11156786. DOI: 10.1016/j.zemedi.2022.10.005.
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays a particularly large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still at an early stage. In this review, we first investigate and scrutinise the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarise the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. To reproduce the results of deep learning algorithms, both source code and training data must be available; a second focus of this work is therefore an analysis of the availability of open source code, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing year by year, partly self-propelled but also influenced by closely related fields. Open source code, data and models are growing in number but remain scarce and unevenly distributed among research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter: Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Ilias Sachpazidis: Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas: Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
3. Masoumi N, Rivaz H, Hacihaliloglu I, Ahmad MO, Reinertsen I, Xiao Y. The Big Bang of Deep Learning in Ultrasound-Guided Surgery: A Review. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023; 70:909-919. PMID: 37028313. DOI: 10.1109/tuffc.2023.3255843.
Abstract
Ultrasound (US) imaging is a paramount modality in many image-guided surgeries and percutaneous interventions, thanks to its high portability, temporal resolution, and cost-efficiency. However, due to its imaging principles, US images are often noisy and difficult to interpret. Appropriate image processing can greatly enhance the applicability of the modality in clinical practice. Compared with classic iterative optimization and machine learning (ML) approaches, deep learning (DL) algorithms have shown great performance in terms of accuracy and efficiency for US processing. In this work, we conduct a comprehensive review of deep-learning algorithms in the applications of US-guided interventions, summarize the current trends, and suggest future directions on the topic.
4. Zhao JZ, Ni R, Chow R, Rink A, Weersink R, Croke J, Raman S. Artificial intelligence applications in brachytherapy: A literature review. Brachytherapy 2023; 22:429-445. PMID: 37248158. DOI: 10.1016/j.brachy.2023.04.003.
Abstract
PURPOSE Artificial intelligence (AI) has the potential to simplify and optimize various steps of the brachytherapy workflow, and this literature review aims to provide an overview of the work done in this field. METHODS AND MATERIALS We conducted a literature search in June 2022 on PubMed, Embase, and Cochrane for papers that proposed AI applications in brachytherapy. RESULTS A total of 80 papers satisfied inclusion/exclusion criteria. These papers were categorized as follows: segmentation (24), registration and image processing (6), preplanning (13), dose prediction and treatment planning (11), applicator/catheter/needle reconstruction (16), and quality assurance (10). AI techniques ranged from classical models such as support vector machines and decision tree-based learning to newer techniques such as U-Net and deep reinforcement learning, and were applied to facilitate small steps of a process (e.g., optimizing applicator selection) or even automate the entire step of the workflow (e.g., end-to-end preplanning). Many of these algorithms demonstrated human-level performance and offer significant improvements in speed. CONCLUSIONS AI has potential to augment, automate, and/or accelerate many steps of the brachytherapy workflow. We recommend that future studies adhere to standard reporting guidelines. We also stress the importance of using larger sample sizes and reporting results using clinically interpretable measures.
Affiliation(s)
- Jonathan Zl Zhao: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Ruiyan Ni: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Ronald Chow: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Alexandra Rink: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Robert Weersink: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Jennifer Croke: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Srinivas Raman: Princess Margaret Hospital Cancer Centre, Radiation Medicine Program, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
5. Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Information Fusion 2023. DOI: 10.1016/j.inffus.2022.09.031.
6. Spatiotemporal consistent selection-correction network for deep interactive image segmentation. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08210-y.
7. Aboobacker S, Vijayasenan D, S SD, Suresh PK, Sreeram S. Semantic segmentation of low magnification effusion cytology images: A semi-supervised approach. Comput Biol Med 2022; 150:106179. PMID: 36252367. DOI: 10.1016/j.compbiomed.2022.106179.
Abstract
Cytopathologists examine microscopic images obtained at various magnifications to identify malignancy in effusions. They locate malignant cell clusters at a low magnification and then zoom in to investigate cell-level features at a high magnification. This study predicts malignancy at low magnification levels such as 4X and 10X in effusion cytology images to reduce scanning time. The most challenging problem, however, is annotating the low-magnification images, particularly the 4X images. This paper extends two semi-supervised learning (SSL) models, MixMatch and FixMatch, for semantic segmentation. The original FixMatch and MixMatch algorithms are designed for classification tasks, and when image augmentation is performed the generated pseudo labels are spatially altered. We introduce reverse augmentation to compensate for the effect of these spatial alterations. The extended models are trained using labelled 10X and unlabelled 4X images. The average F-score of benign and malignant pixels on the predictions of 4X images improves by approximately 9% for both Extended MixMatch and Extended FixMatch compared with the baseline model. With Extended MixMatch, 62% of the sub-regions of low-magnification images are eliminated from scanning at a higher magnification, thereby saving scanning time.
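A minimal sketch of the reverse-augmentation idea described above, assuming a horizontal flip stands in for the spatial augmentation; the confidence threshold, the flip itself, and the stand-in model are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of pseudo-labelling with "reverse augmentation": the spatial
# transform applied to the unlabelled image is undone on the predicted pseudo
# label so it aligns with the original image again.
import torch
import torch.nn.functional as F

def pseudo_label_with_reverse_aug(model, image_4x, conf_thresh: float = 0.95):
    # 1) spatially augment the unlabelled low-magnification image
    augmented = torch.flip(image_4x, dims=[-1])           # horizontal flip
    # 2) predict per-pixel class probabilities on the augmented view
    with torch.no_grad():
        probs = F.softmax(model(augmented), dim=1)        # (B, C, H, W)
    # 3) reverse the same spatial transform on the prediction
    probs = torch.flip(probs, dims=[-1])
    # 4) keep only confident pixels as pseudo labels (ignore index -1 elsewhere)
    conf, labels = probs.max(dim=1)
    labels[conf < conf_thresh] = -1
    return labels

toy_model = lambda x: torch.randn(x.shape[0], 2, x.shape[2], x.shape[3])  # stand-in network
print(pseudo_label_with_reverse_aug(toy_model, torch.randn(1, 3, 64, 64)).shape)
```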
Affiliation(s)
- Shajahan Aboobacker: Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, 575025, Karnataka, India
- Deepu Vijayasenan: Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, 575025, Karnataka, India
- Sumam David S: Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, 575025, Karnataka, India
- Pooja K Suresh: Department of Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, 575001, Karnataka, India
- Saraswathy Sreeram: Department of Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, 575001, Karnataka, India
8.
9. Effect Evaluation of Perioperative Fast-Track Surgery Nursing for Tibial Fracture Patients with Computerized Tomography Images under Intelligent Algorithm. Contrast Media & Molecular Imaging 2022; 2022:2629868. PMID: 35845737. PMCID: PMC9249477. DOI: 10.1155/2022/2629868.
Abstract
This study aimed to evaluate the application value of computerized tomography (CT) images processed with the graph cut algorithm in assessing the effect of perioperative fast-track surgery (FTS) nursing for tibial fracture. Eighty tibial fracture patients in the perioperative period were selected as research subjects and randomly divided into two groups according to the examination method: group A underwent routine CT examination, and group B underwent CT examination with the graph cut algorithm. The imaging results showed 16 cases with collapse in group A and 34 cases with collapse in group B, a statistically significant difference (P < 0.05). For the 16 cases with collapse present in both groups, the average collapse shown in group A was about 2.79 ± 1.31 mm, while that in group B was 5.51 ± 1.88 mm, a statistically significant difference (P < 0.05). The average broadening in the images of group A was 3.17 ± 1.41 mm and that of group B was 5.72 ± 1.83 mm, again a statistically significant difference (P < 0.05). Broadening distances of 3-4 mm were mainly shown in the images of group A and of 5-8 mm in group B, a statistically significant difference (P < 0.05). In terms of the total score, 26, 44, 8, and 2 cases were assessed as excellent, good, common, and bad, respectively, in group A, while 44 cases were assessed as good and 36 as common in group B, a significant difference (P < 0.05). In summary, the graph cut algorithm not only had good segmentation performance and efficiency but also improved the value of CT images in evaluating the effect of perioperative FTS nursing in patients with tibial fracture.
10. New Methods for the Acoustic-Signal Segmentation of the Temporomandibular Joint. J Clin Med 2022; 11:jcm11102706. PMID: 35628833. PMCID: PMC9145358. DOI: 10.3390/jcm11102706.
Abstract
(1) Background: The stethoscope is one of the main accessory tools in the diagnosis of temporomandibular joint disorders (TMD). However, clinical auscultation of the masticatory system still lacks computer-aided support, which would decrease the time needed for each diagnosis. This can be achieved with digital signal processing and classification algorithms. Segmentation of acoustic signals is usually the first step in many sound-processing methodologies. We postulate that it is possible to implement automatic segmentation of the acoustic signals of the temporomandibular joint (TMJ), which can contribute to the development of advanced TMD classification algorithms. (2) Methods: In this paper, we compare two different methods for the segmentation of TMJ sounds used in diagnosis of the masticatory system. The first method is based solely on digital signal processing (DSP) and includes filtering and envelope calculation. The second method takes advantage of a deep learning approach built on a U-Net neural network combined with long short-term memory (LSTM) architecture. (3) Results: Both methods were validated against our own TMJ sound database, created from signals recorded with an electronic stethoscope during a clinical diagnostic trial of TMJ. The Dice score of the DSP method was 0.86 and the sensitivity was 0.91; for the deep learning approach, the Dice score was 0.85 and the sensitivity was 0.98. (4) Conclusions: The presented results indicate that, with the use of signal processing and deep learning, it is possible to automatically segment TMJ sounds into sections of diagnostic value. Such methods can provide representative data for the development of TMD classification algorithms.
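A minimal sketch of the DSP route named above (band-pass filtering followed by envelope calculation and thresholding), using SciPy; the sampling rate, pass band, and threshold are assumed values for illustration, not those of the study.

```python
# Sketch of envelope-based segmentation of an acoustic signal: band-pass filter,
# Hilbert envelope, then mark samples whose envelope exceeds a relative threshold.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def segment_tmj_sound(signal: np.ndarray, fs: float = 8000.0,
                      band=(100.0, 1000.0), rel_thresh: float = 0.2):
    # Zero-phase band-pass filter to suppress out-of-band noise.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, signal)
    # Amplitude envelope via the analytic signal (Hilbert transform).
    envelope = np.abs(hilbert(filtered))
    # Boolean per-sample mask of detected sound events.
    return envelope > rel_thresh * envelope.max()

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
demo = np.sin(2 * np.pi * 300 * t) * (t > 0.4) * (t < 0.6) + 0.05 * np.random.randn(t.size)
print(segment_tmj_sound(demo, fs).mean())   # fraction of samples marked as an event
```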
11. Shahedi M, Dormer JD, Halicek M, Fei B. Technical note: The effect of image annotation with minimal manual interaction for semiautomatic prostate segmentation in CT images using fully convolutional neural networks. Med Phys 2022; 49:1153-1160. PMID: 34902166. PMCID: PMC10014149. DOI: 10.1002/mp.15404.
Abstract
PURPOSE The goal is to study the performance improvement of a deep learning algorithm in three-dimensional (3D) image segmentation through incorporating minimal user interaction into a fully convolutional neural network (CNN). METHODS A U-Net CNN was trained and tested for 3D prostate segmentation in computed tomography (CT) images. To improve the segmentation accuracy, the CNN's input images were annotated with a set of border landmarks to supervise the network for segmenting the prostate. The network was trained and tested again with annotated images after 5, 10, 15, 20, or 30 landmark points were used. RESULTS Compared to fully automatic segmentation, the Dice similarity coefficient increased up to 9% when 5-30 sparse landmark points were involved, with the segmentation accuracy improving as more border landmarks were used. CONCLUSIONS When a limited number of sparse border landmarks are used on the input image, the CNN performance approaches the interexpert observer difference observed in manual segmentation.
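One plausible way to realise the annotation scheme described above is to rasterise the sparse border landmarks into a heat-map channel that is stacked with the CT slice before it enters the CNN; the Gaussian encoding and two-channel input below are assumptions for illustration, not necessarily the paper's exact scheme.

```python
# Sketch: encode a handful of user-clicked border landmarks as a Gaussian
# heat-map and stack it with the CT slice as an extra input channel.
import numpy as np

def landmarks_to_heatmap(shape, points, sigma: float = 3.0):
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape, dtype=np.float32)
    for (py, px) in points:
        heat = np.maximum(heat, np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma ** 2)))
    return heat

ct_slice = np.zeros((128, 128), dtype=np.float32)         # placeholder CT slice
landmarks = [(60, 50), (64, 78), (80, 64)]                 # clicked border points
net_input = np.stack([ct_slice, landmarks_to_heatmap(ct_slice.shape, landmarks)])
print(net_input.shape)                                     # (2, 128, 128)
```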
Affiliation(s)
- Maysam Shahedi: Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas, USA
- James D Dormer: Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas, USA
- Martin Halicek: Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas, USA
- Baowei Fei: Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas, USA; Advanced Imaging Research Center, UT Southwestern Medical Center, Dallas, Texas, USA; Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
12. Xiong H, Liu S, Sharan RV, Coiera E, Berkovsky S. Weak label based Bayesian U-Net for optic disc segmentation in fundus images. Artif Intell Med 2022; 126:102261. DOI: 10.1016/j.artmed.2022.102261.
13. All You Need Is a Few Dots to Label CT Images for Organ Segmentation. Applied Sciences (Basel) 2022. DOI: 10.3390/app12031328.
Abstract
Image segmentation is used to analyze medical images quantitatively for diagnosis and treatment planning. Since manual segmentation requires considerable time and effort from experts, research into automating segmentation is ongoing. Recent studies using deep learning have improved performance but require large amounts of labeled data. Although public datasets are available for research, manual labeling is still needed to train a model for any region where labels have not yet been produced. To alleviate this burden, we propose a deep-learning-based tool that can easily create training data. The proposed tool takes as input a CT image and a few pixels of the organs the user wants to segment, and extracts features of the CT image using a deep learning network. Pixels with similar features are then assigned to the same organ. The advantage of the proposed tool is that it can be trained with a small number of labeled data. After training with 25 labeled CT images, our tool shows competitive results when compared with state-of-the-art segmentation algorithms such as UNet and DeepNetV3.
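A toy sketch of the few-click idea described above: each pixel is assigned to the organ whose clicked seed pixels it most resembles in a learned feature space. The feature tensor, the cosine-similarity assignment rule, and the prototype averaging are assumptions made for illustration; the published tool's network and training procedure are not reproduced here.

```python
# Assign every pixel to the organ whose seed pixels are most similar in feature space.
import torch
import torch.nn.functional as F

def label_from_seeds(features: torch.Tensor, seeds: dict) -> torch.Tensor:
    """features: (C, H, W) per-pixel embeddings; seeds: {organ_id: [(y, x), ...]}."""
    c, h, w = features.shape
    flat = F.normalize(features.reshape(c, -1), dim=0)          # (C, H*W), unit columns
    scores = []
    for organ_id, points in sorted(seeds.items()):
        proto = torch.stack([features[:, y, x] for y, x in points]).mean(0)
        proto = F.normalize(proto, dim=0)                       # organ prototype vector
        scores.append(proto @ flat)                             # cosine similarity map
    return torch.stack(scores).argmax(0).reshape(h, w)          # per-pixel organ index

feats = torch.randn(16, 64, 64)                                 # stand-in feature map
seeds = {0: [(10, 10)], 1: [(40, 45), (42, 50)]}                # user "dots"
print(label_from_seeds(feats, seeds).shape)                     # torch.Size([64, 64])
```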
14. Deep convolutional neural network-based classification of cancer cells on cytological pleural effusion images. Mod Pathol 2022; 35:609-614. PMID: 35013527. PMCID: PMC9042694. DOI: 10.1038/s41379-021-00987-4.
Abstract
Lung cancer is one of the leading causes of cancer-related death worldwide. Cytology plays an important role in the initial evaluation and diagnosis of patients with lung cancer. However, because of the subjectivity of cytopathologists and region-dependent diagnostic levels, the low consistency of liquid-based cytological diagnosis results in a certain proportion of misdiagnoses and missed diagnoses. In this study, we developed a weakly supervised deep learning method for the classification of benign and malignant cells in lung cytological images using a deep convolutional neural network (DCNN). A total of 404 cases of lung cancer cells in effusion cytology specimens from Shanghai Pulmonary Hospital were investigated, of which 266, 78, and 60 cases were used as the training, validation and test sets, respectively. The proposed method was evaluated on 60 whole-slide images (WSIs) of lung cancer pleural effusion specimens. The method achieved an accuracy, sensitivity, and specificity of 91.67%, 87.50%, and 94.44%, respectively, in classifying malignant and benign lesions (or normal). The area under the receiver operating characteristic (ROC) curve (AUC) was 0.9526 (95% confidence interval (CI): 0.9019-0.9909). In comparison, the average accuracies of senior and junior cytopathologists were 98.34% and 83.34%, respectively. The proposed deep learning method will be useful and may assist pathologists with different levels of experience in the diagnosis of cancer cells on cytological pleural effusion images in the future.
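For reference, the slide-level metrics reported above (accuracy, sensitivity, specificity, AUC) are conventionally computed as follows for a binary benign/malignant classifier; the toy labels and scores are illustrative and unrelated to the study's data.

```python
# Standard computation of accuracy, sensitivity, specificity and AUC
# for a binary classifier using scikit-learn.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])            # 1 = malignant, 0 = benign
y_score = np.array([0.9, 0.8, 0.4, 0.2, 0.3, 0.1, 0.7, 0.6])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                            # recall on malignant slides
specificity = tn / (tn + fp)                            # recall on benign slides
auc = roc_auc_score(y_true, y_score)
print(accuracy, sensitivity, specificity, auc)
```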
15. de Siqueira VS, Borges MM, Furtado RG, Dourado CN, da Costa RM. Artificial intelligence applied to support medical decisions for the automatic analysis of echocardiogram images: A systematic review. Artif Intell Med 2021; 120:102165. PMID: 34629153. DOI: 10.1016/j.artmed.2021.102165.
Abstract
The echocardiogram is a test that is widely used in the diagnosis of heart disease, but its analysis is largely dependent on the physician's experience. In this regard, artificial intelligence has become an essential technology to assist physicians. This study is a Systematic Literature Review (SLR) of primary state-of-the-art studies that used Artificial Intelligence (AI) techniques to automate echocardiogram analyses. Searches of the leading scientific article indexing platforms using a search string returned approximately 1400 articles. After applying the inclusion and exclusion criteria, 118 articles were selected for the detailed SLR. This SLR presents a thorough investigation of AI applied to support medical decisions for the main types of echocardiogram (Transthoracic, Transesophageal, Doppler, Stress, and Fetal). Data extraction indicated that the primary research interests of the studies fell into four groups: 1) improvement of image quality; 2) identification of the cardiac window vision plane; 3) quantification and analysis of cardiac functions; and 4) detection and classification of cardiac diseases. The articles were categorized and grouped to show the main contributions of the literature to each type of ECHO. The results indicate that Deep Learning (DL) methods presented the best results for the detection and segmentation of the heart walls, right and left atria and ventricles, and the classification of heart diseases using images/videos obtained by echocardiography. Models using Convolutional Neural Networks (CNN) and their variations showed the best results for all groups. The evidence from the tabulated studies indicates that DL has contributed significantly to advances in automated echocardiogram analysis. Although several solutions have been presented for the automated analysis of ECHO, this area of research still has great potential for further studies to improve the accuracy of results already known in the literature.
Affiliation(s)
- Vilson Soares de Siqueira: Federal Institute of Tocantins, Av. Bernado Sayão, S/N, Santa Maria, Colinas do Tocantins, TO, Brazil; Federal University of Goias, Alameda Palmeiras, Quadra D, Câmpus Samambaia, Goiânia, GO, Brazil
- Moisés Marcos Borges: Diagnostic Imaging Center - CDI, Av. Portugal, 1155, St. Marista, Goiânia, GO, Brazil
- Rogério Gomes Furtado: Diagnostic Imaging Center - CDI, Av. Portugal, 1155, St. Marista, Goiânia, GO, Brazil
- Colandy Nunes Dourado: Diagnostic Imaging Center - CDI, Av. Portugal, 1155, St. Marista, Goiânia, GO, Brazil. http://www.cdigoias.com.br
- Ronaldo Martins da Costa: Federal University of Goias, Alameda Palmeiras, Quadra D, Câmpus Samambaia, Goiânia, GO, Brazil
16. Intelligent Algorithm-Based Magnetic Resonance Imaging in Radical Gastrectomy under Laparoscope. Contrast Media & Molecular Imaging 2021; 2021:1701447. PMID: 34621143. PMCID: PMC8455201. DOI: 10.1155/2021/1701447.
Abstract
The study focused on the influence of intelligent algorithm-based magnetic resonance imaging (MRI) on the short-term curative effects of laparoscopic radical gastrectomy for gastric cancer. A convolutional neural network (CNN)-based algorithm was used to segment MRI images of patients with gastric cancer, and 158 subjects admitted to the hospital were selected as research subjects and randomly divided into a 3D laparoscopy group and a 2D laparoscopy group, with 79 cases in each group. The two groups were compared for operation time, intraoperative blood loss, number of dissected lymph nodes, exhaust time, time to get out of bed, postoperative hospital stay, and postoperative complications. The results showed that the CNN-based algorithm had high accuracy with clear contours. The Dice similarity coefficient (DSC) was 0.89, the sensitivity was 0.93, and the average time to process an image was 1.1 min. The 3D laparoscopy group had a shorter operation time (86.3 ± 21.0 min vs. 98 ± 23.3 min) and less intraoperative blood loss (200 ± 27.6 mL vs. 209 ± 29.8 mL) than the 2D laparoscopy group, and the difference was statistically significant (P < 0.05). The number of dissected lymph nodes was 38.4 ± 8.5 in the 3D group and 36.1 ± 6.0 in the 2D group, with no statistically significant difference (P > 0.05). Likewise, no statistically significant difference was noted in postoperative exhaust time, time to get out of bed, postoperative hospital stay, or the incidence of complications (P > 0.05). It was concluded that the algorithm in this study can accurately segment the target area, providing a basis for the preoperative examination of gastric cancer, and that 3D laparoscopic surgery can shorten the operation time and reduce intraoperative bleeding while achieving short-term curative effects similar to those of 2D laparoscopy.
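As a reminder of how the reported segmentation figures are defined, the sketch below computes the Dice similarity coefficient and pixel-wise sensitivity from a predicted and a reference binary mask; the toy masks are illustrative only.

```python
# Dice similarity coefficient and sensitivity from two binary masks.
import numpy as np

def dice_and_sensitivity(pred: np.ndarray, ref: np.ndarray):
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    dice = 2.0 * intersection / (pred.sum() + ref.sum())
    sensitivity = intersection / ref.sum()          # fraction of the reference recovered
    return dice, sensitivity

reference = np.zeros((64, 64), dtype=bool); reference[20:40, 20:40] = True
prediction = np.zeros((64, 64), dtype=bool); prediction[22:42, 22:42] = True
print(dice_and_sensitivity(prediction, reference))
```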
17. Pérez-Pelegrí M, Monmeneu JV, López-Lereu MP, Pérez-Pelegrí L, Maceira AM, Bodí V, Moratal D. Automatic left ventricle volume calculation with explainability through a deep learning weak-supervision methodology. Computer Methods and Programs in Biomedicine 2021; 208:106275. PMID: 34274609. DOI: 10.1016/j.cmpb.2021.106275.
Abstract
BACKGROUND AND OBJECTIVE Magnetic resonance imaging is the most reliable imaging technique for assessing the heart. Analysis of the left ventricle is of particular importance, as the main pathologies directly affect this region, and characterizing the left ventricle requires extracting its volume. In this work we present a neural network architecture capable of directly estimating the left-ventricle volume in short-axis cine magnetic resonance imaging at the end-diastolic frame, while also providing a segmentation of the region on which the volume calculation is based, thus offering explainability for the estimated value. METHODS The network was designed to directly target the volumes to be estimated, without requiring any labeled segmentation of the images. It was based on a 3D U-Net with extra layers defined in a scanning module that learned features such as the circularity of the objects and the volumes to be estimated in a weakly-supervised manner. The only targets defined were the left-ventricle volumes and the circularity of the detected object, through estimation of the π value derived from its shape. We had access to 397 cases corresponding to 397 different subjects and randomly selected 98 cases as the test set. RESULTS The results show a good match between the real and estimated volumes in the test set, with a mean relative error of 8%, a mean absolute error of 9.12 ml, and a Pearson correlation coefficient of 0.95. The segmentations derived by the network achieved Dice coefficients with a mean value of 0.79. CONCLUSIONS The proposed method obtains the left-ventricle volume biomarker at end-diastole and offers an explanation of how the result was obtained, in the form of a segmentation mask, without needing segmentation labels to train the algorithm. This makes it a potentially more trustworthy method for clinicians and a way to train neural networks more easily when segmentation labels are not readily available.
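A minimal sketch of the weak-supervision idea described above: the training target is the volume itself, computed differentiably from the soft segmentation the network produces, so no segmentation labels are needed. The paper's additional circularity (π) target is omitted here, and the voxel volume and tensor shapes are placeholders.

```python
# Weakly-supervised volume regression: predicted volume = sum of soft mask
# probabilities times the physical voxel volume; the loss compares it with
# the known end-diastolic volume.
import torch

def volume_loss(logits: torch.Tensor, target_volume_ml: torch.Tensor,
                voxel_volume_ml: float) -> torch.Tensor:
    probs = torch.sigmoid(logits)                        # soft LV mask, (B, 1, D, H, W)
    pred_volume = probs.sum(dim=(1, 2, 3, 4)) * voxel_volume_ml
    return torch.nn.functional.l1_loss(pred_volume, target_volume_ml)

logits = torch.randn(2, 1, 8, 64, 64, requires_grad=True)   # stand-in network output
target = torch.tensor([120.0, 95.0])                         # end-diastolic volumes in ml
loss = volume_loss(logits, target, voxel_volume_ml=0.02)
loss.backward()                                              # gradients flow to the soft mask
print(float(loss))
```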
Affiliation(s)
- Manuel Pérez-Pelegrí: Center for Biomaterials and Tissue Engineering, Universitat Politècnica de València, Camí de Vera, s/n, 46022 Valencia, Spain
- José V Monmeneu: Unidad de Imagen Cardíaca, ERESA-ASCIRES Grupo Biomédico, Valencia, Spain
- Lucía Pérez-Pelegrí: Facultad de Enfermería, Universidad Católica de Valencia San Vicente Mártir, Valencia, Spain
- Alicia M Maceira: Unidad de Imagen Cardíaca, ERESA-ASCIRES Grupo Biomédico, Valencia, Spain
- Vicente Bodí: Departamento de Medicina, Universitat de València, Estudi General, Valencia, Spain; Servicio de Cardiología, Hospital Clínico Universitario de Valencia, INCLIVA, CIBERCV, Valencia, Spain
- David Moratal: Center for Biomaterials and Tissue Engineering, Universitat Politècnica de València, Camí de Vera, s/n, 46022 Valencia, Spain
18. Sun Y, Ji Y. AAWS-Net: Anatomy-aware weakly-supervised learning network for breast mass segmentation. PLoS One 2021; 16:e0256830. PMID: 34460852. PMCID: PMC8405027. DOI: 10.1371/journal.pone.0256830.
Abstract
Accurate segmentation of breast masses is an essential step in computer-aided diagnosis of breast cancer. The scarcity of annotated training data greatly hinders a model's generalization ability, especially for deep learning based methods, because high-quality image-level annotations are time-consuming and cumbersome to obtain in medical image analysis scenarios. In addition, a large amount of weak annotations, which contain common anatomical features, remains under-utilized. To this end, inspired by teacher-student networks, we propose an Anatomy-Aware Weakly-Supervised learning Network (AAWS-Net) for extracting useful information from mammograms with weak annotations for efficient and accurate breast mass segmentation. Specifically, we adopt a weakly-supervised learning strategy in the Teacher to extract anatomical structure from mammograms with weak annotations by reconstructing the original image. In addition, knowledge distillation is used to suggest morphological differences between benign and malignant masses. The prior knowledge learned by the Teacher is then introduced to the Student in an end-to-end way, which improves the ability of the student network to locate and segment masses. Experiments on CBIS-DDSM show that our method yields promising performance compared with state-of-the-art alternative models for breast mass segmentation in terms of segmentation accuracy and IoU.
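A generic sketch of a teacher-student distillation step of the kind described above: the student's segmentation loss is blended with a term that pulls its soft predictions toward a frozen teacher's. The temperature, the weighting, and both networks are assumptions; the paper's anatomy-reconstruction teacher is not reproduced.

```python
# Generic knowledge-distillation loss for a binary segmentation student.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, mask,
                      temperature: float = 2.0, alpha: float = 0.5) -> torch.Tensor:
    # Supervised term on the available (possibly weak) mask.
    seg = F.binary_cross_entropy_with_logits(student_logits, mask)
    # Distillation term: match softened teacher probabilities pixel-wise.
    t = torch.sigmoid(teacher_logits.detach() / temperature)
    s = torch.sigmoid(student_logits / temperature)
    distill = F.mse_loss(s, t)
    return alpha * seg + (1 - alpha) * distill

student = torch.randn(1, 1, 64, 64, requires_grad=True)    # stand-in student output
teacher = torch.randn(1, 1, 64, 64)                        # stand-in frozen teacher output
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()            # stand-in weak label
print(float(distillation_loss(student, teacher, mask)))
```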
Affiliation(s)
- Yeheng Sun: School of Business, University of Shanghai for Science and Technology, Shanghai, China
- Yule Ji: School of Business, University of Shanghai for Science and Technology, Shanghai, China
19.
Abstract
The genetic development of the commercial broiler has led to body misconfiguration and consequent walking disabilities, mainly at slaughter age. The present study aimed to automatically identify broiler locomotion ability using image analysis. A total of 40 broilers aged 40 d (male and female) were placed to walk on a specially built runway, and their locomotion was recorded. An image segmentation algorithm was developed; the coordinates of the bird's center of mass were extracted from the segmented images for each analyzed frame, and the unrest index (UI) was applied. From lateral images of the walking broiler, we calculated the movement of the center of mass, capturing the bird's displacement speed in the forward direction. Results indicated that broiler walking speed on the runway tends to decrease as the gait score increases. Locomotion did not differ between males and females. The proposed algorithm was efficient in predicting the broiler gait score based on displacement speed.
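A small sketch of the measurement described above: the centre of mass of the segmented silhouette is tracked frame to frame and converted to walking speed. The frame rate, pixel scale, and synthetic masks are placeholders; the study's segmentation algorithm and unrest index are not reproduced.

```python
# Frame-to-frame centre-of-mass displacement converted to walking speed.
import numpy as np

def walking_speed(masks: np.ndarray, fps: float, metres_per_pixel: float) -> np.ndarray:
    """masks: (T, H, W) boolean silhouettes for T consecutive frames."""
    centres = np.array([np.argwhere(m).mean(axis=0) for m in masks])   # (T, 2) in pixels
    step = np.linalg.norm(np.diff(centres, axis=0), axis=1)            # pixels per frame
    return step * fps * metres_per_pixel                               # metres per second

masks = np.zeros((3, 100, 200), dtype=bool)
for t in range(3):
    masks[t, 40:60, 20 + 10 * t: 40 + 10 * t] = True                   # silhouette moving right
print(walking_speed(masks, fps=30.0, metres_per_pixel=0.005))
```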
20. Girum KB, Crehange G, Lalande A. Learning With Context Feedback Loop for Robust Medical Image Segmentation. IEEE Transactions on Medical Imaging 2021; 40:1542-1554. PMID: 33606627. DOI: 10.1109/tmi.2021.3060497.
Abstract
Deep learning has successfully been leveraged for medical image segmentation. It employs convolutional neural networks (CNN) to learn distinctive image features from a defined pixel-wise objective function. However, this approach can lead to weak interdependence among output pixels, producing incomplete and unrealistic segmentation results. In this paper, we present a fully automatic deep learning method for robust medical image segmentation that formulates the segmentation problem as a recurrent framework using two systems. The first is a forward system, an encoder-decoder CNN that predicts the segmentation result from the input image. The predicted probabilistic output of the forward system is then encoded by a fully convolutional network (FCN)-based context feedback system, and the encoded feature space of the FCN is integrated back into the forward system's feed-forward learning process. Using the FCN-based context feedback loop allows the forward system to learn and extract more high-level image features and fix previous mistakes, thereby improving prediction accuracy over time. Experimental results on four different clinical datasets demonstrate our method's potential for single- and multi-structure medical image segmentation, outperforming state-of-the-art methods. With the feedback loop, deep learning methods can produce results that are both anatomically plausible and robust to low-contrast images. Formulating image segmentation as a recurrent framework of two interconnected networks via a context feedback loop can therefore be a potential method for robust and efficient medical image analysis.
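A toy sketch of the recurrent feedback idea described above: a forward segmentation network is applied several times, each time receiving the image together with a context encoding of its previous probability map. Both sub-networks are trivial stand-ins; the paper's encoder-decoder and FCN architectures are not shown.

```python
# Recurrent "context feedback" refinement: the previous probability map is
# encoded and concatenated with the image before the next forward pass.
import torch
import torch.nn as nn

class FeedbackSegmenter(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # Forward system: image (1 ch) + context features -> probability map.
        self.forward_net = nn.Sequential(
            nn.Conv2d(1 + channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1))
        # Context feedback system: encodes the previous probability map.
        self.context_net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, image: torch.Tensor, steps: int = 3) -> torch.Tensor:
        b, _, h, w = image.shape
        prob = torch.full((b, 1, h, w), 0.5, device=image.device)   # neutral first guess
        for _ in range(steps):                                      # refine with feedback
            context = self.context_net(prob)
            prob = torch.sigmoid(self.forward_net(torch.cat([image, context], dim=1)))
        return prob

print(FeedbackSegmenter()(torch.randn(1, 1, 64, 64)).shape)         # torch.Size([1, 1, 64, 64])
```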