1
Guo X, Liu T, Yang Y, Dai J, Wang L, Tang D, Sun H. Automatic Segmentation of Type A Aortic Dissection on Computed Tomography Images Using Deep Learning Approach. Diagnostics (Basel) 2024; 14:1332. [PMID: 39001223] [PMCID: PMC11240582] [DOI: 10.3390/diagnostics14131332]
Abstract
PURPOSE Type A aortic dissection (TAAD) is a life-threatening aortic disease. The tear involves the ascending aorta and progresses to separation of the layers of the aortic wall and formation of a false lumen. Accurate segmentation of TAAD could assist disease assessment and guide clinical treatment. METHODS This study applied nnU-Net, a state-of-the-art biomedical segmentation network architecture, to segment contrast-enhanced CT images and quantify the morphological features of TAAD. CT datasets were acquired from 24 patients with TAAD. Manual segmentation and annotation of the CT images served as the ground truth. Two-dimensional (2D) nnU-Net and three-dimensional (3D) nnU-Net architectures with Dice- and cross-entropy-based loss functions were used to segment the true lumen (TL), false lumen (FL), and intimal flap on the images. Four-fold cross-validation was performed to evaluate the performance of the two nnU-Net architectures. Six metrics, including accuracy, precision, recall, Intersection over Union, Dice similarity coefficient (DSC), and Hausdorff distance, were calculated to evaluate the performance of the 2D and 3D nnU-Net algorithms on the TAAD datasets. Aortic morphological features were quantified from the segmentation results of both algorithms and compared. RESULTS Overall, the 3D nnU-Net architecture performed better on the TAAD CT datasets, with TL and FL segmentation accuracy of up to 99.9%. The DSCs for TLs and FLs with the 3D nnU-Net were 88.42% and 87.10%, respectively. For the aortic TL and FL diameters and the FL area, values calculated from the 3D nnU-Net segmentations had smaller relative errors (3.89-6.80%) than those from the 2D nnU-Net architecture (relative errors: 4.35-9.48%). CONCLUSIONS The nnU-Net architectures may serve as a basis for automatic segmentation and quantification of TAAD, which could aid rapid diagnosis, surgical planning, and subsequent biomechanical simulation of the aorta.
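As a point of reference for the overlap metrics reported above, the sketch below shows how the Dice similarity coefficient and Intersection over Union can be computed from a predicted and a ground-truth binary mask. It is a generic illustration with hypothetical toy arrays, not the evaluation code used in the study.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray):
    """Dice similarity coefficient and Intersection over Union for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum())
    iou = intersection / union
    return dice, iou

# toy example with a hypothetical 3D label volume (e.g., the true-lumen class)
pred_mask = np.zeros((4, 4, 4), dtype=bool); pred_mask[1:3, 1:3, 1:3] = True
gt_mask = np.zeros((4, 4, 4), dtype=bool); gt_mask[1:4, 1:3, 1:3] = True
print(dice_and_iou(pred_mask, gt_mask))  # (0.8, 0.666...)
```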
Affiliation(s)
- Xiaoya Guo
- School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Tianshu Liu
- School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Yi Yang
- School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Jianxin Dai
- School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Liang Wang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Dalin Tang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing 210096, China
- Mathematical Sciences Department, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Haoliang Sun
- Department of Cardiovascular Surgery, First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
2
Raj A, Allababidi A, Kayed H, Gerken ALH, Müller J, Schoenberg SO, Zöllner FG, Rink JS. Streamlining Acute Abdominal Aortic Dissection Management-An AI-based CT Imaging Workflow. J Imaging Inform Med 2024. [PMID: 38864947] [DOI: 10.1007/s10278-024-01164-0]
Abstract
Life-threatening acute aortic dissection (AD) demands timely diagnosis for effective intervention. To streamline intrahospital workflows, automated detection of AD in abdominal computed tomography (CT) scans could usefully assist clinicians. We aimed to create a robust convolutional neural network (CNN)-based pipeline capable of real-time screening for signs of abdominal AD on CT. In this retrospective study, abdominal CT data from AD and non-AD patients were collected (n = 195; 94 AD cases; mean age 65.9 years; 35.8% female). A CNN-based algorithm was developed with the goal of enabling robust, automated, and highly sensitive detection of abdominal AD. Two validation sets were procured from internal (n = 32, 16 AD cases) and external (n = 1189, 100 AD cases) sources. The abdominal region was extracted, followed by automatic isolation of the aortic region of interest (ROI) and highlighting of the membrane via edge extraction, and finally classification of the aortic ROI as dissected or healthy. Fivefold cross-validation was employed on the internal set, and an ensemble of the five trained models was used to predict the internal and external validation sets. Evaluation metrics included the area under the receiver operating characteristic curve (AUC) and balanced accuracy. The AUC, balanced accuracy, and sensitivity on the internal dataset were 0.932 (CI 0.891-0.963), 0.860, and 0.885, respectively. For the internal validation dataset, the AUC, balanced accuracy, and sensitivity were 0.887 (CI 0.732-0.988), 0.781, and 0.875, respectively. For the external validation dataset, the AUC, balanced accuracy, and sensitivity were 0.993 (CI 0.918-0.994), 0.933, and 1.000, respectively. The proposed automated pipeline could help expedite acute aortic dissection management when integrated into clinical workflows.
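For readers unfamiliar with the reported classification metrics, the hedged sketch below computes AUC, balanced accuracy, and sensitivity from per-case labels and model scores using scikit-learn. The variable values and the 0.5 operating point are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score, recall_score

# hypothetical per-case ground truth (1 = dissection) and ensemble probabilities
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.20, 0.75, 0.66, 0.35, 0.10, 0.85, 0.55])
y_pred = (y_score >= 0.5).astype(int)        # illustrative operating point

auc = roc_auc_score(y_true, y_score)
bal_acc = balanced_accuracy_score(y_true, y_pred)
sensitivity = recall_score(y_true, y_pred)   # recall of the positive (AD) class
print(f"AUC={auc:.3f}, balanced accuracy={bal_acc:.3f}, sensitivity={sensitivity:.3f}")
```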
Affiliation(s)
- Anish Raj
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167, Mannheim, Germany.
- Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167, Mannheim, Germany.
- Ahmad Allababidi
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167, Mannheim, Germany
- Hany Kayed
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167, Mannheim, Germany
- Andreas L H Gerken
- Department of Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167, Mannheim, Germany
- Julia Müller
- Mediri GmbH, Eppelheimer Straße 13, D-69115, Heidelberg, Germany
- Stefan O Schoenberg
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167, Mannheim, Germany
- Frank G Zöllner
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167, Mannheim, Germany
- Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167, Mannheim, Germany
- Johann S Rink
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Theodor-Kutzer-Ufer 1-3, D-68167, Mannheim, Germany
3
Koç U, Sezer EA, Özkaya YA, Yarbay Y, Beşler MS, Taydaş O, Yalçın A, Evrimler Ş, Kızıloğlu HA, Kesimal U, Atasoy D, Oruç M, Ertuğrul M, Karakaş E, Karademir F, Sebik NB, Topuz Y, Aktan ME, Sezer Ö, Aydın Ş, Varlı S, Akdoğan E, Ülgü MM, Birinci Ş. Elevating healthcare through artificial intelligence: analyzing the abdominal emergencies data set (TR_ABDOMEN_RAD_EMERGENCY) at TEKNOFEST-2022. Eur Radiol 2024; 34:3588-3597. [PMID: 37947834] [DOI: 10.1007/s00330-023-10391-y]
Abstract
OBJECTIVES The artificial intelligence competition in healthcare at TEKNOFEST-2022 provided a platform to address the complex multi-class classification challenge of abdominal emergencies using computer vision techniques. This manuscript aimed to comprehensively present the methodologies for data preparation, annotation procedures, and rigorous evaluation metrics, and to introduce a meticulously curated abdominal emergencies data set to researchers. METHODS The data set underwent a comprehensive central screening procedure employing diverse algorithms extracted from the e-Nabız (Pulse) and National Teleradiology System of the Republic of Türkiye, Ministry of Health. The data set was fully anonymized and subsequently annotated by a group of ten experienced radiologists. Evaluation was performed by calculating F1 scores derived from the intersection over union values between the predicted bounding boxes and the corresponding ground truth (GT) bounding boxes. Baseline performance metrics were established by averaging the highest five F1 scores. RESULTS F1 scores declined progressively as the threshold value increased. Class 6 (abdominal aortic aneurysm/dissection) was relatively straightforward to detect compared with other classes, whereas class 5 (acute diverticulitis) presented the most formidable challenge. Notably, when all achieved outcomes for all classes were considered at a threshold of 0.5, the data set's complexity and associated challenges became pronounced. CONCLUSION This data set's significance lies in its pioneering provision of labels and GT boxes for six classes, fostering opportunities for researchers. CLINICAL RELEVANCE STATEMENT Prompt identification of and timely intervention in emergent medical conditions are of paramount significance. The application of AI can augment patient care and minimize the potential for errors, particularly amid high caseloads. KEY POINTS • The data set used in the artificial intelligence competition in healthcare (TEKNOFEST-2022) provides a 6-class data set of abdominal CT images covering a great variety of abdominal emergencies. • This data set is compiled from the National Teleradiology System data repository of the emergency radiology departments of 459 hospitals. • Radiological data on abdominal emergencies are scarce in the literature, and this annotated competition data set can be a valuable resource for further studies and new AI models.
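The competition scoring described above rests on the intersection over union (IoU) between predicted and ground-truth bounding boxes, with detections counted as true positives above a threshold and an F1 score derived from the counts. The sketch below illustrates that logic for axis-aligned 2D boxes; the matching scheme (greedy, at most one prediction per GT box) is a simplifying assumption, not the official evaluation code.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def f1_at_threshold(preds, gts, thr=0.5):
    """Greedy matching: each GT box can be matched by at most one prediction."""
    matched, tp = set(), 0
    for p in preds:
        best_iou, best_j = 0.0, None
        for j, g in enumerate(gts):
            iou = box_iou(p, g)
            if j not in matched and iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= thr:
            matched.add(best_j)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

print(f1_at_threshold([(10, 10, 50, 50)], [(12, 12, 48, 52)], thr=0.5))  # 1.0
```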
Affiliation(s)
- Ural Koç
- Department of Radiology, Ankara Bilkent City Hospital, Ankara, Türkiye.
- Ebru Akçapınar Sezer
- Artificial Intelligence Division, Department of Computer Engineering, Hacettepe University, Ankara, Türkiye
- Yasin Yarbay
- General Directorate of Health Information Systems, Ministry of Health, Ankara, Türkiye
- Onur Taydaş
- Department of Radiology, Faculty of Medicine, Sakarya University, Sakarya, Türkiye
- Ahmet Yalçın
- Department of Radiology, Faculty of Medicine, Erzurum Atatürk University, Erzurum, Türkiye
- Şehnaz Evrimler
- Department of Radiology, Ankara Etlik City Hospital, Ankara, Türkiye
- Uğur Kesimal
- Department of Radiology, Ankara Training and Research Hospital, Ankara, Türkiye
- Dilara Atasoy
- Department of Radiology, Sivas Numune State Hospital, Sivas, Türkiye
- Meltem Oruç
- Department of Radiology, Karaman Training and Research Hospital, Karaman, Türkiye
- Mustafa Ertuğrul
- Department of Radiology, Ürgüp State Hospital, Nevşehir, Türkiye
- Emrah Karakaş
- General Directorate of Health Information Systems, Ministry of Health, Ankara, Türkiye
- Nihat Barış Sebik
- General Directorate of Health Information Systems, Ministry of Health, Ankara, Türkiye
- Özgür Sezer
- General Directorate of Health Information Systems, Ministry of Health, Ankara, Türkiye
- Şahin Aydın
- General Directorate of Health Information Systems, Ministry of Health, Ankara, Türkiye
- Songül Varlı
- Health Institutes of Türkiye, İstanbul, Türkiye
- Department of Computer Engineering, Yıldız Technical University, İstanbul, Türkiye
- Erhan Akdoğan
- Health Institutes of Türkiye, İstanbul, Türkiye
- Department of Mechatronics Engineering, Faculty of Mechanical Engineering, Yıldız Technical University, İstanbul, Türkiye
- Mustafa Mahir Ülgü
- General Directorate of Health Information Systems, Ministry of Health, Ankara, Türkiye
4
Lin W, Gao Z, Liu H, Zhang H. A Deformable Constraint Transport Network for Optimal Aortic Segmentation From CT Images. IEEE Trans Med Imaging 2024; 43:1462-1475. [PMID: 38048241] [DOI: 10.1109/tmi.2023.3339142]
Abstract
Aortic segmentation from computed tomography (CT) is crucial for facilitating aortic intervention, as it enables clinicians to visualize aortic anatomy for diagnosis and measurement. However, aortic segmentation faces the challenge of variable geometry in space, arising from the geometric diversity of different diseases and the geometric transformations that occur between raw and measured images. Existing constraint-based methods can potentially solve the challenge, but they are hindered by two key issues: inaccurate definition of properties and inappropriate topology of transformation in space. In this paper, we propose a deformable constraint transport network (DCTN). The DCTN adaptively extracts aortic features to define intra-image constrained properties and guides topological implementation in space to constrain inter-image geometric transformation between raw and curved planar reformation (CPR) images. The DCTN contains a deformable attention extractor, a geometry-aware decoder and an optimal transport guider. The extractor generates variable patches that preserve semantic integrity and long-range dependency in long-sequence images. The decoder enhances the perception of geometric texture and semantic features, particularly for low-intensity aortic coarctation and false lumen, which removes background interference. The guider explores the geometric discrepancies between raw and CPR images, constructs probability distributions of discrepancies, and matches them with inter-image transformation to guide geometric topology in space. Experimental studies on 267 aortic subjects and four public datasets show the superiority of our DCTN over 23 methods. The results demonstrate DCTN's advantages in aortic segmentation for different types of aortic disease, for different aortic segments, and in the measurement of clinical indexes.
5
Zhang X, Cheng G, Han X, Li S, Xiong J, Wu Z, Zhang H, Chen D. Deep learning-based multi-stage postoperative type-B aortic dissection segmentation using global-local fusion learning. Phys Med Biol 2023; 68:235011. [PMID: 37774717] [DOI: 10.1088/1361-6560/acfec7]
Abstract
Objective. Type-B aortic dissection (AD) is a life-threatening cardiovascular disease, and the primary treatment is thoracic endovascular aortic repair (TEVAR). Due to the lack of a rapid and accurate segmentation technique, the patient-specific postoperative AD model is unavailable in clinical practice, making 3D morphological and hemodynamic analyses impracticable during TEVAR assessment. This work aims to construct a deep learning-based segmentation framework for postoperative type-B AD. Approach. The segmentation is performed in a two-stage manner. A multi-class segmentation of the contrast-enhanced aorta, thrombus (TH), and branch vessels (BV) is achieved in the first stage based on the cropped image patches. True lumen (TL) and false lumen (FL) are extracted from a straightened image containing the entire aorta in the second stage. A global-local fusion learning mechanism is designed to improve the segmentation of TH and BV by compensating for the missing contextual features of the cropped images in the first stage. Results. The experiments are conducted on a multi-center dataset comprising 133 patients with 306 follow-up images. Our framework achieves state-of-the-art Dice similarity coefficients (DSC) of 0.962, 0.921, 0.811, and 0.884 for TL, FL, TH, and BV, respectively. The global-local fusion learning mechanism increases the DSC of TH and BV by 2.3% (p < 0.05) and 1.4% (p < 0.05), respectively, over the baseline. Segmenting TH in stage 1 achieves significantly better DSC for FL (0.921 ± 0.055 versus 0.857 ± 0.220, p < 0.01) and TH (0.811 ± 0.137 versus 0.797 ± 0.146, p < 0.05) than in stage 2. Our framework supports more accurate vascular volume quantification than previous segmentation models, especially for patients with enlarged TH+FL after TEVAR, and shows good generalizability to different hospital settings. Significance. Our framework can quickly provide accurate patient-specific AD models, supporting 3D morphological and hemodynamic analyses for quantitative and more comprehensive patient-specific TEVAR assessments.
Affiliation(s)
- Xuyang Zhang
- School of Medical Technology, Beijing Institute of Technology, Beijing, People's Republic of China
- Guoliang Cheng
- School of Medical Technology, Beijing Institute of Technology, Beijing, People's Republic of China
- Xiaofeng Han
- Department of Diagnostic and Interventional Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Shilong Li
- School of Medical Technology, Beijing Institute of Technology, Beijing, People's Republic of China
- Jiang Xiong
- Department of Vascular and Endovascular Surgery, Chinese PLA General Hospital, Beijing, People's Republic of China
- Ziheng Wu
- Department of Vascular Surgery, The First Affiliated Hospital, Zhejiang University, Hangzhou, People's Republic of China
- Hongkun Zhang
- Department of Vascular Surgery, The First Affiliated Hospital, Zhejiang University, Hangzhou, People's Republic of China
- Duanduan Chen
- School of Medical Technology, Beijing Institute of Technology, Beijing, People's Republic of China
6
Kesävuori R, Kaseva T, Salli E, Raivio P, Savolainen S, Kangasniemi M. Deep learning-aided extraction of outer aortic surface from CT angiography scans of patients with Stanford type B aortic dissection. Eur Radiol Exp 2023; 7:35. [PMID: 37380806] [DOI: 10.1186/s41747-023-00342-z]
Abstract
BACKGROUND Guidelines recommend that aortic dimension measurements in aortic dissection should include the aortic wall. This study aimed to evaluate two-dimensional (2D)- and three-dimensional (3D)-based deep learning approaches for extraction of the outer aortic surface in computed tomography angiography (CTA) scans of Stanford type B aortic dissection (TBAD) patients and assess the speed of different whole aorta (WA) segmentation approaches. METHODS A total of 240 patients diagnosed with TBAD between January 2007 and December 2019 were retrospectively reviewed for this study; 206 CTA scans from 206 patients with acute, subacute, or chronic TBAD acquired with various scanners in multiple different hospital units were included. Ground truth (GT) WAs for 80 scans were segmented by a radiologist using open-source software. The remaining 126 GT WAs were generated via a semi-automatic segmentation process in which an ensemble of 3D convolutional neural networks (CNNs) aided the radiologist. Using 136 scans for training, 30 for validation, and 40 for testing, 2D and 3D CNNs were trained to automatically segment the WA. Main evaluation metrics for outer surface extraction and segmentation accuracy were normalized surface Dice (NSD) and Dice coefficient score (DCS), respectively. RESULTS 2D CNN outperformed 3D CNN in NSD score (0.92 versus 0.90, p = 0.009), and both CNNs had equal DCS (0.96 versus 0.96, p = 0.110). Manual and semi-automatic segmentation times of one CTA scan were approximately 1 and 0.5 h, respectively. CONCLUSIONS Both CNNs segmented WA with high DCS, but based on NSD, better accuracy may be required before clinical application. CNN-based semi-automatic segmentation methods can expedite the generation of GTs. RELEVANCE STATEMENT Deep learning can speed up the creation of ground truth segmentations. CNNs can extract the outer aortic surface in patients with type B aortic dissection. KEY POINTS • 2D and 3D convolutional neural networks (CNNs) can extract the outer aortic surface accurately. • Equal Dice coefficient score (0.96) was reached with 2D and 3D CNNs. • Deep learning can expedite the creation of ground truth segmentations.
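Normalized surface Dice (NSD), the main metric above, counts the fraction of boundary voxels of each segmentation that lie within a tolerance of the other segmentation's boundary. The following is a minimal voxel-based sketch of that idea using SciPy distance transforms; the tolerance and voxel spacing are placeholder assumptions, and the exact definition used in the paper may differ in detail.

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: mask minus its binary erosion."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

def normalized_surface_dice(pred, gt, tolerance_mm=1.0, spacing=(1.0, 1.0, 1.0)):
    pred_surf, gt_surf = surface(pred), surface(gt)
    # distance (in mm) from every voxel to the nearest boundary voxel of the other mask
    dist_to_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    pred_ok = (dist_to_gt[pred_surf] <= tolerance_mm).sum()
    gt_ok = (dist_to_pred[gt_surf] <= tolerance_mm).sum()
    return (pred_ok + gt_ok) / (pred_surf.sum() + gt_surf.sum())
```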
Affiliation(s)
- Risto Kesävuori
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland.
- Tuomas Kaseva
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Eero Salli
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Peter Raivio
- Department of Cardiac Surgery, Heart and Lung Center, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Sauli Savolainen
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Department of Physics, University of Helsinki, Helsinki, Finland
- Marko Kangasniemi
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
7
Xiang D, Qi J, Wen Y, Zhao H, Zhang X, Qin J, Ma X, Ren Y, Hu H, Liu W, Yang F, Zhao H, Wang X, Zheng C. ADSeg: A flap-attention-based deep learning approach for aortic dissection segmentation. Patterns (N Y) 2023; 4:100727. [PMID: 37223272] [PMCID: PMC10201300] [DOI: 10.1016/j.patter.2023.100727]
Abstract
Accurate and rapid segmentation of the lumen in an aortic dissection (AD) is an important prerequisite for risk evaluation and medical planning for patients with this serious condition. Although some recent studies have pioneered technical advances for the challenging AD segmentation task, they generally neglect the intimal flap structure that separates the true and false lumens. Identification and segmentation of the intimal flap may simplify AD segmentation, and the incorporation of long-distance z axis information interaction along the curved aorta may improve segmentation accuracy. This study proposes a flap attention module that focuses on key flap voxels and performs operations with long-distance attention. In addition, a pragmatic cascaded network structure with feature reuse and a two-step training strategy are presented to fully exploit network representation power. The proposed ADSeg method was evaluated on a multicenter dataset of 108 cases, with or without thrombus; ADSeg outperformed previous state-of-the-art methods by a significant margin and was robust against center variation.
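ADSeg's flap attention module is specific to that work, but the underlying idea of letting distant slices along the aorta exchange information can be illustrated with a generic self-attention block that attends only along the z (slice) axis of a 3D feature map. The PyTorch sketch below is an illustrative stand-in under that assumption, not the authors' module.

```python
import torch
import torch.nn as nn

class SliceAttention(nn.Module):
    """Toy long-distance attention across the z (slice) axis of a 3D feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv3d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv3d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x):                                # x: (B, C, D, H, W)
        q = self.query(x).mean(dim=(3, 4))               # pool in-plane -> (B, C', D)
        k = self.key(x).mean(dim=(3, 4))                 # (B, C', D)
        attn = torch.softmax(
            torch.einsum("bcd,bce->bde", q, k) / q.shape[1] ** 0.5, dim=-1
        )                                                 # (B, D, D): slice-to-slice weights
        v = self.value(x)                                 # (B, C, D, H, W)
        out = torch.einsum("bde,bcehw->bcdhw", attn, v)   # mix information across slices
        return x + out                                    # residual connection

feats = torch.randn(1, 32, 40, 16, 16)                    # hypothetical feature map
print(SliceAttention(32)(feats).shape)                     # torch.Size([1, 32, 40, 16, 16])
```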
Affiliation(s)
- Dongqiao Xiang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Jiyang Qi
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Yiqing Wen
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Hui Zhao
- Department of Interventional Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Xiaolin Zhang
- Department of Radiology, Yichang Central People’s Hospital, Yichang 443003, China
- Jia Qin
- Department of Radiology, Yichang Central People’s Hospital, Yichang 443003, China
- Xiaomeng Ma
- Department of Radiology, Jingzhou First People’s Hospital of Hubei Province, Jingzhou 434000, China
- Yaguang Ren
- Research Laboratory for Biomedical Optics and Molecular Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Hongyao Hu
- Department of Interventional Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Wenyu Liu
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Fan Yang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Huangxuan Zhao
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Xinggang Wang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
- Chuansheng Zheng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
8
Zhu Y, Xu XY, Rosendahl U, Pepper J, Mirsadraee S. Advanced risk prediction for aortic dissection patients using imaging-based computational flow analysis. Clin Radiol 2023; 78:e155-e165. [PMID: 36610929] [DOI: 10.1016/j.crad.2022.12.001]
Abstract
Patients with either a repaired or medically managed aortic dissection have varying degrees of risk of developing late complications. High-risk patients would benefit from earlier intervention to improve their long-term survival. Currently serial imaging is used for risk stratification, which is not always reliable. On the other hand, understanding aortic haemodynamics within a dissection is essential to fully evaluate the disease and predict how it may progress. In recent decades, computational fluid dynamics (CFD) has been extensively applied to simulate complex haemodynamics within aortic diseases, and more recently, four-dimensional (4D)-flow magnetic resonance imaging (MRI) techniques have been developed for in vivo haemodynamic measurement. This paper presents a comprehensive review on the application of image-based CFD simulations and 4D-flow MRI analysis for risk prediction in aortic dissection. The key steps involved in patient-specific CFD analyses are demonstrated. Finally, we propose a workflow incorporating computational modelling for personalised assessment to aid in risk stratification and treatment decision-making.
Affiliation(s)
- Y Zhu
- Department of Chemical Engineering, Imperial College London, London, UK
- X Y Xu
- Department of Chemical Engineering, Imperial College London, London, UK
- U Rosendahl
- Department of Cardiac Surgery, Royal Brompton and Harefield Hospitals, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- J Pepper
- Department of Cardiac Surgery, Royal Brompton and Harefield Hospitals, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- S Mirsadraee
- National Heart and Lung Institute, Imperial College London, London, UK; Department of Radiology, Royal Brompton and Harefield Hospitals, London, UK
9
Inter-observer variability of expert-derived morphologic risk predictors in aortic dissection. Eur Radiol 2023; 33:1102-1111. [PMID: 36029344] [PMCID: PMC10017115] [DOI: 10.1007/s00330-022-09056-z]
Abstract
OBJECTIVES Establishing the reproducibility of expert-derived measurements on CTA exams of aortic dissection is clinically important and paramount for ground-truth determination for machine learning. METHODS Four independent observers retrospectively evaluated CTA exams of 72 patients with uncomplicated Stanford type B aortic dissection and assessed the reproducibility of a recently proposed combination of four morphologic risk predictors (maximum aortic diameter, false lumen circumferential angle, false lumen outflow, and intercostal arteries). For the first inter-observer variability assessment, 47 CTA scans from one aortic center were evaluated by expert-observer 1 in an unconstrained clinical assessment without a standardized workflow and compared to a composite of three expert observers (observers 2-4) using a standardized workflow. A second inter-observer variability assessment on 30 out of the 47 CTA scans compared observers 3 and 4 with a constrained, standardized workflow. A third inter-observer variability assessment was done after specialized training and tested between observers 3 and 4 in an external population of 25 CTA scans. Inter-observer agreement was assessed with intraclass correlation coefficients (ICCs) and Bland-Altman plots. RESULTS Pre-training ICCs of the four morphologic features ranged from 0.04 (-0.05 to 0.13) to 0.68 (0.49-0.81) between observer 1 and observers 2-4 and from 0.50 (0.32-0.69) to 0.89 (0.78-0.95) between observers 3 and 4. ICCs improved after training, ranging from 0.69 (0.52-0.87) to 0.97 (0.94-0.99), and Bland-Altman analysis showed decreased bias and limits of agreement. CONCLUSIONS Manual morphologic feature measurements on CTA images can be optimized, resulting in improved inter-observer reliability. This is essential for robust ground-truth determination for machine learning models. KEY POINTS • Manual measurements of aortic CTA imaging features performed in routine clinical fashion showed poor inter-observer reproducibility. • A standardized workflow with standardized training resulted in substantial improvements with excellent inter-observer reproducibility. • Robust ground truth labels obtained manually with excellent inter-observer reproducibility are key to developing reliable machine learning models.
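Bland-Altman analysis, used above alongside ICCs, summarizes inter-observer agreement by the mean difference (bias) and the 95% limits of agreement between two observers' measurements. A minimal NumPy sketch with hypothetical diameter measurements is shown below; it is not the study's analysis code.

```python
import numpy as np

# hypothetical maximum aortic diameters (mm) measured by two observers on the same scans
obs1 = np.array([41.2, 38.5, 45.0, 52.3, 36.8, 47.1])
obs2 = np.array([40.6, 39.0, 44.1, 53.0, 37.5, 46.2])

diff = obs1 - obs2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)            # 95% limits of agreement: bias +/- 1.96*SD
print(f"bias = {bias:.2f} mm, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
```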
10
Mastrodicasa D, Willemink MJ, Turner VL, Hinostroza V, Codari M, Hanneman K, Ouzounian M, Ocazionez Trujillo D, Afifi RO, Hedgire S, Burris NS, Yang B, Lacomis JM, Gleason TG, Pacini D, Folesani G, Lovato L, Hinzpeter R, Alkadhi H, Stillman AE, Chen EP, van Kuijk SMJ, Schurink GWH, Sailer AM, Bäumler K, Miller DC, Fischbein MP, Fleischmann D. Registry of Aortic Diseases to Model Adverse Events and Progression (ROADMAP) in Uncomplicated Type B Aortic Dissection: Study Design and Rationale. Radiol Cardiothorac Imaging 2022; 4:e220039. [PMID: 36601455] [PMCID: PMC9806732] [DOI: 10.1148/ryct.220039]
Abstract
Purpose To describe the design and methodological approach of a multicenter, retrospective study to externally validate a clinical and imaging-based model for predicting the risk of late adverse events in patients with initially uncomplicated type B aortic dissection (uTBAD). Materials and Methods The Registry of Aortic Diseases to Model Adverse Events and Progression (ROADMAP) is a collaboration between 10 academic aortic centers in North America and Europe. Two centers have previously developed and internally validated the risk prediction model. Clinical and imaging data from eight ROADMAP centers will be used for external validation. Patients with uTBAD who survived the initial hospitalization between January 1, 2001, and December 31, 2013, with follow-up until 2020, will be retrospectively identified. Clinical and imaging data from the index hospitalization and all follow-up encounters will be collected at each center and transferred to the coordinating center for analysis. Baseline and follow-up CT scans will be evaluated by cardiovascular imaging experts using a standardized technique. Results The primary end point is the occurrence of late adverse events, defined as aneurysm formation (≥6 cm), rapid expansion of the aorta (≥1 cm/y), fatal or nonfatal aortic rupture, new refractory pain, uncontrollable hypertension, and organ or limb malperfusion. The previously derived multivariable model will be externally validated by using Cox proportional hazards regression modeling. Conclusion This study will show whether a recent clinical and imaging-based risk prediction model for patients with uTBAD can be generalized to a larger population, which is an important step toward individualized risk stratification and therapy. Keywords: CT Angiography, Vascular, Aorta, Dissection, Outcomes Analysis, Aortic Dissection, MRI, TEVAR. © RSNA, 2022. See also the commentary by Rajiah in this issue.
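The planned external validation applies a previously derived Cox proportional hazards model to new patients and checks its discrimination. The sketch below illustrates that general pattern with the lifelines package on a hypothetical data frame (covariates, follow-up times, and cohorts are invented for illustration); it is not the ROADMAP analysis code, and the ridge penalty is only there to stabilize the toy-sized fit.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# hypothetical derivation cohort: two predictors, follow-up time (months), event flag
derivation = pd.DataFrame({
    "max_diameter_mm": [42, 38, 51, 45, 36, 48, 40, 55],
    "fl_angle_deg":    [120, 80, 200, 150, 60, 170, 90, 210],
    "time":            [60, 25, 14, 72, 84, 22, 66, 9],
    "event":           [0, 1, 1, 0, 0, 1, 0, 1],
})
cph = CoxPHFitter(penalizer=0.1)          # small ridge penalty for the toy-sized example
cph.fit(derivation, duration_col="time", event_col="event")

# in practice this would be an independent external cohort with the same columns
validation = derivation.copy()
risk = cph.predict_partial_hazard(validation)
# concordance_index expects scores where larger values mean longer survival,
# so the risk score is negated
c_index = concordance_index(validation["time"], -risk, validation["event"])
print(f"external c-index: {c_index:.2f}")
```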
11
Feng H, Fu Z, Wang Y, Zhang P, Lai H, Zhao J. Automatic segmentation of thrombosed aortic dissection in post-operative CT-angiography images. Med Phys 2022. [PMID: 36542417] [DOI: 10.1002/mp.16169]
Abstract
PURPOSE The thrombus in the false lumen (FL) of aortic dissection (AD) patients is a meaningful indicator to determine aortic remodeling but difficult to measure in clinic. In this study, a novel segmentation strategy based on deep learning was proposed to automatically extract the thrombus in the FL in post-operative computed tomography angiography (CTA) images of AD patients, which provided an efficient and convenient segmentation method with high accuracy. METHODS A two-step segmentation strategy was proposed. Each step contained a convolutional neural network (CNN) to segment the aorta and the thrombus, respectively. In the first step, a CNN was used to obtain the binary segmentation mask of the whole aorta. In the second step, another CNN was introduced to segment the thrombus. The results of the first step were used as additional input to the second step to highlight the aorta in the complex background. Moreover, skip connection attention refinement (SAR) modules were designed and added in the second step to improve the segmentation accuracy of the thrombus details by efficiently using the low-level features. RESULTS The proposed method provided accurate thrombus segmentation results (0.903 ± 0.062 in dice score, 0.828 ± 0.092 in Jaccard index, and 2.209 ± 2.945 in 95% Hausdorff distance), which showed improvement compared to the methods without prior information (0.846 ± 0.085 in dice score) and the method without SAR (0.899 ± 0.060 in dice score). Moreover, the proposed method achieved 0.967 ± 0.029 and 0.948 ± 0.041 in dice score of true lumen (TL) and patent FL (PFL) segmentation, respectively, indicating the excellence of the proposed method in the segmentation task of the overall aorta. CONCLUSIONS A novel CNN-based segmentation framework was proposed to automatically obtain thrombus segmentation for thrombosed AD in post-operative CTA images, which provided a useful tool for further application of thrombus-related indicators in clinical and research application.
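A key element of the two-step strategy is feeding the first-stage aorta mask into the second network as prior information. In practice this is often done by concatenating the mask as an extra input channel; the minimal PyTorch sketch below shows that step with hypothetical tensor sizes and is not the authors' implementation.

```python
import torch

# hypothetical cropped CTA patch and the stage-1 aorta mask, both (B, C, D, H, W)
cta_patch = torch.randn(1, 1, 64, 128, 128)
aorta_mask = (torch.rand(1, 1, 64, 128, 128) > 0.5).float()

# stage-2 input: image + prior mask stacked along the channel axis -> (1, 2, 64, 128, 128)
stage2_input = torch.cat([cta_patch, aorta_mask], dim=1)
print(stage2_input.shape)
```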
Affiliation(s)
- Hanying Feng
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Zheng Fu
- Department of Cardiovascular Surgery, Zhongshan Hospital Fudan University, Shanghai, People's Republic of China
- Yulin Wang
- Department of Cardiovascular Surgery, Zhongshan Hospital Fudan University, Shanghai, People's Republic of China
- Puming Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Hao Lai
- Department of Cardiovascular Surgery, Zhongshan Hospital Fudan University, Shanghai, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
12
Mastrodicasa D, Codari M, Bäumler K, Sandfort V, Shen J, Mistelbauer G, Hahn LD, Turner VL, Desjardins B, Willemink MJ, Fleischmann D. Artificial Intelligence Applications in Aortic Dissection Imaging. Semin Roentgenol 2022; 57:357-363. [PMID: 36265987] [PMCID: PMC10013132] [DOI: 10.1053/j.ro.2022.07.001]
Affiliation(s)
- Domenico Mastrodicasa
- Department of Radiology, Stanford University School of Medicine, Stanford, CA; Stanford Cardiovascular Institute, Stanford University School of Medicine, Stanford, CA.
- Marina Codari
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Kathrin Bäumler
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Veit Sandfort
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Jody Shen
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Gabriel Mistelbauer
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Lewis D Hahn
- University of California San Diego, Department of Radiology, La Jolla, CA
- Valery L Turner
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Benoit Desjardins
- Department of Radiology, Stanford University School of Medicine, Stanford, CA; Department of Radiology, University of Pennsylvania, Philadelphia, PA
- Martin J Willemink
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
- Dominik Fleischmann
- Department of Radiology, Stanford University School of Medicine, Stanford, CA
13
Raman AG, Jones C, Weiss CR. Machine Learning for Hepatocellular Carcinoma Segmentation at MRI: Radiology In Training. Radiology 2022; 304:509-515. [PMID: 35536132] [DOI: 10.1148/radiol.212386]
Abstract
A 68-year-old woman with a history of hepatocellular carcinoma underwent conventional transarterial chemoembolization. Manual tumor segmentation on images, which can be used to assess disease progression, is time consuming and may suffer from interobserver reliability issues. The authors present a how-to guide to develop machine learning algorithms for fully automatic segmentation of hepatocellular carcinoma and other tumors for lesion tracking over time.
Affiliation(s)
- Alex G Raman
- From the Western University of Health Sciences, College of Osteopathic Medicine of the Pacific, 309 E 2nd St, Pomona, CA 91766 (A.G.R.); Department of Computer Science, Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Md (C.J.); and Department of Radiology and Radiologic Science, Division of Interventional Radiology, Johns Hopkins Hospital, Baltimore, MD (C.R.W.)
- Craig Jones
- From the Western University of Health Sciences, College of Osteopathic Medicine of the Pacific, 309 E 2nd St, Pomona, CA 91766 (A.G.R.); Department of Computer Science, Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Md (C.J.); and Department of Radiology and Radiologic Science, Division of Interventional Radiology, Johns Hopkins Hospital, Baltimore, MD (C.R.W.)
- Clifford R Weiss
- From the Western University of Health Sciences, College of Osteopathic Medicine of the Pacific, 309 E 2nd St, Pomona, CA 91766 (A.G.R.); Department of Computer Science, Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, Md (C.J.); and Department of Radiology and Radiologic Science, Division of Interventional Radiology, Johns Hopkins Hospital, Baltimore, MD (C.R.W.)
14
Fleischmann D, Afifi RO, Casanegra AI, Elefteriades JA, Gleason TG, Hanneman K, Roselli EE, Willemink MJ, Fischbein MP. Imaging and Surveillance of Chronic Aortic Dissection: A Scientific Statement From the American Heart Association. Circ Cardiovasc Imaging 2022; 15:e000075. [PMID: 35172599] [DOI: 10.1161/hci.0000000000000075]
Abstract
All patients surviving an acute aortic dissection require continued lifelong surveillance of their diseased aorta. Late complications, driven predominantly by chronic false lumen degeneration and aneurysm formation, often require surgical, endovascular, or hybrid interventions to treat or prevent aortic rupture. Imaging plays a central role in the medical decision-making of patients with chronic aortic dissection. Accurate aortic diameter measurements and rigorous, systematic documentation of diameter changes over time with different imaging equipment and modalities pose a range of practical challenges in these complex patients. Currently, no guidelines or recommendations for imaging surveillance in patients with chronic aortic dissection exist. In this document, we present state-of-the-art imaging and measurement techniques for patients with chronic aortic dissection and clarify the need for standardized measurements and reporting for lifelong surveillance. We also examine the emerging role of imaging and computer simulations to predict aortic false lumen degeneration, remodeling, and biomechanical failure from morphological and hemodynamic features. These insights may improve risk stratification, individualize contemporary treatment options, and potentially aid in the conception of novel treatment strategies in the future.
15
Hahn LD, Hall K, Alebdi T, Kligerman SJ, Hsiao A. Automated Deep Learning Analysis for Quality Improvement of CT Pulmonary Angiography. Radiol Artif Intell 2022; 4:e210162. [PMID: 35391776] [DOI: 10.1148/ryai.210162]
Abstract
CT pulmonary angiography (CTPA) is the first-line imaging test for evaluation of acute pulmonary emboli. However, diagnostic quality is heterogeneous across institutions and is frequently limited by suboptimal pulmonary artery (PA) contrast enhancement. In this retrospective study, a deep learning algorithm for measuring enhancement of the central PAs was developed and assessed for feasibility of its use in quality improvement of CTPA. In a convenience sample of 450 patients, automated measurement of CTPA enhancement showed high agreement with manual radiologist measurement (r = 0.996). Using a threshold of less than 250 HU for suboptimal enhancement, the sensitivity and specificity of the automated classification were 100% and 99.5%, respectively. The algorithm was further evaluated in a random sampling of 3195 CTPA examinations from January 2019 through May 2021. Beginning in January 2021, the scanning protocol was transitioned from bolus tracking to a timing bolus strategy. Automated analysis of these examinations showed that most suboptimal examinations following the change in protocol were performed using one scanner, highlighting the potential value of deep learning algorithms for quality improvement in the radiology department. Keywords: CT Angiography, Pulmonary Arteries © RSNA, 2022.
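The quality-control rule described above reduces to measuring mean attenuation inside the segmented central pulmonary arteries and flagging studies below 250 HU. The NumPy sketch below illustrates that final step on a hypothetical HU volume and mask; it is not the study's algorithm, which also performs the segmentation itself.

```python
import numpy as np

def pa_enhancement(hu_volume: np.ndarray, pa_mask: np.ndarray, threshold_hu: float = 250.0):
    """Mean enhancement within the pulmonary-artery mask and a 'suboptimal' flag."""
    mean_hu = hu_volume[pa_mask.astype(bool)].mean()
    return mean_hu, mean_hu < threshold_hu

# toy volume: background at -50 HU, a small "PA" region enhanced to ~320 HU
volume = np.full((32, 64, 64), -50.0)
mask = np.zeros_like(volume, dtype=bool)
mask[10:20, 20:30, 20:30] = True
volume[mask] = 320.0
print(pa_enhancement(volume, mask))   # (320.0, False) -> adequately enhanced
```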
Affiliation(s)
- Lewis D Hahn
- Department of Radiology, University of California San Diego School of Medicine, 9300 Campus Point Dr, MC 0841, La Jolla, CA 92037-0841 (L.D.H., T.A., S.J.K., A.H.); and Naval Hospital Camp Pendleton, Oceanside, Calif (K.H.)
- Kent Hall
- Department of Radiology, University of California San Diego School of Medicine, 9300 Campus Point Dr, MC 0841, La Jolla, CA 92037-0841 (L.D.H., T.A., S.J.K., A.H.); and Naval Hospital Camp Pendleton, Oceanside, Calif (K.H.)
- Thamer Alebdi
- Department of Radiology, University of California San Diego School of Medicine, 9300 Campus Point Dr, MC 0841, La Jolla, CA 92037-0841 (L.D.H., T.A., S.J.K., A.H.); and Naval Hospital Camp Pendleton, Oceanside, Calif (K.H.)
- Seth J Kligerman
- Department of Radiology, University of California San Diego School of Medicine, 9300 Campus Point Dr, MC 0841, La Jolla, CA 92037-0841 (L.D.H., T.A., S.J.K., A.H.); and Naval Hospital Camp Pendleton, Oceanside, Calif (K.H.)
- Albert Hsiao
- Department of Radiology, University of California San Diego School of Medicine, 9300 Campus Point Dr, MC 0841, La Jolla, CA 92037-0841 (L.D.H., T.A., S.J.K., A.H.); and Naval Hospital Camp Pendleton, Oceanside, Calif (K.H.)
16
Laur O, Wang B. Musculoskeletal trauma and artificial intelligence: current trends and projections. Skeletal Radiol 2022; 51:257-269. [PMID: 34089338] [DOI: 10.1007/s00256-021-03824-6]
Abstract
Musculoskeletal trauma accounts for a significant fraction of emergency department visits and patients seeking urgent care, with a high financial cost to society. Diagnostic imaging is indispensable in the workup and management of trauma patients. However, diagnostic imaging represents a complex multifaceted system, with many aspects of its workflow prone to inefficiencies or human error. Recent technological innovations in artificial intelligence and machine learning have shown promise to revolutionize our systems for providing medical care to patients. This review will provide a general overview of the current state of artificial intelligence and machine learning applications in different aspects of trauma imaging and provide a vision for how such applications could be leveraged to enhance our diagnostic imaging systems and optimize patient outcomes.
Affiliation(s)
- Olga Laur
- Division of Musculoskeletal Radiology, Department of Radiology, NYU Langone Health, 301 East 17th Street, 6th Floor, New York, NY, 10003, USA
- Benjamin Wang
- Division of Musculoskeletal Radiology, Department of Radiology, NYU Langone Health, 301 East 17th Street, 6th Floor, New York, NY, 10003, USA
17
Abstract
Positron emission tomography (PET) offers an incredible wealth of diverse research applications in vascular disease, providing a depth of molecular, functional, structural, and spatial information. Despite this, vascular PET imaging has not yet assumed the same clinical use as vascular ultrasound, CT, and MR imaging which provides information about late-onset, structural tissue changes. The current clinical utility of PET relies heavily on visual inspection and suboptimal parameters such as SUVmax; emerging applications have begun to harness the tool of whole-body PET to better understand the disease. Even still, without automation, this is a time-consuming and variable process. This review summarizes PET applications in vascular disorders, highlights emerging AI methods, and discusses the unlocked potential of AI in the clinical space.
18
Egger J, Pepe A, Gsaxner C, Jin Y, Li J, Kern R. Deep learning-a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact. PeerJ Comput Sci 2021; 7:e773. [PMID: 34901429] [PMCID: PMC8627237] [DOI: 10.7717/peerj-cs.773]
Abstract
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of a human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform the state-of-the-art methods in different tasks and, because of this, the whole field has seen exponential growth in recent years, resulting in well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already provides over 11,000 results in Q3 2020 for the search term 'deep learning', and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of a subfield. However, there are several review articles about deep learning, which are focused on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and outline the research impact that they already have during a short period of time. The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category.
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, Zhejiang, China
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University of Graz, Graz, Austria
- Roman Kern
- Knowledge Discovery, Know-Center, Graz, Austria
- Institute of Interactive Systems and Data Science, Graz University of Technology, Graz, Austria
19
Wobben LD, Codari M, Mistelbauer G, Pepe A, Higashigaito K, Hahn LD, Mastrodicasa D, Turner VL, Hinostroza V, Baumler K, Fischbein MP, Fleischmann D, Willemink MJ. Deep Learning-Based 3D Segmentation of True Lumen, False Lumen, and False Lumen Thrombosis in Type-B Aortic Dissection. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3912-3915. [PMID: 34892087] [PMCID: PMC9261941] [DOI: 10.1109/embc46164.2021.9631067]
Abstract
Patients with initially uncomplicated type B aortic dissection (uTBAD) remain at high risk for developing late complications. Identification of morphologic features for improving risk stratification of these patients requires automated segmentation of computed tomography angiography (CTA) images. We developed three segmentation models utilizing a 3D residual U-Net for segmentation of the true lumen (TL), false lumen (FL), and false lumen thrombosis (FLT). Model 1 segments all labels at once, whereas model 2 segments them sequentially. Best results for TL and FL segmentation were achieved by model 2, with median (interquartiles) Dice similarity coefficients (DSC) of 0.85 (0.77-0.88) and 0.84 (0.82-0.87), respectively. For FLT segmentation, model 1 was superior to model 2, with median (interquartiles) DSCs of 0.63 (0.40-0.78). To purely test the performance of the network to segment FLT, a third model segmented FLT starting from the manually segmented FL, resulting in median (interquartiles) DSCs of 0.99 (0.98-0.99) and 0.85 (0.73-0.94) for patent FL and FLT, respectively. While the ambiguous appearance of FLT on imaging remains a significant limitation for accurate segmentation, our pipeline has the potential to help in segmentation of aortic lumina and thrombosis in uTBAD patients. Clinical relevance - Most predictors of aortic dissection (AD) degeneration are identified through anatomical modeling, which is currently prohibitive in clinical settings due to the time-intensive human interaction. False lumen thrombosis, which often develops in patients with type B AD, has proven to show significant prognostic value for predicting late adverse events. Our automated segmentation algorithm offers the potential of personalized treatment for AD patients, leading to an increase in long-term survival.
20
Wieben O. Improved CT Surveillance of Thoracic Aortic Aneurysm Growth. Radiology 2021; 302:226-227. [PMID: 34665039] [DOI: 10.1148/radiol.2021212122]
Affiliation(s)
- Oliver Wieben
- From the Departments of Medical Physics and Radiology, University of Wisconsin-Madison, Wisconsin Institutes for Medical Research, 1111 Highland Ave, Suite 1127, Madison, WI 53705-2275
21
Abstract
PURPOSE OF REVIEW Discuss foundational concepts for artificial intelligence (AI) and review recent literature on its application to aortic disease. RECENT FINDINGS Machine learning (ML) techniques are rapidly evolving for the evaluation of aortic disease - broadly categorized as algorithms for aortic segmentation, detection of pathology, and risk stratification. Advances in deep learning, particularly U-Net architectures, have revolutionized segmentation of the aorta and show potential for monitoring the size of aortic aneurysm and characterizing aortic dissection. These algorithms also facilitate application of more complex technologies including analysis of flow dynamics with 4D Flow magnetic resonance imaging (MRI) and computational simulation of fluid dynamics for aortic coarctation. In addition, AI algorithms have been proposed to assist in 'opportunistic' screening from routine imaging exams, including automated aortic calcification score, which has emerged as a strong predictor of cardiovascular risk. Finally, several ML algorithms are being explored for risk stratification of patients with aortic aneurysm and dissection, in addition to prediction of postprocedural complications. SUMMARY Multiple ML techniques have potential for characterization and risk prediction of aortic aneurysm, dissection, coarctation, and atherosclerotic disease on computed tomography and MRI. This nascent field shows considerable promise with many applications in development and in early preclinical evaluation.
22
Sieren MM, Widmann C, Weiss N, Moltz JH, Link F, Wegner F, Stahlberg E, Horn M, Oecherting TH, Goltz JP, Barkhausen J, Frydrychowicz A. Automated segmentation and quantification of the healthy and diseased aorta in CT angiographies using a dedicated deep learning approach. Eur Radiol 2021; 32:690-701. [PMID: 34170365] [DOI: 10.1007/s00330-021-08130-2]
Abstract
OBJECTIVES To develop and validate a deep learning-based algorithm for segmenting and quantifying the physiological and diseased aorta in computed tomography angiographies. METHODS CTA exams of the aorta of 191 patients (68.1 ± 14 years, 128 male), performed between 2015 and 2018, were retrospectively identified from our imaging archive and manually segmented by two investigators. A 3D U-Net model was trained on the data, which was divided into a training, a validation, and a test group at a ratio of 7:1:2. Cases in the test group (n = 41) were evaluated to compare manual and automatic segmentations. Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff surface distance (HSD) were extracted. Maximum diameter, effective diameter, and area were quantified and compared between both segmentations at eight anatomical landmarks, and at the maximum area of an aneurysms if present (n = 14). Statistics included error calculation, intraclass correlation coefficient, and Bland-Altman analysis. RESULTS A DSC of 0.95 [0.94; 0.95] and an MSD of 0.76 [0.06; 0.99] indicated close agreement between segmentations. HSD was 8.00 [4.47; 10.00]. The largest absolute errors were found in the ascending aorta with 0.8 ± 1.5 mm for maximum diameter and at the coeliac trunk with - 30.0 ± 81.6 mm2 for area. Results for absolute errors in aneurysms were - 0.5 ± 2.3 mm for maximum diameter, 0.3 ± 1.6 mm for effective diameter, and 64.9 ± 114.9 mm2 for area. ICC showed excellent agreement (> 0.9; p < 0.05) between quantitative measurements. CONCLUSIONS Automated segmentation of the aorta on CTA data using a deep learning algorithm is feasible and allows for accurate quantification of the aortic lumen even if the vascular architecture is altered by disease. KEY POINTS • A deep learning-based algorithm can automatically segment the aorta, mostly within acceptable margins of error, even if the vascular architecture is altered by disease. • Quantifications performed in the segmentations were mostly within clinically acceptable limits, even in pathologically altered segments of the aorta.
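Two of the quantities compared above, cross-sectional area and effective diameter, follow directly from a segmented lumen cross-section: the area is the voxel count times the in-plane pixel area, and the effective diameter is the diameter of a circle with the same area. A small NumPy sketch of this calculation (with hypothetical pixel spacing) is given below; maximum diameter additionally requires the section's longest chord and is omitted here.

```python
import numpy as np

def area_and_effective_diameter(cross_section: np.ndarray, pixel_spacing_mm=(0.7, 0.7)):
    """Lumen area (mm^2) and effective diameter (mm) from a 2D binary cross-section."""
    pixel_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    area = cross_section.astype(bool).sum() * pixel_area
    effective_diameter = 2.0 * np.sqrt(area / np.pi)   # diameter of the equal-area circle
    return area, effective_diameter

# toy cross-section: a filled disc of radius 20 pixels (~14 mm -> effective diameter ~28 mm)
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
print(area_and_effective_diameter(disc))
```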
Affiliation(s)
- Malte Maria Sieren
- Department of Radiology and Nuclear Medicine, University Hospital Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany.
- Cornelia Widmann
- Department of Radiology and Nuclear Medicine, University Hospital Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Nick Weiss
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck/Bremen, Germany
- Jan Hendrik Moltz
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck/Bremen, Germany
- Florian Link
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck/Bremen, Germany
- Franz Wegner
- Department of Radiology and Nuclear Medicine, University Hospital Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Erik Stahlberg
- Department of Radiology and Nuclear Medicine, University Hospital Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Marco Horn
- Department for Vascular Surgery, University Hospital Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Thekla Helene Oecherting
- Department of Radiology and Nuclear Medicine, University Hospital Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Jan Peter Goltz
- Institute for Diagnostic and Interventional Radiology/Neuroradiology, Sana Clinic, Lübeck, Germany
- Joerg Barkhausen
- Department of Radiology and Nuclear Medicine, University Hospital Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Alex Frydrychowicz
- Department of Radiology and Nuclear Medicine, University Hospital Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany