1
Lareyre F, Chaudhuri A, Raffort J. Artificial Intelligence Based Methods to Enhance Analysis of Non-Contrast Computed Tomography in Patients with Aortic Aneurysm. Eur J Vasc Endovasc Surg 2024; 68:418. PMID: 38936690. DOI: 10.1016/j.ejvs.2024.05.046.
Affiliation(s)
- Fabien Lareyre
- Department of Vascular Surgery, Hospital of Antibes Juan-les-Pins, Antibes, France; Université Côte d'Azur, CNRS, UMR7370, LP2M, Nice, France; Fédération Hospitalo-Universitaire FHU Plan&Go, Nice, France.
- Arindam Chaudhuri
- Bedfordshire - Milton Keynes Vascular Centre, Bedfordshire Hospitals NHS Foundation Trust, Bedford, UK
- Juliette Raffort
- Université Côte d'Azur, CNRS, UMR7370, LP2M, Nice, France; Fédération Hospitalo-Universitaire FHU Plan&Go, Nice, France; Clinical Chemistry Laboratory, University Hospital of Nice, Nice, France; Institute 3IA Côte d'Azur, Université Côte d'Azur, Sophia Antipolis, France
2
D'Oria M, Raffort J, Condino S, Cutolo F, Bertagna G, Berchiolli R, Scali S, Griselli F, Troisi N, Lepidi S, Lareyre F. Computational surgery in the management of patients with abdominal aortic aneurysms: Opportunities, challenges, and future directions. Semin Vasc Surg 2024; 37:298-305. PMID: 39277345. DOI: 10.1053/j.semvascsurg.2024.07.005.
Abstract
Computational surgery (CS) is an interdisciplinary field that uses mathematical models and algorithms, with a specific focus on operative planning, simulation, and outcomes analysis, to improve surgical care. As the digital revolution transforms the surgical work environment through broader adoption of artificial intelligence and machine learning, close collaboration between surgeons and computational scientists is not only unavoidable but essential. In this review, the authors summarize the main advances, ongoing challenges, and prospects surrounding the implementation of CS techniques in vascular surgery, with a particular focus on the care of patients affected by abdominal aortic aneurysms (AAAs). Several key areas of AAA care delivery are discussed, including patient-specific modelling, virtual surgery simulation, intraoperative imaging-guided surgery, predictive analytics, biomechanical analysis, and machine learning. The overarching goal of these CS applications is to improve the precision and accuracy of AAA repair procedures while enhancing safety and long-term outcomes. Accordingly, CS has the potential to significantly enhance patient care across the entire surgical journey, from preoperative planning and intraoperative decision making to postoperative surveillance. Moreover, CS-based approaches offer promising opportunities to augment the quality of AAA repair by enabling precise preoperative simulations, real-time intraoperative navigation, and robust postoperative monitoring. However, integrating these advanced computer-based technologies into medical research and clinical practice presents new challenges, including addressing technical limitations, ensuring accuracy and reliability, and managing the unique ethical considerations associated with their use. Thorough evaluation of these aspects is crucial before advanced computational techniques can be widely integrated into health care systems for AAA management.
Affiliation(s)
- Mario D'Oria
- Division of Vascular and Endovascular Surgery, Department of Medical Surgical and Health Sciences, University of Trieste, Strada di Fiume 447, 34149, Trieste, Italy
- Juliette Raffort
- Université Côte d'Azur, Le Centre National de la Recherche Scientifique, UMR7370, LP2M, Nice, France
- Sara Condino
- Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Center, University of Pisa, Pisa, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Center, University of Pisa, Pisa, Italy
- Giulia Bertagna
- Vascular Surgery Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Raffaella Berchiolli
- Vascular Surgery Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Salvatore Scali
- Division of Vascular Surgery and Endovascular Therapy, University of Florida, Gainesville, FL, USA
- Filippo Griselli
- Division of Vascular and Endovascular Surgery, Department of Medical Surgical and Health Sciences, University of Trieste, Strada di Fiume 447, 34149, Trieste, Italy
- Nicola Troisi
- Vascular Surgery Unit, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Sandro Lepidi
- Division of Vascular and Endovascular Surgery, Department of Medical Surgical and Health Sciences, University of Trieste, Strada di Fiume 447, 34149, Trieste, Italy
- Fabien Lareyre
- Department of Vascular Surgery, Hospital of Antibes Juan-les-Pins, Antibes, France
3
Stevenin G, Canonge J, Gervais M, Fiore A, Lareyre F, Touma J, Desgranges P, Raffort J, Sénémaud J. e-Health and environmental sustainability in vascular surgery. Semin Vasc Surg 2024; 37:333-341. PMID: 39277350. DOI: 10.1053/j.semvascsurg.2024.08.005.
Abstract
e-Health technology holds great promise for improving the management of patients with vascular diseases and offers a unique opportunity to mitigate the environmental impact of vascular care, which remains an under-investigated field. The innovative potential of e-Health operates in a complex environment with finite resources. Because the expansion of digital health will increase demand for devices, adding to the environmental burden of electronics and energy use, the sustainability of e-Health technology is of crucial importance, especially in the context of the increasing prevalence of cardiovascular diseases. This review discusses the environmental impact of vascular surgical care and e-Health innovation, examines the potential of e-Health technology to mitigate greenhouse gas emissions generated by the health care sector, and provides leads for research promoting e-Health technology sustainability. A multifaceted approach, including ethical design, validated eco-audit methodologies and reporting standards, technological refinement, reuse and recycling of electronic and medical devices, and effective policies, is required to provide a sustainable and optimal level of care to vascular patients.
Affiliation(s)
- Gabrielle Stevenin
- Department of Vascular Surgery, Henri Mondor University Hospital, 1 rue Gustave Eiffel, 94000 Créteil, France; Université Paris-Est, Créteil, France
- Jennifer Canonge
- Department of Vascular Surgery, Henri Mondor University Hospital, 1 rue Gustave Eiffel, 94000 Créteil, France; Université Paris-Est, Créteil, France
- Marianne Gervais
- Université Paris-Est, Créteil, France; Institut Mondor de Recherche Biomédicale, U955 INSERM, Créteil, France
- Antonio Fiore
- Université Paris-Est, Créteil, France; Department of Cardiac Surgery, Henri Mondor University Hospital, Créteil, France
- Fabien Lareyre
- Department of Vascular Surgery, Hospital of Antibes Juan-les-Pins, Antibes, France; Université Côte d'Azur, Le Centre National de la Recherche Scientifique, UMR7370, LP2M, Nice, France; Fédération Hospitalo-Universitaire Plan&Go, Nice, France
- Joseph Touma
- Department of Vascular Surgery, Henri Mondor University Hospital, 1 rue Gustave Eiffel, 94000 Créteil, France; Université Paris-Est, Créteil, France
- Pascal Desgranges
- Department of Vascular Surgery, Henri Mondor University Hospital, 1 rue Gustave Eiffel, 94000 Créteil, France; Université Paris-Est, Créteil, France
- Juliette Raffort
- Université Côte d'Azur, Le Centre National de la Recherche Scientifique, UMR7370, LP2M, Nice, France; Fédération Hospitalo-Universitaire Plan&Go, Nice, France; Clinical Chemistry Laboratory, University Hospital of Nice, Nice, France; Institute 3IA Côte d'Azur, Université Côte d'Azur, Sophia Antipolis, France
- Jean Sénémaud
- Department of Vascular Surgery, Henri Mondor University Hospital, 1 rue Gustave Eiffel, 94000 Créteil, France; Université Paris-Est, Créteil, France; Laboratory for Vascular Translational Science, U1148 INSERM, Paris, France
4
Coastaliou Q, Webster C, Bicknell C, Pouncey A, Ducasse E, Caradu C. Artificial Intelligence With Deep Learning Enables Assessment of Aortic Aneurysm Diameter and Volume Through Different Computed Tomography Phases. Eur J Vasc Endovasc Surg 2024; 68:408-409. PMID: 38614229. DOI: 10.1016/j.ejvs.2024.04.004.
Affiliation(s)
- Quentin Coastaliou
- Bordeaux University Hospital, Department of Vascular Surgery, Bordeaux, France
- Claire Webster
- Imperial College London, Department of Vascular Surgery, London, UK
- Colin Bicknell
- Imperial College London, Department of Vascular Surgery, London, UK
- Anna Pouncey
- Imperial College London, Department of Vascular Surgery, London, UK
- Eric Ducasse
- Bordeaux University Hospital, Department of Vascular Surgery, Bordeaux, France
- Caroline Caradu
- Bordeaux University Hospital, Department of Vascular Surgery, Bordeaux, France
5
Chen B, Liu Z, Lu J, Li Z, Kuang K, Yang J, Wang Z, Sun Y, Du B, Qi L, Li M. Deep learning parametric response mapping from inspiratory chest CT scans: a new approach for small airway disease screening. Respir Res 2023; 24:299. PMID: 38017476; PMCID: PMC10683250. DOI: 10.1186/s12931-023-02611-2.
Abstract
OBJECTIVES Parametric response mapping (PRM) enables voxel-level evaluation of small airway disease (SAD) but requires both inspiratory and expiratory chest CT scans. We hypothesized that deep learning can predict PRM from inspiratory chest CT scans alone and thereby evaluate SAD in individuals with normal spirometry. METHODS We included 537 participants with normal spirometry and a history of smoking or secondhand smoke exposure, divided into training, tuning, and test sets. A cascaded generative adversarial network generated expiratory CT from inspiratory CT, followed by a UNet-like network that predicted PRM from the real inspiratory CT and the generated expiratory CT. Prediction performance was evaluated using the structural similarity index (SSIM), root mean square error (RMSE), and Dice coefficients. Pearson correlation assessed agreement between predicted and ground truth PRM. ROC curves evaluated the performance of predicted PRMfSAD (the volume percentage of functional small airway disease, fSAD) in stratifying SAD. RESULTS Our method generated expiratory CT of good quality (SSIM 0.86, RMSE 80.13 HU). The Dice coefficients of the predicted PRM for normal lung, emphysema, and fSAD regions were 0.85, 0.63, and 0.51, respectively. The volume percentages of emphysema and fSAD correlated well between predicted and ground truth PRM (|r| = 0.97 and 0.64, respectively; p < 0.05). Predicted PRMfSAD stratified SAD well against ground truth PRMfSAD at thresholds of 15%, 20%, and 25% (AUCs 0.84, 0.78, and 0.84, respectively; p < 0.001). CONCLUSION Our deep learning method generates high-quality PRM from inspiratory chest CT and effectively stratifies SAD in individuals with normal spirometry.
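The three evaluation metrics named in this abstract (SSIM, RMSE, Dice) can be sketched in a few lines. The snippet below is an illustrative toy on tiny arrays, not the study's code: the SSIM here is a simplified single-window variant (the study presumably uses the standard local-window formulation), and the 3x3 PRM label maps are invented for demonstration.

```python
import numpy as np

def rmse(a, b):
    # Root mean square error between two images (e.g., in HU).
    return float(np.sqrt(np.mean((a - b) ** 2)))

def global_ssim(a, b, data_range=2000.0):
    # Simplified single-window SSIM; constants follow the usual
    # c1 = (0.01*L)^2, c2 = (0.03*L)^2 convention.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

def dice(pred_mask, gt_mask):
    # Dice overlap of two binary masks (one PRM class at a time).
    inter = np.logical_and(pred_mask, gt_mask).sum()
    denom = pred_mask.sum() + gt_mask.sum()
    return float(2 * inter / denom) if denom else 1.0

# Toy PRM label maps: 0 = normal lung, 1 = fSAD, 2 = emphysema
gt   = np.array([[0, 1, 1], [2, 2, 0], [0, 1, 0]])
pred = np.array([[0, 1, 0], [2, 2, 0], [0, 1, 1]])
print(dice(pred == 1, gt == 1))  # per-class Dice for the fSAD label
```

Per-class Dice (one mask per PRM label, as above) is how a single overlap score is obtained separately for normal lung, emphysema, and fSAD regions.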
Affiliation(s)
- Bin Chen
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, 221, Yanan West Road, Jingan Temple Street, Jingan District, Shanghai, China
- Zhang Guozhen Small Pulmonary Nodules Diagnosis and Treatment Center, Shanghai, China
- Ziyi Liu
- School of Computer Science, Wuhan University, LuoJiaShan, WuChang District, Wuhan, Hubei, China
- Artificial Intelligence Institute of Wuhan University, Wuhan, Hubei, China
- Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan, Hubei, China
- Jinjuan Lu
- Department of Radiology, Shanghai Geriatric Medical Center, Shanghai, China
- Zhihao Li
- School of Computer Science, Wuhan University, LuoJiaShan, WuChang District, Wuhan, Hubei, China
- Artificial Intelligence Institute of Wuhan University, Wuhan, Hubei, China
- Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan, Hubei, China
- Kaiming Kuang
- Dianei Technology, Shanghai, China
- University of California San Diego, La Jolla, USA
- Jiancheng Yang
- Dianei Technology, Shanghai, China
- Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Zengmao Wang
- School of Computer Science, Wuhan University, LuoJiaShan, WuChang District, Wuhan, Hubei, China
- Artificial Intelligence Institute of Wuhan University, Wuhan, Hubei, China
- Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan, Hubei, China
- Yingli Sun
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, 221, Yanan West Road, Jingan Temple Street, Jingan District, Shanghai, China
- Zhang Guozhen Small Pulmonary Nodules Diagnosis and Treatment Center, Shanghai, China
- Bo Du
- School of Computer Science, Wuhan University, LuoJiaShan, WuChang District, Wuhan, Hubei, China
- Artificial Intelligence Institute of Wuhan University, Wuhan, Hubei, China
- Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan, Hubei, China
- Lin Qi
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, 221, Yanan West Road, Jingan Temple Street, Jingan District, Shanghai, China
- Zhang Guozhen Small Pulmonary Nodules Diagnosis and Treatment Center, Shanghai, China
- Ming Li
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, 221, Yanan West Road, Jingan Temple Street, Jingan District, Shanghai, China
- Zhang Guozhen Small Pulmonary Nodules Diagnosis and Treatment Center, Shanghai, China
6
Zhang R, Turkbey B. Deep Learning Unveils Hidden Angiography in Noncontrast CT Scans. Radiology 2023; 309:e232784. PMID: 37962504. DOI: 10.1148/radiol.232784.
Affiliation(s)
- Ran Zhang
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.Z.); and Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr, Room B3B85, Bethesda, MD 20892 (B.T.)
- Baris Turkbey
- From the Department of Radiology, University of Wisconsin-Madison, Madison, Wis (R.Z.); and Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr, Room B3B85, Bethesda, MD 20892 (B.T.)
7
Azarfar G, Ko SB, Adams SJ, Babyn PS. Applications of deep learning to reduce the need for iodinated contrast media for CT imaging: a systematic review. Int J Comput Assist Radiol Surg 2023; 18:1903-1914. PMID: 36947337. DOI: 10.1007/s11548-023-02862-w.
Abstract
PURPOSE The use of iodinated contrast media (ICM) can improve the sensitivity and specificity of computed tomography (CT) for many clinical indications. However, the adverse effects of ICM administration include renal injury, life-threatening allergic-like reactions, and environmental contamination. Deep learning (DL) models can generate full-dose ICM CT images from non-contrast or low-dose ICM acquisitions, or generate non-contrast CT from full-dose ICM CT. Eliminating the need for both contrast-enhanced and non-enhanced imaging, or reducing the amount of contrast required while maintaining diagnostic capability, may reduce overall patient risk, improve efficiency, and minimize costs. We reviewed the current capabilities of DL to reduce the need for contrast administration in CT. METHODS We conducted a systematic review of articles using DL to reduce the amount of ICM required in CT, searching MEDLINE, Embase, Compendex, Inspec, and Scopus for papers published from 2016 to 2022. Articles were classified by DL model and degree of ICM reduction. RESULTS Eighteen papers met the inclusion criteria. Of these, ten generated synthetic full-dose (100%) ICM CT from real non-contrast CT, while four augmented low-dose to full-dose ICM CT. Three used DL to create synthetic non-contrast CT from real full-dose ICM CT, and one translated between full-dose ICM and non-contrast CT in both directions. The DL models were commonly generative adversarial networks, trained and tested on paired contrast-enhanced and non-contrast or low-dose ICM CT scans. Image quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) were frequently used to compare synthetic with real CT images. CONCLUSION DL-generated contrast-enhanced or non-contrast CT may assist in diagnosis and radiation therapy planning; however, further work is needed to optimize protocols that reduce or eliminate ICM for specific pathologies, along with dedicated assessment of the clinical utility of these synthetic images.
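As a minimal sketch of one image quality metric this review mentions, PSNR compares a synthetic image against its real reference on a decibel scale; the toy arrays and the `data_range` value below are illustrative assumptions, not taken from any of the reviewed papers.

```python
import numpy as np

def psnr(reference, synthetic, data_range=4096.0):
    # Peak signal-to-noise ratio in dB; data_range approximates the
    # span of CT values (here an assumed 12-bit / ~4096 HU window).
    mse = np.mean((reference.astype(float) - synthetic.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10 * np.log10(data_range ** 2 / mse))

# Toy 2x2 "CT patches" differing by 1 HU per voxel (MSE = 1).
ref = np.array([[100.0, 102.0], [98.0, 101.0]])
syn = np.array([[101.0, 101.0], [99.0, 100.0]])
print(round(psnr(ref, syn), 1))  # → 72.2
```

Higher PSNR means the synthetic contrast-enhanced (or non-contrast) image deviates less from the real acquisition; SSIM complements it by weighting local structure rather than raw voxel error.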
Affiliation(s)
- Ghazal Azarfar
- Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK, Canada.
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada.
- Seok-Bum Ko
- Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK, Canada
- Scott J Adams
- Department of Radiology, Stanford University, Stanford, CA, USA
- Paul S Babyn
- Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK, Canada
8
Pasquini L, Napolitano A, Pignatelli M, Tagliente E, Parrillo C, Nasta F, Romano A, Bozzao A, Di Napoli A. Synthetic Post-Contrast Imaging through Artificial Intelligence: Clinical Applications of Virtual and Augmented Contrast Media. Pharmaceutics 2022; 14:2378. PMID: 36365197; PMCID: PMC9695136. DOI: 10.3390/pharmaceutics14112378.
Abstract
Contrast media are widely used in biomedical imaging, owing to their value in the diagnosis of numerous disorders. However, the risk of adverse reactions, concern about potential damage to sensitive organs, and the recently described brain deposition of gadolinium salts limit the use of contrast media in clinical practice. In recent years, the application of artificial intelligence (AI) techniques to biomedical imaging has led to the development of 'virtual' and 'augmented' contrast. The idea behind these applications is to generate synthetic post-contrast images through AI computational modeling, starting from the information available in other images acquired during the same scan. In these AI models, non-contrast images (virtual contrast) or low-dose post-contrast images (augmented contrast) are used as input to generate synthetic post-contrast images, which are often indistinguishable from the native ones. In this review, we discuss the most recent advances in AI applications for synthetic contrast media in biomedical imaging.
Affiliation(s)
- Luca Pasquini
- Neuroradiology Unit, Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Antonio Napolitano
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Matteo Pignatelli
- Radiology Department, Castelli Hospital, Via Nettunense Km 11.5, 00040 Ariccia, Italy
- Emanuela Tagliente
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Chiara Parrillo
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Francesco Nasta
- Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio, 4, 00165 Rome, Italy
- Andrea Romano
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Alessandro Bozzao
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Alberto Di Napoli
- Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Neuroimaging Lab, IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
9
Chandrashekar A, Handa A, Ward J, Grau V, Lee R. A deep learning pipeline to simulate fluorodeoxyglucose (FDG) uptake in head and neck cancers using non-contrast CT images without the administration of radioactive tracer. Insights Imaging 2022; 13:45. PMID: 35286501; PMCID: PMC8921434. DOI: 10.1186/s13244-022-01161-3.
Abstract
Objectives
Positron emission tomography (PET) imaging is a costly tracer-based imaging modality used to visualise abnormal metabolic activity for the management of malignancies. The objective of this study is to demonstrate that non-contrast CTs alone can be used to differentiate regions with different Fluorodeoxyglucose (FDG) uptake and simulate PET images to guide clinical management.
Methods
Paired FDG-PET and CT images (n = 298 patients) with diagnosed head and neck squamous cell carcinoma (HNSCC) were obtained from The Cancer Imaging Archive. Random forest (RF) classification of CT-derived radiomic features was used to differentiate metabolically active (tumour) and inactive tissues (e.g., thyroid tissue). Subsequently, a deep learning generative adversarial network (GAN) was trained for this CT-to-PET transformation task without tracer injection. The simulated PET images were evaluated for technical accuracy (PERCIST v.1 criteria) and their ability to predict clinical outcome [(1) locoregional recurrence, (2) distant metastasis, and (3) patient survival].
Results
From 298 patients, 683 hot spots of elevated FDG uptake (elevated SUV, 6.03 ± 1.71) were identified. RF models of intensity-based CT-derived radiomic features were able to differentiate regions of negligible, low and elevated FDG uptake within and surrounding the tumour. Using the GAN-simulated PET image alone, we were able to predict clinical outcome to the same accuracy as that achieved using FDG-PET images.
Conclusion
This pipeline demonstrates a deep learning methodology to simulate PET images from CT images in HNSCC without the use of radioactive tracer. The same pipeline can be applied to other pathologies that require PET imaging.
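The pipeline's first stage classifies "intensity-based CT-derived radiomic features". A hedged sketch of what such first-order features look like is below; the feature names, bin count, and HU values are illustrative assumptions, not the study's actual feature set, and a random forest would then be trained on vectors like these.

```python
import numpy as np

def intensity_features(roi_hu):
    # First-order (intensity histogram) radiomic features of the kind
    # that can feed a random forest classifier. Names are illustrative.
    roi = roi_hu.astype(float).ravel()
    hist, _ = np.histogram(roi, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before log
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "p10": float(np.percentile(roi, 10)),
        "p90": float(np.percentile(roi, 90)),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
    }

# Hypothetical soft-tissue ROI: ~60 HU mean, 15 HU spread.
rng = np.random.default_rng(0)
tumour_roi = rng.normal(60, 15, size=(20, 20))
feats = intensity_features(tumour_roi)
print(sorted(feats))
```

Each ROI becomes one fixed-length feature vector, so regions of negligible, low, and elevated FDG uptake can be separated by a standard classifier before the GAN stage synthesizes the PET image itself.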
10
Yi Y, Mao L, Wang C, Guo Y, Luo X, Jia D, Lei Y, Pan J, Li J, Li S, Li XL, Jin Z, Wang Y. Advanced Warning of Aortic Dissection on Non-Contrast CT: The Combination of Deep Learning and Morphological Characteristics. Front Cardiovasc Med 2022; 8:762958. PMID: 35071345; PMCID: PMC8767113. DOI: 10.3389/fcvm.2021.762958.
Abstract
Background: Identification of aortic dissection (AD) at baseline plays a crucial role in clinical practice. Non-contrast CT scans are widely available, convenient, and easy to perform, but detection of AD on non-contrast CT by radiologists currently lacks sensitivity and remains suboptimal. Methods: A total of 452 patients who underwent aortic CT angiography (CTA) were enrolled retrospectively from two medical centers in China to form the internal cohort (341 patients: 139 with AD, 202 without AD) and the external testing cohort (111 patients: 46 with AD, 65 without AD). The internal cohort was divided into the training cohort (n = 238), validation cohort (n = 35), and internal testing cohort (n = 68). Morphological characteristics were extracted from the aortic segmentation. A deep-integrated model based on the Gaussian Naive Bayes algorithm was built to differentiate AD from non-AD using the combination of a three-dimensional (3D) deep-learning model score and the morphological characteristics. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were used to evaluate model performance. The proposed model was also compared with the subjective assessment of radiologists. Results: After combining all the morphological characteristics, the proposed deep-integrated model significantly outperformed the 3D deep-learning model alone (AUC: 0.948 vs. 0.803 in the internal testing cohort and 0.969 vs. 0.814 in the external testing cohort; both p < 0.05). The accuracy, sensitivity, and specificity of the model reached 0.897, 0.862, and 0.923 in the internal testing cohort and 0.730, 0.978, and 0.554 in the external testing cohort, respectively. Accuracy for AD detection did not differ significantly between the model and the radiologists (p > 0.05). Conclusion: The proposed model performed well for AD detection on non-contrast CT scans, supporting earlier diagnosis and prompt treatment.
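The integration step this abstract describes (a Gaussian Naive Bayes model over the deep-learning score plus morphological characteristics) can be sketched in plain numpy. The two features, their values, and the tiny training set below are synthetic illustrations; the study's actual features, cohort, and fitted model are not reproduced here.

```python
import numpy as np

class TinyGaussianNB:
    # Minimal Gaussian Naive Bayes: per-class feature means/variances
    # plus class priors, maximizing log P(c) + sum_i log N(x_i; mu, var).
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None] +
                     (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + np.log(self.prior)[None], axis=1)]

# Columns (illustrative): [3D deep-learning model score, max aortic diameter in mm]
X = np.array([[0.90, 45.0], [0.80, 50.0], [0.20, 30.0],
              [0.10, 28.0], [0.85, 48.0], [0.15, 29.0]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = aortic dissection, 0 = non-AD
preds = TinyGaussianNB().fit(X, y).predict(np.array([[0.88, 47.0], [0.12, 31.0]]))
print(preds)  # → [1 0]
```

Because Naive Bayes treats each input independently, the deep-learning score and each morphological characteristic contribute additively in log-probability, which is one reason it is a convenient way to fuse heterogeneous features.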
Affiliation(s)
- Yan Yi
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Li Mao
- AI Lab, Deepwise Healthcare, Beijing, China
- Cheng Wang
- AI Lab, Deepwise Healthcare, Beijing, China
- Yubo Guo
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Xiao Luo
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, China
- Yi Lei
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, China
- Judong Pan
- Department of Radiology and Biomedical Imaging, University of California San Francisco (UCSF), San Francisco, CA, United States
- Jiayue Li
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, United States
- Shufang Li
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
- Xiu-Li Li
- AI Lab, Deepwise Healthcare, Beijing, China
- Zhengyu Jin
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Yining Wang
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Correspondence: Yining Wang