1. Maino C, Vernuccio F, Cannella R, Franco PN, Giannini V, Dezio M, Pisani AR, Blandino AA, Faletti R, De Bernardi E, Ippolito D, Gatti M, Inchingolo R. Radiomics and liver: Where we are and where we are headed? Eur J Radiol 2024; 171:111297. [PMID: 38237517] [DOI: 10.1016/j.ejrad.2024.111297]
Abstract
Hepatic diffuse conditions and focal liver lesions are two of the most common scenarios encountered in everyday radiological practice. Thanks to advances in technology, radiology has gained a central role in the management of patients with liver disease, owing especially to its high sensitivity and specificity. Since the introduction of computed tomography (CT) and magnetic resonance imaging (MRI), radiology has been considered the non-invasive reference modality for assessing and characterizing liver pathologies. In recent years, clinical practice has moved toward a quantitative approach, to evaluate and manage each patient in a more tailored way. In this setting, radiomics has gained an important role in helping radiologists and clinicians characterize hepatic pathological entities, manage patients, and determine prognosis. Radiomics can extract a large amount of data from radiological images, which can be associated with different liver scenarios. Thanks to its wide applicability in ultrasonography (US), CT, and MRI, different studies have focused on specific aspects of liver disease. Although broadly applied, radiomics has both advantages and pitfalls. This review aims to summarize the most important and robust studies published in the field of liver radiomics, underlining their main limitations and issues, and what they can add to current and future clinical practice and literature.
Affiliation(s)
- Cesare Maino: Department of Radiology, Fondazione IRCCS San Gerardo dei Tintori, Monza 20900, Italy
- Federica Vernuccio: Institute of Radiology, University Hospital of Padova, Padova 35128, Italy
- Roberto Cannella: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo 90127, Italy
- Paolo Niccolò Franco: Department of Radiology, Fondazione IRCCS San Gerardo dei Tintori, Monza 20900, Italy
- Valentina Giannini: Department of Surgical Sciences, University of Turin, Turin 10126, Italy
- Michele Dezio: Department of Radiology, Miulli Hospital, Acquaviva delle Fonti 70021, Bari, Italy
- Antonio Rosario Pisani: Nuclear Medicine Unit, Interdisciplinary Department of Medicine, University of Bari, Bari 70121, Italy
- Antonino Andrea Blandino: Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, Palermo 90127, Italy
- Riccardo Faletti: Department of Surgical Sciences, University of Turin, Turin 10126, Italy
- Elisabetta De Bernardi: Bicocca Bioinformatics Biostatistics and Bioimaging Centre - B4, University of Milano Bicocca, Milano 20100, Italy; School of Medicine, University of Milano Bicocca, Milano 20100, Italy
- Davide Ippolito: Department of Radiology, Fondazione IRCCS San Gerardo dei Tintori, Monza 20900, Italy; School of Medicine, University of Milano Bicocca, Milano 20100, Italy
- Marco Gatti: Department of Surgical Sciences, University of Turin, Turin 10126, Italy
- Riccardo Inchingolo: Unit of Interventional Radiology, F. Miulli Hospital, Acquaviva delle Fonti 70021, Italy
2. Wendler T, Kreissl MC, Schemmer B, Rogasch JMM, De Benetti F. Artificial Intelligence-powered automatic volume calculation in medical images - available tools, performance and challenges for nuclear medicine. Nuklearmedizin 2023; 62:343-353. [PMID: 37995707] [PMCID: PMC10667065] [DOI: 10.1055/a-2200-2145]
Abstract
Volumetry is crucial in oncology and endocrinology for diagnosis, treatment planning, and evaluation of response to therapy in several diseases. The integration of Artificial Intelligence (AI) and Deep Learning (DL) has significantly accelerated the automation of volumetric calculations, enhancing accuracy and reducing variability and labor. In this review, we show that a high correlation has been observed between Machine Learning (ML) methods and expert assessments in tumor volumetry; yet, it is recognized as more challenging than organ volumetry. Liver volumetry has shown progressive gains in accuracy with a decrease in error. If a relative error below 10% is acceptable, ML-based liver volumetry can be considered reliable for standardized imaging protocols when used in patients without major anomalies. Similarly, ML-supported automatic kidney volumetry has shown consistency and reliability in volumetric calculations. In contrast, AI-supported thyroid volumetry has not been extensively developed, despite initial work in 3D ultrasound showing promising results in terms of accuracy and reproducibility. Despite the advancements presented in the reviewed literature, the lack of standardization limits the generalizability of ML methods across diverse scenarios. The domain gap, i.e., the difference in probability distribution between training and inference data, is of paramount importance before clinical deployment of AI, to maintain accuracy and reliability in patient care. The increasing availability of improved segmentation tools is expected to further incorporate AI methods into routine workflows, where volumetry will play a more prominent role in radionuclide therapy planning and quantitative follow-up of disease evolution.
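The 10% relative-error criterion mentioned in the abstract can be made concrete with a short sketch; the function name and the example volumes below are illustrative, not taken from the review:

```python
def relative_volume_error(pred_ml: float, ref_ml: float) -> float:
    """Absolute relative error of a predicted volume against a reference volume."""
    return abs(pred_ml - ref_ml) / ref_ml

# An ML-predicted liver volume of 1450 ml against a 1500 ml reference
# gives a relative error of about 3.3%, well below a 10% threshold.
print(f"{relative_volume_error(1450.0, 1500.0):.1%}")  # → 3.3%
```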
Affiliation(s)
- Thomas Wendler: Clinical Computational Medical Imaging Research, Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Augsburg, Germany; Institute of Digital Medicine, Universitätsklinikum Augsburg, Germany; Computer-Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Julian Manuel Michael Rogasch: Department of Nuclear Medicine, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
- Francesca De Benetti: Computer-Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
3. da Silva HEC, Santos GNM, Leite AF, Mesquita CRM, Figueiredo PTDS, Stefani CM, de Melo NS. The use of artificial intelligence tools in cancer detection compared to the traditional diagnostic imaging methods: An overview of the systematic reviews. PLoS One 2023; 18:e0292063. [PMID: 37796946] [PMCID: PMC10553229] [DOI: 10.1371/journal.pone.0292063]
Abstract
BACKGROUND AND PURPOSE The aim of this overview article is to analyze the accuracy of Artificial Intelligence (AI) techniques in the identification and diagnosis of malignant tumors in adult patients, in comparison to conventional medical imaging diagnostic modalities. DATA SOURCES The PIRDs acronym was used and a comprehensive literature search was conducted on PubMed, Cochrane, Scopus, Web of Science, LILACS, Embase, Scielo, EBSCOhost, and grey literature through ProQuest, Google Scholar, and JSTOR for systematic reviews of AI as a diagnostic model and/or detection tool for any cancer type in adult patients, compared to the traditional diagnostic radiographic imaging model. There were no limits on publishing status, publication time, or language. Pairs of reviewers worked separately on study selection and risk-of-bias evaluation. RESULTS In total, 382 records were retrieved from the databases, 364 remained after removing duplicates, 32 satisfied the full-text reading criterion, and nine papers were included in the qualitative synthesis. Although there was heterogeneity in methodological aspects, patient populations, and techniques used, the studies found that several AI approaches are promising in terms of specificity, sensitivity, and diagnostic accuracy in the detection and diagnosis of malignant tumors. When compared to other machine learning algorithms, the Support Vector Machine (SVM) method performed better in cancer detection and diagnosis. Computer-assisted detection (CAD) has shown promise in aiding cancer detection when compared to the traditional method of diagnosis.
CONCLUSIONS The detection and diagnosis of malignant tumors with the help of AI appears feasible and accurate with different technologies, such as CAD systems, deep and machine learning algorithms, and radiomic analysis, when compared with the traditional model, although these technologies are not capable of replacing the professional radiologist in the analysis of medical images. Although there are limitations regarding generalization to all types of cancer, these AI tools might aid professionals, serving as auxiliary and teaching tools, especially for less trained professionals. Therefore, further longitudinal studies with longer follow-up duration are required for a better understanding of the clinical application of these artificial intelligence systems. TRIAL REGISTRATION Systematic review registration, PROSPERO number: CRD42022307403.
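As a reminder of the metrics the reviewed studies report, sensitivity, specificity, and diagnostic accuracy follow directly from a 2x2 confusion matrix; the counts in this sketch are hypothetical, not taken from the overview:

```python
def sens_spec_acc(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true positive rate
    specificity = tn / (tn + fp)                # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall agreement
    return sensitivity, specificity, accuracy

# Hypothetical detection study: 90 TP, 15 FP, 10 FN, 85 TN.
print(sens_spec_acc(90, 15, 10, 85))  # → (0.9, 0.85, 0.875)
```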
Affiliation(s)
- André Ferreira Leite: Faculty of Health Science, Department of Dentistry, Brasilia University, Brasilia, Brazil
- Cristine Miron Stefani: Faculty of Health Science, Department of Dentistry, Brasilia University, Brasilia, Brazil
- Nilce Santos de Melo: Faculty of Health Science, Department of Dentistry, Brasilia University, Brasilia, Brazil
4. Kawula M, Vagni M, Cusumano D, Boldrini L, Placidi L, Corradini S, Belka C, Landry G, Kurz C. Prior knowledge based deep learning auto-segmentation in magnetic resonance imaging-guided radiotherapy of prostate cancer. Phys Imaging Radiat Oncol 2023; 28:100498. [PMID: 37928618] [PMCID: PMC10624570] [DOI: 10.1016/j.phro.2023.100498]
Abstract
Background and purpose Automation is desirable for organ segmentation in radiotherapy. This study compared deep learning methods for auto-segmentation of organs-at-risk (OARs) and clinical target volume (CTV) in prostate cancer patients undergoing fractionated magnetic resonance (MR)-guided adaptive radiation therapy. Models predicting dense displacement fields (DDFMs) between planning and fraction images were compared to patient-specific (PSM) and baseline (BM) segmentation models. Materials and methods A dataset of 92 patients with planning and fraction MR images (MRIs) from two institutions was used. DDFMs were trained to predict dense displacement fields (DDFs) between the planning and fraction images, which were subsequently used to propagate the planning contours of the bladder, rectum, and CTV to the daily MRI. Training was performed either with true planning-fraction image pairs or with planning images and their counterparts deformed by known DDFs. The BMs were trained on 53 planning images, while the PSMs were generated by fine-tuning the BMs on the planning image of a given single patient. Evaluation included the Dice similarity coefficient (DSC), and the average (HDavg) and 95th percentile (HD95) Hausdorff distance (HD). Results The DDFMs, with DSCs for bladder/rectum of 0.76/0.76, performed worse than PSMs (0.91/0.90) and BMs (0.89/0.88). The same trend was observed for HDs. For CTV, DDFMs and PSMs performed similarly, yielding DSCs of 0.87 and 0.84, respectively. Conclusions DDFMs were found suitable for CTV delineation after rigid alignment. However, for OARs they were outperformed by PSMs, as they predicted only limited deformations even in the presence of substantial anatomical changes.
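The Dice similarity coefficient used throughout these comparisons is simple to compute from two binary masks; a minimal NumPy sketch with toy masks (not the study's data):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True      # 16-voxel "organ"
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True  # prediction shifted one row
print(dice(gt, pred))  # 12 overlapping voxels → 2*12/32 = 0.75
```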
Affiliation(s)
- Maria Kawula: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Marica Vagni: Fondazione Policlinico Universitario “Agostino Gemelli” IRCCS, Rome, Italy
- Davide Cusumano: Fondazione Policlinico Universitario “Agostino Gemelli” IRCCS, Rome, Italy; Mater Olbia Hospital, Olbia (SS), Italy
- Luca Boldrini: Fondazione Policlinico Universitario “Agostino Gemelli” IRCCS, Rome, Italy
- Lorenzo Placidi: Fondazione Policlinico Universitario “Agostino Gemelli” IRCCS, Rome, Italy
- Stefanie Corradini: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Claus Belka: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany; German Cancer Consortium (DKTK), Partner Site Munich, A Partnership Between DKFZ and LMU University Hospital Munich, Germany; Bavarian Cancer Research Center (BZKF), Munich, Germany
- Guillaume Landry: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Christopher Kurz: Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
5. Berbís MA, Paulano Godino F, Royuela del Val J, Alcalá Mata L, Luna A. Clinical impact of artificial intelligence-based solutions on imaging of the pancreas and liver. World J Gastroenterol 2023; 29:1427-1445. [PMID: 36998424] [PMCID: PMC10044858] [DOI: 10.3748/wjg.v29.i9.1427]
Abstract
Artificial intelligence (AI) has experienced substantial progress over the last ten years in many fields of application, including healthcare. In hepatology and pancreatology, major attention to date has been paid to its application to the assisted or even automated interpretation of radiological images, where AI can generate accurate and reproducible imaging diagnoses, reducing the physicians’ workload. AI can provide automatic or semi-automatic segmentation and registration of the liver and pancreatic glands and lesions. Furthermore, using radiomics, AI can introduce into radiological reports new quantitative information that is not visible to the human eye. AI has been applied in the detection and characterization of focal lesions and diffuse diseases of the liver and pancreas, such as neoplasms, chronic hepatic disease, or acute or chronic pancreatitis, among others. These solutions have been applied to different imaging techniques commonly used to diagnose liver and pancreatic diseases, such as ultrasound, endoscopic ultrasonography, computerized tomography (CT), magnetic resonance imaging, and positron emission tomography/CT. However, AI is also applied in this context to many other relevant steps involved in a comprehensive clinical scenario to manage a gastroenterological patient. AI can also be applied to choose the most convenient test prescription, to improve image quality or accelerate its acquisition, and to predict patient prognosis and treatment response. In this review, we summarize the current evidence on the application of AI to hepatic and pancreatic radiology, not only in regard to the interpretation of images, but also to all the steps involved in the radiological workflow in a broader sense. Lastly, we discuss the challenges and future directions of the clinical application of AI methods.
Affiliation(s)
- M Alvaro Berbís: Department of Radiology, HT Médica, San Juan de Dios Hospital, Córdoba 14960, Spain; Faculty of Medicine, Autonomous University of Madrid, Madrid 28049, Spain
- Lidia Alcalá Mata: Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
- Antonio Luna: Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
6. Rolfe SM, Whikehart SM, Maga AM. Deep learning enabled multi-organ segmentation of mouse embryos. Biol Open 2023; 12:bio059698. [PMID: 36802342] [PMCID: PMC9990908] [DOI: 10.1242/bio.059698]
Abstract
The International Mouse Phenotyping Consortium (IMPC) has generated a large repository of three-dimensional (3D) imaging data from mouse embryos, providing a rich resource for investigating phenotype/genotype interactions. While the data is freely available, the computing resources and human effort required to segment these images for analysis of individual structures can create a significant hurdle for research. In this paper, we present an open source, deep learning-enabled tool, Mouse Embryo Multi-Organ Segmentation (MEMOS), that estimates a segmentation of 50 anatomical structures with support for manually reviewing, editing, and analyzing the estimated segmentation in a single application. MEMOS is implemented as an extension on the 3D Slicer platform and is designed to be accessible to researchers without coding experience. We validate the performance of MEMOS-generated segmentations through comparison to state-of-the-art atlas-based segmentation and quantification of previously reported anatomical abnormalities in a Cbx4 knockout strain. This article has an associated First Person interview with the first author of the paper.
Affiliation(s)
- S. M. Rolfe: Center for Developmental Biology and Regenerative Medicine, Seattle Children's Research Institute, Seattle, WA 98101, USA
- S. M. Whikehart: Center for Developmental Biology and Regenerative Medicine, Seattle Children's Research Institute, Seattle, WA 98101, USA
- A. M. Maga: Center for Developmental Biology and Regenerative Medicine, Seattle Children's Research Institute, Seattle, WA 98101, USA; Department of Pediatrics, University of Washington, Seattle, WA 98105, USA
7. Xiao G, Tian S, Yu L, Zhou Z, Zeng X. Siamese few-shot network: a novel and efficient network for medical image segmentation. Appl Intell 2023. [DOI: 10.1007/s10489-022-04417-z]
8. Wang S, Pang X, de Keyzer F, Feng Y, Swinnen JV, Yu J, Ni Y. AI-based MRI auto-segmentation of brain tumor in rodents, a multicenter study. Acta Neuropathol Commun 2023; 11:11. [PMID: 36641470] [PMCID: PMC9840251] [DOI: 10.1186/s40478-023-01509-w]
Abstract
Automatic segmentation of rodent brain tumors on magnetic resonance imaging (MRI) may facilitate biomedical research. The current study aims to demonstrate the feasibility of automatic segmentation by artificial intelligence (AI) and the practicability of AI-assisted segmentation. MRI images, including T2WI, T1WI and CE-T1WI, of brain tumors from 57 WAG/Rij rats at KU Leuven and 46 mice from The Cancer Imaging Archive (TCIA) were collected. A 3D U-Net architecture was adopted for segmentation of the tumor-bearing brain and the brain tumor. After training, these models were tested on both datasets after Gaussian noise addition. Reduction of inter-observer disparity by AI-assisted segmentation was also evaluated. The AI model segmented the tumor-bearing brain well for both the Leuven and TCIA datasets, with Dice similarity coefficients (DSCs) of 0.87 and 0.85, respectively. After noise addition, performance remained unchanged when the signal-to-noise ratio (SNR) was higher than two or eight, respectively. For the segmentation of tumor lesions, the AI-based model yielded DSCs of 0.70 and 0.61 for the Leuven and TCIA datasets, respectively. Similarly, performance was uncompromised when the SNR was over two and eight, respectively. AI-assisted segmentation could significantly reduce inter-observer disparities and segmentation time in both rats and mice. Both AI models, for segmenting the brain or tumor lesions, could improve inter-observer agreement and therefore contribute to the standardization of subsequent biomedical studies.
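The robustness test described above, degrading images with Gaussian noise down to a given signal-to-noise ratio, can be sketched as follows; the SNR definition used here (mean signal over noise standard deviation) is an assumption for illustration, not necessarily the study's exact definition:

```python
import numpy as np

def add_gaussian_noise(img: np.ndarray, snr: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise with sigma = mean(img) / snr."""
    rng = np.random.default_rng(seed)
    sigma = float(img.mean()) / snr
    return img + rng.normal(0.0, sigma, img.shape)

slice_ = np.full((64, 64), 100.0)             # synthetic uniform "MRI" slice
noisy = add_gaussian_noise(slice_, snr=2.0)   # at SNR = 2, sigma is half the mean
print(round(float(noisy.std())))              # sample noise std close to 50
```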
Affiliation(s)
- Shuncong Wang: Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
- Xin Pang: Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium; Faculty of Economics and Business, KU Leuven, 3000 Leuven, Belgium
- Frederik de Keyzer: Department of Radiology, University Hospitals Leuven, KU Leuven, Herestraat 49, 3000 Leuven, Belgium
- Yuanbo Feng: Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
- Johan V. Swinnen: Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
- Jie Yu: Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
- Yicheng Ni: Biomedical Group, Campus Gasthuisberg, KU Leuven, 3000 Leuven, Belgium
9. Groendahl AR, Huynh BN, Tomic O, Søvik Å, Dale E, Malinen E, Skogmo HK, Futsaether CM. Automatic gross tumor segmentation of canine head and neck cancer using deep learning and cross-species transfer learning. Front Vet Sci 2023; 10:1143986. [PMID: 37026102] [PMCID: PMC10070749] [DOI: 10.3389/fvets.2023.1143986]
Abstract
Background Radiotherapy (RT) is increasingly being used on dogs with spontaneous head and neck cancer (HNC), which account for a large percentage of veterinary patients treated with RT. Accurate definition of the gross tumor volume (GTV) is a vital part of RT planning, ensuring adequate dose coverage of the tumor while limiting the radiation dose to surrounding tissues. Currently the GTV is contoured manually in medical images, which is a time-consuming and challenging task. Purpose The purpose of this study was to evaluate the applicability of deep learning-based automatic segmentation of the GTV in canine patients with HNC. Materials and methods Contrast-enhanced computed tomography (CT) images and corresponding manual GTV contours of 36 canine HNC patients and 197 human HNC patients were included. A 3D U-Net convolutional neural network (CNN) was trained to automatically segment the GTV in canine patients using two main approaches: (i) training models from scratch based solely on canine CT images, and (ii) using cross-species transfer learning where models were pretrained on CT images of human patients and then fine-tuned on CT images of canine patients. For the canine patients, automatic segmentations were assessed using the Dice similarity coefficient (Dice), the positive predictive value, the true positive rate, and surface distance metrics, calculated from a four-fold cross-validation strategy where each fold was used as a validation set and test set once in independent model runs. Results CNN models trained from scratch on canine data or by using transfer learning obtained mean test set Dice scores of 0.55 and 0.52, respectively, indicating acceptable auto-segmentations, similar to the mean Dice performances reported for CT-based automatic segmentation in human HNC studies. Automatic segmentation of nasal cavity tumors appeared particularly promising, resulting in mean test set Dice scores of 0.69 for both approaches. 
Conclusion In conclusion, deep learning-based automatic segmentation of the GTV using CNN models based on canine data only or a cross-species transfer learning approach shows promise for future application in RT of canine HNC patients.
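The four-fold strategy described above, where each fold serves as the validation set and the test set once across independent runs, can be sketched like this; the exact fold assignment (validation fold following the test fold) is an assumption for illustration:

```python
import numpy as np

def four_fold_splits(n_patients: int):
    """Yield (train, val, test) index arrays; each fold is the test set once."""
    folds = np.array_split(np.arange(n_patients), 4)
    for i in range(4):
        test, val = folds[i], folds[(i + 1) % 4]
        train = np.concatenate(
            [folds[j] for j in range(4) if j not in (i, (i + 1) % 4)]
        )
        yield train, val, test

splits = list(four_fold_splits(36))               # 36 canine patients as in the study
print(len(splits), [len(s[2]) for s in splits])   # 4 runs, 9 test patients each
```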
Affiliation(s)
- Aurora Rosvoll Groendahl: Faculty of Science and Technology, Department of Physics, Norwegian University of Life Sciences, Ås, Norway
- Bao Ngoc Huynh: Faculty of Science and Technology, Department of Physics, Norwegian University of Life Sciences, Ås, Norway
- Oliver Tomic: Faculty of Science and Technology, Department of Data Science, Norwegian University of Life Sciences, Ås, Norway
- Åste Søvik: Faculty of Veterinary Medicine, Department of Companion Animal Clinical Sciences, Norwegian University of Life Sciences, Ås, Norway
- Einar Dale: Department of Oncology, Oslo University Hospital, Oslo, Norway
- Eirik Malinen: Department of Physics, University of Oslo, Oslo, Norway; Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Hege Kippenes Skogmo: Faculty of Veterinary Medicine, Department of Companion Animal Clinical Sciences, Norwegian University of Life Sciences, Ås, Norway
- Cecilia Marie Futsaether (correspondence): Faculty of Science and Technology, Department of Physics, Norwegian University of Life Sciences, Ås, Norway
10. Zhang F, Zheng Y, Wu J, Yang X, Che X. Multi-rater label fusion based on an information bottleneck for fundus image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104108]
11. Liu X, Elbanan MG, Luna A, Haider MA, Smith AD, Sabottke CF, Spieler BM, Turkbey B, Fuentes D, Moawad A, Kamel S, Horvat N, Elsayes KM. Radiomics in Abdominopelvic Solid-Organ Oncologic Imaging: Current Status. AJR Am J Roentgenol 2022; 219:985-995. [PMID: 35766531] [PMCID: PMC10616929] [DOI: 10.2214/ajr.22.27695]
Abstract
Radiomics is the process of extraction of high-throughput quantitative imaging features from medical images. These features represent noninvasive quantitative biomarkers that go beyond the traditional imaging features visible to the human eye. This article first reviews the steps of the radiomics pipeline, including image acquisition, ROI selection and image segmentation, image preprocessing, feature extraction, feature selection, and model development and application. Current evidence for the application of radiomics in abdominopelvic solid-organ cancers is then reviewed. Applications including diagnosis, subtype determination, treatment response assessment, and outcome prediction are explored within the context of hepatobiliary and pancreatic cancer, renal cell carcinoma, prostate cancer, gynecologic cancer, and adrenal masses. This literature review focuses on the strongest available evidence, including systematic reviews, meta-analyses, and large multicenter studies. Limitations of the available literature are highlighted, including marked heterogeneity in radiomics methodology, frequent use of small sample sizes with high risk of overfitting, and lack of prospective design, external validation, and standardized radiomics workflow. Thus, although studies have laid a foundation that supports continued investigation into radiomics models, stronger evidence is needed before clinical adoption.
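The feature-extraction step of the pipeline above can be illustrated with a few first-order features computed inside an ROI. The toy patch and feature subset below are illustrative only; real pipelines (e.g. pyradiomics) compute hundreds of standardized features across several classes:

```python
import numpy as np

def first_order_features(image: np.ndarray, roi: np.ndarray) -> dict:
    """A handful of first-order radiomic features over the ROI voxels."""
    vals = image[roi.astype(bool)].astype(float)
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "min": float(vals.min()),
        "max": float(vals.max()),
        "energy": float((vals ** 2).sum()),  # sum of squared intensities
    }

img = np.arange(16, dtype=float).reshape(4, 4)             # toy "CT" patch
roi = np.zeros((4, 4), dtype=bool); roi[1:3, 1:3] = True   # 2x2 lesion ROI
print(first_order_features(img, roi)["mean"])  # voxels 5, 6, 9, 10 → 7.5
```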
Affiliation(s)
- Xiaoyang Liu: Joint Department of Medical Imaging, Division of Abdominal Imaging, University Health Network, University of Toronto, ON, Canada
- Mohamed G Elbanan: Department of Radiology, Yale New Haven Health, Bridgeport Hospital, Bridgeport, CT
- Masoom A Haider: Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, ON, Canada; Joint Department of Medical Imaging, University Health Network, Sinai Health System and University of Toronto, Toronto, ON, Canada
- Andrew D Smith: Department of Radiology, University of Alabama at Birmingham, Birmingham, AL
- Carl F Sabottke: Department of Medical Imaging, University of Arizona College of Medicine, Tucson, AZ
- Bradley M Spieler: Department of Radiology, University Medical Center, Louisiana State University Health Sciences Center, New Orleans, LA
- Baris Turkbey: Molecular Imaging Program, National Cancer Institute, NIH, Bethesda, MD
- David Fuentes: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX
- Ahmed Moawad: Department of Diagnostic and Interventional Radiology, Mercy Catholic Medical Center, Darby, PA
- Serageldin Kamel: Department of Lymphoma, University of Texas MD Anderson Cancer Center, Houston, TX
- Natally Horvat: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY
- Khaled M Elsayes: Department of Abdominal Imaging, University of Texas MD Anderson Cancer Center, 1400 Pressler St, Houston, TX 77030
12. Koitka S, Gudlin P, Theysohn JM, Oezcelik A, Hoyer DP, Dayangac M, Hosch R, Haubold J, Flaschel N, Nensa F, Malamutmann E. Fully automated preoperative liver volumetry incorporating the anatomical location of the central hepatic vein. Sci Rep 2022; 12:16479. [PMID: 36183002] [PMCID: PMC9526715] [DOI: 10.1038/s41598-022-20778-4]
Abstract
The precise preoperative calculation of functional liver volumes is essential prior to major liver resections, as well as for the evaluation of a suitable donor for living donor liver transplantation. The aim of this study was to develop a fully automated, reproducible, and quantitative 3D volumetry of the liver from standard CT examinations of the abdomen as part of routine clinical imaging. Therefore, an in-house dataset of 100 venous phase CT examinations for training and 30 venous phase ex-house CT examinations with a slice thickness of 5 mm for testing and validation were fully annotated with the right and left liver lobes. Multi-Resolution U-Net 3D neural networks were employed for segmenting these liver regions. The Sørensen-Dice coefficient was greater than 0.9726 ± 0.0058, 0.9639 ± 0.0088, and 0.9223 ± 0.0187, and mean volume differences of 32.12 ± 19.40 ml, 22.68 ± 21.67 ml, and 9.44 ± 27.08 ml compared to the standard of reference (SoR) were achieved for the whole liver, right lobe, and left lobe annotations, respectively. Our results show that fully automated 3D volumetry of the liver on routine CT imaging can provide reproducible, quantitative, fast, and accurate results without the need for an examiner in the preoperative work-up for hepatobiliary surgery and especially for living donor liver transplantation.
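The core of CT volumetry from a predicted segmentation mask is voxel counting scaled by the voxel volume; a minimal sketch with made-up spacing and a synthetic mask, not the study's data:

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Segmented volume in millilitres: voxel count x voxel volume (mm^3) / 1000."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.astype(bool).sum()) * voxel_mm3 / 1000.0

# 5 mm slices with 1 x 1 mm in-plane resolution; 16 000 "liver" voxels.
mask = np.zeros((20, 100, 100), dtype=bool)
mask[5:9, 10:60, 10:90] = True                # 4 * 50 * 80 = 16 000 voxels
print(mask_volume_ml(mask, (5.0, 1.0, 1.0)))  # → 80.0 ml
```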
Affiliation(s)
- Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany; Institute of Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Phillip Gudlin
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
- Jens M Theysohn
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Arzu Oezcelik
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
- Dieter P Hoyer
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
- Murat Dayangac
- Department of Surgery, Medipol University Hospital, Istanbul, Turkey
- René Hosch
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany; Institute of Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Johannes Haubold
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Nils Flaschel
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany; Institute of Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany; Institute of Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Eugen Malamutmann
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
13
Krass S, Lassen-Schmidt B, Schenk A. Computer-assisted image-based risk analysis and planning in lung surgery - a review. Front Surg 2022; 9:920457. [PMID: 36211288 PMCID: PMC9535081 DOI: 10.3389/fsurg.2022.920457] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 09/08/2022] [Indexed: 11/16/2022] Open
Abstract
In this paper, we give an overview of current trends in computer-assisted, image-based methods for risk analysis and planning in lung surgery and present our own developments, with a focus on computed tomography (CT)-based algorithms and applications. The methods combine heuristic, knowledge-based image processing algorithms for segmentation, quantification, and visualization based on CT images of the lung. The impact on lung surgery is discussed with regard to risk assessment, quantitative assessment of resection strategies, and surgical guidance. In perspective, we discuss the role of deep-learning-based AI methods for further improvements.
Affiliation(s)
- Stefan Krass
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Correspondence: Stefan Krass
- Andrea Schenk
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Department of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany
14
Kawakita S, Mandal K, Mou L, Mecwan MM, Zhu Y, Li S, Sharma S, Hernandez AL, Nguyen HT, Maity S, de Barros NR, Nakayama A, Bandaru P, Ahadian S, Kim HJ, Herculano RD, Holler E, Jucaud V, Dokmeci MR, Khademhosseini A. Organ-On-A-Chip Models of the Blood-Brain Barrier: Recent Advances and Future Prospects. Small 2022; 18:e2201401. [PMID: 35978444 PMCID: PMC9529899 DOI: 10.1002/smll.202201401] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 06/22/2022] [Indexed: 05/09/2023]
Abstract
The human brain and central nervous system (CNS) present unique challenges in drug development for neurological diseases. One major obstacle is the blood-brain barrier (BBB), which hampers the effective delivery of therapeutic molecules into the brain while protecting it from blood-borne neurotoxic substances and maintaining CNS homeostasis. For BBB research, traditional in vitro models rely upon Petri dishes or Transwell systems. However, these static models lack essential microenvironmental factors such as shear stress and proper cell-cell interactions. To this end, organ-on-a-chip (OoC) technology has emerged as a new in vitro modeling approach to better recapitulate the highly dynamic in vivo human brain microenvironment, the so-called neurovascular unit (NVU). Such BBB-on-a-chip models have made substantial progress over the last decade, and concurrently there has been increasing interest in modeling various neurological diseases, such as Alzheimer's disease and Parkinson's disease, using OoC technology. In addition, with recent advances in other scientific technologies, several new opportunities to improve the BBB-on-a-chip platform via multidisciplinary approaches are available. In this review, an overview of the NVU and OoC technology is provided, recent progress and applications of BBB-on-a-chip for personalized medicine and drug discovery are discussed, and current challenges and future directions are delineated.
Affiliation(s)
- Satoru Kawakita
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Kalpana Mandal
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Lei Mou
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Department of Clinical Laboratory, Third Affiliated Hospital of Guangzhou Medical University, Guangzhou Medical University, No. 63 Duobao Road, Liwan District, Guangzhou, Guangdong, 510150, P. R. China
- Yangzhi Zhu
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Shaopei Li
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Saurabh Sharma
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Huu Tuan Nguyen
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Surjendu Maity
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Aya Nakayama
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Praveen Bandaru
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Samad Ahadian
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Han-Jun Kim
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Rondinelli Donizetti Herculano
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Department of Bioprocess and Biotechnology Engineering, School of Pharmaceutical Sciences, São Paulo State University (Unesp), Araraquara, SP, 14801-902, Brazil
- Eggehard Holler
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Vadim Jucaud
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
- Ali Khademhosseini
- Terasaki Institute for Biomedical Innovation, Los Angeles, CA, 90064, USA
15
Veiga-Canuto D, Cerdà-Alberich L, Sangüesa Nebot C, Martínez de las Heras B, Pötschger U, Gabelloni M, Carot Sierra JM, Taschner-Mandl S, Düster V, Cañete A, Ladenstein R, Neri E, Martí-Bonmatí L. Comparative Multicentric Evaluation of Inter-Observer Variability in Manual and Automatic Segmentation of Neuroblastic Tumors in Magnetic Resonance Images. Cancers (Basel) 2022; 14:3648. [PMID: 35954314 PMCID: PMC9367307 DOI: 10.3390/cancers14153648] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 07/21/2022] [Accepted: 07/26/2022] [Indexed: 02/05/2023] Open
Abstract
Simple Summary: Tumor segmentation is a key step in oncologic image processing and is a time-consuming process usually performed manually by radiologists. To facilitate it, there is growing interest in applying deep-learning segmentation algorithms. We therefore explored the variability between two observers performing manual segmentation and used the state-of-the-art deep learning architecture nnU-Net to develop a model to detect and segment neuroblastic tumors on MR images. We were able to show that the variability between nnU-Net and manual segmentation is similar to the inter-observer variability in manual segmentation. Furthermore, we compared the time needed to manually segment the tumors from scratch with the time required for the automatic model to segment the same cases, with subsequent human validation and manual adjustment when needed. Abstract: Tumor segmentation is one of the key steps in image processing. The goals of this study were to assess the inter-observer variability in manual segmentation of neuroblastic tumors and to analyze whether the state-of-the-art deep learning architecture nnU-Net can provide a robust solution to detect and segment tumors on MR images. A retrospective multicenter study of 132 patients with neuroblastic tumors was performed. The Dice Similarity Coefficient (DSC) and the Area Under the Receiver Operating Characteristic Curve (AUC ROC) were used to compare segmentation sets. Two further metrics were computed to characterize the direction of the errors: a modified False Positive rate (FPRm) and the False Negative rate (FNR). Two radiologists manually segmented 46 tumors, and a comparative study was performed. nnU-Net was trained and tuned with 106 cases divided into five balanced folds for cross-validation. The five resulting models were used as an ensemble to measure training (n = 106) and validation (n = 26) performance independently. The time needed by the model to automatically segment 20 cases was compared to the time required for manual segmentation. The median DSC for manual segmentation sets was 0.969 (±0.032 IQR); the median DSC for the automatic tool was 0.965 (±0.018 IQR). The automatic segmentation model achieved a better performance regarding the FPRm. MR image segmentation variability is thus similar between radiologists and nnU-Net. The time saved when using the automatic model with subsequent visual validation and manual adjustment corresponds to 92.8%.
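The direction-of-error rates mentioned above can be illustrated on binary masks. The paper's exact FPRm definition is not reproduced here; the sketch below assumes a common modification that normalises false positives by the reference volume instead of the (very large) background, so all values remain on a comparable scale.

```python
import numpy as np

def false_negative_rate(pred: np.ndarray, ref: np.ndarray) -> float:
    """Fraction of reference (tumor) voxels missed by the prediction."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return np.logical_and(~pred, ref).sum() / ref.sum()

def modified_false_positive_rate(pred: np.ndarray, ref: np.ndarray) -> float:
    """False positives normalised by the reference volume (assumed FPRm);
    the classic FPR divides by the huge background and is near zero for
    sparse tumor masks, which makes it uninformative."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return np.logical_and(pred, ~ref).sum() / ref.sum()

# Hypothetical 1D example: reference covers indices 10-59, prediction 20-69.
ref = np.zeros(100, dtype=bool)
ref[10:60] = True
pred = np.zeros(100, dtype=bool)
pred[20:70] = True

fnr = false_negative_rate(pred, ref)         # 10 missed of 50 -> 0.2
fprm = modified_false_positive_rate(pred, ref)  # 10 spurious of 50 -> 0.2
```

Together with the DSC, these two rates show whether a model tends to under- or over-segment.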
Affiliation(s)
- Diana Veiga-Canuto
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain
- Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain
- Correspondence:
- Leonor Cerdà-Alberich
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain
- Cinta Sangüesa Nebot
- Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain
- Blanca Martínez de las Heras
- Unidad de Oncohematología Pediátrica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain
- Ulrike Pötschger
- St. Anna Children’s Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria
- Michela Gabelloni
- Academic Radiology, Department of Translational Research, University of Pisa, Via Roma, 67, 56126 Pisa, Italy
- José Miguel Carot Sierra
- Departamento de Estadística e Investigación Operativa Aplicadas y Calidad, Universitat Politècnica de València, Camí de Vera s/n, 46022 Valencia, Spain
- Sabine Taschner-Mandl
- St. Anna Children’s Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria
- Vanessa Düster
- St. Anna Children’s Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria
- Adela Cañete
- Unidad de Oncohematología Pediátrica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain
- Ruth Ladenstein
- St. Anna Children’s Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria
- Emanuele Neri
- Academic Radiology, Department of Translational Research, University of Pisa, Via Roma, 67, 56126 Pisa, Italy
- Luis Martí-Bonmatí
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain
- Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain
16
Hänsch A, Chlebus G, Meine H, Thielke F, Kock F, Paulus T, Abolmaali N, Schenk A. Improving automatic liver tumor segmentation in late-phase MRI using multi-model training and 3D convolutional neural networks. Sci Rep 2022; 12:12262. [PMID: 35851322 PMCID: PMC9293996 DOI: 10.1038/s41598-022-16388-9] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 07/08/2022] [Indexed: 02/08/2023] Open
Abstract
Automatic liver tumor segmentation can facilitate the planning of liver interventions. For diagnosis of hepatocellular carcinoma, dynamic contrast-enhanced MRI (DCE-MRI) can yield a higher sensitivity than contrast-enhanced CT. However, most studies on automatic liver lesion segmentation have focused on CT. In this study, we present a deep learning-based approach for liver tumor segmentation in the late hepatocellular phase of DCE-MRI, using an anisotropic 3D U-Net architecture and a multi-model training strategy. The 3D architecture improves the segmentation performance compared to a previous study using a 2D U-Net (mean Dice 0.70 vs. 0.65). A further significant improvement is achieved by the multi-model training approach (0.74), which is close to the inter-rater agreement (0.78). A qualitative expert rating of the automatically generated contours confirms the benefit of the multi-model training strategy, with 66% of contours rated as good or very good, compared to only 43% with single-model training. The lesion detection performance, with a mean F1-score of 0.59, is inferior to that of human raters (0.76). Overall, this study shows that correctly detected liver lesions in late-phase DCE-MRI data can be automatically segmented with high accuracy, but the detection, in particular of smaller lesions, can still be improved.
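The lesion-level detection F1-score quoted above combines per-lesion precision and recall. A minimal sketch (the counts below are hypothetical, not from the study):

```python
def detection_f1(tp: int, fp: int, fn: int) -> float:
    """F1-score from lesion-level true-positive, false-positive and
    false-negative counts; algebraically equal to 2*TP / (2*TP + FP + FN)."""
    precision = tp / (tp + fp)  # fraction of detections that match a real lesion
    recall = tp / (tp + fn)     # fraction of real lesions that were found
    return 2 * precision * recall / (precision + recall)

# Hypothetical reading: 10 lesions matched, 4 spurious detections, 7 missed.
score = detection_f1(tp=10, fp=4, fn=7)  # 20/31, about 0.645
```

Because small missed lesions enter only through FN, a model can score a high voxel-wise Dice on detected lesions while its detection F1 stays modest, exactly the pattern reported above.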
Affiliation(s)
- Annika Hänsch
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Grzegorz Chlebus
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Hans Meine
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Medical Image Computing Group, University of Bremen, Bremen, Germany
- Felix Thielke
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Farina Kock
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Tobias Paulus
- Institut für Diagnostische und Interventionelle Radiologie und Nuklearmedizin, Katholisches Klinikum Bochum, Universitätsklinikum der Ruhr Universität Bochum, Bochum, Germany
- Nasreddin Abolmaali
- Institut für Diagnostische und Interventionelle Radiologie und Nuklearmedizin, Katholisches Klinikum Bochum, Universitätsklinikum der Ruhr Universität Bochum, Bochum, Germany
- Andrea Schenk
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
17
Dana J, Venkatasamy A, Saviano A, Lupberger J, Hoshida Y, Vilgrain V, Nahon P, Reinhold C, Gallix B, Baumert TF. Conventional and artificial intelligence-based imaging for biomarker discovery in chronic liver disease. Hepatol Int 2022; 16:509-522. [PMID: 35138551 PMCID: PMC9177703 DOI: 10.1007/s12072-022-10303-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Accepted: 01/17/2022] [Indexed: 12/14/2022]
Abstract
Chronic liver diseases, resulting from chronic injuries of various causes, lead to cirrhosis with life-threatening complications including liver failure, portal hypertension, and hepatocellular carcinoma. A key unmet medical need is robust non-invasive biomarkers to predict patient outcome, stratify patients by risk of disease progression, and monitor response to emerging therapies. Quantitative imaging biomarkers have already been developed, for instance liver elastography for staging fibrosis or proton density fat fraction on magnetic resonance imaging for liver steatosis. Yet major improvements in image acquisition and analysis are still required to accurately characterize the liver parenchyma, monitor its changes, and predict unfavorable evolution across disease progression. Artificial intelligence has the potential to augment the exploitation of massive multi-parametric data to extract valuable information and achieve precision medicine. Machine learning algorithms have been developed to non-invasively assess certain histological characteristics of chronic liver diseases, including fibrosis and steatosis. Although still at an early stage of development, artificial intelligence-based imaging biomarkers provide novel opportunities to predict the risk of progression from early-stage chronic liver disease toward cirrhosis-related complications, with the ultimate perspective of precision medicine. This review provides an overview of emerging quantitative imaging techniques and the application of artificial intelligence to biomarker discovery in chronic liver disease.
Affiliation(s)
- Jérémy Dana
- Institut de Recherche sur les Maladies Virales et Hépatiques, Institut National de la Santé et de la Recherche Médicale (Inserm), U1110, 3 Rue Koeberlé, 67000, Strasbourg, France
- Institut Hospitalo-Universitaire (IHU), Strasbourg, France
- Université de Strasbourg, Strasbourg, France
- Department of Diagnostic Radiology, McGill University, Montreal, Canada
- Aïna Venkatasamy
- Institut Hospitalo-Universitaire (IHU), Strasbourg, France
- Streinth Lab (Stress Response and Innovative Therapies), Inserm UMR_S 1113 IRFAC, Interface Recherche Fondamentale et Appliquée à la Cancérologie, 3 Avenue Moliere, Strasbourg, France
- Department of Radiology Medical Physics, Faculty of Medicine, Medical Center-University of Freiburg, University of Freiburg, Killianstrasse 5a, 79106, Freiburg, Germany
- Antonio Saviano
- Institut de Recherche sur les Maladies Virales et Hépatiques, Institut National de la Santé et de la Recherche Médicale (Inserm), U1110, 3 Rue Koeberlé, 67000, Strasbourg, France
- Université de Strasbourg, Strasbourg, France
- Pôle Hépato-Digestif, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- Joachim Lupberger
- Institut de Recherche sur les Maladies Virales et Hépatiques, Institut National de la Santé et de la Recherche Médicale (Inserm), U1110, 3 Rue Koeberlé, 67000, Strasbourg, France
- Université de Strasbourg, Strasbourg, France
- Yujin Hoshida
- Liver Tumor Translational Research Program, Division of Digestive and Liver Diseases, Department of Internal Medicine, Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, USA
- Valérie Vilgrain
- Radiology Department, Hôpital Beaujon, Université de Paris, CRI, INSERM 1149, APHP. Nord, Paris, France
- Pierre Nahon
- Liver Unit, Assistance Publique-Hôpitaux de Paris (AP-HP), Hôpitaux Universitaires Paris Seine Saint-Denis, Bobigny, France
- Université Sorbonne Paris Nord, 93000, Bobigny, France
- Inserm, UMR-1138 "Functional Genomics of Solid Tumors", Paris, France
- Caroline Reinhold
- Department of Diagnostic Radiology, McGill University, Montreal, Canada
- Augmented Intelligence and Precision Health Laboratory, Research Institute of McGill University Health Centre, Montreal, Canada
- Montreal Imaging Experts Inc., Montreal, Canada
- Benoit Gallix
- Institut Hospitalo-Universitaire (IHU), Strasbourg, France
- Université de Strasbourg, Strasbourg, France
- Department of Diagnostic Radiology, McGill University, Montreal, Canada
- Thomas F Baumert
- Institut de Recherche sur les Maladies Virales et Hépatiques, Institut National de la Santé et de la Recherche Médicale (Inserm), U1110, 3 Rue Koeberlé, 67000, Strasbourg, France
- Université de Strasbourg, Strasbourg, France
- Pôle Hépato-Digestif, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
18
Río Bártulos C, Senk K, Schumacher M, Plath J, Kaiser N, Bade R, Woetzel J, Wiggermann P. Assessment of Liver Function With MRI: Where Do We Stand? Front Med (Lausanne) 2022; 9:839919. [PMID: 35463008 PMCID: PMC9018984 DOI: 10.3389/fmed.2022.839919] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Accepted: 02/25/2022] [Indexed: 12/12/2022] Open
Abstract
Liver disease and hepatocellular carcinoma (HCC) have become a global health burden. For this reason, the determination of liver function plays a central role in the monitoring of patients with chronic liver disease or HCC. Assessment of liver function is also important, e.g., before surgery to prevent liver failure after hepatectomy or to monitor the course of treatment. Liver function and disease severity are usually assessed clinically based on clinical symptoms, biopsy, and blood parameters. These are rather static tests that reflect the current state of the liver without considering changes in liver function. With the development of liver-specific contrast agents for MRI, noninvasive dynamic determination of liver function based on signal intensity or using T1 relaxometry has become possible. The advantage of this imaging modality is that it provides additional information about the vascular structure, anatomy, and heterogeneous distribution of liver function. In this review, we summarize and discuss the results published in recent years on this technique. Indeed, recent data show that the T1 reduction rate seems to be the most appropriate value for determining liver function by MRI. Attention has also been paid to the development of automated image analysis tools, outlining the steps necessary for a complete process flow from image segmentation through image registration to image analysis. In conclusion, the published data show that liver function values obtained from contrast-enhanced MRI images correlate significantly with global liver function parameters, making it possible to obtain both functional and anatomic information with a single modality.
Affiliation(s)
- Carolina Río Bártulos
- Institut für Röntgendiagnostik und Nuklearmedizin, Städtisches Klinikum Braunschweig gGmbH, Braunschweig, Germany
- Karin Senk
- Institut für Röntgendiagnostik, Universitätsklinikum Regensburg, Regensburg, Germany
- Jan Plath
- MeVis Medical Solutions AG, Bremen, Germany
- Philipp Wiggermann
- Institut für Röntgendiagnostik und Nuklearmedizin, Städtisches Klinikum Braunschweig gGmbH, Braunschweig, Germany
19
Astley JR, Wild JM, Tahir BA. Deep learning in structural and functional lung image analysis. Br J Radiol 2022; 95:20201107. [PMID: 33877878 PMCID: PMC9153705 DOI: 10.1259/bjr.20201107] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature relating to the lung image analysis applications of segmentation, reconstruction, registration and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 479 studies were initially identified from the literature search with 82 studies meeting the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures. DL has also shown great potential for applications in image registration, reconstruction and synthesis. However, the majority of published studies have been limited to structural lung imaging with only 12.9% of reviewed studies employing functional lung imaging modalities, thus highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail and interpretability need to be addressed before widespread adoption in clinical lung imaging workflow.
Affiliation(s)
- Jim M Wild
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, United Kingdom
20
Shirokikh B, Dalechina A, Shevtsov A, Krivov E, Kostjuchenko V, Durgaryan A, Galkin M, Golanov A, Belyaev M. Systematic Clinical Evaluation of A Deep Learning Method for Medical Image Segmentation: Radiosurgery Application. IEEE J Biomed Health Inform 2022; 26:3037-3046. [PMID: 35213318 DOI: 10.1109/jbhi.2022.3153394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
We systematically evaluate a Deep Learning model in a 3D medical image segmentation task. With our model, we address the flaws of manual segmentation: high inter-rater contouring variability and the time consumption of the contouring process. The main extension over existing evaluations is the careful and detailed analysis, which could be further generalized to other medical image segmentation tasks. Firstly, we analyze the changes in the inter-rater detection agreement and show that the model reduces the number of detection disagreements by 48% (p < 0.05). Secondly, we show that the model improves the inter-rater contouring agreement from 0.845 to 0.871 surface Dice Score (p < 0.05). Thirdly, we show that the model accelerates the delineation process by a factor of 1.6 to 2.0 (p < 0.05). Finally, we design the setup of the clinical experiment to either exclude or estimate the evaluation biases, thus preserving the significance of the results. Besides the clinical evaluation, we also share intuitions and practical ideas for building an efficient DL-based model for 3D medical image segmentation.
21
Duan J, Bernard M, Downes L, Willows B, Feng X, Mourad W, St Clair W, Chen Q. Evaluating the clinical acceptability of deep learning contours of prostate and organs-at-risk in an automated prostate treatment planning process. Med Phys 2022; 49:2570-2581. [PMID: 35147216 DOI: 10.1002/mp.15525] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 01/17/2022] [Accepted: 01/29/2021] [Indexed: 11/09/2022] Open
Abstract
BACKGROUND Radiation treatment is considered an effective and the most common treatment option for prostate cancer. The treatment planning process requires accurate and precise segmentation of the prostate and organs at risk (OARs), which is laborious and time-consuming when contoured manually. Artificial intelligence (AI)-based auto-segmentation has the potential to significantly accelerate the radiation therapy treatment planning process; however, the accuracy of auto-segmentation needs to be validated before its full clinical adoption. PURPOSE A commercial AI-based contouring model was trained to provide segmentation of the prostate and surrounding OARs. The segmented structures were input to a commercial auto-planning module for automated prostate treatment planning. This study comprehensively evaluates the performance of this contouring model in the automated prostate treatment planning process. METHODS AND MATERIALS A 3D U-Net-based model (INTContour, Carina AI) was trained and validated on 84 computed tomography (CT) scans and tested on an additional 23 CT scans from patients treated in our local institution. Prostate and OAR contours generated by the AI model (AI contours) were geometrically evaluated against Reference contours. The prostate contours were further evaluated against AI, Reference, and two additional observer contours using inter-observer variation (IOV) and 3D boundary discrepancy analyses. A blinded evaluation was introduced to subjectively assess the clinical acceptability of the AI contours. Finally, treatment plans were created from an automated prostate planning workflow using the AI contours and were evaluated for their clinical acceptability following the RTOG-0815 protocol. RESULTS The AI contours demonstrated good geometric accuracy on OARs and prostate contours, with average Dice similarity coefficients (DSC) for bladder, rectum, femoral heads, seminal vesicles, and penile bulb of 0.93, 0.85, 0.96, 0.72, and 0.53, respectively. The DSC, 95% directed Hausdorff Distance (HD95), and Mean Surface Distance (MSD) for the prostate were 0.83±0.05, 6.07±1.87 mm, and 2.07±0.73 mm, respectively. No significant differences were found when comparing with IOV. In the double-blinded evaluation, 95.7% of the AI contours were scored as either "Perfect" (34.8%) or "Acceptable" (60.9%), while only one case (4.3%) was scored as "Unacceptable with minor changes required". In total, 69.6% of the AI contours were considered equal to or better than the Reference contours by an independent radiation oncologist. Automated treatment plans created from the AI contours produced similar and clinically acceptable dosimetric distributions as those from plans created from Reference contours. CONCLUSIONS The investigated AI-based commercial model for prostate segmentation demonstrated good performance in clinical practice. Using this model, the implementation of an automated prostate treatment planning process is clinically feasible.
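The surface-distance metrics reported for the prostate (HD95 and MSD) can be sketched from binary masks as below. This sketch assumes isotropic voxels (real evaluations scale by the voxel spacing) and uses a brute-force pairwise distance, fine for toy masks but too slow for clinical volumes, where a distance transform would be used instead.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of boundary voxels (the mask minus its erosion)."""
    mask = mask.astype(bool)
    return np.argwhere(mask & ~binary_erosion(mask))

def directed_distances(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Distance from every surface voxel of a to the nearest surface voxel of b."""
    return cdist(surface_points(a), surface_points(b)).min(axis=1)

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance, in voxel units."""
    d = np.concatenate([directed_distances(a, b), directed_distances(b, a)])
    return float(np.percentile(d, 95))

def mean_surface_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean symmetric surface distance, in voxel units."""
    d_ab, d_ba = directed_distances(a, b), directed_distances(b, a)
    return float((d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size))

# Toy cubes standing in for two prostate contours.
a = np.zeros((12, 12, 12), dtype=bool)
a[3:9, 3:9, 3:9] = True
b = np.zeros_like(a)
b[4:10, 3:9, 3:9] = True  # shifted by one voxel in z
```

Taking the 95th percentile rather than the maximum makes HD95 robust to a few outlier boundary voxels, which is why it is preferred over the plain Hausdorff distance in contouring studies.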
Affiliation(s)
- Jingwei Duan
- Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Mark Bernard
- Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Laura Downes
- Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Brooke Willows
- Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Xue Feng
- Carina Medical LLC, 145 Graham Ave, A168, Lexington, KY 40506
- Waleed Mourad
- Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- William St Clair
- Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Quan Chen
- Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
| |
22
Cayot B, Milot L, Nempont O, Vlachomitrou AS, Langlois-Jacques C, Dumortier J, Boillot O, Arnaud K, Barten TRM, Drenth JPH, Valette PJ. Polycystic liver: automatic segmentation using deep learning on CT is faster and as accurate compared to manual segmentation. Eur Radiol 2022; 32:4780-4790. [PMID: 35142898 DOI: 10.1007/s00330-022-08549-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2021] [Revised: 12/18/2021] [Accepted: 12/20/2021] [Indexed: 11/29/2022]
Abstract
OBJECTIVE This study aimed to develop and investigate the performance of a deep learning model based on a convolutional neural network (CNN) for the automatic segmentation of polycystic livers on CT imaging. METHOD This retrospective study used CT images of polycystic livers. To develop the CNN, supervised training and validation phases were performed using 190 CT series. To assess performance, a test phase was performed using 41 CT series. Manual segmentation by an expert radiologist (Rad1a) served as the reference for all comparisons. Intra-observer variability was determined by the same reader after 12 weeks (Rad1b), and inter-observer variability by a second reader (Rad2). The Dice similarity coefficient (DSC) evaluated overlap between segmentations. CNN performance was assessed using the concordance correlation coefficient (CCC) and the pairwise differences between CCCs; their confidence intervals were estimated with bootstrap and Bland-Altman analyses. Liver segmentation time was automatically recorded for each method. RESULTS A total of 231 series from 129 CT examinations of 88 consecutive patients were collected. For the CNN, the DSC was 0.95 ± 0.03, and volume analyses yielded a CCC of 0.995 compared with the reference. No statistical difference in CCC was observed between CNN automatic segmentation and the manual segmentations performed to evaluate inter-observer and intra-observer variability. While manual segmentation required 22.4 ± 10.4 min, central and graphics processing units took an average of 5.0 ± 2.1 s and 2.0 ± 1.4 s, respectively. CONCLUSION Compared with manual segmentation, automated segmentation of polycystic livers using a deep learning method was much faster with similar performance. KEY POINTS • Automatic volumetry of polycystic livers using an artificial intelligence method allows much faster segmentation than expert manual segmentation, with similar performance.
• No statistical difference was observed between automatic segmentation and either inter-observer or intra-observer manual segmentation.
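Lin's concordance correlation coefficient, used here for the liver-volume agreement analyses, can be sketched with the standard library alone. The volume values below are invented for illustration and are not the study's data:

```python
from statistics import mean

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = mean(x), mean(y)
    n = len(x)
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # CCC penalizes both scatter (like Pearson's r) and systematic offset
    return 2 * sxy / (sxx + syy + (mx - my) ** 2)

manual = [1200.0, 2500.0, 3100.0, 4800.0, 900.0]  # liver volumes in mL, reference reader
auto   = [1180.0, 2550.0, 3060.0, 4750.0, 950.0]  # liver volumes in mL, CNN
print(round(ccc(manual, auto), 4))
```

Unlike a plain correlation, the CCC drops below 1 when the automatic method is consistently biased, which is why it is a stricter agreement measure for volumetry.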
Affiliation(s)
- Bénédicte Cayot: Department of Medical Imaging, Hospices Civils de Lyon, University of Lyon, Lyon, France; Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France
- Laurent Milot: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; Department of Medical Imaging, Edouard Herriot Hospital, Civil Hospices of Lyon, University of Lyon, Lyon, France
- Olivier Nempont: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; Philips France, 33 rue de Verdun, CS 60 055, 92156 Suresnes Cedex, France
- Anna S Vlachomitrou: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; Philips France, 33 rue de Verdun, CS 60 055, 92156 Suresnes Cedex, France
- Carole Langlois-Jacques: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; Unit of Biostatistics, Civil Hospices of Lyon, Lyon, France; CNRS UMR5558, Laboratory of Biometry and Evolutionary Biology, Biostatistics-Health Team, Lyon, France
- Jérôme Dumortier: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; Department of Hepatology and Gastroenterology, Civil Hospices of Lyon, Edouard Herriot Hospital, Federation of Digestive Specialties, University of Lyon, Lyon, France; University of Lyon, Lyon, France
- Olivier Boillot: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; University of Lyon, Lyon, France; Department of Hepatobiliary-Pancreatic Surgery and Hepatology, Civil Hospices of Lyon, Edouard Herriot Hospital, University of Lyon, Lyon, France
- Karine Arnaud: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; Edouard Herriot Hospital, Civil Hospices of Lyon, Lyon, France
- Thijs R M Barten: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; Radboud University Medical Center, Nijmegen, the Netherlands
- Joost P H Drenth: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; Department of Gastroenterology and Hepatology, Radboud University Medical Center, Nijmegen, the Netherlands
- Pierre-Jean Valette: Service d'imagerie médicale et interventionnelle, Hôpital Edouard Herriot, 5 Place d'Arsonval, 69003 Lyon, France; Department of Medical Imaging, Edouard Herriot Hospital, Civil Hospices of Lyon, University of Lyon, Lyon, France
23
Gross M, Spektor M, Jaffe A, Kucukkaya AS, Iseke S, Haider SP, Strazzabosco M, Chapiro J, Onofrey JA. Improved performance and consistency of deep learning 3D liver segmentation with heterogeneous cancer stages in magnetic resonance imaging. PLoS One 2021; 16:e0260630. [PMID: 34852007 PMCID: PMC8635384 DOI: 10.1371/journal.pone.0260630] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 11/13/2021] [Indexed: 11/23/2022] Open
Abstract
PURPOSE Accurate liver segmentation is key for volumetry assessment to guide treatment decisions. Moreover, it is an important pre-processing step for cancer detection algorithms. Liver segmentation can be especially challenging in patients with cancer-related tissue changes and shape deformation. The aim of this study was to assess the ability of state-of-the-art deep learning 3D liver segmentation algorithms to generalize across all Barcelona Clinic Liver Cancer (BCLC) stages. METHODS This retrospective study included patients from an institutional database who had arterial-phase T1-weighted magnetic resonance images with corresponding manual liver segmentations. The data were split 70/15/15% into training/validation/testing sets, with proportions kept equal across BCLC stages. Two 3D convolutional neural networks were trained using identical U-Net-derived architectures and equally sized training datasets: one spanning all BCLC stages ("All-Stage-Net": AS-Net) and one limited to early and intermediate BCLC stages ("Early-Intermediate-Stage-Net": EIS-Net). Segmentation accuracy was evaluated by the Dice similarity coefficient (DSC) on a dataset spanning all BCLC stages, and a Wilcoxon signed-rank test was used for pairwise comparisons. RESULTS A total of 219 subjects met the inclusion criteria (170 males, 49 females, 62.8±9.1 years), covering all BCLC stages. Both networks were trained using 129 subjects: AS-Net training comprised 19, 74, 18, 8, and 10 patients with BCLC stages 0, A, B, C, and D, respectively; EIS-Net training comprised 21, 86, and 22 patients with BCLC stages 0, A, and B, respectively. DSCs (mean±SD) were 0.954±0.018 for AS-Net and 0.946±0.032 for EIS-Net (p<0.001). The AS-Net (0.956±0.014) significantly outperformed the EIS-Net (0.941±0.038) on advanced BCLC stages (p<0.001) and yielded similarly good segmentation performance on early and intermediate stages (AS-Net: 0.952±0.021; EIS-Net: 0.949±0.027; p = 0.107).
CONCLUSION To ensure robust segmentation performance across cancer stages that is independent of liver shape deformation and tumor burden, it is critical to train deep learning models on heterogeneous imaging data spanning all BCLC stages.
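The stage-proportional 70/15/15 split described in the methods can be sketched as a stratified partition over subject IDs. The cohort composition, function name, and seed below are hypothetical, chosen only to illustrate the technique:

```python
import random

def stratified_split(subjects, stages, fractions=(0.70, 0.15, 0.15), seed=42):
    """Split subject IDs into train/val/test, keeping proportions per BCLC stage.

    The third fraction is implicit: each stage's remainder goes to the test set.
    """
    rng = random.Random(seed)
    by_stage = {}
    for subj, stage in zip(subjects, stages):
        by_stage.setdefault(stage, []).append(subj)
    splits = ([], [], [])
    for stage, members in sorted(by_stage.items()):
        rng.shuffle(members)  # randomize within each stage before slicing
        n = len(members)
        n_train = round(fractions[0] * n)
        n_val = round(fractions[1] * n)
        splits[0].extend(members[:n_train])
        splits[1].extend(members[n_train:n_train + n_val])
        splits[2].extend(members[n_train + n_val:])
    return splits

# Hypothetical cohort: 40 subjects across BCLC stages 0/A/B/C/D
stages = ["0"] * 4 + ["A"] * 20 + ["B"] * 8 + ["C"] * 4 + ["D"] * 4
subjects = list(range(len(stages)))
train, val, test = stratified_split(subjects, stages)
print(len(train), len(val), len(test))  # → 29 7 4
```

Splitting at the subject level (rather than the slice level) prevents data leakage between sets, and stratifying by stage is what lets a model like AS-Net see the rarer advanced-stage livers during training.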
Affiliation(s)
- Moritz Gross: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Charité Center for Diagnostic and Interventional Radiology, Charité—Universitätsmedizin Berlin, Berlin, Germany
- Michael Spektor: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Ariel Jaffe: Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Ahmet S. Kucukkaya: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Charité Center for Diagnostic and Interventional Radiology, Charité—Universitätsmedizin Berlin, Berlin, Germany
- Simon Iseke: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Department of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, Rostock University Medical Center, Rostock, Germany
- Stefan P. Haider: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Department of Otorhinolaryngology, University Hospital of Ludwig Maximilians Universität München, Munich, Germany
- Mario Strazzabosco: Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Julius Chapiro: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- John A. Onofrey: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America; Department of Urology, Yale University School of Medicine, New Haven, Connecticut, United States of America; Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States of America
24
Ren S, Zhan L, Chen S, Dai H, Ruan G, Li S, Liu L, Lin R, Chen H. Segmentation and Registration of the Liver in Dynamic Contrast-Enhanced Computed Tomography Images. JOURNAL OF MEDICAL IMAGING AND HEALTH INFORMATICS 2021. [DOI: 10.1166/jmihi.2021.3327] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Dynamic contrast-enhanced computed tomography (DCE-CT) is the main auxiliary diagnostic tool for liver diseases. Liver segmentation and registration across all phases of DCE-CT images are key technologies for big-data analysis in liver disease diagnosis. The changing imaging conditions across the phases of DCE-CT pose considerable challenges for the segmentation of liver CT images. This study proposes an automatic model, based on U-Net, for liver segmentation from abdominal CT images acquired in the different phases of DCE; the skip connections in U-Net improve the recognition of complex features. A total of 4863 CT slices from 16 patients with hepatocellular carcinoma (HCC) were selected as the training set, and 1754 CT slices from 6 patients with HCC were selected as the test set. The training and test sets included plain-scan, hepatic arterial-dominant phase, and portal venous-dominant phase CT scans. Results showed that the Dice value of the proposed method was significantly higher than those of the fully convolutional network and region-growing methods. Then, 3D reconstruction and registration were performed on the segmented liver regions of the DCE-CT images. The proposed method obtained the best performance and can provide technical support for the big-data analysis of liver diseases.
Affiliation(s)
- Shuai Ren: School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin 541004, China
- Ling Zhan: School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin 541004, China
- Shuchao Chen: School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin 541004, China
- Haitao Dai: The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou 510080, China
- Guangying Ruan: State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Center, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China
- Sai Li: School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin 541004, China
- Lizhi Liu: State Key Laboratory of Oncology in South China, Sun Yat-Sen University Cancer Center, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China
- Run Lin: The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou 510080, China
- Hongbo Chen: School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin 541004, China
25
Furtado P. Testing Segmentation Popular Loss and Variations in Three Multiclass Medical Imaging Problems. J Imaging 2021; 7:16. [PMID: 34460615 PMCID: PMC8321275 DOI: 10.3390/jimaging7020016] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 01/16/2021] [Accepted: 01/22/2021] [Indexed: 12/15/2022] Open
Abstract
Image structures are segmented automatically using deep learning (DL) for analysis and processing. The three most popular base loss functions are cross entropy (crossE), intersection-over-union (IoU), and dice. Which should be used? Is it useful to consider simple variations, such as modified formula coefficients? How do the characteristics of different image structures influence scores? Taking three different medical image segmentation problems (segmentation of organs in magnetic resonance images (MRI), of the liver in computed tomography images (CT), and of diabetic retinopathy lesions in eye fundus images (EFI)), we quantify loss functions and their variations, as well as segmentation scores for different targets. We first describe the limitations of metrics, since a loss is a metric, then describe and test alternatives. Experimentally, we observed that DeeplabV3 outperforms UNet and the fully convolutional network (FCN) on all datasets. Dice scored 1 to 6 percentage points (pp) higher than cross entropy across all datasets; IoU improved by 0 to 3 pp. Varying formula coefficients improved scores, but the best choices depend on the dataset: compared to crossE, different false-positive vs. false-negative weights improved MRI by 12 pp, and assigning zero weight to the background improved EFI by 6 pp. Multiclass segmentation scored higher than n-uniclass segmentation on MRI by 8 pp. EFI lesions score low compared to more constant structures (e.g., the optic disk or even organs), but loss modifications improve those scores significantly, by 6 to 9 pp. Our conclusions are that dice is best, that it is worth assigning zero weight to the background class, and that different weights on false positives and false negatives are worth testing.
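The loss variations discussed above, class-weighted cross entropy with the background weight set to zero and a soft Dice score, can be illustrated on a toy pixel vector. All names and numbers below are illustrative; this is not the paper's implementation:

```python
import math

def weighted_ce(pred_probs, labels, class_weights):
    """Class-weighted cross entropy; a weight of 0 drops that class (e.g. background)."""
    total = w = 0.0
    for probs, y in zip(pred_probs, labels):
        total += class_weights[y] * -math.log(probs[y])
        w += class_weights[y]
    return total / w

def soft_dice(pred_probs, labels, cls):
    """Soft Dice score for one class computed from predicted probabilities."""
    p = [probs[cls] for probs in pred_probs]
    g = [1.0 if y == cls else 0.0 for y in labels]
    inter = sum(pi * gi for pi, gi in zip(p, g))
    return 2 * inter / (sum(p) + sum(g))

# Toy 6-pixel problem: class 0 = background, class 1 = lesion
pred = [(0.9, 0.1), (0.8, 0.2), (0.3, 0.7), (0.2, 0.8), (0.6, 0.4), (0.1, 0.9)]
truth = [0, 0, 1, 1, 1, 1]
print(round(weighted_ce(pred, truth, {0: 1.0, 1: 1.0}), 3))  # → 0.322
print(round(weighted_ce(pred, truth, {0: 0.0, 1: 1.0}), 3))  # → 0.4 (background ignored)
print(round(soft_dice(pred, truth, cls=1), 3))               # → 0.789
```

Zeroing the background weight makes the loss focus entirely on the rare foreground class, which mirrors the paper's finding that this helps for small, sparse targets such as EFI lesions.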
Affiliation(s)
- Pedro Furtado: Dei/FCT/CISUC, University of Coimbra, Polo II, 3030-290 Coimbra, Portugal
26
Furtado P. Loss, post-processing and standard architecture improvements of liver deep learning segmentation from Computed Tomography and magnetic resonance. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100585] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
27
PICCOLO White-Light and Narrow-Band Imaging Colonoscopic Dataset: A Performance Comparative of Models and Datasets. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10238501] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Colorectal cancer is one of the world's leading causes of death. Fortunately, an early diagnosis allows for effective treatment, increasing the survival rate. Deep learning techniques have shown their utility for increasing the adenoma detection rate at colonoscopy, but a dataset is usually required so the model can automatically learn the features that characterize the polyps. In this work, we present the PICCOLO dataset, which comprises 3433 manually annotated images (2131 white-light images and 1302 narrow-band images) originating from 76 lesions in 40 patients, distributed into training (2203), validation (897), and test (333) sets while ensuring patient independence between sets. Furthermore, clinical metadata are provided for each lesion. Four different models, obtained by combining two backbones and two encoder–decoder architectures, are trained with the PICCOLO dataset and with two other publicly available datasets for comparison. Results are provided for the test set of each dataset. Models trained with the PICCOLO dataset have a better generalization capacity, as they perform more uniformly across the test sets of all datasets rather than obtaining the best results only on their own test set. The dataset is available at the website of the Basque Biobank, so it is expected to contribute to the further development of deep learning methods for polyp detection, localisation and classification, which would eventually result in a better and earlier diagnosis of colorectal cancer, hence improving patient outcomes.
28
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161 PMCID: PMC8218135 DOI: 10.1186/s41824-020-00086-8] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 08/10/2020] [Indexed: 12/22/2022] Open
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of artificial intelligence in five generic fields of molecular imaging and radiation therapy are discussed: PET instrumentation design; PET image reconstruction, quantification, and segmentation; image denoising (low-dose imaging); radiation dosimetry and computer-aided diagnosis; and outcome prediction. This review sets out to cover briefly the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Affiliation(s)
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, CH-1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, 500 Odense, Denmark
29
Theek B, Magnuska Z, Gremse F, Hahn H, Schulz V, Kiessling F. Automation of data analysis in molecular cancer imaging and its potential impact on future clinical practice. Methods 2020; 188:30-36. [PMID: 32615232 DOI: 10.1016/j.ymeth.2020.06.019] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Accepted: 06/23/2020] [Indexed: 12/11/2022] Open
Abstract
Digitalization, especially the use of machine learning and computational intelligence, is expected to dramatically shape medical procedures in the near future. In the field of cancer diagnostics, radiomics, the extraction of multiple quantitative image features and their clustered analysis, is gaining increasing attention as a means to obtain more detailed, reproducible, and meaningful information about the disease entity, its prognosis, and the ideal therapeutic option. In this context, automation of diagnostic procedures can improve the entire pipeline, which comprises patient registration, planning and performing an imaging examination at the scanner, image reconstruction, image analysis, and feeding the diagnostic information from various sources into decision support systems. With a focus on cancer diagnostics, this review article reports and discusses how computer assistance can be integrated into diagnostic procedures and which benefits and challenges arise from it. Besides a strong focus on classical imaging modalities such as x-ray, CT, MRI, ultrasound, PET, SPECT, and hybrid imaging devices thereof, it outlines how imaging data can be combined with data derived from patient anamnesis, clinical chemistry, pathology, and different omics. In this context, the article also discusses the IT infrastructures required to realize this integration in clinical routine. Although there are still many challenges to comprehensively implementing automated and integrated data analysis in molecular cancer imaging, the authors conclude that we are entering a new era of medical diagnostics and precision medicine.
Affiliation(s)
- Benjamin Theek: Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Zuzanna Magnuska: Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany
- Felix Gremse: Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Institute of Medical Informatics, RWTH Aachen University, Pauwelsstrasse 30, 52074 Aachen, Germany
- Horst Hahn: Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Volkmar Schulz: Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany; Physics of Molecular Imaging Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany
- Fabian Kiessling: Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
30
Whole liver segmentation based on deep learning and manual adjustment for clinical use in SIRT. Eur J Nucl Med Mol Imaging 2020; 47:2742-2752. [DOI: 10.1007/s00259-020-04800-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2020] [Accepted: 03/30/2020] [Indexed: 10/24/2022]
31
Gudigar A, Raghavendra U, Hegde A, Kalyani M, Ciaccio EJ, Rajendra Acharya U. Brain pathology identification using computer aided diagnostic tool: A systematic review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 187:105205. [PMID: 31786457 DOI: 10.1016/j.cmpb.2019.105205] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Revised: 11/12/2019] [Accepted: 11/12/2019] [Indexed: 05/28/2023]
Abstract
Computer-aided diagnosis (CAD) has become a significant tool for improving patient quality of life by reducing human error in diagnosis. CAD can expedite decision-making on complex clinical data automatically. Since brain diseases can be fatal, rapid identification of brain pathology to prolong patient life is an important research topic. Many algorithms have been proposed for efficient brain pathology identification (BPI) over the past decade. Constant refinement of the various image processing algorithms is needed to improve performance on the automatic BPI task. In this paper, a systematic survey of contemporary BPI algorithms using brain magnetic resonance imaging (MRI) is presented. A summarization of recent literature provides investigators with a helpful synopsis of the domain. Furthermore, future research directions to enhance BPI performance are indicated.
Affiliation(s)
- Anjan Gudigar: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Ajay Hegde: Neurosurgery, Institute of Neurological Sciences, NHS Greater Glasgow and Clyde, Glasgow, United Kingdom
- M Kalyani: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Edward J Ciaccio: Department of Medicine, Columbia University Medical Center, New York, United States
- U Rajendra Acharya: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Clementi 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Clementi 599491, Singapore; International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan