1
Brooks JA, Kallenbach M, Radu IP, Berzigotti A, Dietrich CF, Kather JN, Luedde T, Seraphin TP. Artificial Intelligence for Contrast-Enhanced Ultrasound of the Liver: A Systematic Review. Digestion 2024:1-18. [PMID: 39312896] [DOI: 10.1159/000541540] [Received: 06/04/2024] [Accepted: 09/18/2024]
Abstract
INTRODUCTION The research field of artificial intelligence (AI) in medicine, and especially in gastroenterology, is progressing rapidly, with the first AI tools entering routine clinical practice, for example, in colorectal cancer screening. Contrast-enhanced ultrasound (CEUS) is a highly reliable, low-risk, and low-cost diagnostic modality for the examination of the liver. However, doctors need many years of training and experience to master this technique, and, despite all efforts to standardize CEUS, it is widely held to be subject to significant interrater variability. As has been shown for endoscopy, AI holds promise to support examiners at all training levels in their decision-making and efficiency. METHODS In this systematic review, we analyzed and compared original research studies applying AI methods to CEUS examinations of the liver published between January 2010 and February 2024. We performed a structured literature search on PubMed, Web of Science, and IEEE. Two independent reviewers screened the articles and subsequently extracted relevant methodological features from the included articles, e.g., cohort size, validation process, machine learning algorithm used, and indicative performance measures. RESULTS We included 41 studies, most of which applied AI methods to classification tasks related to focal liver lesions. These included distinguishing benign from malignant lesions or classifying the entity itself, while a few studies tried to classify tumor grading, microvascular invasion status, or response to transcatheter arterial chemoembolization directly from CEUS. Some articles tried to segment or detect focal liver lesions, while others aimed to predict survival and recurrence after ablation. The majority (25/41) of studies used hand-picked and/or annotated images as data input to their models. We observed mostly good to high reported model performances, with accuracies ranging between 58.6% and 98.9%, while noting a general lack of external validation.
CONCLUSION Even though multiple proof-of-concept studies for the application of AI methods to CEUS examinations of the liver exist and report high performance, more prospective, externally validated, and multicenter research is needed to bring such algorithms from desk to bedside.
Affiliation(s)
- James A Brooks: Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Dusseldorf, Medical Faculty at Heinrich-Heine-University, Dusseldorf, Germany
- Michael Kallenbach: Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Dusseldorf, Medical Faculty at Heinrich-Heine-University, Dusseldorf, Germany
- Iuliana-Pompilia Radu: Department for Visceral Surgery and Medicine, Inselspital, University of Bern, Bern, Switzerland
- Annalisa Berzigotti: Department for Visceral Surgery and Medicine, Inselspital, University of Bern, Bern, Switzerland
- Christoph F Dietrich: Department Allgemeine Innere Medizin (DAIM), Kliniken Hirslanden Beau Site, Salem and Permanence, Bern, Switzerland
- Jakob N Kather: Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Tom Luedde: Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Dusseldorf, Medical Faculty at Heinrich-Heine-University, Dusseldorf, Germany
- Tobias P Seraphin: Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Dusseldorf, Medical Faculty at Heinrich-Heine-University, Dusseldorf, Germany
2
Vetter M, Waldner MJ, Zundler S, Klett D, Bocklitz T, Neurath MF, Adler W, Jesper D. Artificial intelligence for the classification of focal liver lesions in ultrasound - a systematic review. Ultraschall in der Medizin 2023;44:395-407. [PMID: 37001563] [DOI: 10.1055/a-2066-9372]
Abstract
Focal liver lesions are detected in about 15% of abdominal ultrasound examinations. The diagnosis of frequent benign lesions can be determined reliably based on the characteristic B-mode appearance of cysts, hemangiomas, or typical focal fatty changes. In the case of focal liver lesions which remain unclear on B-mode ultrasound, contrast-enhanced ultrasound (CEUS) increases diagnostic accuracy for the distinction between benign and malignant liver lesions. Artificial intelligence describes applications that try to emulate human intelligence, at least in subfields such as the classification of images. Since ultrasound is considered to be a particularly examiner-dependent technique, the application of artificial intelligence could be an interesting approach for an objective and accurate diagnosis. In this systematic review we analyzed how artificial intelligence can be used to classify the benign or malignant nature and entity of focal liver lesions on the basis of B-mode or CEUS data. In a structured search on Scopus, Web of Science, PubMed, and IEEE, we found 52 studies that met the inclusion criteria. Studies showed good diagnostic performance for both the classification as benign or malignant and the differentiation of individual tumor entities. The results could be improved by inclusion of clinical parameters and were comparable to those of experienced investigators in terms of diagnostic accuracy. However, due to the limited spectrum of lesions included in the studies and a lack of independent validation cohorts, the transfer of the results into clinical practice is limited.
Affiliation(s)
- Marcel Vetter: Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), Erlangen University Hospital, Erlangen, Germany
- Maximilian J Waldner: Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), Erlangen University Hospital, Erlangen, Germany
- Sebastian Zundler: Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), Erlangen University Hospital, Erlangen, Germany
- Daniel Klett: Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), Erlangen University Hospital, Erlangen, Germany
- Thomas Bocklitz: Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich Schiller University Jena, Jena, Germany; Leibniz-Institute of Photonic Technology, Friedrich Schiller University Jena, Jena, Germany
- Markus F Neurath: Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), Erlangen University Hospital, Erlangen, Germany
- Werner Adler: Department of Medical Informatics, Biometry and Epidemiology, Friedrich-Alexander University Erlangen-Nuremberg, Erlangen, Germany
- Daniel Jesper: Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), Erlangen University Hospital, Erlangen, Germany
3
Kawanishi K, Kakimoto A, Anegawa K, Tsutsumi M, Yamaguchi I, Kudo S. Automatic Identification of Ultrasound Images of the Tibial Nerve in Different Ankle Positions Using Deep Learning. Sensors (Basel) 2023;23:4855. [PMID: 37430769] [DOI: 10.3390/s23104855] [Received: 05/04/2023] [Revised: 05/15/2023] [Accepted: 05/16/2023]
Abstract
Peripheral nerve tension is known to be related to the pathophysiology of neuropathy; however, assessing this tension is difficult in a clinical setting. In this study, we aimed to develop a deep learning algorithm for the automatic assessment of tibial nerve tension using B-mode ultrasound imaging. To develop the algorithm, we used 204 ultrasound images of the tibial nerve in three positions: the maximum dorsiflexion position, and -10° and -20° of plantar flexion from maximum dorsiflexion. The images were taken of 68 healthy volunteers who did not have any abnormalities in the lower limbs at the time of testing. The tibial nerve was manually segmented in all images, and 163 images were used as the training dataset for automatic segmentation with U-Net. Additionally, convolutional neural network (CNN)-based classification was performed to determine each ankle position. The automatic classification was validated using five-fold cross-validation on the testing data, composed of 41 images. The highest mean accuracy (0.92) was achieved using manual segmentation. The mean accuracy of the fully automatic classification of the tibial nerve at each ankle position was more than 0.77 using five-fold cross-validation. Thus, the tension of the tibial nerve can be accurately assessed at different dorsiflexion angles using ultrasound imaging analysis with U-Net and a CNN.
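The five-fold cross-validation protocol used above can be sketched as follows. This is a minimal illustration of the evaluation scheme only: the study's data, U-Net segmentation, and CNN classifier are replaced by hypothetical stand-ins (`train_fn`, `predict_fn`).

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices once and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(samples, labels, train_fn, predict_fn, k=5):
    """Train on k-1 folds and test on the held-out fold; return per-fold accuracies."""
    folds = k_fold_indices(len(samples), k)
    accuracies = []
    for held_out in range(k):
        test_idx = folds[held_out]
        train_idx = [j for f, fold in enumerate(folds) if f != held_out for j in fold]
        model = train_fn([samples[j] for j in train_idx],
                         [labels[j] for j in train_idx])
        correct = sum(predict_fn(model, samples[j]) == labels[j] for j in test_idx)
        accuracies.append(correct / len(test_idx))
    return accuracies
```

The reported mean accuracy then corresponds to the average of the per-fold accuracies.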
Affiliation(s)
- Kengo Kawanishi: Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan; Department of Rehabilitation, Kano General Hospital, Osaka 531-0041, Japan
- Akihiro Kakimoto: Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan; Department of Radiological Sciences, Faculty of Health Sciences, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Keisuke Anegawa: Graduate School of Health Science, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Masahiro Tsutsumi: Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan; Department of Physical Therapy, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Isao Yamaguchi: Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan; Department of Radiological Sciences, Faculty of Health Sciences, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Shintarou Kudo: Inclusive Medical Science Research Institute, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan; Department of Physical Therapy, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan; AR-Ex Medical Research Center, Tokyo 158-0082, Japan
4
Liu L, Tang C, Li L, Chen P, Tan Y, Hu X, Chen K, Shang Y, Liu D, Liu H, Liu H, Nie F, Tian J, Zhao M, He W, Guo Y. Deep learning radiomics for focal liver lesions diagnosis on long-range contrast-enhanced ultrasound and clinical factors. Quant Imaging Med Surg 2022;12:3213-3226. [PMID: 35655832] [PMCID: PMC9131334] [DOI: 10.21037/qims-21-1004] [Received: 10/14/2021] [Accepted: 03/18/2022]
Abstract
BACKGROUND Routine clinical factors play an important role in the clinical diagnosis of focal liver lesions (FLLs); however, they are rarely used in computer-assisted diagnosis. Therefore, we developed a deep learning (DL) radiomics model and investigated its effectiveness in diagnosing FLLs using long-range contrast-enhanced ultrasound (CEUS) cines and clinical factors. METHODS In total, 303 patients with pathologically confirmed FLLs after surgery at three hospitals were retrospectively enrolled and divided into a training cohort (n=203) and an internal validation (IV) cohort (n=50) from one hospital in a 4:1 ratio, plus an external validation (EV) cohort (n=50) from the other two hospitals. Four DL radiomics models, namely Four Stream 3D convolutional neural network (FS3DU) (trained with CEUS cines only), FS3DU+A (trained with CEUS cines and alpha-fetoprotein), FS3DU+H (trained with CEUS cines and hepatitis status), and FS3DU+A+H (trained with CEUS cines, alpha-fetoprotein, and hepatitis status), were built on 3D convolutional neural networks (CNNs). They used approximately 20-s preoperative CEUS cines and/or clinical factors to extract spatiotemporal features for the classification of FLLs and the localization of the region of interest. The area under the receiver operating characteristic curve (AUC) and the diagnosis speed were calculated to evaluate the models in the IV and EV cohorts and to compare them with two radiologists. Two-sided DeLong tests were used to assess the statistical differences between the models and the radiologists. RESULTS FS3DU+A+H, which incorporated CEUS cines, hepatitis status, and alpha-fetoprotein, achieved the highest AUC among the radiologists and the other models: 0.969 (95% CI: 0.901-1.000) in the IV cohort and 0.957 (95% CI: 0.894-1.000) in the EV cohort. A significant difference was observed when comparing FS3DU and radiologist 2 (all P<0.05).
The diagnosis speed of all the models was the same (10.76 s per patient) and was more than twice as fast as that of the radiologists (radiologist 1: 23.74 and 27.75 s; radiologist 2: 25.95 and 29.50 s in the IV and EV cohorts, respectively). CONCLUSIONS The proposed DL radiomics model demonstrated excellent performance in the benign-versus-malignant diagnosis of FLLs by combining CEUS cines and clinical factors. It could support the individualized characterization of FLLs and enhance diagnostic accuracy in the future.
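The clinical-factor fusion behind the FS3DU+A and FS3DU+A+H variants can be illustrated with a minimal sketch. The function names and the toy feature extractor below are hypothetical; in the study, the imaging features come from a 3D CNN over roughly 20-s CEUS cines, not from the simple frame statistics used here.

```python
def extract_cine_features(cine):
    """Hypothetical stand-in for the 3D-CNN spatiotemporal feature extractor:
    here, just the mean intensity and the intensity range across all frames."""
    values = [v for frame in cine for v in frame]
    return [sum(values) / len(values), max(values) - min(values)]

def fuse(cine, afp, hepatitis):
    """Concatenate imaging features with the clinical covariates
    (alpha-fetoprotein level and hepatitis status)."""
    return extract_cine_features(cine) + [afp, 1.0 if hepatitis else 0.0]

def linear_score(features, weights, bias=0.0):
    """Simple linear decision score over the fused vector (e.g. malignant if > 0)."""
    return sum(w * f for w, f in zip(weights, features)) + bias
```

The design point this sketch shows is feature-level fusion: the clinical scalars enter the same decision function as the imaging features rather than being applied as a separate rule afterwards.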
Affiliation(s)
- Li Liu: Department of Ultrasound, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China; Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Third Military Medical University (Army Medical University), Chongqing, China
- Chunlin Tang: Department of Ultrasound, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Lu Li: CHISON Medical Technologies Co., LTD, Wuxi, China
- Ping Chen: Department of Ultrasound, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Ying Tan: Department of Ultrasound, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Xiaofei Hu: Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Kaixuan Chen: Department of Ultrasound, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Yongning Shang: Department of Ultrasound, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Deng Liu: Department of Ultrasound, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- He Liu: Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Hongjun Liu: Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Third Military Medical University (Army Medical University), Chongqing, China
- Fang Nie: Department of Ultrasound, Lanzhou University Second Hospital, Lanzhou, China
- Jiawei Tian: Department of Ultrasound, the Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Wen He: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Yanli Guo: Department of Ultrasound, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
6
Xiang K, Jiang B, Shang D. The overview of the deep learning integrated into the medical imaging of liver: a review. Hepatol Int 2021;15:868-880. [PMID: 34264509] [DOI: 10.1007/s12072-021-10229-z] [Received: 03/09/2021] [Accepted: 06/24/2021]
Abstract
Deep learning (DL) is a recently developed artificial intelligence method that can be applied in numerous fields. For the imaging diagnosis of liver disease, several remarkable outcomes have already been achieved with the application of DL. This advanced class of algorithms contributes to various stages of image processing, such as liver segmentation, lesion delineation, disease classification, and process optimization. DL-optimized imaging diagnosis shows broad promise as an alternative to pathological biopsy, owing to its convenience, safety, and low cost. In this paper, we review representative published DL-related hepatic imaging works, describe the current state of this emerging technology in medical liver imaging, and explore future directions for DL development.
Affiliation(s)
- Kailai Xiang: Department of General Surgery, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China; Clinical Laboratory of Integrative Medicine, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China
- Baihui Jiang: Department of Ophthalmology, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China
- Dong Shang: Department of General Surgery, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China; Clinical Laboratory of Integrative Medicine, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China
7
Deep Neural Architectures for Contrast Enhanced Ultrasound (CEUS) Focal Liver Lesions Automated Diagnosis. Sensors (Basel) 2021;21:4126. [PMID: 34208548] [PMCID: PMC8235629] [DOI: 10.3390/s21124126] [Received: 05/07/2021] [Revised: 06/04/2021] [Accepted: 06/10/2021]
Abstract
Computer vision, biomedical image processing, and deep learning are related fields with a tremendous impact on the interpretation of medical images today. Among biomedical image sensing modalities, ultrasound (US) is one of the most widely used in practice, since it is noninvasive, accessible, and cheap. Its main drawback, compared to other imaging modalities like computed tomography (CT) or magnetic resonance imaging (MRI), is its increased dependence on the human operator. One important step toward reducing this dependence is the implementation of a computer-aided diagnosis (CAD) system for US imaging. The aim of this paper is to examine the application of contrast-enhanced ultrasound imaging (CEUS) to the problem of automated focal liver lesion (FLL) diagnosis using deep neural networks (DNNs). Custom DNN designs are compared with state-of-the-art architectures, either pre-trained or trained from scratch. Our work improves on and broadens previous work in the field in several respects, e.g., a novel leave-one-patient-out evaluation procedure, which further enabled us to formulate a hard-voting classification scheme. We show the effectiveness of our models, i.e., 88% accuracy reported across a larger number of liver lesion types: hepatocellular carcinoma (HCC), hypervascular metastases (HYPERM), hypovascular metastases (HYPOM), hemangioma (HEM), and focal nodular hyperplasia (FNH).
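The leave-one-patient-out evaluation with hard voting described above can be sketched as follows; the classifier is a hypothetical stub, and only the evaluation and voting logic mirrors the description: each patient is held out in turn, all of that patient's images are classified individually, and the patient-level label is the majority vote.

```python
from collections import Counter

def hard_vote(frame_predictions):
    """Patient-level label = majority vote over per-frame predictions."""
    return Counter(frame_predictions).most_common(1)[0][0]

def leave_one_patient_out(patients, train_fn, predict_fn):
    """patients maps patient_id -> (frames, true_label); every patient is held
    out once, its frames are classified, and the votes decide the label."""
    correct = 0
    for pid, (frames, label) in patients.items():
        training_set = {k: v for k, v in patients.items() if k != pid}
        model = train_fn(training_set)
        votes = [predict_fn(model, frame) for frame in frames]
        correct += (hard_vote(votes) == label)
    return correct / len(patients)
```

Splitting by patient rather than by image prevents frames from the same lesion from leaking between the training and test sides of a split.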
8
Wan P, Chen F, Liu C, Kong W, Zhang D. Hierarchical Temporal Attention Network for Thyroid Nodule Recognition Using Dynamic CEUS Imaging. IEEE Transactions on Medical Imaging 2021;40:1646-1660. [PMID: 33651687] [DOI: 10.1109/tmi.2021.3063421]
Abstract
Contrast-enhanced ultrasound (CEUS) has emerged as a popular imaging modality in thyroid nodule diagnosis due to its ability to visualize vascular distribution in real time. Recently, a number of learning-based methods have been dedicated to mining pathology-related enhancement dynamics and making predictions in a single step, ignoring an inherent diagnostic dependency: in the clinic, the differentiation of benign from malignant nodules always precedes the recognition of the pathological type. In this paper, we propose a novel hierarchical temporal attention network (HiTAN) for thyroid nodule diagnosis using dynamic CEUS imaging, which unifies dynamic enhancement feature learning and hierarchical nodule classification in a deep framework. Specifically, this method decomposes the diagnosis of nodules into an ordered two-stage classification task, in which the diagnostic dependency is modeled by gated recurrent units (GRUs). In addition, we design a local-to-global temporal aggregation (LGTA) operator to perform comprehensive temporal fusion along the hierarchical prediction path. In particular, local temporal information is defined as typical enhancement patterns identified with the guidance of the perfusion representation learned at the differentiation level. We then leverage an attention mechanism to embed global enhancement dynamics into each identified salient pattern. In this study, we evaluate the proposed HiTAN method on a collected CEUS dataset of thyroid nodules. Extensive experimental results validate the efficacy of the dynamic pattern learning, fusion, and hierarchical diagnosis mechanisms.
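The ordered two-stage classification in HiTAN (differentiation first, then pathological type conditioned on that outcome) can be sketched schematically. The GRU-based enhancement-dynamics modeling and the attention mechanism are omitted here; the stage classifiers are hypothetical stand-ins, so this only illustrates the diagnostic dependency.

```python
def two_stage_diagnosis(features, differentiate, subtype_benign, subtype_malignant):
    """Stage 1 decides benign vs. malignant; the matching stage-2 classifier
    then assigns the pathological subtype, mirroring the clinical dependency
    in which differentiation precedes type recognition."""
    differentiation = differentiate(features)
    classify_subtype = subtype_benign if differentiation == "benign" else subtype_malignant
    return differentiation, classify_subtype(features)
```
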
9
Mitrea D, Badea R, Mitrea P, Brad S, Nedevschi S. Hepatocellular Carcinoma Automatic Diagnosis within CEUS and B-Mode Ultrasound Images Using Advanced Machine Learning Methods. Sensors (Basel) 2021;21:2202. [PMID: 33801125] [PMCID: PMC8004125] [DOI: 10.3390/s21062202] [Received: 02/21/2021] [Revised: 03/12/2021] [Accepted: 03/16/2021]
Abstract
Hepatocellular carcinoma (HCC) is the most common malignant liver tumor, present in 70% of liver cancer cases. It usually evolves on top of a cirrhotic parenchyma. The most reliable method for HCC diagnosis is needle biopsy, which is an invasive, risky procedure. In our research, specific techniques for non-invasive, computerized HCC diagnosis are developed by exploiting the information in ultrasound images. In this work, we assessed the possibility of performing automatic diagnosis of HCC from B-mode ultrasound and contrast-enhanced ultrasound (CEUS) images using advanced machine learning methods based on convolutional neural networks (CNNs). Recognition performance was evaluated separately on B-mode ultrasound images and on CEUS images, as well as on combined B-mode ultrasound and CEUS images. For this purpose, we considered combining the input images directly, performing feature-level fusion, and then providing the resulting data as input to representative CNN classifiers. In addition, several multimodal combined classifiers were tested, obtained by fusing two branches, based on either the same or different CNN architectures, at the classifier level and at the decision level, respectively. Various combination methods, as well as the dimensionality-reduction method of kernel principal component analysis (KPCA), were involved in this process. These results were compared with those obtained on the same dataset when employing advanced texture analysis techniques in conjunction with conventional classification methods, as well as with equivalent state-of-the-art approaches. An accuracy above 97% was achieved when our new methodology was applied.
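The two fusion strategies compared above, feature-level fusion of B-mode and CEUS descriptors versus decision-level fusion of two classifier branches, can be sketched as follows. The CNN branches are replaced by plain feature and probability vectors, and the class names are illustrative, so this only shows where in the pipeline each fusion happens.

```python
def feature_level_fusion(bmode_features, ceus_features):
    """Concatenate per-modality descriptors before a single classifier."""
    return list(bmode_features) + list(ceus_features)

def decision_level_fusion(probs_bmode, probs_ceus):
    """Average the per-class probabilities of the two modality branches."""
    return [(p + q) / 2 for p, q in zip(probs_bmode, probs_ceus)]

def predict_class(probs, classes=("HCC", "non-HCC")):
    """Pick the class with the highest (fused) probability."""
    return classes[max(range(len(probs)), key=probs.__getitem__)]
```

Feature-level fusion lets one classifier learn cross-modality interactions, while decision-level fusion keeps the branches independent and combines only their outputs.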
Affiliation(s)
- Delia Mitrea: Department of Computer Science, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, Baritiu Street, No. 26-28, 400027 Cluj-Napoca, Romania
- Radu Badea: Medical Imaging Department, Iuliu Hatieganu University of Medicine and Pharmacy, Babes Street, No. 8, 400012 Cluj-Napoca, Romania; Regional Institute of Gastroenterology and Hepatology, Iuliu Hatieganu University of Medicine and Pharmacy, 19-21 Croitorilor Street, 400162 Cluj-Napoca, Romania
- Paulina Mitrea: Department of Computer Science, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, Baritiu Street, No. 26-28, 400027 Cluj-Napoca, Romania
- Stelian Brad: Department of Design Engineering and Robotics, Faculty of Machine Building, Technical University of Cluj-Napoca, Muncii Boulevard, No. 103-105, 400641 Cluj-Napoca, Romania
- Sergiu Nedevschi: Department of Computer Science, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, Baritiu Street, No. 26-28, 400027 Cluj-Napoca, Romania
10
Huang Q, Pan F, Li W, Yuan F, Hu H, Huang J, Yu J, Wang W. Differential Diagnosis of Atypical Hepatocellular Carcinoma in Contrast-Enhanced Ultrasound Using Spatio-Temporal Diagnostic Semantics. IEEE J Biomed Health Inform 2020;24:2860-2869. [PMID: 32149699] [DOI: 10.1109/jbhi.2020.2977937]
Abstract
Atypical hepatocellular carcinoma (HCC) is very hard to distinguish from focal nodular hyperplasia (FNH) in routine imaging; however, little attention has been paid to this problem. This paper proposes a novel liver tumor computer-aided diagnosis (CAD) approach that extracts spatio-temporal semantics for atypical HCC. With respect to useful diagnostic semantics, our model automatically calculates three types of semantic features from equally down-sampled frames of contrast-enhanced ultrasound (CEUS). Thereafter, a support vector machine (SVM) classifier is trained to make the final diagnosis. Compared with traditional methods for diagnosing HCC, the proposed model has the advantages of lower computational complexity and the ability to handle atypical HCC cases. The experimental results show that our method achieved considerable performance and outperformed two traditional methods. The average accuracy reaches 94.40%, recall 94.76%, F1-score 94.62%, specificity 93.62%, and sensitivity 94.76%, indicating good merit for automatically diagnosing atypical HCC cases.
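The equal down-sampling of CEUS frames mentioned above can be sketched as follows; the frame count `n` and the selection rule are assumptions for illustration, not the paper's exact procedure.

```python
def equal_downsample(frames, n=8):
    """Pick n frames at equal temporal spacing across the whole cine,
    always keeping the first and last frame; short cines pass through."""
    if len(frames) <= n:
        return list(frames)
    step = (len(frames) - 1) / (n - 1)
    return [frames[round(i * step)] for i in range(n)]
```

Fixing the number of frames this way gives every cine the same temporal footprint before per-frame semantic features are computed for the downstream classifier.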