51
Lu Z, Li M, Annamalai A, Yang C. Recent advances in robot-assisted echography: combining perception, control and cognition. Cognitive Computation and Systems 2020. DOI: 10.1049/ccs.2020.0015
Affiliation(s)
- Zhenyu Lu
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
- Miao Li
- School of Power and Mechanical Engineering, Wuhan University, Wuhan, People's Republic of China
- Chenguang Yang
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
52
Wang F, Liu X, Yuan N, Qian B, Ruan L, Yin C, Jin C. Study on automatic detection and classification of breast nodule using deep convolutional neural network system. J Thorac Dis 2020;12:4690-4701. PMID: 33145042. PMCID: PMC7578508. DOI: 10.21037/jtd-19-3013
Abstract
Background: Conventional manual ultrasound scanning and human interpretation of the breast are operator-dependent, relatively slow, and error-prone. In this study, we used an Automated Breast Ultrasound (ABUS) machine for scanning and deep convolutional neural network (CNN) technology, a class of deep learning (DL) algorithms, for the detection and classification of breast nodules, aiming at automatic and accurate diagnosis. Methods: Two hundred ninety-three lesions from 194 patients with definite pathological diagnoses (117 benign and 176 malignant) were recruited as the case group. Another 70 patients without breast disease were enrolled as the control group. All breast scans were acquired with an ABUS machine and then randomly divided into training, verification, and test sets at a ratio of 7:1:2. On the training set, we built a detection model with a three-dimensional U-shaped convolutional neural network (3D U-Net) architecture to segment nodules from the background breast images. Residual blocks, attention connections, and hard mining were used to optimize the model, while random cropping, flipping, and rotation served as data augmentation. In the test phase, the current model was compared with those of previously reported studies. On the verification set, the effectiveness of the detection model was evaluated. In the classification phase, multiple convolutional layers and fully connected layers were applied to build a classification model that identifies whether a nodule is malignant. Results: Our detection model yielded a sensitivity of 91% with 1.92 false positives per automatically scanned image. The classification model achieved a sensitivity of 87.0%, a specificity of 88.0%, and an accuracy of 87.5%.
Conclusions: Deep CNNs combined with ABUS may be a promising tool for easy detection and accurate diagnosis of breast nodules.
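For readers reproducing such results, the reported sensitivity, specificity, and accuracy follow directly from confusion-matrix counts. A minimal sketch; the counts below are illustrative values chosen to match the reported 87.0%/88.0%/87.5%, not the study's raw data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate on malignant nodules
    specificity = tn / (tn + fp)                # true-negative rate on benign nodules
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction correct
    return sensitivity, specificity, accuracy

# Illustrative counts only; they reproduce 87.0% / 88.0% / 87.5%
sens, spec, acc = diagnostic_metrics(tp=87, fp=12, tn=88, fn=13)
```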
Affiliation(s)
- Feiqian Wang
- Department of Ultrasound, The First Affiliated Hospital of Xi'an Jiaotong University, China
- Xiaotong Liu
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China; School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Na Yuan
- Department of Ultrasound, The First Affiliated Hospital of Xi'an Jiaotong University, China
- Buyue Qian
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China; School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Litao Ruan
- Department of Ultrasound, The First Affiliated Hospital of Xi'an Jiaotong University, China
- Changchang Yin
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China; School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
- Ciping Jin
- National Engineering Lab for Big Data Analytics, Xi'an Jiaotong University, Xi'an, China; School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
53
Abstract
Background
Peripheral endovascular surgery remains a procedure with potential risks and side effects because X-rays and radiographic contrast agents are used for intraprocedural navigation of the instruments.
Project objective
The aim of the RoGUS-PAD (Robotic-Guided Ultrasound System for Peripheral Arterial Disease) project is to develop a robot-based, ultrasound-guided assistance system for peripheral endovascular interventions, in order to reduce and, where possible, avoid X-ray radiation and contrast agents, and to improve real-time visualization.
Materials and methods
For imaging, a 2D linear ultrasound transducer (L12-3, Philips Healthcare, Best, the Netherlands) was mounted on the end effector of a robotic arm (LBR iiwa 7 R800, KUKA, Augsburg, Germany). Initial experiments were performed on an ultrasound-compatible phantom developed specifically for this project. Image processing and robot control were handled by custom software written in C++.
Results
To test the technical feasibility of the project, we performed a semi-automatic 2D ultrasound scan of a peripheral artery on the phantom. The scan succeeded in 27 of 30 runs.
Conclusion
Our initial results confirm that developing a robot-based assistance system for ultrasound-guided peripheral endovascular interventions is technically feasible. This supports our ambition to translate the system into daily clinical practice.
54
Ramadan H, Lachqar C, Tairi H. Saliency-guided automatic detection and segmentation of tumor in breast ultrasound images. Biomed Signal Process Control 2020. DOI: 10.1016/j.bspc.2020.101945
|
55
Fujioka T, Katsuta L, Kubota K, Mori M, Kikuchi Y, Kato A, Oda G, Nakagawa T, Kitazume Y, Tateishi U. Classification of Breast Masses on Ultrasound Shear Wave Elastography using Convolutional Neural Networks. Ultrasonic Imaging 2020;42:213-220. PMID: 32501152. DOI: 10.1177/0161734620932609
Abstract
We aimed to use deep learning with convolutional neural networks (CNNs) to discriminate images of benign and malignant breast masses on ultrasound shear wave elastography (SWE). We retrospectively gathered 158 images of benign masses and 146 images of malignant masses as SWE training data. A deep learning model was constructed using several CNN architectures (Xception, InceptionV3, InceptionResNetV2, DenseNet121, DenseNet169, and NASNetMobile) with 50, 100, and 200 epochs. We analyzed SWE images of 38 benign masses and 35 malignant masses as test data. Two radiologists interpreted these test data through a consensus reading using a 5-point visual color assessment (SWEc) and the mean elasticity value in kPa (SWEe). Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. The best CNN model (DenseNet169 with 100 epochs), SWEc, and SWEe had sensitivities of 0.857, 0.829, and 0.914 and specificities of 0.789, 0.737, and 0.763, respectively. The CNNs exhibited a mean AUC of 0.870 (range, 0.844-0.898), while SWEc and SWEe had AUCs of 0.821 and 0.855, respectively. The CNNs had a diagnostic performance equal to or better than the radiologist readings. DenseNet169 with 100 epochs, Xception with 50 epochs, and Xception with 100 epochs performed better than SWEc (P = 0.018-0.037). Deep learning with CNNs exhibited an AUC equal to or higher than that of radiologists when discriminating benign from malignant breast masses on ultrasound SWE.
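The AUC values compared above can be computed without any curve fitting: for a binary task, AUC equals the probability that a randomly chosen malignant case receives a higher score than a randomly chosen benign one (the Mann-Whitney statistic, with ties counted as half). A minimal sketch with hypothetical 5-point scores, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive (malignant) case scores higher
    than a negative (benign) case; ties contribute half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical 5-point visual scores, in the spirit of SWEc
malignant = [5, 4, 4, 3]
benign = [2, 3, 1, 2]
a = auc(malignant, benign)
```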
Affiliation(s)
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Leona Katsuta
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Kazunori Kubota
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Department of Radiology, Dokkyo Medical University, Tochigi, Japan
- Mio Mori
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Yuka Kikuchi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Arisa Kato
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Goshi Oda
- Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, Tokyo, Japan
- Tsuyoshi Nakagawa
- Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, Tokyo, Japan
- Yoshio Kitazume
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Ukihide Tateishi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
56
Zhang E, Seiler S, Chen M, Lu W, Gu X. BIRADS features-oriented semi-supervised deep learning for breast ultrasound computer-aided diagnosis. Phys Med Biol 2020;65:125005. PMID: 32155605. DOI: 10.1088/1361-6560/ab7e7d
Abstract
We propose a novel BIRADS-SSDL network that integrates clinically approved breast lesion characteristics (BIRADS features) into task-oriented semi-supervised deep learning (SSDL) for accurate diagnosis of ultrasound (US) images with a small training dataset. Breast US images are converted to BIRADS-oriented feature maps (BFMs) using a distance transformation coupled with a Gaussian filter. The converted BFMs are then used as the input of an SSDL network, which performs unsupervised stacked convolutional auto-encoder (SCAE) image reconstruction guided by lesion classification. This integrated multi-task learning allows the SCAE to extract image features under the constraints of the lesion classification task, while lesion classification is achieved by feeding the SCAE encoder features to a convolutional network. We trained the BIRADS-SSDL network with an alternating learning strategy that balances the reconstruction error and the classification label prediction error. To demonstrate the effectiveness of our approach, we evaluated it on two breast US image datasets. We compared the performance of the BIRADS-SSDL network with conventional SCAE and SSDL methods that use the original images as inputs, as well as with an SCAE that uses BFMs as inputs. The experimental results show that BIRADS-SSDL ranked best among the four networks, with classification accuracies of 94.23 ± 3.33% and 84.38 ± 3.11% on the two datasets. In experiments across two datasets collected from different institutions and US devices, BIRADS-SSDL generalized across the different US devices and institutions without overfitting to a single dataset and achieved satisfactory results. Furthermore, we investigated the performance of the proposed method while varying the model training strategies, lesion boundary accuracy, and Gaussian filter parameters. The results showed that a pre-training strategy can speed up model convergence but does not improve classification accuracy on the testing dataset, and that classification accuracy decreases as segmentation accuracy decreases. The proposed BIRADS-SSDL achieved the best results among the compared methods in each case and can handle multiple different datasets under one model. Compared with state-of-the-art methods, BIRADS-SSDL could be promising for effective breast US computer-aided diagnosis using small datasets.
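The paper's exact BFM construction is not given in the abstract, but its stated ingredients, a distance transform of the lesion coupled with a Gaussian filter, can be sketched on a toy binary mask. The grid, the sigma value, and the use of a 4-connected BFS distance below are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import deque

def distance_map(mask):
    """4-connected BFS distance (in pixels) from each pixel to the lesion mask."""
    h, w = len(mask), len(mask[0])
    dist = [[math.inf] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def feature_map(mask, sigma=2.0):
    """Gaussian-weighted distance map, loosely in the spirit of a BFM:
    high response on the lesion, decaying smoothly with distance from it."""
    return [[math.exp(-(d * d) / (2.0 * sigma * sigma)) for d in row]
            for row in distance_map(mask)]

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
fm = feature_map(mask)
```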
Affiliation(s)
- Erlei Zhang
- College of Information Science and Technology, Northwest University, Xi'an 710069, People's Republic of China; Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
57
Jeong Y, Kim JH, Chae HD, Park SJ, Bae JS, Joo I, Han JK. Deep learning-based decision support system for the diagnosis of neoplastic gallbladder polyps on ultrasonography: Preliminary results. Sci Rep 2020;10:7700. PMID: 32382062. PMCID: PMC7205977. DOI: 10.1038/s41598-020-64205-y
Abstract
Ultrasonography (US) has been considered the imaging modality of choice for gallbladder (GB) polyps; however, it has limitations in differentiating nonneoplastic from neoplastic polyps. We developed a deep learning-based decision support system (DL-DSS) for the differential diagnosis of GB polyps on US and investigated its usefulness. We retrospectively collected data from 535 patients, divided into a development dataset (n = 437) and a test dataset (n = 98). A binary-classification convolutional neural network model was developed by transfer learning. Using the test dataset, three radiologists with different experience levels retrospectively graded the possibility of a neoplastic polyp on a 5-point confidence scale. The reviewers were then asked to re-evaluate their grades with the assistance of the DL-DSS. The areas under the curve (AUCs) of the three reviewers were 0.94, 0.78, and 0.87. The DL-DSS alone showed an AUC of 0.92. With DL-DSS assistance, the reviewers' AUCs improved to 0.95, 0.91, and 0.91, and their specificity improved (from 65.1-85.7% to 71.4-93.7%). The intraclass correlation coefficient (ICC) improved from 0.87 to 0.93. In conclusion, the DL-DSS could be used as an assistant tool to narrow the gap between reviewers and to reduce the false-positive rate.
Affiliation(s)
- Younbeom Jeong
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Jung Hoon Kim
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Hee-Dong Chae
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Sae-Jin Park
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Jae Seok Bae
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Ijin Joo
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Joon Koo Han
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
58
Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020;93:20190580. PMID: 31742424. PMCID: PMC7362917. DOI: 10.1259/bjr.20190580
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze patient information, and the results can be used to assist clinicians in their decision-making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of past cases in the population. CAD systems can be developed to provide decision support for many applications in patient care, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, and recurrence and prognosis prediction. The state-of-the-art machine learning technique known as deep learning (DL) has revolutionized speech and text recognition as well as computer vision. The potential for major breakthroughs by DL in medical image analysis and other CAD applications for patient care has brought unprecedented excitement about applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we provide an overview of recent developments in CAD using DL for breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into clinical workflow.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
- Ravi K. Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
59
Hu SY, Xu H, Li Q, Telfer BA, Brattain LJ, Samir AE. Deep Learning-Based Automatic Endometrium Segmentation and Thickness Measurement for 2D Transvaginal Ultrasound. Annu Int Conf IEEE Eng Med Biol Soc 2020;2019:993-997. PMID: 31946060. DOI: 10.1109/embc.2019.8856367
Abstract
Endometrial thickness is closely related to gynecological function and is an important biomarker in transvaginal ultrasound (TVUS) examinations for assessing female reproductive health. Manual measurement is time-consuming and subject to high inter- and intra-observer variability. In this paper, we present a fully automated endometrial thickness measurement method using deep learning. Our pipeline consists of 1) endometrium segmentation using a VGG-based U-Net, and 2) endometrial thickness estimation using a medial axis transformation. We conducted experimental studies on 137 2D TVUS cases (74/63 secretory/proliferative phase). On a test set of 27 cases (277 images), the segmentation Dice score is 0.83. For thickness measurement, we achieved a mean absolute error of 1.23/1.38 mm and a root mean squared error of 1.79/1.85 mm on two different test sets. These results are well within the clinically acceptable range of ±2 mm. Furthermore, our phase-stratified analysis shows that the measurement variance in the secretory phase is higher than in the proliferative phase, largely due to the high variability of the endometrium's appearance in the secretory phase. Future work will extend the current algorithm toward different clinical outcomes for a broader spectrum of clinical applications.
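Once the endometrium is segmented, the thickness estimate itself is geometrically simple. The paper uses a medial axis transformation; the sketch below simplifies that to the largest column-wise distance between hypothetical upper and lower boundary curves. The boundary values and pixel spacing are made up for illustration:

```python
def endometrial_thickness_mm(upper, lower, pixel_mm):
    """Maximum column-wise distance between the segmented upper and lower
    boundaries, converted from pixels to millimetres (a simplification of
    the medial-axis measurement described in the paper)."""
    assert len(upper) == len(lower), "boundaries must cover the same columns"
    return max(low - up for up, low in zip(upper, lower)) * pixel_mm

# Hypothetical boundary rows (in pixels) and a 0.1 mm pixel spacing
thickness = endometrial_thickness_mm([10, 9, 9, 10], [22, 24, 23, 21], 0.1)
```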
60
Antico M, Sasazawa F, Dunnhofer M, Camps SM, Jaiprakash AT, Pandey AK, Crawford R, Carneiro G, Fontanarosa D. Deep Learning-Based Femoral Cartilage Automatic Segmentation in Ultrasound Imaging for Guidance in Robotic Knee Arthroscopy. Ultrasound Med Biol 2020;46:422-435. PMID: 31767454. DOI: 10.1016/j.ultrasmedbio.2019.10.015
Abstract
Knee arthroscopy is a minimally invasive surgery used in the treatment of intra-articular knee pathology that may cause unintended damage to femoral cartilage. An ultrasound (US)-guided autonomous robotic platform for knee arthroscopy can be envisioned to minimise these risks and possibly improve surgical outcomes. The first tool necessary for reliable guidance during robotic surgery is an automatic segmentation algorithm that outlines the regions at risk. In this work, we studied the feasibility of using a state-of-the-art deep neural network (UNet) to automatically segment femoral cartilage imaged with dynamic volumetric US (at a refresh rate of 1 Hz) under simulated surgical conditions. Six volunteers were scanned, yielding 18,278 2-D US images extracted from 35 dynamic 3-D US scans, all manually labelled. The UNet was evaluated using five-fold cross-validation, with an average of 15,531 training and 3,124 testing labelled images per fold. An intra-observer study was performed to assess intra-observer variability due to inherent US physical properties. To account for this variability, a novel metric, the Dice coefficient with boundary uncertainty (DSCUB), was proposed and used to test the algorithm. The algorithm performed comparably to an experienced orthopaedic surgeon, with a DSCUB of 0.87. The proposed UNet has the potential to localise femoral cartilage in robotic knee arthroscopy with clinical accuracy.
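The baseline metric underlying DSCUB is the ordinary Dice similarity coefficient; DSCUB additionally tolerates disagreement within an uncertainty band around the boundary, which is not reproduced here. A plain-Dice sketch on toy masks:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flattened lists):
    twice the overlap divided by the total number of foreground pixels."""
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    return 2.0 * intersection / (sum(pred) + sum(truth))

# Toy prediction vs. ground truth: 2 overlapping pixels, 3 foreground each
pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
score = dice(pred, truth)
```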
Affiliation(s)
- M Antico
- School of Chemistry, Physics and Mechanical Engineering, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia; Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia
- F Sasazawa
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, 060-8638, Japan
- M Dunnhofer
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, 33100, Italy
- S M Camps
- Faculty of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, the Netherlands; Oncology Solutions Department, Philips Research, 5656 AE Eindhoven, the Netherlands
- A T Jaiprakash
- Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia; School of Electrical Engineering, Computer Science, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia
- A K Pandey
- Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia; School of Electrical Engineering, Computer Science, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia
- R Crawford
- School of Chemistry, Physics and Mechanical Engineering, Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD 4000, Australia; Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia
- G Carneiro
- Australian Institute for Machine Learning, School of Computer Science, the University of Adelaide, Adelaide, SA 5005, Australia
- D Fontanarosa
- Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, QLD 4059, Australia; School of Clinical Sciences, Queensland University of Technology, Gardens Point Campus, 2 George St, Brisbane, QLD 4000, Australia
61
Cao Z, Yang G, Chen Q, Chen X, Lv F. Breast tumor classification through learning from noisy labeled ultrasound images. Med Phys 2019;47:1048-1057. PMID: 31837239. DOI: 10.1002/mp.13966
Abstract
PURPOSE: To train deep learning models to differentiate benign and malignant breast tumors in ultrasound images, many training samples with clean labels are needed. In general, biopsy results can be used as benign/malignant labels, but most clinical samples do not have biopsy results. Previous works have proposed generating benign/malignant labels from Breast Imaging Reporting and Data System (BI-RADS) ratings. However, this approach produces noisy labels: the benign/malignant labels derived from BI-RADS diagnoses may be inconsistent with the ground truth. Consequently, deep models overfit the noisy labels and generalize poorly. In this work, we focus on reducing the negative effect of noisy labels when they are used to train breast tumor classification models. METHODS: We propose an approach called the noise filter network (NF-Net) to address the problem of noisy labels when training breast tumor classification models. Specifically, to prevent deep models from overfitting the noisy labels, we incorporate two softmax layers for classification. Additionally, to strengthen the effect of clean labels, we design a teacher-student module for distilling the knowledge of clean labels. RESULTS: We conducted extensive comparisons with existing works on addressing noisy labels. Our method achieves a classification accuracy of 73%, with a precision of 69%, a recall of 80%, and an F1-score of 0.74, significantly better than the existing state-of-the-art works on addressing noisy labels. CONCLUSIONS: This work provides a means to overcome the label shortage problem in training breast tumor classification models: benign/malignant labels can be generated from BI-RADS ratings, and although this introduces noisy labels, the design of NF-Net effectively reduces their negative effect.
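NF-Net's exact dual-softmax and teacher-student losses are not specified in the abstract; the generic knowledge-distillation idea it builds on can be sketched as a weighted sum of a hard-label cross-entropy and a cross-entropy against the teacher's temperature-softened prediction. The temperature, weighting, and logits below are illustrative assumptions, not the paper's settings:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label,
                      temperature=2.0, alpha=0.5):
    """alpha * CE(student, possibly-noisy hard label)
       + (1 - alpha) * CE(student softened, teacher softened)."""
    p_student = softmax(student_logits)
    p_soft = softmax(student_logits, temperature)
    p_teacher = softmax(teacher_logits, temperature)
    hard = -math.log(p_student[hard_label])
    soft = -sum(t * math.log(s) for t, s in zip(p_teacher, p_soft))
    return alpha * hard + (1 - alpha) * soft

# A confident benign prediction penalized less under the matching label
loss_clean = distillation_loss([3.0, 0.0], [3.0, 0.0], hard_label=0)
loss_noisy = distillation_loss([3.0, 0.0], [3.0, 0.0], hard_label=1)
```

The soft term keeps the student close to the teacher even when the hard label is wrong, which is the mechanism that dampens noisy labels.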
Affiliation(s)
- Zhantao Cao
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Guowu Yang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Qin Chen
- Sichuan Provincial People's Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610072, China
- Xiaolong Chen
- Center of Statistical Research and School of Statistics, Southwestern University of Finance and Economics, Chengdu, 611130, China
- Fengmao Lv
- Center of Statistical Research and School of Statistics, Southwestern University of Finance and Economics, Chengdu, 611130, China
62
Xia S, Yao J, Zhou W, Dong Y, Xu S, Zhou J, Zhan W. A computer-aided diagnosing system in the evaluation of thyroid nodules-experience in a specialized thyroid center. World J Surg Oncol 2019;17:210. PMID: 31810469. PMCID: PMC6898946. DOI: 10.1186/s12957-019-1752-z
Abstract
Background The evaluation of thyroid nodules with ultrasonography has created a large burden for radiologists. Artificial intelligence technology has been rapidly developed in recent years to reduce the cost of labor and improve the differentiation of thyroid malignancies. This study aimed to investigate the diagnostic performance of a novel computer-aided diagnosing system (CADs: S-detect) for the ultrasound (US) interpretation of thyroid nodule subtypes in a specialized thyroid center. Methods Our study prospectively included 180 thyroid nodules that underwent ultrasound interpretation. The CADs and radiologist assessed all nodules. The ultrasonographic features of different subtypes were analyzed, and the diagnostic performances of the CADs and radiologist were compared. Results There were seven subtypes of thyroid nodules, among which papillary thyroid cancer (PTC) accounted for 50.6% and follicular thyroid carcinoma (FTC) accounted for 2.2%. Among all thyroid nodules, the CADs presented a higher sensitivity and lower specificity than the radiologist (90.5% vs 81.1%; 41.2% vs 83.5%); the radiologist had a higher accuracy than the CADs (82.2% vs 67.2%) for diagnosing malignant thyroid nodules. The accuracy of the CADs was not as good as that of the radiologist in diagnosing PTCs (70.9% vs 82.1%). The CADs and radiologist presented accuracies of 43.8% and 60.9% in identifying FTCs, respectively. Conclusions The ultrasound CADs presented a higher sensitivity for identifying malignant thyroid nodules than experienced radiologists. The CADs was not as good as experienced radiologists in a specialized thyroid center in identifying PTCs. Radiologists maintained a higher specificity than the CADs for FTC detection.
Affiliation(s)
- Shujun Xia
- Department of Ultrasound, Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Rui Jin Er Road, Huang Pu District, Shanghai, 200025, People's Republic of China
- Jiejie Yao
- Department of Ultrasound, Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Rui Jin Er Road, Huang Pu District, Shanghai, 200025, People's Republic of China
- Wei Zhou
- Department of Ultrasound, Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Rui Jin Er Road, Huang Pu District, Shanghai, 200025, People's Republic of China
- Yijie Dong
- Department of Ultrasound, Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Rui Jin Er Road, Huang Pu District, Shanghai, 200025, People's Republic of China
- Shangyan Xu
- Department of Ultrasound, Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Rui Jin Er Road, Huang Pu District, Shanghai, 200025, People's Republic of China
- Jianqiao Zhou
- Department of Ultrasound, Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Rui Jin Er Road, Huang Pu District, Shanghai, 200025, People's Republic of China
- Weiwei Zhan
- Department of Ultrasound, Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Rui Jin Er Road, Huang Pu District, Shanghai, 200025, People's Republic of China
63
|
Application of Compressive Sensing to Ultrasound Images: A Review. BIOMED RESEARCH INTERNATIONAL 2019; 2019:7861651. [PMID: 31828130 PMCID: PMC6885152 DOI: 10.1155/2019/7861651] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Revised: 07/24/2019] [Accepted: 10/15/2019] [Indexed: 11/17/2022]
Abstract
Compressive sensing (CS) enables data acquisition below the Nyquist rate, making it an attractive solution in the field of medical imaging, and it has been extensively used for ultrasound (US) compression and sparse recovery. In practice, CS reduces data sensing, transmission, and storage. Compressive sensing relies on the sparsity of data; i.e., the data should be sparse in the original or in some transformed domain. A look at the literature reveals that a rich variety of algorithms has been suggested to accurately recover data from far fewer samples, with tradeoffs in efficiency. This paper reviews a number of significant CS algorithms used to recover US images from undersampled data, along with a discussion of CS in 3D US imaging. Sparse recovery algorithms applied to US are classified into five groups; algorithms in each group are discussed and summarized based on their unique technique, compression ratio, sparsifying transform, use of 3D ultrasound, and deep learning. Research gaps and future directions are discussed in the conclusion. This study aims to benefit young researchers intending to work in the area of CS and its applications, specifically to US.
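The CS recovery step described above (reconstructing a sparse signal from far fewer measurements than the Nyquist rate requires) can be sketched with orthogonal matching pursuit, one of the greedy recovery algorithms a review like this covers. This is a minimal illustration on a synthetic 1-D signal, not an implementation from any surveyed paper; all sizes and seeds are arbitrary:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the dictionary column most correlated with the residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                           # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                      # 64 measurements for a length-256 signal
x_rec = omp(A, y, k)                           # near-exact recovery of the sparse signal
```

With the signal sparse in its original domain and a Gaussian sensing matrix, the greedy support selection recovers the nonzero entries from a quarter of the nominal samples, which is the tradeoff the review surveys across algorithm families.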
Collapse
|
64
|
Gutiérrez-Martínez J, Pineda C, Sandoval H, Bernal-González A. Computer-aided diagnosis in rheumatic diseases using ultrasound: an overview. Clin Rheumatol 2019; 39:993-1005. [DOI: 10.1007/s10067-019-04791-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 08/07/2019] [Accepted: 09/21/2019] [Indexed: 12/12/2022]
|
65
|
Liu R, Li H, Liang F, Yao L, Liu J, Li M, Cao L, Song B. Diagnostic accuracy of different computer-aided diagnostic systems for malignant and benign thyroid nodules classification in ultrasound images: A systematic review and meta-analysis protocol. Medicine (Baltimore) 2019; 98:e16227. [PMID: 31335673 PMCID: PMC6709132 DOI: 10.1097/md.0000000000016227] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/05/2019] [Accepted: 06/06/2019] [Indexed: 02/01/2023] Open
Abstract
OBJECTIVE The aim of this study was to determine the diagnostic accuracy of different computer-aided diagnostic (CAD) systems for thyroid nodule classification. METHODS A systematic search of the literature was conducted from inception until March 2019 using PubMed, EMBASE, Web of Science, and the Cochrane Library. Literature selection and data extraction were conducted by 2 independent reviewers. Numerical values for sensitivity and specificity were obtained from false negative (FN), false positive (FP), true negative (TN), and true positive (TP) rates, presented alongside graphical representations with boxes marking the values and horizontal lines showing the confidence intervals (CIs). Summary receiver operating characteristic (SROC) curves were applied to assess the performance of the diagnostic tests. Data were processed using Review Manager 5.3 and Stata 15. The methodological quality of the included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. TRIAL REGISTRATION NUMBER PROSPERO CRD42019132540.
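The sensitivity and specificity extraction described in the methods (derived from TP, FP, TN, and FN counts) can be sketched as below. The counts are hypothetical, and the Wald-style confidence intervals are a simple stand-in for the pooled estimates a tool like Review Manager or Stata would compute:

```python
import math

def diagnostic_metrics(tp, fp, tn, fn, z=1.96):
    """Sensitivity and specificity with approximate 95% Wald CIs from a 2x2 table."""
    def prop_ci(successes, total):
        p = successes / total
        half = z * math.sqrt(p * (1 - p) / total)
        return p, (max(0.0, p - half), min(1.0, p + half))
    sens, sens_ci = prop_ci(tp, tp + fn)   # TP / (TP + FN)
    spec, spec_ci = prop_ci(tn, tn + fp)   # TN / (TN + FP)
    return {"sensitivity": sens, "sens_ci": sens_ci,
            "specificity": spec, "spec_ci": spec_ci}

# hypothetical counts for a single CAD study
m = diagnostic_metrics(tp=90, fp=12, tn=70, fn=10)
```

Each per-study point and its CI correspond to one box with horizontal whiskers in the forest-plot style the protocol describes; the SROC curve is then fit across these per-study (sensitivity, 1 - specificity) pairs.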
Collapse
Affiliation(s)
- Ruisheng Liu
- The First Hospital of Lanzhou University
- The First Clinical Medical College of Lanzhou University
| | - Huijuan Li
- School of Public Health, Evidence-based Social Science Research Center
- Evidence-based Medicine Center, School of Basic Medical Sciences, Lanzhou University, Lanzhou
| | - Fuxiang Liang
- The First Hospital of Lanzhou University
- The First Clinical Medical College of Lanzhou University
| | - Liang Yao
- Chinese Medicine Faculty of Hong Kong Baptist University, Kowloon Tong, Hong Kong
| | - Jieting Liu
- The First Clinical Medical College of Lanzhou University
- The Second hospital of Lanzhou University, Lanzhou, P.R. China
| | - Meixuan Li
- School of Public Health, Evidence-based Social Science Research Center
- Evidence-based Medicine Center, School of Basic Medical Sciences, Lanzhou University, Lanzhou
| | - Liujiao Cao
- School of Public Health, Evidence-based Social Science Research Center
- Evidence-based Medicine Center, School of Basic Medical Sciences, Lanzhou University, Lanzhou
| | - Bing Song
- The First Hospital of Lanzhou University
- The First Clinical Medical College of Lanzhou University
| |
Collapse
|
66
|
Zhang E, Seiler S, Chen M, Lu W, Gu X. Boundary-aware Semi-supervised Deep Learning for Breast Ultrasound Computer-Aided Diagnosis. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2019:947-950. [PMID: 31946050 DOI: 10.1109/embc.2019.8856539] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Breast ultrasound (US) is an effective imaging modality for breast cancer diagnosis. US computer-aided diagnosis (CAD) systems have been developed for decades and have employed either conventional handcrafted features or modern automatically learned deep features, the former relying on clinical experience and the latter demanding large datasets. In this paper, we developed a novel boundary-aware semi-supervised deep learning (BASDL) method that integrates clinically approved breast lesion boundary characteristics (features) into semi-supervised deep learning (SDL) to achieve accurate diagnosis with a small training dataset. Original breast US images are converted to boundary-oriented feature maps (BFMs) using a distance transformation coupled with a Gaussian filter. The converted BFMs are then used as the input of the SDL network, which is characterized as lesion-classification-guided unsupervised image reconstruction based on a stacked convolutional auto-encoder (SCAE). We compared the performance of BASDL with the conventional SCAE method and the SDL method that used the original images as inputs, as well as the SCAE method that used BFMs as inputs. Experimental results on two breast US datasets show that BASDL ranked best among the four networks, with a classification accuracy of around 92.00±2.38%, indicating that BASDL could be promising for effective breast US lesion CAD using small datasets.
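A rough sketch of the boundary-oriented feature map (BFM) idea, a distance transformation coupled with a Gaussian filter, is given below. The exact construction used by BASDL is not specified in the abstract, so this version (exponential decay of boundary distance on a toy circular lesion mask, then Gaussian smoothing) is only an assumed illustration:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def boundary_feature_map(mask, sigma=2.0):
    """Distance to the lesion contour from both sides, mapped so values
    peak at the boundary, then Gaussian-smoothed (an assumed BFM variant)."""
    d_in = distance_transform_edt(mask)       # inside: distance to background
    d_out = distance_transform_edt(1 - mask)  # outside: distance to lesion
    dist = d_in + d_out                       # distance to the contour everywhere
    return gaussian_filter(np.exp(-dist), sigma=sigma)

# toy circular "lesion" mask standing in for a segmented breast lesion
yy, xx = np.mgrid[:64, :64]
mask = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2).astype(np.uint8)
bfm = boundary_feature_map(mask)
# responses concentrate near the lesion boundary, not in its interior
```

A map like this makes boundary shape (smooth vs. spiculated margins, the clinically meaningful cue) the dominant signal the downstream network sees, which matches the paper's motivation for feeding BFMs rather than raw images into the SDL network.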
Collapse
|
67
|
Norton JC, Slawinski PR, Lay HS, Martin JW, Cox BF, Cummins G, Desmulliez MP, Clutton RE, Obstein KL, Cochran S, Valdastri P. Intelligent magnetic manipulation for gastrointestinal ultrasound. Sci Robot 2019; 4:eaav7725. [PMID: 31380501 PMCID: PMC6677276 DOI: 10.1126/scirobotics.aav7725] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Diagnostic endoscopy in the gastrointestinal tract has remained largely unchanged for decades and is limited to the visualization of the tissue surface, the collection of biopsy samples for diagnosis, and minor interventions such as clipping or tissue removal. In this work, we present the autonomous servoing of a magnetic capsule robot for in situ, subsurface diagnostics of microanatomy. We investigated and showed the feasibility of closed-loop magnetic control using digitized microultrasound (μUS) feedback; this is crucial for obtaining robust imaging in an unknown and unconstrained environment. We demonstrated the functionality of an autonomous servoing algorithm that uses μUS feedback, both in benchtop trials and in vivo in a porcine model. We validated this magnetic-μUS servoing in instances of autonomous linear probe motion and were able to locate markers in an agar phantom with 1.0 ± 0.9 mm position accuracy using a fusion of robot localization and μUS image information. This work demonstrates the feasibility of closed-loop robotic μUS imaging in the bowel without the need for either a rigid physical link between the transducer and extracorporeal tools or complex manual manipulation.
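The closed-loop servoing idea (read sensor feedback, compute an error, command actuation until the error is small) can be sketched as a toy 1-D proportional controller. This is not the authors' algorithm, which fuses robot localization with μUS image information; the plant, gain, and tolerance below are all invented for illustration:

```python
def servo_to_target(read_position, command_velocity, target,
                    kp=0.8, tol=0.05, max_steps=200):
    """Generic 1-D proportional servo loop: sense, compute error, actuate."""
    for _ in range(max_steps):
        err = target - read_position()
        if abs(err) < tol:
            return True               # converged within tolerance
        command_velocity(kp * err)    # proportional command
    return False

class ToyProbe:
    """Integrator plant standing in for the magnetically actuated capsule."""
    def __init__(self):
        self.pos = 0.0
    def read_position(self):
        return self.pos               # stands in for a μUS-derived position estimate
    def command_velocity(self, v):
        self.pos += 0.1 * v           # simple first-order dynamics, dt = 0.1

probe = ToyProbe()
ok = servo_to_target(probe.read_position, probe.command_velocity, target=1.0)
```

The essential property the paper exploits is the same: as long as the feedback signal (here a scalar position, there μUS imaging) tracks the true state, the loop converges without a rigid physical link between actuator and probe.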
Collapse
Affiliation(s)
| | | | | | | | | | | | | | | | - Keith L. Obstein
- STORM Lab USA, Vanderbilt University, Nashville, USA
- Vanderbilt University Medical Center, Nashville, USA
| | - Sandy Cochran
- University of Glasgow, School of Mechanical Engineering, Glasgow, UK
| | | |
Collapse
|
68
|
Waymel Q, Badr S, Demondion X, Cotten A, Jacques T. Impact of the rise of artificial intelligence in radiology: What do radiologists think? Diagn Interv Imaging 2019; 100:327-336. [PMID: 31072803 DOI: 10.1016/j.diii.2019.03.015] [Citation(s) in RCA: 84] [Impact Index Per Article: 16.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Revised: 03/21/2019] [Accepted: 03/29/2019] [Indexed: 12/18/2022]
Abstract
PURPOSE The purpose of this study was to assess the perception, knowledge, wishes and expectations of a sample of French radiologists towards the rise of artificial intelligence (AI) in radiology. MATERIAL AND METHOD A General Data Protection Regulation-compliant electronic survey was sent by e-mail to the 617 radiologists registered in the French departments of Nord and Pas-de-Calais (93 radiology residents and 524 senior radiologists), from both public and private institutions. The survey included 42 questions focusing on AI in radiology, and data were collected between January 16th and January 31st, 2019. The answers were analyzed together by a senior radiologist and a radiology resident. RESULTS A total of 70 radiology residents and 200 senior radiologists participated in the survey, corresponding to a response rate of 43.8% (270/617). One hundred ninety-eight radiologists (198/270; 73.3%) estimated they had received insufficient prior information on AI. Two hundred and fifty-five respondents (255/270; 94.4%) would consider attending generic continuous medical education in this field and 187 (187/270; 69.3%) a technically advanced training on AI. Two hundred and fourteen respondents (214/270; 79.3%) thought that AI will have a positive impact on their future practice. The highest expectations were the lowering of imaging-related medical errors (219/270; 81%), followed by the lowering of the interpretation time of each examination (201/270; 74.4%) and an increase in the time spent with patients (141/270; 52.2%). CONCLUSION While respondents felt they had received insufficient prior information on AI, they are willing to improve their knowledge and technical skills in this field. They share an optimistic view and think that AI will have a positive impact on their future practice. A lower risk of imaging-related medical errors and an increase in the time spent with patients are among their main expectations.
Collapse
Affiliation(s)
- Q Waymel
- Department of Musculoskeletal Radiology, University Hospital of Lille, 59037 Lille, France
| | - S Badr
- Department of Musculoskeletal Radiology, University Hospital of Lille, 59037 Lille, France
| | - X Demondion
- Department of Musculoskeletal Radiology, University Hospital of Lille, 59037 Lille, France; Lille Medical School, University of Lille, 59045 Lille, France
| | - A Cotten
- Department of Musculoskeletal Radiology, University Hospital of Lille, 59037 Lille, France; Lille Medical School, University of Lille, 59045 Lille, France
| | - T Jacques
- Department of Musculoskeletal Radiology, University Hospital of Lille, 59037 Lille, France; Lille Medical School, University of Lille, 59045 Lille, France.
| |
Collapse
|
69
|
Nishida N, Yamakawa M, Shiina T, Kudo M. Current status and perspectives for computer-aided ultrasonic diagnosis of liver lesions using deep learning technology. Hepatol Int 2019; 13:416-421. [PMID: 30790230 DOI: 10.1007/s12072-019-09937-4] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/12/2018] [Accepted: 02/02/2019] [Indexed: 12/13/2022]
Abstract
An ultrasound (US) examination is a common noninvasive technique widely applied for the diagnosis of a variety of diseases. With the rapid development of US equipment, many US images have been accumulated and are now available for building databases for the development of computer-aided US diagnosis with deep learning technology. However, because of the unique characteristics of US images, some issues need to be resolved before a computer-aided diagnosis (CAD) system can be established in this field. For example, compared with other modalities, the quality of a US image is currently highly operator dependent, and the conditions of the examination also directly affect image quality. So far, these factors have hampered the application of deep learning-based technology in the field of US diagnosis. Nevertheless, the development of CAD and US technologies will contribute to increased diagnostic quality, facilitate the development of remote medicine, and reduce national health care costs through the early diagnosis of diseases. From this point of view, the approach has enough potential to induce a paradigm shift in the field of US imaging and the diagnosis of liver diseases.
Collapse
Affiliation(s)
- Naoshi Nishida
- Department of Gastroenterology and Hepatology, Faculty of Medicine, Kindai University, 337-2 Ohno-higashi, Osaka-sayama, Osaka, 589-8511, Japan.
| | - Makoto Yamakawa
- Department of Human Health Sciences, Graduate School of Medicine, Kyoto University, Kyoto, Japan
| | - Tsuyoshi Shiina
- Department of Human Health Sciences, Graduate School of Medicine, Kyoto University, Kyoto, Japan
| | - Masatoshi Kudo
- Department of Gastroenterology and Hepatology, Faculty of Medicine, Kindai University, 337-2 Ohno-higashi, Osaka-sayama, Osaka, 589-8511, Japan
| |
Collapse
|
70
|
Zhou LQ, Wang JY, Yu SY, Wu GG, Wei Q, Deng YB, Wu XL, Cui XW, Dietrich CF. Artificial intelligence in medical imaging of the liver. World J Gastroenterol 2019; 25:672-682. [PMID: 30783371 PMCID: PMC6378542 DOI: 10.3748/wjg.v25.i6.672] [Citation(s) in RCA: 98] [Impact Index Per Article: 19.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Revised: 12/24/2018] [Accepted: 01/09/2019] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI), particularly deep learning algorithms, is gaining extensive attention for its excellent performance in image-recognition tasks. These algorithms can automatically make quantitative assessments of complex medical image characteristics and achieve increased diagnostic accuracy with higher efficiency. AI is widely used and increasingly popular in the medical imaging of the liver, including radiology, ultrasound, and nuclear medicine. AI can assist physicians in making more accurate and reproducible imaging diagnoses and can also reduce physicians' workload. This article illustrates basic technical knowledge about AI, including traditional machine learning and deep learning algorithms, especially convolutional neural networks, and their clinical application in the medical imaging of liver diseases, such as detecting and evaluating focal liver lesions, facilitating treatment, and predicting treatment response. We conclude that machine-assisted medical services will be a promising solution for future liver medical care. Lastly, we discuss the challenges and future directions of the clinical application of deep learning techniques.
Collapse
Affiliation(s)
- Li-Qiang Zhou
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
| | - Jia-Yu Wang
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
| | - Song-Yuan Yu
- Department of Ultrasound, Tianyou Hospital Affiliated to Wuhan University of Technology, Wuhan 430030, Hubei Province, China
| | - Ge-Ge Wu
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
| | - Qi Wei
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
| | - You-Bin Deng
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
| | - Xing-Long Wu
- School of Mathematics and Computer Science, Wuhan Textile University, Wuhan 430200, Hubei Province, China
| | - Xin-Wu Cui
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
| | - Christoph F Dietrich
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Medical Clinic 2, Caritas-Krankenhaus Bad Mergentheim, Academic Teaching Hospital of the University of Würzburg, Würzburg 97980, Germany
| |
Collapse
|
71
|
Classification of Liver Diseases Based on Ultrasound Image Texture Features. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9020342] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
This paper discusses using computer-aided diagnosis (CAD) to distinguish between hepatocellular carcinoma (HCC), the most common type of primary liver malignancy and a leading cause of death in people with cirrhosis worldwide, and liver abscess, based on ultrasound image texture features and a support vector machine (SVM) classifier. From 79 cases of liver disease, comprising 44 cases of liver cancer and 35 cases of liver abscess, this research extracts 96 features, including 52 features of the gray-level co-occurrence matrix (GLCM) and 44 features of the gray-level run-length matrix (GLRLM), from the regions of interest (ROIs) in ultrasound images. Three feature selection models, (i) sequential forward selection (SFS), (ii) sequential backward selection (SBS), and (iii) F-score, are adopted to distinguish the two liver diseases. Finally, the developed system can classify liver cancer and liver abscess by SVM with an accuracy of 88.875%. The proposed methods for CAD can provide diagnostic assistance in distinguishing these two types of liver lesions.
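The GLCM texture-feature pipeline the paper describes (co-occurrence statistics feeding an SVM classifier) can be sketched as follows. The patches, offsets, and two features below are toy choices for illustration, not the 96 features or the data of the study:

```python
import numpy as np
from sklearn.svm import SVC

def glcm(q, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy);
    q must already be quantized to integers in [0, levels)."""
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Sum of p(i, j) * (i - j)^2: high for rough texture."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def homogeneity(p):
    """Sum of p(i, j) / (1 + (i - j)^2): high for smooth texture."""
    i, j = np.indices(p.shape)
    return float((p / (1.0 + (i - j) ** 2)).sum())

# toy patches: smooth gradients vs. random noise as two "tissue" classes
rng = np.random.default_rng(1)
patches, labels = [], []
for _ in range(20):
    patches.append(np.tile(np.arange(8), (8, 1)))    # smooth texture, class 0
    labels.append(0)
    patches.append(rng.integers(0, 8, size=(8, 8)))  # rough texture, class 1
    labels.append(1)
X = [[contrast(glcm(p)), homogeneity(glcm(p))] for p in patches]
clf = SVC(kernel="linear").fit(X, labels)            # SVM on texture features
```

The real system extracts many such statistics over several offsets (plus GLRLM run-length statistics), then prunes them with SFS, SBS, or F-score before the SVM; the sketch keeps only two features to show the shape of the pipeline.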
Collapse
|
72
|
Shuo W, Ji-Bin L, Ziyin Z, John E. Artificial Intelligence in Ultrasound Imaging: Current Research and Applications. ADVANCED ULTRASOUND IN DIAGNOSIS AND THERAPY 2019. [DOI: 10.37015/audt.2019.190811] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
|
73
|
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. [PMID: 30553609 DOI: 10.1016/j.zemedi.2018.11.002] [Citation(s) in RCA: 692] [Impact Index Per Article: 115.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2018] [Revised: 11/19/2018] [Accepted: 11/21/2018] [Indexed: 02/06/2023]
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has received a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting with and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Collapse
Affiliation(s)
- Alexander Selvikvåg Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway.
| | - Arvid Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway.
| |
Collapse
|
74
|
Ultrasound-Based Detection of Lung Abnormalities Using Single Shot Detection Convolutional Neural Networks. SIMULATION, IMAGE PROCESSING, AND ULTRASOUND SYSTEMS FOR ASSISTED DIAGNOSIS AND NAVIGATION 2018. [DOI: 10.1007/978-3-030-01045-4_8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
|