51
Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715 PMCID: PMC8184621 DOI: 10.1016/j.ejmp.2021.04.016]
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, such as radiology, pathology or oncology, have seized the opportunity, and considerable research and development efforts have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to safe and efficient use of clinical AI applications rests, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader to understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow, and pave the way for the clinical implementation of AI-based solutions.
Affiliation(s)
- Ana Barragán-Montero: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Umair Javaid: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Gilmer Valdés: Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
- Dan Nguyen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
- Paul Desbordes: Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Benoit Macq: Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Siri Willems: ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Steven Michiels: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Kevin Souris: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Edmond Sterpin: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; Laboratory of Experimental Radiotherapy, Department of Oncology, KU Leuven, Belgium
- John A Lee: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
52
Brion E, Léger J, Barragán-Montero AM, Meert N, Lee JA, Macq B. Domain adversarial networks and intensity-based data augmentation for male pelvic organ segmentation in cone beam CT. Comput Biol Med 2021; 131:104269. [PMID: 33639352 DOI: 10.1016/j.compbiomed.2021.104269]
Abstract
In radiation therapy, a CT image is used to manually delineate the organs and plan the treatment. During the treatment, a cone beam CT (CBCT) is often acquired to monitor anatomical modifications. For this purpose, automatic organ segmentation on CBCT is a crucial step. However, manual segmentations on CBCT are scarce, and models trained with CT data do not generalize well to CBCT images. We investigate adversarial networks and intensity-based data augmentation, two strategies that leverage large databases of annotated CTs to train neural networks for segmentation on CBCT. The adversarial networks consist of a 3D U-Net segmenter and a domain classifier, a framework aimed at encouraging the learning of filters that produce more accurate segmentations on CBCT. Intensity-based data augmentation consists of modifying the training CT images to reduce the gap between the CT and CBCT intensity distributions. The proposed adversarial networks reach DSCs of 0.787, 0.447, and 0.660 for the bladder, rectum, and prostate, respectively, an improvement over the DSCs of 0.749, 0.179, and 0.629 obtained with "source only" training. Our brightness-based data augmentation reaches DSCs of 0.837, 0.701, and 0.734, which outperforms the morphons registration algorithm for the bladder (0.813) and rectum (0.653), while performing similarly on the prostate (0.731). The proposed adversarial training framework can be used for any segmentation application where training and test distributions differ. Our intensity-based data augmentation can be used for CBCT segmentation to help achieve the prescribed dose on the target and lower the dose delivered to healthy organs.
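The intensity-based augmentation described above can be sketched in a few lines; a minimal numpy illustration, where the function name, parameter ranges, and the assumption of intensities normalized to [0, 1] are ours, not the paper's:

```python
import numpy as np

def intensity_augment(ct, rng=None, brightness=0.1, contrast=0.1, noise_std=0.05):
    """Randomly shift brightness, rescale contrast, and add Gaussian noise to a
    normalized CT slice so the training distribution better covers CBCT."""
    if rng is None:
        rng = np.random.default_rng()
    b = rng.uniform(-brightness, brightness)         # additive brightness shift
    c = rng.uniform(1.0 - contrast, 1.0 + contrast)  # multiplicative contrast scale
    out = c * ct + b + rng.normal(0.0, noise_std, size=ct.shape)
    return np.clip(out, 0.0, 1.0)                    # stay in the normalized range
```

Applied on the fly during training, each CT slice is seen under many CBCT-like intensity profiles, which is what narrows the domain gap.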
Affiliation(s)
- Eliott Brion: ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
- Jean Léger: ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
- Nicolas Meert: Hôpital André Vésale, Montigny-le-Tilleul, 6110, Belgium
- John A Lee: ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium; IREC/MIRO, UCLouvain, Brussels, 1200, Belgium
- Benoit Macq: ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
53
Qayyum A, Qadir J, Bilal M, Al-Fuqaha A. Secure and Robust Machine Learning for Healthcare: A Survey. IEEE Rev Biomed Eng 2021; 14:156-180. [PMID: 32746371 DOI: 10.1109/rbme.2020.3013489]
Abstract
Recent years have witnessed widespread adoption of machine learning (ML) and deep learning (DL) techniques, owing to their superior performance in a variety of healthcare applications, ranging from the prediction of cardiac arrest from one-dimensional heart signals to computer-aided diagnosis (CADx) using multi-dimensional medical images. Notwithstanding this impressive performance, there are lingering doubts about the robustness of ML/DL in healthcare settings (traditionally considered quite challenging due to the myriad security and privacy issues involved), especially in light of recent results showing that ML/DL models are vulnerable to adversarial attacks. In this paper, we present an overview of various healthcare application areas that leverage such techniques, examined from a security and privacy point of view, and discuss the associated challenges. In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications. Finally, we provide insight into current research challenges and promising directions for future research.
54
Hamamoto R, Suvarna K, Yamada M, Kobayashi K, Shinkai N, Miyake M, Takahashi M, Jinnai S, Shimoyama R, Sakai A, Takasawa K, Bolatkan A, Shozu K, Dozen A, Machino H, Takahashi S, Asada K, Komatsu M, Sese J, Kaneko S. Application of Artificial Intelligence Technology in Oncology: Towards the Establishment of Precision Medicine. Cancers (Basel) 2020; 12:E3532. [PMID: 33256107 PMCID: PMC7760590 DOI: 10.3390/cancers12123532]
Abstract
In recent years, advances in artificial intelligence (AI) technology have led to the rapid clinical implementation of devices with AI technology in the medical field. More than 60 AI-equipped medical devices have already been approved by the Food and Drug Administration (FDA) in the United States, and the active introduction of AI technology is considered to be an inevitable trend in the future of medicine. In the field of oncology, clinical applications of medical devices using AI technology are already underway, mainly in radiology, and AI technology is expected to be positioned as an important core technology. In particular, "precision medicine," a medical treatment that selects the most appropriate treatment for each patient based on a vast amount of medical data such as genome information, has become a worldwide trend; AI technology is expected to be utilized in the process of extracting truly useful information from a large amount of medical data and applying it to diagnosis and treatment. In this review, we would like to introduce the history of AI technology and the current state of medical AI, especially in the oncology field, as well as discuss the possibilities and challenges of AI technology in the medical field.
Affiliation(s)
- Ryuji Hamamoto: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Kruthi Suvarna: Indian Institute of Technology Bombay, Powai, Mumbai 400 076, India
- Masayoshi Yamada: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Department of Endoscopy, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kazuma Kobayashi: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Norio Shinkai: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Mototaka Miyake: Department of Diagnostic Radiology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masamichi Takahashi: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Department of Neurosurgery and Neuro-Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Shunichi Jinnai: Department of Dermatologic Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryo Shimoyama: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Akira Sakai: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ken Takasawa: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Amina Bolatkan: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Kanto Shozu: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ai Dozen: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Hidenori Machino: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Satoshi Takahashi: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Asada: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Masaaki Komatsu: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Jun Sese: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Humanome Lab, 2-4-10 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
55
Baydilli YY, Atila U, Elen A. Learn from one data set to classify all - A multi-target domain adaptation approach for white blood cell classification. Comput Methods Programs Biomed 2020; 196:105645. [PMID: 32702574 DOI: 10.1016/j.cmpb.2020.105645]
Abstract
BACKGROUND AND OBJECTIVE Traditional machine learning methods assume that training and test data come from the same distribution, which makes it possible to achieve high accuracy when modelling within a single domain. Unfortunately, in real-world problems, direct transfer between domains suffers from differences in the data collection process and the internal dynamics of the data. To cope with such drawbacks, researchers use "domain adaptation", which enables the successful transfer of knowledge learned in one domain to other domains. In this study, we propose a model for the classification of white blood cells (WBC) that is not affected by domain differences. METHODS Only one data set was used as the source domain, and an adaptation process was designed so that the learned knowledge could be used effectively in other domains (multi-target domain adaptation). While constructing the model, we employed data augmentation, data generation and fine-tuning, respectively. RESULTS The proposed model was able to extract "domain-invariant" features and achieved high success rates in tests on nine different data sets. Multi-target domain adaptation accuracy was measured as 98.09%. CONCLUSIONS The proposed model ignores domain differences and adapts successfully to target domains. In this way, unlabeled samples can be classified rapidly using only a small number of labeled ones.
Affiliation(s)
- Yusuf Yargı Baydilli: Department of Computer Engineering, Faculty of Engineering, Karabük University, Karabük, Turkey
- Umit Atila: Department of Computer Engineering, Faculty of Engineering, Karabük University, Karabük, Turkey
- Abdullah Elen: Department of Computer Technology, TOBB Vocational School of Technical Sciences, Karabük University, Karabük, Turkey
56
Wang G, Song T, Dong Q, Cui M, Huang N, Zhang S. Automatic ischemic stroke lesion segmentation from computed tomography perfusion images by image synthesis and attention-based deep neural networks. Med Image Anal 2020; 65:101787. [DOI: 10.1016/j.media.2020.101787]
57
Blaivas M, Arntfield R, White M. Creation and Testing of a Deep Learning Algorithm to Automatically Identify and Label Vessels, Nerves, Tendons, and Bones on Cross-sectional Point-of-Care Ultrasound Scans for Peripheral Intravenous Catheter Placement by Novices. J Ultrasound Med 2020; 39:1721-1727. [PMID: 32181922 DOI: 10.1002/jum.15270]
Abstract
OBJECTIVES We sought to create a deep learning (DL) algorithm to identify vessels, bones, nerves, and tendons on transverse upper extremity (UE) ultrasound (US) images, to enable providers new to US-guided peripheral vascular access to identify anatomy. METHODS We used a publicly available DL architecture (YOLOv3) and deidentified transverse US videos of the UE for algorithm development. Vessels, bones, tendons, and nerves were labeled with bounding boxes. A total of 203,966 images were generated from the videos, with corresponding label box coordinates in YOLOv3 format. Training accuracy, losses, and learning curves were tracked. As a final real-world test, 50 randomly selected images from unrelated UE US videos were used to test the DL algorithm. Four versions of the YOLOv3 algorithm were tested, with varied amounts of training and sensitivity settings. The same 50 images were labeled by 2 blinded point-of-care ultrasound (POCUS) experts. The area under the curve (AUC) was calculated for the DL algorithm and for POCUS expert performance. RESULTS The algorithm outperformed the POCUS experts in detection of all structures in the UE, with an AUC of 0.78 versus 0.69 and 0.71, respectively. For vessels alone, one of the POCUS experts attained an AUC of 0.85, just ahead of the DL algorithm's AUC of 0.83. CONCLUSIONS Our DL algorithm proved accurate at identifying 4 common structures on cross-sectional US imaging of the UE, which would allow novice POCUS providers to more confidently and accurately target vessels for cannulation while avoiding other structures. Overall, the algorithm outperformed 2 blinded POCUS experts.
Affiliation(s)
- Michael Blaivas: University of South Carolina School of Medicine, Columbia, South Carolina, USA; Department of Emergency Medicine, St Francis Hospital, Columbus, Georgia, USA
- Robert Arntfield: Department of Critical Care Medicine, Western University, London, Ontario, Canada
- Matthew White: Department of Critical Care Medicine, Western University, London, Ontario, Canada
58
Wilson G, Cook DJ. A Survey of Unsupervised Deep Domain Adaptation. ACM Trans Intell Syst Technol 2020; 11:1-46. [PMID: 34336374 PMCID: PMC8323662 DOI: 10.1145/3400066]
Abstract
Deep learning has produced state-of-the-art results for a variety of tasks. While such approaches for supervised learning have performed well, they assume that training and testing data are drawn from the same distribution, which may not always be the case. To address this challenge, single-source unsupervised domain adaptation handles situations where a network is trained on labeled data from a source domain and unlabeled data from a related but different target domain, with the goal of performing well at test time on the target domain. Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations of deep learning with domain adaptation to reduce reliance on potentially costly target data labels. This survey compares these approaches by examining their alternative methods, their unique and common elements, results, and theoretical insights. We follow this with a look at application areas and open research directions.
59
Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. [PMID: 32994890 PMCID: PMC7494605 DOI: 10.1016/j.csbj.2020.08.003]
Abstract
Deep learning with artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. In biology and medicine, too, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view of the past, present, and future developments of deep learning, moving from science at large to biomedical imaging and, in particular, bioimage analysis.
Affiliation(s)
- Erik Meijering: School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
60
Wang G, Liu X, Li C, Xu Z, Ruan J, Zhu H, Meng T, Li K, Huang N, Zhang S. A Noise-Robust Framework for Automatic Segmentation of COVID-19 Pneumonia Lesions From CT Images. IEEE Trans Med Imaging 2020; 39:2653-2663. [PMID: 32730215 PMCID: PMC8544954 DOI: 10.1109/tmi.2020.3000314]
Abstract
Segmentation of pneumonia lesions from CT scans of COVID-19 patients is important for accurate diagnosis and follow-up. Deep learning has the potential to automate this task but requires a large set of high-quality annotations that are difficult to collect. Learning from noisy training labels, which are easier to obtain, can alleviate this problem. To this end, we propose a novel noise-robust framework to learn from noisy labels for the segmentation task. We first introduce a noise-robust Dice loss, a generalization of the Dice loss for segmentation and the Mean Absolute Error (MAE) loss for robustness against noise, and then propose a novel COVID-19 Pneumonia Lesion segmentation network (COPLE-Net) to better deal with lesions of various scales and appearances. The noise-robust Dice loss and COPLE-Net are combined with an adaptive self-ensembling framework for training, where an Exponential Moving Average (EMA) of the student model serves as a teacher model that is adaptively updated by suppressing the contribution of the student to the EMA when the student has a large training loss. The student model is also adaptive, learning from the teacher only when the teacher outperforms the student. Experimental results showed that: (1) our noise-robust Dice loss outperforms existing noise-robust loss functions; (2) the proposed COPLE-Net achieves higher performance than state-of-the-art image segmentation networks; and (3) our framework with adaptive self-ensembling significantly outperforms standard training and surpasses other noise-robust training approaches in the scenario of learning from noisy labels for COVID-19 pneumonia lesion segmentation.
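Two central ingredients of the abstract above can be caricatured in numpy: a Dice-style loss that interpolates between soft-Dice-like (gamma = 2) and MAE-like (gamma = 1) behavior, and a loss-gated EMA teacher update. The exact formula, gamma value, and gating rule used in the paper may differ; function names and defaults here are assumptions:

```python
import numpy as np

def noise_robust_dice_loss(pred, target, gamma=1.5, eps=1e-5):
    """Interpolates between a soft-Dice-style loss (gamma = 2) and an
    MAE-style loss (gamma = 1); intermediate gamma trades gradient
    sharpness for robustness to label noise."""
    num = np.sum(np.abs(pred - target) ** gamma)
    den = np.sum(pred ** 2) + np.sum(target ** 2) + eps
    return num / den

def ema_update(teacher, student, alpha=0.99, student_loss=0.0, loss_thresh=1.0):
    """EMA teacher update that suppresses the student's contribution
    (teacher left unchanged) when the student's training loss is large."""
    if student_loss > loss_thresh:
        return list(teacher)  # unreliable student step: keep teacher as is
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]
```

A perfect prediction drives the loss to zero, while the gate keeps noisy gradient steps out of the teacher's weights.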
Affiliation(s)
- Guotai Wang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Chaoping Li: Fengcheng People's Hospital, Fengcheng 331100, China
- Zhiyong Xu: Huanggang Traditional Chinese Medicine Hospital, Huanggang 438000, China
- Jiugen Ruan: Xinyu City People's Hospital, Xinyu 338000, China
- Haifeng Zhu: Civil Aviation General Hospital, Beijing 100123, China
- Tao Meng: RIMAG Medical Imaging Corporation, Beijing 100022, China
- Kang Li: West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, China
- Shaoting Zhang: School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; SenseTime Research, Shanghai 200233, China
61
Chen C, Dou Q, Chen H, Qin J, Heng PA. Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation. IEEE Trans Med Imaging 2020; 39:2494-2505. [PMID: 32054572 DOI: 10.1109/tmi.2020.2972701]
Abstract
Unsupervised domain adaptation has gained increasing interest in medical image computing, aiming to tackle the performance degradation of deep neural networks when deployed on unseen data with heterogeneous characteristics. In this work, we present a novel unsupervised domain adaptation framework, named Synergistic Image and Feature Alignment (SIFA), to effectively adapt a segmentation network to an unlabeled target domain. SIFA conducts synergistic alignment of domains from both the image and the feature perspective. In particular, we simultaneously transform the appearance of images across domains and enhance the domain invariance of the extracted features by leveraging adversarial learning in multiple aspects and with a deeply supervised mechanism. The feature encoder is shared between both adaptive perspectives to leverage their mutual benefits via end-to-end learning. We extensively evaluated our method on cardiac substructure segmentation and abdominal multi-organ segmentation for bidirectional cross-modality adaptation between MRI and CT images. Experimental results on the two tasks demonstrate that SIFA effectively improves segmentation performance on unlabeled target images and outperforms state-of-the-art domain adaptation approaches by a large margin.
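The adversarial feature-alignment idea behind such frameworks reduces to a toy form: a domain classifier learns to separate source from target features, while the feature extractor receives the opposite objective. A minimal numpy sketch with a fixed linear classifier (names and shapes are illustrative; SIFA's actual discriminators are convolutional networks):

```python
import numpy as np

def domain_losses(feat_src, feat_tgt, w):
    """Logistic domain classifier on feature vectors: returns the classifier's
    loss (low when domains are separable) and the adversarial loss for the
    feature extractor, which is simply its negation."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    p_src = sigmoid(feat_src @ w)   # P(domain = source) per sample
    p_tgt = sigmoid(feat_tgt @ w)
    d_loss = -np.mean(np.log(p_src + 1e-8)) - np.mean(np.log(1.0 - p_tgt + 1e-8))
    return d_loss, -d_loss          # minimized by classifier / by extractor
```

Alternating minimization of the two returned losses pushes the extracted features toward domain invariance.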
62
Xia Y, Yang D, Yu Z, Liu F, Cai J, Yu L, Zhu Z, Xu D, Yuille A, Roth H. Uncertainty-aware multi-view co-training for semi-supervised medical image segmentation and domain adaptation. Med Image Anal 2020; 65:101766. [PMID: 32623276 DOI: 10.1016/j.media.2020.101766]
Abstract
Although deep learning-based approaches have achieved great success in medical image segmentation, they usually require large amounts of well-annotated data, which can be extremely expensive to obtain in the field of medical image analysis. Unlabeled data, on the other hand, are much easier to acquire. Semi-supervised learning and unsupervised domain adaptation both take advantage of unlabeled data, and they are closely related to each other. In this paper, we propose uncertainty-aware multi-view co-training (UMCT), a unified framework that addresses both tasks for volumetric medical image segmentation. Our framework efficiently utilizes unlabeled data for better performance. We first rotate and permute the 3D volumes into multiple views and train a 3D deep network on each view. We then apply co-training by enforcing multi-view consistency on unlabeled data, where an uncertainty estimate for each view is used to achieve accurate labeling. Experiments on the NIH pancreas segmentation dataset and a multi-organ segmentation dataset show state-of-the-art performance of the proposed framework on semi-supervised medical image segmentation. Under unsupervised domain adaptation settings, we validate the effectiveness of this work by adapting our multi-organ segmentation model to two pathological organs from the Medical Segmentation Decathlon datasets. Additionally, we show that our UMCT-DA model can effectively handle the challenging situation where labeled source data are inaccessible, demonstrating strong potential for real-world applications.
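The co-training step, where confident views drive the pseudo-labels assigned to uncertain ones, can be caricatured with entropy as the uncertainty proxy. A small numpy sketch (the paper's actual uncertainty estimate and fusion rule may differ; the function name is ours):

```python
import numpy as np

def fuse_views(view_probs, eps=1e-8):
    """Fuse per-view class-probability maps into a pseudo-label map, weighting
    each view by the inverse of its voxelwise predictive entropy so that
    confident views dominate the label assigned to each voxel."""
    weights = []
    for p in view_probs:
        entropy = -np.sum(p * np.log(p + eps), axis=-1, keepdims=True)
        weights.append(1.0 / (entropy + eps))
    w = np.stack(weights)
    w = w / w.sum(axis=0)                      # normalize weights across views
    fused = np.sum(w * np.stack(view_probs), axis=0)
    return fused.argmax(axis=-1)               # hard pseudo-label per voxel
```

Training each view's network against these fused pseudo-labels enforces the multi-view consistency described above.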
Affiliation(s)
- Yingda Xia
- Johns Hopkins University, Baltimore, MD, 21218, USA
| | - Dong Yang
- NVIDIA Corporation, Bethesda, MD, 20814, USA
| | - Zhiding Yu
- NVIDIA Corporation, Bethesda, MD, 20814, USA
| | - Fengze Liu
- Johns Hopkins University, Baltimore, MD, 21218, USA
| | - Jinzheng Cai
- University of Florida, Gainesville, FL, 32611, USA
| | - Lequan Yu
- The Chinese University of Hong Kong, Hong Kong, China
| | - Zhuotun Zhu
- Johns Hopkins University, Baltimore, MD, 21218, USA
| | - Daguang Xu
- NVIDIA Corporation, Bethesda, MD, 20814, USA
| | - Alan Yuille
- Johns Hopkins University, Baltimore, MD, 21218, USA
| | - Holger Roth
- NVIDIA Corporation, Bethesda, MD, 20814, USA.
| |
|
63
|
|
64
|
Coupé P, Mansencal B, Clément M, Giraud R, Denis de Senneville B, Ta VT, Lepetit V, Manjon JV. AssemblyNet: A large ensemble of CNNs for 3D whole brain MRI segmentation. Neuroimage 2020; 219:117026. [PMID: 32522665 DOI: 10.1016/j.neuroimage.2020.117026] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2019] [Revised: 05/28/2020] [Accepted: 06/04/2020] [Indexed: 10/24/2022] Open
Abstract
Whole brain segmentation of fine-grained structures using deep learning (DL) is a very challenging task, since the number of anatomical labels is very high compared to the number of available training images. To address this problem, previous DL methods proposed to use a single convolutional neural network (CNN) or a few independent CNNs. In this paper, we present a novel ensemble method based on a large number of CNNs processing different overlapping brain areas. Inspired by parliamentary decision-making systems, we propose a framework called AssemblyNet, made of two "assemblies" of U-Nets. Such a parliamentary system is capable of dealing with complex decisions and unseen problems, and of reaching a relevant consensus. AssemblyNet introduces sharing of knowledge among neighboring U-Nets, an "amendment" procedure performed by the second assembly at higher resolution to refine the decision taken by the first one, and a final decision obtained by majority voting. During our validation, AssemblyNet showed competitive performance compared to state-of-the-art methods such as U-Net, joint label fusion and SLANT. Moreover, we investigated the scan-rescan consistency and the robustness to disease effects of our method. These experiments demonstrated the reliability of AssemblyNet. Finally, we showed the benefit of using semi-supervised learning to improve the performance of our method.
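AssemblyNet's final decision rule, per-voxel majority voting across an ensemble of segmentation maps, can be sketched as follows. This toy version assumes integer label maps and ignores the knowledge-sharing and amendment stages; the function name and inputs are illustrative.

```python
import numpy as np

def majority_vote(label_maps):
    """Per-voxel majority vote over an ensemble of integer
    segmentation maps (a minimal sketch of the final-decision
    step; not the AssemblyNet implementation itself)."""
    stack = np.stack(label_maps)                 # (models, *volume)
    n_classes = int(stack.max()) + 1
    # Count votes for each class at every voxel, then pick the winner.
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)
```

With three 2x2 toy "volumes", each output voxel takes the label that at least two of the three models agree on.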
Affiliation(s)
- Pierrick Coupé
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France.
| | - Boris Mansencal
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
| | - Michaël Clément
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
| | - Rémi Giraud
- Bordeaux INP, Univ. Bordeaux, CNRS, IMS, UMR 5218, F-33400, Talence, France
| | | | - Vinh-Thong Ta
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
| | - Vincent Lepetit
- CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
| | - José V Manjon
- ITACA, Universitat Politècnica de València, 46022, Valencia, Spain
| |
|
65
|
Mårtensson G, Ferreira D, Granberg T, Cavallin L, Oppedal K, Padovani A, Rektorova I, Bonanni L, Pardini M, Kramberger MG, Taylor JP, Hort J, Snædal J, Kulisevsky J, Blanc F, Antonini A, Mecocci P, Vellas B, Tsolaki M, Kłoszewska I, Soininen H, Lovestone S, Simmons A, Aarsland D, Westman E. The reliability of a deep learning model in clinical out-of-distribution MRI data: A multicohort study. Med Image Anal 2020; 66:101714. [PMID: 33007638 DOI: 10.1016/j.media.2020.101714] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 04/17/2020] [Accepted: 04/24/2020] [Indexed: 01/12/2023]
Abstract
Deep learning (DL) methods have in recent years yielded impressive results in medical imaging, with the potential to function as a clinical aid to radiologists. However, DL models in medical imaging are often trained on public research cohorts with images acquired with a single scanner or with strict protocol harmonization, which is not representative of a clinical setting. The aim of this study was to investigate how well a DL model performs in unseen clinical datasets, collected with different scanners, protocols and disease populations, and whether more heterogeneous training data improves generalization. In total, 3117 MRI scans of brains from multiple dementia research cohorts and memory clinics, which had been visually rated by a neuroradiologist according to Scheltens' scale of medial temporal atrophy (MTA), were included in this study. By training multiple versions of a convolutional neural network on different subsets of this data to predict MTA ratings, we assessed the impact that including images from a wider distribution during training had on performance in external memory clinic data. Our results showed that our model generalized well to datasets acquired with protocols similar to the training data, but performed substantially worse in clinical cohorts with visibly different tissue contrasts in the images. This implies that future DL studies investigating performance in out-of-distribution (OOD) MRI data need to assess multiple external cohorts for reliable results. Further, including data from a wider range of scanners and protocols improved performance in OOD data, which suggests that more heterogeneous training data makes the model generalize better. To conclude, this is the most comprehensive study to date investigating the domain shift in deep learning on MRI data, and we advocate rigorous evaluation of DL models on clinical data prior to certification for deployment.
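The study's central recommendation, scoring each external cohort separately instead of pooling all test data into one number, can be sketched as below. The function name and the per-cohort accuracy metric are illustrative assumptions; the study itself rates MTA agreement, not plain accuracy.

```python
import numpy as np

def per_cohort_accuracy(y_true, y_pred, cohort_ids):
    """Report accuracy separately per acquisition cohort so that a
    failure on one out-of-distribution cohort is not masked by a
    pooled average (hypothetical helper, not from the paper)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    cohort_ids = np.asarray(cohort_ids)
    return {c: float((y_pred[cohort_ids == c] == y_true[cohort_ids == c]).mean())
            for c in np.unique(cohort_ids)}
```

A pooled accuracy of 0.75 here would hide the fact that cohort 'b' is only at 0.5, which is exactly the kind of domain-shift signal the authors argue must be surfaced.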
Affiliation(s)
- Gustav Mårtensson
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden.
| | - Daniel Ferreira
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
| | - Tobias Granberg
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Radiology, Karolinska University Hospital, Stockholm, Sweden
| | - Lena Cavallin
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Radiology, Karolinska University Hospital, Stockholm, Sweden
| | - Ketil Oppedal
- Centre for Age-Related Medicine, Stavanger University Hospital, Stavanger, Norway; Stavanger Medical Imaging Laboratory (SMIL), Department of Radiology, Stavanger University Hospital, Stavanger, Norway; Department of Electrical Engineering and Computer Science, University of Stavanger, Stavanger, Norway
| | - Alessandro Padovani
- Neurology Unit, Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
| | - Irena Rektorova
- 1st Department of Neurology, Medical Faculty, St. Anne's Hospital and CEITEC, Masaryk University, Brno, Czech Republic
| | - Laura Bonanni
- Department of Neuroscience Imaging and Clinical Sciences and CESI, University G d'Annunzio of Chieti-Pescara, Chieti, Italy
| | - Matteo Pardini
- Department of Neuroscience (DINOGMI), University of Genoa and Neurology Clinics, Polyclinic San Martino Hospital, Genoa, Italy
| | - Milica G Kramberger
- Department of Neurology, University Medical Centre Ljubljana, Medical faculty, University of Ljubljana, Slovenia
| | - John-Paul Taylor
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
| | - Jakub Hort
- Memory Clinic, Department of Neurology, Charles University, 2nd Faculty of Medicine and Motol University Hospital, Prague, Czech Republic
| | - Jón Snædal
- Landspitali University Hospital, Reykjavik, Iceland
| | - Jaime Kulisevsky
- Movement Disorders Unit, Neurology Department, Sant Pau Hospital, Barcelona, Spain; Institut d'Investigacions Biomédiques Sant Pau (IIB-Sant Pau), Barcelona, Spain; Centro de Investigación en Red-Enfermedades Neurodegenerativas (CIBERNED), Barcelona, Spain; Universitat Autónoma de Barcelona (U.A.B.), Barcelona, Spain
| | - Frederic Blanc
- Day Hospital of Geriatrics, Memory Resource and Research Centre (CM2R) of Strasbourg, Department of Geriatrics, Hôpitaux Universitaires de Strasbourg, Strasbourg, France; University of Strasbourg and French National Centre for Scientific Research (CNRS), ICube Laboratory and Fédération de Médecine Translationnelle de Strasbourg (FMTS), Team Imagerie Multimodale Intégrative en Santé (IMIS)/ICONE, Strasbourg, France
| | - Angelo Antonini
- Department of Neuroscience, University of Padua, Padua & Fondazione Ospedale San Camillo, Venezia, Venice, Italy
| | - Patrizia Mecocci
- Institute of Gerontology and Geriatrics, University of Perugia, Perugia, Italy
| | - Bruno Vellas
- UMR INSERM 1027, gerontopole, CHU, University of Toulouse, France
| | - Magda Tsolaki
- 3rd Department of Neurology, Memory and Dementia Unit, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | | | - Hilkka Soininen
- Institute of Clinical Medicine, Neurology, University of Eastern Finland, Finland; Neurocenter, Neurology, Kuopio University Hospital, Kuopio, Finland
| | - Simon Lovestone
- Department of Psychiatry, Warneford Hospital, University of Oxford, Oxford, UK
| | - Andrew Simmons
- NIHR Biomedical Research Centre for Mental Health, London, UK; NIHR Biomedical Research Unit for Dementia, London, UK; Department of Neuroimaging, Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
| | - Dag Aarsland
- Centre for Age-Related Medicine, Stavanger University Hospital, Stavanger, Norway; Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
| | - Eric Westman
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden; Department of Neuroimaging, Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
| |
|
66
|
Lauw HW, Wong RCW, Ntoulas A, Lim EP, Ng SK, Pan SJ. Semi-supervised Learning Approach to Generate Neuroimaging Modalities with Adversarial Training. ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING 2020. [PMCID: PMC7206232 DOI: 10.1007/978-3-030-47436-2_31] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Magnetic Resonance Imaging (MRI) of the brain can come in the form of different modalities, such as T1-weighted and Fluid Attenuated Inversion Recovery (FLAIR), which have been used to investigate a wide range of neurological disorders. Current state-of-the-art models for brain tissue segmentation and disease classification require multiple modalities for training and inference. However, the acquisition of all of these modalities is expensive, time-consuming and inconvenient, and the required modalities are often not available. As a result, these datasets contain large amounts of unpaired data, where examples in the dataset do not contain all modalities. On the other hand, there is a smaller fraction of examples that contain all modalities (paired data), and furthermore each modality is high dimensional compared to the number of data points. In this work, we develop a method to address these issues with semi-supervised learning in translating between two neuroimaging modalities. Our proposed model, Semi-Supervised Adversarial CycleGAN (SSA-CGAN), uses an adversarial loss to learn from unpaired data points, a cycle loss to enforce consistent reconstructions of the mappings, and another adversarial loss to take advantage of paired data points. Our experiments demonstrate that our proposed framework produces an improvement in reconstruction error and reduced variance for the pairwise translation of multiple modalities, and is more robust to thermal noise than existing methods.
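The non-adversarial part of the SSA-CGAN objective can be sketched on plain arrays. This toy decomposition keeps only the cycle-consistency term and the paired supervision term; both GAN discriminators are omitted for brevity, and `G`/`F` standing in for the two modality translators, the L1 form of the losses, and the inputs are all assumptions, not the paper's code.

```python
import numpy as np

def ssa_cgan_losses(x, y, G, F, paired):
    """Toy sketch of two SSA-CGAN loss terms:
    - cycle loss: translating to the other modality and back
      should reconstruct the input, usable on unpaired data;
    - paired L1 loss: applied only when the true target-modality
      image y for x is available (the semi-supervised part)."""
    cycle = np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()
    paired_l1 = np.abs(G(x) - y).mean() if paired else 0.0
    return float(cycle), float(paired_l1)
```

With perfectly inverse toy translators (G doubles, F halves) both terms vanish, illustrating what the full model is trained to approach on real scans.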
Affiliation(s)
- Hady W. Lauw
- School of Information Systems, Singapore Management University, Singapore, Singapore
| | - Raymond Chi-Wing Wong
- Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, Hong Kong
| | - Alexandros Ntoulas
- Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens, Greece
| | - Ee-Peng Lim
- School of Information Systems, Singapore Management University, Singapore, Singapore
| | - See-Kiong Ng
- Institute of Data Science, National University of Singapore, Singapore, Singapore
| | - Sinno Jialin Pan
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
| |
|
67
|
Kong B, Wang X, Bai J, Lu Y, Gao F, Cao K, Xia J, Song Q, Yin Y. Learning tree-structured representation for 3D coronary artery segmentation. Comput Med Imaging Graph 2019; 80:101688. [PMID: 31926366 DOI: 10.1016/j.compmedimag.2019.101688] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2019] [Revised: 11/13/2019] [Accepted: 12/06/2019] [Indexed: 12/13/2022]
Abstract
Extensive research has been devoted to the segmentation of the coronary artery. However, owing to its complex anatomical structure, it is extremely challenging to automatically segment the coronary artery from 3D coronary computed tomography angiography (CCTA). Inspired by recent ideas to use tree-structured long short-term memory (LSTM) networks to model underlying tree structures in NLP tasks, we propose a novel tree-structured convolutional gated recurrent unit (ConvGRU) model to learn the anatomical structure of the coronary artery. However, unlike the tree-structured LSTM proposed for semantic relatedness and sentiment classification in natural language processing, our tree-structured ConvGRU model considers the local spatial correlations in the input data, as convolutions are used for input-to-state as well as state-to-state transitions, making it more suitable for image analysis. To conduct voxel-wise segmentation, a tree-structured segmentation framework is presented. It consists of a fully convolutional network (FCN) for multi-scale discriminative feature extraction and the final prediction, and a tree-structured ConvGRU layer for anatomical structure modeling. The proposed framework is extensively evaluated on a large-scale 3D CCTA dataset (the largest to the best of our knowledge), and experiments show that our method is more accurate and efficient than other coronary artery segmentation approaches.
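The core recurrence of a tree-structured gated unit, each parent state gated between the aggregated child states and a candidate computed from the node's own input, can be sketched in scalar form. This is a toy GRU-style update, not the paper's ConvGRU: the scalar weights, sum aggregation over children, and the gating form are hypothetical simplifications (the real model uses convolutions for the input-to-state and state-to-state transitions).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_gru_state(x, child_states, wz=0.5, wh=0.8):
    """One scalar tree-GRU step: blend the aggregated child states
    with a tanh candidate via an update gate (toy weights wz, wh)."""
    h_children = float(np.sum(child_states))      # aggregate children
    z = sigmoid(wz * (x + h_children))            # update gate
    h_tilde = np.tanh(wh * (x + h_children))      # candidate state
    return (1 - z) * h_children + z * h_tilde

def evaluate_tree(tree, inputs):
    """tree: {node: [children]}; states are computed bottom-up
    from the leaves, mirroring recurrence along an arterial tree."""
    def rec(n):
        return tree_gru_state(inputs[n], [rec(c) for c in tree.get(n, [])])
    return rec(0)                                 # node 0 assumed root
```

Evaluating bottom-up means information from distal branches flows toward the root, which is how the tree-structured layer encodes the vessel's branching anatomy.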
Affiliation(s)
- Bin Kong
- Department of Computer Science, UNC Charlotte, Charlotte, NC, USA.
| | - Xin Wang
- Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
| | - Junjie Bai
- Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
| | - Yi Lu
- Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
| | - Feng Gao
- Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
| | - Kunlin Cao
- Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
| | - Jun Xia
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Guangdong, China
| | - Qi Song
- Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
| | - Youbing Yin
- Research and Development Department, Shenzhen Keya Medical Technology, Co., Ltd., Guangdong, China
| |
|
68
|
Paugam F, Lefeuvre J, Perone CS, Gros C, Reich DS, Sati P, Cohen-Adad J. Open-source pipeline for multi-class segmentation of the spinal cord with deep learning. Magn Reson Imaging 2019; 64:21-27. [PMID: 31004711 PMCID: PMC6800813 DOI: 10.1016/j.mri.2019.04.009] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2019] [Revised: 04/15/2019] [Accepted: 04/17/2019] [Indexed: 12/26/2022]
Abstract
This paper presents an open-source pipeline to train neural networks to segment structures of interest from MRI data. The pipeline is tailored towards homogeneous datasets and requires relatively few manual segmentations (a few dozen or fewer, depending on the homogeneity of the dataset). Two use-case scenarios for segmenting the spinal cord white and grey matter are presented: one in marmosets with variable numbers of lesions, and the other in the publicly available human grey matter segmentation challenge [1]. The pipeline is freely available at: https://github.com/neuropoly/multiclass-segmentation.
Affiliation(s)
- François Paugam
- École Centrale de Lyon, Lyon, France; NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada.
| | - Jennifer Lefeuvre
- Translational Neuroradiology Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
| | - Christian S Perone
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada
| | - Charley Gros
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada
| | - Daniel S Reich
- Translational Neuroradiology Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
| | - Pascal Sati
- Translational Neuroradiology Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
| | - Julien Cohen-Adad
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada; Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada.
| |
|
69
|
Orbes-Arteaga M, Varsavsky T, Sudre CH, Eaton-Rosen Z, Haddow LJ, Sørensen L, Nielsen M, Pai A, Ourselin S, Modat M, Nachev P, Cardoso MJ. Multi-domain Adaptation in Brain MRI Through Paired Consistency and Adversarial Learning. DOMAIN ADAPTATION AND REPRESENTATION TRANSFER AND MEDICAL IMAGE LEARNING WITH LESS LABELS AND IMPERFECT DATA : FIRST MICCAI WORKSHOP, DART 2019, AND FIRST INTERNATIONAL WORKSHOP, MIL3ID 2019, SHENZHEN, HELD IN CONJUNCTION WITH MICCAI 20... 2019; 2019:54-62. [PMID: 34109324 PMCID: PMC7610933 DOI: 10.1007/978-3-030-33391-1_7] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Supervised learning algorithms trained on medical images will often fail to generalize across changes in acquisition parameters. Recent work in domain adaptation addresses this challenge and successfully leverages labeled data in a source domain to perform well on an unlabeled target domain. Inspired by recent work in semi-supervised learning, we introduce a novel method to adapt from one source domain to n target domains (as long as there is paired data covering all domains). Our multi-domain adaptation method utilises a consistency loss combined with adversarial learning. We provide results on white matter lesion hyperintensity segmentation from brain MRIs, using the MICCAI 2017 challenge data as the source domain and two target domains. The proposed method significantly outperforms other domain adaptation baselines.
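The paired-consistency idea, penalizing disagreement between the model's outputs on paired scans of the same subject acquired in different domains, can be sketched as below. The variance-style penalty and function name are illustrative assumptions; the paper combines such a term with adversarial learning, which is omitted here.

```python
import numpy as np

def paired_consistency_loss(preds_by_domain):
    """Penalize disagreement between per-domain softmax outputs for
    the same underlying subject: mean squared deviation of each
    domain's prediction from the cross-domain mean (a sketch of
    the consistency term, not the paper's exact formulation)."""
    preds = np.stack(preds_by_domain)          # (domains, ..., classes)
    mean = preds.mean(axis=0, keepdims=True)
    return float(((preds - mean) ** 2).mean())
```

The loss is zero when all domains agree and grows with disagreement, which is the signal that drives the segmenter toward domain-invariant predictions.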
Affiliation(s)
- Mauricio Orbes-Arteaga
- Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Biomediq A/S, Copenhagen, Denmark
| | - Thomas Varsavsky
- Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department of Medical Physics and Biomedical Engineering, UCL, London, UK
| | - Carole H. Sudre
- Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department of Medical Physics and Biomedical Engineering, UCL, London, UK
- Institute of Neurology, University College London, London, UK
| | - Zach Eaton-Rosen
- Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Department of Medical Physics and Biomedical Engineering, UCL, London, UK
| | - Lewis J. Haddow
- Chelsea and Westminster Hospital NHS Foundation Trust, London, UK
| | - Lauge Sørensen
- Biomediq A/S, Copenhagen, Denmark
- Cereriu A/S, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Mads Nielsen
- Biomediq A/S, Copenhagen, Denmark
- Cereriu A/S, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Akshay Pai
- Biomediq A/S, Copenhagen, Denmark
- Cereriu A/S, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Sébastien Ourselin
- Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
| | - Marc Modat
- Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
| | | | - M. Jorge Cardoso
- Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
| |
|
70
|
Gao Y, Zhang Y, Cao Z, Guo X, Zhang J. Decoding Brain States From fMRI Signals by Using Unsupervised Domain Adaptation. IEEE J Biomed Health Inform 2019; 24:1677-1685. [PMID: 31514162 DOI: 10.1109/jbhi.2019.2940695] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
With the development of deep learning in medical image analysis, decoding brain states from functional magnetic resonance imaging (fMRI) signals has made significant progress. Previous studies often utilized deep neural networks to automatically classify brain activity patterns related to diverse cognitive states. However, due to individual differences between subjects and variation in acquisition parameters across devices, the inconsistency in data distributions degrades the performance of cross-subject decoding. Moreover, most current networks were trained in a supervised way, which is not suitable for actual scenarios in which massive amounts of data are unlabeled. To address these problems, we propose the deep cross-subject adaptation decoding (DCAD) framework to decipher brain states. The proposed volume-based 3D feature extraction architecture can automatically learn the common spatiotemporal features of labeled source data to generate a distinct descriptor. Then, the distance between the source and target distributions is minimized via an unsupervised domain adaptation (UDA) method, which helps to accurately decode cognitive states across subjects. The performance of DCAD was evaluated on the task-fMRI (tfMRI) dataset from the Human Connectome Project (HCP). Experimental results showed that the proposed method achieved state-of-the-art decoding performance, with mean accuracies of 81.9% and 84.9% under two conditions of the working memory task (4 and 9 brain states, respectively). Our findings also demonstrate that UDA can mitigate the impact of data distribution shift, thereby providing a superior choice for improving cross-subject decoding performance without depending on annotations.
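Minimizing a distance between source and target feature distributions is the defining step of UDA. Since the abstract does not specify the distance used, the sketch below uses squared maximum mean discrepancy (MMD) with an RBF kernel, a common choice, purely as an illustrative stand-in for whatever measure DCAD minimizes.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between two feature sets
    X (source) and Y (target), rows = samples, using an RBF kernel.
    An illustrative UDA distance; the paper's measure may differ."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then RBF kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())
```

The distance is zero when the two feature sets coincide and increases as the target distribution drifts away, so driving it toward zero encourages subject-invariant features.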
|