1
Wang R, Zheng G. PFMNet: Prototype-based feature mapping network for few-shot domain adaptation in medical image segmentation. Comput Med Imaging Graph 2024; 116:102406. [PMID: 38824715] [DOI: 10.1016/j.compmedimag.2024.102406]
Abstract
Lack of data is one of the biggest hurdles for rare disease research using deep learning. Due to the lack of rare-disease images and annotations, training a robust network for automatic rare-disease image segmentation is very challenging. To address this challenge, few-shot domain adaptation (FSDA) has emerged as a practical research direction, aiming to leverage a limited number of annotated images from a target domain to facilitate adaptation of models trained on large datasets in a source domain. In this paper, we present a novel prototype-based feature mapping network (PFMNet) designed for FSDA in medical image segmentation. PFMNet adopts an encoder-decoder structure for segmentation, with the prototype-based feature mapping (PFM) module positioned at the bottom of the encoder-decoder structure. The PFM module transforms high-level features from the target domain into source-domain-like features that the decoder can interpret more easily. By leveraging these source-domain-like features, the decoder can effectively perform few-shot segmentation in the target domain and generate accurate segmentation masks. We evaluate the performance of PFMNet through experiments on three typical yet challenging few-shot medical image segmentation tasks: cross-center optic disc/cup segmentation, cross-center polyp segmentation, and cross-modality cardiac structure segmentation. We consider four different settings: 5-shot, 10-shot, 15-shot, and 20-shot. The experimental results substantiate the efficacy of our proposed approach for few-shot domain adaptation in medical image segmentation.
Affiliation(s)
- Runze Wang
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
- Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
2
Banerjee S, Nysjö F, Toumpanakis D, Dhara AK, Wikström J, Strand R. Streamlining neuroradiology workflow with AI for improved cerebrovascular structure monitoring. Sci Rep 2024; 14:9245. [PMID: 38649692] [PMCID: PMC11035663] [DOI: 10.1038/s41598-024-59529-y]
Abstract
Radiological imaging to examine intracranial blood vessels is critical for preoperative planning and postoperative follow-up. Automated segmentation of cerebrovascular anatomy from Time-Of-Flight Magnetic Resonance Angiography (TOF-MRA) can provide radiologists with a more detailed and precise view of these vessels. This paper introduces a domain generalized artificial intelligence (AI) solution for volumetric monitoring of cerebrovascular structures from multi-center MRAs. Our approach utilizes a multi-task deep convolutional neural network (CNN) with a topology-aware loss function to learn voxel-wise segmentation of the cerebrovascular tree. We use Decorrelation Loss to achieve domain regularization for the encoder network and auxiliary tasks to provide additional regularization and enable the encoder to learn higher-level intermediate representations for improved performance. We compare our method to six state-of-the-art 3D vessel segmentation methods using retrospective TOF-MRA datasets from multiple private and public data sources scanned at six hospitals, with and without vascular pathologies. The proposed model achieved the best scores in all the qualitative performance measures. Furthermore, we have developed an AI-assisted Graphical User Interface (GUI) based on our research to assist radiologists in their daily work and establish a more efficient work process that saves time.
Affiliation(s)
- Subhashis Banerjee
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Fredrik Nysjö
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Dimitrios Toumpanakis
- Department of Surgical Sciences, Neuroradiology, Uppsala University, Uppsala, Sweden
- Ashis Kumar Dhara
- Department of Electrical Engineering, National Institute of Technology Durgapur, Durgapur, India
- Johan Wikström
- Department of Surgical Sciences, Neuroradiology, Uppsala University, Uppsala, Sweden
- Robin Strand
- Department of Information Technology, Uppsala University, Uppsala, Sweden
3
Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643] [DOI: 10.1016/j.compbiomed.2023.107912]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
- Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
4
Tong L, Shi W, Isgut M, Zhong Y, Lais P, Gloster L, Sun J, Swain A, Giuste F, Wang MD. Integrating Multi-Omics Data With EHR for Precision Medicine Using Advanced Artificial Intelligence. IEEE Rev Biomed Eng 2024; 17:80-97. [PMID: 37824325] [DOI: 10.1109/rbme.2023.3324264]
Abstract
With the recent advancement of novel biomedical technologies such as high-throughput sequencing and wearable devices, multi-modal biomedical data, ranging from multi-omics molecular data to real-time continuous bio-signals, are generated at an unprecedented speed and scale every day. For the first time, these multi-modal biomedical data can bring precision medicine close to reality. However, due to their volume and complexity, making good use of these data requires major effort. Researchers and clinicians are actively developing artificial intelligence (AI) approaches for data-driven knowledge discovery and causal inference using a variety of biomedical data modalities. These AI-based approaches have demonstrated promising results in various biomedical and healthcare applications. In this review paper, we summarize the state-of-the-art AI models for integrating multi-omics data and electronic health records (EHRs) for precision medicine. We discuss the challenges and opportunities in integrating multi-omics data with EHRs, as well as future directions. We hope this review can inspire future research and development in integrating multi-omics data with EHRs for precision medicine.
5
Kim MJ, Kim SH, Kim SM, Nam JH, Hwang YB, Lim YJ. The Advent of Domain Adaptation into Artificial Intelligence for Gastrointestinal Endoscopy and Medical Imaging. Diagnostics (Basel) 2023; 13:3023. [PMID: 37835766] [PMCID: PMC10572560] [DOI: 10.3390/diagnostics13193023]
Abstract
Artificial intelligence (AI) is a subfield of computer science that aims to implement computer systems that perform tasks generally requiring human learning, reasoning, and perceptual abilities. AI is widely used in the medical field. The interpretation of medical images requires considerable effort, time, and skill. AI-aided interpretations, such as automated abnormal lesion detection and image classification, are promising areas of AI. However, when images with different characteristics are acquired, depending on the manufacturer and imaging environment, a so-called domain shift problem occurs, in which the developed AI shows poor versatility. Domain adaptation is used to address this problem: it converts an image into a new image suitable for another domain. It has also shown promise in reducing the differences in appearance among images collected from different devices. Domain adaptation is expected to improve the reading accuracy of AI for heterogeneous image distributions in gastrointestinal (GI) endoscopy and medical image analyses. In this paper, we review the history and basic characteristics of domain shift and domain adaptation. We also address their use in gastrointestinal endoscopy and the medical field more generally through published examples, perspectives, and future directions.
Affiliation(s)
- Min Ji Kim
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea; (M.J.K.); (S.H.K.); (J.H.N.)
- Sang Hoon Kim
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea; (M.J.K.); (S.H.K.); (J.H.N.)
- Suk Min Kim
- Department of Intelligent Systems and Robotics, College of Electrical & Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea; (S.M.K.); (Y.B.H.)
- Ji Hyung Nam
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea; (M.J.K.); (S.H.K.); (J.H.N.)
- Young Bae Hwang
- Department of Intelligent Systems and Robotics, College of Electrical & Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea; (S.M.K.); (Y.B.H.)
- Yun Jeong Lim
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea; (M.J.K.); (S.H.K.); (J.H.N.)
6
Gliner V, Makarov V, Avetisyan AI, Schuster A, Yaniv Y. Using domain adaptation for classification of healthy and disease conditions from mobile-captured images of standard 12-lead electrocardiograms. Sci Rep 2023; 13:14023. [PMID: 37640921] [PMCID: PMC10462630] [DOI: 10.1038/s41598-023-40693-6]
Abstract
12-lead electrocardiogram (ECG) recordings can be collected in any clinic, and their interpretation is performed by a clinician. Modern machine learning tools may make this interpretation automatable. However, a large fraction of 12-lead ECG data is still available only on printed paper or as images and comes in various formats. To digitize the data, smartphone cameras can be used. Nevertheless, this approach may introduce various artifacts and occlusions into the obtained images. Here we overcome the challenges of automating 12-lead ECG analysis using mobile-captured images and a deep neural network trained with a domain adversarial approach. The network achieved an average area under the receiver operating characteristic curve of 0.91 on tested images captured by a mobile device. Assessment on images from unseen 12-lead ECG formats that the network was not trained on also achieved high accuracy. We further show that the network accuracy can be improved by including a small number of unlabeled samples from unknown formats in the training data. Finally, our models also achieve high accuracy using signals as input rather than images. Using a domain adaptation approach, we successfully classified cardiac conditions on images acquired by a mobile device and showed the generalizability of the classification across various unseen image formats.
Affiliation(s)
- Vadim Gliner
- Computer Science Department, Technion-IIT, Haifa, Israel
- Vladimir Makarov
- System Programming Lab, Novgorod State University, Veliky Novgorod, Russia
- Arutyun I Avetisyan
- Ivannikov Institute for System Programming of the Russian Academy of Sciences, Moscow, Russia
- Assaf Schuster
- Computer Science Department, Technion-IIT, Haifa, Israel
- Yael Yaniv
- Laboratory of Bioenergetic and Bioelectric Systems, Biomedical Engineering Faculty, Technion-IIT, Haifa, Israel
7
Kawamura K, Lee C, Yoshikawa T, Hani AS, Usami Y, Toyosawa S, Tanaka S, Hiraoka SI. Prediction of cervical lymph node metastasis from immunostained specimens of tongue cancer using a multilayer perceptron neural network. Cancer Med 2023; 12:5312-5322. [PMID: 36307918] [PMCID: PMC10028108] [DOI: 10.1002/cam4.5343]
Abstract
BACKGROUND: Although cervical lymph node metastasis is an important prognostic factor for oral cancer, occult metastases remain undetected even by diagnostic imaging. We developed a learning model to predict lymph node metastasis in resected specimens of tongue cancer by classifying the level of immunohistochemical (IHC) staining for angiogenesis- and lymphangiogenesis-related proteins using a multilayer perceptron neural network (MNN). METHODS: We obtained a dataset of 76 patients with squamous cell carcinoma of the tongue who had undergone primary tumor resection. All 76 specimens were IHC stained for six angiogenesis- and lymphangiogenesis-related proteins (VEGF-C, VEGF-D, NRP1, NRP2, CCR7, and SEMA3E), and 456 slides were prepared. We scored the staining levels visually on all slides. We created virtual slides (4560 images), and the accuracy of the MNN model was verified by comparing it with a hue-saturation (HS) histogram, which quantifies the manually determined visual information. RESULTS: The accuracy of the training model with the MNN was 98.6%; when the training images were converted to grayscale, the accuracy decreased to 52.9%. This indicates that our MNN evaluates the level of staining rather than the morphological features of the IHC images. Multivariate analysis revealed that CCR7 staining level and T classification were independent factors associated with the presence of cervical lymph node metastasis in both the HS histograms and the MNN. CONCLUSION: These results suggest that IHC assessment using an MNN may be useful for identifying lymph node metastasis in patients with tongue cancer.
Affiliation(s)
- Kohei Kawamura
- 1st Department of Oral and Maxillofacial Surgery, Graduate School of Dentistry, Osaka University, Osaka, Japan
- Chonho Lee
- Cybermedia Center, Osaka University, Osaka, Japan
- Al-Shareef Hani
- Department of Oral & Maxillofacial Surgery, Osaka City University Graduate School of Medicine, Osaka, Japan
- Yu Usami
- Department of Oral Pathology, Osaka University Graduate School of Dentistry, Osaka, Japan
- Satoru Toyosawa
- Department of Oral Pathology, Osaka University Graduate School of Dentistry, Osaka, Japan
- Susumu Tanaka
- 1st Department of Oral and Maxillofacial Surgery, Graduate School of Dentistry, Osaka University, Osaka, Japan
- Shin-Ichiro Hiraoka
- 1st Department of Oral and Maxillofacial Surgery, Graduate School of Dentistry, Osaka University, Osaka, Japan
8
Chen C, Wang J, Pan J, Bian C, Zhang Z. GraphSKT: Graph-Guided Structured Knowledge Transfer for Domain Adaptive Lesion Detection. IEEE Trans Med Imaging 2023; 42:507-518. [PMID: 36201413] [DOI: 10.1109/tmi.2022.3212784]
Abstract
Adversarial-based adaptation has dominated the area of domain adaptive detection over the past few years. Despite its general efficacy for various tasks, the learned representations may not capture the intrinsic topological structures of whole images and thus are vulnerable to distributional shifts, especially in real-world applications such as geometric distortions across imaging devices in medical images. In this case, forcefully matching data distributions across domains cannot ensure precise knowledge transfer and is prone to negative transfer. In this paper, we explore the problem of domain adaptive lesion detection from the perspective of relational reasoning, and propose a Graph-Structured Knowledge Transfer (GraphSKT) framework to perform hierarchical reasoning by modeling both the intra- and inter-domain topological structures. To be specific, we utilize cross-domain correspondence to mine meaningful foreground regions for representing graph nodes and explicitly endow each node with contextual information. Then, the intra- and inter-domain graphs are built on top of instance-level features to achieve a high-level understanding of the lesion and the whole medical image, and to transfer structured knowledge from source to target domains. The contextual and semantic information is propagated through graph nodes methodically, enhancing the expressive power of learned features for lesion detection tasks. Extensive experiments on two types of challenging datasets demonstrate that the proposed GraphSKT significantly outperforms state-of-the-art approaches for the detection of polyps in colonoscopy images and of masses in mammographic images.
9
Al Khalil Y, Amirrajab S, Lorenz C, Weese J, Pluim J, Breeuwer M. On the usability of synthetic data for improving the robustness of deep learning-based segmentation of cardiac magnetic resonance images. Med Image Anal 2023; 84:102688. [PMID: 36493702] [DOI: 10.1016/j.media.2022.102688]
Abstract
Deep learning-based segmentation methods provide an effective and automated way of assessing the structure and function of the heart in cardiac magnetic resonance (CMR) images. However, despite their state-of-the-art performance on images acquired from the same source (same scanner or scanner vendor) as the images used during training, their performance degrades significantly on images coming from different domains. A straightforward approach to tackle this issue consists of acquiring large quantities of multi-site and multi-vendor data, which is practically infeasible. Generative adversarial networks (GANs) for image synthesis present a promising solution for tackling data limitations in medical imaging and addressing the generalization capability of segmentation models. In this work, we explore the usability of synthesized short-axis CMR images generated using a segmentation-informed conditional GAN to improve the robustness of heart cavity segmentation models in a variety of different settings. The GAN is trained on paired real images and corresponding segmentation maps belonging to both the heart and the surrounding tissue, reinforcing the synthesis of semantically consistent and realistic images. First, we evaluate the segmentation performance of a model trained solely with synthetic data and show that it only slightly underperforms compared to the baseline trained with real data. By further combining real with synthetic data during training, we observe a substantial improvement in segmentation performance (up to 4% and 40% in terms of Dice score and Hausdorff distance, respectively) across multiple datasets collected from various sites and scanners. This is additionally demonstrated across state-of-the-art 2D and 3D segmentation networks, and the obtained results demonstrate the potential of the proposed method in tackling the presence of domain shift in medical data. Finally, we thoroughly analyze the quality of the synthetic data and its ability to replace real MR images during training, and provide an insight into important aspects of utilizing synthetic images for segmentation.
Affiliation(s)
- Yasmina Al Khalil
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Sina Amirrajab
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Jürgen Weese
- Philips Research Laboratories, Hamburg, Germany
- Josien Pluim
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Marcel Breeuwer
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Philips Healthcare, MR R&D - Clinical Science, Best, The Netherlands
10
Gaggion N, Mansilla L, Mosquera C, Milone DH, Ferrante E. Improving Anatomical Plausibility in Medical Image Segmentation via Hybrid Graph Neural Networks: Applications to Chest X-Ray Analysis. IEEE Trans Med Imaging 2023; 42:546-556. [PMID: 36423313] [DOI: 10.1109/tmi.2022.3224660]
Abstract
Anatomical segmentation is a fundamental task in medical image computing, generally tackled with fully convolutional neural networks that produce dense segmentation masks. These models are often trained with loss functions such as cross-entropy or Dice, which assume pixels to be independent of each other, thus ignoring topological errors and anatomical inconsistencies. We address this limitation by moving from pixel-level to graph representations, which naturally incorporate anatomical constraints by construction. To this end, we introduce HybridGNet, an encoder-decoder neural architecture that leverages standard convolutions for image feature encoding and graph convolutional neural networks (GCNNs) to decode plausible representations of anatomical structures. We also propose a novel image-to-graph skip connection layer that allows localized features to flow from standard convolutional blocks to GCNN blocks, and show that it improves segmentation accuracy. The proposed architecture is extensively evaluated in a variety of domain shift and image occlusion scenarios, and audited considering different types of demographic domain shift. Our comprehensive experimental setup compares HybridGNet with other landmark- and pixel-based models for anatomical segmentation in chest x-ray images, and shows that it produces anatomically plausible results in challenging scenarios where other models tend to fail.
11
Jeong H, Kamaleswaran R. Pivotal challenges in artificial intelligence and machine learning applications for neonatal care. Semin Fetal Neonatal Med 2022; 27:101393. [PMID: 36266181] [DOI: 10.1016/j.siny.2022.101393]
Abstract
Clinical decision support systems (CDSS) developed with artificial intelligence and machine learning (AI/ML) approaches carry transformative potential for improving the way neonatal care is practiced. Using data from electronic health records, physiological sensors, and imaging modalities, CDSS can predict clinical outcomes (such as mortality rate, hospital length of stay, or surgical outcome) or early warning signs of diseases in neonates. However, only a limited number of clinical decision support systems for neonatal care are currently deployed in healthcare facilities or even implemented during pilot trials (or prospective studies). This is mostly due to unresolved challenges in developing a real-time supported clinical decision support system, a process that mainly consists of three phases: model development, model evaluation, and real-time deployment. In this review, we introduce some of the pivotal challenges and factors that must be considered during the implementation of real-time supported CDSS.
Affiliation(s)
- Hayoung Jeong
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia, USA
- Rishikesan Kamaleswaran
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia, USA
12
Segmentation of Breast Tubules in H&E Images Based on a DKS-DoubleU-Net Model. Biomed Res Int 2022; 2022:2961610. [PMID: 36246965] [PMCID: PMC9553497] [DOI: 10.1155/2022/2961610]
Abstract
The formation of breast tubules plays an important role in the pathological grading of breast cancer. Breast tubules surrounded by a large number of epithelial cells are located in the subcutaneous tissue of the chest. The shapes of breast tubules are varied, including tubular, round, and oval, which makes breast tubule segmentation a difficult task. Deep learning technology, capable of learning complex data structures via efficient representation, could help pathologists accurately detect breast tubules in hematoxylin and eosin (H&E) stained images. In this paper, we propose a deep learning model named DKS-DoubleU-Net to accurately segment breast tubules with complex appearances in H&E images. The proposed DKS-DoubleU-Net uses a DenseNet module as the encoder of the second subnetwork of DoubleU-Net, which utilizes dense features between layers and strengthens the propagation of features extracted in all previous layers, in order to better discover the intrinsic characteristics of breast tubules with complex structures and diverse shapes. Moreover, a feature-fusing module called the Kernel Selecting Module (KSM) is inserted before each output layer of the two U-Net branches of DoubleU-Net, to implement multiscale feature fusion via self-adaptive kernel selection for accurate segmentation of breast tubules of different sizes. Experiments on the public BRACS dataset and a private clinical dataset have shown that our model achieves better segmentation performance than the state-of-the-art models U-Net, DoubleU-Net, ResUnet++, HRNet, and DeepLabV3+. Specifically, on the public BRACS dataset, our method produced an F1-score of 92.98%, which outperforms the F1-scores of U-Net, DoubleU-Net, and HRNet by 4.24%, 0.37%, and 1.68%, respectively, and exceeds the performances of DeepLabV3+ and ResUnet++ by 7.83% and 23.84%, respectively. On the private clinical dataset, the proposed model achieved an F1-score of 73.13%, an improvement of 10.31%, 1.89%, 4.88%, 15.47%, and 31.1% over the performances of U-Net, DoubleU-Net, HRNet, DeepLabV3+, and ResUnet++, respectively. Superior performance could also be observed when comparing the proposed DKS-DoubleU-Net with the others using the Dice and mIoU metrics.
13
Shimovolos S, Shushko A, Belyaev M, Shirokikh B. Adaptation to CT Reconstruction Kernels by Enforcing Cross-Domain Feature Maps Consistency. J Imaging 2022; 8:jimaging8090234. [PMID: 36135401] [PMCID: PMC9503667] [DOI: 10.3390/jimaging8090234]
Abstract
Deep learning methods provide significant assistance in analyzing coronavirus disease (COVID-19) in chest computed tomography (CT) images, including identification, severity assessment, and segmentation. Although earlier methods addressed the lack of data and specific annotations, the current goal is to build a robust algorithm for clinical use, with a larger pool of available data. With larger datasets, the domain shift problem arises, affecting the performance of methods on unseen data. One of the critical sources of domain shift in CT images is the difference in the reconstruction kernels used to generate images from the raw data (sinograms). In this paper, we show a decrease in COVID-19 segmentation quality when the model is trained on smooth and tested on sharp reconstruction kernels. Furthermore, we compare several domain adaptation approaches to tackle the problem, such as task-specific augmentation and unsupervised adversarial learning. Finally, we propose an unsupervised adaptation method, called F-Consistency, that outperforms the previous approaches. Our method exploits a set of unlabeled CT image pairs which differ only in reconstruction kernels within every pair. It enforces the similarity of the network’s hidden representations (feature maps) by minimizing the mean squared error (MSE) between paired feature maps. Our method achieves a 0.64 Dice score on the test dataset with unseen sharp kernels, compared to the 0.56 Dice score of the baseline model. Moreover, F-Consistency scores 0.80 Dice between predictions on the paired images, which almost doubles the baseline score of 0.46 and surpasses the other methods. We also show that F-Consistency generalizes better to unseen kernels, and to data without COVID-19 lesions, than the other methods trained on unlabeled data.
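The consistency term described in this abstract is simple enough to sketch. Below is a minimal, illustrative NumPy version of the paired feature-map MSE penalty; the function name and the plain array-based formulation are assumptions for illustration, not the authors' published code.

```python
import numpy as np

def f_consistency_loss(feats_smooth, feats_sharp):
    """Illustrative sketch of the F-Consistency term: the mean squared
    error between hidden feature maps computed from the same CT slice
    reconstructed with a smooth vs. a sharp kernel.

    Both inputs are array-likes of identical shape, e.g.
    (batch, channels, height, width) encoder feature maps.
    """
    a = np.asarray(feats_smooth, dtype=np.float64)
    b = np.asarray(feats_sharp, dtype=np.float64)
    assert a.shape == b.shape, "paired feature maps must have the same shape"
    # Minimizing this value pushes the encoder toward kernel-invariant features.
    return float(np.mean((a - b) ** 2))
```

In training, this penalty would be added to the supervised segmentation loss with a weighting coefficient; the abstract reports the resulting model raising the Dice score on unseen sharp kernels from 0.56 to 0.64.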
Affiliation(s)
- Andrey Shushko
  - Moscow Institute of Physics and Technology, 141701 Moscow, Russia
- Mikhail Belyaev
  - Skolkovo Institute of Science and Technology, 143026 Moscow, Russia
  - Artificial Intelligence Research Institute (AIRI), 105064 Moscow, Russia
- Boris Shirokikh
  - Skolkovo Institute of Science and Technology, 143026 Moscow, Russia
  - Artificial Intelligence Research Institute (AIRI), 105064 Moscow, Russia
  - Correspondence:
|
14
|
Abstract
Topological and geometrical analysis of retinal blood vessels could be a cost-effective way to detect various common diseases. Automated vessel segmentation and vascular tree analysis models require powerful generalization capability in clinical applications. In this work, we constructed a novel benchmark, RETA, with 81 labelled vessel masks, aiming to facilitate retinal vessel analysis. A semi-automated coarse-to-fine workflow was proposed for the vessel annotation task. During database construction, we strived to control inter-annotator and intra-annotator variability by means of multi-stage annotation and label disambiguation on self-developed dedicated software. In addition to binary vessel masks, we obtained other types of annotations, including artery/vein masks, vascular skeletons, bifurcations, trees, and abnormalities. Subjective and objective quality validations of the annotated vessel masks demonstrated significantly improved quality over the existing open datasets. Our annotation software is also made publicly available, serving the purpose of pixel-level vessel visualization. Researchers can develop vessel segmentation algorithms and evaluate segmentation performance using RETA. Moreover, it may promote the study of cross-modality tubular structure segmentation and analysis.
|
15
|
Nadkarni P, Merchant SA. Enhancing medical-imaging artificial intelligence through holistic use of time-tested key imaging and clinical parameters: Future insights. Artif Intell Med Imaging 2022; 3:55-69. [DOI: 10.35711/aimi.v3.i3.55] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 04/12/2022] [Accepted: 06/17/2022] [Indexed: 02/06/2023] Open
Abstract
Much of the published literature in Radiology-related Artificial Intelligence (AI) focuses on single tasks, such as identifying the presence or absence or severity of specific lesions. Progress comparable to that achieved for general-purpose computer vision has been hampered by the unavailability of large and diverse radiology datasets containing different types of lesions with possibly multiple kinds of abnormalities in the same image. Also, since a diagnosis is rarely achieved through an image alone, radiology AI must be able to employ diverse strategies that consider all available evidence, not just imaging information. Using key imaging and clinical signs will greatly improve the accuracy and utility of such systems. Employing strategies that consider all available evidence will be a formidable task; we believe that the combination of human and computer intelligence will be superior to either one alone. Further, unless an AI application is explainable, radiologists will not trust it to be either reliable or bias-free; we discuss some approaches aimed at providing better explanations, as well as regulatory concerns regarding explainability ("transparency"). Finally, we look at federated learning, which allows pooling data from multiple locales while maintaining data privacy to create more generalizable and reliable models, and quantum computing, still prototypical but potentially revolutionary in its computing impact.
Affiliation(s)
- Prakash Nadkarni
  - College of Nursing, University of Iowa, Iowa City, IA 52242, United States
- Suleman Adam Merchant
  - Department of Radiology, LTM Medical College & LTM General Hospital, Mumbai 400022, Maharashtra, India
|
16
|
Zhu Y, Venugopalan J, Zhang Z, Chanani NK, Maher KO, Wang MD. Domain Adaptation Using Convolutional Autoencoder and Gradient Boosting for Adverse Events Prediction in the Intensive Care Unit. Front Artif Intell 2022; 5:640926. [PMID: 35481281 PMCID: PMC9036368 DOI: 10.3389/frai.2022.640926] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Accepted: 02/23/2022] [Indexed: 11/13/2022] Open
Abstract
More than 5 million patients are admitted annually to intensive care units (ICUs) in the United States. The leading causes of mortality are cardiovascular failures, multi-organ failures, and sepsis. Data-driven techniques have been used in the analysis of patient data to predict adverse events, such as ICU mortality and ICU readmission. These models often make use of temporal or static features from a single ICU database to make predictions on subsequent adverse events. To explore the potential of domain adaptation, we propose a method of data analysis using gradient boosting and a convolutional autoencoder (CAE) to predict significant adverse events in the ICU, such as ICU mortality and ICU readmission. We demonstrate our results from a retrospective data analysis using patient records from a publicly available database called Multi-parameter Intelligent Monitoring in Intensive Care-II (MIMIC-II) and a local database from Children's Healthcare of Atlanta (CHOA). We demonstrate that after adopting novel data imputation on patient ICU data, gradient boosting is effective in both the mortality prediction task and the ICU readmission prediction task. In addition, we use gradient boosting to identify top-ranking temporal and non-temporal features in both prediction tasks. We discuss the relationship between these features and the specific prediction task. Lastly, we indicate that CAE might not be effective in feature extraction on one dataset, but domain adaptation with CAE feature extraction across two datasets shows promising results.
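Temporal ICU features typically contain many missing measurements, and imputation is a prerequisite for the gradient-boosting step described above. The paper's own imputation scheme is not detailed in the abstract, so the carry-forward-with-mean-fallback variant below is only an assumed stand-in for illustration.

```python
import numpy as np

def impute_temporal(x):
    """Impute NaNs in a (time, feature) matrix by carrying the last
    observed value forward; leading gaps fall back to the per-feature
    mean of the observed values. A generic stand-in, not the study's
    exact method."""
    x = x.astype(float).copy()
    col_mean = np.nanmean(x, axis=0)
    for j in range(x.shape[1]):
        last = np.nan
        for t in range(x.shape[0]):
            if np.isnan(x[t, j]):
                x[t, j] = last if not np.isnan(last) else col_mean[j]
            else:
                last = x[t, j]
    return x
```

The imputed matrix can then be flattened or summarized into the feature vector fed to the gradient-boosting classifier.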
Affiliation(s)
- Yuanda Zhu
  - School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Janani Venugopalan
  - Biomedical Engineering Department, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- Zhenyu Zhang
  - Biomedical Engineering Department, Georgia Institute of Technology, Atlanta, GA, United States
  - Department of Biomedical Engineering, Peking University, Beijing, China
- Kevin O. Maher
  - Pediatrics Department, Emory University, Atlanta, GA, United States
- May D. Wang
  - Biomedical Engineering Department, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
  - Correspondence: May D. Wang
|
17
|
CyCMIS: Cycle-consistent Cross-domain Medical Image Segmentation via diverse image augmentation. Med Image Anal 2021; 76:102328. [PMID: 34920236 DOI: 10.1016/j.media.2021.102328] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 11/15/2021] [Accepted: 12/01/2021] [Indexed: 01/26/2023]
Abstract
Domain shift, a phenomenon in which there exists a distribution discrepancy between the training dataset (source domain) and the test dataset (target domain), is very common in practical applications and may cause significant performance degradation, which hinders the effective deployment of deep learning models to clinical settings. Adaptation algorithms that improve model generalizability from a source domain to a target domain have significant practical value. In this paper, we investigate an unsupervised domain adaptation (UDA) technique to train a cross-domain segmentation method that is robust to domain shift and does not require any annotations on the test domain. To this end, we propose Cycle-consistent Cross-domain Medical Image Segmentation, referred to as CyCMIS, integrating online diverse image translation via disentangled representation learning and semantic consistency regularization into one network. Different from learning one-to-one mapping, our method characterizes the complex relationship between domains as many-to-many mapping. A novel diverse inter-domain semantic consistency loss is then proposed to regularize the cross-domain segmentation process. We additionally introduce an intra-domain semantic consistency loss to encourage the segmentation consistency between the original input and the image after cross-cycle reconstruction. We conduct comprehensive experiments on two publicly available datasets to evaluate the effectiveness of the proposed method. Results demonstrate the efficacy of the present approach.
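The intra-domain semantic consistency idea, requiring the segmentation of an input and of its cross-cycle reconstruction to agree, can be illustrated with a soft-Dice disagreement term. This is a generic sketch; the paper's exact loss formulation may differ.

```python
import numpy as np

def semantic_consistency_loss(p_orig, p_recon, eps=1e-6):
    """Soft-Dice disagreement between two probability masks: the
    segmentation of the original image and of its cross-cycle
    reconstruction. Zero when the masks agree perfectly."""
    inter = np.sum(p_orig * p_recon)
    denom = np.sum(p_orig) + np.sum(p_recon)
    return float(1.0 - 2.0 * inter / (denom + eps))
```

An analogous term between source and translated-target predictions would play the role of the inter-domain consistency loss.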
|
18
|
Majumder MAA, Gaur U, Singh K, Kandamaran L, Gupta S, Haque M, Rahman S, Sa B, Rahman M, Rampersad F. Impact of COVID-19 pandemic on radiology education, training, and practice: A narrative review. World J Radiol 2021; 13:354-370. [PMID: 34904050 PMCID: PMC8637607 DOI: 10.4329/wjr.v13.i11.354] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Revised: 08/26/2021] [Accepted: 10/27/2021] [Indexed: 02/06/2023] Open
Abstract
Radiology education and training is of paramount clinical importance given the prominence of medical imaging utilization in effective clinical practice. The incorporation of basic radiology in the medical curriculum has continued to evolve, focusing on teaching image interpretation skills, the appropriate ordering of radiological investigations, judicious use of ionizing radiation, and providing exposure to interventional radiology. Advancements in radiology have been driven by the digital revolution, which has, in turn, had a positive impact on radiology education and training. Upon the advent of the coronavirus disease 2019 (COVID-19) pandemic, many training institutions and hospitals adhered to directives which advised rescheduling of non-urgent outpatient appointments. This inevitably impacted the workflow of the radiology department, which resulted in the reduction of clinical in-person case reviews and consultations, as well as in-person teaching sessions. Several medical schools and research centers completely suspended face-to-face academic activity. This led to challenges for medical teachers to complete the radiology syllabus while ensuring that teaching activities continued safely and effectively. As a result, online teaching platforms have virtually replaced didactic face-to-face lectures. Radiology educators also sought other strategies to incorporate interactive teaching sessions while adopting the e-learning approach, as they were cognizant of the limitations that this may have on students' clinical expertise. Migration to online methods to review live cases, journal clubs, simulation-based training, clinical interaction, and radiology examination protocolling are a few examples of successfully addressing the limitations in reduced clinical exposure.
In this review paper, we discuss (1) The impact of the COVID-19 pandemic on radiology education, training, and practice; (2) Challenges and strategies involved in delivering online radiology education for undergraduates and postgraduates during the COVID-19 pandemic; and (3) Differences between the implementation of radiology education during the COVID-19 pandemic and the pre-COVID-19 era.
Affiliation(s)
- Md Anwarul Azim Majumder
  - Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Cave Hill BB23034, Barbados
- Uma Gaur
  - Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Cave Hill BB23034, Barbados
- Keerti Singh
  - Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Cave Hill BB23034, Barbados
- Latha Kandamaran
  - Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Cave Hill BB23034, Barbados
- Subir Gupta
  - Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Cave Hill BB23034, Barbados
- Mainul Haque
  - Unit of Pharmacology, Faculty of Medicine and Defence Health, Universiti Pertahanan Nasional Malaysia (National Defence University of Malaysia), Kem Perdana Sugai Besi, Kuala Lumpur 57000, Malaysia
- Sayeeda Rahman
  - School of Medicine, American University of Integrative Sciences (AUIS), Bridgetown BB11318, Barbados
- Bidyadhar Sa
  - Faculty of Medical Sciences, The University of the West Indies, St Augustine Campus, St Augustine 33178, Trinidad and Tobago
- Mizanur Rahman
  - Principal's Office, International Medical College, Dhaka 1207, Bangladesh
- Fidel Rampersad
  - Faculty of Medical Sciences, The University of the West Indies, St Augustine Campus, St Augustine 33178, Trinidad and Tobago
|
19
|
Perkonigg M, Hofmanninger J, Herold CJ, Brink JA, Pianykh O, Prosch H, Langs G. Dynamic memory to alleviate catastrophic forgetting in continual learning with medical imaging. Nat Commun 2021; 12:5678. [PMID: 34584080 PMCID: PMC8479083 DOI: 10.1038/s41467-021-25858-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Accepted: 08/19/2021] [Indexed: 11/08/2022] Open
Abstract
Medical imaging is a central part of clinical diagnosis and treatment guidance. Machine learning has increasingly gained relevance because it captures features of disease and treatment response that are relevant for therapeutic decision-making. In clinical practice, the continuous progress of image acquisition technology or diagnostic procedures, the diversity of scanners, and evolving imaging protocols hamper the utility of machine learning, as prediction accuracy on new data deteriorates, or models become outdated due to these domain shifts. We propose a continual learning approach to deal with such domain shifts occurring at unknown time points. We adapt models to emerging variations in a continuous data stream while counteracting catastrophic forgetting. A dynamic memory enables rehearsal on a subset of diverse training data to mitigate forgetting while enabling models to expand to new domains. The technique balances memory by detecting pseudo-domains, representing different style clusters within the data stream. Evaluation of two different tasks, cardiac segmentation in magnetic resonance imaging and lung nodule detection in computed tomography, demonstrate a consistent advantage of the method.
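A dynamic rehearsal memory balanced across pseudo-domains can be sketched as below. Here the cluster ids are assumed to be given, whereas the paper detects pseudo-domains as style clusters within the data stream; the class and method names are ours.

```python
import random
from collections import defaultdict

class DynamicMemory:
    """Fixed-capacity rehearsal buffer that keeps the stored samples
    balanced across pseudo-domains by always evicting from the largest
    cluster when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = defaultdict(list)  # cluster id -> list of samples

    def add(self, sample, cluster):
        self.buffer[cluster].append(sample)
        # Evict the oldest sample of the largest cluster while over capacity.
        while sum(len(v) for v in self.buffer.values()) > self.capacity:
            biggest = max(self.buffer, key=lambda c: len(self.buffer[c]))
            self.buffer[biggest].pop(0)

    def rehearsal_batch(self, k, rng=random):
        """Draw up to k stored samples for interleaved rehearsal."""
        pool = [s for v in self.buffer.values() for s in v]
        return rng.sample(pool, min(k, len(pool)))
```

Interleaving such rehearsal batches with the incoming stream is what counteracts catastrophic forgetting while the model adapts to new domains.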
Affiliation(s)
- Matthias Perkonigg
  - Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Johannes Hofmanninger
  - Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Christian J Herold
  - Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- James A Brink
  - Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Oleg Pianykh
  - Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Helmut Prosch
  - Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Georg Langs
  - Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
|
20
|
Asada K, Komatsu M, Shimoyama R, Takasawa K, Shinkai N, Sakai A, Bolatkan A, Yamada M, Takahashi S, Machino H, Kobayashi K, Kaneko S, Hamamoto R. Application of Artificial Intelligence in COVID-19 Diagnosis and Therapeutics. J Pers Med 2021; 11:886. [PMID: 34575663 PMCID: PMC8471764 DOI: 10.3390/jpm11090886] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Revised: 09/01/2021] [Accepted: 09/02/2021] [Indexed: 12/12/2022] Open
Abstract
The coronavirus disease 2019 (COVID-19) pandemic began at the end of December 2019, giving rise to a high rate of infections and causing COVID-19-associated deaths worldwide. It was first reported in Wuhan, China, and since then, not only global leaders, organizations, and pharmaceutical/biotech companies, but also researchers, have directed their efforts toward overcoming this threat. The use of artificial intelligence (AI) has recently surged internationally and has been applied to diverse aspects of many problems. The benefits of using AI are now widely accepted, and many studies have shown great success in medical research on tasks, such as the classification, detection, and prediction of disease, or even patient outcome. In fact, AI technology has been actively employed in various ways in COVID-19 research, and several clinical applications of AI-equipped medical devices for the diagnosis of COVID-19 have already been reported. Hence, in this review, we summarize the latest studies that focus on medical imaging analysis, drug discovery, and therapeutics such as vaccine development and public health decision-making using AI. This survey clarifies the advantages of using AI in the fight against COVID-19 and provides future directions for tackling the COVID-19 pandemic using AI techniques.
Affiliation(s)
- Ken Asada
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masaaki Komatsu
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryo Shimoyama
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ken Takasawa
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Norio Shinkai
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Akira Sakai
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Amina Bolatkan
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masayoshi Yamada
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of Endoscopy, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Satoshi Takahashi
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Hidenori Machino
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kazuma Kobayashi
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryuji Hamamoto
  - Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
  - Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
  - Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
|
21
|
Hsu W, Baumgartner C, Deserno TM. Notable Papers and New Directions in Sensors, Signals, and Imaging Informatics. Yearb Med Inform 2021; 30:150-158. [PMID: 34479386 PMCID: PMC8416210 DOI: 10.1055/s-0041-1726526] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
OBJECTIVE To identify and highlight research papers representing noteworthy developments in signals, sensors, and imaging informatics in 2020. METHOD A broad literature search was conducted on PubMed and Scopus databases. We combined Medical Subject Heading (MeSH) terms and keywords to construct particular queries for sensors, signals, and image informatics. We only considered papers that have been published in journals providing at least three articles in the query response. Section editors then independently reviewed the titles and abstracts of preselected papers, assessing them on a three-point Likert scale. Papers were rated from 1 (do not include) to 3 (should be included) for each topical area (sensors, signals, and imaging informatics), and those with an average score of 2 or above were subsequently read and assessed again by two of the three co-editors. Finally, the top 14 papers with the highest combined scores were considered based on consensus. RESULTS The search for papers was executed in January 2021. After removing duplicates and conference proceedings, the query returned a set of 101, 193, and 529 papers for sensors, signals, and imaging informatics, respectively. We filtered out journals that had fewer than three papers in the query results, reducing the number of papers to 41, 117, and 333, respectively. From these, the co-editors identified 22 candidate papers with more than 2 Likert points on average, from which 14 candidate best papers were nominated after intensive discussion. At least five external reviewers then rated the remaining papers. The four finalist papers were selected using the composite rating of all external reviewers. These best papers were approved by consensus of the International Medical Informatics Association (IMIA) Yearbook editorial board. CONCLUSIONS Sensors, signals, and imaging informatics is a dynamic field of intense research.
The four best papers represent advanced approaches for combining, processing, modeling, and analyzing heterogeneous sensor and imaging data. The selected papers demonstrate the combination and fusion of multiple sensors and sensor networks using electrocardiogram (ECG), electroencephalogram (EEG), or photoplethysmogram (PPG) with advanced data processing, deep and machine learning techniques, and present image processing modalities beyond state-of-the-art that significantly support and further improve medical decision making.
Affiliation(s)
- William Hsu
  - Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA, United States of America
- Christian Baumgartner
  - Institute of Health Care Engineering with European Testing Center of Medical Devices, Graz University of Technology, Austria
- Thomas M. Deserno
  - Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
|
22
|
Deep-Learning-Based Color Doppler Ultrasound Image Feature in the Diagnosis of Elderly Patients with Chronic Heart Failure Complicated with Sarcopenia. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:2603842. [PMID: 34367535 PMCID: PMC8346313 DOI: 10.1155/2021/2603842] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 07/12/2021] [Accepted: 07/22/2021] [Indexed: 12/23/2022]
Abstract
A deep-learning neural network algorithm was applied to optimize and improve color Doppler ultrasound images for the study of elderly patients with chronic heart failure (CHF) complicated with sarcopenia, so as to analyze the effect of deep-learning-based color Doppler imaging on the diagnosis of CHF. In this study, 259 patients admitted to hospital from October 2017 to March 2020 and diagnosed with sarcopenia were randomly selected. All of them underwent cardiac ultrasound examination and were divided into two groups according to whether deep learning technology was used for image processing: a group of routine unprocessed images served as the control group, and the images processed by deep learning formed the experimental group. The results of color Doppler images before and after processing were analyzed and compared. The processed images of the experimental group were clearer and had higher resolution than the unprocessed images of the control group, with peak signal-to-noise ratio (PSNR) = 20 and structural similarity index measure (SSIM) = 0.09; the similarity between the final diagnosis results and the examination results of the experimental group (93.5%) was higher than that of the control group (87.0%), and the comparison was statistically significant (P < 0.05); among all the patients diagnosed with sarcopenia, 88.9% were also eventually diagnosed with CHF and only a small part of them were diagnosed with other diseases, with statistical significance (P < 0.05). In conclusion, deep learning technology had certain application value in processing color Doppler ultrasound images. Although there was no obvious difference between the color Doppler ultrasound images before and after processing, they could all support a correct diagnosis. Moreover, the research results showed the correlation between CHF and sarcopenia.
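For reference, the PSNR figure quoted above follows the standard definition; the sketch below is that textbook formula, not code from the study.

```python
import numpy as np

def psnr(reference, processed, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between a reference image and
    a processed image: 20*log10(max_val) - 10*log10(MSE)."""
    mse = np.mean((reference.astype(float) - processed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return float(20.0 * np.log10(max_val) - 10.0 * np.log10(mse))
```

Higher values indicate that the processed image deviates less from the reference; PSNR = 20 dB, as reported here, corresponds to a fairly large residual error.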
|
23
|
Calderon-Ramirez S, Yang S, Moemeni A, Elizondo D, Colreavy-Donnelly S, Chavarría-Estrada LF, Molina-Cabello MA. Correcting data imbalance for semi-supervised COVID-19 detection using X-ray chest images. Appl Soft Comput 2021; 111:107692. [PMID: 34276263 PMCID: PMC8276579 DOI: 10.1016/j.asoc.2021.107692] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 06/11/2021] [Accepted: 07/07/2021] [Indexed: 12/24/2022]
Abstract
A key factor in the fight against viral diseases such as the coronavirus (COVID-19) is the identification of virus carriers as early and quickly as possible, in a cheap and efficient manner. The application of deep learning for image classification of chest X-ray images of COVID-19 patients could become a useful pre-diagnostic detection methodology. However, deep learning architectures require large labelled datasets. This is often a limitation when the subject of research is relatively new as in the case of the virus outbreak, where dealing with small labelled datasets is a challenge. Moreover, in such context, the datasets are also highly imbalanced, with few observations from positive cases of the new disease. In this work we evaluate the performance of the semi-supervised deep learning architecture known as MixMatch with a very limited number of labelled observations and highly imbalanced labelled datasets. We demonstrate the critical impact of data imbalance to the model’s accuracy. Therefore, we propose a simple approach for correcting data imbalance, by re-weighting each observation in the loss function, giving a higher weight to the observations corresponding to the under-represented class. For unlabelled observations, we use the pseudo and augmented labels calculated by MixMatch to choose the appropriate weight. The proposed method improved classification accuracy by up to 18%, with respect to the non balanced MixMatch algorithm. We tested our proposed approach with several available datasets using 10, 15 and 20 labelled observations, for binary classification (COVID-19 positive and normal cases). For multi-class classification (COVID-19 positive, pneumonia and normal cases), we tested 30, 50, 70 and 90 labelled observations. Additionally, a new dataset is included among the tested datasets, composed of chest X-ray images of Costa Rican adult patients.
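The proposed re-weighting, scaling each observation's loss by the inverse frequency of its (pseudo-)label's class, can be sketched as below. This is a minimal NumPy version with names of our choosing; the actual work applies the weighting inside the MixMatch loss, using pseudo and augmented labels for the unlabelled observations.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_counts, eps=1e-12):
    """Cross-entropy where each observation is weighted inversely to
    its class frequency, so minority-class (e.g. COVID-19 positive)
    samples contribute more to the loss."""
    counts = np.asarray(class_counts, dtype=float)
    weights = counts.sum() / (len(counts) * counts)   # inverse frequency
    w = weights[labels]                               # per-sample weight
    ce = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.sum(w * ce) / np.sum(w))
```

With counts of 90 versus 10, the minority class carries nine times the weight of the majority class, which is the balancing effect the paper exploits.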
Collapse
Affiliation(s)
- Saul Calderon-Ramirez
- Centre for Computational Intelligence (CCI), De Montfort University, United Kingdom; Instituto Tecnologico de Costa Rica, Costa Rica
- Shengxiang Yang
- Centre for Computational Intelligence (CCI), De Montfort University, United Kingdom
- Armaghan Moemeni
- School of Computer Science, University of Nottingham, United Kingdom
- David Elizondo
- Centre for Computational Intelligence (CCI), De Montfort University, United Kingdom
- Miguel A Molina-Cabello
- Department of Computer Languages and Computer Science, University of Málaga, Spain; Instituto de Investigación Biomédica de Málaga (IBIMA), Spain
|
24
|
Shi W, Tong L, Zhu Y, Wang MD. COVID-19 Automatic Diagnosis With Radiographic Imaging: Explainable Attention Transfer Deep Neural Networks. IEEE J Biomed Health Inform 2021. [PMID: 33882010 DOI: 10.1109/jbhi.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/18/2023]
Abstract
Researchers have turned to deep learning methods to alleviate the enormous burden on clinicians of reading radiological images during the COVID-19 pandemic. However, clinicians are often reluctant to trust deep models due to their black-box characteristics. To automatically differentiate COVID-19 and community-acquired pneumonia from healthy lungs in radiographic imaging, we propose an explainable attention-transfer classification model based on the knowledge distillation network structure. The attention transfer direction always goes from the teacher network to the student network. Firstly, the teacher network extracts global features and concentrates on the infection regions to generate attention maps. It uses a deformable attention module to strengthen the response of infection regions and to suppress noise in irrelevant regions with an expanded receptive field. Secondly, an image fusion module combines the attention knowledge transferred from the teacher network to the student network with the essential information in the original input. While the teacher network focuses on global features, the student branch focuses on irregularly shaped lesion regions to learn discriminative features. Lastly, we conduct extensive experiments on public chest X-ray and CT datasets to demonstrate the explainability of the proposed architecture in diagnosing COVID-19.
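The teacher-to-student attention transfer underlying this abstract can be illustrated with the standard attention-transfer formulation: collapse each feature tensor to a spatial attention map and penalise the distance between the normalised teacher and student maps. This is a generic numpy sketch of that idea, not the paper's deformable attention module; all names are illustrative.

```python
import numpy as np

def attention_map(features):
    """Spatial attention map: channel-wise sum of squared activations,
    flattened and L2-normalised. `features` has shape (C, H, W)."""
    a = (features ** 2).sum(axis=0).ravel()
    return a / (np.linalg.norm(a) + 1e-12)

def attention_transfer_loss(teacher_feats, student_feats):
    """Squared L2 distance between normalised teacher and student maps.

    Normalising first lets the two networks have different channel counts,
    as long as their spatial resolutions match."""
    return float(np.sum((attention_map(teacher_feats)
                         - attention_map(student_feats)) ** 2))

rng = np.random.default_rng(0)
t = rng.normal(size=(8, 4, 4))    # teacher features (C, H, W)
s = rng.normal(size=(16, 4, 4))   # student features, different channel count
loss = attention_transfer_loss(t, s)
assert attention_transfer_loss(t, t) < 1e-12   # identical maps give zero loss
```

Minimising this term pushes the student to attend to the same spatial regions (here, the infection regions highlighted by the teacher) while learning its own discriminative features.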
|
25
|
Unnikrishnan B, Nguyen C, Balaram S, Li C, Foo CS, Krishnaswamy P. Semi-supervised classification of radiology images with NoTeacher: A teacher that is not mean. Med Image Anal 2021; 73:102148. [PMID: 34274693 DOI: 10.1016/j.media.2021.102148] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Revised: 06/07/2021] [Accepted: 06/24/2021] [Indexed: 12/30/2022]
Abstract
Deep learning models achieve strong performance for radiology image classification, but their practical application is bottlenecked by the need for large labeled training datasets. Semi-supervised learning (SSL) approaches leverage small labeled datasets alongside larger unlabeled datasets and offer potential for reducing labeling cost. In this work, we introduce NoTeacher, a novel consistency-based SSL framework that incorporates probabilistic graphical models. Unlike Mean Teacher, which maintains a teacher network updated via a temporal ensemble, NoTeacher employs two independent networks, thereby eliminating the need for a teacher network. We demonstrate how NoTeacher can be customized to handle a range of challenges in radiology image classification. Specifically, we describe adaptations for scenarios with 2D and 3D inputs, single- and multi-label classification, and class distribution mismatch between the labeled and unlabeled portions of the training data. In realistic empirical evaluations on three public benchmark datasets spanning the workhorse modalities of radiology (X-ray, CT, MRI), we show that NoTeacher achieves over 90-95% of the fully supervised AUROC with less than 5-15% labeling budget. Further, NoTeacher outperforms established SSL methods with minimal hyperparameter tuning, and stands out as a principled and practical option for semi-supervised learning in radiology applications.
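The two-network setup described above can be sketched as a supervised term for each network plus a consistency penalty that pulls their unlabeled predictions together. Note that NoTeacher actually fuses the two networks through a probabilistic graphical model; the simple mean-squared consistency below is a deliberately simplified stand-in, and all names are illustrative.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy of predicted class probabilities vs. labels."""
    eps = 1e-12
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + eps)))

def two_network_ssl_loss(p1_lab, p2_lab, labels, p1_unlab, p2_unlab, lam=1.0):
    """Supervised loss for both networks on labeled data, plus a consistency
    penalty between the two networks' predictions on unlabeled data.
    Neither network is a temporally averaged "teacher"; both are trained."""
    supervised = cross_entropy(p1_lab, labels) + cross_entropy(p2_lab, labels)
    consistency = float(np.mean((p1_unlab - p2_unlab) ** 2))
    return supervised + lam * consistency

labels = np.array([0, 1])
p1_lab = np.array([[0.9, 0.1], [0.2, 0.8]])   # network 1, labeled batch
p2_lab = np.array([[0.8, 0.2], [0.3, 0.7]])   # network 2, labeled batch
p1_un = np.array([[0.6, 0.4]])                # network 1, unlabeled batch
p2_un = np.array([[0.5, 0.5]])                # network 2, unlabeled batch
loss = two_network_ssl_loss(p1_lab, p2_lab, labels, p1_un, p2_un)
```

Because both networks are independently parameterized, the consistency term provides a learning signal from unlabeled data without the exponential-moving-average teacher that Mean Teacher requires.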
Affiliation(s)
- Balagopal Unnikrishnan
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore
- Cuong Nguyen
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore
- Shafa Balaram
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore; National University of Singapore, Singapore
- Chao Li
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore
- Chuan Sheng Foo
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore
- Pavitra Krishnaswamy
- Institute for Infocomm Research, Agency for Science Technology and Research (A*STAR), Singapore
|
26
|
Budd S, Robinson EC, Kainz B. A survey on active learning and human-in-the-loop deep learning for medical image analysis. Med Image Anal 2021; 71:102062. [PMID: 33901992 DOI: 10.1016/j.media.2021.102062] [Citation(s) in RCA: 100] [Impact Index Per Article: 33.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 03/26/2021] [Accepted: 03/30/2021] [Indexed: 12/21/2022]
Abstract
Fully automatic deep learning has become the state-of-the-art technique for many tasks including image acquisition, analysis and interpretation, and for the extraction of clinically useful information for computer-aided detection, diagnosis, treatment planning, intervention and therapy. However, the unique challenges posed by medical image analysis suggest that retaining a human end-user in any deep learning enabled system will be beneficial. In this review we investigate the role that humans might play in the development and deployment of deep learning enabled diagnostic applications and focus on techniques that will retain a significant input from a human end user. Human-in-the-Loop computing is an area that we see as increasingly important in future research due to the safety-critical nature of working in the medical domain. We evaluate four key areas that we consider vital for deep learning in clinical practice: (1) Active Learning to choose the best data to annotate for optimal model performance; (2) Interaction with model outputs - using iterative feedback to steer models to optima for a given prediction and offering meaningful ways to interpret and respond to predictions; (3) Practical considerations - developing full scale applications and the key considerations that need to be made before deployment; (4) Future Prospects and Unanswered Questions - knowledge gaps and related research fields that will benefit human-in-the-loop computing as they evolve. We offer our opinions on the most promising directions of research and how various aspects of each area might be unified towards common goals.
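The survey's first area, active learning to "choose the best data to annotate", is most commonly instantiated as uncertainty sampling: rank the unlabeled pool by the model's predictive entropy and send the most uncertain cases to the annotator. A minimal numpy sketch of that acquisition step, with illustrative names (this is one common strategy among the many the survey covers, not a specific method from it):

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of each row of predicted class probabilities; shape (N, K)."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_for_annotation(probs, budget):
    """Indices of the `budget` most uncertain (highest-entropy) samples,
    i.e. the cases where a human annotation is expected to help most."""
    return np.argsort(-predictive_entropy(probs))[:budget]

# Unlabeled pool: model predictions for four images, two classes.
pool = np.array([[0.98, 0.02],    # confident
                 [0.55, 0.45],    # uncertain
                 [0.50, 0.50],    # most uncertain
                 [0.90, 0.10]])   # fairly confident
picked = select_for_annotation(pool, budget=2)
assert set(picked) == {1, 2}      # the two least confident samples
```

In a human-in-the-loop workflow, the selected cases are labeled by a clinician, added to the training set, the model is retrained, and the cycle repeats until the labeling budget is spent.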
Affiliation(s)
- Samuel Budd
- Department of Computing, Imperial College London, UK
|
27
|
Hamamoto R, Suvarna K, Yamada M, Kobayashi K, Shinkai N, Miyake M, Takahashi M, Jinnai S, Shimoyama R, Sakai A, Takasawa K, Bolatkan A, Shozu K, Dozen A, Machino H, Takahashi S, Asada K, Komatsu M, Sese J, Kaneko S. Application of Artificial Intelligence Technology in Oncology: Towards the Establishment of Precision Medicine. Cancers (Basel) 2020; 12:E3532. [PMID: 33256107 PMCID: PMC7760590 DOI: 10.3390/cancers12123532] [Citation(s) in RCA: 64] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Revised: 11/21/2020] [Accepted: 11/24/2020] [Indexed: 02/07/2023] Open
Abstract
In recent years, advances in artificial intelligence (AI) technology have led to the rapid clinical implementation of devices with AI technology in the medical field. More than 60 AI-equipped medical devices have already been approved by the Food and Drug Administration (FDA) in the United States, and the active introduction of AI technology is considered to be an inevitable trend in the future of medicine. In the field of oncology, clinical applications of medical devices using AI technology are already underway, mainly in radiology, and AI technology is expected to be positioned as an important core technology. In particular, "precision medicine," a medical treatment that selects the most appropriate treatment for each patient based on a vast amount of medical data such as genome information, has become a worldwide trend; AI technology is expected to be utilized in the process of extracting truly useful information from a large amount of medical data and applying it to diagnosis and treatment. In this review, we would like to introduce the history of AI technology and the current state of medical AI, especially in the oncology field, as well as discuss the possibilities and challenges of AI technology in the medical field.
Affiliation(s)
- Ryuji Hamamoto
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (M.Y.); (K.K.); (N.S.); (M.T.); (R.S.); (A.S.); (K.T.); (A.B.); (K.S.); (A.D.); (H.M.); (S.T.); (K.A.); (M.K.); (J.S.); (S.K.)
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Kruthi Suvarna
- Indian Institute of Technology Bombay, Powai, Mumbai 400 076, India
- Masayoshi Yamada
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Department of Endoscopy, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kazuma Kobayashi
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Norio Shinkai
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Mototaka Miyake
- Department of Diagnostic Radiology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masamichi Takahashi
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Department of Neurosurgery and Neuro-Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Shunichi Jinnai
- Department of Dermatologic Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ryo Shimoyama
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Akira Sakai
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ken Takasawa
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Amina Bolatkan
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Kanto Shozu
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ai Dozen
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Hidenori Machino
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Satoshi Takahashi
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Asada
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Masaaki Komatsu
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Jun Sese
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Humanome Lab, 2-4-10 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Syuzo Kaneko
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
|