1. Inomata S, Yoshimura T, Tang M, Ichikawa S, Sugimori H. Automatic Aortic Valve Extraction Using Deep Learning with Contrast-Enhanced Cardiac CT Images. J Cardiovasc Dev Dis 2024; 12:3. [PMID: 39852281] [PMCID: PMC11766280] [DOI: 10.3390/jcdd12010003]
Abstract
PURPOSE This study evaluates the use of deep learning techniques to automatically extract and delineate the aortic valve annulus region from contrast-enhanced cardiac CT images. Two approaches, namely, segmentation and object detection, were compared to determine their accuracy. MATERIALS AND METHODS A dataset of 32 contrast-enhanced cardiac CT scans was analyzed. The segmentation approach utilized the DeepLabv3+ model, while the object detection approach employed YOLOv2. The dataset was augmented through rotation and scaling, and five-fold cross-validation was applied. The accuracy of both methods was evaluated using the Dice similarity coefficient (DSC), and their performance in estimating the aortic valve annulus area was compared. RESULTS The object detection approach achieved a mean DSC of 0.809, significantly outperforming the segmentation approach, which had a mean DSC of 0.711. Object detection also demonstrated higher precision and recall, with fewer false positives and negatives. The aortic valve annulus area estimation had a mean error of 2.55 mm. CONCLUSIONS Object detection showed superior performance in identifying the aortic valve annulus region, suggesting its potential for clinical application in cardiac imaging. The results highlight the promise of deep learning in improving the accuracy and efficiency of preoperative planning for cardiovascular interventions.
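The headline comparison above rests on the Dice similarity coefficient (DSC). As a minimal sketch (the paper does not publish code), the DSC of two binary masks can be computed as below; the box_to_mask helper is a hypothetical convenience for rasterising a detector's bounding box so that segmentation and detection outputs can be scored with the same metric.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient, DSC = 2|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def box_to_mask(box, shape):
    """Hypothetical helper: rasterise an (x, y, w, h) bounding box into a
    binary mask so a detector's output can be scored with the same DSC."""
    x, y, w, h = box
    mask = np.zeros(shape, dtype=bool)
    mask[y:y + h, x:x + w] = True
    return mask
```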
Affiliation(s)
- Soichiro Inomata
- Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Takaaki Yoshimura
- Department of Health Sciences and Technology, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo 060-8648, Japan
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Minghui Tang
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Department of Diagnostic Imaging, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Shota Ichikawa
- Department of Radiological Technology, School of Health Sciences, Faculty of Medicine, Niigata University, Niigata 951-8518, Japan
- Institute for Research Administration, Niigata University, Niigata 950-2181, Japan
- Hiroyuki Sugimori
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Department of Biomedical Science and Engineering, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
2. Oura D, Gekka M, Sugimori H. The montage method improves the classification of suspected acute ischemic stroke using the convolution neural network and brain MRI. Radiol Phys Technol 2024; 17:297-305. [PMID: 37934345] [DOI: 10.1007/s12194-023-00754-x]
Abstract
This study investigated the usefulness of the montage method, which combines four different magnetic resonance images into a single image, for automatic acute ischemic stroke (AIS) diagnosis with deep learning. The montage image was composed of a diffusion-weighted image (DWI), fluid-attenuated inversion recovery (FLAIR), arterial spin labeling (ASL), and the apparent diffusion coefficient (ADC). The montage method was compared with a pseudo color map (pCM) composed of FLAIR, ASL, and ADC. A total of 473 AIS patients were classified into four categories: mechanical thrombectomy, conservative therapy, hemorrhage, and other diseases. The results showed that the montage image significantly outperformed the pCM in terms of accuracy (montage image = 0.76 ± 0.01, pCM = 0.54 ± 0.05) and area under the curve (AUC) (montage image = 0.94 ± 0.01, pCM = 0.76 ± 0.01). This study demonstrates the usefulness of the montage method and its potential for overcoming the limitations of the pCM.
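As a rough illustration of the montage idea (the exact tiling layout is not stated in the abstract; the 2x2 arrangement below is an assumption), the four co-registered sequences can be normalised and tiled into a single network input:

```python
import numpy as np

def make_montage(dwi, flair, asl, adc):
    """Tile four co-registered, equally sized MR slices into one image.
    Assumes a 2x2 layout; each sequence is min-max normalised first so
    no single contrast dominates the combined input."""
    def norm(img):
        img = img.astype(np.float32)
        rng = img.max() - img.min()
        return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)
    top = np.hstack([norm(dwi), norm(flair)])
    bottom = np.hstack([norm(asl), norm(adc)])
    return np.vstack([top, bottom])
```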
Affiliation(s)
- Daisuke Oura
- Department of Radiology, Otaru General Hospital, Otaru, 047-0152, Japan
- Graduate School of Health Sciences, Hokkaido University, Sapporo, 060-0812, Japan
- Masayuki Gekka
- Department of Neurosurgery, Otaru General Hospital, Otaru, 047-0152, Japan
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, Sapporo, 060-0812, Japan.
3. Inomata S, Yoshimura T, Tang M, Ichikawa S, Sugimori H. Estimation of Left and Right Ventricular Ejection Fractions from cine-MRI Using 3D-CNN. Sensors (Basel) 2023; 23:6580. [PMID: 37514888] [PMCID: PMC10384911] [DOI: 10.3390/s23146580]
Abstract
Cardiac function indices are conventionally calculated by tracing the ventricles on short-axis images in cine-MRI. A 3D-CNN (convolutional neural network), which adds time-series information to the images, can estimate these indices without tracing, using images with known values over the cardiac cycle as the input. Since the short-axis image depicts both the left and right ventricles, it is unclear which motion features are captured. This study aims to estimate the indices by learning from the short-axis images and the known left and right ventricular ejection fractions, and to confirm the accuracy and whether each index is captured as a feature. A total of 100 patients with publicly available short-axis cine images were used. The dataset was divided into training:test = 8:2, and a regression model was built by training the 3D-ResNet50. Accuracy was assessed using five-fold cross-validation. The correlation coefficient, MAE (mean absolute error), and RMSE (root mean squared error) were used as accuracy metrics. The mean correlation coefficient of the left ventricular ejection fraction was 0.80, the MAE was 9.41, and the RMSE was 12.26. The mean correlation coefficient of the right ventricular ejection fraction was 0.56, the MAE was 11.35, and the RMSE was 14.95. The correlation coefficient was considerably higher for the left ventricular ejection fraction. Regression modeling using the 3D-CNN indicated that the left ventricular ejection fraction was estimated more accurately, and that left ventricular systolic function was captured as a feature.
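The three accuracy metrics reported above are standard; a minimal sketch of how they would be computed for predicted versus known ejection fractions (assuming numpy arrays of per-patient values):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Correlation coefficient, MAE, and RMSE between known and
    CNN-estimated ejection fractions."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    r = np.corrcoef(y_true, y_pred)[0, 1]            # Pearson correlation
    mae = np.mean(np.abs(y_true - y_pred))           # mean absolute error
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))  # root mean squared error
    return r, mae, rmse
```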
Affiliation(s)
- Soichiro Inomata
- Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Takaaki Yoshimura
- Department of Health Sciences and Technology, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo 060-8648, Japan
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Minghui Tang
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Department of Diagnostic Imaging, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Shota Ichikawa
- Department of Radiological Technology, School of Health Sciences, Faculty of Medicine, Niigata University, Niigata 951-8518, Japan
- Institute for Research Administration, Niigata University, Niigata 950-2181, Japan
- Hiroyuki Sugimori
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Department of Biomedical Science and Engineering, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
4. Tian J, Afebu KO, Bickerdike A, Liu Y, Prasad S, Nelson BJ. Fundamentals of Bowel Cancer for Biomedical Engineers. Ann Biomed Eng 2023; 51:679-701. [PMID: 36786901] [PMCID: PMC9927048] [DOI: 10.1007/s10439-023-03155-8]
Abstract
Bowel cancer is a multifactorial disease arising from a combination of genetic predisposition and environmental factors. Detection of bowel cancer and its precursor lesions is predominantly performed by either visual inspection of the colonic mucosa during endoscopy or cross-sectional imaging. Most cases are diagnosed when the cancer is already at an advanced stage, because these modalities are less reliable for detecting lesions at the earliest stages, when they are typically small or flat. Removal of lesions at the earliest possible stage reduces the risk of cancer death, largely because it reduces the risk of subsequent metastasis. In this review, we summarised the origin of bowel cancer and the mechanism of its metastasis. In particular, we reviewed a broad spectrum of literature covering the biomechanics of bowel cancer and the measurement techniques pertinent to the successful development of a bowel cancer diagnostic device. We also reviewed existing bowel cancer diagnostic techniques available for clinical use. Finally, we outlined current clinical needs and highlighted the potential roles of medical robotics in early bowel cancer diagnosis.
Affiliation(s)
- Jiyuan Tian
- Engineering Department, University of Exeter, North Park Road, Exeter, EX4 4QF UK
- Andrew Bickerdike
- Engineering Department, University of Exeter, North Park Road, Exeter, EX4 4QF UK
- Yang Liu
- Engineering Department, University of Exeter, North Park Road, Exeter, EX4 4QF UK
- Shyam Prasad
- Royal Devon University Healthcare NHS Foundation Trust, Barrack Road, Exeter, EX2 5DW UK
- Bradley J. Nelson
- Multi-Scale Robotics Lab, ETH Zürich, Tannenstrasse 3, 8092 Zurich, Switzerland
5. Hirata K, Sugimori H, Fujima N, Toyonaga T, Kudo K. Artificial intelligence for nuclear medicine in oncology. Ann Nucl Med 2022; 36:123-132. [PMID: 35028877] [DOI: 10.1007/s12149-021-01693-6]
Abstract
As in all other medical fields, artificial intelligence (AI) is increasingly being used in nuclear medicine for oncology. Many articles discuss AI from the viewpoint of nuclear medicine, but few examine nuclear medicine from the viewpoint of AI. Nuclear medicine images are characterized by their low spatial resolution and high quantitativeness. Notably, AI was in use in this field even before the emergence of deep learning. AI can be divided into three categories by purpose: (1) assisted interpretation, i.e., computer-aided detection (CADe) or computer-aided diagnosis (CADx); (2) additional insight, i.e., AI that provides information beyond the radiologist's eye, such as predicting genes and prognosis from images, a task closely related to the field of radiomics/radiogenomics; and (3) augmented images, i.e., image-generation tasks. Before AI can be put to practical use, harmonization between facilities and the explainability of black-box models need to be resolved.
Affiliation(s)
- Kenji Hirata
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan
- Division of Medical AI Education and Research, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Noriyuki Fujima
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Kohsuke Kudo
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Division of Medical AI Education and Research, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Global Center for Biomedical Science and Engineering, Hokkaido University Faculty of Medicine, Sapporo, Japan
6. Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866] [DOI: 10.1016/j.cpet.2021.09.010]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping including the identification of subtle patterns. AI-based detection searches the image space to find the regions of interest based on patterns and features. There is a spectrum of tumor histologies from benign to malignant that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives way to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss needed efforts to enable the translation of AI techniques to routine clinical workflows, and potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
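To make the "explicit (handcrafted/engineered)" branch of radiomics concrete, here is a toy sketch of a few first-order features computed over an ROI mask. It is illustrative only: real pipelines use standardised toolkits with far larger feature sets, and the bin count below is an arbitrary choice.

```python
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray, bins: int = 64):
    """A handful of first-order radiomic features from the voxels inside a
    (non-empty) ROI mask. Not a standardised feature definition."""
    voxels = image[mask.astype(bool)].astype(np.float64)
    hist, _ = np.histogram(voxels, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking logs
    std = voxels.std()
    return {
        "mean": voxels.mean(),
        "variance": voxels.var(),
        "skewness": ((voxels - voxels.mean()) ** 3).mean() / std ** 3 if std > 0 else 0.0,
        "entropy": -(p * np.log2(p)).sum(),
    }
```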
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
- Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
7. Sugimori H, Shimizu K, Makita H, Suzuki M, Konno S. A Comparative Evaluation of Computed Tomography Images for the Classification of Spirometric Severity of the Chronic Obstructive Pulmonary Disease with Deep Learning. Diagnostics (Basel) 2021; 11:929. [PMID: 34064240] [PMCID: PMC8224354] [DOI: 10.3390/diagnostics11060929]
Abstract
Recently, deep learning has been widely applied in medical imaging. However, whether it is sufficient to simply input the entire image, or whether the supervised images need to be preprocessed, has not been sufficiently studied. This study aimed to create classifiers trained with and without preprocessing for the Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification using CT images, and to evaluate the classification accuracy of the GOLD classification by confusion matrix. Eighty patients were divided into four groups (n = 20 each) according to former GOLD stage: GOLD 0, GOLD 1, GOLD 2, and GOLD 3 or 4. The classification models were created by transfer learning of the ResNet50 network architecture and evaluated by confusion matrix and AUC. Moreover, the confusion matrix rearranged for former GOLD 0 versus GOLD ≥ 1 was evaluated by the same procedure. The AUCs of the original and threshold images for the four-class analysis were 0.61 ± 0.13 and 0.64 ± 0.10, respectively, and the AUCs for the two-class classification of former GOLD 0 versus GOLD ≥ 1 were 0.64 ± 0.06 and 0.68 ± 0.12, respectively. In the two-class classification with threshold images, recall and precision were over 0.8 for GOLD ≥ 1, and the McNemar–Bowker test showed some symmetry. These results suggest that the preprocessed threshold image, rather than the unprocessed image input to the convolutional neural network (CNN), could potentially be used as a screening tool for GOLD classification without pulmonary function tests.
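A minimal transfer-learning sketch consistent with the setup described, assuming PyTorch/torchvision (the abstract does not name the framework); the HU window in threshold_preprocess is a hypothetical placeholder, since the exact threshold the authors applied is not given in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # former GOLD 0, 1, 2, and 3-or-4 (n = 20 each)

# Start from ImageNet weights and replace only the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

def threshold_preprocess(ct_slice: torch.Tensor,
                         low: float = -950.0, high: float = -700.0) -> torch.Tensor:
    """Hypothetical low-attenuation windowing: keep voxels within [low, high]
    HU and zero everything else, producing the 'threshold image' input."""
    keep = (ct_slice >= low) & (ct_slice <= high)
    return torch.where(keep, ct_slice, torch.zeros_like(ct_slice))
```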
Affiliation(s)
- Hiroyuki Sugimori
- Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Kaoruko Shimizu
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Correspondence: Tel.: +81-11-706-5911
- Hironi Makita
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Hokkaido Medical Research Institute for Respiratory Diseases, Sapporo 064-0807, Japan
- Masaru Suzuki
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Satoshi Konno
- Department of Respiratory Medicine, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan