1
Yamada A, Hanaoka S, Takenaga T, Miki S, Yoshikawa T, Nomura Y. Investigation of distributed learning for automated lesion detection in head MR images. Radiol Phys Technol 2024. [PMID: 39048847] [DOI: 10.1007/s12194-024-00827-5] [Received: 03/21/2024] [Revised: 06/11/2024] [Accepted: 07/14/2024] [Indexed: 07/27/2024]
Abstract
In this study, we investigated the application of distributed learning, including federated learning and cyclical weight transfer, in the development of computer-aided detection (CADe) software for (1) cerebral aneurysm detection in magnetic resonance (MR) angiography images and (2) brain metastasis detection in brain contrast-enhanced MR images. We used datasets collected from various institutions, scanner vendors, and magnetic field strengths for each target CADe software. We compared the performance of multiple strategies, including a centralized strategy, in which software development is conducted at a development institution after collecting de-identified data from multiple institutions. Our results showed that the performance of CADe software trained through distributed learning was equal to or better than that trained through the centralized strategy. However, the distributed learning strategy that achieved the highest performance depended on the target CADe software. Hence, distributed learning is a viable strategy for developing CADe software using data collected from multiple institutions.
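For readers unfamiliar with the two distributed learning schemes compared in the abstract above, a minimal sketch of their update rules follows. The `toy_update` step merely nudges a weight vector toward a per-site target and stands in for local training; none of this code is taken from the study.

```python
def toy_update(w, target, lr=0.5):
    """Stand-in for local training at one site: nudge the weights
    toward that site's optimum (here, a fixed target vector)."""
    return [wi + lr * (ti - wi) for wi, ti in zip(w, target)]

def federated_averaging(site_weights):
    """One federated learning round: every site updates a copy of the
    model in parallel, then a server averages the resulting weights."""
    n = len(site_weights)
    return [sum(ws) / n for ws in zip(*site_weights)]

def cyclical_weight_transfer(sites, w, local_update, cycles=2):
    """Cyclical weight transfer: a single model visits each site in
    turn, carrying its weights forward instead of averaging."""
    for _ in range(cycles):
        for site in sites:
            w = local_update(w, site)
    return w

sites = [[1.0, 0.0], [0.0, 1.0]]   # two sites with different optima
w0 = [0.0, 0.0]
w_fed = federated_averaging([toy_update(w0, s) for s in sites])
w_cwt = cyclical_weight_transfer(sites, w0, toy_update, cycles=3)
```

The key design difference is visible in the loop structure: federated averaging keeps site updates independent and merges them, while cyclical weight transfer makes the order of site visits matter.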
Affiliation(s)
- Aiki Yamada
- Department of Medical Engineering, Graduate School of Science and Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan.
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Tomomi Takenaga
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan
2
Diot-Dejonghe T, Leporq B, Bouhamama A, Ratiney H, Pilleul F, Beuf O, Cervenansky F. Development of a Secure Web-Based Medical Imaging Analysis Platform: The AWESOMME Project. J Imaging Inform Med 2024. [PMID: 38689149] [DOI: 10.1007/s10278-024-01110-0] [Received: 01/09/2024] [Revised: 03/12/2024] [Accepted: 04/02/2024] [Indexed: 05/02/2024]
Abstract
Precision medicine research benefits from machine learning in the creation of robust models adapted to the processing of patient data. This applies both to pathology identification in images, i.e., annotation or segmentation, and to computer-aided diagnosis for classification or prediction. It comes with a strong need to exploit and visualize large volumes of images and associated medical data. The work carried out in this paper follows on from a main case study piloted in a cancer center. It proposes an analysis pipeline for patients with osteosarcoma comprising segmentation, feature extraction, and application of a deep learning model to predict response to treatment. The main aim of the AWESOMME project is to leverage this work and implement the pipeline on an easy-to-access, secure web platform. The proposed web application is based on a three-component architecture: a data server, a heavy-computation and authentication server, and a medical imaging web framework with a user interface. These existing components have been enhanced to meet the security and traceability needs of continuous expert-data production. The platform innovates by covering all steps of medical image processing (visualization and segmentation, feature extraction, and aided diagnosis) and enables the testing and use of machine learning models. The infrastructure is operational, deployed in internal production, and currently being installed in the hospital environment. The extension of the case study and user feedback enabled us to fine-tune functionalities and proved that AWESOMME is a modular solution capable of analyzing medical data and sharing research algorithms with in-house clinicians.
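The osteosarcoma pipeline described above (segmentation, then feature extraction, then response prediction) can be sketched as three chained stages. All function names, thresholds, and the toy classifier rule below are hypothetical placeholders, not part of the AWESOMME code base.

```python
def segment(image):
    """Placeholder segmentation: mark voxels above a fixed threshold."""
    return [v > 0.5 for v in image]

def extract_features(image, mask):
    """Radiomics-style summary features of the segmented region."""
    vals = [v for v, inside in zip(image, mask) if inside]
    return {"volume": len(vals), "mean_intensity": sum(vals) / len(vals)}

def predict_response(features):
    """Placeholder for the deep learning model: small lesions respond."""
    return "responder" if features["volume"] < 4 else "non-responder"

image = [0.1, 0.9, 0.8, 0.2, 0.7]   # a 1-D stand-in for an MR image
mask = segment(image)
features = extract_features(image, mask)
label = predict_response(features)
```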
Affiliation(s)
- Tiphaine Diot-Dejonghe
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Benjamin Leporq
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Amine Bouhamama
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Department of Radiology, Centre Léon Bérard, 28 Prom. Léa et Napoléon Bullukian, Lyon, 69008, France
- Helene Ratiney
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Frank Pilleul
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Department of Radiology, Centre Léon Bérard, 28 Prom. Léa et Napoléon Bullukian, Lyon, 69008, France
- Olivier Beuf
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
- Frederic Cervenansky
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69XXX, France
3
Nomura Y, Hanaoka S, Hayashi N, Yoshikawa T, Koshino S, Sato C, Tatsuta M, Tanaka Y, Kano S, Nakaya M, Inui S, Kusakabe M, Nakao T, Miki S, Watadani T, Nakaoka R, Shimizu A, Abe O. Performance changes due to differences among annotating radiologists for training data in computerized lesion detection. Int J Comput Assist Radiol Surg 2024. [PMID: 38625446] [DOI: 10.1007/s11548-024-03136-9] [Received: 01/04/2024] [Accepted: 03/28/2024] [Indexed: 04/17/2024]
Abstract
PURPOSE The quality and bias of annotations by annotators (e.g., radiologists) affect the performance of computer-aided detection (CAD) software developed using machine learning. We hypothesized that differences in the years of experience in image interpretation among radiologists contribute to annotation variability. In this study, we focused on how the performance of CAD software changes with retraining by incorporating cases annotated by radiologists with varying experience. METHODS We used two types of CAD software for lung nodule detection in chest computed tomography images and cerebral aneurysm detection in magnetic resonance angiography images. Twelve radiologists with different years of experience independently annotated the lesions, and the performance changes were investigated by repeating the retraining of the CAD software twice, with the addition of cases annotated by each radiologist. Additionally, we investigated the effects of retraining using integrated annotations from multiple radiologists. RESULTS The performance of the CAD software after retraining differed among annotating radiologists. In some cases, the performance was degraded compared to that of the initial software. Retraining using integrated annotations showed different performance trends depending on the target CAD software; notably, in cerebral aneurysm detection, the performance decreased compared to using annotations from a single radiologist. CONCLUSIONS Although the performance of the CAD software after retraining varied among the annotating radiologists, no direct correlation with their experience was found. The performance trends differed according to the type of CAD software used when integrated annotations from multiple radiologists were used.
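One common way to integrate annotations from several radiologists, and a plausible reading of "integrated annotations" above (the paper's exact rule may differ), is per-voxel majority voting over binary lesion masks:

```python
def integrate_annotations(masks, min_votes=2):
    """Keep a voxel in the consensus mask if at least `min_votes`
    annotators marked it as lesion."""
    return [sum(votes) >= min_votes for votes in zip(*masks)]

# Three annotators who disagree about the lesion border:
reader_a = [0, 1, 1, 1, 0]
reader_b = [0, 0, 1, 1, 1]
reader_c = [0, 1, 1, 0, 0]
consensus = integrate_annotations([reader_a, reader_b, reader_c])
```

Raising `min_votes` trades sensitivity for specificity of the consensus mask, which is one mechanism by which integration can help one detection task and hurt another.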
Affiliation(s)
- Yukihiro Nomura
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan.
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Saori Koshino
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Chiaki Sato
- Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, Tokyo, Japan
- Momoko Tatsuta
- Department of Diagnostic Radiology, Kitasato University Hospital, Sagamihara, Kanagawa, Japan
- Yuya Tanaka
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Shintaro Kano
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Moto Nakaya
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Shohei Inui
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryusuke Nakaoka
- Division of Medical Devices, National Institute of Health Sciences, Kawasaki, Kanagawa, Japan
- Akinobu Shimizu
- Institute of Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
4
Automated volume measurement of abdominal adipose tissue from entire abdominal cavity in Dixon MR images using deep learning. Radiol Phys Technol 2023;16:28-38. [PMID: 36344662] [DOI: 10.1007/s12194-022-00687-x] [Received: 08/08/2022] [Revised: 11/02/2022] [Accepted: 11/03/2022] [Indexed: 11/11/2022]
Abstract
The purpose of this study was to realize automated volume measurement of abdominal adipose tissue from the entire abdominal cavity in Dixon magnetic resonance (MR) images using deep learning. Our algorithm combines extraction of the abdominal cavity and body trunk regions using deep learning with extraction of a fat region based on automatic thresholding. To evaluate the proposed method, we calculated the Dice coefficient (DC) between the regions extracted using deep learning and the labeled images. We also compared the visceral adipose tissue (VAT) and subcutaneous adipose tissue volumes calculated by the proposed method with those calculated from computed tomography (CT) images scanned on the same day using the automatic calculation method previously developed by our group. We implemented our method as a plug-in in a web-based medical image processing platform. The DCs of the abdominal cavity and body trunk regions were 0.952 ± 0.014 and 0.995 ± 0.002, respectively. The VAT volume measured from MR images using the proposed method was almost equivalent to that measured from CT images. The time required for our plug-in to process the test set was 118.9 ± 28.0 s. Using our proposed method, the VAT volume measured from MR images can be an alternative to that measured from CT images.
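The measurement step described above, thresholding fat signal inside a deep-learning-extracted cavity mask, can be sketched as follows. The fat-fraction formulation F / (F + W) and the 0.5 threshold are illustrative assumptions about Dixon processing, not values taken from the paper.

```python
def adipose_volume_ml(fat, water, cavity_mask, voxel_ml, threshold=0.5):
    """Count voxels inside the cavity mask whose Dixon fat fraction
    F / (F + W) exceeds the threshold, and convert to milliliters."""
    count = 0
    for f, w, inside in zip(fat, water, cavity_mask):
        if inside and f / (f + w) > threshold:
            count += 1
    return count * voxel_ml

fat = [90.0, 10.0, 80.0, 5.0]       # fat-only Dixon signal per voxel
water = [10.0, 90.0, 20.0, 95.0]    # water-only Dixon signal per voxel
cavity = [True, True, True, False]  # abdominal cavity mask from the CNN
vat_ml = adipose_volume_ml(fat, water, cavity, voxel_ml=2.0)
```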
5
Kikuchi T, Hanaoka S, Nakao T, Nomura Y, Yoshikawa T, Alam A, Mori H, Hayashi N. Significance of FDG-PET standardized uptake values in predicting thyroid disease. Eur Thyroid J 2023;12:ETJ-22-0165. [PMID: 36562641] [PMCID: PMC9986380] [DOI: 10.1530/etj-22-0165] [Received: 12/19/2022] [Accepted: 12/23/2022] [Indexed: 12/24/2022]
Abstract
OBJECTIVE This study aimed to determine a standardized cut-off value for abnormal 18F-fluorodeoxyglucose (FDG) accumulation in the thyroid gland. METHODS Herein, 7013 FDG-PET/CT scans were included. An automatic thyroid segmentation method using two U-Nets (2D and 3D) was constructed; the mean FDG standardized uptake value (SUV), CT value, and volume of the thyroid gland were obtained for each participant. The values were categorized by thyroid function into three groups based on serum thyroid-stimulating hormone levels. Thyroid function and mean SUV in increments of 1 were analyzed, and the risk of thyroid dysfunction was calculated. Thyroid dysfunction detection ability was examined using a machine learning method (LightGBM, Microsoft) with age, sex, height, weight, CT value, volume, and mean SUV as explanatory variables. RESULTS Mean SUV was significantly higher in females with hypothyroidism. Almost 98.9% of participants in the normal group had mean SUV < 2, and 93.8% of participants with mean SUV < 2 had normal thyroid function. The hypothyroidism group had more cases with mean SUV ≥ 2. The relative risk of abnormal thyroid function was 4.6 with mean SUV ≥ 2. The sensitivity and specificity for detecting thyroid dysfunction using LightGBM were 14.5% and 99%, respectively. CONCLUSIONS Mean SUV ≥ 2 was strongly associated with abnormal thyroid function in this large cohort, indicating that mean SUV with FDG-PET/CT can be used as a criterion for thyroid evaluation. Preliminarily, this study shows the potential utility of detecting thyroid dysfunction based on imaging findings.
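The relative risk quoted above is the ratio of dysfunction rates between the mean SUV ≥ 2 and mean SUV < 2 groups. The counts below are invented for illustration only; they are not the study's data.

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio: P(dysfunction | SUV >= 2) / P(dysfunction | SUV < 2)."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Hypothetical 2x2 counts: 30/100 dysfunctional among SUV >= 2,
# 65/1000 dysfunctional among SUV < 2.
rr = relative_risk(exposed_cases=30, exposed_total=100,
                   unexposed_cases=65, unexposed_total=1000)
```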
Affiliation(s)
- Tomohiro Kikuchi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Department of Radiology, Jichi Medical University, School of Medicine, Shimotsuke, Tochigi, Japan
- Correspondence should be addressed to Tomohiro Kikuchi:
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Center for Frontier Medical Engineering, Chiba University, Yayoi-cho, Inage-ku, Chiba, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Ashraful Alam
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Harushi Mori
- Department of Radiology, Jichi Medical University, School of Medicine, Shimotsuke, Tochigi, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
6
Nomura Y, Hanaoka S, Takenaga T, Nakao T, Shibata H, Miki S, Yoshikawa T, Watadani T, Hayashi N, Abe O. Preliminary study of generalized semiautomatic segmentation for 3D voxel labeling of lesions based on deep learning. Int J Comput Assist Radiol Surg 2021;16:1901-1913. [PMID: 34652606] [DOI: 10.1007/s11548-021-02504-z] [Received: 04/08/2021] [Accepted: 09/17/2021] [Indexed: 11/28/2022]
Abstract
PURPOSE The three-dimensional (3D) voxel labeling of lesions requires significant effort from radiologists in the development of computer-aided detection software. To reduce the time required for 3D voxel labeling, we aimed to develop a generalized semiautomatic segmentation method based on deep learning via a data augmentation-based domain generalization framework. In this study, we investigated whether a generalized semiautomatic segmentation model trained using two types of lesion can segment previously unseen types of lesion. METHODS We targeted lung nodules in chest CT images, liver lesions in hepatobiliary-phase images of Gd-EOB-DTPA-enhanced MR imaging, and brain metastases in contrast-enhanced MR images. For each lesion, the 32 × 32 × 32 isotropic volume of interest (VOI) around the center of gravity of the lesion was extracted. The VOI was input into a 3D U-Net model to define the label of the lesion. For each type of target lesion, we compared five types of data augmentation and two types of input data. RESULTS For all target lesions, the highest Dice coefficients among the training patterns were obtained when combining the existing data augmentation-based domain generalization framework with random monochrome inversion and when using the resized VOI as the input image. The Dice coefficients were 0.639 ± 0.124 for the lung nodules, 0.660 ± 0.137 for the liver lesions, and 0.727 ± 0.115 for the brain metastases. CONCLUSIONS Our generalized semiautomatic segmentation model could label three types of unseen lesion with different contrasts from their surroundings. In addition, using the resized VOI as the input image enables adaptation to various lesion sizes even when the size distribution differs between the training set and the test set.
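The VOI extraction step described in the methods, a 32 × 32 × 32 cube around the lesion's center of gravity, might look like the sketch below. Zero-padding at the image border is our assumption; the abstract does not say how edges are handled.

```python
import numpy as np

def extract_voi(volume, center, size=32):
    """Cut an isotropic size^3 cube around `center`, zero-padding
    wherever the cube extends past the image border."""
    half = size // 2
    padded = np.pad(volume, half, mode="constant")   # guard all borders
    c = [int(round(x)) + half for x in center]       # shift for the pad
    return padded[c[0] - half:c[0] + half,
                  c[1] - half:c[1] + half,
                  c[2] - half:c[2] + half]

vol = np.random.default_rng(0).random((64, 64, 64))
voi = extract_voi(vol, center=(5, 60, 32))           # lesion near two borders
```

The padding keeps the output shape fixed at (32, 32, 32) even for lesions near the image edge, with the lesion centroid mapped to the cube's central voxel.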
Affiliation(s)
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Tomomi Takenaga
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Hisaichi Shibata
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
7
Nomura Y, Hanaoka S, Nakao T, Hayashi N, Yoshikawa T, Miki S, Watadani T, Abe O. Performance changes due to differences in training data for cerebral aneurysm detection in head MR angiography images. Jpn J Radiol 2021;39:1039-1048. [PMID: 34125368] [DOI: 10.1007/s11604-021-01153-1] [Received: 02/25/2021] [Accepted: 06/08/2021] [Indexed: 01/10/2023]
Abstract
PURPOSE The performance of computer-aided detection (CAD) software depends on the quality and quantity of the dataset used for machine learning. If the data characteristics differ between development and practical use, the performance of CAD software degrades. In this study, we investigated changes in detection performance due to differences in training data for cerebral aneurysm detection software in head magnetic resonance angiography images. MATERIALS AND METHODS We utilized three types of CAD software for cerebral aneurysm detection in MRA images, based on 3D local intensity structure analysis, graph-based features, and a convolutional neural network. For each type of CAD software, we compared three training patterns: two using single-site data and one using multisite data. We also carried out internal and external evaluations. RESULTS In training using single-site data, the performance of CAD software fluctuated widely and unpredictably when the training dataset was changed. Training using multisite data did not show the lowest performance among the three training patterns for any CAD software and dataset. CONCLUSION Training cerebral aneurysm detection software on data collected from multiple sites is desirable to ensure stable software performance.
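The internal/external evaluation above hinges on splitting data by site: pooling some sites for training while holding another site out entirely. A minimal sketch (the dictionary field names are hypothetical):

```python
def split_by_site(cases, external_site):
    """Hold one site's cases out entirely for external evaluation and
    pool the remaining sites for training."""
    train = [c for c in cases if c["site"] != external_site]
    external = [c for c in cases if c["site"] == external_site]
    return train, external

cases = [{"site": "A", "id": 1}, {"site": "B", "id": 2},
         {"site": "A", "id": 3}, {"site": "C", "id": 4}]
train, external = split_by_site(cases, external_site="C")
```

Because the external site contributes nothing to training, its performance estimate reflects how the software would behave on data from an institution it has never seen.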
Affiliation(s)
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
8
Miki S, Nakao T, Nomura Y, Okimoto N, Nyunoya K, Nakamura Y, Kurokawa R, Amemiya S, Yoshikawa T, Hanaoka S, Hayashi N, Abe O. Computer-aided detection of cerebral aneurysms with magnetic resonance angiography: usefulness of volume rendering to display lesion candidates. Jpn J Radiol 2021;39:652-658. [PMID: 33638771] [DOI: 10.1007/s11604-021-01099-4] [Received: 12/03/2020] [Accepted: 01/29/2021] [Indexed: 11/29/2022]
Abstract
PURPOSE The clinical usefulness of computer-aided detection of cerebral aneurysms has been investigated using different methods to present lesion candidates, but suboptimal methods may have limited its usefulness. We compared three presentation methods to determine which can benefit radiologists the most by enabling them to detect more aneurysms. MATERIALS AND METHODS We conducted a multireader, multicase observer performance study involving six radiologists and 470 lesion candidates output by a computer-aided detection program, and compared the following three presentation methods using receiver operating characteristic analysis: (1) a lesion candidate is encircled on axial slices, (2) a lesion candidate is overlaid on a volume-rendered image, and (3) a combination of (1) and (2). The response time was also compared. RESULTS Compared with axial slices, radiologists showed significantly better detection performance when presented with volume-rendered images. There was no significant difference in response time between the two methods. The combined method was associated with a significantly longer response time but had no added merit in terms of diagnostic accuracy. CONCLUSION Even with the aid of computer-aided detection, radiologists overlook many aneurysms if the presentation method is not optimal. Overlaying colored lesion candidates on volume-rendered images can help them detect more aneurysms.
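The receiver operating characteristic analysis above reduces each reader's confidence ratings to an area under the curve; one standard nonparametric estimate is the Mann-Whitney statistic. The ratings below are invented for illustration and are not the study's data.

```python
def auc_from_ratings(lesion_scores, nonlesion_scores):
    """Nonparametric AUC: the probability that a true lesion receives a
    higher confidence rating than a non-lesion, ties counted as 1/2."""
    wins = 0.0
    for p in lesion_scores:
        for n in nonlesion_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(lesion_scores) * len(nonlesion_scores))

# Hypothetical 5-point confidence ratings from one reader:
auc = auc_from_ratings(lesion_scores=[5, 4, 4, 3],
                       nonlesion_scores=[1, 2, 3, 2])
```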
Affiliation(s)
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naomasa Okimoto
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Keisuke Nyunoya
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Yuta Nakamura
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Ryo Kurokawa
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shiori Amemiya
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan