1
Hirano Y, Fujima N, Kameda H, Ishizaka K, Kwon J, Yoneyama M, Kudo K. High Resolution TOF-MRA Using Compressed Sensing-based Deep Learning Image Reconstruction for the Visualization of Lenticulostriate Arteries: A Preliminary Study. Magn Reson Med Sci 2024:mp.2024-0025. [PMID: 39034144] [DOI: 10.2463/mrms.mp.2024-0025]
Abstract
PURPOSE: To investigate the visibility of the lenticulostriate arteries (LSAs) in time-of-flight (TOF)-MR angiography (MRA) using compressed sensing (CS)-based deep learning (DL) image reconstruction by comparing its image quality with that obtained by the conventional CS algorithm.
METHODS: Five healthy volunteers were included. High-resolution TOF-MRA images with a reduction (R)-factor of 1 were acquired as full-sampling data. Images with R-factors of 2, 4, and 6 were then reconstructed using CS-DL and conventional CS (the combination of CS and sensitivity encoding; CS-SENSE) reconstruction, respectively. In the quantitative assessment, the number of visible LSAs (identified by two radiologists), the length of each depicted LSA (evaluated by one radiological technologist), and the normalized mean squared error (NMSE) value were assessed. In the qualitative assessment, the overall image quality and the visibility of the peripheral LSAs were visually evaluated by two radiologists.
RESULTS: In the quantitative assessment, the number of visible LSAs in the CS-DL images was significantly higher than with CS-SENSE at R-factors of 4 and 6 (Reader 1) and at an R-factor of 6 (Reader 2). The length of the depicted LSAs in the CS-DL images was significantly longer at an R-factor of 6 than with CS-SENSE. The NMSE value in CS-DL was significantly lower than in CS-SENSE at R-factors of 4 and 6. In the qualitative assessment, the overall image quality of the CS-DL images was significantly higher than with CS-SENSE at R-factors of 4 and 6 (Reader 1) and at an R-factor of 4 (Reader 2). The visibility of the peripheral LSAs was significantly higher than with CS-SENSE at all R-factors (Reader 1) and at R-factors of 2 and 4 (Reader 2).
CONCLUSION: CS-DL reconstruction preserved image quality for the depiction of LSAs better than conventional CS-SENSE as the R-factor increased.
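The NMSE metric used in the quantitative assessment can be sketched as follows. This is a minimal illustration of the standard NMSE definition against fully sampled reference data; the abstract does not spell out the exact formula used, and the array values below are hypothetical:

```python
import numpy as np

def nmse(reference: np.ndarray, reconstruction: np.ndarray) -> float:
    """Normalized mean squared error between a fully sampled reference
    image (R-factor 1) and an undersampled reconstruction.
    Lower values indicate a reconstruction closer to the reference."""
    return float(np.sum((reference - reconstruction) ** 2) / np.sum(reference ** 2))

# A perfect reconstruction yields zero error.
ref = np.array([[1.0, 2.0], [3.0, 4.0]])
print(nmse(ref, ref))  # → 0.0
```

Under this definition, the significantly lower NMSE reported for CS-DL at R-factors of 4 and 6 means its reconstructions deviated less from the full-sampling data than those of CS-SENSE.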
Affiliation(s)
- Yuya Hirano
- Department of Radiological Technology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Hiroyuki Kameda
- Faculty of Dental Medicine, Department of Radiology, Hokkaido University, Sapporo, Hokkaido, Japan
- Kinya Ishizaka
- Department of Radiological Technology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Kohsuke Kudo
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Sapporo, Hokkaido, Japan
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
2
Walston SL, Seki H, Takita H, Mitsuyama Y, Sato S, Hagiwara A, Ito R, Hanaoka S, Miki Y, Ueda D. Data set terminology of deep learning in medicine: a historical review and recommendation. Jpn J Radiol 2024:10.1007/s11604-024-01608-1. [PMID: 38856878] [DOI: 10.1007/s11604-024-01608-1]
Abstract
Medicine and deep learning-based artificial intelligence (AI) engineering are two distinct fields, each with decades of published history. The current rapid convergence of deep learning and medicine has led to significant advancements, yet it has also introduced ambiguity regarding data set terms common to both fields, potentially leading to miscommunication and methodological discrepancies. This narrative review aims to give historical context for these terms, accentuate the importance of clarity when they are used in medical deep learning contexts, and offer solutions to mitigate misunderstandings by readers from either field. Through an examination of historical documents, including articles, writing guidelines, and textbooks, the review traces the divergent evolution of data set terms and their impact. Initially, the discordant interpretations of the word 'validation' in medical and AI contexts are explored. We then show that terms traditionally used in the deep learning domain are becoming more common in the medical field as well, with the data used to create models referred to as the 'training set', the data used to tune hyperparameters as the 'validation (or tuning) set', and the data used to evaluate models as the 'test set'. Additionally, test sets are classified into internal (random splitting, cross-validation, and leave-one-out) sets and external (temporal and geographic) sets. The review then identifies often-misunderstood terms and proposes pragmatic solutions to mitigate terminological confusion in the field of deep learning in medicine. We support the accurate and standardized description of these data sets and the explicit definition of data set splitting terminology in each publication; these are crucial for demonstrating the robustness and generalizability of deep learning applications in medicine. This review aspires to enhance the precision of communication, thereby fostering more effective and transparent research methodologies in this interdisciplinary field.
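The training/validation/test terminology the review recommends can be illustrated with a minimal sketch of an internal (random-splitting) partition. The fractions and seed below are arbitrary placeholders, not values from the review:

```python
import numpy as np

def split_dataset(n_samples: int, train_frac: float = 0.7,
                  val_frac: float = 0.15, seed: int = 0):
    """Random internal split into index sets, using the terms in their
    deep-learning sense as described in the review."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(round(train_frac * n_samples))
    n_val = int(round(val_frac * n_samples))
    return (idx[:n_train],                 # training set: fit model weights
            idx[n_train:n_train + n_val],  # validation (tuning) set: tune hyperparameters
            idx[n_train + n_val:])         # test set: final evaluation only

train_idx, val_idx, test_idx = split_dataset(100)  # 70 / 15 / 15 samples
```

External test sets (temporal or geographic), by contrast, come from a different acquisition period or institution and cannot be produced by index shuffling alone.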
Affiliation(s)
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hiroshi Seki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hirotaka Takita
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yasuhito Mitsuyama
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Shingo Sato
- Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
- Akifumi Hagiwara
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University, Nagoya, Japan
- Shouhei Hanaoka
- Department of Radiology, University of Tokyo Hospital, Tokyo, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Department of Artificial Intelligence, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
3
Kamagata K, Naganawa S. Overview of Target-Oriented Project Group United for Nippon (TOP GUN): fostering interdisciplinary collaboration among young researchers in radiology on timely topics. Jpn J Radiol 2024:10.1007/s11604-024-01580-w. [PMID: 38780724] [DOI: 10.1007/s11604-024-01580-w]
Affiliation(s)
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, Japan.
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
4
Hirano Y, Fujima N, Ishizaka K, Aoike T, Nakagawa J, Yoneyama M, Kudo K. Utility of Echo Planar Imaging With Compressed Sensing-Sensitivity Encoding (EPICS) for the Evaluation of the Head and Neck Region. Cureus 2024; 16:e54203. [PMID: 38371431] [PMCID: PMC10869950] [DOI: 10.7759/cureus.54203]
Abstract
Purpose: This study aimed to compare the image quality of echo planar imaging (EPI) with compressed sensing-sensitivity encoding (EPICS)-based diffusion-weighted imaging (DWI) against conventional parallel imaging (PI)-based DWI of the head and neck.
Materials and methods: Ten healthy volunteers participated in this study. EPICS-DWI was acquired with an axial spin-echo EPI sequence at EPICS acceleration factors of 2, 3, and 4. Conventional PI-DWI was acquired with the same acceleration factors. Quantitative assessment was performed by measuring the signal-to-noise ratio (SNR) and apparent diffusion coefficient (ADC) in a circular region of interest (ROI) on the parotid and submandibular glands. For qualitative evaluation, a three-point visual grading system was used to assess (1) overall image quality and (2) degree of image distortion.
Results: In the quantitative assessment, the SNR of the parotid gland in EPICS-DWI was significantly higher than in PI-DWI at acceleration factors of 3 and 4 (p<0.05). No significant differences in ADC values were observed between EPICS-DWI and PI-DWI. In the qualitative assessment, the overall image quality of EPICS-DWI was significantly higher than that of PI-DWI at acceleration factors of 3 and 4 (p<0.05). Image distortion was significantly larger in EPICS-DWI at an acceleration factor of 2 than at 3 or 4 (p<0.01).
Conclusion: With appropriate parameter settings, EPICS-DWI demonstrated higher SNR and better overall image quality for head and neck imaging than PI-DWI, without increasing image distortion.
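The circular-ROI SNR measurement can be sketched as below. The SNR definition used (mean signal in the tissue ROI over the standard deviation in a background ROI) is one common convention and an assumption here, since the abstract does not state the exact formula; all shapes and coordinates are hypothetical:

```python
import numpy as np

def circular_roi_mask(shape, center, radius):
    """Boolean mask of a circular region of interest on a 2-D image."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def roi_snr(image, signal_mask, noise_mask):
    """SNR as mean signal in the tissue ROI divided by the standard
    deviation of intensities in a background (noise) ROI."""
    return float(image[signal_mask].mean() / image[noise_mask].std())

# Hypothetical usage: ROI on a gland vs. an air/background ROI.
image = np.zeros((20, 20))
gland = circular_roi_mask(image.shape, center=(5, 5), radius=3)
background = circular_roi_mask(image.shape, center=(14, 14), radius=3)
```

A single-ROI variant (mean divided by standard deviation within the same ROI) is also used in the literature; which one the study applied is not specified in the abstract.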
Affiliation(s)
- Yuya Hirano
- Department of Radiological Technology, Hokkaido University Hospital, Sapporo, JPN
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, JPN
- Kinya Ishizaka
- Department of Radiological Technology, Hokkaido University Hospital, Sapporo, JPN
- Takuya Aoike
- Department of Radiological Technology, Hokkaido University Hospital, Sapporo, JPN
- Junichi Nakagawa
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, JPN
- Kohsuke Kudo
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Sapporo, JPN
5
Kazimierczak N, Kazimierczak W, Serafin Z, Nowicki P, Nożewski J, Janiszewska-Olszowska J. AI in Orthodontics: Revolutionizing Diagnostics and Treatment Planning-A Comprehensive Review. J Clin Med 2024; 13:344. [PMID: 38256478] [PMCID: PMC10816993] [DOI: 10.3390/jcm13020344]
Abstract
The advent of artificial intelligence (AI) in medicine has transformed various medical specialties, including orthodontics. AI has shown promising results in enhancing the accuracy of diagnosis, treatment planning, and the prediction of treatment outcomes. Its use in orthodontic practices worldwide has increased with the availability of various AI applications and tools. This review explores the principles of AI, its applications in orthodontics, and its implementation in clinical practice. A comprehensive literature review was conducted, focusing on AI applications in dental diagnostics, cephalometric evaluation, skeletal age determination, temporomandibular joint (TMJ) evaluation, decision making, and patient telemonitoring. Due to study heterogeneity, no meta-analysis was possible. AI has demonstrated high efficacy in all these areas, but variations in performance and the need for manual supervision suggest caution in clinical settings. The complexity and unpredictability of AI algorithms call for cautious implementation and regular manual validation. Continuous AI learning, proper governance, and addressing privacy and ethical concerns are crucial for successful integration into orthodontic practice.
Affiliation(s)
- Natalia Kazimierczak
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Wojciech Kazimierczak
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Zbigniew Serafin
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Paweł Nowicki
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Jakub Nożewski
- Department of Emergency Medicine, University Hospital No 2 in Bydgoszcz, Ujejskiego 75, 85-168 Bydgoszcz, Poland