1
Bhat I, Kuijf HJ, Viergever MA, Pluim JPW. Influence of learned landmark correspondences on lung CT registration. Med Phys 2024. PMID: 38713916. DOI: 10.1002/mp.17120.
Abstract
BACKGROUND: Disease or injury may cause a change in the biomechanical properties of the lungs, which can alter lung function. Image registration can be used to measure lung ventilation and quantify volume change, which can be a useful diagnostic aid. However, lung registration is a challenging problem because of the variation in deformation across the lungs, the sliding motion of the lungs along the ribs, and changes in density.
PURPOSE: Landmark correspondences have been used to make deformable image registration robust to large displacements.
METHODS: To tackle the challenging task of intra-patient lung computed tomography (CT) registration, we extend the landmark correspondence prediction model deep convolutional neural network (DCNN)-Match by introducing a soft mask loss term to encourage landmark correspondences in specific regions and avoid the use of a mask during inference. To produce realistic deformations for training the landmark correspondence model, we use data-driven synthetic transformations. We study the influence of these learned landmark correspondences on lung CT registration by integrating them into intensity-based registration as a distance-based penalty.
RESULTS: Our results on the public thoracic CT dataset COPDGene show that using learned landmark correspondences as a soft constraint can reduce the median registration error from approximately 5.46 mm to 4.08 mm compared with standard intensity-based registration, in the absence of lung masks.
CONCLUSIONS: We show that using landmark correspondences results in minor improvements in local alignment, while significantly improving global alignment.
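The distance-based penalty described in the abstract can be illustrated with a minimal sketch: a landmark term is added, with a weight, to an intensity dissimilarity term. The weight `alpha` and the NCC-based similarity are assumptions for this sketch, not the paper's actual registration framework; images are assumed to be on a common grid at the current transform.

```python
# Sketch (not the paper's implementation): landmark correspondences used
# as a soft, distance-based penalty alongside an intensity term.

import numpy as np

def landmark_penalty(transform, fixed_pts, moving_pts):
    """Mean Euclidean distance between transformed fixed landmarks and
    their corresponding moving landmarks."""
    warped = np.array([transform(p) for p in fixed_pts])
    return np.linalg.norm(warped - moving_pts, axis=1).mean()

def registration_cost(transform, fixed_img, moving_img, fixed_pts,
                      moving_pts, alpha=0.1):
    """Intensity dissimilarity (1 - NCC) plus weighted landmark penalty.
    moving_img is assumed already resampled under the current transform."""
    f = fixed_img.ravel() - fixed_img.mean()
    m = moving_img.ravel() - moving_img.mean()
    ncc = (f @ m) / (np.linalg.norm(f) * np.linalg.norm(m) + 1e-12)
    return (1.0 - ncc) + alpha * landmark_penalty(transform, fixed_pts,
                                                  moving_pts)
```

As a soft constraint, landmarks guide the optimizer toward a good global alignment without being enforced exactly, which matches the abstract's observation that global alignment improves most.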
Affiliation(s)
- Ishaan Bhat
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Hugo J Kuijf
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Max A Viergever
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Josien P W Pluim
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
2
Barbosa RM, Serrador L, da Silva MV, Macedo CS, Santos CP. Knee landmarks detection via deep learning for automatic imaging evaluation of trochlear dysplasia and patellar height. Eur Radiol 2024. PMID: 38337072. DOI: 10.1007/s00330-024-10596-9.
Abstract
OBJECTIVES: To develop and validate a deep learning-based approach to automatically measure the patellofemoral instability (PFI) indices related to patellar height and trochlear dysplasia in knee magnetic resonance imaging (MRI) scans.
METHODS: A total of 763 knee MRI slices from 95 patients were included in the study, and 3393 anatomical landmarks were annotated for measuring the sulcus angle (SA), trochlear facet asymmetry (TFA), trochlear groove depth (TGD), and lateral trochlear inclination (LTI) to assess trochlear dysplasia, and the Insall-Salvati index (ISI), modified Insall-Salvati index (MISI), Caton-Deschamps index (CDI), and patellotrochlear index (PTI) to assess patellar height. A U-Net-based network was implemented to predict the landmarks' locations. The successful detection rate (SDR) and mean absolute error (MAE) metrics were used to evaluate the performance of the network. The intraclass correlation coefficient (ICC) was used to evaluate the reliability of the proposed framework for measuring the above PFI indices.
RESULTS: The developed models achieved good accuracy in predicting the landmarks' locations, with a maximum MAE of 1.38 ± 0.76 mm. The results show that LTI, TGD, ISI, CDI, and PTI can be measured with excellent reliability (ICC > 0.9), and SA, TFA, and MISI with good reliability (ICC > 0.75), using the proposed framework.
CONCLUSIONS: This study proposes a reliable approach with promising applicability for automatic patellar height and trochlear dysplasia assessment, assisting radiologists in their clinical practice.
CLINICAL RELEVANCE STATEMENT: Objective knee landmark detection on MRI images provided by artificial intelligence may improve the reproducibility and reliability of the imaging evaluation of trochlear anatomy and patellar height, assisting radiologists in patellofemoral instability assessment.
KEY POINTS:
- Imaging evaluation of patellofemoral instability is subjective and vulnerable to substantial intra- and interobserver variability.
- Patellar height and trochlear dysplasia are reliably assessed in MRI by means of artificial intelligence (AI).
- The developed AI framework provides an objective evaluation of patellar height and trochlear dysplasia, enhancing the clinical practice of radiologists.
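The two evaluation metrics named in the abstract, MAE and SDR, can be sketched directly. The 2 mm success radius and unit pixel spacing below are illustrative assumptions, not values from the paper.

```python
# Sketch of the landmark-detection metrics named above: mean absolute
# error (MAE) in mm and successful detection rate (SDR) within a radius.

import numpy as np

def landmark_metrics(pred, gt, spacing_mm=1.0, radius_mm=2.0):
    """pred, gt: (N, 2) landmark coordinates in pixels.
    Returns (MAE in mm, fraction of landmarks within radius_mm)."""
    dist_mm = np.linalg.norm((pred - gt) * spacing_mm, axis=1)
    return dist_mm.mean(), (dist_mm <= radius_mm).mean()
```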
Affiliation(s)
- Roberto M Barbosa
- Center of MicroElectroMechanical Systems (CMEMS), University of Minho, Guimarães, Portugal.
- MIT Portugal Program, School of Engineering, University of Minho, Guimarães, Portugal.
- Luís Serrador
- Center of MicroElectroMechanical Systems (CMEMS), University of Minho, Guimarães, Portugal
- Cristina P Santos
- Center of MicroElectroMechanical Systems (CMEMS), University of Minho, Guimarães, Portugal
- LABBELS - Associate Laboratory, Braga/Guimarães, Portugal
3
Ng CKC. Generative Adversarial Network (Generative Artificial Intelligence) in Pediatric Radiology: A Systematic Review. Children (Basel) 2023; 10:1372. PMID: 37628371. PMCID: PMC10453402. DOI: 10.3390/children10081372.
Abstract
Generative artificial intelligence, especially the generative adversarial network (GAN), is an important research area in radiology, as evidenced by a number of literature reviews on the role of GAN in radiology published in the last few years. However, no review article about GAN in pediatric radiology has been published yet. The purpose of this paper is to systematically review applications of GAN in pediatric radiology, their performances, and methods for their performance evaluation. Electronic databases were used for a literature search on 6 April 2023. Thirty-seven papers met the selection criteria and were included. This review reveals that the GAN can be applied to magnetic resonance imaging, X-ray, computed tomography, ultrasound, and positron emission tomography for image translation, segmentation, reconstruction, quality assessment, synthesis and data augmentation, and disease diagnosis. About 80% of the included studies compared their GAN model performances with those of other approaches and indicated that their GAN models outperformed the others by 0.1-158.6%. However, these findings should be interpreted with caution because of a number of methodological weaknesses. More robust methods will be essential in future GAN studies to address these issues; otherwise, the clinical adoption of GAN-based applications in pediatric radiology will be hindered and the potential advantages of GAN will not be widely realized.
Affiliation(s)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
4
Pirhadi A, Salari S, Ahmad MO, Rivaz H, Xiao Y. Robust landmark-based brain shift correction with a Siamese neural network in ultrasound-guided brain tumor resection. Int J Comput Assist Radiol Surg 2023; 18:501-508. PMID: 36306056. DOI: 10.1007/s11548-022-02770-5.
Abstract
PURPOSE: In brain tumor surgery, tissue shift (called brain shift) can move the surgical target and invalidate the surgical plan. As a cost-effective and flexible tool, intra-operative ultrasound (iUS) with robust image registration algorithms can effectively track brain shift to ensure surgical outcomes and safety.
METHODS: We propose to employ a Siamese neural network, first trained on natural images and fine-tuned with domain-specific data, to automatically detect matching anatomical landmarks in iUS scans at different surgical stages. An efficient 2.5D approach and an iterative re-weighted least squares algorithm are used to perform landmark-based registration for brain shift correction. The proposed method is validated and compared against state-of-the-art methods on the public BITE and RESECT datasets.
RESULTS: Registration of pre-resection iUS scans to during- and post-resection iUS images was performed. The results with the proposed method show a significant improvement over the initial misalignment ([Formula: see text]), and the method is comparable to the state-of-the-art methods validated on the same datasets.
CONCLUSIONS: We have proposed a robust technique to efficiently detect matching landmarks in iUS and perform brain shift correction with excellent performance. It has the potential to improve the accuracy and safety of neurosurgery.
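The iterative re-weighted least squares (IRLS) step mentioned in the abstract can be sketched as a weighted rigid (Kabsch) fit that is repeated while down-weighting poorly matched landmark pairs. The Huber threshold, iteration count, and rigid (rather than deformable) model here are assumptions for the sketch, not the paper's exact formulation.

```python
# Illustrative IRLS rigid fit between matched landmark sets, in the
# spirit of the landmark-based brain shift correction described above.

import numpy as np

def rigid_fit(src, dst, w):
    """Weighted least-squares rigid transform (R, t) mapping src -> dst
    via the weighted Kabsch algorithm."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)
    mu_d = (w[:, None] * dst).sum(0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def irls_register(src, dst, n_iter=20, delta=1.0):
    """IRLS with Huber weights to down-weight mismatched landmark pairs."""
    w = np.ones(len(src))
    for _ in range(n_iter):
        R, t = rigid_fit(src, dst, w)
        r = np.linalg.norm(src @ R.T + t - dst, axis=1)
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
    return R, t
```

The re-weighting makes the fit robust: a grossly mismatched pair ends up with a small weight and contributes little to the final transform.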
Affiliation(s)
- Amir Pirhadi
- Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada.
- Soorena Salari
- Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
- M Omair Ahmad
- Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
- Hassan Rivaz
- Department of Electrical and Computer Engineering and PERFORM Centre, Concordia University, Montreal, Canada
- Yiming Xiao
- Department of Computer Science and Software Engineering and PERFORM Centre, Concordia University, Montreal, Canada
5
Kunapinun A, Dailey MN, Songsaeng D, Parnichkun M, Keatmanee C, Ekpanyapong M. Improving GAN Learning Dynamics for Thyroid Nodule Segmentation. Ultrasound Med Biol 2023; 49:416-430. PMID: 36424307. DOI: 10.1016/j.ultrasmedbio.2022.09.010.
Abstract
Thyroid nodules are lesions requiring diagnosis and follow-up. Tools for detecting and segmenting nodules can help physicians with this diagnosis. Besides immediate diagnosis, automated tools can also enable tracking of the probability of malignancy over time. This paper demonstrates a new algorithm for segmenting thyroid nodules in ultrasound images. The algorithm combines traditional supervised semantic segmentation with unsupervised learning using GANs. The hybrid approach has the potential to upgrade the semantic segmentation model's performance, but GANs have the well-known problems of unstable learning and mode collapse. To stabilize the training of the GAN model, we introduce the concept of closed-loop control of the gain on the loss output of the discriminator. We find gain control leads to smoother generator training and avoids the mode collapse that typically occurs when the discriminator learns too quickly relative to the generator. We also find that the combination of the supervised and unsupervised learning styles encourages both low-level accuracy and high-level consistency. As a test of the concept of controlled hybrid supervised and unsupervised semantic segmentation, we introduce a new model named the StableSeg GAN. The model uses DeeplabV3+ as the generator, Resnet18 as the discriminator, and uses PID control to stabilize the GAN learning process. The performance of the new model in terms of IoU is better than DeeplabV3+, with mean IoU of 81.26% on a challenging test set. The results of our thyroid nodule segmentation experiments show that StableSeg GANs have flexibility to segment nodules more accurately than either comparable supervised segmentation models or uncontrolled GANs.
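The closed-loop gain control described above can be sketched as a PID controller that scales the discriminator loss toward a setpoint, weakening the discriminator's updates when it learns too quickly. The setpoint, PID gains, and clamping range below are illustrative assumptions, not values from the paper.

```python
# Sketch (not the paper's implementation) of PID-controlled gain on the
# discriminator loss, in the spirit of the StableSeg GAN described above.

class PIDGainController:
    """PID controller producing a multiplicative gain for the
    discriminator loss so the discriminator does not outpace the
    generator."""

    def __init__(self, setpoint=0.5, kp=0.5, ki=0.05, kd=0.1,
                 gain_min=0.0, gain_max=2.0):
        self.setpoint = setpoint              # target discriminator loss
        self.kp, self.ki, self.kd = kp, ki, kd
        self.gain_min, self.gain_max = gain_min, gain_max
        self._integral = 0.0
        self._prev_error = None

    def update(self, d_loss):
        # Error is negative when the discriminator is "too good" (its
        # loss has dropped below the setpoint), which lowers the gain.
        error = d_loss - self.setpoint
        self._integral += error
        derivative = 0.0 if self._prev_error is None \
            else error - self._prev_error
        self._prev_error = error
        gain = (1.0 + self.kp * error + self.ki * self._integral
                + self.kd * derivative)
        # Clamp so the scaled loss stays in a sane range.
        return max(self.gain_min, min(self.gain_max, gain))
```

In a training loop one would scale the loss before backpropagation, e.g. `(controller.update(d_loss.item()) * d_loss).backward()`, so a fast-learning discriminator receives a damped gradient.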
Affiliation(s)
- Alisa Kunapinun
- Industrial Systems Engineering Department, Asian Institute of Technology, Pathumthani, Thailand
- Matthew N Dailey
- Information and Communication Technologies, Asian Institute of Technology, Pathumthani, Thailand
- Dittapong Songsaeng
- Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Manukid Parnichkun
- Industrial Systems Engineering Department, Asian Institute of Technology, Pathumthani, Thailand
- Mongkol Ekpanyapong
- Industrial Systems Engineering Department, Asian Institute of Technology, Pathumthani, Thailand.
6
Xia W, Ameri G, Fakim D, Akhuanzada H, Raza MZ, Shobeiri SA, McLean L, Chen ECS. Automatic Plane of Minimal Hiatal Dimensions Extraction From 3D Female Pelvic Floor Ultrasound. IEEE Trans Med Imaging 2022; 41:3873-3883. PMID: 35984794. DOI: 10.1109/tmi.2022.3199968.
Abstract
There is an increasing interest in the applications of 3D ultrasound imaging of the pelvic floor to improve the diagnosis, treatment, and surgical planning of female pelvic floor dysfunction (PFD). Pelvic floor biometrics are obtained on an oblique image plane known as the plane of minimal hiatal dimensions (PMHD). Identifying this plane requires the detection of two anatomical landmarks, the pubic symphysis and the anorectal angle. The manual detection of the anatomical landmarks and the PMHD in 3D pelvic ultrasound requires expert knowledge of pelvic floor anatomy, and is challenging, time-consuming, and subject to human error. These challenges have hindered the adoption of such quantitative analysis in the clinic. This work presents an automatic approach to identify the anatomical landmarks and extract the PMHD from 3D pelvic ultrasound volumes. To demonstrate clinical utility and a complete automated clinical task, an automatic segmentation of the levator ani muscle on the extracted PMHD images was also performed. Experiments using 73 test images of patients during a pelvic muscle resting state showed that this algorithm can accurately identify the PMHD, with an average Dice of 0.89 and an average mean boundary distance of 2.25 mm. Further evaluation of the PMHD detection algorithm using 35 images of patients performing pelvic muscle contraction resulted in an average Dice of 0.88 and an average mean boundary distance of 2.75 mm. This work has the potential to pave the way toward the adoption of ultrasound in the clinic and the development of personalized treatment for PFD.
7
Zhao J, Hou X, Pan M, Zhang H. Attention-based generative adversarial network in medical imaging: A narrative review. Comput Biol Med 2022; 149:105948. PMID: 35994931. DOI: 10.1016/j.compbiomed.2022.105948.
Abstract
As a popular probabilistic generative model, the generative adversarial network (GAN) has been successfully used not only in natural image processing, but also in medical image analysis and computer-aided diagnosis. Despite their various advantages, the applications of GAN in medical image analysis face new challenges. The introduction of attention mechanisms, which resemble the human visual system in focusing on the task-related local image area for information extraction, has drawn increasing interest. Recently proposed transformer-based architectures that leverage the self-attention mechanism encode long-range dependencies and learn highly expressive representations. This motivates us to summarize the applications of transformer-based GANs for medical image analysis. We reviewed recent advances in techniques combining various attention modules with different adversarial training schemes, and their applications in medical segmentation, synthesis, and detection. Several recent studies have shown that attention modules can be effectively incorporated into a GAN model to detect lesion areas and extract diagnosis-related feature information precisely, thus providing a useful tool for medical image processing and diagnosis. This review indicates that research on attention mechanisms in GAN-based medical imaging analysis is still at an early stage despite the great potential. We highlight that the attention-based generative adversarial network is an efficient and promising computational model for advancing future research and applications in medical image analysis.
Affiliation(s)
- Jing Zhao
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Xiaoyuan Hou
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Meiqing Pan
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Hui Zhang
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China.
8
MinimalGAN: diverse medical image synthesis for data augmentation using minimal training data. Appl Intell 2022. DOI: 10.1007/s10489-022-03609-x.
9
von Haxthausen F, Böttger S, Wulff D, Hagenah J, García-Vázquez V, Ipsen S. Medical Robotics for Ultrasound Imaging: Current Systems and Future Trends. Curr Robot Rep 2021; 2:55-71. PMID: 34977593. PMCID: PMC7898497. DOI: 10.1007/s43154-020-00037-y.
Abstract
Purpose of Review
This review provides an overview of the most recent robotic ultrasound systems that have emerged over the past five years, highlighting their status and future directions. The systems are categorized based on their level of robot autonomy (LORA).
Recent Findings
Teleoperating systems show the highest level of technical maturity. Collaborative assisting and autonomous systems are still in the research phase, with a focus on ultrasound image processing and force adaptation strategies. However, key factors such as clinical studies and appropriate safety strategies are still missing. Future research will likely focus on artificial intelligence and virtual/augmented reality to improve image understanding and ergonomics.
Summary
A review of robotic ultrasound systems is presented in which technical specifications are first outlined. The literature of the past five years is then subdivided into teleoperation, collaborative assistance, and autonomous systems based on LORA. Finally, future trends for robotic ultrasound systems are reviewed, with a focus on artificial intelligence and virtual/augmented reality.
Affiliation(s)
- Felix von Haxthausen
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Sven Böttger
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Daniel Wulff
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Jannis Hagenah
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Verónica García-Vázquez
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Svenja Ipsen
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany