1. Zhang J, Fu T, Xiao D, Fan J, Song H, Ai D, Yang J. Bi-Fusion of Structure and Deformation at Multi-Scale for Joint Segmentation and Registration. IEEE Trans Image Process 2024; 33:3676-3691. PMID: 38837936. DOI: 10.1109/tip.2024.3407657.
Abstract
Medical image segmentation and registration are two fundamental and highly related tasks. However, current works focus on the mutual promotion between the two only at the loss function level, ignoring the feature information generated by the encoder-decoder network during the task-specific feature mapping process and the potential inter-task feature relationship. This paper proposes a unified multi-task joint learning framework based on bi-fusion of structure and deformation at multi-scale, called BFM-Net, which simultaneously produces the segmentation results and the deformation field in a single-step estimation. BFM-Net consists of a segmentation subnetwork (SegNet), a registration subnetwork (RegNet), and a multi-task connection (MTC) module. The MTC module transfers latent feature representations between segmentation and registration at multiple scales and links the tasks at the network architecture level; it comprises a spatial attention fusion (SAF) module, a multi-scale spatial attention fusion (MSAF) module and a velocity field fusion (VFF) module. Extensive experiments on MR, CT and ultrasound images demonstrate the effectiveness of our approach. The MTC module increases the Dice scores of segmentation and registration by 3.2%, 1.6%, 2.2% and 6.2%, 4.5%, 3.0%, respectively. Compared with six state-of-the-art segmentation and registration algorithms, BFM-Net achieves superior performance on images of various modalities, fully demonstrating its effectiveness and generalization.
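The Dice scores reported above measure overlap between a predicted and a reference mask. A minimal sketch of that metric (not the authors' code; the function name and toy masks are illustrative):

```python
import numpy as np

def dice_score(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

mask_a = np.zeros((4, 4), dtype=int)
mask_b = np.zeros((4, 4), dtype=int)
mask_a[1:3, 1:3] = 1   # 4 voxels
mask_b[1:3, 1:4] = 1   # 6 voxels, 4 of them overlapping
print(dice_score(mask_a, mask_b))  # 2*4/(4+6) = 0.8
```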
2. Aja-Fernández S, Martín-Martín C, Planchuelo-Gómez Á, Faiyaz A, Uddin MN, Schifitto G, Tiwari A, Shigwan SJ, Kumar Singh R, Zheng T, Cao Z, Wu D, Blumberg SB, Sen S, Goodwin-Allcock T, Slator PJ, Yigit Avci M, Li Z, Bilgic B, Tian Q, Wang X, Tang Z, Cabezas M, Rauland A, Merhof D, Manzano Maria R, Campos VP, Santini T, da Costa Vieira MA, HashemizadehKolowri S, DiBella E, Peng C, Shen Z, Chen Z, Ullah I, Mani M, Abdolmotalleby H, Eckstrom S, Baete SH, Filipiak P, Dong T, Fan Q, de Luis-García R, Tristán-Vega A, Pieciak T. Validation of deep learning techniques for quality augmentation in diffusion MRI for clinical studies. Neuroimage Clin 2023; 39:103483. PMID: 37572514. PMCID: PMC10440596. DOI: 10.1016/j.nicl.2023.103483.
Abstract
The objective of this study is to evaluate the efficacy of deep learning (DL) techniques in improving the quality of diffusion MRI (dMRI) data in clinical applications. The study aims to determine whether the use of artificial intelligence (AI) methods in medical images may result in the loss of critical clinical information and/or the appearance of false information. To assess this, the focus was on the angular resolution of dMRI, and a clinical trial was conducted on migraine, specifically comparing episodic and chronic migraine patients. The number of gradient directions had an impact on white matter analysis results, with statistically significant differences between groups being drastically reduced when using 21 gradient directions instead of the original 61. Fourteen teams from different institutions were tasked with using DL to enhance three diffusion metrics (FA, AD and MD) calculated from data acquired with 21 gradient directions and a b-value of 1000 s/mm2. The goal was to produce results comparable to those calculated from 61 gradient directions. The results were evaluated using both standard image quality metrics and Tract-Based Spatial Statistics (TBSS) to compare episodic and chronic migraine patients. The study results suggest that while most DL techniques improved the ability to detect statistical differences between groups, they also led to an increase in false positives. The number of false positives grew linearly with the number of new true positives, which highlights the risk of generalization of AI-based methods when assessing diverse clinical cohorts after training on data from a single group. The methods also showed divergent performance when replicating the original distribution of the data, and some exhibited significant bias.
In conclusion, extreme caution should be exercised when using AI methods for harmonization or synthesis on heterogeneous data in clinical studies, as important information may be altered even when global metrics such as structural similarity or peak signal-to-noise ratio appear to suggest otherwise.
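The conclusion's warning concerns global metrics such as peak signal-to-noise ratio (PSNR): they summarize all voxels in a single number, so a localized but clinically relevant alteration can hide inside a high score. A minimal PSNR sketch (the arrays are illustrative, not study data):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.array([[0.0, 1.0], [1.0, 0.0]])
noisy = ref + 0.1                      # uniform offset of 0.1 everywhere
print(psnr(ref, noisy, data_range=1.0))  # mse = 0.01 -> about 20.0 dB
```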
Affiliation(s)
- Santiago Aja-Fernández
- Laboratorio de Procesado de Imagen (LPI), ETSI Telecomunicación, Universidad de Valladolid, Spain
- Carmen Martín-Martín
- Laboratorio de Procesado de Imagen (LPI), ETSI Telecomunicación, Universidad de Valladolid, Spain
- Álvaro Planchuelo-Gómez
- Laboratorio de Procesado de Imagen (LPI), ETSI Telecomunicación, Universidad de Valladolid, Spain; Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, UK
- Dan Wu
- Zhejiang University, China
- Zihan Li
- Athinoula A. Martinos Center for Biomedical Imaging, USA
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, USA
- Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, USA
- Zan Chen
- Zhejiang University of Technology, China
- Rodrigo de Luis-García
- Laboratorio de Procesado de Imagen (LPI), ETSI Telecomunicación, Universidad de Valladolid, Spain
- Antonio Tristán-Vega
- Laboratorio de Procesado de Imagen (LPI), ETSI Telecomunicación, Universidad de Valladolid, Spain
- Tomasz Pieciak
- Laboratorio de Procesado de Imagen (LPI), ETSI Telecomunicación, Universidad de Valladolid, Spain
3. Jha RR, Kumar BVR, Pathak SK, Schneider W, Bhavsar A, Nigam A. Undersampled single-shell to MSMT fODF reconstruction using CNN-based ODE solver. Comput Methods Programs Biomed 2023; 230:107339. PMID: 36682110. DOI: 10.1016/j.cmpb.2023.107339.
Abstract
BACKGROUND AND OBJECTIVE Diffusion MRI (dMRI) is one of the most popular non-invasive techniques for studying the white matter (WM) of the human brain. dMRI is used to delineate the brain's microstructure by approximating the fiber tracts of the WM region. The resulting fiber tracts can be used to assess neurological conditions such as multiple sclerosis, ADHD, seizures, and intellectual disability. Newer techniques such as high angular resolution diffusion imaging (HARDI) provide more precise fiber directions, overcoming the limitations of traditional DTI. Unlike single-shell, multi-shell HARDI provides tissue fractions for white matter, gray matter, and cerebrospinal fluid, yielding a multi-shell multi-tissue fiber orientation distribution function (MSMT fODF). The MSMT fODF gives more precise fiber directions than a single-shell acquisition, which helps to obtain correct fiber tracts. In addition, various multi-compartment diffusion models, such as CHARMED and NODDI, have been developed to describe brain tissue microstructure; this type of model requires multi-shell data to obtain more specific microstructural information. However, a major concern with multi-shell acquisition is its longer scanning time, restricting its use in clinical applications. In addition, most existing dMRI scanners with low gradient strengths commonly acquire a single b-value (shell) up to b=1000 s/mm2 because of signal-to-noise ratio (SNR) limitations and severe imaging artifacts. METHODS To address this issue, we propose a CNN-based ordinary differential equation solver for the reconstruction of the MSMT fODF from under-sampled and fully sampled single-shell (b=1000 s/mm2) dMRI. The proposed architecture consists of CNN-based Adams-Bashforth and Runge-Kutta modules along with two loss functions, L1 and total variation.
RESULTS We present quantitative results and visualizations of fODFs, fiber tracts, and structural connectivity for several brain regions on the publicly available HCP dataset. In addition, the obtained angular correlation coefficients for the white matter and the full brain are high, showing the proposed network's utility. Finally, we demonstrate the effect of noise by adjusting the SNR from 5 to 50 and observe that the network remains robust. CONCLUSION Our model can accurately predict the MSMT fODF from under-sampled or fully sampled single-shell dMRI volumes.
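The angular correlation coefficient used in the results is commonly computed as a normalized inner product of spherical-harmonic coefficient vectors. A minimal sketch, assuming each fODF is given as an SH coefficient vector and following the common convention of excluding the l = 0 term (not the authors' implementation):

```python
import numpy as np

def angular_correlation(u: np.ndarray, v: np.ndarray) -> float:
    """Angular correlation coefficient between two fODFs represented as
    spherical-harmonic coefficient vectors; the first (l = 0) coefficient
    is dropped so the DC component does not dominate the score."""
    u, v = u[1:], v[1:]
    num = np.dot(u, v)
    den = np.sqrt(np.dot(u, u) * np.dot(v, v))
    return float(num / den) if den > 0 else 0.0

a = np.array([1.0, 0.5, -0.2, 0.1])     # toy SH coefficients
print(angular_correlation(a, a))        # identical fODFs -> 1.0
print(angular_correlation(a, 2.0 * a))  # scale-invariant  -> 1.0
```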
Affiliation(s)
- Ranjeet Ranjan Jha
- MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
- B V Rathish Kumar
- Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, India
- Sudhir K Pathak
- Learning Research and Development Center, University of Pittsburgh, USA
- Walter Schneider
- Learning Research and Development Center, University of Pittsburgh, USA
- Arnav Bhavsar
- MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
- Aditya Nigam
- MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
4. Zhao L, Pang S, Chen Y, Zhu X, Jiang Z, Su Z, Lu H, Zhou Y, Feng Q. SpineRegNet: Spine Registration Network for volumetric MR and CT image by the joint estimation of an affine-elastic deformation field. Med Image Anal 2023; 86:102786. PMID: 36878160. DOI: 10.1016/j.media.2023.102786.
Abstract
Spine registration for volumetric magnetic resonance (MR) and computed tomography (CT) images plays a significant role in surgical planning and surgical navigation systems for the radiofrequency ablation of spinal intervertebral discs. Affine transformation of each vertebra and elastic deformation of the intervertebral discs exist at the same time, which is a major challenge in spine registration. Existing spinal image registration methods fail to solve for the optimal affine-elastic deformation field (AEDF) simultaneously, consider only overall rigid or elastic alignment with the help of a manual spine mask, and struggle to meet the accuracy requirements of clinical registration applications. In this study, we propose a novel affine-elastic registration framework named SpineRegNet. SpineRegNet consists of a Multiple Affine Matrices Estimation (MAME) Module for multiple vertebrae alignment, an Affine-Elastic Fusion (AEF) Module for joint estimation of the overall AEDF, and a Local Rigidity Constraint (LRC) Module for preserving the rigidity of each vertebra. Experiments on T2-weighted volumetric MR and CT images show that the proposed approach achieves impressive performance, with mean Dice similarity coefficients of 91.36%, 81.60%, and 83.08% for the vertebral masks on Datasets A-C, respectively. The proposed technique does not require a mask or manual intervention at test time and provides a useful tool for clinical surgical planning and surgical navigation systems for spinal disease.
Affiliation(s)
- Lei Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Shumao Pang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Yangfan Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Xiongfeng Zhu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Ziyue Jiang
- Department of Orthopedics, The Third Affiliated Hospital, Southern Medical University, Guangzhou, 510630, China
- Zhihai Su
- Department of Spinal Surgery, the Fifth Affiliated Hospital, Sun Yat-sen University, Zhuhai, 519000, China
- Hai Lu
- Department of Spinal Surgery, the Fifth Affiliated Hospital, Sun Yat-sen University, Zhuhai, 519000, China
- Yujia Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
5. Ruthven M, Miquel ME, King AP. A segmentation-informed deep learning framework to register dynamic two-dimensional magnetic resonance images of the vocal tract during speech. Biomed Signal Process Control 2023; 80:104290. PMID: 36743699. PMCID: PMC9746295. DOI: 10.1016/j.bspc.2022.104290.
Abstract
Objective Dynamic magnetic resonance (MR) imaging enables visualisation of articulators during speech. There is growing interest in quantifying articulator motion in two-dimensional MR images of the vocal tract to better understand speech production and potentially inform patient management decisions. Image registration is an established way to achieve this quantification. Recently, segmentation-informed deformable registration frameworks have been developed and have achieved state-of-the-art accuracy. This work aims to adapt such a framework and optimise it for estimating displacement fields between dynamic two-dimensional MR images of the vocal tract during speech. Methods A deep-learning-based registration framework was developed and compared with current state-of-the-art registration methods and frameworks (two traditional methods and three deep-learning-based frameworks, two of which are segmentation informed). The accuracy of the methods and frameworks was evaluated using the Dice coefficient (DSC), average surface distance (ASD) and a metric based on velopharyngeal closure. The latter metric evaluated whether the displacement fields captured a clinically relevant and quantifiable aspect of articulator motion. Results The segmentation-informed frameworks achieved higher DSCs, lower ASDs and captured more velopharyngeal closures than the traditional methods and the framework that was not segmentation informed. All segmentation-informed frameworks achieved similar DSCs and ASDs; however, the proposed framework captured the most velopharyngeal closures. Conclusions A framework was successfully developed and found to estimate articulator motion more accurately than five current state-of-the-art methods and frameworks. Significance The first deep-learning-based framework specifically for registering dynamic two-dimensional MR images of the vocal tract during speech has been developed and evaluated.
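The average surface distance (ASD) used for evaluation can be sketched as a symmetric nearest-neighbour distance between two boundary point sets. This is a simplification of a full surface-based ASD, and the contours below are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def average_surface_distance(points_a, points_b):
    """Symmetric average surface distance between two contours given as
    point sets: mean nearest-neighbour distance in both directions."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d_ab = cKDTree(b).query(a)[0]   # nearest-neighbour distances a -> b
    d_ba = cKDTree(a).query(b)[0]   # nearest-neighbour distances b -> a
    return (d_ab.mean() + d_ba.mean()) / 2.0

contour = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0.5, 0), (0.5, 1), (1.5, 0), (1.5, 1)]
print(average_surface_distance(contour, contour))  # 0.0
print(average_surface_distance(contour, shifted))  # 0.5
```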
Affiliation(s)
- Matthieu Ruthven
- Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom. Corresponding author at: Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom
- Marc E. Miquel
- Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; Digital Environment Research Institute (DERI), Empire House, 67-75 New Road, Queen Mary University of London, London E1 1HH, United Kingdom; Advanced Cardiovascular Imaging, Barts NIHR BRC, Queen Mary University of London, London EC1M 6BQ, United Kingdom
- Andrew P. King
- School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom
6. Szeskin A, Rochman S, Weiss S, Lederman R, Sosna J, Joskowicz L. Liver lesion changes analysis in longitudinal CECT scans by simultaneous deep learning voxel classification with SimU-Net. Med Image Anal 2023; 83:102675. PMID: 36334393. DOI: 10.1016/j.media.2022.102675.
Abstract
The identification and quantification of liver lesion changes in longitudinal contrast-enhanced CT (CECT) scans is required to evaluate disease status and determine treatment efficacy in support of clinical decision-making. This paper describes a fully automatic end-to-end pipeline for liver lesion change analysis in consecutive (prior and current) abdominal CECT scans of oncology patients. The three key novelties are: (1) SimU-Net, a simultaneous multi-channel 3D R2U-Net model trained on pairs of registered scans of each patient that identifies the liver lesions and their changes based on differences in lesion and healthy tissue appearance; (2) a model-based bipartite graph lesion matching method for the analysis of changes at the lesion level; (3) a method for longitudinal analysis of one or more consecutive scans of a patient based on SimU-Net that handles major liver deformations and incorporates lesion segmentations from previous analyses. To validate our methods, five experimental studies were conducted on a unique dataset of 3491 liver lesions in 735 pairs from 218 clinical abdominal CECT scans of 71 patients with metastatic disease, manually delineated by an expert radiologist. The pipeline with the SimU-Net model, trained and validated on 385 pairs and tested on 249 pairs, yields a mean lesion detection recall of 0.86±0.14, a precision of 0.74±0.23 and a lesion segmentation Dice of 0.82±0.14 for lesions > 5 mm. This outperforms a reference standalone 3D R2U-Net model that analyzes each scan individually by ∼50% in precision, with similar recall and Dice score on the same training and test datasets. For lesion matching, the precision is 0.86±0.18 and the recall is 0.90±0.15. For lesion classification, the specificity is 0.97±0.07, the precision is 0.85±0.31, and the recall is 0.86±0.23. Our new methods provide accurate and comprehensive results that may help reduce radiologists' time and effort and improve radiological oncology evaluation.
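The bipartite lesion matching step can be illustrated with a minimum-cost assignment on centroid distances. This is a simplified stand-in for the paper's model-based graph matching; the `max_distance` gate and the toy centroids are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_lesions(prior_centroids, current_centroids, max_distance=20.0):
    """Match lesions between a prior and a current scan by solving a
    minimum-cost bipartite assignment on pairwise centroid distances."""
    prior = np.asarray(prior_centroids, dtype=float)
    current = np.asarray(current_centroids, dtype=float)
    cost = np.linalg.norm(prior[:, None, :] - current[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # keep only plausible pairs; unmatched lesions are new or resolved
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if cost[r, c] <= max_distance]

prior = [(10, 10, 10), (50, 50, 50)]      # lesion centroids in the prior scan
current = [(52, 49, 51), (11, 10, 9)]     # centroids in the current scan
print(match_lesions(prior, current))      # [(0, 1), (1, 0)]
```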
Affiliation(s)
- Adi Szeskin
- The Rachel and Selim Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, Jerusalem 9190401, Israel; The Alexander Grass Center for Bioengineering, The Hebrew University of Jerusalem, Israel
- Shalom Rochman
- The Rachel and Selim Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, Jerusalem 9190401, Israel
- Snir Weiss
- The Rachel and Selim Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, Jerusalem 9190401, Israel
- Richard Lederman
- Department of Radiology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Jacob Sosna
- Department of Radiology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Leo Joskowicz
- The Rachel and Selim Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, Jerusalem 9190401, Israel
7. Zhang F, Wells WM, O'Donnell LJ. Deep Diffusion MRI Registration (DDMReg): A Deep Learning Method for Diffusion MRI Registration. IEEE Trans Med Imaging 2022; 41:1454-1467. PMID: 34968177. PMCID: PMC9273049. DOI: 10.1109/tmi.2021.3139507.
Abstract
In this paper, we present a deep learning method, DDMReg, for accurate registration between diffusion MRI (dMRI) datasets. In dMRI registration, the goal is to spatially align brain anatomical structures while ensuring that local fiber orientations remain consistent with the underlying white matter fiber tract anatomy. DDMReg is a novel method that uses joint whole-brain and tract-specific information for dMRI registration. Based on the successful VoxelMorph framework for image registration, we propose a novel registration architecture that leverages not only whole brain information but also tract-specific fiber orientation information. DDMReg is an unsupervised method for deformable registration between pairs of dMRI datasets: it does not require nonlinearly pre-registered training data or the corresponding deformation fields as ground truth. We perform comparisons with four state-of-the-art registration methods on multiple independently acquired datasets from different populations (including teenagers, young and elderly adults) and different imaging protocols and scanners. We evaluate the registration performance by assessing the ability to align anatomically corresponding brain structures and ensure fiber spatial agreement between different subjects after registration. Experimental results show that DDMReg obtains significantly improved registration performance compared to the state-of-the-art methods. Importantly, we demonstrate successful generalization of DDMReg to dMRI data from different populations with varying ages and acquired using different acquisition protocols and different scanners.
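The VoxelMorph-style framework that DDMReg builds on warps the moving image with a dense displacement field via a spatial transformer. A minimal 2D sketch of that warping step (DDMReg itself operates on 3D dMRI-derived volumes; the function name and toy field are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """Apply a dense displacement field to a 2D image by backward warping:
    each output voxel samples the moving image at (position + displacement)."""
    grid = np.indices(moving.shape).astype(float)   # identity sampling grid
    coords = grid + displacement                    # displaced sample locations
    return map_coordinates(moving, coords, order=1, mode="nearest")

img = np.zeros((5, 5))
img[2, 2] = 1.0
shift = np.ones((2, 5, 5))       # displacement of +1 voxel along each axis
warped = warp_image(img, shift)
print(warped[1, 1])  # 1.0: the bright voxel appears shifted by (-1, -1)
```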
8. Jha RR, Pathak SK, Nath V, Schneider W, Kumar BVR, Bhavsar A, Nigam A. VRfRNet: Volumetric ROI fODF reconstruction network for estimation of multi-tissue constrained spherical deconvolution with only single shell dMRI. Magn Reson Imaging 2022; 90:1-16. PMID: 35341904. DOI: 10.1016/j.mri.2022.03.004.
Abstract
Diffusion MRI (dMRI) is one of the most popular techniques for studying brain structure, mainly the white matter region. Among the several sampling schemes in dMRI, the high angular resolution diffusion imaging (HARDI) technique has attracted researchers due to its more accurate fiber orientation estimation. However, current single-shell HARDI makes the intravoxel structure challenging to estimate accurately. While multi-shell acquisition can address this problem, it takes a longer scanning time, restricting its use in clinical applications. In addition, most existing dMRI scanners with low gradient strengths often acquire a single shell up to b=1000 s/mm2 because of signal-to-noise ratio issues and severe imaging artifacts. Hence, we propose a novel generative adversarial network, VRfRNet, for the reconstruction of the multi-shell multi-tissue fiber orientation distribution function from single-shell HARDI volumes. This transformation learning is performed in the spherical harmonics (SH) space, as the raw input HARDI volume is transformed to SH coefficients to handle varying gradient directions. The proposed VRfRNet consists of several modules, such as a multi-context feature enrichment module, feature-level attention, and softmax-level attention. In addition, three loss functions have been used to optimize network learning: L1, adversarial, and total variation. The network is trained and tested using standard qualitative and quantitative performance metrics on the publicly available HCP dataset.
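The spherical-harmonic representation used by VRfRNet and similar dMRI methods maps signal samples on the sphere to SH coefficients, typically by least squares over an even-order real basis. A hedged sketch of that fit, built from associated Legendre functions (not the authors' code; the basis convention and sampling are illustrative):

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def real_sh(l, m, theta, phi):
    """Real spherical harmonic Y_lm (theta polar, phi azimuth), built from
    the associated Legendre function P_l^m."""
    norm = np.sqrt((2 * l + 1) / (4 * np.pi)
                   * factorial(l - abs(m)) / factorial(l + abs(m)))
    leg = lpmv(abs(m), l, np.cos(theta))
    if m < 0:
        return np.sqrt(2) * norm * leg * np.sin(abs(m) * phi)
    if m == 0:
        return norm * leg
    return np.sqrt(2) * norm * leg * np.cos(m * phi)

def fit_sh(signal, theta, phi, order=4):
    """Least-squares fit of even-order SH coefficients to signal samples
    (even orders only, matching the antipodal symmetry of dMRI signals)."""
    basis = np.stack([real_sh(l, m, theta, phi)
                      for l in range(0, order + 1, 2)
                      for m in range(-l, l + 1)], axis=-1)
    coeffs, *_ = np.linalg.lstsq(basis, signal, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
theta = np.arccos(rng.uniform(-1.0, 1.0, 64))   # 64 sampling directions
phi = rng.uniform(0.0, 2.0 * np.pi, 64)
coeffs = fit_sh(np.ones(64), theta, phi)        # isotropic toy signal
print(coeffs.shape)   # (15,) for order 4
print(coeffs[0])      # about 2*sqrt(pi); higher-order coefficients near 0
```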
Affiliation(s)
- Ranjeet Ranjan Jha
- MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
- Sudhir K Pathak
- Learning Research and Development Center, University of Pittsburgh, USA
- Vishwesh Nath
- Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, USA
- Walter Schneider
- Learning Research and Development Center, University of Pittsburgh, USA
- B V Rathish Kumar
- Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, India
- Arnav Bhavsar
- MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
- Aditya Nigam
- MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
9. Single-shell to multi-shell dMRI transformation using spatial and volumetric multilevel hierarchical reconstruction framework. Magn Reson Imaging 2022; 87:133-156. PMID: 35017034. DOI: 10.1016/j.mri.2021.12.011.
Abstract
Single- or multi-shell high angular resolution diffusion imaging (HARDI) has become an important dMRI acquisition technique for studying brain white matter fibers. Existing single-shell HARDI makes it challenging to estimate the intravoxel structure up to the desired resolution. Multi-shell acquisition (with multiple b-values) can provide higher resolution for the intravoxel structure, which further helps in obtaining accurate fiber tracts, but this comes at the cost of a longer acquisition time and a larger setup. Hence, we propose a novel deep learning architecture for the reconstruction of diffusion MRI volumes at different b-values (degrees of diffusion weighting) from acquisitions at a fixed b-value (termed single-shell acquisition). This reconstruction is performed in the spherical harmonics space to better manage varying gradient directions. In this work, we demonstrate such a reconstruction of b=3000 s/mm2 and b=2000 s/mm2 data from b=1000 s/mm2 data. The proposed Multilevel Hierarchical Spherical Harmonics Coefficients Reconstruction (MHSH) framework takes advantage of contextual information within each slice as well as across slices through a Slice Level ReconNet (SLRNet) network and a Volumetric ROI Level ReconNet (VPLRNet) network, respectively. Three loss functions have been used to optimize network learning: L1, adversarial, and total variation. Finally, the network is trained and validated on the publicly available HCP dataset with standard qualitative and quantitative performance measures and achieves promising results.
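The L1 and total variation losses listed above can be sketched as follows (numpy stand-ins for the network's training losses, with illustrative toy arrays; the adversarial term needs a discriminator and is omitted):

```python
import numpy as np

def l1_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean absolute error between predicted and reference volumes."""
    return float(np.mean(np.abs(pred - target)))

def tv_loss(pred: np.ndarray) -> float:
    """Anisotropic total variation: mean absolute difference between
    neighbouring voxels along each axis, encouraging smooth outputs."""
    tv = 0.0
    for axis in range(pred.ndim):
        tv += np.mean(np.abs(np.diff(pred, axis=axis)))
    return float(tv)

pred = np.array([[0.0, 1.0], [1.0, 0.0]])
target = np.array([[0.0, 1.0], [1.0, 1.0]])
print(l1_loss(pred, target))  # 0.25: one of four entries differs by 1
print(tv_loss(pred))          # 2.0: mean |diff| is 1.0 along each axis
```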