1
Farnia P, Makkiabadi B, Alimohamadi M, Najafzadeh E, Basij M, Yan Y, Mehrmohammadi M, Ahmadian A. Photoacoustic-MR Image Registration Based on a Co-Sparse Analysis Model to Compensate for Brain Shift. Sensors 2022; 22:2399. [PMID: 35336570] [PMCID: PMC8954240] [DOI: 10.3390/s22062399]
Abstract
Brain shift is an important obstacle to the application of image guidance during neurosurgical interventions. There has been growing interest in intra-operative imaging to update image-guided surgery systems. However, due to the inherent limitations of current imaging modalities, accurate brain shift compensation remains a challenging task. In this study, intra-operative photoacoustic imaging and registration of the intra-operative photoacoustic images with pre-operative MR images are proposed to compensate for brain deformation. Finding a satisfactory registration method is challenging due to the unpredictable nature of brain deformation. Here, a co-sparse analysis model is proposed for photoacoustic-MR image registration, which can capture the interdependency of the two modalities. The proposed algorithm minimizes the mapping transform via a pair of analysis operators that are learned by the alternating direction method of multipliers (ADMM). The method was evaluated using an experimental phantom and ex vivo data obtained from a mouse brain. The phantom results show an improvement of about 63% in target registration error compared with the commonly used normalized mutual information method. These results indicate that intra-operative photoacoustic images could become a promising tool when brain shift invalidates the pre-operative MRI.
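As a rough illustration of the objective described in this abstract, the sketch below is a toy stand-in, not the paper's method: it assumes the two learned analysis operators are already given as matrices (identity here; the paper learns them with ADMM), scores alignment with a grouped co-sparsity penalty that is smallest when the coefficient supports of the two modalities coincide, and searches integer translations instead of a full deformable transform.

```python
import numpy as np

def joint_cosparsity(omega_pa, omega_mr, pa, mr):
    """Group (l2,1) co-sparsity of paired analysis coefficients.

    The grouped norm sqrt(c1^2 + c2^2), summed over coefficients, is
    smallest when the coefficient supports of the two modalities
    coincide, which is what drives the alignment.
    """
    c1 = omega_pa @ pa.ravel()
    c2 = omega_mr @ mr.ravel()
    return float(np.sqrt(c1**2 + c2**2).sum())

def register_translation(omega_pa, omega_mr, pa, mr, max_shift=3):
    """Exhaustive search over integer shifts of the photoacoustic image."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(pa, (dy, dx), axis=(0, 1))
            cost = joint_cosparsity(omega_pa, omega_mr, shifted, mr)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost
```

With two single-spot images offset by one pixel in each direction, the search recovers that offset, since the grouped penalty drops from 2 to sqrt(2) when the spots coincide.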
Affiliation(s)
- Parastoo Farnia
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Bahador Makkiabadi
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Maysam Alimohamadi
- Brain and Spinal Cord Injury Research Center, Neuroscience Institute, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Ebrahim Najafzadeh
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA
- Yan Yan
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA
- Mohammad Mehrmohammadi
- Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA
- Barbara Ann Karmanos Cancer Institute, Detroit, MI 48201, USA
- Correspondence: (M.M.); (A.A.)
- Alireza Ahmadian
- Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran
- Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
- Correspondence: (M.M.); (A.A.)
2
Machado I, Toews M, George E, Unadkat P, Essayed W, Luo J, Teodoro P, Carvalho H, Martins J, Golland P, Pieper S, Frisken S, Golby A, Wells III W, Ou Y. Deformable MRI-Ultrasound registration using correlation-based attribute matching for brain shift correction: Accuracy and generality in multi-site data. Neuroimage 2019; 202:116094. [PMID: 31446127] [DOI: 10.1016/j.neuroimage.2019.116094]
Abstract
Intraoperative tissue deformation, known as brain shift, decreases the benefit of using preoperative images to guide neurosurgery. Non-rigid registration of preoperative magnetic resonance (MR) images to intraoperative ultrasound (iUS) has been proposed as a means to compensate for brain shift. We focus on the initial registration from MR to predurotomy iUS. We present a method that builds on previous work to address the need for accuracy and generality of MR-iUS registration algorithms in multi-site clinical data. High-dimensional texture attributes were used instead of image intensities for registration, and the standard difference-based attribute matching was replaced with correlation-based attribute matching. A strategy that deals explicitly with the large field-of-view mismatch between MR and iUS images was proposed. Key parameters were optimized across independent MR-iUS brain tumor datasets acquired at three institutions, with a total of 43 tumor patients and 758 reference landmarks for evaluating the accuracy of the proposed algorithm. Despite differences in imaging protocols, patient demographics and landmark distributions, the algorithm reduces pre-registration landmark errors in the three datasets (5.37±4.27, 4.18±1.97 and 6.18±3.38 mm, respectively) to a consistently low level (2.28±0.71, 2.08±0.37 and 2.24±0.78 mm, respectively). The algorithm was tested against 15 other algorithms and is competitive with the state-of-the-art on multiple datasets. It achieves one of the lowest errors in all datasets (accuracy) with a fixed set of parameters across multi-site data (generality). In contrast, other algorithms/tools of similar performance need per-dataset parameter tuning (high accuracy but lower generality), while those that keep fixed parameters have larger errors or inconsistent performance (generality but not top accuracy). Landmark errors were further characterized according to brain regions and tumor types, a topic so far missing in the literature.
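The core change this abstract describes, replacing difference-based with correlation-based attribute matching, can be sketched in a few lines; the attribute vectors below are hypothetical stand-ins for the paper's high-dimensional texture attributes.

```python
import numpy as np

def attribute_similarity(attr_a, attr_b):
    """Correlation-based matching of two per-voxel attribute vectors.

    Pearson correlation is invariant to affine intensity changes of the
    attributes, which difference-based (SSD) matching is not; this is
    the kind of invariance that helps across heterogeneous multi-site data.
    """
    a = attr_a - attr_a.mean()
    b = attr_b - attr_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

An affine rescaling of one attribute vector leaves this similarity at 1.0, whereas a squared-difference score would grow with the rescaling.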
Affiliation(s)
- Inês Machado
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Matthew Toews
- Department of Systems Engineering, École de Technologie Supérieure, Montreal, Canada
- Elizabeth George
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Prashin Unadkat
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Walid Essayed
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jie Luo
- Graduate School of Frontier Sciences, University of Tokyo, Tokyo, Japan
- Pedro Teodoro
- Escola Superior Náutica Infante D. Henrique, Lisbon, Portugal
- Herculano Carvalho
- Department of Neurosurgery, Hospital de Santa Maria, CHLN, Lisbon, Portugal
- Jorge Martins
- Department of Mechanical Engineering, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Steve Pieper
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Isomics, Inc., Cambridge, MA, USA
- Sarah Frisken
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alexandra Golby
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- William Wells III
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Yangming Ou
- Department of Pediatrics and Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
3
Farnia P, Najafzadeh E, Ahmadian A, Makkiabadi B, Alimohamadi M, Alirezaie J. Co-Sparse Analysis Model Based Image Registration to Compensate Brain Shift by Using Intra-Operative Ultrasound Imaging. Annu Int Conf IEEE Eng Med Biol Soc 2018:1-4. [PMID: 30440252] [DOI: 10.1109/embc.2018.8512375]
Abstract
Notwithstanding the widespread use of image-guided neurosurgery systems in recent years, the accuracy of these systems is strongly limited by intra-operative deformation of the brain tissue, the so-called brain shift. Intra-operative ultrasound (iUS) imaging is an effective solution for compensating the complex brain shift phenomenon: the patient coordinates are updated during surgery by registering the intra-operative ultrasound to the pre-operative MRI data, which is a challenging problem. In this work, a non-rigid multimodal image registration technique based on a co-sparse analysis model is proposed. This model captures the interdependency of the two image modalities: MRI as an intensity image and iUS as a depth image. Based on this model, the transformation between the two modalities is minimized using a bimodal pair of analysis operators, which are learned by optimizing a joint co-sparsity function with a conjugate gradient method. Experimental validation confirms that our registration approach quantitatively outperforms several other state-of-the-art registration methods. The evaluation was performed on a seven-patient dataset, with a mean registration error of only 1.83 mm. Our intensity-based co-sparse analysis model improved the accuracy of non-rigid multimodal medical image registration by 15.37% compared to the curvelet-based residual complexity, a powerful registration method, in a computational time compatible with clinical use.
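A minimal sketch of the gradient-based optimization this abstract describes, with several deliberate stand-ins: identity matrices in place of the learned bimodal analysis operators, a two-parameter translation in place of the non-rigid transform, and SciPy's conjugate-gradient minimizer applied to a smoothed co-sparsity surrogate.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

EPS = 1e-6  # smoothing constant so the penalty is differentiable

def cosparse_cost(t, omega_mr, omega_us, mr, us):
    """Smoothed joint co-sparsity after translating the iUS image by t."""
    moved = nd_shift(us, t, order=3)
    c_mr = omega_mr @ mr.ravel()
    c_us = omega_us @ moved.ravel()
    # grouped penalty: small when the coefficient supports coincide
    return float(np.sqrt(c_mr**2 + c_us**2 + EPS).sum())

def register_cg(omega_mr, omega_us, mr, us, t0=(0.0, 0.0)):
    """Minimize the co-sparsity objective with conjugate gradients."""
    res = minimize(cosparse_cost, np.asarray(t0, dtype=float),
                   args=(omega_mr, omega_us, mr, us), method='CG')
    return res.x
```

On two Gaussian blobs offset by a known translation, the conjugate-gradient search drives the co-sparsity cost below its value at the identity and moves the estimate toward the true offset.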
4
Farnia P, Makkiabadi B, Ahmadian A, Alirezaie J. Curvelet based residual complexity objective function for non-rigid registration of pre-operative MRI with intra-operative ultrasound images. Annu Int Conf IEEE Eng Med Biol Soc 2016:1167-1170. [PMID: 28268533] [DOI: 10.1109/embc.2016.7590912]
Abstract
Intra-operative ultrasound, as an imaging-based method, has been recognized in recent years as an effective solution to compensate for the non-rigid brain shift problem. Measuring brain shift requires registration of the pre-operative MRI images with the intra-operative ultrasound images, which is a challenging task. In this study, a novel hybrid method is proposed based on matching echogenic structures, such as sulci and the tumor boundary, between MRI and ultrasound images. The matching of echogenic structures is achieved by optimizing the Residual Complexity (RC) in the curvelet domain. In the first step, a probabilistic map of the MR image is computed and the residual image is obtained as the difference between this probabilistic map and the intra-operative ultrasound image. The curvelet transform, as a sparsifying function, is then used to minimize the complexity of the residual image. The proposed method is a compromise between feature-based and intensity-based approaches. Evaluation was performed on a 14-patient dataset, and the mean registration error reached 1.87 mm. This RC-based hybrid method improves the accuracy of non-rigid multimodal image registration by 12.5% in a computational time compatible with clinical use.
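The residual-complexity objective that this abstract moves into the curvelet domain can be sketched as follows; a 2-D DCT stands in here for the curvelet transform, since curvelet implementations are not part of the standard scientific Python stack, but the structure of the objective is the same.

```python
import numpy as np
from scipy.fft import dctn

def residual_complexity(fixed, moving, alpha=0.05):
    """Residual complexity of a candidate alignment.

    The residual between the images is expanded in a sparsifying
    transform and the coding complexity of its coefficients is
    penalized; a good alignment leaves a residual that the transform
    can code sparsely.
    """
    coeffs = dctn(fixed - moving, norm='ortho')
    return float(np.log1p(coeffs**2 / alpha).sum())
```

Identical, perfectly aligned images give zero complexity, while any misalignment produces a structured residual and a strictly positive score, which is what the registration minimizes.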