1. Shim J, Lee Y. No-Reference-Based and Noise Level Evaluations of Cinematic Rendering in Bone Computed Tomography. Bioengineering (Basel) 2024; 11:563. [PMID: 38927799] [PMCID: PMC11201129] [DOI: 10.3390/bioengineering11060563]
Abstract
Cinematic rendering (CR) is a new 3D post-processing technology widely used to produce bone computed tomography (CT) images. This study aimed to evaluate the performance quality of CR in bone CT images using blind quality and noise level evaluations. Bone CT images of the face, shoulder, lumbar spine, and wrist were acquired. Volume rendering (VR), which is widely used in diagnostic medical imaging, was evaluated alongside CR. A no-reference-based blind/referenceless image spatial quality evaluator (BRISQUE) and the coefficient of variation (COV) were used to evaluate the overall quality of the acquired images. The average BRISQUE values derived from the four areas were 39.87 for CR and 46.44 for VR (a ratio of approximately 1.16), and the gap between the two methods widened in the bone CT images in which metal artifacts were observed. In addition, the COV value improved by 2.20 times on average when using CR compared to VR. This study showed that CR is useful for reconstructing 3D bone CT images and that various applications in the diagnostic medical field will be possible.
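The noise metric in this abstract, the coefficient of variation (COV), is simply the standard deviation of pixel intensities divided by their mean within a region of interest; a lower COV indicates a less noisy, more uniform region. A minimal NumPy sketch of the kind of comparison reported above, using synthetic ROIs (the array names and noise levels are illustrative, not the study's data):

```python
import numpy as np

def coefficient_of_variation(roi: np.ndarray) -> float:
    """COV = standard deviation / mean of pixel intensities in an ROI.

    Lower COV means a more uniform (less noisy) region.
    """
    roi = roi.astype(np.float64)
    return float(roi.std() / roi.mean())

# Synthetic, uniform-background ROIs with different noise levels:
rng = np.random.default_rng(0)
roi_cr = 100.0 + rng.normal(0.0, 2.0, size=(64, 64))  # less noisy rendering
roi_vr = 100.0 + rng.normal(0.0, 5.0, size=(64, 64))  # noisier rendering

# "Improvement factor" analogous to the abstract's 2.20x figure:
improvement = coefficient_of_variation(roi_vr) / coefficient_of_variation(roi_cr)
print(f"COV improvement factor: {improvement:.2f}")  # > 1.0 when the first ROI is less noisy
```

In practice the ROIs would be cropped from matched CR and VR reconstructions of the same anatomy; the improvement factor is then the ratio of the two COV values.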
Affiliation(s)
- Jina Shim: Department of Diagnostic Radiology, Severance Hospital, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Youngjin Lee: Department of Radiological Science, Gachon University, Incheon 21936, Republic of Korea
2. Brookmeyer C, Chu LC, Rowe SP, Fishman EK. Clinical implementation of cinematic rendering. Curr Probl Diagn Radiol 2024; 53:313-328. [PMID: 38365458] [DOI: 10.1067/j.cpradiol.2024.01.010]
Abstract
Cinematic rendering is a recently developed photorealistic display technique for standard volumetric data sets. It has broad-reaching applications in cardiovascular, musculoskeletal, abdominopelvic, and thoracic imaging. It has been used for surgical planning and has emerging use in educational settings. We review the logistics of performing this post-processing step and its integration into existing workflow.
Affiliation(s)
- Claire Brookmeyer: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Linda C Chu: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Steven P Rowe: Department of Radiology, University of North Carolina School of Medicine, Chapel Hill, NC, United States
- Elliot K Fishman: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
3. Hou X, Xu R, Chen L, Yang D, Li D. 3D color multimodality fusion imaging as an augmented reality educational and surgical planning tool for extracerebral tumors. Neurosurg Rev 2023; 46:280. [PMID: 37875636] [DOI: 10.1007/s10143-023-02184-0]
Abstract
Extracerebral tumors often occur on the surface of the brain or at the skull base, making it important to identify the peritumoral sulci, gyri, and nerve fibers. Preoperative visualization with three-dimensional (3D) multimodal fusion imaging (MFI) is crucial for surgery. However, traditional 3D-MFI brain models are homochromatic and do not allow easy identification of anatomical functional areas. In this study, 33 patients with extracerebral tumors without peritumoral edema were retrospectively recruited. They underwent 3D T1-weighted MRI, diffusion tensor imaging (DTI), and CT angiography (CTA) sequence scans. 3D Slicer, FreeSurfer, and BrainSuite were used to explore 3D-color-MFI and preoperative planning. To determine the effectiveness of 3D-color-MFI as an augmented reality (AR) teaching tool for neurosurgeons and as a patient education and communication tool, questionnaires were administered to 15 neurosurgery residents and to all patients, respectively. For neurosurgical residents, 3D-color-MFI provided a better understanding of surgical anatomy and more efficient techniques for removing extracerebral tumors than traditional 3D-MFI (P < 0.001). For patients, the use of 3D-color-MFI significantly improved their understanding of the surgical approach and its risks (P < 0.005). 3D-color-MFI is a promising AR tool for extracerebral tumors and is more useful for learning surgical anatomy, developing surgical strategies, and improving communication with patients.
Affiliation(s)
- Xiaolin Hou: Department of Neurosurgery, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 61173, China
- Ruxiang Xu: Department of Neurosurgery, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 61173, China
- Longyi Chen: Department of Neurosurgery, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 61173, China
- Dongdong Yang: Department of Neurology, Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Dingjun Li: Department of Neurosurgery, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 61173, China
4. Layden N, Brassil C, Jha N, Saundankar J, Yim D, Andrews D, Patukale A, Srigandan S, Murray CP. Cinematic versus volume rendered imaging for the depiction of complex congenital heart disease. J Med Imaging Radiat Oncol 2023; 67:487-491. [PMID: 36916320] [DOI: 10.1111/1754-9485.13518]
Abstract
INTRODUCTION Planning for surgical intervention for patients with complex congenital heart disease requires a comprehensive understanding of the individual's anatomy. Cinematic rendering (CR) is a novel technique that purportedly builds on traditional volume rendering (VR) by converting CT image data into clearly defined 3D reconstructions through the simulation of light-ray propagation. The purpose of this study was to compare CR with VR for the understanding of critical anatomy in unoperated complex congenital heart disease. METHODS In this retrospective study, CT data sets from 20 sequentially scanned cases of unoperated paediatric patients with complex congenital heart disease were included. 3D images were produced at standardised and selected orientations, matched for both VR and CR. The images were then independently reviewed by two cardiologists, two radiologists and two surgeons for overall image quality, depth perception and the visualisation of surgically relevant anatomy, the coronary arteries and the pulmonary veins. RESULTS Cinematic rendering demonstrated significantly superior image quality, depth perception and visualisation of surgically relevant anatomy compared with VR. CONCLUSION Cinematic rendering is a novel 3D CT-rendering technique that may surpass the traditionally used volume rendering technique in the provision of actionable pre-operative anatomical detail for complex congenital heart disease.
Affiliation(s)
- Natalie Layden: Department of Medical Imaging, Perth Children's Hospital, Perth, Western Australia, Australia
- Nihar Jha: Department of Medical Imaging, Perth Children's Hospital, Perth, Western Australia, Australia
- Jelena Saundankar: Department of Cardiology, Perth Children's Hospital, Perth, Western Australia, Australia
- Deane Yim: Department of Cardiology, Perth Children's Hospital, Perth, Western Australia, Australia
- David Andrews: Department of Cardiothoracic Surgery, Perth Children's Hospital, Perth, Western Australia, Australia
- Aditya Patukale: Department of Cardiothoracic Surgery, Perth Children's Hospital, Perth, Western Australia, Australia
- Shrivuthsun Srigandan: Department of Medical Imaging, Mazankowski Alberta Heart Institute, University of Alberta, Edmonton, Alberta, Canada
5. Banerjee S, Pham T, Eastaway A, Auffermann WF, Quigley EP. The Use of Virtual Reality in Teaching Three-Dimensional Anatomy and Pathology on CT. J Digit Imaging 2023; 36:1279-1284. [PMID: 36717519] [PMCID: PMC9886418] [DOI: 10.1007/s10278-023-00784-2]
Abstract
While radiological imaging is presented as two-dimensional images, whether on radiography or cross-sectional imaging, it is important for interpreters to understand three-dimensional anatomy and pathology. We hypothesized that virtual reality (VR) may serve as an engaging and effective way for trainees to learn to extrapolate from two-dimensional images to an understanding of these three-dimensional structures. We created a Google Cardboard virtual reality application that depicts intracranial vasculature and aneurysms. We then recruited 12 medical students to voluntarily participate in our study. The performance of the students in identifying intracranial aneurysms before and after the virtual reality training was evaluated and compared to that of a control group. While the experimental group's performance in correctly identifying aneurysms after the virtual reality educational intervention was better than the control group's (the experimental group improved by 5.3%; the control group declined by 2.1%), the difference was not statistically significant (p = 0.06). Notably, survey data from the medical students were very positive, with students noting that they preferred the immersive virtual reality training over conventional education and believed that VR would be a helpful educational tool for them in the future. We believe virtual reality can serve as an important tool to help radiology trainees better understand three-dimensional anatomy and pathology.
Affiliation(s)
- Soham Banerjee: Department of Radiology, Baylor College of Medicine, One Baylor Plaza BCM360, Houston, TX, USA
- Theresa Pham: Department of Radiology, University of Utah, Salt Lake City, UT, USA
- Adriene Eastaway: Department of Radiology, University of Utah, Salt Lake City, UT, USA
- Edward P Quigley: Department of Radiology, University of Utah, Salt Lake City, UT, USA
6. Isikbay M, Caton MT, Calabrese E. A Deep Learning Approach for Automated Bone Removal from Computed Tomography Angiography of the Brain. J Digit Imaging 2023; 36:964-972. [PMID: 36781588] [PMCID: PMC10287884] [DOI: 10.1007/s10278-023-00788-y]
Abstract
Advanced visualization techniques such as maximum intensity projection (MIP) and volume rendering (VR) are useful for evaluating neurovascular anatomy on CT angiography (CTA) of the brain; however, interference from surrounding osseous anatomy is common. Existing methods for removing bone from CTA images are limited in scope and/or accuracy, particularly at the skull base. We present a new brain CTA bone removal tool, which addresses many of these limitations. A deep convolutional neural network was designed and trained for bone removal using 72 brain CTAs. The model was tested on 15 CTAs from the same data source and 17 CTAs from an independent external dataset. Bone removal accuracy was assessed quantitatively, by comparing automated segmentation results to manual segmentations, and qualitatively, by evaluating VR visualization of the carotid siphons against an existing method for automated bone removal. Average Dice overlap between automated and manual segmentations was 0.986 for the internal and 0.979 for the external test dataset. This was superior to a publicly available noncontrast head CT bone removal algorithm, which had a Dice overlap of 0.947 (internal dataset) and 0.938 (external dataset). Our algorithm yielded better VR visualization of the carotid siphons than the publicly available bone removal tool in 14 out of 15 CTAs (93%; chi-square statistic 22.5, p < 0.00001) from the internal test dataset and 15 out of 17 CTAs (88%; chi-square statistic 23.1, p < 0.00001) from the external test dataset. Bone removal allowed subjectively superior MIP and VR visualization of vascular anatomy and pathology. The proposed brain CTA bone removal algorithm is rapid and accurate, allows superior visualization of vascular anatomy and pathology compared with other available techniques, and was validated on an independent external dataset.
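The Dice overlap used here to score segmentation accuracy is the standard set-similarity measure, 2|A∩B| / (|A| + |B|), computed between a predicted binary mask and a manual reference mask. A minimal sketch for binary NumPy masks (the array contents are illustrative, not the study's segmentations):

```python
import numpy as np

def dice_overlap(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy 1D masks: 2 overlapping voxels, mask sizes 3 and 2 -> 2*2/(3+2) = 0.8
auto = np.array([0, 1, 1, 1, 0, 0])
manual = np.array([0, 1, 1, 0, 0, 0])
print(dice_overlap(auto, manual))  # 0.8
```

For real volumes the same function applies unchanged to 3D arrays, since the sums run over all voxels regardless of dimensionality.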
Affiliation(s)
- Masis Isikbay: Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, M-396, San Francisco, CA, 94143, USA
- M Travis Caton: Cerebrovascular Center, Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, 1450 Madison Ave, New York, NY, 10029, USA
- Evan Calabrese: Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, M-396, San Francisco, CA, 94143, USA; Department of Radiology, Division of Neuroradiology, Duke University Medical Center, Box 3808 DUMC, Durham, NC, 27710, USA; Duke Center for Artificial Intelligence in Radiology (DAIR), Duke University Medical Center, Durham, NC, 27710, USA; Center for Intelligent Imaging, University of California San Francisco, San Francisco, CA, 94143, USA
7. Willershausen I, Necker F, Kloeckner R, Seidel CL, Paulsen F, Gölz L, Scholz M. Cinematic rendering to improve visualization of supplementary and ectopic teeth using CT datasets. Dentomaxillofac Radiol 2023; 52:20230058. [PMID: 37015249] [PMCID: PMC10170174] [DOI: 10.1259/dmfr.20230058]
Abstract
OBJECTIVES Ectopic, impacted, and supplementary teeth are the number one reason for cross-sectional imaging in pediatric dentistry. The accurate post-processing of acquired data sets is crucial to obtain precise, yet also intuitively understandable, three-dimensional (3D) models, which facilitate clinical decision-making and improve treatment outcomes. Cinematic rendering (CR) is a novel visualization technique using physically based volume rendering to create photorealistic images from DICOM data. The aim of the present study was to tailor pre-existing CR reconstruction parameters for use in dental imaging, exemplified by the diagnostic 3D visualization of ectopic, impacted, and supplementary teeth. METHODS CR was employed for the volumetric image visualization of midface CT data sets. Predefined reconstruction parameters were specifically modified to visualize the presented dental pathologies, the dentulous jaw, and isolated teeth. The 3D spatial relationship of the teeth, as well as their structural relationship with the antagonizing dentition, could immediately be investigated and highlighted by separate, interactive 3D visualization after segmentation through windowing. RESULTS To the best of our knowledge, CR has not previously been implemented for the visualization of supplementary and ectopic teeth segmented from the surrounding bone, because the software has not yet provided customized reconstruction parameters appropriate for dental imaging. When our new, modified reconstruction parameters are employed, CR presents a fast approach to obtaining realistic visualizations of both dental and osseous structures. CONCLUSIONS CR enables dentists and oral surgeons to gain an improved 3D understanding of anatomical structures, allowing for more intuitive treatment planning and patient communication.
Affiliation(s)
- Ines Willershausen: Department of Orthodontics and Orofacial Orthopedics, Friedrich-Alexander-University Erlangen-Nürnberg, Gluecksstrasse, Erlangen, Germany
- Roman Kloeckner: Institute of Interventional Radiology, University Hospital of Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee, Lübeck, Germany
- Corinna Lesley Seidel: Department of Orthodontics and Orofacial Orthopedics, Friedrich-Alexander-University Erlangen-Nürnberg, Gluecksstrasse, Erlangen, Germany
- Friedrich Paulsen: Institute of Functional and Clinical Anatomy, Friedrich-Alexander-University Erlangen-Nürnberg, Krankenhausstrasse, Erlangen, Germany
- Lina Gölz: Department of Orthodontics and Orofacial Orthopedics, Friedrich-Alexander-University Erlangen-Nürnberg, Gluecksstrasse, Erlangen, Germany
- Michael Scholz: Institute of Functional and Clinical Anatomy, Friedrich-Alexander-University Erlangen-Nürnberg, Krankenhausstrasse, Erlangen, Germany
8. Richards S. Student Engagement Using HoloLens Mixed-Reality Technology in Human Anatomy Laboratories for Osteopathic Medical Students: an Instructional Model. Med Sci Educ 2023; 33:223-231. [PMID: 36691419] [PMCID: PMC9850333] [DOI: 10.1007/s40670-023-01728-9]
Abstract
Mixed-reality technology is a powerful tool used in healthcare and medical education to engage students in life-like scenarios. This blend of virtual and augmented reality images incorporates virtual projections into the real environment to allow real-time observation and interaction [1]. While this immersive technology offers advantages over cadaver dissections, it creates new challenges to keeping students engaged [2, 3]. Student engagement improves students' commitment to learning, critical thinking, and motivation, and results in successful course outcomes [4, 5]. This paper provides an activity model using the HoloLens mixed-reality technology to deliver human gross anatomy laboratory sessions to first-year osteopathic medical students. The activity was designed using Gagne's model for instructional design and team-based learning to create an active learning model that targets the behavioral, emotional, and cognitive dimensions of student engagement [6, 7]: behavioral engagement through autonomy and time on task, emotional engagement through the guiding exploration and narrative flow accompanying students' visual experience, and cognitive engagement by incorporating team-based learning (TBL) and case-based learning (CBL). The instructional model also answers the call for a new type of virtual reality instructor and pedagogical strategy that addresses the unique challenges of this new technology and increases student engagement with it. The effectiveness of this classroom activity was assessed by observing students for indicators or behaviors of student engagement, which are discussed. Further studies are required to measure the extent to which these indicators were exhibited and to compare student engagement in this mixed-reality format with that in didactic cadaver-based laboratory sessions.
Affiliation(s)
- Sherese Richards: California Health Sciences University, Department of Biomedical Education - Anatomy, Clovis, CA 93611, USA
9. The Correlation Between Body Mass Index and Computed Tomography Angiography on Vascular Positioning in Anterolateral Thigh Flap Transplantation. J Belg Soc Radiol 2022; 106:102. [DOI: 10.5334/jbsr.2762]
10. Wang L, Zhao Z, Wang G, Zhou J, Zhu H, Guo H, Huang H, Yu M, Zhu G, Li N, Na Y. Application of a three-dimensional visualization model in intraoperative guidance of percutaneous nephrolithotomy. Int J Urol 2022; 29:838-844. [PMID: 35545290] [DOI: 10.1111/iju.14907]
Abstract
OBJECTIVES To establish a three-dimensional visualization model of percutaneous nephrolithotomy, apply it to guiding intraoperative puncture in a mixed reality environment, and evaluate its accuracy and clinical value. METHODS Patients with percutaneous nephrolithotomy indications were prospectively divided into a three-dimensional group and a control group at a ratio of 1:2. For patients in the three-dimensional group, positioning markers were pasted on the skin and enhanced computed tomography scanning was performed in the prone position. Holographic three-dimensional models were made and puncture routes were planned before the operation. During the operation, the three-dimensional model was displayed through HoloLens glasses and visually registered with the patient's body. Puncture of the target renal calyx was performed under three-dimensional-image guidance and ultrasonic monitoring. Patients in the control group underwent routine percutaneous nephrolithotomy in the prone position under B-ultrasound monitoring. Deviation distance of the kidney, puncture time, puncture attempts, channel coincidence rate, stone clearance rate, and postoperative complications were assessed. RESULTS Twenty-one and 40 patients were enrolled in the three-dimensional and control groups, respectively. For the three-dimensional group, the average deviation between the virtual and real kidney was 3.1 ± 2.9 mm. All punctures were performed according to preoperative planning. Compared with the control group, the three-dimensional group had a shorter puncture time (8.9 ± 3.3 vs 14.5 ± 6.1 min, P < 0.001), fewer puncture attempts (1.4 ± 0.6 vs 2.2 ± 1.5, P = 0.009), and might also have performed better in stone clearance rate (90.5% vs 72.5%, P = 0.19) and postoperative complications (P = 0.074). CONCLUSIONS The percutaneous nephrolithotomy three-dimensional model showed acceptable accuracy and good value for guiding puncture in a mixed reality environment.
Affiliation(s)
- Lei Wang: Department of Urology, Peking University Shougang Hospital, Beijing, China; Peking University Wujieping Urology Center, Peking University Health Science Center, Beijing, China
- Zichen Zhao: Department of Urology, Peking University Shougang Hospital, Beijing, China; Peking University Wujieping Urology Center, Peking University Health Science Center, Beijing, China
- Gang Wang: Department of Urology, Peking University Shougang Hospital, Beijing, China; Peking University Wujieping Urology Center, Peking University Health Science Center, Beijing, China
- Jianfang Zhou: Department of Urology, Shougang Shuigang General Hospital, Liupanshui City, Guizhou
- He Zhu: Department of Urology, Peking University Shougang Hospital, Beijing, China; Peking University Wujieping Urology Center, Peking University Health Science Center, Beijing, China
- Hongfeng Guo: Department of Urology, Peking University Shougang Hospital, Beijing, China; Peking University Wujieping Urology Center, Peking University Health Science Center, Beijing, China
- Huagang Huang: Department of Urology, Shougang Shuigang General Hospital, Liupanshui City, Guizhou
- Mingchuan Yu: Department of Medical Imaging, Peking University Shougang Hospital, Beijing, China
- Gang Zhu: Department of Urology, Beijing United Family Hospital, Beijing, China
- Ningchen Li: Department of Urology, Peking University Shougang Hospital, Beijing, China; Peking University Wujieping Urology Center, Peking University Health Science Center, Beijing, China
- Yanqun Na: Department of Urology, Peking University Shougang Hospital, Beijing, China; Peking University Wujieping Urology Center, Peking University Health Science Center, Beijing, China
11. Nguyen I, Caton MT, Tonetti D, Abla A, Kim A, Smith W, Hetts SW. Angiographically Occult Subarachnoid Hemorrhage: Yield of Repeat Angiography, Influence of Initial CT Bleed Pattern, and Sources of Diagnostic Error in 242 Consecutive Patients. AJNR Am J Neuroradiol 2022; 43:731-735. [PMID: 35361576] [PMCID: PMC9089267] [DOI: 10.3174/ajnr.a7483]
Abstract
BACKGROUND AND PURPOSE Nearly 20% of patients with spontaneous SAH have no definitive source on initial DSA. The purpose of this study was to investigate the timing and yield of repeat DSA, to clarify the influence of initial CT bleed pattern, and to characterize sources of diagnostic error in this scenario. MATERIALS AND METHODS We evaluated the yield of repeat DSA and clinical outcomes stratified by hemorrhage pattern on CT in consecutive patients with nontraumatic SAH and negative initial DSA findings at a referral center. Cases in which the culprit lesion was subsequently diagnosed were classified as either physiologically occult (ie, undetectable) on the initial DSA despite adequate technique and interpretation, or misdiagnosed due to operator-dependent error. RESULTS Two hundred forty-two of 1163 (20.8%) patients with spontaneous SAH had negative initial DSA findings between 2009 and 2018. The SAH CT pattern was nonperimesencephalic (41%), perimesencephalic (36%), sulcal (18%), and CT-negative (5%). Repeat DSA in 135/242 patients (55.8%) revealed a source in 10 patients (7.4%): 4 saccular aneurysms, 4 atypical aneurysms, and 2 arteriovenous shunts. The overall yield of repeat DSA was 11.3% for nonperimesencephalic and 2.2% for perimesencephalic patterns. The yield of the second and third DSAs with a nonperimesencephalic pattern was 7.7% and 12%, respectively. Physiologically occult lesions accounted for 6/242 (2.5%) and operator-dependent errors for 7/242 (2.9%) of all angiographically occult lesions on the first DSA. CONCLUSIONS Atypical aneurysms and small arteriovenous shunts are important causes of angiographically negative SAH. Improving DSA technique can modestly reduce the need for repeat DSA; however, a small fraction of SAH sources remains occult despite adequate technique. These findings support the practice of repeating DSA in patients with a nonperimesencephalic SAH pattern.
Affiliation(s)
- I Nguyen: Department of Neurology (I.N.), University of California, Davis, Sacramento, California; Department of Neurology (I.N., A.K., W.S.)
- M T Caton: Radiology and Biomedical Imaging (M.T.C., S.W.H.)
- D Tonetti: Neurological Surgery (D.T., A.A.), University of California, San Francisco, San Francisco, California
- A Abla: Neurological Surgery (D.T., A.A.), University of California, San Francisco, San Francisco, California
- A Kim: Department of Neurology (I.N., A.K., W.S.)
- W Smith: Department of Neurology (I.N., A.K., W.S.)
- S W Hetts: Radiology and Biomedical Imaging (M.T.C., S.W.H.)
12. Cinematic rendering of paediatric musculoskeletal pathologies: initial experiences with CT. Clin Radiol 2022; 77:274-282. [DOI: 10.1016/j.crad.2022.01.033]
Abstract
Cinematic rendering (CR) is a novel post-processing technique similar to volume rendering (VR), which allows for a more photorealistic imaging reconstruction by using a complex light modelling algorithm incorporating information from multiple light paths and predicted photon scattering patterns. Several recent publications relating to adult imaging have argued that CR gives a better "realism" and "expressiveness" experience than VR techniques. CR has also been shown to improve visualisation of musculoskeletal and vascular anatomy compared with conventional CT viewing, and may help non-radiologists to understand complex patient anatomy. In this review, we provide an overview of how CR could be used in paediatric musculoskeletal imaging, particularly in complex diagnoses, surgical planning, and patient consent processes. We present a direct comparison of VR and CR reconstructions across a range of congenital and acquired musculoskeletal pathologies, highlighting potential advantages and areas for further research.
13. Steffen T, Winklhofer S, Starz F, Wiedemeier D, Ahmadli U, Stadlinger B. Three-dimensional perception of cinematic rendering versus conventional volume rendering using CT and CBCT data of the facial skeleton. Ann Anat 2022; 241:151905. [PMID: 35150863] [DOI: 10.1016/j.aanat.2022.151905]
Abstract
The aim of this exploratory study was to analyse whether three-dimensional cinematic rendering image reconstructions offer advantages over conventional volume rendering in the visualisation of cone beam computed tomography (CBCT) and computed tomography (CT) images of the facial skeleton. This is of interest because some information is lost during the rendering process, particularly structures in the background of the image and some surface information. The commonly applied two-dimensional representation of CBCT or CT images in three different axes requires experience to interpret. Cinematic rendering is a new three-dimensional post-processing reconstruction technique that creates photorealistic visualisations, thus possibly enabling an easier interpretation of the images. In this study, ten investigators assessed ten separate patient cases of the orofacial skeleton. For each case, a conventional volume rendering reconstruction and a cinematic rendering reconstruction of the same area were created. A specially designed questionnaire assessed both objective and subjective criteria of image perception. Objective criteria were assessed by predefined questions on the visual perception of anatomical image characteristics, with the two reconstruction types of each case shown randomly to the investigators in two sessions. Subjective criteria were assessed via a visual analogue scale, with both reconstructions shown simultaneously in a third session. The results show that cinematic rendering offers advantages especially in the evaluation of depth perception and three-dimensionality, whereas volume rendering shows advantages in surface sharpness. Cinematic rendering was subjectively rated higher for almost all reconstructions; the cinematic rendering process, however, may cause loss of information and blurring of surfaces compared with volume rendering. With respect to subjective impression, cinematic rendering scored better than volume rendering, and the visualisation is perceived as being very close to reality.
Affiliation(s)
- Tobias Steffen: Clinic of Cranio-Maxillofacial and Oral Surgery, University of Zurich, Switzerland
- Sebastian Winklhofer: Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland
- Felicitas Starz: Clinic of Cranio-Maxillofacial and Oral Surgery, University of Zurich, Switzerland
- Daniel Wiedemeier: Statistical Services, Center of Dental Medicine, University of Zurich, Switzerland
- Uzeyir Ahmadli: Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Switzerland; University Institute of Diagnostic and Interventional Neuroradiology, University Hospital Bern, Inselspital, University of Bern, Switzerland
- Bernd Stadlinger: Clinic of Cranio-Maxillofacial and Oral Surgery, University of Zurich, Switzerland
14
Wang R, Li JY. A preoperative evaluation of a giant mediastinal tumor using a novel three-dimensional cinematic rendering visualization method. Quant Imaging Med Surg 2021; 11:4700-4702. [PMID: 34737939 DOI: 10.21037/qims-20-1161] [Received: 10/15/2020] [Accepted: 04/13/2021] [Indexed: 11/06/2022]
Affiliation(s)
- Rui Wang: Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital/Center, Kunming, China
- Jia-Yi Li: Department of Colorectal Surgery, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital/Center, Kunming, China
15
Martín-Noguerol T, Concepción-Aramendia L, Lim CT, Santos-Armentia E, Cabrera-Zubizarreta A, Luna A. Conventional and advanced MRI evaluation of brain vascular malformations. J Neuroimaging 2021; 31:428-445. [PMID: 33856735 DOI: 10.1111/jon.12853] [Received: 12/25/2020] [Revised: 02/14/2021] [Accepted: 03/02/2021] [Indexed: 11/26/2022]
Abstract
Vascular malformations (VMs) of the central nervous system (CNS) include a wide range of pathological conditions related to intra- and extracranial vessel abnormalities. Although some VMs show typical neuroimaging features, others share overlapping pathological and neuroimaging features that hinder an accurate differentiation between them. Hence, it is not uncommon for different types of VMs to be misclassified under the general heading of arteriovenous malformations. Thorough knowledge of the imaging findings of each type of VM is mandatory to avoid these inaccuracies. Conventional MRI sequences, including MR angiography, allow the evaluation of CNS VMs without using ionizing radiation. Newer MRI techniques, such as susceptibility-weighted imaging, black-blood sequences, arterial spin labeling, and 4D flow imaging, add the value of providing pathophysiological data in real time regarding the hemodynamics of VMs. Beyond MR images, 3D printed models are being incorporated as part of the armamentarium for a noninvasive evaluation of VMs. In this paper, we briefly review the pathophysiology of CNS VMs, focusing on the MRI findings that may be helpful to differentiate them. We discuss the role of each conventional and advanced MRI sequence in the assessment of VMs and provide some insights into the value of structured reporting and 3D printing for evaluating VMs.
Affiliation(s)
- C.C. Tchoyoson Lim: Neuroradiology Department, National Neuroscience Institute and Duke-NUS Medical School, Singapore
- Antonio Luna: MRI Unit, Radiology Department, HT Medica, Jaén, Spain
16
Bueno MR, Estrela C, Granjeiro JM, Estrela MRDA, Azevedo BC, Diogenes A. Cone-beam computed tomography cinematic rendering: clinical, teaching and research applications. Braz Oral Res 2021; 35:e024. [PMID: 33624709 DOI: 10.1590/1807-3107bor-2021.vol35.0024] [Received: 06/29/2020] [Accepted: 10/22/2020] [Indexed: 02/08/2023]
Abstract
Cone-beam computed tomography (CBCT) is an essential imaging method that increases the accuracy of diagnosis, planning, and follow-up of complex endodontic cases. Image postprocessing and subsequent visualization rely on software for three-dimensional navigation and on indexation tools that extract clinically useful information from a set of volumetric data. Image postprocessing has a crucial impact on diagnostic quality, and various techniques have been employed on computed tomography (CT) and magnetic resonance imaging (MRI) data sets. These include multiplanar reformation (MPR), maximum intensity projection (MIP), and volume rendering (VR). A recent advance in 3D data visualization is the new cinematic rendering reconstruction method, a technique that generates photorealistic 3D images from conventional CT and MRI data. This review discusses the importance of CBCT cinematic rendering for clinical decision-making, teaching, and research in endodontics, and presents a series of cases that illustrate the diagnostic value of 3D cinematic rendering in clinical care.
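Of the postprocessing techniques named in this abstract, maximum intensity projection is simple enough to illustrate directly. Below is a minimal NumPy sketch: the synthetic volume, its dimensions, and the bright "bone-like" region are illustrative assumptions, not data from any of the cited studies.

```python
import numpy as np

# Synthetic CT-like volume: (slices, rows, cols) of attenuation values.
rng = np.random.default_rng(0)
volume = rng.normal(40.0, 10.0, size=(64, 128, 128))
volume[20:30, 50:70, 50:70] = 400.0  # bright, bone-like insert

# Maximum intensity projection (MIP): along each ray (here, the slice
# axis), keep only the brightest voxel encountered.
mip = volume.max(axis=0)

# Average intensity projection for comparison, sometimes preferred for
# low-contrast soft tissue.
aip = volume.mean(axis=0)

print(mip.shape)    # (128, 128)
print(mip[60, 60])  # 400.0 -- the bright insert dominates the ray
```

Because MIP discards everything along the ray except the single brightest voxel, it highlights high-attenuation structures such as bone or contrast-filled vessels, but loses the depth and occlusion cues that volume rendering and cinematic rendering are designed to preserve.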
Affiliation(s)
- Carlos Estrela: Universidade Federal de Goiás - UFGO, School of Dentistry, Stomatologic Science Department, Goiânia, GO, Brazil
- José Mauro Granjeiro: Instituto Nacional de Metrologia, Qualidade e Tecnologia - Inmetro, Duque de Caxias, RJ, Brazil
- Bruno Correa Azevedo: University of Louisville, School of Dentistry, Oral Radiology Department, Louisville, KY, USA
- Anibal Diogenes: University of Texas Health at San Antonio, School of Dentistry, Endodontics Department, San Antonio, TX, USA
17
Luks FI, Collins S, Xia J, Cao SA, Rios M. Combination of volume-rendering 3D surface modeling and medical illustration to capture the living fetus. Prenat Diagn 2020; 41:79-88. [PMID: 33058179 DOI: 10.1002/pd.5844] [Received: 06/27/2020] [Revised: 09/21/2020] [Accepted: 10/11/2020] [Indexed: 11/09/2022]
Abstract
OBJECTIVE A good medical illustration renders the essential aspects of a procedure or condition faithfully, yet idealizes them enough to be widely applicable. Unfortunately, the live fetus is generally hidden from sight, and illustrating it relies either on autopsy material or on manipulated newborn images. High-definition volume rendering of diagnostic imaging data can represent hidden conditions with almost lifelike realism but is limited by the resolution and artifacts of the data capture. We have combined both approaches to enhance the accuracy and didactic value of illustrations of fetal conditions. METHODS Three examples of increasing complexity are presented to demonstrate the creation of medical illustrations of the fetus based on semiautomatic, computerized post hoc manipulation of diagnostic images. RESULTS The end product combines the diagnostic accuracy of ultrasound and magnetic resonance imaging of the fetus with the spatial manipulation of 3D models to create a lifelike, accurate, and informative image of the fetal anomalies. CONCLUSION Volume rendering and 3D surface modeling can be combined with medical illustration to create realistic and informative images of the developing fetus, with a level of detail tailored to the intended audience.
Affiliation(s)
- Francois I Luks: Division of Pediatric Surgery, Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Scott Collins: Department of Diagnostic Imaging, Rhode Island Hospital, Providence, Rhode Island, USA
- Jimmy Xia: Brown University, Providence, Rhode Island, USA
- Shiliang Alice Cao: Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Matthew Rios: Rhode Island School of Design, Providence, Rhode Island, USA