1
Lesonen P, Wettenhovi VV, Kolehmainen V, Pulkkinen A, Vauhkonen M. Anatomy-guided multi-resolution image reconstruction in PET. Phys Med Biol 2024. PMID: 38636506. DOI: 10.1088/1361-6560/ad4082.
Abstract
In this paper, we propose positron emission tomography (PET) image reconstruction using a multi-resolution triangular mesh. The mesh can be adapted based on patient-specific anatomical information, which in hybrid imaging systems can be in the form of a computed tomography (CT) or magnetic resonance imaging (MRI) image. The triangular mesh can be refined to high resolution in localized anatomical regions of interest (ROI) and made coarser elsewhere, leading to an imaging model with high resolution in the ROI and a clearly reduced number of degrees of freedom compared to a conventional uniformly dense imaging model. We compare maximum likelihood expectation maximization (MLEM) reconstructions with the multi-resolution model to reconstructions using a uniformly dense mesh, a sparse mesh, and a regular rectangular pixel mesh. Two simulated cases are used in the comparison: the first uses the NEMA image quality phantom and the second the XCAT human phantom. Compared to the results with the uniform imaging models, the locally refined multi-resolution mesh retains the accuracy of the dense-mesh reconstruction in the ROI while being faster to compute than the reconstructions with the uniformly dense mesh. The locally dense multi-resolution model also leads to more accurate reconstruction than the pixel-based mesh or the sparse triangular mesh. The findings suggest that a triangular multi-resolution mesh, which can be made patient- and application-specific, is a potential alternative to pixel-based reconstruction.
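The MLEM algorithm named in this abstract iterates a simple multiplicative update, x ← x / (Aᵀ1) · Aᵀ(y / Ax), regardless of whether the image basis is pixels or mesh nodes. A minimal numerical sketch, using a hypothetical 2-bin, 3-coefficient system rather than the paper's mesh model:

```python
import numpy as np

# Hypothetical 2-bin, 3-coefficient system (NOT the paper's mesh model):
# A maps image basis coefficients (pixels or mesh nodes) to detector bins.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
y = np.array([2.0, 3.0])          # measured counts
x = np.ones(3)                    # non-negative initial estimate

sens = A.T @ np.ones(len(y))      # sensitivity image, A^T 1
for _ in range(50):
    ratio = y / (A @ x)           # measured counts / forward projection
    x = x / sens * (A.T @ ratio)  # multiplicative MLEM update
```

A useful invariant when checking an implementation: each MLEM update exactly preserves the total measured counts, i.e. the sensitivity-weighted sum of x equals the sum of y after every iteration.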
Affiliation(s)
- Piia Lesonen
- University of Eastern Finland, Kuopio Campus, Yliopistonranta 8, 70211 Kuopio, Finland
- Ville-Veikko Wettenhovi
- Technical Physics, University of Eastern Finland, Kuopio Campus, Yliopistonranta 8, 70211 Kuopio, Finland
- Ville Kolehmainen
- Technical Physics, University of Eastern Finland, Kuopio Campus, Yliopistonranta 8, 70211 Kuopio, Finland
- Aki Pulkkinen
- Technical Physics, University of Eastern Finland, Kuopio Campus, Yliopistonranta 8, 70211 Kuopio, Finland
- Marko Vauhkonen
- Technical Physics, University of Eastern Finland, Kuopio Campus, Yliopistonranta 8, 70211 Kuopio, Finland
2
Herzberger L, Hadwiger M, Kruger R, Sorger P, Pfister H, Groller E, Beyer J. Residency Octree: A Hybrid Approach for Scalable Web-Based Multi-Volume Rendering. IEEE Trans Vis Comput Graph 2024; 30:1380-1390. PMID: 37889813. PMCID: PMC10840607. DOI: 10.1109/tvcg.2023.3327193.
Abstract
We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution.
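The core decoupling idea, mapping one spatial node to a set of cached bricks across resolution levels so that missing high-resolution data can be substituted with coarser cached data, can be sketched as follows. This is a hypothetical illustrative structure, not the paper's GPU implementation:

```python
# Sketch of the residency idea: the spatial subdivision (the node) is
# independent of which multi-resolution bricks happen to be cached.
class ResidencyNode:
    def __init__(self):
        # resolution level -> brick id currently resident in the cache
        self.resident_bricks = {}

    def best_brick(self, requested_level):
        # Prefer the requested resolution; otherwise substitute the finest
        # coarser brick that is actually cached (graceful degradation on miss).
        candidates = [lvl for lvl in self.resident_bricks if lvl <= requested_level]
        if not candidates:
            return None              # cache miss at every usable level
        return self.resident_bricks[max(candidates)]

node = ResidencyNode()
node.resident_bricks = {0: "brick_c", 2: "brick_f"}  # level 0 (coarse) and 2 cached
```

In a one-to-one octree, a miss at the requested level would force either a stall or a full tree re-traversal; here the per-node brick set makes the substitution a local lookup.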
3
Boutsi AM, Ioannidis C, Verykokou S. Multi-Resolution 3D Rendering for High-Performance Web AR. Sensors (Basel) 2023; 23:6885. PMID: 37571668. PMCID: PMC10422453. DOI: 10.3390/s23156885.
Abstract
In the context of web augmented reality (AR), 3D rendering that meets visual quality and frame rate requirements remains a challenge. The lack of a dedicated and efficient 3D format often results in degraded visual quality of the original data and compromises the user experience. This paper examines the integration of web-streamable, view-dependent representations of large, high-resolution 3D models in web AR applications. The developed cross-platform prototype exploits the batched multi-resolution structures of the Nexus.js library as a dedicated lightweight web AR format and tests it against common formats and compression techniques. Built with the AR.js and Three.js open-source libraries, it allows the overlay of the multi-resolution models by interactively adjusting the position, rotation, and scale parameters. The proposed method includes real-time view-dependent rendering, geometric instancing, and 3D pose regression for two types of AR: natural feature tracking (NFT) and location-based positioning for large and textured 3D overlays. The prototype achieves up to a 46% speedup in rendering time compared to optimized glTF models, while a 3D model with 34 M vertices becomes visible in less than 4 s, without degraded visual quality, over slow 3D networks. The evaluation under various scenes and devices offers insights into how a multi-resolution scheme can be adopted in web AR for high-quality visualization and real-time performance.
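View-dependent selection in batched multi-resolution structures of this kind amounts to picking, per batch, the coarsest level whose projected geometric error stays under a screen-space budget. A toy sketch with made-up error values and scaling, not Nexus.js internals:

```python
# (level, geometric error in object units); level 0 is the finest mesh.
# All numbers are hypothetical, for illustration only.
lods = [(0, 0.5), (1, 2.0), (2, 8.0)]

def pick_lod(distance, fov_scale=500.0, budget_px=1.0):
    # Projected error shrinks with distance; try the coarsest level first
    # and accept it as soon as it fits the screen-space error budget.
    for level, err in reversed(lods):
        if err * fov_scale / distance <= budget_px:
            return level
    return lods[0][0]   # nothing coarse enough fits: fall back to finest
```

Far-away overlays resolve to coarse levels and nearby ones to fine levels, which is what keeps frame rate stable as the user moves around a large textured model.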
Affiliation(s)
- Styliani Verykokou
- Laboratory of Photogrammetry, School of Rural, Surveying and Geoinformatics Engineering, National Technical University of Athens, 15780 Athens, Greece
4
Wylie KP, Kronberg E, Legget KT, Sutton B, Tregellas JR. Stable Meta-Networks, Noise, and Artifacts in the Human Connectome: Low- to High-Dimensional Independent Components Analysis as a Hierarchy of Intrinsic Connectivity Networks. Front Neurosci 2021; 15:625737. PMID: 34025337. PMCID: PMC8134552. DOI: 10.3389/fnins.2021.625737.
Abstract
Connectivity within the human connectome occurs between multiple neuronal systems, at small to very large spatial scales. Independent component analysis (ICA) is potentially a powerful tool to facilitate such multi-scale analyses. However, ICA has yet to be fully evaluated at very low (10 or fewer components) and ultra-high dimensionalities (200 or greater). The current investigation used data from the Human Connectome Project (HCP) to determine the following: (1) whether larger networks, or meta-networks, are present at low dimensionality; (2) whether nuisance sources increase with dimensionality; and (3) whether ICA is prone to overfitting. Using bootstrap ICA, results suggested that, at very low dimensionality, ICA spatial maps consisted of Visual/Attention and Default/Control meta-networks. At fewer than 10 components, well-known networks such as the Somatomotor Network were absent from the results. At high dimensionality, nuisance sources were present even in denoised high-quality data but were identifiable by correlation with tissue probability maps. Artifactual overfitting occurred to a minor degree at high dimensionalities. Basic summary statistics on the spatial maps (maximum cluster size, maximum component weight, and average weight outside of the maximum cluster) quickly and easily separated artifacts from gray matter sources. Lastly, by using weighted averages of bootstrap stability, even ultra-high-dimensional ICA resulted in highly reproducible spatial maps. These results demonstrate how ICA can be applied in multi-scale analyses, reliably and accurately reproducing the hierarchy of meta-networks, large-scale networks, and subnetworks, thereby characterizing cortical connectivity across multiple spatial scales.
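The spatial-map summary statistics the authors use to separate artifacts from gray matter sources (maximum cluster size, maximum component weight, and average weight outside the maximum cluster) are cheap to compute. A toy 1-D sketch with hypothetical arrays; real ICA maps are 3-D volumes with proper cluster labeling:

```python
import numpy as np

def map_stats(spatial_map, threshold=2.0):
    # Contiguous supra-threshold runs stand in for spatial clusters.
    above = np.abs(spatial_map) > threshold
    clusters, current = [], []
    for i, flag in enumerate(above):
        if flag:
            current.append(i)
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    peak = float(np.abs(spatial_map).max())
    if not clusters:
        return 0, peak, float(np.abs(spatial_map).mean())
    biggest = max(clusters, key=len)
    outside = np.delete(np.abs(spatial_map), biggest)
    # (max cluster size, max weight, average weight outside the max cluster)
    return len(biggest), peak, float(outside.mean())

focal   = np.array([0, 0, 5, 6, 5, 0, 0, 0], float)  # compact, network-like
diffuse = np.array([3, 0, 3, 0, 3, 0, 3, 0], float)  # scattered, artifact-like
```

The intuition matches the paper's finding: a genuine network concentrates its weight in one large cluster (large max cluster, near-zero weight elsewhere), while an artifact spreads weight across many small clusters.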
Affiliation(s)
- Korey P. Wylie
- Department of Psychiatry, University of Colorado School of Medicine, Aurora, CO, United States
- Eugene Kronberg
- Department of Psychiatry, University of Colorado School of Medicine, Aurora, CO, United States
- Department of Neurology, University of Colorado School of Medicine, Aurora, CO, United States
- Kristina T. Legget
- Department of Psychiatry, University of Colorado School of Medicine, Aurora, CO, United States
- Research Service, Rocky Mountain Regional VA Medical Center, Aurora, CO, United States
- Brianne Sutton
- Department of Psychiatry, University of Colorado School of Medicine, Aurora, CO, United States
- Jason R. Tregellas
- Department of Psychiatry, University of Colorado School of Medicine, Aurora, CO, United States
- Research Service, Rocky Mountain Regional VA Medical Center, Aurora, CO, United States
5
Syawaludin MF, Lee M, Hwang JI. Foveation Pipeline for 360° Video-Based Telemedicine. Sensors (Basel) 2020; 20:2264. PMID: 32316257. PMCID: PMC7219060. DOI: 10.3390/s20082264.
Abstract
Pan-tilt-zoom (PTZ) and omnidirectional cameras serve as video-mediated communication interfaces for telemedicine. Most cases use either PTZ or omnidirectional cameras exclusively; even when used together, images from the two are shown separately on 2D displays. Conventional foveated imaging techniques may offer a solution for exploiting the benefits of both cameras, i.e., the high resolution of the PTZ camera and the wide field of view of the omnidirectional camera, but displaying the unified image on a 2D display would reduce the benefit of omnidirectionality. In this paper, we introduce a foveated imaging pipeline designed to support virtual reality head-mounted displays (HMDs). The pipeline consists of two parallel processes: one for estimating parameters for the integration of the two images and another for rendering images in real time. A control mechanism for placing the foveal region (i.e., the high-resolution area) in the scene and for zooming is also proposed. Our evaluations showed that the proposed pipeline achieved, on average, 17 frames per second when rendering the foveated view on an HMD, and showed angular resolution improvement in the foveal region compared with the omnidirectional camera view. However, the improvement was less significant at zoom levels of 8× and above. We discuss possible improvements and future research directions.
6
Al-Faris M, Chiverton J, Yang Y, Ndzi D. Deep Learning of Fuzzy Weighted Multi-Resolution Depth Motion Maps with Spatial Feature Fusion for Action Recognition. J Imaging 2019; 5:82. PMID: 34460648. PMCID: PMC8321166. DOI: 10.3390/jimaging5100082.
Abstract
Human action recognition (HAR) is an important yet challenging task. This paper presents a novel HAR method. First, fuzzy weight functions are used in the computation of depth motion maps (DMMs), and motion information over multiple temporal lengths is also used. These features are referred to as fuzzy weighted multi-resolution DMMs (FWMDMMs). This formulation allows various aspects of individual actions to be emphasized and helps to characterize the importance of the temporal dimension, which is needed to overcome, e.g., variations in the time over which a single type of action might be performed. A deep convolutional neural network (CNN) motion model is created and trained to extract discriminative and compact features. Transfer learning is also used to extract spatial information from RGB and depth data using the AlexNet network. Different late fusion techniques are then investigated to fuse the deep motion model with the spatial network, yielding a spatio-temporal HAR model. The developed approach is capable of recognizing both human actions and human-object interactions. Three public domain datasets are used to evaluate the proposed solution. The experimental results demonstrate the robustness of this approach compared with state-of-the-art algorithms.
Affiliation(s)
- Mahmoud Al-Faris
- School of Energy and Electronic Engineering, University of Portsmouth, Portsmouth PO1 3DJ, UK
- John Chiverton
- School of Energy and Electronic Engineering, University of Portsmouth, Portsmouth PO1 3DJ, UK
- Yanyan Yang
- School of Computing, University of Portsmouth, Portsmouth PO1 3DJ, UK
- David Ndzi
- School of Computing, Engineering and Physical Sciences, University of the West of Scotland, Paisley PA1 2BE, UK
7
Zhu C, Yin XC. Detecting Multi-Resolution Pedestrians Using Group Cost-Sensitive Boosting with Channel Features. Sensors (Basel) 2019; 19:780. PMID: 30769813. PMCID: PMC6412415. DOI: 10.3390/s19040780.
Abstract
Significant progress has been achieved in the past few years on the challenging task of pedestrian detection. Nevertheless, a major bottleneck of existing state-of-the-art approaches is a large drop in performance as the resolution of the detected targets decreases. For the boosting-based detectors that are popular in the pedestrian detection literature, a possible cause for this drop is that, in the boosting training process, low-resolution samples, which are usually more difficult to detect because of missing details, are treated as equally important as high-resolution samples. This results in false negatives, since low-resolution samples are more easily rejected in the early stages and can hardly be recovered in the late stages. To address this problem, we propose a robust multi-resolution detection approach with a novel group cost-sensitive boosting algorithm. The algorithm is derived from the standard AdaBoost algorithm to explore different costs for different resolution groups of samples in the boosting process, placing greater emphasis on low-resolution groups in order to better handle the detection of multi-resolution targets. The effectiveness of the proposed approach is evaluated on the Caltech pedestrian benchmark and the KAIST (Korea Advanced Institute of Science and Technology) multispectral pedestrian benchmark, and is validated by its promising performance on different resolution-specific test sets of both benchmarks.
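The group cost-sensitive idea can be illustrated with the standard exponential AdaBoost weight update scaled by a per-group cost, so that a misclassified low-resolution sample gains weight faster than a misclassified high-resolution one. The costs below are made-up numbers for illustration, not the paper's derivation:

```python
import math

# Hypothetical per-group costs: low-resolution samples are penalized more.
group_cost = {"high_res": 1.0, "low_res": 1.5}

def update_weight(w, cost, y_true, y_pred, alpha=1.0):
    # Standard exponential AdaBoost update, scaled by the sample's group cost:
    # a correct prediction (y_true * y_pred = +1) shrinks the weight,
    # a miss (y_true * y_pred = -1) inflates it, faster for costlier groups.
    return w * math.exp(-alpha * cost * y_true * y_pred)

# Two misclassified samples (y_true * y_pred = -1), one per resolution group.
w_hi = update_weight(1.0, group_cost["high_res"], +1, -1)
w_lo = update_weight(1.0, group_cost["low_res"], +1, -1)
```

After one round, the low-resolution miss carries more weight than the high-resolution miss, so subsequent weak learners are pushed to recover it rather than silently dropping it in an early cascade stage.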
Affiliation(s)
- Chao Zhu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China
- Xu-Cheng Yin
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China
8
Li H, Fan Y. Non-rigid image registration using self-supervised fully convolutional networks without training data. Proc IEEE Int Symp Biomed Imaging 2018; 2018:1075-1078. PMID: 30079127. PMCID: PMC6070305. DOI: 10.1109/isbi.2018.8363757.
Abstract
A novel non-rigid image registration algorithm is built upon fully convolutional networks (FCNs) to optimize and learn spatial transformations between pairs of images to be registered in a self-supervised learning framework. Different from most existing deep learning based image registration methods that learn spatial transformations from training data with known corresponding spatial transformations, our method directly estimates spatial transformations between pairs of images by maximizing an image-wise similarity metric between fixed and deformed moving images, similar to conventional image registration algorithms. The image registration is implemented in a multi-resolution image registration framework to jointly optimize and learn spatial transformations and FCNs at different spatial resolutions with deep self-supervision through typical feedforward and backpropagation computation. The proposed method has been evaluated for registering 3D structural brain magnetic resonance (MR) images and obtained better performance than state-of-the-art image registration algorithms.
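The multi-resolution strategy described here, estimating the transformation on coarse images and refining it at finer scales, is independent of the FCN machinery. A minimal sketch for 1-D translation with a brute-force similarity search (hypothetical signals; the paper optimizes dense deformations with self-supervised networks):

```python
import numpy as np

def downsample(sig, factor):
    # block-average downsampling to the given coarseness factor
    return sig[: len(sig) // factor * factor].reshape(-1, factor).mean(axis=1)

def register(fixed, moving, levels=(4, 2, 1), radius=2):
    # Coarse-to-fine: each level refines the previous shift estimate
    # within a small search radius, minimizing sum-of-squared differences.
    shift = 0
    for f in levels:
        fx, mv = downsample(fixed, f), downsample(moving, f)
        best = min(range(shift // f - radius, shift // f + radius + 1),
                   key=lambda s: float(np.sum((fx - np.roll(mv, s)) ** 2)))
        shift = best * f
    return shift

fixed = np.zeros(64)
fixed[20:28] = 1.0
moving = np.roll(fixed, -6)   # moving image is fixed shifted left by 6
```

The coarse level finds an approximate alignment cheaply; each finer level only searches a small neighborhood around it, which is exactly why multi-resolution schemes tolerate large deformations without an exhaustive fine-scale search.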
Affiliation(s)
- Hongming Li
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
9
Wu K, Zhang P, Li F, Guo C, Wu Z. On-Demand Multi-Resolution Liquid Alloy Printing Based on Viscoelastic Flow Squeezing. Polymers (Basel) 2018; 10:330. PMID: 30966365. PMCID: PMC6414868. DOI: 10.3390/polym10030330.
Abstract
Recently, high-resolution patterning techniques for stretchable electronics have advanced extensively. An important trend is to fabricate complex circuits with varied feature sizes in a small area, which is a technical challenge for current conductive ink printing technologies. Here, we introduce a new strategy for multi-resolution liquid alloy printing that can tune the resolution of the printed liquid alloy trace in real time using the squeezing effect of a compound viscoelastic flow. A newly developed coaxial nozzle with an inner nozzle extension (CNINE) is used to wrap and squeeze the liquid alloy steadily and effectively. By controlling the working parameters and the compound flow properties, liquid alloy patterns with different widths are obtained continuously. This work offers a new way to rapidly manufacture complex multi-resolution patterns for stretchable electronics.
Affiliation(s)
- Kang Wu
- State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Pan Zhang
- State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Fen Li
- State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Chuanfei Guo
- Department of Materials Science & Engineering, Southern University of Science & Technology, Shenzhen 518055, China
- Zhigang Wu
- State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
10
Chen HC, Jia W, Sun X, Li Z, Li Y, Fernstrom JD, Burke LE, Baranowski T, Sun M. Saliency-aware food image segmentation for personal dietary assessment using a wearable computer. Meas Sci Technol 2015; 26:025702. PMID: 26257473. PMCID: PMC4527659. DOI: 10.1088/0957-0233/26/2/025702.
Abstract
Image-based dietary assessment has recently received much attention in the obesity research community. In this assessment, foods in digital pictures are identified and their portion sizes (volumes) are estimated. Although manual processing is currently the most widely used method, image processing holds much promise, since it may eventually lead to automatic dietary assessment. In this paper we study the problem of segmenting food objects from images. This segmentation is difficult because of the variety of food types, shapes, and colors, the different decorating patterns on food containers, and occlusions of food and non-food objects. We propose a novel method based on a saliency-aware active contour model (ACM) for automatic food segmentation from images acquired by a wearable camera. An integrated saliency estimation approach based on food location priors and visual attention features is designed to produce a saliency map of possible food regions in the input image. Next, a geometric contour primitive is generated and fitted to the saliency map by means of multi-resolution optimization with respect to a set of affine and elastic transformation parameters. The food regions are then extracted after contour fitting. Our experiments using 60 food images showed that the proposed method achieved significantly higher accuracy in food segmentation than conventional segmentation methods.
Affiliation(s)
- Hsin-Chen Chen
- Department of Radiation Oncology, Washington University in Saint Louis, Saint Louis, MO, USA
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Wenyan Jia
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Xin Sun
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Zhaoxin Li
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Yuecheng Li
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- John D. Fernstrom
- Department of Psychiatry and Pharmacology, University of Pittsburgh, Pittsburgh, PA, USA
- Lora E. Burke
- Health and Community Systems, University of Pittsburgh, Pittsburgh, PA, USA
- Thomas Baranowski
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Mingui Sun
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Electrical Engineering, University of Pittsburgh, Pittsburgh, PA, USA
11
Abstract
In advancing discrete-based computational cancer models towards clinical applications, one faces the dilemma of how to deal with an ever-growing amount of biomedical data that ought to be incorporated eventually in one form or another. Model scalability becomes of paramount interest. In an effort to start addressing this critical issue, we present here a novel multi-scale and multi-resolution agent-based in silico glioma model. While 'multi-scale' refers to employing an epidermal growth factor receptor (EGFR)-driven molecular network to process cellular phenotypic decisions within the micro-macroscopic environment, 'multi-resolution' is achieved through algorithms that classify cells into either active or inactive spatial clusters, which determine the resolution at which they are simulated. The aim is to assign computational resources where and when they matter most for maintaining or improving the predictive power of the algorithm: to specific tumor areas and at particular times. Using a previously described 2D brain tumor model, we have developed four different computational methods for achieving the multi-resolution scheme, three of which are designed to dynamically train on the high-resolution simulation that serves as the control. To quantify the algorithms' performance, we rank them by weighing the distinct computational time savings of the simulation runs against each method's ability to accurately reproduce the high-resolution results of the control. Finally, to demonstrate the flexibility of the underlying concept, we show the added value of combining the two highest-ranked methods. The main finding of this work is that by pursuing a multi-resolution approach, one can substantially reduce the computation time of a discrete-based model while still maintaining comparably high predictive power. This hints at even greater computational savings in the more realistic 3D setting, and thus appears to outline a possible path towards the scalability needed for the all-important clinical translation.
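The computational saving described above comes from updating inactive clusters less often (or at lower resolution) than active ones. A toy counter of update operations makes the trade-off concrete; the refresh rule below is hypothetical and not one of the paper's four methods:

```python
def simulate(active, steps, coarse_every=4):
    # Count phenotype-update operations: active cells are updated every
    # step, inactive cells only every `coarse_every` steps (made-up rule).
    updates = 0
    for t in range(steps):
        for is_active in active:
            if is_active or t % coarse_every == 0:
                updates += 1
    return updates

full  = simulate([True] * 8, steps=8)                  # uniform high resolution
mixed = simulate([True] * 2 + [False] * 6, steps=8)    # 2 active, 6 inactive cells
```

With only a quarter of the cells active, the mixed run does well under half the work of the uniform run, which mirrors the paper's trade of compute time against predictive power in quiescent tumor regions.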
Affiliation(s)
- Thomas S. Deisboeck
- Corresponding author. Complex Biosystems Modeling Laboratory, Harvard-MIT (HST) Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital-East, 2301, Bldg. 149, 13th Street, Charlestown, MA 02129. Tel: 617-724-1845, Fax: 617-726-7422
12
Abstract
Tomographic imaging and computer simulations are increasingly yielding massive datasets, and interactive, exploratory visualizations have rapidly become indispensable tools for studying large volumetric imaging and simulation data. Our scalable isosurface visualization framework on commodity off-the-shelf clusters is an end-to-end parallel and progressive platform, from initial data access to the final display. Interactive browsing of extracted isosurfaces is made possible by using parallel isosurface extraction and rendering in conjunction with a new specialized piece of image compositing hardware called the Metabuffer. In this paper, we focus on back-end scalability by introducing a fully parallel and out-of-core isosurface extraction algorithm. It achieves scalability by using both parallel and out-of-core processing and parallel disks: it statically partitions the volume data across parallel disks with a balanced workload spectrum, and builds I/O-optimal external interval trees to minimize the number of I/O operations needed to load large data from disk. We also describe an isosurface compression scheme that is efficient for progressive extraction, transmission, and storage of isosurfaces.
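The role of the external interval tree here is a stabbing query: given an isovalue, return only the data blocks whose scalar range [min, max] contains it, so every other block stays on disk. A brute-force in-memory stand-in (hypothetical block names and ranges, not the paper's I/O-optimal structure) shows the query's contract:

```python
# Each block stores the (min, max) of its scalar field; the isosurface can
# only cross blocks whose range contains the isovalue (hypothetical data).
blocks = {
    "b0": (0.0, 0.4),
    "b1": (0.3, 0.9),
    "b2": (0.8, 1.0),
}

def blocks_to_load(isovalue):
    # Stabbing query: an I/O-optimal external interval tree answers this
    # in O(log_B n + k/B) disk accesses; a linear scan suffices as a sketch.
    return sorted(name for name, (lo, hi) in blocks.items() if lo <= isovalue <= hi)
```

For a typical dataset most blocks fail this test, which is what makes the extraction out-of-core: only the stabbed blocks are ever read from parallel disks.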
Affiliation(s)
- Xiaoyu Zhang
- Department of Computer Science, California State University San Marcos, San Marcos, CA 92096, United States
- Chandrajit Bajaj
- Department of Computer Science, University of Texas at Austin, Austin, TX 78702, United States
13
Abstract
Spatial normalization is frequently used to map data to a standard coordinate system by removing intersubject morphological differences, thereby allowing for group analysis to be carried out. The work presented in this paper is motivated by the need for an automated cortical surface normalization technique that will automatically identify homologous cortical landmarks and map them to the same coordinates on a standard manifold. The geometry of a cortical surface is analyzed using two shape measures that distinguish the sulcal and gyral regions in a multiscale framework. A multichannel optical flow warping procedure aligns these shape measures between a reference brain and a subject brain, creating the desired normalization. The partial differential equation that carries out the warping is implemented in a Euclidean framework in order to facilitate a multiresolution strategy, thereby permitting large deformations between the two surfaces. The technique is demonstrated by aligning 33 normal cortical surfaces and showing both improved structural alignment in manually labeled sulci and improved functional alignment in positron emission tomography data mapped to the surfaces. A quantitative comparison between our proposed surface-based spatial normalization method and a leading volumetric spatial normalization method is included to show that the surface-based spatial normalization performs better in matching homologous cortical anatomies.
Affiliation(s)
- Duygu Tosun
- Department of Electrical and Computer Engineering, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
14
Abstract
Parallel coordinates, re-orderable matrices, and dendrograms are widely used for visual exploration of multivariate data. This research proposes an approach to systematically integrate the methods in a complementary manner for supporting multi-resolution visual data analysis with an enhanced overview+detail exploratory strategy. The paper focuses on three topics: (1) dynamic control across resolutions at which data are explored; (2) coordination and color mapping among the views; and (3) enhanced features of each view designed for the overview+detail exploratory tasks. We contend that systematically coordinating the views through user-controlled resolutions within a highly interactive analysis environment will boost productivity for exploration tasks. We offer a case study analysis to demonstrate this potential. The case study is focused on a complex, geographically referenced dataset including public health, demographic and environmental components.
Affiliation(s)
- Jin Chen
- GeoVISTA Center and Department of Geography, Pennsylvania State University, 302 Walker Building, University Park, PA 16802
- Alan M. MacEachren
- GeoVISTA Center and Department of Geography, Pennsylvania State University, 302 Walker Building, University Park, PA 16802