1. Chen T, Xu L, Xu X, Zhu K. GestOnHMD: Enabling Gesture-based Interaction on Low-cost VR Head-Mounted Display. IEEE Trans Vis Comput Graph 2021;27:2597-2607. PMID: 33750694. DOI: 10.1109/tvcg.2021.3067689.
Abstract
Low-cost virtual-reality (VR) head-mounted displays (HMDs) that integrate smartphones have brought immersive VR to the masses and increased its ubiquity. However, these systems are often limited by their poor interactivity. In this paper, we present GestOnHMD, a gesture-based interaction technique and gesture-classification pipeline that leverages the stereo microphones in a commodity smartphone to detect tapping and scratching gestures on the front, left, and right surfaces of a mobile VR headset. Taking the Google Cardboard as the target headset, we first conducted a gesture-elicitation study that generated 150 user-defined gestures, 50 on each surface. We then selected 15, 9, and 9 gestures for the front, left, and right surfaces, respectively, based on user preferences and signal detectability. We constructed a data set containing the acoustic signals of 18 users performing these on-surface gestures and trained a deep-learning pipeline for gesture detection and recognition. Lastly, using a real-time demonstration of GestOnHMD, we conducted a series of online participatory-design sessions to collect user-defined gesture-referent mappings for applications that could potentially benefit from GestOnHMD.
2. Klatt S, Smeeton NJ. Immersive screens change attention width but not perception or decision-making performance in natural and basic tasks. Appl Ergon 2020;82:102961. PMID: 31614278. DOI: 10.1016/j.apergo.2019.102961.
Abstract
In recent decades, a number of studies have examined people's perceptual and attentional capabilities using flat-screen displays. Studies using curved displays have so far been neglected, despite their advantage of creating a more immersive and life-like experience. In two studies, we analysed possible differences in subjects' perceptual and attentional capabilities during a decision-making task whilst viewing life-size stimuli on large flat and curved immersive screens. In Study 1, participants performed an attention-demanding shape-discrimination task. In Study 2, participants performed a more naturalistic football-specific discrimination task. Both studies revealed no differences in perception and decision making between screen conditions, but showed that attention can be directed across greater visual angles on immersive screens than on flat screens, probably because curved screens distort the image less. These findings have implications for the use of flat screens in studies that examine perceptual and attentional capabilities in the visual periphery.
Affiliation(s)
- Stefanie Klatt
- German Sport University Cologne, Department of Cognitive and Team/Racket Sport Research, Cologne, Germany.
- Nicholas J Smeeton
- University of Brighton, Sport and Exercise Science and Medicine, Welkin Laboratories, Eastbourne, Brighton, United Kingdom.
3. Wolf D, Rietzler M, Hnatek L, Rukzio E. Face/On: Multi-Modal Haptic Feedback for Head-Mounted Displays in Virtual Reality. IEEE Trans Vis Comput Graph 2019;25:3169-3177. PMID: 31403417. DOI: 10.1109/tvcg.2019.2932215.
Abstract
While the real world provides humans with a huge variety of sensory stimuli, virtual worlds communicate their properties mostly through visual and auditory feedback, owing to the design of current head-mounted displays (HMDs). Since HMDs offer sufficient contact area to integrate additional actuators, prior work has used a small number of haptic actuators to convey corresponding information about the virtual world. The Face/On prototype introduces complex feedback patterns that combine a high number of vibration motors with additional thermal sources to convey multi-modal, spatial information. A pre-study determining the boundaries of the feedback intensities, together with a user study showing a significant increase in presence and enjoyment, validates Face/On's approach.
4. Huang J, Lemkul JA, Eastman PK, MacKerell AD. Molecular dynamics simulations using the Drude polarizable force field on GPUs with OpenMM: Implementation, validation, and benchmarks. J Comput Chem 2018;39:1682-1689. PMID: 29727037. PMCID: PMC6031474. DOI: 10.1002/jcc.25339.
Abstract
Presented is the implementation of the Drude force field in the open-source OpenMM simulation package, allowing for access to graphics processing unit (GPU) hardware. In the Drude model, electronic degrees of freedom are represented by negatively charged particles attached to their parent atoms via harmonic springs, such that extra computational overhead comes from these additional particles and from virtual sites representing lone pairs on electronegative atoms, as well as the associated thermostat and integration algorithms. This leads to an approximately fourfold increase in computational demand over additive force fields. However, by making the Drude model accessible to consumer-grade desktop GPU hardware, it will be possible to perform simulations of one microsecond or more in less than a month, indicating that the barrier to employing polarizable models has largely been removed and that polarizable simulations with the classical Drude model are now readily accessible and practical.
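As a back-of-the-envelope illustration of the Drude model described in this abstract (the charge, spring constant, and field below are made-up values in arbitrary consistent units, not parameters from the force field), balancing the spring force against the electrostatic force gives the induced dipole and the familiar polarizability alpha = q^2/k:

```python
# Drude oscillator: a charge q_d on a harmonic spring (constant k)
# attached to its parent atom models electronic polarization.
# In a uniform field E, force balance q_d * E = k * d gives the spring
# stretch d, an induced dipole mu = q_d * d, and a field-independent
# polarizability alpha = q_d**2 / k.

def drude_response(q_d, k, e_field):
    """Return (displacement, induced dipole, polarizability)."""
    d = q_d * e_field / k      # spring stretch at mechanical equilibrium
    mu = q_d * d               # induced dipole moment
    alpha = q_d ** 2 / k       # polarizability implied by the model
    return d, mu, alpha

# Illustrative, made-up values in arbitrary consistent units:
d, mu, alpha = drude_response(q_d=-1.0, k=500.0, e_field=10.0)
```

Note that mu equals alpha times the field, which is exactly the linear-response behavior the Drude construction is designed to reproduce.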
Affiliation(s)
- Jing Huang
- Department of Pharmaceutical Sciences, University of Maryland, Baltimore, Baltimore, MD 21201
- Justin A. Lemkul
- Department of Pharmaceutical Sciences, University of Maryland, Baltimore, Baltimore, MD 21201
- Peter K. Eastman
- Department of Bioengineering, Stanford University, Stanford, CA 94035
- Alexander D. MacKerell
- Department of Pharmaceutical Sciences, University of Maryland, Baltimore, Baltimore, MD 21201
5. Ling C, Hamada T, Gao J, Zhao G, Sun D, Shi W. MrBayes tgMC3++: A High Performance and Resource-Efficient GPU-Oriented Phylogenetic Analysis Method. IEEE/ACM Trans Comput Biol Bioinform 2016;13:845-854. PMID: 26529779. DOI: 10.1109/tcbb.2015.2495202.
Abstract
MrBayes is a widespread phylogenetic inference tool harnessing empirical evolutionary models and Bayesian statistics. However, its likelihood estimation is computationally very expensive, resulting in undesirably long execution times. Although a number of multi-threaded optimizations have been proposed to speed up MrBayes, bottlenecks remain that severely limit the GPU thread-level parallelism of likelihood estimations. This study proposes a high-performance and resource-efficient method for GPU-oriented parallelization of likelihood estimations. Instead of relying on empirical programming, the proposed decomposition storage model implements high-performance data transfers implicitly. In terms of performance, a speedup factor of up to 178 can be achieved on the analysis of simulated datasets using four Tesla K40 cards. In comparison to the other publicly available GPU-oriented versions of MrBayes, the tgMC3++ method proposed herein outperforms the tgMC3 (v1.0), nMC3 (v2.1.1) and oMC3 (v1.00) methods by speedup factors of up to 1.6, 1.9 and 2.9, respectively. Moreover, tgMC3++ supports more evolutionary models and gamma categories than previous GPU-oriented methods.
6. Santhanam AP, Neylon J, Eldredge J, Teran J, Dutson E, Benharash P. GPU-Based Parallelized Solver for Large Scale Vascular Blood Flow Modeling and Simulations. Stud Health Technol Inform 2016;220:345-351. PMID: 27046603.
Abstract
Cardiovascular blood flow simulations are essential to understanding blood flow behavior under normal and disease conditions. To date, such simulations have only been performed at a macro-scale level due to computational limitations. In this paper, we present a GPU-based large-scale solver that enables modeling the flow even in the smallest arteries. A mechanical equivalent of the circuit-based flow modeling system is first developed to employ the GPU computing framework. Numerical studies were performed using a set of 10 million connected vascular elements. Run-time flow analyses were performed to simulate vascular blockages as well as arterial cut-off. Our results show that we can achieve ~100 FPS using a GTX 680m and ~40 FPS using a Tegra K1 computing platform.
Affiliation(s)
- Anand P Santhanam
- Department of Radiation Oncology, University of California, Los Angeles
- John Neylon
- Department of Radiation Oncology, University of California, Los Angeles
- Jeff Eldredge
- Department of Mechanical Engineering, University of California, Los Angeles
- Joseph Teran
- Department of Mathematics, University of California, Los Angeles
- Erik Dutson
- Cardiothoracic Surgery, University of California, Los Angeles
- Peyman Benharash
- Computer Aided Surgical and Interventional Technologies, University of California, Los Angeles
7. Eck U, Pankratz F, Sandor C, Klinker G, Laga H. Precise Haptic Device Co-Location for Visuo-Haptic Augmented Reality. IEEE Trans Vis Comput Graph 2015;21:1427-1441. PMID: 26394430. DOI: 10.1109/tvcg.2015.2480087.
Abstract
Visuo-haptic augmented reality systems enable users to see and touch digital information that is embedded in the real world. PHANToM haptic devices are often employed to provide haptic feedback. Precise co-location of computer-generated graphics and the haptic stylus is necessary to provide a realistic user experience. Previous work has focused on calibration procedures that compensate for the non-linear position error caused by inaccuracies in the joint angle sensors. In this article we present a more complete procedure that additionally compensates for errors in the gimbal sensors and improves position calibration. The proposed procedure further includes software-based temporal alignment of sensor data and a method for the estimation of a reference for position calibration, resulting in increased robustness against haptic device initialization and external tracker noise. We designed our procedure to require minimal user input to maximize usability. We conducted an extensive evaluation with two different PHANToMs, two different optical trackers, and a mechanical tracker. Compared to state-of-the-art calibration procedures, our approach significantly improves the co-location of the haptic stylus. This results in higher-fidelity visual and haptic augmentations, which are crucial for fine-motor tasks in areas such as medical training simulators, assembly planning tools, or rapid prototyping applications.
8. Zhou M, Zhang Q, Xu K, Tian Z, Wang Y, He W. PRIMAL: Page Rank-Based Indoor Mapping and Localization Using Gene-Sequenced Unlabeled WLAN Received Signal Strength. Sensors (Basel) 2015;15:24791-24817. PMID: 26404274. PMCID: PMC4634416. DOI: 10.3390/s151024791.
Abstract
Due to the wide deployment of wireless local area networks (WLAN), received signal strength (RSS)-based indoor WLAN localization has attracted considerable attention in both academia and industry. In this paper, we propose page-rank-based indoor mapping and localization (PRIMAL), which uses gene-sequenced, unlabeled WLAN RSS for simultaneous localization and mapping (SLAM). Specifically, based on the observed motion patterns of people in the target environment, we first use Allen logic to construct a mobility graph that characterizes the connectivity among different areas of interest. Second, the concept of gene sequencing is used to assemble the sporadically collected RSS sequences into a signal graph based on the transition relations among different RSS sequences. Third, we apply a graph-drawing approach to present both the mobility graph and the signal graph in a more readable manner. Finally, the page rank (PR) algorithm is used to construct the mapping from the signal graph onto the mobility graph. The experimental results show that the proposed approach achieves satisfactory localization accuracy while avoiding the intensive time and labor cost involved in conventional location-fingerprinting-based indoor WLAN localization.
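For readers unfamiliar with the PR step named in this abstract, the sketch below is a minimal textbook PageRank computed by power iteration on a made-up three-node graph; it is only the ranking primitive PRIMAL builds on, not the paper's signal-to-mobility mapping.

```python
# Minimal PageRank by power iteration on a small directed graph.
# graph: {node: [out-neighbours]}. The example graph is made up.

def pagerank(graph, damping=0.85, iters=100):
    """Return {node: PageRank score}; scores sum to 1."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

# A links to B and C, B links to C, C links back to A.
r = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

Here C collects rank from both A and B, so it ends up ranked highest, ahead of A and then B.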
Affiliation(s)
- Mu Zhou
- Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Qiao Zhang
- Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Zengshan Tian
- Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Yanmeng Wang
- Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
- Wei He
- Chongqing Key Lab of Mobile Communications Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China.
9. Nguyen TD, Schmidt B, Zheng Z, Kwoh CK. Efficient and Accurate OTU Clustering with GPU-Based Sequence Alignment and Dynamic Dendrogram Cutting. IEEE/ACM Trans Comput Biol Bioinform 2015;12:1060-1073. PMID: 26451819. DOI: 10.1109/tcbb.2015.2407574.
Abstract
De novo clustering is a popular technique to perform taxonomic profiling of a microbial community by grouping 16S rRNA amplicon reads into operational taxonomic units (OTUs). In this work, we introduce a new dendrogram-based OTU clustering pipeline called CRiSPy. The key idea used in CRiSPy to improve clustering accuracy is the application of an anomaly detection technique to obtain a dynamic distance cutoff instead of using the de facto value of 97 percent sequence similarity as in most existing OTU clustering pipelines. This technique works by detecting an abrupt change in the merging heights of a dendrogram. To produce the output dendrograms, CRiSPy employs the OTU hierarchical clustering approach that is computed on a genetic distance matrix derived from an all-against-all read comparison by pairwise sequence alignment. However, most existing dendrogram-based tools have difficulty processing datasets larger than 10,000 unique reads due to high computational complexity. We address this difficulty by developing two efficient algorithms for CRiSPy: a compute-efficient GPU-accelerated parallel algorithm for pairwise distance matrix computation and a memory-efficient hierarchical clustering algorithm. Our experiments on various datasets with distinct attributes show that CRiSPy is able to produce more accurate OTU groupings than most OTU clustering applications.
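The dynamic-cutoff idea can be sketched in a few lines: rather than cutting the dendrogram at a fixed 97 percent similarity, place the cut where consecutive merge heights jump abruptly. The simplification below takes the largest gap between sorted merge heights (the numbers are made up, and CRiSPy's actual anomaly detection is more elaborate than this):

```python
# Sketch of dynamic dendrogram cutting: pick a distance cutoff inside
# the largest gap between consecutive sorted merge heights, i.e. where
# the merging heights change abruptly.

def dynamic_cutoff(merge_heights):
    """Return a cutoff at the midpoint of the widest height gap."""
    hs = sorted(merge_heights)
    # (gap width, index of lower endpoint) for each consecutive pair
    gaps = [(hs[i + 1] - hs[i], i) for i in range(len(hs) - 1)]
    _, i = max(gaps)
    return (hs[i] + hs[i + 1]) / 2.0

# Tight merges at small distances, then a jump: the cut lands in it.
cut = dynamic_cutoff([0.01, 0.02, 0.03, 0.04, 0.25, 0.27])
```

With these made-up heights the widest gap is between 0.04 and 0.25, so the cutoff falls at 0.145 instead of a fixed 0.03 (i.e. 97 percent similarity).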
10. Chacón A, Marco-Sola S, Espinosa A, Ribeca P, Moure JC. Boosting the FM-Index on the GPU: Effective Techniques to Mitigate Random Memory Access. IEEE/ACM Trans Comput Biol Bioinform 2015;12:1048-1059. PMID: 26451818. DOI: 10.1109/tcbb.2014.2377716.
Abstract
The recent advent of high-throughput sequencing machines producing large amounts of short reads has boosted the interest in efficient string searching techniques. As of today, many mainstream sequence alignment software tools rely on a special data structure, called the FM-index, which allows for fast exact searches in large genomic references. However, such searches translate into a pseudo-random memory access pattern, thus making memory access the limiting factor of all computation-efficient implementations, both on CPUs and GPUs. Here, we show that several strategies can be put in place to remove the memory bottleneck on the GPU: more compact indexes can be implemented by having more threads work cooperatively on larger memory blocks, and a k-step FM-index can be used to further reduce the number of memory accesses. The combination of those and other optimisations yields an implementation that is able to process about two Gbases of queries per second on our test platform, being about 8 × faster than a comparable multi-core CPU version, and about 3 × to 5 × faster than the FM-index implementation on the GPU provided by the recently announced Nvidia NVBIO bioinformatics library.
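To make the data structure concrete, here is a minimal pure-Python FM-index with backward search, the query this paper accelerates. This is the textbook CPU algorithm, not the authors' GPU implementation, and the example text and patterns are made up.

```python
# Tiny FM-index: build the Burrows-Wheeler transform, then count
# pattern occurrences with backward search (O(1) rank queries are
# faked here with L[:i].count, which is fine for a toy example).

def bwt(text):
    """Burrows-Wheeler transform of text (must end with sentinel '$')."""
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(row[-1] for row in rotations)

def fm_count(text, pattern):
    """Count occurrences of pattern in text via backward search."""
    L = bwt(text)
    firsts = sorted(L)
    # C[c]: number of characters in the text strictly smaller than c
    C = {c: firsts.index(c) for c in set(L)}

    def occ(c, i):  # occurrences of c in L[:i]
        return L[:i].count(c)

    lo, hi = 0, len(L)
    for c in reversed(pattern):       # extend the match right-to-left
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

n = fm_count("GATTACA$", "TA")
```

Each character of the pattern costs two rank queries (the `occ` calls), and it is exactly these queries whose pseudo-random memory accesses the paper's compact, k-step layouts are designed to tame.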
11. González-Domínguez J, Wienbrandt L, Kässens JC, Ellinghaus D, Schimmler M, Schmidt B. Parallelizing Epistasis Detection in GWAS on FPGA and GPU-Accelerated Computing Systems. IEEE/ACM Trans Comput Biol Bioinform 2015;12:982-994. PMID: 26451813. DOI: 10.1109/tcbb.2015.2389958.
Abstract
High-throughput genotyping technologies (such as SNP-arrays) allow the rapid collection of up to a few million genetic markers of an individual. Detecting epistasis (based on 2-SNP interactions) in Genome-Wide Association Studies is an important but time consuming operation since statistical computations have to be performed for each pair of measured markers. Computational methods to detect epistasis therefore suffer from prohibitively long runtimes; e.g., processing a moderately-sized dataset consisting of about 500,000 SNPs and 5,000 samples requires several days using state-of-the-art tools on a standard 3 GHz CPU. In this paper, we demonstrate how this task can be accelerated using a combination of fine-grained and coarse-grained parallelism on two different computing systems. The first architecture is based on reconfigurable hardware (FPGAs) while the second architecture uses multiple GPUs connected to the same host. We show that both systems can achieve speedups of around four orders-of-magnitude compared to the sequential implementation. This significantly reduces the runtimes for detecting epistasis to only a few minutes for moderately-sized datasets and to a few hours for large-scale datasets.
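To illustrate why exhaustive 2-SNP scans are so costly, the sketch below builds the per-pair genotype/phenotype contingency tables on which pairwise statistics are computed: one 2 x 3 x 3 table for every one of the O(m^2) SNP pairs. The SNP names, 0/1/2 genotype encoding, and data are made-up examples, not the paper's method or data.

```python
# For each SNP pair, tally samples into a table indexed by
# [phenotype 0/1][genotype of SNP a 0-2][genotype of SNP b 0-2].
# With ~500,000 SNPs there are ~1.25e11 such pairs, which is the
# workload the paper parallelizes on FPGAs and GPUs.
from itertools import combinations

def pair_tables(genotypes, phenotype):
    """genotypes: {snp: [0/1/2 per sample]}; phenotype: [0/1 per sample].
    Returns {(snp_a, snp_b): 2x3x3 count table}."""
    tables = {}
    for a, b in combinations(sorted(genotypes), 2):
        t = [[[0] * 3 for _ in range(3)] for _ in range(2)]
        for ga, gb, p in zip(genotypes[a], genotypes[b], phenotype):
            t[p][ga][gb] += 1
        tables[(a, b)] = t
    return tables

# Four samples, three made-up SNPs:
tabs = pair_tables(
    {"rs1": [0, 1, 2, 1], "rs2": [2, 2, 0, 1], "rs3": [0, 0, 1, 1]},
    [0, 1, 1, 0],
)
```

A test statistic (e.g. a chi-square or interaction measure) would then be computed from each table; the counting step shown here dominates the runtime.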
12. Xie J, Zhou Z, Ma J, Xiang C, Nie Q, Zhang W. Graphics processing unit-based alignment of protein interaction networks. IET Syst Biol 2015;9:120-127. PMID: 26243827. PMCID: PMC8687428. DOI: 10.1049/iet-syb.2014.0052.
Abstract
Network alignment is an important bridge to understanding human protein-protein interactions (PPIs) and functions through model organisms. However, the underlying subgraph isomorphism problem makes aligning protein interaction networks (PINs) complex and time-consuming. Parallel computing technology is an effective solution to the challenge that aligning large-scale networks poses for sequential computing. In this study, the typical Hungarian-Greedy Algorithm (HGA) is used as an example for PIN alignment. The authors propose an HGA with 2-nearest neighbours (HGA-2N) and implement its graphics processing unit (GPU) acceleration. Numerical experiments demonstrate that HGA-2N can find alignments that are close to those found by HGA while dramatically reducing computing time. The GPU implementation of HGA-2N optimises the parallel pattern, computing mode and storage mode, and it improves the CPU-to-GPU computing time ratio compared with HGA when large-scale networks are considered. By using HGA-2N on GPUs, conserved PPIs can be observed and potential PPIs can be predicted. Among the predictions based on 25 common Gene Ontology terms, 42.8% can be found in the Human Protein Reference Database. Furthermore, a new method of reconstructing phylogenetic trees is introduced, which shows the same relationships among five herpes viruses as those obtained using other methods.
Affiliation(s)
- Jiang Xie
- School of Computer Engineering and Science, Shanghai University, Shanghai, People's Republic of China.
- Zhonghua Zhou
- School of Computer Engineering and Science, Shanghai University, Shanghai, People's Republic of China
- Jin Ma
- School of Computer Engineering and Science, Shanghai University, Shanghai, People's Republic of China
- Chaojuan Xiang
- School of Computer Engineering and Science, Shanghai University, Shanghai, People's Republic of China
- Qing Nie
- Department of Mathematics, Center for Mathematical and Computational Biology, University of California at Irvine, California, USA
- Wu Zhang
- School of Computer Engineering and Science, Shanghai University, Shanghai, People's Republic of China
13.
Abstract
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that graphical coding can greatly simplify the design work. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases the portability of the core. Direct matrix transposition, usually required for execution of a 2D FFT, is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames per second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
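The 2D FFT at the heart of such a module is separable: applying a 1D transform to every row and then to every column of the result yields the full 2D transform, which is why the design needs only a 1D FFT core plus an address-generation unit in place of an explicit matrix transposition. The pure-Python sketch below (plain DFTs, not the FPGA pipeline) verifies the row-column decomposition against a direct 2D DFT:

```python
# Row-column 2D DFT versus the direct double-sum definition.
import cmath

def dft(v):
    """Naive 1D DFT of a sequence v."""
    n = len(v)
    return [sum(v[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def dft2_rowcol(m):
    """2D DFT as 1D DFTs over rows, then over columns."""
    rows = [dft(r) for r in m]                # pass 1: transform rows
    cols = zip(*rows)                         # read result column-wise
    out_cols = [dft(list(c)) for c in cols]   # pass 2: transform columns
    return [list(r) for r in zip(*out_cols)]  # back to row-major order

def dft2_direct(m):
    """2D DFT straight from the definition, for comparison."""
    n, p = len(m), len(m[0])
    return [[sum(m[a][b] *
                 cmath.exp(-2j * cmath.pi * (u * a / n + v * b / p))
                 for a in range(n) for b in range(p))
             for v in range(p)] for u in range(n)]

m = [[1.0, 2.0], [3.0, 4.0]]
F = dft2_rowcol(m)
```

In hardware, the `zip(*rows)` step is exactly what the address-generation unit replaces: the column elements are fetched by computed addresses instead of physically transposing the matrix in block RAM.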
Affiliation(s)
- Limin Li
- Center for Basic MR Research, NorthShore University HealthSystem Research Institute, Evanston, IL, USA.
- Alice M Wyrwicz
- Center for Basic MR Research, NorthShore University HealthSystem Research Institute, Evanston, IL, USA; Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
14. Han M, Kim K, Jang SJ, Cho HS, Bouma BE, Oh WY, Ryu S. GPU-accelerated framework for intracoronary optical coherence tomography imaging at the push of a button. PLoS One 2015;10:e0124192. PMID: 25880375. PMCID: PMC4400174. DOI: 10.1371/journal.pone.0124192.
Abstract
Frequency domain optical coherence tomography (FD-OCT) has become an important clinical tool for intracoronary imaging to diagnose and monitor coronary artery disease, one of the leading causes of death. To support more accurate diagnosis and monitoring of the disease, many researchers have recently worked on visualizing various coronary microscopic features, including stent struts, by constructing three-dimensional (3D) volumetric renderings from series of cross-sectional intracoronary FD-OCT images. In this paper, we present the first, to our knowledge, "push-of-a-button" graphics processing unit (GPU)-accelerated framework for intracoronary OCT imaging. Our framework visualizes 3D microstructures of the vessel wall with stent struts from raw binary OCT data acquired by the system digitizer, as one seamless process. The framework delivers state-of-the-art performance: from raw OCT data, it takes 4.7 seconds to provide 3D visualization of a 5-cm-long coronary artery (of size 1600 samples x 1024 A-lines x 260 frames), with stent struts and automatic detection of malapposition, at the single push of a button.
Affiliation(s)
- Myounghee Han
- Department of Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Kyunghun Kim
- Department of Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sun-Joo Jang
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Han Saem Cho
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Brett E. Bouma
- Harvard Medical School and Massachusetts General Hospital, Wellman Center for Photomedicine, Boston, United States of America
- Wang-Yuhl Oh
- Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sukyoung Ryu
- Department of Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
15.
Abstract
Providing patients and clinicians with a self-contained PACS viewer on CD is a common and necessary way to share vital imaging data. However, to be useful, this tool should be reliable, robust, and convenient. Numerous PACS viewer options are available, often without empirical data to guide choosing one for routine use. To assist in making a standardized choice for our institution, we benchmarked four common viewers on four different workstations reflecting the variety of environments used by non-radiologist clinicians who would receive a CD. Four CD-based DICOM viewers, from eFilm, Philips, Pacsgear Gearview, and iSite, were examined on two radiology PACS workstations, a standard desktop computer, and a laptop, using a test case consisting of a multi-series CTA with 13 series and 3,035 total images. Multiple objective measures, subjective measures, and key features were evaluated, including program time to load, image time to load, cine/movie mode, ability to adequately window and level, pan and zoom functionality, basic measurement tools, and perceived lag when scrolling through a multi-image series. Substantial differences in speed of operation and behavior across systems were documented, which could add several minutes to the time required to open and view a patient's imaging data. The eFilm and iSite viewers operated consistently and reliably across all tested computer environments. The iSite viewer, which had among the quickest load times, consistently low subjective scroll lag during series viewing, and the added benefit of allowing partial viewing while images load in the background, was found to provide the best overall user experience. Because of these significant differences, we have recommended that our institution standardize all patient imaging CD creation on the iSite viewer.
Affiliation(s)
- Richard Edward Hosch
- University of Mississippi Medical Center, 2500 North State Street, Jackson, MS, 39216, USA
16. Itoh Y, Klinker G. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays. IEEE Trans Vis Comput Graph 2015;21:471-480. PMID: 26357097. DOI: 10.1109/tvcg.2015.2391859.
Abstract
A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye- and HMD-related factors that are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors: the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen, and each such element introduces different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the accuracy of interaction-free calibration.
Affiliation(s)
- Yuta Itoh, Department of Informatics, Technical University of Munich
- Gudrun Klinker, Department of Informatics, Technical University of Munich
17
Moser K, Itoh Y, Oshima K, Swan JE, Klinker G, Sandor C. Subjective Evaluation of a Semi-Automatic Optical See-Through Head-Mounted Display Calibration Technique. IEEE Trans Vis Comput Graph 2015; 21:491-500. [PMID: 26357099] [DOI: 10.1109/tvcg.2015.2391856]
Abstract
With the growing availability of optical see-through (OST) head-mounted displays (HMDs), there is a pressing need for robust, uncomplicated, and automatic calibration methods suited for non-expert users. This work presents the results of a user study which both objectively and subjectively examines registration accuracy produced by three OST HMD calibration methods: (1) SPAAM, (2) Degraded SPAAM, and (3) Recycled INDICA, a recently developed semi-automatic calibration method. Accuracy metrics used for evaluation include subject-provided quality values and error between perceived and absolute registration coordinates. Our results show that all three calibration methods produce very accurate registration in the horizontal direction but cause subjects to perceive the distance of virtual objects as closer than intended. Surprisingly, the semi-automatic calibration method produced more accurate registration vertically and in perceived object distance overall. User-assessed quality values were also the highest for Recycled INDICA, particularly when objects were shown at a distance. The results of this study confirm that Recycled INDICA is capable of producing equal or superior on-screen registration compared to common OST HMD calibration methods. We also identify a potential hazard in using reprojection error as a quantitative analysis technique to predict registration accuracy. We conclude by discussing the further need to examine INDICA calibration in binocular HMD systems, and the present possibility of creating a closed-loop continuous calibration method for OST Augmented Reality.
Affiliation(s)
- Kenneth Moser, Department of Computer Science and Engineering, Mississippi State University
- Yuta Itoh, Department of Informatics, Technical University of Munich
- Kohei Oshima, Department of Information Science, Interactive Media Design Lab
- J Edward Swan, Department of Computer Science, Mississippi State University
- Gudrun Klinker, Department of Informatics, Technical University of Munich
18
Abstract
Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in place and store only the noisy data, the denoised image, and the problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Each algorithm uses the majorize-minimize framework to solve the 1D pixel-update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in both iteration count and run time.
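The independent, in-place pixel-update idea is easy to sketch serially. The toy NumPy version below is our illustration, not the paper's code: it applies closed-form majorize-minimize updates for a TV-like roughness penalty on a checkerboard (red-black) schedule, so every update within a half-sweep is independent of the others, which is the property a GPU mapping needs. The weight formula, periodic boundaries, and the scaling absorbed into `lam` are simplifying assumptions.

```python
import numpy as np

def mm_denoise(y, lam=1.0, iters=20, eps=1e-3):
    """Toy edge-preserving denoiser: checkerboard sweeps of closed-form
    majorize-minimize pixel updates for a penalty ~ sum |x_i - x_j|."""
    x = y.copy()
    H, W = y.shape
    ii, jj = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    for _ in range(iters):
        for parity in (0, 1):
            mask = ((ii + jj) % 2) == parity
            # 4-neighborhood; np.roll gives periodic boundaries for brevity
            nbrs = [np.roll(x, s, a) for s, a in ((1, 0), (-1, 0), (1, 1), (-1, 1))]
            # MM weights w_j = 1/(|x_i - x_j| + eps) majorize the absolute value
            ws = [1.0 / (np.abs(x - n) + eps) for n in nbrs]
            num = y + lam * sum(w * n for w, n in zip(ws, nbrs))
            den = 1.0 + lam * sum(ws)
            x[mask] = (num / den)[mask]   # all masked pixels update independently
    return x
```

On the GPU, each masked pixel of a half-sweep would be one thread; here the vectorized assignment plays that role.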
19
Hui X, Ye T, Zheng S, Zhou J, Chi H, Jin X, Zhang X. Space-frequency analysis with parallel computing in a phase-sensitive optical time-domain reflectometer distributed sensor. Appl Opt 2014; 53:6586-6590. [PMID: 25322248] [DOI: 10.1364/ao.53.006586]
Abstract
For a phase-sensitive optical time-domain reflectometer (ϕ-OTDR) distributed sensor system, space-frequency analysis can reduce false alarms by analyzing the frequency distribution, compared with the traditional difference-value method. We propose a graphics processing unit (GPU)-based parallel computing method to perform multichannel fast Fourier transforms (FFTs) and realize real-time space-frequency analysis. The experimental results show that the time taken by the multichannel FFT decreased considerably with this GPU parallel computing. Real-time analysis can be sustained with a sensing fiber up to 16 km long and an entry-level GPU. Meanwhile, the GPU reduces the computing load on the central processing unit from 70% to less than 20%. We carried out an experiment on two-point space-frequency analysis, and the results clearly and simultaneously show the vibration point locations and frequency components. The sensor system outputs real-time space-frequency spectra continuously, with a spatial resolution of 16.3 m and a frequency resolution of 2.25 Hz.
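The space-frequency analysis itself reduces to one FFT per fiber position. The serial NumPy sketch below (our stand-in for the paper's GPU multichannel FFT, with invented sampling parameters) transforms each spatial channel along the time axis, yielding a frequency spectrum per position so that a vibration appears at a specific (position, frequency) cell.

```python
import numpy as np

def space_frequency_map(traces, fs):
    """traces: (n_positions, n_samples), one time series per fiber position.
    Returns (freqs, per-position magnitude spectra)."""
    n = traces.shape[1]
    spec = np.abs(np.fft.rfft(traces, axis=1)) / n   # one FFT per channel
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Synthetic example: a 50 Hz vibration at position index 3 only
fs, n = 1000.0, 1024
t = np.arange(n) / fs
traces = 0.01 * np.random.default_rng(1).standard_normal((8, n))
traces[3] += np.sin(2 * np.pi * 50 * t)
freqs, spec = space_frequency_map(traces, fs)
peak_ch = spec[:, np.argmin(np.abs(freqs - 50))].argmax()
```

In the GPU version, the per-channel FFTs are exactly the batch that libraries such as cuFFT execute in parallel.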
20
Yi X, Wang X, Chen W, Wan W, Zhao H, Gao F. Full domain-decomposition scheme for diffuse optical tomography of large-sized tissues with a combined CPU and GPU parallelization. Appl Opt 2014; 53:2754-2765. [PMID: 24921857] [DOI: 10.1364/ao.53.002754]
Abstract
The common approach to diffuse optical tomography is to solve a nonlinear and ill-posed inverse problem using a linearized iteration process that involves repeated use of the forward and inverse solvers on an appropriately discretized domain of interest. This scheme normally imposes severe computation and storage burdens when applied to large-sized tissues, such as in breast tumor diagnosis and brain functional imaging, and prevents the use of matrix-fashioned linear inversions for improved image quality. To cope with these difficulties, we propose in this paper a parallelized full domain-decomposition scheme, which divides the whole domain into several overlapping subdomains and solves the corresponding subinversions independently within the framework of Schwarz-type iterations, with the support of a combined multicore CPU and multithread graphics processing unit (GPU) parallelization strategy. The numerical and phantom experiments both demonstrate that the proposed method can effectively reduce the computation time and memory occupation for the large-sized problem and improve the quantitative performance of the reconstruction.
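To make the Schwarz-type iteration concrete, here is a minimal serial sketch (ours, not the authors' solver) of alternating Schwarz on a 1D Poisson model problem: the domain is split into two overlapping pieces, each sub-solve takes its interface boundary value from the other subdomain's latest solution, and the sweeps repeat until the pieces agree. The grid size and overlap are arbitrary choices.

```python
import numpy as np

def solve_dirichlet(f, left, right, h):
    """Direct solve of -u'' = f on an interior grid with Dirichlet ends."""
    n = len(f)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = f.copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

def schwarz_poisson(f, h, overlap=8, sweeps=200):
    """Alternating Schwarz on two overlapping halves of (0,1), u(0)=u(1)=0."""
    n = len(f)
    m = n // 2
    u = np.zeros(n)
    lo_hi = m + overlap     # right interface of the left subdomain
    hi_lo = m - overlap     # left interface of the right subdomain
    for _ in range(sweeps):
        # left subdomain: interface value taken from current right solution
        u[:lo_hi] = solve_dirichlet(f[:lo_hi], 0.0, u[lo_hi], h)
        # right subdomain: interface value taken from updated left solution
        u[hi_lo:] = solve_dirichlet(f[hi_lo:], u[hi_lo - 1], 0.0, h)
    return u
```

In the paper's setting the subinversions are solved independently (additive Schwarz) across CPU cores and GPU threads; the alternating form above is the easiest to verify.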
21
Abstract
Latency of interactive computer systems is a product of the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user's sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system, for study or optimisation, is not straightforward. A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3 ms). We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high-speed video (to within 1 ms). We use the mechanical simulator to measure the accuracy of Steed's and Di Luca's measurement techniques, proposing improvements where they may be made. We use the methods to measure the latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable; however, Steed's method is both accurate and easy to use without requiring specialised hardware.
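The core of a frame-counting measurement can be sketched in a few lines (a toy version, not the authors' tool): threshold a per-frame motion signal for both the physical object and its on-screen counterpart in the high-speed footage, find the first frame where each crosses the threshold, and convert the frame difference to time. The signals and threshold below are invented.

```python
import numpy as np

def latency_ms(physical, displayed, fps, thresh=0.5):
    """Frame-counting latency estimate from two per-frame motion signals
    (e.g. mean pixel change in two regions of a high-speed video)."""
    p_on = int(np.argmax(np.asarray(physical) > thresh))   # first frame above threshold
    d_on = int(np.argmax(np.asarray(displayed) > thresh))
    return (d_on - p_on) * 1000.0 / fps

# Toy 1000 fps capture: the display starts moving 42 frames after the tracker
fps = 1000.0
phys = np.zeros(200); phys[50:] = 1.0
disp = np.zeros(200); disp[92:] = 1.0
```

The achievable resolution is one camera frame, which is why the technique needs high frame-rate video.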
22
Abstract
Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms is also presented.
Affiliation(s)
- Xun Jia, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Peter Ziegenhein, German Cancer Research Center (DKFZ), Department of Medical Physics in Radiation Oncology, Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Steve B. Jiang, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
23
Granados A, Hald N, Di Marco A, Ahmed S, Low-Beer N, Higham J, Kneebone R, Bello F. Real-time visualisation and analysis of internal examinations--seeing the unseen. Med Image Comput Comput Assist Interv 2014; 17:617-625. [PMID: 25333170] [DOI: 10.1007/978-3-319-10404-1_77]
Abstract
Internal examinations such as Digital Rectal Examination (DRE) and bimanual Vaginal Examination (BVE) are routinely performed for early diagnosis of cancer and other diseases. Although they are recognised as core skills to be taught on a medical curriculum, they are difficult to learn and teach due to their unsighted nature. We present a framework that combines a visualisation and analysis tool with position and pressure sensors to enable the study of internal examinations and provision of real-time feedback. This approach is novel as it allows for real-time continuous trajectory and pressure data to be obtained for the complete examination, which may be used for teaching and assessment. Experiments were conducted performing DRE and BVE on benchtop models, and BVE on Gynaecological Teaching Assistants (GTA). The results obtained suggest that the proposed methodology may provide an insight into what constitutes an adequate DRE or BVE, provide real-time feedback tools for learning and assessment, and inform haptics-based simulator design.
24
Idzenga T, Gaburov E, Vermin W, Menssen J, de Korte C. Fast 2-D ultrasound strain imaging: the benefits of using a GPU. IEEE Trans Ultrason Ferroelectr Freq Control 2014; 61:207-213. [PMID: 24402909] [DOI: 10.1109/tuffc.2014.6689790]
Abstract
Deformation of tissue can be accurately estimated from radio-frequency ultrasound data using a 2-dimensional normalized cross-correlation (NCC)-based algorithm. This procedure, however, is computationally very time-consuming. A major time reduction can be achieved by parallelizing the numerous computations of NCC. In this paper, two approaches to parallelization have been investigated: the OpenMP interface on a multi-CPU system and the Compute Unified Device Architecture (CUDA) on a graphics processing unit (GPU). The performance of the OpenMP and GPU approaches was compared with a conventional Matlab implementation of NCC. The OpenMP approach with 8 threads achieved a maximum speed-up factor of 132 for the NCC computation, whereas the GPU approach on an Nvidia Tesla K20 achieved a maximum speed-up factor of 376. Neither parallelization approach resulted in a significant loss of image quality in the elastograms. Parallelizing the NCC computations on the GPU therefore significantly reduces the computation time and increases the frame rate for motion estimation.
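The per-window computation being parallelized is ordinary zero-normalized cross-correlation. A minimal NumPy version (illustrative only, with invented block and search sizes) estimates the displacement of one block between two frames by scanning it over a search region and taking the NCC peak; in strain imaging this is repeated for thousands of blocks, which is what OpenMP threads or CUDA kernels divide up.

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def estimate_shift(pre, post, top, left, size=16, search=8):
    """Displacement (dy, dx) of the size x size block at (top, left) in
    `pre`, found by exhaustive NCC search within +/- search px in `post`."""
    tmpl = pre[top:top + size, left:left + size]
    best, best_shift = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = post[top + dy:top + dy + size, left + dx:left + dx + size]
            s = ncc(tmpl, cand)
            if s > best:
                best, best_shift = s, (dy, dx)
    return best_shift
```

Each block's search is independent, so the outer per-block loop parallelizes trivially.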
25
Åsen JP, Buskenes JI, Colombo Nilsen CI, Austeng A, Holm S. Implementing Capon beamforming on a GPU for real-time cardiac ultrasound imaging. IEEE Trans Ultrason Ferroelectr Freq Control 2014; 61:76-85. [PMID: 24402897] [DOI: 10.1109/tuffc.2014.6689777]
Abstract
Capon beamforming is associated with a high computational complexity, which limits its use as a real-time method in many applications. In this paper, we present an implementation of the Capon beamformer that exhibits real-time performance when applied in a typical cardiac ultrasound imaging setting. To achieve this performance, we make use of the parallel processing power found in modern graphics processing units (GPUs), combined with beamspace processing to reduce the computational complexity as the number of array elements increases. For a three-dimensional beamspace, we show that processing rates supporting real-time cardiac ultrasound imaging are possible, meaning that images can be processed faster than the image acquisition rate for a wide range of parameters. Image quality is investigated in an in vivo cardiac data set. These results show that Capon beamforming is feasible for cardiac ultrasound imaging, providing images with improved lateral resolution in both element-space and beamspace.
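The per-sample solve at the heart of the method is the minimum-variance (Capon/MVDR) weight computation, w = R⁻¹a / (aᴴR⁻¹a). A small generic NumPy sketch, not the paper's beamspace GPU kernel (the array size, data, and diagonal-loading level are invented):

```python
import numpy as np

def capon_weights(R, a, loading=1e-2):
    """Capon/MVDR weights w = R^-1 a / (a^H R^-1 a), with diagonal
    loading for robustness. R: (M, M) sample covariance; a: steering vector."""
    M = R.shape[0]
    Rl = R + loading * np.trace(R).real / M * np.eye(M)   # loaded covariance
    Ria = np.linalg.solve(Rl, a)                          # avoids explicit inverse
    return Ria / (a.conj() @ Ria)

# M-element array with a broadside (all-ones) steering vector
M = 8
a = np.ones(M, dtype=complex)
rng = np.random.default_rng(3)
X = rng.standard_normal((M, 100)) + 1j * rng.standard_normal((M, 100))
R = X @ X.conj().T / 100
w = capon_weights(R, a)
```

The cost is dominated by the M x M solve per image sample, which is why beamspace reduction (shrinking M) plus GPU parallelism makes real-time rates reachable.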
26
Abstract
Many ultrasound educational products and ultrasound researchers present diagnostic and interventional ultrasound information using picture-in-picture videos, which simultaneously show the ultrasound image and transducer and patient positions. Traditional techniques for creating picture-in-picture videos are expensive, nonportable, or time-consuming. This article describes an inexpensive, simple, and portable way of creating picture-in-picture ultrasound videos. This technique uses a laptop computer with a video capture device to acquire the ultrasound feed. Simultaneously, a webcam captures a live video feed of the transducer and patient position and live audio. Both sources are streamed onto the computer screen and recorded by screen capture software. This technique makes the process of recording picture-in-picture ultrasound videos more accessible for ultrasound educators and researchers for use in their presentations or publications.
27
Yuan J, Xu G, Yu Y, Zhou Y, Carson PL, Wang X, Liu X. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization. J Biomed Opt 2013; 18:86001. [PMID: 23907277] [PMCID: PMC3733419] [DOI: 10.1117/1.jbo.18.8.086001]
Abstract
Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging, all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
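The reconstruction step the paper moves onto the GPU is, in essence, delay-and-sum back-projection: each image pixel accumulates every sensor's sample at the acoustic time of flight from pixel to sensor. A deliberately naive serial sketch (our simplification, with arbitrary geometry and an idealized impulse source, omitting the derivative and solid-angle weighting of full BP):

```python
import numpy as np

def backproject(signals, sensor_xy, grid_xy, fs, c=1540.0):
    """Delay-and-sum PA back-projection.
    signals: (n_sensors, n_samples); sensor_xy, grid_xy: (n, 2) in metres."""
    img = np.zeros(len(grid_xy))
    n_samples = signals.shape[1]
    for s, pos in zip(signals, sensor_xy):
        d = np.linalg.norm(grid_xy - pos, axis=1)                  # pixel-sensor distances
        idx = np.clip(np.round(d / c * fs).astype(int), 0, n_samples - 1)
        img += s[idx]        # in a GPU version, each pixel is one thread
    return img

# Toy scene: one point source, a ring of sensors, impulse channel data
fs, c = 40e6, 1540.0
src = np.array([0.0, 0.002])
ang = np.linspace(0, 2 * np.pi, 32, endpoint=False)
sensors = 0.02 * np.c_[np.cos(ang), np.sin(ang)]
signals = np.zeros((32, 2048))
for i, p in enumerate(sensors):
    signals[i, int(round(np.linalg.norm(src - p) / c * fs))] = 1.0
grid = np.array([[0.0, 0.002], [0.005, -0.003]])   # source pixel vs background pixel
img = backproject(signals, sensors, grid, fs)
```

Because every pixel's sum is independent, the pixel loop maps directly onto one GPU thread per pixel.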
Affiliation(s)
- Jie Yuan, Nanjing University, School of Electronic Science and Engineering, Nanjing 210093, China
- Guan Xu, University of Michigan, Department of Radiology, Ann Arbor, Michigan 48109
- Yao Yu, Nanjing University, School of Electronic Science and Engineering, Nanjing 210093, China
- Yu Zhou, Nanjing University, School of Electronic Science and Engineering, Nanjing 210093, China
- Paul L. Carson, University of Michigan, Department of Radiology, Ann Arbor, Michigan 48109
- Xueding Wang, University of Michigan, Department of Radiology, Ann Arbor, Michigan 48109. Address all correspondence to: Xueding Wang, Tel: +1-734-647-2728; Fax: +1-734-764-8541
- Xiaojun Liu, Nanjing University, School of Physics, Nanjing 210093, China
28
Abstract
We propose the first graphics processing unit (GPU) solution to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight line graph (PSLG) consisting of points and edges. There are many existing CPU algorithms to solve the CDT problem in computational geometry, yet there has been no prior approach to solve this problem efficiently using the parallel computing power of the GPU. For the special case of the CDT problem where the PSLG consists of just points, which is simply the normal Delaunay triangulation (DT) problem, a hybrid approach using the GPU together with the CPU to partially speed up the computation has already been presented in the literature. Our work, on the other hand, accelerates the entire computation on the GPU. Our implementation using the CUDA programming model on NVIDIA GPUs is numerically robust, and runs up to an order of magnitude faster than the best sequential implementations on the CPU. This result is reflected in our experiment with both randomly generated PSLGs and real-world GIS data having millions of points and edges.
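The numerical robustness the abstract emphasises centres on the predicates used throughout Delaunay algorithms, chiefly the incircle test. A plain double-precision NumPy version for intuition (robust implementations replace this determinant with exact or adaptive arithmetic; the sample points are ours):

```python
import numpy as np

def incircle(a, b, c, d):
    """> 0 iff point d lies inside the circumcircle of triangle (a, b, c)
    given in counter-clockwise order; the sign of a 3x3 determinant."""
    rows = []
    for p in (a, b, c):
        dx, dy = p[0] - d[0], p[1] - d[1]
        rows.append([dx, dy, dx * dx + dy * dy])
    return float(np.linalg.det(np.array(rows)))

# The unit circle passes through these three CCW points
a, b, c = (1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)
```

Delaunay and constrained-Delaunay updates flip an edge whenever this predicate reports the opposite vertex inside the circumcircle, so getting its sign right under roundoff is what "numerically robust" refers to.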
Affiliation(s)
- Meng Qi, National University of Singapore, Singapore
29
Matsukura H, Yoneda T, Ishida H. Smelling screen: development and evaluation of an olfactory display system for presenting a virtual odor source. IEEE Trans Vis Comput Graph 2013; 19:606-615. [PMID: 23428445] [DOI: 10.1109/tvcg.2013.40]
Abstract
We propose a new olfactory display system that can generate an odor distribution on a two-dimensional display screen. The proposed system has four fans, one on each corner of the screen. The airflows generated by these fans collide multiple times to create an airflow that is directed towards the user from a certain position on the screen. When odor vapor is introduced into these airflows, the resulting odor distribution is as if an odor source had been placed on the screen. The generated odor distribution leads the user to perceive the odor as emanating from a specific region of the screen. The position of this virtual odor source can be shifted to an arbitrary position on the screen by adjusting the balance of the airflows from the four fans. Most users do not immediately notice the odor presentation mechanism of the proposed olfactory display system because the airflow and perceived odor come from the display screen rather than the fans. The airflow velocity can even be set below the threshold for airflow sensation, such that the odor alone is perceived by the user. We present experimental results that show the airflow field and odor distribution generated by the proposed system. We also report sensory test results to show how the generated odor distribution is perceived by the user and the issues that must be considered in odor presentation.
30
Laha B, Bowman DA, Schiffbauer JD. Validation of the MR simulation approach for evaluating the effects of immersion on visual analysis of volume data. IEEE Trans Vis Comput Graph 2013; 19:529-538. [PMID: 23428436] [DOI: 10.1109/tvcg.2013.43]
Abstract
In our research agenda to study the effects of immersion (level of fidelity) on various tasks in virtual reality (VR) systems, we have found that the most generalizable findings come not from direct comparisons of different technologies, but from controlled simulations of those technologies. We call this the mixed reality (MR) simulation approach. However, the validity of MR simulation, especially when different simulator platforms are used, can be questioned. In this paper, we report the results of an experiment examining the effects of field of regard (FOR) and head tracking on the analysis of volume visualized micro-CT datasets, and compare them with those from a previous study. The original study used a CAVE-like display as the MR simulator platform, while the present study used a high-end head-mounted display (HMD). Out of the 24 combinations of system characteristics and tasks tested on the two platforms, we found that the results produced by the two different MR simulators were similar in 20 cases. However, only one of the significant effects found in the original experiment for quantitative tasks was reproduced in the present study. Our observations provide evidence both for and against the validity of MR simulation, and give insight into the differences caused by different MR simulator platforms. The present experiment also examined new conditions not present in the original study, and produced new significant results, which confirm and extend previous existing knowledge on the effects of FOR and head tracking. We provide design guidelines for choosing display systems that can improve the effectiveness of volume visualization applications.
Affiliation(s)
- Bireswar Laha, Center for Human-Computer Interaction and the Department of Computer Science, Virginia Tech, Blacksburg, VA, USA
31
Berkelman P, Miyasaka M, Bozlee S. Co-located haptic and 3D graphic interface for medical simulations. Stud Health Technol Inform 2013; 184:48-50. [PMID: 23400128]
Abstract
We describe a system which provides high-fidelity haptic feedback in the same physical location as a 3D graphical display, in order to enable realistic physical interaction with virtual anatomical tissue during modelled procedures such as needle driving, palpation, and other interventions performed using handheld instruments. The haptic feedback is produced by the interaction between an array of coils located behind a thin flat LCD screen, and permanent magnets embedded in the instrument held by the user. The coil and magnet configuration permits arbitrary forces and torques to be generated on the instrument in real time according to the dynamics of the simulated tissue by activating the coils in combination. A rigid-body motion tracker provides position and orientation feedback of the handheld instrument to the computer simulation, and the 3D display is produced using LCD shutter glasses and a head-tracking system for the user.
Affiliation(s)
- Peter Berkelman, Department of Mechanical Engineering, University of Hawaii, Honolulu, HI, USA
32
Salud LH, Kwan C, Pugh CM. Simplifying touch data from tri-axial sensors using a new data visualization tool. Stud Health Technol Inform 2013; 184:370-376. [PMID: 23400186] [PMCID: PMC3693446]
Abstract
Quantification and evaluation of palpation is a growing field of research in medicine and engineering. A newly developed tri-axial touch sensor has been designed to capture a multi-dimensional profile of touch-loaded forces. We have developed a data visualization tool as a first step in simplifying interpretation of touch for assessing hands-on clinical performance.
33
Wang D, Qiao H, Song X, Fan Y, Li D. Fluorescence molecular tomography using a two-step three-dimensional shape-based reconstruction with graphics processing unit acceleration. Appl Opt 2012; 51:8731-8744. [PMID: 23262613] [DOI: 10.1364/ao.51.008731]
Abstract
In fluorescence molecular tomography, the accurate and stable reconstruction of fluorescence-labeled targets remains a challenge for wide application of this imaging modality. Here we propose a two-step three-dimensional shape-based reconstruction method using graphics processing unit (GPU) acceleration. In this method, the fluorophore distribution is assumed as the sum of ellipsoids with piecewise-constant fluorescence intensities. The inverse problem is formulated as a constrained nonlinear least-squares problem with respect to shape parameters, leading to much less ill-posedness as the number of unknowns is greatly reduced. Considering that various shape parameters contribute differently to the boundary measurements, we use a two-step optimization algorithm to handle them in a distinctive way and also stabilize the reconstruction. Additionally, the GPU acceleration is employed for finite-element-method-based calculation of the objective function value and the Jacobian matrix, which reduces the total optimization time from around 10 min to less than 1 min. The numerical simulations show that our method can accurately reconstruct multiple targets of various shapes while the conventional voxel-based reconstruction cannot separate the nearby targets. Moreover, the two-step optimization can tolerate different initial values in the existence of noises, even when the number of targets is not known a priori. A physical phantom experiment further demonstrates the method's potential in practical applications.
Affiliation(s)
- Daifa Wang, State Key Laboratory of Software Development Environment, Beihang University, Beijing, China
34
Nishitsuji T, Shimobaba T, Kakue T, Masuda N, Ito T. Fast calculation of computer-generated hologram using the circular symmetry of zone plates. Opt Express 2012; 20:27496-27502. [PMID: 23262699] [DOI: 10.1364/oe.20.027496]
Abstract
Computer-generated holograms (CGHs) can be generated from three-dimensional objects composed of point light sources by overlapping zone plates. A zone plate is a grating that can focus an incident wave, and it has a circularly symmetric shape. In this study, we propose a fast CGH generation algorithm that uses the circular symmetry of zone plates together with computer graphics techniques. We evaluated the proposed method by numerical simulation.
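The symmetry being exploited is easy to see in a toy NumPy version (parameters invented, and much simpler than the paper's algorithm): a zone plate's phase depends only on x² + y², so evaluating one quadrant and mirroring it reproduces the full pattern computed directly, at a quarter of the transcendental-function cost.

```python
import numpy as np

def zone_plate_direct(n, wavelength=532e-9, z=0.1, pitch=8e-6):
    """Fresnel zone-plate pattern cos(pi (x^2 + y^2) / (lambda z)),
    sampled on an n x n grid centred between pixels (n even)."""
    c = (np.arange(n) - n / 2 + 0.5) * pitch
    x, y = np.meshgrid(c, c)
    return np.cos(np.pi * (x**2 + y**2) / (wavelength * z))

def zone_plate_symmetric(n, wavelength=532e-9, z=0.1, pitch=8e-6):
    """Same pattern, but only the positive quadrant is evaluated;
    the other three quadrants are mirror copies."""
    c = (np.arange(n // 2) + 0.5) * pitch            # positive-quadrant coordinates
    x, y = np.meshgrid(c, c)
    q = np.cos(np.pi * (x**2 + y**2) / (wavelength * z))
    top = np.hstack([np.flip(q, (0, 1)), np.flip(q, 0)])
    bot = np.hstack([np.flip(q, 1), q])
    return np.vstack([top, bot])
```

The paper goes further, exploiting circular (not just quadrant) symmetry, but the mirroring above already shows why redundant zone-plate samples need not be recomputed.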
Affiliation(s)
- Takashi Nishitsuji, Graduate School of Engineering, Chiba University, Inage-ku, Chiba, Japan
35
Dinkelbach HÜ, Vitay J, Beuth F, Hamker FH. Comparison of GPU- and CPU-implementations of mean-firing rate neural networks on parallel hardware. Network 2012; 23:212-236. [PMID: 23140422] [DOI: 10.3109/0954898x.2012.739292]
Abstract
Modern parallel hardware such as multi-core processors (CPUs) and graphics processing units (GPUs) has a high computational power which can be greatly beneficial to the simulation of large-scale neural networks. Over the past years, a number of efforts have focused on developing parallel algorithms and simulators best suited for the simulation of spiking neural models. In this article, we aim at investigating the advantages and drawbacks of CPU and GPU parallelization of mean-firing rate neurons, widely used in systems-level computational neuroscience. By comparing OpenMP, CUDA and OpenCL implementations against a serial CPU implementation, we show that GPUs are better suited than CPUs for the simulation of very large networks, but that smaller networks benefit more from an OpenMP implementation. As this performance strongly depends on data organization, we analyze the impact of various factors such as data structure, memory alignment and floating-point precision. We then discuss the suitability of the different hardware depending on the network's size and connectivity, as random or sparse connectivity in mean-firing rate networks tends to break parallel performance on GPUs due to the violation of coalescence.
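The computation whose data layout dictates CPU/GPU performance is small: one dense matrix-vector product per step. A toy NumPy sketch (our own, with invented network parameters) advances a mean-firing rate network by an Euler step of τ dr/dt = -r + f(W r + I), with f a rectification:

```python
import numpy as np

def step_rates(r, W, inp, dt=1e-3, tau=10e-3):
    """One Euler step of tau dr/dt = -r + f(W r + inp), f = rectification."""
    drive = np.maximum(W @ r + inp, 0.0)   # the dense mat-vec dominates the cost
    return r + dt / tau * (drive - r)

rng = np.random.default_rng(4)
N = 200
W = 0.2 * rng.standard_normal((N, N)) / np.sqrt(N)   # weak recurrent weights
inp = np.abs(rng.standard_normal(N))
r = np.zeros(N)
for _ in range(2000):                                 # relax to a fixed point
    r = step_rates(r, W, inp)
```

With dense `W`, every thread reads contiguous memory (coalesced on a GPU); with random sparse connectivity the same update scatters its reads, which is the coalescence issue the abstract mentions.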
Affiliation(s)
- Helge Ülo Dinkelbach, Department of Computer Science, Artificial Intelligence, Chemnitz University of Technology, Germany
36
Chessa M, Bianchi V, Zampetti M, Sabatini SP, Solari F. Real-time simulation of large-scale neural architectures for visual features computation based on GPU. Network 2012; 23:272-291. [PMID: 23116085] [DOI: 10.3109/0954898x.2012.737500]
Abstract
The intrinsic parallelism of visual neural architectures based on distributed hierarchical layers is well suited to be implemented on the multi-core architectures of modern graphics cards. The design strategies that allow us to optimally take advantage of such parallelism, in order to efficiently map on GPU the hierarchy of layers and the canonical neural computations, are proposed. Specifically, the advantages of a cortical map-like representation of the data are exploited. Moreover, a GPU implementation of a novel neural architecture for the computation of binocular disparity from stereo image pairs, based on populations of binocular energy neurons, is presented. The implemented neural model achieves good performances in terms of reliability of the disparity estimates and a near real-time execution speed, thus demonstrating the effectiveness of the devised design strategies. The proposed approach is valid in general, since the neural building blocks we implemented are a common basis for the modeling of visual neural functionalities.
Collapse
Affiliation(s)
- Manuela Chessa
- Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, 16145 Genoa, Italy.
Collapse
|
37
|
Abstract
The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms that fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given the inputs, the internal state of each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single-precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures to the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate in better than real time plausible spiking neural networks of up to 50,000 neurons, processing over 35 million spiking events per second.
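The "additive" property the abstract exploits can be sketched as follows: for filter-based neurons, the membrane state decays by a constant factor and every incoming spike contributes an independent additive term, so all contributions can be applied in parallel. This is an illustrative stand-in, not the paper's code; names, sizes, and rates are assumptions.

```python
import numpy as np

def srm_step(v, decay, spikes_in, w):
    """One update of a filter-based (Spike Response Model-like)
    population.  Decay and each synaptic increment are additive and
    order-independent, which is exactly what permits the extra update
    parallelism (and limits single-precision error accumulation)."""
    v = v * decay              # exponential decay of every filter state
    v = v + spikes_in @ w      # additive contribution of incoming spikes
    return v

rng = np.random.default_rng(1)
n_pre, n_post = 64, 32
w = rng.uniform(0.0, 0.1, size=(n_pre, n_post)).astype(np.float32)
v = np.zeros(n_post, dtype=np.float32)    # single precision, as on the GPU
decay = np.float32(np.exp(-1.0 / 20.0))   # dt = 1 ms, tau = 20 ms
for _ in range(50):
    spikes = (rng.random(n_pre) < 0.05).astype(np.float32)  # ~5% fire
    v = srm_step(v, decay, spikes, w)
```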
Collapse
Affiliation(s)
- Leszek Slażyński
- Department of Life Sciences, Centrum Wiskunde & Informatica, Science Park 123, NL-1098XG Amsterdam, NL
Collapse
|
38
|
Abstract
Modern graphics cards contain hundreds of cores that can be programmed for intensive calculations. They are beginning to be used for spiking neural network simulations. The goal is to make parallel simulation of spiking neural networks available to a large audience, without the requirements of a cluster. We review the ongoing efforts towards this goal, and we outline the main difficulties.
Collapse
Affiliation(s)
- Romain Brette
- Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France.
Collapse
|
39
|
Abstract
In this article we examine data from eight purpose-built aware homes over a six-month period, looking at presence in rooms to try to determine patterns among the older residents. We look for homes that have similar movement patterns using cluster analysis. We also examine how movement over days clusters within individual homes. Our analysis shows that different homes have distinct movement patterns but within individual homes residents have strong movement routines.
Collapse
Affiliation(s)
- John Loane
- CASALA, Dundalk Institute of Technology, Ireland.
Collapse
|
40
|
Wang L, Hofer B, Guggenheim JA, Povazay B. Graphics processing unit-based dispersion encoded full-range frequency-domain optical coherence tomography. J Biomed Opt 2012; 17:077007. [PMID: 22894520 DOI: 10.1117/1.jbo.17.7.077007] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Dispersion-encoded full-range (DEFR) frequency-domain optical coherence tomography (FD-OCT) and its enhanced version, fast DEFR, utilize the dispersion mismatch between sample and reference arm to eliminate the ambiguity in OCT signals caused by non-complex-valued spectral measurement, thereby numerically doubling the usable information content. By iteratively suppressing asymmetrically dispersed complex-conjugate artifacts of OCT-signal pulses, the complex-valued signal can be recovered without additional measurements, thus doubling the spatial signal range to cover the full positive and negative sampling range. Previously, the computational complexity and low processing speed limited the application of DEFR to smaller amounts of data and did not allow interactive operation at high resolution. We report a graphics processing unit (GPU)-based implementation of fast DEFR, which improves reconstruction speed by a factor of more than 90 with respect to CPU-based processing and thereby overcomes these limitations. Implemented on a commercial low-cost GPU, a display line rate of ∼21,000 depth scans/s for 2048 samples/depth scan using 10 iterations of the fast DEFR algorithm has been achieved, sufficient for real-time visualization in situ.
Collapse
Affiliation(s)
- Ling Wang
- Cardiff University, School of Optometry & Vision Sciences, Maindy Road, Cardiff, CF24 4LU, United Kingdom
Collapse
|
41
|
Abstract
Numerical simulation with a numerical human model using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and realize large-scale computing with the numerical human model, we adapt three-dimensional FDTD code to a multi-GPU environment using the Compute Unified Device Architecture (CUDA). In this study, we used NVIDIA Tesla C2070 boards as GPGPUs. The performance of multi-GPU is evaluated in comparison with that of a single GPU and a vector supercomputer. The calculation speed with four GPUs was approximately 3.5 times faster than with a single GPU, and was slightly (approx. 1.3 times) slower than with the supercomputer. The calculation speed of the three-dimensional FDTD method using GPUs can improve significantly as the number of GPUs increases.
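Why FDTD maps so well onto GPUs (and splits across several of them) is visible even in a one-dimensional toy version of the Yee update: each grid point reads only its immediate neighbours. The sketch below is illustrative, in normalized units, and is not the paper's 3-D code; all names and values are assumptions.

```python
import numpy as np

def fdtd_1d(nx=200, nt=300, source_pos=100):
    """Bare-bones 1-D FDTD (Yee scheme), Courant number 0.5.  Every
    cell's update is a local stencil, so a GPU can assign one thread
    per cell; a multi-GPU version partitions the grid and exchanges
    only the boundary cells between devices each step."""
    ez = np.zeros(nx)
    hy = np.zeros(nx)
    for t in range(nt):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])    # update H from curl E
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])     # update E from curl H
        ez[source_pos] += np.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source
    return ez

field = fdtd_1d()
```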
Collapse
Affiliation(s)
- Tomoaki Nagaoka
- Electromagnetic Compatibility Laboratory, Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, Tokyo 184-8795, Japan.
Collapse
|
42
|
Chen W, Ward K, Li Q, Kecman V, Najarian K, Menke N. Agent based modeling of blood coagulation system: implementation using a GPU based high speed framework. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2011:145-8. [PMID: 22254271 DOI: 10.1109/iembs.2011.6089915] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The coagulation and fibrinolytic systems are complex, interconnected biological systems with major physiological roles. The complex, nonlinear, multi-point relationships between the molecular and cellular constituents of the two systems render a comprehensive and simultaneous study of the system at the microscopic and macroscopic levels a significant challenge. We have created an Agent-Based Modeling and Simulation (ABMS) approach for simulating these complex interactions. As the number of agents increases, the time complexity and cost of the resulting simulations present a significant challenge. As such, in this paper, we also present a high-speed framework for the coagulation simulation utilizing the computing power of graphics processing units (GPUs). For comparison, we also implemented the simulations in NetLogo, Repast, and a direct C version. As our experiments demonstrate, at the million-agent scale the GPU implementation is over 10 times faster than the C version, over 100 times faster than the Repast version, and over 300 times faster than the NetLogo simulation.
Collapse
Affiliation(s)
- Wenan Chen
- Department of Biostatistics, Virginia Commonwealth University, USA.
Collapse
|
43
|
Abstract
Computational modeling of cardiac electrophysiology is a powerful tool for studying arrhythmia mechanisms. In particular, cardiac models are useful for gaining insights into experimental studies, and in the foreseeable future they will be used by clinicians to improve therapy for patients suffering from complex arrhythmias. Such models are highly intricate, both in their geometric structure and in the equations that represent myocyte electrophysiology. For these models to be useful in a clinical setting, cost-effective solutions for solving the models in real time must be developed. In this work, we hypothesized that low-cost GPGPU-based hardware systems can be used to accelerate arrhythmia simulations. We ported a two-dimensional monodomain cardiac model and executed it on various GPGPU platforms. Electrical activity was simulated during point stimulation and rotor activity. Our GPGPU implementations provided significant speedups over the CPU implementation: 18X for point stimulation and 12X for rotor activity. We found that the number of threads that could be launched concurrently was a critical factor in optimizing the GPGPU implementations.
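The structure of a monodomain simulation step, and why it parallelizes so well, can be shown with a toy 2-D model: a Laplacian stencil for tissue coupling plus pointwise ionic kinetics (here a FitzHugh-Nagumo-style stand-in, not the paper's ionic model). All parameter values below are illustrative assumptions.

```python
import numpy as np

def monodomain_step(v, w, dt=0.1, d=0.1):
    """One explicit step of a toy 2-D monodomain model.  The Laplacian
    stencil and the per-cell ODE updates are both embarrassingly
    parallel -- one GPU thread per tissue node -- which is what the
    GPGPU port exploits."""
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
           np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)
    dv = d * lap + v * (1.0 - v) * (v - 0.1) - w   # diffusion + excitable kinetics
    dw = 0.01 * (0.5 * v - w)                      # slow recovery variable
    return v + dt * dv, w + dt * dw

n = 64
v = np.zeros((n, n)); w = np.zeros((n, n))
v[:8, :8] = 1.0                    # point stimulus in one corner
for _ in range(200):
    v, w = monodomain_step(v, w)
```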
Collapse
Affiliation(s)
- Wei Wang
- Department of Information Sciences, University of Delaware, Newark, DE 19716, USA.
Collapse
|
44
|
Abstract
Time-resolved three-dimensional (3D) echocardiography generates four-dimensional (3D+time) data sets that bring new possibilities in clinical practice. Image quality of four-dimensional (4D) echocardiography is, however, regarded as poorer than that of conventional echocardiography, where time-resolved 2D imaging is used. Advanced image-processing filtering methods can be used to achieve image improvements, but at the cost of heavy data processing. The recent development of graphics processing units (GPUs) enables highly parallel general-purpose computations that considerably reduce the computational time of advanced image filtering methods. In this study, multidimensional adaptive filtering of 4D echocardiography was performed using GPUs. Filtering was done using multiple kernels implemented in OpenCL (Open Computing Language) working on multiple subsets of the data. Our results show a substantial speed increase of up to 74 times, resulting in a total filtering time of less than 30 s on a common desktop. This implies that advanced adaptive image processing can be accomplished in conjunction with a clinical examination. Since the presented GPU processing method scales linearly with the number of processing elements, we expect it to continue scaling with the expected future increases in the number of processing elements. This should be contrasted with the increases in data set sizes expected in the near future following further improvements in ultrasound probes and measuring devices. It is concluded that GPUs facilitate the use of demanding adaptive image filtering techniques that in turn enhance 4D echocardiographic data sets. The presented general methodology of implementing parallelism using GPUs is also applicable to other medical modalities that generate multidimensional data.
Collapse
Affiliation(s)
- Mathias Broxvall
- Centre for Modeling and Simulation, Campus Alfred Nobel, Örebro University, 69142 Karlskoga, Sweden.
Collapse
|
45
|
Watanabe Y. Real time processing of Fourier domain optical coherence tomography with fixed-pattern noise removal by partial median subtraction using a graphics processing unit. J Biomed Opt 2012; 17:050503. [PMID: 22612118 DOI: 10.1117/1.jbo.17.5.050503] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
The author presents graphics processing unit (GPU) programming for real-time Fourier domain optical coherence tomography (FD-OCT) with fixed-pattern noise removal by subtracting means and medians. In general, the fixed-pattern noise can be removed by subtracting an averaged spectrum computed from the many spectra of an actual measurement. However, a mean spectrum results in artifacts in the form of residual lateral lines caused by a small number of highly reflective points on a sample surface. These artifacts can be eliminated from OCT images by using medians instead of means. However, median calculations, which are based on a sorting algorithm, can require a large amount of computation time. With the developed GPU programming, highly reflective surface regions were identified by calculating the standard deviation of the Fourier-transformed data in the lateral direction. The medians were then subtracted in the identified regions and the means in the other regions, such as backgrounds. When the median calculation was applied to fewer than 256 positions out of a total of 512 depths in an OCT image with 1024 A-lines, the GPU processing rate was faster than that of the line scan camera (46.9 kHz). Therefore, processed OCT images can be displayed in real time using partial medians.
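The partial-median idea can be sketched in a few lines of NumPy: rank depth rows by their lateral standard deviation, subtract the lateral median in the high-variance ("surface") rows and the lateral mean everywhere else. This is an illustrative reconstruction, not the paper's GPU code; the threshold of 64 rows and the data sizes are assumptions.

```python
import numpy as np

def remove_fixed_pattern(bscan, n_median=64):
    """Fixed-pattern noise removal with partial median subtraction.
    `bscan` is (depths, a_lines).  Rows with high lateral std -- i.e.
    crossed by a bright surface -- get the lateral *median* subtracted
    (robust to a few bright points); all other rows get the *mean*."""
    std = bscan.std(axis=1)
    use_median = np.zeros(bscan.shape[0], dtype=bool)
    use_median[np.argsort(std)[-n_median:]] = True   # the "surface" rows
    med = np.median(bscan, axis=1, keepdims=True)
    mean = bscan.mean(axis=1, keepdims=True)
    offset = np.where(use_median[:, None], med, mean)
    return bscan - offset

rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, size=(512, 1024))
img += np.linspace(0.0, 3.0, 512)[:, None]   # depth-dependent fixed pattern
img[200, ::50] += 20.0                        # a few bright surface points
clean = remove_fixed_pattern(img)
```

Limiting the (sort-based) median to a subset of depths is what keeps the per-frame cost below the camera line rate.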
Collapse
|
46
|
Abstract
BACKGROUND Calculating the electrostatic surface potential (ESP) of a biomolecule is critical towards understanding biomolecular function. Because of its quadratic computational complexity (as a function of the number of atoms in a molecule), there have been continual efforts to reduce its complexity either by improving the algorithm or the underlying hardware on which the calculations are performed. RESULTS We present the combined effect of (i) a multi-scale approximation algorithm, known as hierarchical charge partitioning (HCP), when applied to the calculation of ESP and (ii) its mapping onto a graphics processing unit (GPU). To date, most molecular modeling algorithms perform an artificial partitioning of biomolecules into a grid/lattice on the GPU. In contrast, HCP takes advantage of the natural partitioning in biomolecules, which in turn better facilitates its mapping onto the GPU. Specifically, we characterize the effect of known GPU optimization techniques like the use of shared memory. In addition, we demonstrate how the cost of divergent branching on a GPU can be amortized across algorithms like HCP in order to deliver a massive performance boon. CONCLUSIONS We accelerated the calculation of ESP by 25-fold solely by parallelization on the GPU. Combining the GPU and HCP resulted in a speedup of at most 1,860-fold for our largest molecular structure. The baseline for these speedups is an implementation that has been hand-tuned, SSE-optimized, and parallelized across 16 cores on the CPU. The use of the GPU does not degrade the accuracy of our results.
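For orientation, the quadratic-cost baseline that HCP approximates is the direct Coulomb sum over all atomic point charges: every surface/grid point sums over every atom (one GPU thread per grid point). The sketch below shows only that baseline, in vacuum units with constants dropped; it is illustrative, and all names and sizes are assumptions.

```python
import numpy as np

def esp_direct(grid_points, atom_pos, charges):
    """Direct O(points x atoms) electrostatic potential: for each grid
    point, sum q_i / r_i over all atoms.  HCP cuts this cost by
    replacing distant groups of atoms with aggregate approximate
    charges, following the molecule's natural hierarchy."""
    # pairwise distances, shape (n_points, n_atoms)
    d = np.linalg.norm(grid_points[:, None, :] - atom_pos[None, :, :], axis=2)
    return (charges[None, :] / d).sum(axis=1)

rng = np.random.default_rng(3)
atoms = rng.uniform(-1.0, 1.0, size=(100, 3))
q = rng.choice([-1.0, 1.0], size=100)
pts = rng.uniform(2.0, 3.0, size=(50, 3))   # evaluation points outside the "molecule"
phi = esp_direct(pts, atoms, q)
```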
Collapse
Affiliation(s)
- Mayank Daga
- Department of Computer Science, Virginia Tech, Blacksburg, VA 24060, USA
- Wu-chun Feng
- Department of Computer Science, Virginia Tech, Blacksburg, VA 24060, USA
- Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061, USA
- Virginia Bioinformatics Institute, Virginia Tech, Blacksburg, VA 24061, USA
Collapse
|
47
|
Nakai H. [Three-dimensional computer graphics. 1. Hardware topics]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2012; 68:1414-1418. [PMID: 23089846 DOI: 10.6009/jjrt.2012_jsrt_68.10.1414] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
|
48
|
Abstract
The Arnold diffusion constitutes a dynamical phenomenon which may occur in the phase space of a non-integrable Hamiltonian system whenever the number of the system's degrees of freedom is M ≥ 3. The diffusion is mediated by a web-like structure of resonance channels, which penetrates the phase space and allows the system to explore the whole energy shell. The Arnold diffusion is a slow process; consequently, the mapping of the web presents a very time-consuming task. We demonstrate that the exploration of the Arnold web by use of a graphics processing unit (GPU) supercomputer can result in distinct speedups of two orders of magnitude compared with standard CPU-based simulations.
Collapse
Affiliation(s)
- A Seibert
- Institute of Physics, University of Augsburg, Universitätstr.1, D-86159 Augsburg, Germany.
Collapse
|
49
|
Abstract
Ultrasound elastography is becoming a widely available clinical imaging tool. In recent years, several real-time elastography algorithms have been proposed; however, most of these algorithms achieve real-time frame rates through compromises in elastographic image quality. Cross-correlation-based elastographic techniques are known to provide high-quality elastographic estimates, but they are computationally intense and usually not suitable for real-time clinical applications. Recently, the use of massively parallel general purpose graphics processing units (GPGPUs) for accelerating computationally intense operations in biomedical applications has received great interest. In this study, we investigate the use of the GPGPU to speed up generation of cross-correlation-based elastograms and achieve real-time frame rates while preserving elastographic image quality. We propose and statistically analyze performance of a new hybrid model of computation suitable for elastography applications in which sequential code is executed on the CPU and parallel code is executed on the GPGPU. Our results indicate that the proposed hybrid approach yields optimal results and adequately addresses the trade-off between speed and quality.
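The computational core being offloaded here is window-by-window normalized cross-correlation between pre- and post-compression RF data: for each window, every candidate lag is scored, and the windows are independent (natural GPGPU work items). The sketch below is a minimal serial version under stated assumptions (integer lags only, 1-D RF line, illustrative window/search sizes); a real implementation adds sub-sample interpolation.

```python
import numpy as np

def ncc_displacement(pre, post, win=32, search=8):
    """For each window of `pre`, return the integer lag in [0, search]
    that maximizes the normalized cross-correlation with `post`.
    The O(windows x lags x win) inner loops are what the GPGPU
    parallelizes."""
    lags = []
    for start in range(0, len(pre) - win - search, win):
        ref = pre[start:start + win]
        ref = (ref - ref.mean()) / (ref.std() + 1e-12)
        best, best_lag = -np.inf, 0
        for lag in range(search + 1):
            seg = post[start + lag:start + lag + win]
            seg = (seg - seg.mean()) / (seg.std() + 1e-12)
            score = float(ref @ seg) / win      # normalized correlation
            if score > best:
                best, best_lag = score, lag
        lags.append(best_lag)
    return np.array(lags)

rng = np.random.default_rng(4)
rf = rng.normal(size=4096)
post = np.concatenate([np.zeros(3), rf])[:4096]  # uniform 3-sample shift
est = ncc_displacement(rf, post)
```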
Collapse
Affiliation(s)
- Xu Yang
- Texas A&M University, Dwight Look College of Engineering, Department of Electrical and Computer Engineering, College Station, TX, USA.
Collapse
|
50
|
Waudby CA, Christodoulou J. GPU accelerated Monte Carlo simulation of pulsed-field gradient NMR experiments. J Magn Reson 2011; 211:67-73. [PMID: 21570329 DOI: 10.1016/j.jmr.2011.04.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/08/2011] [Revised: 04/07/2011] [Accepted: 04/11/2011] [Indexed: 05/30/2023]
Abstract
The simulation of diffusion by Monte Carlo methods is often essential to describing NMR measurements of diffusion in porous media. However, simulation timescales must often span hundreds of milliseconds, with large numbers of trajectories required to ensure statistical convergence. Here we demonstrate that by parallelising code to run on graphics processing units (GPUs), these calculations may be accelerated by over three orders of magnitude, opening new frontiers in experimental design and analysis. As such cards are commonly installed on most desktop computers, we expect that this will prove useful in many cases where simple analytical descriptions are not available or appropriate, e.g. in complex geometries or where short gradient pulse approximations do not hold, or for the analysis of diffusion-weighted MRI in complex tissues such as the lungs and brain.
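The per-walker independence that makes these simulations GPU-friendly is easy to see in the free-diffusion case: each spin performs its own random walk and accumulates its own gradient phase, and the echo is just the ensemble average. The sketch below covers only free 1-D diffusion in the short-gradient-pulse limit (where the analytic answer exists); it is illustrative, and every parameter value is an assumption.

```python
import numpy as np

def pfg_signal(n_walkers=20000, n_steps=200, dt=1e-4, D=2e-9,
               g=0.1, gamma=2.675e8, delta=1e-2):
    """Monte Carlo estimate of the pulsed-field-gradient echo
    attenuation for free 1-D diffusion.  Each walker diffuses
    independently; the gradient pulses imprint a phase q*(x - x0),
    and the echo is the ensemble average of cos(phase)."""
    rng = np.random.default_rng(5)
    step = np.sqrt(2.0 * D * dt)                  # per-step displacement scale
    x0 = np.zeros(n_walkers)
    x = x0 + step * rng.standard_normal((n_steps, n_walkers)).sum(axis=0)
    q = gamma * g * delta                         # wavevector of the pulses
    return np.cos(q * (x - x0)).mean()            # echo attenuation E(q)

E = pfg_signal()
```

In restricted geometries (where the paper's GPU acceleration matters most) only the walk step changes: walkers reflect off boundaries, while the phase accumulation and ensemble average stay the same.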
Collapse
Affiliation(s)
- Christopher A Waudby
- Institute of Structural and Molecular Biology, University College London and Birkbeck College, WC1E 6BT, UK.
Collapse
|