1. Zhang K, Kang JU. Graphics processing unit accelerated non-uniform fast Fourier transform for ultrahigh-speed, real-time Fourier-domain OCT. Optics Express 2010;18:23472-87. PMID: 21164690; PMCID: PMC3358119; DOI: 10.1364/oe.18.023472.
Abstract
We implemented fast Gaussian gridding (FGG)-based non-uniform fast Fourier transform (NUFFT) on the graphics processing unit (GPU) architecture for ultrahigh-speed, real-time Fourier-domain optical coherence tomography (FD-OCT). The Vandermonde matrix-based non-uniform discrete Fourier transform (NUDFT) as well as the linear/cubic interpolation with fast Fourier transform (InFFT) methods are also implemented on GPU to compare their performance in terms of image quality and processing speed. The GPU accelerated InFFT/NUDFT/NUFFT methods are applied to process both the standard half-range FD-OCT and complex full-range FD-OCT (C-FD-OCT). GPU-NUFFT provides an accurate approximation to GPU-NUDFT in terms of image quality, but offers >10 times higher processing speed. Compared with the GPU-InFFT methods, GPU-NUFFT has improved sensitivity roll-off, higher local signal-to-noise ratio and immunity to side-lobe artifacts caused by the interpolation error. Using a high speed CMOS line-scan camera, we demonstrated the real-time processing and display of GPU-NUFFT-based C-FD-OCT at a camera-limited rate of 122 k line/s (1024 pixel/A-scan).
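The Vandermonde-matrix NUDFT that serves as the paper's image-quality reference can be written in a few lines; the gridding-based NUFFT approximates this result at far lower cost. A minimal sketch (NumPy, our own function name, not the authors' GPU code):

```python
import numpy as np

def nudft(k, s, n_depth):
    """Exact non-uniform DFT of a spectral interferogram sampled at
    normalized wavenumber positions k (each in [0, 1)), evaluated at
    integer depth indices 0..n_depth-1."""
    z = np.arange(n_depth)
    # Vandermonde matrix: E[z, j] = exp(-2*pi*i * z * k_j)
    E = np.exp(-2j * np.pi * np.outer(z, k))
    # O(n_depth * len(k)) per A-scan; NUFFT approximates this in O(N log N)
    return E @ s
```

For uniformly spaced k this reduces exactly to the FFT, which is a convenient sanity check for any NUFFT implementation.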
2. Sullivan SZ, Muir RD, Newman JA, Carlsen MS, Sreehari S, Doerge C, Begue NJ, Everly RM, Bouman CA, Simpson GJ. High frame-rate multichannel beam-scanning microscopy based on Lissajous trajectories. Optics Express 2014;22:24224-34. PMID: 25321997; PMCID: PMC4247188; DOI: 10.1364/oe.22.024224.
Abstract
A simple beam-scanning optical design based on Lissajous trajectory imaging is described for achieving up to kHz frame-rate optical imaging on multiple simultaneous data acquisition channels. In brief, two fast-scan resonant mirrors direct the optical beam on a circuitous trajectory through the field of view, with the trajectory repeat time given by the least common multiple of the mirror periods. Dicing the raw time-domain data into sub-trajectories, combined with model-based image reconstruction (MBIR) 3D in-painting algorithms, allows effective frame rates much higher than the full-trajectory repeat rate. Since sub-trajectory and full-trajectory imaging are simply different methods of analyzing the same data, high-frame-rate images with relatively low resolution and low-frame-rate images with high resolution are acquired simultaneously. The optical hardware required to perform Lissajous imaging represents only a minor modification to established beam-scanning hardware, combined with additional control and data acquisition electronics. Preliminary studies based on laser transmittance imaging and polarization-dependent second harmonic generation microscopy support the viability of the approach both for detection of subtle changes in large signals and for trace-light detection of transient fluctuations.
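The trajectory geometry itself is easy to reproduce. A minimal sketch (our own helper names; assumes integer mirror frequencies in Hz, whereas real resonant mirrors are only approximately commensurate):

```python
import numpy as np
from math import gcd

def lissajous(fx, fy, sample_rate, n_samples):
    """Beam position along a Lissajous trajectory driven by two resonant
    mirrors oscillating at frequencies fx and fy (Hz)."""
    t = np.arange(n_samples) / sample_rate
    x = np.sin(2 * np.pi * fx * t)
    y = np.sin(2 * np.pi * fy * t)
    return x, y

def repeat_time(fx, fy):
    """Full-trajectory repeat time: the least common multiple of the two
    mirror periods, i.e. 1 / gcd(fx, fy) for integer frequencies in Hz."""
    return 1.0 / gcd(fx, fy)
```

Mirror frequencies that are nearly equal but coprime (e.g. 15 Hz and 14 Hz) give a long repeat time and hence dense coverage of the field of view, which is what makes sub-trajectory dicing attractive.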
3. Wu PH, Nelson N, Tseng Y. A general method for improving spatial resolution by optimization of electron multiplication in CCD imaging. Optics Express 2010;18:5199-212. PMID: 20389533; PMCID: PMC2872937; DOI: 10.1364/oe.18.005199.
Abstract
The electron-multiplying charge-coupled device (EMCCD) camera possesses an electron multiplying function that can effectively convert the weak incident photon signal to amplified electron output, thereby greatly enhancing the contrast of the acquired images. This device has become a popular photon detector in single-cell biophysical assays to enhance subcellular images. However, the quantitative relationship between the resolution in such measurements and the electron multiplication setting in the EMCCD camera is not well understood. We therefore developed a method to characterize the exact dependence of the signal-to-noise ratio (SNR) on EM gain settings over a full range of incident light intensity. This information was further used to evaluate the EMCCD performance in subcellular particle tracking. We conclude that there are optimal EM gain settings for achieving the best SNR and the best spatial resolution in these experiments. If it is not used optimally, electron multiplication can decrease the SNR and increase spatial error.
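The trade-off the authors characterize can be illustrated with the standard EMCCD noise model, in which EM gain suppresses read noise but multiplies the shot noise by an excess noise factor F² ≈ 2. A simplified sketch (not the authors' measured calibration; parameter values are illustrative):

```python
import numpy as np

def emccd_snr(photons, gain, read_noise=10.0, excess_noise_sq=2.0):
    """Simplified EMCCD SNR model. EM gain divides the effective read noise
    by the gain but inflates shot noise by the excess noise factor F^2,
    which approaches ~2 at high EM gain (F^2 = 1 with no multiplication)."""
    f2 = 1.0 if gain == 1 else excess_noise_sq
    return photons / np.sqrt(f2 * photons + (read_noise / gain) ** 2)
```

In this model high gain helps only when the signal is read-noise limited; for bright signals the excess noise factor makes unit gain preferable, consistent with the existence of an optimal gain setting.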
4. Chang CW, Mycek MA. Precise fluorophore lifetime mapping in live-cell, multi-photon excitation microscopy. Optics Express 2010;18:8688-96. PMID: 20588712; PMCID: PMC3410727; DOI: 10.1364/oe.18.008688.
Abstract
Fluorophore excited state lifetime is a useful indicator of micro-environment in cellular optical molecular imaging. For quantitative sensing, precise lifetime determination is important, yet is often difficult to accomplish when using the experimental conditions favored by live cells. Here we report the first application of temporal optimization and spatial denoising methods to two-photon time-correlated single photon counting (TCSPC) fluorescence lifetime imaging microscopy (FLIM) to improve lifetime precision in live-cell images. The results demonstrated a greater than five-fold improvement in lifetime precision. This approach minimizes the adverse effects of excitation light on live cells and should benefit FLIM applications to high content analysis and bioimage informatics.
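As background to lifetime precision, a mono-exponential lifetime can be estimated from a TCSPC decay histogram by weighted log-linear regression. This is a generic textbook estimator for illustration only, not the temporal-optimization or spatial-denoising method of the paper:

```python
import numpy as np

def fit_lifetime(t, counts):
    """Estimate a mono-exponential lifetime tau from a TCSPC decay
    histogram (counts ~ A * exp(-t / tau)) by weighted least squares on
    log(counts); sqrt(counts) weights approximate Poisson statistics."""
    mask = counts > 0                      # log() requires positive counts
    slope, _intercept = np.polyfit(t[mask], np.log(counts[mask]), 1,
                                   w=np.sqrt(counts[mask]))
    return -1.0 / slope
```

The precision of any such estimator degrades sharply at low photon counts, which is exactly the live-cell regime the paper's denoising approach targets.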
5. Xu D, Huang Y, Kang JU. GPU-accelerated non-uniform fast Fourier transform-based compressive sensing spectral domain optical coherence tomography. Optics Express 2014;22:14871-84. PMID: 24977582; PMCID: PMC4083058; DOI: 10.1364/oe.22.014871.
Abstract
We implemented graphics processing unit (GPU) accelerated compressive sensing (CS) spectral domain optical coherence tomography (SD OCT) that is non-uniform in k-space. The Kaiser-Bessel (KB) function and the Gaussian function are used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm, with different oversampling ratios and kernel widths. Our implementation is compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It was found that our implementation has comparable image quality to the GPU-accelerated MNUDFT-based CS SD OCT while providing more than 5 times speed enhancement. When compared to the GPU-accelerated FFT-based CS SD OCT, it shows lower background noise and fewer side lobes while eliminating the need for the cumbersome k-space grid filling and k-linear calibration procedures. Finally, we demonstrated that with a conventional desktop computer equipped with three GPUs, real-time B-mode imaging can be obtained in excess of 30 fps for the GPU-accelerated NUFFT-based CS SD OCT with a frame size of 2048 (axial) × 1000 (lateral).
6. Noor-Ul-Huda M, Tehsin S, Ahmed S, Niazi FAK, Murtaza Z. Retinal images benchmark for the detection of diabetic retinopathy and clinically significant macular edema (CSME). Biomed Tech (Berl) 2019;64:297-307. PMID: 30055096; DOI: 10.1515/bmt-2018-0098.
Abstract
Diabetes mellitus is a chronic disease associated with significant morbidity and mortality, largely through its numerous micro- and macrovascular complications. In developing countries, diabetic retinopathy (DR) is one of the major sources of vision impairment in the working-age population. DR is caused by high blood glucose levels and can result in vision loss or permanent blindness. It is classified into two categories: non-proliferative diabetic retinopathy (NPDR), further graded as mild, moderate or severe, and proliferative diabetic retinopathy (PDR), further graded as early PDR, high-risk PDR and advanced diabetic eye disease. Advances in biomedical image processing have sped up automated disease diagnosis and analysis. Much research has been conducted and computerized systems have been designed to detect and analyze retinal diseases through image processing. Similarly, a number of algorithms have been designed to detect and grade DR by analyzing different symptoms, including microaneurysms, soft exudates, hard exudates, cotton wool spots, fibrotic bands, neovascularization on disc (NVD), neovascularization elsewhere (NVE), hemorrhages and tractional bands. Visual examination of the retina is a vital test for diagnosing DR-related complications. However, every computer-aided DR diagnostic system requires a standard dataset for the estimation of its efficiency, performance and accuracy. This research presents a benchmark for the evaluation of computer-based DR diagnostic systems. Existing DR benchmarks are small and do not cover all DR stages and categories. The dataset contains 1445 high-quality fundus photographs of retinal images, acquired over 2 years from the records of patients who presented to the Department of Ophthalmology, Holy Family Hospital, Rawalpindi. This benchmark provides an evaluation platform for medical image analysis researchers, with evaluation data for all stages of DR.
7. Tahmasbi A, Ward ES, Ober RJ. Determination of localization accuracy based on experimentally acquired image sets: applications to single molecule microscopy. Optics Express 2015;23:7630-52. PMID: 25837101; PMCID: PMC4413838; DOI: 10.1364/oe.23.007630.
Abstract
Fluorescence microscopy is a photon-limited imaging modality that allows the study of subcellular objects and processes with high specificity. The best possible accuracy (standard deviation) with which an object of interest can be localized when imaged using a fluorescence microscope is typically calculated using the Cramér-Rao lower bound, that is, the inverse of the Fisher information. However, the current approach for the calculation of the best possible localization accuracy relies on an analytical expression for the image of the object. This can pose practical challenges since it is often difficult to find appropriate analytical models for the images of general objects. In this study, we instead develop an approach that directly uses an experimentally collected image set to calculate the best possible localization accuracy for a general subcellular object. In this approach, we fit splines, i.e. smoothly connected piecewise polynomials, to the experimentally collected image set to provide a continuous model of the object, which can then be used for the calculation of the best possible localization accuracy. Due to its practical importance, we investigate in detail the application of the proposed approach in single molecule fluorescence microscopy. In this case, the object of interest is a point source and, therefore, the acquired image set pertains to an experimental point spread function.
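The Fisher-information calculation at the heart of this approach can be illustrated in 1D: for Poisson-limited pixel data with expected counts μₖ, the Fisher information for the position x₀ is I(x₀) = Σₖ (∂μₖ/∂x₀)²/μₖ, and the CRLB is 1/√I. A minimal numerical sketch (finite differences stand in for the paper's spline model; the function name is ours):

```python
import numpy as np

def localization_crlb(psf_samples, positions, total_photons):
    """Cramer-Rao lower bound on 1D localization accuracy computed
    directly from a sampled PSF profile (Poisson model, no background)."""
    dx = np.gradient(positions)                   # local pixel spacing
    q = psf_samples / np.sum(psf_samples * dx)    # normalize to a density
    mu = total_photons * q * dx                   # expected photons per pixel
    # shifting the source shifts the profile, so d(mu)/d(x0) = -N q'(x) dx;
    # the sign cancels in the Fisher information
    dmu = total_photons * np.gradient(q, positions) * dx
    keep = mu > 0
    fisher = np.sum(dmu[keep] ** 2 / mu[keep])
    return 1.0 / np.sqrt(fisher)
```

For a Gaussian PSF of width σ this reproduces the well-known σ/√N limit, which makes a convenient check against the analytical-model route the paper generalizes.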
8. Moon S, Lee SW, Chen Z. Reference spectrum extraction and fixed-pattern noise removal in optical coherence tomography. Optics Express 2010;18:24395-404. PMID: 21164786; PMCID: PMC3100290; DOI: 10.1364/oe.18.024395.
Abstract
We present a new signal processing method that extracts the reference spectrum from an acquired optical coherence tomography (OCT) image without a separate calibration measurement of the reference spectrum. The reference spectrum is used to remove the fixed-pattern noise that is a characteristic artifact of Fourier-domain OCT schemes. We found that the conventional approach based on an averaged (mean) spectrum is easily biased by high-amplitude data points whose statistical distribution is far from random; mean-spectrum subtraction therefore cannot completely eliminate the artifact and may leave residual horizontal lines in the final image. This problem was avoided by using a more robust statistic, the median A-line, obtained by taking the complex median along each horizontal line of the data. As a faster alternative, we also propose a minimum-variance mean A-line: the mean A-line computed over the horizontal segment of the image whose complex variance is smallest. Comparing images processed by these methods shows that both the median-line subtraction and the minimum-variance mean-line subtraction successfully suppress the fixed-pattern noise. The inverse Fourier transform of the extracted reference A-line also agreed well with a physically measured reference spectrum.
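The median-line idea reduces to one array operation per B-scan. A minimal sketch, assuming the "complex median" is taken component-wise on the real and imaginary parts (our reading of the method; names are ours):

```python
import numpy as np

def remove_fixed_pattern(bscan):
    """Suppress fixed-pattern noise by subtracting a reference A-line taken
    as the component-wise complex median across all A-lines.
    bscan: complex 2D array, axis 0 = depth/spectral index, axis 1 = A-line."""
    ref = np.median(bscan.real, axis=1) + 1j * np.median(bscan.imag, axis=1)
    return bscan - ref[:, None]
```

Because the median ignores the minority of A-lines that carry strong sample signal, the extracted reference is not biased by them the way a mean would be.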
9. Johnson PV, Kim J, Banks MS. Stereoscopic 3D display technique using spatiotemporal interlacing has improved spatial and temporal properties. Optics Express 2015;23:9252-75. PMID: 25968758; PMCID: PMC4523373; DOI: 10.1364/oe.23.009252.
Abstract
Stereoscopic 3D (S3D) displays use spatial or temporal interlacing to send different images to the two eyes. Temporal interlacing delivers images to the left and right eyes alternately in time; it has high effective spatial resolution but is prone to temporal artifacts. Spatial interlacing delivers even pixel rows to one eye and odd rows to the other eye simultaneously; it is subject to spatial limitations such as reduced spatial resolution. We propose a spatiotemporal-interlacing protocol that interlaces the left- and right-eye views spatially, but with the rows being delivered to each eye alternating with each frame. We performed psychophysical experiments and found that flicker, motion artifacts, and depth distortion are substantially reduced relative to the temporal-interlacing protocol, and spatial resolution is better than in the spatial-interlacing protocol. Thus, the spatiotemporal-interlacing protocol retains the benefits of spatial and temporal interlacing while minimizing or even eliminating the drawbacks.
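The row/frame assignment rule of the protocol can be sketched directly (a schematic illustration of the interlacing pattern only, not a display driver):

```python
import numpy as np

def spatiotemporal_interlace(left, right, frame_index):
    """Build one spatiotemporally interlaced frame: rows alternate between
    the two eye views, and the row assignment swaps on every frame so each
    eye receives full vertical resolution over any two-frame window."""
    frame = np.empty_like(left)
    if frame_index % 2 == 0:
        frame[0::2] = left[0::2]    # even rows -> left-eye view
        frame[1::2] = right[1::2]   # odd rows  -> right-eye view
    else:
        frame[0::2] = right[0::2]   # assignment swapped on odd frames
        frame[1::2] = left[1::2]
    return frame
```

Over two consecutive frames every row has carried both eye views once, which is the mechanism behind the protocol's recovered spatial resolution.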
10. McLean JP, Ling Y, Hendon CP. Frequency-constrained robust principal component analysis: a sparse representation approach to segmentation of dynamic features in optical coherence tomography imaging. Optics Express 2017;25:25819-30. PMID: 29041245; PMCID: PMC5644470; DOI: 10.1364/oe.25.025819.
Abstract
Sparse representation theory is an exciting area of research with recent applications in medical imaging for detection, segmentation, and quantitative analysis of biological processes. We present a variant of the robust principal component analysis (RPCA) algorithm, called frequency-constrained RPCA (FC-RPCA), for selectively segmenting dynamic phenomena whose spectra fall within a user-defined range of frequencies. The algorithm requires no subjective parameter tuning and demonstrates robust segmentation in datasets containing multiple motion sources and high-amplitude noise. When tested on 17 ex vivo, time-lapse optical coherence tomography (OCT) B-scans of human ciliated epithelium, segmentation accuracies ranged from 91% to 99% and consistently outperformed traditional RPCA.
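The traditional RPCA baseline that FC-RPCA is compared against decomposes a data matrix into a low-rank background and a sparse dynamic component. A compact sketch of principal component pursuit via the standard inexact augmented Lagrangian method (the generic algorithm, not the authors' frequency-constrained variant; parameter defaults follow common practice):

```python
import numpy as np

def rpca(M, lam=None, n_iter=100, rho=1.5):
    """Robust PCA (principal component pursuit): split M into a low-rank
    part L and a sparse part S via the inexact augmented Lagrangian method."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(M, 2)        # common IALM initialization
    Y = np.zeros_like(M)                    # Lagrange multiplier matrix
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # low-rank update: singular value thresholding at level 1/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # sparse update: elementwise soft-thresholding at level lam/mu
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (M - L - S)               # dual ascent on M = L + S
        mu *= rho                           # tighten the penalty each pass
    return L, S
```

In the OCT setting each column of M would be one vectorized frame of the time-lapse B-scan stack, so S collects the dynamic (e.g. ciliary) pixels.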
11. Tahmasbi A, Ram S, Chao J, Abraham AV, Tang FW, Sally Ward E, Ober RJ. Designing the focal plane spacing for multifocal plane microscopy. Optics Express 2014;22:16706-21. PMID: 25090489; PMCID: PMC4162350; DOI: 10.1364/oe.22.016706.
Abstract
Multifocal plane microscopy (MUM) has made it possible to study subcellular dynamics in 3D at high temporal and spatial resolution by simultaneously imaging distinct planes within the specimen. MUM allows high accuracy localization of a point source along the z-axis since it overcomes the depth discrimination problem of conventional single plane microscopy. An important question in MUM experiments is how the number of focal planes and their spacings should be chosen to achieve the best possible localization accuracy along the z-axis. Here, we propose approaches based on the Fisher information matrix and report spacing scenarios called strong coupling and weak coupling which yield an appropriate 3D localization accuracy. We examine the effect of numerical aperture, magnification, photon count, emission wavelength and extraneous noise on the spacing scenarios. In addition, we investigate the effect of changing the number of focal planes on the 3D localization accuracy. We also introduce a new software package that provides a user-friendly framework to find appropriate plane spacings for a MUM setup. These developments should assist in optimizing MUM experiments.