26. Jalal UM, Kim SC, Shim JS. Histogram analysis for smartphone-based rapid hematocrit determination. BIOMEDICAL OPTICS EXPRESS 2017; 8:3317-3328. [PMID: 28717569 PMCID: PMC5508830 DOI: 10.1364/boe.8.003317]
Abstract
A novel, rapid histogram-based analysis technique is proposed for the colorimetric quantification of blood hematocrit. A smartphone-based "Histogram" app for hematocrit detection was developed, integrating the smartphone's embedded camera with a microfluidic chip via a custom-made optical platform. The histogram analysis automatically detects the sample channel, performs auto-calibration, and can analyze both single-channel and multi-channel images. Furthermore, the method quantifies blood hematocrit reliably under both uniform and varying optical conditions. Rapid determination of blood hematocrit provides a wealth of information about physiological disorders, and such reproducible, cost-effective, and standardized techniques may effectively aid the diagnosis and prevention of a number of human diseases.
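The histogram idea in this abstract can be illustrated with a minimal sketch (illustrative only, not the authors' app code; the image, mask, and intensity values below are hypothetical): compute a per-channel intensity histogram over the sample-channel pixels and derive a summary statistic that a calibration curve could map to hematocrit.

```python
import numpy as np

def channel_histogram(image, mask, bins=256):
    """Histogram of red-channel intensities within the sample-channel mask."""
    pixels = image[..., 0][mask]          # red channel, masked region only
    hist, edges = np.histogram(pixels, bins=bins, range=(0, 256))
    return hist, edges

def histogram_mode(hist, edges):
    """Dominant intensity (histogram peak) -- a simple statistic that a
    calibration curve could map to hematocrit."""
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])

# Synthetic example (hypothetical values): a dark sample channel on a
# bright background.
img = np.full((64, 64, 3), 200, dtype=np.uint8)
img[24:40, :, 0] = 90                      # "blood-filled" channel is darker in red
mask = np.zeros((64, 64), dtype=bool)
mask[24:40, :] = True

hist, edges = channel_histogram(img, mask)
print(histogram_mode(hist, edges))         # peak near intensity 90
```

Auto-calibration and multi-channel handling would sit on top of this primitive, e.g. by locating each channel's mask automatically before histogramming.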

27. Venhuizen FG, van Ginneken B, Liefers B, van Grinsven MJ, Fauser S, Hoyng C, Theelen T, Sánchez CI. Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks. BIOMEDICAL OPTICS EXPRESS 2017; 8:3292-3316. [PMID: 28717568 PMCID: PMC5508829 DOI: 10.1364/boe.8.003292]
Abstract
We developed a fully automated system using a convolutional neural network (CNN) for total retina segmentation in optical coherence tomography (OCT) that is robust to the presence of severe retinal pathology. A generalized U-net architecture was introduced to capture the large context needed to account for large retinal changes. The proposed algorithm outperformed two available algorithms both qualitatively and quantitatively. It estimated macular thickness with an error of 14.0 ± 22.1 µm, substantially lower than the errors obtained with the other algorithms (42.9 ± 116.0 µm and 27.1 ± 69.3 µm, respectively). These results highlight the algorithm's capability to model the wide variability in retinal appearance and to obtain robust and reliable retina segmentation even in severely pathological cases.

28. Fang L, Cunefare D, Wang C, Guymer RH, Li S, Farsiu S. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search. BIOMEDICAL OPTICS EXPRESS 2017; 8:2732-2744. [PMID: 28663902 PMCID: PMC5480509 DOI: 10.1364/boe.8.002732]
Abstract
We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed CNN-GS) for the automatic segmentation of nine layer boundaries in retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and trains a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created by the CNN to find the final boundaries. We validated the proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to the effectiveness of the technique.
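The graph-search step described here can be sketched as a dynamic program over a boundary probability map (a generic shortest-path formulation, not the authors' exact graph construction; the smoothness penalty is an illustrative assumption): in each column (A-scan), choose the row minimizing the accumulated cost, with a penalty on vertical jumps between neighboring columns.

```python
import numpy as np

def trace_boundary(prob, smooth=1.0):
    """Dynamic-programming boundary trace through a (rows x cols) probability map.

    Path cost = sum of (1 - prob) along the path, plus `smooth` per pixel of
    vertical jump between adjacent columns; returns one row index per column.
    """
    rows, cols = prob.shape
    cost = 1.0 - prob
    acc = np.zeros_like(cost)               # accumulated cost table
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    r_idx = np.arange(rows)
    for c in range(1, cols):
        # transition cost from every previous row to every current row
        jump = smooth * np.abs(r_idx[:, None] - r_idx[None, :])
        total = acc[:, c - 1][:, None] + jump          # (prev_row, cur_row)
        back[:, c] = np.argmin(total, axis=0)
        acc[:, c] = total[back[:, c], r_idx] + cost[:, c]
    # backtrack from the cheapest endpoint
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

# A toy probability map with a strong horizontal boundary at row 5.
p = np.zeros((10, 8))
p[5, :] = 0.9
print(trace_boundary(p))                    # -> row 5 in every column
```

In CNN-GS the map `p` would come from the CNN classifier's per-boundary probability output rather than being synthetic.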

29. Guo Y, Veneman WJ, Spaink HP, Verbeek FJ. Three-dimensional reconstruction and measurements of zebrafish larvae from high-throughput axial-view in vivo imaging. BIOMEDICAL OPTICS EXPRESS 2017; 8:2611-2634. [PMID: 28663894 PMCID: PMC5480501 DOI: 10.1364/boe.8.002611]
Abstract
High-throughput imaging provides the observations needed for accurate statements about biological phenomena, and has been applied successfully in the domain of cells, i.e. cytomics. In the domain of whole organisms, several hurdles must be overcome to ensure that imaging can be accomplished with sufficient throughput and reproducibility. For vertebrate biology, the zebrafish is a popular model system for high-throughput applications. The Vertebrate Automated Screening Technology (VAST BioImager), a microscope-mounted system, enables high-throughput screening of zebrafish. The VAST BioImager holds a zebrafish in a capillary for imaging; by rotating the capillary, multiple axial views of a specimen can be acquired. Fluorescence and/or confocal microscopes are used with the VAST BioImager. Quantifying a specific signal derived from a label in one fluorescent channel requires insight into the zebrafish volume, so that the quantification can be normalized to volume units. However, a specimen's volume cannot be derived straightforwardly from the VAST BioImager setup. We present a high-throughput axial-view imaging architecture based on the VAST BioImager, and propose profile-based 3D reconstruction to produce volumetric representations of zebrafish larvae from the axial views. Volume and surface area can then be derived from the 3D reconstruction to obtain shape characteristics in high-throughput measurements. In addition, we developed a calibration and validation of our methodology. Our measurements show that accurate volume and surface-area estimates for zebrafish larvae can be obtained from a limited number of views. We applied the proposed method to a range of zebrafish developmental stages and produced metrical references for the volume and surface area of each stage.
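A profile-based volume estimate can be illustrated under a strong simplifying assumption (elliptical cross-sections recovered from two orthogonal silhouette half-widths; the paper's reconstruction uses more views and a full 3D model, so this is only a sketch): slice-wise semi-axes a(z) and b(z) give volume ≈ Σ π·a·b·Δz.

```python
import numpy as np

def volume_from_profiles(half_width_a, half_width_b, dz=1.0):
    """Approximate body volume from two orthogonal silhouette profiles,
    assuming each axial slice is an ellipse with semi-axes a(z) and b(z)."""
    a = np.asarray(half_width_a, dtype=float)
    b = np.asarray(half_width_b, dtype=float)
    return float(np.sum(np.pi * a * b) * dz)

# Sanity check: a circular cylinder of radius 2 and length 10
a = np.full(10, 2.0)
print(volume_from_profiles(a, a))   # ~ pi * 4 * 10 = 125.66...
```

Surface area can be approximated analogously by summing per-slice ellipse perimeters times Δz, with the same caveat that more views reduce the error of the elliptical assumption.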

30. Cheng J, Tao D, Wong DWK, Liu J. Quadratic divergence regularized SVM for optic disc segmentation. BIOMEDICAL OPTICS EXPRESS 2017; 8:2687-2696. [PMID: 28663898 PMCID: PMC5480505 DOI: 10.1364/boe.8.002687]
Abstract
Machine learning has been used in many retinal image processing applications, such as optic disc segmentation. It assumes that the training and testing data sets share the same feature distribution. However, retinal images are often collected under different conditions and may have different feature distributions, so models trained on one data set may not work well on another. It is often too expensive and time consuming to label the training data needed to rebuild the models for every data set. In this paper, we propose a novel quadratic divergence regularized support vector machine (QDSVM) that transfers knowledge from domains with sufficient training data to domains with limited or even no training data. The proposed method minimizes the distribution difference between the source and target domains while simultaneously training the classifier. Experimental results show that the proposed transfer learning approach reduces the classification error at the superpixel level from 14.2% without transfer learning to 2.4% with it. The method effectively transfers label knowledge from the source to the target domain, enabling optic disc segmentation in data sets with different feature distributions.
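The core idea, penalizing the source-target distribution gap while training the classifier, can be sketched with a simple quadratic divergence between mean feature vectors. This is an illustrative stand-in: the paper's actual regularizer and its coupling to the SVM objective differ, and the function names below are hypothetical.

```python
import numpy as np

def quadratic_divergence(source_feats, target_feats):
    """Squared Euclidean distance between the mean feature vectors of the two
    domains -- a simple quadratic measure of distribution difference."""
    mu_s = source_feats.mean(axis=0)
    mu_t = target_feats.mean(axis=0)
    diff = mu_s - mu_t
    return float(diff @ diff)

def regularized_objective(classifier_loss, src, tgt, lam=1.0):
    """Classifier loss plus a lambda-weighted domain-divergence penalty,
    mirroring the 'minimize distribution difference while training' idea."""
    return classifier_loss + lam * quadratic_divergence(src, tgt)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 5))
tgt = rng.normal(0.5, 1.0, size=(200, 5))   # shifted target domain
print(quadratic_divergence(src, tgt) > quadratic_divergence(src, src))
```

In a full method the divergence term would be minimized jointly with the SVM's hinge loss over the classifier parameters, not evaluated on fixed features as here.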

31. Pérez-Merino P, Velasco-Ocana M, Martinez-Enriquez E, Revuelta L, McFadden SA, Marcos S. Three-dimensional OCT based guinea pig eye model: relating morphology and optics. BIOMEDICAL OPTICS EXPRESS 2017; 8:2173-2184. [PMID: 28736663 PMCID: PMC5516822 DOI: 10.1364/boe.8.002173]
Abstract
Custom spectral optical coherence tomography (SOCT), equipped with automatic quantification and distortion correction algorithms, was used to measure the 3-D morphology of guinea pig eyes (n = 8, 30 days; n = 5, 40 days). Animals were measured awake, in vivo, under cycloplegia. Measurements showed low variability (<4% in corneal and anterior lens radii, <8% in posterior lens radii, and <1% in interocular distances). The repeatability of the surface elevation was better than 2 µm. Surface astigmatism was the dominant individual term in all surfaces. Higher-order RMS surface elevation was largest in the posterior lens. Individual surface elevation Zernike terms correlated significantly across the corneal and anterior lens surfaces. Higher-order aberrations (except spherical aberration) were comparable with those predicted by OCT-based eye models.

32. Prieto SP, Lai KK, Laryea JA, Mizell JS, Mustain WC, Muldoon TJ. Fluorescein as a topical fluorescent contrast agent for quantitative microendoscopic inspection of colorectal epithelium. BIOMEDICAL OPTICS EXPRESS 2017; 8:2324-2338. [PMID: 28736674 PMCID: PMC5516830 DOI: 10.1364/boe.8.002324]
Abstract
Fiber bundle microendoscopic imaging of colorectal tissue has shown promising results for both qualitative and quantitative analysis. A quantitative image quality control and image feature extraction algorithm was previously designed for feature analysis of proflavine-stained ex vivo colorectal tissue. Here we investigated fluorescein as an alternative topical stain. Microendoscopic images of ex vivo porcine, caprine, and human colorectal tissue topically stained with fluorescein were compared with images of tissue stained with proflavine. Fluorescein proved comparable for automated crypt detection, with an average crypt detection sensitivity exceeding 90% using a combination of three contrast limit pairs.

33. Wang J, Zhang M, Hwang TS, Bailey ST, Huang D, Wilson DJ, Jia Y. Reflectance-based projection-resolved optical coherence tomography angiography [Invited]. BIOMEDICAL OPTICS EXPRESS 2017; 8:1536-1548. [PMID: 28663848 PMCID: PMC5480563 DOI: 10.1364/boe.8.001536]
Abstract
Optical coherence tomography angiography (OCTA) is limited by projection artifacts from superficial blood vessels onto deeper layers. We recently described projection-resolved (PR) OCTA, which resolves the ambiguity between in situ flow and projected flow along each axial scan and suppresses the artifact on both en face and cross-sectional angiograms. While this method significantly improved the depth resolution of OCTA, the vascular integrity of the deeper layers was not fully preserved. In this study, we propose a novel reflectance-based projection-resolved (rbPR) OCTA algorithm that uses OCT reflectance to enhance the flow signal and suppress projection artifacts in 3-dimensional OCTA. We demonstrate quantitatively that rbPR improves vascular connectivity and the discrimination of the deeper plexus angiograms in healthy eyes compared to the prior PR-OCTA method. We also demonstrate qualitatively that rbPR removes flow projection artifacts more completely from the outer retinal slab in eyes with age-related macular degeneration, and preserves the vascular integrity of the intermediate and deep capillary plexuses in eyes with diabetic retinopathy. Additionally, the method improves the resolution of the choriocapillaris, demonstrating details comparable to scanning electron microscopy.

34. Zang P, Gao SS, Hwang TS, Flaxel CJ, Wilson DJ, Morrison JC, Huang D, Li D, Jia Y. Automated boundary detection of the optic disc and layer segmentation of the peripapillary retina in volumetric structural and angiographic optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2017; 8:1306-1318. [PMID: 28663830 PMCID: PMC5480545 DOI: 10.1364/boe.8.001306]
Abstract
To improve optic disc boundary detection and peripapillary retinal layer segmentation, we propose an automated approach for structural and angiographic optical coherence tomography. The algorithm operates on radial cross-sectional B-scans. The disc boundary is detected by searching for the position of Bruch's membrane opening, and retinal layer boundaries are detected using a dynamic-programming-based graph search on each B-scan, excluding the disc region. Comparing the disc boundary from our method with manual delineation showed good accuracy, with an average Dice similarity coefficient ≥0.90 in healthy eyes and in eyes with diabetic retinopathy and glaucoma. The layer segmentation error in the same cases averaged less than one pixel (3.13 μm).

35. Rehman AU, Anwer AG, Gosnell ME, Mahbub SB, Liu G, Goldys EM. Fluorescence quenching of free and bound NADH in HeLa cells determined by hyperspectral imaging and unmixing of cell autofluorescence. BIOMEDICAL OPTICS EXPRESS 2017; 8:1488-1498. [PMID: 28663844 PMCID: PMC5480559 DOI: 10.1364/boe.8.001488]
Abstract
Carbonyl cyanide-p-trifluoromethoxyphenylhydrazone (FCCP) is a well-known mitochondrial uncoupling agent. We examined FCCP-induced fluorescence quenching of reduced nicotinamide adenine dinucleotide / nicotinamide adenine dinucleotide phosphate (NAD(P)H) in solution and in cultured HeLa cells over a wide range of FCCP concentrations, from 50 to 1000 µM. A non-invasive, label-free method of hyperspectral imaging of cell autofluorescence combined with unsupervised unmixing was used to isolate the emissions of free and bound NAD(P)H from the cell autofluorescence. Hyperspectral image analysis of FCCP-treated HeLa cells confirms that this agent selectively quenches the fluorescence of free and bound NAD(P)H over a broad range of concentrations, which is corroborated by measurements of the average NAD/NADH and NADP/NADPH content in cells. FCCP quenching of free NAD(P)H in cells and in solution was similar, but quenching of bound NAD(P)H in cells was attenuated compared to solution quenching, possibly due to a contribution from the metabolic and/or antioxidant response of the cells. Chemical quenching of NAD(P)H fluorescence by FCCP validates the results of unsupervised unmixing of cell autofluorescence.

36. Vahid MR, Chao J, Kim D, Ward ES, Ober RJ. State space approach to single molecule localization in fluorescence microscopy. BIOMEDICAL OPTICS EXPRESS 2017; 8:1332-1355. [PMID: 28663832 PMCID: PMC5480547 DOI: 10.1364/boe.8.001332]
Abstract
Single molecule super-resolution microscopy enables imaging at sub-diffraction-limit resolution by producing images of subsets of stochastically photoactivated fluorophores over a sequence of frames. In each frame of the sequence, the fluorophores are accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Many methods have been developed for localizing fluorophores from the images. The majority of these methods comprise two separate steps: detection and estimation. In the detection step, fluorophores are identified. In the estimation step, the locations of the identified fluorophores are estimated through an iterative approach. Here, we propose a non-iterative state space-based localization method which combines the detection and estimation steps. We demonstrate that the estimated locations obtained from the proposed method can be used as initial conditions in an estimation routine to potentially obtain improved location estimates. The proposed method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix. The locations of the poles of the resulting system determine the peak locations in the frequency domain, and the locations of the most significant peaks correspond to the single molecule locations in the original image. The performance of the method is validated using both simulated and experimental data.
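The state-space idea described above, recovering pole locations from an SVD-based balanced realization of a Hankel matrix, can be sketched on a 1-D signal. This is a generic eigensystem-realization-style computation under simplifying assumptions (noiseless 1-D data, known model order), not the authors' implementation.

```python
import numpy as np

def estimate_poles(signal, order):
    """Estimate system poles from a 1-D signal via a Hankel-matrix SVD
    (balanced-realization style): peaks in the frequency response sit at
    the poles of the identified system."""
    n = len(signal) // 2
    H0 = np.array([[signal[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[signal[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]     # truncate to model order
    s_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    A = s_inv_sqrt @ U.T @ H1 @ Vt.T @ s_inv_sqrt      # balanced system matrix
    return np.linalg.eigvals(A)                        # poles

# Damped sinusoid: poles should be r * exp(+/- i*w)
r, w = 0.95, 0.4
k = np.arange(40)
y = (r ** k) * np.cos(w * k)
poles = estimate_poles(y, order=2)
print(np.sort(np.abs(poles)))   # both magnitudes ~ 0.95
```

In the imaging setting the "signal" would come from the image's frequency response and the dominant pole locations would mark candidate single-molecule positions, with order selection replacing the fixed `order=2` used here.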

37. Zheng Y, Wang Y, Jiao W, Hou S, Ren Y, Qin M, Hou D, Luo C, Wang H, Gee J, Zhao B. Joint alignment of multispectral images via semidefinite programming. BIOMEDICAL OPTICS EXPRESS 2017; 8:890-901. [PMID: 28270991 PMCID: PMC5330559 DOI: 10.1364/boe.8.000890]
Abstract
In this paper, we introduce a novel feature-point-matching framework for optimized joint alignment of sequential images from multispectral imaging (MSI). It solves for a low-rank, semidefinite matrix storing all pairwise-image feature mappings by minimizing the total point-to-point matching cost via convex optimization of a semidefinite programming formulation. This strategy takes complete account of the information aggregated across all point-matching costs and enables the entire set of pairwise-image feature mappings to be solved simultaneously and near-optimally. Our framework can run in an automatic or interactive fashion, offering an effective tool for eliminating spatial misalignments introduced into sequential MSI images during the imaging process. Experimental results on a database of 28 sequences of MSI images of the human eye demonstrate the superior performance of our approach compared to state-of-the-art techniques. The framework is potentially valuable in a wide variety of practical applications of MSI.

38. Karri SPK, Chakraborty D, Chatterjee J. Transfer learning based classification of optical coherence tomography images with diabetic macular edema and dry age-related macular degeneration. BIOMEDICAL OPTICS EXPRESS 2017; 8:579-592. [PMID: 28270969 PMCID: PMC5330546 DOI: 10.1364/boe.8.000579]
Abstract
We present an algorithm for identifying retinal pathologies from retinal optical coherence tomography (OCT) images. Our approach fine-tunes a pre-trained convolutional neural network (CNN), GoogLeNet, to improve its prediction capability (compared to training from random initialization) and identifies salient responses during prediction to understand the learned filter characteristics. We considered a data set containing subjects with diabetic macular edema, dry age-related macular degeneration, or no pathology. The fine-tuned CNN identified pathologies more effectively than classical learning. Our results demonstrate that models trained on non-medical images can be fine-tuned to classify OCT images with limited training data.

39. Fatima KN, Hassan T, Akram MU, Akhtar M, Butt WH. Fully automated diagnosis of papilledema through robust extraction of vascular patterns and ocular pathology from fundus photographs. BIOMEDICAL OPTICS EXPRESS 2017; 8:1005-1024. [PMID: 28270999 PMCID: PMC5330576 DOI: 10.1364/boe.8.001005]
Abstract
Rapid development in the field of ophthalmology has increased the demand for computer-aided diagnosis of various eye diseases. Papilledema is an eye disease in which the optic disc is swollen due to increased intracranial pressure. This increased pressure can cause severe encephalic complications such as abscess, tumors, meningitis, or encephalitis, which may lead to a patient's death. Although several papilledema case studies have been reported from a medical point of view, only a few researchers have presented automated algorithms for this problem. This paper presents a novel computer-aided system that automatically detects papilledema in fundus images. First, the fundus images are preprocessed via optic disc detection and vessel segmentation. After preprocessing, a total of 26 features are extracted to capture possible changes in the optic disc due to papilledema. These features fall into four categories: color, textural, vascular, and disc margin obscuration properties. The best features are then selected and combined into a feature matrix used to distinguish normal images from images with papilledema using a supervised support vector machine (SVM) classifier. The proposed method was tested on 160 fundus images from two data sets: the publicly available structured analysis of the retina (STARE) data set (90 images) and a local data set acquired from the Armed Forces Institute of Ophthalmology (AFIO) (70 images). The ground-truth annotations were performed with the help of two ophthalmologists. We report detection accuracies of 95.6% for STARE, 87.4% for the local data set, and 85.9% for the combined STARE and local data sets. The proposed system is fast and robust in detecting papilledema from fundus images, with promising results. It will aid physicians in the clinical assessment of fundus images; it will not replace physicians, but will help them in the time-consuming process of screening fundus images.

40. Martinez-Enriquez E, Pérez-Merino P, Velasco-Ocana M, Marcos S. OCT-based full crystalline lens shape change during accommodation in vivo. BIOMEDICAL OPTICS EXPRESS 2017; 8:918-933. [PMID: 28270993 PMCID: PMC5330589 DOI: 10.1364/boe.8.000918]
Abstract
The full shape of the accommodating crystalline lens was estimated using custom three-dimensional (3-D) spectral OCT and image processing algorithms. Automatic segmentation and distortion correction were used to construct 3-D models of the lens region visible through the pupil. The peripheral lens region was estimated with a trained and validated parametric model. Nineteen young eyes were measured at accommodative demands of 0-6 D in 1.5 D steps. Lens volume, surface area, diameter, and equatorial plane position were quantified automatically. Lens diameter and surface area correlated negatively, and equatorial plane position correlated positively, with the accommodative response. Lens volume remained constant and surface area decreased with accommodation, indicating that the lens material is incompressible and the capsular bag elastic.

41. Hackett LP, Seo S, Kim S, Goddard LL, Liu GL. Label-free cell-substrate adhesion imaging on plasmonic nanocup arrays. BIOMEDICAL OPTICS EXPRESS 2017; 8:1139-1151. [PMID: 28271009 PMCID: PMC5330562 DOI: 10.1364/boe.8.001139]
Abstract
Cell adhesion is a crucial biological and biomedical parameter defining cell differentiation, cell migration, cell survival, and state of disease. Because of its importance in cellular function, several tools have been developed in order to monitor cell adhesion in response to various biochemical and mechanical cues. However, there remains a need to monitor cell adhesion and cell-substrate separation with a method that allows real-time measurements on accessible equipment. In this article, we present a method to monitor cell-substrate separation at the single cell level using a plasmonic extraordinary optical transmission substrate, which has a high sensitivity to refractive index changes at the metal-dielectric interface. We show how refractive index changes can be detected using intensity peaks in color channel histograms from RGB images taken of the device surface with a brightfield microscope. This allows mapping of the nonuniform refractive index pattern of a single cell cultured on the plasmonic substrate and therefore high-throughput detection of cell-substrate adhesion with observations in real time.

42. Dongye C, Zhang M, Hwang TS, Wang J, Gao SS, Liu L, Huang D, Wilson DJ, Jia Y. Automated detection of dilated capillaries on optical coherence tomography angiography. BIOMEDICAL OPTICS EXPRESS 2017; 8:1101-1109. [PMID: 28271005 PMCID: PMC5330594 DOI: 10.1364/boe.8.001101]
Abstract
Automated detection and grading of angiographic high-risk features in diabetic retinopathy can potentially enhance screening and clinical care. We have previously identified capillary dilation in angiograms of the deep plexus in optical coherence tomography angiography as a feature associated with severe diabetic retinopathy. In this study, we present an automated algorithm that uses hybrid contrast to distinguish angiograms with dilated capillaries from healthy controls and then applies saliency measurement to map the extent of the dilated capillary networks. The proposed algorithm agreed well with human grading.

43. Abdolmanafi A, Duong L, Dahdah N, Cheriet F. Deep feature learning for automatic tissue classification of coronary artery using optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2017; 8:1203-1220. [PMID: 28271012 PMCID: PMC5330543 DOI: 10.1364/boe.8.001203]
Abstract
Kawasaki disease (KD) is an acute childhood disease complicated by coronary artery aneurysms, intima thickening, thrombi, stenosis, lamellar calcifications, and disappearance of the media border. Automatic classification of the coronary artery layers (intima, media, and scar features) is important for analyzing optical coherence tomography (OCT) images recorded in pediatric patients. OCT is an intracoronary imaging modality using near-infrared light that has recently been used to image the inner coronary artery tissues of pediatric patients, providing high spatial resolution (10 to 20 μm). This study develops a robust, fully automated tissue classification method that uses convolutional neural networks (CNNs) as a feature extractor and compares the predictions of three state-of-the-art classifiers: CNN, random forest (RF), and support vector machine (SVM). The results show the robustness of the CNN feature extractor combined with a random forest classifier, with a classification rate of up to 96%, particularly for characterizing the second coronary artery layer (media), which is very thin and challenging to distinguish from other tissues.

44. Wang Y, Zhang Y, Yao Z, Zhao R, Zhou F. Machine learning based detection of age-related macular degeneration (AMD) and diabetic macular edema (DME) from optical coherence tomography (OCT) images. BIOMEDICAL OPTICS EXPRESS 2016; 7:4928-4940. [PMID: 28018716 PMCID: PMC5175542 DOI: 10.1364/boe.7.004928]
Abstract
Non-lethal macular diseases greatly impact patients' quality of life and cause vision loss at late stages. Visual inspection of optical coherence tomography (OCT) images by experienced clinicians is the main diagnostic technique. We propose a computer-aided diagnosis (CAD) model to discriminate age-related macular degeneration (AMD), diabetic macular edema (DME), and healthy maculae. Linear configuration pattern (LCP) based features of the OCT images were screened by the correlation-based feature subset (CFS) selection algorithm, and the best model, based on the sequential minimal optimization (SMO) algorithm, achieved 99.3% overall accuracy across the three classes of samples.

45. Amelard R, Clausi DA, Wong A. Spectral-spatial fusion model for robust blood pulse waveform extraction in photoplethysmographic imaging. BIOMEDICAL OPTICS EXPRESS 2016; 7:4874-4885. [PMID: 28018712 PMCID: PMC5175538 DOI: 10.1364/boe.7.004874]
Abstract
Photoplethysmographic imaging is an optical solution for non-contact cardiovascular monitoring from a distance. This camera-based technology enables physiological monitoring in situations where contact-based devices may be problematic or infeasible, such as ambulatory, sleep, and multi-individual monitoring. However, automatically extracting the blood pulse waveform signal is challenging due to the unknown mixture of relevant (pulsatile) and irrelevant pixels in the scene. Here, we propose a signal fusion framework, FusionPPG, for extracting a blood pulse waveform signal with strong temporal fidelity from a scene without requiring anatomical priors. The extraction problem is posed as a Bayesian least squares fusion problem, and solved using a novel probabilistic pulsatility model that incorporates both physiologically derived spectral and spatial waveform priors to identify pulsatility characteristics in the scene. Evaluation was performed on a 24-participant sample with various ages (9-60 years) and body compositions (fat% 30.0 ± 7.9, muscle% 40.4 ± 5.3, BMI 25.5 ± 5.2 kg·m-2). Experimental results show stronger matching to the ground-truth blood pulse waveform signal compared to the FaceMeanPPG (p < 0.001) and DistancePPG (p < 0.001) methods. Heart rates predicted using FusionPPG correlated strongly with ground truth measurements (r2 = 0.9952). A cardiac arrhythmia was visually identified in FusionPPG's waveform via temporal analysis.
Collapse
|
46
|
Miri MS, Abràmoff MD, Kwon YH, Garvin MK. Multimodal registration of SD-OCT volumes and fundus photographs using histograms of oriented gradients. BIOMEDICAL OPTICS EXPRESS 2016; 7:5252-5267. [PMID: 28018740 PMCID: PMC5175567 DOI: 10.1364/boe.7.005252] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/20/2016] [Revised: 10/19/2016] [Accepted: 11/11/2016] [Indexed: 05/14/2023]
Abstract
Given the availability of different retinal imaging modalities such as fundus photography and spectral domain optical coherence tomography (SD-OCT), a robust and accurate registration scheme that enables utilization of this complementary information is beneficial. The few existing fundus-OCT registration approaches contain a vessel segmentation step, as the retinal blood vessels are the most dominant structures in common between the pair of images. However, errors in the vessel segmentation from either modality may cause corresponding errors in the registration. In this paper, we propose a feature-based registration method for registering fundus photographs and SD-OCT projection images that benefits from vasculature structural information without requiring blood vessel segmentation. In particular, after a preprocessing step, a set of control points (CPs) is identified by looking for corners in the images. Next, each CP is represented by a feature vector that encodes the local structural information by computing histograms of oriented gradients (HOG) from the neighborhood of each CP. The best-matching CPs are identified by calculating the distance between their corresponding feature vectors. After removing the incorrect matches, the best affine transform that registers fundus photographs to SD-OCT projection images is computed using the random sample consensus (RANSAC) method. The proposed method was tested on 44 pairs of fundus and SD-OCT projection images of glaucoma patients, and the results showed that it successfully registered the multimodal images, producing a registration error of 25.34 ± 12.34 μm (0.84 ± 0.41 pixels).
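The descriptor-and-matching stage can be sketched with a bare-bones HOG computed over a patch around each control point, followed by nearest-neighbour matching in descriptor space (illustrative numpy only; the paper's actual HOG parameters, corner detector, and RANSAC step are not reproduced here):

```python
import numpy as np

def hog_descriptor(patch, n_bins=8):
    """Unsigned histogram of oriented gradients over a patch, weighted by
    gradient magnitude and L2-normalized (toy single-cell HOG)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # fold to [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-12)

def match_points(desc_a, desc_b):
    """For each descriptor in A, index of the nearest descriptor in B."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return d.argmin(axis=1)
```

In the full pipeline, the resulting correspondences would be pruned and fed to RANSAC to estimate the affine transform.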
Collapse
|
47
|
Rico-Jimenez JJ, Campos-Delgado DU, Villiger M, Otsuka K, Bouma BE, Jo JA. Automatic classification of atherosclerotic plaques imaged with intravascular OCT. BIOMEDICAL OPTICS EXPRESS 2016; 7:4069-4085. [PMID: 27867716 PMCID: PMC5102521 DOI: 10.1364/boe.7.004069] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2016] [Revised: 09/09/2016] [Accepted: 09/12/2016] [Indexed: 05/24/2023]
Abstract
Intravascular optical coherence tomography (IV-OCT) allows evaluation of atherosclerotic plaques; however, plaque characterization is performed by visual assessment and requires a trained expert for interpretation of the large data sets. Here, we present a novel computational method for automated IV-OCT plaque characterization. This method is based on modeling each A-line of an IV-OCT data set as a linear combination of a small number of depth profiles. After estimating these depth profiles by means of an alternating least squares optimization strategy, they are automatically classified into predefined tissue types based on their morphological characteristics. The performance of our proposed method was evaluated with IV-OCT scans of cadaveric human coronary arteries and corresponding tissue histopathology. Our results suggest that this methodology allows automated identification of fibrotic and lipid-containing plaques. Moreover, this novel computational method has the potential to enable high-throughput atherosclerotic plaque characterization.
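The decomposition idea, each A-line as a nonnegative mixture of a few depth profiles, can be sketched as an alternating least-squares factorization. This toy version uses pseudo-inverse updates with nonnegativity clipping; the authors' actual solver and the subsequent morphological classification are not reproduced:

```python
import numpy as np

def als_profiles(A, k, iters=200, seed=0):
    """Factor A (a-lines x depth samples) ~ W @ H, where the k rows of H are
    depth profiles and W holds mixing weights, by alternating least squares
    with nonnegativity clipping (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        # closed-form least-squares update of each factor, then clip to >= 0
        W = np.clip(A @ H.T @ np.linalg.pinv(H @ H.T), 0, None)
        H = np.clip(np.linalg.pinv(W.T @ W) @ W.T @ A, 0, None)
    return W, H
```

Each recovered row of H would then be assigned to a tissue type from its shape (e.g. attenuation behaviour with depth).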
Collapse
|
48
|
Nylk J, McCluskey K, Aggarwal S, Tello JA, Dholakia K. Enhancement of image quality and imaging depth with Airy light-sheet microscopy in cleared and non-cleared neural tissue. BIOMEDICAL OPTICS EXPRESS 2016; 7:4021-4033. [PMID: 27867712 PMCID: PMC5102539 DOI: 10.1364/boe.7.004021] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2016] [Revised: 09/02/2016] [Accepted: 09/05/2016] [Indexed: 05/07/2023]
Abstract
We have investigated the effect of Airy illumination on the image quality and depth penetration of digitally scanned light-sheet microscopy in turbid neural tissue. We used Fourier analysis of images acquired using Gaussian and Airy light-sheets to assess their respective image quality versus penetration into the tissue. We observed a three-fold average improvement in image quality at 50 μm depth with the Airy light-sheet. We also used optical clearing to tune the scattering properties of the tissue and found that the improvement when using an Airy light-sheet is greater in the presence of stronger sample-induced aberrations. Finally, we used homogeneous resolution probes in these tissues to quantify absolute depth penetration in cleared samples with each beam type. The Airy light-sheet method extended depth penetration by 30% compared to a Gaussian light-sheet.
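The Fourier-based quality assessment boils down to asking how much spectral energy survives at high spatial frequencies, since scattering and aberrations suppress fine detail first. A minimal sketch of such a metric (the cutoff value is our own choice, not the paper's exact analysis):

```python
import numpy as np

def hf_content(img, cutoff=0.25):
    """Fraction of spectral energy above a normalized radial spatial
    frequency cutoff; a simple proxy for image sharpness (illustrative)."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))   # remove DC, center
    P = np.abs(F) ** 2
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    yy, xx = np.meshgrid(fy, fx, indexing="ij")
    r = np.hypot(yy, xx)                                 # radial frequency
    return P[r > cutoff].sum() / (P.sum() + 1e-12)
```

Comparing this fraction between Gaussian and Airy light-sheet images at matched depths gives a depth-resolved image-quality curve of the kind described above.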
Collapse
|
49
|
Nguyen HD, Hong KS. Bundled-optode implementation for 3D imaging in functional near-infrared spectroscopy. BIOMEDICAL OPTICS EXPRESS 2016; 7:3491-3507. [PMID: 27699115 PMCID: PMC5030027 DOI: 10.1364/boe.7.003491] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/13/2016] [Revised: 08/04/2016] [Accepted: 08/10/2016] [Indexed: 05/03/2023]
Abstract
The paper presents a functional near-infrared spectroscopy (fNIRS)-based bundled-optode method for detecting changes in oxy-hemoglobin (HbO) and deoxy-hemoglobin (HbR) concentrations. fNIRS with 32 optodes is utilized to measure five healthy male subjects' brain-hemodynamic responses to arithmetic tasks. Specifically, the coordinates of 256 voxels in the three-dimensional (3D) volume are computed according to the known probe geometry. The mean path length factor in the Beer-Lambert equation is estimated as a function of the emitter-detector distance and is used to compute the absorption coefficient. The mean values of HbO and HbR obtained from the absorption coefficient are then used to construct a 3D fNIRS image. Our results show that the proposed method, compared with the conventional approach, can detect brain activity with higher spatial resolution. This method can be extended to 3D fNIRS imaging in real-time applications.
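The HbO/HbR computation rests on the modified Beer-Lambert law: optical-density changes at two wavelengths are inverted through an extinction-coefficient matrix scaled by the emitter-detector distance and the differential path length factor. A sketch of that inversion (the extinction values below are representative tabulated examples, not taken from the paper; consult published absorption spectra for real analyses):

```python
import numpy as np

# Example extinction coefficients [1/(mM*cm)] at two wavelengths: [HbO, HbR]
EPS = np.array([[1.4866, 3.8437],   # ~760 nm
                [2.5264, 1.7986]])  # ~850 nm

def mbll(delta_od, distance_cm, dpf):
    """Modified Beer-Lambert law: solve the 2x2 linear system
    delta_od = EPS * (distance * DPF) @ [dHbO, dHbR] for the concentration
    changes (illustrative sketch)."""
    L = distance_cm * np.asarray(dpf)       # effective path length per wavelength
    return np.linalg.solve(EPS * L[:, None], np.asarray(delta_od))
```

Repeating this inversion per voxel, with the path length factor estimated from each emitter-detector distance, is what yields the 3D activation image described above.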
Collapse
|
50
|
Alexander NS, Palczewska G, Stremplewski P, Wojtkowski M, Kern TS, Palczewski K. Image registration and averaging of low laser power two-photon fluorescence images of mouse retina. BIOMEDICAL OPTICS EXPRESS 2016; 7:2671-91. [PMID: 27446697 PMCID: PMC4948621 DOI: 10.1364/boe.7.002671] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2016] [Revised: 06/11/2016] [Accepted: 06/11/2016] [Indexed: 05/18/2023]
Abstract
Two-photon fluorescence microscopy (TPM) is now being used routinely to image live cells for extended periods deep within tissues, including the retina and other structures within the eye. However, very low laser power is a requirement for obtaining TPM images of the retina safely. Unfortunately, a reduction in laser power also reduces the signal-to-noise ratio of collected images, making it difficult to visualize structural details. Here, image registration and averaging methods applied to TPM images of the eye in living animals (without the need for auxiliary hardware) demonstrate the structural information obtainable with laser power down to 1 mW. Image registration provided between 1.4% and 13.0% improvement in image quality compared to averaging images without registration when using a high-fluorescence template, and between 0.2% and 12.0% when employing the average of collected images as the template. We also show diminishing returns in image quality as more images are used to form the average. This work provides a foundation for obtaining informative TPM images with laser powers of 1 mW, compared to previous levels for imaging mice of 6.3 mW or higher [Palczewska G., Nat Med. 20, 785 (2014); Sharma R., Biomed. Opt. Express 4, 1285 (2013)].
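The register-then-average strategy can be sketched with FFT phase correlation for integer-pixel alignment before frame averaging (illustrative numpy; the paper's registration method and template choices are more elaborate):

```python
import numpy as np

def register_shift(ref, img):
    """Estimate the integer circular shift s such that img ~ np.roll(ref, s),
    via the peak of the normalized cross-power spectrum (phase correlation)."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    r = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    py, px = np.unravel_index(r.argmax(), r.shape)
    h, w = ref.shape
    sy, sx = (-py) % h, (-px) % w
    if sy > h // 2: sy -= h                  # map to signed shifts
    if sx > w // 2: sx -= w
    return sy, sx

def register_and_average(frames):
    """Align each frame to the first and average, boosting SNR (sketch)."""
    ref = frames[0]
    acc = np.zeros_like(ref, dtype=float)
    for f in frames:
        sy, sx = register_shift(ref, f)
        acc += np.roll(f, (-sy, -sx), axis=(0, 1))
    return acc / len(frames)
```

Averaging N well-registered frames reduces the noise variance roughly by a factor of N, which is what makes low-power structural detail recoverable; misregistration erodes that gain, consistent with the quality differences reported above.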
Collapse
|