1
Gawlik K, Hausser F, Paul F, Brandt AU, Kadas EM. Active contour method for ILM segmentation in ONH volume scans in retinal OCT. Biomed Opt Express 2018; 9:6497-6518. PMID: 31065445; PMCID: PMC6491014; DOI: 10.1364/boe.9.006497. Received 05/03/2018; revised 06/14/2018; accepted 06/14/2018.
Abstract
The optic nerve head (ONH) is affected by many neurodegenerative and autoimmune inflammatory conditions. Optical coherence tomography can acquire high-resolution 3D ONH scans. However, the ONH's complex anatomy and pathology make image segmentation challenging. This paper proposes a robust approach to segmenting the inner limiting membrane (ILM) in ONH volume scans based on an active contour method of Chan-Vese type, which can handle challenging topological structures. A local intensity fitting energy is added to handle very inhomogeneous image intensities. A suitable boundary potential is introduced to prevent structures belonging to the outer retinal layers from being detected as part of the segmentation. The average intensities in the inner and outer regions are then rescaled locally to account for the different brightness values around the ONH center. Appropriate values for the parameters of this complex computational model are found by optimization based on the differential evolution algorithm. Evaluation showed that the proposed framework significantly improved segmentation results compared to the commercial solution.
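The fitting step at the heart of a Chan-Vese-type active contour can be sketched in a few lines. This is not the authors' model (it omits the curvature regularization, the local intensity fitting energy, and the boundary potential described above); it is only a minimal illustration of the piecewise-constant two-phase idea, alternating mean estimation and pixel reassignment:

```python
import numpy as np

def two_phase_fit(img, n_iter=20):
    """Piecewise-constant (Chan-Vese-style) two-phase fitting.

    Alternates between estimating the mean intensity inside and
    outside the current region and reassigning each pixel to the
    region whose mean it matches better. The curvature and local
    intensity fitting terms of the full model are omitted.
    """
    img = np.asarray(img, dtype=float)
    mask = img > img.mean()          # crude initial contour
    for _ in range(n_iter):
        c_in = img[mask].mean() if mask.any() else 0.0
        c_out = img[~mask].mean() if (~mask).any() else 0.0
        new_mask = (img - c_in) ** 2 < (img - c_out) ** 2
        if np.array_equal(new_mask, mask):
            break                    # converged
        mask = new_mask
    return mask
```

On a clean two-intensity image this reduces to two-class intensity clustering; the full model's extra energy terms are what make it robust on real, inhomogeneous ONH scans.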
Affiliation(s)
- Kay Gawlik: Beuth-Hochschule für Technik Berlin - University of Applied Sciences, Berlin, Germany; NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Frank Hausser: Beuth-Hochschule für Technik Berlin - University of Applied Sciences, Berlin, Germany
- Friedemann Paul: NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany; Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine and Charité - Universitätsmedizin Berlin, Germany
- Alexander U. Brandt: NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany; Department of Neurology, University of California Irvine, CA, USA
- Ella Maria Kadas: NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
2
Gan M, Wang C, Yang T, Yang N, Zhang M, Yuan W, Li X, Wang L. Robust layer segmentation of esophageal OCT images based on graph search using edge-enhanced weights. Biomed Opt Express 2018; 9:4481-4495. PMID: 30615715; PMCID: PMC6157790; DOI: 10.1364/boe.9.004481. Received 06/01/2018; revised 08/17/2018; accepted 08/20/2018.
Abstract
Automatic segmentation of esophageal layers in OCT images is crucial for studying esophageal diseases and for computer-assisted diagnosis. This work aims to improve on current techniques to increase the accuracy and robustness of esophageal OCT image segmentation. A two-step edge-enhanced graph search (EEGS) framework is proposed. First, a preprocessing scheme is applied to suppress speckle noise and remove disturbances in the esophageal structure. Second, the image is formulated as a graph and layer boundaries are located by graph search. In this process, we propose an edge-enhanced weight matrix for the graph that combines vertical gradients with a Canny edge map. Experiments on esophageal OCT images from guinea pigs demonstrate that the EEGS framework is more robust and more accurate than the current segmentation method. It is potentially useful for the early detection of esophageal diseases.
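The graph-search step can be illustrated with a toy dynamic-programming boundary tracker. This sketch is not the paper's EEGS pipeline: it assumes a generic per-pixel cost image (in the paper, edge-enhanced weights combining vertical gradients with a Canny map play this role) and restricts row transitions to one pixel per column:

```python
import numpy as np

def trace_boundary(cost):
    """Track one layer boundary through a cost image by dynamic
    programming: for each column, pick the boundary row forming the
    minimum-cost left-to-right path, allowing the row to move by at
    most one pixel between neighboring columns."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()        # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            k = int(np.argmin(acc[lo:hi, c - 1]))
            acc[r, c] += acc[lo + k, c - 1]
            back[r, c] = lo + k
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):       # backtrack the best path
        path[c - 1] = back[path[c], c]
    return path
```

Full graph-search formulations allow richer smoothness constraints and multiple surfaces, but the shortest-path principle is the same.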
Affiliation(s)
- Meng Gan: Department of Electronic and Information Engineering, Soochow University, Suzhou 215006, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Cong Wang: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Ting Yang: Department of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Na Yang: Department of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Miao Zhang: Department of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Wu Yuan: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Xingde Li: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Lirong Wang: Department of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
3
Shah A, Zhou L, Abrámoff MD, Wu X. Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images. Biomed Opt Express 2018; 9:4509-4526. PMID: 30615698; PMCID: PMC6157759; DOI: 10.1364/boe.9.004509. Received 06/26/2018; revised 08/17/2018; accepted 08/18/2018.
Abstract
Automated segmentation of object boundaries or surfaces is crucial for quantitative image analysis in numerous biomedical applications. For example, retinal surfaces in optical coherence tomography (OCT) images play a vital role in the diagnosis and management of retinal diseases. Recently, graph-based surface segmentation and contour modeling have been developed and optimized for various surface segmentation tasks. These methods require expertly designed, application-specific transforms, including cost functions, constraints, and model parameters. Deep-learning-based methods, by contrast, are able to learn the model and features directly from training data. In this paper, we propose a convolutional neural network (CNN) based framework to segment multiple surfaces simultaneously. We demonstrate the application of the proposed method by training a single CNN to segment three retinal surfaces in two types of OCT images: normal retinas and retinas affected by intermediate age-related macular degeneration (AMD). The trained network directly infers the segmentations for each B-scan in one pass. The proposed method was validated on 50 retinal OCT volumes (3000 B-scans), including 25 normal and 25 intermediate AMD subjects. Our experiments demonstrated statistically significant improvements in segmentation accuracy compared to the optimal surface segmentation method with convex priors (OSCS) and two deep-learning-based UNET methods for both types of data. The average computation time for segmenting an entire OCT volume (60 B-scans each) was 12.3 seconds, demonstrating lower computational cost and higher performance compared to the graph-based optimal surface segmentation and UNET-based methods.
Affiliation(s)
- Abhay Shah: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Leixin Zhou: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Michael D. Abrámoff: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA; Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA; Department of Ophthalmology and Visual Sciences, Carver College of Medicine, University of Iowa, Iowa City, IA, USA
- Xiaodong Wu: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA; Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
4
Srivastava R, Yow AP, Cheng J, Wong DWK, Tey HL. Three-dimensional graph-based skin layer segmentation in optical coherence tomography images for roughness estimation. Biomed Opt Express 2018; 9:3590-3606. PMID: 30338142; PMCID: PMC6191621; DOI: 10.1364/boe.9.003590. Received 03/19/2018; revised 05/10/2018; accepted 05/16/2018.
Abstract
Automatic skin layer segmentation in optical coherence tomography (OCT) images is important for topographic assessment of the skin and for skin disease detection. However, existing methods cannot deal with shadowing in OCT images caused by the presence of hair, scales, etc. In this work, we propose a method to segment the topmost layer of the skin (the skin surface) using 3D graphs with a novel cost function that deals with shadowing in OCT images. The 3D graph cut uses context information across B-scans when segmenting the skin surface, which improves the segmentation compared to segmenting each B-scan separately. The proposed method reduces the segmentation error by more than 20% compared to the best-performing related work. The method has been applied to roughness estimation and shows a high correlation with manual assessment. These promising results demonstrate the usefulness of the proposed method for skin layer segmentation and roughness estimation in both normal OCT images and OCT images with shadowing.
Affiliation(s)
- Ruchir Srivastava: Institute for Infocomm Research, 1 Fusionopolis Way, No. 21-01 Connexis (South Tower), 138632, Singapore
- Ai Ping Yow: Institute for Infocomm Research, 1 Fusionopolis Way, No. 21-01 Connexis (South Tower), 138632, Singapore
- Jun Cheng: Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, 1219 Zhongguan West Road, Zhenhai District, Ningbo 315201, China
- Damon W. K. Wong: Institute for Infocomm Research, 1 Fusionopolis Way, No. 21-01 Connexis (South Tower), 138632, Singapore
- Hong Liang Tey: National Skin Center, 1 Mandalay Road, 308205, Singapore; Lee Kong Chian School of Medicine, Headquarters and Clinical Sciences Building, 11 Mandalay Road, 308232, Singapore
5
Hristu R, Eftimie LG, Stanciu SG, Tranca DE, Paun B, Sajin M, Stanciu GA. Quantitative second harmonic generation microscopy for the structural characterization of capsular collagen in thyroid neoplasms. Biomed Opt Express 2018; 9:3923-3936. PMID: 30338165; PMCID: PMC6191628; DOI: 10.1364/boe.9.003923. Received 04/13/2018; revised 07/13/2018; accepted 07/14/2018.
Abstract
Quantitative second harmonic generation microscopy was used to investigate collagen organization in the fibrillar capsules of human benign and malignant thyroid nodules. We demonstrate that the combination of texture analysis and second harmonic generation images of collagen can be used to differentiate between capsules surrounding the thyroid follicular adenoma and papillary carcinoma nodules. Our findings indicate that second harmonic generation microscopy can provide quantitative information about the collagenous capsule surrounding both the thyroid and thyroid nodules, which may complement traditional histopathological examination.
Affiliation(s)
- Radu Hristu: Center for Microscopy-Microanalysis and Information Processing, University Politehnica of Bucharest, Bucharest, Romania
- Lucian G Eftimie: Center for Microscopy-Microanalysis and Information Processing, University Politehnica of Bucharest, Bucharest, Romania; Central University Emergency Military Hospital, Pathology Department, 134 Calea Plevnei, 010825 Bucharest, Romania; Carol Davila University of Medicine and Pharmacy, 37 Dionisie Lupu, 030167 Bucharest, Romania
- Stefan G Stanciu: Center for Microscopy-Microanalysis and Information Processing, University Politehnica of Bucharest, Bucharest, Romania
- Denis E Tranca: Center for Microscopy-Microanalysis and Information Processing, University Politehnica of Bucharest, Bucharest, Romania
- Bogdan Paun: Faculty of Energetics, University Politehnica of Bucharest, 313 Splaiul Independentei, 060042 Bucharest, Romania; currently with the Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 26-28 George Baritiu St, 40002 Cluj-Napoca, Romania
- Maria Sajin: Carol Davila University of Medicine and Pharmacy, 37 Dionisie Lupu, 030167 Bucharest, Romania
- George A Stanciu: Center for Microscopy-Microanalysis and Information Processing, University Politehnica of Bucharest, Bucharest, Romania
6
Cunefare D, Langlo CS, Patterson EJ, Blau S, Dubra A, Carroll J, Farsiu S. Deep learning based detection of cone photoreceptors with multimodal adaptive optics scanning light ophthalmoscope images of achromatopsia. Biomed Opt Express 2018; 9:3740-3756. PMID: 30338152; PMCID: PMC6191607; DOI: 10.1364/boe.9.003740. Received 04/04/2018; revised 07/15/2018; accepted 07/15/2018.
Abstract
Fast and reliable quantification of cone photoreceptors is a bottleneck in the clinical utilization of adaptive optics scanning light ophthalmoscope (AOSLO) systems for the study, diagnosis, and prognosis of retinal diseases. To date, manual grading has been the sole reliable source of AOSLO quantification, as no automatic method has been reliably utilized for cone detection in real-world, low-quality images of diseased retina. We present a novel deep-learning-based approach that combines information from both the confocal and non-confocal split detector AOSLO modalities to detect cones in subjects with achromatopsia. Our dual-mode deep learning approach outperforms state-of-the-art automated techniques and is on a par with human grading.
Affiliation(s)
- David Cunefare: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Christopher S. Langlo: Department of Cell Biology, Neurobiology, and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Emily J. Patterson: Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Sarah Blau: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Alfredo Dubra: Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Joseph Carroll: Department of Cell Biology, Neurobiology, and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA; Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Sina Farsiu: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
7
Azuma S, Makita S, Miyazawa A, Ikuno Y, Miura M, Yasuno Y. Pixel-wise segmentation of severely pathologic retinal pigment epithelium and choroidal stroma using multi-contrast Jones matrix optical coherence tomography. Biomed Opt Express 2018; 9:2955-2973. PMID: 29984078; PMCID: PMC6033570; DOI: 10.1364/boe.9.002955. Received 03/06/2018; revised 05/22/2018; accepted 05/23/2018.
Abstract
Tissue segmentation of retinal optical coherence tomography (OCT) is widely used in ophthalmic diagnosis. However, its performance in severe pathologic cases is still insufficient. We propose a pixel-wise segmentation method that uses the multi-contrast measurement capability of Jones matrix OCT (JM-OCT). This method is applicable to both normal and pathologic retinal pigment epithelium (RPE) and choroidal stroma. In this method, "features," which are sensitive to specific tissues of interest, are synthesized by combining the multi-contrast images of JM-OCT, including attenuation coefficient, degree-of-polarization-uniformity, and OCT angiography. The tissue segmentation is done by simple thresholding of the feature. Compared with conventional segmentation methods for pathologic maculae, the proposed method is less computationally intensive. The segmentation method was validated by applying it to images from normal and severely pathologic cases. The segmentation results enabled the development of several types of en face visualizations, including melano-layer thickness maps, RPE elevation maps, choroidal thickness maps, and choroidal stromal attenuation coefficient maps. These facilitate close examination of macular pathology. The melano-layer thickness map is very similar to a near infrared fundus autofluorescence image, so the map can be used to identify the source of a hyper-autofluorescent signal.
Affiliation(s)
- Shinnosuke Azuma: Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan; Computational Optics and Ophthalmology Group, Tsukuba, Ibaraki 305-8531, Japan
- Shuichi Makita: Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan; Computational Optics and Ophthalmology Group, Tsukuba, Ibaraki 305-8531, Japan
- Arata Miyazawa: Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan; Computational Optics and Ophthalmology Group, Tsukuba, Ibaraki 305-8531, Japan
- Yasushi Ikuno: Ikuno Eye Center, 2-9-10-3F Juso-Higashi, Yodogawa-Ku, Osaka 532-0023, Japan
- Masahiro Miura: Computational Optics and Ophthalmology Group, Tsukuba, Ibaraki 305-8531, Japan; Tokyo Medical University Ibaraki Medical Center, 3-20-1 Chuo, Ami, Ibaraki 300-0395, Japan
- Yoshiaki Yasuno: Computational Optics Group, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan; Computational Optics and Ophthalmology Group, Tsukuba, Ibaraki 305-8531, Japan
8
Hamwood J, Alonso-Caneiro D, Read SA, Vincent SJ, Collins MJ. Effect of patch size and network architecture on a convolutional neural network approach for automatic segmentation of OCT retinal layers. Biomed Opt Express 2018; 9:3049-3066. PMID: 29984082; PMCID: PMC6033561; DOI: 10.1364/boe.9.003049. Received 03/27/2018; revised 06/01/2018; accepted 06/01/2018.
Abstract
Deep learning strategies, particularly convolutional neural networks (CNNs), are especially suited to finding patterns in images and using those patterns for image classification. The method is normally applied to an image patch and assigns a class weight to the patch; it has recently been used to estimate the probability of retinal boundary locations in OCT images, which is subsequently used to segment the OCT image with a graph-search approach. This paper examines the effects of a number of modifications to the CNN architecture with the aim of optimizing retinal layer segmentation, specifically the effect of patch size and of network architecture design on CNN performance and subsequent layer segmentation. The results demonstrate that increasing patch size can improve classification performance and provides a more reliable segmentation in the analysis of retinal layer characteristics in OCT imaging. Similarly, this work shows that changing aspects of the network design can also significantly improve the segmentation results. The performance of the method can also change depending on the number of classes (i.e., boundaries) used to train the CNN, with fewer classes showing inferior performance due to similar image features between classes that can trigger false positives. Changes to the network (patch size and/or architecture) can be applied to provide a superior segmentation performance that is robust to this class effect. The findings from this work may inform future CNN development in OCT retinal image analysis.
Affiliation(s)
- Jared Hamwood: Contact Lens and Visual Optics Laboratory, School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Queensland, Australia
- David Alonso-Caneiro: Contact Lens and Visual Optics Laboratory, School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Queensland, Australia
- Scott A. Read: Contact Lens and Visual Optics Laboratory, School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Queensland, Australia
- Stephen J. Vincent: Contact Lens and Visual Optics Laboratory, School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Queensland, Australia
- Michael J. Collins: Contact Lens and Visual Optics Laboratory, School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Queensland, Australia
9
Cao Y, Jin Q, Lu Y, Jing J, Chen Y, Yin Q, Qin X, Li J, Zhu R, Zhao W. Automatic analysis of bioresorbable vascular scaffolds in intravascular optical coherence tomography images. Biomed Opt Express 2018; 9:2495-2510. PMID: 30258668; PMCID: PMC6154186; DOI: 10.1364/boe.9.002495. Received 03/06/2018; revised 04/19/2018; accepted 04/24/2018.
Abstract
The bioresorbable vascular scaffold (BVS) is a new generation of bioresorbable scaffold (BRS) for the treatment of coronary artery disease. A potential complication of BVS is malapposition, which may lead to late stent thrombosis, so it is important to conduct malapposition analysis right after stenting. Since an intravascular optical coherence tomography (IVOCT) image sequence contains thousands of BVS struts, manual analysis is labor-intensive and time-consuming. Computer-based automatic analysis is an alternative, but faces difficulties due to interference from blood artifacts and the uncertainty of strut number, position, and size. In this paper, we propose a novel framework for strut malapposition analysis that breaks the problem into two steps. First, struts are detected by a cascade classifier trained with AdaBoost, and a region of interest (ROI) is determined for each strut so as to completely contain it. Then, strut boundaries are segmented within the ROIs through dynamic programming. Based on the segmentation result, malapposition analysis is conducted automatically. Tested on 7 pullbacks labeled by an expert, our method correctly detected 91.5% of 5821 BVS struts with 12.1% false positives. The average segmentation Dice coefficient for correctly detected struts was 0.81, and processing a pullback took 15 sec on average. We conclude that our method is accurate and efficient for BVS strut detection and segmentation, and enables automatic BVS malapposition analysis in IVOCT images.
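The Dice coefficient reported above is the standard overlap measure between a detected and a reference mask; a minimal sketch of its computation:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity: twice the overlap divided by the total size
    of the two masks; 1.0 means perfect agreement."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0                      # both masks empty
    return 2.0 * np.logical_and(pred, truth).sum() / total
```

A Dice of 0.81 for small structures such as strut boundaries is considered strong, since the measure penalizes small objects harshly.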
Affiliation(s)
- Yihui Cao: State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, Shaanxi, China; School of the Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Qinhua Jin: Department of Cardiology, Chinese PLA General Hospital, Beijing, China
- Yifeng Lu: State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, Shaanxi, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Jing Jing: Department of Cardiology, Chinese PLA General Hospital, Beijing, China
- Yundai Chen: Department of Cardiology, Chinese PLA General Hospital, Beijing, China
- Qinye Yin: School of the Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Xianjing Qin: Department of Aerospace Biodynamics, Fourth Military Medical University, Xi’an 710032, Shaanxi, China; Xidian University, Xi’an 710071, Shaanxi, China
- Jianan Li: State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, Shaanxi, China
- Rui Zhu: State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, Shaanxi, China
- Wei Zhao: State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, Shaanxi, China
10
Nolte L, Antonopoulos GC, Rämisch L, Heisterkamp A, Ripken T, Meyer H. Enabling second harmonic generation as a contrast mechanism for optical projection tomography (OPT) and scanning laser optical tomography (SLOT). Biomed Opt Express 2018; 9:2627-2639. PMID: 30258678; PMCID: PMC6154203; DOI: 10.1364/boe.9.002627. Received 03/22/2018; accepted 04/20/2018.
Abstract
Volumetric imaging of connective tissue provides insights into the structure of biological tissue. Second harmonic generation (SHG) microscopy has become a standard method for imaging collagen-rich tissue such as skin or cornea. Because the signal arises intrinsically from collagen's non-centrosymmetric architecture, no additional label is needed and tissue can be visualized noninvasively. SHG microscopy thus enables the investigation of collagen-associated diseases, providing high-resolution images and a field of view of several hundred μm. However, the in toto visualization of larger samples is limited by the working distance of the objective and the integration time of the microscope setup, which can add up to several hours or even days. A faster imaging technique for samples in the mesoscopic range is scanning laser optical tomography (SLOT), which provides linear fluorescence, scattering, and absorption as intrinsic contrast mechanisms. Given the advantages of SHG and the reduced measurement time of SLOT, integrating SHG into SLOT would be a valuable extension: SHG measurements could then be performed faster on large samples, with isotropic resolution and simultaneous acquisition of all other available contrast mechanisms, such as fluorescence and absorption. SLOT is based on the principle of computed tomography, which requires rotation of the sample. The SHG signal, however, depends strongly on the sample orientation and the polarization of the laser, which causes SHG intensity fluctuations during sample rotation and prevents successful 3D reconstruction. In this paper we investigate the angular dependence of the SHG signal by simulation and experiment, and present a way to eliminate the reconstruction artifacts this angular dependence causes in SHG-SLOT data. It is now possible to visualize samples in the mesoscopic range using SHG-SLOT, with isotropic resolution and in correlation with other contrast mechanisms such as absorption, fluorescence, and scattering.
Affiliation(s)
- Lena Nolte: Industrial and Biomedical Optics Department, Laser Zentrum Hannover e.V., Hannover, Germany
- Lisa Rämisch: Industrial and Biomedical Optics Department, Laser Zentrum Hannover e.V., Hannover, Germany
- Tammo Ripken: Industrial and Biomedical Optics Department, Laser Zentrum Hannover e.V., Hannover, Germany
- Heiko Meyer: Industrial and Biomedical Optics Department, Laser Zentrum Hannover e.V., Hannover, Germany
11
Venhuizen FG, van Ginneken B, Liefers B, van Asten F, Schreur V, Fauser S, Hoyng C, Theelen T, Sánchez CI. Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography. Biomed Opt Express 2018; 9:1545-1569. PMID: 29675301; PMCID: PMC5905905; DOI: 10.1364/boe.9.001545. Received 11/15/2017; revised 01/13/2018; accepted 01/31/2018.
Abstract
We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes, independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance, reaching an overall Dice coefficient of 0.754 ± 0.136 for IRC segmentation and an intraclass correlation coefficient of 0.936 for IRC quantification. The proposed method allows fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and enable fast and reliable analysis in large population studies.
Affiliation(s)
- Freerk G. Venhuizen: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands; Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
- Bram van Ginneken: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Bart Liefers: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands; Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
- Freekje van Asten: Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
- Vivian Schreur: Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
- Sascha Fauser: Roche Pharma Research and Early Development, F. Hoffmann-La Roche Ltd, Basel, Switzerland; Cologne University Eye Clinic, Cologne, Germany
- Carel Hoyng: Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
- Thomas Theelen: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands; Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
- Clara I. Sánchez: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands; Department of Ophthalmology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, the Netherlands
12
|
Wu S, Huang Y, Tang Q, Li Z, Horng H, Li J, Wu Z, Chen Y, Li H. Quantitative evaluation of redox ratio and collagen characteristics during breast cancer chemotherapy using two-photon intrinsic imaging. Biomed Opt Express 2018. [PMID: 29541528 PMCID: PMC5846538 DOI: 10.1364/boe.9.001375] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Indexed: 05/08/2023]
Abstract
Preoperative neoadjuvant treatment of locally advanced breast cancer is recognized as an effective adjuvant therapy, as it improves treatment outcomes. However, potential complications remain a threat, so there is an urgent clinical need to assess both the tumor response and changes in its microenvironment using non-invasive and precise identification techniques. Here, two-photon microscopy was employed to detect morphological alterations during breast cancer progression and recession throughout chemotherapy. Structural changes were analyzed based on autofluorescence and collagen of differing statuses. Parameters including the optical redox ratio, the ratio of second harmonic generation to autofluorescence signal, collagen density, and collagen shape orientation were studied. The results indicate that these parameters are potential indicators for evaluating breast tumors and their microenvironment changes during progression and chemotherapy. Combined analysis of these parameters could provide a quantitative, novel method for monitoring tumor therapy.
Affiliation(s)
- Shulian Wu: College of Photonic and Electronic Engineering, Fujian Normal University, Fujian Provincial Key Laboratory of Photonic Technology, Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fuzhou, Fujian, 350007, China; Fischell Department of Bioengineering, University of Maryland, College Park, MD, 20742, USA (contributed equally)
- Yudian Huang: Department of Pathology, Fuzhou First Hospital Affiliated to Fujian Medical University, Fuzhou, Fujian, 350009, China (contributed equally)
- Qinggong Tang: Fischell Department of Bioengineering, University of Maryland, College Park, MD, 20742, USA
- Zhifang Li: College of Photonic and Electronic Engineering, Fujian Normal University, Fujian Provincial Key Laboratory of Photonic Technology, Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fuzhou, Fujian, 350007, China
- Hannah Horng: Fischell Department of Bioengineering, University of Maryland, College Park, MD, 20742, USA
- Jiatian Li: College of Photonic and Electronic Engineering, Fujian Normal University, Fujian Provincial Key Laboratory of Photonic Technology, Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fuzhou, Fujian, 350007, China
- Zaihua Wu: Department of Pathology, Fuzhou First Hospital Affiliated to Fujian Medical University, Fuzhou, Fujian, 350009, China
- Yu Chen: College of Photonic and Electronic Engineering, Fujian Normal University, Fujian Provincial Key Laboratory of Photonic Technology, Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fuzhou, Fujian, 350007, China; Fischell Department of Bioengineering, University of Maryland, College Park, MD, 20742, USA
- Hui Li: College of Photonic and Electronic Engineering, Fujian Normal University, Fujian Provincial Key Laboratory of Photonic Technology, Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fuzhou, Fujian, 350007, China

13
Yu K, Shi F, Gao E, Zhu W, Chen H, Chen X. Shared-hole graph search with adaptive constraints for 3D optic nerve head optical coherence tomography image segmentation. Biomed Opt Express 2018; 9:962-983. [PMID: 29541497 PMCID: PMC5846542 DOI: 10.1364/boe.9.000962] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Received: 11/13/2017] [Revised: 01/08/2018] [Accepted: 01/23/2018] [Indexed: 05/18/2023]
Abstract
The optic nerve head (ONH) is a crucial region for glaucoma detection and tracking based on spectral domain optical coherence tomography (SD-OCT) images. In this region, the existence of a "hole" structure makes retinal layer segmentation and analysis very challenging. To improve retinal layer segmentation, we propose a 3D method for ONH-centered SD-OCT image segmentation, based on a modified graph search algorithm with shared-hole and locally adaptive constraints. With the proposed method, both the optic disc boundary and nine retinal surfaces can be accurately segmented in SD-OCT images. An overall mean unsigned border positioning error of 7.27 ± 5.40 µm was achieved for layer segmentation, and a mean Dice coefficient of 0.925 ± 0.03 was achieved for optic disc region detection.
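The unsigned border positioning error used in the evaluation above is the mean absolute distance between segmented and reference surface positions, converted to micrometers. A minimal sketch (the axial resolution value in the example is a placeholder assumption, not taken from the paper):

```python
def mean_unsigned_border_error(segmented, reference, axial_res_um):
    """Mean absolute difference between two retinal surfaces, in µm.

    Each surface is a list of axial pixel indices, one per A-scan;
    axial_res_um converts pixel offsets to micrometers.
    """
    if len(segmented) != len(reference):
        raise ValueError("surfaces must cover the same A-scans")
    errors = [abs(s - r) * axial_res_um for s, r in zip(segmented, reference)]
    return sum(errors) / len(errors)

# Toy example: offsets of 0, 1, and 2 pixels at an assumed 3.9 µm/pixel
print(mean_unsigned_border_error([10, 11, 12], [10, 12, 14], 3.9))  # 3.9
```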
Affiliation(s)
- Kai Yu: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China (contributed equally)
- Fei Shi: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China (contributed equally)
- Enting Gao: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Weifang Zhu: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Haoyu Chen: Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, Shantou 515041, China
- Xinjian Chen: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China (corresponding author)

14
|
Bozic I, Li X, Tao Y. Quantitative biometry of zebrafish retinal vasculature using optical coherence tomographic angiography. Biomed Opt Express 2018; 9:1244-1255. [PMID: 29541517 PMCID: PMC5846527 DOI: 10.1364/boe.9.001244] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Received: 12/08/2017] [Revised: 02/10/2018] [Accepted: 02/14/2018] [Indexed: 06/01/2023]
Abstract
The zebrafish is a robust model for studying human ophthalmic function and disease because of its fecundity, its life cycle, and the similarities between its retinal structure and that of the human retina. Here, we demonstrate longitudinal in vivo imaging of retinal structure using optical coherence tomography (OCT) and noninvasive retinal vascular perfusion imaging using OCT angiography (OCT-A) in zebrafish. In addition, we present methods for retinal vascular segmentation and biometry to quantify vessel branch length, curvature, and angle. We further motivate retinal vascular biometry as a novel method for noninvasive zebrafish identification, demonstrating 99.9% accuracy for uniquely identifying eyes from a set of 200 longitudinal OCT/OCT-A volumes. The described methods enable quantitative analysis of vascular changes in zebrafish models of ophthalmic diseases and may broadly benefit large-scale zebrafish studies.
Affiliation(s)
- Ivan Bozic: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA (contributed equally)
- Xiaoyue Li: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA (contributed equally)
- Yuankai Tao: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA

15
|
Li Z, Huang F, Zhang J, Dashtbozorg B, Abbasi-Sureshjani S, Sun Y, Long X, Yu Q, Romeny BTH, Tan T. Multi-modal and multi-vendor retina image registration. Biomed Opt Express 2018; 9:410-422. [PMID: 29552382 PMCID: PMC5854047 DOI: 10.1364/boe.9.000410] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Received: 10/04/2017] [Revised: 12/09/2017] [Accepted: 12/10/2017] [Indexed: 05/04/2023]
Abstract
Multi-modal retinal image registration is often required to exploit the complementary information from different retinal imaging modalities. However, robust and accurate registration remains a challenge because resolution, contrast, and luminosity vary across modalities. In this paper, a two-step registration method is proposed to address this problem. In the first step, descriptor matching on mean phase images registers the images globally. In the second step, deformable registration based on the modality independent neighbourhood descriptor (MIND) locally refines the result. The proposed method is extensively evaluated on color fundus images and scanning laser ophthalmoscope (SLO) images. Both qualitative and quantitative tests demonstrate improved registration compared to the state-of-the-art: the proposed method produces significantly and substantially larger mean Dice coefficients than other methods (p < 0.001). It may facilitate the measurement of corresponding features across different retinal images, which can aid in assessing certain retinal diseases.
Affiliation(s)
- Zhang Li: College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China; Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation, Changsha 410073, China
- Fan Huang: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Jiong Zhang: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Behdad Dashtbozorg: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Samaneh Abbasi-Sureshjani: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Yue Sun: Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Xi Long: Electrical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands
- Qifeng Yu: College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China; Hunan Provincial Key Laboratory of Image Measurement and Vision Navigation, Changsha 410073, China
- Bart ter Haar Romeny: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands; Department of Biomedical and Information Technology, Northeastern University, Shenyang, 110000, China
- Tao Tan: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven 5600 MB, The Netherlands; Research and Development, ScreenPoint Medical, Nijmegen, 6512 AB, The Netherlands

16
|
Germann JA, Martinez-Enriquez E, Marcos S. Quantization of collagen organization in the stroma with a new order coefficient. Biomed Opt Express 2018; 9:173-189. [PMID: 29359095 PMCID: PMC5772573 DOI: 10.1364/boe.9.000173] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Received: 07/26/2017] [Revised: 11/20/2017] [Accepted: 11/21/2017] [Indexed: 05/20/2023]
Abstract
Many optical and biomechanical properties of the cornea, specifically the transparency of the stroma and its stiffness, can be traced to the degree of order and the direction of the constituent collagen fibers. To measure the degree of order inside the cornea, a new metric, the order coefficient, was introduced to quantify the organization of the collagen fibers from images of the stroma produced with a custom-developed second harmonic generation microscope. The order coefficient method gave a quantitative assessment of the differences in stromal collagen arrangement across corneal depths and between untreated and cross-linked stroma.
17
Halimi A, Batatia H, Le Digabel J, Josse G, Tourneret JY. Wavelet-based statistical classification of skin images acquired with reflectance confocal microscopy. Biomed Opt Express 2017; 8:5450-5467. [PMID: 29296480 PMCID: PMC5745095 DOI: 10.1364/boe.8.005450] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Received: 09/29/2017] [Revised: 10/18/2017] [Accepted: 10/19/2017] [Indexed: 06/07/2023]
Abstract
Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem, and few automatic processing techniques exist. Those that do are mostly based on machine learning approaches and rely on numerous classical image features, leading to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which lentigo can be detected. The proposed method performs a multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very low-dimensional parameter space. SVM classifiers are then used to classify the scale parameter of this distribution, allowing real-time detection of lentigo. The method was applied to 45 patients (healthy and with lentigo) from a clinical study, achieving a sensitivity of 81.4% and a specificity of 83.3%. Our results show that lentigo is identifiable at depths between 50 µm and 60 µm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with clinical practice, which characterizes lentigo by assessing the disorganization of the dermoepidermal junction.
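The key modeling step above is fitting a generalized Gaussian distribution (GGD) to subband coefficients and classifying its parameters. A hedged sketch of the standard moment-matching estimator for the GGD shape parameter (a generic textbook method, not necessarily the authors' exact estimator):

```python
import math

def ggd_shape(samples):
    """Estimate the shape parameter beta of a zero-mean generalized Gaussian
    p(x) ∝ exp(-(|x|/alpha)^beta) by matching the empirical ratio
    E|x| / sqrt(E[x^2]) against its closed form
    Γ(2/β) / sqrt(Γ(1/β) Γ(3/β)), via a simple grid search.
    beta = 2 recovers the Gaussian, beta = 1 the Laplacian.
    """
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n
    m2 = sum(x * x for x in samples) / n
    rho = m1 / math.sqrt(m2)

    def theoretical_ratio(beta):
        return math.gamma(2.0 / beta) / math.sqrt(
            math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

    candidates = [0.2 + 0.01 * k for k in range(481)]  # beta in [0.2, 5.0]
    return min(candidates, key=lambda b: abs(theoretical_ratio(b) - rho))
```

For Gaussian-distributed coefficients the estimate converges to β ≈ 2; the fitted scale/shape parameters at each depth could then feed an SVM classifier, as described in the abstract.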
Affiliation(s)
- Abdelghafour Halimi: University of Toulouse, IRIT-INPT, 2 rue Camichel, BP 7122, 31071 Toulouse cedex 7, France
- Hadj Batatia: University of Toulouse, IRIT-INPT, 2 rue Camichel, BP 7122, 31071 Toulouse cedex 7, France
- Jimmy Le Digabel: Centre de Recherche sur la Peau, Pierre Fabre Dermo-Cosmétique, 2 rue Viguerie, 31025 Toulouse Cedex 3, France
- Gwendal Josse: Centre de Recherche sur la Peau, Pierre Fabre Dermo-Cosmétique, 2 rue Viguerie, 31025 Toulouse Cedex 3, France
- Jean Yves Tourneret: University of Toulouse, IRIT-INPT, 2 rue Camichel, BP 7122, 31071 Toulouse cedex 7, France

18
|
Li A, You J, Du C, Pan Y. Automated segmentation and quantification of OCT angiography for tracking angiogenesis progression. Biomed Opt Express 2017; 8:5604-5616. [PMID: 29296491 PMCID: PMC5745106 DOI: 10.1364/boe.8.005604] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Received: 09/21/2017] [Revised: 10/31/2017] [Accepted: 11/02/2017] [Indexed: 05/02/2023]
Abstract
Angiogenesis is recognized as a crucial component of many neurovascular diseases such as stroke, carcinogenesis, and the neurotoxicity of drugs of abuse. The ability to track angiogenesis will facilitate a better understanding of disease progression and assessment of therapeutic effects. Optical coherence tomography angiography (OCTA) is a promising tool for assessing 3D microvascular networks due to its micron-level resolution, high sensitivity, and relatively large field of view. However, quantitative OCTA image analysis for characterizing microvascular network changes, including accurately tracking the progression of angiogenesis, remains a challenge. In this paper, we propose an angiogenesis tracking algorithm that combines improved vessel segmentation and brain boundary detection methods to significantly enhance time-lapse OCTA images for quantifying microvascular network changes. Specifically, top-hat enhancement and optimally oriented flux (OOF) algorithms enable accurate segmentation of cerebrovascular networks (including capillaries), and graph-search-based brain boundary detection enables coregistration of 3D OCTA data sets from different time points for accurate vessel density assessment and analysis of changes in various cortical layers. Results show that this algorithm significantly improves the accuracy of vessel segmentation compared to the Hessian method. Application to a chronic cocaine intoxication study shows effectively reduced errors in chronic tracking of microvasculature and more accurate assessment of vessel density changes induced by angiogenesis.
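Top-hat enhancement, one of the preprocessing steps named above, subtracts a morphological opening from the image so that bright, narrow structures such as vessels stand out against a slowly varying background. A minimal 1D sketch with a flat structuring element (illustrative only; the paper operates on 2D/3D OCTA data):

```python
def white_tophat_1d(signal, half_width):
    """White top-hat transform: signal minus its morphological opening.

    Opening = erosion (local min) followed by dilation (local max) with a
    flat structuring element of size 2*half_width + 1. Features narrower
    than the element survive; broader background structure is removed.
    """
    n = len(signal)

    def sliding(op, s):
        return [op(s[max(0, i - half_width):min(n, i + half_width + 1)])
                for i in range(n)]

    opened = sliding(max, sliding(min, signal))
    return [v - o for v, o in zip(signal, opened)]

# A narrow bright peak on a flat background is preserved...
print(white_tophat_1d([0, 0, 0, 5, 0, 0, 0], 1))  # [0, 0, 0, 5, 0, 0, 0]
# ...while a constant background is removed entirely
print(white_tophat_1d([2, 2, 2, 2, 2], 1))        # [0, 0, 0, 0, 0]
```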
19
Liefers B, Venhuizen FG, Schreur V, van Ginneken B, Hoyng C, Fauser S, Theelen T, Sánchez CI. Automatic detection of the foveal center in optical coherence tomography. Biomed Opt Express 2017; 8:5160-5178. [PMID: 29188111 PMCID: PMC5695961 DOI: 10.1364/boe.8.005160] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Received: 08/11/2017] [Revised: 10/11/2017] [Accepted: 10/11/2017] [Indexed: 05/07/2023]
Abstract
We propose a method for automatic detection of the foveal center in optical coherence tomography (OCT). The method is based on a pixel-wise classification of all pixels in an OCT volume using a fully convolutional neural network (CNN) with dilated convolution filters. The CNN architecture contains anisotropic dilated filters and a shortcut connection and has been trained using a dynamic training procedure in which the network identifies its own relevant training samples. The performance of the proposed method is evaluated on a data set of 400 OCT scans of patients affected by age-related macular degeneration (AMD) at different severity levels. For 391 scans (97.75%) the method identified the foveal center within 750 μm of a human reference, with a mean (± SD) distance of 71 μm ± 107 μm. Two independent observers also annotated the foveal center, with mean distances to the reference of 57 μm ± 84 μm and 56 μm ± 80 μm, respectively. Furthermore, we evaluate variations of the proposed network architecture and training procedure, providing insight into the characteristics that led to the demonstrated performance of the proposed method.
Affiliation(s)
- Bart Liefers: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Freerk G. Venhuizen: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Vivian Schreur: Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Bram van Ginneken: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Carel Hoyng: Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Sascha Fauser: Roche Pharma Research and Early Development, F. Hoffmann-La Roche Ltd, Basel, Switzerland; Cologne University Eye Clinic, Cologne, Germany
- Thomas Theelen: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Clara I. Sánchez: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands

20
|
la Cour MF, Mehrvar S, Kim J, Martin A, Zimmerman MA, Hong JC, Ranji M. Optical imaging for the assessment of hepatocyte metabolic state in ischemia and reperfusion injuries. Biomed Opt Express 2017; 8:4419-4426. [PMID: 29082074 PMCID: PMC5654789 DOI: 10.1364/boe.8.004419] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Received: 05/08/2017] [Revised: 08/10/2017] [Accepted: 08/13/2017] [Indexed: 05/09/2023]
Abstract
Deterioration in mitochondrial function leads to hepatic ischemia and reperfusion injury (IRI) in liver surgery and transplantation. 3D optical cryoimaging was used to measure the levels of the mitochondrial coenzymes NADH and FAD; their redox ratio (NADH/FAD) provides a quantitative marker of hepatocyte oxidative stress during IRI. Using a rat model, five groups were compared: control, ischemia for 60 or 90 minutes (Isc60, Isc90), and ischemia for 60 or 90 minutes followed by 24 hours of reperfusion (IRI60, IRI90). Ischemia alone did not cause a significant increase in the redox ratio; however, the redox ratio in the IRI60 and IRI90 groups was significantly decreased, by 29% and 71%, respectively. A significant correlation was observed between the redox ratio and other markers of injury, such as serum aminotransferase levels and the tissue ATP level. The mitochondrial redox state can thus be measured using optical cryoimaging as a quantitative marker of hepatic IRI.
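The redox ratio above is computed voxel-wise from the two coenzyme channels, and group means are then compared against control. A minimal sketch of that arithmetic (the epsilon guard is an implementation detail added here, not from the paper):

```python
def mean_redox_ratio(nadh, fad, eps=1e-9):
    """Mean voxel-wise NADH/FAD redox ratio over a tissue volume.

    nadh and fad are flat lists of matched fluorescence intensities;
    eps guards against division by zero in signal-free voxels.
    """
    ratios = [n / (f + eps) for n, f in zip(nadh, fad)]
    return sum(ratios) / len(ratios)

def percent_change(treated, control):
    """Signed percent change of a treated group's mean relative to control."""
    return 100.0 * (treated - control) / control

# Toy numbers: control ratio ~2.0, injured ratio ~0.6 -> about -70% change
control = mean_redox_ratio([2.0, 2.2, 1.8], [1.0, 1.1, 0.9])
injured = mean_redox_ratio([0.60, 0.58, 0.62], [1.0, 1.0, 1.0])
print(round(percent_change(injured, control), 1))  # -70.0
```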
Affiliation(s)
- Mette F. la Cour: Biophotonics Lab, Electrical Engineering Department, University of Wisconsin - Milwaukee, 3200 N Cramer St, Milwaukee, WI 53211, USA (contributed equally)
- Shima Mehrvar: Biophotonics Lab, Electrical Engineering Department, University of Wisconsin - Milwaukee, 3200 N Cramer St, Milwaukee, WI 53211, USA (contributed equally)
- Joohyun Kim: Division of Transplant Surgery, Department of Surgery, Medical College of Wisconsin, 9200 W. Wisconsin Avenue, Suite E5700, Milwaukee, WI 53226, USA
- Alicia Martin: Division of Transplant Surgery, Department of Surgery, Medical College of Wisconsin, 9200 W. Wisconsin Avenue, Suite E5700, Milwaukee, WI 53226, USA
- Michael A. Zimmerman: Division of Transplant Surgery, Department of Surgery, Medical College of Wisconsin, 9200 W. Wisconsin Avenue, Suite E5700, Milwaukee, WI 53226, USA
- Johnny C. Hong: Division of Transplant Surgery, Department of Surgery, Medical College of Wisconsin, 9200 W. Wisconsin Avenue, Suite E5700, Milwaukee, WI 53226, USA
- Mahsa Ranji: Biophotonics Lab, Electrical Engineering Department, University of Wisconsin - Milwaukee, 3200 N Cramer St, Milwaukee, WI 53211, USA

21
|
Miyazawa A, Hong YJ, Makita S, Kasaragod D, Yasuno Y. Generation and optimization of superpixels as image processing kernels for Jones matrix optical coherence tomography. Biomed Opt Express 2017; 8:4396-4418. [PMID: 29082073 PMCID: PMC5654788 DOI: 10.1364/boe.8.004396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/09/2017] [Revised: 08/31/2017] [Accepted: 09/01/2017] [Indexed: 05/05/2023]
Abstract
Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels, which results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating flexible kernels for local statistics. A superpixel is a cluster of image pixels formed according to the pixels' spatial and signal-value proximity. This paper presents a superpixel generation algorithm specialized for JM-OCT, together with methods for its optimization. The spatial proximity is defined in the two-dimensional cross-sectional space and the signal values are the four optical features; the superpixel method is hence a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and the optimization methods is evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to preserve tissue structures well, such as layers, sclera, vessels, and the retinal pigment epithelium, and are hence more suitable as local statistics kernels than conventional uniform rectangular kernels.
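The superpixel generation described above clusters pixels jointly on spatial position and signal values; for JM-OCT this is a six-dimensional space (two spatial axes plus four optical features). A toy SLIC-style k-means sketch of the idea, with deterministic seeding and a plain spatial weight (the paper's specialized algorithm differs in detail):

```python
def superpixel_kmeans(pixels, k, spatial_weight=1.0, iters=10):
    """Cluster pixels given as (x, y, f1, ..., fn) tuples.

    The squared distance mixes spatial proximity (scaled by spatial_weight)
    with proximity in the feature/signal dimensions, so clusters stay both
    spatially compact and signal-homogeneous.
    """
    # deterministic seeding: k centers spread evenly through the pixel list
    centers = [list(pixels[i * len(pixels) // k]) for i in range(k)]

    def dist2(p, c):
        spatial = (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
        signal = sum((a - b) ** 2 for a, b in zip(p[2:], c[2:]))
        return spatial_weight * spatial + signal

    labels = [0] * len(pixels)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in pixels]
        for j in range(k):  # move each center to the mean of its members
            members = [p for p, lab in zip(pixels, labels) if lab == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Two spatially and signal-wise separated groups end up in two clusters
pix = [(0, 0, 0.1), (0, 1, 0.2), (1, 0, 0.1),
       (10, 10, 5.0), (10, 11, 5.1), (11, 10, 5.0)]
print(superpixel_kmeans(pix, 2))  # [0, 0, 0, 1, 1, 1]
```

The `spatial_weight` knob plays the role of the compactness trade-off: large values force rectangular-ish compact clusters, small values let clusters follow signal boundaries such as retinal layers.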
22
Cordeiro C, Abilez OJ, Goetz G, Gupta T, Zhuge Y, Solgaard O, Palanker D. Optophysiology of cardiomyocytes: characterizing cellular motion with quantitative phase imaging. Biomed Opt Express 2017; 8:4652-4662. [PMID: 29082092 PMCID: PMC5654807 DOI: 10.1364/boe.8.004652] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Received: 07/10/2017] [Revised: 09/15/2017] [Accepted: 09/19/2017] [Indexed: 06/07/2023]
Abstract
Quantitative phase imaging enables precise characterization of cellular shape and motion. Variation of cell volume in populations of cardiomyocytes can help distinguish their types, while changes in optical thickness during the beating cycle identify contraction and relaxation periods and elucidate cell dynamics. Parameters such as characteristic cycle shape, beating frequency, duration, and regularity can be used to classify stem-cell-derived cardiomyocytes according to their health and, potentially, cell type. Unlike classical patch-clamp-based electrophysiological characterization of cardiomyocytes, this interferometric approach enables rapid and non-destructive analysis of large populations of cells, with longitudinal follow-up, and applications to tissue regeneration, personalized medicine, and drug testing.
Affiliation(s)
- Christine Cordeiro: Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Oscar J. Abilez: Division of Cardiovascular Medicine, Stanford University, Stanford, CA, 94305, USA; Cardiovascular Institute, Stanford University, Stanford, CA 94305, USA
- Georges Goetz: Department of Neurosurgery, Stanford University, Stanford, CA, 94305, USA
- Tushar Gupta: Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Yan Zhuge: Molecular Imaging Program at Stanford, Stanford University, Stanford, CA, 94305, USA
- Olav Solgaard: Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Daniel Palanker: Department of Ophthalmology, Stanford University, Stanford, CA, 94305, USA; Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, 94305, USA

23
|
Keikhosravi A, Liu Y, Drifka C, Woo KM, Verma A, Oldenbourg R, Eliceiri KW. Quantification of collagen organization in histopathology samples using liquid crystal based polarization microscopy. Biomed Opt Express 2017; 8:4243-4256. [PMID: 28966862 PMCID: PMC5611938 DOI: 10.1364/boe.8.004243] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Received: 06/08/2017] [Revised: 08/11/2017] [Accepted: 08/23/2017] [Indexed: 05/02/2023]
Abstract
A number of histopathology studies have used the label-free microscopy method of second harmonic generation (SHG) to investigate collagen organization in disease onset and progression. Here we explored an alternative label-free imaging approach, the LC-PolScope, which is based on liquid-crystal polarized light imaging. We demonstrated that this more accessible technology can visualize all fibers of interest, with good to excellent correlation between SHG and LC-PolScope measurements of fibrillar collagen orientation and alignment. This study supports the LC-PolScope as a viable alternative to SHG for label-free collagen organization measurements in thin histology sections.
Affiliation(s)
- Adib Keikhosravi: Laboratory for Optical and Computational Instrumentation, University of Wisconsin at Madison, Madison, WI, USA; Biomedical Engineering Department, University of Wisconsin at Madison, Madison, WI, USA; Morgridge Institute for Research, Madison, WI, USA (contributed equally)
- Yuming Liu: Laboratory for Optical and Computational Instrumentation, University of Wisconsin at Madison, Madison, WI, USA (contributed equally)
- Cole Drifka: Laboratory for Optical and Computational Instrumentation, University of Wisconsin at Madison, Madison, WI, USA; Biomedical Engineering Department, University of Wisconsin at Madison, Madison, WI, USA; Morgridge Institute for Research, Madison, WI, USA
- Kaitlin M. Woo: Department of Biostatistics and Medical Informatics, Brown University, Providence, RI, USA
- Rudolf Oldenbourg: Marine Biological Laboratory, Woods Hole, MA, USA; Department of Physics, Brown University, Providence, RI, USA
- Kevin W. Eliceiri: Laboratory for Optical and Computational Instrumentation, University of Wisconsin at Madison, Madison, WI, USA; Biomedical Engineering Department, University of Wisconsin at Madison, Madison, WI, USA; Morgridge Institute for Research, Madison, WI, USA

24
|
Yadav SK, Motamedi S, Oberwahrenbrock T, Oertel FC, Polthier K, Paul F, Kadas EM, Brandt AU. CuBe: parametric modeling of 3D foveal shape using cubic Bézier. Biomed Opt Express 2017; 8:4181-4199. [PMID: 28966857 PMCID: PMC5611933 DOI: 10.1364/boe.8.004181] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Received: 05/15/2017] [Revised: 07/26/2017] [Accepted: 07/27/2017] [Indexed: 06/07/2023]
Abstract
Optical coherence tomography (OCT) allows three-dimensional (3D) imaging of the retina and is commonly used for assessing pathological changes of the fovea and macula in many diseases. Many neuroinflammatory conditions are known to cause modifications to the foveal shape. In this paper, we propose a method for parametric modeling of the foveal shape. Our method exploits invariant features of the macula from OCT data and applies a cubic Bézier polynomial together with a least-squares optimization to produce a best-fit parametric model of the fovea. Additionally, we provide several parameters of the foveal shape based on the proposed 3D parametric model. Our quantitative and visual results show that the proposed model not only reconstructs important features of the foveal shape, but also produces smaller errors than state-of-the-art methods. Finally, we apply the model in a comparison of healthy control eyes and eyes from patients with neuroinflammatory central nervous system disorders and optic neuritis, and show that several derived model parameters differ significantly between the two groups.
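The core fitting step of such a model, a least-squares cubic Bézier fit, can be sketched as follows. This is a generic 2D illustration under assumed chord-length parameterization, not the authors' 3D CuBe implementation:

```python
import numpy as np

def bernstein_matrix(t):
    """Cubic Bernstein basis evaluated at parameter values t (shape: len(t) x 4)."""
    t = np.asarray(t, dtype=float)
    return np.stack([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3], axis=1)

def fit_cubic_bezier(points):
    """Least-squares fit of four Bezier control points to 2D samples.

    Parameter values are assigned by chord length (an assumed heuristic).
    """
    points = np.asarray(points, dtype=float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]
    B = bernstein_matrix(t)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl  # 4 x 2 array of control points

def eval_bezier(ctrl, t):
    """Evaluate the fitted curve at parameter values t."""
    return bernstein_matrix(t) @ ctrl
```

A degree-1 shape (a straight profile) is exactly representable by a cubic Bézier, so the fit residual on such data is zero.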
Affiliation(s)
- Sunil Kumar Yadav
- NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, Germany
- Mathematical Geometry Processing Group, Freie Universität Berlin, Germany
- Timm Oberwahrenbrock
- NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, Germany
- Konrad Polthier
- Mathematical Geometry Processing Group, Freie Universität Berlin, Germany
- Friedemann Paul
- NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, Germany
- Department of Neurology, Charité - Universitätsmedizin Berlin, Germany
- Experimental and Clinical Research Center, Max Delbrück Center for Molecular Medicine and Charité - Universitätsmedizin Berlin, Germany
- Ella Maria Kadas
- NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, Germany
- Alexander U. Brandt
- NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, Germany
25
Cheng J, Zhang Z, Tao D, Wong DWK, Liu J, Baskaran M, Aung T, Wong TY. Similarity regularized sparse group lasso for cup to disc ratio computation. Biomed Opt Express 2017; 8:3763-3777. [PMID: 28856048 PMCID: PMC5560839 DOI: 10.1364/boe.8.003763]
Abstract
Automatic cup to disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection, and many algorithms have been proposed over the past decade. In this paper, we first review recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs a testing disc image from a set of reference disc images by integrating the similarity between the testing and reference disc images with sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The method has been validated on 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under the curve in the diagnostic test reach 0.843 and 0.837 when manually and automatically segmented discs are used, respectively, again better than other methods.
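The reconstruct-then-weight idea can be illustrated with a plain lasso solved by ISTA. This is a deliberate simplification of the paper's method: no group structure or similarity regularization, and the reference set, CDR values, and `lam` below are made up for illustration:

```python
import numpy as np

def lasso_ista(A, y, lam, n_iter=500):
    """L1-regularized least squares via ISTA: min 0.5*||Ax - y||^2 + lam*||x||_1.

    A simplification of similarity-regularized sparse *group* lasso:
    plain lasso, no group structure or similarity weights.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def estimate_cdr(x, ref_cdrs):
    """Reconstruction-coefficient-weighted average of reference CDRs."""
    w = np.abs(x)
    return float(w @ ref_cdrs / w.sum())
```

When the testing image matches one reference closely, the coefficient vector concentrates on that reference and the estimate approaches its CDR.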
Affiliation(s)
- Jun Cheng
- Institute for Infocomm Research, A*STAR, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Zhuo Zhang
- Institute for Infocomm Research, A*STAR, Singapore
- Jiang Liu
- Cixi Institute of Biomedical Engineering, Chinese Academy of Sciences, China
- Mani Baskaran
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- National University of Singapore, Singapore
26
Jalal UM, Kim SC, Shim JS. Histogram analysis for smartphone-based rapid hematocrit determination. Biomed Opt Express 2017; 8:3317-3328. [PMID: 28717569 PMCID: PMC5508830 DOI: 10.1364/boe.8.003317]
Abstract
A novel and rapid histogram-based analysis technique has been proposed for the colorimetric quantification of blood hematocrit. A smartphone-based "Histogram" app for hematocrit detection has been developed, integrating the smartphone's embedded camera with a microfluidic chip via a custom-made optical platform. The histogram analysis is effective at automatically detecting the sample channel, includes auto-calibration, and can analyze single-channel as well as multi-channel images. Furthermore, the method quantifies blood hematocrit reliably under both fixed and varying optical conditions. Rapid hematocrit determination carries a wealth of information regarding physiological disorders, and such reproducible, cost-effective, and standardized techniques may effectively aid the diagnosis and prevention of a number of human diseases.
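The histogram step can be illustrated with Otsu's classic threshold, used here as a stand-in for the app's actual analysis and calibration, which are not reproduced:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method on a 1-D intensity sample: maximize between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # cumulative weight of the dark class
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)            # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    k = int(np.nanargmax(sigma_b[:-1]))    # best split, excluding the last bin
    return centers[k]

def packed_cell_fraction(column_profile):
    """Fraction of the channel column darker than the Otsu cut --
    a simplified stand-in for the packed red-cell fraction (hematocrit)."""
    t = otsu_threshold(column_profile)
    return float(np.mean(np.asarray(column_profile) < t))
```

On a cleanly bimodal intensity profile the dark fraction directly reads off the packed-cell ratio.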
27
Venhuizen FG, van Ginneken B, Liefers B, van Grinsven MJ, Fauser S, Hoyng C, Theelen T, Sánchez CI. Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks. Biomed Opt Express 2017; 8:3292-3316. [PMID: 28717568 PMCID: PMC5508829 DOI: 10.1364/boe.8.003292]
Abstract
We developed a fully automated system using a convolutional neural network (CNN) for total retina segmentation in optical coherence tomography (OCT) that is robust to the presence of severe retinal pathology. A generalized U-net architecture was introduced to include the large context needed to account for large retinal changes. The proposed algorithm outperformed two available algorithms both qualitatively and quantitatively. It estimated macular thickness with an error of 14.0 ± 22.1 µm, substantially lower than the errors obtained with the other algorithms (42.9 ± 116.0 µm and 27.1 ± 69.3 µm, respectively). These results highlight the algorithm's capability to model the wide variability in retinal appearance and to obtain a robust and reliable retina segmentation even in severe pathological cases.
Affiliation(s)
- Freerk G. Venhuizen
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Bart Liefers
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Mark J.J.P. van Grinsven
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Sascha Fauser
- Roche Pharma Research and Early Development, F. Hoffmann-La Roche Ltd, Basel, Switzerland
- Cologne University Eye Clinic, Cologne, Germany
- Carel Hoyng
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Thomas Theelen
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Clara I. Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
28
Fang L, Cunefare D, Wang C, Guymer RH, Li S, Farsiu S. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search. Biomed Opt Express 2017; 8:2732-2744. [PMID: 28663902 PMCID: PMC5480509 DOI: 10.1364/boe.8.002732]
Abstract
We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed CNN-GS) for the automatic segmentation of nine layer boundaries on retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and trains a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created from the CNN to find the final boundaries. We validated our proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to the effectiveness of our proposed technique.
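The graph-search stage can be sketched as a dynamic program over a per-column cost map. This illustrates the general technique rather than the authors' exact graph construction; `max_jump` is an assumed smoothness constraint:

```python
import numpy as np

def trace_boundary(cost, max_jump=1):
    """Minimum-cost left-to-right path through a (rows x cols) cost map.

    cost[r, c] could be 1 - P(boundary at row r, column c) from a CNN.
    Consecutive columns may differ by at most `max_jump` rows (smoothness).
    """
    rows, cols = cost.shape
    acc = cost.astype(float).copy()        # accumulated cost
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            j = int(np.argmin(prev))
            acc[r, c] += prev[j]
            back[r, c] = lo + j
    path = np.empty(cols, dtype=int)       # backtrack the optimal path
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```

With a zero-cost ridge embedded in a unit-cost map, the recovered path follows the ridge exactly.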
Affiliation(s)
- Leyuan Fang
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
- David Cunefare
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Chong Wang
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
- Robyn H. Guymer
- Centre for Eye Research Australia, University of Melbourne, Department of Surgery, Royal Victorian Eye and Ear Hospital, Victoria 3002, Australia
- Shutao Li
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
29
Guo Y, Veneman WJ, Spaink HP, Verbeek FJ. Three-dimensional reconstruction and measurements of zebrafish larvae from high-throughput axial-view in vivo imaging. Biomed Opt Express 2017; 8:2611-2634. [PMID: 28663894 PMCID: PMC5480501 DOI: 10.1364/boe.8.002611]
Abstract
High-throughput imaging provides the observations needed for accurate statements about phenomena in biology, and has been applied successfully in the domain of cells, i.e., cytomics. In the domain of whole organisms, several hurdles must be cleared to ensure that imaging can be accomplished with sufficient throughput and reproducibility. For vertebrate biology, zebrafish is a popular model system for high-throughput applications. The Vertebrate Automated Screening Technology (VAST BioImager), a microscope-mounted system, enables high-throughput zebrafish screening. The VAST BioImager holds a zebrafish in a capillary for imaging; through rotation of the capillary, multiple axial views of a specimen can be acquired using fluorescence and/or confocal microscopes. Quantitation of a specific signal derived from a label in one fluorescent channel requires knowledge of the zebrafish volume so that the quantitation can be normalized to volume units. However, a specimen volume cannot be straightforwardly derived from the VAST BioImager setup. We present a high-throughput axial-view imaging architecture based on the VAST BioImager and propose profile-based 3D reconstruction to produce volumetric representations of zebrafish larvae from the axial views. Volume and surface area are then derived from the 3D reconstruction to obtain shape characteristics in high-throughput measurements. In addition, we developed a calibration and validation of our methodology. Our measurements show that accurate volume and surface-area estimates for zebrafish larvae can be obtained from a limited number of views. We applied the proposed method to a range of zebrafish developmental stages and produced metrical references for the volume and surface area of each stage.
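The volume-from-silhouettes idea can be sketched, in drastically simplified form, as a slice-wise integral over width profiles from two orthogonal views, approximating each cross-section as an ellipse. The two-view ellipse model is an assumption for illustration only; the paper uses a full multi-view profile-based reconstruction:

```python
import numpy as np

def volume_from_two_views(width_a, width_b, dz=1.0):
    """Slice-wise volume estimate from two orthogonal silhouette width profiles.

    Each cross-section at axial position z is approximated as an ellipse whose
    axes are the widths seen in the two views; slice areas are summed over dz.
    """
    width_a = np.asarray(width_a, dtype=float)
    width_b = np.asarray(width_b, dtype=float)
    return float(np.sum(np.pi / 4.0 * width_a * width_b) * dz)
```

For a circular cylinder (constant width in both views) the estimate reduces to the exact pi*r^2*L.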
Affiliation(s)
- Yuanhao Guo
- Imaging & BioInformatics, Leiden Institute of Advanced Computer Science (LIACS), Leiden University, 2333CA, Leiden, The Netherlands
- Wouter J. Veneman
- Department of Animal Sciences and Health, Institute of Biology (IBL), Leiden University, 2333BE, Leiden, The Netherlands
- Herman P. Spaink
- Department of Animal Sciences and Health, Institute of Biology (IBL), Leiden University, 2333BE, Leiden, The Netherlands
- Fons J. Verbeek
- Imaging & BioInformatics, Leiden Institute of Advanced Computer Science (LIACS), Leiden University, 2333CA, Leiden, The Netherlands
30
Cheng J, Tao D, Wong DWK, Liu J. Quadratic divergence regularized SVM for optic disc segmentation. Biomed Opt Express 2017; 8:2687-2696. [PMID: 28663898 PMCID: PMC5480505 DOI: 10.1364/boe.8.002687]
Abstract
Machine learning has been used in many retinal image processing applications, such as optic disc segmentation. It assumes that the training and testing data sets share the same feature distribution. However, retinal images are often collected under different conditions and may have different feature distributions, so models trained on one data set may not work well on another. At the same time, it is often too expensive and time-consuming to label the needed training data and rebuild models for every data set. In this paper, we propose a novel quadratic divergence regularized support vector machine (QDSVM) to transfer knowledge from domains with sufficient training data to domains with limited or even no training data. The proposed method minimizes the distribution difference between the source and target domains while training the classifier. Experimental results show that the proposed transfer learning based method reduces the classification error at the superpixel level from 14.2% without transfer learning to 2.4% with it. The method effectively transfers label knowledge from the source to the target domain, enabling optic disc segmentation in data sets with different feature distributions.
Affiliation(s)
- Jun Cheng
- Institute for Infocomm Research, A*STAR, Singapore
- Jiang Liu
- Cixi Institute of Biomedical Engineering, Chinese Academy of Sciences, China
31
Pérez-Merino P, Velasco-Ocana M, Martinez-Enriquez E, Revuelta L, McFadden SA, Marcos S. Three-dimensional OCT based guinea pig eye model: relating morphology and optics. Biomed Opt Express 2017; 8:2173-2184. [PMID: 28736663 PMCID: PMC5516822 DOI: 10.1364/boe.8.002173]
Abstract
Custom spectral optical coherence tomography (SOCT), equipped with automatic quantification and distortion correction algorithms, was used to measure 3-D morphology in guinea pig eyes (n = 8, 30 days; n = 5, 40 days). Animals were measured awake, in vivo, under cycloplegia. Measurements showed low variability (<4% in corneal and anterior lens radii, <8% in posterior lens radii, and <1% in intraocular distances). The repeatability of the surface elevation was better than 2 µm. Surface astigmatism was the dominant individual term in all surfaces. Higher-order RMS surface elevation was largest in the posterior lens. Individual surface elevation Zernike terms correlated significantly across the corneal and anterior lens surfaces. Higher-order aberrations (except spherical aberration) were comparable with those predicted by OCT-based eye models.
Affiliation(s)
- Pablo Pérez-Merino
- Instituto de Óptica “Daza de Valdés,” Consejo Superior de Investigaciones Científicas (CSIC), Madrid, Spain
- Miriam Velasco-Ocana
- Instituto de Óptica “Daza de Valdés,” Consejo Superior de Investigaciones Científicas (CSIC), Madrid, Spain
- Eduardo Martinez-Enriquez
- Instituto de Óptica “Daza de Valdés,” Consejo Superior de Investigaciones Científicas (CSIC), Madrid, Spain
- Luis Revuelta
- Facultad de Veterinaria, Universidad Complutense, Madrid, Spain
- Sally A McFadden
- School of Psychology, University of Newcastle, Newcastle, NSW, Australia
- Susana Marcos
- Instituto de Óptica “Daza de Valdés,” Consejo Superior de Investigaciones Científicas (CSIC), Madrid, Spain
32
Prieto SP, Lai KK, Laryea JA, Mizell JS, Mustain WC, Muldoon TJ. Fluorescein as a topical fluorescent contrast agent for quantitative microendoscopic inspection of colorectal epithelium. Biomed Opt Express 2017; 8:2324-2338. [PMID: 28736674 PMCID: PMC5516830 DOI: 10.1364/boe.8.002324]
Abstract
Fiber bundle microendoscopic imaging of colorectal tissue has shown promising results for both qualitative and quantitative analysis. A quantitative image quality control and image feature extraction algorithm was previously designed for image feature analysis of proflavine-stained ex vivo colorectal tissue. Here we investigated fluorescein as an alternative topical stain. Microendoscopic images of ex vivo porcine, caprine, and human colorectal tissue topically stained with fluorescein were compared with proflavine-stained images. Fluorescein proved comparable for automated crypt detection, with an average crypt detection sensitivity exceeding 90% using a combination of three contrast limit pairs.
Affiliation(s)
- Sandra P. Prieto
- Department of Biomedical Engineering, University of Arkansas, 1 University Blvd., Fayetteville, AR 72701, USA
- Keith K. Lai
- Department of Anatomic Pathology, Cleveland Clinic, 9500 Euclid Ave, L-25, Cleveland, OH 44195, USA
- Jonathan A. Laryea
- Department of Surgery, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, AR 72205, USA
- Jason S. Mizell
- Department of Surgery, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, AR 72205, USA
- William C. Mustain
- Department of Surgery, University of Arkansas for Medical Sciences, 4301 W. Markham Street, Little Rock, AR 72205, USA
- Timothy J. Muldoon
- Department of Biomedical Engineering, University of Arkansas, 1 University Blvd., Fayetteville, AR 72701, USA
33
Wang J, Zhang M, Hwang TS, Bailey ST, Huang D, Wilson DJ, Jia Y. Reflectance-based projection-resolved optical coherence tomography angiography [Invited]. Biomed Opt Express 2017; 8:1536-1548. [PMID: 28663848 PMCID: PMC5480563 DOI: 10.1364/boe.8.001536]
Abstract
Optical coherence tomography angiography (OCTA) is limited by projection artifacts from superficial blood vessels onto deeper layers. We have recently described projection-resolved (PR) OCTA, which resolves the ambiguity between in situ flow and flow projection along each axial scan and suppresses the artifact on both en face and cross-sectional angiograms. While this method significantly improved the depth resolution of OCTA, the vascular integrity of the deeper layers was not fully preserved. In this study, we propose a novel reflectance-based projection-resolved (rbPR) OCTA algorithm that uses OCT reflectance to enhance the flow signal and suppress projection artifacts in 3-dimensional OCTA. We demonstrate quantitatively that, compared to the prior PR-OCTA method, rbPR improves vascular connectivity and the discrimination of the deeper plexus angiograms in healthy eyes. We also demonstrate qualitatively that rbPR removes flow projection artifacts more completely from the outer retinal slab in eyes with age-related macular degeneration, and preserves the vascular integrity of the intermediate and deep capillary plexuses in eyes with diabetic retinopathy. Additionally, the method improves the resolution of the choriocapillaris, demonstrating details comparable to scanning electron microscopy.
34
Zang P, Gao SS, Hwang TS, Flaxel CJ, Wilson DJ, Morrison JC, Huang D, Li D, Jia Y. Automated boundary detection of the optic disc and layer segmentation of the peripapillary retina in volumetric structural and angiographic optical coherence tomography. Biomed Opt Express 2017; 8:1306-1318. [PMID: 28663830 PMCID: PMC5480545 DOI: 10.1364/boe.8.001306]
Abstract
To improve optic disc boundary detection and peripapillary retinal layer segmentation, we propose an automated approach for structural and angiographic optical coherence tomography. The algorithm was performed on radial cross-sectional B-scans. The disc boundary was detected by searching for the position of Bruch's membrane opening, and retinal layer boundaries were detected using a dynamic programming-based graph search algorithm on each B-scan without the disc region. A comparison of the disc boundary using our method with that determined by manual delineation showed good accuracy, with an average Dice similarity coefficient ≥0.90 in healthy eyes and eyes with diabetic retinopathy and glaucoma. The layer segmentation accuracy in the same cases was on average less than one pixel (3.13 μm).
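The Dice similarity coefficient reported above is the standard overlap measure between two binary masks; a minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A value of 1.0 means identical masks; values >= 0.90, as in the paper, indicate close agreement with manual delineation.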
Affiliation(s)
- Pengxiao Zang
- Casey Eye Institute, Oregon Health & Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- Shandong Province Key Laboratory of Medical Physics and Image Processing Technology, Institute of Biomedical Sciences, School of Physics and Electronics, Shandong Normal University, 88 East Wenhua Rd, Jinan, Shandong 250014, China
- Simon S Gao
- Casey Eye Institute, Oregon Health & Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- Thomas S Hwang
- Casey Eye Institute, Oregon Health & Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- Christina J Flaxel
- Casey Eye Institute, Oregon Health & Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- David J Wilson
- Casey Eye Institute, Oregon Health & Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- John C Morrison
- Casey Eye Institute, Oregon Health & Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- David Huang
- Casey Eye Institute, Oregon Health & Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
- Dengwang Li
- Shandong Province Key Laboratory of Medical Physics and Image Processing Technology, Institute of Biomedical Sciences, School of Physics and Electronics, Shandong Normal University, 88 East Wenhua Rd, Jinan, Shandong 250014, China
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, 3375 SW Terwilliger Blvd, Portland, OR 97239, USA
35
Rehman AU, Anwer AG, Gosnell ME, Mahbub SB, Liu G, Goldys EM. Fluorescence quenching of free and bound NADH in HeLa cells determined by hyperspectral imaging and unmixing of cell autofluorescence. Biomed Opt Express 2017; 8:1488-1498. [PMID: 28663844 PMCID: PMC5480559 DOI: 10.1364/boe.8.001488]
Abstract
Carbonyl cyanide-p-trifluoromethoxyphenylhydrazone (FCCP) is a well-known mitochondrial uncoupling agent. We examined FCCP-induced fluorescence quenching of reduced nicotinamide adenine dinucleotide / nicotinamide adenine dinucleotide phosphate (NAD(P)H) in solution and in cultured HeLa cells over a wide range of FCCP concentrations, from 50 to 1000 µM. A non-invasive, label-free method of hyperspectral imaging of cell autofluorescence, combined with unsupervised unmixing, was used to separately isolate the emissions of free and bound NAD(P)H from cell autofluorescence. Hyperspectral image analysis of FCCP-treated HeLa cells confirms that this agent selectively quenches the fluorescence of free and bound NAD(P)H over a broad range of concentrations, as corroborated by measurements of the average NAD/NADH and NADP/NADPH content in cells. FCCP quenching of free NAD(P)H in cells and in solution is similar, but quenching of bound NAD(P)H in cells is attenuated compared to solution quenching, possibly due to a contribution from the metabolic and/or antioxidant response in cells. Chemical quenching of NAD(P)H fluorescence by FCCP validates the results of unsupervised unmixing of cell autofluorescence.
Affiliation(s)
- Aziz Ul Rehman
- ARC Centre of Excellence in Nanoscale Biophotonics, Macquarie University, Sydney, 2109, New South Wales, Australia
- Biophotonics Laboratory, National Institute of Lasers and Optronics, Lehtrar Road, Islamabad 45650, Pakistan
- Ayad G. Anwer
- ARC Centre of Excellence in Nanoscale Biophotonics, Macquarie University, Sydney, 2109, New South Wales, Australia
- Martin E. Gosnell
- ARC Centre of Excellence in Nanoscale Biophotonics, Macquarie University, Sydney, 2109, New South Wales, Australia
- Quantitative Pty Ltd, ABN 17 165 684 186, Australia
- Saabah B. Mahbub
- ARC Centre of Excellence in Nanoscale Biophotonics, Macquarie University, Sydney, 2109, New South Wales, Australia
- Guozhen Liu
- ARC Centre of Excellence in Nanoscale Biophotonics, Macquarie University, Sydney, 2109, New South Wales, Australia
- Key Laboratory of Pesticide and Chemical Biology of Ministry of Education, College of Chemistry, Central China Normal University, Wuhan 430079, China
- Ewa M. Goldys
- ARC Centre of Excellence in Nanoscale Biophotonics, Macquarie University, Sydney, 2109, New South Wales, Australia
36
Vahid MR, Chao J, Kim D, Ward ES, Ober RJ. State space approach to single molecule localization in fluorescence microscopy. Biomed Opt Express 2017; 8:1332-1355. [PMID: 28663832 PMCID: PMC5480547 DOI: 10.1364/boe.8.001332]
Abstract
Single molecule super-resolution microscopy enables imaging at sub-diffraction-limit resolution by producing images of subsets of stochastically photoactivated fluorophores over a sequence of frames. In each frame of the sequence, the fluorophores are accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Many methods have been developed for localizing fluorophores from the images. The majority of these methods comprise two separate steps: detection and estimation. In the detection step, fluorophores are identified. In the estimation step, the locations of the identified fluorophores are estimated through an iterative approach. Here, we propose a non-iterative state space-based localization method which combines the detection and estimation steps. We demonstrate that the estimated locations obtained from the proposed method can be used as initial conditions in an estimation routine to potentially obtain improved location estimates. The proposed method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix. The locations of the poles of the resulting system determine the peak locations in the frequency domain, and the locations of the most significant peaks correspond to the single molecule locations in the original image. The performance of the method is validated using both simulated and experimental data.
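The Hankel-SVD backbone of the balanced state space realization step can be sketched as follows. This only counts the modes (poles) of a noise-free exponential sum via the numerical rank of its Hankel matrix; the paper's full pole-based peak localization is not reproduced, and the tolerance is an assumption:

```python
import numpy as np

def hankel(signal, rows):
    """Hankel matrix H[i, j] = signal[i + j] built from a 1-D sequence."""
    signal = np.asarray(signal, dtype=float)
    cols = len(signal) - rows + 1
    idx = np.arange(rows)[:, None] + np.arange(cols)[None, :]
    return signal[idx]

def numerical_rank(signal, rows, tol=1e-8):
    """Number of significant singular values of the Hankel matrix.

    For a noise-free sum of damped exponentials this equals the number of
    modes (poles) of the underlying linear system, which is what the
    balanced realization recovers.
    """
    s = np.linalg.svd(hankel(signal, rows), compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```

In the full method the SVD factors also yield the system matrices, whose eigenvalues locate the poles and hence the peaks.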
Affiliation(s)
- Milad R. Vahid
- Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Department of Molecular and Cellular Medicine, Texas A&M Health Science Center, College Station, TX 77843, USA
- Jerry Chao
- Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Department of Molecular and Cellular Medicine, Texas A&M Health Science Center, College Station, TX 77843, USA
- Dongyoung Kim
- Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Department of Molecular and Cellular Medicine, Texas A&M Health Science Center, College Station, TX 77843, USA
- E. Sally Ward
- Department of Molecular and Cellular Medicine, Texas A&M Health Science Center, College Station, TX 77843, USA
- Department of Microbial Pathogenesis and Immunology, Texas A&M Health Science Center, College Station, TX 77843, USA
- Raimund J. Ober
- Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Department of Molecular and Cellular Medicine, Texas A&M Health Science Center, College Station, TX 77843, USA
37
Zheng Y, Wang Y, Jiao W, Hou S, Ren Y, Qin M, Hou D, Luo C, Wang H, Gee J, Zhao B. Joint alignment of multispectral images via semidefinite programming. Biomed Opt Express 2017; 8:890-901. [PMID: 28270991 PMCID: PMC5330559 DOI: 10.1364/boe.8.000890]
Abstract
In this paper, we introduce a novel feature-point-matching based framework for achieving an optimized joint alignment of sequential images from multispectral imaging (MSI). It solves for a low-rank, semidefinite matrix that stores all pairwise-image feature mappings by minimizing the total point-to-point matching cost via convex optimization of a semidefinite programming formulation. This strategy takes full account of the information aggregated across all point-matching costs and enables the entire set of pairwise-image feature mappings to be solved simultaneously and near-optimally. Our framework can run in an automatic or interactive fashion, offering an effective tool for eliminating spatial misalignments introduced into sequential MSI images during the imaging process. Experimental results on a database of 28 sequences of MSI images of the human eye demonstrate the superior performance of our approach over state-of-the-art techniques. Our framework is potentially valuable in a large variety of practical applications of MSI images.
Affiliation(s)
- Yuanjie Zheng
- School of Information Science & Engineering, Shandong Normal University, Jinan, China
- Institute of Life Sciences at Shandong Normal University, Jinan, China
- Key Lab of Intelligent Information Processing at Shandong Normal University, Jinan, China
- Yu Wang
- School of Information Science & Engineering, Shandong Normal University, Jinan, China
- Wanzhen Jiao
- Dept. of Ophthalmology, Shandong Provincial Hospital Affiliated to Shandong University, Jinan, China
- Sujuan Hou
- School of Information Science & Engineering, Shandong Normal University, Jinan, China
- Yanju Ren
- School of Psychology, Shandong Normal University, Jinan, China
- Maoling Qin
- School of Information Science & Engineering, Shandong Normal University, Jinan, China
- Dewen Hou
- School of Information Science & Engineering, Shandong Normal University, Jinan, China
- Chao Luo
- School of Information Science & Engineering, Shandong Normal University, Jinan, China
- Hong Wang
- School of Information Science & Engineering, Shandong Normal University, Jinan, China
- Institute of Life Sciences at Shandong Normal University, Jinan, China
- Key Lab of Intelligent Information Processing at Shandong Normal University, Jinan, China
- James Gee
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Bojun Zhao
- Dept. of Ophthalmology, Shandong Provincial Hospital Affiliated to Shandong University, Jinan, China
38
Karri SPK, Chakraborty D, Chatterjee J. Transfer learning based classification of optical coherence tomography images with diabetic macular edema and dry age-related macular degeneration. Biomed Opt Express 2017; 8:579-592. [PMID: 28270969 PMCID: PMC5330546 DOI: 10.1364/boe.8.000579] [Citation(s) in RCA: 120] [Impact Index Per Article: 17.1] [Received: 09/01/2016] [Revised: 12/12/2016] [Accepted: 12/13/2016] [Indexed: 05/06/2023]
Abstract
We present an algorithm for identifying retinal pathologies given retinal optical coherence tomography (OCT) images. Our approach fine-tunes a pre-trained convolutional neural network (CNN), GoogLeNet, to improve its prediction capability (compared to random initialization training) and identifies salient responses during prediction to understand learned filter characteristics. We considered a data set containing subjects with diabetic macular edema, or dry age-related macular degeneration, or no pathology. The fine-tuned CNN could effectively identify pathologies in comparison to classical learning. Our algorithm aims to demonstrate that models trained on non-medical images can be fine-tuned for classifying OCT images with limited training data.
Affiliation(s)
- S. P. K. Karri
- School of Medical Science and Technology, IIT Kharagpur, Kharagpur, India
39
Fatima KN, Hassan T, Akram MU, Akhtar M, Butt WH. Fully automated diagnosis of papilledema through robust extraction of vascular patterns and ocular pathology from fundus photographs. Biomed Opt Express 2017; 8:1005-1024. [PMID: 28270999 PMCID: PMC5330576 DOI: 10.1364/boe.8.001005] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Received: 11/02/2016] [Revised: 01/11/2017] [Accepted: 01/15/2017] [Indexed: 06/06/2023]
Abstract
Rapid development in the field of ophthalmology has increased the demand for computer-aided diagnosis of various eye diseases. Papilledema is an eye condition in which the optic disc swells due to increased intracranial pressure. This increased pressure can cause severe encephalic complications such as abscesses, tumors, meningitis, or encephalitis, which may lead to a patient's death. Although several papilledema case studies have been reported from a medical point of view, only a few researchers have presented automated algorithms for this problem. This paper presents a novel computer-aided system that automatically detects papilledema from fundus images. First, the fundus images are preprocessed through optic disc detection and vessel segmentation. After preprocessing, a total of 26 different features are extracted to capture possible changes in the optic disc due to papilledema. These features fall into four categories based on their color, textural, vascular, and disc margin obscuration properties. The best features are then selected and combined to form a feature matrix that is used to distinguish normal images from images with papilledema using a supervised support vector machine (SVM) classifier. The proposed method was tested on 160 fundus images from two data sets: the publicly available structured analysis of the retina (STARE) data set and a local data set acquired from the Armed Forces Institute of Ophthalmology (AFIO), containing 90 and 70 fundus images, respectively. The annotations were performed with the help of two ophthalmologists. We report detection accuracies of 95.6% for STARE, 87.4% for the local data set, and 85.9% for the combined STARE and local data sets. The proposed system is fast and robust in detecting papilledema from fundus images, with promising results. It will aid physicians in the clinical assessment of fundus images, not replacing their role but helping them in the time-consuming process of screening.
Affiliation(s)
- Khush Naseeb Fatima
- Department of Computer Engineering, National University of Sciences and Technology, Pakistan
- Taimur Hassan
- Department of Electrical Engineering, Bahria University, Islamabad, Pakistan
- M. Usman Akram
- Department of Computer Engineering, National University of Sciences and Technology, Pakistan
- Mahmood Akhtar
- Department of Computer Engineering, National University of Sciences and Technology, Pakistan
- Wasi Haider Butt
- Department of Computer Engineering, National University of Sciences and Technology, Pakistan
40
Martinez-Enriquez E, Pérez-Merino P, Velasco-Ocana M, Marcos S. OCT-based full crystalline lens shape change during accommodation in vivo. Biomed Opt Express 2017; 8:918-933. [PMID: 28270993 PMCID: PMC5330589 DOI: 10.1364/boe.8.000918] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Received: 11/02/2016] [Revised: 12/15/2016] [Accepted: 12/27/2016] [Indexed: 05/22/2023]
Abstract
The full shape of the accommodating crystalline lens was estimated using custom three-dimensional (3-D) spectral OCT and image processing algorithms. Automatic segmentation and distortion correction were used to construct 3-D models of the lens region visible through the pupil. The lens peripheral region was estimated with a trained and validated parametric model. Nineteen young eyes were measured at 0-6 D accommodative demands in 1.5 D steps. Lens volume, surface area, diameter, and equatorial plane position were automatically quantified. Lens diameter and surface area correlated negatively, and equatorial plane position positively, with accommodative response. Lens volume remained constant and surface area decreased with accommodation, indicating that the lens material is incompressible and the capsular bag elastic.
41
Hackett LP, Seo S, Kim S, Goddard LL, Liu GL. Label-free cell-substrate adhesion imaging on plasmonic nanocup arrays. Biomed Opt Express 2017; 8:1139-1151. [PMID: 28271009 PMCID: PMC5330562 DOI: 10.1364/boe.8.001139] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Received: 10/11/2016] [Revised: 12/07/2016] [Accepted: 12/19/2016] [Indexed: 05/10/2023]
Abstract
Cell adhesion is a crucial biological and biomedical parameter defining cell differentiation, cell migration, cell survival, and state of disease. Because of its importance in cellular function, several tools have been developed in order to monitor cell adhesion in response to various biochemical and mechanical cues. However, there remains a need to monitor cell adhesion and cell-substrate separation with a method that allows real-time measurements on accessible equipment. In this article, we present a method to monitor cell-substrate separation at the single cell level using a plasmonic extraordinary optical transmission substrate, which has a high sensitivity to refractive index changes at the metal-dielectric interface. We show how refractive index changes can be detected using intensity peaks in color channel histograms from RGB images taken of the device surface with a brightfield microscope. This allows mapping of the nonuniform refractive index pattern of a single cell cultured on the plasmonic substrate and therefore high-throughput detection of cell-substrate adhesion with observations in real time.
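The histogram-peak readout this abstract describes can be sketched in a few lines: find the dominant intensity in one color channel and track how it shifts between frames. The data below is synthetic, and the histogram smoothing a real pipeline would apply before peak-picking is omitted:

```python
from collections import Counter

def channel_histogram_peak(channel_values):
    """Most frequent 8-bit intensity in one color channel, with its count.

    A shift of this peak between frames is the kind of refractive-index
    signal the abstract describes; real pipelines smooth the histogram
    before peak-picking, which is omitted here for brevity.
    """
    hist = Counter(channel_values)
    # Tie-break deterministically toward the lower intensity.
    peak_intensity, count = max(hist.items(), key=lambda kv: (kv[1], -kv[0]))
    return peak_intensity, count

# Synthetic red-channel values: background pixels near 120, a cell region near 95.
background = [120] * 500 + [119] * 200 + [121] * 200
cell = [95] * 150 + [96] * 60
peak, n = channel_histogram_peak(background + cell)
```

Running this per region of interest, rather than over the whole image, is what enables the single-cell adhesion mapping the abstract reports.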
Affiliation(s)
- L. P. Hackett
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N Wright St, Urbana, IL 61801, USA
- Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, 208 N Wright St, Urbana, IL 61801, USA
- S. Seo
- Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, 208 N Wright St, Urbana, IL 61801, USA
- Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign, 1304 W Green St, Urbana, IL 61801, USA
- S. Kim
- Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, 208 N Wright St, Urbana, IL 61801, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, 1270 Digital Computer Laboratory, Urbana, IL 61801, USA
- L. L. Goddard
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N Wright St, Urbana, IL 61801, USA
- Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, 208 N Wright St, Urbana, IL 61801, USA
- G. L. Liu
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N Wright St, Urbana, IL 61801, USA
- Micro and Nanotechnology Laboratory, University of Illinois at Urbana-Champaign, 208 N Wright St, Urbana, IL 61801, USA
- School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan 43007, China
42
Dongye C, Zhang M, Hwang TS, Wang J, Gao SS, Liu L, Huang D, Wilson DJ, Jia Y. Automated detection of dilated capillaries on optical coherence tomography angiography. Biomed Opt Express 2017; 8:1101-1109. [PMID: 28271005 PMCID: PMC5330594 DOI: 10.1364/boe.8.001101] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Received: 10/19/2016] [Revised: 01/20/2017] [Accepted: 01/20/2017] [Indexed: 05/29/2023]
Abstract
Automated detection and grading of angiographic high-risk features in diabetic retinopathy can potentially enhance screening and clinical care. We have previously identified capillary dilation in angiograms of the deep plexus in optical coherence tomography angiography as a feature associated with severe diabetic retinopathy. In this study, we present an automated algorithm that uses hybrid contrast to distinguish angiograms with dilated capillaries from healthy controls and then applies saliency measurement to map the extent of the dilated capillary networks. The proposed algorithm agreed well with human grading.
Affiliation(s)
- Changlei Dongye
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- College of Information Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, China
- Miao Zhang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- OptoVue, Inc., 2800 Bayview Dr, Fremont, CA 94538, USA
- Thomas S. Hwang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Jie Wang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Simon S. Gao
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Liang Liu
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- David J. Wilson
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
43
Abdolmanafi A, Duong L, Dahdah N, Cheriet F. Deep feature learning for automatic tissue classification of coronary artery using optical coherence tomography. Biomed Opt Express 2017; 8:1203-1220. [PMID: 28271012 PMCID: PMC5330543 DOI: 10.1364/boe.8.001203] [Citation(s) in RCA: 79] [Impact Index Per Article: 11.3] [Received: 10/27/2016] [Revised: 01/20/2017] [Accepted: 01/20/2017] [Indexed: 05/03/2023]
Abstract
Kawasaki disease (KD) is an acute childhood disease complicated by coronary artery aneurysms, intima thickening, thrombi, stenosis, lamellar calcifications, and disappearance of the media border. Automatic classification of the coronary artery layers (intima, media, and scar features) is important for analyzing optical coherence tomography (OCT) images recorded in pediatric patients. OCT is an intracoronary imaging modality using near-infrared light that has recently been applied to image the inner coronary artery tissues of pediatric patients at high spatial resolution (10 to 20 μm). This study aims to develop a robust, fully automated tissue classification method by using convolutional neural networks (CNNs) as the feature extractor and comparing the predictions of three state-of-the-art classifiers: CNN, random forest (RF), and support vector machine (SVM). The results show the robustness of the CNN as feature extractor combined with a random forest classifier, with a classification rate of up to 96%, notably for the media, a very thin second layer of the coronary artery that is challenging to distinguish from the surrounding tissues.
Affiliation(s)
- Atefeh Abdolmanafi
- Dept. of Software and IT Engineering, École de Technologie Supérieure, Montréal, Canada
- Luc Duong
- Dept. of Software and IT Engineering, École de Technologie Supérieure, Montréal, Canada
- Nagib Dahdah
- Div. of Pediatric Cardiology and Research Center, Centre Hospitalier Universitaire Sainte-Justine, Montréal, Canada
- Farida Cheriet
- Dept. of Computer Engineering, École Polytechnique de Montréal, Montréal, Canada
44
Wang Y, Zhang Y, Yao Z, Zhao R, Zhou F. Machine learning based detection of age-related macular degeneration (AMD) and diabetic macular edema (DME) from optical coherence tomography (OCT) images. Biomed Opt Express 2016; 7:4928-4940. [PMID: 28018716 PMCID: PMC5175542 DOI: 10.1364/boe.7.004928] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Received: 06/24/2016] [Revised: 10/05/2016] [Accepted: 10/05/2016] [Indexed: 05/05/2023]
Abstract
Non-lethal macular diseases greatly impact patients' quality of life and cause vision loss at late stages. Visual inspection of optical coherence tomography (OCT) images by experienced clinicians is the main diagnostic technique. We propose a computer-aided diagnosis (CAD) model to discriminate age-related macular degeneration (AMD), diabetic macular edema (DME), and the healthy macula. The linear configuration pattern (LCP) based features of the OCT images were screened by the correlation-based feature subset (CFS) selection algorithm, and the best model, based on the sequential minimal optimization (SMO) algorithm, achieved 99.3% overall accuracy over the three classes of samples.
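As a deliberately simplified stand-in for the CFS step this abstract mentions, one can rank individual features by the magnitude of their Pearson correlation with the class label. Full CFS instead scores feature *subsets*, rewarding label correlation while penalizing inter-feature redundancy; the data and names below are illustrative only:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation; returns 0.0 for a constant sequence."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_features(samples, labels):
    """Rank feature indices by |correlation with the class label|, descending.

    A simplification of CFS: real CFS evaluates feature subsets, rewarding
    label correlation while penalizing redundancy between features.
    """
    n_features = len(samples[0])
    scores = [abs(pearson([s[j] for s in samples], list(labels)))
              for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: scores[j], reverse=True)

# Toy data: feature 0 tracks the label, feature 2 anti-tracks it, feature 1 is weak.
X = [[1.0, 0.3, 9.0], [0.9, 0.9, 9.2], [0.1, 0.5, 9.9], [0.2, 0.1, 10.1]]
y = [1, 1, 0, 0]
ranking = rank_features(X, y)
```

The selected features would then be fed to a classifier (the paper uses SMO-trained SVMs); the ranking step alone already discards features carrying little class information.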
Affiliation(s)
- Yu Wang
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, Liaoning 110169, China; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Yaonan Zhang
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, Liaoning 110169, China; College of Electronics and Information Engineering, Xi'an Siyuan University, Xi'an 710038, China
- Zhaomin Yao
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, Liaoning 110169, China; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Ruixue Zhao
- College of Computer Science and Technology, Jilin University, Changchun, Jilin 130012, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin 130012, China
- Fengfeng Zhou
- College of Computer Science and Technology, Jilin University, Changchun, Jilin 130012, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin 130012, China; http://www.healthinformaticslab.org/ffzhou/
45
Amelard R, Clausi DA, Wong A. Spectral-spatial fusion model for robust blood pulse waveform extraction in photoplethysmographic imaging. Biomed Opt Express 2016; 7:4874-4885. [PMID: 28018712 PMCID: PMC5175538 DOI: 10.1364/boe.7.004874] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Received: 06/30/2016] [Revised: 08/22/2016] [Accepted: 09/09/2016] [Indexed: 05/18/2023]
Abstract
Photoplethysmographic imaging is an optical solution for non-contact cardiovascular monitoring from a distance. This camera-based technology enables physiological monitoring in situations where contact-based devices may be problematic or infeasible, such as ambulatory, sleep, and multi-individual monitoring. However, automatically extracting the blood pulse waveform signal is challenging due to the unknown mixture of relevant (pulsatile) and irrelevant pixels in the scene. Here, we propose a signal fusion framework, FusionPPG, for extracting a blood pulse waveform signal with strong temporal fidelity from a scene without requiring anatomical priors. The extraction problem is posed as a Bayesian least squares fusion problem, and solved using a novel probabilistic pulsatility model that incorporates both physiologically derived spectral and spatial waveform priors to identify pulsatility characteristics in the scene. Evaluation was performed on a 24-participant sample with various ages (9-60 years) and body compositions (fat% 30.0 ± 7.9, muscle% 40.4 ± 5.3, BMI 25.5 ± 5.2 kg·m-2). Experimental results show stronger matching to the ground-truth blood pulse waveform signal compared to the FaceMeanPPG (p < 0.001) and DistancePPG (p < 0.001) methods. Heart rates predicted using FusionPPG correlated strongly with ground truth measurements (r2 = 0.9952). A cardiac arrhythmia was visually identified in FusionPPG's waveform via temporal analysis.
46
Miri MS, Abràmoff MD, Kwon YH, Garvin MK. Multimodal registration of SD-OCT volumes and fundus photographs using histograms of oriented gradients. Biomed Opt Express 2016; 7:5252-5267. [PMID: 28018740 PMCID: PMC5175567 DOI: 10.1364/boe.7.005252] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Received: 07/20/2016] [Revised: 10/19/2016] [Accepted: 11/11/2016] [Indexed: 05/14/2023]
Abstract
With the availability of different retinal imaging modalities such as fundus photography and spectral-domain optical coherence tomography (SD-OCT), a robust and accurate registration scheme is needed to exploit this complementary information. The few existing fundus-OCT registration approaches contain a vessel segmentation step, as the retinal blood vessels are the most dominant structures common to the pair of images; however, errors in the vessel segmentation from either modality may cause corresponding errors in the registration. In this paper, we propose a feature-based method for registering fundus photographs to SD-OCT projection images that benefits from vascular structural information without requiring blood vessel segmentation. In particular, after a preprocessing step, a set of control points (CPs) is identified by looking for corners in the images. Next, each CP is represented by a feature vector that encodes local structural information by computing histograms of oriented gradients (HOG) over the neighborhood of each CP. The best-matching CPs are identified by the distance between their feature vectors. After removing incorrect matches, the best affine transform that registers the fundus photographs to the SD-OCT projection images is computed using the random sample consensus (RANSAC) method. The proposed method was tested on 44 pairs of fundus and SD-OCT projection images of glaucoma patients; it successfully registered the multimodal images and produced a registration error of 25.34 ± 12.34 μm (0.84 ± 0.41 pixels).
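The final step of the pipeline this abstract describes, fitting an affine transform to putative control-point matches with RANSAC, can be sketched on synthetic matches. Corner detection and HOG matching are omitted, and all point data, names, and tolerances below are illustrative:

```python
import math
import random

def affine_from_3pts(src, dst):
    """Exact affine map sending three source points to three target points.

    Solves a*x + b*y + c = t per output coordinate via Cramer's rule;
    returns None for (near-)collinear source points.
    """
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if abs(det) < 1e-9:
        return None
    def solve(t1, t2, t3):
        da = t1 * (y2 - y3) - y1 * (t2 - t3) + (t2 * y3 - t3 * y2)
        db = x1 * (t2 - t3) - t1 * (x2 - x3) + (x2 * t3 - x3 * t2)
        dc = x1 * (y2 * t3 - y3 * t2) - y1 * (x2 * t3 - x3 * t2) + t1 * (x2 * y3 - x3 * y2)
        return da / det, db / det, dc / det
    ax, bx, cx = solve(dst[0][0], dst[1][0], dst[2][0])
    ay, by, cy = solve(dst[0][1], dst[1][1], dst[2][1])
    return (ax, bx, cx, ay, by, cy)

def apply_affine(p, m):
    ax, bx, cx, ay, by, cy = m
    return (ax * p[0] + bx * p[1] + cx, ay * p[0] + by * p[1] + cy)

def ransac_affine(matches, n_iters=200, tol=1.0, seed=0):
    """Keep the affine model (fit to 3 random matches) with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        sample = rng.sample(matches, 3)
        model = affine_from_3pts([s for s, _ in sample], [d for _, d in sample])
        if model is None:
            continue
        inliers = []
        for s, d in matches:
            px, py = apply_affine(s, model)
            if math.hypot(px - d[0], py - d[1]) < tol:
                inliers.append((s, d))
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Synthetic matches: 12 exact correspondences under a known affine, plus 3 outliers.
TRUE_MODEL = (1.05, 0.1, 3.0, -0.1, 1.05, -2.0)
points = [(float(i * 7 % 50), float(i * 13 % 40)) for i in range(12)]
matches = [(p, apply_affine(p, TRUE_MODEL)) for p in points]
matches += [((5.0, 5.0), (40.0, 40.0)), ((20.0, 1.0), (0.0, 30.0)), ((9.0, 9.0), (9.0, 0.0))]
model, inliers = ransac_affine(matches)
```

Because the gross outliers never agree with an affine fit to three correct correspondences, the consensus model recovers the true transform while discarding them, which is exactly the robustness property the paper relies on after HOG matching.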
Affiliation(s)
- Mohammad Saleh Miri
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Michael D. Abràmoff
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Department of Ophthalmology and Visual Sciences, The University of Iowa, Iowa City, IA 52242, USA
- Iowa City VA Health Care System, Iowa City, IA 52246, USA
- Young H. Kwon
- Department of Ophthalmology and Visual Sciences, The University of Iowa, Iowa City, IA 52242, USA
- Mona K. Garvin
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Iowa City VA Health Care System, Iowa City, IA 52246, USA
47
Rico-Jimenez JJ, Campos-Delgado DU, Villiger M, Otsuka K, Bouma BE, Jo JA. Automatic classification of atherosclerotic plaques imaged with intravascular OCT. Biomed Opt Express 2016; 7:4069-4085. [PMID: 27867716 PMCID: PMC5102521 DOI: 10.1364/boe.7.004069] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Received: 07/26/2016] [Revised: 09/09/2016] [Accepted: 09/12/2016] [Indexed: 05/24/2023]
Abstract
Intravascular optical coherence tomography (IV-OCT) allows evaluation of atherosclerotic plaques; however, plaque characterization is performed by visual assessment and requires a trained expert for interpretation of the large data sets. Here, we present a novel computational method for automated IV-OCT plaque characterization. This method is based on the modeling of each A-line of an IV-OCT data set as a linear combination of a number of depth profiles. After estimating these depth profiles by means of an alternating least square optimization strategy, they are automatically classified to predefined tissue types based on their morphological characteristics. The performance of our proposed method was evaluated with IV-OCT scans of cadaveric human coronary arteries and corresponding tissue histopathology. Our results suggest that this methodology allows automated identification of fibrotic and lipid-containing plaques. Moreover, this novel computational method has the potential to enable high throughput atherosclerotic plaque characterization.
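The per-A-line linear mixing model this abstract describes can be sketched with a minimal two-profile alternating least squares, where each half-step is a closed-form solve. This omits the paper's constraints and the morphological classification of the recovered profiles, and the exponential depth profiles are purely illustrative:

```python
import math
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix (assumed non-singular)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def als_decompose(X, n_iters=20, seed=0):
    """Alternating least squares for X ~= A @ P with two shared depth profiles.

    A (m x 2) holds per-A-line abundances, P (2 x n) the profiles; each
    half-step is a closed-form least-squares solve. The paper's method adds
    constraints and then classifies the recovered profiles morphologically,
    both of which this sketch omits.
    """
    rng = random.Random(seed)
    A = [[rng.random() + 0.1 for _ in range(2)] for _ in range(len(X))]
    for _ in range(n_iters):
        At = transpose(A)
        P = matmul(inv2(matmul(At, A)), matmul(At, X))   # fix A, solve for P
        Pt = transpose(P)
        A = matmul(matmul(X, Pt), inv2(matmul(P, Pt)))   # fix P, solve for A
    return A, P

# Synthetic A-lines mixed from two exponential depth profiles (illustrative).
p1 = [math.exp(-d / 5.0) for d in range(30)]
p2 = [math.exp(-d / 15.0) for d in range(30)]
mix_rng = random.Random(1)
A_true = [[mix_rng.random(), mix_rng.random()] for _ in range(40)]
X = matmul(A_true, [p1, p2])
A_est, P_est = als_decompose(X)
X_hat = matmul(A_est, P_est)
err = max(abs(X[i][j] - X_hat[i][j]) for i in range(40) for j in range(30))
```

On exactly rank-2 data the reconstruction converges almost immediately, since the first half-step already places the estimated profiles in the true two-dimensional row space; real IV-OCT data is noisy and needs the additional structure the paper imposes.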
Affiliation(s)
- Jose J. Rico-Jimenez
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- Martin Villiger
- Wellman Center for Photomedicine, Harvard Medical School and Massachusetts General Hospital, Boston, MA, USA
- Kenichiro Otsuka
- Wellman Center for Photomedicine, Harvard Medical School and Massachusetts General Hospital, Boston, MA, USA
- Brett E. Bouma
- Wellman Center for Photomedicine, Harvard Medical School and Massachusetts General Hospital, Boston, MA, USA
- Javier A. Jo
- Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
48
Nylk J, McCluskey K, Aggarwal S, Tello JA, Dholakia K. Enhancement of image quality and imaging depth with Airy light-sheet microscopy in cleared and non-cleared neural tissue. Biomed Opt Express 2016; 7:4021-4033. [PMID: 27867712 PMCID: PMC5102539 DOI: 10.1364/boe.7.004021] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Received: 07/11/2016] [Revised: 09/02/2016] [Accepted: 09/05/2016] [Indexed: 05/07/2023]
Abstract
We have investigated the effect of Airy illumination on the image quality and depth penetration of digitally scanned light-sheet microscopy in turbid neural tissue. We used Fourier analysis of images acquired using Gaussian and Airy light-sheets to assess their respective image quality versus penetration into the tissue. We observed a three-fold average improvement in image quality at 50 μm depth with the Airy light-sheet. We also used optical clearing to tune the scattering properties of the tissue and found that the improvement when using an Airy light-sheet is greater in the presence of stronger sample-induced aberrations. Finally, we used homogeneous resolution probes in these tissues to quantify absolute depth penetration in cleared samples with each beam type. The Airy light-sheet method extended depth penetration by 30% compared to a Gaussian light-sheet.
Affiliation(s)
- Jonathan Nylk
- SUPA, School of Physics and Astronomy, University of St. Andrews, St. Andrews, KY16 9SS, UK
- Kaley McCluskey
- SUPA, School of Physics and Astronomy, University of St. Andrews, St. Andrews, KY16 9SS, UK
- Sanya Aggarwal
- School of Medicine, University of St. Andrews, St. Andrews, KY16 9TF, UK
- Javier A. Tello
- School of Medicine, University of St. Andrews, St. Andrews, KY16 9TF, UK
- Kishan Dholakia
- SUPA, School of Physics and Astronomy, University of St. Andrews, St. Andrews, KY16 9SS, UK
49
Nguyen HD, Hong KS. Bundled-optode implementation for 3D imaging in functional near-infrared spectroscopy. Biomed Opt Express 2016; 7:3491-3507. [PMID: 27699115 PMCID: PMC5030027 DOI: 10.1364/boe.7.003491] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Received: 06/13/2016] [Revised: 08/04/2016] [Accepted: 08/10/2016] [Indexed: 05/03/2023]
Abstract
The paper presents a functional near-infrared spectroscopy (fNIRS)-based bundled-optode method for detecting changes in oxy-hemoglobin (HbO) and deoxy-hemoglobin (HbR) concentrations. fNIRS with 32 optodes is used to measure five healthy male subjects' brain-hemodynamic responses to arithmetic tasks. Specifically, the coordinates of 256 voxels in the three-dimensional (3D) volume are computed according to the known probe geometry. The mean path length factor in the Beer-Lambert equation is estimated as a function of the emitter-detector distance and used to compute the absorption coefficient. The mean values of HbO and HbR obtained from the absorption coefficient are then used to construct a 3D fNIRS image. Our results show that the proposed method, compared with the conventional approach, can detect brain activity with higher spatial resolution. The method can be extended to 3D fNIRS imaging in real-time applications.
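Per channel, the HbO/HbR computation this abstract outlines reduces to inverting a 2x2 modified Beer-Lambert system, one equation per wavelength. A sketch with made-up extinction coefficients and path-length factors (real values are wavelength-specific and tabulated, and the paper additionally estimates the path-length factor from emitter-detector distance):

```python
def delta_hb(dod_w1, dod_w2, eps_w1, eps_w2, distance_cm, dpf_w1, dpf_w2):
    """Invert the 2x2 modified Beer-Lambert system for (dHbO, dHbR).

    Model per wavelength w:
        dOD(w) = [eps_HbO(w) * dHbO + eps_HbR(w) * dHbR] * distance * DPF(w)
    where eps_w* = (eps_HbO, eps_HbR) and dod_w* is the measured
    optical-density change.
    """
    a11, a12 = eps_w1[0] * distance_cm * dpf_w1, eps_w1[1] * distance_cm * dpf_w1
    a21, a22 = eps_w2[0] * distance_cm * dpf_w2, eps_w2[1] * distance_cm * dpf_w2
    det = a11 * a22 - a12 * a21
    d_hbo = (dod_w1 * a22 - a12 * dod_w2) / det
    d_hbr = (a11 * dod_w2 - a21 * dod_w1) / det
    return d_hbo, d_hbr

# Illustrative (made-up) extinction coefficients (eps_HbO, eps_HbR) at two
# wavelengths, plus path length and DPFs; real values are tabulated.
EPS_760, EPS_850 = (0.59, 1.67), (1.10, 0.78)
DIST_CM, DPF_760, DPF_850 = 3.0, 6.0, 5.2

# Round trip: synthesize dOD from known concentration changes, then invert.
true_hbo, true_hbr = 0.8, -0.3
dod1 = (EPS_760[0] * true_hbo + EPS_760[1] * true_hbr) * DIST_CM * DPF_760
dod2 = (EPS_850[0] * true_hbo + EPS_850[1] * true_hbr) * DIST_CM * DPF_850
hbo, hbr = delta_hb(dod1, dod2, EPS_760, EPS_850, DIST_CM, DPF_760, DPF_850)
```

The round trip recovers the known concentration changes exactly, confirming the inversion; repeating this for every voxel is what assembles the 3D image the paper describes.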
Affiliation(s)
- Hoang-Dung Nguyen
- Department of Cogno-Mechatronics Engineering, Pusan National University, 2 Busandaehak-ro, Geumjeong-gu, Busan 46241, South Korea
- Keum-Shik Hong
- Department of Cogno-Mechatronics Engineering, Pusan National University, 2 Busandaehak-ro, Geumjeong-gu, Busan 46241, South Korea
- School of Mechanical Engineering, Pusan National University, 2 Busandaehak-ro, Geumjeong-gu, Busan 46241, South Korea
50
Alexander NS, Palczewska G, Stremplewski P, Wojtkowski M, Kern TS, Palczewski K. Image registration and averaging of low laser power two-photon fluorescence images of mouse retina. Biomed Opt Express 2016; 7:2671-91. [PMID: 27446697 PMCID: PMC4948621 DOI: 10.1364/boe.7.002671] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Received: 04/05/2016] [Revised: 06/11/2016] [Accepted: 06/11/2016] [Indexed: 05/18/2023]
Abstract
Two-photon fluorescence microscopy (TPM) is now being used routinely to image live cells for extended periods deep within tissues, including the retina and other structures within the eye. However, very low laser power is a requirement for obtaining TPM images of the retina safely. Unfortunately, a reduction in laser power also reduces the signal-to-noise ratio of the collected images, making it difficult to visualize structural details. Here, image registration and averaging methods applied to TPM images of the eye in living animals (without the need for auxiliary hardware) demonstrate the structural information obtainable with laser power down to 1 mW. Image registration provided between 1.4% and 13.0% improvement in image quality compared to averaging images without registration when using a high-fluorescence template, and between 0.2% and 12.0% when employing the average of the collected images as the template. A diminishing return in image quality as more images are used for averaging is also shown. This work provides a foundation for obtaining informative TPM images at laser powers of 1 mW, compared with previously reported levels for imaging mice of 6.3 mW and above [Palczewska G., Nat. Med. 20, 785 (2014); Sharma R., Biomed. Opt. Express 4, 1285 (2013)].
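The register-then-average idea in this abstract can be sketched in one dimension: estimate each frame's shift against a template by cross-correlation, undo it, and average. Real TPM frames are two-dimensional and benefit from sub-pixel registration, but the principle is the same; all signals below are synthetic:

```python
import math
import random

def best_shift(ref, sig, max_shift=10):
    """Integer shift of `sig` that maximizes its correlation with `ref`."""
    def corr(shift):
        return sum(ref[i] * sig[i + shift] for i in range(len(ref))
                   if 0 <= i + shift < len(sig))
    return max(range(-max_shift, max_shift + 1), key=corr)

def register_and_average(frames, template):
    """Align every frame to the template by its best shift, then average.

    Averaging aligned frames raises SNR roughly with the square root of the
    frame count, which is the diminishing return the abstract reports.
    """
    n = len(template)
    acc = [0.0] * n
    for f in frames:
        s = best_shift(template, f)
        for i in range(n):
            acc[i] += f[i + s] if 0 <= i + s < len(f) else 0.0
    return [v / len(frames) for v in acc]

# Synthetic 1-D "frames": a Gaussian pulse at index 20, jittered and noisy.
rng = random.Random(1)
ref = [10.0 * math.exp(-((i - 20) ** 2) / 8.0) for i in range(50)]
frames = [[(ref[i - d] if 0 <= i - d < 50 else 0.0) + rng.uniform(-0.5, 0.5)
           for i in range(50)]
          for d in (-3, -1, 0, 1, 2, 3, -2, 1)]
avg = register_and_average(frames, ref)
peak_index = avg.index(max(avg))
```

Here a clean pulse serves as the high-fluorescence template; the paper's second variant, using the running average of collected frames as the template, would replace `ref` with that average.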
Affiliation(s)
- Nathan S Alexander
- Department of Pharmacology, Cleveland Center for Membrane and Structural Biology, School of Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
- Patrycjusz Stremplewski
- Faculty of Physics, Astronomy and Informatics, Institute of Physics, Nicolaus Copernicus University, 87-100 Torun, Poland
- Maciej Wojtkowski
- Faculty of Physics, Astronomy and Informatics, Institute of Physics, Nicolaus Copernicus University, 87-100 Torun, Poland
- Timothy S Kern
- Department of Pharmacology, Cleveland Center for Membrane and Structural Biology, School of Medicine, Case Western Reserve University, Cleveland, OH 44106, USA
- Krzysztof Palczewski
- Department of Pharmacology, Cleveland Center for Membrane and Structural Biology, School of Medicine, Case Western Reserve University, Cleveland, OH 44106, USA; Polgenix Inc., 11000 Cedar Ave, Cleveland, Ohio 44106, USA