1. Durr NJ, Larson T, Smith DK, Korgel BA, Sokolov K, Ben-Yakar A. Two-photon luminescence imaging of cancer cells using molecularly targeted gold nanorods. Nano Lett 2007;7:941-5. [PMID: 17335272] [PMCID: PMC2743599] [DOI: 10.1021/nl062962v] [Citation(s) in RCA: 521]
Abstract
We demonstrate the use of gold nanorods as bright contrast agents for two-photon luminescence (TPL) imaging of cancer cells in a three-dimensional tissue phantom down to 75 μm deep. The TPL intensity from gold-nanorod-labeled cancer cells is three orders of magnitude brighter than the two-photon autofluorescence (TPAF) emission from unlabeled cancer cells under 760 nm excitation. Their strong signal, resistance to photobleaching, chemical stability, ease of synthesis, simple conjugation chemistry, and biocompatibility make gold nanorods an attractive contrast agent for two-photon imaging of epithelial cancer.
2. Guo SX, Bourgeois F, Chokshi T, Durr NJ, Hilliard MA, Chronis N, Ben-Yakar A. Femtosecond laser nanoaxotomy lab-on-a-chip for in vivo nerve regeneration studies. Nat Methods 2008;5:531-3. [PMID: 18408725] [PMCID: PMC3143684] [DOI: 10.1038/nmeth.1203] [Citation(s) in RCA: 143]
Abstract
A thorough understanding of nerve regeneration in Caenorhabditis elegans requires performing femtosecond laser nanoaxotomy while minimally affecting the worm. We present a microfluidic device that fulfills such criteria and can easily be automated to enable high-throughput genetic and pharmacological screenings. Using the 'nanoaxotomy' chip, we discovered that axonal regeneration occurs much faster than previously described, and notably, the distal fragment of the severed axon regrows in the absence of anesthetics.
3. Mahmood F, Borders D, Chen RJ, McKay GN, Salimian KJ, Baras A, Durr NJ. Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images. IEEE Trans Med Imaging 2020;39:3257-3267. [PMID: 31283474] [PMCID: PMC8588951] [DOI: 10.1109/tmi.2019.2927182] [Citation(s) in RCA: 132]
Abstract
Nuclei segmentation is a fundamental task for various computational pathology applications, including nuclei morphology analysis, cell type classification, and cancer grading. Deep learning has emerged as a powerful approach to segmenting nuclei, but the accuracy of convolutional neural networks (CNNs) depends on the volume and quality of labeled histopathology data available for training. In particular, conventional CNN-based approaches lack the structured prediction capabilities required to distinguish overlapping and clumped nuclei. Here, we present an approach to nuclei segmentation that overcomes these challenges by utilizing a conditional generative adversarial network (cGAN) trained with synthetic and real data. We generate a large dataset of H&E training images with perfect nuclei segmentation labels using an unpaired GAN framework. This synthetic data, along with real histopathology data from six different organs, is used to train a conditional GAN with spectral normalization and gradient penalty for nuclei segmentation. This adversarial regression framework enforces higher-order spatial consistency compared to conventional CNN models. We demonstrate that this nuclei segmentation approach generalizes across different organs, sites, patients, and disease states, and outperforms conventional approaches, especially in isolating individual and overlapping nuclei.
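As a rough sketch of the adversarial training step this abstract describes (a critic with spectral normalization plus a gradient penalty), the following PyTorch fragment is illustrative only; the architectures, channel counts, and penalty weight are placeholder assumptions, not the authors' released code:

```python
import torch
import torch.nn as nn

# Minimal critic update for a conditional GAN with spectral normalization
# and a gradient penalty. All shapes and the weight 10.0 are placeholders.
D = nn.Sequential(
    nn.utils.spectral_norm(nn.Conv2d(4, 64, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    nn.utils.spectral_norm(nn.Conv2d(64, 1, 4, stride=2, padding=1)),
)

def gradient_penalty(image, real_mask, fake_mask):
    """Penalize deviation of the critic's gradient norm from 1 on
    random interpolates between real and generated masks."""
    eps = torch.rand(real_mask.size(0), 1, 1, 1)
    interp = (eps * real_mask + (1 - eps) * fake_mask).requires_grad_(True)
    score = D(torch.cat([image, interp], dim=1)).mean()
    grad, = torch.autograd.grad(score, interp, create_graph=True)
    return ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

image = torch.randn(2, 3, 64, 64)      # H&E patch (condition)
real_mask = torch.rand(2, 1, 64, 64)   # ground-truth nuclei map
fake_mask = torch.rand(2, 1, 64, 64)   # stand-in for generator output
d_loss = (D(torch.cat([image, fake_mask], dim=1)).mean()
          - D(torch.cat([image, real_mask], dim=1)).mean()
          + 10.0 * gradient_penalty(image, real_mask, fake_mask))
d_loss.backward()
```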
4. Mahmood F, Chen R, Durr NJ. Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training. IEEE Trans Med Imaging 2018;37:2572-2581. [PMID: 29993538] [DOI: 10.1109/tmi.2018.2842767] [Citation(s) in RCA: 107]
Abstract
To realize the full potential of deep learning for medical imaging, large annotated datasets are required for training. Such datasets are difficult to acquire due to privacy issues, lack of experts available for annotation, underrepresentation of rare conditions, and poor standardization. The lack of annotated data has been addressed in conventional vision applications using synthetic images refined via unsupervised adversarial training to look like real images. However, this approach is difficult to extend to general medical imaging because of the complex and diverse set of features found in real human tissues. We propose a novel framework that uses a reverse flow, where adversarial training is used to make real medical images more like synthetic images, and clinically relevant features are preserved via self-regularization. These domain-adapted synthetic-like images can then be accurately interpreted by networks trained on large datasets of synthetic medical images. We implement this approach on the notoriously difficult task of depth estimation from monocular endoscopy, which has a variety of applications in colonoscopy, robotic surgery, and invasive endoscopic procedures. We train a depth estimator on a large dataset of synthetic images generated using an accurate forward model of an endoscope and an anatomically realistic colon. Our analysis demonstrates that the structural similarity of endoscopy depth estimation in a real pig colon, predicted from a network trained solely on synthetic data, improved by 78.7% with reverse domain adaptation.
5. Gioux S, Mazhar A, Lee BT, Lin SJ, Tobias AM, Cuccia DJ, Stockdale A, Oketokoun R, Ashitate Y, Kelly E, Weinmann M, Durr NJ, Moffitt LA, Durkin AJ, Tromberg BJ, Frangioni JV. First-in-human pilot study of a spatial frequency domain oxygenation imaging system. J Biomed Opt 2011;16:086015. [PMID: 21895327] [PMCID: PMC3182084] [DOI: 10.1117/1.3614566] [Citation(s) in RCA: 105]
Abstract
Oxygenation measurements are widely used in patient care. However, most clinically available instruments currently consist of contact probes that only provide global monitoring of the patient (e.g., pulse oximetry probes) or local monitoring of small areas (e.g., spectroscopy-based probes). Visualization of oxygenation over large areas of tissue, without a priori knowledge of the location of defects, has the potential to improve patient management in many surgical and critical care applications. In this study, we present a clinically compatible multispectral spatial frequency domain imaging (SFDI) system optimized for surgical oxygenation imaging. This system was used to image tissue oxygenation over a large area (16×12 cm) and was validated during preclinical studies by comparing results obtained with an FDA-approved clinical oxygenation probe. Skin flap, bowel, and liver vascular occlusion experiments were performed on Yorkshire pigs and demonstrated that over the course of the experiment, relative changes in oxygen saturation measured using SFDI had an accuracy within 10% of those made using the FDA-approved device. Finally, the new SFDI system was translated to the clinic in a first-in-human pilot study that imaged skin flap oxygenation during reconstructive breast surgery. Overall, this study lays the foundation for clinical translation of endogenous contrast imaging using SFDI.
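For context, SFDI systems like the one described here typically recover the modulated reflectance with a standard three-phase demodulation step before inverting for optical properties and chromophore concentrations; a minimal sketch of that step (not this instrument's actual pipeline) is:

```python
import numpy as np

def sfdi_demodulate(i1, i2, i3):
    """Standard three-phase SFDI demodulation (phase offsets 0, 2*pi/3,
    4*pi/3). Returns per-pixel AC (modulated) and DC (planar) reflectance
    amplitudes; their calibrated values at several spatial frequencies
    are then inverted for absorption and reduced scattering."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc

# Example: three phase-shifted camera frames of the same scene.
frames = [np.random.rand(480, 640) for _ in range(3)]
ac, dc = sfdi_demodulate(*frames)
```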
6. Ozyoruk KB, Gokceler GI, Bobrow TL, Coskun G, Incetan K, Almalioglu Y, Mahmood F, Curto E, Perdigoto L, Oliveira M, Sahin H, Araujo H, Alexandrino H, Durr NJ, Gilbert HB, Turan M. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos. Med Image Anal 2021;71:102058. [PMID: 33930829] [DOI: 10.1016/j.media.2021.102058] [Citation(s) in RCA: 54]
Abstract
Deep learning techniques hold promise to develop dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, synthetically generated data, and recordings of a phantom colon acquired with a conventional endoscope in clinical use, with computed tomography (CT) scan ground truth. A Panda robotic arm, two commercially available capsule endoscopes, three conventional endoscopes with different camera properties, two high-precision 3D scanners, and a CT scanner were employed to collect data from eight ex vivo porcine gastrointestinal (GI) tract organs and a silicone colon phantom model. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex vivo part: 18 sub-datasets for the colon, 12 for the stomach, and 5 for the small intestine, four of which contain polyp-mimicking elevations created by an expert gastroenterologist. To verify the applicability of this data to real clinical systems, we recorded a video sequence of a full-representation silicone colon phantom with a state-of-the-art colonoscope. Synthetic capsule endoscopy frames from the stomach, colon, and small intestine with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module to direct the network's focus toward distinguishable, highly textured tissue regions. The proposed approach uses a brightness-aware photometric loss to improve robustness under the fast frame-to-frame illumination changes commonly seen in endoscopic videos. To exemplify the use of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state of the art: SC-SfMLearner, Monodepth2, and SfMLearner. The code and a link to the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible as Supplementary Video 1.
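The brightness-aware photometric loss mentioned above can take several forms; one simple illustrative variant (an assumption, not necessarily Endo-SfMLearner's exact formulation) aligns each frame's global brightness statistics before computing the photometric error:

```python
import torch

def brightness_aware_l1(pred, target):
    """Illustrative brightness-aware photometric term: align each
    predicted frame's global mean/std to the target before the L1
    error, so rapid global illumination changes are not penalized.
    An assumed variant, not necessarily the paper's exact loss."""
    mu_p = pred.mean(dim=(2, 3), keepdim=True)
    mu_t = target.mean(dim=(2, 3), keepdim=True)
    sd_p = pred.std(dim=(2, 3), keepdim=True) + 1e-6
    sd_t = target.std(dim=(2, 3), keepdim=True)
    aligned = (pred - mu_p) / sd_p * sd_t + mu_t
    return (aligned - target).abs().mean()

loss = brightness_aware_l1(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```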
7. Durr NJ, Weisspfennig CT, Holfeld BA, Ben-Yakar A. Maximum imaging depth of two-photon autofluorescence microscopy in epithelial tissues. J Biomed Opt 2011;16:026008. [PMID: 21361692] [PMCID: PMC3061332] [DOI: 10.1117/1.3548646] [Citation(s) in RCA: 45]
Abstract
Endogenous fluorescence provides morphological, spectral, and lifetime contrast that can indicate disease states in tissues. Previous studies have demonstrated that two-photon autofluorescence microscopy (2PAM) can be used for noninvasive, three-dimensional imaging of epithelial tissues down to approximately 150 μm beneath the skin surface. We report ex vivo 2PAM images of epithelial tissue from a human tongue biopsy down to 370 μm below the surface. At depths greater than 320 μm, fluorescence generated outside the focal volume degrades the image contrast to below one. We demonstrate that these imaging depths can be reached with 160 mW of laser power (2 nJ per pulse) from a conventional 80-MHz repetition rate ultrafast laser oscillator. To better understand the maximum imaging depths achievable in epithelial tissues, we studied image contrast as a function of depth in tissue phantoms with a range of relevant optical properties. The phantom data agree well with contrast decays estimated from time-resolved Monte Carlo simulations and show maximum imaging depths similar to those found in the human biopsy. This work demonstrates that the low staining inhomogeneity (∼20) and large scattering coefficient (∼10 mm⁻¹) associated with conventional 2PAM limit the maximum imaging depth to 3 to 5 mean free scattering lengths in epithelial tissue.
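The closing depth limit can be sanity-checked with simple Beer-Lambert attenuation of the ballistic excitation light (two-photon signal falls off roughly as exp(-2·z·μs)), using the scattering coefficient quoted above; this is a back-of-the-envelope check, not the paper's Monte Carlo model:

```python
import numpy as np

mu_s = 10.0    # mm^-1, scattering coefficient quoted in the abstract
z = 0.370      # mm, deepest tongue-biopsy image reported
print(z * mu_s)               # ~3.7 mean free scattering lengths
print(np.exp(-2 * z * mu_s))  # ~6e-4: surviving two-photon signal fraction
```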
8. Hoy CL, Durr NJ, Chen P, Piyawattanametha W, Ra H, Solgaard O, Ben-Yakar A. Miniaturized probe for femtosecond laser microsurgery and two-photon imaging. Opt Express 2008;16:9996-10005. [PMID: 18575570] [PMCID: PMC3143712] [DOI: 10.1364/oe.16.009996] [Citation(s) in RCA: 42]
Abstract
Combined two-photon fluorescence microscopy and femtosecond laser microsurgery has many potential biomedical applications as a powerful "seek-and-treat" tool. Toward developing such a tool, we demonstrate a miniaturized probe that combines these techniques in a compact housing. The device is 10 × 15 × 40 mm³ in size and uses an air-core photonic crystal fiber to deliver femtosecond laser pulses at an 80 MHz repetition rate for imaging and 1 kHz for microsurgery. A fast two-axis microelectromechanical system scanning mirror is driven at resonance to produce Lissajous beam scanning at 10 frames per second. The field of view is 310 μm in diameter, and the lateral and axial resolutions are 1.64 μm and 16.4 μm, respectively. Combined imaging and microsurgery is demonstrated on live cancer cells.
9. Woods RW, Camp MS, Durr NJ, Harvey SC. A Review of Options for Localization of Axillary Lymph Nodes in the Treatment of Invasive Breast Cancer. Acad Radiol 2019;26:805-819. [PMID: 30143401] [DOI: 10.1016/j.acra.2018.07.002] [Citation(s) in RCA: 41]
Abstract
Invasive breast cancer is a common disease, and the most common initial site of metastatic disease is the axillary lymph nodes. As the standard of care shifts toward less invasive axillary surgery for patients with invasive breast cancer, techniques have been developed for axillary node localization that allow targeted dissection of specific lymph nodes without requiring full axillary lymph node dissection. Many of these techniques have been adapted from technologies developed for localization of lesions within the breast, and include marker clip placement with intraoperative ultrasound, carbon-suspension liquids, localization wires, radioactive seeds, magnetic seeds, radar reflectors, and radiofrequency identification devices. The purpose of this article is to summarize these methods and describe the benefits and drawbacks of each for localizing lymph nodes in the axilla.
10. İncetan K, Celik IO, Obeid A, Gokceler GI, Ozyoruk KB, Almalioglu Y, Chen RJ, Mahmood F, Gilbert H, Durr NJ, Turan M. VR-Caps: A Virtual Environment for Capsule Endoscopy. Med Image Anal 2021;70:101990. [PMID: 33609920] [DOI: 10.1016/j.media.2021.101990] [Citation(s) in RCA: 27]
Abstract
Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate sophisticated software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization, and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many of these advanced functionalities, but real-world data is challenging to obtain. Physically realistic simulations providing synthetic data have emerged as a solution to the development of data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet), varied organ types, capsule endoscope designs (e.g., mono, stereo, dual, and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to develop, optimize, and test medical imaging and analysis software, independently or jointly, for current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification. All of the code, pre-trained weights, and 3D organ models of the virtual environment, with detailed instructions on how to set up and use the environment, are publicly available at https://github.com/CapsuleEndoscope/VirtualCapsuleEndoscopy, and a video demonstration can be seen in the supplementary videos (Video-I).
11. Durr NJ, Dave SR, Lage E, Marcos S, Thorn F, Lim D. From Unseen to Seen: Tackling the Global Burden of Uncorrected Refractive Errors. Annu Rev Biomed Eng 2014;16:131-53. [DOI: 10.1146/annurev-bioeng-071813-105216] [Citation(s) in RCA: 25]
12. Mahmood F, Chen R, Sudarsky S, Yu D, Durr NJ. Deep learning with cinematic rendering: fine-tuning deep neural networks using photorealistic medical images. Phys Med Biol 2018;63:185012. [PMID: 30113015] [DOI: 10.1088/1361-6560/aada93] [Citation(s) in RCA: 24]
Abstract
Deep learning has emerged as a powerful artificial intelligence tool to interpret medical images for a growing variety of applications. However, the paucity of medical imaging data with high-quality annotations that is necessary for training such methods ultimately limits their performance. Medical data is challenging to acquire due to privacy issues, shortage of experts available for annotation, limited representation of rare conditions and cost. This problem has previously been addressed by using synthetically generated data. However, networks trained on synthetic data often fail to generalize to real data. Cinematic rendering simulates the propagation and interaction of light passing through tissue models reconstructed from CT data, enabling the generation of photorealistic images. In this paper, we present one of the first applications of cinematic rendering in deep learning, in which we propose to fine-tune synthetic data-driven networks using cinematically rendered CT data for the task of monocular depth estimation in endoscopy. Our experiments demonstrate that: (a) convolutional neural networks (CNNs) trained on synthetic data and fine-tuned on photorealistic cinematically rendered data adapt better to real medical images and demonstrate more robust performance when compared to networks with no fine-tuning, (b) these fine-tuned networks require less training data to converge to an optimal solution, and (c) fine-tuning with data from a variety of photorealistic rendering conditions of the same scene prevents the network from learning patient-specific information and aids in generalizability of the model. Our empirical evaluation demonstrates that networks fine-tuned with cinematically rendered data predict depth with 56.87% less error for rendered endoscopy images and 27.49% less error for real porcine colon endoscopy images.
13. Chen MT, Mahmood F, Sweer JA, Durr NJ. GANPOP: Generative Adversarial Network Prediction of Optical Properties From Single Snapshot Wide-Field Images. IEEE Trans Med Imaging 2020;39:1988-1999. [PMID: 31899416] [PMCID: PMC8314791] [DOI: 10.1109/tmi.2019.2962786] [Citation(s) in RCA: 22]
Abstract
We present a deep learning framework for wide-field, content-aware estimation of absorption and scattering coefficients of tissues, called Generative Adversarial Network Prediction of Optical Properties (GANPOP). Spatial frequency domain imaging is used to obtain ground-truth optical properties at 660 nm from in vivo human hands and feet, freshly resected human esophagectomy samples, and homogeneous tissue phantoms. Images of objects with either flat-field or structured illumination are paired with registered optical property maps and are used to train conditional generative adversarial networks that estimate optical properties from a single input image. We benchmark this approach by comparing GANPOP to a single-snapshot optical property (SSOP) technique, using a normalized mean absolute error (NMAE) metric. In human gastrointestinal specimens, GANPOP with a single structured-light input image estimates the reduced scattering and absorption coefficients with 60% higher accuracy than SSOP while GANPOP with a single flat-field illumination image achieves similar accuracy to SSOP. When applied to both in vivo and ex vivo swine tissues, a GANPOP model trained solely on structured-illumination images of human specimens and phantoms estimates optical properties with approximately 46% improvement over SSOP, indicating adaptability to new, unseen tissue types. Given a training set that appropriately spans the target domain, GANPOP has the potential to enable rapid and accurate wide-field measurements of optical properties.
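The NMAE benchmark metric is straightforward; a common definition (the paper's exact normalization may differ) is:

```python
import numpy as np

def nmae(pred, truth):
    """Normalized mean absolute error between an estimated optical
    property map and ground truth (a common definition; the paper's
    exact normalization may differ)."""
    return np.mean(np.abs(pred - truth)) / np.mean(np.abs(truth))
```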
14. Parot V, Lim D, González G, Traverso G, Nishioka NS, Vakoc BJ, Durr NJ. Photometric stereo endoscopy. J Biomed Opt 2013;18:076017. [PMID: 23864015] [PMCID: PMC4407669] [DOI: 10.1117/1.jbo.18.7.076017] [Citation(s) in RCA: 19]
Abstract
While color video endoscopy has enabled wide-field examination of the gastrointestinal tract, it often misses or incorrectly classifies lesions. Many of these missed lesions exhibit characteristic three-dimensional surface topographies. An endoscopic system that adds topographical measurements to conventional color imagery could therefore increase lesion detection and improve classification accuracy. We introduce photometric stereo endoscopy (PSE), a technique which allows high spatial frequency components of surface topography to be acquired simultaneously with conventional two-dimensional color imagery. We implement this technique in an endoscopic form factor and demonstrate that it can acquire the topography of small features with complex geometries and heterogeneous optical properties. PSE imaging of ex vivo human gastrointestinal tissue shows that surface topography measurements enable differentiation of abnormal shapes from surrounding normal tissue. Together, these results confirm that the topographical measurements can be obtained with relatively simple hardware in an endoscopic form factor, and suggest the potential of PSE to improve lesion detection and classification in gastrointestinal imaging.
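PSE builds on classic Lambertian photometric stereo, in which per-pixel surface normals are recovered from images taken under different illumination directions; a minimal sketch of that underlying recovery step (the principle, not the endoscopic implementation) is:

```python
import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Recover surface normals from images under k light directions,
    assuming a Lambertian surface: I = L @ (albedo * n).

    images:     (k, h, w) intensity images
    light_dirs: (k, 3) unit illumination vectors
    """
    k, h, w = images.shape
    i = images.reshape(k, -1)                           # stack pixels
    g, *_ = np.linalg.lstsq(light_dirs, i, rcond=None)  # solve L g = I
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)              # unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```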
15. Hoy CL, Durr NJ, Ben-Yakar A. Fast-updating and nonrepeating Lissajous image reconstruction method for capturing increased dynamic information. Appl Opt 2011;50:2376-82. [PMID: 21629316] [DOI: 10.1364/ao.50.002376] [Citation(s) in RCA: 18]
Abstract
We present a fast-updating Lissajous image reconstruction methodology that uses an increased image frame rate beyond the pattern repeat rate generally used in conventional Lissajous image reconstruction methods. The fast display rate provides increased dynamic information and reduced motion blur, as compared to conventional Lissajous reconstruction, at the cost of single-frame pixel density. Importantly, this method does not discard any information from the conventional Lissajous image reconstruction, and frames from the complete Lissajous pattern can be displayed simultaneously. We present the theoretical background for this image reconstruction methodology along with images and video taken using the algorithm in a custom-built miniaturized multiphoton microscopy system.
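The core idea, binning samples from a slowly repeating Lissajous trajectory into frames displayed faster than the pattern repeat rate, can be sketched as follows; the frequencies, sample rate, and grid size are illustrative, not the paper's values:

```python
import numpy as np

# With fx = 1000 Hz and fy = 1001 Hz the full pattern repeats at 1 Hz,
# but frames are binned and displayed at 10 fps (faster than the repeat
# rate), each frame using a tenth of the full pattern.
fx, fy = 1000.0, 1001.0
rate = 2_000_000                      # detector sample rate (Hz)
n = rate // 10                        # samples per displayed frame (0.1 s)
t = np.arange(n) / rate
x = np.sin(2 * np.pi * fx * t)        # Lissajous trajectory
y = np.sin(2 * np.pi * fy * t)
signal = np.random.rand(n)            # stand-in for detected intensities

px = ((x + 1) / 2 * 255).astype(int)  # map [-1, 1] onto a 256-pixel grid
py = ((y + 1) / 2 * 255).astype(int)
frame = np.zeros((256, 256))
hits = np.zeros((256, 256))
np.add.at(frame, (py, px), signal)    # accumulate samples per pixel
np.add.at(hits, (py, px), 1)
frame = np.divide(frame, hits, out=np.zeros_like(frame), where=hits > 0)
```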
16. Bobrow TL, Mahmood F, Inserni M, Durr NJ. DeepLSR: a deep learning approach for laser speckle reduction. Biomed Opt Express 2019;10:2869-2882. [PMID: 31259057] [PMCID: PMC6583356] [DOI: 10.1364/boe.10.002869] [Citation(s) in RCA: 17]
Abstract
Speckle artifacts degrade image quality in virtually all modalities that utilize coherent energy, including optical coherence tomography, reflectance confocal microscopy, ultrasound, and widefield imaging with laser illumination. We present an adversarial deep learning framework for laser speckle reduction, called DeepLSR (https://durr.jhu.edu/DeepLSR), that transforms images from a source domain of coherent illumination to a target domain of speckle-free, incoherent illumination. We apply this method to widefield images of objects and tissues illuminated with a multi-wavelength laser, using light emitting diode-illuminated images as ground truth. In images of gastrointestinal tissues, DeepLSR reduces laser speckle noise by 6.4 dB, compared to a 2.9 dB reduction from optimized non-local means processing, a 3.0 dB reduction from BM3D, and a 3.7 dB reduction from an optical speckle reducer utilizing an oscillating diffuser. Further, DeepLSR can be combined with optical speckle reduction to reduce speckle noise by 9.4 dB. This dramatic reduction in speckle noise may enable the use of coherent light sources in applications that require small illumination sources and high-quality imaging, including medical endoscopy.
17. Durr NJ, Dave SR, Lim D, Joseph S, Ravilla TD, Lage E. Quality of eyeglass prescriptions from a low-cost wavefront autorefractor evaluated in rural India: results of a 708-participant field study. BMJ Open Ophthalmol 2019;4:e000225. [PMID: 31276029] [PMCID: PMC6579572] [DOI: 10.1136/bmjophth-2018-000225] [Citation(s) in RCA: 15]
Abstract
Objective: To assess the quality of eyeglass prescriptions provided by an affordable wavefront autorefractor operated by a minimally trained technician in a low-resource setting.
Methods and Analysis: 708 participants were recruited from consecutive patients registered for routine eye examinations at Aravind Eye Hospital in Madurai, India, or an affiliated rural satellite vision centre. Visual acuity (VA) and patient preference were compared between trial lenses set to two eyeglass prescriptions from (1) a novel wavefront autorefractor and (2) subjective refraction by an experienced refractionist.
Results: The mean±SD VA was 0.30±0.37, −0.02±0.14, and −0.04±0.11 logarithm of the minimum angle of resolution (logMAR) units before correction, with autorefractor correction, and with subjective refraction correction, respectively (all differences p<0.01). Overall, 25% of participants had no preference, 33% preferred eyeglass prescriptions from autorefraction, and 42% preferred eyeglass prescriptions from subjective refraction (p<0.01). Of the 438 patients 40 years old and younger, 96 had no preference, and the remainder showed no statistically significant difference in preference for subjective refraction prescriptions (51%) versus autorefractor prescriptions (49%) (p=0.52).
Conclusion: Average VAs from autorefractor-prescribed eyeglasses were one letter worse than those from subjective refraction. More than half of all participants either had no preference or preferred eyeglasses prescribed by the autorefractor. This marginal difference in quality may warrant autorefractor-based prescriptions, given the portable form factor, short measurement time, low cost, and minimal training required to use the autorefractor evaluated here.
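The "one letter" figure in the conclusion follows directly from the logMAR means, since each ETDRS letter corresponds to 0.02 logMAR (5 letters per 0.1 logMAR line):

```python
va_auto, va_subj = -0.02, -0.04       # mean logMAR with each correction
letters = (va_auto - va_subj) / 0.02  # 0.02 logMAR per ETDRS letter
print(letters)                        # 1.0 letter worse with autorefractor
```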
18. Almalioglu Y, Bengisu Ozyoruk K, Gokce A, Incetan K, Irem Gokceler G, Ali Simsek M, Ararat K, Chen RJ, Durr NJ, Mahmood F, Turan M. EndoL2H: Deep Super-Resolution for Capsule Endoscopy. IEEE Trans Med Imaging 2020;39:4297-4309. [PMID: 32795966] [DOI: 10.1109/tmi.2020.3016744] [Citation(s) in RCA: 13]
Abstract
Although wireless capsule endoscopy is the preferred modality for diagnosis and assessment of small bowel diseases, its poor camera resolution is a substantial limitation for both subjective and automated diagnostics. Enhanced-resolution endoscopy has been shown to improve the adenoma detection rate for conventional endoscopy and is likely to do the same for capsule endoscopy. In this work, we propose and quantitatively validate a novel framework to learn a mapping from low- to high-resolution endoscopic images. We combine conditional adversarial networks with a spatial attention block to improve resolution by factors of 8×, 10×, and 12×. Quantitative and qualitative studies demonstrate the superiority of EndoL2H over the state-of-the-art deep super-resolution methods Deep Back-Projection Networks (DBPN), Deep Residual Channel Attention Networks (RCAN), and Super-Resolution Generative Adversarial Network (SRGAN). Mean Opinion Score (MOS) tests were performed by 30 gastroenterologists to qualitatively assess and confirm the clinical relevance of the approach. EndoL2H is generally applicable to any endoscopic capsule system and has the potential to improve diagnosis and better harness computational approaches for polyp detection and characterization. Our code and trained models are available at https://github.com/CapsuleEndoscope/EndoL2H.
19. Chen MT, Durr NJ. Rapid tissue oxygenation mapping from snapshot structured-light images with adversarial deep learning. J Biomed Opt 2020;25:112907. [PMID: 33251783] [PMCID: PMC7701163] [DOI: 10.1117/1.jbo.25.11.112907] [Citation(s) in RCA: 12]
Abstract
Significance: Spatial frequency-domain imaging (SFDI) is a powerful technique for mapping tissue oxygen saturation over a wide field of view. However, current SFDI methods either require a sequence of several images with different illumination patterns or, in the case of single-snapshot optical properties (SSOP), introduce artifacts and sacrifice accuracy.
Aim: We introduce OxyGAN, a data-driven, content-aware method to estimate tissue oxygenation directly from single structured-light images.
Approach: OxyGAN is an end-to-end approach that uses supervised generative adversarial networks. Conventional SFDI is used to obtain ground-truth tissue oxygenation maps for ex vivo human esophagi, in vivo hands and feet, and an in vivo pig colon sample under 659- and 851-nm sinusoidal illumination. We benchmark OxyGAN by comparing it with SSOP and with a two-step hybrid technique that uses a previously developed deep learning model to predict optical properties followed by a physical model to calculate tissue oxygenation.
Results: When tested on human feet, cross-validated OxyGAN maps tissue oxygenation with an accuracy of 96.5%. When applied to sample types not included in the training set, such as human hands and pig colon, OxyGAN achieves a 93% accuracy, demonstrating robustness to various tissue types. On average, OxyGAN outperforms SSOP and the hybrid model in estimating tissue oxygenation by 24.9% and 24.7%, respectively. Finally, we optimize OxyGAN inference so that oxygenation maps are computed ∼10 times faster than previous work, enabling video-rate, 25-Hz imaging.
Conclusions: Due to its rapid acquisition and processing speed, OxyGAN has the potential to enable real-time, high-fidelity tissue oxygenation mapping that may be useful for many clinical applications.
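Once absorption is known at the two wavelengths, whether from conventional SFDI or a learned mapping, oxygen saturation follows from a two-chromophore fit. In the sketch below the extinction values are rough relative placeholders (deoxy-Hb dominates near 659 nm, HbO2 near 851 nm) and the absorption values are examples, not the paper's calibration:

```python
import numpy as np

# Two-chromophore fit for oxygen saturation from absorption at two
# wavelengths. All numbers below are illustrative assumptions.
ext = np.array([[0.30, 3.20],    # [eps_HbO2, eps_Hb] at 659 nm (assumed)
                [1.06, 0.76]])   # [eps_HbO2, eps_Hb] at 851 nm (assumed)
mu_a = np.array([0.020, 0.015])  # measured absorption coefficients (example)
c_hbo2, c_hb = np.linalg.solve(ext, mu_a)
so2 = c_hbo2 / (c_hbo2 + c_hb)   # ~0.66 for these example values
```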
20. Durr NJ, González G, Parot V. 3D imaging techniques for improved colonoscopy. Expert Rev Med Devices 2014;11:105-7. [DOI: 10.1586/17434440.2013.868303] [Citation(s) in RCA: 11]
21. Zhang HK, Kim Y, Lin M, Paredes M, Kannan K, Moghekar A, Durr NJ, Boctor EM. Toward dynamic lumbar puncture guidance using needle-based single-element ultrasound imaging. J Med Imaging (Bellingham) 2018;5:021224. [PMID: 29651451] [DOI: 10.1117/1.jmi.5.2.021224] [Citation(s) in RCA: 10]
Abstract
Lumbar punctures (LPs) are interventional procedures used to collect cerebrospinal fluid. Because the target window is small, physicians have limited success performing the procedure, and it is especially difficult in obese patients due to the increased distance between bone and skin surface. We propose a simple and direct needle-insertion platform that enables image formation by sweeping a needle with a single ultrasound element at its tip. The needle-shaped ultrasound transducer can not only sense the distance between the tip and a potential obstacle, such as bone, but also visually locate these structures by combining transducer location tracking with synthetic aperture focusing. The concept was validated through a simulation that revealed robust image reconstruction under expected tip-localization errors. The initial prototype was built into a 14 G needle and mounted on a holster equipped with a rotation shaft, allowing one-degree-of-freedom rotational sweeping, and a rotation-tracking encoder. We experimentally evaluated the system using a metal-wire phantom mimicking highly reflective bone structures and a human spinal bone phantom. Images of the phantoms were reconstructed, and synthetic aperture reconstruction improved image quality. These results demonstrate the potential of the system as a real-time guidance tool for improving LPs.
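The synthetic aperture focusing step mentioned here is, at its core, delay-and-sum beamforming over the tracked element poses; a simplified 2D sketch with illustrative values (not the paper's geometry or parameters):

```python
import numpy as np

# Delay-and-sum synthetic aperture focusing for a swept single-element
# transducer: echoes recorded at each tracked pose are delayed by the
# round-trip time to each pixel and summed onto an image grid.
c = 1540.0                              # speed of sound, m/s
fs = 40e6                               # RF sampling rate, Hz
poses = np.linspace(-0.01, 0.01, 64)    # tracked element x-positions (m)
rf = np.random.randn(64, 2048)          # A-lines per pose (placeholder)

xs = np.linspace(-0.01, 0.01, 128)      # image grid (m)
zs = np.linspace(0.005, 0.04, 256)
image = np.zeros((256, 128))
for k, px in enumerate(poses):
    # round-trip distance from the element at (px, 0) to each pixel
    r = np.sqrt((xs[None, :] - px) ** 2 + zs[:, None] ** 2)
    idx = np.clip((2 * r / c * fs).astype(int), 0, rf.shape[1] - 1)
    image += rf[k][idx]                 # delay-and-sum accumulation
```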
22. McKay GN, Mohan N, Durr NJ. Imaging human blood cells in vivo with oblique back-illumination capillaroscopy. Biomed Opt Express 2020;11:2373-2382. [PMID: 32499930] [PMCID: PMC7249808] [DOI: 10.1364/boe.389088] [Citation(s) in RCA: 9]
Abstract
We present a non-invasive, label-free method of imaging blood cells flowing through human capillaries in vivo using oblique back-illumination capillaroscopy (OBC). Green light illumination allows simultaneous phase and absorption contrast, enhancing the ability to distinguish red and white blood cells. Single-sided illumination through the objective lens enables 200 Hz imaging with close illumination-detection separation and a simplified setup. Phase contrast is optimized when the illumination axis is offset from the detection axis by approximately 225 µm when imaging ∼80 µm deep in phantoms and human ventral tongue. We demonstrate high-speed imaging of individual red blood cells, white blood cells with sub-cellular detail, and platelets flowing through capillaries and vessels in human tongue. A custom pneumatic cap placed over the objective lens stabilizes the field of view, enabling longitudinal imaging of a single capillary for up to seven minutes. We present high-quality images of blood cells in individuals with Fitzpatrick skin phototypes II, IV, and VI, showing that the technique is robust to high peripheral melanin concentration. The signal quality, speed, simplicity, and robustness of this approach underscores its potential for non-invasive blood cell counting.
23. Turan M, Almalioglu Y, Gilbert HB, Mahmood F, Durr NJ, Araujo H, Sari AE, Ajay A, Sitti M. Learning to Navigate Endoscopic Capsule Robots. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2924846] [Citation(s) in RCA: 9]
24. Sweer JA, Chen T, Salimian K, Battafarano RJ, Durr NJ. Wide-field optical property mapping and structured light imaging of the esophagus with spatial frequency domain imaging. J Biophotonics 2019;12:e201900005. [PMID: 31056845] [PMCID: PMC6721984] [DOI: 10.1002/jbio.201900005] [Citation(s) in RCA: 8]
Abstract
As the incidence of esophageal adenocarcinoma continues to rise, there is a need for improved imaging technologies with contrast to abnormal esophageal tissues. To inform the design of optical technologies that meet this need, we characterize the spatial distribution of the scattering and absorption properties, from 471 to 851 nm, of eight resected human esophagus specimens using spatial frequency domain imaging. Histopathology was used to categorize tissue types, including normal, inflammation, fibrotic, ulceration, Barrett's esophagus, and squamous cell carcinoma. Average absorption and reduced scattering coefficients of normal tissues were 0.211 ± 0.051 and 1.20 ± 0.18 mm⁻¹, respectively, at 471 nm, and both values decreased monotonically with increasing wavelength. Fibrotic tissue exhibited at least 68% larger scattering signal across all wavelengths, while squamous cell carcinoma exhibited a 36% decrease in scattering at 471 nm. We additionally image the esophagus with high spatial frequencies up to 0.5 mm⁻¹ and show strong reflectance contrast to tissue treated with radiation. Lastly, we observe that esophageal absorption and scattering values change by an average of 9.4% and 2.7%, respectively, over 30 minutes post-resection. These results may guide system design for the diagnosis, prevention, and monitoring of esophageal pathologies.
25. Manbachi A, Kreamer-Tonin K, Walch P, Gamo NJ, Khoshakhlagh P, Zhang YS, Montague C, Acharya S, Logsdon EA, Allen RH, Durr NJ, Luciano MG, Theodore N, Brem H, Yazdi Y. Starting a Medical Technology Venture as a Young Academic Innovator or Student Entrepreneur. Ann Biomed Eng 2017;46:1-13. [DOI: 10.1007/s10439-017-1938-x] [Citation(s) in RCA: 8]