1
Jin L, Tang Y, Coole JB, Tan MT, Zhao X, Badaoui H, Robinson JT, Williams MD, Vigneswaran N, Gillenwater AM, Richards-Kortum RR, Veeraraghavan A. DeepDOF-SE: affordable deep-learning microscopy platform for slide-free histology. Nat Commun 2024; 15:2935. PMID: 38580633; PMCID: PMC10997797; DOI: 10.1038/s41467-024-47065-2.
Abstract
Histopathology plays a critical role in the diagnosis and surgical management of cancer. However, access to histopathology services, especially frozen section pathology during surgery, is limited in resource-constrained settings because preparing slides from resected tissue is time-consuming, labor-intensive, and requires expensive infrastructure. Here, we report a deep-learning-enabled microscope, named DeepDOF-SE, to rapidly scan intact tissue at cellular resolution without the need for physical sectioning. Three key features jointly make DeepDOF-SE practical. First, tissue specimens are stained directly with inexpensive vital fluorescent dyes and optically sectioned with ultraviolet excitation that localizes fluorescent emission to a thin surface layer. Second, a deep-learning algorithm extends the depth of field, allowing rapid acquisition of in-focus images from large areas of tissue even when the tissue surface is highly irregular. Finally, a semi-supervised generative adversarial network virtually stains DeepDOF-SE fluorescence images with hematoxylin-and-eosin appearance, facilitating image interpretation by pathologists without significant additional training. We developed the DeepDOF-SE platform using a data-driven approach and validated its performance by imaging surgical resections of suspected oral tumors. Our results show that DeepDOF-SE provides histological information of diagnostic importance, offering a rapid and affordable slide-free histology platform for intraoperative tumor margin assessment and in low-resource settings.
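DeepDOF-SE's depth-of-field extension is a learned deconvolution; for intuition about what it replaces, the classical alternative is to capture a focal stack and merge it by per-pixel sharpness. A minimal sketch of that baseline (the `focus_stack` name and the Laplacian-energy sharpness proxy are illustrative choices, not the paper's method):

```python
import numpy as np

def focus_stack(slices):
    """All-in-focus merge of a focal stack: for every pixel, keep the
    slice whose local Laplacian energy (a simple sharpness proxy) is
    highest. Edges wrap around, which is acceptable for a sketch."""
    stack = np.stack([np.asarray(s, dtype=float) for s in slices])
    # Discrete Laplacian of every slice via shifted copies.
    lap = (np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1) +
           np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2) - 4 * stack)
    best = np.argmax(lap ** 2, axis=0)      # index of sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

The learned approach avoids capturing the whole stack in the first place, which is what makes rapid scanning of irregular tissue surfaces possible.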
Affiliation(s)
- Lingbo Jin
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
- Yubo Tang
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
- Jackson B Coole
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
- Melody T Tan
- Department of Bioengineering, Rice University, 6100 Main St, Houston, TX, USA
- Xuan Zhao
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
- Hawraa Badaoui
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
- Jacob T Robinson
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
- Michelle D Williams
- Department of Pathology, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
- Nadarajah Vigneswaran
- Department of Diagnostic and Biomedical Sciences, University of Texas Health Science Center at Houston School of Dentistry, 7500 Cambridge St, Houston, TX, USA
- Ann M Gillenwater
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, USA
2
Huang L, Han Z, Wirth-Singh A, Saragadam V, Mukherjee S, Fröch JE, Tanguy QAA, Rollag J, Gibson R, Hendrickson JR, Hon PWC, Kigner O, Coppens Z, Böhringer KF, Veeraraghavan A, Majumdar A. Broadband thermal imaging using meta-optics. Nat Commun 2024; 15:1662. PMID: 38395983; PMCID: PMC10891089; DOI: 10.1038/s41467-024-45904-w.
Abstract
Subwavelength diffractive optics known as meta-optics have demonstrated the potential to significantly miniaturize imaging systems. However, despite impressive demonstrations, most meta-optical imaging systems suffer from strong chromatic aberrations, limiting their utility. Here, we employ inverse design to create broadband meta-optics operating in the long-wave infrared (LWIR) regime (8-12 μm). Via a deep-learning-assisted multi-scale differentiable framework that links meta-atoms to the phase, we maximize the wavelength-averaged volume under the modulation transfer function (MTF) surface of the meta-optics. Our design framework merges local phase-engineering via meta-atoms and global engineering of the scatterer within a single pipeline. We corroborate our design by fabricating and experimentally characterizing all-silicon LWIR meta-optics. Our engineered meta-optic is complemented by a simple computational backend that dramatically improves the quality of the captured image. We experimentally demonstrate a six-fold improvement of the wavelength-averaged Strehl ratio over the traditional hyperboloid metalens for broadband imaging.
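The two figures of merit in this abstract — the wavelength-averaged volume under the MTF surface (the design objective) and the Strehl ratio (the reported six-fold gain) — have compact textbook definitions. A sketch assuming intensity PSFs sampled on a common grid; the function names are mine, and this is not the authors' differentiable design pipeline:

```python
import numpy as np

def mtf(psf):
    """Modulation transfer function: |FFT| of the energy-normalized PSF,
    so that MTF at zero spatial frequency equals 1."""
    return np.abs(np.fft.fft2(psf / psf.sum()))

def mtf_volume(psfs_by_wavelength):
    """Wavelength-averaged volume under the MTF surface -- the scalar a
    broadband design loop would maximize."""
    return np.mean([mtf(p).sum() for p in psfs_by_wavelength])

def strehl(psf, ideal_psf):
    """Strehl ratio: peak of the measured PSF over the peak of the
    diffraction-limited PSF, after matching total energy."""
    return (psf.max() / psf.sum()) / (ideal_psf.max() / ideal_psf.sum())
```

Averaging either metric over a set of wavelengths is what turns a single-wavelength design into the broadband objective described above.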
Affiliation(s)
- Luocheng Huang
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Zheyi Han
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Anna Wirth-Singh
- Department of Physics, University of Washington, Seattle, WA, USA
- Saswata Mukherjee
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Johannes E Fröch
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
- Quentin A A Tanguy
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Joshua Rollag
- KBR, Inc., Beavercreek, OH, USA
- Sensors Directorate, Air Force Research Laboratory, Wright-Patterson AFB, OH, USA
- Ricky Gibson
- Sensors Directorate, Air Force Research Laboratory, Wright-Patterson AFB, OH, USA
- Joshua R Hendrickson
- Sensors Directorate, Air Force Research Laboratory, Wright-Patterson AFB, OH, USA
- Philip W C Hon
- NG Next, Northrop Grumman Corporation, Redondo Beach, CA, USA
- Orrin Kigner
- NG Next, Northrop Grumman Corporation, Redondo Beach, CA, USA
- Karl F Böhringer
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Institute for Nano-Engineered Systems, University of Washington, Seattle, WA, USA
- Arka Majumdar
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
3
Wu J, Chen Y, Veeraraghavan A, Seidemann E, Robinson JT. Mesoscopic calcium imaging in a head-unrestrained male non-human primate using a lensless microscope. Nat Commun 2024; 15:1271. PMID: 38341403; PMCID: PMC10858944; DOI: 10.1038/s41467-024-45417-6.
Abstract
Mesoscopic calcium imaging enables studies of cell-type specific neural activity over large areas. A growing body of literature suggests that neural activity can be different when animals are free to move compared to when they are restrained. Unfortunately, existing systems for imaging calcium dynamics over large areas in non-human primates (NHPs) are table-top devices that require restraint of the animal's head. Here, we demonstrate an imaging device capable of imaging mesoscale calcium activity in a head-unrestrained male non-human primate. We successfully miniaturize our system by replacing lenses with an optical mask and computational algorithms. The resulting lensless microscope can fit comfortably on an NHP, allowing its head to move freely while imaging. We are able to measure orientation column maps over a 20 mm² field-of-view in a head-unrestrained macaque. Our work establishes mesoscopic imaging using a lensless microscope as a powerful approach for studying neural activity under more naturalistic conditions.
Affiliation(s)
- Jimin Wu
- Department of Bioengineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Yuzhi Chen
- Department of Neuroscience, University of Texas at Austin, 100 E 24th St., Austin, TX 78712, USA
- Department of Psychology, University of Texas at Austin, 108 E Dean Keeton St., Austin, TX 78712, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Department of Computer Science, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Eyal Seidemann
- Department of Neuroscience, University of Texas at Austin, 100 E 24th St., Austin, TX 78712, USA
- Department of Psychology, University of Texas at Austin, 108 E Dean Keeton St., Austin, TX 78712, USA
- Jacob T Robinson
- Department of Bioengineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
4
Maity AK, Sharma MK, Veeraraghavan A, Sabharwal A. SpeckleCam: high-resolution computational speckle contrast tomography for deep blood flow imaging. Biomed Opt Express 2023; 14:5316-5337. PMID: 37854569; PMCID: PMC10581815; DOI: 10.1364/boe.498900.
Abstract
Laser speckle contrast imaging is widely used in clinical studies to monitor blood flow distribution. Speckle contrast tomography, similar to diffuse optical tomography, extends speckle contrast imaging to provide deep tissue blood flow information. However, current speckle contrast tomography techniques suffer from poor spatial resolution and involve both computation- and memory-intensive reconstruction algorithms. In this work, we present SpeckleCam, a camera-based system to reconstruct high-resolution 3D blood flow distribution deep inside the skin. Our approach replaces the traditional diffusion-approximation forward model with a convolutional forward model based on Monte Carlo simulations, which enables us to develop an improved deep tissue blood flow reconstruction algorithm. We show that our proposed approach can recover complex structures up to 6 mm deep inside a tissue-like scattering medium in the reflection geometry. We also conduct human experiments to demonstrate that our approach can detect reduced flow in major blood vessels during vascular occlusion.
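The raw quantity behind both laser speckle contrast imaging and SpeckleCam's tomographic extension is local speckle contrast, K = σ/μ computed over a small sliding window; faster flow blurs the speckle during the exposure and lowers K. A minimal sketch of that front-end computation (the window size and function name are illustrative; the paper's contribution is the Monte Carlo-based convolutional forward model and reconstruction built on top of maps like this):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    """Local speckle contrast K = sigma / mean over a win x win window,
    evaluated on the valid interior region of the image."""
    w = sliding_window_view(np.asarray(img, dtype=float), (win, win))
    mu = w.mean(axis=(-1, -2))
    sigma = w.std(axis=(-1, -2))
    return sigma / np.maximum(mu, 1e-12)   # guard against division by zero
```

Fully developed static speckle has K near 1, while regions washed out by motion trend toward 0, which is what makes K a flow surrogate.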
Affiliation(s)
- Akash Kumar Maity
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Manoj Kumar Sharma
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Ashutosh Sabharwal
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
5
Farrell SM, Boominathan V, Raymondi N, Sabharwal A, Veeraraghavan A. CoIR: Compressive Implicit Radar. IEEE Trans Pattern Anal Mach Intell 2023; PP:1-12. PMID: 37561613; DOI: 10.1109/tpami.2023.3301553.
Abstract
Using millimeter wave (mmWave) signals for imaging has an important advantage in that they can penetrate through poor environmental conditions such as fog, dust, and smoke that severely degrade optical-based imaging systems. However, unlike cameras and LiDARs, mmWave radars suffer from low angular resolution because of small physical apertures and conventional signal processing techniques. Sparse radar imaging, on the other hand, can increase the aperture size while minimizing the power consumption and readout bandwidth. This paper presents CoIR, an analysis-by-synthesis method that leverages the implicit neural network bias in convolutional decoders and compressed sensing to perform high-accuracy sparse radar imaging. The proposed system is dataset-agnostic and does not require any auxiliary sensors for training or testing. We introduce a sparse array design that allows for a 5.5× reduction in the number of antenna elements needed compared to conventional MIMO array designs. We demonstrate our system's improved imaging performance over standard mmWave radars and other competitive untrained methods on both simulated and experimental mmWave radar data.
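The antenna-count savings rest on the standard MIMO virtual-array identity: each transmit/receive pair contributes one virtual element at the sum of the two element positions, so N_tx × N_rx virtual elements come from only N_tx + N_rx physical ones. A sketch of that identity (the element pitches below are illustrative; CoIR's actual contribution is a sparse array on top of this, with the resulting gaps recovered computationally):

```python
import numpy as np

def virtual_array(tx, rx):
    """Virtual element positions of a MIMO array: tx + rx for every
    transmit/receive pair, with duplicates removed."""
    return np.unique([t + r for t in tx for r in rx])

tx = np.arange(0, 64, 8)   # 8 transmitters at a coarse pitch
rx = np.arange(0, 8, 1)    # 8 receivers at a fine pitch
v = virtual_array(tx, rx)  # 64 contiguous virtual positions from 16 physical elements
```

A filled array of the same aperture would need 64 physical elements; the MIMO arrangement reaches it with 16, and sparse designs like CoIR's prune even further.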
6
Wu J, Boominathan V, Veeraraghavan A, Robinson JT. Real-time, deep-learning aided lensless microscope. Biomed Opt Express 2023; 14:4037-4051. PMID: 37799697; PMCID: PMC10549754; DOI: 10.1364/boe.490199.
Abstract
Traditional miniaturized fluorescence microscopes are critical tools for modern biology. Invariably, they struggle to image simultaneously with high spatial resolution and a large field of view (FOV). Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples is not possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a challenge for usability, as real-time visualization is a crucial feature that assists users in identifying and locating the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances, where shift-varying deconvolution is required to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the use of an iterative reconstruction algorithm. The neural-network-based reconstruction method we show here achieves a more than 10,000-fold increase in reconstruction speed compared to iterative reconstruction. The increased speed allows us to visualize the results of our lensless microscope at more than 25 frames per second (fps), while achieving better than 7 µm resolution over a FOV of 10 mm². This ability to reconstruct and visualize samples in real time empowers a more user-friendly interaction with lensless microscopes, letting users operate them much as they currently do conventional microscopes.
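For context on the reconstruction step being accelerated: in the shift-invariant regime, a lensless measurement is a convolution of the scene with the mask's PSF, and a single-step Wiener filter inverts it. The close-range setting above breaks shift invariance, which is what makes iterative solvers slow and motivates the network. A sketch of the shift-invariant baseline only (the function name and SNR constant are illustrative):

```python
import numpy as np

def wiener_deconv(meas, psf, snr=100.0):
    """One-step Wiener deconvolution in the Fourier domain:
    X = conj(H) * Y / (|H|^2 + 1/snr), with the PSF centered at the
    array center before transforming."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=meas.shape)
    Y = np.fft.fft2(meas)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(X))
```

A shift-varying system needs many such PSFs (or an iterative/learned solver) instead of this single filter, hence the runtime gap the paper closes.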
Affiliation(s)
- Jimin Wu
- Department of Bioengineering, Rice University, Houston, Texas 77005, USA
- Vivek Boominathan
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA
- Department of Computer Science, Rice University, Houston, Texas 77005, USA
- Jacob T. Robinson
- Department of Bioengineering, Rice University, Houston, Texas 77005, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, Texas 77030, USA
7
Feng BY, Guo H, Xie M, Boominathan V, Sharma MK, Veeraraghavan A, Metzler CA. NeuWS: Neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media. Sci Adv 2023; 9:eadg4671. PMID: 37379386; DOI: 10.1126/sciadv.adg4671.
Abstract
Diffraction-limited optical imaging through scattering media has the potential to transform many applications such as airborne and space-based imaging (through the atmosphere), bioimaging (through skin and human tissue), and fiber-based imaging (through fiber bundles). Existing wavefront shaping methods can image through scattering media and other obscurants by optically correcting wavefront aberrations using high-resolution spatial light modulators, but these methods generally require (i) guidestars, (ii) controlled illumination, (iii) point scanning, and/or (iv) static scenes and aberrations. We propose neural wavefront shaping (NeuWS), a scanning-free wavefront shaping technique that integrates maximum likelihood estimation, measurement modulation, and neural signal representations to reconstruct diffraction-limited images through strong static and dynamic scattering media without guidestars, sparse targets, controlled illumination, or specialized image sensors. We experimentally demonstrate guidestar-free, wide field-of-view, high-resolution, diffraction-limited imaging of extended, nonsparse, and static/dynamic scenes captured through static/dynamic aberrations.
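The physical principle NeuWS exploits: if the aberrating phase screen is known, displaying its conjugate on the spatial light modulator cancels it and restores a diffraction-limited focus. The hard part — estimating that phase without a guidestar — is the paper's contribution; the sketch below simply assumes the phase is known, to show why the correction works (grid size and names are illustrative):

```python
import numpy as np

def focus_peak(pupil_field):
    """Peak focal-plane intensity; the focal field is the Fourier
    transform of the pupil field."""
    return np.max(np.abs(np.fft.fft2(pupil_field)) ** 2)

rng = np.random.default_rng(0)
n = 64
phi = rng.uniform(-np.pi, np.pi, (n, n))   # unknown aberration (phase screen)
aberrated = np.exp(1j * phi)               # open pupil times the screen
corrected = aberrated * np.exp(-1j * phi)  # SLM applies the conjugate phase

strehl = focus_peak(aberrated) / focus_peak(corrected)
# strehl << 1: the random screen destroys the focus; the conjugate restores it exactly
```

NeuWS replaces the "phi is known" assumption with a maximum-likelihood estimate of phi from modulated measurements, represented by a neural network so it can track dynamic screens.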
Affiliation(s)
- Brandon Y Feng
- Department of Computer Science, The University of Maryland, College Park, College Park, MD 20742, USA
- Haiyun Guo
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Mingyang Xie
- Department of Computer Science, The University of Maryland, College Park, College Park, MD 20742, USA
- Vivek Boominathan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Manoj K Sharma
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Christopher A Metzler
- Department of Computer Science, The University of Maryland, College Park, College Park, MD 20742, USA
8
Wang F, Kim SH, Zhao Y, Raghuram A, Veeraraghavan A, Robinson J, Hielscher AH. High-Speed Time-Domain Diffuse Optical Tomography with a Sensitivity Equation-based Neural Network. IEEE Trans Comput Imaging 2023; 9:459-474. PMID: 37456517; PMCID: PMC10348778; DOI: 10.1109/tci.2023.3273423.
Abstract
Steady progress in time-domain diffuse optical tomography (TD-DOT) technology is allowing for the first time the design of low-cost, compact, and high-performance systems, thus promising more widespread clinical TD-DOT use, such as for recording brain tissue hemodynamics. TD-DOT is known to provide more accurate values of optical properties and physiological parameters compared to its frequency-domain or steady-state counterparts. However, achieving high temporal resolution is still difficult, as solving the inverse problem is computationally demanding, leading to relatively long reconstruction times. Runtimes are lengthened further by the 'nontrivial' empirical tuning of reconstruction parameters, which adds complexity and inefficiency. To address these challenges, we present a new reconstruction algorithm that combines a deep-learning approach with our previously introduced sensitivity-equation-based, non-iterative sparse optical reconstruction (SENSOR) code. The new algorithm (called SENSOR-NET) unfolds the iterations of SENSOR into a deep neural network. In this way, we achieve high-resolution sparse reconstruction using only learned parameters, thus eliminating the need to tune parameters empirically prior to reconstruction. Furthermore, once trained, the reconstruction time is not dependent on the number of sources or wavelengths used. We validate our method with numerical and experimental data and show that accurate reconstructions with 1 mm spatial resolution can be obtained in under 20 milliseconds regardless of the number of sources used in the setup. This opens the door for real-time brain monitoring and other high-speed DOT applications.
Affiliation(s)
- Fay Wang
- Department of Biomedical Engineering, Columbia University, New York, NY 10027
- Stephen H Kim
- Department of Biomedical Engineering, New York University - Tandon School of Engineering, New York, NY 10001
- Yongyi Zhao
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Ankit Raghuram
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Jacob Robinson
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Andreas H Hielscher
- Department of Biomedical Engineering, New York University - Tandon School of Engineering, New York, NY 10001
9
Zhao Y, Raghuram A, Wang F, Kim SH, Hielscher A, Robinson JT, Veeraraghavan A. Unrolled-DOT: an interpretable deep network for diffuse optical tomography. J Biomed Opt 2023; 28:036002. PMID: 36908760; PMCID: PMC9995139; DOI: 10.1117/1.jbo.28.3.036002.
Abstract
SIGNIFICANCE: Imaging through scattering media is critical in many biomedical imaging applications, such as breast tumor detection and functional neuroimaging. Time-of-flight diffuse optical tomography (ToF-DOT) is one of the most promising methods for high-resolution imaging through scattering media. ToF-DOT and many traditional DOT methods require an image reconstruction algorithm. Unfortunately, this algorithm often requires long computational runtimes and may produce lower quality reconstructions in the presence of model mismatch or improper hyperparameter tuning.
AIM: We used a data-driven unrolled network as our ToF-DOT inverse solver. The unrolled network is faster than traditional inverse solvers and achieves higher reconstruction quality by accounting for model mismatch.
APPROACH: Our model "Unrolled-DOT" uses the learned iterative shrinkage thresholding algorithm. In addition, we incorporate a refinement U-Net and Visual Geometry Group (VGG) perceptual loss to further increase the reconstruction quality. We trained and tested our model on simulated and real-world data and benchmarked against physics-based and learning-based inverse solvers.
RESULTS: In experiments on real-world data, Unrolled-DOT outperformed learning-based algorithms and achieved over 10× reduction in runtime and mean-squared error, compared to traditional physics-based solvers.
CONCLUSION: We demonstrated a learning-based ToF-DOT inverse solver that achieves state-of-the-art performance in speed and reconstruction quality, which can aid in future applications for noninvasive biomedical imaging.
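Unrolled-DOT builds on the learned iterative shrinkage thresholding algorithm (LISTA): run a fixed number of ISTA iterations as the layers of a feed-forward network, then learn the per-layer matrices and thresholds from data. A sketch of the unlearned starting point — plain ISTA unrolled to a fixed depth — with NumPy standing in for a training framework (names are illustrative, and the paper additionally adds a refinement U-Net and perceptual loss):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (the 'shrinkage' step)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(y, A, n_layers=2000, lam=1e-3):
    """Fixed-depth ISTA for y = A x with sparse x. In LISTA, W1, W2 and
    the threshold become learned per-layer parameters; here they keep
    their classical ISTA values."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-term gradient
    W1 = A.T / L
    W2 = np.eye(A.shape[1]) - (A.T @ A) / L
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                  # each iteration = one network "layer"
        x = soft_threshold(W1 @ y + W2 @ x, lam / L)
    return x
```

Learning W1, W2, and the thresholds lets the network reach comparable quality in ten-ish layers instead of thousands of iterations, which is where the runtime reduction comes from.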
Affiliation(s)
- Yongyi Zhao
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
- Ankit Raghuram
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
- Fay Wang
- Columbia University, Department of Biomedical Engineering, New York, New York, United States
- Stephen Hyunkeol Kim
- Columbia University Irving Medical Center, Department of Radiology, New York, New York, United States
- New York University - Tandon School of Engineering, Department of Biomedical Engineering, New York, New York, United States
- Andreas Hielscher
- New York University - Tandon School of Engineering, Department of Biomedical Engineering, New York, New York, United States
- Jacob T. Robinson
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
- Ashok Veeraraghavan
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
10
Pollmann EH, Yin H, Uguz I, Dubey A, Wingel KE, Choi JS, Moazeni S, Gilhotra Y, Pavlovsky VA, Banees A, Boominathan V, Robinson J, Veeraraghavan A, Pieribone VA, Pesaran B, Shepard KL. Subdural CMOS optical probe (SCOPe) for bidirectional neural interfacing. bioRxiv 2023:2023.02.07.527500. PMID: 36798295; PMCID: PMC9934536; DOI: 10.1101/2023.02.07.527500.
Abstract
Optical neurotechnologies use light to interface with neurons and can monitor and manipulate neural activity with high spatiotemporal precision over large cortical extents. While there has been significant progress in miniaturizing microscopes for head-mounted configurations, these existing devices are still very bulky and could never be fully implanted. Any viable translation of these technologies to human use will require a much more noninvasive, fully implantable form factor. Here, we leverage advances in microelectronics and heterogeneous optoelectronic packaging to develop a transformative, ultrathin, miniaturized device for bidirectional optical stimulation and recording: the subdural CMOS Optical Probe (SCOPe). By being thin enough to lie entirely within the subdural space of the primate brain, SCOPe defines a path for the eventual human translation of a new generation of brain-machine interfaces based on light.
11
Perez O, Kumar Vadathya A, Beltran A, Barnett RM, Hindera O, Garza T, Musaad SM, Baranowski T, Hughes SO, Mendoza JA, Sabharwal A, Veeraraghavan A, O'Connor TM. The Family Level Assessment of Screen Use-Mobile Approach: Development of an Approach to Measure Children's Mobile Device Use. JMIR Form Res 2022; 6:e40452. PMID: 36269651; PMCID: PMC9636534; DOI: 10.2196/40452.
Abstract
BACKGROUND: There is a strong association between increased mobile device use and worse dietary habits, worse sleep outcomes, and poor academic performance in children. Self-report or parent-proxy report of children's screen time has been the most common method of measuring screen time, which may be imprecise or biased.
OBJECTIVE: The objective of this study was to assess the feasibility of measuring the screen time of children on mobile devices using the Family Level Assessment of Screen Use (FLASH)-mobile approach, an innovative method that leverages the existing features of the Android platform.
METHODS: This pilot study consisted of 2 laboratory-based observational feasibility studies and 2 home-based feasibility studies in the United States. A total of 48 parent-child dyads consisting of a parent and child aged 6 to 11 years participated in the pilot study. The children had to have their own or shared Android device. The laboratory-based studies included a standardized series of tasks while using the mobile device or watching television, which were video recorded. Video recordings were coded by staff for a gold standard comparison. The home-based studies instructed the parent-child dyads to use their mobile device as they typically use it over 3 days. Parents received a copy of the use logs at the end of the study and completed an exit interview in which they were asked to review their logs and share their perceptions and suggestions for the improvement of the FLASH-mobile approach.
RESULTS: The final version of the FLASH-mobile approach resulted in user identification compliance rates of >90% for smartphones and >80% for tablets. For laboratory-based studies, a mean agreement of 73.6% (SD 16.15%) was achieved compared with the gold standard (human coding of video recordings) in capturing the target child's mobile use. Qualitative feedback from parents and children revealed that parents found the FLASH-mobile approach useful for tracking how much time their child spends using the mobile device as well as tracking the apps they used. Some parents revealed concerns over privacy and provided suggestions for improving the FLASH-mobile approach.
CONCLUSIONS: The FLASH-mobile approach offers an important new research approach to measure children's use of mobile devices more accurately across several days, even when the child shares the device with other family members. With additional enhancement and validation studies, this approach can significantly advance the measurement of mobile device use among young children.
Affiliation(s)
- Oriana Perez
- United States Department of Agriculture/Agricultural Research Service Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Anil Kumar Vadathya
- Department of Electrical & Computer Engineering, Rice University, Houston, TX, United States
- Alicia Beltran
- United States Department of Agriculture/Agricultural Research Service Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- R Matthew Barnett
- Center for Research Computing, Rice University, Houston, TX, United States
- Tatyana Garza
- United States Department of Agriculture/Agricultural Research Service Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Salma M Musaad
- United States Department of Agriculture/Agricultural Research Service Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Tom Baranowski
- United States Department of Agriculture/Agricultural Research Service Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Sheryl O Hughes
- United States Department of Agriculture/Agricultural Research Service Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Jason A Mendoza
- Public Health Sciences Division, Fred Hutchinson Cancer Center, Seattle, WA, United States
- Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, United States
- Ashutosh Sabharwal
- Department of Electrical & Computer Engineering, Rice University, Houston, TX, United States
- Ashok Veeraraghavan
- Department of Electrical & Computer Engineering, Rice University, Houston, TX, United States
- Teresia M O'Connor
- United States Department of Agriculture/Agricultural Research Service Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
12. Bagadthey D, Prabhu S, Khan SS, Fredrick DT, Boominathan V, Veeraraghavan A, Mitra K. FlatNet3D: intensity and absolute depth from single-shot lensless capture. J Opt Soc Am A Opt Image Sci Vis 2022; 39:1903-1912. PMID: 36215563. DOI: 10.1364/josaa.466286.
Abstract
Lensless cameras are ultra-thin imaging systems that replace the lens with a thin passive optical mask and computation. Passive mask-based lensless cameras encode depth information in their measurements over a certain depth range. Early works have shown that this encoded depth can be used to perform 3D reconstruction of close-range scenes. However, these 3D reconstruction approaches are typically optimization based and require strong hand-crafted priors and hundreds of iterations to converge. Moreover, the reconstructions suffer from low resolution, noise, and artifacts. In this work, we propose FlatNet3D, a feed-forward deep network that can estimate both depth and intensity from a single lensless capture. FlatNet3D is an end-to-end trainable deep network that directly reconstructs depth and intensity from a lensless measurement using an efficient physics-based 3D mapping stage and a fully convolutional network. Our algorithm is fast and produces high-quality results, which we validate on both simulated and real scenes captured using PhlatCam.
13. Dave A, Hold-Geoffroy Y, Hašan M, Sunkavalli K, Veeraraghavan A. Snapshot polarimetric diffuse-specular separation. Opt Express 2022; 30:34239-34255. PMID: 36242441. DOI: 10.1364/oe.460984.
Abstract
We present a polarization-based approach to perform diffuse-specular separation from a single polarimetric image, acquired using a flexible, practical capture setup. Our key technical insight is that, unlike previous polarization-based separation methods that assume completely unpolarized diffuse reflectance, we use a more general polarimetric model that accounts for partially polarized diffuse reflections. We capture the scene with a polarimetric sensor and produce an initial analytical diffuse-specular separation that we further pass into a deep network trained to refine the separation. We demonstrate that our combination of analytical separation and deep network refinement produces state-of-the-art diffuse-specular separation, which enables image-based appearance editing of dynamic scenes and enhanced appearance estimation.
14. Ghanekar B, Saragadam V, Mehra D, Gustavsson AK, Sankaranarayanan AC, Veeraraghavan A. PS²F: Polarized Spiral Point Spread Function for Single-Shot 3D Sensing. IEEE Trans Pattern Anal Mach Intell 2022; PP:1-12. PMID: 36037460. DOI: 10.1109/tpami.2022.3202511.
Abstract
We propose a compact snapshot monocular depth estimation technique that relies on an engineered point spread function (PSF). Traditional approaches used in microscopic super-resolution imaging such as the Double-Helix PSF (DHPSF) are ill-suited for scenes that are more complex than a sparse set of point light sources. We show, using the Cramér-Rao lower bound, that separating the two lobes of the DHPSF and thereby capturing two separate images leads to a dramatic increase in depth accuracy. A special property of the phase mask used for generating the DHPSF is that a separation of the phase mask into two halves leads to a spatial separation of the two lobes. We leverage this property to build a compact polarization-based optical setup, where we place two orthogonal linear polarizers on each half of the DHPSF phase mask and then capture the resulting image with a polarization-sensitive camera. Results from simulations and a lab prototype demonstrate that our technique achieves up to 50% lower depth error compared to state-of-the-art designs including the DHPSF and the Tetrapod PSF, with little to no loss in spatial resolution.
15. Adams JK, Yan D, Wu J, Boominathan V, Gao S, Rodriguez AV, Kim S, Carns J, Richards-Kortum R, Kemere C, Veeraraghavan A, Robinson JT. In vivo lensless microscopy via a phase mask generating diffraction patterns with high-contrast contours. Nat Biomed Eng 2022; 6:617-628. PMID: 35256759. PMCID: PMC9142365. DOI: 10.1038/s41551-022-00851-z.
Abstract
The simple and compact optics of lensless microscopes and the associated computational algorithms allow for large fields of view and the refocusing of the captured images. However, existing lensless techniques cannot accurately reconstruct the typical low-contrast images of optically dense biological tissue. Here we show that lensless imaging of tissue in vivo can be achieved via an optical phase mask designed to create a point spread function consisting of high-contrast contours with a broad spectrum of spatial frequencies. We built a prototype lensless microscope incorporating the 'contour' phase mask and used it to image calcium dynamics in the cortex of live mice (over a field of view of about 16 mm²) and in freely moving Hydra vulgaris, as well as microvasculature in the oral mucosa of volunteers. The low cost, small form factor and computational refocusing capability of in vivo lensless microscopy may open it up to clinical uses, especially for imaging difficult-to-reach areas of the body.
Affiliation(s)
- Jesse K Adams: Applied Physics Program, Rice University, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Dong Yan: Applied Physics Program, Rice University, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Jimin Wu: Department of Bioengineering, Rice University, Houston, TX, USA
- Vivek Boominathan: Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Sibo Gao: Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Alex V Rodriguez: Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Soonyoung Kim: Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Jennifer Carns: Department of Bioengineering, Rice University, Houston, TX, USA
- Rebecca Richards-Kortum: Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA; Department of Bioengineering, Rice University, Houston, TX, USA
- Caleb Kemere: Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA; Department of Bioengineering, Rice University, Houston, TX, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Ashok Veeraraghavan: Applied Physics Program, Rice University, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA; Department of Computer Science, Rice University, Houston, TX, USA
- Jacob T Robinson: Applied Physics Program, Rice University, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA; Department of Bioengineering, Rice University, Houston, TX, USA; Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
16. Maity AK, Veeraraghavan A, Sabharwal A. PPGMotion: Model-based detection of motion artifacts in photoplethysmography signals. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103632.
17. Khan SS, Sundar V, Boominathan V, Veeraraghavan A, Mitra K. FlatNet: Towards Photorealistic Scene Reconstruction From Lensless Measurements. IEEE Trans Pattern Anal Mach Intell 2022; 44:1934-1948. PMID: 33104508. PMCID: PMC8979921. DOI: 10.1109/tpami.2020.3033882.
Abstract
Lensless imaging has emerged as a potential solution for realizing ultra-miniature cameras by eschewing the bulky lens of a traditional camera. Without a focusing lens, lensless cameras rely on computational algorithms to recover the scene from multiplexed measurements. However, current iterative-optimization-based reconstruction algorithms produce noisy, perceptually poor images. In this work, we propose a non-iterative deep-learning-based reconstruction approach that yields orders-of-magnitude improvement in image quality for lensless reconstructions. Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras where the camera's forward model formulation is known. FlatNet consists of two stages: (1) an inversion stage that maps the measurement into a space of intermediate reconstruction by learning parameters within the forward model formulation, and (2) a perceptual enhancement stage that improves the perceptual quality of this intermediate reconstruction. These stages are trained together end-to-end. We show high-quality reconstructions through extensive experiments on real and challenging scenes using two different lensless prototypes: one that uses a separable forward model and another that uses a more general non-separable cropped-convolution model. Our end-to-end approach is fast, produces photorealistic reconstructions, and is easy to adapt to other mask-based lensless cameras.
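The separable forward model behind the first prototype can be sketched in a few lines of NumPy: the sensor measurement is Y = Φ_L X Φ_R^T, and the inversion stage amounts to learned left/right linear operators. The sketch below is illustrative only; the matrix sizes are made up, and plain pseudoinverses stand in for the inversion weights that FlatNet learns end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 24                        # hypothetical scene (n x n) and sensor (m x m) sizes
phi_l = rng.standard_normal((m, n))  # left/right mask matrices of a separable lensless camera
phi_r = rng.standard_normal((m, n))

scene = rng.standard_normal((n, n))
meas = phi_l @ scene @ phi_r.T       # separable forward model: Y = Phi_L X Phi_R^T

# Inversion stage: FlatNet learns these operators during training; pseudoinverses
# are the natural initialization and suffice in this noiseless toy example.
recon = np.linalg.pinv(phi_l) @ meas @ np.linalg.pinv(phi_r).T
print(np.allclose(recon, scene))     # noiseless, well-conditioned -> exact recovery
```

In practice the learned operators outperform the fixed pseudoinverse because they are tuned jointly with the perceptual enhancement stage on real, noisy captures.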
18. Vadathya AK, Musaad S, Beltran A, Perez O, Meister L, Baranowski T, Hughes SO, Mendoza JA, Sabharwal A, Veeraraghavan A, O'Connor T. An Objective System for Quantitative Assessment of Television Viewing Among Children (Family Level Assessment of Screen Use in the Home-Television): System Development Study. JMIR Pediatr Parent 2022; 5:e33569. PMID: 35323113. PMCID: PMC8990369. DOI: 10.2196/33569.
Abstract
BACKGROUND Television viewing among children is associated with developmental and health outcomes, yet measurement techniques for television viewing are prone to errors, biases, or both. OBJECTIVE This study aims to develop a system to objectively and passively measure children's television viewing time. METHODS The Family Level Assessment of Screen Use in the Home-Television (FLASH-TV) system includes three sequential algorithms applied to video data collected in front of a television screen: face detection, face verification, and gaze estimation. A total of 21 families of diverse race and ethnicity were enrolled in 1 of 4 design studies to train the algorithms and provide proof-of-concept testing for the integrated FLASH-TV system. Video data were collected from each family in a laboratory mimicking a living room or in the child's home. Staff coded the video data for the target child as the gold standard. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were calculated for each algorithm, as compared with the gold standard. Prevalence- and bias-adjusted κ scores and an intraclass correlation from a generalized linear mixed model compared FLASH-TV's estimate of television viewing duration with the gold standard. RESULTS FLASH-TV demonstrated high sensitivity for detecting faces (95.5%-97.9%) and performed well on face verification when the child's gaze was on the television. Each of the metrics for estimating the child's gaze on the screen was moderate to good (range: 55.1% negative predictive value to 91.2% specificity). When the 3 sequential steps were combined, FLASH-TV's estimation of the child's screen viewing was overall good, with an intraclass correlation of 0.725 for overall television viewing time across conditions. CONCLUSIONS FLASH-TV offers a critical step forward in improving the assessment of children's television viewing.
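The prevalence- and bias-adjusted κ used here has a particularly simple closed form: PABAK = 2·p_o − 1, where p_o is the observed proportion of agreement between the system and the gold-standard coder. A minimal sketch, with made-up counts:

```python
def pabak(n_agree: int, n_total: int) -> float:
    """Prevalence- and bias-adjusted kappa: PABAK = 2*p_o - 1,
    where p_o is the observed proportion of agreement."""
    p_o = n_agree / n_total
    return 2 * p_o - 1

# e.g. system and human coder agreeing on 90 of 100 frames (hypothetical numbers)
print(pabak(90, 100))   # 0.8
```

Unlike Cohen's κ, PABAK replaces the chance-agreement term with 0.5, which makes it robust when one class (e.g. "watching TV") dominates the frames.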
Affiliation(s)
- Anil Kumar Vadathya: Department of Electrical & Computer Engineering, Rice University, Houston, TX, United States
- Salma Musaad: Agricultural Research Service, US Department of Agriculture, Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Alicia Beltran: Agricultural Research Service, US Department of Agriculture, Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Oriana Perez: Agricultural Research Service, US Department of Agriculture, Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Leo Meister: Department of Electrical & Computer Engineering, Rice University, Houston, TX, United States
- Tom Baranowski: Agricultural Research Service, US Department of Agriculture, Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Sheryl O Hughes: Agricultural Research Service, US Department of Agriculture, Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
- Jason A Mendoza: Public Health Sciences Division, Fred Hutchinson Cancer Research Center, Seattle, WA, United States; General Pediatrics, Department of Pediatrics, University of Washington, Seattle, WA, United States
- Ashutosh Sabharwal: Department of Electrical & Computer Engineering, Rice University, Houston, TX, United States
- Ashok Veeraraghavan: Department of Electrical & Computer Engineering, Rice University, Houston, TX, United States
- Teresia O'Connor: Agricultural Research Service, US Department of Agriculture, Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, United States
19.
Abstract
Lensless imaging provides opportunities to design imaging systems free from the constraints imposed by traditional camera architectures. Thanks to advances in imaging hardware, fabrication techniques, and new algorithms, researchers have recently developed lensless imaging systems that are extremely compact, lightweight or able to image higher-dimensional quantities. Here we review these recent advances and describe the design principles and their effects that one should consider when developing and using lensless imaging systems.
20. Li B, Tan S, Dong J, Lian X, Zhang Y, Ji X, Veeraraghavan A. Deep-3D microscope: 3D volumetric microscopy of thick scattering samples using a wide-field microscope and machine learning. Biomed Opt Express 2022; 13:284-299. PMID: 35154871. PMCID: PMC8803017. DOI: 10.1364/boe.444488.
Abstract
Confocal microscopy is a standard approach for obtaining volumetric images of a sample with high axial and lateral resolution, especially when dealing with scattering samples. Unfortunately, a confocal microscope is quite expensive compared to traditional microscopes. In addition, the point scanning in confocal microscopy leads to slow imaging speed and photobleaching due to the high dose of laser energy. In this paper, we demonstrate how advances in machine learning can be exploited to "teach" a traditional wide-field microscope, one available in every lab, to produce 3D volumetric images like a confocal microscope. The key idea is to obtain multiple images with different focus settings using a wide-field microscope and use a 3D generative adversarial network (GAN) based neural network to learn the mapping between the blurry, low-contrast image stacks obtained with a wide-field microscope and the sharp, high-contrast image stacks obtained with a confocal microscope. After training the network on wide-field-confocal stack pairs, the network can reliably and accurately reconstruct 3D volumetric images that rival confocal images in terms of lateral resolution, z-sectioning, and image contrast. Our experimental results demonstrate generalization to unseen data, stability of the reconstructions, and high spatial resolution even when imaging thick (∼40 μm), highly scattering samples. We believe that such learning-based microscopes have the potential to bring confocal imaging quality to every lab that has a wide-field microscope.
Affiliation(s)
- Bowen Li: Department of Automation & BNRist, Tsinghua University, Beijing, China
- Shiyu Tan: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Jiuyang Dong: Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Xiaocong Lian: Department of Automation & BNRist, Tsinghua University, Beijing, China
- Yongbing Zhang: Harbin Institute of Technology (Shenzhen), Shenzhen, China
- Xiangyang Ji: Department of Automation & BNRist, Tsinghua University, Beijing, China
- Ashok Veeraraghavan: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
21. Moazeni S, Pollmann E, Boominathan V, Cardoso FA, Robinson J, Veeraraghavan A, Shepard K. A Mechanically Flexible, Implantable Neural Interface for Computational Imaging and Optogenetic Stimulation Over 5.4×5.4 mm² FoV. IEEE Trans Biomed Circuits Syst 2021; 15:1295-1305. PMID: 34951854. DOI: 10.1109/tbcas.2021.3138334.
Abstract
Emerging optical functional imaging and optogenetics are among the most promising approaches in neuroscience for studying neuronal circuits. Combining both methods in a single implantable device enables all-optical neural interrogation with immediate applications in freely behaving animal studies. In this paper, we demonstrate such a device, capable of optical neural recording and stimulation over large cortical areas. This implantable surface device exploits lensless computational imaging and a novel packaging scheme to achieve an ultra-thin (250 μm thick), mechanically flexible form factor. The core of the device is a custom-designed CMOS integrated circuit containing a 160×160 array of time-gated single-photon avalanche diodes (SPADs) for low-light-intensity imaging and an interspersed array of dual-color (blue and green) flip-chip-bonded micro-LEDs (μLEDs) as light sources. We achieved 60 μm lateral imaging resolution and 0.2 mm³ volumetric precision for optogenetics over a 5.4×5.4 mm² field of view (FoV). The device achieves a 125 fps frame rate and consumes 40 mW of total power.
22. Kim HK, Zhao Y, Raghuram A, Veeraraghavan A, Robinson J, Hielscher AH. Ultrafast and Ultrahigh-Resolution Diffuse Optical Tomography for Brain Imaging with Sensitivity Equation based Noniterative Sparse Optical Reconstruction (SENSOR). J Quant Spectrosc Radiat Transf 2021; 276:107939. PMID: 34966190. PMCID: PMC8713562. DOI: 10.1016/j.jqsrt.2021.107939.
Abstract
We introduce a novel image reconstruction method for time-resolved diffuse optical tomography (DOT) that yields submillimeter resolution in less than a second. This opens the door to high-resolution, real-time DOT in imaging of brain activity. We call this approach the sensitivity equation based noniterative sparse optical reconstruction (SENSOR) method. The high spatial resolution is achieved by implementing an asymptotic ℓ0-norm operator that guarantees the sparsest representation of the reconstructed targets. The high computational speed is achieved by employing the nontruncated sensitivity equation based noniterative inverse formulation combined with a reduced sensing matrix and parallel computing. We tested the new method with numerical and experimental data. The results demonstrate that the SENSOR algorithm can achieve 1 mm³ spatial-resolution optical tomographic imaging at depths of ∼60 mean free paths (MFPs) in 20-30 milliseconds on an Intel Core i9 processor.
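The core idea, a single noniterative solve of the sensitivity equation followed by a sparsity-enforcing step, can be caricatured in NumPy. This is a toy stand-in under stated assumptions (a random dense sensitivity matrix, hard thresholding in place of the asymptotic ℓ0-norm operator, made-up problem sizes), not the SENSOR implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_vox, k = 60, 30, 3               # hypothetical problem sizes
J = rng.standard_normal((n_meas, n_vox))   # sensitivity (Jacobian) matrix

x_true = np.zeros(n_vox)
x_true[[4, 11, 22]] = [1.0, -0.5, 2.0]     # sparse absorption perturbation
y = J @ x_true                             # noiseless measurement

x_ls = np.linalg.pinv(J) @ y               # one-shot, noniterative inversion
# crude sparsity step: keep the k largest-magnitude voxels
thresh = np.sort(np.abs(x_ls))[-k]
x_hat = np.where(np.abs(x_ls) >= thresh, x_ls, 0.0)

print(np.flatnonzero(x_hat))               # recovered support: [ 4 11 22]
```

The speed of the real method comes from precomputing the (reduced) inverse operator, so each new measurement requires only a matrix-vector product rather than an iterative solve.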
Affiliation(s)
- Hyun Keol Kim: Department of Radiology, Columbia University Irving Medical Center, New York, NY 10032; Department of Biomedical Engineering, New York University – Tandon School of Engineering, New York, NY 10010
- Yongyi Zhao: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Ankit Raghuram: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Ashok Veeraraghavan: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Jacob Robinson: Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005
- Andreas H. Hielscher: Department of Biomedical Engineering, New York University – Tandon School of Engineering, New York, NY 10010
23. Tan J, Boominathan V, Baraniuk R, Veeraraghavan A. EDoF-ToF: extended depth of field time-of-flight imaging. Opt Express 2021; 29:38540-38556. PMID: 34808905. DOI: 10.1364/oe.441515.
Abstract
Conventional continuous-wave amplitude-modulated time-of-flight (CWAM ToF) cameras suffer from a fundamental trade-off between light throughput and depth of field (DoF): a larger lens aperture allows more light collection but suffers from significantly lower DoF. However, both high light throughput, which increases signal-to-noise ratio, and a wide DoF, which enlarges the system's applicable depth range, are valuable for CWAM ToF applications. In this work, we propose EDoF-ToF, an algorithmic method to extend the DoF of large-aperture CWAM ToF cameras by using a neural network to deblur objects outside of the lens's narrow focal region and thus produce an all-in-focus measurement. A key component of our work is the proposed large-aperture ToF training data simulator, which models the depth-dependent blurs and partial occlusions caused by such apertures. Contrary to conventional image deblurring where the blur model is typically linear, ToF depth maps are nonlinear functions of scene intensities, resulting in a nonlinear blur model that we also derive for our simulator. Unlike extended DoF for conventional photography where depth information needs to be encoded (or made depth-invariant) using additional hardware (phase masks, focal sweeping, etc.), ToF sensor measurements naturally encode depth information, allowing a completely software solution to extended DoF. We experimentally demonstrate EDoF-ToF increasing the DoF of a conventional ToF system by 3.6×, effectively achieving the DoF of a smaller lens aperture that allows 22.1× less light. Ultimately, EDoF-ToF enables CWAM ToF cameras to enjoy the benefits of both high light throughput and a wide DoF.
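For context, a CWAM ToF camera recovers depth from the phase shift of the returned modulation, d = c·Δφ / (4π·f_mod); this is the standard relation the deblurred measurements ultimately feed into, not a formula specific to this paper. A minimal sketch:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cwam_tof_depth(phase_shift_rad: float, f_mod_hz: float) -> float:
    """Depth from a CWAM ToF phase measurement: d = c * dphi / (4*pi*f_mod)."""
    return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

# a phase shift of pi at 50 MHz modulation corresponds to half the
# unambiguous range c/(2*f_mod) ~ 3.0 m, i.e. ~1.5 m
print(round(cwam_tof_depth(math.pi, 50e6), 3))   # 1.499
```

Because depth is this nonlinear function of the correlated intensity measurements, defocus blur acts on the raw correlations rather than on the depth map itself, which is why the paper's blur model is nonlinear.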
24. Saragadam V, DeZeeuw M, Baraniuk RG, Veeraraghavan A, Sankaranarayanan AC. SASSI - Super-Pixelated Adaptive Spatio-Spectral Imaging. IEEE Trans Pattern Anal Mach Intell 2021; 43:2233-2244. PMID: 33891546. DOI: 10.1109/tpami.2021.3075228.
Abstract
We introduce a novel video-rate hyperspectral imager with high spatial, temporal, and spectral resolutions. Our key hypothesis is that the spectral profiles of pixels within each super-pixel tend to be similar. Hence, a scene-adaptive spatial sampling of a hyperspectral scene, guided by its super-pixel segmented image, is capable of obtaining high-quality reconstructions. To achieve this, we acquire an RGB image of the scene, compute its super-pixels, and from these generate a spatial mask of locations at which we measure high-resolution spectra. The hyperspectral image is subsequently estimated by fusing the RGB image and the spectral measurements using a learnable guided filtering approach. Due to the low computational complexity of the super-pixel estimation step, our setup can capture hyperspectral images of scenes with little overhead over traditional snapshot hyperspectral cameras, but with significantly higher spatial and spectral resolutions. We validate the proposed technique with extensive simulations as well as a lab prototype that measures hyperspectral video at a spatial resolution of 600×900 pixels and a spectral resolution of 10 nm over visible wavebands, achieving a frame rate of 18 fps.
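The central assumption, that pixels within a super-pixel share a spectral profile, means one sampled spectrum per super-pixel suffices to fill that region. A toy NumPy illustration (a hand-made 3×3 label map and 3-band spectra; SASSI itself additionally fuses the result with the RGB image via learnable guided filtering):

```python
import numpy as np

labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [2, 2, 2]])         # toy super-pixel segmentation
spectra = np.eye(3)                    # one 3-band spectrum per super-pixel
cube = spectra[labels]                 # ground-truth hyperspectral cube, H x W x bands

recon = np.zeros_like(cube)
for s in np.unique(labels):
    r, c = np.argwhere(labels == s)[0]  # one sampled location per super-pixel
    recon[labels == s] = cube[r, c]     # propagate its spectrum across the region

print(np.allclose(recon, cube))         # True when the assumption holds exactly
```

On real scenes the assumption holds only approximately, which is exactly the residual the guided-filtering fusion step is trained to correct.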
25. Zhao Y, Raghuram A, Kim HK, Hielscher AH, Robinson JT, Veeraraghavan A. High Resolution, Deep Imaging Using Confocal Time-of-Flight Diffuse Optical Tomography. IEEE Trans Pattern Anal Mach Intell 2021; 43:2206-2219. PMID: 33891548. PMCID: PMC8270678. DOI: 10.1109/tpami.2021.3075366.
Abstract
Light scattering by tissue severely limits how deep beneath the surface one can image and the spatial resolution one can obtain from these images. Diffuse optical tomography (DOT) is one of the most powerful techniques for imaging deep within tissue, well beyond the conventional ∼10-15 mean scattering lengths tolerated by ballistic imaging techniques such as confocal and two-photon microscopy. Unfortunately, existing DOT systems are limited, achieving only centimeter-scale resolution. Furthermore, they suffer from slow acquisition times and slow reconstruction speeds, making real-time imaging infeasible. We show that time-of-flight diffuse optical tomography (ToF-DOT) and its confocal variant (CToF-DOT), by exploiting photon travel time information, allow us to achieve millimeter spatial resolution in the highly scattering diffusion regime ( mean free paths). In addition, we demonstrate two further innovations: confocal measurements and multiplexed illumination sources, which together significantly reduce the measurement acquisition time. Finally, we rely on a novel convolutional approximation that allows us to develop a fast reconstruction algorithm, achieving a 100× speedup in reconstruction time compared to traditional DOT reconstruction techniques. Together, we believe these technical advances serve as a first step toward real-time, millimeter-resolution, deep-tissue imaging using DOT.
26. Pai A, Veeraraghavan A, Sabharwal A. HRVCam: robust camera-based measurement of heart rate variability. J Biomed Opt 2021; 26:022707. PMID: 33569935. PMCID: PMC7874852. DOI: 10.1117/1.jbo.26.2.022707.
Abstract
SIGNIFICANCE Non-contact, camera-based heart rate variability (HRV) estimation is desirable in numerous applications, including medical, automotive, and entertainment. Unfortunately, camera-based HRV accuracy and reliability suffer due to two challenges: (a) darker skin tones result in lower SNR and (b) relative motion induces measurement artifacts. AIM We propose HRVCam, an algorithm that provides sufficient robustness to the low SNR and motion-induced artifacts commonly present in imaging photoplethysmography (iPPG) signals. APPROACH HRVCam computes camera-based HRV from the instantaneous frequency of the iPPG signal. HRVCam uses automatic adaptive bandwidth filtering along with discrete energy separation to estimate the instantaneous frequency. The parameters of HRVCam are set using the observed characteristics of HRV and iPPG signals. RESULTS We capture a new dataset containing 16 participants with diverse skin tones. We demonstrate that HRVCam reduces the error in camera-based HRV metrics significantly (more than 50% reduction) for videos with dark skin and face motion. CONCLUSION HRVCam can be used on top of iPPG estimation algorithms to provide robust HRV measurements, making camera-based HRV practical.
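The discrete energy separation step mentioned in the Approach rests on the Teager-Kaiser energy operator, Ψ[x(n)] = x(n)² − x(n−1)·x(n+1); for a narrowband signal, comparing Ψ applied to the signal and to its first difference yields the instantaneous frequency. A self-contained sketch on a synthetic tone (illustrative only, not the HRVCam code):

```python
import numpy as np

def teager(x):
    """Teager-Kaiser energy: x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa_frequency(x):
    """DESA-style frequency estimate (rad/sample) for a narrowband signal:
    Omega = 2*arcsin(sqrt(Psi[diff(x)] / (4*Psi[x])))."""
    psi_x = teager(x).mean()
    psi_y = teager(np.diff(x)).mean()
    return 2 * np.arcsin(np.sqrt(psi_y / (4 * psi_x)))

n = np.arange(500)
tone = np.cos(0.3 * n)                         # synthetic "pulse" at 0.3 rad/sample
print(round(float(desa_frequency(tone)), 3))   # 0.3
```

For an iPPG signal the estimate is computed over short windows, so beat-to-beat frequency variations (i.e., HRV) are tracked directly instead of being recovered from peak-to-peak intervals.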
Affiliation(s)
- Amruta Pai: Rice University, Scalable Health Labs, Electrical and Computer Engineering Department, Houston, Texas, United States
- Ashok Veeraraghavan: Rice University, Scalable Health Labs, Electrical and Computer Engineering Department, Houston, Texas, United States
- Ashutosh Sabharwal: Rice University, Scalable Health Labs, Electrical and Computer Engineering Department, Houston, Texas, United States
27. Nowara EM, McDuff D, Veeraraghavan A. Systematic analysis of video-based pulse measurement from compressed videos. Biomed Opt Express 2021; 12:494-508. PMID: 33659085. PMCID: PMC7899506. DOI: 10.1364/boe.408471.
Abstract
Camera-based physiological measurement enables vital signs to be captured unobtrusively, without contact with the body. Remote, or imaging, photoplethysmography involves recovering peripheral blood flow from subtle variations in video pixel intensities. While the pulse signal may be easy to obtain from high-quality uncompressed videos, the signal-to-noise ratio drops dramatically with video bitrate. Uncompressed videos incur large file storage and data transfer costs, making analysis, manipulation, and sharing challenging. To help address these challenges, we use compression-specific supervised models to mitigate the effect of temporal video compression on heart rate estimates. We perform a systematic evaluation of the performance of state-of-the-art algorithms across different levels and formats of compression. We demonstrate that networks trained on compressed videos consistently outperform other benchmark methods, both on stationary videos and on videos with significant rigid head motions. By training on videos with the same or higher compression factor than the test videos, we achieve improvements in signal-to-noise ratio (SNR) of up to 3 dB and reductions in mean absolute error (MAE) of up to 6 beats per minute (BPM).
Affiliation(s)
- Ewa M. Nowara
- Electrical and Computer Engineering Department, Rice University, 6100 Main St, Houston, TX 77005, USA
- Daniel McDuff
- Microsoft Research AI, 14820 NE 36th St, Redmond, WA 98052, USA
- Ashok Veeraraghavan
- Electrical and Computer Engineering Department, Rice University, 6100 Main St, Houston, TX 77005, USA
28
Bhowmick S, Nagarajaiah S, Veeraraghavan A. Vision and Deep Learning-Based Algorithms to Detect and Quantify Cracks on Concrete Surfaces from UAV Videos. Sensors (Basel) 2020; 20:6299. [PMID: 33167411 PMCID: PMC7663834 DOI: 10.3390/s20216299]
Abstract
Immediate assessment of the structural integrity of important civil infrastructure, like bridges, hospitals, or dams, is of utmost importance after natural disasters. Currently, inspection is performed manually by engineers who look for local damage and its extent at significant locations of the structure to understand its implications for global stability. However, the whole process is time-consuming and prone to human error. Due to their size and extent, some regions of civil structures are hard to access for manual inspection. In such situations, a vision-based system of Unmanned Aerial Vehicles (UAVs) programmed with Artificial Intelligence algorithms may be an effective alternative for carrying out a timely health assessment of civil infrastructure. This paper proposes a framework for achieving the above-mentioned goal using computer vision and deep learning algorithms to detect cracks on a concrete surface from its image by carrying out image segmentation, i.e., classifying whether each pixel in an image of the concrete surface belongs to a crack or not. The image segmentation, or dense pixel-level classification, is carried out using a deep neural network architecture named U-Net. Further, morphological operations on the segmented images yield dense measurements of crack geometry, such as length, width, area, and orientation, for the individual cracks present in the image. The efficacy and robustness of the proposed method as a viable real-life application were validated in a laboratory experiment, a four-point bending test on an 8-foot-long concrete beam, with video recorded both by a camera mounted on a UAV and by a still ground-based video camera. Detection, quantification, and localization of damage on civil infrastructure using the proposed framework can directly be used in the prognosis of the structure's ability to withstand service loads.
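The crack measurements mentioned above (length, width, area, orientation) can be illustrated with a moment-based sketch operating on a binary segmentation mask. This is a simplified stand-in for the paper's morphological pipeline; the pixel size and the synthetic mask are assumptions:

```python
import numpy as np

def crack_geometry(mask, mm_per_px=1.0):
    """Estimate area, length, mean width, and orientation of a single
    crack from a binary mask using image moments (pure-NumPy sketch)."""
    ys, xs = np.nonzero(mask)
    area = ys.size * mm_per_px ** 2
    # Covariance of pixel coordinates: its principal axis gives the crack
    # orientation, and the extent along that axis an effective length.
    pts = np.stack([xs, ys]).astype(float)
    cov = np.cov(pts)
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues ascending
    major = evecs[:, 1]                         # principal (major) axis
    proj = major @ (pts - pts.mean(axis=1, keepdims=True))
    length = (proj.max() - proj.min()) * mm_per_px
    width = area / length                       # mean width = area / length
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180
    return area, length, width, angle

# Synthetic 3-px-wide, 60-px-long horizontal crack at 0.5 mm/px
mask = np.zeros((40, 80), bool)
mask[20:23, 10:70] = True
area, length, width, angle = crack_geometry(mask, mm_per_px=0.5)
```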
Affiliation(s)
- Sutanu Bhowmick
- Department of Civil and Environmental Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Satish Nagarajaiah
- Department of Civil and Environmental Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Department of Mechanical Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
29
Nagamatsu G, Nowara EM, Pai A, Veeraraghavan A, Kawasaki H. PPG3D: Does 3D head tracking improve camera-based PPG estimation? Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1194-1197. [PMID: 33018201 DOI: 10.1109/embc44109.2020.9176065]
Abstract
Over the last few years, camera-based estimation of vital signs, referred to as imaging photoplethysmography (iPPG), has garnered significant attention due to the relative simplicity, ease, unobtrusiveness, and flexibility offered by such measurements. It is expected that iPPG may be integrated into a host of emerging applications in areas as diverse as autonomous cars, neonatal monitoring, and telemedicine. In spite of this potential, the primary challenge of non-contact camera-based measurements is the relative motion between the camera and the subjects. Current techniques employ 2D feature tracking to reduce the effect of subject and camera motion, but they are limited to handling translational and in-plane motion. In this paper, we study, for the first time, the utility of 3D face tracking to allow iPPG to retain robust performance even in the presence of out-of-plane and large relative motions. We use an RGB-D camera to obtain 3D information from the subjects and use the spatial and depth information to fit a 3D face model and track the model over the video frames. This allows us to estimate correspondence over the entire video with pixel-level accuracy, even in the presence of out-of-plane or large motions. We then estimate iPPG from the warped video data, which ensures per-pixel correspondence over the entire window length used for estimation. Our experiments demonstrate improvement in robustness when head motion is large.
30
Boominathan V, Adams JK, Robinson JT, Veeraraghavan A. PhlatCam: Designed Phase-Mask Based Thin Lensless Camera. IEEE Trans Pattern Anal Mach Intell 2020; 42:1618-1629. [PMID: 32324539 PMCID: PMC7439257 DOI: 10.1109/tpami.2020.2987489]
Abstract
We demonstrate a versatile thin lensless camera with a designed phase-mask placed at sub-2 mm from an imaging CMOS sensor. Using wave optics and phase retrieval methods, we present a general-purpose framework to create phase-masks that achieve desired sharp point-spread-functions (PSFs) for desired camera thicknesses. From a single 2D encoded measurement, we show the reconstruction of high-resolution 2D images, computational refocusing, and 3D imaging. This ability is made possible by our proposed high-performance contour-based PSF. The heuristic contour-based PSF is designed using concepts in signal processing to achieve maximal information transfer to a bit-depth limited sensor. Due to the efficient coding, we can use fast linear methods for high-quality image reconstructions and switch to iterative nonlinear methods for higher fidelity reconstructions and 3D imaging.
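The "fast linear methods" for reconstruction can be sketched as FFT-domain Tikhonov-regularized deconvolution, assuming a shift-invariant forward model with a known PSF. The random PSF below is only a stand-in for the designed contour PSF, and the regularization weight is an assumption (in practice it would be set by the noise level):

```python
import numpy as np

def fft_conv(img, psf):
    """Circular convolution via FFT: sketch of the lensless forward model."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def tikhonov_deconv(meas, psf, eps=1e-6):
    """Fast linear reconstruction: regularized inverse filter in Fourier space."""
    H = np.fft.fft2(psf)
    X = np.conj(H) * np.fft.fft2(meas) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(1)
scene = np.zeros((64, 64))
scene[20, 30] = 1.0          # two point sources
scene[40, 10] = 0.5
psf = rng.random((64, 64))
psf /= psf.sum()             # normalized stand-in for a contour PSF
meas = fft_conv(scene, psf)  # sensor sees the PSF-coded measurement
recon = tikhonov_deconv(meas, psf)
```

Because the coded measurement spreads each point over the whole sensor, a single inverse filter recovers the scene in one linear step; iterative solvers would be swapped in for higher fidelity.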
31
Kumar M, Suliburk JW, Veeraraghavan A, Sabharwal A. PulseCam: a camera-based, motion-robust and highly sensitive blood perfusion imaging modality. Sci Rep 2020; 10:4825. [PMID: 32179806 PMCID: PMC7075982 DOI: 10.1038/s41598-020-61576-0]
Abstract
Blood carries oxygen and nutrients to the trillions of cells in our body to sustain vital life processes. Lack of blood perfusion can cause irreversible cell damage. Therefore, blood perfusion measurement has widespread clinical applications. In this paper, we develop PulseCam - a new camera-based, motion-robust, and highly sensitive blood perfusion imaging modality with 1 mm spatial resolution and 1 frame-per-second temporal resolution. Existing camera-only blood perfusion imaging modalities suffer from two core challenges: (i) motion artifacts, and (ii) small signal recovery in the presence of large surface reflection and measurement noise. PulseCam addresses these challenges by robustly combining the video recording from the camera with a pulse waveform measured using a conventional pulse oximeter to obtain reliable blood perfusion maps in the presence of motion artifacts and outliers in the video recordings. For video stabilization, we adopt a novel brightness-invariant optical flow algorithm that reduces the error in the blood perfusion estimate to below 10% in different motion scenarios, compared to 20-30% error when using current approaches. PulseCam can detect subtle changes in blood perfusion below the skin with at least two times better sensitivity and three times better response time than infrared thermography, and at significantly lower cost. PulseCam can also detect venous or partial blood flow occlusion that is difficult to identify using existing modalities such as the perfusion index measured using a pulse oximeter. In a pilot clinical study, we also demonstrate that PulseCam is robust and reliable in an operationally challenging surgery-room setting. We anticipate that PulseCam will be used both at the bedside and as a point-of-care blood perfusion imaging device to visualize and analyze blood perfusion in an easy-to-use and cost-effective manner.
Affiliation(s)
- Mayank Kumar
- Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, 77005, USA
- James W Suliburk
- Division of General Surgery, Baylor College of Medicine, 6620 Main St, Houston, TX, 77030, USA
- Ashok Veeraraghavan
- Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, 77005, USA
- Ashutosh Sabharwal
- Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, 77005, USA
32
Gruber DF, Phillips BT, O’Brien R, Boominathan V, Veeraraghavan A, Vasan G, O’Brien P, Pieribone VA, Sparks JS. Bioluminescent flashes drive nighttime schooling behavior and synchronized swimming dynamics in flashlight fish. PLoS One 2019; 14:e0219852. [PMID: 31412054 PMCID: PMC6693688 DOI: 10.1371/journal.pone.0219852]
Abstract
Schooling fishes, like flocking birds and swarming insects, display remarkable behavioral coordination. While over 25% of fish species exhibit schooling behavior, nighttime schooling has rarely been observed or reported. This is because vision is the primary modality for schooling, which is corroborated by the fact that most fish schools disperse at critically low light levels. Here we report on a large aggregation of the bioluminescent flashlight fish Anomalops katoptron that exhibited nighttime schooling behavior during multiple moon phases, including the new moon. Data were recorded with a suite of low-light imaging devices, including a high-speed, high-resolution scientific complementary metal-oxide-semiconductor (sCMOS) camera. Image analysis revealed nighttime schooling using synchronized bioluminescent flashing displays and demonstrated that school motion synchrony correlates with relative swim speed. A computer model of flashlight fish schooling behavior shows that only a small percentage of individuals need to exhibit bioluminescence for school cohesion to be maintained. Flashlight fish schooling is unique among fishes in that bioluminescence enables schooling in conditions of no ambient light. In addition, some members can still partake in the school while not actively exhibiting their bioluminescence. Image analysis of our field data and model demonstrates that if a small percentage of fish become motivated to change direction, the rest of the school follows. The use of bioluminescence by flashlight fish to enable schooling in shallow water adds an additional ecological application of bioluminescence and suggests that schooling behavior in mesopelagic bioluminescent fishes may also be mediated by luminescent displays.
Affiliation(s)
- David F. Gruber
- Department of Natural Sciences, City University of New York, Baruch College, New York, New York, United States of America
- PhD Program in Biology, The Graduate Center, City University of New York, New York, New York, United States of America
- Sackler Institute for Comparative Genomics, American Museum of Natural History, New York, New York, United States of America
- Brennan T. Phillips
- Department of Ocean Engineering, University of Rhode Island, Narragansett, Rhode Island, United States of America
- Rory O’Brien
- Department of Cellular and Molecular Physiology, The John B. Pierce Laboratory, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Vivek Boominathan
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States of America
- Ashok Veeraraghavan
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States of America
- Ganesh Vasan
- Department of Cellular and Molecular Physiology, The John B. Pierce Laboratory, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Peter O’Brien
- Department of Cellular and Molecular Physiology, The John B. Pierce Laboratory, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Vincent A. Pieribone
- Department of Cellular and Molecular Physiology, The John B. Pierce Laboratory, Yale University School of Medicine, New Haven, Connecticut, United States of America
- John S. Sparks
- Sackler Institute for Comparative Genomics, American Museum of Natural History, New York, New York, United States of America
- Department of Ichthyology, Division of Vertebrate Zoology, American Museum of Natural History, New York, New York, United States of America
33
Wu Y, Sharma MK, Veeraraghavan A. WISH: wavefront imaging sensor with high resolution. Light Sci Appl 2019; 8:44. [PMID: 31069074 PMCID: PMC6491653 DOI: 10.1038/s41377-019-0154-x]
Abstract
Wavefront sensing is the simultaneous measurement of the amplitude and phase of an incoming optical field. Traditional wavefront sensors such as the Shack-Hartmann wavefront sensor (SHWFS) suffer from a fundamental tradeoff between spatial resolution and phase estimation accuracy and consequently can only achieve a resolution of a few thousand pixels. To break this tradeoff, we present a novel computational-imaging-based technique, namely, the Wavefront Imaging Sensor with High resolution (WISH). We replace the microlens array in the SHWFS with a spatial light modulator (SLM) and use a computational phase-retrieval algorithm to recover the incident wavefront. This wavefront sensor can measure highly varying optical fields at more than 10-megapixel resolution with fine phase estimation. To the best of our knowledge, this resolution is an order of magnitude higher than that of current noninterferometric wavefront sensors. To demonstrate the capability of WISH, we present three applications, which cover a wide range of spatial scales. First, we produce a diffraction-limited reconstruction for long-distance imaging by combining WISH with a large-aperture, low-quality Fresnel lens. Second, we show the recovery of high-resolution images of objects that are obscured by scattering. Third, we show that WISH can be used as a microscope without an objective lens. Our study suggests that the design principle of WISH, which combines optical modulators and computational algorithms to sense high-resolution optical fields, enables improved capabilities in many existing applications while revealing entirely new, hitherto unexplored application areas.
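The computational phase-retrieval step can be illustrated with the classic two-plane Gerchberg-Saxton iteration, a much-simplified relative of the multi-measurement SLM-based recovery used in WISH (the quadratic test phase, grid size, and iteration count are assumptions for illustration):

```python
import numpy as np

def gerchberg_saxton(src_amp, far_amp, iters=200, seed=0):
    """Classic two-plane Gerchberg-Saxton: find a phase consistent with
    measured amplitudes in two Fourier-related planes."""
    rng = np.random.default_rng(seed)
    field = src_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, src_amp.shape))
    errs = []
    for _ in range(iters):
        F = np.fft.fft2(field)
        # Track normalized mismatch against the measured far-field amplitude
        errs.append(np.abs(np.abs(F) - far_amp).mean() / far_amp.mean())
        F = far_amp * np.exp(1j * np.angle(F))                    # far-field constraint
        field = src_amp * np.exp(1j * np.angle(np.fft.ifft2(F)))  # source constraint
    return field, errs

# Ground truth: uniform amplitude carrying a smooth quadratic phase
n = 32
y, x = np.mgrid[:n, :n] - n // 2
true = np.exp(1j * 0.02 * (x ** 2 + y ** 2))
src_amp = np.abs(true)
far_amp = np.abs(np.fft.fft2(true))
est, errs = gerchberg_saxton(src_amp, far_amp)
```

WISH replaces the single fixed constraint pair with many SLM-modulated measurements, which is what lets it resolve the ambiguities that make plain two-plane retrieval stagnate.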
Affiliation(s)
- Yicheng Wu
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Applied Physics Program, Rice University, Houston, TX, USA
- Manoj Kumar Sharma
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Applied Physics Program, Rice University, Houston, TX, USA
34
Ye F, Avants BW, Veeraraghavan A, Robinson JT. Integrated light-sheet illumination using metallic slit microlenses. Opt Express 2018; 26:27326-27338. [PMID: 30469803 DOI: 10.1364/oe.26.027326]
Abstract
Light sheet microscopy (LSM) - also known as selective plane illumination microscopy (SPIM) - enables high-speed, volumetric imaging by illuminating a two-dimensional cross-section of a specimen. Typically, this light sheet is created by table-top optics, which limits the ability to miniaturize the overall SPIM system. Replacing this table-top illumination system with miniature, integrated devices would reduce the cost and footprint of SPIM systems. One important element for a miniature SPIM system is a flat, easily manufactured lens that can form a light sheet. Here we investigate planar metallic lenses as the beam shaping element of an integrated SPIM illuminator. Based on finite difference time domain (FDTD) simulations, we find that diffraction from a single slit can create planar illumination with a higher light throughput than zone plate or plasmonic lenses. Metallic slit microlenses also show broadband operation across the entire visible range and are nearly polarization insensitive. Furthermore, compared to meta-lenses based on sub-wavelength-scale diffractive elements, metallic slit lenses have micron-scale features compatible with low-cost photolithographic manufacturing. These features allow us to create inexpensive integrated devices that generate light-sheet illumination comparable to tabletop microscopy systems. Further miniaturization of this type of integrated SPIM illuminator will open new avenues for flat, implantable photonic devices for in vivo biological imaging.
35
Niu L, Cai J, Veeraraghavan A, Zhang L. Zero-Shot Learning via Category-Specific Visual-Semantic Mapping and Label Refinement. IEEE Trans Image Process 2018; 28:965-979. [PMID: 30281456 DOI: 10.1109/tip.2018.2872916]
Abstract
Zero-Shot Learning (ZSL) aims to classify a test instance from an unseen category based on training instances from seen categories, in which the gap between seen and unseen categories is generally bridged via a visual-semantic mapping between the low-level visual feature space and the intermediate semantic space. However, the visual-semantic mapping (i.e., projection) learnt from seen categories may not generalize well to unseen categories, which is known as the projection domain shift in ZSL. To address this projection domain shift issue, we propose a method named Adaptive Embedding ZSL (AEZSL) to learn an adaptive visual-semantic mapping for each unseen category, followed by progressive label refinement. Moreover, to avoid learning a visual-semantic mapping for each unseen category in the large-scale classification task, we additionally propose a deep adaptive embedding model named Deep AEZSL (DAEZSL) sharing a similar idea (i.e., the visual-semantic mapping should be category-specific and related to the semantic space) with AEZSL, which only needs to be trained once but can be applied to an arbitrary number of unseen categories. Extensive experiments demonstrate that our proposed methods achieve state-of-the-art results for image classification on three small-scale benchmark datasets and one large-scale benchmark dataset.
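The baseline visual-semantic mapping that AEZSL adapts per category can be sketched as a single ridge regression from visual features to attributes, followed by nearest-neighbor search over unseen-class attribute vectors. All data below are synthetic, and the linear generative model is purely an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 5-dim attribute vectors for 6 seen and 2 unseen classes
attrs_seen = rng.random((6, 5))
attrs_unseen = rng.random((2, 5))

# Visual features simulated as a linear function of class attributes plus noise
W_true = rng.standard_normal((5, 8))
def sample(attr, n):
    return attr @ W_true + 0.01 * rng.standard_normal((n, 8))

X_train = np.vstack([sample(a, 100) for a in attrs_seen])
A_train = np.repeat(attrs_seen, 100, axis=0)

# Ridge regression learns the visual-to-semantic mapping from seen classes only
lam = 1e-3
M = np.linalg.solve(X_train.T @ X_train + lam * np.eye(8), X_train.T @ A_train)

# An unseen-class instance is classified by projecting it into the semantic
# space and picking the nearest unseen-class attribute vector
x_test = sample(attrs_unseen[1], 20).mean(axis=0, keepdims=True)
pred = int(np.argmin(np.linalg.norm(attrs_unseen - x_test @ M, axis=1)))
```

The paper's point is that this single shared M is exactly what drifts on unseen categories; AEZSL replaces it with a category-specific mapping.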
36
Li F, Chen H, Pediredla A, Yeh C, He K, Veeraraghavan A, Cossairt O. CS-ToF: High-resolution compressive time-of-flight imaging. Opt Express 2017; 25:31096-31110. [PMID: 29245787 DOI: 10.1364/oe.25.031096]
Abstract
Three-dimensional imaging using Time-of-flight (ToF) sensors is rapidly gaining widespread adoption in many applications due to their cost effectiveness, simplicity, and compact size. However, the current generation of ToF cameras suffers from low spatial resolution due to physical fabrication limitations. In this paper, we propose CS-ToF, an imaging architecture to achieve high spatial resolution ToF imaging via optical multiplexing and compressive sensing. Our approach is based on the observation that, while depth is non-linearly related to ToF pixel measurements, a phasor representation of captured images results in a linear image formation model. We utilize this property to develop a CS-based technique that is used to recover high resolution 3D images. Based on the proposed architecture, we developed a prototype 1-megapixel compressive ToF camera that achieves as much as 4× improvement in spatial resolution and 3× improvement for natural scenes. We believe that our proposed CS-ToF architecture provides a simple and low-cost solution to improve the spatial resolution of ToF and related sensors.
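The phasor observation at the heart of CS-ToF can be illustrated directly: a ToF pixel's measurement behaves as a complex phasor that superposes linearly, while the encoded depth is a nonlinear (phase) function of it. The modulation frequency below is a hypothetical value, not one from the paper:

```python
import numpy as np

C = 3e8        # speed of light, m/s
F_MOD = 30e6   # hypothetical modulation frequency, Hz (5 m unambiguous range)

def phasor(amplitude, depth_m):
    """A ToF pixel as a complex phasor: phase encodes depth nonlinearly."""
    return amplitude * np.exp(1j * 4 * np.pi * F_MOD * depth_m / C)

def depth_from(ph):
    """Invert the phase-to-depth relation (within the unambiguous range)."""
    return np.angle(ph) % (2 * np.pi) * C / (4 * np.pi * F_MOD)

# A pixel that optically mixes two surfaces sums their phasors linearly...
mixed = phasor(1.0, 1.0) + phasor(0.5, 3.0)
# ...but the decoded depth of the mixture is NOT a weighted average of
# the two depths, which is why recovery is done in the phasor domain.
d_mixed = depth_from(mixed)
```

Because optical multiplexing combines pixels linearly in this phasor domain, standard compressive-sensing recovery applies there even though depth itself is nonlinear in the measurements.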
37
Adams JK, Boominathan V, Avants BW, Vercosa DG, Ye F, Baraniuk RG, Robinson JT, Veeraraghavan A. Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope. Sci Adv 2017; 3:e1701548. [PMID: 29226243 PMCID: PMC5722650 DOI: 10.1126/sciadv.1701548]
Abstract
Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world's tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high-frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies.
Affiliation(s)
- Jesse K. Adams
- Applied Physics Program, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Nanophotonic Computational Imaging and Sensing Laboratory, Rice University, Houston, TX 77005, USA
- Vivek Boominathan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Nanophotonic Computational Imaging and Sensing Laboratory, Rice University, Houston, TX 77005, USA
- Benjamin W. Avants
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Daniel G. Vercosa
- Applied Physics Program, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Fan Ye
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Nanophotonic Computational Imaging and Sensing Laboratory, Rice University, Houston, TX 77005, USA
- Richard G. Baraniuk
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Nanophotonic Computational Imaging and Sensing Laboratory, Rice University, Houston, TX 77005, USA
- Jacob T. Robinson
- Applied Physics Program, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Nanophotonic Computational Imaging and Sensing Laboratory, Rice University, Houston, TX 77005, USA
- Department of Bioengineering, Rice University, Houston, TX 77005, USA
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Ashok Veeraraghavan
- Applied Physics Program, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Nanophotonic Computational Imaging and Sensing Laboratory, Rice University, Houston, TX 77005, USA
39
Holloway J, Wu Y, Sharma MK, Cossairt O, Veeraraghavan A. SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography. Sci Adv 2017; 3:e1602564. [PMID: 28439550 PMCID: PMC5392025 DOI: 10.1126/sciadv.1602564]
Abstract
Synthetic aperture radar is a well-known technique for improving resolution in radio imaging. Extending these synthetic aperture techniques to the visible light domain is not straightforward because optical receivers cannot measure phase information. We propose to use macroscopic Fourier ptychography (FP) as a practical means of creating a synthetic aperture for visible imaging to achieve subdiffraction-limited resolution. We demonstrate the first working prototype for macroscopic FP in a reflection imaging geometry that is capable of imaging optically rough objects. In addition, a novel image space denoising regularization is introduced during phase retrieval to reduce the effects of speckle and improve perceptual quality of the recovered high-resolution image. Our approach is validated experimentally where the resolution of various diffuse objects is improved sixfold.
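The macroscopic FP forward model can be sketched as follows: each capture records only the intensity of the object seen through a shifted pupil in the Fourier domain, and scanning the shifts tiles a synthetic aperture far larger than the physical one. The aperture radius and shift grid below are illustrative assumptions:

```python
import numpy as np

def capture(obj_spectrum, pupil_mask, shift):
    """One FP capture: the lens passes a shifted patch of the object's
    Fourier spectrum, and the camera records intensity only."""
    patch = np.roll(obj_spectrum, shift, axis=(0, 1)) * pupil_mask
    return np.abs(np.fft.ifft2(patch)) ** 2

n = 64
yy, xx = np.mgrid[:n, :n] - n // 2
pupil = np.fft.ifftshift((xx ** 2 + yy ** 2) <= 8 ** 2)  # small physical aperture
obj = np.random.default_rng(3).random((n, n))             # stand-in diffuse object
spec = np.fft.fft2(obj)

# Scanning the illumination angle shifts the spectrum across the pupil;
# the union of pupil positions forms the synthetic aperture.
shifts = [(dy, dx) for dy in (-12, 0, 12) for dx in (-12, 0, 12)]
images = [capture(spec, pupil, s) for s in shifts]
synthetic = np.zeros((n, n), bool)
for dy, dx in shifts:
    synthetic |= np.roll(pupil, (-dy, -dx), axis=(0, 1))
```

Phase retrieval then stitches the overlapping captures back into one wide-spectrum estimate; the overlap between neighboring pupil positions is what makes the lost phase recoverable.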
Collapse
Affiliation(s)
- Jason Holloway
- Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Yicheng Wu
- Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Manoj K. Sharma
- Department of Electrical Engineering and Computer Science, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
- Oliver Cossairt
- Department of Electrical Engineering and Computer Science, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX 77005, USA
- Corresponding author.
40
Pediredla AK, Zhang S, Avants B, Ye F, Nagayama S, Chen Z, Kemere C, Robinson JT, Veeraraghavan A. Deep imaging in scattering media with selective plane illumination microscopy. J Biomed Opt 2016; 21:126009. [PMID: 27997019 DOI: 10.1117/1.jbo.21.12.126009] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Received: 08/26/2016] [Accepted: 11/21/2016] [Indexed: 05/23/2023]
Abstract
In most biological tissues, light scattering due to small differences in refractive index limits the depth of optical imaging systems. Two-photon microscopy (2PM), which significantly reduces the scattering of the excitation light, has emerged as the most common method to image deep within scattering biological tissue. This technique, however, requires high-power pulsed lasers that are both expensive and difficult to integrate into compact portable systems. Using a combination of theoretical and experimental techniques, we show that if the excitation path length can be minimized, selective plane illumination microscopy (SPIM) can image nearly as deep as 2PM without the need for a high-powered pulsed laser. Compared to other single-photon imaging techniques like epifluorescence and confocal microscopy, SPIM can image more than twice as deep in scattering media (∼10 times the mean scattering length). These results suggest that SPIM has the potential to provide deep imaging in scattering media in situations in which 2PM systems would be too large or costly.
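The abstract's central claim, that minimizing the excitation path length lets single-photon SPIM approach 2PM imaging depth, can be illustrated with a toy ballistic-attenuation (Beer-Lambert) model. This is a simplified sketch of the geometry argument, not the paper's analysis; the path lengths and scattering length below are made-up numbers.

```python
import numpy as np

def detected_signal(depth, excitation_path, ls=0.1):
    """Toy Beer-Lambert model: ballistic fluorescence signal decays
    exponentially with the total optical path (excitation in plus
    emission out), where ls is the mean scattering length in mm."""
    return np.exp(-(excitation_path + depth) / ls)

depth = 0.5                                           # hypothetical imaging depth, mm
epi = detected_signal(depth, excitation_path=depth)   # epi-illumination: excitation crosses the full depth
spim = detected_signal(depth, excitation_path=0.05)   # SPIM: short side-illumination path
ratio = spim / epi                                    # SPIM advantage grows exponentially with depth
```

In this toy model the advantage of side illumination over epi-illumination is exp((depth − side path)/ls), which is why shortening the excitation path matters so much at depth.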
Affiliation(s)
- Adithya Kumar Pediredla
- Rice University, Department of Electrical and Computer Engineering, 6100 Main Street, Houston, Texas 77005, United States
- Shizheng Zhang
- Rice University, Department of Electrical and Computer Engineering, 6100 Main Street, Houston, Texas 77005, United States
- Ben Avants
- Rice University, Department of Electrical and Computer Engineering, 6100 Main Street, Houston, Texas 77005, United States
- Fan Ye
- Rice University, Department of Electrical and Computer Engineering, 6100 Main Street, Houston, Texas 77005, United States
- Shin Nagayama
- The University of Texas Health Science Center at Houston, McGovern Medical School, 6431 Fannin Street, Houston, Texas 77030, United States
- Ziying Chen
- Rice University, Department of Electrical and Computer Engineering, 6100 Main Street, Houston, Texas 77005, United States
- Caleb Kemere
- Rice University, Department of Electrical and Computer Engineering, 6100 Main Street, Houston, Texas 77005, United States
- Rice University, Department of Bioengineering, 6100 Main Street, Houston, Texas 77005, United States
- Jacob T Robinson
- Rice University, Department of Electrical and Computer Engineering, 6100 Main Street, Houston, Texas 77005, United States
- Rice University, Department of Bioengineering, 6100 Main Street, Houston, Texas 77005, United States
- Baylor College of Medicine, Department of Neuroscience, 1 Baylor Plaza, Houston, Texas 77030, United States
- Ashok Veeraraghavan
- Rice University, Department of Electrical and Computer Engineering, 6100 Main Street, Houston, Texas 77005, United States
- Rice University, Department of Computer Science, 6100 Main Street, Houston, Texas 77005, United States
41
Yang D, Rao G, Martinez J, Veeraraghavan A, Rao A. Evaluation of tumor-derived MRI-texture features for discrimination of molecular subtypes and prediction of 12-month survival status in glioblastoma. Med Phys 2016; 42:6725-35. [PMID: 26520762 DOI: 10.1118/1.4934373] [Citation(s) in RCA: 106] [Impact Index Per Article: 13.3] [Indexed: 11/07/2022]
Abstract
PURPOSE Glioblastoma multiforme (GBM) is the most common and aggressive primary brain cancer. Four molecular subtypes of GBM have been described but can only be determined by an invasive brain biopsy. The goal of this study is to evaluate the utility of texture features extracted from magnetic resonance imaging (MRI) scans as a potential noninvasive method to characterize molecular subtypes of GBM and to predict 12-month overall survival status for GBM patients. METHODS The authors manually segmented the tumor regions from postcontrast T1-weighted and T2 fluid-attenuated inversion recovery (FLAIR) MRI scans of 82 patients with de novo GBM. For each patient, the authors extracted five sets of computer-extracted texture features, namely, 48 segmentation-based fractal texture analysis (SFTA) features, 576 histogram of oriented gradients (HOG) features, 44 run-length matrix (RLM) features, 256 local binary patterns features, and 52 Haralick features, from the tumor slice corresponding to the maximum tumor area in axial, sagittal, and coronal planes, respectively. The authors used an ensemble classifier called random forest on each feature family to predict GBM molecular subtypes and 12-month survival status (a dichotomized version of overall survival at the 12-month time point indicating whether the patient was alive at 12 months). The performance of the prediction was quantified and compared using receiver operating characteristic (ROC) curves.
RESULTS With the appropriate combination of texture feature set, image plane (axial, coronal, or sagittal), and MRI sequence, the area under ROC curve values for predicting different molecular subtypes and 12-month survival status are 0.72 for classical (with Haralick features on T1 postcontrast axial scan), 0.70 for mesenchymal (with HOG features on T2 FLAIR axial scan), 0.75 for neural (with RLM features on T2 FLAIR axial scan), 0.82 for proneural (with SFTA features on T1 postcontrast coronal scan), and 0.69 for 12-month survival status (with SFTA features on T1 postcontrast coronal scan). CONCLUSIONS The authors evaluated the performance of five types of texture features in predicting GBM molecular subtypes and 12-month survival status. The authors' results show that texture features are predictive of molecular subtypes and survival status in GBM. These results indicate the feasibility of using tumor-derived imaging features to guide genomically informed interventions without the need for invasive biopsies.
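A rough sketch of the evaluation pipeline described above (a random forest trained on one texture-feature family, scored by cross-validated AUC). The feature matrix, labels, and hyperparameters here are synthetic stand-ins for illustration only, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in: 82 patients, 48 features (the SFTA set size),
# with a toy binary label correlated to the first feature. Real
# features would be computed from the segmented MRI tumor slices.
rng = np.random.default_rng(0)
X = rng.standard_normal((82, 48))
y = (X[:, 0] + 0.5 * rng.standard_normal(82) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(y, scores)  # cross-validated area under the ROC curve
```

The per-subtype AUCs reported in the abstract correspond to repeating this loop over each (feature family, image plane, MRI sequence) combination.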
Affiliation(s)
- Dalu Yang
- Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030
- Ganesh Rao
- Department of Neurosurgery, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030
- Juan Martinez
- Department of Neurosurgery, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005
- Arvind Rao
- Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030
42
Kumar M, Suliburk J, Veeraraghavan A, Sabharwal A. PulseCam: high-resolution blood perfusion imaging using a camera and a pulse oximeter. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2016:3904-3909. [PMID: 28269139 DOI: 10.1109/embc.2016.7591581] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 06/06/2023]
Abstract
Measuring blood perfusion is important in medical care as an indicator of injury and disease. However, currently available devices to measure blood perfusion, like laser Doppler flowmetry, are bulky, expensive, and cumbersome to use. An alternative low-cost and portable camera-based blood perfusion measurement system has recently been proposed, but such a camera-only system produces noisy, low-resolution blood perfusion maps. In this paper, we propose a new multi-sensor modality, named PulseCam, for measuring blood perfusion by combining a traditional pulse oximeter with a video camera in a unique way to provide low-noise, high-resolution blood perfusion maps. Our proposed multi-sensor modality improves the per-pixel signal-to-noise ratio of the measured perfusion map by up to 3 dB and improves the spatial resolution by 2-3 times compared to the best known camera-only methods. Blood perfusion measured in the palm using our PulseCam setup during a post-occlusive reactive hyperemia (PORH) test replicates the standard PORH response curve measured using a laser Doppler flowmetry device, but at much lower cost and with a portable setup, making it suitable for further development as a clinical device.
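One plausible reading of the fusion idea, using the pulse oximeter's clean waveform as a temporal reference to denoise per-pixel camera measurements, can be sketched as a per-pixel regression onto that reference. This is a hypothetical simplification for illustration, not the authors' published algorithm.

```python
import numpy as np

def perfusion_map(video, reference):
    """Estimate per-pixel perfusion strength as the least-squares
    regression coefficient of each pixel's time series onto the
    (mean-removed, unit-normalized) pulse-oximeter reference waveform."""
    T, H, W = video.shape
    ref = reference - reference.mean()
    ref = ref / np.linalg.norm(ref)
    pix = video.reshape(T, -1)
    pix = pix - pix.mean(axis=0)
    beta = ref @ pix                     # projection onto the unit-norm reference
    return beta.reshape(H, W)

# toy demo: only a 3x3 corner of the frame pulses with the reference
t = np.arange(100)
ref = np.sin(2 * np.pi * t / 20)         # clean "pulse oximeter" waveform
video = np.zeros((100, 10, 10))
video[:, :3, :3] = 2.0 * ref[:, None, None]
pmap = perfusion_map(video, ref)
```

Pixels whose time series co-vary with the reference get large coefficients; pixels carrying only noise regress to near zero, which is the intuition behind fusing the two sensors.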
43
44
Kumar M, Veeraraghavan A, Sabharwal A. DistancePPG: Robust non-contact vital signs monitoring using a camera. Biomed Opt Express 2015; 6:1565-88. [PMID: 26137365 PMCID: PMC4467696 DOI: 10.1364/boe.6.001565] [Citation(s) in RCA: 129] [Impact Index Per Article: 14.3] [Received: 12/23/2014] [Revised: 03/10/2015] [Accepted: 03/10/2015] [Indexed: 05/19/2023]
Abstract
Vital signs such as pulse rate and breathing rate are currently measured using contact probes. However, non-contact methods for measuring vital signs are desirable both in hospital settings (e.g. in the NICU) and for ubiquitous in-situ health tracking (e.g. on mobile phones and computers with webcams). Recently, camera-based non-contact vital sign monitoring has been shown to be feasible. However, camera-based vital sign monitoring is challenging for people with darker skin tones, under low lighting conditions, and/or during movement of an individual in front of the camera. In this paper, we propose distancePPG, a new camera-based vital sign estimation algorithm which addresses these challenges. DistancePPG proposes a new method of combining skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in the region, to improve the signal-to-noise ratio (SNR) of the camera-based estimate. One of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. The gains in SNR of camera-based PPG estimated using distancePPG translate into a reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. Further, a dataset will be released, comprising synchronized video recordings of the face and pulse-oximeter-based ground-truth recordings from the earlobe, for people with different skin tones, under different lighting conditions, and for various motion scenarios.
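The core weighted-averaging step described above can be sketched in a few lines; here the SNR-derived weights are supplied by hand for illustration, whereas the actual algorithm estimates them automatically from the video.

```python
import numpy as np

def weighted_ppg(region_signals, weights):
    """Combine per-region skin-color-change signals into a single PPG
    estimate via a weighted average; higher-SNR regions get larger weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize weights to sum to 1
    return w @ np.asarray(region_signals)

# toy demo: two facial regions carry the same 1 Hz pulse at different SNRs
t = np.linspace(0, 10, 300)
pulse = np.sin(2 * np.pi * 1.0 * t)
rng = np.random.default_rng(0)
good = pulse + 0.1 * rng.standard_normal(t.size)   # well-perfused, well-lit region
bad = pulse + 1.0 * rng.standard_normal(t.size)    # noisy region
combined = weighted_ppg([good, bad], weights=[0.9, 0.1])
```

Down-weighting the noisy region yields a combined signal that tracks the underlying pulse better than the noisy region alone, which is the SNR gain the abstract describes.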
45
Mitra K, Cossairt OS, Veeraraghavan A. A Framework for Analysis of Computational Imaging Systems: Role of Signal Prior, Sensor Noise and Multiplexing. IEEE Trans Pattern Anal Mach Intell 2014; 36:1909-21. [PMID: 26352624 DOI: 10.1109/tpami.2014.2313118] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Indexed: 05/26/2023]
Abstract
Over the last decade, a number of computational imaging (CI) systems have been proposed for tasks such as motion deblurring, defocus deblurring and multispectral imaging. These techniques increase the amount of light reaching the sensor via multiplexing and then undo the deleterious effects of multiplexing by appropriate reconstruction algorithms. Given the widespread appeal and the considerable enthusiasm generated by these techniques, a detailed performance analysis of the benefits conferred by this approach is important. Unfortunately, a detailed analysis of CI has proven to be a challenging problem because performance depends equally on three components: (1) the optical multiplexing, (2) the noise characteristics of the sensor, and (3) the reconstruction algorithm, which typically uses signal priors. A few recent papers [12], [30], [49] have performed analysis taking multiplexing and noise characteristics into account. However, analysis of CI systems under state-of-the-art reconstruction algorithms, most of which exploit signal prior models, has proven to be unwieldy. In this paper, we present a comprehensive analysis framework incorporating all three components. In order to perform this analysis, we model the signal priors using a Gaussian Mixture Model (GMM). A GMM prior confers two unique characteristics. First, a GMM satisfies the universal approximation property, which says that any prior density function can be approximated to any fidelity using a GMM with an appropriate number of mixtures. Second, a GMM prior lends itself to analytical tractability, allowing us to derive simple expressions for the `minimum mean square error' (MMSE) which we use as a metric to characterize the performance of CI systems.
We use our framework to analyze several previously proposed CI techniques (focal sweep, flutter shutter, parabolic exposure, etc.), giving a conclusive answer to the question: `How much performance gain is due to the use of a signal prior, and how much is due to multiplexing?' Our analysis also clearly shows that multiplexing provides significant performance gains above and beyond the gains obtained due to the use of signal priors.
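The analytical tractability the abstract refers to can be written down directly: under a GMM prior and a linear-Gaussian measurement model, the MMSE estimate is a posterior-responsibility-weighted sum of per-component linear (Wiener) estimates. The following is a sketch of that standard result, not the authors' code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_mmse(y, H, sigma2, weights, means, covs):
    """MMSE estimate of x from y = H x + n, n ~ N(0, sigma2*I), under a
    GMM prior on x: a responsibility-weighted sum of per-component
    Wiener estimates, computable in closed form."""
    m = y.size
    resp, comps = [], []
    for w, mu, S in zip(weights, means, covs):
        Cy = H @ S @ H.T + sigma2 * np.eye(m)   # covariance of y given this component
        resp.append(w * multivariate_normal.pdf(y, mean=H @ mu, cov=Cy))
        gain = S @ H.T @ np.linalg.inv(Cy)      # per-component Wiener gain
        comps.append(mu + gain @ (y - H @ mu))
    resp = np.asarray(resp) / np.sum(resp)      # posterior component weights
    return sum(r * x for r, x in zip(resp, comps))

# one-component sanity check: reduces to the classical Wiener filter,
# so with H = I and unit prior/noise covariances the estimate is y/2
y = np.array([2.0, 4.0])
est = gmm_mmse(y, np.eye(2), 1.0, [1.0], [np.zeros(2)], [np.eye(2)])
```

With more components, the same closed form yields the MMSE expressions the paper uses as its performance metric for comparing multiplexing schemes.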
46
Samaniego A, Porter J, Sabharwal A, Twa M, Veeraraghavan A. mobileVision: Towards a patient-operable, at-home, non-mydriatic retinal imaging system. J Vis 2013. [DOI: 10.1167/13.15.63] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 11/24/2022]
47
48
Abstract
We analyze multi-bounce propagation of light in an unknown hidden volume and demonstrate that the reflected light contains sufficient information to recover the 3D structure of the hidden scene. We formulate the forward and inverse theory of secondary scattering using ideas from energy front propagation and tomography. We show that using the Fresnel approximation greatly simplifies this problem and that the inversion can be achieved via a backpropagation process. We study the invertibility, uniqueness and choices of space-time-angle dimensions using synthetic examples. We show that a 2D streak camera can be used to discover and reconstruct hidden geometry. Using a 1D high-speed time-of-flight camera, we show that our method can be used to recover 3D shapes of objects "around the corner".
Affiliation(s)
- Otkrist Gupta
- MIT Media Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
49
Gupta M, Agrawal A, Veeraraghavan A, Narasimhan SG. A Practical Approach to 3D Scanning in the Presence of Interreflections, Subsurface Scattering and Defocus. Int J Comput Vis 2012. [DOI: 10.1007/s11263-012-0554-3] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Indexed: 10/28/2022]
50
Liu MY, Tuzel O, Veeraraghavan A, Taguchi Y, Marks TK, Chellappa R. Fast object localization and pose estimation in heavy clutter for robotic bin picking. Int J Rob Res 2012. [DOI: 10.1177/0278364911436018] [Citation(s) in RCA: 120] [Impact Index Per Article: 10.0] [Indexed: 11/15/2022]
Abstract
We present a practical vision-based robotic bin-picking system that performs detection and three-dimensional pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper. Two main innovations enable our system to achieve real-time robust and accurate operation. First, we use a multi-flash camera that extracts robust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliably detect objects and estimate their poses. FDCM improves the accuracy of chamfer matching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges, a three-dimensional distance transform, and directional integral images. We empirically show that these speedups, combined with the use of bounds in the spatial and hypothesis domains, give the algorithm sublinear computational complexity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantly improving upon the accuracy of previous chamfer matching methods in all of the evaluated applications, FDCM is up to two orders of magnitude faster than the previous methods.
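FDCM builds on ordinary chamfer matching, which scores a template by the mean distance from its edge points to the nearest image edge, computed efficiently via a distance transform (FDCM then adds edge orientation, line-segment approximations, and directional integral images on top). A minimal, non-directional sketch:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(template_pts, edge_map):
    """Mean distance from each template edge point (row, col) to the
    nearest edge pixel in the image, using a Euclidean distance
    transform of the edge map; lower means a better match."""
    dt = distance_transform_edt(~edge_map.astype(bool))  # per-pixel distance to nearest edge
    return dt[template_pts[:, 0], template_pts[:, 1]].mean()

# toy edge image: a single vertical edge at column 5
edges = np.zeros((10, 10), dtype=bool)
edges[:, 5] = True
on_line = np.array([[2, 5], [7, 5]])    # template aligned with the edge
off_line = np.array([[2, 0], [7, 0]])   # template five pixels away
```

In a detector, this score would be evaluated over candidate template poses and the minimum taken; FDCM's contribution is making that search both orientation-aware and fast.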
Affiliation(s)
- Ming-Yu Liu
- Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA
- University of Maryland, College Park, MD, USA
- Oncel Tuzel
- Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA
- Ashok Veeraraghavan
- Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA
- Rice University, Houston, TX, USA
- Yuichi Taguchi
- Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA
- Tim K Marks
- Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA