1. Ma X, Moradi M, Ma X, Tang Q, Levi M, Chen Y, Zhang HK. Large area kidney imaging for pre-transplant evaluation using real-time robotic optical coherence tomography. Commun Eng 2024;3:122. PMID: 39223332; PMCID: PMC11368928; DOI: 10.1038/s44172-024-00264-7
Abstract
Optical coherence tomography (OCT) can be used to image microstructures of human kidneys. However, current OCT probes have an inadequate field-of-view, which can bias kidney assessment. Here we present a robotic OCT system in which the probe is integrated with a robot manipulator, enabling wide-area, spatially resolved imaging covering 106.39 mm by 37.70 mm. Our system comprehensively scans the kidney surface at the optimal altitude using preoperative path planning and an OCT image-based feedback control scheme, and it parameterizes and visualizes microstructures over the full scanned area. We verified the system's positioning accuracy on a phantom as 0.0762 ± 0.0727 mm and demonstrated clinical feasibility by scanning ex vivo kidneys. The parameterization reveals vasculature beneath the kidney surface, and quantification of the proximal convoluted tubule of a human kidney yields clinically relevant information. Once a large body of whole-organ parameterization and patient-outcome data has been collected, the system promises to assess kidney viability for transplantation.
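The positioning accuracy above is reported as mean ± standard deviation of the probe's position error. A minimal sketch of how such a metric could be computed from commanded versus measured probe positions (hypothetical data and function name; the paper does not state whether a population or sample standard deviation is used, so population is assumed here):

```python
import math

def positioning_error_stats(commanded, measured):
    """Euclidean error between commanded and measured 3-D probe
    positions, returned as (mean, population std) in the same units."""
    errors = [math.dist(c, m) for c, m in zip(commanded, measured)]
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    return mean, std

# Illustrative positions in millimetres (not from the paper):
commanded = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 5.0, 0.0)]
measured = [(0.05, 0.0, 0.0), (10.0, 0.1, 0.0), (10.0, 5.0, 0.02)]
mean, std = positioning_error_stats(commanded, measured)
```
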
Affiliation(s)
- Xihan Ma: Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
- Mousa Moradi: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, USA
- Xiaoyu Ma: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, USA
- Qinggong Tang: The Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK, USA
- Moshe Levi: Department of Biochemistry and Molecular & Cellular Biology, Georgetown University, Washington, DC, USA
- Yu Chen: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, USA; College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, Fujian, PR China
- Haichong K Zhang: Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, USA; Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
2. Liu Z, Han X, Gao L, Chen S, Huang W, Li P, Wu Z, Wang M, Zheng Y. Cost-effectiveness of incorporating self-imaging optical coherence tomography into fundus photography-based diabetic retinopathy screening. NPJ Digit Med 2024;7:225. PMID: 39181938; PMCID: PMC11344775; DOI: 10.1038/s41746-024-01222-5
Abstract
Diabetic macular edema (DME) has emerged as the foremost cause of vision loss in people with diabetes. Early detection of DME is paramount, yet the prevailing screening approach, which relies on two-dimensional, labor-intensive fundus photography (FP), results in frequent unwarranted referrals and overlooked diagnoses. Self-imaging optical coherence tomography (SI-OCT), offering fully automated, three-dimensional macular imaging, holds the potential to enhance diabetic retinopathy (DR) screening. We conducted an observational study in a cohort of 1822 participants with diabetes who received comprehensive assessments, including visual acuity testing, FP, and SI-OCT examinations. We compared the performance of three screening strategies: the conventional FP-based strategy, a combined FP + SI-OCT strategy, and a simulated combination of FP and manual spectral-domain OCT (SD-OCT). Additionally, we undertook a cost-effectiveness analysis using Markov models to evaluate the costs and benefits of the three strategies for referable DR. The FP + SI-OCT strategy demonstrated superior sensitivity (87.69% vs 61.53%) and specificity (98.29% vs 92.47%) in detecting DME compared with the FP-based strategy. Importantly, it outperformed the FP-based strategy with an incremental cost-effectiveness ratio (ICER) of $8016 per quality-adjusted life year (QALY), whereas the FP + SD-OCT strategy was less cost-effective, with an ICER of $45,754/QALY. Our results were robust to extensive sensitivity analyses, with the FP + SI-OCT strategy standing as the dominant choice in 69.36% of simulations conducted at the current willingness-to-pay threshold. In summary, incorporating SI-OCT into FP-based screening substantially enhances sensitivity and specificity for detecting DME and, most notably, the cost-effectiveness of DR screening.
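The ICER figures quoted above follow the standard definition: incremental cost divided by incremental QALYs between two strategies. A minimal sketch with hypothetical per-patient numbers (not the study's model inputs; the willingness-to-pay threshold is an assumed placeholder):

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    of the new strategy over the reference strategy."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_qaly == 0:
        raise ValueError("strategies yield identical QALYs; ICER undefined")
    return d_cost / d_qaly

# Hypothetical: new strategy costs $400 more, gains 0.05 QALYs.
ratio = icer(cost_new=1400.0, qaly_new=10.05, cost_ref=1000.0, qaly_ref=10.00)
# Judge against an assumed willingness-to-pay threshold ($50,000/QALY).
cost_effective = ratio <= 50_000
```
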
Affiliation(s)
- Zitian Liu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xiaotong Han: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Le Gao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Shida Chen: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Wenyong Huang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Peng Li: MOPTIM Imaging Technique Co. Ltd, Shenzhen, China
- Zhiyan Wu: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Mengchi Wang: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Yingfeng Zheng: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
3. Lotz S, Göb M, Böttger S, Ha-Wissel L, Hundt J, Ernst F, Huber R. Large area robotically assisted optical coherence tomography (LARA-OCT). Biomed Opt Express 2024;15:3993-4009. PMID: 38867778; PMCID: PMC11166428; DOI: 10.1364/boe.525524
Abstract
We demonstrate large-area robotically assisted optical coherence tomography (LARA-OCT), using a seven-degree-of-freedom robotic arm in conjunction with a 3.3 MHz swept-source OCT to raster-scan samples of arbitrary shape. By combining multiple fields of view (FOV), LARA-OCT can probe a much larger area than conventional OCT, and it can handle nonplanar, curved surfaces such as the skin on arms and legs. Because each scanner FOV remains of normal size, the lenses can have fewer aberrations and less complex optics than a single wide-field design, which may be especially critical for high-resolution scans. We use our fast MHz-OCT directly for tracking and stitching, making additional machine-vision systems such as cameras, positioning, tracking, or navigation devices unnecessary; this also eliminates the need for complex coordinate-system registration between the OCT and a machine-vision system. We implemented a real-time probe-to-surface control that keeps the probe orthogonal to the sample using only surface information from the OCT images. We present OCT data sets with volume sizes of 140 × 170 × 20 mm³, captured in 2.5 minutes.
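The probe-to-surface control above uses only surface information from the OCT images. One minimal way to sketch the underlying idea, not the authors' controller, is to estimate surface tilt from an OCT-derived height map and report the angle between the surface normal and the beam axis (synthetic data; function name and grid assumptions are illustrative):

```python
import math

def surface_tilt(height, dx=1.0):
    """Estimate tilt of a surface given as height[i][j] on a square
    grid with spacing dx. Returns (slope_x, slope_y, tilt_deg), where
    tilt_deg is the angle between surface normal and the beam axis."""
    rows, cols = len(height), len(height[0])
    gx = [(height[i][j + 1] - height[i][j]) / dx
          for i in range(rows) for j in range(cols - 1)]
    gy = [(height[i + 1][j] - height[i][j]) / dx
          for i in range(rows - 1) for j in range(cols)]
    a = sum(gx) / len(gx)  # mean slope along x
    b = sum(gy) / len(gy)  # mean slope along y
    # Plane z = a*x + b*y + c has (unnormalised) normal (-a, -b, 1);
    # angle to the optical axis (0, 0, 1):
    tilt = math.degrees(math.acos(1.0 / math.sqrt(a * a + b * b + 1.0)))
    return a, b, tilt

# Synthetic plane tilted only along x with slope 0.1:
plane = [[0.1 * j for j in range(5)] for i in range(4)]
a, b, tilt = surface_tilt(plane)
```

A controller would then command the robot to rotate by the estimated tilt so the residual angle approaches zero.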
Affiliation(s)
- Simon Lotz: Institute of Biomedical Optics, Universität zu Lübeck, Peter-Monnik-Weg 4, 23562 Lübeck, Germany
- Madita Göb: Institute of Biomedical Optics, Universität zu Lübeck, Peter-Monnik-Weg 4, 23562 Lübeck, Germany
- Sven Böttger: Institute for Robotic and Cognitive Systems, Universität zu Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany; qtec Services GmbH, Niels-Bohr-Ring 3-5, 23568 Lübeck, Germany
- Linh Ha-Wissel: Lübeck Institute of Experimental Dermatology, Universität zu Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany; Department of Dermatology, Allergology, Venerology, University Hospital of Schleswig-Holstein, Ratzeburger Allee 160, 23538 Lübeck, Germany
- Jennifer Hundt: Lübeck Institute of Experimental Dermatology, Universität zu Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Floris Ernst: Institute for Robotic and Cognitive Systems, Universität zu Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Robert Huber: Institute of Biomedical Optics, Universität zu Lübeck, Peter-Monnik-Weg 4, 23562 Lübeck, Germany; Medizinisches Laserzentrum Lübeck GmbH, Peter-Monnik-Weg 4, 23562 Lübeck, Germany
4. Fang R, Zhang P, Zhang T, Kim D, Sun E, Kuranov R, Kweon J, Huang A, Zhang HF. Freeform robotic optical coherence tomography beyond the optical field-of-view limit. bioRxiv [Preprint] 2024:2024.05.21.595073. PMID: 38826217; PMCID: PMC11142137; DOI: 10.1101/2024.05.21.595073
Abstract
Imaging complex, non-planar anatomies with optical coherence tomography (OCT) is limited by the optical field of view (FOV) of a single volumetric acquisition. Combining linear mechanical translation with OCT extends the FOV but is too inflexible for non-planar anatomies. We report freeform robotic OCT to fill this gap. Because robotic movement accuracy is two orders of magnitude worse than OCT imaging resolution, volumetric reconstruction is challenging; we therefore developed a volumetric registration algorithm based on simultaneous localization and mapping (SLAM) to overcome this limitation. As a test, we circumferentially imaged the entire aqueous humor outflow pathway in mice, whose imaging has the potential to customize glaucoma surgeries but is typically constrained by the FOV. We acquired volumetric OCT data at different robotic poses and reconstructed the entire anterior segment of the eye. The reconstructed volumes showed heterogeneous Schlemm's canal (SC) morphology in the anterior segment and revealed a segmental circumferential distribution of collector channels (CC) with spatial features as small as a few micrometers.
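The SLAM-based registration in the paper is far more sophisticated, but its core subproblem (estimating the relative offset between overlapping acquisitions) can be illustrated with a brute-force integer-shift search over synthetic 2-D tiles standing in for en face projections of neighbouring volumes. Everything below is a toy sketch, not the paper's algorithm:

```python
def best_shift(ref, mov, max_shift=5, min_overlap_frac=0.5):
    """Find the integer (dy, dx) translation minimising the mean squared
    difference between two overlapping 2-D tiles. Shifts whose overlap
    region is too small are rejected to avoid spurious matches."""
    rows, cols = len(ref), len(ref[0])
    min_overlap = int(rows * cols * min_overlap_frac)
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0.0, 0
            for i in range(rows):
                ii = i + dy
                if not 0 <= ii < rows:
                    continue
                for j in range(cols):
                    jj = j + dx
                    if 0 <= jj < cols:
                        d = ref[i][j] - mov[ii][jj]
                        err += d * d
                        n += 1
            if n < min_overlap:
                continue  # too little overlap to trust
            err /= n  # normalise by overlap size
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# Synthetic tiles: mov is ref translated by (0, 2) columns.
ref = [[(i * i + 7 * j) % 13 for j in range(8)] for i in range(6)]
mov = [[(i * i + 7 * (j - 2)) % 13 for j in range(8)] for i in range(6)]
shift = best_shift(ref, mov)
```
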
5. Hu Y, Feng Y, Long X, Zheng D, Liu G, Lu Y, Ren Q, Huang Z. Megahertz multi-parametric ophthalmic OCT system for whole eye imaging. Biomed Opt Express 2024;15:3000-3017. PMID: 38855668; PMCID: PMC11161356; DOI: 10.1364/boe.517757
Abstract
An ultrahigh-speed, wide-field OCT system for imaging the anterior segment, the posterior segment, and ocular biometry is crucial for obtaining comprehensive ocular parameters and quantifying the size of ocular pathology. Here, we demonstrate a multi-parametric ophthalmic OCT system with a speed of up to 1 MHz for wide-field imaging of the retina and 50 kHz for anterior chamber imaging and ocular biometric measurement. A spectrum correction algorithm is proposed to ensure accurate pairing of adjacent A-lines, raising the A-scan speed from 500 kHz to 1 MHz for retinal imaging. A registration method employing position feedback signals reduces pixel offsets between forward and reverse galvanometer scanning by a factor of 2.3. Experimental validation on glass sheets and the human eye confirms feasibility and efficacy. We also propose a revised formula to determine the "true" fundus size using axial-length parameters from different fields of view. The efficient algorithms and compact design enhance the system's compatibility with clinical requirements, showing promise for widespread commercialization.
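The abstract does not detail the position-feedback registration; one plausible ingredient is resampling A-lines onto a uniform transverse grid using the measured galvanometer positions, so forward and reverse sweeps land on the same coordinates. A stdlib-only sketch under that assumption (synthetic values; the function name and approach are illustrative, not the paper's implementation):

```python
import bisect

def resample_to_grid(positions, values, grid):
    """Linearly interpolate samples recorded at measured galvanometer
    positions onto a uniform pixel grid. Works for reverse sweeps too,
    since samples are sorted by position first."""
    pairs = sorted(zip(positions, values))
    xs = [p for p, _ in pairs]
    ys = [v for _, v in pairs]
    out = []
    for g in grid:
        if g <= xs[0]:
            out.append(ys[0])      # clamp at the edges
        elif g >= xs[-1]:
            out.append(ys[-1])
        else:
            k = bisect.bisect_right(xs, g)
            t = (g - xs[k - 1]) / (xs[k] - xs[k - 1])
            out.append(ys[k - 1] + t * (ys[k] - ys[k - 1]))
    return out

# A reverse sweep reported in descending position order (synthetic):
positions = [3.0, 2.0, 1.0, 0.0]
values = [30.0, 20.0, 10.0, 0.0]
uniform = resample_to_grid(positions, values, [0.5, 1.5, 2.5])
```
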
Affiliation(s)
- Yicheng Hu: Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
- Yutao Feng: Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; The College of Biochemical Engineering, Beijing Union University, Beijing 100021, China
- Xing Long: Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Dongye Zheng: Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
- Gangjun Liu: Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
- Yanye Lu: Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China
- Qiushi Ren: Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
- Zhiyu Huang: Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen 518071, China
6. Wang Y, Wei S, Zuo R, Kam M, Opfermann JD, Sunmola I, Hsieh MH, Krieger A, Kang JU. Automatic and real-time tissue sensing for autonomous intestinal anastomosis using hybrid MLP-DC-CNN classifier-based optical coherence tomography. Biomed Opt Express 2024;15:2543-2560. PMID: 38633079; PMCID: PMC11019703; DOI: 10.1364/boe.521652
Abstract
Anastomosis is a common and critical part of reconstructive procedures in gastrointestinal, urologic, and gynecologic surgery. Autonomous surgical robots such as the smart tissue autonomous robot (STAR) system have demonstrated improved efficiency and consistency in laparoscopic small bowel anastomosis over the current da Vinci surgical system. However, the STAR workflow requires auxiliary manual monitoring during the suturing procedure to avoid missed or wrong stitches. To eliminate this monitoring task from the operators, we integrated an optical coherence tomography (OCT) fiber sensor with the suture tool and developed an automatic tissue classification algorithm for detecting missed or wrong stitches in real time. The classification results were updated and sent to the control loop of the STAR robot in real time, while a dual-camera system guided the suture tool toward the target; if the tissue inside the tool jaw was inconsistent with the desired suture pattern, a warning message was generated. The proposed hybrid multilayer perceptron dual-channel convolutional neural network (MLP-DC-CNN) platform automatically classifies eight abdominal tissue types that require different suture strategies for anastomosis. The MLP uses ~1955 handcrafted features, including optical properties and morphological features of one-dimensional (1D) OCT A-line signals; the DC-CNN fully exploits intensity-based features and depth-resolved tissue attenuation coefficients. A decision fusion technique leverages the information from both classifiers to further increase accuracy. Evaluated on 69,773 test A-lines, the model classified 1D OCT signals of small bowel in real time with an accuracy of 90.06%, a precision of 88.34%, and a sensitivity of 87.29%. The refresh rate of the displayed A-line signals was 300 Hz, the maximum sensing depth of the fiber was 3.6 mm, and the image processing algorithm ran in ~1.56 s per 1,024 A-lines. The fully automated tissue sensing model outperformed single CNN, MLP, or SVM classifiers with optimized architectures, showing the complementarity of the different feature sets and network architectures in classifying intestinal OCT A-line signals. It can potentially reduce manual involvement in robotic laparoscopic surgery, a crucial step towards a fully autonomous STAR system.
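The abstract names a decision fusion technique without specifying the rule; weighted averaging of the two classifiers' class-probability vectors is one common instantiation. A minimal sketch under that assumption (hypothetical 3-class outputs; the paper classifies eight tissue types):

```python
def fuse_probabilities(p_mlp, p_cnn, w=0.5):
    """Late decision fusion: weighted average of two classifiers'
    probability vectors; returns (fused_probs, predicted_class_index)."""
    fused = [w * a + (1.0 - w) * b for a, b in zip(p_mlp, p_cnn)]
    pred = max(range(len(fused)), key=fused.__getitem__)
    return fused, pred

# Hypothetical per-class probabilities from the two branches:
p_mlp = [0.2, 0.5, 0.3]
p_cnn = [0.1, 0.3, 0.6]
fused, pred = fuse_probabilities(p_mlp, p_cnn)
```

The fusion weight `w` would normally be tuned on validation data rather than fixed at 0.5.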
Affiliation(s)
- Yaning Wang: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Shuwen Wei: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Ruizhi Zuo: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael Kam: Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Justin D. Opfermann: Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Idris Sunmola: Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael H. Hsieh: Division of Urology, Children's National Hospital, 111 Michigan Ave NW, Washington, D.C. 20010, USA
- Axel Krieger: Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Jin U. Kang: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
7. Song A, Lusk JB, Roh KM, Hsu ST, Valikodath NG, Lad EM, Muir KW, Engelhard MM, Limkakeng AT, Izatt JA, McNabb RP, Kuo AN. RobOCTNet: Robotics and Deep Learning for Referable Posterior Segment Pathology Detection in an Emergency Department Population. Transl Vis Sci Technol 2024;13:12. PMID: 38488431; PMCID: PMC10946693; DOI: 10.1167/tvst.13.3.12
Abstract
Purpose: To evaluate the diagnostic performance of a robotically aligned optical coherence tomography (RAOCT) system coupled with a deep learning model in detecting referable posterior segment pathology in OCT images of emergency department patients.
Methods: A deep learning model, RobOCTNet, was trained and internally tested to classify OCT images as referable versus non-referable for ophthalmology consultation. For external testing, emergency department patients with signs or symptoms warranting evaluation of the posterior segment were imaged with RAOCT, and RobOCTNet was used to classify the images. Model performance was evaluated against a reference standard based on clinical diagnosis and retina specialist OCT review.
Results: We included 90,250 OCT images for training and 1489 images for internal testing. RobOCTNet achieved an area under the curve (AUC) of 1.00 (95% confidence interval [CI], 0.99-1.00) for detection of referable posterior segment pathology in the internal test set. For external testing, RAOCT was used to image 72 eyes of 38 emergency department patients. In this set, RobOCTNet had an AUC of 0.91 (95% CI, 0.82-0.97), a sensitivity of 95% (95% CI, 87%-100%), and a specificity of 76% (95% CI, 62%-91%). The model's performance was comparable to that of two human experts.
Conclusions: A robotically aligned OCT system coupled with a deep learning model demonstrated high diagnostic performance in detecting referable posterior segment pathology in a cohort of emergency department patients.
Translational Relevance: Robotically aligned OCT coupled with a deep learning model may have the potential to improve emergency department patient triage for ophthalmology referral.
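The AUC values reported above can be computed directly from classifier scores and labels via the Mann-Whitney U statistic, without constructing the ROC curve explicitly. A minimal sketch with toy scores (not study data):

```python
def auc(scores, labels):
    """Area under the ROC curve as the probability that a random
    positive scores higher than a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy referable-probability scores and ground-truth labels:
scores = [0.9, 0.8, 0.7, 0.4, 0.3]
labels = [1, 1, 0, 1, 0]
area = auc(scores, labels)
```

The quadratic pairwise loop is fine for illustration; a rank-based formulation runs in O(n log n) for large test sets.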
Affiliation(s)
- Ailin Song: Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University, Durham, NC, USA
- Jay B. Lusk: Duke University School of Medicine, Durham, NC, USA
- Kyung-Min Roh: Department of Ophthalmology, Duke University, Durham, NC, USA
- S. Tammy Hsu: Department of Ophthalmology, Duke University, Durham, NC, USA
- Eleonora M. Lad: Department of Ophthalmology, Duke University, Durham, NC, USA
- Kelly W. Muir: Department of Ophthalmology, Duke University, Durham, NC, USA
- Matthew M. Engelhard: Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA
- Joseph A. Izatt: Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Ryan P. McNabb: Department of Ophthalmology, Duke University, Durham, NC, USA
- Anthony N. Kuo: Department of Ophthalmology, Duke University, Durham, NC, USA; Department of Biomedical Engineering, Duke University, Durham, NC, USA
8. Gao Y, Zhang X, Wu D, Wu C, Ren C, Meng T, Ji X. Evaluation of peripapillary retinal nerve fiber layer thickness in intracranial atherosclerotic stenosis. BMC Ophthalmol 2023;23:455. PMID: 37957614; PMCID: PMC10641930; DOI: 10.1186/s12886-023-03196-6
Abstract
PURPOSE: To evaluate peripapillary retinal nerve fiber layer (pRNFL) thickness in patients with intracranial atherosclerotic stenosis (ICAS).
METHODS: A cross-sectional study was performed in a general hospital. ICAS was evaluated by digital subtraction angiography (DSA), computed tomography angiography (CTA), or magnetic resonance angiography (MRA). High-definition optical coherence tomography (HD-OCT) was used to measure pRNFL thickness.
RESULTS: A total of 102 patients, including 59 (57.8%) with ICAS and 43 (42.2%) without ICAS, were analysed. Compared with patients without ICAS, pRNFL thickness was significantly reduced in the average, superior, and inferior quadrants of the ipsilateral eyes and in the superior quadrant of the contralateral eyes. After multivariate analysis, only the superior pRNFL thickness in the ipsilateral eyes remained significantly associated with ICAS (OR, 0.968; 95% CI, 0.946-0.991; p = 0.006). The area under the receiver operating characteristic curve for identifying the presence of ICAS was 0.679 (95% CI, 0.576-0.782). At a cut-off value of 109.5 μm for the superior pRNFL, sensitivity and specificity were 50.8% and 83.7%, respectively.
CONCLUSION: The superior pRNFL thickness in the ipsilateral eye was significantly associated with ICAS in this study. Larger studies are needed to further explore the relation between pRNFL and ICAS.
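The cut-off analysis above reduces to counting true and false positives and negatives at the threshold. A sketch with synthetic thickness values (thinner superior pRNFL treated as test-positive, consistent with the odds ratio below 1; none of these numbers are study data):

```python
def sens_spec(values, has_icas, cutoff):
    """Sensitivity and specificity of a 'thickness below cutoff' test,
    where thinner pRNFL counts as test-positive for ICAS."""
    tp = sum(1 for v, y in zip(values, has_icas) if y and v < cutoff)
    fn = sum(1 for v, y in zip(values, has_icas) if y and v >= cutoff)
    tn = sum(1 for v, y in zip(values, has_icas) if not y and v >= cutoff)
    fp = sum(1 for v, y in zip(values, has_icas) if not y and v < cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic superior pRNFL thicknesses in micrometres:
values = [95, 105, 120, 130, 100, 115]
has_icas = [True, True, True, False, False, False]
sens, spec = sens_spec(values, has_icas, cutoff=109.5)
```
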
Affiliation(s)
- Yuan Gao: Department of Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, 100191, Beijing, China; Department of Ophthalmology, Xuanwu Hospital, Capital Medical University, 100053, Beijing, China
- Xuxiang Zhang: Department of Ophthalmology, Xuanwu Hospital, Capital Medical University, 100053, Beijing, China
- Di Wu: China-America Institute of Neuroscience, Xuanwu Hospital, Capital Medical University, 100053, Beijing, China
- Chuanjie Wu: Department of Neurology, Xuanwu Hospital, Capital Medical University, 100053, Beijing, China
- Changhong Ren: Beijing Key Laboratory of Hypoxic Conditioning Translational Medicine, Xuanwu Hospital, Capital Medical University, 100053, Beijing, China
- Tingting Meng: Department of Ophthalmology, Xuanwu Hospital, Capital Medical University, 100053, Beijing, China
- Xunming Ji: Department of Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, 100191, Beijing, China; Department of Neurology, Xuanwu Hospital, Capital Medical University, 100053, Beijing, China
9. Ma X, Moradi M, Ma X, Tang Q, Levi M, Chen Y, Zhang HK. Large Area Kidney Imaging for Pre-transplant Evaluation using Real-Time Robotic Optical Coherence Tomography. Research Square [Preprint] 2023:rs.3.rs-3385622. PMID: 37886456; PMCID: PMC10602184; DOI: 10.21203/rs.3.rs-3385622/v1
Abstract
Optical coherence tomography (OCT) is a high-resolution imaging modality that can be used to image microstructures of human kidneys, and these images can be analyzed to evaluate the viability of the organ for transplantation. However, current OCT devices suffer from an insufficient field-of-view, leading to biased examination outcomes when only small portions of the kidney can be assessed. Here we present a robotic OCT system in which an OCT probe is integrated with a robotic manipulator, enabling wider-area, spatially resolved imaging. With the proposed system, it becomes possible to comprehensively scan the kidney surface and provide large-area parameterization of the microstructures. We verified the probe tracking accuracy with a phantom as 0.0762 ± 0.0727 mm and demonstrated clinical feasibility by scanning ex vivo kidneys. The parametric map exhibits fine vasculature beneath the kidney surface, and quantitative analysis of the proximal convoluted tubule from an ex vivo human kidney yields highly clinically relevant information.
Affiliation(s)
- Xihan Ma: Department of Robotics Engineering, Worcester Polytechnic Institute, MA 01609, USA
- Mousa Moradi: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA 01003, USA
- Xiaoyu Ma: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA 01003, USA
- Qinggong Tang: The Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Moshe Levi: Department of Biochemistry and Molecular & Cellular Biology, Georgetown University, Washington, DC 20057, USA
- Yu Chen: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA 01003, USA
- Haichong K Zhang: Department of Robotics Engineering, Worcester Polytechnic Institute, MA 01609, USA; Department of Biomedical Engineering, Worcester Polytechnic Institute, MA 01609, USA
10. Brett J. Painting unknown worlds. Eye (Lond) 2023;37:2886-2895. PMID: 37330607; PMCID: PMC10516968; DOI: 10.1038/s41433-023-02609-6
Abstract
This paper sets out to discover more about the artist known as 'Tarrant', whose ophthalmic paintings have regularly featured in ophthalmic textbooks over the past 50 years. Through a series of telephone calls, I spoke to Tarrant about his life and work while researching the origins of ophthalmic illustration and charting the story behind this art movement. The paper also explores the eventual decline of retinal painting and the emergence of photography, concluding that, with the continuing advance of technology, the ophthalmic photographer may eventually succumb to the same fate as the artist.
Affiliation(s)
- Jonathan Brett: Eye Research Group Oxford, Oxford Eye Hospital, John Radcliffe Hospital, Oxford, UK
11. He B, Zhang Y, Zhao L, Sun Z, Hu X, Kang Y, Wang L, Li Z, Huang W, Li Z, Xing G, Hua F, Wang C, Xue P, Zhang N. Robotic-OCT guided inspection and microsurgery of monolithic storage devices. Nat Commun 2023;14:5701. PMID: 37709753; PMCID: PMC10502073; DOI: 10.1038/s41467-023-41498-x
Abstract
Data recovery from monolithic storage devices (MSDs) is in high demand for legal and business purposes. However, conventional data recovery methods are destructive, complicated, and time-consuming. We developed a robotic-arm-assisted optical coherence tomography (robotic-OCT) system for non-destructive inspection of MSDs, offering ~7 μm lateral resolution, ~4 μm axial resolution, and an adjustable field-of-view to accommodate various MSD sizes. Using a continuous scanning strategy, robotic-OCT achieves automated volumetric imaging of a micro-SD card in ~37 seconds, significantly faster than traditional stop-and-stare scanning, which typically takes tens of minutes. We also demonstrate robotic-OCT-guided laser ablation as a microsurgical tool for targeted area removal with a precision of ±10 μm and an accuracy of ~50 μm, eliminating both the need to remove the entire insulating layer and the need for operator intervention, thus greatly improving data recovery efficiency. This work has diverse potential applications in digital forensics, failure analysis, materials testing, and quality control.
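The abstract distinguishes precision (±10 μm) from accuracy (~50 μm). One conventional reading: accuracy is the mean distance from each ablation site to its target, and precision is the spread of those errors around their mean. A sketch with synthetic coordinates under that assumed definition (the paper's exact definitions may differ):

```python
import math

def precision_accuracy(targets, hits):
    """Accuracy: mean radial distance from each ablation site to its
    target. Precision: population std of those radial errors."""
    errs = [math.dist(t, h) for t, h in zip(targets, hits)]
    n = len(errs)
    accuracy = sum(errs) / n
    precision = math.sqrt(sum((e - accuracy) ** 2 for e in errs) / n)
    return precision, accuracy

# Synthetic target/hit coordinates in millimetres:
targets = [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
hits = [(0.05, 0.0), (0.04, 0.0), (0.06, 0.0)]
precision, accuracy = precision_accuracy(targets, hits)
```

Here the sites cluster tightly (high precision) around a point offset from the target (limited accuracy), mirroring how the two figures can differ.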
Affiliation(s)
- Bin He: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China; State Key Laboratory of Low-dimensional Quantum Physics and Department of Physics, Tsinghua University and Beijing Advanced Innovation Center for Structural Biology, 100084, Beijing, China
- Yuxin Zhang: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China; State Key Laboratory of Low-dimensional Quantum Physics and Department of Physics, Tsinghua University and Beijing Advanced Innovation Center for Structural Biology, 100084, Beijing, China
- Lu Zhao: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
- Zhenwen Sun: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
- Xiyuan Hu: School of Computer Science and Engineering, Nanjing University of Science and Technology, 210094, Nanjing, China
- Yanrong Kang: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
- Lei Wang: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
- Zhihui Li: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
- Wei Huang: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
- Zhigang Li: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
- Guidong Xing: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
- Feng Hua: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
- Chengming Wang: State Key Laboratory of Low-dimensional Quantum Physics and Department of Physics, Tsinghua University and Beijing Advanced Innovation Center for Structural Biology, 100084, Beijing, China
- Ping Xue: State Key Laboratory of Low-dimensional Quantum Physics and Department of Physics, Tsinghua University and Beijing Advanced Innovation Center for Structural Biology, 100084, Beijing, China
- Ning Zhang: Institute of Forensic Science, Ministry of Public Security, 100038, Beijing, China
| |
Collapse
|
12
|
Zhang H, Yang J, Zheng C, Zhao S, Zhang A. Annotation-efficient learning for OCT segmentation. BIOMEDICAL OPTICS EXPRESS 2023; 14:3294-3307. [PMID: 37497504 PMCID: PMC10368022 DOI: 10.1364/boe.486276] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 04/29/2023] [Accepted: 05/26/2023] [Indexed: 07/28/2023]
Abstract
Deep learning has been successfully applied to OCT segmentation. However, for data from different manufacturers and imaging protocols, and for different regions of interest (ROIs), it requires laborious and time-consuming data annotation and training, which is undesirable in many scenarios, such as surgical navigation and multi-center clinical trials. Here we propose an annotation-efficient learning method for OCT segmentation that can significantly reduce annotation costs. Leveraging self-supervised generative learning, we train a Transformer-based model to learn the OCT imagery. We then connect the trained Transformer-based encoder to a CNN-based decoder to learn the dense pixel-wise prediction in OCT segmentation. These training phases use open-access data and thus incur no annotation costs, and the pre-trained model can be adapted to different data and ROIs without re-training. Based on the greedy approximation to the k-center problem, we also introduce an algorithm for selective annotation of the target data. We verified our method on publicly available and private OCT datasets. Compared to the widely used U-Net model with 100% of the training data, our method requires only ∼10% of the data to achieve the same segmentation accuracy, and it speeds up training by up to ∼3.5 times. Furthermore, our proposed method outperforms other potential strategies for improving annotation efficiency. We believe this emphasis on learning efficiency may help improve the intelligence and application penetration of OCT-based technologies.
Collapse
Affiliation(s)
- Haoran Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Jianlong Yang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Shiqing Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Aili Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
13
|
Song A, Roh KM, Lusk JB, Valikodath NG, Lad EM, Draelos M, Ortiz P, Theophanous RG, Limkakeng AT, Izatt JA, McNabb RP, Kuo AN. Robotic Optical Coherence Tomography Retinal Imaging for Emergency Department Patients: A Pilot Study for Emergency Physicians' Diagnostic Performance. Ann Emerg Med 2023; 81:501-508. [PMID: 36669908 PMCID: PMC10038849 DOI: 10.1016/j.annemergmed.2022.10.016] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Revised: 10/05/2022] [Accepted: 10/11/2022] [Indexed: 01/20/2023]
Abstract
STUDY OBJECTIVE To evaluate the diagnostic performance of emergency physicians' interpretation of robotically acquired retinal optical coherence tomography images for detecting posterior eye abnormalities in patients seen in the emergency department (ED). METHODS Adult patients presenting to the Duke University Hospital ED from November 2020 through October 2021 with acute visual changes, headache, or focal neurologic deficit(s) who received an ophthalmology consultation were enrolled in this pilot study. Emergency physicians provided standard clinical care, including direct ophthalmoscopy, at their discretion. Retinal optical coherence tomography images of these patients were obtained with a robotic, semi-autonomous optical coherence tomography system. We compared the detection of abnormalities in optical coherence tomography images by emergency physicians with a reference standard: a combination of the ophthalmology consultation diagnosis and a retina specialist's optical coherence tomography review. RESULTS Nine emergency physicians reviewed the optical coherence tomography images of 72 eyes from 38 patients. Based on the reference standard, 33 (46%) eyes were normal, 16 (22%) had at least 1 urgent/emergency abnormality, and the remaining 23 (32%) had at least 1 nonurgent abnormality. Emergency physicians' optical coherence tomography interpretation had 69% (95% confidence interval [CI], 49% to 89%) sensitivity for any abnormality, 100% (95% CI, 79% to 100%) sensitivity for urgent/emergency abnormalities, 48% (95% CI, 28% to 68%) sensitivity for nonurgent abnormalities, and 64% (95% CI, 44% to 84%) overall specificity. In contrast, emergency physicians providing standard clinical care did not detect any abnormality with direct ophthalmoscopy. CONCLUSION Robotic, semi-autonomous optical coherence tomography enabled ocular imaging of emergency department patients with a broad range of posterior eye abnormalities. In addition, emergency physicians' optical coherence tomography interpretation was more sensitive than direct ophthalmoscopy for any abnormalities, urgent/emergency abnormalities, and nonurgent abnormalities in this pilot study with a small sample of patients and emergency physicians.
Collapse
Affiliation(s)
- Ailin Song
- Duke University School of Medicine, Durham, NC
| | - Kyung-Min Roh
- Department of Ophthalmology, Duke University, Durham, NC
| | - Jay B Lusk
- Duke University School of Medicine, Durham, NC
| | | | - Eleonora M Lad
- Department of Ophthalmology, Duke University, Durham, NC
| | - Mark Draelos
- Department of Biomedical Engineering, Duke University, Durham, NC
| | - Pablo Ortiz
- Department of Biomedical Engineering, Duke University, Durham, NC
| | | | | | - Joseph A Izatt
- Department of Biomedical Engineering, Duke University, Durham, NC
| | - Ryan P McNabb
- Department of Ophthalmology, Duke University, Durham, NC
| | - Anthony N Kuo
- Department of Ophthalmology, Duke University, Durham, NC; Department of Biomedical Engineering, Duke University, Durham, NC.
| |
Collapse
|
14
|
Draelos M, Ortiz P, Narawane A, McNabb RP, Kuo AN, Izatt JA. Robotic Optical Coherence Tomography of Human Subjects with Posture-Invariant Head and Eye Alignment in Six Degrees of Freedom. ... INTERNATIONAL SYMPOSIUM ON MEDICAL ROBOTICS. INTERNATIONAL SYMPOSIUM ON MEDICAL ROBOTICS 2023; 2023:10.1109/ismr57123.2023.10130250. [PMID: 39092148 PMCID: PMC11293772 DOI: 10.1109/ismr57123.2023.10130250] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/04/2024]
Abstract
Ophthalmic optical coherence tomography (OCT) has achieved remarkable clinical success but remains sequestered in ophthalmology specialty offices. Recently introduced robotic OCT systems seek to expand patient access but fall short of their full potential due to significant imaging workspace and motion planning restrictions. Here, we present a next-generation robotic OCT system capable of imaging in any head orientation or posture that is mechanically reachable. This system overcomes prior restrictions by eliminating fixed-base tracking components, extending robot reach, and planning alignment in six degrees of freedom. With this robotic system, we show repeatable subject imaging independent of posture (standing, seated, reclined, and supine) under widely varying head orientations for multiple human subjects. For each subject, we obtained a consistent view of the retina, including the fovea, retinal vasculature, and edge of the optic nerve head. We believe this robotic approach can extend OCT as an eye disease screening, diagnosis, and monitoring tool to previously unreached patient populations.
Collapse
Affiliation(s)
- Mark Draelos
- Departments of Robotics and Ophthalmology, University of Michigan, 2505 Hayward St, Ann Arbor, MI USA
| | - Pablo Ortiz
- Department of Biomedical Engineering, Duke University, 101 Science Dr, Durham, NC USA
| | - Amit Narawane
- Department of Biomedical Engineering, Duke University, 101 Science Dr, Durham, NC USA
| | - Ryan P McNabb
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Rd, Durham, NC USA
| | - Anthony N Kuo
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Rd, Durham, NC USA
- Department of Biomedical Engineering, Duke University, 101 Science Dr, Durham, NC USA
| | - Joseph A Izatt
- Department of Biomedical Engineering, Duke University, 101 Science Dr, Durham, NC USA
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Rd, Durham, NC USA
| |
Collapse
|
15
|
McNabb R, Ortiz P, Roh KM, Song A, Draelos M, Schuman S, Jaffe G, Lad E, Izatt J, Kuo A. Contactless, autonomous robotic alignment of optical coherence tomography for in vivo evaluation of diseased retinas. RESEARCH SQUARE 2023:rs.3.rs-2371365. [PMID: 36711930 PMCID: PMC9882601 DOI: 10.21203/rs.3.rs-2371365/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
During the COVID-19 pandemic, an emphasis was placed on contactless care, physical distancing, and improved telehealth; in contrast, standard-of-care ophthalmic imaging still required trained personnel to be present with the patient. Here, we introduce contactless, autonomous robotic alignment of optical coherence tomography (RAOCT) for in vivo imaging of retinal disease and compare measured retinal thickness and diagnostic readability against technician-operated clinical OCT. In a powered study, we found no statistically significant difference in retinal thickness between RAOCT and clinical OCT in either healthy or diseased retinas (p > 0.7) or across a variety of demographics (gender, race, and age). In a secondary study, a retina specialist labeled each volume as normal or abnormal. Compared with the clinical diagnostic label, sensitivity and specificity for RAOCT were equal to or better than those of clinical OCT. Contactless, autonomous RAOCT, which improves upon current clinical OCT, could play a role both in ophthalmic care and in non-ophthalmic settings that would benefit from improved eye care.
Collapse
|
16
|
Ni S, Khan S, Nguyen TTP, Ng R, Lujan BJ, Tan O, Huang D, Jian Y. Volumetric directional optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2022; 13:950-961. [PMID: 35284155 PMCID: PMC8884206 DOI: 10.1364/boe.447882] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 01/14/2022] [Accepted: 01/14/2022] [Indexed: 06/14/2023]
Abstract
Photoreceptor loss and the resultant thinning of the outer nuclear layer (ONL) is an important pathological feature of retinal degenerations and may serve as a useful imaging biomarker for age-related macular degeneration. However, the demarcation between the ONL and the adjacent Henle's fiber layer (HFL) is difficult to visualize with standard optical coherence tomography (OCT). A dedicated OCT system that can precisely control, and continuously and synchronously update, the imaging beam entry points during scanning has not yet been realized. In this paper, we introduce a novel imaging technology, Volumetric Directional OCT (VD-OCT), which dynamically adjusts the position of the incident beam on the pupil, without manual intervention, during a volumetric OCT scan. We also implement a customized spoke-circular scanning pattern to observe the appearance of the HFL with sufficient optical contrast in continuous cross-sectional scans through the entire volume. Applying VD-OCT to retinal imaging to exploit the directional reflectivity of tissue layers has the potential to enable early identification of retinal diseases.
Collapse
Affiliation(s)
- Shuibin Ni
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon 97239, USA
| | - Shanjida Khan
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon 97239, USA
| | - Thanh-Tin P. Nguyen
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
| | - Ringo Ng
- School of Engineering Science, Simon Fraser University, Burnaby, British Columbia V5A 1S6, Canada
| | - Brandon J. Lujan
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
| | - Ou Tan
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
| | - David Huang
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon 97239, USA
| | - Yifan Jian
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon 97239, USA
| |
Collapse
|
17
|
Ortiz P, Draelos M, Viehland C, Qian R, McNabb RP, Kuo AN, Izatt JA. Robotically aligned optical coherence tomography with 5 degree of freedom eye tracking for subject motion and gaze compensation. BIOMEDICAL OPTICS EXPRESS 2021; 12:7361-7376. [PMID: 35003839 PMCID: PMC8713666 DOI: 10.1364/boe.443537] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Revised: 10/21/2021] [Accepted: 10/26/2021] [Indexed: 05/04/2023]
Abstract
Optical coherence tomography (OCT) has revolutionized diagnostics in ophthalmology. However, OCT requires a trained operator and patient cooperation to carefully align a scanner with the subject's eye and orient it in such a way that it images a desired region of interest at the retina. With the goal of automating this process of orienting and aligning the scanner, we developed a robot-mounted OCT scanner that automatically aligned with the pupil while matching its optical axis with the target region of interest at the retina. The system used two 3D cameras for face tracking and three high-resolution 2D cameras for pupil and gaze tracking. The tracking software identified 5 degrees of freedom for robot alignment and ray aiming through the ocular pupil: 3 degrees of translation (x, y, z) and 2 degrees of orientation (yaw, pitch). We evaluated the accuracy, precision, and range of our tracking system and demonstrated imaging performance on free-standing human subjects. Our results demonstrate that the system stabilized images and that the addition of gaze tracking and aiming allowed for region-of-interest specific alignment at any gaze orientation within a 28° range.
Collapse
Affiliation(s)
- Pablo Ortiz
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Mark Draelos
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Christian Viehland
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Ruobing Qian
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Ryan P. McNabb
- Department of Ophthalmology, Duke University, Durham, NC 27708, USA
| | - Anthony N. Kuo
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University, Durham, NC 27708, USA
| | - Joseph A. Izatt
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University, Durham, NC 27708, USA
| |
Collapse
|