1. Moradi M, Hashemabad SK, Vu DM, Soneru AR, Fujita A, Wang M, Elze T, Eslami M, Zebardast N. PyGlaucoMetrics: A Stacked Weight-Based Machine Learning Approach for Glaucoma Detection Using Visual Field Data. Medicina (Kaunas). 2025;61:541. PMID: 40142352; PMCID: PMC11944261; DOI: 10.3390/medicina61030541.
Abstract
Background and Objectives: Glaucoma (GL) classification is crucial for early diagnosis and treatment, yet relying solely on stand-alone models or International Classification of Diseases (ICD) codes is insufficient due to limited predictive power and inconsistencies in clinical labeling. This study aims to improve GL classification using stacked weight-based machine learning models. Materials and Methods: We analyzed a subset of 33,636 participants (58% female) with 340,444 visual fields (VFs) from the Mass Eye and Ear (MEE) dataset. Five clinically relevant GL detection models (LoGTS, UKGTS, Kang, HAP2_part1, and Foster) were selected to serve as base models. Two multi-layer perceptron (MLP) models were trained using 52 total deviation (TD) and pattern deviation (PD) values from Humphrey field analyzer (HFA) 24-2 VF tests, along with four clinical variables (age, gender, follow-up time, and race), to extract model weights. These weights were then used to train three meta-learners, namely logistic regression (LR), extreme gradient boosting (XGB), and an MLP, to classify cases as GL or non-GL. Results: The MLP meta-learner achieved the highest performance, with an accuracy of 96.43%, an F-score of 96.01%, and an AUC of 97.96%, while also demonstrating the lowest prediction uncertainty (0.08 ± 0.13). XGB followed with 92.86% accuracy, a 92.31% F-score, and a 96.10% AUC. LR had the lowest performance, with 89.29% accuracy, an 86.96% F-score, and a 94.81% AUC, as well as the highest uncertainty (0.58 ± 0.07). Permutation importance analysis revealed that the superior temporal sector was the most influential VF feature, with importance scores of 0.08 in the Kang model and 0.04 in the HAP2_part1 model. Among clinical variables, age was the strongest contributor (score = 0.3). Conclusions: The meta-learner outperformed stand-alone models in GL classification, achieving an accuracy improvement of 8.92% over the best-performing stand-alone model (LoGTS, 87.51%), offering a valuable tool for automated glaucoma detection.
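As a rough illustration of the stacking scheme this abstract describes, the minimal sketch below trains two stand-in meta-learners (LR and an MLP; XGB is omitted to avoid an extra dependency) on synthetic base-model scores plus clinical covariates. It simplifies the paper's weight-extraction step to plain probability-level stacking; every name, shape, and value here is an assumption, not the authors' code.

```python
# Hedged sketch of stacking: base-model outputs become meta-learner features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n_cases = 1000
# Columns 0-4: stand-ins for base-model scores (LoGTS, UKGTS, Kang, HAP2_part1, Foster);
# columns 5-8: stand-ins for the clinical covariates (age, gender, follow-up time, race).
X_meta = rng.random((n_cases, 9))
y = (X_meta[:, :5].mean(axis=1) > 0.5).astype(int)  # synthetic GL / non-GL labels

X_tr, X_te, y_tr, y_te = train_test_split(X_meta, y, test_size=0.2, random_state=0)

meta_learners = {
    "LR": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
}
for name, model in meta_learners.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]  # predicted probability of GL
    print(f"{name}: acc={accuracy_score(y_te, proba > 0.5):.3f}, "
          f"auc={roc_auc_score(y_te, proba):.3f}")
```

In practice the meta-features would come from held-out predictions of the five criteria models rather than random numbers, so the meta-learner learns how much to trust each base model.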
Affiliation(s)
- Mousa Moradi: Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Saber Kazeminasab Hashemabad: Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Daniel M. Vu: Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Allison R. Soneru: Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Asahi Fujita: Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Mengyu Wang: Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Tobias Elze: Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Mohammad Eslami: Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
- Nazlee Zebardast: Massachusetts Eye and Ear, Harvard Medical School, Boston, MA 02114, USA
2. Ma X, Moradi M, Ma X, Tang Q, Levi M, Chen Y, Zhang HK. Large area kidney imaging for pre-transplant evaluation using real-time robotic optical coherence tomography. Communications Engineering. 2024;3:122. PMID: 39223332; PMCID: PMC11368928; DOI: 10.1038/s44172-024-00264-7.
Abstract
Optical coherence tomography (OCT) can be used to image the microstructure of human kidneys. However, current OCT probes have an inadequate field of view, which can bias kidney assessment. Here we present a robotic OCT system in which the probe is integrated with a robot manipulator, enabling spatially resolved imaging over a wider area (106.39 mm by 37.70 mm). Our system comprehensively scans the kidney surface at the optimal altitude using preoperative path planning and an OCT image-based feedback control scheme, and it parameterizes and visualizes the microstructure of the scanned area. We verified the system's positioning accuracy on a phantom as 0.0762 ± 0.0727 mm and showed clinical feasibility by scanning ex vivo kidneys. The parameterization reveals vasculature beneath the kidney surface, and quantification of the proximal convoluted tubules of a human kidney yields clinically relevant information. The system promises to assess kidney viability for transplantation once large-scale whole-organ parameterization and patient-outcome data have been collected.
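The "optimal altitude" scanning in this abstract relies on OCT image-based feedback. Below is a minimal sketch of that idea under invented conventions: the axial pixel pitch, target standoff, proportional gain, and sign convention are all assumptions, not the authors' controller.

```python
# Sketch: estimate tissue surface depth from an OCT A-scan and compute a
# proportional altitude correction for the probe-carrying manipulator.
import numpy as np

PIXEL_SIZE_MM = 0.01    # assumed axial pixel pitch of the A-scan
TARGET_DEPTH_MM = 1.0   # assumed desired surface position within the imaging window
KP = 0.5                # assumed proportional gain

def surface_depth_mm(ascan: np.ndarray, threshold: float = 0.5) -> float:
    """Estimate the tissue surface as the first axial pixel above an intensity threshold."""
    first_hit = int(np.argmax(ascan > threshold))  # 0 if nothing crosses the threshold
    return first_hit * PIXEL_SIZE_MM

def altitude_correction_mm(ascan: np.ndarray) -> float:
    """Proportional correction nudging the probe toward the target standoff
    (negative = move toward the tissue, under this sketch's sign convention)."""
    error = surface_depth_mm(ascan) - TARGET_DEPTH_MM
    return -KP * error

# Synthetic A-scan with the surface at pixel 130 (1.3 mm): the probe is too far
# from the tissue, so the sketch commands a move of -0.15 mm (toward the surface).
ascan = np.zeros(512)
ascan[130:] = 1.0
print(f"dz = {altitude_correction_mm(ascan):+.3f} mm")
```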
Affiliation(s)
- Xihan Ma: Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
- Mousa Moradi: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, USA
- Xiaoyu Ma: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, USA
- Qinggong Tang: The Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK, USA
- Moshe Levi: Department of Biochemistry and Molecular & Cellular Biology, Georgetown University, Washington, DC, USA
- Yu Chen: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, USA; College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, Fujian, PR China
- Haichong K. Zhang: Department of Robotics Engineering, Worcester Polytechnic Institute, Worcester, MA, USA; Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, USA
3. Wang Y, Wei S, Zuo R, Kam M, Opfermann JD, Sunmola I, Hsieh MH, Krieger A, Kang JU. Automatic and real-time tissue sensing for autonomous intestinal anastomosis using hybrid MLP-DC-CNN classifier-based optical coherence tomography. Biomedical Optics Express. 2024;15:2543-2560. PMID: 38633079; PMCID: PMC11019703; DOI: 10.1364/boe.521652.
Abstract
Anastomosis is a common and critical part of reconstructive procedures within gastrointestinal, urologic, and gynecologic surgery. The use of autonomous surgical robots such as the smart tissue autonomous robot (STAR) system demonstrates improved efficiency and consistency in laparoscopic small bowel anastomosis compared with the current da Vinci surgical system. However, the STAR workflow requires auxiliary manual monitoring during the suturing procedure to avoid missed or wrong stitches. To eliminate this monitoring task from the operators, we integrated an optical coherence tomography (OCT) fiber sensor with the suture tool and developed an automatic tissue classification algorithm for detecting missed or wrong stitches in real time. The classification results were updated and sent to the control loop of the STAR robot in real time. The suture tool was guided to approach the target by a dual-camera system; if the tissue inside the tool jaw was inconsistent with the desired suture pattern, a warning message was generated. The proposed hybrid multilayer perceptron dual-channel convolutional neural network (MLP-DC-CNN) classification platform can automatically classify eight different abdominal tissue types that require different suture strategies for anastomosis. In the MLP, numerous handcrafted features (∼1955) were utilized, including optical properties and morphological features of one-dimensional (1D) OCT A-line signals. In the DC-CNN, intensity-based features and depth-resolved tissue attenuation coefficients were fully exploited. A decision fusion technique was applied to leverage the information collected from both classifiers and further increase the accuracy. The algorithm was evaluated on 69,773 test A-lines. The results showed that our model can classify the 1D OCT signals of small bowels in real time with an accuracy of 90.06%, a precision of 88.34%, and a sensitivity of 87.29%. The refresh rate of the displayed A-line signals was set to 300 Hz, the maximum sensing depth of the fiber was 3.6 mm, and the running time of the image processing algorithm was ∼1.56 s for 1,024 A-lines. The proposed fully automated tissue sensing model outperformed single classifiers (CNN, MLP, or SVM) with optimized architectures, showing the complementarity of different feature sets and network architectures in classifying intestinal OCT A-line signals. It can potentially reduce the manual involvement in robotic laparoscopic surgery, which is a crucial step towards a fully autonomous STAR system.
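The decision-fusion step described here can be illustrated with a simple weighted soft-voting rule over the two classifiers' per-class probabilities. The weighting and the synthetic probability matrices below are assumptions for illustration, not the paper's fusion rule.

```python
# Sketch of probability-level decision fusion between two tissue classifiers.
import numpy as np

def fuse(p_mlp: np.ndarray, p_cnn: np.ndarray, w_mlp: float = 0.5) -> np.ndarray:
    """Weighted soft voting over per-class probabilities, shape (n_samples, n_classes)."""
    fused = w_mlp * p_mlp + (1.0 - w_mlp) * p_cnn
    return fused / fused.sum(axis=1, keepdims=True)  # renormalize for safety

# Toy inputs: 4 A-lines, 8 classes (mirroring the paper's eight abdominal tissue types).
rng = np.random.default_rng(1)
p_mlp = rng.dirichlet(np.ones(8), size=4)  # stand-in for the feature-based MLP's output
p_cnn = rng.dirichlet(np.ones(8), size=4)  # stand-in for the dual-channel CNN's output
print(fuse(p_mlp, p_cnn, w_mlp=0.6).argmax(axis=1))  # fused tissue-class decisions
```

Soft voting lets one classifier's confident prediction override the other's uncertain one, which is what makes complementary feature sets pay off.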
Affiliation(s)
- Yaning Wang: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Shuwen Wei: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Ruizhi Zuo: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael Kam: Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Justin D. Opfermann: Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Idris Sunmola: Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Michael H. Hsieh: Division of Urology, Children's National Hospital, 111 Michigan Ave NW, Washington, D.C. 20010, USA
- Axel Krieger: Department of Mechanical Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
- Jin U. Kang: Department of Electrical and Computer Engineering, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218, USA
4. Lim EJ, Yen J, Fong KY, Tiong HY, Aslim EJ, Ng LG, Castellani D, Borgheresi A, Agostini A, Somani BK, Gauhar V, Gan VHL. Radiomics in Kidney Transplantation: A Scoping Review of Current Applications, Limitations, and Future Directions. Transplantation. 2024;108:643-653. PMID: 37389652; DOI: 10.1097/tp.0000000000004711.
Abstract
Radiomics is increasingly applied to the diagnosis, management, and outcome prediction of various urological conditions. The purpose of this scoping review is to evaluate the current evidence on the application of radiomics in kidney transplantation, especially its utility in diagnostics and therapeutics. An electronic literature search on radiomics in the setting of transplantation was conducted in PubMed, EMBASE, and Scopus from inception to September 23, 2022. A total of 16 studies were included. The most widely studied clinical utility of radiomics in kidney transplantation is as an adjunct to diagnosing rejection, potentially reducing the need for unnecessary biopsies or guiding decisions for earlier biopsies to optimize graft survival. Optical coherence tomography, for example, is a noninvasive technique that produces high-resolution optical cross-sectional images of the kidney cortex in situ and in real time; it can provide histopathological information on donor kidney candidates and help predict posttransplant function. This review shows that, although radiomics in kidney transplantation is still in its infancy, it has the potential for large-scale implementation. Its greatest potential lies in correlating with conventional, established diagnostic evaluations for living donors and in predicting and detecting rejection postoperatively.
Affiliation(s)
- Ee Jean Lim: Department of Urology, Singapore General Hospital, Singapore
- Jie Yen: Department of Urology, Singapore General Hospital, Singapore
- Khi Yung Fong: Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ho Yee Tiong: Department of Urology, National University Hospital, Singapore
- Lay Guat Ng: Department of Urology, Singapore General Hospital, Singapore
- Daniele Castellani: Urology Unit, Azienda Ospedaliero Universitaria delle Marche, Università Politecnica delle Marche, Ancona, Italy
- Alessandra Borgheresi: Department of Clinical, Special and Dental Sciences, Università Politecnica delle Marche, Ancona, Italy; Department of Radiology, University Hospital "Azienda Ospedaliera Universitaria delle Marche," Ancona, Italy
- Andrea Agostini: Department of Clinical, Special and Dental Sciences, Università Politecnica delle Marche, Ancona, Italy; Department of Radiology, University Hospital "Azienda Ospedaliera Universitaria delle Marche," Ancona, Italy
- Bhaskar Kumar Somani: Department of Urology, University Hospital Southampton NHS Foundation Trust, Southampton, United Kingdom
- Vineet Gauhar: Department of Urology, Ng Teng Fong Hospital, Singapore
- Valerie Huei Li Gan: Department of Urology, Singapore General Hospital, Singapore; SingHealth Duke-NUS Transplant Centre, Singapore
5. Ma X, Moradi M, Ma X, Tang Q, Levi M, Chen Y, Zhang HK. Large Area Kidney Imaging for Pre-transplant Evaluation using Real-Time Robotic Optical Coherence Tomography. Research Square [preprint]. 2023:rs.3.rs-3385622. PMID: 37886456; PMCID: PMC10602184; DOI: 10.21203/rs.3.rs-3385622/v1.
Abstract
Optical coherence tomography (OCT) is a high-resolution imaging modality that can be used to image the microstructure of human kidneys, and these images can be analyzed to evaluate the viability of the organ for transplantation. However, current OCT devices suffer from an insufficient field of view, leading to biased examination outcomes when only small portions of the kidney can be assessed. Here we present a robotic OCT system in which an OCT probe is integrated with a robotic manipulator, enabling wider-area, spatially resolved imaging. With the proposed system, it becomes possible to comprehensively scan the kidney surface and provide large-area parameterization of the microstructure. We verified the probe tracking accuracy on a phantom as 0.0762 ± 0.0727 mm and demonstrated clinical feasibility by scanning ex vivo kidneys. The parametric map reveals fine vasculature beneath the kidney surface, and quantitative analysis of the proximal convoluted tubules from an ex vivo human kidney yields highly clinically relevant information.
Affiliation(s)
- Xihan Ma: Department of Robotics Engineering, Worcester Polytechnic Institute, MA 01609, USA
- Mousa Moradi: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA 01003, USA
- Xiaoyu Ma: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA 01003, USA
- Qinggong Tang: The Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Moshe Levi: Department of Biochemistry and Molecular & Cellular Biology, Georgetown University, Washington, DC 20057, USA
- Yu Chen: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA 01003, USA
- Haichong K. Zhang: Department of Robotics Engineering, Worcester Polytechnic Institute, MA 01609, USA; Department of Biomedical Engineering, Worcester Polytechnic Institute, MA 01609, USA
6. Moradi M, Chen Y, Du X, Seddon JM. Deep ensemble learning for automated non-advanced AMD classification using optimized retinal layer segmentation and SD-OCT scans. Comput Biol Med. 2023;154:106512. PMID: 36701964; DOI: 10.1016/j.compbiomed.2022.106512.
Abstract
BACKGROUND: Accurate retinal layer segmentation in optical coherence tomography (OCT) images is crucial for quantitatively analyzing age-related macular degeneration (AMD) and monitoring its progression. However, previous retinal segmentation models depend on experienced experts, and manually annotating retinal layers is time-consuming. Moreover, the accuracy of AMD diagnosis is directly tied to the segmentation model's performance. To address these issues, we aimed to improve AMD detection using optimized retinal layer segmentation and deep ensemble learning. METHOD: We integrated a graph-cut algorithm with cubic spline fitting to automatically annotate 11 retinal boundaries. The refined images were fed into a deep ensemble mechanism that combined a Bagged Tree classifier and an end-to-end deep learning classifier. We tested the developed deep ensemble model on internal and external datasets. RESULTS: The total error rate of our segmentation model with the boundary refinement approach was significantly lower than that of OCT Explorer segmentations (1.7% vs. 7.8%, p = 0.03). Using the refinement approach, we quantified 169 imaging features from Zeiss SD-OCT volume scans. The presence of drusen and the thicknesses of the total retina, the neurosensory retina, and the ellipsoid zone to inner-outer segment (EZ-ISOS) region contributed more to AMD classification than the other features. The developed ensemble learning model achieved higher diagnostic accuracy in a shorter time than two human graders, with an area under the curve (AUC) of 99.4% for normal vs. early AMD. CONCLUSION: Testing results showed that the developed framework is repeatable and effective, making it a potentially valuable tool for retinal imaging research.
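As a hedged sketch of the ensemble idea in this abstract, the snippet below combines a bagged-tree classifier and a neural classifier by soft voting over a quantified-feature representation. The feature matrix, labels, hyperparameters, and voting scheme are stand-ins, not the paper's pipeline.

```python
# Sketch: soft-voting ensemble of bagged trees and a neural net over imaging features.
import numpy as np
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.random((400, 169))                 # stand-in for the 169 quantified imaging features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # synthetic normal (0) vs. early AMD (1) labels

ensemble = VotingClassifier(
    estimators=[
        ("bagged_trees", BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)),
        ("net", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
    ],
    voting="soft",  # average the two members' predicted probabilities
)
scores = cross_val_score(ensemble, X, y, cv=3, scoring="roc_auc")
print(f"mean cross-validated AUC: {scores.mean():.3f}")
```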
Affiliation(s)
- Mousa Moradi: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, United States
- Yu Chen: Department of Biomedical Engineering, University of Massachusetts, Amherst, MA, United States
- Xian Du: Department of Mechanical and Industrial Engineering, University of Massachusetts, Amherst, MA, United States
- Johanna M. Seddon: Department of Ophthalmology & Visual Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States