1
Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. [PMID: 39186968 DOI: 10.1016/j.preteyeres.2024.101291] [Received: 04/29/2024] [Revised: 08/19/2024] [Accepted: 08/19/2024] [Indexed: 08/28/2024]
Abstract
Recent advancements in artificial intelligence (AI) herald transformative potential for reshaping glaucoma clinical management, improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both developing algorithms and putting them into practice. When creating algorithms, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may make doctors wary or skeptical. When it comes to deploying these tools, challenges include dealing with lower-quality images in real-world situations and the systems' limited ability to generalize across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. The integration of smartphones appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, bringing in large language models (LLMs) to act as interactive tools in medicine may signify a significant change in how healthcare will be delivered in the future. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI will achieve not only improved algorithmic accuracy and optimized data integration but also a paradigm shift towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
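Federated learning, one of the privacy-preserving directions this review highlights, amounts to averaging locally trained model weights rather than pooling patient data. A minimal sketch of the FedAvg aggregation step (the clients, weights, and dataset sizes below are purely illustrative, not from the review):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: weight each client's model parameters by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clinics with different amounts of local training data.
w1 = np.array([1.0, 2.0])
w2 = np.array([3.0, 4.0])
w3 = np.array([5.0, 6.0])
global_w = federated_average([w1, w2, w3], client_sizes=[100, 100, 200])
```

In a real deployment each `w` would be a full model state trained on-site, and only these parameters (never the images) would leave the clinic.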
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Deming Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Zefeng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Yinhang Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Jiaxuan Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Xiaoyi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA.
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China.
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA.
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA.
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA.
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China.
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
2
Genina EA, Lazareva EN, Surkov YI, Serebryakova IA, Shushunova NA. Optical parameters of healthy and tumor breast tissues in mice. JOURNAL OF BIOPHOTONICS 2024; 17:e202400123. [PMID: 38925916 DOI: 10.1002/jbio.202400123] [Received: 03/25/2024] [Revised: 05/23/2024] [Accepted: 05/27/2024] [Indexed: 06/28/2024]
Abstract
Knowledge of the optical parameters of tumors is important for choosing the correct laser treatment parameters. In this paper, the optical properties and refractive indices of breast tissue in healthy mice and in a 4T1 model mimicking human breast cancer have been measured. A significant decrease in both the scattering coefficient and the refractive index of tumor tissue has been observed. The change in tissue morphology has induced a change in the slope of the scattering spectrum. Thus, the light penetration depth into the tumor has increased by almost 1.5-2 times in the near-infrared "optical windows." Raman spectra have shown lower lipid content and higher protein content in the tumor. The difference in the optical parameters of the tissues under study makes it possible to reliably differentiate them. The results may be useful for modeling the distribution of laser radiation in healthy tissues and cancers to derive optimal irradiation conditions in photodynamic therapy.
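The reported increase in penetration depth follows directly from the tumor's reduced scattering: in the diffusion approximation the effective penetration depth is δ = 1/√(3 μa(μa + μs')). A small sketch with illustrative coefficients (not the paper's measured values):

```python
import math

def penetration_depth(mu_a, mu_s_prime):
    """Effective penetration depth in the diffusion approximation.

    mu_a, mu_s_prime: absorption and reduced scattering coefficients
    (same inverse-length units; the result is in the reciprocal unit).
    """
    mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))
    return 1.0 / mu_eff

# Illustrative values only (cm^-1), chosen to mimic reduced tumor scattering:
healthy = penetration_depth(mu_a=0.20, mu_s_prime=10.0)
tumour = penetration_depth(mu_a=0.15, mu_s_prime=5.0)
ratio = tumour / healthy   # deeper penetration in the less-scattering tumour
```

With these toy numbers the tumour's penetration depth comes out roughly 1.6 times the healthy tissue's, in the same ballpark as the 1.5-2x change the abstract reports.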
Affiliation(s)
- Elina A Genina
- Institute of Physics, Saratov State University, Saratov, Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, Tomsk, Russia
- Ekaterina N Lazareva
- Institute of Physics, Saratov State University, Saratov, Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, Tomsk, Russia
- Yuri I Surkov
- Institute of Physics, Saratov State University, Saratov, Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, Tomsk, Russia
- Laboratory of Biomedical Photoacoustic, Saratov State University, Saratov, Russia
- Isabella A Serebryakova
- Institute of Physics, Saratov State University, Saratov, Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, Tomsk, Russia
- Natalya A Shushunova
- Laboratory of Biomedical Photoacoustic, Saratov State University, Saratov, Russia
3
Sikandar S, Mahum R, Ragab AE, Yayilgan SY, Shaikh S. SCDet: A Robust Approach for the Detection of Skin Lesions. Diagnostics (Basel) 2023; 13:diagnostics13111824. [PMID: 37296686 DOI: 10.3390/diagnostics13111824] [Received: 03/20/2023] [Revised: 04/28/2023] [Accepted: 05/10/2023] [Indexed: 06/12/2023]
Abstract
Skin cancer appears as red, blue, white, pink, or black spots with irregular borders and small lesions on the skin, and is categorized into two types: benign and malignant. Skin cancer can lead to death in advanced stages; however, early detection can increase the chances of survival of skin cancer patients. Several approaches have been developed by researchers to identify skin cancer at an early stage; however, they may fail to detect the tiniest tumours. Therefore, we propose a robust method for the diagnosis of skin cancer, namely SCDet, based on a convolutional neural network (CNN) with 32 layers for the detection of skin lesions. The images, with a size of 227 × 227, are fed to the image input layer, and then pairs of convolution layers are used to extract the hidden patterns of the skin lesions for training. After that, batch normalization and ReLU layers are used. The performance of our proposed SCDet is computed using the evaluation metrics: precision 99.2%; recall 100%; sensitivity 100%; specificity 99.20%; and accuracy 99.6%. Moreover, the proposed technique is compared with pre-trained models, i.e., VGG16, AlexNet, and SqueezeNet, and it is observed that SCDet provides higher accuracy than these pre-trained models and identifies the tiniest skin tumours with maximum precision. Furthermore, our proposed model is faster than the pre-trained models because the depth of its architecture is not as high as that of pre-trained models such as ResNet50. Additionally, our proposed model consumes fewer resources during training; therefore, it is better in terms of computational cost than the pre-trained models for the detection of skin lesions.
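The evaluation metrics quoted above are all derived from the binary confusion matrix. A quick sketch of the formulas (the confusion counts here are hypothetical, chosen only to illustrate the computation, not taken from the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard metrics for scoring a binary lesion detector.

    tp/fp/tn/fn: true-positive, false-positive, true-negative,
    false-negative counts from the confusion matrix.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # recall == sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, specificity, accuracy

# Hypothetical counts for a 200-image test set:
p, r, s, a = classification_metrics(tp=99, fp=1, tn=99, fn=1)
```

Note that recall and sensitivity are the same quantity, which is why the abstract reports identical values for both.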
Affiliation(s)
- Shahbaz Sikandar
- Department of Computer Science, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
- Rabbia Mahum
- Department of Computer Science, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
- Adham E Ragab
- Industrial Engineering Department, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia
- Sule Yildirim Yayilgan
- Department of Information Security and Communication Technology (IIK), Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
- Sarang Shaikh
- Department of Information Security and Communication Technology (IIK), Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
4
Skin Lesion Detection Using Hand-Crafted and DL-Based Features Fusion and LSTM. Diagnostics (Basel) 2022; 12:diagnostics12122974. [PMID: 36552983 PMCID: PMC9777409 DOI: 10.3390/diagnostics12122974] [Received: 11/14/2022] [Revised: 11/19/2022] [Accepted: 11/20/2022] [Indexed: 11/30/2022]
Abstract
The abnormal growth of cells in the skin causes two types of tumor: benign and malignant. Various methods, such as imaging and biopsies, are used by oncologists to assess the presence of skin cancer, but these are time-consuming and require extra human effort. Some automated methods have been developed by researchers based on hand-crafted feature extraction from skin images; nevertheless, these methods may fail to detect skin cancers at an early stage when tested on unseen data. Therefore, in this study, a novel and robust skin cancer detection model was proposed based on feature fusion. First, our proposed model pre-processed the images using a GF filter to remove noise. Second, hand-crafted features were extracted by employing local binary patterns (LBP), and Inception V3 was employed for automatic feature extraction. Aside from this, an Adam optimizer was utilized for adjustment of the learning rate. In the end, an LSTM network was applied to the fused features for the classification of skin cancer into malignant and benign. Our proposed system combines the benefits of both ML- and DL-based algorithms. We utilized the skin lesion DermIS dataset, which is available on the Kaggle website and consists of 1000 images, of which 500 belong to the benign class and 500 to the malignant class. The proposed methodology attained 99.4% accuracy, 98.7% precision, 98.66% recall, and a 98% F-score. We compared the performance of our feature fusion-based method with existing segmentation-based and DL-based techniques. Additionally, we cross-validated the performance of our proposed model using 1000 images from the International Skin Image Collection (ISIC), attaining 98.4% detection accuracy. The results show that our method provides significant results compared to existing techniques and outperforms them.
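The fusion step pairs a hand-crafted descriptor with deep features before classification. Below is a minimal numpy sketch of an 8-neighbour LBP histogram concatenated with a stand-in deep-feature vector; the "Inception V3 features" here are random placeholders, and this is not the authors' exact pipeline:

```python
import numpy as np

def lbp_histogram(img):
    """Normalized histogram of 8-neighbour local binary pattern codes."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    codes = np.zeros_like(centre, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= centre).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
hand_crafted = lbp_histogram(img)                    # 256-dim LBP histogram
deep = np.random.default_rng(1).normal(size=2048)    # placeholder for Inception V3 features
fused = np.concatenate([hand_crafted, deep])         # sequence fed to the LSTM classifier
```

In the real pipeline the deep vector would come from a pre-trained Inception V3 backbone and the fused vector would be passed to the trained LSTM.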
5
Kim E, Cho HH, Kwon J, Oh YT, Ko ES, Park H. Tumor-Attentive Segmentation-Guided GAN for Synthesizing Breast Contrast-Enhanced MRI Without Contrast Agents. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 11:32-43. [PMID: 36478773 PMCID: PMC9721354 DOI: 10.1109/jtehm.2022.3221918] [Received: 07/27/2022] [Revised: 10/25/2022] [Accepted: 11/10/2022] [Indexed: 11/16/2022]
Abstract
OBJECTIVE Breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a sensitive imaging technique critical for breast cancer diagnosis. However, the administration of contrast agents poses a potential risk. This can be avoided if contrast-enhanced MRI can be obtained without using contrast agents. Thus, we aimed to generate T1-weighted contrast-enhanced MRI (ceT1) images from pre-contrast T1-weighted MRI (preT1) images in the breast. METHODS We proposed a generative adversarial network to synthesize ceT1 from preT1 breast images that adopts a local discriminator and a segmentation task network to focus specifically on the tumor region in addition to the whole breast. The segmentation network performs the related task of segmenting the tumor region, which allows important tumor-related information to be enhanced. In addition, edge maps are included to provide explicit shape and structural information. Our approach was evaluated and compared with other methods in local (n = 306) and external validation (n = 140) cohorts. Four evaluation metrics, normalized root mean squared error (NRMSE), Pearson cross-correlation coefficient (CC), peak signal-to-noise ratio (PSNR), and structural similarity index map (SSIM), were measured for the whole breast and the tumor region. An ablation study was performed to evaluate the incremental benefits of the various components of our approach. RESULTS Our approach performed best, with an NRMSE of 25.65, PSNR of 54.80 dB, SSIM of 0.91, and CC of 0.88 on average in the local test set. CONCLUSION Performance gains were replicated in the validation cohort. SIGNIFICANCE We hope that our method will help patients avoid potentially harmful contrast agents. Clinical and Translational Impact Statement: Contrast agents are necessary to obtain DCE-MRI, which is essential in breast cancer diagnosis. However, administration of contrast agents may cause side effects such as nephrogenic systemic fibrosis and the risk of toxic residue deposits. Our approach can generate DCE-MRI without contrast agents using a generative deep neural network, and could thus help patients avoid potentially harmful contrast agents, resulting in an improved diagnosis and treatment workflow for breast cancer.
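Two of the image-similarity metrics above are straightforward to compute. A sketch on synthetic data; note that NRMSE conventions vary (the paper's 25.65 figure implies a different normalization or scale than the range-based convention used here):

```python
import numpy as np

def nrmse(ref, pred):
    """Root mean squared error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((ref - pred) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, pred, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((ref - pred) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy "ground-truth" and "synthesized" images:
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
pred = ref + 0.01 * rng.standard_normal((64, 64))  # small synthesis error
e = nrmse(ref, pred)
q = psnr(ref, pred)
```

SSIM and CC would be computed over the same image pairs; SSIM additionally uses local windows, so a library implementation is usually preferred.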
Affiliation(s)
- Eunjin Kim
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Hwan-Ho Cho
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Department of Medical Artificial Intelligence, Konyang University, Daejeon 35365, South Korea
- Junmo Kwon
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Young-Tack Oh
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Eun Sook Ko
- Department of Radiology, Samsung Medical Center, School of Medicine, Sungkyunkwan University, Seoul 06351, South Korea
- Hyunjin Park
- School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon 16419, South Korea
6
Osman A, Crowley J, Gordon GSD. Training generative adversarial networks for optical property mapping using synthetic image data. BIOMEDICAL OPTICS EXPRESS 2022; 13:5171-5186. [PMID: 36425623 PMCID: PMC9664886 DOI: 10.1364/boe.458554] [Received: 03/17/2022] [Revised: 06/13/2022] [Accepted: 06/13/2022] [Indexed: 06/16/2023]
Abstract
We demonstrate the training of a generative adversarial network (GAN) for the prediction of optical property maps (scattering and absorption) using spatial frequency domain imaging (SFDI) image data sets that are generated synthetically with a free, open-source 3D modelling and rendering software, Blender. The flexibility of Blender is exploited to simulate 5 models with real-life relevance to clinical SFDI of diseased tissue: flat samples containing a single material, flat samples containing 2 materials, flat samples containing 3 materials, flat samples with spheroidal tumours, and cylindrical samples with spheroidal tumours. The last case is particularly relevant as it represents wide-field imaging inside a tubular organ, e.g. the gastro-intestinal tract. In all 5 scenarios we show the GAN provides an accurate reconstruction of the optical properties from single SFDI images, with a mean normalised error ranging from 1.0%-1.2% for absorption and 1.1%-1.2% for scattering, resulting in visually improved contrast for tumour spheroid structures. This compares favourably with the ∼10% absorption error and ∼10% scattering error achieved using GANs on experimental SFDI data. Next, we perform a bi-directional cross-validation of our synthetically trained GAN, retrained with 90% synthetic and 10% experimental data to encourage domain transfer, against a GAN trained fully on experimental data, and observe visually accurate results with an error of 6.3%-10.3% for absorption and 6.6%-11.9% for scattering. Our synthetically trained GAN is therefore highly relevant to real experimental samples, while providing the significant added benefits of large training datasets, perfect ground truths, and the ability to test realistic imaging geometries, e.g. inside cylinders, for which no conventional single-shot demodulation algorithms exist. In the future, we expect that the application of techniques such as domain adaptation or training on hybrid real-synthetic datasets will create a powerful tool for fast, accurate production of optical property maps for real clinical imaging systems.
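The conventional demodulation the abstract refers to recovers the modulated (AC) amplitude image from three phase-shifted fringe images; this is the standard three-phase SFDI formula, sketched here on ideal synthetic fringes with a known modulation amplitude:

```python
import numpy as np

def demodulate(i1, i2, i3):
    """Standard 3-phase SFDI demodulation (phases 0, 2pi/3, 4pi/3).

    Recovers the AC amplitude of the projected sinusoidal pattern.
    """
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )

# Synthetic fringe profiles with DC offset 0.5 and AC amplitude 0.3:
x = np.linspace(0.0, 1.0, 256)
fx, m_dc, m_ac = 5.0, 0.5, 0.3
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
imgs = [m_dc + m_ac * np.cos(2 * np.pi * fx * x + p) for p in phases]
recovered = demodulate(*imgs)   # flat profile equal to m_ac everywhere
```

For ideal sinusoids the formula recovers the AC amplitude exactly; the GAN approach replaces this three-image requirement with a single-shot prediction.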
Affiliation(s)
- A Osman
- Optics and Photonics Group, Faculty of Engineering, The University of Nottingham, Nottingham, United Kingdom
- J Crowley
- Optics and Photonics Group, Faculty of Engineering, The University of Nottingham, Nottingham, United Kingdom
- G S D Gordon
- Optics and Photonics Group, Faculty of Engineering, The University of Nottingham, Nottingham, United Kingdom
7
Ilan B, Kim AD, Venugopalan V. Radiance backscattered by a strongly scattering medium in the high spatial frequency limit. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2022; 39:1193-1201. [PMID: 36215605 DOI: 10.1364/josaa.462683] [Received: 05/02/2022] [Accepted: 05/19/2022] [Indexed: 06/16/2023]
Abstract
We study the radiative transfer of a spatially modulated plane wave incident on a half-space composed of a uniformly scattering and absorbing medium. For spatial frequencies that are large compared to the scattering coefficient, we find that first-order scattering governs the leading behavior of the radiance backscattered by the medium. The first-order scattering approximation reveals a specific curve on the backscattered hemisphere where the radiance is concentrated. Along this curve, the radiance assumes a particularly simple expression that is directly proportional to the phase function. These results are inherent to the radiative transfer equation at large spatial frequency and do not have a strong dependence on any particular optical property. Consequently, these results provide the means by which spatial frequency domain imaging technologies can directly measure the phase function of a sample. Numerical simulations using the discrete ordinate method along with the source integration interpolation method validate these theoretical findings.
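Since the backscattered radiance in this limit becomes directly proportional to the phase function, measuring it amounts to sampling the phase function itself. The Henyey-Greenstein function is the usual single-parameter phase-function model for tissue; a sketch with an illustrative anisotropy value (g is not taken from this paper):

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function, normalized over the unit sphere.

    g in (-1, 1) is the scattering anisotropy (mean cosine of the
    scattering angle); g near 1 means strongly forward-peaked scattering.
    """
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    )

# Tissue-like forward-peaked scattering (illustrative g = 0.9):
forward = henyey_greenstein(cos_theta=1.0, g=0.9)
backward = henyey_greenstein(cos_theta=-1.0, g=0.9)
```

The strong forward/backward asymmetry for tissue-like g is exactly the kind of angular structure a direct phase-function measurement would resolve.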
8
Virtual and Real Bidirectional Driving System for the Synchronization of Manipulations in Robotic Joint Surgeries. MACHINES 2022. [DOI: 10.3390/machines10070530] [Indexed: 02/04/2023]
Abstract
Surgical robots are increasingly important in orthopedic surgeries to assist or replace surgeons in completing operations. During joint surgeries, the patient's joint needs to be adjusted several times by the surgeon. Therefore, a virtual model built from preoperative medical images cannot match the actual variation of the patient's joint during the surgery, and conventional virtual reality techniques cannot fully satisfy the requirements of joint surgeries. This paper proposes a real and virtual bidirectional driving method to synchronize the manipulations in both the real operation site and the virtual scene. The dynamic digital twin of the patient's joint is obtained by decoupling the joint and dynamically updating its pose via intraoperative measurements. During surgery, the surgeon can intuitively monitor the real-time position of the patient and the surgical tool through the system and can also manipulate the surgical robot in the virtual scene. In addition, the system can provide visual guidance to the surgeon when the patient's joint is adjusted. A prototype system was developed for orthopedic surgeries, and a proof-of-concept joint surgery demo was carried out to verify the effectiveness of the proposed method. Experimental results show that the proposed system can synchronize the manipulations in both the real operation site and the virtual scene, thus realizing the bidirectional driving.
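Pose updates of this kind are typically expressed as compositions of 4x4 homogeneous transforms. The toy sketch below (not the authors' implementation; all poses are hypothetical) applies a measured intraoperative adjustment to a preoperative pose and maps a point from the joint frame into the world frame:

```python
import numpy as np

def make_pose(rot_z_rad, translation):
    """4x4 homogeneous transform: rotation about z followed by translation."""
    c, s = np.cos(rot_z_rad), np.sin(rot_z_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0],
                 [s, c, 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

# Hypothetical poses: the joint starts at the identity pose, then the
# intraoperative measurement reports a 90-degree rotation and a 10 mm shift.
preop_pose = make_pose(0.0, [0.0, 0.0, 0.0])
measured_adjustment = make_pose(np.pi / 2, [10.0, 0.0, 0.0])
updated_pose = measured_adjustment @ preop_pose

# A landmark at (1, 0, 0) in the joint frame, expressed in the world frame:
point_in_joint = np.array([1.0, 0.0, 0.0, 1.0])
point_in_world = updated_pose @ point_in_joint
```

Re-rendering the virtual scene with `updated_pose` after each measured adjustment is what keeps the digital twin synchronized with the real joint.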
9
Smith JT, Ochoa M, Faulkner D, Haskins G, Intes X. Deep learning in macroscopic diffuse optical imaging. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:JBO-210288VRR. [PMID: 35218169 PMCID: PMC8881080 DOI: 10.1117/1.jbo.27.2.020901] [Received: 09/14/2021] [Accepted: 02/09/2022] [Indexed: 05/02/2023]
Abstract
SIGNIFICANCE Biomedical optics system design, image formation, and image analysis have primarily been guided by classical physical modeling and signal processing methodologies. Recently, however, deep learning (DL) has become a major paradigm in computational modeling and has demonstrated utility in numerous scientific domains and various forms of data analysis. AIM We aim to comprehensively review the use of DL applied to macroscopic diffuse optical imaging (DOI). APPROACH First, we provide a layman's introduction to DL. Then, the review summarizes current DL work in some of the most active areas of this field, including optical properties retrieval, fluorescence lifetime imaging, and diffuse optical tomography. RESULTS The advantages of using DL for DOI versus conventional inverse solvers cited in the literature reviewed herein are numerous. These include, among others, a decrease in analysis time (often by many orders of magnitude), increased quantitative reconstruction quality, robustness to noise, and the unique capability to learn complex end-to-end relationships. CONCLUSIONS The heavily validated capability of DL across a wide range of complex inverse-solving methodologies has enormous potential to bring novel DOI modalities, otherwise deemed impractical for clinical translation, to the patient's bedside.
Affiliation(s)
- Jason T. Smith
- Rensselaer Polytechnic Institute, Department of Biomedical Engineering, Troy, New York, United States
- Marien Ochoa
- Rensselaer Polytechnic Institute, Department of Biomedical Engineering, Troy, New York, United States
- Denzel Faulkner
- Rensselaer Polytechnic Institute, Department of Biomedical Engineering, Troy, New York, United States
- Grant Haskins
- Rensselaer Polytechnic Institute, Department of Biomedical Engineering, Troy, New York, United States
- Xavier Intes
- Rensselaer Polytechnic Institute, Center for Modeling, Simulation and Imaging for Medicine, Troy, New York, United States
10
Developing diagnostic assessment of breast lumpectomy tissues using radiomic and optical signatures. Sci Rep 2021; 11:21832. [PMID: 34750471 PMCID: PMC8575781 DOI: 10.1038/s41598-021-01414-z] [Received: 08/21/2021] [Accepted: 10/28/2021] [Indexed: 02/07/2023]
Abstract
High positive margin rates in oncologic breast-conserving surgery are a pressing clinical problem. Volumetric X-ray scanning is emerging as a powerful ex vivo specimen imaging technique for analyzing resection margins, but X-rays lack contrast between non-malignant and malignant fibrous tissues. In this study, combined micro-CT and wide-field optical image radiomics were developed to classify malignancy of breast cancer tissues, demonstrating that X-ray/optical radiomics improve malignancy classification. Ninety-two standardized features were extracted from co-registered micro-CT and optical spatial frequency domain imaging samples extracted from 54 breast tumors exhibiting seven tissue subtypes confirmed by microscopic histological analysis. Multimodal feature sets improved classification performance versus micro-CT alone when adipose samples were included (AUC = 0.88 vs. 0.90; p-value = 3.65e-11) and excluded, focusing the classification task on exclusively non-malignant fibrous versus malignant tissues (AUC = 0.78 vs. 0.85; p-value = 9.33e-14). Extending the radiomics approach to high-dimensional optical data-termed "optomics" in this study-offers a promising optical image analysis technique for cancer detection. Radiomic feature data and classification source code are publicly available.
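The AUC values used above to compare feature sets can be computed without an explicit ROC sweep via the Mann-Whitney U statistic. A small sketch with hypothetical classifier scores (not the study's data):

```python
import numpy as np

def auc_mann_whitney(scores_neg, scores_pos):
    """AUC as P(score_pos > score_neg), counting ties as 0.5.

    scores_neg: classifier scores for the negative (non-malignant) class.
    scores_pos: classifier scores for the positive (malignant) class.
    """
    scores_neg = np.asarray(scores_neg, dtype=float)
    scores_pos = np.asarray(scores_pos, dtype=float)
    diff = scores_pos[:, None] - scores_neg[None, :]   # all pos/neg pairs
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# Hypothetical malignancy scores:
auc = auc_mann_whitney(scores_neg=[0.10, 0.40, 0.35],
                       scores_pos=[0.80, 0.65, 0.30])
```

Comparing two AUCs statistically, as the study does with its p-values, additionally requires a paired test such as DeLong's method.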
11
Stier AC, Goth W, Hurley A, Brown T, Feng X, Zhang Y, Lopes FCPS, Sebastian KR, Ren P, Fox MC, Reichenberg JS, Markey MK, Tunnell JW. Imaging sub-diffuse optical properties of cancerous and normal skin tissue using machine learning-aided spatial frequency domain imaging. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210048RR. [PMID: 34558235 PMCID: PMC8459901 DOI: 10.1117/1.jbo.26.9.096007] [Received: 02/17/2021] [Accepted: 08/27/2021] [Indexed: 05/28/2023]
Abstract
SIGNIFICANCE Sub-diffuse optical properties may serve as useful cancer biomarkers, and wide-field heatmaps of these properties could aid physicians in identifying cancerous tissue. Sub-diffuse spatial frequency domain imaging (sd-SFDI) can reveal such wide-field maps, but the current time cost of experimentally validated methods for rendering these heatmaps precludes this technology from potential real-time applications. AIM Our study renders heatmaps of sub-diffuse optical properties from experimental sd-SFDI images in real time and reports these properties for cancerous and normal skin tissue subtypes. APPROACH A phase function sampling method was used to simulate sd-SFDI spectra over a wide range of optical properties. A machine learning model trained on these simulations and tested on tissue phantoms was used to render sub-diffuse optical property heatmaps from sd-SFDI images of cancerous and normal skin tissue. RESULTS The model accurately rendered heatmaps from experimental sd-SFDI images in real time. In addition, heatmaps of a small number of tissue samples are presented to inform hypotheses on sub-diffuse optical property differences across skin tissue subtypes. CONCLUSION These results bring the overall process of sd-SFDI a fundamental step closer to real-time speeds and set a foundation for future real-time medical applications of sd-SFDI such as image-guided surgery.
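The simulation-trained regression idea above (learn the inverse mapping from sd-SFDI measurements to optical properties offline, then apply it per pixel in real time) can be illustrated with a deliberately simplified stand-in: a linear least-squares model fit on fabricated feature/property pairs. Everything below is synthetic and is not the paper's phase-function sampling method or trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated "simulator": 500 samples of two optical properties mapped to
# three reflectance features by a fixed linear forward model plus noise.
optical_props = rng.uniform(0.5, 2.0, size=(500, 2))
W_true = np.array([[0.8, -0.2, 0.1],
                   [0.3, 0.5, -0.4]])
features = optical_props @ W_true + 0.001 * rng.standard_normal((500, 3))

# "Training": fit the inverse mapping by ordinary least squares.
coef, *_ = np.linalg.lstsq(features, optical_props, rcond=None)

# "Inference": recovering properties is now one matrix multiply per pixel,
# which is what makes real-time heatmap rendering feasible.
pred = features @ coef
err = np.abs(pred - optical_props).mean()
```

A real sd-SFDI inverse model is nonlinear, so the paper uses a machine learning regressor rather than a linear fit; the train-on-simulations, apply-in-real-time structure is the same.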
Affiliation(s)
- Andrew C. Stier
- The University of Texas at Austin, Department of Electrical and Computer Engineering, Austin, Texas, United States
- Will Goth
- The University of Texas at Austin, Department of Biomedical Engineering, Austin, Texas, United States
- Aislinn Hurley
- The University of Texas at Austin, Department of Biomedical Engineering, Austin, Texas, United States
- Treshayla Brown
- The University of Texas at Austin, Department of Biomedical Engineering, Austin, Texas, United States
- Xu Feng
- The University of Texas at Austin, Department of Biomedical Engineering, Austin, Texas, United States
- Yao Zhang
- The University of Texas at Austin, Department of Biomedical Engineering, Austin, Texas, United States
- Fabiana C. P. S. Lopes
- The University of Texas at Austin, Dell Medical School, Department of Internal Medicine, Austin, Texas, United States
- Katherine R. Sebastian
- The University of Texas at Austin, Dell Medical School, Department of Internal Medicine, Austin, Texas, United States
- Pengyu Ren
- The University of Texas at Austin, Department of Biomedical Engineering, Austin, Texas, United States
- Matthew C. Fox
- The University of Texas at Austin, Dell Medical School, Department of Internal Medicine, Austin, Texas, United States
- Jason S. Reichenberg
- The University of Texas at Austin, Dell Medical School, Department of Internal Medicine, Austin, Texas, United States
- Mia K. Markey
- The University of Texas at Austin, Department of Biomedical Engineering, Austin, Texas, United States
- The University of Texas MD Anderson Cancer Center, Imaging Physics Residency Program, Houston, Texas, United States
- James W. Tunnell
- The University of Texas at Austin, Department of Biomedical Engineering, Austin, Texas, United States