1.
Liu X, Wang H, Gao J. scIALM: A method for sparse scRNA-seq expression matrix imputation using the Inexact Augmented Lagrange Multiplier with low error. Comput Struct Biotechnol J 2024; 23:549-558. PMID: 38274995; PMCID: PMC10809077; DOI: 10.1016/j.csbj.2023.12.027.
Abstract
Single-cell RNA sequencing (scRNA-seq) is a high-throughput sequencing technology that quantifies gene expression profiles of specific cell populations at the single-cell level, providing a foundation for studying cellular heterogeneity and patient pathological characteristics, with applications in developmental, fertility, and disease studies. However, the cell-gene expression matrix of single-cell sequencing data is often sparse and contains numerous zero values. Some of these zeros derive from noise, and dropout noise in particular has a large impact on downstream analysis. In this paper, we propose scIALM, a method for imputing sparse single-cell RNA expression matrices that employs the Inexact Augmented Lagrange Multiplier method to use the sparse but clean (accurate) entries to recover the unknown entries of the matrix. We perform experiments on four datasets, taking the expression matrix after quality control (QC) as the original matrix and comparing scIALM against six other methods using mean squared error (MSE), mean absolute error (MAE), Pearson correlation coefficient (PCC), and cosine similarity (CS). Our results demonstrate that scIALM recovers the original matrix with an error on the order of 10^-4, and the mean values of the four metrics reach 4.5072 (MSE), 0.765 (MAE), 0.8701 (PCC), and 0.8896 (CS). In addition, under 10%-50% random masking, scIALM is the least sensitive to the masking ratio. For downstream analysis, we use the adjusted Rand index (ARI) and normalized mutual information (NMI) to evaluate clustering, and results improve on the three datasets that contain real cluster labels.
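The inexact augmented Lagrangian scheme for nuclear-norm matrix completion that this abstract builds on can be sketched in a few lines of NumPy. This is a simplified illustration of the general IALM matrix-completion iteration (singular-value shrinkage plus a dual update), not the authors' scIALM code; the function name `ialm_complete` and all parameter defaults are our own assumptions.

```python
import numpy as np

def ialm_complete(M, mask, rho=1.3, iters=200, tol=1e-7):
    """Impute missing entries of M (zeros where mask == 0) by nuclear-norm
    minimization with an inexact augmented Lagrange multiplier iteration:
    min ||X||_*  subject to  X agreeing with M on the observed entries."""
    norm_M = np.linalg.norm(M)
    mu = 1.0 / np.linalg.norm(M, 2)        # heuristic initial penalty
    Y = np.zeros_like(M)                   # Lagrange multiplier
    E = np.zeros_like(M)                   # absorbs the unobserved entries
    X = np.zeros_like(M)
    for _ in range(iters):
        # X-update: singular-value shrinkage with threshold 1/mu
        U, s, Vt = np.linalg.svd(M - E + Y / mu, full_matrices=False)
        X = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # E-update: free on unobserved entries, forced to zero on observed ones
        E = (1 - mask) * (M - X + Y / mu)
        R = M - X - E                      # constraint residual
        Y += mu * R
        mu *= rho
        if np.linalg.norm(R) < tol * norm_M:
            break
    return X
```

On a synthetic low-rank matrix with a random mask of observed entries, the recovered matrix closely matches the unobserved ground truth, which is the behavior the abstract reports at much larger scale.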
Affiliation(s)
- Xiaohong Liu
- College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
- Han Wang
- College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
- Jingyang Gao
- College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
2.
Harnischmacher N, Rodner E, Schmitz CH. Detection of breast cancer using machine learning on time-series diffuse optical transillumination data. J Biomed Opt 2024; 29:115001. PMID: 39529875; PMCID: PMC11552526; DOI: 10.1117/1.jbo.29.11.115001.
Abstract
Significance: Optical mammography, a promising tool for cancer diagnosis, has largely fallen short of expectations. Modern machine learning (ML) methods offer ways to improve cancer detection in diffuse optical transmission data.
Aim: We aim to quantitatively evaluate the classification of cancer-positive versus cancer-negative patients using ML methods on raw transmission time-series data from bilateral breast scans acquired while subjects rest.
Approach: We use a support vector machine (SVM) with hyperparameter optimization and cross-validation to systematically explore a range of data preprocessing and feature-generation strategies. We also apply an automated ML (AutoML) framework to validate our findings. We use receiver operating characteristics and the corresponding area under the curve (AUC) to quantify classification performance.
Results: For the available sample group (N = 63, 18 cancer patients), we demonstrate an AUC score of up to 93.3% for SVM classification and up to 95.0% for the AutoML classifier.
Conclusions: ML offers a viable strategy for clinically relevant breast cancer diagnosis using diffuse optical transmission measurements. The diagnostic performance of ML on raw data can outperform traditional statistical biomarkers derived from reconstructed image time series. To achieve clinically relevant performance, our ML approach requires simultaneous bilateral scanning of the breasts with spatially dense channel coverage.
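The AUC figure reported here (and by several other studies in this list) reduces to the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one, with ties counting half. A minimal dependency-free sketch of that computation, our own illustration rather than the paper's pipeline:

```python
from itertools import product

def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney interpretation:
    fraction of (positive, negative) pairs ranked correctly, ties = 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` gives 0.75, since three of the four positive/negative pairs are ranked correctly.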
Affiliation(s)
- Nils Harnischmacher
- HTW - University of Applied Sciences Berlin, Faculty II, KI-Werkstatt, Berlin, Germany
- Erik Rodner
- HTW - University of Applied Sciences Berlin, Faculty II, KI-Werkstatt, Berlin, Germany
- Christoph H. Schmitz
- HTW - University of Applied Sciences Berlin, Faculty I - Health Electronics, Biomedical Electronics and Applied Research (BEAR) Labs, Berlin, Germany
3.
Li S, Zhang M, Xue M, Zhu Q. Real-time breast lesion classification combining diffuse optical tomography frequency domain data and BI-RADS assessment. J Biophotonics 2024; 17:e202300483. PMID: 38430216; PMCID: PMC11065578; DOI: 10.1002/jbio.202300483.
Abstract
Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated potential for breast cancer diagnosis, where real-time or near real-time diagnosis with high accuracy is desired. However, DOT's relatively slow data processing and image reconstruction have hindered real-time diagnosis. Here, we propose a real-time classification scheme that combines US Breast Imaging Reporting and Data System (BI-RADS) readings with DOT frequency-domain measurements. A convolutional neural network is trained to generate malignancy probability scores from the DOT measurements. These scores are then integrated with the BI-RADS assessments by a support vector machine classifier, which provides the final diagnostic output. An area under the receiver operating characteristic curve of 0.978 is achieved in distinguishing between benign and malignant breast lesions in patient data, without image reconstruction.
Affiliation(s)
- Shuying Li
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Menghao Zhang
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Minghao Xue
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Quing Zhu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Department of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
4.
Kwon H, Oh S, Kim MG, Kim Y, Jung G, Lee HJ, Kim SY, Bae HM. Artificial Intelligence-Enhanced Quantitative Ultrasound for Breast Cancer: Pilot Study on Quantitative Parameters and Biopsy Outcomes. Diagnostics (Basel) 2024; 14:419. PMID: 38396457; PMCID: PMC10888332; DOI: 10.3390/diagnostics14040419.
Abstract
Traditional B-mode ultrasound has difficulty distinguishing benign from malignant breast lesions; quantitative ultrasound (QUS) may offer advantages. We examined a QUS imaging system's potential to enhance diagnostic accuracy using the parameters attenuation coefficient (AC), speed of sound (SoS), effective scatterer diameter (ESD), and effective scatterer concentration (ESC). B-mode images and radiofrequency signals were gathered from breast lesions, and these parameters were processed and analyzed by a QUS system trained on a simulated acoustic dataset and equipped with an encoder-decoder structure. Fifty-seven patients were enrolled over six months, with biopsy serving as the diagnostic ground truth. AC, SoS, and ESD showed significant differences between benign and malignant lesions (p < 0.05), but ESC did not. A logistic regression model built on these parameters achieved an area under the receiver operating characteristic curve of 0.90 (95% CI: 0.78, 0.96) for distinguishing between benign and malignant lesions. In conclusion, the QUS system shows promise in enhancing diagnostic accuracy by leveraging AC, SoS, and ESD; further studies are needed to validate these findings and optimize the system for clinical use.
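A logistic regression classifier over a handful of lesion parameters, as used in this pilot, can be sketched with plain gradient descent in NumPy. This is a generic illustration on synthetic features, not the authors' fitted model; the function names, learning rate, and iteration count are our own assumptions.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=3000):
    """Fit logistic regression by gradient descent on the log-loss.
    X: (n, d) feature matrix (e.g. AC, SoS, ESD per lesion); y: 0/1 labels."""
    Xb = np.hstack([X, np.ones((len(X), 1))])           # fold in the intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        z = np.clip(Xb @ w, -30.0, 30.0)                # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))                    # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)               # gradient of mean log-loss
    return w

def predict_proba(X, w):
    """Malignancy probability for each row of X under the fitted weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30.0, 30.0)))
```

On two well-separated synthetic classes of three-dimensional feature vectors, the fitted model separates the groups with high training accuracy, mirroring the discriminative role the abstract attributes to AC, SoS, and ESD.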
Affiliation(s)
- Hyuksool Kwon
- Laboratory of Quantitative Ultrasound Imaging, Seoul National University Bundang Hospital, Seongnam 13620, Republic of Korea
- Imaging Division, Department of Emergency Medicine, Seoul National University Bundang Hospital, Seongnam 13620, Republic of Korea
- Seokhwan Oh
- Laboratory of Quantitative Ultrasound Imaging, Seoul National University Bundang Hospital, Seongnam 13620, Republic of Korea
- Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Myeong-Gee Kim
- Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Youngmin Kim
- Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Guil Jung
- Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Hyeon-Jik Lee
- Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Sang-Yun Kim
- Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
- Hyeon-Min Bae
- Electrical Engineering Department, Korea Advanced Institute of Science and Technology, Daejeon 34141, Republic of Korea
5.
Oyelade ON, Irunokhai EA, Wang H. A twin convolutional neural network with hybrid binary optimizer for multimodal breast cancer digital image classification. Sci Rep 2024; 14:692. PMID: 38184742; PMCID: PMC10771515; DOI: 10.1038/s41598-024-51329-8.
Abstract
Deep learning has been widely applied to unimodal medical image analysis, with significant classification accuracy observed. However, real-world diagnosis of chronic diseases such as breast cancer often requires multimodal data streams with different modalities of visual and textual content. Mammography, magnetic resonance imaging (MRI), and image-guided breast biopsy are a few of the multimodal visual streams physicians consider when isolating cases of breast cancer. Unfortunately, most studies applying deep learning to classification problems in digital breast images have narrowed their scope to unimodal samples. This is understandable given the challenging nature of multimodal image abnormality classification, where the learned high-dimensional heterogeneous features must be fused and projected into a common representation space. This paper presents a novel deep learning approach, a dual/twin convolutional neural network (TwinCNN) framework, to address the challenge of multimodal breast cancer image classification. First, modality-specific feature learning is achieved by extracting both low- and high-level features with the networks embedded in TwinCNN. Second, to address the notorious problem of the high dimensionality of the extracted features, a binary optimization method is adapted to eliminate non-discriminant features from the search space. Furthermore, a novel feature fusion method computationally leverages the ground-truth and predicted labels for each sample to enable multimodal classification. The proposed method was evaluated on digital mammography images and digital histopathology breast biopsy samples from the benchmark MIAS and BreakHis datasets, respectively.
Experimental results showed classification accuracy and area under the curve (AUC) of 0.755 and 0.861871 for histology and 0.791 and 0.638 for mammography as single modalities. With the fused-feature method, classification accuracy reached 0.977, 0.913, and 0.667 for histology, mammography, and multimodality, respectively. The findings confirm that multimodal image classification based on combining image features and predicted labels improves performance, and that feature dimensionality reduction based on the binary optimizer supports the elimination of non-discriminant features capable of bottlenecking the classifier.
Affiliation(s)
- Olaide N Oyelade
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT9 5BN, UK
- Hui Wang
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT9 5BN, UK
6.
Gupta SK, Pal R, Ahmad A, Melandsø F, Habib A. Image denoising in acoustic microscopy using block-matching and 4D filter. Sci Rep 2023; 13:13212. PMID: 37580411; PMCID: PMC10425453; DOI: 10.1038/s41598-023-40301-7.
Abstract
Scanning acoustic microscopy (SAM) is a label-free imaging technique used in biomedical imaging, non-destructive testing, and materials research to visualize surface and sub-surface structures. In ultrasonic imaging, noise can reduce contrast, edge and texture detail, and resolution, negatively impacting post-processing algorithms. To reduce the noise in the scanned images, we employed a block-matching 4D (BM4D) filter, which can denoise acoustic volumetric signals. The BM4D filter uses transform-domain filtering with hard-thresholding and Wiener-filtering stages. Applied to noisy images, the proposed algorithm produces the most suitable denoised output compared with conventional filtering methods (Gaussian, median, and Wiener filters). Filtered images were quantitatively analyzed using metrics such as the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). The combined qualitative and quantitative analysis demonstrates that BM4D is the most suitable method for denoising acoustic images from the SAM. The proposed block-matching filter opens a new avenue for acoustic and photoacoustic image denoising, particularly in scenarios with poor signal-to-noise ratios.
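PSNR, one of the two metrics used above, has a direct closed-form definition: the ratio of the peak signal power to the mean squared error between the reference and the filtered image, in decibels. A minimal sketch (our own helper, assuming intensities scaled to [0, peak]):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    filtered or noisy version. Higher is better; identical images give inf."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

For a uniform error of 0.1 on a unit-range image, the MSE is 0.01 and the PSNR is 20 dB, which makes the scale of the reported comparisons easy to interpret.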
Affiliation(s)
- Shubham Kumar Gupta
- Department of Chemical Engineering, Indian Institute of Technology, Guwahati, India
- Rishant Pal
- Department of Electronics and Electrical Engineering, Indian Institute of Technology, Guwahati, India
- Azeem Ahmad
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Norway
- Frank Melandsø
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Norway
- Anowarul Habib
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Norway
7.
Xi NM, Li JJ. Exploring the optimization of autoencoder design for imputing single-cell RNA sequencing data. Comput Struct Biotechnol J 2023; 21:4079-4095. PMID: 37671239; PMCID: PMC10475479; DOI: 10.1016/j.csbj.2023.07.041.
Abstract
Autoencoders are the backbone of many imputation methods that aim to relieve the sparsity issue in single-cell RNA sequencing (scRNA-seq) data. The imputation performance of an autoencoder depends on both the neural network architecture and the hyperparameter choices. So far, the single-cell literature lacks a formal discussion of how to design the neural network and choose the hyperparameters. Here, we conducted an empirical study to answer this question. Our study used many real and simulated scRNA-seq datasets to examine the impacts of the neural network architecture, the activation function, and the regularization strategy on imputation accuracy and downstream analyses. Our results show that (i) deeper and narrower autoencoders generally lead to better imputation performance; (ii) the sigmoid and tanh activation functions consistently outperform other commonly used functions, including ReLU; and (iii) regularization improves the accuracy of imputation and of downstream cell clustering and differentially expressed (DE) gene analyses. Notably, our results differ from common practices in the computer vision field regarding the activation function and the regularization strategy. Overall, our study offers practical guidance on optimizing autoencoder design for scRNA-seq data imputation.
Affiliation(s)
- Nan Miles Xi
- Department of Mathematics and Statistics, Loyola University Chicago, Chicago, IL 60660, USA
- Jingyi Jessica Li
- Department of Statistics and Data Science, University of California, Los Angeles, CA 90095-1554, USA
- Department of Human Genetics, University of California, Los Angeles, CA 90095-7088, USA
- Department of Computational Medicine, University of California, Los Angeles, CA 90095-1766, USA
- Department of Biostatistics, University of California, Los Angeles, CA 90095-1772, USA
8.
Zhang M, Li S, Xue M, Zhu Q. Two-stage classification strategy for breast cancer diagnosis using ultrasound-guided diffuse optical tomography and deep learning. J Biomed Opt 2023; 28:086002. PMID: 37638108; PMCID: PMC10457211; DOI: 10.1117/1.jbo.28.8.086002.
Abstract
Significance: Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated great potential for breast cancer diagnosis, where real-time or near real-time diagnosis with high accuracy is desired.
Aim: We aim to use US-guided DOT to achieve automated, fast, and accurate classification of breast lesions.
Approach: We propose a two-stage classification strategy with deep learning. In the first stage, US images and histograms created from DOT perturbation measurements are combined to identify benign lesions. The remaining suspicious lesions are passed to the second stage, which combines US image features, DOT histogram features, and 3D DOT reconstructed images for the final diagnosis.
Results: The first stage alone identified 73.0% of benign cases without image reconstruction. In distinguishing between benign and malignant breast lesions in patient data, the two-stage approach achieved an area under the receiver operating characteristic curve of 0.946, outperforming all single-modality models and a single-stage classification model that combines all US image, DOT histogram, and imaging features.
Conclusions: The proposed two-stage classification strategy achieves better classification accuracy than single-modality models and a single-stage model that combines all features, and can potentially distinguish breast cancers from benign lesions in near real time.
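The two-stage gating logic can be sketched abstractly: a cheap stage-1 score screens out confidently benign cases, and only the remainder pays the cost of the heavier stage-2 model (the one that would use reconstructed DOT images). The threshold value and function names below are illustrative assumptions, not the paper's settings.

```python
def two_stage_classify(stage1_benign_prob, stage2_malignant_prob, x,
                       benign_cutoff=0.9):
    """Cascade classifier: if stage 1 is confident the lesion is benign,
    return early without running stage 2; otherwise defer to stage 2."""
    if stage1_benign_prob(x) >= benign_cutoff:
        return "benign"                      # early exit, no reconstruction
    return "malignant" if stage2_malignant_prob(x) >= 0.5 else "benign"
```

The design choice is the usual cascade trade-off: a high stage-1 cutoff keeps the early exits safe (few malignant lesions slip through), while still sparing a large fraction of cases from the expensive second stage, as in the 73.0% of benign cases the first stage handled alone.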
Affiliation(s)
- Menghao Zhang
- Washington University in St. Louis, Department of Electrical and Systems Engineering, St. Louis, Missouri, United States
- Shuying Li
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
- Minghao Xue
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
- Quing Zhu
- Washington University in St. Louis, Department of Electrical and Systems Engineering, St. Louis, Missouri, United States
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
- Washington University School of Medicine, Department of Radiology, St. Louis, Missouri, United States
9.
Hellström H, Liedes J, Rainio O, Malaspina S, Kemppainen J, Klén R. Classification of head and neck cancer from PET images using convolutional neural networks. Sci Rep 2023; 13:10528. PMID: 37386289; PMCID: PMC10310830; DOI: 10.1038/s41598-023-37603-1.
Abstract
The aim of this study was to develop a convolutional neural network (CNN) for classifying positron emission tomography (PET) images of patients with and without head and neck squamous cell carcinoma (HNSCC) and other types of head and neck cancer. A PET/magnetic resonance imaging scan with 18F-fluorodeoxyglucose (18F-FDG) was performed for 200 head and neck cancer patients, 182 of whom were diagnosed with HNSCC, and the locations of cancer tumors were marked on the images with a binary mask by a medical doctor. The models were trained and tested with five-fold cross-validation on a primary dataset of 1990 2D images, obtained by dividing the original 3D images of 178 HNSCC patients into transaxial slices, and on an additional test set of 238 images from patients with head and neck cancers other than HNSCC. A shallow and a deep CNN were built using the U-Net architecture to classify the data into two groups according to whether an image contains cancer or not; the impact of data augmentation on the performance of the two CNNs was also considered. According to our results, the best model for this task in terms of area under the receiver operating characteristic curve (AUC) is the deep augmented model, with a median AUC of 85.1%. The four models had the highest sensitivity for HNSCC tumors on the root of the tongue (median sensitivities of 83.3-97.7%), in the fossa piriformis (80.2-93.3%), and in the oral cavity (70.4-81.7%). Although the models were trained with only HNSCC data, they also had very good sensitivity for detecting follicular and papillary carcinoma of the thyroid gland and mucoepidermoid carcinoma of the parotid gland (91.7-100%).
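Five-fold cross-validation, as used above, partitions the data so that every image appears in exactly one test fold and in the training set of the other four. A small dependency-free sketch of the index bookkeeping (our own helper, not the study's code; splitting by patient rather than by slice would be the stricter choice in practice):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation over
    n samples, after a seeded shuffle."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]      # k near-equal interleaved folds
    for test in folds:
        train = [j for f in folds if f is not test for j in f]
        yield train, test
```

Each index lands in exactly one test fold, so the k test sets together cover the whole dataset without overlap.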
Affiliation(s)
- Henri Hellström
- Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
- Joonas Liedes
- Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
- Oona Rainio
- Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
- Simona Malaspina
- Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
- Department of Clinical Physiology and Nuclear Medicine, Turku University Hospital, Turku, Finland
- Jukka Kemppainen
- Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
- Department of Clinical Physiology and Nuclear Medicine, Turku University Hospital, Turku, Finland
- Riku Klén
- Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
10.
Xu M, Chen Z, Zheng J, Zhao Q, Yuan Z. Artificial Intelligence-Aided Optical Imaging for Cancer Theranostics. Semin Cancer Biol 2023: S1044-579X(23)00094-9. PMID: 37302519; DOI: 10.1016/j.semcancer.2023.06.003.
Abstract
The use of artificial intelligence (AI) to assist biomedical imaging has demonstrated high accuracy and high efficiency in medical decision-making for individualized cancer medicine. In particular, optical imaging methods can visualize both structural and functional information of tumor tissues with high contrast, low cost, and noninvasiveness. However, no systematic work has inspected the recent advances in AI-aided optical imaging for cancer theranostics. In this review, we demonstrate how AI can guide optical imaging to improve accuracy in tumor detection, automated analysis and prediction of histopathological sections, treatment monitoring, and prognosis, using computer vision, deep learning, and natural language processing. The optical imaging techniques involved mainly comprise tomography and microscopy methods such as optical endoscopy, optical coherence tomography, photoacoustic imaging, diffuse optical tomography, optical microscopy, Raman imaging, and fluorescence imaging. Existing problems, possible challenges, and future prospects for AI-aided optical imaging protocols for cancer theranostics are also discussed. We expect the present work to open a new avenue for precision oncology using AI and optical imaging tools.
Affiliation(s)
- Mengze Xu
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai, China
- Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Centre for Cognitive and Brain Sciences, University of Macau, Macau SAR, China
- Zhiyi Chen
- Institute of Medical Imaging, Hengyang Medical School, University of South China, Hengyang, China
- Junxiao Zheng
- Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Centre for Cognitive and Brain Sciences, University of Macau, Macau SAR, China
- Qi Zhao
- Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Zhen Yuan
- Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Centre for Cognitive and Brain Sciences, University of Macau, Macau SAR, China
11.
Zhang M, Xue M, Li S, Zou Y, Zhu Q. Fusion deep learning approach combining diffuse optical tomography and ultrasound for improving breast cancer classification. Biomed Opt Express 2023; 14:1636-1646. PMID: 37078047; PMCID: PMC10110311; DOI: 10.1364/boe.486292.
Abstract
Diffuse optical tomography (DOT) is a promising technique that provides functional information related to tumor angiogenesis. However, reconstructing the DOT function map of a breast lesion is an ill-posed and underdetermined inverse process. A co-registered ultrasound (US) system that provides structural information about the breast lesion can improve the localization and accuracy of DOT reconstruction. Additionally, the well-known US characteristics of benign and malignant breast lesions can further improve cancer diagnosis based on DOT alone. Inspired by a fusion model deep learning approach, we combined US features extracted by a modified VGG-11 network with images reconstructed from a DOT deep learning auto-encoder-based model to form a new neural network for breast cancer diagnosis. The combined neural network model was trained with simulation data and fine-tuned with clinical data: it achieved an AUC of 0.931 (95% CI: 0.919-0.943), superior to those achieved using US images alone (0.860) or DOT images alone (0.842).
Affiliation(s)
- Menghao Zhang
- Electrical and System Engineering Department, Washington University in St. Louis, 1 Brooking Dr, St. Louis, MO 63130, USA
- Minghao Xue
- Biomedical Engineering Department, Washington University in St. Louis, 1 Brooking Dr, St. Louis, MO 63130, USA
- Shuying Li
- Biomedical Engineering Department, Washington University in St. Louis, 1 Brooking Dr, St. Louis, MO 63130, USA
- Yun Zou
- Biomedical Engineering Department, Washington University in St. Louis, 1 Brooking Dr, St. Louis, MO 63130, USA
- Quing Zhu
- Electrical and System Engineering Department, Washington University in St. Louis, 1 Brooking Dr, St. Louis, MO 63130, USA
- Biomedical Engineering Department, Washington University in St. Louis, 1 Brooking Dr, St. Louis, MO 63130, USA
12.
Ali S, Jonmohamadi Y, Fontanarosa D, Crawford R, Pandey AK. One step surgical scene restoration for robot assisted minimally invasive surgery. Sci Rep 2023; 13:3127. PMID: 36813821; PMCID: PMC9947129; DOI: 10.1038/s41598-022-26647-4.
Abstract
Minimally invasive surgery (MIS) offers several advantages to patients, including minimal blood loss and quick recovery time. However, the lack of tactile or haptic feedback and poor visualization of the surgical site often result in unintentional tissue damage. Poor visualization also limits the contextual detail that can be collected from imaged frames, so computational methods such as tissue and tool tracking, scene segmentation, and depth estimation are of paramount interest. Here, we discuss an online preprocessing framework that overcomes the visualization challenges routinely encountered in MIS. We resolve three pivotal surgical scene reconstruction tasks in a single step: (i) denoising, (ii) deblurring, and (iii) color correction. Our method produces a latent clean and sharp image in the standard RGB color space from noisy, blurred, raw inputs in a single end-to-end preprocessing step. The approach is compared against current state-of-the-art methods that perform each restoration task separately. Results from knee arthroscopy show that our method outperforms existing solutions in tackling high-level vision tasks at significantly reduced computation time.
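Of the three restoration tasks, color correction is the easiest to illustrate in isolation. A common classical baseline (not necessarily the mechanism inside the paper's learned model) is gray-world white balance: assume the average scene color is neutral gray and rescale each RGB channel so its mean matches the global mean.

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance for a float image of shape (H, W, 3) with
    values in [0, 1]: rescale each channel so all channel means agree."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / np.maximum(channel_means, 1e-8)
    return np.clip(img * gain, 0.0, 1.0)
```

For a uniformly tinted frame the corrected channel means become equal, which is the neutrality assumption the method enforces; learned approaches like the one above aim to do this jointly with denoising and deblurring.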
Affiliation(s)
- Shahnewaz Ali
- School of Electrical Engineering and Robotics, Faculty of Engineering, Queensland University of Technology (QUT), Gardens Point, Brisbane, QLD 4001, Australia
- Yaqub Jonmohamadi
- School of Electrical Engineering and Robotics, Faculty of Engineering, Queensland University of Technology (QUT), Gardens Point, Brisbane, QLD 4001, Australia
- Davide Fontanarosa
- School of Clinical Sciences, Faculty of Health, Queensland University of Technology (QUT), Gardens Point, Brisbane, QLD 4001, Australia
- Ross Crawford
- School of Mechanical, Medical and Process Engineering, Faculty of Engineering, Queensland University of Technology (QUT), Gardens Point, Brisbane, QLD 4001, Australia
- Ajay K. Pandey
- School of Electrical Engineering and Robotics, Faculty of Engineering, Queensland University of Technology (QUT), Gardens Point, Brisbane, QLD 4001, Australia
13
Ayana G, Dese K, Dereje Y, Kebede Y, Barki H, Amdissa D, Husen N, Mulugeta F, Habtamu B, Choe SW. Vision-Transformer-Based Transfer Learning for Mammogram Classification. Diagnostics (Basel) 2023; 13:178. [PMID: 36672988] [PMCID: PMC9857963] [DOI: 10.3390/diagnostics13020178]
Abstract
Breast mass identification is a crucial procedure during mammogram-based early breast cancer diagnosis. However, it is difficult to determine whether a breast lump is benign or cancerous at early stages. Convolutional neural networks (CNNs) have been used to solve this problem and have provided useful advancements. However, CNNs focus only on a certain portion of the mammogram while ignoring the rest, and they incur computational complexity because of multiple convolutions. Recently, vision transformers have been developed to overcome such limitations of CNNs, ensuring better or comparable performance in natural image classification. However, the utility of this technique has not been thoroughly investigated in the medical image domain. In this study, we developed a transfer learning technique based on vision transformers to classify breast mass mammograms. The area under the receiver operating characteristic curve of the new model was estimated as 1 ± 0, thus outperforming the CNN-based transfer learning models and vision transformer models trained from scratch. The technique can, hence, be applied in a clinical setting to improve the early diagnosis of breast cancer.
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Yisak Dereje
- Department of Information Engineering, Marche Polytechnic University, 60121 Ancona, Italy
- Yonas Kebede
- Biomedical Engineering Unit, Black Lion Hospital, Addis Ababa University, Addis Ababa 1000, Ethiopia
- Hika Barki
- Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Republic of Korea
- Dechassa Amdissa
- Department of Basic and Applied Science for Engineering, Sapienza University of Rome, 00161 Roma, Italy
- Nahimiya Husen
- Department of Bioengineering and Robotics, Campus Bio-Medico University of Rome, 00128 Roma, Italy
- Fikadu Mulugeta
- Center of Biomedical Engineering, Addis Ababa Institute of Technology, Addis Ababa University, Addis Ababa 1000, Ethiopia
- Bontu Habtamu
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Se-Woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
14
Morales-Curiel LF, Gonzalez AC, Castro-Olvera G, Lin LCL, El-Quessny M, Porta-de-la-Riva M, Severino J, Morera LB, Venturini V, Ruprecht V, Ramallo D, Loza-Alvarez P, Krieg M. Volumetric imaging of fast cellular dynamics with deep learning enhanced bioluminescence microscopy. Commun Biol 2022; 5:1330. [PMID: 36463346] [PMCID: PMC9719505] [DOI: 10.1038/s42003-022-04292-x]
Abstract
Bioluminescence microscopy is an appealing alternative to fluorescence microscopy because it does not depend on external illumination and consequently neither produces spurious background autofluorescence nor perturbs intrinsically photosensitive processes in living cells and animals. The low photon emission of known luciferases, however, demands long exposure times that are prohibitive for imaging fast biological dynamics. To increase the versatility of bioluminescence microscopy, we present an improved low-light microscope in combination with deep learning methods to image extremely photon-starved samples, enabling subsecond exposures for timelapse and volumetric imaging. We apply our method to image subcellular dynamics in mouse embryonic stem cells, epithelial morphology during zebrafish development, and DAF-16 FoxO transcription factor shuttling from the cytoplasm to the nucleus under external stress. Finally, we concatenate neural networks for denoising and light-field deconvolution to resolve intracellular calcium dynamics in three dimensions in freely moving Caenorhabditis elegans.
Affiliation(s)
- Jacqueline Severino
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Laura Battle Morera
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Valeria Venturini
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain
- Verena Ruprecht
- Center for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain
- ICREA, Pg. Lluis Companys 23, 08010, Barcelona, Spain
- Diego Ramallo
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain
- Michael Krieg
- ICFO, Institut de Ciencies Fotòniques, Castelldefels, Spain
15
Wen Y, Guo D, Zhang J, Liu X, Liu T, Li L, Jiang S, Wu D, Jiang H. Clinical photoacoustic/ultrasound dual-modal imaging: Current status and future trends. Front Physiol 2022; 13:1036621. [PMID: 36388111] [PMCID: PMC9651137] [DOI: 10.3389/fphys.2022.1036621]
Abstract
Photoacoustic tomography (PAT) is an emerging biomedical imaging modality that combines optical and ultrasonic imaging with overlapping fields of view. This hybrid approach allows a natural integration of PAT and ultrasound (US) imaging in a single platform. Because of the similarities in signal acquisition and processing, combining PAT and US imaging creates a new hybrid modality for novel clinical applications. In recent years, particular attention has been paid to the development of PAT/US dual-modal systems highlighting mutual benefits in clinical cases, with the aim of substantially improving the specificity and sensitivity of disease diagnosis. The feasibility and accuracy demonstrated in these efforts open an avenue for translating PAT/US imaging to practical clinical applications. In this review, current PAT/US dual-modal imaging systems are discussed in detail, and their promising clinical applications are presented and compared systematically. Finally, this review describes the potential impact of these combined systems in the coming future.
Affiliation(s)
- Yanting Wen
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Dan Guo
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Jing Zhang
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Xiaotian Liu
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Ting Liu
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Lu Li
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Shixie Jiang
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, United States
- Dan Wu
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Huabei Jiang
- Department of Medical Engineering, University of South Florida, Tampa, FL, United States
16
Wen Y, Wu D, Zhang J, Jiang S, Xiong C, Guo D, Chi Z, Chen Y, Li L, Yang Y, Liu T, Jiang H. Evaluation of Tracheal Stenosis in Rabbits Using Multispectral Optoacoustic Tomography. Front Bioeng Biotechnol 2022; 10:860305. [PMID: 35309993] [PMCID: PMC8931196] [DOI: 10.3389/fbioe.2022.860305]
Abstract
Objective: Photoacoustic tomography (PAT) and multispectral optoacoustic tomography (MSOT) are evolving technologies capable of delivering real-time, high-resolution images of tissues. The purpose of this study was to evaluate the feasibility of using PAT and MSOT for detecting histology in a rabbit tracheal stenosis model.
Method: A total of 12 rabbits (nine stenosis and three control) were randomly divided into four groups (A, B, C, and D). Each group consisted of three rabbits, which were staged at the first, fourth, and eighth weeks of stenosis progression, respectively. PAT/MSOT images and corresponding histology from these experimental animals were compared to analyze the morphologic features and quantitative tracheal measurements at different tracheal stenosis stages.
Result: Both the PAT images and corresponding histology indicated the most severe degree of stenosis in group C. MSOT images indicated notable differences in the tracheal contents of groups B and D.
Conclusion: This study suggests that PAT and MSOT are potentially valuable non-invasive modalities capable of evaluating tracheal structure and function in vivo.
Affiliation(s)
- Yanting Wen
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Dan Wu
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Jing Zhang
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Shixie Jiang
- Department of Psychiatry and Behavioral Neurosciences, Morsani College of Medicine, University of South Florida, Tampa, FL, United States
- Chunyan Xiong
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Dan Guo
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Zihui Chi
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Yi Chen
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Lun Li
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Ying Yang
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Ting Liu
- Department of Ultrasound Imaging, The Fifth People’s Hospital of Chengdu, Chengdu, China
- Huabei Jiang
- Department of Medical Engineering, University of South Florida, Tampa, FL, United States
17
Ma Y, Peng Y. Simultaneous detection and diagnosis of mammogram mass using bilateral analysis and soft label based metric learning. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.01.001]
18
Ericsson-Szecsenyi R, Zhang G, Redler G, Feygelman V, Rosenberg S, Latifi K, Ceberg C, Moros EG. Robustness Assessment of Images From a 0.35T Scanner of an Integrated MRI-Linac: Characterization of Radiomics Features in Phantom and Patient Data. Technol Cancer Res Treat 2022; 21:15330338221099113. [PMID: 35521966] [PMCID: PMC9083059] [DOI: 10.1177/15330338221099113]
Abstract
Purpose: Radiomics entails the extraction of quantitative imaging biomarkers (radiomics features) hypothesized to provide additional pathophysiological and/or clinical information beyond qualitative visual observation and interpretation. This retrospective study explores the variability of radiomics features extracted from images acquired with the 0.35 T scanner of an integrated MRI-Linac. We hypothesized we would be able to identify features with high repeatability and reproducibility over various imaging conditions using phantom and patient imaging studies. We also compared our results with relevant findings from the literature.
Methods: Eleven scans of a Magphan® RT phantom over 13 months and 11 scans of a ViewRay Daily QA phantom over 11 days constituted the phantom data. Patient datasets included 50 images from ten anonymized stereotactic body radiation therapy (SBRT) pancreatic cancer patients (50 Gy in 5 fractions). A True Fast Imaging with Steady-State Free Precession (TRUFI) pulse sequence was selected, using voxel resolutions of 1.5 mm × 1.5 mm × 1.5 mm and 1.5 mm × 1.5 mm × 3.0 mm for phantom and patient data, respectively. A total of 1087 shape-based, first-, second-, and higher-order features were extracted, followed by robustness analysis. Robustness was assessed with the Coefficient of Variation (CoV < 5%).
Results: We identified 130 robust features across the datasets. Robust features were found within each category except for two second-order sub-groups, namely Gray Level Size Zone Matrix (GLSZM) and Neighborhood Gray Tone Difference Matrix (NGTDM). Additionally, several robust features agreed with findings from other stability assessments or predictive-performance studies in the literature.
Conclusion: We verified the stability of the 0.35 T scanner of an integrated MRI-Linac for longitudinal radiomics phantom studies and identified robust features over various imaging conditions. We conclude that phantom measurements can be used to identify robust radiomics features. More stability assessment research is warranted.
Affiliation(s)
- Geoffrey Zhang
- Radiation Oncology Department, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
- Gage Redler
- Radiation Oncology Department, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
- Vladimir Feygelman
- Radiation Oncology Department, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
- Stephen Rosenberg
- Radiation Oncology Department, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
- Kujtim Latifi
- Radiation Oncology Department, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
- Crister Ceberg
- Department of Medical Radiation Physics, Clinical Sciences, Lund University, Lund, Sweden
- Eduardo G Moros
- Radiation Oncology Department, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA
19
Gao M, Guo Y, Hormel TT, Sun J, Hwang TS, Jia Y. Reconstruction of high-resolution 6×6-mm OCT angiograms using deep learning. Biomed Opt Express 2020; 11:3585-3600. [PMID: 33014553] [PMCID: PMC7510902] [DOI: 10.1364/boe.394301]
Abstract
Typical optical coherence tomographic angiography (OCTA) acquisition areas on commercial devices are 3×3- or 6×6-mm. Compared to 3×3-mm angiograms with proper sampling density, 6×6-mm angiograms have significantly lower scan quality, with reduced signal-to-noise ratio and worse shadow artifacts due to undersampling. Here, we propose a deep-learning-based high-resolution angiogram reconstruction network (HARNet) to generate enhanced 6×6-mm superficial vascular complex (SVC) angiograms. The network was trained on data from 3×3-mm and 6×6-mm angiograms from the same eyes. The reconstructed 6×6-mm angiograms have significantly lower noise intensity, stronger contrast and better vascular connectivity than the original images. The algorithm did not generate false flow signal at the noise level presented by the original angiograms. The image enhancement produced by our algorithm may improve biomarker measurements and qualitative clinical assessment of 6×6-mm OCTA.
Affiliation(s)
- Min Gao
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Yukun Guo
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Tristan T. Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Jiande Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan 250358, China
- Thomas S. Hwang
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
20
Shaikh TA, Ali R. An intelligent healthcare system for optimized breast cancer diagnosis using harmony search and simulated annealing (HS-SA) algorithm. Inform Med Unlocked 2020. [DOI: 10.1016/j.imu.2020.100408]