1.
El-Baz A, Giridharan GA, Shalaby A, Mahmoud AH, Ghazal M. Special Issue "Computer Aided Diagnosis Sensors". Sensors (Basel) 2022;22:8052. [PMID: 36298403] [PMCID: PMC9610085] [DOI: 10.3390/s22208052] [Received: 10/12/2022] [Accepted: 10/19/2022]
Abstract
Sensors used to diagnose, monitor or treat diseases in the medical domain are known as medical sensors [...].
Affiliation(s)
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Shalaby
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali H. Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
2.
Yun Z, Xu Q, Wang G, Jin S, Lin G, Feng Q, Yuan J. EVA: Fully automatic hemodynamics assessment system for the bulbar conjunctival microvascular network. Comput Methods Programs Biomed 2022;216:106631. [PMID: 35123347] [DOI: 10.1016/j.cmpb.2022.106631] [Received: 08/25/2021] [Revised: 01/07/2022] [Accepted: 01/09/2022]
Abstract
BACKGROUND AND OBJECTIVE Conjunctival microcirculation has been used to quantitatively assess microvascular changes due to systemic disorders. The space between red blood cell clusters in conjunctival microvessels is essential for assessing hemodynamics. However, it causes discontinuities in vessel image segmentation and increases the difficulty of automatically measuring blood velocity. In this study, we developed an EVA system based on deep learning to maintain vessel segmentation continuity and automatically measure blood velocity. METHODS The EVA system sequentially performs image registration, vessel segmentation, diameter measurement, and blood velocity measurement on conjunctival images. A U-Net model optimized with a connectivity-preserving loss function was used to solve the problem of discontinuities in vessel segmentation. Then, an automatic measurement algorithm based on line segment detection was proposed to obtain accurate blood velocity. Finally, the EVA system assessed hemodynamic parameters based on the measured blood velocity in each vessel segment. RESULTS The EVA system was validated for 23 videos of conjunctival microcirculation captured using functional slit-lamp microscopy. The U-Net model produced the longest average vessel segment length, 158.03 ± 181.87 µm, followed by the adaptive threshold method and Frangi filtering, which produced lengths of 120.05 ± 151.47 µm and 99.94 ± 138.12 µm, respectively. The proposed method and one based on cross-correlation were validated to measure blood velocity for a dataset consisting of 30 vessel segments. Bland-Altman analysis showed that compared with the cross-correlation method (bias: 0.36, SD: 0.32), the results of the proposed method were more consistent with a manual measurement-based gold standard (bias: -0.04, SD: 0.14). 
CONCLUSIONS The proposed EVA system provides an automatic and reliable solution for the quantitative assessment of hemodynamics in conjunctival microvascular images and can potentially be applied to hypoglossal microcirculation images.
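The Bland-Altman comparison quoted above (bias and SD of the differences against a manual gold standard) can be sketched as follows. This is a generic illustration of the statistic, not the authors' code; the function name and the toy velocity readings are hypothetical.

```python
from statistics import mean, stdev

def bland_altman(measured, gold):
    """Bland-Altman agreement statistics: bias is the mean of the
    pairwise differences, SD is their (sample) standard deviation,
    and the 95% limits of agreement are bias +/- 1.96 * SD."""
    diffs = [m - g for m, g in zip(measured, gold)]
    bias = mean(diffs)
    sd = stdev(diffs)
    lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
    return bias, sd, (lower, upper)

# Toy readings: automated vs. manual blood velocity (mm/s)
auto = [0.52, 0.61, 0.48, 0.70]
manual = [0.50, 0.60, 0.50, 0.68]
bias, sd, loa = bland_altman(auto, manual)
```

A bias close to zero with a narrow SD, as the paper reports for its method (bias -0.04, SD 0.14), indicates close agreement with the manual measurement.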
Affiliation(s)
- Zhaoqiang Yun
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Qing Xu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Gengyuan Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Shuang Jin
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Guoye Lin
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Qianjin Feng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China.
- Jin Yuan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.
3.
Lin G, Bai H, Zhao J, Yun Z, Chen Y, Pang S, Feng Q. Improving sensitivity and connectivity of retinal vessel segmentation via error discrimination network. Med Phys 2022;49:4494-4507. [PMID: 35338781] [DOI: 10.1002/mp.15627] [Received: 11/10/2021] [Revised: 03/04/2022] [Accepted: 03/08/2022]
Abstract
PURPOSE Automated retinal vessel segmentation is crucial to the early diagnosis and treatment of ophthalmological diseases. Many deep learning-based methods have shown exceptional success in this task. However, current approaches still perform inadequately on challenging vessels (e.g., thin vessels) and rarely focus on the connectivity of vessel segmentation. METHODS We propose using an error discrimination network (D) to distinguish whether the vessel pixel predictions of the segmentation network (S) are correct, and S is trained to produce fewer predictions that D flags as errors. Our method is similar to, but not the same as, a generative adversarial network (GAN). Three types of vessel samples and corresponding error masks are used to train D, as follows: (1) vessel ground truth; (2) vessels segmented by S; (3) artificial thin-vessel error samples that further improve the sensitivity of D to errors in small vessels. As an auxiliary loss function of S, D strengthens the supervision of difficult vessels. Optionally, the errors predicted by D can be used to correct the segmentation result of S. RESULTS Compared with state-of-the-art methods, our method achieves the highest scores in sensitivity (86.19%, 86.26%, and 86.53%) and G-Mean (91.94%, 91.30%, and 92.76%) on three public datasets, namely, STARE, DRIVE, and HRF. Our method also maintains a competitive level in other metrics. On the STARE dataset, the F1-score and AUC of our method rank second and first, respectively, reaching 84.51% and 98.97%. The top scores of the three topology-relevant metrics (Conn, Inf, and Cor) demonstrate that the vessels extracted by our method have excellent connectivity. We also validate the effectiveness of error discrimination supervision and artificial error sample training through ablation experiments. CONCLUSIONS The proposed method provides an accurate and robust solution for difficult vessel segmentation.
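The G-Mean reported above is the geometric mean of sensitivity and specificity, a standard summary for imbalanced pixel-level segmentation. As a generic illustration (not the authors' implementation; the function name and toy masks are hypothetical), these metrics can be computed from flattened binary masks:

```python
from math import sqrt

def seg_metrics(pred, truth):
    """Pixel-level sensitivity, specificity, and G-Mean for binary
    vessel masks given as flat 0/1 sequences of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    return sens, spec, sqrt(sens * spec)  # G-Mean

# Toy 1-D masks standing in for flattened segmentation images
pred  = [1, 1, 0, 0, 1, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1, 1, 0]
sens, spec, gmean = seg_metrics(pred, truth)
```

Because vessel pixels are a small minority of a fundus image, G-Mean rewards methods that keep sensitivity high without sacrificing specificity.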
Affiliation(s)
- Guoye Lin
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Hanhua Bai
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Jie Zhao
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China; School of Medical Information Engineering, Guangdong Pharmaceutical University, Guangzhou, Guangdong, China
- Zhaoqiang Yun
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Yangfan Chen
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Shumao Pang
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Qianjin Feng
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
4.
Rahman MH, Jeong HW, Kim NR, Kim DY. Automatic Quantification of Anterior Lamina Cribrosa Structures in Optical Coherence Tomography Using a Two-Stage CNN Framework. Sensors (Basel) 2021;21:5383. [PMID: 34450823] [PMCID: PMC8400634] [DOI: 10.3390/s21165383] [Received: 07/05/2021] [Revised: 07/28/2021] [Accepted: 08/03/2021]
Abstract
In this study, we propose a new intelligent system to automatically quantify the morphological parameters of the lamina cribrosa (LC) from optical coherence tomography (OCT) images, including LC depth, curve depth, and curve index. The proposed system consists of a two-stage deep learning (DL) model, comprising a detection model and a segmentation model, together with a quantification process and a post-processing scheme. The models were used to solve the class imbalance problem and to obtain Bruch's membrane opening (BMO) and anterior LC information. The detection model was implemented using YOLOv3 to acquire the BMO and LC position information. An Attention U-Net segmentation model was used to compute accurate locations of the BMO and LC curve. In addition, post-processing with polynomial regression was applied to obtain the anterior LC curve boundary. Finally, the numerical values of the morphological parameters were quantified from the BMO and LC curve information using an image processing algorithm. The average precision values for detecting BMO and LC information were 99.92% and 99.18%, respectively. The predicted and ground-truth values were highly correlated (R² = 0.96), supporting the quantification results. The proposed system thus performs accurate, fully automatic quantification of BMO and LC morphological parameters using a DL model.
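The final quantification step described above ultimately reduces to geometry on the detected BMO and LC points. As an illustrative sketch only (the function name and toy coordinates are hypothetical, not from the paper), LC depth can be approximated as the maximum perpendicular distance from the BMO reference line, joining the two BMO points, to the anterior LC boundary:

```python
def lc_depth(bmo_left, bmo_right, boundary):
    """Approximate lamina cribrosa depth: the maximum perpendicular
    distance from the BMO reference line (through the two BMO points)
    to the anterior LC boundary. Points are (x, y) tuples in image
    coordinates (y increases downward)."""
    (x1, y1), (x2, y2) = bmo_left, bmo_right
    # Line through the BMO points in implicit form a*x + b*y + c = 0
    a, b = y2 - y1, x1 - x2
    c = x2 * y1 - x1 * y2
    norm = (a * a + b * b) ** 0.5
    return max(abs(a * x + b * y + c) / norm for x, y in boundary)

# Toy example: horizontal BMO line at y=0, boundary dipping to 50 px
bmo_l, bmo_r = (0, 0), (100, 0)
boundary = [(20, 30), (50, 50), (80, 28)]
depth = lc_depth(bmo_l, bmo_r, boundary)
```

In practice the boundary points would come from the fitted polynomial curve rather than a hand-picked list, and pixel distances would be converted to micrometers using the scan resolution.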
Affiliation(s)
- Md Habibur Rahman
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (M.H.R.); (H.W.J.)
- Hyeon Woo Jeong
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (M.H.R.); (H.W.J.)
- Na Rae Kim
- Department of Ophthalmology, Inha University, Incheon 22212, Korea
- Dae Yu Kim
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (M.H.R.); (H.W.J.)
- Inha Research Institute for Aerospace Medicine, Inha University, Incheon 22212, Korea
- Center for Sensor Systems, Inha University, Incheon 22212, Korea