1
Giap BD, Srinivasan K, Mahmoud O, Mian SI, Tannen BL, Nallasamy N. Adaptive Tensor-Based Feature Extraction for Pupil Segmentation in Cataract Surgery. IEEE J Biomed Health Inform 2024; 28:1599-1610. [PMID: 38127596] [PMCID: PMC11018356] [DOI: 10.1109/jbhi.2023.3345837]
Abstract
Cataract surgery remains the only definitive treatment for visually significant cataracts, a major cause of preventable blindness worldwide. Successful cataract surgery relies on stable dilation of the pupil. Automated pupil segmentation from surgical videos can help surgeons detect risk factors for pupillary instability before surgical complications develop. However, illumination variations, instrument obstruction, and lens material hydration during cataract surgery can limit pupil segmentation accuracy. To address these problems, we propose a novel method, adaptive wavelet tensor feature extraction (AWTFE), designed to enhance the accuracy of deep-learning-based pupil recognition systems. First, we represent the correlations among spatial information, color channels, and wavelet subbands by constructing a third-order tensor. We then use higher-order singular value decomposition to adaptively eliminate redundant information and estimate pupil feature information. We evaluated the proposed method with state-of-the-art deep learning segmentation models on our BigCat dataset, consisting of 5,700 annotated intraoperative images from 190 cataract surgeries, and on the public CaDIS dataset. The results show that AWTFE effectively identifies features relevant to the pupil region and improves the overall performance of segmentation models by up to 2.26% (BigCat) and 3.31% (CaDIS). Incorporating AWTFE led to statistically significant improvements in segmentation performance (P < 1.29 × 10⁻¹⁰ for each model) and yielded the highest-performing model overall (Dice coefficients of 94.74% and 96.71% on BigCat and CaDIS, respectively). In performance comparisons, AWTFE consistently outperformed other feature extraction methods in enhancing model performance. In addition, AWTFE significantly improved pupil recognition performance, by up to 2.87%, in particularly challenging phases of cataract surgery.
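The truncated higher-order SVD step described in this abstract can be sketched as follows. This is a minimal NumPy illustration on a toy third-order tensor (the shapes and ranks are hypothetical), not the authors' AWTFE implementation: each mode of the tensor is unfolded, the leading left singular vectors are kept, and the tensor is projected onto the truncated bases to discard redundant components.

```python
import numpy as np

def hosvd_truncate(tensor, ranks):
    """Truncated HOSVD of a third-order tensor: per-mode SVD of the
    unfoldings, keep the leading singular vectors, project the core."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-n unfolding: bring `mode` to the front, flatten the rest.
        unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        # Project each mode onto its truncated orthonormal basis.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Toy tensor standing in for (rows x cols x wavelet-subband) features.
T = np.random.default_rng(0).standard_normal((8, 8, 4))
core, factors = hosvd_truncate(T, ranks=(4, 4, 2))

# Low-rank reconstruction back to the original shape.
approx = core
for mode, U in enumerate(factors):
    approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
print(core.shape, approx.shape)  # (4, 4, 2) (8, 8, 4)
```

The truncated core and factor matrices together form a compressed representation; dominant structure survives while low-energy (redundant) components are dropped.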
2
Dudaie M, Barnea I, Nissim N, Shaked NT. On-chip label-free cell classification based directly on off-axis holograms and spatial-frequency-invariant deep learning. Sci Rep 2023; 13:12370. [PMID: 37524884] [PMCID: PMC10390541] [DOI: 10.1038/s41598-023-38160-3]
Abstract
We present a rapid label-free imaging flow cytometry and cell classification approach that operates directly on raw digital holograms. Off-axis holography enables real-time acquisition of cells during rapid flow, but classifying the cells typically requires reconstructing their quantitative phase profiles, which is time-consuming. Here, we present a new approach for label-free classification of individual cells based directly on the raw off-axis holographic images, each of which encodes the complete complex wavefront (amplitude and quantitative phase profiles) of the cell. To achieve this, we built a convolutional neural network that is invariant to the spatial frequencies and directions of the interference fringes of the off-axis holograms. We demonstrate the effectiveness of this approach on four types of cancer cells. The approach has the potential to significantly improve both the speed and the robustness of imaging flow cytometry, enabling real-time label-free classification of individual cells.
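For context, the conventional reconstruction step that this paper bypasses can be sketched as follows: isolate one cross-correlation sideband of the off-axis hologram in the Fourier domain, re-center it to demodulate the carrier fringes, and take the angle of the inverse transform. The synthetic hologram, carrier frequency, and crop radius below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def reconstruct_phase(hologram, carrier):
    """Off-axis reconstruction: crop one Fourier sideband at `carrier`
    (offset in frequency bins from DC), re-center it, inverse-transform,
    and return the quantitative phase."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    n = hologram.shape[0]
    cy, cx = n // 2 + carrier[0], n // 2 + carrier[1]
    r = n // 8  # sideband crop radius (assumed)
    sideband = np.zeros_like(F)
    sideband[n//2 - r:n//2 + r, n//2 - r:n//2 + r] = F[cy - r:cy + r, cx - r:cx + r]
    field = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.angle(field)

# Synthetic off-axis hologram of a smooth Gaussian phase object.
n = 128
y, x = np.mgrid[:n, :n]
phase = 0.5 * np.exp(-((x - n/2)**2 + (y - n/2)**2) / (2 * 15**2))
carrier = (0, 32)  # 32 fringe cycles across the field, along x
holo = 1 + np.cos(2 * np.pi * carrier[1] * x / n + phase)
rec = reconstruct_phase(holo, carrier)
```

The recovered `rec` approximates `phase` wherever the sideband is well separated from the DC term; it is exactly this per-frame Fourier processing that the fringe-invariant network avoids.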
Affiliation(s)
- Matan Dudaie
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Itay Barnea
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Noga Nissim
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Natan T Shaked
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
3
Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. Comput Model Eng Sci (CMES) 2023; 136:2127-2172. [PMID: 37152661] [PMCID: PMC7614504] [DOI: 10.32604/cmes.2023.025484]
Abstract
Problems: Cancer is one of the most feared diseases worldwide, a major obstacle to improving life expectancy, and one of the leading causes of death before the age of 70 in 112 countries. Among all cancers, breast cancer is the most common in women, and the data show it has become one of the most common cancers overall. Aims: Many clinical trials have shown that diagnosing breast cancer at an early stage gives patients more treatment options and improves treatment outcomes and survival. Accordingly, many diagnostic methods for breast cancer exist, such as computer-aided diagnosis (CAD). Methods: After reviewing a large body of recent papers, we present a comprehensive review of breast cancer diagnosis based on the convolutional neural network (CNN). We first introduce several imaging modalities; the structure of the CNN is given in the second part. We then introduce some public breast cancer datasets and divide breast cancer diagnosis into three tasks: 1. classification; 2. detection; 3. segmentation. Conclusion: Although CNN-based diagnosis has achieved great success, some limitations remain. (i) There are too few good datasets: a good public breast cancer dataset must address many aspects, such as professional medical knowledge, privacy, funding, and dataset size. (ii) When the dataset is very large, CNN-based models require substantial computation and time to complete the diagnosis. (iii) Small datasets easily lead to overfitting.
Affiliation(s)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
4
Integrating convolutional neural networks, kNN, and Bayesian optimization for efficient diagnosis of Alzheimer's disease in magnetic resonance images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104375]
5
Chola C, Muaad AY, Bin Heyat MB, Benifa JVB, Naji WR, Hemachandran K, Mahmoud NF, Samee NA, Al-Antari MA, Kadah YM, Kim TS. BCNet: A Deep Learning Computer-Aided Diagnosis Framework for Human Peripheral Blood Cell Identification. Diagnostics (Basel) 2022; 12:2815. [PMID: 36428875] [PMCID: PMC9689932] [DOI: 10.3390/diagnostics12112815]
Abstract
Blood cells carry important information that reflects a person's current state of health. Identifying different types of blood cells in a timely and precise manner is essential to reducing the infection risks people face daily. BCNet is an artificial intelligence (AI)-based deep learning (DL) framework that uses transfer learning with a convolutional neural network to rapidly and automatically identify blood cells in an eight-class scenario: basophil, eosinophil, erythroblast, immature granulocyte, lymphocyte, monocyte, neutrophil, and platelet. To establish the dependability and viability of BCNet, exhaustive experiments consisting of five-fold cross-validation tests were carried out. Using the transfer learning strategy, we conducted in-depth experiments on the proposed BCNet architecture and tested it with three optimizers: Adam, RMSprop (RMSP), and stochastic gradient descent (SGD). The performance of BCNet was also compared directly, on the same dataset, with the state-of-the-art deep learning models DenseNet, ResNet, Inception, and MobileNet. Across the different optimizers, BCNet demonstrated better classification performance with Adam and RMSP; the best evaluation performance was achieved with RMSP, at 98.51% accuracy and a 96.24% F1-score. Compared with the baseline model, BCNet improved prediction accuracy by 1.94%, 3.33%, and 1.65% with Adam, RMSP, and SGD, respectively. BCNet also outperformed DenseNet, ResNet, Inception, and MobileNet on the testing time for a single blood cell image, by 10.98, 4.26, 2.03, and 0.21 ms, respectively. Compared with the most recent deep learning models, BCNet generates encouraging outcomes; such a recognition rate is essential for advancing healthcare facilities and improving blood cell detection.
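The five-fold cross-validation protocol mentioned above can be sketched as a generic shuffled k-fold split in NumPy; the authors' exact partitioning is not specified here, so the seed and fold construction are assumptions.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train, test) index arrays for shuffled k-fold
    cross-validation: each round holds out one disjoint fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Five rounds over a toy dataset of 100 samples: 80 train / 20 test each.
splits = list(kfold_indices(100, k=5))
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 5 80 20
```

Averaging a metric (accuracy, F1) over the five held-out folds gives the cross-validated estimate reported in studies like this one.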
Affiliation(s)
- Channabasava Chola
- Department of Electronics and Information Convergence Engineering, College of Electronics and Information, Kyung Hee University, Suwon-si 17104, Republic of Korea
- Abdullah Y. Muaad
- Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
- Md Belal Bin Heyat
- IoT Research Center, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
- Centre for VLSI and Embedded System Technologies, International Institute of Information Technology, Hyderabad 500032, India
- Department of Science and Engineering, Novel Global Community Educational Foundation, Hebersham, NSW 2770, Australia
- J. V. Bibal Benifa
- Department of Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Kerala 686635, India
- Wadeea R. Naji
- Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
- K. Hemachandran
- Department of Artificial Intelligence, Woxsen University, Hyderabad 502345, India
- Noha F. Mahmoud
- Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
- Mugahed A. Al-Antari
- Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
- Yasser M. Kadah
- Electrical and Computer Engineering Department, King Abdulaziz University, Jeddah 22254, Saudi Arabia
- Biomedical Engineering Department, Cairo University, Giza 12613, Egypt
- Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
- Tae-Seong Kim
- Department of Electronics and Information Convergence Engineering, College of Electronics and Information, Kyung Hee University, Suwon-si 17104, Republic of Korea
- Correspondence: (N.A.S.); (M.A.A.-A.); (Y.M.K.); (T.-S.K.)
7
Sheng M, Xu W, Yang J, Chen Z. Cross-Attention and Deep Supervision UNet for Lesion Segmentation of Chronic Stroke. Front Neurosci 2022; 16:836412. [PMID: 35392415] [PMCID: PMC8980944] [DOI: 10.3389/fnins.2022.836412]
Abstract
Stroke is an acute cerebrovascular disease with high incidence, mortality, and disability rates. Determining the location and volume of lesions in MR images supports accurate stroke diagnosis and surgical planning, so automatic recognition and segmentation of stroke lesions has important clinical significance for large-scale stroke imaging analysis. Stroke lesion segmentation faces several difficulties, such as foreground-background imbalance, positional uncertainty, and unclear boundaries. To meet these challenges, this paper proposes a cross-attention and deep supervision UNet (CADS-UNet) to segment chronic stroke lesions from T1-weighted MR images. Specifically, we propose a cross-spatial attention module that differs from the usual self-attention module: location information interactively selects encoder and decoder features to recover lost spatial focus, while a channel attention mechanism screens channel features. Finally, combined with deep supervision and a mixed loss, the model is supervised more accurately. We compared and verified the model on the authoritative open dataset "Anatomical Tracings of Lesions After Stroke" (ATLAS), which fully demonstrates the effectiveness of our model.
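The general idea of gating encoder skip features with decoder context can be sketched as follows. This is a minimal NumPy illustration of a generic attention gate (random weights, hypothetical shapes), not the authors' exact cross-spatial attention module: a shared projection of both streams produces a sigmoid spatial mask that re-weights the encoder features before the skip connection.

```python
import numpy as np

def attention_gate(encoder_feat, decoder_feat, seed=0):
    """Gate (C, H, W) encoder features with a spatial mask computed
    jointly from encoder and decoder features (1x1-conv analog)."""
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))
    c = encoder_feat.shape[0]
    rng = np.random.default_rng(seed)
    # 1x1 "convolutions" as per-channel linear maps (assumed weights).
    w_e = rng.standard_normal((c, c)) / np.sqrt(c)
    w_d = rng.standard_normal((c, c)) / np.sqrt(c)
    w_mask = rng.standard_normal(c) / np.sqrt(c)
    joint = np.maximum(
        np.tensordot(w_e, encoder_feat, axes=1)
        + np.tensordot(w_d, decoder_feat, axes=1), 0.0)  # ReLU
    mask = sigmoid(np.tensordot(w_mask, joint, axes=1))  # (H, W) in (0, 1)
    return encoder_feat * mask  # broadcast over channels

enc = np.random.default_rng(1).standard_normal((8, 16, 16))
dec = np.random.default_rng(2).standard_normal((8, 16, 16))
gated = attention_gate(enc, dec)
print(gated.shape)  # (8, 16, 16)
```

Because the mask lies in (0, 1), gating can only attenuate encoder activations, concentrating the skip connection on spatial locations supported by both streams.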
Affiliation(s)
- Manjin Sheng
- School of Informatics, Xiamen University, Xiamen, China
- Wenjie Xu
- School of Informatics, Xiamen University, Xiamen, China
- Jane Yang
- Department of Cognitive Science, University of California, San Diego, San Diego, CA, United States
- Zhongjie Chen
- Department of Neurology, Zhongshan Hospital, Xiamen University, Xiamen, China