1
Kiran A, Ramesh JVN, Rahat IS, Khan MAU, Hossain A, Uddin R. Advancing breast ultrasound diagnostics through hybrid deep learning models. Comput Biol Med 2024; 180:108962. [PMID: 39142222] [DOI: 10.1016/j.compbiomed.2024.108962]
Abstract
Today, doctors rely heavily on medical imaging to identify abnormalities. Proper classification of these abnormalities enables them to take informed actions, leading to early diagnosis and treatment. This paper introduces the "EfficientKNN" model, a novel hybrid deep learning approach that combines the advanced feature extraction capabilities of EfficientNetB3 with the simplicity and effectiveness of the k-Nearest Neighbors (k-NN) algorithm. Initially, EfficientNetB3, pre-trained on ImageNet, is repurposed to serve as a feature extractor. Subsequently, a GlobalAveragePooling2D layer is applied, followed by an optional Principal Component Analysis (PCA) step to reduce dimensionality while preserving critical information; PCA is used selectively when deemed necessary. The extracted features are then classified using an optimized k-NN algorithm, fine-tuned through meticulous cross-validation. Our model underwent rigorous training using a curated dataset containing benign, malignant, and normal medical images. Data augmentation techniques, including rotations, shifts, flips, and zooms, were employed to help the model generalize and efficiently handle new, unseen data. To enhance the model's ability to identify the features necessary for accurate predictions, the dataset was refined using segmentation and overlay techniques. Training used an ensemble of optimization algorithms (SGD, Adam, and RMSprop) with a learning rate of 0.00045, a batch size of 32, and up to 120 epochs, with early stopping to prevent overfitting. The results demonstrate that the EfficientKNN model outperforms traditional models such as VGG16, AlexNet, and VGG19 in terms of accuracy, precision, and F1-score, and also improves on EfficientNetB3 alone. Achieving a 100% accuracy rate on multiple tests, the EfficientKNN model has significant potential for real-world diagnostic applications. This study highlights the model's scalability, efficient use of cloud storage, and real-time prediction capabilities, all while minimizing computational demands. By integrating the strengths of EfficientNetB3's deep learning architecture with the interpretability of k-NN, EfficientKNN presents a significant advancement in medical image classification, promising improved diagnostic accuracy and clinical applicability.
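A minimal sketch of the pipeline this abstract describes (ImageNet-pretrained EfficientNetB3 as a frozen feature extractor, global average pooling, optional PCA, then k-NN), assuming TensorFlow/Keras and scikit-learn; the PCA dimensionality, the value of k, and the commented-out data handles (X_train, y_train, X_test, y_test) are illustrative placeholders rather than the authors' settings.

```python
import tensorflow as tf
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# ImageNet-pretrained EfficientNetB3, used as a frozen feature extractor;
# pooling="avg" appends the GlobalAveragePooling2D step, so every image
# is reduced to a single 1536-dimensional feature vector.
backbone = tf.keras.applications.EfficientNetB3(
    include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False

def extract_features(images):
    """images: float32 array of shape (n, 300, 300, 3) with values in [0, 255]."""
    x = tf.keras.applications.efficientnet.preprocess_input(images)
    return backbone.predict(x, verbose=0)

# Optional PCA followed by a k-NN classifier; n_components and n_neighbors
# stand in for values the abstract says were tuned by cross-validation.
classifier = make_pipeline(PCA(n_components=128),
                           KNeighborsClassifier(n_neighbors=5))

# feats_train = extract_features(X_train)   # benign / malignant / normal images
# classifier.fit(feats_train, y_train)
# print(classifier.score(extract_features(X_test), y_test))
```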
Affiliation(s)
- Ajmeera Kiran
- Department of Computer Science and Engineering, MLR Institute of Technology, Dundigal, Hyderabad, Telangana, 500043, India
- Janjhyam Venkata Naga Ramesh
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, 522302, India; Department of Computer Science and Engineering, Graphic Era Hill University, Dehradun, 248002, India
- Irfan Sadiq Rahat
- School of Computer Science & Engineering (SCOPE), VIT-AP University, Amaravati, Andhra Pradesh, India
- Anwar Hossain
- Master of Information Science and Technology, California State University, USA
- Roise Uddin
- Master of Information Science and Technology, California State University, USA
2
Lee JS, Wu WK. Breast Tumor Tissue Image Classification Using Single-Task Meta Learning with Auxiliary Network. Cancers (Basel) 2024; 16:1362. [PMID: 38611040] [PMCID: PMC11010930] [DOI: 10.3390/cancers16071362]
Abstract
Breast cancer has a high mortality rate among cancers. If the type of breast tumor can be correctly diagnosed at an early stage, the survival rate of patients improves greatly. Considering actual clinical needs, a classification model for breast pathology images must be able to classify correctly even when facing image data with different characteristics. Existing convolutional neural network (CNN)-based models for the classification of breast tumor pathology images lack the generalization capability required to maintain high accuracy when confronted with pathology images of varied characteristics. Consequently, this study introduces a new classification model, STMLAN (Single-Task Meta Learning with Auxiliary Network), which integrates Meta Learning and an auxiliary network. Single-Task Meta Learning endows the model with generalization ability, and the auxiliary network enhances the feature characteristics of breast pathology images. The experimental results demonstrate that the proposed STMLAN model improves accuracy by at least 1.85% in challenging multi-classification tasks compared to existing methods. Furthermore, the Silhouette Score of the features learned by the model increased by 31.85%, reflecting that the proposed model learns more discriminative features and that the generalization ability of the overall model is also improved.
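The abstract does not detail the Single-Task Meta Learning procedure, so the sketch below only illustrates the general idea of attaching an auxiliary head to a shared backbone and training with a joint loss; the backbone choice (ResNet-18), the class counts, and the auxiliary task are all assumptions, not the STMLAN design.

```python
import torch
import torch.nn as nn
from torchvision import models

class AuxiliaryHeadClassifier(nn.Module):
    """Shared backbone with a main tumor-class head and an auxiliary head.

    Illustrative only: the backbone, class counts, and auxiliary task are
    assumptions, not the paper's STMLAN architecture.
    """
    def __init__(self, num_classes=8, num_aux_classes=4):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.main_head = nn.Linear(512, num_classes)
        self.aux_head = nn.Linear(512, num_aux_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.main_head(f), self.aux_head(f)

criterion = nn.CrossEntropyLoss()

def joint_loss(main_logits, aux_logits, y_main, y_aux, aux_weight=0.3):
    # Auxiliary supervision regularizes the shared feature extractor.
    return criterion(main_logits, y_main) + aux_weight * criterion(aux_logits, y_aux)

# model = AuxiliaryHeadClassifier()
# main_out, aux_out = model(torch.randn(4, 3, 224, 224))
```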
Affiliation(s)
- Jiann-Shu Lee
- Department of Computer Science and Information Engineering, National University of Tainan, Tainan 700, Taiwan;
3
Tang J, Zhang T, Gong Z, Huang X. High Precision Cervical Precancerous Lesion Classification Method Based on ConvNeXt. Bioengineering (Basel) 2023; 10:1424. [PMID: 38136015] [PMCID: PMC10740838] [DOI: 10.3390/bioengineering10121424]
Abstract
Traditional cervical cancer diagnosis mainly relies on human papillomavirus (HPV) concentration testing. Because HPV concentrations vary from individual to individual and fluctuate over time, this method requires multiple tests, leading to high costs. Recently, some scholars have focused on cervical cytology for diagnosis. However, cervical cancer cells have complex textural characteristics and small differences between cell subtypes, which poses great challenges for high-precision screening of cervical cancer. In this paper, we propose a high-precision ConvNeXt-based classification method for screening cervical precancerous lesions, utilizing self-supervised data augmentation and ensemble learning strategies to achieve cervical cell feature extraction and inter-class discrimination, respectively. We used the Deep Cervical Cytological Levels (DCCL) dataset, which includes 1167 cervical cytology specimens from participants aged 32 to 67, for algorithm training and validation. On the DCCL dataset, the final classification accuracy was 8.85% higher than that of previous advanced models, demonstrating a significant advantage over other state-of-the-art methods.
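A minimal sketch of fine-tuning a ConvNeXt backbone for cervical cytology classification, assuming PyTorch/torchvision; the paper's self-supervised data augmentation and ensemble learning strategies are not reproduced here, and NUM_CLASSES, the optimizer settings, and the random batch are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical number of cytology categories

# ConvNeXt-Tiny with ImageNet weights; only the final linear layer is
# replaced to match the cervical lesion label set.
model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.DEFAULT)
in_features = model.classifier[2].in_features
model.classifier[2] = nn.Linear(in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on a random batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```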
Affiliation(s)
- Jing Tang
- State Key Laboratory of Intelligent Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China;
- Ting Zhang
- MOE Key Laboratory of Molecular Biophysics, College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China;
- Zeyu Gong
- State Key Laboratory of Intelligent Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China;
- Xianjun Huang
- School of Computer Science and Engineering, Guangzhou Institute of Science and Technology, Guangzhou 510006, China;
4
Dey S, Mitra S, Chakraborty S, Mondal D, Nasipuri M, Das N. GC-EnC: A Copula based ensemble of CNNs for malignancy identification in breast histopathology and cytology images. Comput Biol Med 2023; 152:106329. [PMID: 36473342] [DOI: 10.1016/j.compbiomed.2022.106329]
Abstract
In the present work, we have explored the potential of a Copula-based ensemble of CNNs (Convolutional Neural Networks) over individual classifiers for malignancy identification in histopathology and cytology images. The Copula-based model integrates three of the best-performing CNN architectures, namely DenseNet-161/201, ResNet-101/34, and InceptionNet-V3. The limitation of small datasets is circumvented using a Fuzzy template based data augmentation technique that intelligently selects multiple regions of interest (ROIs) from an image. The proposed data augmentation framework, amalgamated with the ensemble technique, showed gratifying performance in malignancy prediction, surpassing the individual CNNs on breast cytology and histopathology datasets. The proposed method achieved accuracies of 84.37%, 97.32%, and 91.67% on the JUCYT, BreaKHis, and BI datasets, respectively. This automated technique will serve as a useful guide to the pathologist in delivering the appropriate diagnostic decision with reduced time and effort. The relevant code for the proposed ensemble model is publicly available on GitHub.
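A hedged sketch of an ensemble over the three backbone families named in the abstract (DenseNet, ResNet, Inception), assuming PyTorch/torchvision; it simply averages softmax outputs, whereas the paper combines members through a Copula-based rule that is not reproduced here, and the binary head is a placeholder.

```python
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 2  # benign vs. malignant (placeholder)

def make_member(builder, weights):
    net = builder(weights=weights)
    # Swap the final layer; ResNet/Inception expose .fc, DenseNet .classifier.
    if hasattr(net, "fc"):
        net.fc = torch.nn.Linear(net.fc.in_features, NUM_CLASSES)
    else:
        net.classifier = torch.nn.Linear(net.classifier.in_features, NUM_CLASSES)
    return net.eval()

members = [
    make_member(models.densenet161, models.DenseNet161_Weights.DEFAULT),
    make_member(models.resnet101, models.ResNet101_Weights.DEFAULT),
    make_member(models.inception_v3, models.Inception_V3_Weights.DEFAULT),
]

@torch.no_grad()
def ensemble_predict(images):
    """Average the softmax probabilities of the member networks."""
    probs = [F.softmax(m(images), dim=1) for m in members]
    return torch.stack(probs).mean(dim=0)

# ensemble_predict(torch.randn(4, 3, 299, 299)).argmax(dim=1)
```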
Affiliation(s)
- Soumyajyoti Dey
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
- Shyamali Mitra
- Jadavpur University, Department of Instrumentation & Electronics Engineering, Kolkata, West Bengal, India.
- Debashri Mondal
- Theism Medical Diagnostics Centre, Kolkata, West Bengal, India.
- Mita Nasipuri
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
- Nibaran Das
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
5
Lee JS, Wu WK. Breast Tumor Tissue Image Classification Using DIU-Net. Sensors (Basel) 2022; 22:9838. [PMID: 36560207] [PMCID: PMC9786106] [DOI: 10.3390/s22249838]
Abstract
Inspired by the observation that pathologists pay more attention to nuclei regions when analyzing pathological images, this study utilized soft segmentation to imitate this visual focus mechanism and proposed a new segmentation-classification joint model to achieve superior classification performance for breast cancer pathology images. To address the varying sizes of nuclei in pathological images, this study developed a new segmentation network with excellent cross-scale description ability, called DIU-Net. To enhance the generalization ability of the segmentation network, that is, to keep it from relying on low-level features, we proposed the Complementary Color Conversion Scheme in the training phase. In addition, because of the disparity between the nucleus area and the background in pathology images, there is an inherent data imbalance; dice loss and focal loss were used to overcome this problem. To further strengthen the classification performance of the model, this study adopted a joint training scheme, so that the output of the classification network is used not only to optimize the classification network itself but also to optimize the segmentation network. The model can also show the pathologist its attention area, increasing interpretability. The classification performance of the proposed method was verified on the BreaKHis dataset. Our method obtains binary/multi-class classification accuracies of 97.24%/93.75% and 98.19%/94.43% for 200× and 400× images, outperforming existing methods.
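The abstract names the Dice-plus-focal loss used to handle the nucleus/background imbalance; below is one common formulation of that combined loss in PyTorch, with the weighting factors w_dice, w_focal, alpha, and gamma as illustrative defaults rather than the paper's values.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss; logits and targets both have shape (N, 1, H, W)."""
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (union + eps)).mean()

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights the many easy background pixels."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                  # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

def segmentation_loss(logits, targets, w_dice=0.5, w_focal=0.5):
    return w_dice * dice_loss(logits, targets) + w_focal * focal_loss(logits, targets)

# logits = torch.randn(2, 1, 128, 128)
# masks = torch.randint(0, 2, (2, 1, 128, 128)).float()
# print(segmentation_loss(logits, masks))
```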
6
Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. Micromachines (Basel) 2022; 13:2197. [PMID: 36557496] [PMCID: PMC9781697] [DOI: 10.3390/mi13122197]
Abstract
With the development of artificial intelligence technology and computer hardware, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study used statistical methods to analyze studies related to the detection, segmentation, and classification of breast cancer in pathological images. After analyzing 107 articles on the application of deep learning to pathological images of breast cancer, this study is divided into three directions based on the types of results they report: detection, segmentation, and classification. We introduced and analyzed models that performed well in these three directions and summarized the related work from recent years. The results demonstrate the significant capability of deep learning in the analysis of breast cancer pathological images. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of research on breast cancer pathological imaging and provides reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
7
Many heads are better than one: A multiscale neural information feature fusion framework for spatial route selections decoding from multichannel neural recordings of pigeons. Brain Res Bull 2022; 184:1-12. [DOI: 10.1016/j.brainresbull.2022.03.007]
8
Tewary S, Mukhopadhyay S. AutoIHCNet: CNN architecture and decision fusion for automated HER2 scoring. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108572]
9
Rashmi R, Prasad K, Udupa CBK. Multi-channel Chan-Vese model for unsupervised segmentation of nuclei from breast histopathological images. Comput Biol Med 2021; 136:104651. [PMID: 34333226] [DOI: 10.1016/j.compbiomed.2021.104651]
Abstract
The pathologist determines the malignancy of a breast tumor by studying histopathological images. In particular, the characteristics and distribution of nuclei contribute greatly to the decision process. Hence, the segmentation of nuclei constitutes a crucial task in the classification of breast histopathological images. Manual analysis of these images is subjective, tedious, and susceptible to human error. Consequently, the development of computer-aided diagnostic systems for analysing these images has become a vital factor in the domain of medical imaging. However, the use of medical image processing techniques to segment nuclei is challenging due to the diverse structure of the cells, poor staining, the occurrence of artifacts, etc. Although supervised computer-aided systems for nuclei segmentation are popular, they depend on the availability of standard annotated datasets. In this regard, this work presents an unsupervised method based on the Chan-Vese model to segment nuclei from breast histopathological images. The proposed model utilizes multi-channel color information to efficiently segment the nuclei. This study also proposes a pre-processing step to select an appropriate color channel that discriminates nuclei from the background region. An extensive evaluation of the proposed model on two challenging datasets demonstrates its validity and effectiveness.
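A minimal sketch of unsupervised nuclei segmentation with a Chan-Vese level set, assuming scikit-image; the variance-based channel selection is only a stand-in for the paper's pre-processing step, and the multi-channel formulation of the model is not reproduced.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import chan_vese

def select_channel(rgb):
    """Pick the color channel with the largest intensity variance.

    A stand-in for the paper's channel-selection step; variance is just one
    simple proxy for how well a channel separates nuclei from background.
    """
    variances = [rgb[..., c].var() for c in range(3)]
    return rgb[..., int(np.argmax(variances))]

def segment_nuclei(path):
    rgb = img_as_float(io.imread(path))   # H x W x 3 histopathology tile
    channel = select_channel(rgb)
    # Chan-Vese evolves a level set that splits the image into two regions
    # of roughly homogeneous intensity, with no training labels required.
    return chan_vese(channel)

# mask = segment_nuclei("patch.png")      # boolean nuclei mask
```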
Affiliation(s)
- R Rashmi
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India.
- Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India.
- Chethana Babu K Udupa
- Department of Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, India.
10
Hao Y, Qiao S, Zhang L, Xu T, Bai Y, Hu H, Zhang W, Zhang G. Breast Cancer Histopathological Images Recognition Based on Low Dimensional Three-Channel Features. Front Oncol 2021; 11:657560. [PMID: 34195073] [PMCID: PMC8236881] [DOI: 10.3389/fonc.2021.657560]
Abstract
Breast cancer (BC) is the primary threat to women’s health, and early diagnosis of breast cancer is imperative. Although there are many ways to diagnose breast cancer, the gold standard is still pathological examination. In this paper, a breast cancer histopathological image recognition method based on low-dimensional three-channel features is proposed to achieve fast and accurate benign/malignant recognition. Three-channel features of 10 descriptors were extracted: gray-level co-occurrence matrix in one direction (GLCM1), gray-level co-occurrence matrix in four directions (GLCM4), average pixel value of each channel (APVEC), Hu invariant moments (HIM), wavelet features, Tamura, completed local binary pattern (CLBP), local binary pattern (LBP), Gabor, and histogram of oriented gradients (HOG). A support vector machine (SVM) was then used to assess their performance. Experiments on the BreaKHis dataset show that GLCM1, GLCM4, and APVEC achieved recognition accuracies of 90.2%-94.97% at the image level and 89.18%-94.24% at the patient level, which is better than many state-of-the-art methods, including many deep learning frameworks. The experimental results show that breast cancer recognition based on high-dimensional features increases recognition time without greatly improving recognition accuracy. Three-channel features enhance the recognizability of the image and thus achieve higher recognition accuracy than gray-level features.
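A small sketch of the GLCM4-plus-SVM part of this pipeline, assuming a recent scikit-image (graycomatrix/graycoprops) and scikit-learn with uint8 RGB patches; the property set, pixel distances, and SVM hyperparameters are illustrative, not the paper's.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

PROPS = ("contrast", "correlation", "energy", "homogeneity")
ANGLES = (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)   # four GLCM directions

def glcm4_features(rgb_uint8):
    """Per-channel GLCM statistics over four directions (a GLCM4-style descriptor)."""
    feats = []
    for c in range(3):
        glcm = graycomatrix(rgb_uint8[..., c], distances=[1], angles=list(ANGLES),
                            levels=256, symmetric=True, normed=True)
        for prop in PROPS:
            feats.extend(graycoprops(glcm, prop).ravel())
    return np.asarray(feats)         # 3 channels x 4 props x 4 angles = 48 values

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# X = np.stack([glcm4_features(img) for img in patches]); y = labels
# clf.fit(X_train, y_train); print(clf.score(X_test, y_test))
```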
Affiliation(s)
- Yan Hao
- School of Information and Communication Engineering, North University of China, Taiyuan, China
- Shichang Qiao
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Li Zhang
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Ting Xu
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Yanping Bai
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Hongping Hu
- Department of Mathematics, School of Science, North University of China, Taiyuan, China
- Wendong Zhang
- School of Instrument and Electronics, Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
- Guojun Zhang
- School of Instrument and Electronics, Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan, China
11
Liang Y, Pan C, Sun W, Liu Q, Du Y. Global context-aware cervical cell detection with soft scale anchor matching. Comput Methods Programs Biomed 2021; 204:106061. [PMID: 33819821] [DOI: 10.1016/j.cmpb.2021.106061]
Abstract
BACKGROUND AND OBJECTIVE: Computer-aided cervical cancer screening based on automated recognition of cervical cells has the potential to significantly reduce error rates and increase productivity compared to manual screening. Traditional methods often rely on accurate cell segmentation and discriminative hand-crafted feature extraction. Recently, detectors based on convolutional neural networks have been applied to reduce the dependency on hand-crafted features and eliminate the need for segmentation. However, these methods tend to yield too many false positive predictions. METHODS: This paper proposes a global context-aware framework to deal with this problem, which integrates global context information through an image-level classification branch and a weighted loss. The prediction of this branch is merged into cell detection to filter false positive predictions. Furthermore, a new ground truth assignment strategy in the feature pyramid, called soft scale anchor matching, is proposed, which matches ground truths with anchors across scales softly. This strategy searches for the most appropriate representation of ground truths in each layer and adds more positive samples at different scales, which facilitates feature learning. RESULTS: Our proposed methods achieve a 5.7% increase in mean average precision and an 18.5% increase in specificity, at the cost of a 2.6% increase in inference time. CONCLUSIONS: Our proposed methods, which entirely avoid dependence on segmentation of cervical cells, show great potential to reduce the workload for pathologists in automation-assisted cervical cancer screening.
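A sketch of the "merge the image-level prediction into cell detection" idea, assuming PyTorch tensors in a torchvision-style detection format; the fusion weight and threshold are placeholders, and the paper's actual weighted loss and anchor-matching scheme are not shown.

```python
from typing import Dict, List
import torch

def merge_global_context(detections: List[Dict[str, torch.Tensor]],
                         image_probs: torch.Tensor,
                         score_weight: float = 0.5,
                         keep_thresh: float = 0.3) -> List[Dict[str, torch.Tensor]]:
    """Re-score per-cell detections with an image-level abnormality probability.

    detections: one dict per image with "boxes" (N, 4) and "scores" (N,).
    image_probs: shape (B,), probability that each whole image is abnormal.
    Detections on images the classification branch considers normal are pulled
    below the threshold and discarded, suppressing false positives.
    """
    merged = []
    for det, p_img in zip(detections, image_probs):
        fused = score_weight * det["scores"] + (1 - score_weight) * p_img
        keep = fused >= keep_thresh
        merged.append({"boxes": det["boxes"][keep], "scores": fused[keep]})
    return merged
```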
Affiliation(s)
- Yixiong Liang
- School of Computer Science and Engineering, Central South University, Changsha, China.
- Changli Pan
- School of Computer Science and Engineering, Central South University, Changsha, China.
- Wanxin Sun
- School of Computer Science and Engineering, Central South University, Changsha, China.
- Qing Liu
- School of Computer Science and Engineering, Central South University, Changsha, China.
- Yun Du
- The Fourth Hospital of Hebei Medical University, Hebei Province China-Japan Friendship Center for Cancer Detection, China.
12
Liu S, Yuan Z, Qiao X, Liu Q, Song K, Kong B, Su X. Light scattering pattern specific convolutional network static cytometry for label-free classification of cervical cells. Cytometry A 2021; 99:610-621. [PMID: 33840152] [DOI: 10.1002/cyto.a.24349]
Abstract
Cervical cancer is a major gynecological malignant tumor that threatens women's health. Current cytological methods have certain limitations for early cervical cancer screening. Light scattering patterns can reflect small differences in the internal structure of cells. In this study, we develop a light scattering pattern specific convolutional network (LSPS-net) based on a deep learning algorithm and integrate it into a 2D light scattering static cytometry for automatic, label-free analysis of single cervical cells. An accuracy of 95.46% is obtained for the classification of normal cervical cells and cancerous ones (mixed C-33A and CaSki cells). When applied to the subtyping of label-free cervical cell lines, we obtain an accuracy of 93.31% with our LSPS-net cytometric technique. Furthermore, the three-way classification of the above cell types has an overall accuracy of 90.90%, and comparisons with other feature descriptors and classification algorithms show the superiority of deep learning for automatic feature extraction. LSPS-net static cytometry may potentially be used for early cervical cancer screening, as it is rapid, automatic, and label-free.
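The LSPS-net architecture is not specified in the abstract, so the block below is only a generic small CNN for single-channel 2D light-scattering patterns with a three-way output, written in PyTorch as an assumed stand-in rather than the authors' network.

```python
import torch
import torch.nn as nn

class ScatterPatternCNN(nn.Module):
    """Small CNN for single-channel 2D light-scattering patterns (illustrative)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# logits = ScatterPatternCNN()(torch.randn(4, 1, 128, 128))  # e.g. normal / C-33A / CaSki
```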
Affiliation(s)
- Shanshan Liu
- School of Microelectronics, Shandong University, Jinan, China; Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, China
- Zeng Yuan
- Department of Obstetrics and Gynecology, Qilu Hospital, Shandong University, Jinan, China
- Xu Qiao
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, China
- Qiao Liu
- Department of Molecular Medicine and Genetics, School of Basic Medicine Sciences, Shandong University, Jinan, China
- Kun Song
- Department of Obstetrics and Gynecology, Qilu Hospital, Shandong University, Jinan, China
- Beihua Kong
- Department of Obstetrics and Gynecology, Qilu Hospital, Shandong University, Jinan, China
- Xuantao Su
- School of Microelectronics, Shandong University, Jinan, China
13
Zhang YD, Satapathy SC, Guttery DS, Górriz JM, Wang SH. Improved Breast Cancer Classification Through Combining Graph Convolutional Network and Convolutional Neural Network. Inf Process Manag 2021. [DOI: 10.1016/j.ipm.2020.102439]
14
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129]