1. Yang M, Wang J, Quan S, Xu Q. High-precision bladder cancer diagnosis method: 2D Raman spectrum figures based on maintenance technology combined with automatic weighted feature fusion network. Anal Chim Acta 2023; 1282:341908. [PMID: 37923405] [DOI: 10.1016/j.aca.2023.341908]
Abstract
BACKGROUND Raman spectroscopy has been extensively utilized as a marker-free detection method in the complementary diagnosis of cancer. Multivariate statistical classification analysis is frequently employed for Raman spectral data classification; however, traditional multivariate statistical classification performs poorly on large samples and multicategory spectral data. Meanwhile, with the advancement of computer vision, convolutional neural networks (CNNs) have demonstrated extraordinarily precise analysis in two-dimensional image processing. RESULT This paper presents bladder cancer detection that combines 2D Raman spectrograms with an automatic weighted feature fusion network (AWFFN). Initially, the S-transform (ST) is applied for the first time to convert 1D Raman data into 2D spectrograms, achieving 99.2% detection accuracy. Second, four upscaling techniques, the short-time Fourier transform (STFT), recurrence plot (RP), Markov transition field (MTF), and Gramian angular field (GAF), were used to transform the 1D Raman spectral data into a variety of 2D Raman spectrograms. In addition, a particle swarm optimization (PSO) algorithm is combined with VGG19, ResNet50, and ResNet101 to construct a weighted feature fusion network, and this parallel network is employed to evaluate the multiple spectrograms. Class activation mapping (CAM) is additionally employed to illustrate and evaluate feature extraction in the three parallel network branches. The results demonstrate that combining a 2D Raman spectrogram with a CNN for the diagnosis of bladder cancer achieves 99.2% accuracy, which indicates that it is an extremely promising auxiliary technology for cancer diagnosis.
SIGNIFICANCE The proposed two-dimensional Raman spectroscopy method achieves higher precision than one-dimensional spectral data, presenting a potential methodology for assisted cancer detection and providing crucial technical support for assisted diagnosis.
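The 1D-to-2D encodings this abstract names (RP, MTF, GAF) are generic signal-to-image transforms. As a hedged illustration rather than the authors' implementation, a minimal NumPy sketch of the Gramian angular summation field applied to a toy 1D signal:

```python
import numpy as np

def gramian_angular_field(signal):
    """Encode a 1D signal as a Gramian angular summation field (GASF) image."""
    x = np.asarray(signal, dtype=float)
    # Rescale into [-1, 1] so the arccos below is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(x)                       # polar-angle representation
    # GASF entry (i, j) is cos(phi_i + phi_j): a symmetric 2D image.
    return np.cos(phi[:, None] + phi[None, :])

spectrum = np.sin(np.linspace(0, 4 * np.pi, 64))  # toy stand-in for a 1D spectrum
img = gramian_angular_field(spectrum)             # 64 x 64 image for a CNN
```

Each 1D spectrum becomes a fixed-size 2D array that an image CNN such as VGG19 or ResNet50 can consume.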
Affiliation(s)
- Mengge Yang
- School of Information Science and Engineering, Xinjiang University, Urumqi, China
- Jiajia Wang
- School of Information Science and Engineering, Xinjiang University, Urumqi, China; The Key Laboratory of Signal Detection and Processing, Xinjiang Uygur Autonomous Region, Xinjiang University, China; Post-doctoral Workstation of Xinjiang Uygur Autonomous Region Institute of Product Quality Supervision and Inspection, Urumqi, China
- Siyu Quan
- School of Information Science and Engineering, Xinjiang University, Urumqi, China
- Qiqi Xu
- School of Information Science and Engineering, Xinjiang University, Urumqi, China
2. Chen H, Ma M, Liu G, Wang Y, Jin Z, Liu C. Breast Tumor Classification in Ultrasound Images by Fusion of Deep Convolutional Neural Network and Shallow LBP Feature. J Digit Imaging 2023; 36:932-946. [PMID: 36720840] [PMCID: PMC10287618] [DOI: 10.1007/s10278-022-00711-x]
Abstract
Breast cancer is one of the most dangerous and common cancers in women, making it a major research topic in medical science. To assist physicians in pre-screening for breast cancer and reduce unnecessary biopsies, breast ultrasound and computer-aided diagnosis (CAD) have been used to distinguish between benign and malignant tumors. In this study, we proposed a CAD system for tumor diagnosis using a multi-channel fusion method and a feature extraction structure based on multi-feature fusion of breast ultrasound (BUS) images. In the pre-processing stage, the multi-channel fusion method completed the color conversion of the BUS image so that it contains richer information. In the feature extraction stage, the pre-trained ResNet50 network was selected as the base network, three levels of features were combined based on adaptive spatial feature fusion (ASFF), and finally the shallow local binary pattern (LBP) texture features were fused. A support vector machine (SVM) was used for comparative analysis. A retrospective analysis was carried out, and 1615 breast tumor images (572 benign and 1043 malignant) confirmed by pathological examination were collected. After data processing and augmentation, on an independent test set of 874 breast ultrasound images (457 benign and 417 malignant), the accuracy, precision, recall, specificity, F1 score, and AUC of our method were 96.91%, 98.75%, 94.72%, 98.91%, 0.97, and 0.991, respectively. The results show that integrating shallow LBP texture features with multi-level deep features can more effectively improve the overall performance of breast tumor diagnosis and has strong clinical application value. Compared with past methods, our proposed method is expected to enable automatic diagnosis of breast tumors and provide an auxiliary tool for radiologists to accurately diagnose breast diseases.
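The "shallow LBP texture feature" named above is a standard descriptor. A minimal 8-neighbour LBP sketch in plain NumPy, illustrative only (the study's actual pipeline fuses such histograms with ResNet50/ASFF deep features):

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 8-neighbour local binary pattern for a 2D grayscale image.

    Each interior pixel gets an 8-bit code: one bit per neighbour,
    set when that neighbour is >= the centre pixel.
    """
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # centre pixels
    # Clockwise neighbour offsets starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

patch = np.arange(25).reshape(5, 5)                # toy monotone patch
codes = lbp_8neighbors(patch)                      # 3 x 3 map of LBP codes
hist = np.bincount(codes.ravel(), minlength=256)   # 256-bin texture descriptor
```

The 256-bin histogram is the kind of shallow texture vector that would be concatenated with deep features before the classifier.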
Affiliation(s)
- Hua Chen
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Minglun Ma
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Gang Liu
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Ying Wang
- The Second Hospital of Hebei Medical University, Shijiazhuang, 050000, China
- Zhihao Jin
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Chong Liu
- School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
3. Kobayashi H, Nakayama R, Hizukuri A, Ishida M, Kitagawa K, Sakuma H. Improving Image Resolution of Whole-Heart Coronary MRA Using Convolutional Neural Network. J Digit Imaging 2021; 33:497-503. [PMID: 31452007] [DOI: 10.1007/s10278-019-00264-6]
Abstract
Whole-heart coronary magnetic resonance angiography (WHCMRA) permits the noninvasive assessment of coronary artery disease without radiation exposure. However, the image resolution of WHCMRA is limited. Recently, convolutional neural networks (CNNs) have attracted increasing interest as a method for improving the resolution of medical images. The purpose of this study is to improve the resolution of WHCMRA images using a CNN. Free-breathing WHCMRA images with 512 × 512 pixels (pixel size = 0.65 mm) were acquired in 80 patients with known or suspected coronary artery disease using a 1.5 T magnetic resonance (MR) system with 32-channel coils. A CNN model was optimized by evaluating CNNs with different structures. The proposed CNN model was trained on the relationship of signal patterns between low-resolution patches (small regions) and the corresponding high-resolution patches, using a training dataset collected from 40 patients. Images with 512 × 512 pixels were restored from 256 × 256 down-sampled WHCMRA images (pixel size = 1.3 mm) with three different approaches: the proposed CNN, bicubic interpolation (BCI), and the previously reported super-resolution CNN (SRCNN). High-resolution WHCMRA images obtained using the proposed CNN model were significantly better than those of BCI and SRCNN in terms of root mean squared error, peak signal-to-noise ratio, and structural similarity index measure with respect to the original WHCMRA images. The proposed CNN approach can provide high-resolution WHCMRA images with better accuracy than BCI and SRCNN. The high-resolution WHCMRA obtained using the proposed CNN model will be useful for identifying coronary artery disease.
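The comparison metrics this abstract uses (RMSE, PSNR) are standard image-restoration scores. A small self-contained sketch of how a restored image is scored against the original, on toy data rather than WHCMRA images:

```python
import numpy as np

def rmse(ref, est):
    """Root mean squared error between a reference and a restored image."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    return float(np.sqrt(np.mean((ref - est) ** 2)))

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given peak intensity."""
    e = rmse(ref, est)
    return float("inf") if e == 0 else 20 * np.log10(peak / e)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)       # stand-in "original"
restored = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
score = psnr(ref, restored)   # higher is better; inf for a perfect restoration
```

A better restoration method is simply one whose restored images score lower RMSE and higher PSNR/SSIM against the originals.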
Affiliation(s)
- Hiroki Kobayashi
- Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
- Ryohei Nakayama
- Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
- Akiyoshi Hizukuri
- Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
- Masaki Ishida
- Department of Radiology, Mie University School of Medicine, 2-174 Edobashi, Tsu, Mie, 514-8507, Japan
- Kakuya Kitagawa
- Department of Radiology, Mie University School of Medicine, 2-174 Edobashi, Tsu, Mie, 514-8507, Japan
- Hajime Sakuma
- Department of Radiology, Mie University School of Medicine, 2-174 Edobashi, Tsu, Mie, 514-8507, Japan
4. Hizukuri A, Nakayama R, Nara M, Suzuki M, Namba K. Computer-Aided Diagnosis Scheme for Distinguishing Between Benign and Malignant Masses on Breast DCE-MRI Images Using Deep Convolutional Neural Network with Bayesian Optimization. J Digit Imaging 2020; 34:116-123. [PMID: 33159279] [DOI: 10.1007/s10278-020-00394-2]
Abstract
Although magnetic resonance imaging (MRI) has a higher sensitivity for early breast cancer than mammography, its specificity is lower. The purpose of this study was to develop a computer-aided diagnosis (CAD) scheme for distinguishing between benign and malignant breast masses on dynamic contrast material-enhanced MRI (DCE-MRI) using a deep convolutional neural network (DCNN) with Bayesian optimization. Our database consisted of 56 DCE-MRI examinations for 56 patients, each of which contained five sequential phase images. It included 26 benign and 30 malignant masses. In this study, we first determined a baseline DCNN model from well-known DCNN models in terms of classification performance. The optimum architecture of the DCNN model was determined by changing the hyperparameters of the baseline DCNN model, such as the number of layers, the filter size, and the number of filters, using Bayesian optimization. As the input of the proposed DCNN model, rectangular regions of interest that include the entire mass were selected from each DCE-MRI image by an experienced radiologist. A three-fold cross-validation method was used for training and testing of the proposed DCNN model. The classification accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were 92.9% (52/56), 93.3% (28/30), 92.3% (24/26), 93.3% (28/30), and 92.3% (24/26), respectively. These results were substantially greater than those with the conventional method based on handcrafted features and a classifier. The proposed DCNN model achieved high classification performance and would be useful as a diagnostic aid in the differential diagnosis of masses in breast DCE-MRI images.
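The evaluation protocol described here (three-fold cross-validation over the 56 examinations) can be sketched with NumPy alone; this split is illustrative, not the authors' code:

```python
import numpy as np

def three_fold_cv(n_samples, seed=0):
    """Yield (train_idx, test_idx) pairs for 3-fold cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)         # shuffle once, then partition
    folds = np.array_split(idx, 3)
    for k in range(3):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(3) if j != k])
        yield train, test

splits = list(three_fold_cv(56))   # 56 DCE-MRI exams, as in the study
```

Each exam lands in exactly one test fold, so every mass is classified once by a model that never saw it during training, which is how the 52/56 accuracy figure is accumulated.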
Affiliation(s)
- Akiyoshi Hizukuri
- Department of Electronic and Computer Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
- Ryohei Nakayama
- Department of Electronic and Computer Engineering, Ritsumeikan University, 1-1-1 Noji-higashi, Kusatsu, Shiga, 525-8577, Japan
- Mayumi Nara
- Department of Breast Surgery, Hokuto Hospital, 7-5 Kisen, Inada-cho, Obihiro-shi, Hokkaido, 080-0833, Japan
- Megumi Suzuki
- Department of Breast Surgery, Hokuto Hospital, 7-5 Kisen, Inada-cho, Obihiro-shi, Hokkaido, 080-0833, Japan
- Kiyoshi Namba
- Department of Breast Surgery, Hokuto Hospital, 7-5 Kisen, Inada-cho, Obihiro-shi, Hokkaido, 080-0833, Japan
5. Klimonda Z, Karwat P, Dobruch-Sobczak K, Piotrzkowska-Wróblewska H, Litniewski J. Breast-lesions characterization using Quantitative Ultrasound features of peritumoral tissue. Sci Rep 2019; 9:7963. [PMID: 31138822] [PMCID: PMC6538710] [DOI: 10.1038/s41598-019-44376-z]
Abstract
The presented studies evaluate for the first time the efficiency of tumour classification based on the quantitative analysis of ultrasound data originating from the tissue surrounding the tumour. A total of 116 patients took part in the study after qualifying for biopsy due to suspicious breast lesions. The RF signals collected from the tumour and tumour surroundings were processed to determine quantitative measures consisting of the Nakagami distribution shape parameter, entropy, and texture parameters. The utility of the parameters for the classification of benign and malignant lesions was assessed against the results of histopathology. The best multi-parametric classifier reached an AUC of 0.92 and of 0.83 for outer and intra-tumour data, respectively. A classifier composed of both types of parameters, based on signals scattered in the tumour and in the surrounding tissue, classified breast lesions with a sensitivity of 93%, specificity of 88%, and AUC of 0.94. Among the 4095 multi-parameter classifiers tested, in only eight cases was classification based on data from the tissue surrounding the tumour worse than classification using tumour data. The presented results indicate the high usefulness of QUS analysis of echoes from the tissue surrounding the tumour in the classification of breast lesions.
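The Nakagami shape parameter listed among the QUS features has a standard moment-based estimator; a minimal NumPy sketch on simulated speckle, illustrative rather than the study's processing chain:

```python
import numpy as np

def nakagami_shape(envelope):
    """Moment-based estimate of the Nakagami shape parameter m.

    m = (E[R^2])^2 / Var(R^2), where R is the echo-envelope amplitude;
    fully developed speckle (Rayleigh envelope) gives m close to 1.
    """
    r2 = np.asarray(envelope, dtype=float) ** 2
    return float(r2.mean() ** 2 / r2.var())

rng = np.random.default_rng(1)
rayleigh_env = rng.rayleigh(scale=1.0, size=200_000)  # simulated speckle envelope
m_hat = nakagami_shape(rayleigh_env)                  # expect roughly 1.0
```

Departures of m from the speckle value in tumour or peritumoral regions are what make it usable as a tissue-characterization feature.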
Affiliation(s)
- Ziemowit Klimonda
- Institute of Fundamental Technological Research, Department of Ultrasound, Pawińskiego 5b, 02-106, Warsaw, Poland
- Piotr Karwat
- Institute of Fundamental Technological Research, Department of Ultrasound, Pawińskiego 5b, 02-106, Warsaw, Poland
- Katarzyna Dobruch-Sobczak
- Institute of Fundamental Technological Research, Department of Ultrasound, Pawińskiego 5b, 02-106, Warsaw, Poland; Maria Skłodowska-Curie Memorial Cancer Centre and Institute of Oncology, Wawelska 15b, 02-034, Warsaw, Poland
- Jerzy Litniewski
- Institute of Fundamental Technological Research, Department of Ultrasound, Pawińskiego 5b, 02-106, Warsaw, Poland