1.
Yuan Z, Yang D, Wang W, Zhao J, Liang Y. Self super-resolution of optical coherence tomography images based on deep learning. Optics Express 2023; 31:27566-27581. [PMID: 37710829] [DOI: 10.1364/oe.495530]
Abstract
Optical coherence tomography (OCT) is a widely used medical imaging modality, and much research has been devoted to improving its resolution. We developed a deep-learning-based OCT self super-resolution (OCT-SSR) pipeline that improves the axial resolution of OCT images using the high-resolution and low-resolution spectral data collected by the OCT system. In this pipeline, enhanced super-resolution asymmetric generative adversarial networks were built to improve the network outputs without increasing complexity. The feasibility and effectiveness of the approach were demonstrated experimentally on images of biological samples collected by home-made spectral-domain OCT and swept-source OCT systems. More importantly, we found that the OCT-SSR method markedly suppresses the sidelobes in the original images while improving the resolution, which helps reduce pseudo-signals in OCT imaging when a light source with a non-Gaussian spectrum is used. We believe the OCT-SSR method has broad prospects for breaking the limitation that the source bandwidth places on the axial resolution of OCT systems.
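The bandwidth limitation the authors aim to break follows from the standard axial-resolution formula for an OCT source with a Gaussian spectrum, Δz = (2 ln 2 / π) · λ₀² / Δλ. A minimal sketch of that relationship (the 850 nm / 50 nm example values are illustrative, not from the paper):

```python
import math

def oct_axial_resolution(center_wavelength_m: float, bandwidth_m: float) -> float:
    """Axial resolution (in metres) of an OCT system, assuming a Gaussian
    source spectrum: dz = (2 ln 2 / pi) * lambda0^2 / dlambda."""
    return (2.0 * math.log(2) / math.pi) * center_wavelength_m ** 2 / bandwidth_m

# Example: an 850 nm source with 50 nm bandwidth (hypothetical values)
dz = oct_axial_resolution(850e-9, 50e-9)   # roughly 6.4 micrometres
```

Because Δz scales inversely with Δλ, improving axial resolution in hardware means widening the source bandwidth, which is exactly the constraint a learned super-resolution stage tries to sidestep.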
2.
Sineesh A, Shankar MR, Hareendranathan A, Panicker MR, Palanisamy P. Single Image based Super Resolution Ultrasound Imaging Using Residual Learning of Wavelet Features. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2023; 2023:1-4. [PMID: 38083258] [DOI: 10.1109/embc40787.2023.10340196]
Abstract
The generation of super-resolution ultrasound images from the low-resolution (LR) brightness-mode (B-mode) images acquired by portable point-of-care ultrasound systems has attracted considerable interest in the recent past. With the advances in deep learning, there have been numerous attempts in this direction. However, existing approaches feed the image directly into the neural network. In this work, a stationary wavelet transform (SWT) decomposition is employed to extract features from the input LR image, which are passed through a modified residual network; the learned features are then combined using the inverse SWT to reconstruct the high-resolution (HR) image at a 4× scale factor. Compared with state-of-the-art approaches, the proposed approach yields an improved high-resolution reconstruction. Clinical relevance: the proposed approach will enable the generation of high-resolution images from portable ultrasound systems, allowing for easier interpretation and faster diagnostics in primary care settings.
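As a rough illustration of the wavelet front end described above (a toy stand-in, not the authors' network), a one-level undecimated Haar transform splits an image into four sub-bands of the same size whose sum reconstructs the input exactly, which is what allows learned sub-band residuals to be recombined with an inverse SWT:

```python
import numpy as np

def swt_haar_2d(x):
    """One level of an undecimated (stationary) 2-D Haar transform.
    Returns LL, LH, HL, HH sub-bands, each the same size as x."""
    def low(a, ax):  return (a + np.roll(a, -1, axis=ax)) / 2.0
    def high(a, ax): return (a - np.roll(a, -1, axis=ax)) / 2.0
    L, H = low(x, 0), high(x, 0)                          # filter along rows
    return low(L, 1), high(L, 1), low(H, 1), high(H, 1)   # then along columns

def iswt_haar_2d(LL, LH, HL, HH):
    """Inverse of the transform above: the four sub-bands sum back to x."""
    return LL + LH + HL + HH

img = np.random.default_rng(0).random((8, 8))
bands = swt_haar_2d(img)          # four same-sized feature maps for a network
recon = iswt_haar_2d(*bands)      # perfect reconstruction
```

In practice one would use a full SWT filter bank (e.g. PyWavelets' `swt2`/`iswt2`); the point here is only that the decomposition is redundant and exactly invertible.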
3.
Cheikh Sidiya A, Li X. Toward extreme face super-resolution in the wild: A self-supervised learning approach. Frontiers in Computer Science 2022. [DOI: 10.3389/fcomp.2022.1037435]
Abstract
Extreme face super-resolution (FSR), that is, improving the resolution of face images by an extreme scaling factor (often greater than ×8), has remained underexplored in the low-level vision literature. Extreme FSR in the wild must address the challenges of both unpaired training data and unknown degradation factors. Inspired by the latest advances in image super-resolution (SR) and self-supervised learning (SSL), we propose a novel two-step approach to FSR that introduces a mid-resolution (MR) image as a stepping stone. In the first step, we leverage ideas from SSL-based SR reconstruction of medical images (e.g., MRI and ultrasound) to model the realistic degradation process of face images in the real world; in the second step, we extract latent codes from MR images and interpolate them in a self-supervised manner to facilitate artifact-suppressed image reconstruction. Our two-step extreme FSR can be interpreted as a combination of self-supervised CycleGAN (step 1) and StyleGAN (step 2) that overcomes the barrier of critical resolution in face recognition. Extensive experimental results show that our two-step approach significantly outperforms existing state-of-the-art FSR techniques, including FSRGAN, Bulat's method, and PULSE, especially for large scaling factors such as ×64.
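The latent-code interpolation in step 2 can be sketched as a simple convex blend of two codes; the 512-dimensional vectors below are a hypothetical stand-in for StyleGAN-style latents, not the paper's actual pipeline:

```python
import numpy as np

def lerp_codes(z1, z2, t):
    """Linear interpolation between two latent codes; t in [0, 1]."""
    return (1.0 - t) * z1 + t * z2

rng = np.random.default_rng(1)
z_a = rng.normal(size=512)        # hypothetical latent code of image A
z_b = rng.normal(size=512)        # hypothetical latent code of image B
midpoint = lerp_codes(z_a, z_b, 0.5)   # blended code fed to the decoder
```

A generator decoding `midpoint` would then produce an image blending the identities of A and B; self-supervision comes from constraining such interpolations to remain on the face manifold.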
4.
Ma Y, Qi Y, Li Q, Zhu S, Zhao W, Zhang Y. The Expression Significance of LPa, BNP, and MCP-1 in CHD Patients and Their Relationship with Echocardiographic Parameters. Contrast Media & Molecular Imaging 2022; 2022:5165203. [PMID: 36101804] [PMCID: PMC9440810] [DOI: 10.1155/2022/5165203]
Abstract
This study investigates the serum expression levels of lipoprotein A (LPa), B-type natriuretic peptide (BNP), and monocyte chemoattractant protein-1 (MCP-1) in patients with coronary heart disease (CHD) and analyzes the correlation between these markers and echocardiographic parameters. The clinical data of 132 CHD patients in our hospital from January 2021 to October 2021 were retrospectively analyzed and included in the CHD group. Another 100 healthy people who came to our hospital for a general physical examination were selected as the control group. Serum MCP-1 and BNP were measured by ELISA and serum LPa by immunoturbidimetry, and the expression of MCP-1, BNP, and LPa was compared between the two groups. The experiments show that serum MCP-1, BNP, and LPa in the control group were significantly lower than in the CHD group (P < 0.05). Echocardiography showed that left ventricular ejection fraction (LVEF) in the CHD group was significantly lower than in the control group, while left ventricular end-systolic volume (LVESV) and left ventricular end-diastolic volume (LVEDV) were significantly higher (P < 0.05).
Affiliation(s)
- Yunpeng Ma, Yinzun Qi, Qiang Li, Shuangxiong Zhu, Wenjie Zhao, Yu Zhang: Deputy Chief Physicians, Cardiovascular Surgery, First People's Hospital of Tianshui, Tianshui 741000, China
5.
Analysis of Clinical Manifestations and Imaging of COVID-19 Patients in Intensive Care. Contrast Media & Molecular Imaging 2022; 2022:9697285. [PMID: 35833079] [PMCID: PMC9226972] [DOI: 10.1155/2022/9697285]
Abstract
Objective. To summarize and analyze the clinical and CT findings of severe COVID-19 patients. Methods. From February 11 to March 31, 2020, 61 COVID-19 patients in intensive care in the E1-3 ward of Tongji Hospital were analyzed retrospectively. Results. The main clinical manifestations were cough and expectoration in 56 cases (91.8%), shortness of breath and chest tightness in 48 cases (78.7%), fever in 61 cases (100%), muscle ache and weakness in 40 cases (65.6%), diarrhea or vomiting in 8 cases (13.1%), and headache in 4 cases (6.6%). After admission, the leukocyte count was normal in 40 cases (57.7%), higher in 9 cases (15.4%), and lower in 12 cases (26.9%). The lymphocyte count decreased in 53 cases (86.9%). CRP was increased in 29 cases (47.5%); PCT in 15 cases (24.6%); ESR in 38 cases (62.3%); D-dimer in 39 cases (63.9%); ALT/AST in 40 cases (65.6%); CK/CK-MB in 8 cases (13.1%); troponin I in 6 cases (9.8%); NT-proBNP in 35 cases (57.4%); IL-1 in 5 cases (8.2%); IL-2 receptor in 28 cases (45.9%); IL-6 in 23 cases (37.7%); IL-8 in 15 cases (24.6%); IL-10 in 12 cases (19.7%); and TNF in 22 cases (36.1%). Chest CT showed right-lung lesions more extensive than left-lung lesions in 38 cases (65.5%), left-lung lesions more extensive in 20 cases (34.5%), lower-lobe lesions more extensive than upper-lobe lesions in 42 cases (72.5%), upper-lobe lesions more extensive in 6 cases (10.3%), and roughly equal upper and lower involvement in 10 cases (17.2%). Ground-glass opacity (GGO) was found in 12 cases (20.7%); GGO with focal consolidation in 38 cases (65.5%); small patchy areas of increased density with fuzzy edges in 24 cases (41.4%); large consolidation in 20 cases (34.5%); reticular or fibrous cords in 54 cases (93.1%); and air bronchogram in 8 cases (13.8%). Conclusions. COVID-19 patients in intensive care have no specific clinical manifestations or CT findings. However, analysis and summary of the relevant data can help assess disease severity, decide the timing of treatment, and predict prognosis.
6.
Heidari A, Jafari Navimipour N, Unal M, Toumaj S. Machine learning applications for COVID-19 outbreak management. Neural Computing and Applications 2022; 34:15313-15348. [PMID: 35702664] [PMCID: PMC9186489] [DOI: 10.1007/s00521-022-07424-w]
Abstract
Recently, the COVID-19 epidemic has resulted in millions of deaths and has impacted practically every area of human life. Several machine learning (ML) approaches are employed in the medical field in many applications, including detecting and monitoring patients, notably in COVID-19 management. Different medical imaging systems, such as computed tomography (CT) and X-ray, offer ML an excellent platform for combating the pandemic. Because of this need, a significant body of research has been produced; in this work, we therefore employed a systematic literature review (SLR) to cover all aspects of the outcomes reported in related papers. Imaging methods, survival analysis, forecasting, economic and geographical issues, monitoring methods, medication development, and hybrid apps are the seven key categories of applications employed in the COVID-19 pandemic. Convolutional neural networks (CNNs), long short-term memory networks (LSTMs), recurrent neural networks (RNNs), generative adversarial networks (GANs), autoencoders, random forests, and other ML techniques are frequently used in such scenarios. Next, cutting-edge applications of ML techniques to pandemic medical issues are discussed, and various problems and challenges linked with ML applications for this pandemic are reviewed. Additional research is expected in the coming years to limit the spread of the disease and improve catastrophe management. According to the data, most papers are evaluated mainly on characteristics such as flexibility and accuracy, while other factors such as safety are overlooked. Also, Keras was the most frequently used library in the studies reviewed, accounting for 24.4 percent, and medical imaging systems are employed for diagnostic purposes in 20.4 percent of applications.
Affiliation(s)
- Arash Heidari: Department of Computer Engineering, Tabriz Branch, Islamic Azad University, Tabriz, Iran; Department of Computer Engineering, Shabestar Branch, Islamic Azad University, Shabestar, Iran
- Mehmet Unal: Department of Computer Engineering, Nisantasi University, Istanbul, Turkey
- Shiva Toumaj: Urmia University of Medical Sciences, Urmia, Iran
7.
Coal Mine Personnel Safety Monitoring Technology Based on Uncooled Infrared Focal Plane Technology. Processes (Basel) 2022. [DOI: 10.3390/pr10061142]
Abstract
In an effort to overcome the difficulty of real-time early warning with traditional infrared imaging technology, caused by the complex working environment in coal mines, this paper proposes a mine early-warning method based on uncooled infrared focal plane technology. The infrared thermal spectrogram of the detected object is displayed as a pseudo-color image with high resolution and high sensitivity, enabling real-time detection and early warning for personnel safety in modern mines. The multipoint compression correction algorithm based on human visual characteristics divides the response units of all acquisition units into gray intervals according to a threshold value, sets the corresponding parameters in each interval, and finally compresses each interval using a two-point correction algorithm. The volume of stored data is the sum of the calibration curve and the data from an encoding table corrected by a MATLAB simulation, and the number of CPU cycles was measured with a CCS 3.3 clock-based calculation. The results showed that when the blackbody temperature reached 115 °C, the nonuniformity before correction was 6.32% and after the multipoint correction it was 2.99%, indicating that the proposed algorithm has good denoising ability. The algorithm occupied 18,257,363 CPU cycles per frame at a frequency of 29.97 Hz. The sharpness of the compressed infrared images was obviously improved, and the uniformity was better. The proposed method can meet modern mines' early-warning needs for personnel search and rescue, equipment supervision, and dangerous-area detection, supporting the goal of developing smart mines and ensuring safety in coal mine production.
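The two-point correction the paper builds on can be sketched as follows: per-pixel gain and offset are solved from two uniform blackbody reference frames and then applied to every raw frame. The simulated sensor model and flux values below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
gain   = 1.0 + 0.05 * rng.standard_normal(shape)   # per-pixel gain spread (simulated)
offset = 10.0 * rng.standard_normal(shape)         # per-pixel offset spread (simulated)

def sensor(flux):
    """Simulated raw detector response: fixed-pattern gain/offset noise."""
    return gain * flux + offset

# Calibration: uniform blackbody frames at two known flux levels
f_lo, f_hi = 100.0, 1000.0
r_lo, r_hi = sensor(f_lo), sensor(f_hi)
g = (f_hi - f_lo) / (r_hi - r_lo)    # per-pixel correction gain
o = f_lo - g * r_lo                  # per-pixel correction offset

def correct(raw):
    return g * raw + o

def nonuniformity(frame):
    """Flat-field nonuniformity as std/mean, in percent."""
    return 100.0 * frame.std() / frame.mean()

raw   = sensor(500.0)    # uniform scene seen through the noisy sensor
fixed = correct(raw)     # fixed-pattern noise removed
```

For a purely linear sensor the two-point correction is exact; the paper's multipoint scheme extends this by fitting separate two-point corrections per gray interval to handle nonlinearity.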
8.
Zadeh FA, Ardalani MV, Salehi AR, Jalali Farahani R, Hashemi M, Mohammed AH. An Analysis of New Feature Extraction Methods Based on Machine Learning Methods for Classification Radiological Images. Computational Intelligence and Neuroscience 2022; 2022:3035426. [PMID: 35634075] [PMCID: PMC9131703] [DOI: 10.1155/2022/3035426]
Abstract
The lungs are COVID-19's most important target, as the virus induces inflammatory changes that can lead to respiratory insufficiency. The reduced supply of oxygen to human cells harms patients, and multiorgan failure with a high mortality rate may occur in certain circumstances. Radiological pulmonary evaluation is a vital part of therapy for the critically ill COVID-19 patient, but evaluating radiological imagery is a specialized activity that requires a radiologist. Applying artificial intelligence to radiological images is therefore an essential topic. Using a deep machine learning technique to identify morphological differences in the lungs of COVID-19-infected patients could yield promising results on digital chest X-ray images, since minor differences that are not detectable or apparent to the human eye may be detected using computer vision algorithms. This paper uses machine learning methods to diagnose COVID-19 on chest X-rays, with very promising findings. The dataset includes COVID-19-enhanced X-ray images for disease detection, gathered from two publicly accessible datasets. Feature extraction is done using gray-level co-occurrence matrix (GLCM) methods. K-nearest neighbor, support vector machine, linear discriminant analysis, naïve Bayes, and convolutional neural network methods are used for the classification of patients. According to the findings, the efficiency of convolutional neural networks, which need less human involvement with the imaging modalities, outperforms the other traditional machine learning approaches.
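A minimal sketch of the gray-level co-occurrence matrix (GLCM) feature extraction mentioned above, for a single pixel offset. Real pipelines typically combine several offsets and angles and many derived statistics (e.g. via scikit-image's `graycomatrix`); the tiny two-level image here is purely illustrative:

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Count co-occurrences of gray levels at a fixed pixel offset (dy, dx)."""
    dy, dx = offset
    h, w = img.shape
    M = np.zeros((levels, levels), dtype=np.int64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M

def contrast(M):
    """Haralick contrast: sum of (i - j)^2 weighted by normalized counts."""
    P = M / M.sum()
    i, j = np.indices(M.shape)
    return float(((i - j) ** 2 * P).sum())

img = np.array([[0, 0, 1],
                [0, 1, 1]])          # toy 2-level "image"
M = glcm(img, levels=2)              # co-occurrences of horizontal neighbors
c = contrast(M)
```

Statistics like contrast, homogeneity, and energy computed from such matrices form the feature vectors fed to the KNN/SVM/LDA/naïve Bayes classifiers.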
Affiliation(s)
- Mohammadreza Vazifeh Ardalani: Robotics Research Laboratory, Center of Excellence in Experimental Solid Mechanics and Dynamics, School of Mechanical Engineering, Iran University of Science and Technology, Tehran, Iran
- Ali Rezaei Salehi: Industrial Engineering Department, Technical and Engineering Faculty, University of Science and Culture, Tehran, Iran
- Mandana Hashemi: School of Industrial and Information Engineering, Politecnico di Milano University, Milan, Italy
- Adil Hussein Mohammed: Department of Communication and Computer Engineering, Faculty of Engineering, Cihan University-Erbil, Erbil, Kurdistan Region, Iraq
9.
Hajipour Khire Masjidi B, Bahmani S, Sharifi F, Peivandi M, Khosravani M, Hussein Mohammed A. CT-ML: Diagnosis of Breast Cancer Based on Ultrasound Images and Time-Dependent Feature Extraction Methods Using Contourlet Transformation and Machine Learning. Computational Intelligence and Neuroscience 2022; 2022:1493847. [PMID: 35655521] [PMCID: PMC9155970] [DOI: 10.1155/2022/1493847]
Abstract
Breast diseases are a group of diseases that appear in different forms; among the most important and common in women is breast cancer. A machine learning system can be trained with an algorithm to identify the specific patterns needed to diagnose breast cancer, so designing a feature extraction method is essential to decrease the computation time. In this article, a two-dimensional contourlet transform is applied to input images from the Breast Cancer Ultrasound Dataset. The sub-band contourlet coefficients are modeled using a time-dependent model, whose parameters form the primary feature vector. The extracted features are applied separately to classification methods to determine breast cancer classes and diagnose tumor types. We used the time-dependent approach on contourlet sub-band features from three groups of test samples: benign, malignant, and healthy controls. The final features of the 1200 ultrasound images in the three categories are trained with k-nearest neighbor, support vector machine, decision tree, random forest, and linear discriminant analysis approaches, and the results are recorded. The decision tree results show sensitivities of 87.8%, 92.0%, and 87.0% for the normal, benign, and malignant classes, respectively. The presented feature extraction method is compatible with the decision tree approach for this problem; based on the results, the decision tree architecture is the more accurate and compatible method for diagnosing breast cancer using ultrasound images.
Affiliation(s)
- Soufia Bahmani: Department of Computer Engineering and Information Technology, Amirkabir University of Technology, Tehran 15875-4413, Iran
- Fatemeh Sharifi: Department of Electrical Engineering, University of Applied Science and Technology, Bushehr, Iran
- Mohammad Peivandi: Hochschule für Technik und Wirtschaft Berlin (HTW Berlin), Berlin, Germany
- Mohammad Khosravani: Department of Electrical & Computer Engineering, Arak University of Technology, Arak, Iran
- Adil Hussein Mohammed: Department of Communication and Computer Engineering, Faculty of Engineering, Cihan University-Erbil, Kurdistan Region, Iraq
10.
Optimization of Spatial Resolution and Image Reconstruction Parameters for the Small-Animal Metis™ PET/CT System. Electronics 2022. [DOI: 10.3390/electronics11101542]
Abstract
Purpose: The aim of this study was to optimize the spatial resolution and the image-reconstruction parameters related to image quality in an iterative reconstruction algorithm for the small-animal Metis™ PET/CT system. Methods: We used a homemade Derenzo phantom to evaluate image quality by visual assessment, signal-to-noise ratio, contrast, coefficient of variation, and contrast-to-noise ratio of the 0.8 mm hot rods in eight slices at the center of the phantom PET images. A healthy-mouse study was performed to analyze the influence of the optimal reconstruction parameters and the Gaussian post-filter FWHM. Results: In the phantom study, image quality was best when the phantom was placed at the end with its central axis parallel to the X-axis of the system, using 30 to 40 iterations, a 0.314 mm reconstructed voxel size, and a 1.57 mm Gaussian post-filter FWHM. The optimized spatial resolution reached 0.6 mm. In the animal study, a voxel size of 0.472 mm, 30 to 40 iterations, and a 2.36 mm Gaussian post-filter FWHM were suitable. Conclusions: Our results indicate that optimal imaging conditions and reconstruction parameters are essential for obtaining high-resolution images and quantitative accuracy, especially for the high-precision recognition of tiny lesions.
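The image-quality metrics used in the phantom study can be sketched in a few lines; the ROI values below are synthetic stand-ins, not measurements from the Metis™ system:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a hot-rod ROI: mean over standard deviation."""
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio of a ROI against a background region."""
    return abs(roi.mean() - background.mean()) / background.std()

def cov(roi):
    """Coefficient of variation: relative noise within the ROI."""
    return roi.std() / roi.mean()

# Hypothetical voxel samples from a hot rod and the surrounding background
rng = np.random.default_rng(0)
hot = rng.normal(10.0, 1.0, 1000)
bg  = rng.normal(2.0, 1.0, 1000)
```

Sweeping iteration count, voxel size, and post-filter FWHM while tracking these metrics is one way to reproduce the kind of parameter optimization the study describes.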
11.
FDCNet: Presentation of the Fuzzy CNN and Fractal Feature Extraction for Detection and Classification of Tumors. Computational Intelligence and Neuroscience 2022; 2022:7543429. [PMID: 35571692] [PMCID: PMC9106477] [DOI: 10.1155/2022/7543429]
Abstract
The detection of brain tumors using magnetic resonance imaging is currently one of the biggest challenges in artificial intelligence and medical engineering. It is important to identify brain tumors, which can be benign or malignant, as early as possible, since their growth can be fatal. Creating an intelligent system for diagnosing brain tumors from MRI is an integral part of medical engineering, as it helps doctors detect tumors early and oversee treatment throughout recovery. In this study, a comprehensive approach to diagnosing benign and malignant brain tumors is proposed. The proposed method consists of four parts: image enhancement to reduce noise and unify image size, contrast, and brightness; image segmentation based on morphological operators; feature extraction, including dimensionality reduction and feature selection based on a fractal model; and finally, feature refinement according to the segmentation and selection of the optimal class with a fuzzy deep convolutional neural network. The BraTS dataset is used as the magnetic resonance imaging data in the experiments. A series of evaluation criteria is compared with previous methods; the accuracy of the proposed method is 98.68%, a significant result.
12.
Super-Resolution Ultrasound Imaging Scheme Based on a Symmetric Series Convolutional Neural Network. Sensors 2022; 22:3076. [PMID: 35459061] [PMCID: PMC9029455] [DOI: 10.3390/s22083076]
Abstract
In this paper, we propose a symmetric series convolutional neural network (SS-CNN), which is a novel deep convolutional neural network (DCNN)-based super-resolution (SR) technique for ultrasound medical imaging. The proposed model comprises two parts: a feature extraction network (FEN) and an up-sampling layer. In the FEN, the low-resolution (LR) counterpart of the ultrasound image passes through a symmetric series of two different DCNNs. The low-level feature maps obtained from the subsequent layers of both DCNNs are concatenated in a feed forward manner, aiding in robust feature extraction to ensure high reconstruction quality. Subsequently, the final concatenated features serve as an input map to the latter 2D convolutional layers, where the textural information of the input image is connected via skip connections. The second part of the proposed model is a sub-pixel convolutional (SPC) layer, which up-samples the output of the FEN by multiplying it with a multi-dimensional kernel followed by a periodic shuffling operation to reconstruct a high-quality SR ultrasound image. We validate the performance of the SS-CNN with publicly available ultrasound image datasets. Experimental results show that the proposed model achieves a high-quality reconstruction of the ultrasound image over the conventional methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), while providing compelling SR reconstruction time.
13.
Chen Y, Wang J, Gao W, Yu D, Shou X. Construction and Clinical Application Effect of General Surgery Patient-Oriented Nursing Information Platform Using Cloud Computing. Journal of Healthcare Engineering 2022; 2022:8273701. [PMID: 35368952] [PMCID: PMC8975652] [DOI: 10.1155/2022/8273701]
Abstract
This paper builds a nursing information platform (NIP) for general surgery (GS) patients and explores its clinical application based on cloud computing (CC) technology. Specifically, the work first analyzes the characteristics of GS patients, the CC concept, the three-tier service mode of CC, and the cloud data center (CDC). Secondly, based on the principles of the overall system design, the evaluation indexes of the medical-care end, patient end, family end, and management end are constructed using Visual Studio 2010. Thirdly, expert evaluation and user evaluation are used to analyze the clinical application of the proposed system, with SPSS used for the statistical analysis. The first and second rounds of expert evaluation show an expert authority coefficient greater than 0.7, indicating a good degree of expert authority, and the proposed CC-based, GS-patient-oriented NIP system is universal. The user evaluation covered 15 doctors and nurses, 14 patients, and 18 family members, most of whom support applying the proposed system and believe that it brings convenience and improves work efficiency. In short, more incentives should be offered to build NIPs for GS patients.
Affiliation(s)
- Yuanyuan Chen: General Surgery, Zhejiang Hospital, Hangzhou 310012, China
- Jingjing Wang: Health Management Center, Zhejiang Hospital, Hangzhou 310012, China
- Weiwei Gao: The Nursing Department, Zhejiang Hospital, Hangzhou 310012, China
- Dongli Yu: General Surgery, Zhejiang Hospital, Hangzhou 310012, China
- Xiaoxia Shou: General Surgery, Zhejiang Hospital, Hangzhou 310012, China
15.
Xiao Y. Key Technologies of New Type of Intravascular Ultrasound Image Processing. Frontiers in Surgery 2022; 8:770106. [PMID: 35141268] [PMCID: PMC8818725] [DOI: 10.3389/fsurg.2021.770106]
Abstract
Since the start of the 21st century, the application of ultrasound technology has developed rapidly, and intravascular ultrasound has been widely used in the diagnosis and treatment of cardiovascular diseases. With the help of computer image processing, it can provide clinicians with more accurate diagnostic information and thereby improve the success rate of clinical treatment. On this basis, this article reviews the development history of intravascular ultrasound, explores the principles of new intravascular ultrasound technology, and analyzes its applications. The preprocessing of intravascular ultrasound image data is discussed, covering the acquisition of image data and image analysis. The article then explores combining new intravascular ultrasound technology with other imaging examinations, such as X-ray, to reconstruct intravascular ultrasound image sequences with three-dimensional imaging technology, providing doctors with a clearer view of the morphology and properties of vessel-wall lesions. With a more accurate diagnosis of the lesion, a more detailed and precise treatment plan can be given, which has extremely high clinical application value.
16.
Liu H, Liu J, Chen F, Shan C. Progressive Residual Learning with Memory Upgrade for Ultrasound Image Blind Super-resolution. IEEE Journal of Biomedical and Health Informatics 2022; 26:4390-4401. [PMID: 35041614] [DOI: 10.1109/jbhi.2022.3142076]
Abstract
For clinical medical diagnosis and treatment, image super-resolution (SR) technology helps improve ultrasonic imaging quality and thus the accuracy of disease diagnosis. However, because of differences in sensing devices and transmission media, the resolution degradation of ultrasound imaging in real scenes is uncontrollable, and the blur kernel is usually unknown. This makes current end-to-end SR networks perform poorly when applied to ultrasonic images. Aiming at effective SR in real ultrasound medical scenes, we propose a blind deep SR method based on progressive residual learning and memory upgrade. Specifically, we estimate an accurate blur kernel from the spatial attention map block of the low-resolution (LR) ultrasound image through a multi-label classification network, and then construct three modules for ultrasound image blind SR: an up-sampling (US) module, a residual learning (RL) module, and a memory upgrading (MU) module. The US module upscales the input, and the up-sampled residual result is used for SR reconstruction. The RL module approximates the original LR image, continuously generating the updated residual and feeding it to the next US module. The MU module stores all progressively learned residuals, increasing the interactions between the US and RL modules and augmenting detail recovery. Extensive experiments and evaluations on the benchmark CCA-US and US-CASE datasets demonstrate that the proposed approach achieves better performance than state-of-the-art methods.
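The blind-SR setting described above assumes the classic degradation model LR = (HR ⊗ k)↓s with an unknown kernel k. A minimal sketch of that forward model follows; the isotropic Gaussian kernel and simple decimation are simplifying assumptions, not the paper's estimated kernels:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized isotropic Gaussian blur kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def degrade(hr, kernel, scale):
    """LR = (HR convolved with the blur kernel), then decimated by `scale`."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(hr, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(hr)
    for i in range(kh):                 # direct convolution via shifted sums
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + hr.shape[0], j:j + hr.shape[1]]
    return out[::scale, ::scale]

hr = np.random.default_rng(0).random((16, 16))
lr = degrade(hr, gaussian_kernel(5, 1.5), scale=2)   # 16x16 -> 8x8
```

A blind SR method must effectively invert `degrade` without knowing `kernel`, which is why the paper first estimates the kernel from the LR input before reconstruction.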
|
17
|
Wang Z, Mou Y, Li H, Yang R, Jia Y. Impact of Early Intravenous Haemostatic Drugs on Brain Haemorrhage Patients and Their Image Segmentation Based on RGB-D Images. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:4608648. [PMID: 35035838 PMCID: PMC8759877 DOI: 10.1155/2022/4608648] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 12/08/2021] [Accepted: 12/11/2021] [Indexed: 11/19/2022]
Abstract
Cerebral haemorrhage is a serious subtype of stroke; most patients experience short-term haematoma enlargement leading to worsening neurological symptoms and death. The main hemostatic agents currently used for cerebral haemorrhage are antifibrinolytics and recombinant coagulation factor VIIa, yet there is no clinical evidence that patients with cerebral haemorrhage benefit from hemostatic treatment. We provide an overview of the mechanisms of haematoma expansion in cerebral haemorrhage and of research progress on commonly used hemostatic drugs. To improve the semantic segmentation accuracy for cerebral haemorrhage, a segmentation method based on RGB-D images is proposed. First, a disparity map was obtained with a semi-global stereo matching algorithm and fused with the RGB image to form a four-channel RGB-D image, from which a sample library was built. Second, two convolutional neural networks with different structures were trained, each with two learning-rate adjustment strategies. Finally, the trained networks were tested and their results compared. The 146 head CT images from the Chinese intracranial haemorrhage image database were divided into a training set and a test set using the random number table method. Haematoma volume in the validation set was measured by four methods: manual segmentation, algorithmic segmentation, the exact Tada formula, and the traditional Tada formula. With manual segmentation as the "gold standard," the other three methods were tested for consistency. The results showed that algorithmic segmentation had the lowest percentage error, at 15.54 (8.41, 23.18)%.
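The four-channel fusion step described above (disparity map stacked onto the RGB image) can be sketched as follows. The channel ordering and the min-max normalisation are assumptions for illustration; the paper's stereo-matching stage that produces the disparity map is not reproduced here.

```python
import numpy as np

def fuse_rgbd(rgb, disparity):
    """Stack a disparity map onto an RGB image to form a four-channel
    RGB-D input (channel order and normalisation are assumptions)."""
    assert rgb.shape[:2] == disparity.shape, "maps must be pixel-aligned"
    d = disparity.astype(np.float32)
    d = (d - d.min()) / (np.ptp(d) + 1e-8)  # normalise disparity to [0, 1]
    rgb = rgb.astype(np.float32) / 255.0    # normalise RGB to [0, 1]
    return np.dstack([rgb, d])              # H x W x 4 sample for training
```

Each fused H x W x 4 array would then serve as one sample in the training library for the segmentation network.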
Affiliation(s)
- Zhenzhen Wang, Department of Neurology, Gucheng County Hospital, Hengshui 253800, China
- Yating Mou, Department of Neurology, Gucheng County Hospital, Hengshui 253800, China
- Hao Li, Department of Neurology, Gucheng County Hospital, Hengshui 253800, China
- Rui Yang, Department of Neurology, Gucheng County Hospital, Hengshui 253800, China
- Yanxun Jia, Department of Neurology, Gucheng County Hospital, Hengshui 253800, China
|
18
|
Sun R, Chang R, Yu T, Wang D, Jiang L. U-Net Modelling-Based Imaging MAP Score for T1 Stage Nephrectomy: An Exploratory Study. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:1084853. [PMID: 35035806 PMCID: PMC8754594 DOI: 10.1155/2022/1084853] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/05/2021] [Revised: 12/18/2021] [Accepted: 12/24/2021] [Indexed: 11/17/2022]
Abstract
We evaluate the stability of the clinical application of the MAP scoring system based on the anatomical features of renal tumour images, explore the relevance of this scoring system to the choice of surgical procedure for patients with localized renal tumours, and investigate the effectiveness of U-net-based automated segmentation and 3D reconstruction of renal tumour images for interpretative cognitive navigation during laparoscopic T1 stage radical renal cancer surgery. A total of 5000 manually annotated kidney tumour images were used as the training set, and a stable, efficient, clinically oriented fully convolutional network model was constructed for fine-grained, regional, multi-structure automated segmentation of kidney tumour images; the modelling information was output in STL format, and a tablet computer was used to display the T1 stage kidney tumour model intraoperatively for cognitive navigation. Based on a training sample of MR images from 201 patients with T1 stage renal cancer, an adaptation of the classical U-net allows individual segmentation of important structures such as renal tumours, with 3D visualisation showing the structural relationships and the extent of tumour invasion at key surgical sites. The preoperative CT and clinical data of 225 patients with localized renal tumours treated surgically at our hospital from August 2011 to August 2012 were retrospectively analysed by three imaging physicians using the MAP scoring system, recording the total score and the variables R (maximum diameter), E (exogenous/endogenous), N (distance from the renal sinus), A (ventral/dorsal), L (relationship along the longitudinal axis of the kidney), and h (contact with the renal hilum). The total score and each variable score were statistically compared among the three observers.
Patients were divided into three groups according to the total score (low, medium, and high) and into two groups according to the surgical procedure (radical and partial resection). The correlation between the total score, each variable score, and the choice of surgical procedure was analysed. The agreement rate among the three observers for the total score and each variable score was over 90% (P ≤ 0.001). The MAP scoring system based on the anatomical features of renal tumour imaging was well stabilized, and the scores were significantly correlated with the surgical approach.
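The scoring and grouping step above can be sketched as a simple aggregation over the six variables R, E, N, A, L, and h. The per-variable scores and the cutoffs for the low/medium/high grouping below are illustrative assumptions, not values taken from the paper.

```python
def map_score(r, e, n, a, l, h):
    """Sum the six MAP variables and bin the total into a risk group.
    The thresholds (<=6 low, <=9 medium, else high) are hypothetical
    placeholders for the paper's actual grouping cutoffs."""
    total = r + e + n + a + l + h
    if total <= 6:
        group = "low"
    elif total <= 9:
        group = "medium"
    else:
        group = "high"
    return total, group
```

In the study's design, each observer's per-variable scores would be fed through such an aggregation, and the resulting totals compared across observers for agreement.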
Affiliation(s)
- Ruixue Sun, Imaging Department, Hengshui People's Hospital, Hengshui 053000, China
- Ruiting Chang, Imaging Department, Hengshui People's Hospital, Hengshui 053000, China
- Tianshu Yu, Imaging Department, Hengshui People's Hospital, Hengshui 053000, China
- Dongxin Wang, Imaging Department, Hengshui People's Hospital, Hengshui 053000, China
- Lijie Jiang, Imaging Department, Hengshui People's Hospital, Hengshui 053000, China
|
19
|
Valizadeh A, Shariatee M. The Progress of Medical Image Semantic Segmentation Methods for Application in COVID-19 Detection. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:7265644. [PMID: 34840563 PMCID: PMC8611358 DOI: 10.1155/2021/7265644] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 10/18/2021] [Indexed: 11/17/2022]
Abstract
Medical image semantic segmentation has been employed in various areas, including medical imaging, computer vision, and intelligent transportation. In this study, semantic segmentation methods are split into two categories: deep neural network-based methods and earlier traditional methods. The traditional methods and the published segmentation datasets are reviewed first. The current deep neural network-based methods are then thoroughly explored, covering fully convolutional networks, sampling methods, FCNs combined with CRF methods, dilated convolutional neural network methods, improvements in network structure, pyramid methods, multi-stage and multi-feature methods, and supervised, semi-supervised, and unsupervised methods. Finally, a general conclusion on the use of deep neural network-based advances in semantic segmentation is presented.
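One of the method families surveyed above, "extended" (dilated, or atrous) convolution, enlarges a filter's receptive field without adding parameters by sampling the input at spaced-out positions. A minimal NumPy sketch of a valid-mode dilated 2-D convolution, written for clarity rather than speed:

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=2):
    """Valid-mode 2-D convolution with a dilated (atrous) kernel.
    A k x k kernel with dilation d covers an effective window of
    (k-1)*d + 1 pixels per side, widening the receptive field."""
    kh, kw = kernel.shape
    eh = (kh - 1) * dilation + 1  # effective kernel height
    ew = (kw - 1) * dilation + 1  # effective kernel width
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input at dilated (strided) positions.
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

With a 2 x 2 kernel and dilation 2, each output pixel sums four input pixels spaced two apart, which is why such layers capture wider context at the same parameter cost as an ordinary 2 x 2 convolution.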
Affiliation(s)
- Amin Valizadeh, Department of Mechanical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
- Morteza Shariatee, Department of Mechanical Engineering, Iowa State University, Ames, IA, USA
|
20
|
Sparse data-based image super-resolution with ANFIS interpolation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06500-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Image processing is a very broad field spanning many areas, including image super-resolution (ISR), which re-represents a low-resolution image as a high-resolution one through a certain image transformation. The problem with most existing ISR methods is that they are devised for conditions in which sufficient training data are expected to be available. This article proposes a new approach to sparse-data-based (rather than sufficient-training-data-based) ISR, using an ANFIS (Adaptive Network-based Fuzzy Inference System) interpolation technique. In particular, a given set of image training data is split into subsets of sufficient and sparse training data. The typical ANFIS training process is applied to the subsets containing sufficient data, and ANFIS interpolation is employed for the rest, which contain sparse data only. Little work on sparse-data-based ISR is available in the current literature; consequently, the implementations of the proposed sparse-data-based approach, for both training and testing, are compared with state-of-the-art sufficient-data-based ISR methods. This comparison is of course very challenging, but the experimental results positively demonstrate the efficacy of the work presented herein.
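The core interpolation idea, approximating a model for a sparse-data subset from two well-trained neighbouring models, can be sketched very roughly as a weighted blend of model parameters. This is a deliberate simplification: real ANFIS interpolation also adapts antecedent membership functions and rule structure, and the dictionary-of-parameters representation here is an assumption for illustration only.

```python
def interpolate_anfis_params(params_a, params_b, weight=0.5):
    """Approximate a model for a sparse-data subset by linearly
    interpolating the parameters of two trained neighbouring models.
    `weight` = 0 returns model A's parameters, 1 returns model B's."""
    assert params_a.keys() == params_b.keys(), "models must share structure"
    return {k: (1 - weight) * params_a[k] + weight * params_b[k]
            for k in params_a}
```

Under this scheme, subsets with sufficient data get fully trained models, while a sparse subset borrows an interpolated parameter set from its nearest sufficiently trained neighbours.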
|