1
Wang B, Liao X, Ni Y, Zhang L, Liang J, Wang J, Liu Y, Sun X, Ou Y, Wu Q, Shi L, Yang Z, Lan L. High-resolution medical image reconstruction based on residual neural network for diagnosis of cerebral aneurysm. Front Cardiovasc Med 2022;9:1013031. PMID: 36337881; PMCID: PMC9632742; DOI: 10.3389/fcvm.2022.1013031.
Abstract
Objective: Cerebral aneurysms are severe cerebrovascular diseases with a hidden and critical onset that seriously threatens life and health. An effective strategy for managing intracranial aneurysms is regular diagnosis and timely treatment using CT angiography (CTA). However, unpredictable patient movement makes it challenging to capture sub-millimeter, ultra-high-resolution images in a CTA scan. To support the clinician's judgment, algorithms that improve the clarity of cerebral-aneurysm images are therefore needed. Methods: This paper investigates a three-dimensional medical-image super-resolution algorithm for cerebral aneurysms. Although super-resolution reconstruction methods have been proposed, they suffer from problems such as poor reconstruction quality and long reconstruction time. This paper therefore designs a lightweight super-resolution network based on a residual neural network. The residual block removes the batch-normalization (BN) layer, which effectively alleviates gradient problems. Because high-resolution reconstruction must treat the complete image as the object of study and preserve information fidelity, a channel-domain attention mechanism is adopted to improve the performance of the residual network. Results: The new cerebral-aneurysm dataset in this paper was obtained by CTA imaging of patients in the Department of Neurosurgery at the Second Affiliated Hospital of Guizhou Medical University. The proposed model was evaluated in terms of objective metrics, model effect, model performance, and detection comparison. On the cerebral-aneurysm dataset, PSNR and SSIM were tested at magnification factors of 2 and 4; our method scored 33.01, 28.39, 33.06, and 28.41, respectively, outperforming the traditional SRCNN, ESPCN, and FSRCNN.
The model was then applied in practice, and its effect, performance indices, and value in assisting physicians' diagnoses were assessed. The experimental results show that the high-resolution image-reconstruction model based on the residual neural network designed in this paper performs better than the other compared methods, with higher robustness, accuracy, and intuitiveness. Conclusion: With the wide application of CTA images in the clinical diagnosis of cerebral aneurysms and the growing number of application samples, this method is expected to become an auxiliary diagnostic tool that can effectively improve the diagnostic accuracy of cerebral aneurysms.
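The two design choices the abstract highlights — residual blocks with the BN layer removed, plus a channel-domain (squeeze-and-excitation-style) attention gate — can be sketched as follows. This is a minimal numpy illustration, not the paper's network: the convolutions are identity stand-ins and the attention weights are random rather than learned.

```python
import numpy as np

def conv_identity(x):
    # Stand-in for a learned 3x3 convolution; identity keeps the sketch runnable.
    return x

def relu(x):
    return np.maximum(x, 0.0)

def channel_attention(x, reduction=2):
    # Squeeze: global average pool per channel -> vector of length C.
    c = x.shape[0]
    squeeze = x.mean(axis=(1, 2))
    # Excitation: two small fully connected layers (random weights stand in
    # for learned ones), followed by a sigmoid gate in (0, 1).
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    gate = 1.0 / (1.0 + np.exp(-(w2 @ relu(w1 @ squeeze))))
    # Rescale each channel by its gate value.
    return x * gate[:, None, None]

def residual_block(x):
    # Conv -> ReLU -> Conv, channel attention, then the skip connection.
    # Note: no batch-normalization layers, matching the paper's design.
    out = conv_identity(relu(conv_identity(x)))
    out = channel_attention(out)
    return x + out

feat = np.ones((4, 8, 8))        # (channels, height, width)
out = residual_block(feat)
print(out.shape)                 # (4, 8, 8)
```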
2
Chen Q, Bai H, Che B, Zhao T, Zhang C, Wang K, Bai J, Zhao W. Super-resolution reconstruction of cytoskeleton image based on A-net deep learning network. Micromachines 2022;13:1515. PMID: 36144138; PMCID: PMC9501965; DOI: 10.3390/mi13091515.
Abstract
To date, live-cell imaging at the nanometer scale remains challenging. Even though super-resolution microscopy has enabled visualization of sub-cellular structures below the optical resolution limit, the spatial resolution is still far from sufficient for the structural reconstruction of biomolecules in vivo (e.g., microtubule fibers are only ~24 nm thick). In this study, a deep learning network named A-net was developed, and it is shown that the resolution of cytoskeleton images captured by a confocal microscope can be significantly improved by combining the A-net network with the DWDC algorithm, which is based on a degradation model. By using the DWDC algorithm to construct new datasets and exploiting A-net's features (considerably fewer layers and a relatively small dataset), the noise and flocculent structures that interfere with the cellular structure in the raw image are largely removed, and the spatial resolution is improved by a factor of 10. The investigation demonstrates a universal approach for extracting structural details of biomolecules, cells, and organs from low-resolution images.
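The abstract does not spell out the DWDC degradation model, but dataset-construction schemes of this kind typically synthesize low-resolution training pairs from high-resolution images via a blur, subsample, add-noise pipeline. A generic sketch of that pipeline (kernel size and noise level are arbitrary assumptions, not the paper's parameters):

```python
import numpy as np

def degrade(hr, scale=2, blur_size=3, noise_sigma=0.01, seed=0):
    """Simulate a low-resolution observation from a high-resolution image:
    box blur -> subsample -> additive Gaussian noise."""
    # Box blur with a uniform kernel (explicit loop for clarity).
    k = np.ones((blur_size, blur_size)) / blur_size**2
    pad = blur_size // 2
    padded = np.pad(hr, pad, mode="edge")
    blurred = np.zeros_like(hr)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(padded[i:i + blur_size, j:j + blur_size] * k)
    lr = blurred[::scale, ::scale]               # subsample by `scale`
    rng = np.random.default_rng(seed)
    return lr + noise_sigma * rng.standard_normal(lr.shape)

hr = np.linspace(0.0, 1.0, 16 * 16).reshape(16, 16)
lr = degrade(hr)
print(lr.shape)   # (8, 8)
```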
Affiliation(s)
- Qian Chen, Tianyun Zhao: School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
- Haoxin Bai, Bingchen Che, Ce Zhang, Kaige Wang, Jintao Bai, Wei Zhao: State Key Laboratory of Photon-Technology in Western China Energy, International Collaborative Center on Photoelectric Technology and Nano Functional Materials, Institute of Photonics & Photon Technology, Northwestern University, Xi’an 710127, China
3
Liu L, Chen CLP, Li S. Hallucinating color face image by learning graph representation in quaternion space. IEEE Transactions on Cybernetics 2022;52:265-277. PMID: 32224475; DOI: 10.1109/tcyb.2020.2979320.
Abstract
Recently, learning-based representation techniques have been well exploited for grayscale face image hallucination. For color images, previous methods handle only the luminance component or each color channel individually, ignoring both the abundant correlations among channels and the inherent geometric structure of the data manifold. In this article, we propose a learning-based model in quaternion space with graph representation for color face hallucination. Instead of the spatial domain, the color image is represented in the quaternion domain to preserve the correlations among color channels. Moreover, a quaternion graph is learned to smooth the quaternion feature space, which helps not only to stabilize the linear system but also to capture the inherent topology of the quaternion patch manifold. In addition, because a single low-resolution (LR) image patch provides only limited information for representation, we propose to simultaneously encode the query LR patch and a larger patch containing the surrounding pixels at the same position. The larger patch, with its richer patterns, compensates for the information lost in the query LR patch, which further strengthens the manifold-consistency assumption between the LR and HR patch spaces. Experimental results demonstrate the effectiveness of the proposed method in hallucinating color face images.
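The quaternion representation this line of work relies on can be made concrete with a small sketch: an RGB pixel is encoded as a pure quaternion 0 + R·i + G·j + B·k, and the Hamilton product mixes all three channels in every output component, which is what preserves inter-channel correlations. The `filt` values below are arbitrary illustrations, not learned weights from the paper.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

# An RGB pixel encoded as a pure quaternion: 0 + R*i + G*j + B*k.
pixel = np.array([0.0, 0.8, 0.4, 0.2])      # (w, R, G, B)
filt  = np.array([0.5, 0.1, 0.1, 0.1])      # a quaternion-valued "weight"

out = qmul(filt, pixel)
# Unlike channel-wise real multiplication, every output component mixes
# all three color channels.
print(out)
```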
4

5
Liu L, Li S, Chen CLP. Quaternion locality-constrained coding for color face hallucination. IEEE Transactions on Cybernetics 2018;48:1474-1485. PMID: 28541233; DOI: 10.1109/tcyb.2017.2703134.
Abstract
Recently, locality-constrained linear coding (LLC) has attracted increasing attention in image processing and computer vision. However, conventional real-valued LLC is designed only for grayscale images. For color images, it usually treats each color channel individually or encodes a monochrome image formed by concatenating all the channels, which ignores the correlations among channels. In this paper, we propose a quaternion-based locality-constrained coding (QLC) model for color face hallucination in quaternion space. In QLC, face images are represented as quaternion matrices. By transforming the channel images into an orthogonal feature space and encoding the coefficients in the quaternion domain, the proposed QLC inherits the advantages of both quaternion algebra and the locality-coding scheme. Hence, QLC can not only expose the true topology of the image patch manifold but also preserve the inherent correlations among color channels. Experimental results demonstrate that the proposed QLC achieves superior performance in color face hallucination compared with other state-of-the-art methods.
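As a real-valued baseline for what QLC lifts into the quaternion domain, locality-constrained coding can be sketched as: keep only the k nearest dictionary atoms, then solve a small regularized least-squares problem for the reconstruction weights. Dimensions, k, and the regularizer λ below are arbitrary toy choices.

```python
import numpy as np

def locality_constrained_code(x, D, k=3, lam=1e-3):
    """Real-valued locality-constrained coding sketch. D: atoms as columns."""
    # Locality: restrict coding to the k nearest dictionary atoms.
    dist = np.linalg.norm(D - x[:, None], axis=0)
    idx = np.argsort(dist)[:k]
    Dk = D[:, idx]
    # Ridge-regularized least squares for the reconstruction weights.
    G = Dk.T @ Dk + lam * np.eye(k)
    w = np.linalg.solve(G, Dk.T @ x)
    code = np.zeros(D.shape[1])
    code[idx] = w                 # sparse code supported on the k neighbors
    return code

rng = np.random.default_rng(1)
D = rng.standard_normal((8, 20))      # 20 atoms of dimension 8
x = D[:, 5] * 0.9                     # query nearly collinear with atom 5
code = locality_constrained_code(x, D)
print(np.argmax(np.abs(code)))
```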
6
Liu L, Chen CLP, Li S, Tang YY, Chen L. Robust face hallucination via locality-constrained bi-layer representation. IEEE Transactions on Cybernetics 2018;48:1189-1201. PMID: 28475071; DOI: 10.1109/tcyb.2017.2682853.
Abstract
Recently, locality-constrained linear coding (LLC) has drawn great attention and been widely used in image processing and computer vision tasks. However, the conventional LLC model is fragile to outliers. In this paper, we present a robust locality-constrained bi-layer representation model that simultaneously hallucinates face images and suppresses noise and outliers with the assistance of a group of training samples. The proposed scheme not only captures the nonlinear manifold structure but is also robust to outliers, achieved by incorporating a weight vector into the objective function to subtly tune the contribution of each pixel. Furthermore, a high-resolution (HR) layer is employed to compensate for the information missing from the low-resolution (LR) space during coding. The use of two layers (the LR layer and the HR layer) is expected to expose the complicated correlation between the LR and HR patch spaces, which helps to obtain the coefficients needed to reconstruct the final HR face. The experimental results demonstrate that the proposed method outperforms state-of-the-art image super-resolution methods in terms of both quantitative measurements and visual quality.
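The weight-vector idea — tuning each pixel's contribution in the objective so outliers are down-weighted — is commonly realized with iteratively reweighted least squares. The sketch below is that generic scheme, not the paper's exact bi-layer objective; the 1/(residual) reweighting rule is an illustrative assumption.

```python
import numpy as np

def robust_weights(x, D, n_iter=5, eps=1e-6):
    """Iteratively reweighted least squares: down-weight outlier pixels
    while solving for reconstruction coefficients."""
    w_pix = np.ones(x.size)                   # per-pixel confidence weights
    for _ in range(n_iter):
        # Weighted least squares for the coefficients.
        Dw = D * w_pix[:, None]
        c = np.linalg.lstsq(Dw, w_pix * x, rcond=None)[0]
        # Residual-driven reweighting: large residual -> small weight.
        r = np.abs(x - D @ c)
        w_pix = 1.0 / (r + eps)
        w_pix /= w_pix.max()                  # normalize to [0, 1]
    return c, w_pix

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 5))
c_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
x = D @ c_true
x[7] += 10.0                                  # inject a gross outlier pixel
c, w_pix = robust_weights(x, D)
print(np.round(c, 2))
```

Note how the recovered coefficients stay close to `c_true` even though pixel 7 is badly corrupted, because its weight collapses after the first iteration.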
7
Jiang J, Chen C, Huang K, Cai Z, Hu R. Noise robust position-patch based face super-resolution via Tikhonov regularized neighbor representation. Inf Sci (N Y) 2016. DOI: 10.1016/j.ins.2016.05.032.
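The Tikhonov-regularized neighbor representation named in the title has a simple closed form: w = (NᵀN + λI)⁻¹Nᵀx over the neighbor patches N, with the same weights then reused on the HR side. A toy sketch (dimensions and λ are arbitrary assumptions):

```python
import numpy as np

def tikhonov_neighbor_weights(x, neighbors, lam=0.1):
    """Ridge-regularized reconstruction weights over neighbor patches:
    w = argmin ||x - N w||^2 + lam * ||w||^2  (closed form)."""
    N = neighbors                          # columns = neighbor patches
    A = N.T @ N + lam * np.eye(N.shape[1])
    return np.linalg.solve(A, N.T @ x)

rng = np.random.default_rng(0)
N = rng.standard_normal((16, 4))           # 4 neighbor patches, dimension 16
x = N @ np.array([0.5, 0.5, 0.0, 0.0])     # query = mix of two neighbors
w = tikhonov_neighbor_weights(x, N)
print(np.round(w, 2))
```

The regularizer keeps the solve well-posed even when neighbor patches are nearly collinear, which is exactly the noise-robustness argument of the title.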
8
Wang YK, Fan CT. Single image defogging by multiscale depth fusion. IEEE Transactions on Image Processing 2014;23:4826-4837. PMID: 25248180; DOI: 10.1109/tip.2014.2358076.
Abstract
Restoration of fog-degraded images is important for the deweathering problem in computer vision. The problem is ill-posed and can be regularized within a Bayesian context using a probabilistic fusion model. This paper presents a multiscale depth fusion (MDF) method for defogging from a single image. A linear model representing the stochastic residual of nonlinear filtering is first proposed. Multiscale filtering results are probabilistically blended into a fused depth map based on this model. The fusion is formulated as an energy-minimization problem that incorporates spatial Markov dependence. An inhomogeneous Laplacian-Markov random field for the multiscale fusion, regularized with smoothing and edge-preserving constraints, is developed. A nonconvex potential, the adaptive truncated Laplacian, is devised to account for spatially variant characteristics such as edges and depth discontinuities. Defogging is solved by an alternating optimization algorithm that searches for the depth map minimizing the nonconvex potential in the random field. The MDF method is verified experimentally on real-world fog images, including cluttered-depth scenes that are challenging to defog at finer detail. The fog-free images are restored with improved contrast and vivid colors but without over-saturation. Quantitative image-quality assessment is applied to compare various defogging methods. Experimental results demonstrate that accurate depth-map estimation by the proposed edge-preserving multiscale fusion recovers high-quality images with sharp details.
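The paper's contribution is the fused depth map itself; once a depth map is available, single-image defogging methods typically invert the standard atmospheric scattering model I = J·t + A·(1 − t) with transmission t = exp(−β·depth). The sketch below uses that generic model (airlight A, extinction coefficient β, and the transmission floor are assumed constants, not the paper's estimates):

```python
import numpy as np

def recover_radiance(I, depth, A=1.0, beta=1.0, t_min=0.1):
    """Invert the standard atmospheric scattering model
    I = J * t + A * (1 - t), with transmission t = exp(-beta * depth)."""
    t = np.maximum(np.exp(-beta * depth), t_min)   # floor avoids division blow-up
    return (I - A) / t + A

# Synthesize a foggy observation from a known scene and depth map, then recover it.
J = np.array([[0.2, 0.8], [0.5, 0.1]])       # scene radiance
depth = np.array([[0.5, 1.0], [2.0, 0.3]])   # depth map (arbitrary units)
t = np.exp(-depth)
I = J * t + 1.0 * (1 - t)                    # foggy image, airlight A = 1
J_hat = recover_radiance(I, depth)
print(np.round(J_hat, 3))
```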
9
Jiang J, Hu R, Wang Z, Han Z. Face super-resolution via multilayer locality-constrained iterative neighbor embedding and intermediate dictionary learning. IEEE Transactions on Image Processing 2014;23:4220-4231. PMID: 25134081; DOI: 10.1109/tip.2014.2347201.
Abstract
Based on the assumption that the low-resolution (LR) and high-resolution (HR) manifolds are locally isometric, neighbor-embedding super-resolution algorithms try to preserve the geometry (reconstruction weights) of the LR space in the reconstructed HR space, but they neglect the geometry of the original HR space. Because of the degradation process of the LR image (e.g., noise, blurring, and down-sampling), the neighborhood relationships in the LR space do not reflect the truth. To this end, this paper proposes a coarse-to-fine face super-resolution approach via a multilayer locality-constrained iterative neighbor-embedding technique, which aims to represent the input LR patch while preserving the geometry of the original HR space. In particular, we iteratively update the LR patch representation and the estimated HR patch, while an intermediate dictionary-learning scheme bridges the LR manifold and the original HR manifold. The proposed method faithfully captures the intrinsic image degradation shift and enhances the consistency between the reconstructed and original HR manifolds. Experiments on face super-resolution with the CAS-PEAL-R1 database and real-world images demonstrate the power of the proposed algorithm.
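The baseline neighbor-embedding step that this coarse-to-fine method iterates on can be sketched as follows: find the k nearest LR training patches, solve for reconstruction weights in the LR space, and transfer the same weights to the HR patches. The sum-to-one constraint of the original LLE-style formulation is dropped for brevity, and all sizes are toy assumptions.

```python
import numpy as np

def neighbor_embedding_sr(lr_patch, lr_train, hr_train, k=3, lam=1e-4):
    # Find the k nearest LR training patches.
    dist = np.linalg.norm(lr_train - lr_patch[:, None], axis=0)
    idx = np.argsort(dist)[:k]
    N = lr_train[:, idx]
    # Reconstruction weights via regularized least squares in LR space.
    w = np.linalg.solve(N.T @ N + lam * np.eye(k), N.T @ lr_patch)
    # Transfer the same weights to the corresponding HR patches.
    return hr_train[:, idx] @ w

rng = np.random.default_rng(0)
hr_train = rng.standard_normal((16, 30))          # 30 HR patches (dim 16)
M = np.kron(np.eye(8), np.array([[0.5, 0.5]]))    # fixed 2x downsampling map
lr_train = M @ hr_train                           # corresponding LR patches
query_hr = hr_train[:, 0]
est = neighbor_embedding_sr(M @ query_hr, lr_train, hr_train)
print(np.linalg.norm(est - query_hr))             # small reconstruction error
```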
10
Abstract
Imaging resolution is a core parameter in many vision applications. High resolutions are desirable or essential in many of them, e.g., most remote sensing systems, so much work has been done to obtain a higher-resolution image from one or more images of lower resolution. Conversely, lower resolutions are preferred in some cases, e.g., displaying images on a very small screen or interface. Accordingly, algorithms for image upsampling and downsampling have also been proposed. In these algorithms, the downsampled or upsampled (super-resolution) versions of an original image are often taken as test images to evaluate performance. However, one important question is left unanswered: can the downsampled or upsampled versions of an original image represent the low- or high-resolution images that a camera would actually capture? To tackle this point, the following work is carried out: 1) a multiresolution camera is designed to simultaneously capture images at three different resolutions; 2) at a given resolution (i.e., image size), the relationship between a pair of images is studied, one obtained via downsampling or super-resolution and the other directly captured at that resolution by the imaging device; and 3) the performance of super-resolution and downsampling algorithms is evaluated using the resulting image pairs. These issues can be tackled effectively because the designed multiresolution camera provides real images at different resolutions, which builds a solid foundation for evaluating algorithms and analyzing images across resolutions.
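Comparisons between an algorithmically resampled image and a directly captured one, as in step 3 above, are usually quantified with a full-reference metric such as PSNR. A minimal sketch:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    same-size reconstructed/downsampled counterpart."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak**2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 0.1                 # constant error of 0.1 -> MSE = 0.01
print(round(psnr(ref, noisy), 1))  # 20.0
```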
11

12
Karam LJ, Sadaka NG, Ferzli R, Ivanovski ZA. An efficient selective perceptual-based super-resolution estimator. IEEE Transactions on Image Processing 2011;20:3470-3482. PMID: 21672677; DOI: 10.1109/tip.2011.2159324.
Abstract
In this paper, a selective perceptual-based (SELP) framework is presented to reduce the complexity of popular super-resolution (SR) algorithms while maintaining the desired quality of the enhanced images/video. A perceptual human-visual-system model is proposed to compute local contrast-sensitivity thresholds. The obtained thresholds are used to select which pixels are super-resolved, based on the perceived visibility of local edges. Processing only a set of perceptually significant pixels significantly reduces the computational complexity of SR algorithms without losing achievable visual quality. The proposed SELP framework is integrated into a maximum-a-posteriori-based SR algorithm as well as a fast two-stage fusion-restoration SR estimator. Simulation results show a significant reduction in average computational complexity with comparable signal-to-noise-ratio gains and visual quality.
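The selection step can be sketched with a simple local-contrast proxy. In SELP the thresholds come from a contrast-sensitivity model of the human visual system; the uniform threshold below is only a stand-in for that model.

```python
import numpy as np

def select_perceptual_pixels(img, threshold=0.1):
    """Select pixels whose local contrast exceeds a visibility threshold;
    only these would be passed to the (expensive) SR estimator."""
    # Local contrast: absolute difference from the 3x3 neighborhood mean.
    pad = np.pad(img, 1, mode="edge")
    local_mean = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            local_mean += pad[di:di + img.shape[0], dj:dj + img.shape[1]]
    local_mean /= 9.0
    contrast = np.abs(img - local_mean)
    return contrast > threshold          # boolean selection mask

img = np.zeros((8, 8))
img[:, 4:] = 1.0                          # a vertical edge
mask = select_perceptual_pixels(img)
# Only pixels adjacent to the edge are selected for super-resolution.
print(mask.sum(), "of", mask.size, "pixels selected")
```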
Affiliation(s)
- Lina J Karam
- School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ 85287-5706, USA.
13
Jurio A, Pagola M, Mesiar R, Beliakov G, Bustince H. Image magnification using interval information. IEEE Transactions on Image Processing 2011;20:3112-3123. PMID: 21632304; DOI: 10.1109/tip.2011.2158227.
Abstract
In this paper, a simple and effective interval-based image-magnification algorithm is proposed. A low-resolution image is magnified into a high-resolution image using a block-expanding method. Our method associates each pixel with an interval obtained by a weighted aggregation of the pixels in its neighborhood. From this interval, a linear K(α) operator yields the magnified image. Experimental results show that our algorithm produces magnified images of better quality (peak signal-to-noise ratio) than several existing methods.
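The pipeline the abstract describes — build a per-pixel interval from the neighborhood, then collapse it with a linear K(α) operator — can be sketched as follows. Here the interval is simply the neighborhood min/max rather than the paper's weighted aggregation, and α = 0.5 is an arbitrary choice.

```python
import numpy as np

def magnify_interval(img, scale=2, alpha=0.5):
    """Block-expanding magnification: each pixel becomes a scale x scale
    block whose value comes from an interval [lo, hi] built from the
    pixel's 3x3 neighborhood, via K_alpha([lo, hi]) = lo + alpha*(hi - lo)."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros((h * scale, w * scale))
    for i in range(h):
        for j in range(w):
            nb = pad[i:i + 3, j:j + 3]
            lo, hi = nb.min(), nb.max()          # neighborhood interval
            val = lo + alpha * (hi - lo)         # linear K_alpha operator
            out[i*scale:(i+1)*scale, j*scale:(j+1)*scale] = val
    return out

img = np.array([[0.0, 1.0], [1.0, 0.0]])
big = magnify_interval(img)
print(big.shape)   # (4, 4)
```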
Affiliation(s)
- Aranzazu Jurio
- Departamento de Automatica y Computacion, Universidad Publica de Navarra, Pamplona, Spain.