1.
Hagras EAA, Aldosary S, Khaled H, Hassan TM. Authenticated Public Key Elliptic Curve Based on Deep Convolutional Neural Network for Cybersecurity Image Encryption Application. Sensors (Basel, Switzerland) 2023; 23:6589. [PMID: 37514882] [PMCID: PMC10383835] [DOI: 10.3390/s23146589]
Abstract
The demand for cybersecurity is growing to safeguard information flow and enhance data privacy. This paper proposes a novel authenticated public key elliptic curve based on a deep convolutional neural network (APK-EC-DCNN) for cybersecurity image encryption applications. The elliptic curve discrete logarithm problem (EC-DLP) underpins the elliptic curve Diffie-Hellman key exchange (EC-DHKE), which generates a shared session key used as the chaotic system's initial conditions and control parameters. In addition, authenticity and confidentiality can be achieved by sharing the EC parameters between the two parties via the EC-DHKE algorithm. Moreover, the 3D Quantum Chaotic Logistic Map (3D QCLM) exhibits extremely chaotic behavior in its bifurcation diagram and a high Lyapunov exponent, making it suitable for high-level security. To achieve the authentication property, a secure hash function combines the output sequence of the DCNN with the output sequence of the 3D QCLM in the proposed authenticated expansion diffusion matrix (AEDM). Finally, a partial frequency-domain encryption (PFDE) technique based on the discrete wavelet transform satisfies the requirements of robustness and a fast encryption process. Simulation results and security analysis demonstrate that the proposed encryption algorithm matches the performance of state-of-the-art techniques in terms of quality, security, and robustness against noise and signal-processing attacks.
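The EC-DHKE step described in this abstract can be illustrated with a toy example: both parties derive the same curve point, and a coordinate of that point seeds a chaotic map. The tiny curve, generator, private scalars, and plain logistic map below are illustrative stand-ins only, not the paper's actual parameters or its 3D QCLM.

```python
# Toy elliptic-curve Diffie-Hellman (EC-DHKE) over the small textbook curve
# y^2 = x^3 + 2x + 2 (mod 17), followed by seeding a classical logistic map
# with the shared secret. All parameters here are illustrative stand-ins.

P, A = 17, 2                # small prime field and curve coefficient a
G = (5, 1)                  # a generator point on the curve

def ec_add(p1, p2):
    """Point addition on y^2 = x^3 + Ax + B over GF(P)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # point at infinity
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication k * pt."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

alice_priv, bob_priv = 3, 7              # secret scalars
alice_pub = ec_mul(alice_priv, G)        # public keys, exchanged in the clear
bob_pub = ec_mul(bob_priv, G)

shared_a = ec_mul(alice_priv, bob_pub)   # both sides derive the same point
shared_b = ec_mul(bob_priv, alice_pub)
assert shared_a == shared_b

# Use the shared x-coordinate to seed a logistic map x -> r*x*(1-x), a plain
# stand-in for the paper's 3D quantum chaotic logistic map.
x = (shared_a[0] + 1) / (P + 2)          # map the coordinate into (0, 1)
r = 3.99
keystream = []
for _ in range(8):
    x = r * x * (1 - x)
    keystream.append(int(x * 256) & 0xFF)
```

The point is only the protocol shape: each side multiplies the other's public point by its own secret scalar, so both arrive at the same session secret without transmitting it.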
Affiliation(s)
- Esam A A Hagras
- Faculty of Engineering, Delta University for Science and Technology, Gamasa 35712, Egypt
- Saad Aldosary
- Department of Computer Science, Community College, King Saud University, Riyadh 11437, Saudi Arabia
- Haitham Khaled
- Department of Electronics and Communications, School of Engineering, Edith Cowan University, Perth, WA 6027, Australia
- Tarek M Hassan
- Faculty of Engineering, Delta University for Science and Technology, Gamasa 35712, Egypt
2.
Liu F, Wang H, Liang SN, Jin Z, Wei S, Li X. MPS-FFA: A multiplane and multiscale feature fusion attention network for Alzheimer's disease prediction with structural MRI. Comput Biol Med 2023; 157:106790. [PMID: 36958239] [DOI: 10.1016/j.compbiomed.2023.106790]
Abstract
Structural magnetic resonance imaging (sMRI) is a popular technique that is widely applied in Alzheimer's disease (AD) diagnosis. However, only a few structural atrophy areas in sMRI scans are highly associated with AD, and the degree of atrophy in patients' brain tissues and the distribution of lesion areas differ among patients. Therefore, a key challenge in sMRI-based AD diagnosis is identifying discriminating atrophy features. Hence, we propose a multiplane and multiscale feature-level fusion attention (MPS-FFA) model. The model has three components: (1) a feature encoder uses a multiscale feature extractor with hybrid attention layers to simultaneously capture and fuse multiple pathological features in the sagittal, coronal, and axial planes; (2) a global attention classifier combines clinical scores and two global attention layers to evaluate the feature impact scores and balance the relative contributions of different feature blocks; (3) a feature similarity discriminator minimizes the feature similarities among heterogeneous labels to enhance the ability of the network to discriminate atrophy features. The MPS-FFA model provides improved interpretability for identifying discriminating features via feature visualization. Experimental results on baseline sMRI scans from two databases confirm the effectiveness (e.g., accuracy and generalizability) of our method in locating pathological regions. The source code is available at https://github.com/LiuFei-AHU/MPSFFA.
Affiliation(s)
- Fei Liu
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; School of Computer Science and Technology, Anhui University, Hefei, China
- Huabin Wang
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; School of Computer Science and Technology, Anhui University, Hefei, China.
- Shiuan-Ni Liang
- School of Engineering, Monash University Malaysia, Kuala Lumpur, Malaysia
- Zhe Jin
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China
- Shicheng Wei
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China
- Xuejun Li
- Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, China; School of Computer Science and Technology, Anhui University, Hefei, China
3.
Sedlakova Z, Nachtigalova I, Rusina R, Matej R, Buncova M, Kukal J. Alzheimer's disease identification from 3D SPECT brain scans by variational analysis. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104385]
4.
Ding Y, Tan F, Qin Z, Cao M, Choo KKR, Qin Z. DeepKeyGen: A Deep Learning-Based Stream Cipher Generator for Medical Image Encryption and Decryption. IEEE Trans Neural Netw Learn Syst 2022; 33:4915-4929. [PMID: 33729956] [DOI: 10.1109/tnnls.2021.3062754]
Abstract
The need for medical image encryption is increasingly pronounced, for example, to safeguard the privacy of patients' medical imaging data. In this article, a novel deep learning-based key generation network (DeepKeyGen) is proposed as a stream cipher generator that produces a private key, which can then be used to encrypt and decrypt medical images. In DeepKeyGen, a generative adversarial network (GAN) is adopted as the learning network that generates the private key. Furthermore, the transformation domain (which represents the "style" of the private key to be generated) is designed to guide the learning network through the private key generation process. The goal of DeepKeyGen is to learn the mapping relationship for transferring the initial image to the private key. We evaluate DeepKeyGen on three datasets: the Montgomery County chest X-ray dataset, the Ultrasonic Brachial Plexus dataset, and the BraTS18 dataset. The evaluation findings and security analysis show that the proposed key generation network achieves a high level of security in generating the private key.
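The stream-cipher use of such a generated key can be sketched as follows. The GAN-based key generator itself is not reproduced here; a seeded PRNG stands in for it, since the sketch only needs a reproducible keystream. The XOR step shows why the same keystream both encrypts and decrypts.

```python
# Sketch: a generated private key applied as a stream cipher keystream.
# A seeded PRNG is a stand-in for a learned key generator; XOR with the
# keystream is its own inverse, so one function handles both directions.
import random

def keystream(seed, n):
    """Stand-in for the key generator: n reproducible pseudorandom bytes."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

def xor_cipher(data, key):
    """XOR each data byte with the keystream byte at the same position."""
    return bytes(d ^ k for d, k in zip(data, key))

plain = bytes(range(16))                 # toy stand-in for image pixel bytes
key = keystream(seed=42, n=len(plain))
cipher = xor_cipher(plain, key)          # encrypt
recovered = xor_cipher(cipher, key)      # the same operation decrypts
assert recovered == plain
```

Any key generator whose output both parties can reproduce (here, from a shared seed) fits this pattern; the security then rests entirely on the unpredictability of the keystream.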
5.
Ding Y, Yang Q, Wang Y, Chen D, Qin Z, Zhang J. MallesNet: A multi-object assistance based network for brachial plexus segmentation in ultrasound images. Med Image Anal 2022; 80:102511. [PMID: 35753278] [DOI: 10.1016/j.media.2022.102511]
Abstract
Ultrasound-guided injection is widely used to help anesthesiologists perform anesthesia in peripheral nerve blockade (PNB). However, accurately identifying nerve structures in ultrasound images is a daunting task even for experienced anesthesiologists. In this paper, a multi-object assistance based brachial plexus segmentation network, named MallesNet, is proposed to improve nerve segmentation performance in ultrasound images by simultaneously segmenting the surrounding anatomical structures (e.g., muscle, vein, and artery). MallesNet follows the Mask R-CNN framework to implement multi-object identification and segmentation. Moreover, a spatial local contrast feature (SLCF) extraction module computes contrast features at different scales to effectively obtain useful features for small objects, and a self-attention gate (SAG) captures the spatial relationships across channels and re-weights the channels in the feature maps, following the design of non-local operations and channel attention. Furthermore, the upsampling mechanism of the original Feature Pyramid Network (FPN) is improved by adopting transpose convolution and skip concatenation to fine-tune the feature maps. The Ultrasound Brachial Plexus Dataset (UBPD) is also introduced to support research on brachial plexus segmentation; it consists of 1055 ultrasound images with four objects (i.e., nerve, artery, vein, and muscle) and their corresponding label masks. Extensive experimental results on UBPD demonstrate that MallesNet achieves better segmentation performance on nerve structures, as well as on the surrounding structures, than other competing approaches.
Affiliation(s)
- Yi Ding
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Ningbo WebKing Technology Joint Stock Co., Ltd, Ningbo, Zhejiang, 315000, China.
- Qiqi Yang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Network and Data Security Key Laboratory of China, Chengdu, Sichuan, 610054 China.
- Yiqian Wang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Network and Data Security Key Laboratory of China, Chengdu, Sichuan, 610054 China.
- Dajiang Chen
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Peng Cheng Laboratory, Shenzhen, 518055, China.
- Zhiguang Qin
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610054 China; Network and Data Security Key Laboratory of China, Chengdu, Sichuan, 610054 China.
- Jian Zhang
- Center of Anaesthesia Surgery, Sichuan Provincial Hospital for Women and Children/Affiliated Women and Children's Hospital of Chengdu Medical College, Chengdu, China.
6.
Ding Y, Zheng W, Geng J, Qin Z, Choo KKR, Qin Z, Hou X. MVFusFra: A Multi-View Dynamic Fusion Framework for Multimodal Brain Tumor Segmentation. IEEE J Biomed Health Inform 2021; 26:1570-1581. [PMID: 34699375] [DOI: 10.1109/jbhi.2021.3122328]
Abstract
Medical practitioners generally rely on multimodal brain images, for example drawing on information from the axial, coronal, and sagittal views, to inform brain tumor diagnosis. To further utilize the 3D information embedded in such datasets, this paper proposes a multi-view dynamic fusion framework (hereafter referred to as MVFusFra) to improve the performance of brain tumor segmentation. The proposed framework consists of three key building blocks. First, a multi-view deep neural network architecture comprising multiple learning networks, each of which segments the brain tumor from one view using the multimodal brain images of that view. Second, a dynamic decision fusion method that fuses the segmentation results from the multiple views into an integrated result; two fusion methods (i.e., voting and weighted averaging) are used to evaluate the fusion process. Third, a multi-view fusion loss (comprising segmentation loss, transition loss, and decision loss) that facilitates the training of the multi-view learning networks and ensures consistency in appearance and space, both when fusing segmentation results and when training the learning networks. We evaluate the performance of MVFusFra on the BRATS 2015 and BRATS 2018 datasets. The findings suggest that fusing results from multiple views achieves better performance than segmentation from a single view, which also implies the effectiveness of the proposed multi-view fusion loss. A comparative summary also shows that MVFusFra achieves better segmentation performance, in terms of efficiency, than other competing approaches.
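The two decision-fusion rules this abstract names, voting and weighted averaging, can be sketched on toy per-pixel tumor probabilities from three views. The probabilities, weights, and threshold below are invented for illustration and are not the paper's values.

```python
# Sketch of two decision-fusion rules for per-view segmentation outputs:
# majority voting over binarized masks, and thresholded weighted averaging
# of probabilities. Toy foreground probabilities for four pixels per view.

views = {
    "axial":    [0.9, 0.4, 0.2, 0.8],
    "coronal":  [0.7, 0.6, 0.1, 0.3],
    "sagittal": [0.8, 0.3, 0.4, 0.9],
}

def vote_fusion(views, thresh=0.5):
    """Binarize each view's probabilities, then take a majority vote."""
    masks = [[p > thresh for p in probs] for probs in views.values()]
    n = len(masks)
    return [int(sum(col) > n / 2) for col in zip(*masks)]

def weighted_fusion(views, weights, thresh=0.5):
    """Average probabilities with per-view weights, then threshold."""
    total = sum(weights.values())
    fused = [
        sum(weights[v] * p for v, p in zip(views, col)) / total
        for col in zip(*views.values())
    ]
    return [int(p > thresh) for p in fused]

print(vote_fusion(views))                                  # -> [1, 0, 0, 1]
print(weighted_fusion(views, {"axial": 2, "coronal": 1, "sagittal": 1}))
```

Voting discards each view's confidence before fusing, while weighted averaging preserves it and lets a trusted view (here "axial") pull the fused decision; this trade-off is why evaluating both rules, as the abstract describes, is informative.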
7.
Ding Y, Zhang C, Cao M, Wang Y, Chen D, Zhang N, Qin Z. ToStaGAN: An end-to-end two-stage generative adversarial network for brain tumor segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.07.066]
8.
Ding Y, Gong L, Zhang M, Li C, Qin Z. A multi-path adaptive fusion network for multimodal brain tumor segmentation. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.06.078]