1. Ravi J, Narmadha R. Optimized dual-tree complex wavelet transform aided multimodal image fusion with adaptive weighted average fusion strategy. Sci Rep 2024; 14:30246. [PMID: 39632891] [PMCID: PMC11618366] [DOI: 10.1038/s41598-024-81594-6] [Citation(s) in RCA: 0]
Abstract
Image fusion is generally used to extract significant information from a set of input images and combine it into a single, more informative result, enhancing the applicability and quality of the data. Multimodal image fusion, which combines images from different modalities into a single image while preserving exact details, is therefore an active research topic. However, existing approaches struggle to interpret source images precisely and capture only local information without considering the wider context. To address these weaknesses, a multimodal image fusion model is developed based on a multi-resolution transform combined with an optimization strategy. First, images are taken from standard public datasets and passed to the Optimized Dual-Tree Complex Wavelet Transform (ODTCWT) to obtain low-frequency and high-frequency coefficients. Certain DTCWT parameters are tuned with a hybridized heuristic strategy, the Probability of Fitness-based Honey Badger Squirrel Search Optimization (PF-HBSSO), to enhance decomposition quality. The high-frequency coefficients are then fused using an adaptive weighted average fusion technique, whose weights are optimized with PF-HBSSO to achieve the optimal fused result, while the low-frequency coefficients are combined by average fusion. Finally, the fused coefficients are reconstructed into an image using the inverse ODTCWT. Experimental evaluation of the designed multimodal image fusion model demonstrates its superiority over existing methods.
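The fusion rules summarized in this abstract reduce to simple coefficient arithmetic once the subband decomposition is available. Below is a minimal NumPy sketch of the weighted-average rule for high-frequency coefficients and the plain-average rule for low-frequency coefficients; it is not the authors' implementation: the DTCWT decomposition and the PF-HBSSO weight optimization are assumed to be supplied elsewhere, and the weight value used here is purely illustrative.

```python
import numpy as np

def fuse_low(low_a: np.ndarray, low_b: np.ndarray) -> np.ndarray:
    """Average fusion rule for the low-frequency (approximation) coefficients."""
    return 0.5 * (low_a + low_b)

def fuse_high(high_a: np.ndarray, high_b: np.ndarray, w: float) -> np.ndarray:
    """Weighted-average fusion rule for the high-frequency (detail) coefficients.
    In the paper the weight is tuned by PF-HBSSO; here it is just a value in [0, 1]."""
    return w * high_a + (1.0 - w) * high_b

# Toy example: random arrays stand in for one pair of DTCWT subbands.
rng = np.random.default_rng(0)
low_a, low_b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
high_a, high_b = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))

fused_low = fuse_low(low_a, low_b)
fused_high = fuse_high(high_a, high_b, w=0.6)  # 0.6 is an arbitrary illustrative weight
print(fused_low.shape, fused_high.shape)       # (64, 64) (64, 64)
```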
Affiliation(s)
- Jampani Ravi
- Department of Electronics and Communication Engineering, Sathyabama Institute of Science and Technology, Semmancheri, Chennai, 600119, India.
- R Narmadha
- Department of Electronics and Communication Engineering, Sathyabama Institute of Science and Technology, Semmancheri, Chennai, 600119, India
2. Zhang M, Zhang Y, Liu S, Han Y, Cao H, Qiao B. Dual-attention transformer-based hybrid network for multi-modal medical image segmentation. Sci Rep 2024; 14:25704. [PMID: 39465274] [PMCID: PMC11514281] [DOI: 10.1038/s41598-024-76234-y] [Citation(s) in RCA: 0]
Abstract
Accurate medical image segmentation plays a vital role in clinical practice. Convolutional neural networks and Transformers are the mainstream architectures for this task; however, convolutional neural networks struggle to model global dependencies, while Transformers cannot extract local details. In this paper, we propose DATTNet (Dual ATTention Network), an encoder-decoder deep learning model for medical image segmentation. DATTNet is organized in a hierarchical fashion with two novel components: (1) a Dual Attention module is designed to model global dependencies in the spatial and channel dimensions, and (2) a Context Fusion Bridge is presented to remix feature maps at multiple scales and model their correlations. Experiments on the ACDC, Synapse and Kvasir-SEG datasets were conducted to evaluate the performance of DATTNet. Our proposed model shows superior performance, effectiveness and robustness compared to state-of-the-art methods, with mean Dice Similarity Coefficient scores of 92.2%, 84.5% and 89.1% on cardiac, abdominal organ and gastrointestinal polyp segmentation tasks, respectively. The quantitative and qualitative results demonstrate that DATTNet performs well across different modalities (MRI, CT, and endoscopy) and generalizes to various tasks, indicating potential for practical clinical application. The code has been released at https://github.com/MhZhang123/DATTNet/tree/main.
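The mean Dice Similarity Coefficient reported above is the standard overlap measure between a predicted and a reference mask. Below is a minimal NumPy sketch of the per-class computation for binary masks; it illustrates the metric only and is not the DATTNet code linked above.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy example: two overlapping 4x4 squares inside an 8x8 image.
pred = np.zeros((8, 8), dtype=bool);   pred[2:6, 2:6] = True
target = np.zeros((8, 8), dtype=bool); target[3:7, 3:7] = True
print(dice_coefficient(pred, target))  # ~0.56 (9 overlapping pixels, 16 + 16 in total)
```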
Affiliation(s)
- Menghui Zhang
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450001, China
- Yuchen Zhang
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450001, China
- Shuaibing Liu
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450001, China
- Yahui Han
- Department of Pediatric Surgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450001, China
- Honggang Cao
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450001, China
- Bingbing Qiao
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450001, China.
3. Suganyadevi S, Seethalakshmi V. CVD-HNet: Classifying Pneumonia and COVID-19 in Chest X-ray Images Using Deep Network. Wireless Personal Communications 2022; 126:3279-3303. [PMID: 35756172] [PMCID: PMC9206838] [DOI: 10.1007/s11277-022-09864-y] [Citation(s) in RCA: 0]
Abstract
The use of computer-assisted analysis to improve image interpretation has been a long-standing challenge in the medical imaging industry. Continuous advances in artificial intelligence (AI), predominantly in deep learning (DL) techniques, are supporting the classification, detection, and quantification of anomalies in medical images. DL is the most rapidly evolving branch of AI and has recently been applied successfully in a variety of fields, including medicine. This paper presents a classification method for COVID-19-infected X-ray images based on novel deep CNN models. For COVID-19-specific pneumonia analysis, two customized CNN architectures, CVD-HNet1 (COVID-HybridNetwork1) and CVD-HNet2 (COVID-HybridNetwork2), have been designed. The suggested method systematically combines boundary- and region-based operations with convolution processes. In comparison to existing CNNs, the proposed classification method achieves an accuracy of 98%, an F-score of 0.99, and an MCC of 0.97. These results indicate strong classification accuracy on a limited dataset; with more training examples, even better results could be achieved. Overall, the CVD-HNet models could be a useful tool for radiologists in detecting and diagnosing COVID-19 cases early.
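The accuracy, F-score, and Matthews correlation coefficient (MCC) quoted above are standard binary-classification metrics. Below is a minimal NumPy sketch of how they are computed from predicted and true labels; it illustrates the metrics only, not the CVD-HNet models, and the toy labels are invented for the example.

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray):
    """Accuracy, F1 score and Matthews correlation coefficient for binary labels."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    tp = np.sum(y_pred & y_true)    # true positives
    tn = np.sum(~y_pred & ~y_true)  # true negatives
    fp = np.sum(y_pred & ~y_true)   # false positives
    fn = np.sum(~y_pred & y_true)   # false negatives
    accuracy = (tp + tn) / y_true.size
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return float(accuracy), float(f1), float(mcc)

# Toy example with 8 labels (1 = positive class).
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
print(classification_metrics(y_true, y_pred))  # (0.75, 0.75, 0.5)
```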
Affiliation(s)
- S. Suganyadevi
- Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, Tamilnadu 641 407 India
- V. Seethalakshmi
- Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, Tamilnadu 641 407 India
4.
5. Suganyadevi S, Seethalakshmi V, Balasamy K. A review on deep learning in medical image analysis. International Journal of Multimedia Information Retrieval 2022; 11:19-38. [PMID: 34513553] [PMCID: PMC8417661] [DOI: 10.1007/s13735-021-00218-1] [Citation(s) in RCA: 75]
Abstract
Ongoing improvements in AI, particularly in deep learning techniques, are helping to identify, classify, and quantify patterns in clinical images. Deep learning is the fastest-developing field in artificial intelligence and has recently been applied effectively in numerous areas, including medicine. A brief outline is given of studies carried out in the main application areas: neurological, brain, retinal, pulmonary, digital pathology, breast, cardiac, bone, abdominal, and musculoskeletal imaging. For information exploration, knowledge deployment, and knowledge-based prediction, deep learning networks can be successfully applied to big data. This paper presents fundamental concepts and state-of-the-art deep learning approaches for medical image processing and analysis. Its primary goals are to survey research on medical image processing and to define and address the key guidelines identified in the field.
Affiliation(s)
- S. Suganyadevi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- V. Seethalakshmi
- Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- K. Balasamy
- Department of IT, Dr. Mahalingam College of Engineering and Technology, Coimbatore, India
6. Liu L, Wolterink JM, Brune C, Veldhuis RNJ. Anatomy-aided deep learning for medical image segmentation: a review. Phys Med Biol 2021; 66. [PMID: 33906186] [DOI: 10.1088/1361-6560/abfbf4] [Citation(s) in RCA: 19]
Abstract
Deep learning (DL) has become widely used for medical image segmentation in recent years. However, despite these advances, there are still problems for which DL-based segmentation fails. Recently, some DL approaches have achieved breakthroughs by using anatomical information, which is a crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation that systematically summarizes the categories of anatomical information and their corresponding representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodological overview of how anatomical information is used with DL, drawn from over 70 papers. Finally, we discuss the strengths and limitations of current anatomy-aided DL approaches and suggest potential future work.
Affiliation(s)
- Lu Liu
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Jelmer M Wolterink
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Christoph Brune
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Raymond N J Veldhuis
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
7. Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, Rueckert D. Deep Learning for Cardiac Image Segmentation: A Review. Front Cardiovasc Med 2020; 7:25. [PMID: 32195270] [PMCID: PMC7066212] [DOI: 10.3389/fcvm.2020.00025] [Citation(s) in RCA: 340]
Abstract
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound, and the major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a basis for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Affiliation(s)
- Chen Chen
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Chen Qin
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Huaqi Qiu
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Giacomo Tarroni
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- CitAI Research Centre, Department of Computer Science, City University of London, London, United Kingdom
- Jinming Duan
- School of Computer Science, University of Birmingham, Birmingham, United Kingdom
- Wenjia Bai
- Data Science Institute, Imperial College London, London, United Kingdom
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- Daniel Rueckert
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
8. Hermessi H, Mourali O, Zagrouba E. Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3441-1] [Citation(s) in RCA: 58]
9. Bridging the Gap Between 2D and 3D Organ Segmentation with Volumetric Fusion Net. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 2018. [DOI: 10.1007/978-3-030-00937-3_51] [Citation(s) in RCA: 30]
10. Semi-supervised Learning for Network-Based Cardiac MR Image Segmentation. Lecture Notes in Computer Science 2017. [DOI: 10.1007/978-3-319-66185-8_29] [Citation(s) in RCA: 123]