1
Latonen L, Koivukoski S, Khan U, Ruusuvuori P. Virtual staining for histology by deep learning. Trends Biotechnol 2024;42:1177-1191. PMID: 38480025. DOI: 10.1016/j.tibtech.2024.02.009.
Abstract
In pathology and biomedical research, histology is the cornerstone method for tissue analysis. Currently, the histological workflow consumes substantial amounts of chemicals, water, and time for staining procedures. Deep learning now enables digital replacement of parts of the histological staining procedure. In virtual staining, histological stains are created by training neural networks to produce stained images from an unstained tissue image, or by transferring information from one stain to another. These technical innovations provide more sustainable, rapid, and cost-effective alternatives to traditional histological pipelines, but their development is at an early stage and requires rigorous validation. In this review, we cover the basic concepts of virtual staining for histology and provide insights into the future utilization of artificial intelligence (AI)-enabled virtual histology.
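The review surveys virtual staining broadly rather than prescribing a single architecture, but the core setup it describes, training a network to map an unstained tissue image (e.g. autofluorescence or label-free brightfield) to its stained counterpart, is an image-to-image translation problem. The following is a minimal, hypothetical PyTorch sketch of one such stain-prediction training step; the generator, loss, and placeholder tensors are illustrative assumptions, not the method of any paper covered by the review.

```python
# Minimal sketch of virtual staining as paired image-to-image translation.
# Hypothetical example: names, architecture, and loss choice are illustrative.
import torch
import torch.nn as nn

class TinyStainGenerator(nn.Module):
    """Maps a 1-channel unstained image to a 3-channel virtual H&E image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

generator = TinyStainGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
l1_loss = nn.L1Loss()

# Placeholder batch: in practice these would be co-registered pairs of
# unstained tiles and their chemically stained counterparts.
unstained = torch.rand(4, 1, 256, 256)
stained_target = torch.rand(4, 3, 256, 256)

prediction = generator(unstained)
loss = l1_loss(prediction, stained_target)  # pixel-wise fidelity term
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training-step L1 loss: {loss.item():.4f}")
```

Published virtual staining pipelines typically go beyond this sketch, using deeper generators (U-Net or GAN variants) and adding perceptual or adversarial terms on top of the pixel-wise loss.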
Affiliation(s)
- Leena Latonen
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland.
- Sonja Koivukoski
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland.
- Umair Khan
- Institute of Biomedicine, University of Turku, Turku, Finland.
2
Tian C, Zhang L. G2NPAN: GAN-guided nuance perceptual attention network for multimodal medical fusion image quality assessment. Front Neurosci 2024;18:1415679. PMID: 38803686. PMCID: PMC11128576. DOI: 10.3389/fnins.2024.1415679.
Abstract
Multimodal medical fusion images (MMFI) are formed by fusing medical images of two or more modalities, with the aim of displaying as much valuable information as possible in a single image. However, because different fusion algorithms follow different strategies, the quality of the generated fused images is uneven, and an effective blind image quality assessment (BIQA) method is urgently required. The challenge of MMFI quality assessment is to enable the network to perceive the nuances between fused images of different qualities, and the key to successful BIQA is the availability of valid reference information. To this end, this work proposes a generative adversarial network (GAN)-guided nuance perceptual attention network (G2NPAN) to perform BIQA for MMFI. Specifically, blind evaluation is achieved through the GAN design, and a Unique Feature Warehouse module learns effective features of fused images at the pixel level. A redesigned loss function guides the network to perceive image quality. Finally, a class activation mapping-supervised quality assessment network produces the MMFI quality score. Extensive experiments and validation were conducted on a database of medical fusion images, and the proposed method outperforms state-of-the-art BIQA methods.
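The abstract does not detail the G2NPAN modules themselves, but the blind IQA formulation it targets, regressing a quality score from the fused image alone with no reference image available at inference, can be illustrated with a minimal hypothetical PyTorch sketch. The model, its names, and the placeholder labels below are assumptions for illustration and are not the G2NPAN architecture.

```python
# Minimal blind image quality assessment (BIQA) sketch: regress a quality
# score from a fused image alone, with no reference image at inference time.
# Hypothetical illustration only; this is not the G2NPAN architecture.
import torch
import torch.nn as nn

class TinyBIQARegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 8 * 8, 1))

    def forward(self, fused_image):
        return self.head(self.features(fused_image)).squeeze(-1)

model = TinyBIQARegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: fused images with subjective quality labels (e.g. MOS).
fused = torch.rand(8, 1, 128, 128)
mos_labels = torch.rand(8)

predicted_score = model(fused)
loss = nn.functional.mse_loss(predicted_score, mos_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"quality-regression MSE: {loss.item():.4f}")
```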
Affiliation(s)
- Lei Zhang
- School of Information Engineering (School of Big Data), Xuzhou University of Technology, Xuzhou, China.
3
Han L, Tan T, Zhang T, Huang Y, Wang X, Gao Y, Teuwen J, Mann R. Synthesis-based imaging-differentiation representation learning for multi-sequence 3D/4D MRI. Med Image Anal 2024;92:103044. PMID: 38043455. DOI: 10.1016/j.media.2023.103044.
Abstract
Multi-sequence MRI can be necessary for reliable diagnosis in clinical practice because of the complementary information across sequences. However, redundant information also exists across sequences, which interferes with learning-based models mining efficient representations. To handle various clinical scenarios, we propose a sequence-to-sequence generation framework (Seq2Seq) for imaging-differentiation representation learning. In this study, we not only propose arbitrary 3D/4D sequence generation within a single model to generate any specified target sequence, but also rank the importance of each sequence using a new metric that estimates how difficult a sequence is to generate. Furthermore, we exploit the model's generation inability to extract regions that contain information unique to each sequence. We conduct extensive experiments on three datasets, including a toy dataset of 20,000 simulated subjects, a brain MRI dataset of 1251 subjects, and a breast MRI dataset of 2101 subjects, to demonstrate that (1) top-ranking sequences can replace the complete set of sequences with non-inferior performance, and (2) combining MRI with our imaging-differentiation map improves performance in clinical tasks such as glioblastoma MGMT promoter methylation status prediction and breast cancer pathological complete response prediction. Our code is available at https://github.com/fiy2W/mri_seq2seq.
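The abstract does not spell out the difficulty metric; conceptually, a sequence that the model cannot synthesize well from the remaining sequences carries more unique information and is ranked as more important. Below is a hypothetical NumPy sketch of that ranking logic, using mean absolute reconstruction error as a stand-in for the paper's metric; the released repository at https://github.com/fiy2W/mri_seq2seq contains the authors' actual implementation.

```python
# Conceptual sketch: rank MRI sequences by how hard they are to synthesize
# from the other sequences. Mean absolute error is a stand-in for the paper's
# difficulty metric; see https://github.com/fiy2W/mri_seq2seq for the real code.
import numpy as np

def rank_sequences_by_difficulty(real: dict, synthesized: dict) -> list:
    """Return sequence names sorted from hardest (most unique) to easiest."""
    difficulty = {
        name: float(np.mean(np.abs(real[name] - synthesized[name])))
        for name in real
    }
    return sorted(difficulty, key=difficulty.get, reverse=True)

# Placeholder volumes: real acquisitions vs. volumes generated from the
# remaining sequences by a (hypothetical) trained Seq2Seq model.
rng = np.random.default_rng(0)
real = {s: rng.random((16, 64, 64)) for s in ["T1", "T2", "FLAIR", "T1c"]}
synth = {s: real[s] + rng.normal(0, 0.05 * (i + 1), real[s].shape)
         for i, s in enumerate(real)}

print(rank_sequences_by_difficulty(real, synth))  # hardest sequence first
```

Under this reading, low-ranked (easily synthesized) sequences are candidates for omission, which is consistent with finding (1) that top-ranking sequences can replace the complete set with non-inferior performance.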
Affiliation(s)
- Luyi Han
- Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands; Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Tao Tan
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; Faculty of Applied Sciences, Macao Polytechnic University, 999078, Macao Special Administrative Region of China.
- Tianyu Zhang
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands.
- Yunzhi Huang
- Institute for AI in Medicine, School of Automation, Nanjing University of Information Science and Technology, Nanjing, China.
- Xin Wang
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands.
- Yuan Gao
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands.
- Jonas Teuwen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands.
- Ritse Mann
- Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands; Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands.