1
Fu Y, Jiang L, Pan S, Chen P, Wang X, Dai N, Chen X, Xu M. Deep multi-task learning for nephropathy diagnosis on immunofluorescence images. Computer Methods and Programs in Biomedicine 2023; 241:107747. [PMID: 37619430 DOI: 10.1016/j.cmpb.2023.107747] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/23/2023] [Revised: 06/14/2023] [Accepted: 08/03/2023] [Indexed: 08/26/2023]
Abstract
BACKGROUND AND OBJECTIVE Immunofluorescence (IF) is one of the most widely used medical imaging techniques for nephropathy diagnosis, owing to its ease of acquisition and low cost. In practice, clinically collected IF images are commonly corrupted by varying degrees of blur, mainly caused by inaccurate focusing at the acquisition stage. Although deep neural network (DNN) methods have achieved great success in nephropathy diagnosis, their performance drops dramatically on blurred IF images, which significantly limits the potential of advanced DNN techniques in real-world nephropathy diagnosis. METHODS This paper first establishes two IF databases for nephropathy diagnosis, with synthetic blurs (IFVB) and real-world blurs (Real-IF) respectively, together comprising 1,659 patients and 6,521 IF images with various degrees of blur. Based on the analysis of these two databases, we propose a deep hierarchical multi-task learning based nephropathy diagnosis (DeepMT-ND) method to bridge the gap between low-level vision and high-level medical tasks. Specifically, DeepMT-ND simultaneously handles the main task of automatic nephropathy diagnosis and the auxiliary tasks of image quality assessment (IQA) and de-blurring. RESULTS Extensive experiments show the superiority of DeepMT-ND in terms of diagnosis accuracy and generalization ability. For instance, our method outperforms nephrologists, with accuracy improvements of at least 15.4% and 6.5% on IFVB and Real-IF, respectively. Meanwhile, it achieves comparable performance on the two auxiliary tasks of IQA and de-blurring for blurred IF images. CONCLUSIONS We propose a new DeepMT-ND method for nephropathy diagnosis on blurred IF images.
The proposed hierarchical multi-task learning framework offers a new way to narrow the gap between low-level vision and high-level medical tasks, and should contribute to nephropathy diagnosis in clinical scenarios. The diagnosis accuracy and generalization ability of DeepMT-ND are experimentally verified on both synthetic and real-world databases.
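The abstract's core idea, one shared backbone feeding a main diagnosis head plus auxiliary IQA and de-blurring heads trained under a joint loss, can be sketched as follows. This is a minimal illustration, not the paper's architecture: the shapes, class count, and loss weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax_ce(logits, labels):
    # Cross-entropy over class logits (main diagnosis task).
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return float(-np.log(p[np.arange(len(labels)), labels]).mean())

# Toy shared encoder + three task heads. All shapes, the class count, and
# the 0.5 auxiliary weights are illustrative assumptions.
x = rng.normal(size=(4, 8))               # 4 "images" as 8-d feature vectors
W_enc = rng.normal(size=(8, 16)) * 0.1
W_diag = rng.normal(size=(16, 5)) * 0.1   # 5 hypothetical nephropathy classes
W_iqa = rng.normal(size=(16, 1)) * 0.1    # scalar image-quality score
W_deb = rng.normal(size=(16, 8)) * 0.1    # "de-blurred" reconstruction

h = relu(x @ W_enc)                       # shared representation
logits = h @ W_diag
quality = h @ W_iqa
recon = h @ W_deb

labels = np.array([0, 1, 2, 3])
target_q = np.ones((4, 1))                # pretend ground-truth quality scores
# Joint objective: main diagnosis loss plus weighted auxiliary IQA and
# de-blurring terms; the auxiliary tasks regularize the shared encoder.
loss = (softmax_ce(logits, labels)
        + 0.5 * float(((quality - target_q) ** 2).mean())
        + 0.5 * float(((recon - x) ** 2).mean()))
```

In practice each head would be a convolutional decoder or classifier and the weights would be learned; the point is only that all three tasks backpropagate through the same shared features.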
Affiliation(s)
- Yibing Fu
- School of Electronic and Information Engineering, Beihang University, Beijing, China
- Lai Jiang
- School of Electronic and Information Engineering, Beihang University, Beijing, China
- Sai Pan
- Department of Nephrology, Chinese People's Liberation Army General Hospital, Beijing, China
- Pu Chen
- Department of Nephrology, Chinese People's Liberation Army General Hospital, Beijing, China
- Xiaofei Wang
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Ning Dai
- School of Electronic and Information Engineering, Beihang University, Beijing, China
- Xiangmei Chen
- Department of Nephrology, Chinese People's Liberation Army General Hospital, Beijing, China
- Mai Xu
- School of Electronic and Information Engineering, Beihang University, Beijing, China
2
Wiesel B, Arnon S. Imaging inside highly scattering media using hybrid deep learning and analytical algorithm. Journal of Biophotonics 2023; 16:e202300127. [PMID: 37434270 DOI: 10.1002/jbio.202300127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/10/2023] [Revised: 06/15/2023] [Accepted: 07/10/2023] [Indexed: 07/13/2023]
Abstract
Imaging through highly scattering media is a challenging problem with numerous applications in the biomedical and remote-sensing fields. Existing methods that use analytical or deep learning tools are limited by simplified forward models or a requirement for prior physical knowledge, resulting in blurry images or a need for large training databases. To address these limitations, we propose a hybrid scheme called Hybrid-DOT that combines analytically derived image estimates with a deep learning network. Our analysis demonstrates that Hybrid-DOT outperforms a state-of-the-art ToF-DOT algorithm, improving PSNR by 4.6 dB and resolution by a factor of 2.5. Furthermore, compared to a stand-alone deep learning model, Hybrid-DOT achieves a 0.8 dB increase in PSNR, 1.5 times the resolution, and a significant reduction (by a factor of 1.6-3) in the required dataset size. The proposed model remains effective at greater depths, providing similar improvements up to 160 mean-free paths.
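The hybrid pattern described above, an analytical first estimate refined by a learned stage, can be sketched generically. The function names and the additive-refinement form are assumptions for illustration, not Hybrid-DOT's actual design; the stand-ins below "invert" a known gain and then remove a bias the analytical step cannot model.

```python
import numpy as np

def hybrid_reconstruct(measurement, analytical_solver, network):
    # Hybrid scheme in spirit: compute an analytically derived image
    # estimate first, then let a learned network refine it. Both callables
    # and the additive form are hypothetical.
    estimate = analytical_solver(measurement)
    return estimate + network(estimate)

# Stand-ins: "analytical" inversion of a known 2x gain, and a "network"
# that corrects a constant bias left over by the analytical model.
analytical = lambda m: m / 2.0
net = lambda e: -0.1 * np.ones_like(e)

m = np.array([2.2, 4.2, 6.2])     # true signal [1, 2, 3], scaled and biased
out = hybrid_reconstruct(m, analytical, net)
```

The appeal of the split is that the analytical stage encodes known physics, so the learned stage only has to model the residual, which is what shrinks the required training set.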
Affiliation(s)
- Ben Wiesel
- Ben-Gurion University of the Negev, Department of Electrical and Computer Engineering, Beer-Sheva, Israel
- Shlomi Arnon
- Ben-Gurion University of the Negev, Department of Electrical and Computer Engineering, Beer-Sheva, Israel
3
Le TD, Min JJ, Lee C. Enhanced resolution and sensitivity acoustic-resolution photoacoustic microscopy with semi/unsupervised GANs. Sci Rep 2023; 13:13423. [PMID: 37591911 PMCID: PMC10435476 DOI: 10.1038/s41598-023-40583-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 05/03/2023] [Accepted: 08/13/2023] [Indexed: 08/19/2023]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) enables visualization of biological tissues at depths of several millimeters with superior optical-absorption contrast. However, the lateral resolution and sensitivity of AR-PAM are generally lower than those of optical-resolution PAM (OR-PAM), owing to its intrinsic acoustic focusing mechanism. Here, we demonstrate a computational strategy with two generative adversarial networks (GANs) that performs semi-supervised or unsupervised reconstruction with high resolution and sensitivity in AR-PAM while maintaining its imaging capability at enhanced depths. B-scan PAM images were prepared as paired (for the semi-supervised conditional GAN) and unpaired (for the unsupervised CycleGAN) training groups for label-free generation of reconstructed AR-PAM B-scan images. The semi/unsupervised GANs successfully improved resolution and sensitivity in phantom and in vivo mouse ear tests with ground truth. We also confirmed that the GANs can enhance resolution and sensitivity in deep tissue without ground truth.
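What lets the unpaired (CycleGAN-style) branch train without pixel-aligned ground truth is the cycle-consistency loss: mapping a blurred image to the enhanced domain and back should recover the input. A minimal sketch, with toy invertible affine maps standing in for the two generators:

```python
import numpy as np

# Toy "generators": G maps blurred-domain signals to the enhanced domain,
# F maps back. In the real method these are trained CNNs; here they are
# exactly inverse affine maps so the cycle loss is zero by construction.
G = lambda x: 2.0 * x + 1.0
F = lambda y: (y - 1.0) / 2.0

def cycle_loss(x_blur, y_sharp):
    # L1 cycle-consistency loss used by CycleGAN-style training:
    # F(G(x)) should recover x, and G(F(y)) should recover y.
    return float(np.abs(F(G(x_blur)) - x_blur).mean()
                 + np.abs(G(F(y_sharp)) - y_sharp).mean())

x = np.linspace(0.0, 1.0, 5)   # samples from the blurred domain
y = np.linspace(1.0, 3.0, 5)   # samples from the enhanced domain
```

During training this term is minimized alongside the adversarial losses, which is what removes the need for paired noisy/clean samples.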
Affiliation(s)
- Thanh Dat Le
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, Korea
- Jung-Joon Min
- Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264, Seoyang-ro, Hwasun-eup, Hwasun-gun, 58128, Jeollanam-do, Korea
- Changho Lee
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, Korea
- Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264, Seoyang-ro, Hwasun-eup, Hwasun-gun, 58128, Jeollanam-do, Korea
4
Cervical cytopathology image refocusing via multi-scale attention features and domain normalization. Med Image Anal 2022; 81:102566. [DOI: 10.1016/j.media.2022.102566] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/23/2021] [Revised: 07/27/2022] [Accepted: 08/02/2022] [Indexed: 11/22/2022]
5
Huang Z, Zhao R, Leung FHF, Banerjee S, Lee TTY, Yang D, Lun DPK, Lam KM, Zheng YP, Ling SH. Joint Spine Segmentation and Noise Removal From Ultrasound Volume Projection Images With Selective Feature Sharing. IEEE Transactions on Medical Imaging 2022; 41:1610-1624. [PMID: 35041596 DOI: 10.1109/tmi.2022.3143953] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Indexed: 06/14/2023]
Abstract
Volume Projection Imaging from ultrasound data is a promising technique to visualize spine features and diagnose Adolescent Idiopathic Scoliosis. In this paper, we present a novel multi-task framework to reduce the scan noise in volume projection images and to segment different spine features simultaneously, which provides an appealing alternative for intelligent scoliosis assessment in clinical applications. Our proposed framework consists of two streams: i) A noise removal stream based on generative adversarial networks, which aims to achieve effective scan noise removal in a weakly-supervised manner, i.e., without paired noisy-clean samples for learning; ii) A spine segmentation stream, which aims to predict accurate bone masks. To establish the interaction between these two tasks, we propose a selective feature-sharing strategy to transfer only the beneficial features, while filtering out the useless or harmful information. We evaluate our proposed framework on both scan noise removal and spine segmentation tasks. The experimental results demonstrate that our proposed method achieves promising performance on both tasks, which provides an appealing approach to facilitating clinical diagnosis.
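The selective feature-sharing idea, transferring only beneficial features between the two streams while filtering out useless or harmful ones, is commonly realized with a per-channel gate. The sketch below is an assumed, simplified form (a sigmoid gate on fixed logits), not the paper's exact mechanism:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def selective_share(f_src, f_dst, gate_logits):
    # A per-channel gate (learned in practice; fixed logits here for
    # illustration) decides how much of the source stream's features flow
    # into the destination stream, so unhelpful channels can be blocked.
    g = sigmoid(gate_logits)              # values in (0, 1) per channel
    return f_dst + g[None, :] * f_src

f_noise = np.ones((2, 3))                 # features from the noise-removal stream
f_seg = np.zeros((2, 3))                  # features in the segmentation stream
open_gate = np.full(3, 50.0)              # gate ~1: share everything
shut_gate = np.full(3, -50.0)             # gate ~0: share nothing
```

With the gate wide open the destination receives the full source features; with it shut, the destination is untouched, which is the filtering behavior the abstract describes.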
6
Agarwala P, Bera T, Sasmal DK. Molecular Mechanism of Interaction of Curcumin with BSA, Surfactants and Live E. coli Cell Membrane Revealed by Fluorescence Spectroscopy and Confocal Microscopy. ChemPhysChem 2022; 23:e202200265. [DOI: 10.1002/cphc.202200265] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/21/2022] [Revised: 05/20/2022] [Indexed: 11/08/2022]
Affiliation(s)
- Pratibha Agarwala
- Department of Chemistry, Indian Institute of Technology Jodhpur, Jodhpur 342037, India
- Turban Bera
- Department of Chemistry, Indian Institute of Technology Jodhpur, India
- Dibyendu Kumar Sasmal
- Department of Chemistry, Indian Institute of Technology Jodhpur, NH65, Surpura Bypass Road, Karwar, Jodhpur 342037, India
7
Bizhani M, Ardakani OH, Little E. Reconstructing high fidelity digital rock images using deep convolutional neural networks. Sci Rep 2022; 12:4264. [PMID: 35277546 PMCID: PMC8917167 DOI: 10.1038/s41598-022-08170-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 10/28/2021] [Accepted: 03/03/2022] [Indexed: 01/16/2023]
Abstract
Imaging methods have broad applications in the geosciences. Scanning electron microscopy (SEM) and micro-CT scanning have been applied to various geological problems. Despite significant advances in imaging capabilities and image processing algorithms, acquiring high-quality data from images remains challenging and time-consuming.
Obtaining a representative 3D volume for a tight rock sample takes days to weeks, and image artifacts such as noise further complicate the use of imaging methods for determining rock properties. In this study, we present applications of several convolutional neural networks (CNNs) for rapidly denoising, deblurring, and super-resolving digital rock images. Such an approach enables rapid imaging of larger samples, which in turn improves the statistical relevance of the subsequent analysis. We demonstrate several CNNs for image restoration applicable to scientific imaging. The results show that images can be denoised with high confidence without a priori knowledge of the noise. Furthermore, we show how chaining several CNNs end-to-end can improve the final reconstruction quality. Our experiments with SEM and CT-scan images of several rock types show that image denoising, deblurring, and super-resolution can be performed simultaneously.
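The end-to-end chaining the authors describe amounts to composing restoration stages so that each network's output feeds the next. A minimal sketch, with hypothetical toy stages standing in for trained CNNs:

```python
def chain(*stages):
    # Attach restoration stages end-to-end (e.g. denoise -> deblur ->
    # super-resolve): the output of one stage becomes the input of the next.
    def pipeline(x):
        for stage in stages:
            x = stage(x)
        return x
    return pipeline

# Toy stages standing in for trained networks (illustrative behaviour only):
# "denoise" snaps values to one decimal; "upscale" naively doubles length.
denoise = lambda x: [round(v, 1) for v in x]
upscale = lambda x: [v for v in x for _ in range(2)]

restore = chain(denoise, upscale)
```

In the paper's setting each stage is itself a CNN and the whole chain can be fine-tuned jointly, which is where the reported quality gain comes from.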
Affiliation(s)
- Majid Bizhani
- Natural Resources Canada, Geological Survey of Canada, 3303 33 Street NW, Calgary, AB, T2L 2A7, Canada
- Omid Haeri Ardakani
- Natural Resources Canada, Geological Survey of Canada, 3303 33 Street NW, Calgary, AB, T2L 2A7, Canada
- Department of Geoscience, University of Calgary, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada
- Edward Little
- Natural Resources Canada, Geological Survey of Canada, 3303 33 Street NW, Calgary, AB, T2L 2A7, Canada
8
Cheng S, Zhou Y, Chen J, Li H, Wang L, Lai P. High-resolution photoacoustic microscopy with deep penetration through learning. Photoacoustics 2022; 25:100314. [PMID: 34824976 PMCID: PMC8604673 DOI: 10.1016/j.pacs.2021.100314] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Received: 08/04/2021] [Revised: 11/01/2021] [Accepted: 11/01/2021] [Indexed: 05/18/2023]
Abstract
Optical-resolution photoacoustic microscopy (OR-PAM) offers superior spatial resolution and has received intense attention in recent years. Its application, however, has been limited to shallow depths because of the strong scattering of light in biological tissues. In this work, we propose to achieve deep-penetrating OR-PAM performance by applying deep learning-enabled image transformation to blurry living mouse vascular images acquired with an acoustic-resolution photoacoustic microscopy (AR-PAM) setup. A generative adversarial network (GAN) was trained in this study and improved the lateral imaging resolution of AR-PAM from 54.0 µm to 5.1 µm, comparable to that of a typical OR-PAM (4.7 µm). The feasibility of the network was evaluated with living mouse ear data, producing superior microvasculature images that outperform blind deconvolution. The generalization of the network was validated with in vivo mouse brain data. Moreover, it was shown experimentally that the deep learning method can retain high resolution at tissue depths beyond one optical transport mean free path. While it can be further improved, the proposed method opens new horizons for expanding the scope of OR-PAM towards deep-tissue imaging and wide applications in biomedicine.
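Lateral resolution figures like the 54.0 µm vs. 5.1 µm quoted above are conventionally measured as the full width at half maximum (FWHM) of a line-spread profile. A minimal sketch of that measurement on a synthetic Gaussian profile (the sigma below is an arbitrary illustrative value):

```python
import numpy as np

def fwhm(profile, dx=1.0):
    # Full width at half maximum of a 1-D line-spread profile, the standard
    # way lateral resolution is quoted. dx is the physical pixel spacing.
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0]) * dx

x = np.arange(-50.0, 51.0)
sigma = 4.0
psf = np.exp(-x**2 / (2.0 * sigma**2))   # Gaussian profile; FWHM ~ 2.355*sigma
```

On sampled data the discrete FWHM is accurate only to within the pixel spacing, which is why resolution comparisons are usually made at matched sampling.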
Affiliation(s)
- Shengfu Cheng
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Yingying Zhou
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Jiangbo Chen
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Huanhao Li
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Lidai Wang
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Puxiang Lai
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
9
Zhang X, Ma F, Zhang Y, Wang J, Liu C, Meng J. Sparse-sampling photoacoustic computed tomography: Deep learning vs. compressed sensing. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103233] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Indexed: 11/16/2022]
10
State-of-the-Art Approaches for Image Deconvolution Problems, including Modern Deep Learning Architectures. Micromachines 2021; 12:mi12121558. [PMID: 34945408 PMCID: PMC8707587 DOI: 10.3390/mi12121558] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 10/29/2021] [Revised: 11/29/2021] [Accepted: 12/09/2021] [Indexed: 01/06/2023]
Abstract
In modern digital microscopy, deconvolution methods are widely used to eliminate a number of image defects and increase resolution. In this review, we divide these methods into classical, deep learning-based, and optimization-based methods. Special attention is paid to deep learning as the most powerful and flexible modern approach. The review describes the major neural-network architectures used for the deconvolution problem, such as convolutional and generative adversarial networks, autoencoders, various forms of recurrent networks, and the attention mechanism. We describe the difficulties in their application, such as the discrepancy between standard loss functions and visual content, and the heterogeneity of the images. We then examine how to address these issues by introducing new loss functions, multiscale learning, and prior knowledge of visual content. In conclusion, we review promising directions for the further development of deconvolution methods in microscopy.
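A representative member of the classical family this review contrasts with deep learning is Richardson-Lucy deconvolution, an iterative scheme that alternately blurs the current estimate and corrects it by the ratio to the observation. A minimal 1-D sketch, deconvolving a point source blurred by a box point-spread function (PSF):

```python
import numpy as np

def richardson_lucy(observed, psf, iters=50):
    # Classical iterative deconvolution (Richardson-Lucy). Each iteration
    # blurs the current estimate, compares it with the observation, and
    # multiplicatively corrects the estimate; values stay non-negative.
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iters):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# A point source blurred by a 3-tap box PSF; RL should re-concentrate it.
truth = np.zeros(21)
truth[10] = 1.0
psf = np.ones(3) / 3.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

Unlike learned methods, this requires the PSF to be known; the deep learning methods the review surveys trade that requirement for a training set.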
11
Huang A, Sun L, Lin F, Guo J, Jiang J, Shen B, Chen J. Medical Image Recognition Technology in the Effect of Substituting Soybean Meal for Fish Meal on the Diversity of Intestinal Microflora in Channa argus. Journal of Healthcare Engineering 2021; 2021:5269169. [PMID: 34868520 PMCID: PMC8639257 DOI: 10.1155/2021/5269169] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 08/20/2021] [Revised: 09/22/2021] [Accepted: 11/02/2021] [Indexed: 12/13/2022]
Abstract
Purpose To study the application of medical image recognition technology based on a backpropagation neural network (BPNN) to the effect of replacing fish meal with soybean meal on the intestinal microbial diversity of Channa argus, and to evaluate the value of this intelligent algorithm, Channa argus was fed diets in which different proportions of fish meal were replaced by soybean meal. Methods After intestinal samples were collected and bacteria were isolated, microscopic imaging was performed, and the images were classified and identified. A BPNN was constructed to perform denoising, smoothing, and segmentation. Results After BPNN processing, the bacteria were completely separated from the original image background, with closed bacterial contours, which facilitated feature extraction and species recognition. With 2 hidden-layer nodes, the segmentation accuracy of the bacterial microscopic images was highest, up to 97.3%. As the replacement ratio of fish meal increased, the intestinal microbial species gradually became richer, and the relative abundance of the intestinal microbiome was higher after fish meal was completely replaced by soybean meal. Intestinal microbial enzyme activities were affected by the different fish meal and soybean meal contents in the diet: glutamate transaminase and adenosine deaminase activities increased after the replacement and were higher than before, with statistically significant differences (P < 0.05). Conclusion Replacing fish meal with soybean meal has a significant effect on the intestinal flora diversity of Channa argus, and the two are closely related. The BPNN-based image recognition technology achieves a high recognition rate and segmentation accuracy on microbiological microscopic images.
Affiliation(s)
- Aixia Huang
- Zhejiang Institute of Freshwater Fisheries, Huzhou, Zhejiang 313001, China
- Lihui Sun
- Zhejiang Institute of Freshwater Fisheries, Huzhou, Zhejiang 313001, China
- Feng Lin
- Zhejiang Institute of Freshwater Fisheries, Huzhou, Zhejiang 313001, China
- Jianlin Guo
- Zhejiang Institute of Freshwater Fisheries, Huzhou, Zhejiang 313001, China
- Jianhu Jiang
- Zhejiang Institute of Freshwater Fisheries, Huzhou, Zhejiang 313001, China
- Binqian Shen
- Zhejiang Institute of Freshwater Fisheries, Huzhou, Zhejiang 313001, China
- Jianming Chen
- Zhejiang Institute of Freshwater Fisheries, Huzhou, Zhejiang 313001, China
12
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 09/15/2021] [Revised: 10/19/2021] [Accepted: 11/07/2021] [Indexed: 12/21/2022]
13
Mandal D, Vahadane A, Sharma S, Majumdar S. Blur-Robust Nuclei Segmentation for Immunofluorescence Images. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3475-3478. [PMID: 34891988 DOI: 10.1109/embc46164.2021.9629787] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 06/14/2023]
Abstract
Automated nuclei segmentation from immunofluorescence (IF) microscopy images is a crucial first step in digital pathology. Much research has been devoted to developing novel nuclei segmentation algorithms that perform well on good-quality images; however, fewer methods have been developed for poor-quality images such as out-of-focus (blurry) data. In this work, we take a principled approach to studying the performance of nuclei segmentation algorithms on out-of-focus images at different levels of blur. We propose a deep learning encoder-decoder framework with a novel Y-forked decoder, whose two fork ends are tied to the segmentation and deblurring outputs. Adding a separate deblurring task to the training paradigm helps to regularize the network on blurry images. Our proposed method accurately predicts instance nuclei segmentation on sharp as well as out-of-focus images; additionally, the predicted deblurred image provides interpretable insights to experts. Experimental analysis on the Human U2OS cells (out-of-focus) dataset shows that our algorithm is robust and outperforms state-of-the-art methods.
14
Multi-Task Learning-Based Immunofluorescence Classification of Kidney Disease. International Journal of Environmental Research and Public Health 2021; 18:ijerph182010798. [PMID: 34682567 PMCID: PMC8535636 DOI: 10.3390/ijerph182010798] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 08/24/2021] [Revised: 09/24/2021] [Accepted: 09/26/2021] [Indexed: 12/14/2022]
Abstract
Chronic kidney disease is one of the most important causes of mortality worldwide, but a shortage of nephrology pathologists has led to delays or errors in its diagnosis and treatment. Immunofluorescence (IF) images of patients with IgA nephropathy (IgAN), membranous nephropathy (MN), diabetic nephropathy (DN), and lupus nephritis (LN) were obtained from the General Hospital of the Chinese PLA, and the data were divided into training and test sets. To simulate the inaccurate focus of the fluorescence microscope, Gaussian blurring was applied to the IF images. We propose a novel multi-task learning (MTL) method for the image quality assessment, de-blurring, and disease classification tasks. A total of 1608 patients' IF images were included: 1289 in the training set and 319 in the test set. For non-blurred IF images, the classification accuracy on the test set was 0.97, with an AUC of 1.000. For blurred IF images, the proposed MTL method achieved higher accuracy (0.94 vs. 0.93, p < 0.01) and a higher AUC (0.993 vs. 0.986) than the common MTL method. The novel MTL method not only diagnosed four types of kidney disease from blurred IF images but also performed well on the two auxiliary tasks: image quality assessment and de-blurring.
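The defocus simulation described above, Gaussian blurring of sharp IF images to build a blurred training set, can be sketched in 1-D as follows. The kernel size and sigma below are illustrative; the paper's values are not stated in this abstract.

```python
import numpy as np

def gaussian_kernel(size=9, sigma=2.0):
    # Normalized 1-D Gaussian kernel (sums to 1, so blurring preserves
    # overall intensity).
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax**2) / (2.0 * sigma**2))
    return k / k.sum()

def simulate_defocus(signal, sigma=2.0):
    # Simulates an inaccurately focused fluorescence microscope by
    # Gaussian smoothing, the approach used to create the blurred IF set.
    return np.convolve(signal, gaussian_kernel(sigma=sigma), mode="same")

rng = np.random.default_rng(1)
sharp = rng.normal(size=256)      # stand-in for a sharp intensity profile
blurred = simulate_defocus(sharp)
```

On real images the same operation is applied separably along both axes, and sigma is varied to cover different degrees of defocus.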
15
Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics 2021; 22:100241. [PMID: 33717977 PMCID: PMC7932894 DOI: 10.1016/j.pacs.2021.100241] [Citation(s) in RCA: 86] [Impact Index Per Article: 28.7] [Received: 11/05/2020] [Revised: 01/18/2021] [Accepted: 01/20/2021] [Indexed: 05/04/2023]
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires the solving of inverse image reconstruction problems, which have proven extremely difficult to solve. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Affiliation(s)
- Janek Gröhl
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Melanie Schellenberg
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Kris Dreher
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Lena Maier-Hein
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
16
Das D, Sharma A, Rajendran P, Pramanik M. Another decade of photoacoustic imaging. Phys Med Biol 2020; 66. [PMID: 33361580 DOI: 10.1088/1361-6560/abd669] [Citation(s) in RCA: 52] [Impact Index Per Article: 13.0] [Received: 09/04/2020] [Accepted: 12/23/2020] [Indexed: 01/09/2023]
Abstract
Photoacoustic imaging is a hybrid biomedical imaging modality that is finding its way into clinical practice. Although the photoacoustic phenomenon has been known for more than a century, only in the last two decades has it been widely researched and used for biomedical imaging applications. In this review we focus on the development and progress of the technology in the last decade (2010-2020). Having become more user-friendly, cheaper, and more portable, photoacoustic imaging promises a wide range of applications if translated to the clinic. The photoacoustic community is growing steadily, and with the several new directions researchers are exploring, it is inevitable that photoacoustic imaging will one day establish itself as a regular imaging modality in clinical practice.
Affiliation(s)
- Dhiman Das
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Arunima Sharma
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Praveenbalaji Rajendran
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 70 Nanyang Drive, N1.3-B2-11, Singapore 637457
17
Sharma A, Pramanik M. Convolutional neural network for resolution enhancement and noise reduction in acoustic resolution photoacoustic microscopy. Biomedical Optics Express 2020; 11:6826-6839. [PMID: 33408964 PMCID: PMC7747888 DOI: 10.1364/boe.411257] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Received: 09/29/2020] [Revised: 10/24/2020] [Accepted: 10/24/2020] [Indexed: 05/03/2023]
Abstract
In acoustic-resolution photoacoustic microscopy (AR-PAM), a high numerical aperture focused ultrasound transducer (UST) is used for deep-tissue, high-resolution photoacoustic imaging. There is significant degradation of lateral resolution in the out-of-focus region, and improving out-of-focus resolution without degrading image quality remains a challenge. In this work, we propose a deep learning-based method to improve the resolution of AR-PAM images, especially in the out-of-focus plane. A modified fully dense U-Net based architecture was trained on simulated AR-PAM images. Applying the trained model to experimental images showed that the variation in resolution was ∼10% across the entire imaging depth (∼4 mm) with the deep learning-based method, compared to ∼180% in the original PAM images. The performance of the trained network on in vivo rat vasculature imaging further validated that noise-free, high-resolution images can be obtained with this method.
Affiliation(s)
- Arunima Sharma
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459, Singapore
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459, Singapore
18
Yuan AY, Gao Y, Peng L, Zhou L, Liu J, Zhu S, Song W. Hybrid deep learning network for vascular segmentation in photoacoustic imaging. Biomedical Optics Express 2020; 11:6445-6457. [PMID: 33282500 PMCID: PMC7687958 DOI: 10.1364/boe.409246] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 09/08/2020] [Revised: 10/02/2020] [Accepted: 10/06/2020] [Indexed: 05/04/2023]
Abstract
Photoacoustic (PA) technology has been used extensively for vessel imaging owing to its capability of identifying molecular specificities and achieving optical-diffraction-limited lateral resolution down to the cellular level. Vessel images carry essential medical information that guides professional diagnosis. Modern image processing techniques contribute substantially to vessel segmentation, but these methods suffer from under- or over-segmentation. Thus, we demonstrate the results of adopting both a fully convolutional network and a U-Net, and propose a hybrid network combining the two, applied to PA vessel images. Comparison results indicate that the hybrid network can significantly increase segmentation accuracy and robustness.
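One simple way to hybridize two segmentation networks' outputs is a convex combination of their per-pixel vessel probabilities, thresholded to a binary mask. This is an illustrative assumption, not the paper's actual fusion mechanism (its hybrid network combines the architectures themselves); the weights and threshold below are arbitrary:

```python
import numpy as np

def hybrid_fuse(p_fcn, p_unet, alpha=0.5, threshold=0.5):
    # Convex combination of two networks' per-pixel vessel probabilities,
    # then a threshold to get a binary mask. alpha and threshold are
    # hypothetical values chosen for illustration.
    fused = alpha * p_fcn + (1.0 - alpha) * p_unet
    return fused, fused > threshold

p_fcn = np.array([[0.9, 0.2], [0.6, 0.1]])   # FCN vessel probabilities (toy)
p_unet = np.array([[0.7, 0.4], [0.8, 0.2]])  # U-Net vessel probabilities (toy)
fused, mask = hybrid_fuse(p_fcn, p_unet)
```

Averaging probabilities tends to suppress each model's idiosyncratic false positives, which is one intuition for why combined models reduce under- and over-segmentation.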
Affiliation(s)
- Alan Yilun Yuan
- Department of Electrical and Electronic Engineering, Imperial College London, London, UK
- These authors contributed equally to this work
- Yang Gao
- Nanophotonics Research Center, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- These authors contributed equally to this work
- Liangliang Peng
- Nanophotonics Research Center, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Lingxiao Zhou
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China
- Department of Respiratory Medicine, Zhongshan-Xuhui Hospital, Fudan University, Shanghai, China
- Jun Liu
- Tianjin Union Medical Centre, Tianjin, China
- Siwei Zhu
- Tianjin Union Medical Centre, Tianjin, China
- Wei Song
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China