1. Guo K, Zheng Z, Zhong W, Li Z, Wang G, Li J, Cao Y, Wang Y, Lin J, Liu Q, Song X. Score-based generative model-assisted information compensation for high-quality limited-view reconstruction in photoacoustic tomography. Photoacoustics 2024; 38:100623. PMID: 38832333; PMCID: PMC11144813; DOI: 10.1016/j.pacs.2024.100623.
Abstract
Photoacoustic tomography (PAT) regularly operates in limited-view cases owing to data acquisition limitations. Reconstructions obtained with traditional methods in limited-view PAT exhibit distortions and numerous artifacts. Here, a novel limited-view PAT reconstruction strategy that combines model-based iteration with a score-based generative model is proposed. By incrementally adding noise to the training samples, prior knowledge can be learned from the complex probability distribution. The acquired prior is then used as a constraint in the model-based iteration, and the information of the missing views is gradually compensated through cyclic iteration to achieve high-quality reconstruction. The performance of the proposed method was evaluated on circular-phantom and in vivo experimental data. Experimental results demonstrate the effectiveness of the proposed method in limited-view cases. Notably, in the limited-view case of 70°, the proposed method achieves a remarkable improvement over the traditional method of 203% in PSNR and 48% in SSIM for the circular-phantom experimental data, and of 81% in PSNR and 65% in SSIM for the in vivo experimental data. The proposed method is capable of reconstructing PAT images in extremely limited-view cases, which will further expand the application of PAT in clinical scenarios.
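For readers unfamiliar with how a learned score prior can be folded into a model-based iteration, the following is a minimal, illustrative sketch (not the authors' code): a data-consistency gradient step alternates with a score-guided step over an annealed noise schedule. The forward operator, adjoint, and score network below are stand-in placeholders, and all step sizes are toy values.

```python
# Illustrative sketch only: combining a score-based prior with a model-based
# data-fidelity update for limited-view reconstruction. `forward_op`,
# `adjoint_op`, and `score_fn` are placeholders, not the published model.
import numpy as np

def forward_op(x):
    # Placeholder limited-view forward operator A (assumed linear here).
    return x

def adjoint_op(r):
    # Placeholder adjoint A^T used for the data-fidelity gradient.
    return r

def score_fn(x, sigma):
    # Placeholder for a trained score network s_theta(x, sigma) ~ grad log p(x).
    return -x / (1.0 + sigma ** 2)  # score of a toy Gaussian prior

def reconstruct(y, sigmas, n_inner=10, step=0.1, lam=0.05, seed=0):
    """Annealed cyclic iteration: data-consistency step, then score-prior step."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(y)
    for sigma in sigmas:                           # coarse-to-fine noise schedule
        for _ in range(n_inner):
            grad_data = adjoint_op(forward_op(x) - y)   # d/dx 0.5*||Ax - y||^2
            x = x - step * grad_data + lam * score_fn(x, sigma)
            x = x + np.sqrt(2 * lam) * sigma * rng.standard_normal(x.shape)
    return x

y = np.random.rand(64, 64)                         # toy "measured" data
x_hat = reconstruct(y, sigmas=np.linspace(1.0, 0.01, 5))
```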
Affiliation(s)
- Guijun Wang
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiahong Li
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Yubin Cao
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Yiguang Wang
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiabin Lin
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang 330031, China
2. Dong W, Zhu C, Xie D, Zhang Y, Tao S, Tian C. Image restoration for ring-array photoacoustic tomography system based on blind spatially rotational deconvolution. Photoacoustics 2024; 38:100607. PMID: 38665365; PMCID: PMC11044036; DOI: 10.1016/j.pacs.2024.100607.
Abstract
The ring-array photoacoustic tomography (PAT) system has been widely used in noninvasive biomedical imaging. However, the reconstructed image usually suffers from spatially rotational blur and streak artifacts due to non-ideal imaging conditions. To improve reconstruction quality, we introduce the concept of spatially rotational convolution to formulate the image blur process, build a regularized restoration model accordingly, and design an alternating minimization algorithm, called blind spatially rotational deconvolution, to obtain the restored image. We also present an image preprocessing method based on the proposed algorithm to remove the streak artifacts. Experiments on phantoms and in vivo biological tissues show that our approach can significantly enhance the resolution of images obtained from a ring-array PAT system and effectively remove streak artifacts.
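The alternating-minimization idea behind blind deconvolution can be sketched as follows, assuming NumPy/SciPy and ordinary (non-rotational) convolution; the paper's spatially rotational blur model, regularizers, and step sizes are not reproduced, and the numerical values here are arbitrary toy choices.

```python
# Hedged sketch of alternating minimization for blind deconvolution:
# update the image with the kernel fixed, then the kernel with the image fixed.
import numpy as np
from scipy.signal import fftconvolve

def data_grad_x(x, k, y):
    # Gradient of 0.5*||k * x - y||^2 w.r.t. the image (correlate with flipped kernel).
    r = fftconvolve(x, k, mode="same") - y
    return fftconvolve(r, k[::-1, ::-1], mode="same")

def data_grad_k(x, k, y):
    # Approximate gradient w.r.t. the kernel, cropped to the kernel support.
    r = fftconvolve(x, k, mode="same") - y
    full = fftconvolve(r, x[::-1, ::-1], mode="same")
    c = np.array(full.shape) // 2
    h = np.array(k.shape) // 2
    return full[c[0] - h[0]:c[0] + h[0] + 1, c[1] - h[1]:c[1] + h[1] + 1]

def blind_deconv(y, ksize=7, iters=50, lr_x=0.5, lr_k=1e-5, reg=1e-3):
    x = y.copy()
    k = np.full((ksize, ksize), 1.0 / ksize ** 2)      # flat initial kernel guess
    for _ in range(iters):
        x = x - lr_x * (data_grad_x(x, k, y) + reg * x)  # crude L2 regularizer
        k = np.clip(k - lr_k * data_grad_k(x, k, y), 0, None)
        k /= k.sum() + 1e-12                             # keep kernel non-negative, normalized
    return x, k

y = np.random.rand(64, 64)
x_hat, k_hat = blind_deconv(y)
```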
Affiliation(s)
- Wende Dong
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Key Laboratory of Space Photoelectric Detection and Perception (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, Jiangsu 211106, China
- Chenlong Zhu
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Key Laboratory of Space Photoelectric Detection and Perception (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, Jiangsu 211106, China
- Dan Xie
- School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Yanli Zhang
- College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Key Laboratory of Space Photoelectric Detection and Perception (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, Jiangsu 211106, China
- Shuyin Tao
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China
- Chao Tian
- School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
- Anhui Province Key Laboratory of Biomedical Imaging and Intelligent Processing, Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
3. Sweeney PW, Hacker L, Lefebvre TL, Brown EL, Gröhl J, Bohndiek SE. Unsupervised Segmentation of 3D Microvascular Photoacoustic Images Using Deep Generative Learning. Advanced Science 2024: e2402195. PMID: 38923324; DOI: 10.1002/advs.202402195.
Abstract
Mesoscopic photoacoustic imaging (PAI) enables label-free visualization of vascular networks in tissues with high contrast and resolution. Segmenting these networks from 3D PAI data and interpreting their physiological and pathological significance is crucial yet challenging because current methods are time-consuming and error-prone. Deep learning offers a potential solution; however, supervised analysis frameworks typically require human-annotated ground-truth labels. To address this, an unsupervised image-to-image translation deep learning model is introduced: the Vessel Segmentation Generative Adversarial Network (VAN-GAN). VAN-GAN integrates synthetic blood vessel networks that closely resemble real-life anatomy into its training process and learns to replicate the underlying physics of the PAI system in order to segment vasculature from 3D photoacoustic images. Applied to a diverse range of in silico, in vitro, and in vivo data, including patient-derived breast cancer xenograft models and 3D clinical angiograms, VAN-GAN demonstrates its capability to facilitate accurate and unbiased segmentation of 3D vascular networks. By leveraging synthetic data, VAN-GAN reduces the reliance on manual labeling, thus lowering the barrier to entry for high-quality blood vessel segmentation (F1 score: VAN-GAN vs. U-Net = 0.84 vs. 0.87) and enhancing preclinical and clinical research into vascular structure and function.
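A hedged sketch of the cycle-consistency mechanism that unsupervised image-to-image translation models of this kind rely on is given below. VAN-GAN itself additionally simulates PAI physics and uses full 3D generator and discriminator networks, none of which is reproduced here; the tiny convolutions are placeholders, and PyTorch is assumed.

```python
# Sketch of forward/backward cycle-consistency losses between real PA volumes
# and synthetic vessel masks. The single-convolution "generators" are stand-ins.
import torch
import torch.nn as nn

G_seg = nn.Conv3d(1, 1, 3, padding=1)   # stand-in: image -> vessel segmentation
G_img = nn.Conv3d(1, 1, 3, padding=1)   # stand-in: segmentation -> image
l1 = nn.L1Loss()

def cycle_losses(real_img, synthetic_label):
    """Compute forward and backward cycle-consistency terms."""
    fake_label = G_seg(real_img)          # image -> predicted vessel mask
    rec_img = G_img(fake_label)           # mask -> reconstructed image
    fake_img = G_img(synthetic_label)     # synthetic mask -> synthetic image
    rec_label = G_seg(fake_img)           # back to a mask
    return l1(rec_img, real_img) + l1(rec_label, synthetic_label)

x = torch.randn(1, 1, 16, 32, 32)         # toy 3D PA volume
m = torch.rand(1, 1, 16, 32, 32)          # toy synthetic vessel mask
loss = cycle_losses(x, m)
loss.backward()
```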
Affiliation(s)
- Paul W Sweeney
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Lina Hacker
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Thierry L Lefebvre
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Emma L Brown
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Janek Gröhl
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Sarah E Bohndiek
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
4. Fu J, Tang X, Wang X, Jin Z, Fu Y, Zhang H, Xu X, Qin H. Fully dense generative adversarial network for removing artifacts caused by microwave dielectric effect in thermoacoustic imaging. Optics Express 2024; 32:17464-17478. PMID: 38858929; DOI: 10.1364/oe.522550.
Abstract
Microwave-induced thermoacoustic (TA) imaging (MTAI) combines pulsed microwave excitation and ultrasound detection to provide high-contrast, high-resolution images based on dielectric contrast, which holds great promise for clinical applications. However, artifacts caused by the microwave dielectric effect seriously affect the accuracy of MTAI images and hinder clinical translation. In this work, we propose a deep learning-based method, the fully dense generative adversarial network (FD-GAN), for removing artifacts caused by the microwave dielectric effect in MTAI. FD-GAN adds a fully dense block to the generative adversarial network (GAN) built on the mutual confrontation between generator and discriminator, which enables it to learn both local and global features related to artifact removal and to generate high-quality images. Practical feasibility was tested on simulated and experimental data. The results demonstrate that FD-GAN effectively removes artifacts caused by the microwave dielectric effect and shows superiority in denoising, background suppression, and reduction of image distortion. Our approach is expected to significantly improve the accuracy and quality of MTAI images, thereby enhancing the diagnostic accuracy of this innovative imaging technique.
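As an illustration of the "fully dense" building block named above, the following PyTorch sketch shows a dense convolutional block in which each layer receives the concatenation of all preceding feature maps. The layer count, growth rate, and normalization are assumptions, not the published FD-GAN configuration.

```python
# Dense block sketch: every layer sees the concatenation of all earlier outputs.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True)))
            ch += growth                       # next layer sees all prior outputs

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)         # dense concatenation of everything

block = DenseBlock(in_ch=8)
out = block(torch.randn(1, 8, 64, 64))         # -> (1, 8 + 4*16, 64, 64)
```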
5. Wang R, Zhu J, Meng Y, Wang X, Chen R, Wang K, Li C, Shi J. Adaptive machine learning method for photoacoustic computed tomography based on sparse array sensor data. Computer Methods and Programs in Biomedicine 2023; 242:107822. PMID: 37832425; DOI: 10.1016/j.cmpb.2023.107822.
Abstract
Background and objective: Photoacoustic computed tomography (PACT) is a non-invasive biomedical imaging technology that has developed rapidly in recent decades and has shown particular potential for small-animal studies and early diagnosis of human diseases. To obtain high-quality images, the photoacoustic imaging system needs a high-element-density detector array. In practical applications, however, cost limitations, manufacturing constraints, and system requirements for miniaturization and robustness make it challenging to provide sufficient elements, so the reconstructed images may be degraded and even suffer from artifacts. Unlike recent machine learning methods that recover high-quality images by removing distortions and artifacts, this paper proposes an adaptive machine learning method that first predicts and completes the photoacoustic sensor channel data from sparse array sampling and then reconstructs images with conventional reconstruction algorithms.
Methods: We develop an adaptive machine learning method to predict and complete the photoacoustic sensor channel data. The model consists of XGBoost and a neural network named SS-net. To handle data sets of different sizes and improve generalization, a tunable parameter controls the weights of the XGBoost and SS-net outputs.
Results: The proposed method achieved superior performance in simulation, phantom experiments, and an in vivo experiment. Compared with linear interpolation, XGBoost, CAE, and U-net, the simulation results show that the SSIM value is increased by 12.83%, 6.78%, 21.46%, and 12.33%, respectively. Moreover, the median R2 is increased by 34.4%, 8.1%, 28.6%, and 84.1% with the in vivo data.
Conclusions: This model provides a framework to predict the missing photoacoustic sensor data on a sparse ring-shaped array for PACT imaging and achieves considerable improvements in reconstructing the objects. Compared qualitatively and quantitatively with linear interpolation and other deep learning methods, the proposed method effectively suppresses artifacts and improves image quality. A further advantage is that there is no need to prepare a large number of images as the training dataset, since the training data come directly from the sensors. The method has the potential to be applied to a wide range of photoacoustic imaging detector arrays for low-cost and user-friendly clinical applications.
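The weighted combination of XGBoost and a neural network described in the Methods can be sketched as below, assuming the xgboost and scikit-learn packages are available. The MLP is a stand-in for SS-net, and the feature construction and weight alpha are illustrative assumptions, not the paper's settings.

```python
# Sketch of a tunable blend of an XGBoost regressor and a small neural network
# for predicting a missing sensor channel from sparse neighbors.
import numpy as np
from xgboost import XGBRegressor
from sklearn.neural_network import MLPRegressor   # stand-in for SS-net

def fit_and_blend(X_train, y_train, X_test, alpha=0.5):
    """alpha weights the XGBoost output; (1 - alpha) weights the network output."""
    xgb = XGBRegressor(n_estimators=200, max_depth=4).fit(X_train, y_train)
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_train, y_train)
    return alpha * xgb.predict(X_test) + (1 - alpha) * net.predict(X_test)

# Toy example: predict a missing channel value from neighboring-channel features.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))      # features from neighboring sensor channels
y = X @ rng.standard_normal(8)         # synthetic "missing channel" target
pred = fit_and_blend(X[:400], y[:400], X[400:], alpha=0.6)
```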
Affiliation(s)
- Jing Zhu
- Zhejiang Lab, Hangzhou 311100, China
- Chiye Li
- Zhejiang Lab, Hangzhou 311100, China
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China
- Junhui Shi
- Zhejiang Lab, Hangzhou 311100, China
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China
6. Juhong A, Li B, Liu Y, Yao CY, Yang CW, Agnew DW, Lei YL, Luker GD, Bumpers H, Huang X, Piyawattanametha W, Qiu Z. Recurrent and convolutional neural networks for sequential multispectral optoacoustic tomography (MSOT) imaging. Journal of Biophotonics 2023; 16:e202300142. PMID: 37382181; DOI: 10.1002/jbio.202300142.
Abstract
Multispectral optoacoustic tomography (MSOT) is a beneficial technique for diagnosing and analyzing biological samples, since it provides detailed anatomical and physiological information. However, acquiring volumetric MSOT with high through-plane resolution is time-consuming. Here, we propose a deep learning model based on hybrid recurrent and convolutional neural networks to generate sequential cross-sectional images for an MSOT system. This system provides three modalities (MSOT, ultrasound, and optoacoustic imaging of a specific exogenous contrast agent) in a single scan. This study used ICG-conjugated nanoworm particles (NWs-ICG) as the contrast agent. Instead of acquiring seven images with a step size of 0.1 mm, we acquire two images with a step size of 0.6 mm as input to the proposed deep learning model, which generates the five intermediate images at a 0.1 mm step size between the two inputs, reducing acquisition time by approximately 71%.
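To make the slice-interpolation task concrete, here is a toy sketch in which a small CNN, given two measured slices 0.6 mm apart and a normalized target position, predicts one intermediate 0.1 mm slice at a time. The hybrid recurrent/convolutional architecture of the paper is not reproduced, and every layer choice here is an assumption (PyTorch assumed).

```python
# Toy slice interpolator: two measured slices plus a position channel in,
# one intermediate slice out; called five times per 0.6 mm gap.
import torch
import torch.nn as nn

class SliceInterp(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, s0, s1, t):
        # t in (0, 1): fractional position between the two measured slices.
        pos = torch.full_like(s0, float(t))
        return self.net(torch.cat([s0, s1, pos], dim=1))

model = SliceInterp()
s0 = torch.randn(1, 1, 128, 128)        # measured slice at depth z
s1 = torch.randn(1, 1, 128, 128)        # measured slice at z + 0.6 mm
intermediate = [model(s0, s1, k / 6) for k in range(1, 6)]   # five 0.1 mm steps
```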
Affiliation(s)
- Aniwat Juhong
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Bo Li
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Yifan Liu
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Cheng-You Yao
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, Michigan, USA
- Chia-Wei Yang
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Chemistry, Michigan State University, East Lansing, Michigan, USA
- Dalen W Agnew
- Department of Pathobiology and Diagnostic Investigation, College of Veterinary Medicine, Michigan State University, East Lansing, Michigan, USA
- Yu Leo Lei
- Department of Periodontics and Oral Medicine, University of Michigan, Ann Arbor, Michigan, USA
- Gary D Luker
- Department of Radiology, Microbiology and Immunology, and Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, USA
- Harvey Bumpers
- Department of Surgery, Michigan State University, East Lansing, Michigan, USA
- Xuefei Huang
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Chemistry, Michigan State University, East Lansing, Michigan, USA
- Wibool Piyawattanametha
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Biomedical Engineering, School of Engineering, King Mongkut's Institute of Technology Ladkrabang (KMITL), Bangkok, Thailand
- Zhen Qiu
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, Michigan, USA
7. Shen H, Liu X, Cui Q, Sun Y, Yang B, Li F, Xu X, Liu Z, Liu W. Limited view correction in low-optical-NA photoacoustic microscopy. Optics Letters 2023; 48:5627-5630. PMID: 37910719; DOI: 10.1364/ol.502616.
Abstract
A photoacoustic microscope (PAM) with a low optical NA suffers from a limited view along the optical axis, owing to the coherent cancellation of acoustic pressure waves excited by a smoothly focused beam. Using larger-NA (NA > 0.3) objectives can readily overcome the limited-view problem, but at the cost of a shallow working distance and time-consuming depth scanning for large-volume imaging. Instead, we report an off-axis oblique detection strategy that is compatible with a low-optical-NA PAM for revealing structures along the optical axis. Comprehensive photoacoustic modeling, ex vivo phantom experiments, and in vivo mouse brain imaging were conducted to validate the efficacy of the limited-view correction. Proof-of-concept results show that the visibility of optical-axis structures can be greatly enhanced by making the detection angle off the optical axis larger than 45°, suggesting that off-axis oblique detection is a simple and cost-effective alternative for solving limited-view problems in low-optical-NA PAMs.
8. Wang R, Zhu J, Xia J, Yao J, Shi J, Li C. Photoacoustic imaging with limited sampling: a review of machine learning approaches. Biomedical Optics Express 2023; 14:1777-1799. PMID: 37078052; PMCID: PMC10110324; DOI: 10.1364/boe.483081.
Abstract
Photoacoustic imaging combines high optical absorption contrast with deep acoustic penetration and can non-invasively reveal structural, molecular, and functional information about biological tissue. Due to practical restrictions, photoacoustic imaging systems often face various challenges, such as complex system configuration, long imaging time, and/or less-than-ideal image quality, which collectively hinder their clinical application. Machine learning has been applied to improve photoacoustic imaging and mitigate the otherwise strict requirements in system setup and data acquisition. In contrast to previous reviews of learned methods in photoacoustic computed tomography (PACT), this review focuses on the application of machine learning approaches to the limited spatial sampling problems in photoacoustic imaging, specifically the limited-view and undersampling issues. We summarize the relevant PACT works in terms of their training data, workflow, and model architecture. Notably, we also introduce recent limited-sampling works on the other major implementation of photoacoustic imaging, i.e., photoacoustic microscopy (PAM). With machine learning-based processing, photoacoustic imaging can achieve improved image quality with modest spatial sampling, presenting great potential for low-cost and user-friendly clinical applications.
Affiliation(s)
- Ruofan Wang
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jing Zhu
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Junhui Shi
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Chiye Li
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
9. Lan H, Yang C, Gao F. A jointed feature fusion framework for photoacoustic image reconstruction. Photoacoustics 2023; 29:100442. PMID: 36589516; PMCID: PMC9798177; DOI: 10.1016/j.pacs.2022.100442.
Abstract
The standard reconstruction of photoacoustic (PA) computed tomography (PACT) images can produce artifacts due to interference or an ill-posed setup. Recently, deep learning has been used to reconstruct PA images under such ill-posed conditions. In this paper, we propose a jointed feature fusion framework (JEFF-Net) based on deep learning to reconstruct PA images from limited-view data. Cross-domain features from the limited-view position-wise data and from the reconstructed image are fused through a backtracked supervision scheme. A quarter of the position-wise data (32 channels) is fed into the model, which outputs the remaining three-quarters-view data (96 channels). Moreover, two novel losses are designed to restrain artifacts by sufficiently manipulating the superposed data. The experimental results demonstrate superior performance, with quantitative evaluations showing that the proposed method outperforms the ground truth on some metrics by 135% (SSIM, simulation) and 40% (gCNR, in vivo).
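A rough sketch of the cross-domain feature-fusion idea follows: features from the limited-view channel data and from a preliminary reconstruction are encoded separately and fused before the output head. The layer shapes, pooling, and the absence of the backtracked supervision are simplifications for illustration, not the JEFF-Net definition (PyTorch assumed).

```python
# Dual-branch fusion sketch: a 1D encoder for quarter-view channel data and a
# 2D encoder for a preliminary reconstruction, concatenated before the head.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.sig_enc = nn.Sequential(nn.Conv1d(32, 64, 5, padding=2), nn.ReLU())
        self.img_enc = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(128, 1, 3, padding=1)

    def forward(self, sinogram_q, recon_img):
        f_sig = self.sig_enc(sinogram_q)               # (B, 64, T)
        f_sig = f_sig.mean(dim=2)[:, :, None, None]    # pool over time, broadcast over image
        f_img = self.img_enc(recon_img)                # (B, 64, H, W)
        f_sig = f_sig.expand(-1, -1, *f_img.shape[2:])
        return self.head(torch.cat([f_sig, f_img], dim=1))

net = FusionNet()
out = net(torch.randn(2, 32, 1024), torch.randn(2, 1, 128, 128))
```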
Affiliation(s)
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Shanghai Clinical Research and Trial Center, Shanghai 201210, China
10. Zheng Q, Yang R, Ni X, Yang S, Jiang Z, Wang L, Chen Z, Liu X. Development and validation of a deep learning-based laparoscopic system for improving video quality. International Journal of Computer Assisted Radiology and Surgery 2023; 18:257-268. PMID: 36243805; DOI: 10.1007/s11548-022-02777-y.
Abstract
Purpose: A clear surgical field of view is a prerequisite for successful laparoscopic surgery. Surgical smoke, image blur, and lens fogging can degrade the clarity of laparoscopic imaging. We aimed to develop a real-time assistance system (LVQIS) for removing these interfering factors during laparoscopic surgery, thereby improving laparoscopic video quality.
Methods: LVQIS was developed with generative adversarial networks (GAN) and transfer learning, and includes two classification models (ResNet-50), a motion blur removal model (MPRNet), and a smoke/fog removal model (GAN). 136 laparoscopic surgery videos were retrospectively collected into a tripartite dataset for training and validation. A synthetic dataset was simulated using the image enhancement library Albumentations and the 3D rendering software Blender. Objective evaluation used PSNR, SSIM, and FID; subjective evaluation included operation pause time and the degree of surgeon anxiety.
Results: The synthesized dataset contained 19,245 clear images, 19,245 motion-blurred images, and 19,245 smoke/fog images. The ResNet-50 CNN model identified whether a single laparoscopic image had motion blur or smoke/fog with an accuracy of over 0.99. The PSNR, SSIM, and FID of the de-smoke model were 29.67, 0.9551, and 74.72, respectively, and those of the de-blurring model were 26.78, 0.9020, and 80.10, respectively, outperforming other advanced de-blurring and de-smoke/fog models. In a comparative study of 100 laparoscopic surgeries, the use of LVQIS significantly reduced operation pause time (P < 0.001) and surgeon anxiety (P = 0.004).
Conclusions: LVQIS is an efficient and robust system that can improve the quality of laparoscopic video, reduce surgical pause time and surgeon anxiety, and has the potential for real-time application in clinical settings.
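The frame-routing logic of such a pipeline can be sketched as below; the classifiers and restoration models are placeholders (the real system uses ResNet-50, MPRNet, and a GAN), so this only illustrates the control flow, not the published models (PyTorch assumed).

```python
# Control-flow sketch: classify a frame, then apply only the restoration it needs.
import torch
import torch.nn as nn

blur_clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))   # stand-in classifiers
smoke_clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
deblur = nn.Identity()                                              # stand-in for MPRNet
desmoke = nn.Identity()                                             # stand-in for de-smoke GAN

def restore_frame(frame):
    """Route one laparoscopic frame through the needed restoration stages."""
    out = frame
    if blur_clf(frame).argmax(dim=1).item() == 1:
        out = deblur(out)
    if smoke_clf(out).argmax(dim=1).item() == 1:
        out = desmoke(out)
    return out

frame = torch.rand(1, 3, 64, 64)     # toy RGB laparoscopic frame
clean = restore_frame(frame)
```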
Affiliation(s)
- Qingyuan Zheng
- Department of Urology, Renmin Hospital of Wuhan University, 99 Zhang Zhi-dong Road, Wuhan, Hubei, 430060, People's Republic of China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, 430060, Hubei, China
- Rui Yang
- Department of Urology, Renmin Hospital of Wuhan University, 99 Zhang Zhi-dong Road, Wuhan, Hubei, 430060, People's Republic of China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, 430060, Hubei, China
- Xinmiao Ni
- Department of Urology, Renmin Hospital of Wuhan University, 99 Zhang Zhi-dong Road, Wuhan, Hubei, 430060, People's Republic of China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, 430060, Hubei, China
- Song Yang
- Department of Urology, Renmin Hospital of Wuhan University, 99 Zhang Zhi-dong Road, Wuhan, Hubei, 430060, People's Republic of China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, 430060, Hubei, China
- Zhengyu Jiang
- Department of Urology, Renmin Hospital of Wuhan University, 99 Zhang Zhi-dong Road, Wuhan, Hubei, 430060, People's Republic of China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, 430060, Hubei, China
- Lei Wang
- Department of Urology, Renmin Hospital of Wuhan University, 99 Zhang Zhi-dong Road, Wuhan, Hubei, 430060, People's Republic of China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, 430060, Hubei, China
- Zhiyuan Chen
- Department of Urology, Renmin Hospital of Wuhan University, 99 Zhang Zhi-dong Road, Wuhan, Hubei, 430060, People's Republic of China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, 430060, Hubei, China
- Xiuheng Liu
- Department of Urology, Renmin Hospital of Wuhan University, 99 Zhang Zhi-dong Road, Wuhan, Hubei, 430060, People's Republic of China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, 430060, Hubei, China
11. Jiang Z, Sun B, Wang Y, Gao H, Ren H, Zhang H, Lu T, Ren X, Wei W, Wang X, Zhang L, Li J, Ding D, Lovell JF, Zhang Y. Surfactant-Stripped Micelles with Aggregation-Induced Enhanced Emission for Bimodal Gut Imaging In Vivo and Microbiota Tagging Ex Vivo. Advanced Healthcare Materials 2021; 10:e2100356. PMID: 34160147; DOI: 10.1002/adhm.202100356.
Abstract
Aggregation-induced emission luminogens (AIEgens) hold promise for biomedical imaging, and new approaches that promote their aggregation state are desirable for fluorescence enhancement. Herein, a series of surfactant-stripped AIEgen micelles (SSAMs) with improved fluorescence are developed by a low-temperature surfactant-stripping method that encapsulates AIEgens in a temperature-sensitive Pluronic block copolymer. After stripping the excess surfactant, SSAMs exhibit altered optical properties and a significantly higher fluorescence quantum yield. Using this method, a library of highly concentrated fluorescent nanoparticles is generated with tunable absorption and emission wavelengths, permitting imaging of deep tissues at different wavelengths. SSAMs remain physiologically stable and can pass safely through the gastrointestinal (GI) tract without degradation under harsh conditions, allowing fluorescence and photoacoustic imaging of the intestine with high resolution. D-amino acids (DAA), a natural metabolite for bacteria, can be chemically conjugated to the surface of SSAMs, enabling non-invasive monitoring of ex vivo fluorescently labeled gut microbiota in the GI tract.
Affiliation(s)
- Zhen Jiang
- School of Chemical Engineering and Technology, Key Laboratory of Systems Bioengineering, Ministry of Education, Tianjin University, Tianjin, 300350, P. R. China
- Boyang Sun
- Department of Biomedical Engineering, The State University of New York at Buffalo, Buffalo, NY, 14260, USA
- Yueqi Wang
- School of Chemical Engineering and Technology, Key Laboratory of Systems Bioengineering, Ministry of Education, Tianjin University, Tianjin, 300350, P. R. China
- Heqi Gao
- State Key Laboratory of Medicinal Chemical Biology, Key Laboratory of Bioactive Materials Ministry of Education and College of Life Sciences, Nankai University, Tianjin, 300071, P. R. China
- He Ren
- School of Chemical Engineering and Technology, Key Laboratory of Systems Bioengineering, Ministry of Education, Tianjin University, Tianjin, 300350, P. R. China
- Hao Zhang
- School of Chemical Engineering and Technology, Key Laboratory of Systems Bioengineering, Ministry of Education, Tianjin University, Tianjin, 300350, P. R. China
- Tong Lu
- School of Precision Instrument and Opto-electronics Engineering, Tianjin University, Tianjin, 300072, P. R. China
- Xiangkui Ren
- School of Chemical Engineering and Technology, Key Laboratory of Systems Bioengineering, Ministry of Education, Tianjin University, Tianjin, 300350, P. R. China
- Wei Wei
- South China Normal University, Guangzhou, Guangdong Province, 510631, P. R. China
- Xiaoli Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, 300192, P. R. China
- Lei Zhang
- School of Chemical Engineering and Technology, Key Laboratory of Systems Bioengineering, Ministry of Education, Tianjin University, Tianjin, 300350, P. R. China
- Jiao Li
- State Key Laboratory of Medicinal Chemical Biology, Key Laboratory of Bioactive Materials Ministry of Education and College of Life Sciences, Nankai University, Tianjin, 300071, P. R. China
- Dan Ding
- State Key Laboratory of Medicinal Chemical Biology, Key Laboratory of Bioactive Materials Ministry of Education and College of Life Sciences, Nankai University, Tianjin, 300071, P. R. China
- Jonathan F Lovell
- Department of Biomedical Engineering, The State University of New York at Buffalo, Buffalo, NY, 14260, USA
- Yumiao Zhang
- School of Chemical Engineering and Technology, Key Laboratory of Systems Bioengineering, Ministry of Education, Tianjin University, Tianjin, 300350, P. R. China
12. Song TA, Yang F, Dutta J. Noise2Void: unsupervised denoising of PET images. Physics in Medicine & Biology 2021; 66. PMID: 34663767; DOI: 10.1088/1361-6560/ac30a0.
Abstract
Objective: Elevated noise levels in positron emission tomography (PET) images lower image quality and quantitative accuracy and are a confounding factor for clinical interpretation. The objective of this paper is to develop a PET image denoising technique based on unsupervised deep learning.
Significance: Recent advances in deep learning have ushered in a wide array of novel denoising techniques, several of which have been successfully adapted for PET image reconstruction and post-processing. The bulk of the deep learning research so far has focused on supervised learning schemes, which, for the image denoising problem, require paired noisy and noiseless/low-noise images. This requirement tends to limit the utility of these methods for medical applications as paired training datasets are not always available. Furthermore, to achieve the best-case performance of these methods, it is essential that the datasets for training and subsequent real-world application have consistent image characteristics (e.g. noise, resolution, etc.), which is rarely the case for clinical data. To circumvent these challenges, it is critical to develop unsupervised techniques that obviate the need for paired training data.
Approach: In this paper, we have adapted Noise2Void, a technique that relies on corrupt images alone for model training, for PET image denoising and assessed its performance using PET neuroimaging data. Noise2Void is an unsupervised approach that uses a blind-spot network design. It requires only a single noisy image as its input and, therefore, is well suited for clinical settings. During the training phase, a single noisy PET image serves as both the input and the target. Here we present a modified version of Noise2Void based on a transfer learning paradigm that involves group-level pretraining followed by individual fine-tuning. Furthermore, we investigate the impact of incorporating an anatomical image as a second input to the network.
Main results: We validated our denoising technique using simulation data based on the BrainWeb digital phantom. We show that Noise2Void with pretraining and/or anatomical guidance leads to higher peak signal-to-noise ratios than traditional denoising schemes such as Gaussian filtering, anatomically guided non-local means filtering, and block-matching and 4D filtering. We used the Noise2Noise denoising technique as an additional benchmark. For clinical validation, we applied this method to human brain imaging datasets. The clinical findings were consistent with the simulation results, confirming the translational value of Noise2Void as a denoising tool.
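The blind-spot training trick that lets a single noisy image act as both input and target can be sketched as follows; the mask ratio, neighbor-substitution rule, and tiny network are illustrative assumptions rather than the exact Noise2Void recipe (PyTorch assumed).

```python
# Blind-spot sketch: mask random pixels at the input and compute the loss only
# at those positions, so the noisy image supervises itself.
import torch
import torch.nn as nn

def blind_spot_batch(noisy, mask_frac=0.02):
    """Return (masked_input, target, mask) for one blind-spot training step."""
    mask = (torch.rand_like(noisy) < mask_frac).float()
    shifted = torch.roll(noisy, shifts=(1, 1), dims=(-2, -1))  # crude neighbor substitute
    masked_input = noisy * (1 - mask) + shifted * mask
    return masked_input, noisy, mask

net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

noisy = torch.rand(1, 1, 128, 128)             # single noisy PET slice
for _ in range(10):                            # toy training loop
    inp, tgt, mask = blind_spot_batch(noisy)
    loss = (((net(inp) - tgt) ** 2) * mask).sum() / mask.sum().clamp(min=1)
    opt.zero_grad(); loss.backward(); opt.step()
```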
Affiliation(s)
- Tzu-An Song
- University of Massachusetts Lowell, Lowell, MA 01854, United States of America
- Fan Yang
- University of Massachusetts Lowell, Lowell, MA 01854, United States of America
- Joyita Dutta
- University of Massachusetts Lowell, Lowell, MA 01854, United States of America
- Massachusetts General Hospital, Boston, MA 02114, United States of America
13. Rajendran P, Pramanik M. Deep-learning-based multi-transducer photoacoustic tomography imaging without radius calibration. Optics Letters 2021; 46:4510-4513. PMID: 34525034; DOI: 10.1364/ol.434513.
Abstract
Pulsed laser diodes are used as excitation sources in photoacoustic tomography (PAT) because of their low cost, compact size, and high pulse repetition rate. In combination with multiple single-element ultrasound transducers (SUTs), the imaging speed of PAT can be improved. However, during PAT image reconstruction, the exact radius of each SUT is required for accurate reconstruction. Here we developed a novel deep learning approach to alleviate the need for radius calibration. We used a convolutional neural network (fully dense U-Net) aided by a convolutional long short-term memory block to reconstruct the PAT images. Our analysis on the test set demonstrates that the proposed network eliminates the need for radius calibration and improves the peak signal-to-noise ratio by ∼73% without compromising image quality. In vivo imaging was used to verify the performance of the network.
14. Yazdani A, Agrawal S, Johnstonbaugh K, Kothapalli SR, Monga V. Simultaneous Denoising and Localization Network for Photoacoustic Target Localization. IEEE Transactions on Medical Imaging 2021; 40:2367-2379. PMID: 33939612; PMCID: PMC8526152; DOI: 10.1109/tmi.2021.3077187.
Abstract
A significant research problem of recent interest is the localization of targets such as vessels, surgical needles, and tumors in photoacoustic (PA) images. To achieve accurate localization, a high photoacoustic signal-to-noise ratio (SNR) is required. However, this is not guaranteed for deep targets, as optical scattering causes an exponential decay in optical fluence with tissue depth. To address this, we develop a novel deep learning method designed to be explicitly robust to the noise present in photoacoustic radio-frequency (RF) data. More precisely, we describe and evaluate a deep neural network architecture consisting of a shared encoder and two parallel decoders. One decoder extracts the target coordinates from the input RF data while the other boosts the SNR and estimates clean RF data. The joint optimization of the shared encoder and dual decoders lends significant noise robustness to the features extracted by the encoder, which in turn enables the network to retain detailed information about deep targets that may be obscured by noise. Additional custom layers and newly proposed regularizers in the training loss function (designed based on observed RF-data signal and noise behavior) serve to increase the SNR in the cleaned RF output and improve model performance. To account for depth-dependent strong optical scattering, the network was trained with simulated photoacoustic datasets of targets embedded at different depths inside tissue media with different scattering levels. The network trained on this dataset accurately locates targets in experimental PA data that are clinically relevant to the localization of vessels, needles, or brachytherapy seeds. We verify the merits of the proposed architecture by outperforming the state of the art on both simulated and experimental datasets.
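A compact architectural sketch of the shared-encoder/dual-decoder idea is given below; layer sizes, the custom layers, and the paper's regularizers are omitted, and the loss targets are toy placeholders (PyTorch assumed).

```python
# Shared encoder with two heads: one regresses target coordinates, the other
# reconstructs cleaned RF data; both losses are optimized jointly.
import torch
import torch.nn as nn

class DualDecoderNet(nn.Module):
    def __init__(self, n_ch=64, n_samples=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv1d(n_ch, 128, 5, padding=2), nn.ReLU())
        self.loc_head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                      nn.Linear(128, 2))          # (x, z) coordinates
        self.denoise_head = nn.Conv1d(128, n_ch, 5, padding=2)    # cleaned RF data

    def forward(self, rf):
        z = self.encoder(rf)
        return self.loc_head(z), self.denoise_head(z)

net = DualDecoderNet()
rf = torch.randn(4, 64, 512)                      # noisy RF frames
coords, clean_rf = net(rf)
loss = nn.functional.mse_loss(coords, torch.zeros(4, 2)) \
     + nn.functional.mse_loss(clean_rf, rf)       # toy targets for illustration
loss.backward()
```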
15. DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Experimental Biology and Medicine (Maywood) 2021; 246:1355-1367. PMID: 33779342; PMCID: PMC8243210; DOI: 10.1177/15353702211000310.
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Considering that photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, the growth of graphical processing unit capabilities in recent years has fostered an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically uses a method of optimization known as gradient descent to minimize a loss function and update model parameters. There are already a number of innovative efforts in photoacoustic tomography utilizing deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review focuses on the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
Affiliation(s)
- Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA