1
Li Z, Lin J, Wang Y, Li J, Cao Y, Liu X, Wan W, Liu Q, Song X. Ultra-sparse reconstruction for photoacoustic tomography: Sinogram domain prior-guided method exploiting enhanced score-based diffusion model. PHOTOACOUSTICS 2025; 41:100670. [PMID: 39687486 PMCID: PMC11648917 DOI: 10.1016/j.pacs.2024.100670] [Received: 08/03/2024] [Revised: 10/26/2024] [Accepted: 11/18/2024]
Abstract
Photoacoustic tomography, a novel non-invasive imaging modality, combines the principles of optical and acoustic imaging for use in biomedical applications. In scenarios where photoacoustic signal acquisition is insufficient due to sparse-view sampling, conventional direct reconstruction methods significantly degrade image resolution and generate numerous artifacts. To mitigate these constraints, a novel sinogram-domain prior-guided ultra-sparse-view reconstruction method for photoacoustic tomography, boosted by an enhanced diffusion model, is proposed. The model learns prior information from the data distribution of sinograms acquired under full-ring, 512-projection sampling. In iterative reconstruction, the prior information serves as a constraint in least-squares optimization, facilitating convergence towards more plausible solutions. The performance of the method is evaluated using blood vessel simulations, phantoms, and in vivo experimental data. The reconstructed sinograms are then transformed into the image domain through the delay-and-sum method, enabling a thorough assessment of the proposed method. The results show that the proposed method outperforms the U-Net method, yielding images of markedly higher quality. Notably, for in vivo data under 32 projections, the sinogram structural similarity improved by ~21% over U-Net, and the image structural similarity increased by ~51% and ~84% compared to the U-Net and delay-and-sum methods, respectively. Reconstruction in the sinogram domain enhances sparse-view imaging capabilities, potentially expanding the applications of photoacoustic tomography.
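The delay-and-sum method used above to map reconstructed sinograms into the image domain is a standard back-projection beamformer. A minimal sketch follows; the geometry, sampling values and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def delay_and_sum(sinogram, det_xy, grid_xy, c=1500.0, fs=40e6):
    """Reconstruct a photoacoustic image by back-projecting each
    detector's time trace to every pixel (naive delay-and-sum).

    sinogram : (n_det, n_t) recorded pressure traces
    det_xy   : (n_det, 2) detector positions [m]
    grid_xy  : (n_pix, 2) pixel positions [m]
    c        : speed of sound [m/s]; fs : sampling rate [Hz]
    """
    n_det, n_t = sinogram.shape
    image = np.zeros(len(grid_xy))
    for d in range(n_det):
        # time of flight from each pixel to this detector -> sample index
        dist = np.linalg.norm(grid_xy - det_xy[d], axis=1)
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_t - 1)
        image += sinogram[d, idx]
    return image / n_det
```

With a full-ring array, each pixel sums the samples whose acoustic time of flight matches its distance to every detector, which is why sparse-view sampling (few detectors) leaves streak artifacts that the paper's sinogram-domain prior aims to avoid.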
Affiliation(s)
- Yiguang Wang
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiahong Li
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Yubin Cao
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xuan Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Wenbo Wan
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang 330031, China
2
Won NJ, Bartling M, La Macchia J, Markevich S, Holtshousen S, Jagota A, Negus C, Najjar E, Wilson BC, Irish JC, Daly MJ. Deep learning-enabled fluorescence imaging for surgical guidance: in silico training for oral cancer depth quantification. JOURNAL OF BIOMEDICAL OPTICS 2025; 30:S13706. [PMID: 39295734 PMCID: PMC11408754 DOI: 10.1117/1.jbo.30.s1.s13706] [Received: 04/30/2024] [Revised: 08/29/2024] [Accepted: 08/29/2024]
Abstract
Significance Oral cancer surgery requires accurate margin delineation to balance complete resection with post-operative functionality. Current in vivo fluorescence imaging systems provide two-dimensional margin assessment yet fail to quantify tumor depth prior to resection. Harnessing structured light in combination with deep learning (DL) may provide near real-time three-dimensional margin detection. Aim A DL-enabled fluorescence spatial frequency domain imaging (SFDI) system trained with in silico tumor models was developed to quantify the depth of oral tumors. Approach A convolutional neural network was designed to produce tumor depth and concentration maps from SFDI images. Three in silico representations of oral cancer lesions were developed to train the DL architecture: cylinders, spherical harmonics, and composite spherical harmonics (CSHs). Each model was validated with in silico SFDI images of patient-derived tongue tumors, and the CSH model was further validated with optical phantoms. Results The performance of the CSH model was superior when presented with patient-derived tumors (P-value < 0.05). The CSH model could predict depth and concentration within 0.4 mm and 0.4 μg/mL, respectively, for in silico tumors with depths less than 10 mm. Conclusions A DL-enabled SFDI system trained with in silico CSHs demonstrates promise in defining the deep margins of oral tumors.
Affiliation(s)
- Natalie J Won
- University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Mandolin Bartling
- University of Toronto, Department of Otolaryngology-Head and Neck Surgery, Toronto, Ontario, Canada
- Josephine La Macchia
- University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Stefanie Markevich
- University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Scott Holtshousen
- University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Arjun Jagota
- University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Christina Negus
- University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Esmat Najjar
- University of Toronto, Department of Otolaryngology-Head and Neck Surgery, Toronto, Ontario, Canada
- Brian C Wilson
- University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- University of Toronto, Department of Medical Biophysics, Toronto, Ontario, Canada
- Jonathan C Irish
- University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- University of Toronto, Department of Otolaryngology-Head and Neck Surgery, Toronto, Ontario, Canada
- Michael J Daly
- University Health Network, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
3
Pandey V, Erbas I, Michalet X, Ulku A, Bruschini C, Charbon E, Barroso M, Intes X. Deep learning-based temporal deconvolution for photon time-of-flight distribution retrieval. OPTICS LETTERS 2024; 49:6457-6460. [PMID: 39546693 DOI: 10.1364/ol.533923] [Received: 06/25/2024] [Accepted: 10/12/2024]
Abstract
The acquisition of the time of flight (ToF) of photons has found numerous applications in the biomedical field. Over the past few decades, several strategies have been proposed to deconvolve the temporal instrument response function (IRF) that distorts experimental time-resolved data. However, these methods require burdensome computational strategies and regularization terms to mitigate noise contributions. Herein, we propose a deep learning model designed specifically to perform the deconvolution task in fluorescence lifetime imaging (FLI). The model is trained and validated with representative simulated FLI data with the goal of retrieving the true photon ToF distribution. Its performance and robustness are validated with well-controlled in vitro experiments using three time-resolved imaging modalities with markedly different temporal IRFs. The model's aptitude is further established with an in vivo preclinical investigation. Overall, these in vitro and in vivo validations demonstrate the flexibility and accuracy of deep learning model-based deconvolution in time-resolved FLI and diffuse optical imaging.
4
Nie MY, An XW, Xing YC, Wang Z, Wang YQ, Lü JQ. Artificial intelligence algorithms for real-time detection of colorectal polyps during colonoscopy: a review. Am J Cancer Res 2024; 14:5456-5470. [PMID: 39659923 PMCID: PMC11626263 DOI: 10.62347/bziz6358] [Received: 07/17/2024] [Accepted: 11/14/2024]
Abstract
Colorectal cancer (CRC) is one of the most common cancers worldwide. Early detection and removal of colorectal polyps during colonoscopy are crucial for preventing such cancers. With the development of artificial intelligence (AI) technology, it has become possible to detect and localize colorectal polyps in real time during colonoscopy using computer-aided diagnosis (CAD). This provides endoscopists with a reliable reference and leads to more accurate diagnosis and treatment. This paper reviews AI-based algorithms for real-time detection of colorectal polyps, with a particular focus on the development of deep learning algorithms aimed at optimizing both efficiency and accuracy. Furthermore, the challenges and prospects of AI-based colorectal polyp detection are discussed.
Affiliation(s)
- Meng-Yuan Nie
- Center for Advanced Laser Technology, Hebei University of Technology, Tianjin, China
- Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin, China
- Xin-Wei An
- Center for Advanced Laser Technology, Hebei University of Technology, Tianjin, China
- Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin, China
- Yun-Can Xing
- Department of Colorectal Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zheng Wang
- Department of Colorectal Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yan-Qiu Wang
- Langfang Traditional Chinese Medicine Hospital, Langfang, Hebei, China
- Jia-Qi Lü
- Center for Advanced Laser Technology, Hebei University of Technology, Tianjin, China
- Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin, China
5
Fanous MJ, Casteleiro Costa P, Işıl Ç, Huang L, Ozcan A. Neural network-based processing and reconstruction of compromised biophotonic image data. LIGHT, SCIENCE & APPLICATIONS 2024; 13:231. [PMID: 39237561 PMCID: PMC11377739 DOI: 10.1038/s41377-024-01544-9] [Received: 03/22/2024] [Revised: 07/16/2024] [Accepted: 07/18/2024]
Abstract
In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, and then compensating for the resulting defects with deep learning models trained on a large amount of ideal, superior, or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recover them through the application of deep learning networks, but also to bolster other crucial parameters in return, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
Affiliation(s)
- Michael John Fanous
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Paloma Casteleiro Costa
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Çağatay Işıl
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
6
Paul A, Mallidi S. U-Net enhanced real-time LED-based photoacoustic imaging. JOURNAL OF BIOPHOTONICS 2024; 17:e202300465. [PMID: 38622811 PMCID: PMC11164633 DOI: 10.1002/jbio.202300465] [Received: 11/07/2023] [Revised: 02/18/2024] [Accepted: 03/17/2024]
Abstract
Photoacoustic (PA) imaging is a hybrid imaging modality with good optical contrast and spatial resolution. Portable, cost-effective, small-footprint light-emitting diodes (LEDs) are rapidly becoming important PA optical sources. However, the key challenge faced by LED-based systems is the low light fluence, which is generally compensated for by high frame averaging, consequently reducing the acquisition frame rate. In this study, we present a simple deep learning U-Net framework that enhances the signal-to-noise ratio (SNR) and contrast of PA images obtained by averaging a low number of frames. The SNR increased by approximately four-fold for both in-class in vitro phantoms (4.39 ± 2.55) and out-of-class in vivo models (4.27 ± 0.87). We also demonstrate the noise invariance of the network and discuss its downsides (blurry outcomes and failure to reduce salt-and-pepper noise). Overall, the developed U-Net framework can provide a real-time image enhancement platform for clinically translatable, low-cost, low-energy light source-based PA imaging systems.
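The frame-averaging trade-off noted above follows the usual √N law for uncorrelated noise, which is what the U-Net sidesteps. A toy numeric check, with all values synthetic:

```python
import numpy as np

# Frame-averaging trade-off: with uncorrelated per-frame noise,
# averaging N frames improves SNR by ~sqrt(N) (~8x for N = 64),
# at the cost of an N-fold lower acquisition frame rate.
rng = np.random.default_rng(0)
signal, sigma = 1.0, 0.5            # noise-free PA amplitude, per-frame noise
n_trials, n_frames = 2000, 64

frames = signal + rng.normal(0.0, sigma, size=(n_trials, n_frames))
snr_single = signal / frames[:, 0].std()          # one-frame SNR
snr_avg = signal / frames.mean(axis=1).std()      # 64-frame-average SNR
gain = snr_avg / snr_single                       # expect ~sqrt(64) = 8
```

This is why a network that restores the SNR of a 1- or 2-frame image can recover most of the frame rate that averaging gives away.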
Affiliation(s)
- Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
7
Park J, Gao L. Advancements in fluorescence lifetime imaging microscopy instrumentation: Towards high speed and 3D. CURRENT OPINION IN SOLID STATE & MATERIALS SCIENCE 2024; 30:101147. [PMID: 39086551 PMCID: PMC11290093 DOI: 10.1016/j.cossms.2024.101147]
Abstract
Fluorescence lifetime imaging microscopy (FLIM) is a powerful imaging tool offering molecular specific insights into samples through the measurement of fluorescence decay time, with promising applications in diverse research fields. However, to acquire two-dimensional lifetime images, conventional FLIM relies on extensive scanning in both the spatial and temporal domain, resulting in much slower acquisition rates compared to intensity-based approaches. This problem is further magnified in three-dimensional imaging, as it necessitates additional scanning along the depth axis. Recent advancements have aimed to enhance the speed and three-dimensional imaging capabilities of FLIM. This review explores the progress made in addressing these challenges and discusses potential directions for future developments in FLIM instrumentation.
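For context on the per-pixel workload behind the scanning burden described above: conventional FLIM fits a decay model at every spatial position to extract the lifetime. A minimal mono-exponential, log-linear fit sketch (synthetic decay, hypothetical values):

```python
import numpy as np

def fit_lifetime(t, counts):
    """Estimate fluorescence lifetime tau from a mono-exponential decay
    I(t) = A * exp(-t / tau) via a log-linear least-squares fit."""
    mask = counts > 0
    slope, _ = np.polyfit(t[mask], np.log(counts[mask]), 1)
    return -1.0 / slope

# synthetic noiseless decay with tau = 2.5 ns on a 0-12.5 ns window
t = np.linspace(0.0, 12.5, 256)
counts = 1000.0 * np.exp(-t / 2.5)
```

Repeating even this cheap fit over every pixel, temporal bin and depth plane is what high-speed and 3D FLIM instrumentation seeks to parallelize away.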
Affiliation(s)
- Jongchan Park
- Department of Bioengineering, University of California, Los Angeles, CA 90025, USA
- Liang Gao
- Department of Bioengineering, University of California, Los Angeles, CA 90025, USA
8
Song P, Jadan HV, Howe CL, Foust AJ, Dragotti PL. Model-Based Explainable Deep Learning for Light-Field Microscopy Imaging. IEEE TRANSACTIONS ON IMAGE PROCESSING 2024; 33:3059-3074. [PMID: 38656840 PMCID: PMC11100862 DOI: 10.1109/tip.2024.3387297] [Received: 04/20/2023] [Revised: 01/27/2024] [Accepted: 03/12/2024]
Abstract
In modern neuroscience, observing the dynamics of large populations of neurons is a critical step in understanding how networks of neurons process information. Light-field microscopy (LFM) has emerged as a scanless, high-speed, three-dimensional (3D) imaging tool that is particularly attractive for this purpose. Imaging neuronal activity using LFM calls for the development of novel computational approaches that fully exploit domain knowledge embedded in physics and optics models, while also enabling high interpretability and transparency. To this end, we propose a model-based explainable deep learning approach for LFM. Different from purely data-driven methods, the proposed approach integrates wave-optics theory, sparse representation and non-linear optimization with the artificial neural network. In particular, the architecture of the proposed neural network is designed following precise signal and optimization models. Moreover, the network's parameters are learned from a training dataset using a novel training strategy that integrates layer-wise training with tailored knowledge distillation. Such a design allows the network to take advantage of domain knowledge and learn new features. It combines the benefits of both model-based and learning-based methods, thereby contributing to superior interpretability, transparency and performance. By evaluating the approach on both structural and functional LFM data obtained from scattering mammalian brain tissues, we demonstrate its capability to achieve fast, robust 3D localization of neuron sources and accurate neural activity identification.
Affiliation(s)
- Pingfan Song
- Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, U.K.
- Herman Verinaz Jadan
- Faculty of Electrical and Computer Engineering, Escuela Superior Politécnica del Litoral (ESPOL), Guayaquil EC090903, Ecuador
- Carmel L. Howe
- Department of Chemical Physiology and Biochemistry, Oregon Health and Science University, Portland, OR 97239, USA
- Amanda J. Foust
- Center for Neurotechnology, Department of Bioengineering, Imperial College London, London SW7 2AZ, U.K.
- Pier Luigi Dragotti
- Department of Electronic and Electrical Engineering, Imperial College London, London SW7 2AZ, U.K.
9
Yang K, Zhang H, Qiu Y, Zhai T, Zhang Z. Self-Supervised Joint Learning for pCLE Image Denoising. SENSORS (BASEL, SWITZERLAND) 2024; 24:2853. [PMID: 38732957 PMCID: PMC11086271 DOI: 10.3390/s24092853] [Received: 04/02/2024] [Revised: 04/26/2024] [Accepted: 04/28/2024]
Abstract
Probe-based confocal laser endoscopy (pCLE) has emerged as a powerful tool for disease diagnosis, yet it faces challenges such as the formation of hexagonal patterns in images due to the inherent characteristics of fiber bundles. Recent advancements in deep learning offer promise in image denoising, but the acquisition of clean-noisy image pairs for training networks across all potential scenarios can be prohibitively costly. Few studies have explored training denoising networks on such pairs. Here, we propose an innovative self-supervised denoising method. Our approach integrates noise prediction networks, image quality assessment networks, and denoising networks in a collaborative, jointly trained manner. Compared to prior self-supervised denoising methods, our approach yields superior results on pCLE images and fluorescence microscopy images. In summary, our novel self-supervised denoising technique enhances image quality in pCLE diagnosis by leveraging the synergy of noise prediction, image quality assessment, and denoising networks, surpassing previous methods on both pCLE and fluorescence microscopy images.
Affiliation(s)
- Haojie Zhang
- State Key Lab of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications (BUPT), Beijing 100876, China; (K.Y.); (Y.Q.); (T.Z.); (Z.Z.)
10
Gouzou D, Taimori A, Haloubi T, Finlayson N, Wang Q, Hopgood JR, Vallejo M. Applications of machine learning in time-domain fluorescence lifetime imaging: a review. Methods Appl Fluoresc 2024; 12:022001. [PMID: 38055998 PMCID: PMC10851337 DOI: 10.1088/2050-6120/ad12f7] [Received: 06/30/2023] [Revised: 09/25/2023] [Accepted: 12/06/2023]
Abstract
Many medical imaging modalities have benefited from recent advances in Machine Learning (ML), specifically in deep learning, such as neural networks. Computers can be trained to investigate and enhance medical imaging methods without using valuable human resources. In recent years, Fluorescence Lifetime Imaging (FLIm) has received increasing attention from the ML community. FLIm goes beyond conventional spectral imaging, providing additional lifetime information, and could lead to optical histopathology supporting real-time diagnostics. However, most current studies do not use the full potential of machine/deep learning models. As a developing imaging modality, FLIm data are not easily obtainable, which, coupled with an absence of standardisation, is holding back the development of models that could advance automated diagnosis and help promote FLIm. In this paper, we describe recent developments that improve FLIm image quality, focusing on time-domain systems, and we summarise sensing, signal-to-noise analysis and advances in registration and low-level tracking. We review the two main applications of ML for FLIm: lifetime estimation and image analysis through classification and segmentation. We suggest a course of action to improve the quality of ML studies applied to FLIm. Our final goal is to promote FLIm and attract more ML practitioners to explore the potential of lifetime imaging.
Affiliation(s)
- Dorian Gouzou
- Institute of Signals, Sensors and Systems, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom
- Ali Taimori
- Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh EH9 3FG, United Kingdom
- Tarek Haloubi
- Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh EH9 3FG, United Kingdom
- Neil Finlayson
- Institute for Integrated Micro and Nano Systems, School of Engineering, University of Edinburgh, Edinburgh EH9 3FF, United Kingdom
- Qiang Wang
- Centre for Inflammation Research, University of Edinburgh, Edinburgh EH16 4TJ, United Kingdom
- James R Hopgood
- Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh EH9 3FG, United Kingdom
- Marta Vallejo
- Institute of Signals, Sensors and Systems, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, United Kingdom
11
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545]
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the illumination used to switch on fluorescent molecules also harms the samples, jeopardizing the validity of results - particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in the role of AI is needed: AI should be used to extract rich insights from gentle imaging rather than to recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Affiliation(s)
- Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
- Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
12
Gubbi MR, Assis F, Chrispin J, Bell MAL. Deep learning in vivo catheter tip locations for photoacoustic-guided cardiac interventions. JOURNAL OF BIOMEDICAL OPTICS 2024; 29:S11505. [PMID: 38076439 PMCID: PMC10704189 DOI: 10.1117/1.jbo.29.s1.s11505] [Received: 06/23/2023] [Revised: 09/27/2023] [Accepted: 10/23/2023]
Abstract
Significance Interventional cardiac procedures often require ionizing radiation to guide cardiac catheters to the heart. To reduce the associated risks of ionizing radiation, photoacoustic imaging can potentially be combined with robotic visual servoing, with initial demonstrations requiring segmentation of catheter tips. However, typical segmentation algorithms applied to conventional image formation methods are susceptible to problematic reflection artifacts, which compromise the required detectability and localization of the catheter tip. Aim We describe a convolutional neural network and the associated customizations required to successfully detect and localize in vivo photoacoustic signals from a catheter tip received by a phased array transducer, which is a common transducer for transthoracic cardiac imaging applications. Approach We trained a network with simulated photoacoustic channel data to identify point sources, which appropriately model photoacoustic signals from the tip of an optical fiber inserted in a cardiac catheter. The network was validated with an independent simulated dataset, then tested on data from the tips of cardiac catheters housing optical fibers and inserted into ex vivo and in vivo swine hearts. Results When validated with simulated data, the network achieved an F1 score of 98.3% and Euclidean errors (mean ± one standard deviation) of 1.02 ± 0.84 mm for target depths of 20 to 100 mm. When tested on ex vivo and in vivo data, the network achieved F1 scores as large as 100.0%. In addition, for target depths of 40 to 90 mm in the ex vivo and in vivo data, up to 86.7% of axial and 100.0% of lateral position errors were lower than the axial and lateral resolution, respectively, of the phased array transducer. Conclusions These results demonstrate the promise of the proposed method to identify photoacoustic sources in future interventional cardiology and cardiac electrophysiology applications.
Affiliation(s)
- Mardava R. Gubbi
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Fabrizio Assis
- Johns Hopkins Medical Institutions, Division of Cardiology, Baltimore, Maryland, United States
- Jonathan Chrispin
- Johns Hopkins Medical Institutions, Division of Cardiology, Baltimore, Maryland, United States
- Muyinatu A. Lediju Bell
- Johns Hopkins University, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Computer Science, Baltimore, Maryland, United States
13
Shimizu K. Near-Infrared Transillumination for Macroscopic Functional Imaging of Animal Bodies. BIOLOGY 2023; 12:1362. [PMID: 37997961 PMCID: PMC10668962 DOI: 10.3390/biology12111362] [Received: 09/14/2023] [Revised: 10/17/2023] [Accepted: 10/18/2023]
Abstract
The classical transillumination technique has been revitalized through recent advancements in optical technology, enhancing its applicability in the realm of biomedical research. With a new perspective on near-axis scattered light, we have harnessed near-infrared (NIR) light to visualize intricate internal light-absorbing structures within animal bodies. By leveraging the principle of differentiation, we have extended the applicability of the Beer-Lambert law even in cases of scattering-dominant media, such as animal body tissues. This approach facilitates the visualization of dynamic physiological changes occurring within animal bodies, thereby enabling noninvasive, real-time imaging of macroscopic functionality in vivo. An important challenge inherent to transillumination imaging lies in the image blur caused by pronounced light scattering within body tissues. By extracting near-axis scattered components from the predominant diffusely scattered light, we have achieved cross-sectional imaging of animal bodies. Furthermore, we have introduced software-based techniques encompassing deconvolution using the point spread function and the application of deep learning principles to counteract the scattering effect. Finally, transillumination imaging has been elevated from two-dimensional to three-dimensional imaging. The effectiveness and applicability of these proposed techniques have been validated through comprehensive simulations and experiments involving human and animal subjects. As demonstrated through these studies, transillumination imaging coupled with emerging technologies offers a promising avenue for future biomedical applications.
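The differentiation principle mentioned above can be illustrated with the differential (modified) Beer-Lambert relation: a change in absorber concentration appears as a log-ratio of detected intensities, so the unknown baseline attenuation of the tissue cancels out. A toy sketch, with all numeric values hypothetical:

```python
import numpy as np

# Differential Beer-Lambert: Delta_A = -ln(I / I0) = epsilon * L * Delta_c,
# so a concentration change is recovered from an intensity ratio alone.
epsilon = 0.3    # absorption coefficient [1/(mM*cm)]  (hypothetical)
L = 2.0          # effective optical path length [cm]  (hypothetical)
i0 = 1.0         # baseline detected intensity [a.u.]

delta_c = 0.05                                # true concentration change [mM]
i = i0 * np.exp(-epsilon * L * delta_c)       # forward model
recovered = -np.log(i / i0) / (epsilon * L)   # invert for Delta_c
```

In scattering-dominant tissue the effective path length L is not the geometric thickness, which is why the differential form (ratios of intensities before and after a physiological change) is what extends the law's applicability.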
Affiliation(s)
- Koichi Shimizu
- School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China
- IPS Research Center, Waseda University, Kitakyushu 808-0135, Japan

14
Wang B, Li S, He X, Zhao Y, Zhang H, He X, Yu J, Guo H. Structure-fused deep 3D hierarchical network: A bioluminescence tomography scheme for different imaging objects. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-5. [PMID: 38083149 DOI: 10.1109/embc40787.2023.10340967] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
The Monte Carlo eXtreme (MCX) method has a unique advantage for deep-neural-network-based bioluminescence tomography (BLT) reconstruction. However, it ignores the distribution of source energy and relies on a predetermined tissue structure. In this paper, a deep 3D hierarchical reconstruction network for BLT is proposed in which the inputs are divided into two parts: the bioluminescence image (BLI) and the anatomy of the imaged object obtained by CT. First, a parallel encoder extracts features from the original BLI and CT slices and integrates them to distinguish the tissue structures of different imaging objects; second, a GRU fits the spatial information of the different slices and converts it into 3D features; finally, the 3D features are decoded into the spatial and energy information of the source by a symmetrical decoding structure. Our research suggests that this method can effectively compute the radiation intensity and spatial distribution of the source for different imaging objects.
15
Abstract
Over the last half century, the autofluorescence of the metabolic cofactors NADH (reduced nicotinamide adenine dinucleotide) and FAD (flavin adenine dinucleotide) has been quantified in a variety of cell types and disease states. With the spread of nonlinear optical microscopy techniques in biomedical research, NADH and FAD imaging has offered an attractive solution to noninvasively monitor cell and tissue status and elucidate dynamic changes in cell or tissue metabolism. Various tools and methods to measure the temporal, spectral, and spatial properties of NADH and FAD autofluorescence have been developed. Specifically, an optical redox ratio of cofactor fluorescence intensities and NADH fluorescence lifetime parameters have been used in numerous applications, but significant work remains to mature this technology for understanding dynamic changes in metabolism. This article describes the current understanding of our optical sensitivity to different metabolic pathways and highlights current challenges in the field. Recent progress in addressing these challenges and acquiring more quantitative information in faster and more metabolically relevant formats is also discussed.
Affiliation(s)
- Irene Georgakoudi
- Department of Biomedical Engineering, Tufts University, Medford, Massachusetts, USA
- Genetics, Molecular and Cellular Biology Program, Graduate School of Biomedical Sciences, Tufts University, Boston, Massachusetts, USA
- Kyle P Quinn
- Department of Biomedical Engineering and the Arkansas Integrative Metabolic Research Center, University of Arkansas, Fayetteville, Arkansas, USA

16
Chen YJ, Vyas S, Huang HM, Luo Y. Self-supervised neural network for phase retrieval in QDPC microscopy. OPTICS EXPRESS 2023; 31:19897-19908. [PMID: 37381395 DOI: 10.1364/oe.491496] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 05/08/2023] [Indexed: 06/30/2023]
Abstract
Quantitative differential phase contrast (QDPC) microscopy plays an important role in biomedical research because it provides high-resolution images and quantitative phase information for thin transparent objects without staining. Under the weak-phase assumption, phase retrieval in QDPC can be treated as a linear inverse problem solvable by Tikhonov regularization. However, the weak-phase assumption is limited to thin objects, and tuning the regularization parameter manually is inconvenient. A self-supervised learning method based on the deep image prior (DIP) is proposed to retrieve phase information from intensity measurements. A DIP model that takes intensity measurements as input is trained to output the phase image. To achieve this, a physical layer that synthesizes the intensity measurements from the predicted phase is used. By minimizing the difference between the measured and predicted intensities, the trained DIP model reconstructs the phase image from its intensity measurements. To evaluate the performance of the proposed method, we conducted two phantom studies and reconstructed a micro-lens array and standard phase targets with different phase values. In the experimental results, the deviation of the reconstructed phase values obtained with the proposed method was less than 10% of the theoretical values. Our results show the feasibility of the proposed method for predicting quantitative phase with high accuracy and without the use of ground-truth phase.
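The Tikhonov baseline that the abstract contrasts against can be sketched as follows; the forward matrix here is a random stand-in for illustration, not the actual QDPC phase transfer function, and all dimensions and the noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = H @ x + noise, standing in for the
# weak-phase QDPC relation; H is a random stand-in, not the real
# phase transfer function.
m, n = 64, 32
H = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = H @ x_true + 0.01 * rng.normal(size=m)

def tikhonov(H, y, lam):
    """Closed-form Tikhonov solution of argmin ||Hx - y||^2 + lam*||x||^2."""
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)

x_hat = tikhonov(H, y, lam=1e-3)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The inconvenience the paper addresses is visible here: `lam` must be hand-tuned, whereas the DIP approach replaces this closed-form solve with a network fitted by minimizing the measured-vs-predicted intensity mismatch.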
17
Luo X, Ren Q, Zhang H, Chen C, Yang T, He X, Zhao W. Efficient FMT reconstruction based on L1-αL2 regularization via half-quadratic splitting and a two-probe separation light source strategy. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2023; 40:1128-1141. [PMID: 37706766 DOI: 10.1364/josaa.481330] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 04/20/2023] [Indexed: 09/15/2023]
Abstract
Fluorescence molecular tomography (FMT) can achieve noninvasive, high-contrast, high-sensitivity three-dimensional imaging in vivo by relying on a variety of fluorescent molecular probes, and it has excellent prospects for clinical translation in the in vivo detection of tumors. However, the limited surface fluorescence makes FMT reconstruction ill-posed, and it is difficult to obtain an ideal reconstruction. In this paper, two fluorescent probes with different emission spectra and L1-L2 regularization are combined to improve the temporal and spatial resolution of FMT reconstruction by introducing a weighting factor α and a half-quadratic splitting alternating optimization (HQSAO) iterative algorithm. By introducing an auxiliary variable, the HQSAO method breaks the sparse FMT reconstruction task into two subproblems that are solved in turn: a simple reconstruction and an image denoising step. The weighting factor α (α>1) increases the weight of the nonconvex term to further promote the sparsity of the solution. Importantly, this paper combines two fluorescent probes with different dominant emissions to achieve high-quality reconstruction of dual light sources. The performance of the proposed reconstruction strategy was evaluated with digital-mouse and nude-mouse single/dual light source models. The simulation results show that the HQSAO iterative algorithm achieves better localization accuracy and morphology recovery in a shorter time. In vivo experiments further prove that the HQSAO algorithm has advantages in preserving light-source information and suppressing artifacts. In particular, the introduction of two main emission fluorescent probes makes it easy to separate and reconstruct the dual light sources. In terms of localization and three-dimensional morphology, the reconstructions are much better than those obtained with a single fluorescent probe, which further facilitates the clinical translation of FMT.
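The half-quadratic splitting idea described above, alternating a quadratic reconstruction subproblem with a denoising/shrinkage subproblem via an auxiliary variable, can be sketched with a plain L1 penalty; the paper's nonconvex L1-αL2 term would replace the soft-thresholding step with its own proximal operator, and all dimensions and parameters below are illustrative, not the paper's.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (plain L1 shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hqs_sparse(A, b, lam=0.05, rho=1.0, iters=200):
    """Half-quadratic splitting for
        min_x,z 0.5*||Ax - b||^2 + lam*||z||_1 + 0.5*rho*||x - z||^2.
    Alternates a least-squares x-update (the 'simple reconstruction'
    subproblem) with a shrinkage z-update (the 'denoising' subproblem).
    The paper's nonconvex L1 - alpha*L2 penalty would replace
    soft_threshold with its own proximal step."""
    n = A.shape[1]
    z = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * z)   # quadratic subproblem
        z = soft_threshold(x, lam / rho)        # sparsifying subproblem
    return z

# Recover a 3-sparse source from 40 noiseless measurements (illustrative).
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = hqs_sparse(A, b)
```

With a suitable penalty weight, the returned estimate tends to concentrate on the true sparse support, which is the behavior the α-weighted nonconvex term is designed to strengthen.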
18
Nelson MS, Liu Y, Wilson HM, Li B, Rosado-Mendez IM, Rogers JD, Block WF, Eliceiri KW. Multiscale Label-Free Imaging of Fibrillar Collagen in the Tumor Microenvironment. Methods Mol Biol 2023; 2614:187-235. [PMID: 36587127 DOI: 10.1007/978-1-0716-2914-7_13] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
Abstract
With recent advances in cancer therapeutics, there is a great need for improved imaging methods for characterizing cancer onset and progression in a quantitative and actionable way. Collagen, the most abundant extracellular matrix protein in the tumor microenvironment (and the body in general), plays a multifaceted role, both hindering and promoting cancer invasion and progression. Collagen deposition can defend the tumor with immunosuppressive effects, while aligned collagen fiber structures can enable tumor cell migration, aiding invasion and metastasis. Given the complex role of collagen fiber organization and topology, imaging has been a tool of choice to characterize these changes on multiple spatial scales, from the organ and tumor scale to cellular and subcellular level. Macroscale density already aids in the detection and diagnosis of solid cancers, but progress is being made to integrate finer microscale features into the process. Here we review imaging modalities ranging from optical methods of second harmonic generation (SHG), polarized light microscopy (PLM), and optical coherence tomography (OCT) to the medical imaging approaches of ultrasound and magnetic resonance imaging (MRI). These methods have enabled scientists and clinicians to better understand the impact collagen structure has on the tumor environment, at both the bulk scale (density) and microscale (fibrillar structure) levels. We focus on imaging methods with the potential to both examine the collagen structure in as natural a state as possible and still be clinically amenable, with an emphasis on label-free strategies, exploiting intrinsic optical properties of collagen fibers.
Affiliation(s)
- Michael S Nelson
- Center for Quantitative Cell Imaging, University of Wisconsin-Madison, Madison, WI, USA
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Yuming Liu
- Center for Quantitative Cell Imaging, University of Wisconsin-Madison, Madison, WI, USA
- Helen M Wilson
- Center for Quantitative Cell Imaging, University of Wisconsin-Madison, Madison, WI, USA
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Bin Li
- Center for Quantitative Cell Imaging, University of Wisconsin-Madison, Madison, WI, USA
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
- Ivan M Rosado-Mendez
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- Jeremy D Rogers
- Morgridge Institute for Research, Madison, WI, USA
- McPherson Eye Research Institute, University of Wisconsin-Madison, Madison, WI, USA
- Walter F Block
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- Kevin W Eliceiri
- Center for Quantitative Cell Imaging, University of Wisconsin-Madison, Madison, WI, USA
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- McPherson Eye Research Institute, University of Wisconsin-Madison, Madison, WI, USA

19
Gonzales Martinez R, van Dongen DM. Deep learning algorithms for the early detection of breast cancer: A comparative study with traditional machine learning. INFORMATICS IN MEDICINE UNLOCKED 2023; 41:101317. [DOI: 10.1016/j.imu.2023.101317] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2025] Open
20
Ochoa M, Smith JT, Gao S, Intes X. Computational macroscopic lifetime imaging and concentration unmixing of autofluorescence. JOURNAL OF BIOPHOTONICS 2022; 15:e202200133. [PMID: 36546622 PMCID: PMC10026351 DOI: 10.1002/jbio.202200133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Revised: 07/06/2022] [Accepted: 07/12/2022] [Indexed: 06/17/2023]
Abstract
Single-pixel computational imaging can leverage highly sensitive detectors that concurrently acquire data across the spectral and temporal domains. For molecular imaging, such methodology makes it possible to collect rich intensity- and lifetime-multiplexed fluorescence datasets. Herein we report on the application of a single-pixel structured-light platform for macroscopic imaging of tissue autofluorescence. The super-continuum visible excitation and hyperspectral single-pixel detection allow parallel characterization of autofluorescence intensity and lifetime. Furthermore, we exploit a deep learning based data processing pipeline to perform autofluorescence unmixing while yielding the autofluorophores' concentrations. The full scheme (setup and processing) is validated in silico and in vitro with the clinically relevant autofluorophores flavin adenine dinucleotide, riboflavin, and protoporphyrin. The presented results demonstrate the potential of the methodology for macroscopically quantifying the intensity and lifetime of autofluorophores, with higher specificity for cases of mixed emissions, which are ubiquitous in autofluorescence and multiplexed in vivo imaging.
Affiliation(s)
- Marien Ochoa
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, New York, USA
- Jason T Smith
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, New York, USA
- Shan Gao
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, New York, USA
- Xavier Intes
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, New York, USA

21
Madasamy A, Gujrati V, Ntziachristos V, Prakash J. Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:106004. [PMID: 36209354 PMCID: PMC9547608 DOI: 10.1117/1.jbo.27.10.106004] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 08/30/2022] [Indexed: 06/16/2023]
Abstract
SIGNIFICANCE Quantitative optoacoustic imaging (QOAI) continues to be a challenge due to the influence of the nonlinear optical fluence distribution, which distorts the optoacoustic image representation. Nonlinear optical fluence correction in OA imaging is highly ill-posed, leading to inaccurate recovery of optical absorption maps. This work aims to recover the optical absorption maps with a deep learning (DL) approach that corrects for the fluence effect. AIM Different DL models were compared and investigated to enable optical absorption coefficient recovery at a particular wavelength in a nonhomogeneous foreground and background medium. APPROACH Data-driven models were trained with a two-dimensional (2D) blood vessel phantom and a three-dimensional (3D) numerical breast phantom with highly heterogeneous, realistic structures to correct for the nonlinear optical fluence distribution. The trained DL models, namely U-Net, Fully Dense (FD) U-Net, Y-Net, FD Y-Net, Deep Residual U-Net (Deep ResU-Net), and a generative adversarial network (GAN), were tested to evaluate the performance of optical absorption coefficient recovery (fluence compensation) with in-silico and in-vivo datasets. RESULTS The results indicated that FD U-Net-based deconvolution improves peak signal-to-noise ratio by about 10% over the reconstructed optoacoustic images. Further, it was observed that DL models can indeed highlight deep-seated structures with higher contrast due to fluence compensation. Importantly, the DL models were found to be about 17 times faster than solving the diffusion equation for fluence correction. CONCLUSIONS The DL methods were able to compensate for the nonlinear optical fluence distribution more effectively and improve optoacoustic image quality.
Affiliation(s)
- Arumugaraj Madasamy
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
- Vipul Gujrati
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Vasilis Ntziachristos
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Technical University of Munich, Munich Institute of Robotics and Machine Intelligence (MIRMI), Munich, Germany
- Jaya Prakash
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India

22
Eastmond C, Subedi A, De S, Intes X. Deep learning in fNIRS: a review. NEUROPHOTONICS 2022; 9:041411. [PMID: 35874933 PMCID: PMC9301871 DOI: 10.1117/1.nph.9.4.041411] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 06/22/2022] [Indexed: 05/28/2023]
Abstract
Significance: Optical neuroimaging has become a well-established clinical and research tool to monitor cortical activations in the human brain. Notably, the outcomes of functional near-infrared spectroscopy (fNIRS) studies depend heavily on the data processing pipeline and classification model employed. Recently, deep learning (DL) methodologies have demonstrated fast and accurate performance in data processing and classification tasks across many biomedical fields. Aim: We aim to review the emerging DL applications in fNIRS studies. Approach: We first introduce some of the commonly used DL techniques. Then, the review summarizes current DL work in some of the most active areas of this field, including brain-computer interfaces, neuro-impairment diagnosis, and neuroscience discovery. Results: Of the 63 papers considered in this review, 32 report a comparison of DL techniques with traditional machine learning techniques, and in 26 of these the DL techniques outperformed the latter in classification accuracy. In addition, eight studies use DL to reduce the amount of preprocessing typically done with fNIRS data or to increase the amount of data via data augmentation. Conclusions: The application of DL techniques to fNIRS studies has been shown to mitigate many of the hurdles present in fNIRS studies, such as lengthy data preprocessing and small sample sizes, while achieving comparable or improved classification accuracy.
Affiliation(s)
- Condell Eastmond
- Center for Modeling, Simulation and Imaging for Medicine, Rensselaer Polytechnic, Department of Biomedical Engineering, Troy, New York, United States
- Aseem Subedi
- Center for Modeling, Simulation and Imaging for Medicine, Rensselaer Polytechnic, Department of Biomedical Engineering, Troy, New York, United States
- Suvranu De
- Center for Modeling, Simulation and Imaging for Medicine, Rensselaer Polytechnic, Department of Biomedical Engineering, Troy, New York, United States
- Xavier Intes
- Center for Modeling, Simulation and Imaging for Medicine, Rensselaer Polytechnic, Department of Biomedical Engineering, Troy, New York, United States

23
Weighted average ensemble-based semantic segmentation in biological electron microscopy images. Histochem Cell Biol 2022; 158:447-462. [DOI: 10.1007/s00418-022-02148-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/08/2022] [Indexed: 12/16/2022]
Abstract
Semantic segmentation of electron microscopy images using deep learning methods is a valuable tool for the detailed analysis of organelles and cell structures. However, these methods require a large amount of labeled ground truth data that is often unavailable. To address this limitation, we present a weighted average ensemble model that can automatically segment biological structures in electron microscopy images when trained with only a small dataset. Thus, we exploit the fact that a combination of diverse base-learners is able to outperform one single segmentation model. Our experiments with seven different biological electron microscopy datasets demonstrate quantitative and qualitative improvements. We show that the Grad-CAM method can be used to interpret and verify the prediction of our model. Compared with a standard U-Net, the performance of our method is superior for all tested datasets. Furthermore, our model leverages a limited number of labeled training data to segment the electron microscopy images and therefore has a high potential for automated biological applications.
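The weighted average ensemble described above can be sketched by fusing per-model foreground probability maps and thresholding the result; the maps and weights below are made-up illustration values (in practice the weights might be derived from each base-learner's validation performance).

```python
import numpy as np

def weighted_ensemble(prob_maps, weights):
    """Combine per-model foreground probability maps by a weighted
    average, then threshold to a binary segmentation mask."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize weights
    stacked = np.stack(prob_maps)        # shape (models, H, W)
    fused = np.tensordot(w, stacked, axes=1)
    return (fused > 0.5).astype(np.uint8), fused

# Three hypothetical base-learners on a 2x2 image.
p1 = np.array([[0.9, 0.2], [0.4, 0.8]])
p2 = np.array([[0.7, 0.1], [0.6, 0.9]])
p3 = np.array([[0.8, 0.3], [0.2, 0.7]])
mask, fused = weighted_ensemble([p1, p2, p3], weights=[0.5, 0.3, 0.2])
# mask → [[1, 0], [0, 1]]
```

Averaging probabilities rather than hard labels lets a confident minority model outvote an uncertain majority, which is one reason diverse base-learners can beat a single model.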
24
Stergar J, Lakota K, Perše M, Tomšič M, Milanič M. Hyperspectral evaluation of vasculature in induced peritonitis mouse models. BIOMEDICAL OPTICS EXPRESS 2022; 13:3461-3475. [PMID: 35781958 PMCID: PMC9208583 DOI: 10.1364/boe.460288] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/28/2022] [Accepted: 05/08/2022] [Indexed: 06/15/2023]
Abstract
Imaging of blood vessel structure in combination with functional information about blood oxygenation can be important in characterizing many health conditions in which the growth of new vessels contributes to the overall condition. In this paper, we present a method for extracting comprehensive maps of the vasculature from hyperspectral images that include tissue and vascular oxygenation. We also show results from a preclinical study of peritonitis in mice. First, we analyze hyperspectral images using the Beer-Lambert exponential attenuation law to obtain maps of hemoglobin species throughout the sample. We then use an automatic segmentation algorithm to extract blood vessels from the hemoglobin map and combine them into a vascular structure-oxygenation map. We apply this methodology to a series of hyperspectral images of the abdominal wall of mice with and without induced peritonitis. Peritonitis is an inflammation of the peritoneum that, if untreated, leads to complications such as peritoneal sclerosis and even death. The characteristic inflammatory response can also be accompanied by changes in vasculature, such as neoangiogenesis. We demonstrate a potential application of the proposed segmentation and processing method by introducing an abnormal-tissue-fraction metric that quantifies the amount of tissue deviating from the average values of healthy controls. The proposed metric successfully discriminates between healthy control subjects and model subjects with induced peritonitis, with high statistical significance.
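The Beer-Lambert analysis step described above amounts to spectral unmixing: per-pixel absorbance at several wavelengths is modeled as a linear mixture of hemoglobin species and solved by least squares, from which an oxygen saturation estimate follows. The extinction coefficients and concentrations below are made up for illustration and are not real hemoglobin spectra.

```python
import numpy as np

# Illustrative extinction coefficients for [HbO2, Hb] at two
# wavelengths (made-up numbers, not real spectra).
E = np.array([[0.30, 1.10],    # wavelength 1
              [0.90, 0.35]])   # wavelength 2

def unmix_hemoglobin(absorbance, E):
    """Solve the Beer-Lambert mixing model A = E @ c for the
    concentration vector c via least squares."""
    c, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    return c

c_true = np.array([1.2, 0.4])      # [HbO2, Hb], arbitrary units
A_meas = E @ c_true                # noiseless absorbance at 2 bands
c_hat = unmix_hemoglobin(A_meas, E)
sat = c_hat[0] / c_hat.sum()       # oxygen saturation estimate
```

With more wavelengths than chromophores the system becomes overdetermined and the same least-squares solve averages out measurement noise, which is why hyperspectral (rather than two-band) data helps.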
Affiliation(s)
- Jošt Stergar
- J. Stefan Institute, Jamova cesta 39, 1000 Ljubljana, Slovenia
- Faculty of Mathematics and Physics, University of Ljubljana, Jadranska ulica 19, 1000 Ljubljana, Slovenia
- Katja Lakota
- FAMNIT, University of Primorska, Glagoljaska 8, 6000 Koper, Slovenia
- University Medical Centre, Department of Rheumatology, Vodnikova ulica 62, 1000 Ljubljana, Slovenia
- Martina Perše
- Faculty of Medicine, University of Ljubljana, Vrazov trg 2, 1000 Ljubljana, Slovenia
- Matija Tomšič
- University Medical Centre, Department of Rheumatology, Vodnikova ulica 62, 1000 Ljubljana, Slovenia
- Faculty of Medicine, University of Ljubljana, Vrazov trg 2, 1000 Ljubljana, Slovenia
- Matija Milanič
- J. Stefan Institute, Jamova cesta 39, 1000 Ljubljana, Slovenia
- Faculty of Mathematics and Physics, University of Ljubljana, Jadranska ulica 19, 1000 Ljubljana, Slovenia

25
Tian L. Deep learning augmented microscopy: a faster, wider view, higher resolution autofluorescence-harmonic microscopy. LIGHT, SCIENCE & APPLICATIONS 2022; 11:109. [PMID: 35462563 PMCID: PMC9035449 DOI: 10.1038/s41377-022-00801-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep learning enables bypassing the tradeoffs between imaging speed, field of view, and spatial resolution in autofluorescence-harmonic microscopy.
Affiliation(s)
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA

26
Li S, Yu J, He X, Guo H, He X. VoxDMRN: a voxelwise deep max-pooling residual network for bioluminescence tomography reconstruction. OPTICS LETTERS 2022; 47:1729-1732. [PMID: 35363720 DOI: 10.1364/ol.454672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 03/05/2022] [Indexed: 06/14/2023]
Abstract
Bioluminescence tomography (BLT) has extensive applications in preclinical studies for cancer research and drug development. However, the spatial resolution of BLT is inadequate because of the limitations of the numerical methods used to solve the physical models of photon propagation and the restriction to tetrahedral meshes for reconstruction. We conducted a series of theoretical derivations and divided the BLT reconstruction process into two steps: feature extraction and nonlinear mapping. Inspired by deep learning, a voxelwise deep max-pooling residual network (VoxDMRN) is proposed to establish the nonlinear relationship between the internal bioluminescent source and the surface boundary density, improving the spatial resolution of BLT reconstruction. Numerical simulation and in vivo experiments both demonstrated that VoxDMRN greatly improves the reconstruction performance regarding location accuracy, shape recovery capability, dual-source resolution, robustness, and in vivo practicability.
27
Emerging and future use of intra-surgical volumetric X-ray imaging and adjuvant tools for decision support in breast-conserving surgery. CURRENT OPINION IN BIOMEDICAL ENGINEERING 2022; 22. [DOI: 10.1016/j.cobme.2022.100382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
28
Smith JT, Ochoa M, Faulkner D, Haskins G, Intes X. Deep learning in macroscopic diffuse optical imaging. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:JBO-210288VRR. [PMID: 35218169 PMCID: PMC8881080 DOI: 10.1117/1.jbo.27.2.020901] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 02/09/2022] [Indexed: 05/02/2023]
Abstract
SIGNIFICANCE Biomedical optics system design, image formation, and image analysis have primarily been guided by classical physical modeling and signal processing methodologies. Recently, however, deep learning (DL) has become a major paradigm in computational modeling and has demonstrated utility in numerous scientific domains and various forms of data analysis. AIM We aim to comprehensively review the use of DL applied to macroscopic diffuse optical imaging (DOI). APPROACH First, we provide a layman introduction to DL. Then, the review summarizes current DL work in some of the most active areas of this field, including optical properties retrieval, fluorescence lifetime imaging, and diffuse optical tomography. RESULTS The advantages of using DL for DOI versus conventional inverse solvers cited in the literature reviewed herein are numerous. These include, among others, a decrease in analysis time (often by many orders of magnitude), increased quantitative reconstruction quality, robustness to noise, and the unique capability to learn complex end-to-end relationships. CONCLUSIONS The heavily validated capability of DL's use across a wide range of complex inverse solving methodologies has enormous potential to bring novel DOI modalities, otherwise deemed impractical for clinical translation, to the patient's bedside.
Affiliation(s)
- Jason T. Smith
- Rensselaer Polytechnic Institute, Department of Biomedical Engineering, Troy, New York, United States
- Marien Ochoa
- Rensselaer Polytechnic Institute, Department of Biomedical Engineering, Troy, New York, United States
- Denzel Faulkner
- Rensselaer Polytechnic Institute, Department of Biomedical Engineering, Troy, New York, United States
- Grant Haskins
- Rensselaer Polytechnic Institute, Department of Biomedical Engineering, Troy, New York, United States
- Xavier Intes
- Rensselaer Polytechnic Institute, Center for Modeling, Simulation and Imaging for Medicine, Troy, New York, United States

29
Maneas E, Hauptmann A, Alles EJ, Xia W, Vercauteren T, Ourselin S, David AL, Arridge S, Desjardins AE. Deep Learning for Instrumented Ultrasonic Tracking: From Synthetic Training Data to In Vivo Application. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:543-552. [PMID: 34748488 DOI: 10.1109/tuffc.2021.3126530] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Instrumented ultrasonic tracking is used to improve needle localization during ultrasound guidance of minimally invasive percutaneous procedures. Here, it is implemented with transmitted ultrasound pulses from a clinical ultrasound imaging probe, which are detected by a fiber-optic hydrophone integrated into a needle. The detected transmissions are then reconstructed to form the tracking image. Two challenges are considered with the current implementation of ultrasonic tracking. First, tracking transmissions are interleaved with the acquisition of B-mode images, and thus the effective B-mode frame rate is reduced. Second, it is challenging to achieve an accurate localization of the needle tip when the signal-to-noise ratio is low. To address these challenges, we present a framework based on a convolutional neural network (CNN) to maintain spatial resolution with fewer tracking transmissions and enhance signal quality. A major component of the framework was the generation of realistic synthetic training data. The trained network was applied to unseen synthetic data and experimental in vivo tracking data. The performance of needle localization was investigated when reconstruction was performed with fewer (up to eightfold) tracking transmissions. CNN-based processing of conventional reconstructions showed that the axial and lateral spatial resolutions could be improved even with an eightfold reduction in tracking transmissions. The framework presented in this study will significantly improve the performance of ultrasonic tracking, leading to faster image acquisition rates and increased localization accuracy.
|
30
|
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 10/19/2021] [Accepted: 11/07/2021] [Indexed: 12/21/2022] Open
|
31
|
Wang L, Zhu W, Zhang Y, Chen S, Yang D. Harnessing the Power of Hybrid Light Propagation Model for Three-Dimensional Optical Imaging in Cancer Detection. Front Oncol 2021; 11:750764. [PMID: 34804938 PMCID: PMC8601256 DOI: 10.3389/fonc.2021.750764] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Accepted: 08/30/2021] [Indexed: 12/04/2022] Open
Abstract
Optical imaging is an emerging technology capable of qualitatively and quantitatively observing life processes at the cellular or molecular level, and it plays a significant role in cancer detection. In particular, to overcome the limitation of traditional optical imaging, which detects biomedical information only two-dimensionally and qualitatively, the corresponding three-dimensional (3D) imaging technology has been intensively explored to provide 3D quantitative information, such as tumor localization, distribution, and cell volume. To retrieve this information, light propagation models that reflect the interaction between light and biological tissues are an important prerequisite and basis for 3D optical imaging. This review concentrates on recent advances in hybrid light propagation models, with particular emphasis on their use for 3D optical imaging in cancer detection. Finally, we discuss wider applications of the hybrid light propagation model and the future potential of 3D optical imaging in cancer detection.
Affiliation(s)
- Lin Wang
- School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- Wentao Zhu
- Zhejiang Lab, Research Center for Healthcare Data Science, Hangzhou, China
- Ying Zhang
- Zhejiang Lab, Research Center for Healthcare Data Science, Hangzhou, China
- Shangdong Chen
- School of Information Sciences and Technology, Northwest University, Xi'an, China
- Defu Yang
- Intelligent Information Processing Laboratory, Hangzhou Dianzi University, Hangzhou, China
|
32
|
Wang J, Zhang Y. Adaptive optics in super-resolution microscopy. BIOPHYSICS REPORTS 2021; 7:267-279. [PMID: 37287764 PMCID: PMC10233472 DOI: 10.52601/bpr.2021.210015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Accepted: 06/23/2021] [Indexed: 06/09/2023] Open
Abstract
Fluorescence microscopy has become a routine tool in biology for interrogating life activities with minimal perturbation. While the resolution of fluorescence microscopy is in theory governed only by the diffraction of light, the resolution obtainable in practice is also constrained by the presence of optical aberrations. The past two decades have witnessed the advent of super-resolution microscopy, which overcomes the diffraction barrier and has enabled numerous biological investigations at the nanoscale. Adaptive optics, a technique borrowed from astronomical imaging, has been applied to correct optical aberrations in essentially every microscopy modality, and in the last decade especially in super-resolution microscopy, to restore optimal image quality and resolution. In this review, we briefly introduce the fundamental concepts of adaptive optics and the operating principles of the major super-resolution imaging techniques. We highlight some recent implementations and advances in adaptive optics for active and dynamic aberration correction in super-resolution microscopy.
Affiliation(s)
- Jingyu Wang
- Department of Engineering Science, University of Oxford, Oxford, OX1 3PJ, UK
- Yongdeng Zhang
- School of Life Sciences, Westlake University, Hangzhou 310024, China
- Westlake Laboratory of Life Sciences and Biomedicine, Hangzhou 310024, China
|