1
Chen W, Li C, Huang ZL, Wang Z. GJFocuser: a Gaussian difference and joint learning-based autofocus method for whole slide imaging. Biomed Opt Express 2025; 16:282-302. PMID: 39816138; PMCID: PMC11729290; DOI: 10.1364/boe.547119.
Abstract
Whole slide imaging (WSI) provides tissue visualization at the cellular level, thereby enhancing the effectiveness of computer-aided diagnostic systems. High-precision autofocusing methods are essential for ensuring the quality of WSI. However, the accuracy of existing autofocusing techniques can be notably affected by variations in staining and sample heterogeneity, particularly without the addition of extra hardware. This study proposes a robust autofocusing method based on the difference of Gaussians (DoG) and joint learning. The DoG emphasizes image edge information that is closely related to focal distance, thereby mitigating the influence of staining variations. The joint learning framework constrains the network's sensitivity to defocus distance, effectively addressing the impact of differences in sample morphology. We first conduct comparative experiments on public datasets against state-of-the-art methods, with results indicating that our approach achieves cutting-edge performance. We then apply the method in a low-cost digital microscopy system, showcasing its effectiveness and versatility in practical scenarios.
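The difference-of-Gaussians idea in this entry is straightforward to illustrate. The sketch below is a minimal 1-D version (not the authors' implementation, which operates on 2-D microscope images): subtracting a coarse Gaussian blur from a fine one suppresses flat, stain-dependent regions and leaves a response concentrated at edges.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Same-length convolution with edge replication."""
    r = len(kernel) // 2
    padded = [signal[0]] * r + list(signal) + [signal[-1]] * r
    return [sum(padded[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(len(signal))]

def difference_of_gaussians(signal, sigma_fine=1.0, sigma_coarse=2.0, radius=6):
    """Band-pass response: fine blur minus coarse blur emphasizes edges."""
    fine = convolve(signal, gaussian_kernel(sigma_fine, radius))
    coarse = convolve(signal, gaussian_kernel(sigma_coarse, radius))
    return [f - c for f, c in zip(fine, coarse)]

# A step edge: the DoG response peaks around the transition and vanishes
# in the flat regions, regardless of the absolute intensity level.
row = [0.0] * 16 + [1.0] * 16
dog = difference_of_gaussians(row)
```

Because the response depends on local contrast rather than absolute intensity, it is plausible that such a front end reduces sensitivity to staining variations, as the abstract argues.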
Affiliation(s)
- Wujie Chen
- School of Computer Science and Technology, Hainan University, Haikou 570228, China
- Caiwei Li
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya 570228, China
- Zhen-li Huang
- Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Sanya 570228, China
- Zhengxia Wang
- School of Computer Science and Technology, Hainan University, Haikou 570228, China
2
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that develops computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific works being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced from problem design all the way to application and implementation viewpoints. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we overview CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We examine this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey review paper and access to the original model cards repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
3
Wang Y, Wu C, Gao Y, Liu H. Deep Learning-Based Dynamic Region of Interest Autofocus Method for Grayscale Image. Sensors (Basel) 2024; 24:4336. PMID: 39001115; PMCID: PMC11244388; DOI: 10.3390/s24134336.
Abstract
In the field of autofocus for optical systems, passive focusing methods are widely used owing to their cost-effectiveness, yet fixed focusing windows and evaluation functions can still lead to focusing failures in certain scenarios. Additionally, the lack of datasets limits extensive research on deep learning methods. In this work, we propose a neural network autofocus method capable of dynamically selecting the region of interest (ROI). Our main contributions are as follows: first, we construct a dataset for automatic focusing of grayscale images; second, we transform the autofocus issue into an ordinal regression problem and propose two focusing strategies: full-stack search and single-frame prediction; and third, we construct a MobileViT network with a linear self-attention mechanism to achieve automatic focusing on dynamic regions of interest. The effectiveness of the proposed focusing method is verified through experiments: the focusing MAE of the full-stack search can be as low as 0.094, with a focusing time of 27.8 ms, and the focusing MAE of the single-frame prediction can be as low as 0.142, with a focusing time of 27.5 ms.
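This entry recasts autofocus as ordinal regression over discrete focus steps. A common encoding for that problem class (a sketch of the general technique, not necessarily the paper's exact formulation) turns a K-class rank into K−1 cumulative binary targets, so that confusing adjacent z-steps costs one rank rather than an arbitrary class swap:

```python
def encode_ordinal(label, num_classes):
    """Rank k -> K-1 cumulative binary targets: t[j] = 1 iff k > j."""
    return [1 if label > j else 0 for j in range(num_classes - 1)]

def decode_ordinal(probs, threshold=0.5):
    """Predicted rank = number of cumulative outputs above threshold."""
    return sum(1 for p in probs if p > threshold)

# A focal stack of 11 z-steps becomes an 11-class ordinal problem;
# a network would emit 10 sigmoid outputs matching this encoding.
targets = encode_ordinal(4, 11)
```

Decoding is the inverse of encoding, so a perfectly calibrated network recovers the focus step exactly; noisy outputs degrade gracefully to nearby steps.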
Affiliation(s)
- Yao Wang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Chuan Wu
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Yunlong Gao
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Huiying Liu
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
4
Chalfoun J, Lund SP, Ling C, Peskin A, Pierce L, Halter M, Elliott J, Sarkar S. Establishing a reference focal plane using convolutional neural networks and beads for brightfield imaging. Sci Rep 2024; 14:7768. PMID: 38565548; PMCID: PMC10987482; DOI: 10.1038/s41598-024-57123-w.
Abstract
Repeatability of measurements from image analytics is difficult, due to the heterogeneity and complexity of cell samples, exact microscope stage positioning, and slide thickness. We present a method to define and use a reference focal plane that provides repeatable measurements with very high accuracy, by relying on control beads as reference material and a convolutional neural network focused on the control bead images. Previously we defined a reference effective focal plane (REFP) based on the image gradient of bead edges and three specific bead image features. This paper both generalizes and improves on this previous work. First, we refine the definition of the REFP by fitting a cubic spline to describe the relationship between the distance from a bead's center and pixel intensity and by sharing information across experiments, exposures, and fields of view. Second, we remove our reliance on image features that behave differently from one instrument to another. Instead, we apply a convolutional regression neural network (ResNet 18) trained on cropped bead images that is generalizable to multiple microscopes. Our ResNet 18 network predicts the location of the REFP with only a single inferenced image acquisition that can be taken across a wide range of focal planes and exposure times. We illustrate the different strategies and hyperparameter optimization of the ResNet 18 to achieve a high prediction accuracy with an uncertainty for every image tested coming within the microscope repeatability measure of 7.5 µm from the desired focal plane. We demonstrate the generalizability of this methodology by applying it to two different optical systems and show that this level of accuracy can be achieved using only 6 beads per image.
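The cubic-spline fit described above starts from the relationship between distance from a bead's center and pixel intensity. Below is a minimal sketch of that underlying radial-profile computation on a synthetic bead; the paper's actual spline fitting and information sharing across experiments are not reproduced here:

```python
import math

def radial_profile(image, cx, cy, max_radius):
    """Mean pixel intensity in integer-radius bins around (cx, cy)."""
    sums = [0.0] * max_radius
    counts = [0] * max_radius
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            r = int(math.hypot(x - cx, y - cy))
            if r < max_radius:
                sums[r] += value
                counts[r] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Synthetic bead: bright center fading linearly with distance.
size = 21
bead = [[max(0.0, 1.0 - math.hypot(x - 10, y - 10) / 10.0)
         for x in range(size)] for y in range(size)]
profile = radial_profile(bead, 10, 10, 10)
```

A smooth curve (such as the paper's cubic spline) fitted to this profile gives a compact, instrument-independent description of how the bead edge blurs with defocus.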
Affiliation(s)
- Joe Chalfoun
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Steven P Lund
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Chenyi Ling
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Adele Peskin
- National Institute of Standards and Technology, Boulder, CO, USA
- Laura Pierce
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Michael Halter
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- John Elliott
- National Institute of Standards and Technology, Gaithersburg, MD, USA
- Sumona Sarkar
- National Institute of Standards and Technology, Gaithersburg, MD, USA
5
Oyibo P, Agbana T, van Lieshout L, Oyibo W, Diehl JC, Vdovine G. An automated slide scanning system for membrane filter imaging in diagnosis of urogenital schistosomiasis. J Microsc 2024; 294:52-61. PMID: 38291833; DOI: 10.1111/jmi.13269.
Abstract
Traditionally, automated slide scanning involves capturing a rectangular grid of field-of-view (FoV) images which can be stitched together to create whole slide images, while the autofocusing algorithm captures a focal stack of images to determine the best in-focus image. However, these methods can be time-consuming due to the need for X-, Y- and Z-axis movements of the digital microscope while capturing multiple FoV images. In this paper, we propose a solution to minimise these redundancies by presenting an optimal procedure for automated slide scanning of circular membrane filters on a glass slide. We achieve this by following an optimal path in the sample plane, ensuring that only FoVs overlapping the filter membrane are captured. To capture the best in-focus FoV image, we utilise a hill-climbing approach that tracks the peak of the mean Gaussian gradient of the captured FoV images along the Z-axis. We implemented this procedure to optimise the efficiency of the Schistoscope, an automated digital microscope developed to diagnose urogenital schistosomiasis by imaging Schistosoma haematobium eggs on 13 or 25 mm membrane filters. Our improved method reduces the automated slide scanning time by 63.18% and 72.52% for the respective filter sizes. This advancement greatly supports the practicality of the Schistoscope in large-scale schistosomiasis monitoring and evaluation programs in endemic regions. It will save time and resources, and also accelerate the generation of data that is critical to achieving the targets for schistosomiasis elimination.
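The hill-climbing focus search described above reduces to a short loop. The sketch below is a simplified illustration with the sharpness metric abstracted as a callable (in the Schistoscope it is the mean Gaussian gradient of the captured FoV image); the step size, limits, and simulated focus curve are illustrative assumptions:

```python
def hill_climb_focus(sharpness, z_start, z_step, z_min, z_max):
    """Climb the sharpness curve along Z; stop when the metric drops."""
    z, best = z_start, sharpness(z_start)
    # Pick the uphill direction from a single probe step.
    direction = z_step if sharpness(z_start + z_step) > best else -z_step
    while z_min <= z + direction <= z_max:
        candidate = sharpness(z + direction)
        if candidate <= best:
            break  # passed the peak of the focus curve
        z, best = z + direction, candidate
    return z

# Simulated unimodal focus curve peaking at z = 37.
curve = lambda z: -(z - 37) ** 2
```

Because each Z move needs only one new image, this avoids capturing a full focal stack at every FoV, which is where the reported time savings come from.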
Affiliation(s)
- Prosper Oyibo
- Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands
- Tope Agbana
- Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands
- Lisette van Lieshout
- Department of Parasitology, Leiden University Medical Center, Leiden, The Netherlands
- Wellington Oyibo
- Centre for Transdisciplinary Research for Malaria & Neglected Tropical Diseases, College of Medicine, University of Lagos, Lagos, Nigeria
- Jan-Carel Diehl
- Department of Sustainable Design Engineering, Delft University of Technology, Delft, The Netherlands
- Gleb Vdovine
- Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands
6
Greenberg A, Samueli B, Farkash S, Zohar Y, Ish-Shalom S, Hagege RR, Hershkovitz D. Algorithm-assisted diagnosis of Hirschsprung's disease - evaluation of robustness and comparative image analysis on data from various labs and slide scanners. Diagn Pathol 2024; 19:26. PMID: 38321431; PMCID: PMC10845737; DOI: 10.1186/s13000-024-01452-x.
Abstract
BACKGROUND Differences in the preparation, staining and scanning of digital pathology slides create significant pre-analytic variability. Algorithm-assisted tools must be able to contend with this variability in order to be applicable in clinical practice. In a previous study, a decision support algorithm was developed to assist in the diagnosis of Hirschsprung's disease. In the current study, we tested the robustness of this algorithm while assessing pre-analytic factors which may affect its performance. METHODS The decision support algorithm was used on digital pathology slides obtained from four different medical centers (A-D) and scanned by three different scanner models (by Philips, Hamamatsu and 3DHISTECH). A total of 192 cases and 1782 slides were used in this study. RGB histograms were constructed to compare images from the various medical centers and scanner models and highlight the differences in color and contrast. RESULTS The algorithm was able to correctly identify ganglion cells in 99.2% of cases from all medical centers (all scanned by the Philips slide scanner), as well as in 95.5% and 100% of the slides scanned by the 3DHISTECH and Hamamatsu brand slide scanners, respectively. The total error rate for center D was lower than for the other medical centers (3.9% vs 7.1%, 10.8% and 6% for centers A-C, respectively), with the vast majority of errors being false positives (3.45% vs 0.45% false negatives). The other medical centers showed a higher rate of false negatives relative to false positives (6.81% vs 0.29%, 9.8% vs 1.2% and 5.37% vs 0.63% for centers A-C, respectively). The total error rates for the Philips, Hamamatsu and 3DHISTECH brand scanners were 3.9%, 3.2% and 9.8%, respectively. RGB histograms demonstrated significant differences in pixel value distribution between the four medical centers, as well as between the 3DHISTECH brand scanner and the Philips and Hamamatsu brand scanners.
CONCLUSIONS The results reported in this paper suggest that the algorithm-based decision support system is sufficiently robust to be applicable in clinical practice. In addition, the novel method used in its development - Hierarchical-Contextual Analysis (HCA) - may be applicable to the development of algorithm-assisted tools for other diseases for which available datasets are limited. Validation of any given algorithm-assisted support system should nonetheless include data from as many medical centers and scanner models as possible.
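The RGB-histogram comparison used in the study above can be sketched in a few lines. This toy version uses an L1 distance between count-normalized per-channel histograms; the paper does not specify a distance metric, so that choice (and the bin count) is illustrative:

```python
def rgb_histograms(pixels, bins=16):
    """Per-channel histograms of (r, g, b) tuples with 0-255 values."""
    hists = [[0] * bins for _ in range(3)]
    for pixel in pixels:
        for channel, value in enumerate(pixel):
            hists[channel][value * bins // 256] += 1
    return hists

def histogram_distance(h1, h2):
    """L1 distance between flattened, count-normalized histograms."""
    total1 = sum(sum(ch) for ch in h1) or 1
    total2 = sum(sum(ch) for ch in h2) or 1
    return sum(abs(a / total1 - b / total2)
               for c1, c2 in zip(h1, h2) for a, b in zip(c1, c2))

# Two "scans" of the same slide: one faithful, one with a darker cast,
# mimicking the scanner-to-scanner color shifts reported above.
original = [(200, 150, 180)] * 50 + [(240, 240, 240)] * 50
dark_scan = [(r - 60, g - 60, b - 60) for r, g, b in original]
```

A large histogram distance between two scanners flags exactly the kind of pre-analytic color shift that the study found for the 3DHISTECH scanner.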
Affiliation(s)
- Ariel Greenberg
- Institute of Pathology, Tel-Aviv Sourasky Medical Center, 6 Weizmann Street, 6423906, Tel Aviv, Israel
- Benzion Samueli
- Department of Pathology, Soroka University Medical Center, 76 Wingate Street, 8486614, Be'er Sheva, Israel
- Shai Farkash
- Department of Pathology, Emek Medical Center, Yitshak Rabin Boulevard 21, 1834111, Afula, Israel
- Yaniv Zohar
- Department of Pathology, Rambam Medical Center, 8 Haalia Hashnia, 3525408, Haifa, Israel
- Shahar Ish-Shalom
- Department of Pathology, Kaplan Medical Center, Pasternak St. P.O.B. 1, 76100, Rehovot, Israel
- Rami R Hagege
- Institute of Pathology, Tel-Aviv Sourasky Medical Center, 6 Weizmann Street, 6423906, Tel Aviv, Israel
- Dov Hershkovitz
- Institute of Pathology, Tel-Aviv Sourasky Medical Center, 6 Weizmann Street, 6423906, Tel Aviv, Israel
- Sackler Faculty of Medicine, Tel-Aviv University, Ramat Aviv 69978, Tel-Aviv, Israel
7
Chen H, Yang L, Zhu W, Tang P, Xing X, Zhang W, Zhong L. Raman signal optimization based on residual network adaptive focusing. Spectrochim Acta A Mol Biomol Spectrosc 2024; 310:123949. PMID: 38277779; DOI: 10.1016/j.saa.2024.123949.
Abstract
Due to its high sensitivity and specificity, micro-Raman spectroscopy has emerged as a vital technique for molecular recognition and identification. Because the Raman signal is weakly scattered, accurate focusing of the sample is essential for acquiring high-quality Raman spectra and analyzing them, especially in complex microenvironments such as intracellular settings. Traditional autofocus methods are often time-consuming or necessitate additional hardware, limiting real-time sample observation and device compatibility. Here, we propose an adaptive focusing method based on a residual network to realize rapid and accurate focusing in micro-Raman measurements. Using only a bright-field image of the sample acquired at any image plane, we predict the defocus distance with a residual network based on ResNet-50, in which the focus position is determined by combining the gradient and the discrete cosine transform. Further, a detailed regional division of the bright-field map, used to characterize the height variation of the actual sample surface, is performed. As a result, a focus prediction map with 1 μm accuracy is obtained from a bright-field image in 120 ms. Based on this method, we successfully realize Raman signal optimization and the necessary correction of spectral information. This adaptive focusing method further enhances the sensitivity and accuracy of micro-Raman spectroscopy, which is of great significance in promoting the wide application of Raman spectroscopy.
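The combination of gradient and discrete cosine transform mentioned above suggests a hybrid sharpness score. The sketch below is one plausible 1-D reading, not the paper's exact combination: gradient energy weighted by the fraction of DCT energy sitting in high-frequency coefficients.

```python
import math

def dct_ii(signal):
    """Naive O(N^2) DCT-II of a 1-D signal."""
    n = len(signal)
    return [sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def focus_score(signal):
    """Gradient energy weighted by the high-frequency DCT fraction."""
    grad = sum((b - a) ** 2 for a, b in zip(signal, signal[1:]))
    coeffs = dct_ii(signal)
    high = sum(c * c for c in coeffs[len(coeffs) // 2:])
    total = sum(c * c for c in coeffs) or 1.0
    return grad * (high / total)

sharp = [0.0] * 8 + [1.0] * 8                                     # crisp edge
blurred = [min(1.0, max(0.0, (i - 4) / 8.0)) for i in range(16)]  # smooth ramp
```

Both factors grow as an image comes into focus, so a well-focused line profile scores higher than a defocused one.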
Affiliation(s)
- Haozhao Chen
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
- Liwei Yang
- Guangdong Provincial Key Laboratory of Nanophotonic Functional Materials and Devices, South China Normal University, Guangzhou 510006, China
- Weile Zhu
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
- Ping Tang
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
- Xinyue Xing
- Guangdong Provincial Key Laboratory of Nanophotonic Functional Materials and Devices, South China Normal University, Guangzhou 510006, China
- Weina Zhang
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
- Liyun Zhong
- Key Laboratory of Photonic Technology for Integrated Sensing and Communication, Ministry of Education, Guangdong University of Technology, Guangzhou 510006, China
8
Hua Z, Zhang X, Tu D. High-precision microscopic autofocus with a single natural image. Opt Express 2023; 31:43372-43389. PMID: 38178432; DOI: 10.1364/oe.507757.
Abstract
In industrial microscopic detection, learning-based autofocus methods have empowered operators to acquire high-quality images quickly. However, there are two sources of error in learning-based methods: the fitting error of the network model and the construction error of the prior dataset, which limit the potential for further improvements in focusing accuracy. In this paper, a high-precision autofocus pipeline is introduced, which predicts the defocus distance from a single natural image. A new method for making datasets is proposed, which overcomes the limitations of the sharpness metric itself and improves the overall accuracy of the dataset. Furthermore, a lightweight regression network is built, namely the Natural-image Defocus Prediction Model (NDPM), to improve the focusing accuracy. A realistic dataset of sufficient size was made to train all models. The experiments show NDPM has better focusing performance than other models, with a mean focusing error of 0.422 μm.
9
Hua Z, Zhang X, Tu D. Autofocus methods based on laser illumination. Opt Express 2023; 31:29465-29479. PMID: 37710746; DOI: 10.1364/oe.499655.
Abstract
An autofocusing system plays an important role in microscopic measurement. However, natural-image-based autofocus methods encounter difficulties in improving focusing accuracy and robustness due to the diversity of detection objects. In this paper, a high-precision autofocus method with laser illumination is proposed, termed laser split-image autofocus (LSA), which actively endows the detection scene with image features. Common non-learning-based and learning-based methods for LSA are quantitatively analyzed and evaluated. Furthermore, a lightweight comparative framework model for LSA, termed the split-image comparison model (SCM), is proposed to further improve focusing accuracy and robustness, and a realistic split-image dataset of sufficient size was made to train all models. The experiments show that LSA has better focusing performance than natural-image-based methods. In addition, SCM greatly improves accuracy and robustness compared with previous learning and non-learning methods, with a mean focusing error of 0.317 μm in complex scenes. SCM is therefore well suited to industrial measurement.
10
Zhang Z, Chan RKY, Wong KKY. Quantized spiral-phase-modulation based deep learning for real-time defocusing distance prediction. Opt Express 2022; 30:26931-26940. PMID: 36236875; DOI: 10.1364/oe.460858.
Abstract
Whole slide imaging (WSI) has become an essential tool in pathological diagnosis, owing to its convenience for remote and collaborative review. However, bringing the sample to the optimal position along the axial direction and imaging without defocusing artefacts is still a challenge, as traditional methods are either not universal or time-consuming. Recently, deep learning has been shown to be effective in the autofocusing task of predicting defocusing distance. Here, we apply quantized spiral phase modulation in the Fourier domain of the captured images before feeding them into a lightweight neural network. This significantly reduces the average prediction error to below that of any previous work on an open dataset. The high prediction speed also means the method can run on an edge device for real-time tasks with a limited computational budget and memory footprint.
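A quantized spiral phase mask of the kind described above is simple to generate. The sketch below builds only the mask; applying it as exp(i·φ) to the Fourier transform of the captured image, and the network itself, are omitted, and the number of quantization levels is an illustrative choice:

```python
import math

def quantized_spiral_phase(size, levels):
    """Spiral (vortex) phase mask with the azimuthal angle quantized
    to `levels` discrete steps, centered on a size x size array."""
    c = (size - 1) / 2.0
    step = 2.0 * math.pi / levels
    mask = []
    for y in range(size):
        row = []
        for x in range(size):
            theta = math.atan2(y - c, x - c) % (2.0 * math.pi)
            row.append(step * int(theta / step))  # snap to a phase level
        mask.append(row)
    return mask

mask = quantized_spiral_phase(32, 8)
```

The spiral's sign sensitivity is what makes it attractive here: a vortex phase responds differently to positive and negative defocus, breaking the ambiguity that a plain intensity image leaves for the network.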
11
Innovative Image Processing Method to Improve Autofocusing Accuracy. Sensors (Basel) 2022; 22:5058. PMID: 35808552; PMCID: PMC9269835; DOI: 10.3390/s22135058.
Abstract
For automated optical inspection, autofocusing microscopes play an important role in capturing clear images of the measured object. At present, the image processing stage of optics-based autofocusing microscopes is affected by various factors that make it impossible to describe the image information of the semicircular (or elliptical) spot with a simple circle-finding method. Accordingly, this study has developed a novel algorithm that can quickly calculate the ideal center of the elliptical spot and effectively compensate the linearity of the focusing characteristic curve. A prototype model was used to characterize and verify the proposed algorithm. The experimental results show that, with the proposed algorithm, the autofocusing accuracy can be improved to better than 1.5 μm.
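The paper computes the ideal center of an elliptical spot, but its exact algorithm is not given in the abstract, so the sketch below substitutes the simplest plausible stand-in: an intensity-weighted centroid over thresholded pixels, which handles elliptical (non-circular) spots without an explicit circle fit.

```python
def spot_center(image, threshold=0.5):
    """Intensity-weighted centroid of pixels at or above a threshold.
    A simple stand-in for locating the center of a (semi)elliptical
    focus spot without fitting a circle or ellipse."""
    sx = sy = total = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= threshold:
                sx += x * v
                sy += y * v
                total += v
    if total == 0:
        raise ValueError("no pixels above threshold")
    return sx / total, sy / total

# Synthetic elliptical spot centered at (12, 7): x-radius 6, y-radius 3.
spot = [[1.0 if ((x - 12) / 6.0) ** 2 + ((y - 7) / 3.0) ** 2 <= 1.0 else 0.0
         for x in range(25)] for y in range(15)]
cx, cy = spot_center(spot)
```

The recovered spot center would then feed the linearity compensation of the focusing characteristic curve described in the abstract.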
12
Zhang Q, Wang Y, Li Q, Tao X, Zhou X, Zhang Y, Liu G. An autofocus algorithm considering wavelength changes for large scale microscopic hyperspectral pathological imaging system. J Biophotonics 2022; 15:e202100366. PMID: 35020264; DOI: 10.1002/jbio.202100366.
Abstract
Microscopic hyperspectral imaging technology has been widely used to acquire pathological information from tissue sections. Autofocus is one of the most important steps in microscopic hyperspectral imaging systems for capturing large-scale or even whole slide images of pathological slides with high quality and high speed. However, few autofocus algorithms have been put forward for microscopic hyperspectral imaging systems. This article therefore proposes a Laplace-operator-based autofocus algorithm for microscopic hyperspectral imaging systems that takes the influence of wavelength changes into consideration. With the proposed algorithm, the focal length for each wavelength is adjusted automatically to ensure that each single-band image is autofocused precisely, using an adaptive image sharpness evaluation method. In addition, to increase the capture speed, the relationship between wavelength and focal length is derived, and the focal offsets among different single-band images are calculated for pre-focusing. We have employed the proposed method on our own datasets, and the experimental results show that it can capture large-scale microscopic hyperspectral pathology images with high precision.
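A Laplace-operator sharpness score of the kind this algorithm builds on can be sketched as the mean squared response of the 4-neighbour Laplacian; the paper's adaptive evaluation and per-wavelength focal offsets are not reproduced here.

```python
def laplacian_sharpness(image):
    """Mean squared response of the 4-neighbour Laplace operator,
    used as an image sharpness score (higher = sharper)."""
    h, w = len(image), len(image[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x] +
                   image[y][x - 1] + image[y][x + 1] - 4 * image[y][x])
            total += lap * lap
            count += 1
    return total / count

# A checkerboard (fine detail) versus a flat field (the defocused extreme).
checker = [[(x + y) % 2 for x in range(8)] for y in range(8)]
flat = [[0.5] * 8 for _ in range(8)]
```

In the hyperspectral setting, such a score would be evaluated per single-band image, with the derived wavelength-to-focal-length relationship supplying a pre-focus offset before the fine search.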
Affiliation(s)
- Qing Zhang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Yan Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Center of SHMEC for Space Information and GNSS, East China Normal University, Shanghai, China
- Qingli Li
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Engineering Research Center of Nanophotonics & Advanced Instrument, Ministry of Education, East China Normal University, Shanghai, China
- Center of SHMEC for Space Information and GNSS, East China Normal University, Shanghai, China
- Xiang Tao
- Obstetrics & Gynecology Hospital of Fudan University, Shanghai, China
- Yonghe Zhang
- Jiangsu Huachuang High-tech Medical Technology Co., Ltd., Suzhou, China
- Gang Liu
- Panovue Biological Technology (Beijing) Co., Ltd, Beijing, China
13
Li Q, Liu X, Han K, Guo C, Jiang J, Ji X, Wu X. Learning to autofocus in whole slide imaging via physics-guided deep cascade networks. Opt Express 2022; 30:14319-14340. PMID: 35473178; DOI: 10.1364/oe.416824.
Abstract
Whole slide imaging (WSI) is an essential technology for digital pathology, whose performance is primarily affected by the autofocusing process. Conventional autofocusing methods are either time-consuming or require additional hardware and thus are not compatible with current WSI systems. In this paper, we propose an effective learning-based method for autofocusing in WSI, which realizes accurate autofocusing at high speed and without any optical hardware modifications. Our method is inspired by the observation that sample images captured by WSI have distinctive characteristics with respect to positive/negative defocus offsets, due to the asymmetry of optical aberrations. Based on this physical knowledge, we develop novel deep cascade networks to enhance autofocusing quality. Specifically, to handle the effect of optical aberrations, a binary classification network is tailored to distinguish sample images with positive/negative defocus, so that samples within the same category share similar characteristics. This facilitates the subsequent refocusing network, which is designed to learn the mapping between the defocused image and the defocus distance. Experimental results demonstrate that our method achieves superior autofocusing performance compared to other related methods.
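The cascade described above (a sign classifier routing the image to a sign-specific distance regressor) can be sketched abstractly; the callables below stand in for the paper's CNNs and are hypothetical interfaces, not the authors' actual networks:

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class CascadeAutofocus:
    """Sketch of a physics-guided cascade for defocus estimation.

    A binary classifier first decides the defocus sign (+1/-1), exploiting
    the asymmetry of optical aberrations; a sign-specific regressor then
    estimates the defocus magnitude. All three callables are placeholders
    for trained networks.
    """
    classify_sign: Callable[[np.ndarray], int]   # returns +1 or -1
    regress_pos: Callable[[np.ndarray], float]   # |defocus| for positive side
    regress_neg: Callable[[np.ndarray], float]   # |defocus| for negative side

    def defocus_distance(self, img: np.ndarray) -> float:
        s = self.classify_sign(img)
        mag = self.regress_pos(img) if s > 0 else self.regress_neg(img)
        return s * mag  # signed defocus distance to drive the stage
```

The design point is that each regressor only ever sees images from one side of focus, where blur characteristics are self-similar, which is an easier mapping to learn than a single signed regression.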
14
Eckardt JN, Schmittmann T, Riechert S, Kramer M, Sulaiman AS, Sockel K, Kroschinsky F, Schetelig J, Wagenführ L, Schuler U, Platzbecker U, Thiede C, Stölzel F, Röllig C, Bornhäuser M, Wendt K, Middeke JM. Deep learning identifies Acute Promyelocytic Leukemia in bone marrow smears. BMC Cancer 2022; 22:201. [PMID: 35193533] [PMCID: PMC8864866] [DOI: 10.1186/s12885-022-09307-8]
Abstract
BACKGROUND Acute promyelocytic leukemia (APL) is considered a hematologic emergency due to the high risk of bleeding, with fatal hemorrhage a major cause of death. Despite lower death rates reported from clinical trials, patient registry data suggest an early death rate of 20%, especially for elderly and frail patients. Reliable diagnosis is therefore required, as treatment with differentiation-inducing agents leads to cure in the majority of patients. Diagnosis commonly relies on cytomorphology and genetic confirmation of the pathognomonic t(15;17); however, the latter is more time-consuming and in some regions unavailable. METHODS In recent years, deep learning (DL) has been evaluated for medical image recognition, showing outstanding capabilities in analyzing large amounts of image data and providing reliable classification results. We developed a multi-stage DL platform that automatically reads images of bone marrow smears, accurately segments cells, and subsequently predicts APL using image data only. We retrospectively identified 51 APL patients from previous multicenter trials and compared them to 1048 non-APL acute myeloid leukemia (AML) patients and 236 healthy bone marrow donor samples, respectively. RESULTS Our DL platform segments bone marrow cells with a mean average precision and a mean average recall of 0.97 each. Further, it achieves high accuracy in detecting APL, distinguishing APL from non-APL AML and APL from healthy donors with areas under the receiver operating characteristic curve of 0.8575 and 0.9585, respectively, using visual image data only. CONCLUSIONS Our study underlines not only the feasibility of DL to detect the distinct morphologies that accompany a cytogenetic aberration like t(15;17) in APL, but also the capability of DL to abstract information from a small medical data set, i.e., 51 APL patients, and infer correct predictions. This demonstrates the suitability of DL to assist in the diagnosis of rare cancer entities. As our DL platform predicts APL from bone marrow smear images alone, it may be used to diagnose APL in regions where molecular or cytogenetic subtyping is not routinely available and to raise attention to suspected cases of APL for expert evaluation.
Affiliation(s)
- Jan-Niklas Eckardt: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Tim Schmittmann: Institute of Software and Multimedia Technology, Technical University Dresden, Dresden, Germany
- Sebastian Riechert: Institute of Software and Multimedia Technology, Technical University Dresden, Dresden, Germany
- Michael Kramer: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Anas Shekh Sulaiman: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Katja Sockel: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Frank Kroschinsky: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Johannes Schetelig: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Lisa Wagenführ: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Ulrich Schuler: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Uwe Platzbecker: Department of Medicine I, Hematology, Cellular Therapy, Hemostaseology, University of Leipzig, Leipzig, Germany
- Christian Thiede: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Friedrich Stölzel: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Christoph Röllig: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
- Martin Bornhäuser: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany; German Consortium for Translational Cancer Research, Heidelberg, Germany; National Center for Tumor Disease (NCT), Dresden, Germany
- Karsten Wendt: Institute of Software and Multimedia Technology, Technical University Dresden, Dresden, Germany
- Jan Moritz Middeke: Department of Internal Medicine I, University Hospital Carl Gustav Carus, 01307 Dresden, Saxony, Germany
15
Li C, Rai MR, Ghashghaei HT, Greenbaum A. Illumination angle correction during image acquisition in light-sheet fluorescence microscopy using deep learning. BIOMEDICAL OPTICS EXPRESS 2022; 13:888-901. [PMID: 35284156] [PMCID: PMC8884226] [DOI: 10.1364/boe.447392]
Abstract
Light-sheet fluorescence microscopy (LSFM) is a high-speed imaging technique that provides optical sectioning with reduced photodamage. LSFM is routinely used in the life sciences for live cell imaging and for capturing large volumes of cleared tissues. LSFM has a unique configuration, in which the illumination and detection paths are separated and perpendicular to each other. As such, the image quality, especially at high resolution, largely depends on the degree of overlap between the detection focal plane and the illuminating beam. However, spatial heterogeneity within the sample, curved specimen boundaries, and mismatch of refractive index between tissues and immersion media can refract the well-aligned illumination beam. This refraction can cause extensive blur and non-uniform image quality over the imaged field of view. To address these issues, we tested a deep learning-based approach to estimate the angular error of the illumination beam relative to the detection focal plane. The illumination beam was then corrected using a pair of galvo scanners, and the correction significantly improved the image quality across the entire field of view. The angular estimation was based on calculating the defocus level at the pixel level using two defocused images. Overall, our study provides a framework that can correct the angle of the light sheet and improve the overall image quality in high-resolution LSFM 3D image acquisition.
Affiliation(s)
- Chen Li: Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695, USA; Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695, USA
- Mani Ratnam Rai: Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695, USA; Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695, USA
- H. Troy Ghashghaei: Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695, USA; Department of Molecular Biomedical Sciences, North Carolina State University, Raleigh, NC 27695, USA
- Alon Greenbaum: Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695, USA; Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695, USA; Bioinformatics Research Center, North Carolina State University, Raleigh, NC 27695, USA
16
Peng Y, Zhang Z, Tu H, Li X. Automatic Segmentation of Novel Coronavirus Pneumonia Lesions in CT Images Utilizing Deep-Supervised Ensemble Learning Network. Front Med (Lausanne) 2022; 8:755309. [PMID: 35047520] [PMCID: PMC8761973] [DOI: 10.3389/fmed.2021.755309]
Abstract
Background: The novel coronavirus disease 2019 (COVID-19) has spread widely across the world, posing a huge threat to people's living environment. Objective: Under CT imaging, the structural features of COVID-19 lesions are complicated and vary greatly across cases. To accurately locate COVID-19 lesions and assist doctors in making the best diagnosis and treatment plan, a deep-supervised ensemble learning network is presented for COVID-19 lesion segmentation in CT images. Methods: Since a large number of COVID-19 CT images and the corresponding lesion annotations are difficult to obtain, a transfer learning strategy is employed to compensate for this shortage and alleviate the overfitting problem. Because a traditional single deep learning framework struggles to extract the complicated and highly variable COVID-19 lesion features effectively, some lesions may go undetected. To overcome this problem, a deep-supervised ensemble learning network is presented that combines local and global features for COVID-19 lesion segmentation. Results: The performance of the proposed method was validated in experiments with a publicly available dataset. Compared with manual annotations, the proposed method acquired a high intersection over union (IoU) of 0.7279 and a low Hausdorff distance (H) of 92.4604. Conclusion: A deep-supervised ensemble learning network was presented for coronavirus pneumonia lesion segmentation in CT images. The effectiveness of the proposed method was verified by visual inspection and quantitative evaluation. Experimental results indicated that the proposed method performs well in COVID-19 lesion segmentation.
Affiliation(s)
- Yuanyuan Peng: School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang, China; School of Computer Science, Northwestern Polytechnical University, Xi'an, China
- Zixu Zhang: School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang, China
- Hongbin Tu: School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang, China; Technique Center, Hunan Great Wall Technology Information Co. Ltd., Changsha, China
- Xiong Li: School of Software, East China Jiaotong University, Nanchang, China
17
Liao J, Chen X, Ding G, Dong P, Ye H, Wang H, Zhang Y, Yao J. Deep learning-based single-shot autofocus method for digital microscopy. BIOMEDICAL OPTICS EXPRESS 2022; 13:314-327. [PMID: 35154873] [PMCID: PMC8803042] [DOI: 10.1364/boe.446928]
Abstract
Digital pathology is being transformed by artificial intelligence (AI)-based pathological diagnosis. One major challenge for correct AI diagnoses is ensuring the focus quality of captured images. Here, we propose a deep learning-based single-shot autofocus method for microscopy. We use a modified MobileNetV3, a lightweight network, to predict the defocus distance from a single-shot microscopy image acquired at an arbitrary image plane, without a secondary camera or additional optics. The defocus prediction takes only 9 ms, with a focusing error of only ∼1/15 of the depth of field. We also provide implementation examples for the augmented reality microscope and the whole slide imaging (WSI) system. Our proposed technique can perform real-time and accurate autofocus, which will not only support pathologists in their daily work but also has potential applications in the life sciences, materials research, and industrial automated inspection.
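The reported focusing error of ∼1/15 of the depth of field can be put in absolute terms with the standard diffraction-limited depth-of-field approximation; this is a textbook formula under assumed objective parameters, and the authors' exact error budget may use a fuller expression (e.g., with a detector-sampling term):

```python
def depth_of_field_um(wavelength_um: float, na: float, n: float = 1.0) -> float:
    """Diffraction-limited depth of field, DOF = n * lambda / NA^2.

    Wave-optics term only; a standard textbook approximation, not the
    paper's own derivation.
    """
    return n * wavelength_um / na**2

# Example with assumed parameters: a 0.75-NA dry objective at 550 nm.
dof = depth_of_field_um(0.55, 0.75)   # depth of field in micrometres
focus_error = dof / 15                # the reported ~1/15-DOF accuracy
```

Under these assumed parameters the depth of field is on the order of a micrometre, so a 1/15-DOF error corresponds to a sub-100-nm focusing accuracy.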
Affiliation(s)
- Xu Chen: Tencent AI Lab, Shenzhen 518054, China
- Ge Ding: Tencent AI Lab, Shenzhen 518054, China
- Pei Dong: Tencent AI Lab, Shenzhen 518054, China
- Hu Ye: Tencent AI Lab, Shenzhen 518054, China
- Han Wang: Tencent AI Lab, Shenzhen 518054, China
- Yongbing Zhang: School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen 518055, China
18
Guo C, Jiang S, Yang L, Song P, Wang T, Shao X, Zhang Z, Murphy M, Zheng G. Deep learning-enabled whole slide imaging (DeepWSI): oil-immersion quality using dry objectives, longer depth of field, higher system throughput, and better functionality. OPTICS EXPRESS 2021; 29:39669-39684. [PMID: 34809325] [DOI: 10.1364/oe.441892]
Abstract
Whole slide imaging (WSI) has moved the traditional manual slide inspection process into the era of digital pathology. A typical WSI system translates the sample to different positions and captures images using a high numerical aperture (NA) objective lens. Performing oil-immersion microscopy is a major obstacle for WSI, as it requires careful liquid handling during the scanning process, and switching between a dry objective and an oil-immersion lens is often impossible because it disrupts the acquisition process. For a high-NA objective lens, the sub-micron depth of field also poses a challenge to acquiring in-focus images of samples with uneven topography. Additionally, the high NA implies a small field of view for each tile, limiting the system throughput and resulting in long acquisition times. Here we report a deep learning-enabled WSI platform, termed DeepWSI, that substantially improves system performance and imaging throughput. With this platform, we show that images captured with a regular dry objective lens can be transformed into images comparable to those of a 1.4-NA oil-immersion lens. Blurred images with defocus distances from -5 µm to +5 µm can be virtually refocused to the in-focus plane post-measurement. We demonstrate an equivalent data throughput of >2 gigapixels per second, the highest among existing WSI systems. Using the same deep neural network, we also report a high-resolution virtual staining strategy and demonstrate it for Fourier ptychographic WSI. The DeepWSI platform may provide a turnkey solution for developing high-performance diagnostic tools for digital pathology.
19
Deep learning detects acute myeloid leukemia and predicts NPM1 mutation status from bone marrow smears. Leukemia 2021; 36:111-118. [PMID: 34497326] [PMCID: PMC8727290] [DOI: 10.1038/s41375-021-01408-w]
Abstract
The evaluation of bone marrow morphology by experienced hematopathologists is essential in the diagnosis of acute myeloid leukemia (AML); however, it suffers from a lack of standardization and from inter-observer variability. Deep learning (DL) can process medical image data and provide data-driven class predictions. Here, we apply a multi-step DL approach to automatically segment cells from bone marrow images, distinguish between AML samples and healthy controls with an area under the receiver operating characteristic curve (AUROC) of 0.9699, and predict the mutation status of Nucleophosmin 1 (NPM1), one of the most common mutations in AML, with an AUROC of 0.92, using only image data from bone marrow smears. Utilizing occlusion sensitivity maps, we observed previously unreported morphologic cell features, such as a pattern of condensed chromatin and perinuclear lightening zones in myeloblasts of NPM1-mutated AML and prominent nucleoli in NPM1 wild-type AML, that enable the DL model to provide accurate class predictions.
20
Xin K, Jiang S, Chen X, He Y, Zhang J, Wang H, Liu H, Peng Q, Zhang Y, Ji X. Low-cost whole slide imaging system with single-shot autofocusing based on color-multiplexed illumination and deep learning. BIOMEDICAL OPTICS EXPRESS 2021; 12:5644-5657. [PMID: 34692206] [PMCID: PMC8515991] [DOI: 10.1364/boe.428655]
Abstract
Recent research on whole slide imaging (WSI) has greatly promoted the development of digital pathology. However, accurate autofocusing remains the main challenge for WSI acquisition and automated digital microscopy. To address this problem, this paper describes a low-cost WSI system and proposes a fast, robust autofocusing method based on deep learning. We use a programmable LED array for sample illumination. Before brightfield image acquisition, we turn on one red and one green LED and capture a color-multiplexed image, which is fed into a neural network for defocus distance estimation. After the focus tracking process, we employ a low-cost DIY adaptor to digitally adjust the photographic lens, instead of the mechanical stage, to perform axial position adjustment and acquire the in-focus image under brightfield illumination. To ensure calculation speed and image quality, we build a network model on a lightweight backbone architecture, MobileNetV3. Since the color-multiplexed, coherently illuminated images contain abundant information about the defocus orientation, the proposed method achieves high autofocusing performance. Experimental results show that the proposed method can accurately predict the defocus distance of various types of samples and generalizes well to new sample types. When using a GPU, the processing time for autofocusing is less than 0.1 s for each field of view, indicating that our method can further speed up the acquisition of whole slide images.
Affiliation(s)
- Kaifa Xin: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Department of Automation, Tsinghua University, Beijing, 100084, China
- Shaowei Jiang: Department of Biomedical Engineering, University of Connecticut, Storrs 06269, USA
- Xu Chen: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Department of Automation, Tsinghua University, Beijing, 100084, China
- Yonghong He: Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Jian Zhang: Shenzhen Graduate School, Peking University, Shenzhen, 518055, China
- Hongpeng Wang: Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Honghai Liu: Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Qin Peng: Institute of Systems and Physical Biology, Shenzhen Bay Laboratory, Shenzhen, 518132, China
- Yongbing Zhang: Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Xiangyang Ji: Department of Automation, Tsinghua University, Beijing, 100084, China
21
Ding H, Li F, Meng Z, Feng S, Ma J, Nie S, Yuan C. Auto-focusing and quantitative phase imaging using deep learning for the incoherent illumination microscopy system. OPTICS EXPRESS 2021; 29:26385-26403. [PMID: 34615075] [DOI: 10.1364/oe.434014]
Abstract
Quantitative phase information, which is vital in biomedical studies, is hard to obtain directly with bright-field microscopy under incoherent illumination. In addition, it is difficult to maintain a living sample in focus over long-term observation. Therefore, both autofocusing and quantitative phase imaging have to be addressed simultaneously in microscopy. Here, we propose a lightweight deep learning-based framework, built on residual structures and constrained by a novel loss function model, that realizes both autofocusing and quantitative phase imaging. It outputs the corresponding in-focus amplitude and phase information at high speed (10 fps) from a single-shot out-of-focus bright-field image. The training data were captured with a purpose-built system under hybrid incoherent and coherent illumination. The experimental results verify that focused images and quantitative phase images of both non-biological and biological samples can be reconstructed using the framework. It provides a versatile quantitative technique for continuous monitoring of living cells in long-term, label-free imaging using a traditional incoherent illumination microscopy system.
22
Peskin A, Lund SP, Pierce L, Kurbanov F, Chan LLY, Halter M, Elliott J, Sarkar S, Chalfoun J. Establishing a reference focal plane using beads for trypan-blue-based viability measurements. J Microsc 2021; 283:243-258. [PMID: 34115371] [DOI: 10.1111/jmi.13037]
Abstract
Trypan blue dye exclusion-based cell viability measurements are highly dependent on image quality and consistency. To make measurements repeatable, one must be able to reliably capture images at a consistent focal plane and with a signal-to-noise ratio within appropriate limits to support proper execution of image analysis routines. Imaging chambers and imaging systems used for trypan blue analysis can be inconsistent or can drift over time, leading to a need to verify image quality prior to automated image analysis. Although cell-based autofocus techniques can be applied, the heterogeneity and complexity of the cell samples can make it difficult to assure the effectiveness, repeatability and accuracy of the routine for each measurement. Instead of autofocusing on cells in our images, we add control beads to the images and use them to repeatedly return to a reference focal plane. We use bead image features that have stable profiles across a wide range of focal values and exposure levels. We created a predictive model based on image quality features computed over reference datasets. Because the beads have little variation, we can determine the reference plane from bead image features computed over a single-shot image and can reproducibly return to that reference plane with each sample. The achieved accuracy (over 95%) is within the limits of the actuator repeatability. We demonstrate that a small number of beads (fewer than 3 beads per image) is needed to achieve this accuracy. We have also developed an open-source graphical user interface called Bead Benchmarking-Focus And Intensity Tool (BB-FAIT) to implement these methods for a semi-automated cell viability analyser.
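The bead-based scheme above, calibrating a focus-sensitive bead feature against stage position from reference stacks and then inverting it from a single-shot image, can be sketched as follows; the quadratic model and the monotone feature are illustrative assumptions, not the paper's exact predictive model:

```python
import numpy as np

def calibrate(z_positions, bead_feature_values):
    """Fit a low-order polynomial model feature(z) from a reference
    through-focus stack of bead images.

    The feature (e.g. bead edge width) is a hypothetical stand-in for the
    paper's image quality features; it must vary stably with z.
    """
    return np.polyfit(z_positions, bead_feature_values, deg=2)

def offset_to_reference(coeffs, feature_now, z_grid):
    """Invert the calibrated model: find the z on a search grid whose
    predicted feature best matches the single-shot observation, i.e. the
    stage offset needed to return to the reference focal plane."""
    predicted = np.polyval(coeffs, z_grid)
    return float(z_grid[np.argmin(np.abs(predicted - feature_now))])
```

Because the beads vary little from sample to sample, one calibration can be reused: each new image only needs its bead feature computed once, then `offset_to_reference` gives the correction to apply before acquiring the viability image.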
Affiliation(s)
- Adele Peskin: National Institute of Standards and Technology, Boulder, Colorado
- Steven P Lund: National Institute of Standards and Technology, Gaithersburg, Maryland
- Laura Pierce: National Institute of Standards and Technology, Gaithersburg, Maryland
- Firdavs Kurbanov: National Institute of Standards and Technology, Gaithersburg, Maryland
- Leo Li-Ying Chan: Department of Advanced Technology R&D, Nexcelom Bioscience LLC, Lawrence, Massachusetts
- Michael Halter: National Institute of Standards and Technology, Gaithersburg, Maryland
- John Elliott: National Institute of Standards and Technology, Gaithersburg, Maryland
- Sumona Sarkar: National Institute of Standards and Technology, Gaithersburg, Maryland
- Joe Chalfoun: National Institute of Standards and Technology, Gaithersburg, Maryland
23
Zhang C, Gu Y, Yang J, Yang GZ. Diversity-Aware Label Distribution Learning for Microscopy Auto Focusing. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3061333]
24
Eckardt JN, Bornhäuser M, Wendt K, Middeke JM. Application of machine learning in the management of acute myeloid leukemia: current practice and future prospects. Blood Adv 2020; 4:6077-6085. [PMID: 33290546] [PMCID: PMC7724910] [DOI: 10.1182/bloodadvances.2020002997]
Abstract
Machine learning (ML) is rapidly emerging in several fields of cancer research. ML algorithms can deal with vast amounts of medical data and provide a better understanding of malignant disease. Their ability to process information from different diagnostic modalities and to predict prognosis and suggest therapeutic strategies indicates that ML is a promising tool for the future management of hematologic malignancies; acute myeloid leukemia (AML) has served as a model disease in various recent studies. An integration of these ML techniques into various applications in AML management can assure fast and accurate diagnosis as well as precise risk stratification and optimal therapy. Nevertheless, these techniques come with various pitfalls and need a strict regulatory framework to ensure the safe use of ML. This comprehensive review highlights and discusses recent advances in ML techniques in the management of AML as a model disease of hematologic neoplasms, enabling researchers and clinicians alike to critically evaluate this upcoming, potentially practice-changing technology.
Affiliation(s)
- Jan-Niklas Eckardt: Department of Internal Medicine I, University Hospital Carl Gustav Carus, Dresden, Germany
- Martin Bornhäuser: Department of Internal Medicine I, University Hospital Carl Gustav Carus, Dresden, Germany; National Center for Tumor Diseases, Dresden (NCT/UCC), Dresden, Germany; German Consortium for Translational Cancer Research, DKFZ, Heidelberg, Germany
- Karsten Wendt: Institute of Circuits and Systems, Technical University Dresden, Dresden, Germany
- Jan Moritz Middeke: Department of Internal Medicine I, University Hospital Carl Gustav Carus, Dresden, Germany
25
Modeling adult skeletal stem cell response to laser-machined topographies through deep learning. Tissue Cell 2020; 67:101442. [DOI: 10.1016/j.tice.2020.101442]
26
Pandey P, P PA, Kyatham V, Mishra D, Dastidar TR. Target-Independent Domain Adaptation for WBC Classification Using Generative Latent Search. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3979-3991. [PMID: 32746144] [DOI: 10.1109/tmi.2020.3009029]
Abstract
Automating the classification of camera-obtained microscopic images of white blood cells (WBCs) and related cell subtypes has assumed importance since it aids the laborious manual process of review and diagnosis. Several state-of-the-art (SOTA) methods developed using deep convolutional neural networks suffer from the problem of domain shift: severe performance degradation when they are tested on data (target) obtained in a setting different from that of the training (source). The change in the target data might be caused by factors such as differences in camera/microscope types, lenses, lighting conditions, etc. This problem can potentially be solved using unsupervised domain adaptation (UDA) techniques, although standard algorithms presuppose the existence of a sufficient amount of unlabelled target data, which is not always the case with medical images. In this paper, we propose a method for UDA that is devoid of the need for target data. Given a test image from the target data, we obtain its 'closest clone' from the source data, which is used as a proxy in the classifier. We prove the existence of such a clone, given that an infinite number of data points can be sampled from the source distribution. We propose a method in which a latent-variable generative model based on variational inference is used to simultaneously sample and find the 'closest clone' from the source distribution through an optimization procedure in the latent space. We demonstrate the efficacy of the proposed method over several SOTA UDA methods for WBC classification on datasets captured using different imaging modalities under multiple settings.
27
Bian Z, Guo C, Jiang S, Zhu J, Wang R, Song P, Zhang Z, Hoshino K, Zheng G. Autofocusing technologies for whole slide imaging and automated microscopy. JOURNAL OF BIOPHOTONICS 2020; 13:e202000227. [PMID: 32844560] [DOI: 10.1002/jbio.202000227]
Abstract
Whole slide imaging (WSI) has moved digital pathology closer to diagnostic practice in recent years. Due to the inherent variability of tissue topography, accurate autofocusing remains a critical challenge for WSI and automated microscopy systems. The traditional focus map surveying method is limited in its ability to acquire a large number of focus points while still maintaining high throughput. Real-time approaches decouple image acquisition from focusing, thus allowing for rapid scanning while maintaining continuous, accurate focus. This work reviews the traditional focus map approach and discusses the choice of focus measure for focal plane determination. It also discusses various real-time autofocusing approaches, including reflective-based triangulation, confocal pinhole detection, low-coherence interferometry, the tilted sensor approach, independent dual sensor scanning, the beam splitter array, phase detection, dual-LED illumination and deep-learning approaches. The technical concepts, merits and limitations of these methods are explained and compared with those of a traditional WSI system. This review may provide new insights for the development of high-throughput automated microscopy imaging systems that can be made broadly available and utilizable without loss of capacity.
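The focus map surveying approach mentioned above can be illustrated with a minimal sketch: survey the best-focus z at a few (x, y) tiles, fit a tilted plane by least squares, and interpolate the focus for the remaining tiles. Real systems often fit denser surfaces than a plane; this is an assumption-laden simplification:

```python
import numpy as np

def fit_focus_plane(xs, ys, zs):
    """Least-squares plane z = a*x + b*y + c through surveyed focus points.

    xs, ys: tile coordinates of the surveyed points; zs: their best-focus
    stage positions (e.g. found by a through-focus sharpness search).
    """
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(zs, dtype=float), rcond=None)
    return coeffs  # (a, b, c)

def predict_focus(coeffs, x, y):
    """Interpolated focus position for an unsurveyed tile."""
    a, b, c = coeffs
    return a * x + b * y + c
```

The trade-off the review discusses falls directly out of this sketch: more surveyed points capture more topography but cost acquisition time, which is what motivates the real-time approaches that avoid the survey altogether.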
Affiliation(s)
- Zichao Bian, Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Chengfei Guo, Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Shaowei Jiang, Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Jiakai Zhu, Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Ruihai Wang, Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Pengming Song, Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, USA
- Zibang Zhang, Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Kazunori Hoshino, Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Guoan Zheng, Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
28
Bowden AK, Durr NJ, Erickson D, Ozcan A, Ramanujam N, Jacques PV. Optical Technologies for Improving Healthcare in Low-Resource Settings: introduction to the feature issue. BIOMEDICAL OPTICS EXPRESS 2020; 11:3091-3094. [PMID: 32637243 PMCID: PMC7316015 DOI: 10.1364/boe.397698] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Indexed: 05/03/2023]
Abstract
This feature issue of Biomedical Optics Express presents a cross-section of interesting and emerging work of relevance to optical technologies in low-resource settings. In particular, the technologies described here aim to address challenges to meeting healthcare needs in resource-constrained environments, including in rural and underserved areas. This collection of 18 papers includes papers on both optical system design and image analysis, with applications demonstrated for ex vivo and in vivo use. Taken together, these works portray the importance of global health research to the scientific community and the role that optics can play in addressing some of the world's most pressing healthcare challenges.
Affiliation(s)
- Audrey K. Bowden, Vanderbilt Biophotonics Center, Department of Biomedical Engineering, Vanderbilt University, 410 24th Avenue South, Nashville, TN 37232, USA
- Nicholas J. Durr, Department of Biomedical Engineering, Johns Hopkins University (JHU), 3400 N. Charles Street, Baltimore, MD 21218, USA
- David Erickson, Cornell University, 9 Millcroft Way, Ithaca, NY 14850, USA
- Aydogan Ozcan, Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, CA 90095, USA
- Nirmala Ramanujam, Duke University, 101 Science Drive, 1427 FCIEMAS, Durham, NC 27708, USA
- Paulino Vacas Jacques, Wellman Center for Photomedicine, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114, USA