1
Calin VL, Mihailescu M, Petrescu GE, Lisievici MG, Tarba N, Calin D, Ungureanu VG, Pasov D, Brehar FM, Gorgan RM, Moisescu MG, Savopol T. Grading of glioma tumors using digital holographic microscopy. Heliyon 2024; 10:e29897. [PMID: 38694030; PMCID: PMC11061684; DOI: 10.1016/j.heliyon.2024.e29897]
Abstract
Gliomas are the most common type of cerebral tumor; their incidence has increased over the last decade, and they carry a high mortality rate. For efficient treatment, fast and accurate diagnosis and grading of tumors are imperative. Presently, tumor grade is established by histopathological evaluation, a time-consuming procedure that relies on the pathologist's experience. Here we propose a supervised machine learning procedure for tumor grading which uses quantitative phase images of unstained tissue samples acquired by digital holographic microscopy. The algorithm uses an extensive set of statistical and texture parameters computed from these images. The procedure was able to classify six classes of images (normal tissue and five glioma subtypes) and to distinguish between glioma types from grades II to IV, with the highest sensitivity and specificity for grade II astrocytoma and grade III oligodendroglioma and very good scores in recognizing grade III anaplastic astrocytoma and grade IV glioblastoma. The procedure bolsters clinical diagnostic accuracy, offering a swift and reliable means of tumor characterization and grading, ultimately enhancing the treatment decision-making process.
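The pipeline described in this abstract (statistical/texture descriptors of phase images feeding a supervised classifier) can be sketched as follows. This is a minimal stand-in, not the authors' code: the five-feature set, the random forest, and the synthetic "phase maps" (noise fields differing only in texture scale) are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def phase_image_features(img):
    """Simple statistical descriptors of a quantitative phase image
    (a small stand-in for the paper's full statistical/texture feature set)."""
    gy, gx = np.gradient(img)
    return np.array([
        img.mean(), img.std(), np.percentile(img, 90),
        ((img - img.mean())**3).mean() / (img.std()**3 + 1e-12),  # skewness
        np.hypot(gy, gx).mean(),                                   # mean gradient magnitude
    ])

rng = np.random.default_rng(0)
# Synthetic "phase maps" for two hypothetical tissue classes
imgs = [rng.normal(0.0, s, (64, 64)) for s in (1.0,)*30 + (1.6,)*30]
X = np.array([phase_image_features(i) for i in imgs])
y = np.array([0]*30 + [1]*30)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

The same skeleton extends to a six-class problem by giving `y` six labels and enlarging the feature vector.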
Affiliation(s)
- Violeta L. Calin
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Excellence Center for Research in Biophysics and Cellular Biotechnology, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Mona Mihailescu
- Digital Holography Imaging and Processing Laboratory, Physics Department, Faculty of Applied Sciences, National University for Science and Technology Politehnica Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
- Centre for Fundamental Sciences Applied in Engineering, National University for Science and Technology Politehnica Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
- George E.D. Petrescu
- Department of Neurosurgery, “Bagdasar-Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Department of Neurosurgery, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Mihai Gheorghe Lisievici
- Department of Pathology, “Bagdasar Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Nicolae Tarba
- Doctoral School of Automatic Control and Computers, National University for Science and Technology Politehnica Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
- Daniel Calin
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Victor Gabriel Ungureanu
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Diana Pasov
- Department of Pathology, “Bagdasar Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Felix M. Brehar
- Department of Neurosurgery, “Bagdasar-Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Department of Neurosurgery, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Radu M. Gorgan
- Department of Neurosurgery, “Bagdasar-Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Department of Neurosurgery, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Mihaela G. Moisescu
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Excellence Center for Research in Biophysics and Cellular Biotechnology, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Tudor Savopol
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Excellence Center for Research in Biophysics and Cellular Biotechnology, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
2
Xin L, Xiao X, Xiao W, Peng R, Wang H, Pan F. Screening for urothelial carcinoma cells in urine based on digital holographic flow cytometry through machine learning and deep learning methods. Lab Chip 2024; 24:2736-2746. [PMID: 38660758; DOI: 10.1039/d3lc00854a]
Abstract
The incidence of urothelial carcinoma continues to rise annually, particularly among the elderly. Prompt diagnosis and treatment can significantly enhance patient survival and quality of life. Urine cytology remains a widely used early screening method for urothelial carcinoma, but it is limited by modest sensitivity, labor-intensive procedures, and elevated cost. In recent developments, microfluidic chip technology offers an effective and efficient approach for clinical urine specimen analysis. Digital holographic microscopy, a form of quantitative phase imaging, captures extensive data on the refractive index and thickness of cells. The combination of microfluidic chips and digital holographic microscopy facilitates high-throughput imaging of live cells without staining. In this study, digital holographic flow cytometry was employed to rapidly capture images of the diverse cell types present in urine and to reconstruct high-precision quantitative phase images for each cell type. Various machine learning algorithms and deep learning models were then applied to categorize these cell images, achieving remarkable accuracy in cancer cell identification. This research suggests that the integration of digital holographic flow cytometry with artificial intelligence algorithms offers a promising, precise, and convenient approach for early screening of urothelial carcinoma.
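The phase maps mentioned above encode physically meaningful quantities beyond pixel intensity. One standard example (not specific to this paper) is cell dry mass, obtained from the integrated phase via the specific refractive increment; the wavelength, pixel size, and alpha value below are assumed illustrative numbers.

```python
import numpy as np

def dry_mass_pg(phase, wavelength_um=0.633, pixel_area_um2=0.25,
                alpha_um3_per_pg=0.19):
    """Cell dry mass from a quantitative phase image (phase in radians),
    using the standard QPI relation M = (lambda / (2*pi*alpha)) * sum(phase) * A_pixel,
    with alpha ~0.19 um^3/pg, the typical specific refractive increment of protein."""
    opd = phase * wavelength_um / (2 * np.pi)  # optical path difference per pixel, um
    return opd.sum() * pixel_area_um2 / alpha_um3_per_pg

# Hypothetical cell: uniform 1-rad phase over a 20x20-pixel region
phase = np.zeros((64, 64))
phase[20:40, 20:40] = 1.0
m = dry_mass_pg(phase)
print(m)
```

Features of this kind, computed per cell, are the typical inputs to the machine learning classifiers described in the abstract.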
Affiliation(s)
- Lu Xin
- Key Laboratory of Precision Opto-mechatronics Technology, Beihang University, Beijing 100191, China.
- Xi Xiao
- Peking University Third Hospital, Department of Radiation Oncology, Beijing 100191, China.
- Wen Xiao
- Key Laboratory of Precision Opto-mechatronics Technology, Beihang University, Beijing 100191, China.
- Ran Peng
- Peking University Third Hospital, Department of Radiation Oncology, Beijing 100191, China.
- Hao Wang
- Peking University Third Hospital, Department of Radiation Oncology, Beijing 100191, China.
- Peking University Third Hospital, Cancer Center, Beijing 100191, China
- Feng Pan
- Key Laboratory of Precision Opto-mechatronics Technology, Beihang University, Beijing 100191, China.
3
Dong H, Lin J, Tao Y, Jia Y, Sun L, Li WJ, Sun H. AI-enhanced biomedical micro/nanorobots in microfluidics. Lab Chip 2024; 24:1419-1440. [PMID: 38174821; DOI: 10.1039/d3lc00909b]
Abstract
The human body encompasses sophisticated microcirculation and microenvironments, incorporating a broad spectrum of microfluidic systems that play fundamental roles in orchestrating physiological mechanisms. In vitro recapitulation of human microenvironments based on lab-on-a-chip technology represents a critical paradigm for better understanding these intricate mechanisms. Moreover, the advent of micro/nanorobotics provides brand-new perspectives and dynamic tools for elucidating complex processes in microfluidics. Currently, artificial intelligence (AI) has endowed micro/nanorobots (MNRs) with unprecedented benefits in material synthesis, optimal design, fabrication, and swarm behavior. Using advanced AI algorithms, the motion control, environment perception, and swarm intelligence of MNRs in microfluidics are significantly enhanced. This emerging interdisciplinary research trend holds great potential to propel biomedical research to the forefront and make valuable contributions to human health. Herein, we first introduce the AI algorithms integral to the development of MNRs. We briefly revisit the components, designs, and fabrication techniques adopted by robots in microfluidics, with an emphasis on the application of AI. We then review the latest research on AI-enhanced MNRs, focusing on their motion control, sensing abilities, and intricate collective behavior in microfluidics. Furthermore, we spotlight biomedical domains that are already witnessing, or will undergo, game-changing evolution based on AI-enhanced MNRs. Finally, we identify the current challenges that hinder the practical use of this pioneering interdisciplinary technology.
Affiliation(s)
- Hui Dong
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China.
- School of Mechatronics Engineering, Harbin Institute of Technology, Harbin, China
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Jiawen Lin
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China.
- Yihui Tao
- Department of Automation Control and System Engineering, University of Sheffield, Sheffield, UK
- Yuan Jia
- Sino-German College of Intelligent Manufacturing, Shenzhen Technology University, Shenzhen, China
- Lining Sun
- School of Mechatronics Engineering, Harbin Institute of Technology, Harbin, China
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Wen Jung Li
- Department of Mechanical Engineering, City University of Hong Kong, Hong Kong, China
- Hao Sun
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China.
- School of Mechatronics Engineering, Harbin Institute of Technology, Harbin, China
- Research Center of Aerospace Mechanism and Control, Harbin Institute of Technology, Harbin, China
4
Ciaparrone G, Pirone D, Fiore P, Xin L, Xiao W, Li X, Bardozzo F, Bianco V, Miccio L, Pan F, Memmolo P, Tagliaferri R, Ferraro P. Label-free cell classification in holographic flow cytometry through an unbiased learning strategy. Lab Chip 2024; 24:924-932. [PMID: 38264771; DOI: 10.1039/d3lc00385j]
Abstract
Nowadays, label-free imaging flow cytometry at the single-cell level is considered the next step forward in lab-on-a-chip technology for addressing challenges in clinical diagnostics, biology, life sciences and healthcare. In this framework, digital holography in microscopy promises to be a powerful imaging modality thanks to its multi-refocusing and label-free quantitative phase imaging capabilities, along with the encoding of the highest information content within the imaged samples. Moreover, recent achievements in data analysis tools for cell classification based on deep/machine learning, combined with holographic imaging, are urging these systems toward the effective implementation of point-of-care devices. However, the generalization capabilities of learning-based models may be limited by biases caused by data obtained from other holographic imaging settings and/or different processing approaches. In this paper, we propose a combination of a Mask R-CNN to detect the cells; a convolutional auto-encoder for image feature extraction, which operates on unlabelled data and thus overcomes the bias due to data coming from different experimental settings; and a feedforward neural network that classifies single cells from the extracted features. We demonstrate the proposed approach on the challenging classification task of identifying drug-resistant endometrial cancer cells.
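The core idea of the last two stages (an unsupervised feature extractor trained without labels, followed by a supervised feedforward classifier on the extracted features) can be sketched with scikit-learn stand-ins. This is not the authors' Mask R-CNN/convolutional pipeline: the "autoencoder" here is a plain MLP trained to reconstruct its input, and the data are synthetic flattened crops.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in for flattened single-cell phase crops, two classes
X = np.vstack([rng.normal(0.0, 1.0, (200, 100)),
               rng.normal(0.5, 1.0, (200, 100))])
y = np.array([0]*200 + [1]*200)

# Stage 1: unsupervised feature extraction, an autoencoder trained to
# reconstruct its input (labels are never used here).
ae = MLPRegressor(hidden_layer_sizes=(16,), activation='relu',
                  max_iter=500, random_state=0).fit(X, X)
codes = np.maximum(0.0, X @ ae.coefs_[0] + ae.intercepts_[0])  # hidden activations

# Stage 2: supervised feedforward classifier on the learned features.
Xtr, Xte, ytr, yte = train_test_split(codes, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(Xtr, ytr)
score = clf.score(Xte, yte)
print(score)
```

Because only stage 2 sees labels, the extractor can in principle be reused across acquisition settings, which is the bias-reduction argument made in the abstract.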
Affiliation(s)
- Gioele Ciaparrone
- Neurone Lab, Department of Management and Innovation Systems (DISA-MIS), University of Salerno, Fisciano, Italy.
- Daniele Pirone
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
- Pierpaolo Fiore
- Neurone Lab, Department of Management and Innovation Systems (DISA-MIS), University of Salerno, Fisciano, Italy.
- Lu Xin
- Key Laboratory of Precision Opto-Mechatronics Technology of Ministry of Education, School of Instrumentation Science & Optoelectronics Engineering, Beihang University, 100191 Beijing, China.
- Wen Xiao
- Key Laboratory of Precision Opto-Mechatronics Technology of Ministry of Education, School of Instrumentation Science & Optoelectronics Engineering, Beihang University, 100191 Beijing, China.
- Xiaoping Li
- Department of Obstetrics and Gynecology, Peking University People's Hospital, Beijing 100044, China
- Francesco Bardozzo
- Neurone Lab, Department of Management and Innovation Systems (DISA-MIS), University of Salerno, Fisciano, Italy.
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
- Vittorio Bianco
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
- Lisa Miccio
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
- Feng Pan
- Key Laboratory of Precision Opto-Mechatronics Technology of Ministry of Education, School of Instrumentation Science & Optoelectronics Engineering, Beihang University, 100191 Beijing, China.
- Pasquale Memmolo
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
- Roberto Tagliaferri
- Neurone Lab, Department of Management and Innovation Systems (DISA-MIS), University of Salerno, Fisciano, Italy.
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
- Pietro Ferraro
- CNR - Institute of Applied Sciences and Intelligent Systems "Eduardo Caianiello", Pozzuoli, Italy.
5
Jan M, Spangaro A, Lenartowicz M, Mattiazzi Usaj M. From pixels to insights: Machine learning and deep learning for bioimage analysis. Bioessays 2024; 46:e2300114. [PMID: 38058114; DOI: 10.1002/bies.202300114]
Abstract
Bioimage analysis plays a critical role in extracting information from biological images, enabling deeper insights into cellular structures and processes. The integration of machine learning and deep learning techniques has revolutionized the field, enabling the automated, reproducible, and accurate analysis of biological images. Here, we provide an overview of the history and principles of machine learning and deep learning in the context of bioimage analysis. We discuss the essential steps of the bioimage analysis workflow, emphasizing how machine learning and deep learning have improved preprocessing, segmentation, feature extraction, object tracking, and classification. We provide examples that showcase the application of machine learning and deep learning in bioimage analysis. We examine user-friendly software and tools that enable biologists to leverage these techniques without extensive computational expertise. This review is a resource for researchers seeking to incorporate machine learning and deep learning in their bioimage analysis workflows and enhance their research in this rapidly evolving field.
Affiliation(s)
- Mahta Jan
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Allie Spangaro
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Michelle Lenartowicz
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Mojca Mattiazzi Usaj
- Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
6
Mirsky SK, Shaked NT. Six-pack holography for dynamic profiling of thick and extended objects by simultaneous three-wavelength phase unwrapping with doubled field of view. Sci Rep 2023; 13:19293. [PMID: 37935758; PMCID: PMC10630357; DOI: 10.1038/s41598-023-45237-6]
Abstract
Dynamic holographic profiling of thick samples is limited by the reduced field of view (FOV) of off-axis holography. We present an improved six-pack holography system for the simultaneous acquisition of six complex wavefronts in a single camera exposure from two fields of view (FOVs) and three wavelengths, for quantitative phase unwrapping of thick and extended transparent objects. By dynamically generating three synthetic-wavelength quantitative phase maps for each of the two FOVs, with the longest synthetic wavelength being 6207 nm, hierarchical phase unwrapping can be used to reduce noise while retaining the relaxed 2π phase-ambiguity limit afforded by the longer synthetic wavelength. The system was tested on a 7 μm tall PDMS microchannel and is shown to produce quantitative phase maps with 96% accuracy, while the hierarchical unwrapping reduces noise by 93%. A monolayer of live onion epidermal tissue was also successfully scanned, demonstrating the potential of the system to decrease the scanning time of optically thick and extended samples.
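The synthetic wavelength that makes multi-wavelength unwrapping work is the beat wavelength of two illumination wavelengths. The relation below is the standard one; the 633/532 nm pair is an illustrative example, not the lasers used in the paper.

```python
# Synthetic (beat) wavelength of two illumination wavelengths:
#   Lambda = l1 * l2 / |l1 - l2|
# Phase measured at Lambda stays unambiguous for samples much thicker than
# either single wavelength allows, at the cost of amplified phase noise,
# which hierarchical unwrapping then removes.
def synthetic_wavelength(l1, l2):
    return l1 * l2 / abs(l1 - l2)

lam = synthetic_wavelength(633.0, 532.0)  # nm, hypothetical laser pair
print(lam)
```

With three wavelengths, three pairwise synthetic wavelengths are available, which is what enables the hierarchical (coarse-to-fine) unwrapping described above.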
Affiliation(s)
- Simcha K Mirsky
- Department of Biomedical Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Natan T Shaked
- Department of Biomedical Engineering, Tel Aviv University, 69978, Tel Aviv, Israel.
7
Park J, Bai B, Ryu D, Liu T, Lee C, Luo Y, Lee MJ, Huang L, Shin J, Zhang Y, Ryu D, Li Y, Kim G, Min HS, Ozcan A, Park Y. Artificial intelligence-enabled quantitative phase imaging methods for life sciences. Nat Methods 2023; 20:1645-1660. [PMID: 37872244; DOI: 10.1038/s41592-023-02041-4]
Abstract
Quantitative phase imaging, integrated with artificial intelligence, allows for the rapid and label-free investigation of the physiology and pathology of biological systems. This review presents the principles of various two-dimensional and three-dimensional label-free phase imaging techniques that exploit refractive index as an intrinsic optical imaging contrast. In particular, we discuss artificial intelligence-based analysis methodologies for biomedical studies including image enhancement, segmentation of cellular or subcellular structures, classification of types of biological samples and image translation to furnish subcellular and histochemical information from label-free phase images. We also discuss the advantages and challenges of artificial intelligence-enabled quantitative phase imaging analyses, summarize recent notable applications in the life sciences, and cover the potential of this field for basic and industrial research in the life sciences.
Affiliation(s)
- Juyeon Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- DongHun Ryu
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Chungha Lee
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Mahn Jae Lee
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Jeongwon Shin
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Department of Biological Sciences, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Yuzhu Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA
- Geon Kim
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA, USA.
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA, USA.
- YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea.
- Tomocube, Daejeon, Republic of Korea.
8
Strack C, Pomykala KL, Schlemmer HP, Egger J, Kleesiek J. "A net for everyone": fully personalized and unsupervised neural networks trained with longitudinal data from a single patient. BMC Med Imaging 2023; 23:174. [PMID: 37907876; PMCID: PMC10619304; DOI: 10.1186/s12880-023-01128-w]
Abstract
BACKGROUND With the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of this study is to provide a proof of concept that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. METHODS Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. The contrast-enhanced T1w sequences of brain magnetic resonance imaging (MRI) were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images; the change in tumor volume can be calculated from this map. The neural networks were a form of Wasserstein-GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip co-registration of the images. Furthermore, no additional training data, pre-training of the networks, or any (manual) annotations are necessary. RESULTS The model achieved an AUC score of 0.87 for tumor change. We also introduced a modified RANO criterion, for which an accuracy of 66% was achieved. CONCLUSIONS We present a novel deep learning approach that uses data from just one patient to train networks that monitor tumor change. Evaluating the results on two different datasets shows the method's potential to generalize.
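The Wasserstein-GAN named in the methods replaces the classic GAN's log-loss with a critic that estimates a Wasserstein distance between real and generated distributions. The snippet below sketches only those standard objectives, not the authors' per-patient architecture; the toy critic outputs are made-up numbers.

```python
import numpy as np

def wgan_losses(critic_real, critic_fake):
    """Standard Wasserstein-GAN objectives: the critic maximizes
    E[f(real)] - E[f(fake)] (so its minimized loss is the negative),
    while the generator minimizes -E[f(fake)]."""
    critic_loss = -(np.mean(critic_real) - np.mean(critic_fake))
    gen_loss = -np.mean(critic_fake)
    return critic_loss, gen_loss

# Toy critic scores on a batch of real and generated images
c_loss, g_loss = wgan_losses(np.array([1.0, 2.0]), np.array([0.0, 1.0]))
print(c_loss, g_loss)
```

In practice the critic must also be kept approximately 1-Lipschitz (weight clipping or a gradient penalty), which this sketch omits.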
Affiliation(s)
- Christian Strack
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131, Essen, Germany.
- Division of Radiology, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany.
- Medical Faculty Heidelberg, Heidelberg University, 69120, Heidelberg, Germany.
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131, Essen, Germany
- Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147, Essen, Germany
- Jan Egger
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131, Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147, Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131, Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147, Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147, Essen, Germany
- Department of Physics, TU Dortmund University, Otto-Hahn-Straße 4, D-44227, Dortmund, Germany
9
Pozzi P, Candeo A, Paiè P, Bragheri F, Bassi A. Artificial intelligence in imaging flow cytometry. Front Bioinform 2023; 3:1229052. [PMID: 37877042; PMCID: PMC10593470; DOI: 10.3389/fbinf.2023.1229052]
Affiliation(s)
- Paolo Pozzi
- Department of Physics, Politecnico di Milano, Milano, Italy
- Alessia Candeo
- Department of Physics, Politecnico di Milano, Milano, Italy
- Petra Paiè
- Department of Physics, Politecnico di Milano, Milano, Italy
- Francesca Bragheri
- Institute for Photonics and Nanotechnologies, Consiglio Nazionale delle Ricerche, Milano, Italy
- Andrea Bassi
- Department of Physics, Politecnico di Milano, Milano, Italy
10
Pérez-Cota F, Martínez-Arellano G, La Cavera S, Hardiman W, Thornton L, Fuentes-Domínguez R, Smith RJ, McIntyre A, Clark M. Classification of cancer cells at the sub-cellular level by phonon microscopy using deep learning. Sci Rep 2023; 13:16228. [PMID: 37758808; PMCID: PMC10533877; DOI: 10.1038/s41598-023-42793-9]
Abstract
There is a consensus about the strong correlation between the elasticity of cells and tissue and their normal, dysplastic, and cancerous states. However, advances in cell mechanics have not yet translated into significant clinical applications. In this work, we explore the possibility of using phonon acoustics for this purpose. We used phonon microscopy to obtain a measure of the elastic properties of cancerous and normal breast cells. Utilising the raw time-resolved phonon-derived data (300 k individual inputs), we employed a deep learning technique to differentiate between MDA-MB-231 and MCF10a cell lines. We achieved 93% accuracy using a single phonon measurement in a volume of approximately 2.5 μm3. We also investigated means for classification based on a physical model, which suggests the presence of unidentified mechanical markers. We have successfully created a compact sensor design as a proof of principle, demonstrating its compatibility with needles and endoscopes and opening up exciting possibilities for future applications.
Affiliation(s)
- Fernando Pérez-Cota
- Optics and Photonics Group, Faculty of Engineering, University of Nottingham, Nottingham, UK.
- Salvatore La Cavera
- Optics and Photonics Group, Faculty of Engineering, University of Nottingham, Nottingham, UK
- William Hardiman
- Optics and Photonics Group, Faculty of Engineering, University of Nottingham, Nottingham, UK
- Luke Thornton
- Biodiscovery Institute, Centre for Cancer Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- Richard J Smith
- Optics and Photonics Group, Faculty of Engineering, University of Nottingham, Nottingham, UK
- Alan McIntyre
- Biodiscovery Institute, Centre for Cancer Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- Matt Clark
- Optics and Photonics Group, Faculty of Engineering, University of Nottingham, Nottingham, UK
11
Pirone D, Montella A, Sirico D, Mugnano M, Del Giudice D, Kurelac I, Tirelli M, Iolascon A, Bianco V, Memmolo P, Capasso M, Miccio L, Ferraro P. Phenotyping neuroblastoma cells through intelligent scrutiny of stain-free biomarkers in holographic flow cytometry. APL Bioeng 2023; 7:036118. [PMID: 37753527; PMCID: PMC10519746; DOI: 10.1063/5.0159399]
Abstract
To efficiently tackle certain tumor types, new biomarkers for rapid and complete phenotyping of cancer cells are in high demand. This is especially the case for the most common pediatric solid tumor of the sympathetic nervous system, namely, neuroblastoma (NB). Liquid biopsy is in principle a very promising tool for this purpose, but enrichment and isolation of circulating tumor cells in such patients usually remain difficult due to the unavailability of universal NB cell-specific surface markers. Here, we show that rapid screening and phenotyping of NB cells through stain-free biomarkers supported by artificial intelligence is a viable route for liquid biopsy. We demonstrate the concept through flow cytometry based on label-free holographic quantitative phase-contrast microscopy empowered by machine learning. In detail, we exploit a hierarchical decision scheme in which, at the first level, NB cells are classified from monocytes with 97.9% accuracy. Then we demonstrate that different phenotypes are discriminated within the NB class: each cell classified as NB is assigned to one of four NB sub-populations (i.e., CHP212, SKNBE2, SHSY5Y, and SKNSH), achieving accuracies in the range 73.6%-89.1%. These results address the core issue of liquid biopsy based on stain-free microscopy, i.e., recognizing and detecting circulating tumor cells that are morphologically similar to blood cells. The presented approach operates at lab-on-chip scale and emulates real-world scenarios, thus representing a future route for liquid biopsy by exploiting intelligent biomedical imaging.
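The hierarchical decision scheme described above (first tumor cell vs. monocyte, then subtype only for cells called as tumor) can be sketched as two chained classifiers. The logistic regressions, Gaussian features, and class centers below are illustrative stand-ins, not the paper's models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_class(center, n=60, d=8):
    """Synthetic 8-dimensional morphometric feature vectors for one class."""
    return rng.normal(center, 1.0, (n, d))

# Stand-ins: monocytes plus four hypothetical NB sub-populations
mono = make_class(0.0)
subs = [make_class(c) for c in (2.0, 3.0, 4.0, 5.0)]

# Level 1: NB cell vs. monocyte
X1 = np.vstack([mono] + subs)
y1 = np.array([0]*60 + [1]*240)
stage1 = LogisticRegression(max_iter=1000).fit(X1, y1)

# Level 2: sub-population label, trained only on NB cells
X2 = np.vstack(subs)
y2 = np.repeat(np.arange(4), 60)
stage2 = LogisticRegression(max_iter=1000).fit(X2, y2)

def classify(x):
    """Hierarchical decision: first NB vs. monocyte, then NB subtype."""
    if stage1.predict(x.reshape(1, -1))[0] == 0:
        return 'monocyte'
    return f'NB-subtype-{stage2.predict(x.reshape(1, -1))[0]}'

print(classify(np.zeros(8)), classify(np.full(8, 5.0)))
```

Splitting the decision this way lets each stage be tuned (and its accuracy reported) separately, matching the 97.9% / 73.6%-89.1% breakdown quoted in the abstract.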
Affiliation(s)
- Daniele Sirico
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello,” via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Martina Mugnano
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello,” via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Danila Del Giudice
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello,” via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Vittorio Bianco
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello,” via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Pasquale Memmolo
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello,” via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Mario Capasso
- Authors to whom correspondence should be addressed
- Lisa Miccio
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello,” via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
- Pietro Ferraro
- CNR-ISASI, Institute of Applied Sciences and Intelligent Systems “E. Caianiello,” via Campi Flegrei 34, 80078 Pozzuoli, Napoli, Italy
12
Bazow B, Phan T, Raub CB, Nehmetallah G. Three-dimensional refractive index estimation based on deep-inverse non-interferometric optical diffraction tomography (ODT-Deep). Opt Express 2023; 31:28382-28399. [PMID: 37710893] [DOI: 10.1364/oe.491707]
Abstract
Optical diffraction tomography (ODT) solves an inverse scattering problem to obtain label-free, 3D refractive index (RI) estimation of biological specimens. This work demonstrates 3D RI retrieval methods suitable for partially-coherent ODT systems supported by intensity-only measurements consisting of axial and angular illumination scanning. This framework allows for access to 3D quantitative RI contrast using a simplified non-interferometric technique. We consider a traditional iterative tomographic solver based on a multiple in-plane representation of the optical scattering process and gradient descent optimization adapted for focus-scanning systems, as well as an approach that relies solely on 3D convolutional neural networks (CNNs) to invert the scattering process. The approaches are validated using simulations of the 3D scattering potential for weak phase 3D biological samples.
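The iterative tomographic solver mentioned above couples a forward scattering model with gradient descent on an intensity misfit. As a minimal stand-in, assuming a toy linear, well-conditioned forward operator in place of the real multi-slice scattering model, the inversion loop looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "forward model" standing in for the scattering operator;
# orthogonalized so that plain gradient descent converges quickly.
n = 8
A, _ = np.linalg.qr(rng.normal(size=(n, n)))
x_true = rng.normal(size=n)      # toy refractive-index contrast
y = A @ x_true                   # noiseless "measurement"

# Gradient descent on the misfit 0.5 * ||A x - y||^2
x = np.zeros(n)
lr = 0.1
for _ in range(500):
    x -= lr * (A.T @ (A @ x - y))

print(np.allclose(x, x_true, atol=1e-6))   # prints True
```

The real ODT problem replaces `A` with a nonlinear, focus-scanning propagation model, but the update structure (forward model, residual, adjoint, step) is the same.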
13
Dudaie M, Barnea I, Nissim N, Shaked NT. On-chip label-free cell classification based directly on off-axis holograms and spatial-frequency-invariant deep learning. Sci Rep 2023; 13:12370. [PMID: 37524884] [PMCID: PMC10390541] [DOI: 10.1038/s41598-023-38160-3]
Abstract
We present a rapid label-free imaging flow cytometry and cell classification approach based directly on raw digital holograms. Off-axis holography enables real-time acquisition of cells during rapid flow. However, classification of the cells typically requires reconstruction of their quantitative phase profiles, which is time-consuming. Here, we present a new approach for label-free classification of individual cells based directly on the raw off-axis holographic images, each of which contains the complete complex wavefront (amplitude and quantitative phase profiles) of the cell. To obtain this, we built a convolutional neural network, which is invariant to the spatial frequencies and directions of the interference fringes of the off-axis holograms. We demonstrate the effectiveness of this approach using four types of cancer cells. This approach has the potential to significantly improve both speed and robustness of imaging flow cytometry, enabling real-time label-free classification of individual cells.
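One way to obtain the spatial-frequency invariance described above is to train on holograms whose carrier fringes vary in frequency and direction. The snippet below synthesizes such toy off-axis holograms of a Gaussian phase object; the carrier ranges and the object itself are arbitrary illustrative choices, not the authors' data or method:

```python
import numpy as np

rng = np.random.default_rng(2)

def off_axis_hologram(size=64, fx=0.1, fy=0.0):
    """Toy off-axis hologram: interference of a tilted reference beam
    with an object beam carrying a Gaussian phase bump."""
    y, x = np.mgrid[0:size, 0:size]
    phase = 2.0 * np.exp(-((x - size / 2) ** 2 + (y - size / 2) ** 2) / (2 * 8.0 ** 2))
    carrier = 2 * np.pi * (fx * x + fy * y)
    return 1.0 + np.cos(carrier + phase)      # intensity in [0, 2]

def augmented_batch(n):
    """Same object, random fringe frequency and direction per sample,
    so a classifier cannot latch onto one carrier."""
    batch = []
    for _ in range(n):
        f = rng.uniform(0.05, 0.25)
        theta = rng.uniform(0, np.pi)
        batch.append(off_axis_hologram(fx=f * np.cos(theta), fy=f * np.sin(theta)))
    return np.stack(batch)

holos = augmented_batch(4)
print(holos.shape)            # (4, 64, 64)
```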
Affiliation(s)
- Matan Dudaie
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Itay Barnea
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Noga Nissim
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
- Natan T Shaked
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, 69978, Tel Aviv, Israel
14
Storås AM, Andersen OE, Lockhart S, Thielemann R, Gnesin F, Thambawita V, Hicks SA, Kanters JK, Strümke I, Halvorsen P, Riegler MA. Usefulness of Heat Map Explanations for Deep-Learning-Based Electrocardiogram Analysis. Diagnostics (Basel) 2023; 13:2345. [PMID: 37510089] [PMCID: PMC10378376] [DOI: 10.3390/diagnostics13142345]
Abstract
Deep neural networks are complex machine learning models that have shown promising results in analyzing high-dimensional data such as those collected from medical examinations. Such models have the potential to provide fast and accurate medical diagnoses. However, their high complexity makes deep neural networks and their predictions difficult to understand. Providing model explanations can be a way of increasing the understanding of "black box" models and building trust. In this work, we applied transfer learning to develop a deep neural network to predict sex from electrocardiograms. Using the visual explanation method Grad-CAM, heat maps were generated from the model in order to understand how it makes predictions. Medical doctors then provided feedback to evaluate the usefulness of the heat maps and to determine whether they identified electrocardiogram features that could be recognized to discriminate sex. Based on the feedback, we concluded that, in our setting, this mode of explainable artificial intelligence does not provide meaningful information to medical doctors and is not useful in the clinic. Our results indicate that improved explanation techniques, tailored to medical data, should be developed before deep neural networks can be applied in the clinic for diagnostic purposes.
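The core of Grad-CAM is to weight each feature map of the last convolutional layer by the spatial mean of the gradient of the target score, sum over channels, and rectify. With random stand-ins for the activations and gradients (rather than a trained ECG model), the computation reduces to:

```python
import numpy as np

rng = np.random.default_rng(3)

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (channels, H, W) from the last conv layer.
    Returns a (H, W) non-negative localization map."""
    alphas = gradients.mean(axis=(1, 2))                  # one weight per channel
    cam = np.tensordot(alphas, feature_maps, axes=1)      # weighted sum over channels
    return np.maximum(cam, 0.0)                           # ReLU keeps positive evidence

A = rng.normal(size=(16, 8, 8))    # stand-in activations
G = rng.normal(size=(16, 8, 8))    # stand-in gradients d(score)/dA
cam = grad_cam(A, G)
print(cam.shape, bool(cam.min() >= 0.0))
```

In practice the map is upsampled to the input resolution and overlaid on the signal, which is what the clinicians in the study were shown.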
Affiliation(s)
- Andrea M Storås
- Department of Holistic Systems, Simula Metropolitan Center for Digital Engineering, 0167 Oslo, Norway
- Department of Computer Science, Oslo Metropolitan University, 0130 Oslo, Norway
- Ole Emil Andersen
- Department of Public Health, Aarhus University, 8000 Aarhus, Denmark
- Steno Diabetes Center, Aarhus University, 8000 Aarhus, Denmark
- Sam Lockhart
- Wellcome Trust-Medical Research Council Institute of Metabolic Science, University of Cambridge, Cambridge CB2 0QQ, UK
- Roman Thielemann
- Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen, 2200 Copenhagen, Denmark
- Filip Gnesin
- Department of Cardiology, North Zealand Hospital, 3400 Hillerød, Denmark
- Vajira Thambawita
- Department of Holistic Systems, Simula Metropolitan Center for Digital Engineering, 0167 Oslo, Norway
- Steven A Hicks
- Department of Holistic Systems, Simula Metropolitan Center for Digital Engineering, 0167 Oslo, Norway
- Jørgen K Kanters
- Department of Biomedical Sciences, University of Copenhagen, 2200 Copenhagen, Denmark
- Inga Strümke
- Department of Computer Science, Norwegian University of Science and Technology, 7491 Trondheim, Norway
- Pål Halvorsen
- Department of Holistic Systems, Simula Metropolitan Center for Digital Engineering, 0167 Oslo, Norway
- Department of Computer Science, Oslo Metropolitan University, 0130 Oslo, Norway
- Michael A Riegler
- Department of Holistic Systems, Simula Metropolitan Center for Digital Engineering, 0167 Oslo, Norway
- Department of Computer Science, UiT The Arctic University of Norway, 9037 Tromsø, Norway
15
Pérez E, Ventura S. Progressive growing of Generative Adversarial Networks for improving data augmentation and skin cancer diagnosis. Artif Intell Med 2023; 141:102556. [PMID: 37295899] [DOI: 10.1016/j.artmed.2023.102556]
Abstract
Early melanoma diagnosis is the most important factor in the treatment of skin cancer and can effectively reduce mortality rates. Recently, Generative Adversarial Networks have been used to augment data, prevent overfitting, and improve the diagnostic capacity of models. However, their application remains challenging due to the high levels of inter- and intra-class variance seen in skin images, limited amounts of data, and model instability. We present a more robust Progressive Growing of Generative Adversarial Networks based on residual learning, which eases the training of deep networks. The stability of the training process was increased by receiving additional inputs from preceding blocks. The architecture is able to produce plausible photorealistic synthetic 512 × 512 skin images, even with small dermoscopic and non-dermoscopic skin image datasets as problem domains; in this manner, we tackle the lack of data and the imbalance problem. Additionally, the proposed approach leverages a skin lesion boundary segmentation algorithm and transfer learning to enhance the diagnosis of melanoma. The Inception score and Matthews correlation coefficient were used to measure the performance of the models. The architecture was evaluated qualitatively and quantitatively through an extensive experimental study on sixteen datasets, illustrating its effectiveness in the diagnosis of melanoma. Finally, it significantly outperformed four state-of-the-art data augmentation techniques applied to five convolutional neural network models. The results indicate that a larger number of trainable parameters does not necessarily yield better performance in melanoma diagnosis.
Affiliation(s)
- Eduardo Pérez
- Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Córdoba, Córdoba, Spain; Maimónides Biomedical Research Institute of Córdoba (IMIBIC), University of Córdoba, Córdoba, Spain
- Sebastián Ventura
- Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Córdoba, Córdoba, Spain; Maimónides Biomedical Research Institute of Córdoba (IMIBIC), University of Córdoba, Córdoba, Spain
16
Valentino M, Sirico DG, Memmolo P, Miccio L, Bianco V, Ferraro P. Digital holographic approaches to the detection and characterization of microplastics in water environments. Appl Opt 2023; 62:D104-D118. [PMID: 37132775] [DOI: 10.1364/ao.478700]
Abstract
Microplastic (MP) pollution is seriously threatening the environmental health of the world, which has accelerated the development of new identification and characterization methods. Digital holography (DH) is one of the emerging tools to detect MPs in a high-throughput flow. Here, we review advances in MP screening by DH, examining the problem from both the hardware and software viewpoints. Automatic analysis based on smart DH processing is reported, highlighting the role played by artificial intelligence in classification and regression tasks. In this framework, the continuous development and availability in recent years of field-portable holographic flow cytometers for water monitoring are also discussed.
17
Sharafudeen M, Andrew J, Vinod Chandra SS. Leveraging Vision Attention Transformers for Detection of Artificially Synthesized Dermoscopic Lesion Deepfakes Using Derm-CGAN. Diagnostics (Basel) 2023; 13:825. [PMID: 36899969] [PMCID: PMC10001347] [DOI: 10.3390/diagnostics13050825]
Abstract
Synthesized multimedia is an open concern that has received too little attention in the scientific community. In recent years, generative models have been utilized to create deepfakes in medical imaging modalities. We investigate the generation and detection of synthesized dermoscopic skin lesion images by leveraging Conditional Generative Adversarial Networks and state-of-the-art Vision Transformers (ViT). The Derm-CGAN is architected for the realistic generation of six different dermoscopic skin lesions. Analysis of the similarity between real and synthesized fakes revealed a high correlation. Further, several ViT variations were investigated to distinguish between real and fake lesions. The best-performing model achieved an accuracy of 97.18%, a margin of over 7% above the second-best-performing network. The trade-offs of the proposed model compared to other networks, as well as on a benchmark face dataset, were critically analyzed in terms of computational complexity. This technology is capable of harming laymen through medical misdiagnosis or insurance scams. Further research in this domain would assist physicians and the general public in countering and resisting deepfake threats.
Affiliation(s)
- Misaj Sharafudeen
- Department of Computer Science, University of Kerala, Kerala 695581, India
- Andrew J.
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Correspondence: (A.J.); (V.C.S.S.)
- Vinod Chandra S. S.
- Department of Computer Science, University of Kerala, Kerala 695581, India
- Correspondence: (A.J.); (V.C.S.S.)
18
Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023; 84:102704. [PMID: 36473414] [DOI: 10.1016/j.media.2022.102704]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Affiliation(s)
- Richard Osuala
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Kaisar Kushibar
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
19
Gangadhar A, Sari-Sarraf H, Vanapalli SA. Deep learning assisted holography microscopy for in-flow enumeration of tumor cells in blood. RSC Adv 2023; 13:4222-4235. [PMID: 36760296] [PMCID: PMC9892890] [DOI: 10.1039/d2ra07972k]
Abstract
Currently, detection of circulating tumor cells (CTCs) in cancer patient blood samples relies on immunostaining, which does not provide access to live CTCs, limiting the breadth of CTC-based applications. Here, we take the first steps to address this limitation, by demonstrating staining-free enumeration of tumor cells spiked into lysed blood samples using digital holographic microscopy (DHM), microfluidics and machine learning (ML). A 3D-printed module for laser assembly was developed to simplify the optical set up for holographic imaging of cells flowing through a sheath-based microfluidic device. Computational reconstruction of the holograms was performed to localize the cells in 3D and obtain the plane of best focus images to train deep learning models. We developed a custom-designed light-weight shallow network dubbed s-Net and compared its performance against off-the-shelf CNN models including ResNet-50. The accuracy, sensitivity and specificity of the s-Net model was found to be higher than the off-the-shelf ML models. By applying an optimized decision threshold to mixed samples prepared in silico, the false positive rate was reduced from 1 × 10⁻² to 2.77 × 10⁻⁴. Finally, the developed DHM-ML framework was successfully applied to enumerate spiked MCF-7 breast cancer cells and SkOV3 ovarian cancer cells from lysed blood samples containing white blood cells (WBCs) at concentrations typical of label-free enrichment techniques. We conclude by discussing the advances that need to be made to translate the DHM-ML approach to staining-free enumeration of actual CTCs in cancer patient blood samples.
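The optimized decision threshold mentioned above can be picked by sweeping classifier scores on a validation set until the false-positive rate on the negative (WBC) class falls below a target; the beta-distributed scores below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def threshold_for_fpr(neg_scores, target_fpr):
    """Smallest score threshold whose false-positive rate on the
    negative-class scores does not exceed target_fpr."""
    s = np.sort(neg_scores)
    k = int(np.ceil(len(s) * (1.0 - target_fpr)))
    return s[min(k, len(s) - 1)]

rng = np.random.default_rng(4)
wbc = rng.beta(2, 5, size=10000)       # synthetic negative-class (WBC) scores
tumor = rng.beta(5, 2, size=1000)      # synthetic positive-class (tumor) scores

thr = threshold_for_fpr(wbc, 1e-3)
fpr = float((wbc > thr).mean())
recall = float((tumor > thr).mean())
print(f"threshold={thr:.3f} fpr={fpr:.2e} recall={recall:.2f}")
```

Raising the threshold trades recall for a lower false-positive rate, which is the knob the in-silico mixed-sample experiment turns.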
Affiliation(s)
- Anirudh Gangadhar
- Department of Chemical Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Hamed Sari-Sarraf
- Department of Electrical and Computer Engineering, Texas Tech University, Lubbock, TX 79409, USA
- Siva A. Vanapalli
- Department of Chemical Engineering, Texas Tech University, Lubbock, TX 79409, USA
20
Zhang R, Han X, Lei Z, Jiang C, Gul I, Hu Q, Zhai S, Liu H, Lian L, Liu Y, Zhang Y, Dong Y, Zhang CY, Lam TK, Han Y, Yu D, Zhou J, Qin P. RCMNet: A deep learning model assists CAR-T therapy for leukemia. Comput Biol Med 2022; 150:106084. [PMID: 36155267] [DOI: 10.1016/j.compbiomed.2022.106084]
Abstract
Acute leukemia is a type of blood cancer with a high mortality rate. Current therapeutic methods include bone marrow transplantation, supportive therapy, and chemotherapy. Although a satisfactory remission of the disease can be achieved, the risk of recurrence is still high, so novel treatments are in demand. Chimeric antigen receptor-T (CAR-T) therapy has emerged as a promising approach to treating and curing acute leukemia. To harness the therapeutic potential of CAR-T cell therapy for blood diseases, reliable morphological identification of the cells is crucial. Nevertheless, the identification of CAR-T cells is a big challenge posed by their phenotypic similarity with other blood cells. To address this substantial clinical challenge, we first construct a CAR-T dataset with 500 original microscopy images after staining. We then create a novel integrated model called RCMNet (ResNet18 with Convolutional Block Attention Module and Multi-Head Self-Attention) that combines a convolutional neural network (CNN) with a Transformer. The model shows 99.63% top-1 accuracy on the public dataset, a satisfactory result for image classification compared with previous reports. When tested on the CAR-T cell dataset, however, performance decreases, which is attributed to the limited size of the dataset. Transfer learning is therefore adopted for RCMNet, and a maximum accuracy of 83.36% is achieved, higher than that of other state-of-the-art models. This study evaluates the effectiveness of RCMNet on a large public dataset and translates it to a clinical dataset for diagnostic applications.
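RCMNet's Convolutional Block Attention Module reweights feature channels before spatial attention is applied. A sketch of the channel-attention half, with random stand-ins for the learned MLP weights rather than the paper's trained parameters, is:

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """x: (C, H, W). A shared two-layer MLP is applied to avg- and
    max-pooled channel descriptors, summed, squashed to (0, 1),
    and used to rescale each channel of x."""
    avg = x.mean(axis=(1, 2))            # (C,)
    mx = x.max(axis=(1, 2))              # (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return x * att[:, None, None]

C, r = 8, 2                              # channels, reduction ratio
w1 = rng.normal(size=(C // r, C))        # squeeze
w2 = rng.normal(size=(C, C // r))        # excite
x = rng.normal(size=(C, 4, 4))
y = channel_attention(x, w1, w2)
print(y.shape)    # (8, 4, 4)
```

Because the attention weights lie in (0, 1), each channel is attenuated rather than amplified, letting the network emphasize informative feature maps.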
Affiliation(s)
- Ruitao Zhang
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Xueying Han
- The First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang 150001, China
- Zhengyang Lei
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Chenyao Jiang
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Ijaz Gul
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Qiuyue Hu
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Shiyao Zhai
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Hong Liu
- Animal and Plant Inspection and Quarantine Technical Centre, Shenzhen Customs District, Shenzhen, Guangdong 518045, China
- Lijin Lian
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Ying Liu
- Animal and Plant Inspection and Quarantine Technical Centre, Shenzhen Customs District, Shenzhen, Guangdong 518045, China
- Yongbing Zhang
- Department of Computer Science, Harbin Institute of Technology, Shenzhen, Guangdong 518055, China
- Yuhan Dong
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Can Yang Zhang
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Tsz Kwan Lam
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Yuxing Han
- Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
- Dongmei Yu
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, Shandong 264209, China
- Jin Zhou
- The First Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang 150001, China
- Peiwu Qin
- Institute of Biopharmaceutical and Health Engineering, Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong 518055, China; Precision Medicine and Public Health, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, Guangdong 518055, China
21
Zare Harofte S, Soltani M, Siavashy S, Raahemifar K. Recent Advances of Utilizing Artificial Intelligence in Lab on a Chip for Diagnosis and Treatment. Small 2022; 18:e2203169. [PMID: 36026569] [DOI: 10.1002/smll.202203169]
Abstract
Nowadays, artificial intelligence (AI) creates numerous promising opportunities in the life sciences. AI methods can be significantly advantageous for analyzing the massive datasets produced by biotechnology systems for biological and biomedical applications. Microfluidics, with developments in controlled reaction chambers, high-throughput arrays, and positioning systems, generates big data that are not always analyzed successfully. Integrating AI and microfluidics can pave the way for both experimental and analytical throughput in biotechnology research: microfluidics enhances the experimental methods and reduces cost and scale, while AI methods significantly improve the analysis of the huge datasets obtained from high-throughput and multiplexed microfluidics. This review briefly surveys the role of AI and microfluidics in biotechnology and comprehensively investigates their integration. Specifically, recent studies that perform flow cytometry cell classification, cell isolation, or a combination of the two by drawing on both AI methods and microfluidic techniques are covered. Despite the current challenges, various fields of biotechnology can be remarkably affected by the combination of AI and microfluidic technologies, including point-of-care systems, precision and personalized medicine, regenerative medicine, prognostics, diagnostics, and the treatment of oncology- and non-oncology-related diseases.
Affiliation(s)
- Samaneh Zare Harofte
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, 19967-15433, Iran
- Madjid Soltani
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, 19967-15433, Iran
- Department of Electrical and Computer Engineering, Faculty of Engineering, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
- Centre for Biotechnology and Bioengineering (CBB), University of Waterloo, Waterloo, ON, N2L 3G1, Canada
- Advanced Bioengineering Initiative Center, Multidisciplinary International Complex, K. N. Toosi University of Technology, Tehran, 14176-14411, Iran
- Cancer Biology Research Center, Cancer Institute of Iran, Tehran University of Medical Sciences, Tehran, 14197-33141, Iran
- Saeed Siavashy
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, 19967-15433, Iran
- Kaamran Raahemifar
- Data Science and Artificial Intelligence Program, College of Information Sciences and Technology (IST), Penn State University, State College, PA, 16801, USA
- School of Optometry and Vision Science, Faculty of Science, University of Waterloo, 200 University Ave. W, Waterloo, ON, N2L 3G1, Canada
- Department of Chemical Engineering, Faculty of Engineering, University of Waterloo, 200 University Ave. W, Waterloo, ON, N2L 3G1, Canada
22
Zhou X, Wang H, Feng C, Xu R, He Y, Li L, Tu C. Emerging Applications of Deep Learning in Bone Tumors: Current Advances and Challenges. Front Oncol 2022; 12:908873. [PMID: 35928860] [PMCID: PMC9345628] [DOI: 10.3389/fonc.2022.908873]
Abstract
Deep learning is a subfield of state-of-the-art artificial intelligence (AI) technology, and multiple deep learning-based AI models have been applied to musculoskeletal diseases. Deep learning has shown the capability to assist clinical diagnosis and prognosis prediction in a spectrum of musculoskeletal disorders, including fracture detection, cartilage and spinal lesion identification, and osteoarthritis severity assessment. Meanwhile, deep learning has also been extensively explored in diverse tumors such as prostate, breast, and lung cancers. Recently, applications of deep learning have emerged in bone tumors. A growing number of deep learning models have demonstrated good performance in detection, segmentation, classification, volume calculation, grading, and assessment of tumor necrosis rate in primary and metastatic bone tumors based on both radiological (such as X-ray, CT, MRI, SPECT) and pathological images, indicating the potential of deep learning for diagnosis assistance and prognosis prediction in bone tumors. In this review, we first summarize the workflows of deep learning methods in medical images and the current applications of deep learning-based AI for diagnosis and prognosis prediction in bone tumors. We then extensively discuss the current challenges in implementing deep learning methods and future perspectives in this field.
Affiliation(s)
- Xiaowen Zhou
  - Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
  - Xiangya School of Medicine, Central South University, Changsha, China
- Hua Wang
  - Xiangya School of Medicine, Central South University, Changsha, China
- Chengyao Feng
  - Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
  - Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
- Ruilin Xu
  - Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
  - Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
- Yu He
  - Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, China
- Lan Li
  - Department of Pathology, The Second Xiangya Hospital, Central South University, Changsha, China
- Chao Tu
  - Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
  - Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
- *Correspondence: Chao Tu
23
Automatic Cancer Cell Taxonomy Using an Ensemble of Deep Neural Networks. Cancers (Basel) 2022; 14:2224. PMID: 35565352; PMCID: PMC9100154; DOI: 10.3390/cancers14092224.
Abstract
Microscopic image-based analysis has been performed intensively for pathological studies and the diagnosis of diseases. However, mis-authentication of cell lines due to misjudgments by pathologists has been recognized as a serious problem. To address this problem, we propose a deep-learning-based approach for the automatic taxonomy of cancer cell types. A total of 889 bright-field microscopic images of four cancer cell lines were acquired using a benchtop microscope. Individual cells were further segmented and augmented to enlarge the image dataset. Afterward, deep transfer learning was adopted to accelerate the classification of cancer types. Experiments revealed that the deep-learning-based methods outperformed traditional machine-learning-based methods. Moreover, the Wilcoxon signed-rank test showed that deep ensemble approaches outperformed individual deep-learning-based models (p < 0.001), achieving a classification accuracy of up to 97.735%. Additional investigation with the Wilcoxon signed-rank test was conducted to consider various network design choices, such as the type of optimizer, the type of learning rate scheduler, the degree of fine-tuning, and the use of data augmentation. Finally, it was found that using data augmentation and updating all the weights of a network during fine-tuning improve the overall performance of individual convolutional neural network models.
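The deep-ensemble idea summarized in this abstract (pooling the predictions of several independently trained networks) can be illustrated with a minimal voting sketch. The random class-probability arrays below are illustrative stand-ins for the softmax outputs of fine-tuned CNNs; none of the names are taken from the paper itself.

```python
import numpy as np

# Illustrative stand-ins for softmax outputs of three fine-tuned CNNs,
# each predicting 4 cancer cell lines for 5 segmented cells.
# (Dirichlet draws just give rows that sum to 1, like softmax outputs.)
rng = np.random.default_rng(0)
model_probs = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]

def soft_vote(prob_list):
    """Average class probabilities across models, then take the argmax."""
    mean_probs = np.mean(prob_list, axis=0)      # shape (cells, classes)
    return mean_probs.argmax(axis=1)

def hard_vote(prob_list):
    """Each model casts one vote per cell; the majority class wins."""
    votes = np.stack([p.argmax(axis=1) for p in prob_list])  # (models, cells)
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=4).argmax(), 0, votes)

print(soft_vote(model_probs))   # one predicted class index per cell
print(hard_vote(model_probs))
```

Soft voting uses the full probability vectors and is usually the stronger choice when the models' confidences are calibrated; hard voting only needs each model's top-1 label.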
24
Laddha S, Kumar V. DGCNN: deep convolutional generative adversarial network based convolutional neural network for diagnosis of COVID-19. Multimedia Tools and Applications 2022; 81:31201-31218. PMID: 35431606; PMCID: PMC8993038; DOI: 10.1007/s11042-022-12640-6.
Abstract
The latest threat to global health is the coronavirus disease 2019 (COVID-19) pandemic. To prevent COVID-19, recognizing and isolating infected patients is an essential step. The primary diagnosis method is the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. However, the sensitivity of this test is not satisfactory for successfully controlling the COVID-19 outbreak. Although many chest X-ray (CXR) image datasets exist, few COVID-19 CXRs are presently accessible owing to patient privacy. Thus, many researchers have utilized data augmentation techniques to augment the datasets. However, this may cause over-fitting, as the existing data augmentation techniques introduce only small modifications to CXRs. Therefore, in this paper, an efficient deep convolutional generative adversarial network and convolutional neural network (DGCNN) is designed to diagnose COVID-19 suspected subjects. The deep convolutional generative adversarial network (DGAN) consists of two networks trained adversarially, such that one generates fake images and the other differentiates between them. Thereafter, a convolutional neural network (CNN) is utilized for classification. Extensive experiments are conducted to evaluate the performance of the proposed DGCNN. Performance analysis demonstrates that DGCNN substantially improves diagnostic performance.
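The two-stage pipeline described above (a generative model enlarging the scarce COVID-19 class before a CNN is trained) can be sketched as a data-preparation step. The `generator_stub` below is a hypothetical stand-in for a trained DCGAN generator, and the array sizes are illustrative.

```python
import numpy as np

# Sketch: synthetic CXR-like images enlarge the scarce COVID-19 class,
# then are mixed with the real images before a CNN classifier is trained.
rng = np.random.default_rng(42)
real_covid = rng.random((10, 64, 64))        # 10 stand-in "real" COVID CXRs

def generator_stub(n):
    """Stand-in generator: resample real images and add mild noise.
    A trained DCGAN generator would replace this function."""
    idx = rng.integers(0, len(real_covid), size=n)
    noisy = real_covid[idx] + 0.05 * rng.standard_normal((n, 64, 64))
    return np.clip(noisy, 0.0, 1.0)

def build_training_set(n_fake):
    """Concatenate real and generated images, all labeled COVID-positive.
    In practice, non-COVID images with label 0 would be appended too."""
    fake = generator_stub(n_fake)
    images = np.concatenate([real_covid, fake])
    labels = np.ones(len(images), dtype=int)
    return images, labels

images, labels = build_training_set(30)
print(images.shape)   # (40, 64, 64): augmented set ready for CNN training
```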
Affiliation(s)
- Saloni Laddha
  - Computer Science and Engineering, National Institute of Technology, Hamirpur, Himachal Pradesh, India
- Vijay Kumar
  - Computer Science and Engineering, National Institute of Technology, Hamirpur, Himachal Pradesh, India
25
Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 2022; 79:102444. DOI: 10.1016/j.media.2022.102444.
26
Sengupta D, Ali SN, Bhattacharya A, Mustafi J, Mukhopadhyay A, Sengupta K. A deep hybrid learning pipeline for accurate diagnosis of ovarian cancer based on nuclear morphology. PLoS One 2022; 17:e0261181. PMID: 34995293; PMCID: PMC8741040; DOI: 10.1371/journal.pone.0261181.
Abstract
Nuclear morphological features are potent determining factors for the clinical diagnostic approaches adopted by pathologists to analyze the malignant potential of cancer cells. Considering the structural alteration of the nucleus in cancer cells, various groups have developed machine learning techniques based on variation in nuclear morphometric information such as nuclear shape, size, and nucleus-to-cytoplasm ratio; non-parametric methods such as deep learning have also been tested for analyzing immunohistochemistry images of tissue samples for diagnosing various cancers. We aim to combine the morphometric features of the nucleus and the distribution of nuclear lamin proteins with classical machine learning to differentiate between normal and ovarian cancer tissues. It has already been elucidated that in ovarian cancer, the extent of alteration in nuclear shape and morphology can modulate genetic changes and thus can be utilized to predict the outcome of low- to high-grade serous carcinoma. In this work, we have performed exhaustive imaging of ovarian cancer versus normal tissue and developed a dual-pipeline architecture that combines matrices of morphometric parameters with deep learning techniques for automatic feature extraction from pre-processed images. This novel Deep Hybrid Learning model, though derived from classical machine learning algorithms and a standard CNN, showed a training and validation AUC score of 0.99, whereas the test AUC score turned out to be 1.00. The improved feature engineering enabled us to differentiate between cancerous and non-cancerous samples successfully in this pilot study.
Affiliation(s)
- Duhita Sengupta
  - Biophysics and Structural Genomics Division, Saha Institute of Nuclear Physics, Kolkata, West Bengal, India
  - Homi Bhabha National Institute, Mumbai, India
- Sk Nishan Ali
  - Artificial Intelligence and Machine Learning Division, MUST Research Trust, Hyderabad, Telangana, India
- Aditya Bhattacharya
  - Artificial Intelligence and Machine Learning Division, MUST Research Trust, Hyderabad, Telangana, India
- Joy Mustafi
  - Artificial Intelligence and Machine Learning Division, MUST Research Trust, Hyderabad, Telangana, India
- Asima Mukhopadhyay
  - Chittaranjan National Cancer Institute, Newtown, Kolkata, West Bengal, India
- Kaushik Sengupta
  - Biophysics and Structural Genomics Division, Saha Institute of Nuclear Physics, Kolkata, West Bengal, India
27
Lan Y, Huang N, Fu Y, Liu K, Zhang H, Li Y, Yang S. Morphology-Based Deep Learning Approach for Predicting Osteogenic Differentiation. Front Bioeng Biotechnol 2022; 9:802794. PMID: 35155409; PMCID: PMC8830423; DOI: 10.3389/fbioe.2021.802794.
Abstract
Early, high-throughput, and accurate recognition of osteogenic differentiation of stem cells is urgently required in stem cell therapy, tissue engineering, and regenerative medicine. In this study, we established an automatic deep learning algorithm, the osteogenic convolutional neural network (OCNN), to quantitatively measure the osteogenic differentiation of rat bone marrow mesenchymal stem cells (rBMSCs). rBMSCs stained with F-actin and DAPI during early differentiation (days 0, 1, 4, and 7) were imaged using laser confocal scanning microscopy to train OCNN. As a result, OCNN successfully distinguished differentiated cells at a very early stage (24 h) with a high area under the curve (AUC) (0.94 ± 0.04) and correlated with conventional biochemical markers. Meanwhile, OCNN exhibited better prediction performance than single morphological parameters and a support vector machine. Furthermore, OCNN successfully predicted the dose-dependent effects of small-molecule osteogenic drugs and a cytokine. OCNN-based online learning models can further recognize the osteogenic differentiation of rBMSCs cultured on several material surfaces. Hence, this study demonstrates the promise of OCNN for osteogenic drug and biomaterial screening in next-generation tissue engineering and stem cell research.
Affiliation(s)
- Yiqing Lan
  - Stomatological Hospital of Chongqing Medical University, Chongqing, China
  - Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
  - Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- Nannan Huang
  - Stomatological Hospital of Chongqing Medical University, Chongqing, China
  - Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
  - Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- Yiru Fu
  - Stomatological Hospital of Chongqing Medical University, Chongqing, China
  - Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
  - Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- Kehao Liu
  - Stomatological Hospital of Chongqing Medical University, Chongqing, China
  - Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
  - Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- He Zhang
  - Stomatological Hospital of Chongqing Medical University, Chongqing, China
  - Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
  - Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- Yuzhou Li
  - Stomatological Hospital of Chongqing Medical University, Chongqing, China
  - Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
  - Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- Sheng Yang
  - Stomatological Hospital of Chongqing Medical University, Chongqing, China
  - Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
  - Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- *Correspondence: Yuzhou Li; Sheng Yang
28
Yang Z, Chen M, Kazemimoghadam M, Ma L, Stojadinovic S, Timmerman R, Dan T, Wardak Z, Lu W, Gu X. Deep-learning and radiomics ensemble classifier for false positive reduction in brain metastases segmentation. Phys Med Biol 2022; 67. PMID: 34952535; PMCID: PMC8858586; DOI: 10.1088/1361-6560/ac4667.
Abstract
Stereotactic radiosurgery (SRS) is now the standard of care for brain metastases (BMs) patients. The SRS treatment planning process requires precise target delineation, which, in the clinical workflow for patients with multiple (>4) BMs (mBMs), can become a pronounced time bottleneck. Our group has developed an automated BMs segmentation platform to assist in this process. The accuracy of the auto-segmentation, however, is influenced by the presence of false-positive segmentations, mainly caused by the injected contrast during MRI acquisition. To address this problem and further improve the segmentation performance, a deep-learning and radiomics ensemble classifier was developed to reduce the false-positive rate in segmentations. The proposed model consists of a Siamese network and a radiomics-based support vector machine (SVM) classifier. The 2D Siamese network contains a pair of parallel feature extractors with shared weights followed by a single classifier; this architecture is designed to identify the inter-class difference. The SVM model, on the other hand, takes the radiomic features extracted from 3D segmentation volumes as the input for twofold classification: either a false-positive segmentation or a true BM. Lastly, the outputs from both models form an ensemble to generate the final label. On the segmented mBMs testing dataset, the proposed model reached an accuracy (ACC), sensitivity (SEN), specificity (SPE) and area under the curve of 0.91, 0.96, 0.90 and 0.93, respectively. After integrating the proposed model into the original segmentation platform, the average segmentation false negative rate (FNR) and false positive over the union (FPoU) were 0.13 and 0.09, respectively, which preserved the initial FNR (0.07) and significantly improved the FPoU (0.55). The proposed method effectively reduced the false-positive rate in the raw BMs segmentations, indicating that integrating the proposed ensemble classifier into the BMs segmentation platform provides a beneficial tool for mBMs SRS management.
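The late-fusion step described in this abstract (combining a Siamese-network score with a radiomics SVM score into a final label per candidate segmentation) can be sketched as follows. The scores and the 0.5 threshold are illustrative values, not taken from the paper.

```python
import numpy as np

# Illustrative late fusion of two false-positive filters: a Siamese-network
# score and a radiomics SVM score, each interpreted as P(true metastasis)
# for a candidate segmentation. All numbers are made up for the sketch.
siamese_score = np.array([0.92, 0.15, 0.60, 0.05])
svm_score = np.array([0.88, 0.40, 0.70, 0.10])

def fuse(p1, p2, threshold=0.5):
    """Average the two probabilities and keep candidates above threshold."""
    p = (p1 + p2) / 2.0
    return (p >= threshold).astype(int)  # 1 = true BM, 0 = false positive

final_label = fuse(siamese_score, svm_score)
print(final_label)  # [1 0 1 0]: candidates 0 and 2 survive, 1 and 3 rejected
```

Averaging probabilities is only one fusion rule; a logical AND of the two thresholded decisions would trade sensitivity for a lower false-positive rate.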
Affiliation(s)
- Zi Yang
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mingli Chen
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mahdieh Kazemimoghadam
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Lin Ma
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Strahinja Stojadinovic
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Zabi Wardak
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Weiguo Lu
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
  - Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
29
Bélanger E, Benadiba C, Rioux-Pellerin É, Becq F, Jourdain P, Marquet P. Engineered fluidic device to achieve multiplexed monitoring of cell cultures with digital holographic microscopy. Opt Express 2022; 30:414-426. PMID: 35201218; DOI: 10.1364/oe.444701.
Abstract
We present a low-cost, 3D-printed, and biocompatible fluidic device, engineered to produce laminar and homogeneous flow over a large field-of-view. Such a fluidic device allows us to perform multiplexed temporal monitoring of cell cultures compatible with the use of various pharmacological protocols. Therefore, specific properties of each of the observed cell cultures can be discriminated simultaneously during the same experiment. This was illustrated by monitoring the agonists-mediated cellular responses, with digital holographic microscopy, of four different cell culture models of cystic fibrosis. Quantitatively speaking, this multiplexed approach provides a time saving factor of around four to reveal specific cellular features.
30
de Carvalho Gomes P, Crossman A, Massey E, Stanley Rickard JJ, Oppenheimer PG. Real-time validation of Surface-Enhanced Raman Scattering substrates via convolutional neural network algorithm. Informatics in Medicine Unlocked 2022. DOI: 10.1016/j.imu.2022.101076.
31
Data Augmentation Based on Generative Adversarial Networks to Improve Stage Classification of Chronic Kidney Disease. Applied Sciences 2021. DOI: 10.3390/app12010352.
Abstract
The prevalence of chronic kidney disease (CKD) is estimated to be 13.4% worldwide and 15% in the United States. CKD has been recognized as a leading public health problem worldwide. Unfortunately, as many as 90% of CKD patients do not know that they already have CKD. Ultrasonography is usually the first and the most commonly used imaging diagnostic tool for patients at risk of CKD. To provide a consistent assessment of the stage classifications of CKD, this study proposes an auxiliary diagnosis system based on deep learning approaches for renal ultrasound images. The system uses the ACWGAN-GP model and MobileNetV2 pre-training model. The images generated by the ACWGAN-GP generation model and the original images are simultaneously input into the pre-training model MobileNetV2 for training. This classification system achieved an accuracy of 81.9% in the four stages of CKD classification. If the prediction results allowed a higher stage tolerance, the accuracy could be improved by up to 90.1%. The proposed deep learning method solves the problem of imbalance and insufficient data samples during training processes for an automatic classification system and also improves the prediction accuracy of CKD stage diagnosis.
32
DEGAS: differentiable efficient generator search. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-06309-8.
33
Ben Baruch S, Rotman-Nativ N, Baram A, Greenspan H, Shaked NT. Cancer-Cell Deep-Learning Classification by Integrating Quantitative-Phase Spatial and Temporal Fluctuations. Cells 2021; 10:3353. PMID: 34943859; PMCID: PMC8699730; DOI: 10.3390/cells10123353.
Abstract
We present a new classification approach for live cells, integrating together the spatial and temporal fluctuation maps and the quantitative optical thickness map of the cell, as acquired by common-path quantitative-phase dynamic imaging and processed with a deep-learning framework. We demonstrate this approach by classifying between two types of cancer cell lines of different metastatic potential originating from the same patient. It is based on the fact that both the cancer-cell morphology and its mechanical properties, as indicated by the cell temporal and spatial fluctuations, change over the disease progression. We tested different fusion methods for inputting both the morphological optical thickness maps and the coinciding spatio-temporal fluctuation maps of the cells to the classifying network framework. We show that the proposed integrated triple-path deep-learning architecture improves over deep-learning classification that is based only on the cell morphological evaluation via its quantitative optical thickness map, demonstrating the benefit in the acquisition of the cells over time and in extracting their spatio-temporal fluctuation maps, to be used as an input to the classifying deep neural network.
Affiliation(s)
- Natan T. Shaked
  - Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel; (S.B.B.); (N.R.-N.); (A.B.); (H.G.)
34
Roadmap on Digital Holography-Based Quantitative Phase Imaging. J Imaging 2021; 7:252. PMID: 34940719; PMCID: PMC8703719; DOI: 10.3390/jimaging7120252.
Abstract
Quantitative Phase Imaging (QPI) provides unique means for the imaging of biological or technical microstructures, merging beneficial features identified with microscopy, interferometry, holography, and numerical computations. This roadmap article reviews several digital holography-based QPI approaches developed by prominent research groups. It also briefly discusses the present and future perspectives of 2D and 3D QPI research based on digital holographic microscopy, holographic tomography, and their applications.
35
Xin L, Xiao W, Che L, Liu J, Miccio L, Bianco V, Memmolo P, Ferraro P, Li X, Pan F. Label-Free Assessment of the Drug Resistance of Epithelial Ovarian Cancer Cells in a Microfluidic Holographic Flow Cytometer Boosted through Machine Learning. ACS Omega 2021; 6:31046-31057. PMID: 34841147; PMCID: PMC8613806; DOI: 10.1021/acsomega.1c04204.
Abstract
About 75% of epithelial ovarian cancer (EOC) patients relapse and develop drug resistance after primary chemotherapy. The commonly used clinical examinations and biological tumor tissue models for chemotherapeutic sensitivity are time-consuming and expensive. Research has shown that cell-morphology-based methods are a promising new route for chemotherapeutic sensitivity evaluation. Here, we show how the drug resistance of EOC cells can be assessed through a label-free and high-throughput microfluidic flow cytometer equipped with a digital holographic microscope reinforced by machine learning. To the best of our knowledge, this is the first time such an assessment has been performed. Several morphological and texture features at the single-cell level have been extracted from the quantitative phase images. In addition, we compared four common machine learning algorithms (naive Bayes, decision tree, K-nearest neighbors, and support vector machine (SVM)) as well as a fully connected network. The results show that the SVM classifier achieves the optimal performance, with an accuracy of 92.2% and an area under the curve of 0.96. This study demonstrates that the proposed method achieves high-accuracy, high-throughput, and label-free assessment of the drug resistance of EOC cells. Furthermore, it shows strong potential for developing data-driven individualized chemotherapy treatments in the future.
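The reported metrics (accuracy and area under the curve) can be computed from per-cell classifier scores with a few lines of NumPy; the Mann-Whitney formulation below is a standard way to obtain AUC. The scores and labels are toy values, not data from the study.

```python
import numpy as np

# Toy per-cell drug-resistance scores with ground truth (1 = resistant,
# 0 = sensitive). AUC is the probability that a random resistant cell
# scores higher than a random sensitive one (Mann-Whitney formulation).
scores = np.array([0.9, 0.8, 0.35, 0.7, 0.2, 0.4])
labels = np.array([1, 1, 0, 1, 0, 0])

def auc(scores, labels):
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # all pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def accuracy(scores, labels, threshold=0.5):
    """Fraction of cells whose thresholded score matches the label."""
    return ((scores >= threshold).astype(int) == labels).mean()

print(auc(scores, labels))       # 1.0: the toy set is perfectly separable
print(accuracy(scores, labels))  # 1.0 at the 0.5 threshold
```

Unlike accuracy, AUC is threshold-free, which is why both are usually reported together.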
Affiliation(s)
- Lu Xin
  - Key Laboratory of Precision Opto-mechatronics Technology, School of Instrumentation & Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Wen Xiao
  - Key Laboratory of Precision Opto-mechatronics Technology, School of Instrumentation & Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Leiping Che
  - Key Laboratory of Precision Opto-mechatronics Technology, School of Instrumentation & Optoelectronic Engineering, Beihang University, Beijing 100191, China
- JinJin Liu
  - Department of Obstetrics and Gynecology, Peking University People’s Hospital, Beijing 100044, China
- Lisa Miccio
  - CNR, Institute of Applied Sciences & Intelligent Systems (ISASI) “E. Caianiello”, via Campi Flegrei 34, 80078 Pozzuoli, Italy
- Vittorio Bianco
  - CNR, Institute of Applied Sciences & Intelligent Systems (ISASI) “E. Caianiello”, via Campi Flegrei 34, 80078 Pozzuoli, Italy
- Pasquale Memmolo
  - CNR, Institute of Applied Sciences & Intelligent Systems (ISASI) “E. Caianiello”, via Campi Flegrei 34, 80078 Pozzuoli, Italy
- Pietro Ferraro
  - CNR, Institute of Applied Sciences & Intelligent Systems (ISASI) “E. Caianiello”, via Campi Flegrei 34, 80078 Pozzuoli, Italy
- Xiaoping Li
  - Department of Obstetrics and Gynecology, Peking University People’s Hospital, Beijing 100044, China
- Feng Pan
  - Key Laboratory of Precision Opto-mechatronics Technology, School of Instrumentation & Optoelectronic Engineering, Beihang University, Beijing 100191, China
36
Zeng T, Zhu Y, Lam EY. Deep learning for digital holography: a review. Opt Express 2021; 29:40572-40593. PMID: 34809394; DOI: 10.1364/oe.443367.
Abstract
Recent years have witnessed unprecedented progress in deep learning applications in digital holography (DH). Nevertheless, there remains huge potential for deep learning to further improve performance and enable new functionalities for DH. Here, we survey recent developments in various DH applications powered by deep learning algorithms. This article starts with a brief introduction to digital holographic imaging, then summarizes the most relevant deep learning techniques for DH, with discussions on their benefits and challenges. We then present case studies covering a wide range of problems and applications in order to highlight research achievements to date. We conclude with an outlook on several promising directions to widen the use of deep learning in DH applications.
37
Deep Learning in Cancer Diagnosis and Prognosis Prediction: A Minireview on Challenges, Recent Trends, and Future Directions. Comput Math Methods Med 2021; 2021:9025470. PMID: 34754327; PMCID: PMC8572604; DOI: 10.1155/2021/9025470.
Abstract
Deep learning (DL) is a branch of machine learning and artificial intelligence that has been applied to many areas in different domains, such as health care and drug design. Cancer prognosis estimates the ultimate fate of a cancer subject and provides survival estimates. An accurate and timely diagnostic and prognostic decision will greatly benefit cancer subjects. DL has emerged as a technology of choice due to the availability of high computational resources. The main components of a standard computer-aided diagnosis (CAD) system are preprocessing, feature recognition, extraction and selection, categorization, and performance assessment. The reduction of costs associated with sequencing systems offers a myriad of opportunities for building precise models for cancer diagnosis and prognosis prediction. In this survey, we provide a summary of current works in which DL has helped to determine the best models for cancer diagnosis and prognosis prediction tasks. DL is a generic approach requiring minimal data manipulation that achieves better results while working with enormous volumes of data. Our aims are to scrutinize the influence of DL systems using histopathology images, present a summary of state-of-the-art DL methods, and give directions to future researchers to refine the existing methods.
38
Zhang Y, Hu D, Zhao Q, Quan G, Liu J, Liu Q, Zhang Y, Coatrieux G, Chen Y, Yu H. CLEAR: Comprehensive Learning Enabled Adversarial Reconstruction for Subtle Structure Enhanced Low-Dose CT Imaging. IEEE Trans Med Imaging 2021; 40:3089-3101. PMID: 34270418; DOI: 10.1109/tmi.2021.3097808.
Abstract
X-ray computed tomography (CT) is of great clinical significance in medical practice because it can provide anatomical information about the human body without invasion, while its radiation risk has continued to attract public concern. Reducing the radiation dose may introduce noise and artifacts into the reconstructed images, which will interfere with the judgments of radiologists. Previous studies have confirmed that deep learning (DL) is promising for improving low-dose CT imaging. However, almost all DL-based methods suffer from subtle structure degeneration and blurring after aggressive denoising, which has become a general challenge. This paper develops the Comprehensive Learning Enabled Adversarial Reconstruction (CLEAR) method to tackle these problems. CLEAR achieves subtle-structure-enhanced low-dose CT imaging through a progressive improvement strategy. First, the generator, established on the comprehensive domain, can extract more features than one built on degraded CT images and directly maps raw projections to high-quality CT images, which differs significantly from routine GAN practice. Second, a multi-level loss is assigned to the generator to push all the network components to be updated towards high-quality reconstruction, preserving the consistency between generated images and gold-standard images. Finally, following the WGAN-GP modality, CLEAR can migrate the real statistical properties to the generated images to alleviate over-smoothing. Qualitative and quantitative analyses have demonstrated the competitive performance of CLEAR in terms of noise suppression, structural fidelity and visual perception improvement.
Collapse
|
39
|
Javidi B, Carnicer A, Anand A, Barbastathis G, Chen W, Ferraro P, Goodman JW, Horisaki R, Khare K, Kujawinska M, Leitgeb RA, Marquet P, Nomura T, Ozcan A, Park Y, Pedrini G, Picart P, Rosen J, Saavedra G, Shaked NT, Stern A, Tajahuerce E, Tian L, Wetzstein G, Yamaguchi M. Roadmap on digital holography [Invited]. OPTICS EXPRESS 2021; 29:35078-35118. [PMID: 34808951 DOI: 10.1364/oe.435915] [Citation(s) in RCA: 64] [Impact Index Per Article: 21.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 09/04/2021] [Indexed: 05/22/2023]
Abstract
This Roadmap article on digital holography provides an overview of a vast array of research activities in the field of digital holography. The paper consists of a series of 25 sections from the prominent experts in digital holography presenting various aspects of the field on sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live cell imaging, and other applications. Each section represents the vision of its author to describe the significant progress, potential impact, important developments, and challenging issues in the field of digital holography.
Collapse
|
40
|
Verduijn J, Van der Meeren L, Krysko DV, Skirtach AG. Deep learning with digital holographic microscopy discriminates apoptosis and necroptosis. Cell Death Dis 2021; 7:229. [PMID: 34475384 PMCID: PMC8413278 DOI: 10.1038/s41420-021-00616-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 08/13/2021] [Accepted: 08/19/2021] [Indexed: 02/07/2023]
Abstract
Regulated cell death modalities such as apoptosis and necroptosis play an important role in regulating different cellular processes. Currently, regulated cell death is identified using gold-standard techniques such as fluorescence microscopy and flow cytometry. However, these require fluorescent labels, which are potentially phototoxic. Therefore, there is a need for the development of new label-free methods. In this work, we apply Digital Holographic Microscopy (DHM) coupled with a deep learning algorithm to distinguish between live, apoptotic and necroptotic cells in murine cancer cells. This method is based solely on label-free quantitative phase images, in which the phase delay of light by cells is quantified and used to calculate their topography. We show that label-free DHM in a high-throughput set-up (~10,000 cells per condition) can discriminate between apoptotic, necroptotic and live cells in the L929sAhFas cell line with a precision of over 85%. To the best of our knowledge, this is the first time deep learning in the form of convolutional neural networks has been applied to distinguish apoptotic, necroptotic and live cancer cells from each other with high accuracy in a label-free manner. It is expected that the approach described here will have a profound impact on research in regulated cell death, biomedicine and the field of (cancer) cell biology in general.
Collapse
Affiliation(s)
- Joost Verduijn
- grid.5342.00000 0001 2069 7798Nano-Biotechnology Laboratory, Department of Biotechnology, Faculty of Bioscience Engineering, Ghent University, 9000 Ghent, Belgium ,grid.510942.bCancer Research Institute Ghent, 9000 Ghent, Belgium
| | - Louis Van der Meeren
- grid.5342.00000 0001 2069 7798Nano-Biotechnology Laboratory, Department of Biotechnology, Faculty of Bioscience Engineering, Ghent University, 9000 Ghent, Belgium ,grid.510942.bCancer Research Institute Ghent, 9000 Ghent, Belgium
| | - Dmitri V. Krysko
- grid.510942.bCancer Research Institute Ghent, 9000 Ghent, Belgium ,grid.5342.00000 0001 2069 7798Cell Death Investigation and Therapy (CDIT) Laboratory, Anatomy and Embryology Unit, Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium ,grid.448878.f0000 0001 2288 8774Department of Pathophysiology, Sechenov First Moscow State Medical University (Sechenov University), 119146 Moscow, Russian Federation
| | - André G. Skirtach
- grid.5342.00000 0001 2069 7798Nano-Biotechnology Laboratory, Department of Biotechnology, Faculty of Bioscience Engineering, Ghent University, 9000 Ghent, Belgium ,grid.510942.bCancer Research Institute Ghent, 9000 Ghent, Belgium
| |
Collapse
|
41
|
Lin CW, Hong Y, Liu J. Aggregation-and-Attention Network for brain tumor segmentation. BMC Med Imaging 2021; 21:109. [PMID: 34243703 PMCID: PMC8267236 DOI: 10.1186/s12880-021-00639-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2021] [Accepted: 06/30/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Glioma is a malignant brain tumor; its location is complex, and it is difficult to remove surgically. Using medical images, doctors can precisely diagnose and localize the disease. However, computer-assisted diagnosis of brain tumors remains problematic because rough segmentation of the tumor leads to incorrect internal grading. METHODS In this paper, we propose an Aggregation-and-Attention Network for brain tumor segmentation. The proposed network takes U-Net as the backbone, aggregates multi-scale semantic information, and focuses on crucial information to perform brain tumor segmentation. To this end, we propose an enhanced down-sampling module and an up-sampling layer to compensate for information loss, and a multi-scale connection module to construct multi-receptive semantic fusion between the encoder and decoder. Furthermore, we design a dual-attention fusion module that can extract and enhance the spatial relationships in magnetic resonance imaging, and we apply deep supervision in different parts of the proposed network. RESULTS Experimental results show that the proposed framework performs best on the BraTS2020 dataset compared with state-of-the-art networks, surpassing all comparison networks with average scores on the four indexes of 0.860, 0.885, 0.932, and 1.2325, respectively. CONCLUSIONS The proposed framework and its modules can extract and aggregate useful semantic information and enhance glioma segmentation.
Collapse
Affiliation(s)
- Chih-Wei Lin
- College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou, China.
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China.
- Forestry Post-Doctoral Station of Fujian Agriculture and Forestry University, Fuzhou, China.
- Key Laboratory for Ecology and Resource Statistics of Fujian Province, Fuzhou, China.
| | - Yu Hong
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China
- Key Laboratory for Ecology and Resource Statistics of Fujian Province, Fuzhou, China
| | - Jinfu Liu
- College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou, China
- College of Forestry, Fujian Agriculture and Forestry University, Fuzhou, China
- Key Laboratory for Ecology and Resource Statistics of Fujian Province, Fuzhou, China
| |
Collapse
|
42
|
Abstract
The cell cycle is an important process in cellular life. In recent years, some image processing methods have been developed to determine the cell cycle stages of individual cells. However, in most of these methods, cells have to be segmented and their features extracted. During feature extraction, some important information may be lost, resulting in lower classification accuracy. Thus, we used a deep learning method to retain all cell features. To address the insufficient number of original images and their imbalanced distribution, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. At the same time, a residual network (ResNet), one of the most widely used deep learning classification networks, was used for image classification. Our method classified cell cycle images more effectively, reaching an accuracy of 83.88%, an increase of 4.48% over the 79.40% achieved in previous experiments. Another dataset was used to verify the effect of our model; compared with previous results, our accuracy increased by 12.52%. The results showed that our new cell cycle image classification system based on WGAN-GP and ResNet is useful for the classification of imbalanced images. Moreover, our method could potentially address the low classification accuracy in biomedical images caused by insufficient numbers of original images and their imbalanced distribution.
Collapse
|
43
|
Chatterjee A, Roy S, Das S. A Bi-fold Approach to Detect and Classify COVID-19 X-Ray Images and Symptom Auditor. SN COMPUTER SCIENCE 2021; 2:304. [PMID: 34075356 PMCID: PMC8160081 DOI: 10.1007/s42979-021-00701-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Accepted: 05/12/2021] [Indexed: 11/17/2022]
Abstract
In this paper, we propose an ensemble-based transfer learning method to classify X-ray images of COVID-19-affected patients. We used a weighted Euclidean distance average as the parameter to ensemble the transfer learning models, viz. ResNet50, VGG16, VGG19, Xception, and InceptionV3. Image augmentation was carried out using generative adversarial network modelling. We used 784 training images and 278 test images to validate our model; the accuracy of the proposed model was around 98.67% on the training set and 95.52% on the test set. Alongside this, we also propose a genetic-algorithm-optimized classification algorithm to analyze the symptoms of COVID-19 for low-, medium-, and high-risk patients. The optimized classifier outperformed the un-optimized one, reaching an accuracy as high as 88.96%. The novelty of this paper lies in its bi-fold approach: one model, optimized by a genetic algorithm, analyzes the symptoms of patients of varied risk, while the other classifies X-ray images using ensemble-based transfer learning.
Collapse
Affiliation(s)
- Ahan Chatterjee
- Department of Computer Science and Engineering, The Neotia University, Sarisha, West Bengal India
| | - Swagatam Roy
- Department of Computer Science and Engineering, The Neotia University, Sarisha, West Bengal India
| | - Sunanda Das
- Department of Computer Science and Engineering, SVCET, Chittoor, Andhra Pradesh India
| |
Collapse
|
44
|
Abdar M, Samami M, Dehghani Mahmoodabad S, Doan T, Mazoure B, Hashemifesharaki R, Liu L, Khosravi A, Acharya UR, Makarenkov V, Nahavandi S. Uncertainty quantification in skin cancer classification using three-way decision-based Bayesian deep learning. Comput Biol Med 2021; 135:104418. [PMID: 34052016 DOI: 10.1016/j.compbiomed.2021.104418] [Citation(s) in RCA: 70] [Impact Index Per Article: 23.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2021] [Revised: 04/01/2021] [Accepted: 04/17/2021] [Indexed: 12/18/2022]
Abstract
Accurate automated medical image recognition, including classification and segmentation, is one of the most challenging tasks in medical image analysis. Recently, deep learning methods have achieved remarkable success in medical image classification and segmentation, clearly becoming the state-of-the-art methods. However, most of these methods are unable to provide uncertainty quantification (UQ) for their output and are often overconfident, which can lead to disastrous consequences. Bayesian Deep Learning (BDL) methods can be used to quantify the uncertainty of traditional deep learning methods and thus address this issue. We apply three uncertainty quantification methods to deal with uncertainty during skin cancer image classification: Monte Carlo (MC) dropout, Ensemble MC (EMC) dropout and Deep Ensemble (DE). To further resolve the uncertainty remaining after applying the MC, EMC and DE methods, we describe a novel hybrid dynamic BDL model, taking uncertainty into account, based on Three-Way Decision (TWD) theory. The proposed dynamic model enables us to use different UQ methods and different deep neural networks in distinct classification phases, so the elements of each phase can be adjusted according to the dataset under consideration. In this study, the two best UQ methods (i.e., DE and EMC) are applied in two classification phases (the first and second) to analyze two well-known skin cancer datasets, preventing overconfident decisions when diagnosing the disease. The accuracy and F1-score of our final solution are, respectively, 88.95% and 89.00% for the first dataset, and 90.96% and 91.00% for the second dataset. Our results suggest that the proposed TWDBDL model can be used effectively at different stages of medical image analysis.
Collapse
Affiliation(s)
- Moloud Abdar
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia.
| | - Maryam Samami
- Department of Computer Engineering, Sari Branch, Islamic Azad University, Sari, Iran
| | - Sajjad Dehghani Mahmoodabad
- Department of Artificial Intelligence, Faculty of Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran
| | - Thang Doan
- Department of Computer Science, McGill University / Mila, Montreal, Canada
| | - Bogdan Mazoure
- Department of Computer Science, McGill University / Mila, Montreal, Canada
| | | | - Li Liu
- Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland
| | - Abbas Khosravi
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
| | - U Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, Singapore University of Social Sciences, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
| | - Vladimir Makarenkov
- Department of Computer Science, University of Quebec in Montreal, Montreal, Canada
| | - Saeid Nahavandi
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
| |
Collapse
|
45
|
Xiao W, Xin L, Cao R, Wu X, Tian R, Che L, Sun L, Ferraro P, Pan F. Sensing morphogenesis of bone cells under microfluidic shear stress by holographic microscopy and automatic aberration compensation with deep learning. LAB ON A CHIP 2021; 21:1385-1394. [PMID: 33585849 DOI: 10.1039/d0lc01113d] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
We present sensing of the time-lapse morphogenesis of living bone cells under microfluidic shear stress (FSS) by digital holographic (DH) microscopy. To remove the effect of aberrations on quantitative measurements, we propose a numerical, automatic method to compensate for aberrations based on a convolutional neural network (CNN). For the first time, the aberration compensation issue is treated as a regression task, in which optimal coefficients for constructing the phase aberration map act as responses corresponding to the input aberrated phase image. We adopted tens of thousands of living cells' phase images reconstructed from digital holograms for training the CNN. The experiments demonstrate that, based on the trained network, phase aberrations can be removed entirely in real time without any hypothesis about the object or aberration phase, knowledge of the setup's physical parameters, or the operation of selecting background regions; hence, the morphogenesis of the bone cells under FSS is accurately detected and quantitatively analyzed. The results show that the proposed method could provide a highly efficient and versatile way to investigate the effects of micro-FSS on living biological cells in microfluidic lab-on-chip platforms, thanks to the combination of phase-contrast label-free microscopy with artificial intelligence.
Collapse
Affiliation(s)
- Wen Xiao
- Key Laboratory of Precision Opto-mechatronics Technology, School of Instrumentation & Optoelectronic Engineering, Beihang University, Beijing 100191, China.
| | | | | | | | | | | | | | | | | |
Collapse
|
46
|
Zheng C, Bian F, Li L, Xie X, Liu H, Liang J, Chen X, Wang Z, Qiao T, Yang J, Zhang M. Assessment of Generative Adversarial Networks for Synthetic Anterior Segment Optical Coherence Tomography Images in Closed-Angle Detection. Transl Vis Sci Technol 2021; 10:34. [PMID: 34004012 PMCID: PMC8088224 DOI: 10.1167/tvst.10.4.34] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2020] [Accepted: 03/08/2021] [Indexed: 02/05/2023] Open
Abstract
PURPOSE To develop generative adversarial networks (GANs) that synthesize realistic anterior segment optical coherence tomography (AS-OCT) images and evaluate deep learning (DL) models trained on real and synthetic datasets for detecting angle closure. METHODS The GAN architecture was adopted and trained on a dataset of AS-OCT images collected from the Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, synthesizing open- and closed-angle AS-OCT images. A visual Turing test with two glaucoma specialists was performed to assess the image quality of real and synthetic images. DL models, trained on either real or synthetic datasets, were developed. Using the clinicians' grading of the AS-OCT images as the reference standard, we compared the diagnostic performance in open-angle vs. closed-angle detection of the DL models and the AS-OCT parameter, defined as the trabecular-iris space area 750 µm anterior to the scleral spur (TISA750), in a small independent validation dataset. RESULTS The GAN training included 28,643 AS-OCT anterior chamber angle (ACA) images. The real and synthetic datasets for DL model training had an equal distribution of open- and closed-angle images (10,000 images each). The independent validation dataset included 238 open-angle and 243 closed-angle AS-OCT ACA images. The image quality of real versus synthetic AS-OCT images was similar, as assessed by the two glaucoma specialists, except for scleral spur visibility. For the independent validation dataset, both DL models achieved higher areas under the curve compared with TISA750, with areas under the curve of 0.97 (95% confidence interval, 0.96-0.99) and 0.94 (95% confidence interval, 0.92-0.96). CONCLUSIONS The GAN-synthesized AS-OCT images appeared to be of good quality, according to the glaucoma specialists. The DL models, trained on all-synthetic AS-OCT images, can achieve high diagnostic performance.
TRANSLATIONAL RELEVANCE The GANs can generate realistic AS-OCT images, which can also be used to train DL models.
Collapse
Affiliation(s)
- Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Fang Bian
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Department of Ophthalmology, Deyang People's Hospital, Sichuan, China
| | - Luo Li
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
| | - Xiaolin Xie
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
| | - Hui Liu
- Aier School of Ophthalmology, Central South University, Changsha, Hunan, China
| | - Jianheng Liang
- Aier School of Ophthalmology, Central South University, Changsha, Hunan, China
| | - Xu Chen
- Aier School of Ophthalmology, Central South University, Changsha, Hunan, China
- Department of Ophthalmology, Shanghai Aier Eye Hospital, Shanghai, China
| | - Zilei Wang
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Tong Qiao
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
| | - Jianlong Yang
- Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, Zhejiang, China
| | - Mingzhi Zhang
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
| |
Collapse
|
47
|
Nejadeh M, Bayat P, Kheirkhah J, Moladoust H. Predicting the response to cardiac resynchronization therapy (CRT) using the deep learning approach. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.05.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
|
48
|
Yurt M, Dar SU, Erdem A, Erdem E, Oguz KK, Çukur T. mustGAN: multi-stream Generative Adversarial Networks for MR Image Synthesis. Med Image Anal 2021; 70:101944. [PMID: 33690024 DOI: 10.1016/j.media.2020.101944] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Revised: 12/11/2020] [Accepted: 12/15/2020] [Indexed: 01/28/2023]
Abstract
Multi-contrast MRI protocols increase the level of morphological information available for diagnosis. Yet, the number and quality of contrasts are limited in practice by various factors including scan time and patient motion. Synthesis of missing or corrupted contrasts from other high-quality ones can alleviate this limitation. When a single target contrast is of interest, common approaches for multi-contrast MRI involve either one-to-one or many-to-one synthesis methods depending on their input. One-to-one methods take as input a single source contrast, and they learn a latent representation sensitive to unique features of the source. Meanwhile, many-to-one methods receive multiple distinct sources, and they learn a shared latent representation more sensitive to common features across sources. For enhanced image synthesis, we propose a multi-stream approach that aggregates information across multiple source images via a mixture of multiple one-to-one streams and a joint many-to-one stream. The complementary feature maps generated in the one-to-one streams and the shared feature maps generated in the many-to-one stream are combined with a fusion block. The location of the fusion block is adaptively modified to maximize task-specific performance. Quantitative and radiological assessments on T1-, T2-, PD-weighted, and FLAIR images clearly demonstrate the superior performance of the proposed method compared to previous state-of-the-art one-to-one and many-to-one methods.
Collapse
Affiliation(s)
- Mahmut Yurt
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey
| | - Salman Uh Dar
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey
| | - Aykut Erdem
- Department of Computer Engineering, Koç University, İstanbul, TR-34450, Turkey
| | - Erkut Erdem
- Department of Computer Engineering, Hacettepe University, Ankara, TR-06800, Turkey
| | - Kader K Oguz
- National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey; Department of Radiology, Hacettepe University, Ankara, TR-06100, Turkey
| | - Tolga Çukur
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey; Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent, Ankara, TR-06800, Turkey.
| |
Collapse
|
49
|
Yoo TK, Choi JY, Kim HK. Feasibility study to improve deep learning in OCT diagnosis of rare retinal diseases with few-shot classification. Med Biol Eng Comput 2021; 59:401-415. [PMID: 33492598 PMCID: PMC7829497 DOI: 10.1007/s11517-021-02321-1] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 01/15/2021] [Indexed: 01/16/2023]
Abstract
Deep learning (DL) has been successfully applied to the diagnosis of ophthalmic diseases. However, rare diseases are commonly neglected due to insufficient data. Here, we demonstrate that few-shot learning (FSL) using a generative adversarial network (GAN) can improve the applicability of DL in the optical coherence tomography (OCT) diagnosis of rare diseases. Four major classes with a large number of datasets and five rare disease classes with a few-shot dataset are included in this study. Before training the classifier, we constructed GAN models to generate pathological OCT images of each rare disease from normal OCT images. The Inception-v3 architecture was trained using an augmented training dataset, and the final model was validated using an independent test dataset. The synthetic images helped in the extraction of the characteristic features of each rare disease. The proposed DL model demonstrated a significant improvement in the accuracy of the OCT diagnosis of rare retinal diseases and outperformed the traditional DL models, Siamese network, and prototypical network. By increasing the accuracy of diagnosing rare retinal diseases through FSL, clinicians can avoid neglecting rare diseases with DL assistance, thereby reducing diagnosis delay and patient burden.
Collapse
Affiliation(s)
- Tae Keun Yoo
- Department of Ophthalmology, Medical Research Center, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Sangdang-gu, Cheongju, South Korea.
| | - Joon Yul Choi
- Epilepsy Center, Neurological Institute, Cleveland Clinic, Cleveland, OH, USA
| | - Hong Kyu Kim
- Department of Ophthalmology, Dankook University Hospital, Dankook University College of Medicine, Cheonan, South Korea
| |
Collapse
|
50
|
Djawad YA, Kiely J, Luxton R. Classification of the mechanism of toxicity as applied to human cell line ECV304. Comput Methods Biomech Biomed Engin 2020; 24:933-944. [PMID: 33356573 DOI: 10.1080/10255842.2020.1861255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
The objective of this study was to identify the pattern of cytotoxicity in the human cell line ECV304 using three ensemble learning techniques (bagging, boosting, and stacking). The cell morphology of the ECV304 cell line was studied using impedimetric measurement. Three toxins were applied to the ECV304 cell line, namely 1 mM hydrogen peroxide (H2O2), 5% dimethyl sulfoxide, and 10 μg saponin. The measurements were conducted using electrodes and a lock-in amplifier to detect impedance changes during cytotoxicity testing within a frequency range of 200 to 830 kHz. The results were analysed, processed, and subjected to detrended fluctuation analysis to extract characteristics and features of the cells when exposed to each of the toxins. The three ensemble algorithms showed slightly different performance in classifying the data set derived from this feature extraction. Nevertheless, the results show that the cells' reactions to the toxins could be classified.
Collapse
Affiliation(s)
- Yasser Abd Djawad
- Department of Electronics, Universitas Negeri Makassar, Makassar, Indonesia
| | | | | |
Collapse
|