1
Coskuner-Weber O, Alpsoy S, Yolcu O, Teber E, de Marco A, Shumka S. Metagenomics studies in aquaculture systems: Big data analysis, bioinformatics, machine learning and quantum computing. Comput Biol Chem 2025; 118:108444. PMID: 40187295. DOI: 10.1016/j.compbiolchem.2025.108444. Received: 01/03/2025; Revised: 03/15/2025; Accepted: 03/25/2025.
Abstract
The burgeoning field of aquaculture has become a pivotal contributor to global food security and economic growth, presently surpassing capture fisheries in aquatic animal production as evidenced by recent statistics. However, the dense fish populations inherent in aquaculture systems exacerbate abiotic stressors and promote pathogenic spread, posing a risk to sustainability and yield. This study delves into the transformative potential of metagenomics, a method that directly retrieves genetic material from environmental samples, in elucidating microbial dynamics within aquaculture ecosystems. Our findings affirm that metagenomics, bolstered by tools in big data analytics, bioinformatics, and machine learning, can significantly enhance the precision of microbial assessment and pathogen detection. Furthermore, we explore quantum computing's emergent role, which promises unparalleled efficiency in data processing and model construction, poised to address the limitations of conventional computational techniques. Distinct from metabarcoding, metagenomics offers an expansive, unbiased profile of microbial biodiversity, revolutionizing our capacity to monitor, predict, and manage aquaculture systems with high accuracy and adaptability. Despite the challenges of computational demands and variability in data standardization, this study advocates for continued technological integration, thereby fostering resilient and sustainable aquaculture practices in a climate of escalating global food requirements.
Affiliation(s)
- Orkid Coskuner-Weber
- Turkish-German University, Molecular Biotechnology, Sahinkaya Caddesi, No. 106, Beykoz, Istanbul 34820, Turkey.
- Semih Alpsoy
- Turkish-German University, Molecular Biotechnology, Sahinkaya Caddesi, No. 106, Beykoz, Istanbul 34820, Turkey
- Ozgur Yolcu
- Turkish-German University, Molecular Biotechnology, Sahinkaya Caddesi, No. 106, Beykoz, Istanbul 34820, Turkey
- Egehan Teber
- Turkish-German University, Molecular Biotechnology, Sahinkaya Caddesi, No. 106, Beykoz, Istanbul 34820, Turkey
- Ario de Marco
- Laboratory of Environmental and Life Sciences, University of Nova Gorica, Vipavska cesta 13, Nova Gorica 5000, Slovenia
- Spase Shumka
- Faculty of Biotechnology and Food, Agricultural University of Tirana, 1019 Koder Kamza, Tirana, Albania
2
Zhang Z, Gao L, Zheng H, Zhong Y, Li G, Ye Z, Sun Q, Wang B, Weng Z. High-content imaging and deep learning-driven detection of infectious bacteria in wounds. Bioprocess Biosyst Eng 2025; 48:301-315. PMID: 39621107. DOI: 10.1007/s00449-024-03110-4. Received: 09/05/2024; Accepted: 11/18/2024.
Abstract
Fast and accurate detection of infectious bacteria in wounds is crucial for effective clinical treatment. However, traditional methods take over 24 h to yield results, which is inadequate for urgent clinical needs. Here, we introduce a deep learning-driven framework that detects and classifies four bacteria commonly found in wounds: Acinetobacter baumannii (AB), Escherichia coli (EC), Pseudomonas aeruginosa (PA), and Staphylococcus aureus (SA). This framework leverages the pretrained ResNet50 deep learning architecture, trained on manually collected periodic bacterial colony-growth images from high-content imaging. On in vitro samples, our method achieves a detection rate of over 95% for early colonies cultured for 8 h, reducing detection time by more than 12 h compared to traditional Environmental Protection Agency (EPA)-approved methods. For colony classification, it identifies AB, EC, PA, and SA colonies with accuracies of 96%, 97%, 96%, and 98%, respectively. For mixed bacterial samples, it identifies colonies with 95% accuracy and classifies them with 93% precision. In mouse wound samples, the method identifies over 90% of developing bacterial colonies and classifies colony types with an average accuracy of over 94%. These results highlight the framework's potential for improving the clinical treatment of wound infections. In addition, the framework presents its detection results with key-feature visualizations, which enhances prediction credibility for users. In summary, the proposed framework enables high-throughput identification, significantly reducing detection time and providing a cost-effective tool for early bacterial detection.
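As a toy illustration of the per-class figures quoted in this abstract, the sketch below computes per-class accuracy from paired ground-truth and predicted colony labels. The labels and predictions are invented for illustration; only the four class codes (AB, EC, PA, SA) come from the abstract.

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Fraction of samples of each true class that were predicted correctly."""
    totals, hits = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += int(t == p)
    return {c: hits[c] / totals[c] for c in totals}

# Invented toy predictions: one AB colony confused with EC, one PA with SA.
labels = ["AB"] * 4 + ["EC"] * 4 + ["PA"] * 4 + ["SA"] * 4
preds = ["AB", "AB", "AB", "EC",
         "EC", "EC", "EC", "EC",
         "PA", "PA", "SA", "PA",
         "SA", "SA", "SA", "SA"]

acc = per_class_accuracy(labels, preds)
print(acc)
```

A published per-class accuracy table is exactly this quantity computed on a held-out test set rather than a toy list.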
Affiliation(s)
- Ziyi Zhang
- College of Computer and Data Science/College of Software, Fuzhou University, Fujian, China
- Lanmei Gao
- College of Biological Science and Engineering, Fuzhou University, Fuzhou, Fujian, China
- Houbing Zheng
- Department of Plastic and Cosmetic Surgery, the First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian, China
- Yi Zhong
- College of Biological Science and Engineering, Fuzhou University, Fuzhou, Fujian, China
- Gaozheng Li
- College of Computer and Data Science/College of Software, Fuzhou University, Fujian, China
- Zhaoting Ye
- College of Computer and Data Science/College of Software, Fuzhou University, Fujian, China
- Qi Sun
- College of Biological Science and Engineering, Fuzhou University, Fuzhou, Fujian, China
- Biao Wang
- Department of Plastic and Cosmetic Surgery, the First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian, China
- Zuquan Weng
- College of Computer and Data Science/College of Software, Fuzhou University, Fujian, China.
- College of Biological Science and Engineering, Fuzhou University, Fuzhou, Fujian, China.
- Department of Plastic and Cosmetic Surgery, the First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian, China.
3
Struniawski K, Kozera R, Trzciński P, Marasek-Ciołakowska A, Sas-Paszt L. Extreme learning machine for identifying soil-dwelling microorganisms cultivated on agar media. Sci Rep 2024; 14:31034. PMID: 39730790. DOI: 10.1038/s41598-024-82174-4. Received: 07/18/2024; Accepted: 12/03/2024. Open access.
Abstract
The aim of this research is to create an automated system for identifying soil microorganisms at the genus level based on raw microscopic images of monocultural colonies grown in a laboratory environment. The examined genera are Fusarium, Trichoderma, Verticillium, Purpureocillium and Phytophthora. The proposed pipeline deals with unprocessed microscopic images, avoiding additional sample marking or coloration. The methodology includes several stages: image preprocessing, segmenting images to isolate microorganisms from the background, and calculating colour- and texture-related image features for classification. Using an extensive dataset of 2866 images from the National Institute of Horticultural Research in Skierniewice, an Extreme Learning Machine model was trained and validated. The model showcases high accuracy and computational efficiency compared to other state-of-the-art machine learning methods, e.g. CatBoost, Random Forest or Convolutional Neural Networks. Statistical techniques, including Multivariate Analysis of Variance, were employed to confirm significant differences among the datasets, enhancing the model's robustness. Additionally, Shapley Additive Explanations values provided transparency into the model's decision-making process. This approach has the potential to improve early detection and management of soil pathogens, promoting sustainable agriculture and demonstrating machine learning's potential in environmental monitoring, microbial ecology and industrial microbiology.
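The defining trick of an Extreme Learning Machine is that the hidden layer is random and never trained; only the output weights are fitted, in closed form via a pseudo-inverse. The minimal NumPy sketch below shows that structure on invented Gaussian-blob data standing in for the paper's colour/texture feature vectors (dimensions, hidden size, and dataset are all illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class dataset: two Gaussian blobs standing in for image feature
# vectors (invented; the paper uses colour/texture descriptors).
X = np.vstack([rng.normal(-2, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
y = np.repeat([0, 1], 50)
T = np.eye(2)[y]                    # one-hot targets

# ELM: random, fixed hidden layer + least-squares output weights.
n_hidden = 64
W = rng.normal(size=(8, n_hidden))  # input weights, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)              # hidden-layer activations
beta = np.linalg.pinv(H) @ T        # closed-form output weights

pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```

Because training reduces to one linear solve, ELMs are far cheaper to fit than iteratively trained networks, which is the computational-efficiency claim the abstract makes.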
Affiliation(s)
- Karol Struniawski
- Institute of Information Technology, Warsaw University of Life Sciences - SGGW, ul. Nowoursynowska 159, 02-776, Warsaw, Poland.
- Ryszard Kozera
- Institute of Information Technology, Warsaw University of Life Sciences - SGGW, ul. Nowoursynowska 159, 02-776, Warsaw, Poland
- School of Physics, Mathematics and Computing, The University of Western Australia, 35 Stirling Highway, Crawley, Perth, WA, 6009, Australia
- Paweł Trzciński
- The National Institute of Horticultural Research, ul. Pomologiczna 18, 96-100, Skierniewice, Poland
- Lidia Sas-Paszt
- The National Institute of Horticultural Research, ul. Pomologiczna 18, 96-100, Skierniewice, Poland
4
Pan GZ, Yang M, Zhou J, Yuan H, Miao C, Zhang G. Quantifying entanglement for unknown quantum states via artificial neural networks. Sci Rep 2024; 14:26267. PMID: 39487243. PMCID: PMC11530649. DOI: 10.1038/s41598-024-76978-7. Received: 05/28/2024; Accepted: 10/18/2024. Open access.
Abstract
Quantum entanglement plays a crucial role in quantum computation and quantum information, hence quantifying unknown entanglement is an important task. Because the amount of entanglement cannot be obtained directly by measuring any physical observable, quantifying entanglement experimentally remains an open problem. In this work, we provide an effective way to quantify entanglement for unknown quantum states via artificial neural networks. By choosing the expectation values of measurements as input features and the values of entanglement measures as labels, we train artificial neural network models to predict the entanglement of new quantum states accurately. Our method does not require full information about the unknown quantum states, which highlights the effectiveness and versatility of machine learning in exploring quantum entanglement.
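The supervised setup described here (features = measurement expectation values, labels = an entanglement measure) can be made concrete on a one-parameter family of two-qubit states, cos(t)|00⟩ + sin(t)|11⟩, for which the concurrence has the closed form |sin 2t| and the expectation value ⟨Z⊗I⟩ equals cos 2t. The sketch below substitutes a simple polynomial least-squares fit for the paper's neural network, purely to show that the label is learnable from the feature; the state family, regressor, and degree are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Family of two-qubit states cos(t)|00> + sin(t)|11>.
t = np.linspace(0, np.pi / 2, 200)
z = np.cos(2 * t)            # feature: expectation value <Z x I>
C = np.abs(np.sin(2 * t))    # label: concurrence (exact for this family)

# For this family C = sqrt(1 - z**2), so feature and label satisfy C^2 + z^2 = 1.
# Fit the label from the feature; a degree-8 polynomial stands in for the ANN.
coef = np.polyfit(z, C, deg=8)
C_hat = np.polyval(coef, z)

err = np.max(np.abs(C_hat - C))
print("max regression error:", err)
```

A general state needs more measurement settings as features and a more flexible regressor, but the pipeline (measure expectations, regress the entanglement measure) is the same.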
Affiliation(s)
- Guo-Zhu Pan
- School of Electrical and Photoelectric Engineering, West Anhui University, Lu'an, 237012, China
- Ming Yang
- School of Physics and Optoelectronic Engineering, Anhui University, Hefei, 230601, China.
- Leibniz International Joint Research Center of Materials Sciences of Anhui Province, Anhui University, Hefei, 230601, China.
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, 230088, China.
- Jian Zhou
- School of Electrical and Photoelectric Engineering, West Anhui University, Lu'an, 237012, China
- Hao Yuan
- School of Electrical and Photoelectric Engineering, West Anhui University, Lu'an, 237012, China
- Chun Miao
- School of Mechanical and Electronic Engineering, Chizhou University, Chizhou, 247000, China
- Gang Zhang
- School of Electrical and Photoelectric Engineering, West Anhui University, Lu'an, 237012, China.
5
Chai B, Efstathiou C, Yue H, Draviam VM. Opportunities and challenges for deep learning in cell dynamics research. Trends Cell Biol 2024; 34:955-967. PMID: 38030542. DOI: 10.1016/j.tcb.2023.10.010. Received: 07/31/2023; Revised: 09/30/2023; Accepted: 10/13/2023.
Abstract
The growth of artificial intelligence (AI) has led to an increase in the adoption of computer vision and deep learning (DL) techniques for the evaluation of microscopy images and movies. This adoption has not only addressed hurdles in quantitative analysis of dynamic cell biological processes but has also started to support advances in drug development, precision medicine, and genome-phenome mapping. We survey existing AI-based techniques and tools, as well as open-source datasets, with a specific focus on the computational tasks of segmentation, classification, and tracking of cellular and subcellular structures and dynamics. We summarise long-standing challenges in microscopy video analysis from a computational perspective and review emerging research frontiers and innovative applications for DL-guided automation in cell dynamics research.
Affiliation(s)
- Binghao Chai
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Christoforos Efstathiou
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Haoran Yue
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Viji M Draviam
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK; The Alan Turing Institute, London NW1 2DB, UK.
6
Ma P, Shang S, Huang Y, Liu R, Yu H, Zhou F, Yu M, Xiao Q, Zhang Y, Ding Q, Nie Y, Wang Z, Chen Y, Yu A, Shi Q. Joint use of population pharmacokinetics and machine learning for prediction of valproic acid plasma concentration in elderly epileptic patients. Eur J Pharm Sci 2024; 201:106876. PMID: 39128815. DOI: 10.1016/j.ejps.2024.106876. Received: 07/05/2024; Revised: 07/31/2024; Accepted: 08/08/2024.
Abstract
BACKGROUND: Valproic acid (VPA) is a commonly used broad-spectrum antiepileptic drug. In elderly epileptic patients, VPA plasma concentrations show considerable variation. We aim to establish a prediction model for VPA plasma concentration via a combination of machine learning and population pharmacokinetics (PPK).
METHODS: A retrospective study was performed incorporating 43 variables, including PPK parameters. Recursive Feature Elimination with Cross-Validation was used for feature selection. Multiple algorithms were combined into an ensemble model, which was interpreted with Shapley Additive exPlanations.
RESULTS: Including the PPK parameters significantly enhances the performance of each individual algorithm. An ensemble of categorical boosting, light gradient boosting machine, and random forest (weighted 7:2:1), which achieved the highest R² (0.74), was selected as the final model. After feature selection the model retained 11 variables, with predictive performance comparable to the model incorporating all variables.
CONCLUSIONS: Our model was specifically tailored for elderly epileptic patients, providing an efficient and cost-effective approach to predicting VPA plasma concentration. The model combines classical PPK with machine learning and was optimized through feature selection and algorithm integration. It can serve as a fundamental tool for clinicians in determining VPA plasma concentration and individualized dosing regimens.
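The 7:2:1 composition described above amounts to a fixed-weight blend of three base models' predictions. The sketch below shows that blending step only: the toy "measured" concentrations, the noise levels standing in for fitted CatBoost/LightGBM/random-forest predictions, and the helper `r2_score` are all invented for illustration, not the paper's data or code.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(1)
y = rng.uniform(40, 100, 200)   # toy "measured" VPA concentrations (invented)

# Stand-ins for three fitted base learners: truth plus model-specific noise,
# with the first (CatBoost-like) model assumed most accurate.
pred_cat = y + rng.normal(0, 5, 200)
pred_lgbm = y + rng.normal(0, 8, 200)
pred_rf = y + rng.normal(0, 10, 200)

# Blend with the reported 7:2:1 weighting.
w = np.array([0.7, 0.2, 0.1])
ensemble = w @ np.vstack([pred_cat, pred_lgbm, pred_rf])

print("ensemble R2:", round(r2_score(y, ensemble), 3))
```

Averaging models with independent errors shrinks the residual variance below that of the weaker members, which is why a weighted ensemble can outscore each base learner.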
Affiliation(s)
- Pan Ma
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China; Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China; Department of Pharmacy, the First Affiliated Hospital of Army Medical University, No. 29 Gaotanyan Street, Chongqing 400038, China
- Shenglan Shang
- Department of Clinical Pharmacy, General Hospital of Central Theater Command, No. 627 Wuluo Street, Wuhan City, Hubei Province 430070, China
- Yifan Huang
- Medical Big Data and Artificial Intelligence Center, the First Affiliated Hospital of Army Medical University, Chongqing 400038, China
- Ruixiang Liu
- Department of Pharmacy, the First Affiliated Hospital of Army Medical University, No. 29 Gaotanyan Street, Chongqing 400038, China
- Hongfan Yu
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China; Chongqing Key Laboratory of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China
- Fan Zhou
- Department of Clinical Pharmacy, General Hospital of Central Theater Command, No. 627 Wuluo Street, Wuhan City, Hubei Province 430070, China
- Mengchen Yu
- Department of Clinical Pharmacy, General Hospital of Central Theater Command, No. 627 Wuluo Street, Wuhan City, Hubei Province 430070, China
- Qin Xiao
- Department of Pharmacy, Shengjing Hospital, China Medical University, Shenyang 110002, China
- Ying Zhang
- Department of Clinical Pharmacy, General Hospital of Central Theater Command, No. 627 Wuluo Street, Wuhan City, Hubei Province 430070, China
- Qianxue Ding
- Department of Clinical Pharmacy, General Hospital of Central Theater Command, No. 627 Wuluo Street, Wuhan City, Hubei Province 430070, China
- Yuxian Nie
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China
- Zhibiao Wang
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China
- Yongchuan Chen
- Department of Pharmacy, the First Affiliated Hospital of Army Medical University, No. 29 Gaotanyan Street, Chongqing 400038, China.
- Airong Yu
- Department of Clinical Pharmacy, General Hospital of Central Theater Command, No. 627 Wuluo Street, Wuhan City, Hubei Province 430070, China.
- Qiuling Shi
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing 400016, China; School of Public Health, Chongqing Medical University, Chongqing 400016, China.
7
Rosen J, Alford S, Allan B, Anand V, Arnon S, Arockiaraj FG, Art J, Bai B, Balasubramaniam GM, Birnbaum T, Bisht NS, Blinder D, Cao L, Chen Q, Chen Z, Dubey V, Egiazarian K, Ercan M, Forbes A, Gopakumar G, Gao Y, Gigan S, Gocłowski P, Gopinath S, Greenbaum A, Horisaki R, Ierodiaconou D, Juodkazis S, Karmakar T, Katkovnik V, Khonina SN, Kner P, Kravets V, Kumar R, Lai Y, Li C, Li J, Li S, Li Y, Liang J, Manavalan G, Mandal AC, Manisha M, Mann C, Marzejon MJ, Moodley C, Morikawa J, Muniraj I, Narbutis D, Ng SH, Nothlawala F, Oh J, Ozcan A, Park Y, Porfirev AP, Potcoava M, Prabhakar S, Pu J, Rai MR, Rogalski M, Ryu M, Choudhary S, Salla GR, Schelkens P, Şener SF, Shevkunov I, Shimobaba T, Singh RK, Singh RP, Stern A, Sun J, Zhou S, Zuo C, Zurawski Z, Tahara T, Tiwari V, Trusiak M, Vinu RV, Volotovskiy SG, Yılmaz H, De Aguiar HB, Ahluwalia BS, Ahmad A. Roadmap on computational methods in optical imaging and holography [Invited]. Appl Phys B 2024; 130:166. PMID: 39220178. PMCID: PMC11362238. DOI: 10.1007/s00340-024-08280-3. Received: 01/30/2024; Accepted: 07/10/2024.
Abstract
Computational methods have become cornerstones of optical imaging and holography in recent years. Every year, the dependence of optical imaging and holography on computational methods increases significantly, to the extent that optical methods and components are being completely and efficiently replaced with computational methods at low cost. This roadmap reviews the current scenario in four major areas, namely incoherent digital holography, quantitative phase imaging, imaging through scattering layers, and super-resolution imaging. In addition to registering the perspectives of the modern-day architects of the above research areas, the roadmap also reports some of the latest studies on the topic. Computational codes and pseudocodes are presented in a plug-and-play fashion so that readers can not only read and understand the latest algorithms but also practice them on their own data. We believe that this roadmap will be a valuable tool for analyzing current trends in computational methods and for predicting and preparing their future in optical imaging and holography. Supplementary Information: The online version contains supplementary material available at 10.1007/s00340-024-08280-3.
Affiliation(s)
- Joseph Rosen
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Simon Alford
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Blake Allan
- Faculty of Science Engineering and Built Environment, Deakin University, Princes Highway, Warrnambool, VIC 3280 Australia
- Vijayakumar Anand
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Optical Sciences Center, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- Shlomi Arnon
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Francis Gracy Arockiaraj
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Jonathan Art
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Bijie Bai
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- Ganesh M. Balasubramaniam
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Tobias Birnbaum
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- Swave BV, Gaston Geenslaan 2, 3001 Leuven, Belgium
- Nandan S. Bisht
- Applied Optics and Spectroscopy Laboratory, Department of Physics, Soban Singh Jeena University Campus Almora, Almora, Uttarakhand 263601 India
- David Blinder
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- IMEC, Kapeldreef 75, 3001 Leuven, Belgium
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Chiba Japan
- Liangcai Cao
- Department of Precision Instruments, Tsinghua University, Beijing, 100084 China
- Qian Chen
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Ziyang Chen
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Vishesh Dubey
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Karen Egiazarian
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Mert Ercan
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Department of Physics, Bilkent University, 06800 Ankara, Turkey
- Andrew Forbes
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- G. Gopakumar
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala India
- Yunhui Gao
- Department of Precision Instruments, Tsinghua University, Beijing, 100084 China
- Sylvain Gigan
- Laboratoire Kastler Brossel, Centre National de la Recherche Scientifique (CNRS) UMR 8552, Sorbonne Université, École Normale Supérieure-Paris Sciences et Lettres (PSL) Research University, Collège de France, 24 rue Lhomond, 75005 Paris, France
- Paweł Gocłowski
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Alon Greenbaum
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Bioinformatics Research Center, North Carolina State University, Raleigh, NC 27695 USA
- Ryoichi Horisaki
- Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 Japan
- Daniel Ierodiaconou
- Faculty of Science Engineering and Built Environment, Deakin University, Princes Highway, Warrnambool, VIC 3280 Australia
- Saulius Juodkazis
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Optical Sciences Center, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- World Research Hub Initiative (WRHI), Tokyo Institute of Technology, 2-12-1, Ookayama, Tokyo, 152-8550 Japan
- Tanushree Karmakar
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Vladimir Katkovnik
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Svetlana N. Khonina
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Samara National Research University, 443086 Samara, Russia
- Peter Kner
- School of Electrical and Computer Engineering, University of Georgia, Athens, GA 30602 USA
- Vladislav Kravets
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Ravi Kumar
- Department of Physics, SRM University – AP, Amaravati, Andhra Pradesh 522502 India
- Yingming Lai
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, Varennes, QC J3X1P7, Canada
- Chen Li
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Jiaji Li
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Shaoheng Li
- School of Electrical and Computer Engineering, University of Georgia, Athens, GA 30602 USA
- Yuzhu Li
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- Jinyang Liang
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, Varennes, QC J3X1P7, Canada
- Gokul Manavalan
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Aditya Chandra Mandal
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Manisha Manisha
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Christopher Mann
- Department of Applied Physics and Materials Science, Northern Arizona University, Flagstaff, AZ 86011 USA
- Center for Materials Interfaces in Research and Development, Northern Arizona University, Flagstaff, AZ 86011 USA
- Marcin J. Marzejon
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- Chané Moodley
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- Junko Morikawa
- World Research Hub Initiative (WRHI), Tokyo Institute of Technology, 2-12-1, Ookayama, Tokyo, 152-8550 Japan
- Inbarasan Muniraj
- LiFE Lab, Department of Electronics and Communication Engineering, Alliance School of Applied Engineering, Alliance University, Bangalore, Karnataka 562106 India
- Donatas Narbutis
- Institute of Theoretical Physics and Astronomy, Faculty of Physics, Vilnius University, Sauletekio 9, 10222 Vilnius, Lithuania
- Soon Hock Ng
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Optical Sciences Center, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- Fazilah Nothlawala
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- Jeonghun Oh
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141 South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141 South Korea
- Aydogan Ozcan
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141 South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141 South Korea
- Tomocube Inc., Daejeon, 34051 South Korea
- Alexey P. Porfirev
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Mariana Potcoava
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Shashi Prabhakar
- Quantum Science and Technology Laboratory, Physical Research Laboratory, Navrangpura, Ahmedabad, 380009 India
- Jixiong Pu
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Mani Ratnam Rai
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Mikołaj Rogalski
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- Meguya Ryu
- Research Institute for Material and Chemical Measurement, National Metrology Institute of Japan (AIST), 1-1-1 Umezono, Tsukuba, 305-8563 Japan
| | - Sakshi Choudhary
- Department of Chemical Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
| | - Gangi Reddy Salla
- Department of Physics, SRM University – AP, Amaravati, Andhra Pradesh 522502 India
| | - Peter Schelkens
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- IMEC, Kapeldreef 75, 3001 Leuven, Belgium
| | - Sarp Feykun Şener
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Department of Physics, Bilkent University, 06800 Ankara, Turkey
| | - Igor Shevkunov
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
| | - Tomoyoshi Shimobaba
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Chiba, Japan
| | - Rakesh K. Singh
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
| | - Ravindra P. Singh
- Quantum Science and Technology Laboratory, Physical Research Laboratory, Navrangpura, Ahmedabad, 380009 India
| | - Adrian Stern
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
| | - Jiasong Sun
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
| | - Shun Zhou
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
| | - Chao Zuo
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
| | - Zack Zurawski
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
| | - Tatsuki Tahara
- Applied Electromagnetic Research Center, Radio Research Institute, National Institute of Information and Communications Technology (NICT), 4-2-1 Nukuikitamachi, Koganei, Tokyo 184-8795 Japan
| | - Vipin Tiwari
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
| | - Maciej Trusiak
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
| | - R. V. Vinu
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
| | - Sergey G. Volotovskiy
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
| | - Hasan Yılmaz
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
| | - Hilton Barbosa De Aguiar
- Laboratoire Kastler Brossel, Centre National de la Recherche Scientifique (CNRS) UMR 8552, Sorbonne Université, École Normale Supérieure-Paris Sciences et Lettres (PSL) Research University, Collège de France, 24 rue Lhomond, 75005 Paris, France
| | - Balpreet S. Ahluwalia
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
| | - Azeem Ahmad
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
8
Prabhakar SK, Won DO. A Methodical Framework Utilizing Transforms and Biomimetic Intelligence-Based Optimization with Machine Learning for Speech Emotion Recognition. Biomimetics (Basel) 2024; 9:513. [PMID: 39329535 PMCID: PMC11430715 DOI: 10.3390/biomimetics9090513] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2024] [Revised: 08/19/2024] [Accepted: 08/23/2024] [Indexed: 09/28/2024] Open
Abstract
Speech emotion recognition (SER) tasks are conducted to extract emotional features from speech signals. The characteristic parameters are analyzed, and the speech emotional states are judged. At present, SER is an important aspect of artificial psychology and artificial intelligence, as it is widely implemented in many applications in the human-computer interface, medical, and entertainment fields. In this work, six transforms, namely, the synchrosqueezing transform, fractional Stockwell transform (FST), K-sine transform-dependent integrated system (KSTDIS), flexible analytic wavelet transform (FAWT), chirplet transform, and superlet transform, are initially applied to speech emotion signals. Once the transforms are applied and the features are extracted, the essential features are selected using three techniques: the Overlapping Information Feature Selection (OIFS) technique followed by two biomimetic intelligence-based optimization techniques, namely, Harris Hawks Optimization (HHO) and the Chameleon Swarm Algorithm (CSA). The selected features are then classified with the help of ten basic machine learning classifiers, with special emphasis given to the extreme learning machine (ELM) and twin extreme learning machine (TELM) classifiers. An experiment is conducted on four publicly available datasets, namely, EMOVO, RAVDESS, SAVEE, and Berlin Emo-DB. The best results are obtained as follows: the Chirplet + CSA + TELM combination obtains a classification accuracy of 80.63% on the EMOVO dataset, the FAWT + HHO + TELM combination obtains a classification accuracy of 85.76% on the RAVDESS dataset, the Chirplet + OIFS + TELM combination obtains a classification accuracy of 83.94% on the SAVEE dataset, and, finally, the KSTDIS + CSA + TELM combination obtains a classification accuracy of 89.77% on the Berlin Emo-DB dataset.
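The extreme learning machine (ELM) emphasized in the abstract has a particularly compact formulation: hidden-layer weights are drawn at random and frozen, and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse. A minimal NumPy sketch on toy data (not the authors' implementation; the function names, dataset, and dimensions are illustrative):

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=None):
    """Train a basic extreme learning machine (ELM).

    Hidden-layer weights are random and never updated; only the output
    weights beta are fitted, via the Moore-Penrose pseudoinverse.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed input->hidden weights
    b = rng.standard_normal(n_hidden)                # fixed hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: the class is the sign of the first feature.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
labels = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[labels]                                # one-hot targets
W, b, beta = elm_train(X, Y, n_hidden=64, seed=1)
acc = (elm_predict(X, W, b, beta).argmax(axis=1) == labels).mean()
```

The twin ELM (TELM) used in the paper extends this idea with a pair of non-parallel separating planes, but the closed-form least-squares fit shown here is the common core.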
Affiliation(s)
| | - Dong-Ok Won
- Department of Artificial Intelligence Convergence, Chuncheon 24252, Republic of Korea;
9
Wen J, Ma B. Enhancing museum experience through deep learning and multimedia technology. Heliyon 2024; 10:e32706. [PMID: 38975172 PMCID: PMC11226825 DOI: 10.1016/j.heliyon.2024.e32706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Revised: 06/04/2024] [Accepted: 06/07/2024] [Indexed: 07/09/2024] Open
Abstract
Amidst the swift progression of artificial intelligence (AI) technology, the museum sector has witnessed a notable inclination towards its adoption. This manuscript endeavours to amplify the interactive milieu of contemporary museum patrons by amalgamating a deep learning algorithm with multimedia technology. The crux of our investigation is the exploration of an adaptive convolutional neural network (CNN) to enrich the interactive engagement of museum visitors. Initially, we leverage the adaptive CNN for the image recognition chore pertaining to museum artifacts and exhibits, thereby facilitating automatic recognition and categorization. Furthermore, to surmount the constraints of conventional pooling algorithms in image feature extraction, we suggest an adaptive pooling algorithm, grounded in the maximum pooling algorithm paradigm. Subsequently, multimedia algorithms are amalgamated into the interactive apparatus, enabling visitors to immerse in exhibits and avail more profound information and experiences. Through juxtaposition with traditional image processing algorithms, the efficacy of our proposed algorithm within a museum ambiance is assessed. Experimental outcomes evince that our algorithm attains superior accuracy and robustness in artifact identification and classification endeavours. In comparison to alternative algorithms, our methodology furnishes more precise and comprehensive displays and interpretations, accurately discerning and categorizing a myriad of exhibit types. This research unveils innovative notions for the digital metamorphosis and advancement of modern museums. Through the incorporation of avant-garde deep learning algorithms and multimedia technologies, the museum visitor experience is elevated, proffering more enthralling and interactive displays. The elucidations of this manuscript hold substantial merit for the continual evolution and innovation within the museum industry.
Affiliation(s)
- Jingbo Wen
- College of Fine Arts, Capital Normal University, Beijing, 100048, China
| | - Baoxia Ma
- Academy of Fine Arts, Beijing Institute of Fashion Technology, Beijing, 100029, China
10
Cao L, Zeng L, Wang Y, Cao J, Han Z, Chen Y, Wang Y, Zhong G, Qiao S. U 2-Net and ResNet50-Based Automatic Pipeline for Bacterial Colony Counting. Microorganisms 2024; 12:201. [PMID: 38258027 PMCID: PMC10820204 DOI: 10.3390/microorganisms12010201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2023] [Revised: 01/15/2024] [Accepted: 01/16/2024] [Indexed: 01/24/2024] Open
Abstract
In this paper, an automatic colony counting system based on an improved image preprocessing algorithm and convolutional neural network (CNN)-assisted automatic counting method was developed. Firstly, we assembled an LED backlighting illumination platform as an image capturing system to obtain photographs of laboratory cultures. From these photographs, a dataset was assembled consisting of 390 photos of agar plate cultures, which included 8 microorganisms. Secondly, we implemented a new algorithm for image preprocessing based on light intensity correction, which facilitated clearer differentiation between colony and media areas. Thirdly, a U2-Net was used to predict the probability distribution of the edge of the Petri dish in images to locate the region of interest (ROI), and then threshold segmentation was applied to separate it. This U2-Net achieved an F1 score of 99.5% and a mean absolute error (MAE) of 0.0033 on the validation set. Then, another U2-Net was used to separate the colony region within the ROI. This U2-Net achieved an F1 score of 96.5% and an MAE of 0.005 on the validation set. After that, the colony area was segmented into multiple components containing single or adhesive colonies. Finally, the colony components (CC) were rotated and the image crops were resized as the input (with 14,921 image crops in the training set and 4281 image crops in the validation set) for the ResNet50 network to automatically count the number of colonies. Our method achieved an overall recovery of 97.82% for colony counting and exhibited excellent performance in adhesion classification. To the best of our knowledge, the proposed "light intensity correction-based image preprocessing→U2-Net segmentation for Petri dish edge→U2-Net segmentation for colony region→ResNet50-based counting" scheme represents a new attempt and demonstrates a high degree of automation and accuracy in recognizing and counting single-colony and multi-colony targets.
Affiliation(s)
- Libo Cao
- Center for Global Health, Nanjing Medical University, Nanjing 211166, China (Y.W.)
| | - Liping Zeng
- Department of Pathogen Biology, School of Basic Medical Sciences, Nanjing Medical University, Nanjing 211166, China;
| | - Yaoxuan Wang
- Center for Global Health, Nanjing Medical University, Nanjing 211166, China (Y.W.)
| | - Jiayi Cao
- Center for Global Health, Nanjing Medical University, Nanjing 211166, China (Y.W.)
| | - Ziyu Han
- Center for Global Health, Nanjing Medical University, Nanjing 211166, China (Y.W.)
| | - Yang Chen
- Center for Global Health, Nanjing Medical University, Nanjing 211166, China (Y.W.)
| | - Yuxi Wang
- Center for Global Health, Nanjing Medical University, Nanjing 211166, China (Y.W.)
| | - Guowei Zhong
- Center for Global Health, Nanjing Medical University, Nanjing 211166, China (Y.W.)
| | - Shanlei Qiao
- Center for Global Health, Nanjing Medical University, Nanjing 211166, China (Y.W.)
11
Liu Y, Feng Y, Qian L, Wang Z, Hu X. Deep learning diagnostic performance and visual insights in differentiating benign and malignant thyroid nodules on ultrasound images. Exp Biol Med (Maywood) 2023; 248:2538-2546. [PMID: 38279511 PMCID: PMC10854474 DOI: 10.1177/15353702231220664] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Accepted: 10/13/2023] [Indexed: 01/28/2024] Open
Abstract
This study aims to construct and evaluate a deep learning model, utilizing ultrasound images, to accurately differentiate benign and malignant thyroid nodules. The objective includes visualizing the model's process for interpretability and comparing its diagnostic precision with a cohort of 80 radiologists. We employed ResNet as the classification backbone for thyroid nodule prediction. The model was trained using 2096 ultrasound images of 655 distinct thyroid nodules. For performance evaluation, an independent test set comprising 100 cases of thyroid nodules was curated. In addition, to demonstrate the superiority of the artificial intelligence (AI) model over radiologists, a Turing test was conducted with 80 radiologists of varying clinical experience. This was meant to assess which group of radiologists' conclusions were in closer alignment with AI predictions. Furthermore, to highlight the interpretability of the AI model, gradient-weighted class activation mapping (Grad-CAM) was employed to visualize the model's areas of focus during its prediction process. In this cohort, AI diagnostics demonstrated a sensitivity of 81.67%, a specificity of 60%, and an overall diagnostic accuracy of 73%. In comparison, the panel of radiologists on average exhibited a diagnostic accuracy of 62.9%. The AI's diagnostic process was significantly faster than that of the radiologists. The generated heat-maps highlighted the model's focus on areas characterized by calcification, solid echo and higher echo intensity, suggesting these areas might be indicative of malignant thyroid nodules. Our study supports the notion that deep learning can be a valuable diagnostic tool with comparable accuracy to experienced senior radiologists in the diagnosis of malignant thyroid nodules. The interpretability of the AI model's process suggests that it could be clinically meaningful. Further studies are necessary to improve diagnostic accuracy and support auxiliary diagnoses in primary care settings.
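Grad-CAM, used above to visualize the model's focus, reduces to a small computation once a convolutional layer's activations and the gradients of the class score with respect to them are available: each channel is weighted by its spatially averaged gradient, and the heat-map is a ReLU of the weighted sum. A framework-agnostic NumPy sketch (the shapes and toy arrays are illustrative, not the study's ResNet):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat-map from one conv layer.

    activations: (K, H, W) feature maps A_k
    gradients:   (K, H, W) d(class score)/dA_k
    Returns an (H, W) map, ReLU-ed and scaled to [0, 1].
    """
    # alpha_k: global-average-pool the gradients over the spatial axes
    alpha = gradients.mean(axis=(1, 2))                              # (K,)
    # weighted sum of the maps, then ReLU to keep positive evidence only
    cam = np.maximum((alpha[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: channel 0 has positive gradients, channel 1 negative,
# so only channel 0's activation survives the ReLU.
A = np.zeros((2, 4, 4)); A[0, 1, 1] = 1.0; A[1, 2, 2] = 1.0
G = np.stack([np.full((4, 4), 1.0), np.full((4, 4), -1.0)])
cam = grad_cam(A, G)
```

In practice the resulting low-resolution map is upsampled and overlaid on the input image, which is how the calcification and echo-intensity regions were highlighted in the study.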
Affiliation(s)
- Yujiang Liu
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - Ying Feng
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - Linxue Qian
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
| | - Zhixiang Wang
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Department of Radiation Oncology (Maastro), GROW—School for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht 6229 ET, The Netherlands
| | - Xiangdong Hu
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
12
Arora P, Tewary S, Krishnamurthi S, Kumari N. An experimental setup and segmentation method for CFU counting on agar plate for the assessment of drinking water. J Microbiol Methods 2023; 214:106829. [PMID: 37797659 DOI: 10.1016/j.mimet.2023.106829] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2023] [Revised: 10/02/2023] [Accepted: 10/02/2023] [Indexed: 10/07/2023]
Abstract
Quantification of bacterial colonies on an agar plate is a daily routine for a microbiologist to determine the number of viable microorganisms in a sample. In general, microbiologists perform a visual assessment of bacterial colonies, which is time-consuming (about 2 min per plate), tedious, and subjective. Some automatic counting algorithms have been developed that save labour and time, but their results are affected by non-uniform illumination of the agar plate. To improve on this, the present manuscript aims to develop an inexpensive and efficient device to acquire S. aureus images, paired with an automatic counting method using image processing techniques under real laboratory conditions. The proposed method (P_ColonyCount) includes region of interest extraction and color space transformation followed by filtering, thresholding, morphological operations, distance transform, and the watershed technique for the quantification of isolated and overlapping colonies. The present work also shows a comparative study on grayscale, K, and green channels by applying different filter and thresholding techniques on 42 images. The results of all channels were compared with the score provided by the expert (manual count). Among the three channels tested with the proposed method (P_ColonyCount), the K channel gives the best outcome in terms of precision, recall, and F-measure, which are 0.99, 0.99, and 0.99 (2 h); 0.98, 0.99, and 0.98 (4 h); and 0.98, 0.98, and 0.98 (6 h), respectively. The execution time of the manual and the proposed method (P_ColonyCount) for 42 images ranges from 19 to 113 s and from 15 to 31 s, respectively. Apart from this, a user-friendly graphical user interface is also developed for the convenient enumeration of colonies without any expert knowledge/training. The developed imaging device will be useful for researchers and teaching lab settings.
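The distance-transform and watershed steps of such colony-counting pipelines can be illustrated compactly: after thresholding, the Euclidean distance transform peaks inside each colony, so labelling its high-valued cores yields one marker per colony and splits touching blobs that plain connected-component labelling would merge. A minimal SciPy sketch on a synthetic mask (not the P_ColonyCount code; the `core_frac` threshold is an illustrative stand-in for the full marker-plus-watershed step):

```python
import numpy as np
from scipy import ndimage as ndi

def count_colonies(binary, core_frac=0.6):
    """Count colonies in a binary plate mask.

    Touching colonies are split by taking the Euclidean distance
    transform and labelling its high-valued cores, the marker step
    that precedes a full watershed segmentation.
    """
    dist = ndi.distance_transform_edt(binary)
    cores = dist > core_frac * dist.max()   # one core per (roughly round) colony
    _, n = ndi.label(cores)
    return n

# Synthetic plate: two touching disks plus one isolated disk.
yy, xx = np.mgrid[0:60, 0:60]
disk = lambda cy, cx, r: (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
mask = disk(20, 20, 8) | disk(20, 36, 8) | disk(45, 45, 8)

_, naive = ndi.label(mask)      # plain labelling merges the touching pair
split = count_colonies(mask)    # core-based counting recovers all three
```

The fixed fraction works here because the synthetic colonies are equal-sized; a production pipeline would flood a watershed from the markers instead so that colonies of different radii are handled uniformly.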
Affiliation(s)
- Prachi Arora
- Thin Film Coating Facility/Materials Science and Sensor Applications, CSIR-Central Scientific Instruments Organisation (CSIR-CSIO), Sector 30-C, Chandigarh 160030, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad 201002, India
| | - Suman Tewary
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad 201002, India; Advanced Materials and Processes, CSIR-National Metallurgical Laboratory (CSIR-NML), Jamshedpur 831007, India
| | - Srinivasan Krishnamurthi
- MTCC-Gene bank, CSIR-Institute of Microbial Technology (CSIR-IMTECH), Sector 39-A, Chandigarh 160039, India
| | - Neelam Kumari
- Thin Film Coating Facility/Materials Science and Sensor Applications, CSIR-Central Scientific Instruments Organisation (CSIR-CSIO), Sector 30-C, Chandigarh 160030, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad 201002, India.
13
Valente J, António J, Mora C, Jardim S. Developments in Image Processing Using Deep Learning and Reinforcement Learning. J Imaging 2023; 9:207. [PMID: 37888314 PMCID: PMC10607786 DOI: 10.3390/jimaging9100207] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 09/24/2023] [Accepted: 09/28/2023] [Indexed: 10/28/2023] Open
Abstract
The growth in the volume of data generated, consumed, and stored, which is estimated to exceed 180 zettabytes in 2025, represents a major challenge both for organizations and for society in general. In addition to being larger, datasets are increasingly complex, bringing new theoretical and computational challenges. Alongside this evolution, data science tools have exploded in popularity over the past two decades due to their myriad of applications when dealing with complex data, their high accuracy, flexible customization, and excellent adaptability. When it comes to images, data analysis presents additional challenges because as the quality of an image increases, which is desirable, so does the volume of data to be processed. Although classic machine learning (ML) techniques are still widely used in different research fields and industries, there has been great interest from the scientific community in the development of new artificial intelligence (AI) techniques. The resurgence of neural networks has boosted remarkable advances in areas such as the understanding and processing of images. In this study, we conducted a comprehensive survey regarding advances in AI design and the optimization solutions proposed to deal with image processing challenges. Despite the good results that have been achieved, there are still many challenges to face in this field of study. In this work, we discuss the main and more recent improvements, applications, and developments when targeting image processing applications, and we propose future research directions in this field of constant and fast evolution.
Affiliation(s)
- Jorge Valente
- Techframe-Information Systems, SA, 2785-338 São Domingos de Rana, Portugal; (J.V.); (J.A.)
| | - João António
- Techframe-Information Systems, SA, 2785-338 São Domingos de Rana, Portugal; (J.V.); (J.A.)
| | - Carlos Mora
- Smart Cities Research Center, Polytechnic Institute of Tomar, 2300-313 Tomar, Portugal;
| | - Sandra Jardim
- Smart Cities Research Center, Polytechnic Institute of Tomar, 2300-313 Tomar, Portugal;
14
Huang ZJ, Patel B, Lu WH, Yang TY, Tung WC, Bučinskas V, Greitans M, Wu YW, Lin PT. Yeast cell detection using fuzzy automatic contrast enhancement (FACE) and you only look once (YOLO). Sci Rep 2023; 13:16222. [PMID: 37758830 PMCID: PMC10533879 DOI: 10.1038/s41598-023-43452-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2023] [Accepted: 09/24/2023] [Indexed: 09/29/2023] Open
Abstract
In contemporary biomedical research, the accurate automatic detection of cells within intricate microscopic imagery stands as a cornerstone for scientific advancement. Leveraging state-of-the-art deep learning techniques, this study introduces a novel amalgamation of Fuzzy Automatic Contrast Enhancement (FACE) and the You Only Look Once (YOLO) framework to address this critical challenge of automatic cell detection. Yeast cells, representing a vital component of the fungi family, hold profound significance in elucidating the intricacies of eukaryotic cells and human biology. The proposed methodology introduces a paradigm shift in cell detection by optimizing image contrast through optimal fuzzy clustering within the FACE approach. This advancement mitigates the shortcomings of conventional contrast enhancement techniques, minimizing artifacts and suboptimal outcomes. Further enhancing contrast, a universal contrast enhancement variable is ingeniously introduced, enriching image clarity with automatic precision. Experimental validation encompasses a diverse range of yeast cell images subjected to rigorous quantitative assessment via Root-Mean-Square Contrast and Root-Mean-Square Deviation (RMSD). Comparative analyses against conventional enhancement methods showcase the superior performance of the FACE-enhanced images. Notably, the integration of the innovative You Only Look Once (YOLOv5) facilitates automatic cell detection within a finely partitioned grid system. This leads to the development of two models-one operating on pristine raw images, the other harnessing the enriched landscape of FACE-enhanced imagery. Strikingly, the FACE enhancement achieves exceptional accuracy in automatic yeast cell detection by YOLOv5 across both raw and enhanced images. Comprehensive performance evaluations encompassing tenfold accuracy assessments and confidence scoring substantiate the robustness of the FACE-YOLO model. Notably, the integration of FACE-enhanced images serves as a catalyst, significantly elevating the performance of YOLOv5 detection. Complementing these efforts, OpenCV lends computational acumen to delineate precise yeast cell contours and coordinates, augmenting the precision of cell detection.
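The Root-Mean-Square Contrast metric used in the quantitative assessment above is simply the standard deviation of the normalized pixel intensities. A small sketch of the metric (illustrative, not the paper's evaluation code):

```python
import numpy as np

def rms_contrast(img):
    """Root-Mean-Square contrast of an 8-bit grayscale image:
    the standard deviation of intensities normalized to [0, 1]."""
    x = img.astype(float) / 255.0
    return float(np.sqrt(np.mean((x - x.mean()) ** 2)))

# A high-contrast (half black, half white) image scores higher
# than a uniform gray one, which scores exactly zero.
flat = np.full((8, 8), 128, dtype=np.uint8)
halves = np.zeros((8, 8), dtype=np.uint8); halves[:, 4:] = 255
```

Contrast-enhancement methods such as FACE are judged to succeed when they raise this value without introducing artifacts, which is why it is paired with RMSD against the original image.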
Affiliation(s)
- Zheng-Jie Huang
- Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 10607, Taiwan
| | - Brijesh Patel
- Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 10607, Taiwan
| | - Wei-Hao Lu
- Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 10607, Taiwan
| | - Tz-Yu Yang
- Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 10607, Taiwan
| | - Wei-Cheng Tung
- Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 10607, Taiwan
| | | | - Modris Greitans
- Institute of Electronics and Computer Science, Riga, 1006, Latvia
| | - Yu-Wei Wu
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, 11031, Taiwan.
- Clinical Big Data Research Center, Taipei Medical University Hospital, Taipei, 11031, Taiwan.
- TMU Research Center for Digestive Medicine, Taipei Medical University, Taipei, 11031, Taiwan.
| | - Po Ting Lin
- Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei, 10607, Taiwan.
- Intelligent Manufacturing Innovation Center, National Taiwan University of Science and Technology, Taipei, 10607, Taiwan.
15
Kumar S, Arif T, Ahamad G, Chaudhary AA, Khan S, Ali MAM. An Efficient and Effective Framework for Intestinal Parasite Egg Detection Using YOLOv5. Diagnostics (Basel) 2023; 13:2978. [PMID: 37761346 PMCID: PMC10527934 DOI: 10.3390/diagnostics13182978] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Revised: 06/11/2023] [Accepted: 06/15/2023] [Indexed: 09/29/2023] Open
Abstract
Intestinal parasitic infections pose a grave threat to human health, particularly in tropical and subtropical regions. The traditional manual microscopy system of intestinal parasite detection remains the gold standard procedure for diagnosing parasite cysts or eggs. This approach is costly, time-consuming (30 min per sample), highly tedious, and requires a specialist. However, computer vision, based on deep learning, has made great strides in recent years. Despite the significant advances in deep convolutional neural network-based architectures, little research has been conducted to explore these techniques' potential in parasitology, specifically for intestinal parasites. This research presents a novel proposal for state-of-the-art transfer learning architecture for the detection and classification of intestinal parasite eggs from images. The ultimate goal is to ensure prompt treatment for patients while also alleviating the burden on experts. Our approach comprised two main stages: image pre-processing and augmentation in the first stage, and YOLOv5 algorithms for detection and classification in the second stage, followed by performance comparison based on different parameters. Remarkably, our algorithms achieved a mean average precision of approximately 97% and a detection time of only 8.5 ms per sample for a dataset of 5393 intestinal parasite images. This innovative approach holds tremendous potential to form a solid theoretical basis for real-time detection and classification in routine clinical examinations, addressing the increasing demand and accelerating the diagnostic process. Our research contributes to the development of cutting-edge technologies for the efficient and accurate detection of intestinal parasite eggs, advancing the field of medical imaging and diagnosis.
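The mean-average-precision figure reported above rests on an intersection-over-union (IoU) overlap test between predicted and ground-truth boxes: a detection counts as correct only when its IoU clears a threshold (0.5 is a common default). A minimal sketch of the criterion (illustrative; not the authors' YOLOv5 evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2),
    the overlap criterion underlying detection mAP scores."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # intersection area (0 if disjoint)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Two 10x10 boxes offset by 5 in x: intersection 50, union 150.
val = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Precision-recall curves are then traced per class as the confidence threshold varies, and mAP is the mean of the areas under those curves.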
Affiliation(s)
- Satish Kumar
- Department of Information Technology, BGSB University, Rajouri 185131, India
| | - Tasleem Arif
- Department of Information Technology, BGSB University, Rajouri 185131, India
| | - Gulfam Ahamad
- Department of Computer Sciences, Baba Ghulam Shah Badshah University, Rajouri 185131, India
| | - Anis Ahmad Chaudhary
- Department of Biology, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11623, Saudi Arabia
| | - Salahuddin Khan
- Department of Biochemistry, College of Medicine, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11623, Saudi Arabia
| | - Mohamed A. M. Ali
- Department of Biology, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11623, Saudi Arabia
- Department of Biochemistry, Faculty of Science, Ain Shams University, Cairo 11566, Egypt
16
Yang L, Zhang J, Yu J, Yu Z, Hao X, Gao F, Zhou C. Predicting plasma concentration of quetiapine in patients with depression using machine learning techniques based on real-world evidence. Expert Rev Clin Pharmacol 2023; 16:741-750. [PMID: 37466101 DOI: 10.1080/17512433.2023.2238604] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Revised: 06/19/2023] [Accepted: 07/13/2023] [Indexed: 07/20/2023]
Abstract
OBJECTIVES We develop a model for predicting quetiapine levels in patients with depression, using machine learning to support decisions on clinical regimens. METHODS Inpatients diagnosed with depression at the First Hospital of Hebei Medical University from 1 November 2019 to 31 August were enrolled. The ratio of training cohort to testing cohort was fixed at 80%:20% for the whole dataset. Univariate analysis was performed on all variables to screen those significantly influencing quetiapine therapeutic drug monitoring (TDM). The prediction abilities of nine machine learning and deep learning algorithms were compared. The prediction model was created using the algorithm with the best performance, and the model was interpreted using SHapley Additive exPlanations (SHAP). RESULTS There were 333 individuals and 412 cases of quetiapine TDM included in the study. Six significant variables were selected to establish the individualized medication model, and a quetiapine concentration prediction model was created with CatBoost. In the testing cohort, the accuracy of the predicted TDM values was 61.45%. The prediction accuracy of quetiapine concentration within the effective range (200-750 ng/mL) was 75.47%. CONCLUSIONS This study predicts the plasma concentration of quetiapine in depression patients by machine learning, which is meaningful for clinical medication guidance.
Affiliation(s)
- Lin Yang
- Department of Clinical Pharmacy, The First Hospital of Hebei Medical University, Shijiazhuang, China
- The Technology Innovation Center for Artificial Intelligence in Clinical Pharmacy of Hebei Province, The First Hospital of Hebei Medical University, Shijiazhuang, China
| | - Jinyuan Zhang
- Beijing Medicinovo Technology Co, Ltd, Beijing, China
| | - Jing Yu
- Department of Clinical Pharmacy, The First Hospital of Hebei Medical University, Shijiazhuang, China
- The Technology Innovation Center for Artificial Intelligence in Clinical Pharmacy of Hebei Province, The First Hospital of Hebei Medical University, Shijiazhuang, China
| | - Ze Yu
- Institute of Interdisciplinary Integrative Medicine Research, Shanghai University of Traditional Chinese Medicine, Shanghai, China
| | - Xin Hao
- Dalian Medicinovo Technology Co, Ltd, Dalian, China
| | - Fei Gao
- Beijing Medicinovo Technology Co, Ltd, Beijing, China
| | - Chunhua Zhou
- Department of Clinical Pharmacy, The First Hospital of Hebei Medical University, Shijiazhuang, China
- The Technology Innovation Center for Artificial Intelligence in Clinical Pharmacy of Hebei Province, The First Hospital of Hebei Medical University, Shijiazhuang, China
| |
Collapse
|
17
|
Zhang J, Li C, Rahaman MM, Yao Y, Ma P, Zhang J, Zhao X, Jiang T, Grzegorzek M. A Comprehensive Survey with Quantitative Comparison of Image Analysis Methods for Microorganism Biovolume Measurements. Arch Comput Methods Eng 2022; 30:639-673. [PMID: 36091717 PMCID: PMC9446599 DOI: 10.1007/s11831-022-09811-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 05/05/2021] [Accepted: 08/22/2022] [Indexed: 05/25/2023]
Abstract
With accelerating urbanization and rising living standards, microorganisms play an increasingly important role in industrial production, biotechnology, and food safety testing. Microorganism biovolume measurement is an essential part of microbial analysis, but traditional manual methods are time-consuming and make it difficult to measure these characteristics precisely. With the development of digital image processing techniques, the characteristics of a microbial population can be detected and quantified. Applications of microorganism biovolume measurement methods have developed since the 1980s. More than 62 articles are reviewed in this study, grouped by digital image analysis method over time. The study has high research significance and application value: microbial researchers can refer to it for a comprehensive understanding of microorganism biovolume measurement using digital image analysis methods and its potential applications.
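One classical pixel-counting approach from this literature can be sketched as follows; the spherical-cell shape assumption, the helper name `biovolume_from_mask`, and the 0.1 µm pixel size are illustrative assumptions, not taken from any specific surveyed paper:

```python
import math

def biovolume_from_mask(mask, pixel_um=0.1):
    """Estimate cell biovolume (µm³) from a binary segmentation mask,
    assuming a spherical cell whose projected area matches the mask.
    mask: 2D list of 0/1 values; pixel_um: pixel edge length in µm (assumed)."""
    area_px = sum(sum(row) for row in mask)      # count foreground pixels
    area_um2 = area_px * pixel_um ** 2           # projected area in µm²
    d = 2.0 * math.sqrt(area_um2 / math.pi)      # equivalent circular diameter
    return math.pi / 6.0 * d ** 3                # volume of a sphere of that diameter

# toy 4x4 mask with 4 foreground pixels
mask = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
volume = biovolume_from_mask(mask)
```

More elaborate surveyed methods replace the spherical assumption with rod, ellipsoid, or per-cell 3D models, but the area-to-volume conversion step follows this same pattern.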
Affiliation(s)
- Jiawei Zhang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169 China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169 China
- Md Mamunur Rahaman
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169 China
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052 Australia
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030 USA
- Pingli Ma
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169 China
- Jinghua Zhang
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169 China
- Institute of Medical Informatics, University of Luebeck, Luebeck, 23538 Germany
- Xin Zhao
- School of Resources and Civil Engineering, Northeastern University, Shenyang, 110004 China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, 610225 China
- Marcin Grzegorzek
- Institute of Medical Informatics, University of Luebeck, Luebeck, 23538 Germany
18
Ma P, Liu R, Gu W, Dai Q, Gan Y, Cen J, Shang S, Liu F, Chen Y. Construction and Interpretation of Prediction Model of Teicoplanin Trough Concentration via Machine Learning. Front Med (Lausanne) 2022; 9:808969. [PMID: 35360734 PMCID: PMC8963816 DOI: 10.3389/fmed.2022.808969] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 11/04/2021] [Accepted: 01/25/2022] [Indexed: 02/02/2023] Open
Abstract
Objective To establish an optimal model for predicting teicoplanin trough concentrations by machine learning, and to explain feature importance in the prediction model using the SHapley Additive exPlanation (SHAP) method. Methods A retrospective study was performed on 279 therapeutic drug monitoring (TDM) measurements from 192 patients treated with intravenous teicoplanin at the First Affiliated Hospital of Army Medical University from November 2017 to July 2021. The study included 27 variables, with the teicoplanin trough concentration as the target variable. The whole dataset was divided into training and testing groups at a ratio of 8:2, and predictive performance was compared among six algorithms. The three algorithms with the highest model performance were selected to establish an ensemble prediction model, and SHAP was employed to interpret the model. Results Three algorithms (SVR, GBRT, and RF) with high R2 scores (0.676, 0.670, and 0.656, respectively) were selected to construct the ensemble model at a ratio of 6:3:1. The resulting model (R2 = 0.720, MAE = 3.628, MSE = 22.571, absolute accuracy 83.93%, relative accuracy 60.71%) fit better and predicted more accurately than any single algorithm. The feature importance and direction of each variable were visualized with SHAP values; teicoplanin administration and renal function were the most important factors. Conclusion We adopted a machine learning approach to predict the teicoplanin trough concentration for the first time and interpreted the prediction model by the SHAP method, which is of great significance and value for clinical medication guidance.
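The 6:3:1 ensemble described in the abstract amounts to a weighted average of the three base models' predictions. A minimal sketch (with hypothetical prediction values, not the study's data; `ensemble_predict` is an illustrative helper name):

```python
def ensemble_predict(svr_pred, gbrt_pred, rf_pred, weights=(0.6, 0.3, 0.1)):
    """Blend SVR, GBRT, and RF predictions at the 6:3:1 ratio from the abstract."""
    w_svr, w_gbrt, w_rf = weights
    return [w_svr * s + w_gbrt * g + w_rf * r
            for s, g, r in zip(svr_pred, gbrt_pred, rf_pred)]

# hypothetical trough-concentration predictions (mg/L) from the three base models
blended = ensemble_predict([10.0, 20.0], [12.0, 18.0], [8.0, 22.0])
```

Weighting the strongest base learner (here SVR, per its R2 score) most heavily is a common heuristic for such fixed-ratio ensembles; the abstract does not state how the 6:3:1 ratio was chosen.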
Affiliation(s)
- Pan Ma
- Department of Pharmacy, The First Affiliated Hospital of Third Military Medical University (Army Medical University), Chongqing, China
- Ruixiang Liu
- Department of Pharmacy, The First Affiliated Hospital of Third Military Medical University (Army Medical University), Chongqing, China
- Wenrui Gu
- Department of Pharmacy, The First Affiliated Hospital of Third Military Medical University (Army Medical University), Chongqing, China
- Qing Dai
- Department of Pharmacy, The First Affiliated Hospital of Third Military Medical University (Army Medical University), Chongqing, China
- Yu Gan
- Department of Pharmacy, The First Affiliated Hospital of Third Military Medical University (Army Medical University), Chongqing, China
- Jing Cen
- Department of Pharmacy, The First Affiliated Hospital of Third Military Medical University (Army Medical University), Chongqing, China
- Shenglan Shang
- Department of Clinical Pharmacy, General Hospital of Central Theater Command of PLA, Wuhan, China
- Fang Liu
- Department of Pharmacy, The First Affiliated Hospital of Third Military Medical University (Army Medical University), Chongqing, China
- Yongchuan Chen
- Department of Pharmacy, The First Affiliated Hospital of Third Military Medical University (Army Medical University), Chongqing, China
19
Amani MA, Sarkodie SA. Mitigating spread of contamination in meat supply chain management using deep learning. Sci Rep 2022; 12:5037. [PMID: 35322116 PMCID: PMC8943173 DOI: 10.1038/s41598-022-08993-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 01/05/2022] [Accepted: 03/15/2022] [Indexed: 11/08/2022] Open
Abstract
Industry 4.0 recommends a paradigm shift from traditional manufacturing to automated industrial practices, especially in different parts of supply chain management. Sustainable Development Goal (SDG) 12 likewise underscores the urgency of ensuring a sustainable supply chain with novel technologies, including artificial intelligence, to decrease food loss and thereby mitigate food waste. These technologies can increase productivity, especially for perishable products, by reducing expenses, increasing the accuracy of operations, accelerating processes, and decreasing the carbon footprint of food. Artificial intelligence techniques such as deep learning can be utilized in various sections of meat supply chain management, where highly perishable products like spoiled meat must be separated from wholesome ones to prevent cross-contamination with food-borne pathogens. To automate this process and prevent meat spoilage and/or improve meat shelf life, which is crucial to consumer meat preferences and sustainable consumption, a classification model was trained with DCNN and PSO algorithms and discerned wholesome meat from spoiled meat with 100% accuracy.
Affiliation(s)
- Mohammad Amin Amani
- School of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran.
20
Kotwal S, Rani P, Arif T, Manhas J, Sharma S. Automated Bacterial Classifications Using Machine Learning Based Computational Techniques: Architectures, Challenges and Open Research Issues. Arch Comput Methods Eng 2021; 29:2469-2490. [PMID: 34658617 PMCID: PMC8505783 DOI: 10.1007/s11831-021-09660-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 05/15/2021] [Accepted: 10/01/2021] [Indexed: 06/13/2023]
Abstract
Bacteria are important in a variety of practical domains, including industry, agriculture, and medicine. Only a few species of bacteria are beneficial to humans, whereas the majority are extremely dangerous and cause a variety of life-threatening illnesses in different living organisms. Traditionally, this class of microbes is detected and classified using approaches such as Gram staining, biochemical testing, and motility testing. However, with the availability of large amounts of data and technical advances in medicine and computer science, machine learning methods have been widely used and have shown tremendous performance in the automatic detection of bacteria. The latest artificial intelligence techniques are greatly assisting microbiologists in solving extremely complex problems in this domain. This paper reviews the literature on machine learning approaches used to classify bacteria over the period 1998-2020. The resources include research papers and book chapters from publishers of national and international repute such as Elsevier, Springer, IEEE, and PLOS. The study carries out a detailed and critical analysis of how different machine learning methodologies have penetrated the field of bacterial classification, along with their limitations and future scope. In addition, opportunities and challenges in implementing these techniques in the field are presented to provide deep insight to researchers working in this area.
Affiliation(s)
- Shallu Kotwal
- Department of Information Technology, Baba Ghulam Shah Badshah University, Rajouri, India
- Priya Rani
- Department of Computer Science & IT, University of Jammu, Jammu, India
- Tasleem Arif
- Department of Information Technology, Baba Ghulam Shah Badshah University, Rajouri, India
- Jatinder Manhas
- Department of Computer Science & IT, Bhaderwah Campus, University of Jammu, Jammu, India
- Sparsh Sharma
- Department of Computer Science & Engineering, NIT Srinagar, Jammu, Jammu & Kashmir, India