1. Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. [PMID: 39186968] [DOI: 10.1016/j.preteyeres.2024.101291]
Abstract
Recent advances in artificial intelligence (AI) hold transformative potential for glaucoma clinical management: more efficient screening, more precise diagnosis, and more refined detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and deployment. During development, the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough validation often limit an algorithm's applicability, while the "black box" nature of AI models can leave clinicians wary or skeptical. During deployment, challenges include lower-quality images acquired in real-world settings and limited robustness across diverse ethnic groups and diagnostic equipment. Looking ahead, federated learning paradigms aim to protect data privacy, while diversified input modalities and synthetic imagery promise to improve generalizability and augment datasets. Smartphone integration appears promising for bringing AI algorithms into both clinical and non-clinical settings. Furthermore, large language models (LLMs) acting as interactive tools in medicine may signal a significant change in how healthcare will be delivered. By navigating these challenges and treating them as opportunities, the field of glaucoma AI can achieve not only improved algorithmic accuracy and optimized data integration but also a paradigm shift toward broader clinical acceptance and a transformative improvement in glaucoma care.
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Deming Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Zefeng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Yinhang Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Jiaxuan Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Xiaoyi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA.
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China.
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA.
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA.
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA.
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China.
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
2. Duque VG, Marquardt A, Velikova Y, Lacourpaille L, Nordez A, Crouzier M, Lee HJ, Mateus D, Navab N. Ultrasound segmentation analysis via distinct and completed anatomical borders. Int J Comput Assist Radiol Surg 2024; 19:1419-1427. [PMID: 38789884] [DOI: 10.1007/s11548-024-03170-7]
Abstract
PURPOSE: Segmenting ultrasound images is important for precise area and/or volume calculations, ensuring reliable diagnosis and effective treatment evaluation for diseases. Many recently proposed segmentation methods show impressive performance, yet there is currently no deeper understanding of how networks segment target regions or how they define boundaries. In this paper, we present a new approach that analyzes ultrasound segmentation networks in terms of learned borders, because border delimitation is particularly challenging in ultrasound. METHODS: We propose a way to split the boundaries in ultrasound images into distinct and completed borders. By exploiting the Grad-CAM of the split borders, we analyze the areas each network attends to, and we calculate the ratio of correct predictions for distinct and completed borders. We conducted experiments on an in-house leg ultrasound dataset (LEG-3D-US), as well as on two public datasets (thyroid and nerves) and one private prostate dataset. RESULTS: Quantitatively, the networks handle completed borders around 10% better than distinct borders. Like clinicians, the networks struggle to define borders in less visible areas. The Seg-Grad-CAM analysis shows that completed-border predictions rely on distinct borders and landmarks, while distinct-border predictions focus mainly on bright structures. We also observe variations depending on the attention mechanism of each architecture. CONCLUSION: We highlight the importance of studying ultrasound borders differently from other modalities such as MRI or CT. We split borders into distinct and completed, as clinicians do, and show the quality of the network-learned information for these two types of borders. Additionally, we open-source a 3D leg ultrasound dataset to the community: https://github.com/Al3xand1a/segmentation-border-analysis .
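The paper's analysis builds on Grad-CAM-style attribution. As a minimal sketch (hypothetical feature maps and gradients, not the authors' code), the core Grad-CAM weighting reduces to: pool each channel's gradient spatially, use the result to weight the corresponding feature map, and keep only positive evidence.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Minimal Grad-CAM: weight each feature map by its spatially
    averaged gradient, sum the weighted maps, and keep only the
    positive evidence (ReLU).

    feature_maps, gradients: arrays of shape (K, H, W).
    Returns an (H, W) heat map scaled to [0, 1] when non-zero.
    """
    weights = gradients.mean(axis=(1, 2))              # alpha_k, shape (K,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize for display
    return cam

# Toy check: only channel 0 has non-zero gradient, so its map survives.
A = np.stack([np.eye(4), np.ones((4, 4))])             # (2, 4, 4) feature maps
G = np.stack([np.ones((4, 4)), np.zeros((4, 4))])      # gradients
heat = grad_cam(A, G)
print(heat)  # identity pattern: diagonal 1.0, off-diagonal 0.0
```

In the paper's setting, the heat map would be restricted to the distinct- or completed-border regions (Seg-Grad-CAM) rather than the whole image.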
Affiliation(s)
- Vanessa Gonzalez Duque
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany.
- Munich Center for Machine Learning, Munich, Germany.
- LS2N Laboratory, Ecole Centrale Nantes, Nantes, France.
- MIP Laboratory, EA 4334, 44000, Nantes, France.
- Alexandra Marquardt
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany
- Munich Center for Machine Learning, Munich, Germany
- Yordanka Velikova
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany
- Munich Center for Machine Learning, Munich, Germany
- Hong Joo Lee
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany
- Diana Mateus
- LS2N Laboratory, Ecole Centrale Nantes, Nantes, France
- Nassir Navab
- Computer-Aided Medical Procedure and Augmented Reality (CAMP), CIT, Technical University of Munich, Garching bei Muenchen, Germany
- Munich Center for Machine Learning, Munich, Germany
3. Sugino T, Onogi S, Oishi R, Hanayama C, Inoue S, Ishida S, Yao Y, Ogasawara N, Murakawa M, Nakajima Y. Investigation of Appropriate Scaling of Networks and Images for Convolutional Neural Network-Based Nerve Detection in Ultrasound-Guided Nerve Blocks. Sensors (Basel) 2024; 24:3696. [PMID: 38894486] [PMCID: PMC11175212] [DOI: 10.3390/s24113696]
Abstract
Ultrasound imaging is an essential tool in anesthesiology, particularly for ultrasound-guided peripheral nerve blocks (US-PNBs). However, challenges such as speckle noise, acoustic shadows, and variability in nerve appearance complicate the accurate localization of nerve tissues. To address these challenges, this study introduces a deep convolutional neural network (DCNN), specifically Scaled-YOLOv4, and investigates appropriate network model and input image scaling for nerve detection in ultrasound images. Using two datasets (one public and one original), we evaluated the effects of model scale and input image size on detection performance. Our findings reveal that smaller input images and larger model scales significantly improve detection accuracy. The optimal configuration of model size and input image size not only achieved high detection accuracy but also demonstrated real-time processing capability.
Affiliation(s)
- Takaaki Sugino
- Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan; (S.O.); (Y.N.)
- Shinya Onogi
- Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
- Rieko Oishi
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Chie Hanayama
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Satoki Inoue
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Shinjiro Ishida
- TCC Media Lab Co., Ltd., Tokyo 192-0152, Japan
- Yuhang Yao
- IOT SOFT Co., Ltd., Tokyo 103-0023, Japan
- Masahiro Murakawa
- Department of Anesthesiology, Fukushima Medical University, Fukushima 960-1295, Japan
- Yoshikazu Nakajima
- Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
4. Kovacheva VP, Nagle B. Opportunities of AI-powered applications in anesthesiology to enhance patient safety. Int Anesthesiol Clin 2024; 62:26-33. [PMID: 38348838] [PMCID: PMC11185868] [DOI: 10.1097/aia.0000000000000437]
Affiliation(s)
- Vesela P. Kovacheva
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
- Baily Nagle
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
5. Champendal M, Müller H, Prior JO, Dos Reis CS. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging. Eur J Radiol 2023; 169:111159. [PMID: 37976760] [DOI: 10.1016/j.ejrad.2023.111159]
Abstract
PURPOSE: To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). METHOD: A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, bioRxiv, medRxiv, and Google Scholar. Studies published in French and English after 2017 were included, using keyword combinations and descriptors related to explainability and MI modalities. Two independent reviewers screened titles, abstracts, and full texts, resolving differences through discussion. RESULTS: 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, Covid-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). Most explanations were local (78.1%), 5.7% were global, and 16.2% combined both approaches. Post-hoc approaches predominated. Terminology varied, sometimes using explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3) interchangeably. CONCLUSION: The number of XAI publications in medical imaging is increasing, primarily applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats predominate. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably. Future XAI development should consider user needs and perspectives.
Affiliation(s)
- Mélanie Champendal
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland.
- Henning Müller
- Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, CH, Switzerland; Medical Faculty, University of Geneva, CH, Switzerland.
- John O Prior
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, CH, Switzerland.
- Cláudia Sá Dos Reis
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH, Switzerland.
6. Berggreen J, Johansson A, Jahr J, Möller S, Jansson T. Deep Learning on Ultrasound Images Visualizes the Femoral Nerve with Good Precision. Healthcare (Basel) 2023; 11:184. [PMID: 36673552] [PMCID: PMC9859453] [DOI: 10.3390/healthcare11020184]
Abstract
The number of hip fractures per year worldwide is estimated to reach 6 million by 2050. Despite the many advantages of regional nerve blocks for managing pain from such fractures, they are used less often than general analgesia. One reason is that opportunities for training and gaining clinical experience in applying nerve blocks can be limited in many clinical settings. Ultrasound image guidance based on artificial intelligence may be one way to increase nerve block success rates. We propose an approach using a deep learning semantic segmentation model with a U-net architecture to identify the femoral nerve in ultrasound images. The dataset consisted of 1410 ultrasound images collected from 48 patients. The images were manually annotated by a clinical professional and a segmentation model was trained for 350 epochs. Validation with 10-fold cross-validation showed a mean Intersection over Union (IoU) of 0.74, with an interquartile range of 0.66-0.81.
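The reported metric, Intersection over Union, is the overlap between predicted and reference masks divided by their union. A minimal sketch with toy masks (not the study's evaluation code) is:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, target).sum() / union)

# Toy masks: the prediction covers half of a 2x2 target region.
target = np.zeros((4, 4), dtype=int)
target[:2, :2] = 1          # 4 reference pixels
pred = np.zeros((4, 4), dtype=int)
pred[:2, :1] = 1            # 2 predicted pixels, all inside the target
print(iou(pred, target))    # 0.5 (2 overlapping pixels / 4 in the union)
```

In a cross-validation setup such as the paper's, this score would be averaged over all test images in each fold.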
Affiliation(s)
- Johan Berggreen
- Biomedical Engineering, Department of Clinical Sciences Lund, Lund University, Lasarettsgatan 37, 22185 Lund, Sweden
- Intensive and Perioperative Care, Skåne University Hospital, Entregatan 7, 22185 Lund, Sweden
- Anders Johansson
- Biomedical Engineering, Department of Clinical Sciences Lund, Lund University, Lasarettsgatan 37, 22185 Lund, Sweden
- John Jahr
- Biomedical Engineering, Department of Clinical Sciences Lund, Lund University, Lasarettsgatan 37, 22185 Lund, Sweden
- Sebastian Möller
- Biomedical Engineering, Department of Clinical Sciences Lund, Lund University, Lasarettsgatan 37, 22185 Lund, Sweden
- Department of Information Technology and Clinical Engineering, Skåne Regional Council, Lasarettsgatan 37, 22185 Lund, Sweden
- Tomas Jansson
- Biomedical Engineering, Department of Clinical Sciences Lund, Lund University, Lasarettsgatan 37, 22185 Lund, Sweden
- Department of Information Technology and Clinical Engineering, Skåne Regional Council, Lasarettsgatan 37, 22185 Lund, Sweden
7. Camacho J, Svilainis L, Álvarez-Arenas TG. Ultrasonic Imaging and Sensors. Sensors (Basel) 2022; 22:7911. [PMID: 36298262] [PMCID: PMC9611746] [DOI: 10.3390/s22207911]
Abstract
Ultrasound imaging is a wide research field, covering areas from wave propagation physics, sensors and front-end electronics to image reconstruction algorithms and software [...].
Affiliation(s)
- Jorge Camacho
- Instituto de Tecnologías Físicas y de la Información (ITEFI), Spanish National Research Council (CSIC), 28006 Madrid, Spain
- Linas Svilainis
- Department of Electronics Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania
- Tomás Gómez Álvarez-Arenas
- Instituto de Tecnologías Físicas y de la Información (ITEFI), Spanish National Research Council (CSIC), 28006 Madrid, Spain
8. Kubicek J, Varysova A, Cerny M, Hancarova K, Oczka D, Augustynek M, Penhaker M, Prokop O, Scurek R. Performance and Robustness of Regional Image Segmentation Driven by Selected Evolutionary and Genetic Algorithms: Study on MR Articular Cartilage Images. Sensors (Basel) 2022; 22:6335. [PMID: 36080793] [PMCID: PMC9460494] [DOI: 10.3390/s22176335]
Abstract
The analysis and segmentation of articular cartilage magnetic resonance (MR) images is one of the most common routine tasks in diagnostics of the musculoskeletal system of the knee. Conventional regional segmentation methods, based either on histogram partitioning (e.g., the Otsu method) or on clustering (e.g., K-means), are fast and work well where cartilage image features are reliably recognizable, but their performance is prone to image noise and artefacts. In this context, regional segmentation strategies driven by genetic algorithms or selected evolutionary computing strategies have the potential to outperform these traditional methods. Such optimization strategies iteratively generate candidate sets of histogram thresholds, whose quality is evaluated by a fitness function based on Kapur's entropy maximization, to find the optimal combination of thresholds for articular cartilage segmentation. On the other hand, they are often computationally demanding, which limits their use on stacks of MR images. In this study, we present a comprehensive analysis of optimization methods based on fuzzy soft segmentation, driven by artificial bee colony (ABC), particle swarm optimization (PSO), Darwinian particle swarm optimization (DPSO), and a genetic algorithm for optimal threshold selection, compared against the routine Otsu and K-means segmentations, for the analysis and feature extraction of articular cartilage from MR images. We objectively analyze the robustness of these strategies under variable noise with dynamic intensities for various numbers of segmentation classes (4, 7, and 10); the preciseness of cartilage feature extraction (area, perimeter, and skeleton) against the routine strategies; and the computing time, an important factor in segmentation performance. We use the same settings for each optimization strategy: 100 iterations and a population size of 50. The results suggest that combining fuzzy thresholding with the ABC algorithm performs best, both under additive dynamic noise and for cartilage feature extraction, whereas genetic algorithms in some cases perform poorly. In most cases, the analyzed optimization strategies significantly outperform the routine segmentation methods except in computing time, which is normally lower for the routine algorithms. We also report statistical significance tests showing differences in performance between the individual optimization strategies and the Otsu and K-means methods. Lastly, as part of this study, we publish a software environment integrating all the methods from this study.
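The fitness function these optimizers maximize is Kapur's entropy over the histogram classes produced by a candidate threshold set. A minimal single-threshold sketch (illustrative only; the study's software uses multilevel thresholds and ABC/PSO/DPSO/GA search in place of the exhaustive loop) is:

```python
import numpy as np

def kapur_entropy(hist: np.ndarray, t: int) -> float:
    """Kapur's entropy fitness for one threshold t splitting a grey-level
    histogram into classes [0, t) and [t, L). Higher is better."""
    p = hist / hist.sum()                # normalize to probabilities
    total = 0.0
    for part in (p[:t], p[t:]):
        w = part.sum()                   # class probability mass
        if w <= 0:
            return -np.inf               # empty class: invalid split
        q = part[part > 0] / w           # within-class distribution
        total += -(q * np.log(q)).sum()  # Shannon entropy of the class
    return total

def best_threshold(hist: np.ndarray) -> int:
    """Exhaustive search; evolutionary optimizers replace this loop when
    multiple thresholds make exhaustive search intractable."""
    return max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))

# Bimodal toy histogram: mass at low and high grey levels.
hist = np.array([10, 20, 10, 0, 0, 0, 10, 20, 10], dtype=float)
t = best_threshold(hist)
print(t)  # 3: the split falls between the two modes
```

With k thresholds the search space grows combinatorially, which is why the study evaluates population-based optimizers (ABC, PSO, DPSO, GA) against exhaustive classical baselines.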
Affiliation(s)
- Jan Kubicek
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Alice Varysova
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Martin Cerny
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Kristyna Hancarova
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- David Oczka
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Martin Augustynek
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Marek Penhaker
- Department of Cybernetics and Biomedical Engineering, VŠB—Technical University of Ostrava, 17.listopadu 2172/15, Poruba, 708 00 Ostrava, Czech Republic
- Ondrej Prokop
- MEDIN, a.s., Vlachovicka 619, 592 31 Nove Mesto na Morave, Czech Republic
- Radomir Scurek
- Department of Security Services, Faculty of Safety Engineering, VŠB—Technical University of Ostrava, ul. Lumirova 3, 700 30 Ostrava, Czech Republic