1
Wang C, Abdel-Aty M, Han L, Easa SM. Analyzing speed-difference impact on freeway joint injury severities of Leading-Following vehicles using statistical and data-driven models. Accid Anal Prev 2024; 206:107695. [PMID: 38972258] [DOI: 10.1016/j.aap.2024.107695]
Abstract
Rear-end (RE) crashes are notably prevalent and pose a substantial risk on freeways. This paper explores the correlation between the speed difference between the following and leading vehicles (Δν) and RE crash risk. Three joint models, comprising uncorrelated and correlated joint random-parameters bivariate probit (RPBP) approaches (statistical methods) and a cross-stitch multilayer perceptron (CS-MLP) network (a data-driven method), were estimated and compared against three separate models: support vector machines (SVM), eXtreme Gradient Boosting (XGBoost), and MLP networks (all data-driven methods). Data on 15,980 two-vehicle RE crashes were collected over a two-year period, from January 1, 2021, to December 31, 2022, considering two possible levels of injury severity, no injury and injury/fatality, for the drivers of both the following and leading vehicles. The comparative performance analysis demonstrates the superior predictive capability of the CS-MLP network over the uncorrelated/correlated joint RPBP models, SVM, XGBoost, and MLP networks in terms of recall, F1 score, and AUC. Notably, numerous shared variables influence the injury severity outcomes for the following and leading vehicles across both statistical and data-driven approaches. Among these factors, the following vehicle being a truck and the leading vehicle being a passenger car have contrasting effects on the injury severity outcomes of the two vehicles. Furthermore, the SHapley Additive exPlanations (SHAP) values from the CS-MLP network visually show the relationship between Δν and injury severity, revealing non-linear trends unlike the average effects estimated by the statistical methods. They indicate that the least severe injury outcomes for both following and leading vehicles occur at a Δν of 0 to 10 mph, matching observed patterns in the RE crash data. Additionally, a marked variation in the trend of the SHAP values for the two vehicles is noted as the speed difference increases. Therefore, the findings affirm the superior performance of joint model development and substantiate the non-linear impact of speed difference on injury outcomes. The adoption of dynamic speed control measures is recommended to mitigate injury outcomes in two-vehicle RE crashes.
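The SHAP analysis described above rests on Shapley values: a feature's contribution is its marginal effect averaged over all coalitions of the other features. As an illustrative sketch only (not the paper's CS-MLP pipeline), exact Shapley values can be enumerated for a toy two-feature risk model; the model, its coefficients, and the baseline below are hypothetical stand-ins:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for a model over n features.

    model    : callable mapping a feature tuple to a scalar prediction
    x        : the instance to explain
    baseline : reference values substituted for 'absent' features
    """
    n = len(x)
    phi = [0.0] * n
    features = range(n)
    for i in features:
        others = [j for j in features if j != i]
        for k in range(n):  # coalition sizes
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = tuple(x[j] if (j in subset or j == i) else baseline[j]
                               for j in features)
                without_i = tuple(x[j] if j in subset else baseline[j]
                                  for j in features)
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Hypothetical "injury-risk" model: risk rises non-linearly with the speed
# difference dv (feature 0) and is shifted by a truck indicator (feature 1).
model = lambda v: 0.02 * max(v[0] - 10, 0) ** 2 + 0.3 * v[1]
phi = shapley_values(model, x=(25, 1), baseline=(0, 0))

# Efficiency property: the contributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (model((25, 1)) - model((0, 0)))) < 1e-9
```

Practical SHAP libraries approximate this enumeration, which is exponential in the number of features; the exact form above is only tractable for a handful of features.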
Affiliation(s)
- Chenzhu Wang
- Department of Civil, Environmental & Construction Engineering, University of Central Florida, Orlando, FL 32816, United States.
- Mohamed Abdel-Aty
- Department of Civil, Environmental & Construction Engineering, University of Central Florida, Orlando, FL 32816, United States.
- Lei Han
- Department of Civil, Environmental & Construction Engineering, University of Central Florida, Orlando, FL 32816, United States.
- Said M Easa
- Department of Civil Engineering, Toronto Metropolitan University, Toronto, Ontario, M5B 2K3, Canada.
2
Li J, Ellis DG, Pepe A, Gsaxner C, Aizenberg MR, Kleesiek J, Egger J. Back to the Roots: Reconstructing Large and Complex Cranial Defects using an Image-based Statistical Shape Model. J Med Syst 2024; 48:55. [PMID: 38780820] [PMCID: PMC11116219] [DOI: 10.1007/s10916-024-02066-y]
Abstract
Designing implants for large and complex cranial defects is a challenging task, even for professional designers. Current efforts to automate the design process have focused mainly on convolutional neural networks (CNNs), which have produced state-of-the-art results on reconstructing synthetic defects. However, existing CNN-based methods have been difficult to translate to clinical practice in cranioplasty, as their performance on large and complex cranial defects remains unsatisfactory. In this paper, we present a statistical shape model (SSM) built directly on the segmentation masks of skulls represented as binary voxel-occupancy grids and evaluate it on several cranial implant design datasets. Results show that, while CNN-based approaches outperform the SSM on synthetic defects, they are inferior to the SSM on large, complex, and real-world defects. Experienced neurosurgeons evaluated the implants generated by the SSM as feasible for clinical use after minor manual corrections. The datasets and the SSM are publicly available at https://github.com/Jianningli/ssm.
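The core of such an image-based SSM is a linear model of shape variation: flatten the training skulls into vectors, compute the mean and principal modes, then explain a new (defective) skull as mean plus a combination of modes. A minimal sketch under stated assumptions follows; the random "skulls", grid size, and number of modes are all hypothetical, and the paper's actual fitting procedure differs in detail:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 20 complete skulls as flattened binary
# voxel-occupancy grids (real grids are far larger; 6^3 here for brevity).
skulls = (rng.random((20, 6 * 6 * 6)) > 0.5).astype(float)

# Build the shape model: mean shape plus principal modes of variation
# (PCA via singular value decomposition of the centered data).
mean = skulls.mean(axis=0)
U, S, Vt = np.linalg.svd(skulls - mean, full_matrices=False)
modes = Vt[:10]                      # keep the first 10 modes

# "Reconstruct" a defective skull: project onto the model subspace,
# then threshold back to a binary volume.
defective = skulls[0].copy()
defective[:40] = 0                   # simulate a cranial defect
coeffs = modes @ (defective - mean)
reconstruction = (mean + coeffs @ modes) > 0.5
```

The implant shape would then be derived from the voxelwise difference between the reconstructed complete skull and the defective input.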
Affiliation(s)
- Jianning Li
- Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital, Girardetstraße 2, 45131, Essen, Germany.
- David G Ellis
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 8010, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 8010, Austria
- Michele R Aizenberg
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital, Girardetstraße 2, 45131, Essen, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital, Girardetstraße 2, 45131, Essen, Germany.
3
Ferreira A, Li J, Pomykala KL, Kleesiek J, Alves V, Egger J. GAN-based generation of realistic 3D volumetric data: A systematic review and taxonomy. Med Image Anal 2024; 93:103100. [PMID: 38340545] [DOI: 10.1016/j.media.2024.103100]
Abstract
With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data are very important in medicine, with applications ranging from disease diagnosis to therapy monitoring. When a dataset is sufficiently large, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios where large amounts of data are unavailable; for example, rare diseases and privacy issues can restrict data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems can be the generation of realistic synthetic data using generative adversarial networks (GANs). Such mechanisms are a valuable asset, especially in healthcare, where the data must be of good quality, realistic, and free of privacy issues; accordingly, most publications on volumetric GANs are within the medical domain. In this review, we summarize works that generate realistic volumetric synthetic data using GANs, outlining common architectures, loss functions, and evaluation metrics in these areas, including their advantages and disadvantages. We present a novel taxonomy, along with evaluations, challenges, and research opportunities, to provide a holistic overview of the current state of volumetric GANs.
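Among the loss functions such a review catalogs, the standard (non-saturating) GAN objective is the baseline: the discriminator is trained to classify real volumes as 1 and generated ones as 0, while the generator is trained to push the discriminator's output on fakes toward 1. A minimal sketch of these two losses, applied to hypothetical discriminator outputs rather than a trained network:

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: classify real samples as 1, generated as 0.
    d_real, d_fake are sigmoid outputs in (0, 1)."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: push D's output on fakes toward 1."""
    return -np.mean(np.log(d_fake))

# A perfectly confused discriminator (outputs 0.5 everywhere) yields the
# classic equilibrium values: D-loss = 2*ln 2, G-loss = ln 2.
half = np.full(8, 0.5)
print(d_loss(half, half))  # ≈ 1.386
print(g_loss(half))        # ≈ 0.693
```

Variants surveyed in the literature (Wasserstein, least-squares, hinge losses) replace these log terms but keep the same adversarial structure.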
Affiliation(s)
- André Ferreira
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal; Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, 52074 Aachen, Germany.
- Jianning Li
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany.
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany.
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, Essen, 45147, Germany; TU Dortmund University, Department of Physics, Otto-Hahn-Straße 4, 44227 Dortmund, Germany.
- Victor Alves
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal.
- Jan Egger
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 8010, Austria.
4
Rempe M, Mentzel F, Pomykala KL, Haubold J, Nensa F, Kroeninger K, Egger J, Kleesiek J. k-strip: A novel segmentation algorithm in k-space for the application of skull stripping. Comput Methods Programs Biomed 2024; 243:107912. [PMID: 37981454] [DOI: 10.1016/j.cmpb.2023.107912]
Abstract
BACKGROUND AND OBJECTIVE We present a novel deep learning-based skull-stripping algorithm for magnetic resonance imaging (MRI) that works directly in the information-rich, complex-valued k-space. METHODS Using four datasets from different institutions with a total of around 200,000 MRI slices, we show that our network can perform skull stripping on raw MRI data while preserving the phase information, which no other skull-stripping algorithm can use. For two of the datasets, skull stripping performed by HD-BET (Brain Extraction Tool) in the image domain is used as the ground truth, whereas the third and fourth datasets come with hand-annotated brain segmentations. RESULTS Results on all four datasets were very similar to the ground truth (DICE scores of 92%-99% and Hausdorff distances under 5.5 pixels). Results on slices above the eye region reach DICE scores of up to 99%, whereas accuracy drops in regions around and below the eyes, with partially blurred output. The output of k-strip often has smoothed edges at the demarcation to the skull; binary masks are created with an appropriate threshold. CONCLUSION With this proof-of-concept study, we were able to show the feasibility of working in the k-space frequency domain, preserving phase information, with consistent results. Besides preserving valuable information for further diagnostics, this approach makes immediate anonymization of patient data possible, even before the data are transformed into the image domain. Future research should be dedicated to discovering additional ways the k-space can be used for innovative image analysis and further workflows.
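The premise is that an MRI slice and its k-space representation are linked by the Fourier transform, and that keeping the data complex-valued preserves the phase a magnitude-only pipeline would discard. A minimal round-trip sketch on synthetic data (the array sizes and phase map are hypothetical, and no network is involved):

```python
import numpy as np

# Synthetic complex-valued "MRI slice": a magnitude map plus a phase map.
rng = np.random.default_rng(42)
magnitude = 0.5 + rng.random((64, 64))
phase = rng.uniform(-3.0, 3.0, (64, 64))
image = magnitude * np.exp(1j * phase)

# Image domain -> k-space: this is the raw data a k-space network sees.
kspace = np.fft.fftshift(np.fft.fft2(image))

# A k-space segmentation network would act here; the equivalent image-domain
# step is a voxelwise multiplication with a brain mask.
recovered = np.fft.ifft2(np.fft.ifftshift(kspace))

# The inverse FFT restores magnitude *and* phase.
assert np.allclose(recovered, image)
assert np.allclose(np.angle(recovered), phase)
```

Because the transform is invertible and linear, any processing kept consistent in k-space carries over losslessly to the image domain, phase included.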
Affiliation(s)
- Moritz Rempe
- The Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, Essen 45131, Germany; Department of Physics, Technical University Dortmund, Otto-Hahn-Straße 4a, Dortmund 44227, Germany
- Florian Mentzel
- Department of Physics, Technical University Dortmund, Otto-Hahn-Straße 4a, Dortmund 44227, Germany
- Kelsey L Pomykala
- The Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, Essen 45131, Germany
- Johannes Haubold
- The Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, Essen 45131, Germany
- Felix Nensa
- The Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, Essen 45131, Germany
- Kevin Kroeninger
- Department of Physics, Technical University Dortmund, Otto-Hahn-Straße 4a, Dortmund 44227, Germany
- Jan Egger
- The Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, Essen 45131, Germany; The Computer Algorithms for Medicine Laboratory, Graz, Austria; The Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen 45147, Germany
- Jens Kleesiek
- The Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, Essen 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen 45147, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, Essen 45147, Germany.
5
Strack C, Pomykala KL, Schlemmer HP, Egger J, Kleesiek J. "A net for everyone": fully personalized and unsupervised neural networks trained with longitudinal data from a single patient. BMC Med Imaging 2023; 23:174. [PMID: 37907876] [PMCID: PMC10619304] [DOI: 10.1186/s12880-023-01128-w]
Abstract
BACKGROUND With the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of this study is to provide a proof of concept that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. METHODS Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. The contrast-enhanced T1w sequences of brain magnetic resonance imaging (MRI) were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images; the change in tumor volume can be calculated from this map. The neural networks were a form of Wasserstein GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip co-registration of the images. Furthermore, no additional training data, pre-training of the networks, or any (manual) annotations are necessary. RESULTS The model achieved an AUC score of 0.87 for tumor change. We also introduced modified RANO criteria, for which an accuracy of 66% can be achieved. CONCLUSIONS We show a novel approach to deep learning that uses data from just one patient to train deep neural networks to monitor tumor change. Using two different datasets to evaluate the results shows the potential of the method to generalize.
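Once a change map between two timepoints exists, the downstream volume-change computation is simple voxel accounting. A sketch on synthetic binary tumor masks (the masks, grid size, and 1 mm isotropic spacing are hypothetical; the paper derives its map from the network rather than from ground-truth masks):

```python
import numpy as np

VOXEL_VOLUME_MM3 = 1.0 * 1.0 * 1.0   # hypothetical 1 mm isotropic spacing

def tumor_volume_change(mask_t0, mask_t1, voxel_volume=VOXEL_VOLUME_MM3):
    """Volume change (mm^3) between two binary tumor masks."""
    return float(mask_t1.sum() - mask_t0.sum()) * voxel_volume

# Synthetic 3D masks: the tumor grows from a 4x4x4 to a 5x5x5 cube.
t0 = np.zeros((32, 32, 32), dtype=bool)
t1 = np.zeros((32, 32, 32), dtype=bool)
t0[10:14, 10:14, 10:14] = True
t1[10:15, 10:15, 10:15] = True

print(tumor_volume_change(t0, t1))   # 125 - 64 = 61.0 mm^3
```

A threshold on this quantity (or on a normalized version of it) is one way a modified RANO-style progression call could be automated.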
Affiliation(s)
- Christian Strack
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131, Essen, Germany.
- Division of Radiology, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany.
- Medical Faculty Heidelberg, Heidelberg University, 69120, Heidelberg, Germany.
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131, Essen, Germany
- Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147, Essen, Germany
- Jan Egger
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131, Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147, Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131, Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147, Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147, Essen, Germany
- Department of Physics, TU Dortmund University, Otto-Hahn-Straße 4, D-44227, Dortmund, Germany
6
Akiyama R, Goto T, Tameshige T, Sugisaka J, Kuroki K, Sun J, Akita J, Hatakeyama M, Kudoh H, Kenta T, Tonouchi A, Shimahara Y, Sese J, Kutsuna N, Shimizu-Inatsugi R, Shimizu KK. Seasonal pigment fluctuation in diploid and polyploid Arabidopsis revealed by machine learning-based phenotyping method PlantServation. Nat Commun 2023; 14:5792. [PMID: 37737204] [PMCID: PMC10517152] [DOI: 10.1038/s41467-023-41260-3]
Abstract
Long-term field monitoring of leaf pigment content is informative for understanding plant responses to environments distinct from regulated chambers but is impractical with conventional destructive measurements. We developed PlantServation, a method incorporating robust image-acquisition hardware and deep learning-based software that extracts leaf color by detecting plant individuals automatically. As a case study, we applied PlantServation to examine environmental and genotypic effects on the pigment anthocyanin, whose content was estimated from leaf color. We processed >4 million images of small individuals of four Arabidopsis species in the field, where plant shape, color, and background vary over months. Past radiation, coldness, and precipitation significantly affected the anthocyanin content. The synthetic allopolyploid A. kamchatica recapitulated the fluctuations of natural polyploids by integrating diploid responses. The data support a long-standing hypothesis stating that allopolyploids can inherit and combine the traits of their progenitors. PlantServation facilitates the study of plant responses to complex environments, termed "in natura".
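The color-extraction step amounts to averaging a pigment-sensitive color index over the pixels the detector assigns to a plant. As a rough sketch only, a normalized red-green index is computed below within a detected-plant mask; the index, the image, and the mask are hypothetical stand-ins, and PlantServation's actual anthocyanin estimator differs:

```python
import numpy as np

def redness_index(image_rgb, plant_mask):
    """Mean (R - G) / (R + G) over the detected-plant pixels.

    A hypothetical proxy for anthocyanin: redder leaves score higher.
    image_rgb  : (H, W, 3) float array with values in [0, 1]
    plant_mask : (H, W) boolean array from the plant detector
    """
    pixels = image_rgb[plant_mask]
    r, g = pixels[:, 0], pixels[:, 1]
    return float(np.mean((r - g) / (r + g + 1e-8)))

# Synthetic field image: green background with a reddish "plant" patch.
img = np.zeros((100, 100, 3))
img[..., 1] = 0.6                      # green everywhere
img[40:60, 40:60] = (0.7, 0.2, 0.1)    # reddish leaves
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True

print(round(redness_index(img, mask), 3))
```

Tracking such an index per individual across months would yield the seasonal fluctuation curves the study analyzes.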
Affiliation(s)
- Reiko Akiyama
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, CH-8057, Zurich, Switzerland
- Takao Goto
- Research and Development Division, LPIXEL Inc., Chiyoda-ku, Tokyo, 100-0004, Japan
- Toshiaki Tameshige
- Kihara Institute for Biological Research (KIBR), Yokohama City University, 641-12 Maioka, Totsuka-ward, Yokohama, 244-0813, Japan
- Division of Biological Science, Graduate School of Science and Technology, Nara Institute of Science and Technology (NAIST), 8916-5 Takayama-Cho, Ikoma, Nara, 630-0192, Japan
- Jiro Sugisaka
- Kihara Institute for Biological Research (KIBR), Yokohama City University, 641-12 Maioka, Totsuka-ward, Yokohama, 244-0813, Japan
- Center for Ecological Research, Kyoto University, Hirano 2-509-3, Otsu, 520-2113, Japan
- Ken Kuroki
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
- Jianqiang Sun
- Research Center for Agricultural Information Technology, National Agriculture and Food Research Organization, 3-1-1 Kannondai, Tsukuba, Ibaraki, 305-8517, Japan
- Junichi Akita
- Department of Electric and Computer Engineering, Kanazawa University, Kakuma, Kanazawa, 920-1192, Japan
- Masaomi Hatakeyama
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, CH-8057, Zurich, Switzerland
- Functional Genomics Center Zurich, Winterthurerstrasse 190, CH-8057, Zurich, Switzerland
- Hiroshi Kudoh
- Center for Ecological Research, Kyoto University, Hirano 2-509-3, Otsu, 520-2113, Japan
- Tanaka Kenta
- Sugadaira Research Station, Mountain Science Center, University of Tsukuba, 1278-294 Sugadaira-kogen, Ueda, 386-2204, Japan
- Aya Tonouchi
- Research and Development Division, LPIXEL Inc., Chiyoda-ku, Tokyo, 100-0004, Japan
- Yuki Shimahara
- Research and Development Division, LPIXEL Inc., Chiyoda-ku, Tokyo, 100-0004, Japan
- Jun Sese
- Artificial Intelligence Research Center, AIST, 2-3-26 Aomi, Koto-ku, Tokyo, 135-0064, Japan
- Humanome Lab, Inc., L-HUB 3F, 1-4, Shumomiyabi-cho, Shinjuku, Tokyo, 162-0822, Japan
- AIST-Tokyo Tech RWBC-OIL, 2-12-1 O-okayama, Meguro-ku, Tokyo, 152-8550, Japan
- Natsumaro Kutsuna
- Research and Development Division, LPIXEL Inc., Chiyoda-ku, Tokyo, 100-0004, Japan
- Rie Shimizu-Inatsugi
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, CH-8057, Zurich, Switzerland.
- Kentaro K Shimizu
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, CH-8057, Zurich, Switzerland.
- Kihara Institute for Biological Research (KIBR), Yokohama City University, 641-12 Maioka, Totsuka-ward, Yokohama, 244-0813, Japan.
7
Zhang J, Cui X, Yang C, Zhong D, Sun Y, Yue X, Lan G, Zhang L, Lu L, Yuan H. A deep learning-based interpretable decision tool for predicting high risk of chemotherapy-induced nausea and vomiting in cancer patients prescribed highly emetogenic chemotherapy. Cancer Med 2023; 12:18306-18316. [PMID: 37609808] [PMCID: PMC10524079] [DOI: 10.1002/cam4.6428]
Abstract
OBJECTIVE This study aims to develop a risk prediction model for chemotherapy-induced nausea and vomiting (CINV) in cancer patients receiving highly emetogenic chemotherapy (HEC) and to identify the variables that have the most significant impact on prediction. METHODS Data from Tianjin Medical University General Hospital were collected and subjected to stepwise preprocessing. Deep learning algorithms, including deep forest, and typical machine learning algorithms such as support vector machine (SVM), categorical boosting (CatBoost), random forest, decision tree, and neural network were used to develop the prediction model. After training the model and conducting hyperparameter optimization (HPO) through cross-validation on the training set, performance was evaluated on the test set. Shapley additive explanations (SHAP), partial dependence plots (PDP), and Local Interpretable Model-Agnostic Explanations (LIME) were employed to explain the optimal model. Model performance was assessed using AUC, F1 score, accuracy, specificity, sensitivity, and Brier score. RESULTS The deep forest model exhibited good discrimination, outperforming typical machine learning models, with an AUC of 0.850 (95% CI, 0.780-0.919), an F1 score of 0.757, an accuracy of 0.852, a specificity of 0.863, a sensitivity of 0.784, and a Brier score of 0.082. The top five important features in the model were creatinine clearance (Ccr), age, gender, anticipatory nausea and vomiting, and antiemetic regimen. Among these, Ccr had the most significant predictive value. The risk of CINV decreased with increasing Ccr and age, while it was higher in the presence of anticipatory nausea and vomiting, female gender, and a non-standard antiemetic regimen. CONCLUSION The deep forest model demonstrated good discrimination in predicting the risk of CINV in cancer patients prescribed HEC. Kidney function, as represented by Ccr, played a crucial role in the model's prediction. The clinical application of this predictive tool can help assess individual risks and improve patient care by proactively optimizing the use of antiemetics in cancer patients receiving HEC.
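Of the explanation techniques above, the partial dependence plot is the simplest to compute by hand: fix one feature at each grid value across every row of the data and average the model's predictions. A sketch with a toy logistic "risk" model standing in for the fitted deep forest (the model, features, and ranges are hypothetical):

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """One-dimensional partial dependence of `predict` on one feature.

    For each grid value, overwrite that feature in every row of X and
    average the predictions (Friedman's PDP definition).
    """
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        curve.append(float(np.mean(predict(Xv))))
    return curve

# Toy CINV-risk model over (Ccr, age): risk falls as Ccr rises -- a
# hypothetical stand-in for the paper's fitted deep forest.
predict = lambda X: 1.0 / (1.0 + np.exp(0.05 * (X[:, 0] - 60)))

rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(20, 120, 200),    # Ccr (mL/min)
                     rng.uniform(30, 80, 200)])    # age (years)
pd_curve = partial_dependence(predict, X, feature=0, grid=[40, 60, 80])
assert pd_curve[0] > pd_curve[1] > pd_curve[2]     # risk decreases with Ccr
```

The same helper works for any black-box `predict`, which is why PDPs pair naturally with ensemble models like deep forest.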
Affiliation(s)
- Jingyue Zhang
- Department of Pharmacy, Tianjin Medical University General Hospital, Tianjin, China
- Xudong Cui
- School of Mathematics, Tianjin University, Tianjin, China
- Chong Yang
- Department of Pharmacy, Tianjin Medical University General Hospital, Tianjin, China
- Department of Pharmacy, Tianjin Huanhu Hospital, Tianjin, China
- Diansheng Zhong
- Department of Medical Oncology, Tianjin Medical University General Hospital, Tianjin, China
- Yinjuan Sun
- Department of Medical Oncology, Tianjin Medical University General Hospital, Tianjin, China
- Xiaoxiong Yue
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Gaoshuang Lan
- Department of Pharmacy, Tianjin Medical University General Hospital, Tianjin, China
- Linlin Zhang
- Department of Medical Oncology, Tianjin Medical University General Hospital, Tianjin, China
- Liangfu Lu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Hengjie Yuan
- Department of Pharmacy, Tianjin Medical University General Hospital, Tianjin, China
8
Feature selection for distance-based regression: An umbrella review and a one-shot wrapper. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.11.023]
9
Egger J, Gsaxner C, Pepe A, Pomykala KL, Jonske F, Kurz M, Li J, Kleesiek J. Medical deep learning-A systematic meta-review. Comput Methods Programs Biomed 2022; 221:106874. [PMID: 35588660] [DOI: 10.1016/j.cmpb.2022.106874]
Abstract
Deep learning has remarkably impacted several different scientific disciplines over the last few years. For example, in image processing and analysis, deep learning algorithms were able to outperform other cutting-edge methods. Additionally, deep learning has delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts. There are even instances where deep learning outperformed humans, for example with object recognition and gaming. Deep learning is also showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data are collected not only in clinical centers, like hospitals and private practices, but also by mobile healthcare apps and online websites. The abundance of collected patient data and the recent growth in the deep learning field have resulted in a large increase in research efforts. In Q2/2020, the search engine PubMed already returned over 11,000 results for the search term 'deep learning', and around 90% of these publications were from the preceding three years. However, even though PubMed is the largest search engine in the medical field, it does not cover all medical-related publications. Hence, a complete overview of the field of 'medical deep learning' is almost impossible to obtain, and acquiring a full overview of its sub-fields is becoming increasingly difficult. Nevertheless, several review and survey articles about medical deep learning have been published within the last few years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys.
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany.
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Frederic Jonske
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Manuel Kurz
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
10
Ravikumar A, Sriraman H, Sai Saketh PM, Lokesh S, Karanam A. Effect of neural network structure in accelerating performance and accuracy of a convolutional neural network with GPU/TPU for image analytics. PeerJ Comput Sci 2022; 8:e909. [PMID: 35494877] [PMCID: PMC9044238] [DOI: 10.7717/peerj-cs.909]
Abstract
BACKGROUND In deep learning, the most significant breakthroughs in image recognition, object detection, and language processing have come from the convolutional neural network (CNN). With the rapid growth in data and neural network sizes, the performance of DNN algorithms depends on the computational power and storage capacity of the devices. METHODS In this paper, the convolutional neural network used for various image applications was studied, and its acceleration on platforms such as the CPU, GPU, and TPU was evaluated. The neural network structure and the computing power and characteristics of the GPU and TPU were analyzed and summarized, and their effect on accelerating the tasks is explained. A cross-platform comparison of the CNN was done using three image applications: face mask detection (object detection/computer vision), virus detection in plants (image classification, agriculture sector), and pneumonia detection from X-ray images (image classification, medical field). RESULTS The CNN was implemented, and a comprehensive comparison was done across the platforms to identify the performance, throughput, bottlenecks, and training time. The layer-wise execution of the CNN on the GPU and TPU is explained with a layer-wise analysis, and the impact of the fully connected and convolutional layers on the network is analyzed. The challenges faced during the acceleration process are discussed, and future work is identified.
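A layer-wise analysis of this kind reduces to timing each layer's forward pass in isolation. A minimal harness is sketched below with NumPy stand-ins for a convolutional layer (expressed as a matrix multiply over extracted patches) and a dense layer; the shapes, layers, and timing scheme are hypothetical and far simpler than the paper's GPU/TPU benchmarks:

```python
import time
import numpy as np

def time_layer(fn, x, repeats=5):
    """Median wall-clock time (seconds) of one forward pass of `fn`."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        out = fn(x)
        times.append(time.perf_counter() - t0)
    return float(np.median(times)), out

rng = np.random.default_rng(0)
w_conv = rng.standard_normal((27, 16))      # 3x3x3 patches -> 16 channels
w_dense = rng.standard_normal((16, 10))

conv = lambda patches: np.maximum(patches @ w_conv, 0.0)   # conv-as-matmul + ReLU
dense = lambda feats: feats @ w_dense                      # fully connected

patches = rng.standard_normal((4096, 27))   # pre-extracted image patches
t_conv, feats = time_layer(conv, patches)
t_dense, logits = time_layer(dense, feats)

profile = {"conv": t_conv, "dense": t_dense}
print({k: f"{v * 1e6:.0f} us" for k, v in profile.items()})
```

On accelerators, the same per-layer breakdown is what reveals whether the convolutional or the fully connected layers dominate, and hence where a GPU or TPU pays off.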
Affiliation(s)
- Aswathy Ravikumar, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Harini Sriraman, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- P. Maruthi Sai Saketh, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Saddikuti Lokesh, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Abhiram Karanam, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
|
11
|
Radl L, Jin Y, Pepe A, Li J, Gsaxner C, Zhao FH, Egger J. AVT: Multicenter aortic vessel tree CTA dataset collection with ground truth segmentation masks. Data Brief 2022; 40:107801. [PMID: 35059483 PMCID: PMC8760499 DOI: 10.1016/j.dib.2022.107801] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Revised: 12/21/2021] [Accepted: 01/04/2022] [Indexed: 11/24/2022] Open
Abstract
In this article, we present a multicenter aortic vessel tree database collection, containing 56 aortas and their branches. The datasets have been acquired with computed tomography angiography (CTA) scans, and each scan covers the ascending aorta, the aortic arch and its branches into the head/neck area, the thoracic aorta, the abdominal aorta, and the lower abdominal aorta with the iliac arteries branching into the legs. For each scan, the collection provides a semi-automatically generated segmentation mask of the aortic vessel tree (ground truth). The scans come from three different collections and various hospitals and have various resolutions, which enables studying the geometry/shape variability of human aortas and their branches across different geographic locations. Furthermore, the collection supports building a robust statistical model of the shape of human aortic vessel trees, which can be used for various tasks such as the development of fully-automatic segmentation algorithms for new, unseen aortic vessel tree cases, e.g. by training deep learning-based approaches. Hence, the collection can serve as an evaluation set for automatic aortic vessel tree segmentation algorithms.
Affiliation(s)
- Lukas Radl, Graz University of Technology (TU Graz), Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Yuan Jin, Graz University of Technology (TU Graz), Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria; Research Center for Connected Healthcare Big Data, ZhejiangLab, Hangzhou, Zhejiang, 311121 China
- Antonio Pepe, Graz University of Technology (TU Graz), Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Jianning Li, Graz University of Technology (TU Graz), Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria; Medical University of Graz (MedUni Graz), Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Hospital Essen (UKE), Ruhrgebiet, Essen, Germany
- Christina Gsaxner, Graz University of Technology (TU Graz), Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria; Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Fen-hua Zhao, Department of Radiology, Affiliated Dongyang Hospital of Wenzhou Medical University, Dongyang, Zhejiang, 322100 China
- Jan Egger, Graz University of Technology (TU Graz), Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria; Medical University of Graz (MedUni Graz), Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Hospital Essen (UKE), Ruhrgebiet, Essen, Germany
|