1
Balamurugan AG, Srinivasan S, Preethi D, Monica P, Mathivanan SK, Shah MA. Robust brain tumor classification by fusion of deep learning and channel-wise attention mode approach. BMC Med Imaging 2024; 24:147. [PMID: 38886661; PMCID: PMC11181652; DOI: 10.1186/s12880-024-01323-3] [Received: 04/15/2024; Accepted: 06/04/2024; Indexed: 06/20/2024] Open Access
Abstract
Diagnosing brain tumors is a complex and time-consuming process that relies heavily on radiologists' expertise and interpretive skills. However, the advent of deep learning methodologies has revolutionized the field, offering more accurate and efficient assessments. Attention-based models have emerged as promising tools, focusing on salient features within complex medical imaging data. Yet the precise impact of different attention mechanisms, such as channel-wise, spatial, or combined attention within the Channel-wise Attention Mode (CWAM), on brain tumor classification remains relatively unexplored. This study addresses this gap by leveraging ResNet101 coupled with CWAM (ResNet101-CWAM) for brain tumor classification. The results show that ResNet101-CWAM surpassed conventional deep learning classifiers such as plain ConvNets, achieving exceptional performance metrics of 99.83% accuracy, 99.21% recall, 99.01% precision, 99.27% F1-score, and 99.16% AUC on the same dataset. This enhanced capability holds significant implications for clinical decision-making, as accurate and efficient brain tumor classification is crucial for guiding treatment strategies and improving patient outcomes. Integrating ResNet101-CWAM into existing brain tumor classification software platforms is a crucial step towards enhancing diagnostic accuracy and streamlining clinical workflows for physicians.
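The abstract does not spell out CWAM's internals, but channel-wise attention is commonly realized as a squeeze-and-excitation style block: globally pool each channel, pass the descriptors through a small bottleneck MLP, and gate the channels with a sigmoid. A minimal NumPy sketch with hypothetical weight shapes (the actual ResNet101-CWAM design may differ):

```python
import numpy as np

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation style channel attention.

    feature_maps: (C, H, W) activations from a backbone such as ResNet101.
    w1: (C, C//r) and w2: (C//r, C) are the two small dense layers that
    learn per-channel importance (supplied here as plain arrays).
    """
    # Squeeze: global average pooling collapses each channel to one scalar.
    squeezed = feature_maps.mean(axis=(1, 2))            # (C,)
    # Excitation: bottleneck MLP followed by a sigmoid gate in (0, 1).
    hidden = np.maximum(squeezed @ w1, 0.0)              # ReLU
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))         # (C,)
    # Reweight: scale every channel map by its learned gate.
    return feature_maps * gates[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))       # toy feature maps, C=8
w1 = rng.standard_normal((8, 2)) * 0.1   # reduction ratio r=4
w2 = rng.standard_normal((2, 8)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)                           # same shape as the input
```

Because the gates lie strictly in (0, 1), attention only attenuates channels; it never changes the spatial layout of the features.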
Affiliation(s)
- Balamurugan A G
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600062, India
- Saravanan Srinivasan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600062, India
- Preethi D
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Ramapuram, Chennai, India
- Monica P
- School of Electrical and Electronics Engineering, VIT Bhopal University, Bhopal, Indore Highway, Kothrikalan, Sehore, Madhya Pradesh, 466114, India
- Mohd Asif Shah
- Department of Economics, Kardan University, Parwan-e-Du, Kabul, 1001, Afghanistan
- Division of Research and Development, Lovely Professional University, Phagwara, Punjab, 144001, India
2
Alshuhail A, Thakur A, Chandramma R, Mahesh TR, Almusharraf A, Vinoth Kumar V, Khan SB. Refining neural network algorithms for accurate brain tumor classification in MRI imagery. BMC Med Imaging 2024; 24:118. [PMID: 38773391; PMCID: PMC11110259; DOI: 10.1186/s12880-024-01285-6] [Received: 02/01/2024; Accepted: 04/29/2024; Indexed: 05/23/2024] Open Access
Abstract
Brain tumor diagnosis using MRI scans poses significant challenges due to the complex nature of tumor appearances and variations. Traditional methods often require extensive manual intervention and are prone to human error, leading to misdiagnosis and delayed treatment. Current approaches primarily include manual examination by radiologists and conventional machine learning techniques. These methods rely heavily on feature extraction and classification algorithms, which may not capture the intricate patterns present in brain MRI images. Conventional techniques often suffer from limited accuracy and generalizability, mainly due to the high variability in tumor appearance and the subjective nature of manual interpretation. Additionally, traditional machine learning models may struggle with the high-dimensional data inherent in MRI images. To address these limitations, our research introduces a deep learning-based model utilizing convolutional neural networks (CNNs). Our model employs a sequential CNN architecture with multiple convolutional, max-pooling, and dropout layers, followed by dense layers for classification. The proposed model demonstrates a significant improvement in diagnostic accuracy, achieving an overall accuracy of 98% on the test dataset. Precision, recall, and F1-scores ranging from 97 to 98%, with ROC-AUC ranging from 99 to 100% for each tumor category, further substantiate the model's effectiveness. Additionally, the utilization of Grad-CAM visualizations provides insights into the model's decision-making process, enhancing interpretability. This research addresses the pressing need for enhanced diagnostic accuracy in identifying brain tumors through MRI imaging, tackling challenges such as variability in tumor appearance and the need for rapid, reliable diagnostic tools.
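The convolutional and max-pooling layers the abstract lists can be illustrated in isolation. A minimal NumPy sketch of both operations on a toy single-channel image (not the paper's actual architecture):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' convolution (really cross-correlation,
    as in most deep learning frameworks)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    """Non-overlapping max pooling, shrinking each spatial dimension."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "MRI slice"
edge = np.array([[1.0, -1.0]])                   # horizontal gradient filter
feat = conv2d_valid(img, edge)                   # (6, 5) feature map
pooled = max_pool2d(feat)                        # (3, 2) after pooling
print(feat.shape, pooled.shape)
```

Stacking several such conv/pool stages, then flattening into dense layers with dropout in between, reproduces the sequential architecture the abstract describes.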
Affiliation(s)
- Asma Alshuhail
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Hofuf, Saudi Arabia
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, 562112, India
- R Chandramma
- Department of Computer Science & Engineering (AI & ML), Global Academy of Technology, Bangalore, India
- T R Mahesh
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, 562112, India
- Ahlam Almusharraf
- Department of Management, College of Business Administration, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- V Vinoth Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632001, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, Manchester, UK
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
3
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790; PMCID: PMC10814384; DOI: 10.3390/cancers16020300] [Received: 11/09/2023; Revised: 12/28/2023; Accepted: 01/08/2024; Indexed: 01/24/2024] Open Access
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Affiliation(s)
- Carla Pitarch
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
- Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
- Gulnur Ungan
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Margarida Julià-Sapé
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
- Alfredo Vellido
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
4
Valbuena Rubio S, García-Ordás MT, García-Olalla Olivera O, Alaiz-Moretón H, González-Alonso MI, Benítez-Andrades JA. Survival and grade of the glioma prediction using transfer learning. PeerJ Comput Sci 2023; 9:e1723. [PMID: 38192446; PMCID: PMC10773899; DOI: 10.7717/peerj-cs.1723] [Received: 08/03/2023; Accepted: 11/06/2023; Indexed: 01/10/2024]
Abstract
Glioblastoma is a highly malignant brain tumor with a life expectancy of only 3-6 months without treatment. Accurately detecting it and predicting patient survival and tumor grade are therefore crucial. This study introduces a novel approach using transfer learning techniques. Various pre-trained networks, including EfficientNet, ResNet, VGG16, and Inception, were tested through exhaustive optimization to identify the most suitable architecture. Transfer learning was applied to fine-tune these models on a glioblastoma image dataset, aiming at two objectives: survival and tumor grade prediction. The experimental results show 65% accuracy in survival prediction, classifying patients into short, medium, or long survival categories. Additionally, the prediction of tumor grade achieved an accuracy of 97%, accurately differentiating low-grade gliomas (LGG) and high-grade gliomas (HGG). The success of the approach is attributed to the effectiveness of transfer learning, surpassing the current state-of-the-art methods. In conclusion, this study presents a promising method for predicting the survival and grade of glioblastoma. Transfer learning demonstrates its potential in enhancing prediction models, particularly in scenarios with limited data. These findings hold promise for improving diagnostic and treatment approaches for glioblastoma patients.
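The transfer-learning recipe described here, freezing a pretrained backbone and training only a small classification head, can be sketched without any deep learning framework. Below, a fixed random projection stands in (hypothetically) for the frozen feature extractor, and a logistic-regression head is trained on toy two-class data standing in for LGG vs HGG; this is only the idea, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a frozen pretrained backbone (e.g. an
# EfficientNet trunk): a fixed random projection plus ReLU, never updated.
W_frozen = rng.standard_normal((64, 8)) * 0.1

def features(x):
    return np.maximum(x @ W_frozen, 0.0)

# Toy dataset: labels are linearly separable in the frozen feature
# space by construction (stand-ins for LGG vs HGG).
X = rng.standard_normal((200, 64))
true_w = rng.standard_normal(8)
y = (features(X) @ true_w > 0).astype(float)

# Trainable head: logistic regression via plain gradient descent.
w = np.zeros(8)
b = 0.0
lr = 0.1
f = features(X)                          # backbone output is fixed
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(f @ w + b)))
    w -= lr * f.T @ (p - y) / len(y)     # cross-entropy gradient, head only
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(f @ w + b)))
acc = np.mean((p > 0.5) == (y > 0.5))
print(f"training accuracy of the head: {acc:.2f}")
```

Fine-tuning in the papers goes one step further: after the head converges, some backbone layers are unfrozen and trained at a small learning rate.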
Affiliation(s)
- María Teresa García-Ordás
- SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
- Héctor Alaiz-Moretón
- SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
5
Zou Q, Miller Z, Dzelebdzic S, Abadeer M, Johnson KM, Hussain T. Time-Resolved 3D cardiopulmonary MRI reconstruction using spatial transformer network. Math Biosci Eng 2023; 20:15982-15998. [PMID: 37919998; DOI: 10.3934/mbe.2023712] [Indexed: 11/04/2023]
Abstract
The accurate visualization and assessment of complex cardiac and pulmonary structures in 3D is critical for the diagnosis and treatment of cardiovascular and respiratory disorders. Conventional 3D cardiac magnetic resonance imaging (MRI) techniques suffer from long acquisition times, motion artifacts, and limited spatiotemporal resolution. This study proposes a novel time-resolved 3D cardiopulmonary MRI reconstruction method based on spatial transformer networks (STNs) to reconstruct 3D cardiopulmonary MRI acquired using 3D center-out radial ultra-short echo time (UTE) sequences. The proposed method employs an STN-based deep learning framework that combines data processing, a grid generator, and a sampler. The reconstructed 3D images were compared against the state-of-the-art time-resolved reconstruction method. The results showed that the proposed time-resolved 3D cardiopulmonary MRI reconstruction using STNs offers a robust and efficient approach for obtaining high-quality images. This method effectively overcomes the limitations of conventional 3D cardiac MRI techniques and has the potential to improve the diagnosis and treatment planning of cardiopulmonary disorders.
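The grid generator and sampler mentioned above are the core of a spatial transformer network: the grid generator maps output coordinates through a predicted transform, and the sampler bilinearly interpolates the input at those positions. A NumPy sketch of the affine 2D case (the paper works with time-resolved 3D volumes; this shows only the 2D idea):

```python
import numpy as np

def affine_grid(theta, h, w):
    """Map output pixel coords through a 2x3 affine matrix theta,
    using normalized coordinates in [-1, 1]."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # (3, H*W)
    sampled = theta @ coords                                     # (2, H*W)
    return sampled[0].reshape(h, w), sampled[1].reshape(h, w)

def bilinear_sample(img, gx, gy):
    """Bilinearly interpolate img at normalized grid positions (gx, gy)."""
    h, w = img.shape
    x = (gx + 1) * (w - 1) / 2            # back to pixel coordinates
    y = (gy + 1) * (h - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = x - x0, y - y0
    return (img[y0, x0] * (1 - dx) * (1 - dy)
            + img[y0, x0 + 1] * dx * (1 - dy)
            + img[y0 + 1, x0] * (1 - dx) * dy
            + img[y0 + 1, x0 + 1] * dx * dy)

img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])    # identity affine transform
gx, gy = affine_grid(identity, 4, 4)
warped = bilinear_sample(img, gx, gy)
print(np.allclose(warped, img))           # identity reproduces the input
```

In an STN, `theta` is predicted by a small localization network, and because bilinear sampling is differentiable, the whole transform is learned end to end.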
Affiliation(s)
- Qing Zou
- Division of Pediatric Cardiology, Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Advanced Imaging Research Center, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Zachary Miller
- Department of Biomedical Engineering, University of Wisconsin, Madison, WI, USA
- Sanja Dzelebdzic
- Division of Pediatric Cardiology, Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Maher Abadeer
- Division of Pediatric Cardiology, Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Kevin M Johnson
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Tarique Hussain
- Division of Pediatric Cardiology, Department of Pediatrics, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Advanced Imaging Research Center, The University of Texas Southwestern Medical Center, Dallas, TX, USA
6
Inomata S, Yoshimura T, Tang M, Ichikawa S, Sugimori H. Estimation of Left and Right Ventricular Ejection Fractions from cine-MRI Using 3D-CNN. Sensors (Basel) 2023; 23:6580. [PMID: 37514888; PMCID: PMC10384911; DOI: 10.3390/s23146580] [Received: 06/23/2023; Revised: 07/17/2023; Accepted: 07/20/2023; Indexed: 07/30/2023]
Abstract
Cardiac function indices are conventionally calculated by tracing the ventricles on short-axis cine-MRI images. A 3D-CNN (convolutional neural network) that adds time-series information to the images can estimate these indices without tracing, using images with known values and full cardiac cycles as input. Since the short-axis image depicts both the left and right ventricles, it is unclear which motion feature is captured. This study aims to estimate the indices by learning from the short-axis images and the known left and right ventricular ejection fractions, to confirm the accuracy, and to determine whether each index is captured as a feature. A total of 100 patients with publicly available short-axis cine images were used. The dataset was divided into training:test = 8:2, and a regression model was built by training with the 3D-ResNet50. Accuracy was assessed using five-fold cross-validation. The correlation coefficient, MAE (mean absolute error), and RMSE (root mean squared error) were used as accuracy metrics. The mean correlation coefficient of the left ventricular ejection fraction was 0.80, MAE was 9.41, and RMSE was 12.26. The mean correlation coefficient of the right ventricular ejection fraction was 0.56, MAE was 11.35, and RMSE was 14.95. The correlation coefficient was considerably higher for the left ventricular ejection fraction. Regression modeling using the 3D-CNN indicated that the left ventricular ejection fraction was estimated more accurately, and left ventricular systolic function was captured as a feature.
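The three accuracy indices used here, Pearson correlation coefficient, MAE, and RMSE, are straightforward to compute from paired true and predicted ejection fractions. A small pure-Python sketch with made-up values (not data from the study):

```python
import math

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def mae(a, b):
    """Mean absolute error."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def rmse(a, b):
    """Root mean squared error; penalizes large misses more than MAE."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# hypothetical true vs predicted ejection fractions (%)
truth = [55.0, 60.0, 45.0, 70.0, 50.0]
pred = [53.0, 65.0, 48.0, 66.0, 52.0]
print(round(pearson_r(truth, pred), 2),
      round(mae(truth, pred), 2),
      round(rmse(truth, pred), 2))   # 0.93 3.2 3.41
```

RMSE is always at least as large as MAE on the same errors, which is why the paper reports both.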
Affiliation(s)
- Soichiro Inomata
- Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Takaaki Yoshimura
- Department of Health Sciences and Technology, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
- Department of Medical Physics, Hokkaido University Hospital, Sapporo 060-8648, Japan
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Minghui Tang
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Department of Diagnostic Imaging, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Shota Ichikawa
- Department of Radiological Technology, School of Health Sciences, Faculty of Medicine, Niigata University, Niigata 951-8518, Japan
- Institute for Research Administration, Niigata University, Niigata 950-2181, Japan
- Hiroyuki Sugimori
- Global Center for Biomedical Science and Engineering, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Clinical AI Human Resources Development Program, Faculty of Medicine, Hokkaido University, Sapporo 060-8648, Japan
- Department of Biomedical Science and Engineering, Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
7
Mukherjee T, Pournik O, Lim Choi Keung SN, Arvanitis TN. Clinical Decision Support Systems for Brain Tumour Diagnosis and Prognosis: A Systematic Review. Cancers (Basel) 2023; 15:3523. [PMID: 37444633; DOI: 10.3390/cancers15133523] [Received: 05/31/2023; Revised: 07/02/2023; Accepted: 07/03/2023; Indexed: 07/15/2023] Open Access
Abstract
Clinical decision support systems (CDSSs) are being continuously developed and integrated into routine clinical practice, as they assist clinicians and radiologists in dealing with enormous amounts of medical data, reduce clinical errors, and improve diagnostic capabilities. They assist in the detection, classification, and grading of brain tumours and alert physicians to changes in treatment plans. The aim of this systematic review is to identify CDSSs that are used in brain tumour diagnosis and prognosis and that rely on data captured by any imaging modality. Following the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol, the literature search was conducted in the PubMed and Engineering Village Compendex databases. CDSSs identified through this review include Curiam BT, FASMA, MIROR, HealthAgents, and INTERPRET, among others. The review also examines CDSS tool types, system features, techniques, accuracy, and outcomes, to provide the latest evidence available in the field of neuro-oncology. An overview of CDSSs used to support clinical decision-making in the management and treatment of brain tumours is provided, along with their benefits, challenges, and future perspectives. Although a CDSS can improve diagnostic capabilities and healthcare delivery, there is a lack of specific evidence to support these claims. The absence of empirical data slows down both user acceptance and evaluation of the actual impact of CDSSs on brain tumour management. Rather than emphasizing only the advantages of implementing a CDSS, it is important to address its potential drawbacks and ethical implications; doing so can promote responsible use and facilitate faster adoption in clinical settings.
Affiliation(s)
- Teesta Mukherjee
- Department of Electronic, Electrical and Systems Engineering, School of Engineering, College of Engineering and Physical Sciences, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Omid Pournik
- Department of Electronic, Electrical and Systems Engineering, School of Engineering, College of Engineering and Physical Sciences, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Sarah N Lim Choi Keung
- Department of Electronic, Electrical and Systems Engineering, School of Engineering, College of Engineering and Physical Sciences, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Theodoros N Arvanitis
- Department of Electronic, Electrical and Systems Engineering, School of Engineering, College of Engineering and Physical Sciences, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
8
Xu M, Ouyang Y, Yuan Z. Deep Learning Aided Neuroimaging and Brain Regulation. Sensors (Basel) 2023; 23:4993. [PMID: 37299724; DOI: 10.3390/s23114993] [Received: 04/24/2023; Revised: 05/15/2023; Accepted: 05/22/2023; Indexed: 06/12/2023]
Abstract
Deep learning aided medical imaging is currently becoming a focal point of frontier AI applications and a future direction for precision neuroscience. This review aims to provide comprehensive and informative insights into the recent progress of deep learning and its applications in medical imaging for brain monitoring and regulation. The article starts with an overview of current methods for brain imaging, highlighting their limitations and introducing the potential benefits of using deep learning techniques to overcome them. We then delve into the details of deep learning, explaining the basic concepts and providing examples of how it can be used in medical imaging. In particular, we discuss the different types of deep learning models that can be used in medical imaging, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial network (GAN) assisted magnetic resonance imaging (MRI), positron emission tomography (PET)/computed tomography (CT), electroencephalography (EEG)/magnetoencephalography (MEG), optical imaging, and other imaging modalities. Overall, this review provides a useful reference at the intersection of deep learning aided neuroimaging and brain regulation.
Affiliation(s)
- Mengze Xu
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, China
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
- Yuanyuan Ouyang
- Nanomicro Sino-Europe Technology Company Limited, Zhuhai 519031, China
- Jiangfeng China-Portugal Technology Co., Ltd., Macau SAR 999078, China
- Zhen Yuan
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
9
Hida M, Eto S, Wada C, Kitagawa K, Imaoka M, Nakamura M, Imai R, Kubo T, Inoue T, Sakai K, Orui J, Tazaki F, Takeda M, Hasegawa A, Yamasaka K, Nakao H. Development of Hallux Valgus Classification Using Digital Foot Images with Machine Learning. Life (Basel) 2023; 13:1146. [PMID: 37240791; DOI: 10.3390/life13051146] [Received: 04/03/2023; Revised: 05/03/2023; Accepted: 05/07/2023; Indexed: 05/28/2023] Open Access
Abstract
Hallux valgus, a frequently seen foot deformity, requires early detection to prevent it from becoming more severe. Because it is also a health-economics problem, a means of quickly identifying it would be helpful. We designed an early version of a tool for screening hallux valgus using machine learning and investigated its accuracy. The tool ascertains whether patients have hallux valgus by analyzing pictures of their feet. In this study, 507 images of feet were used for machine learning. Image preprocessing was conducted using the comparatively simple pattern A (rescaling, angle adjustment, and trimming) and the slightly more complicated pattern B (the same steps plus vertical flip, binary formatting, and edge emphasis). This study used the VGG16 convolutional neural network. Machine learning with pattern B was more accurate than with pattern A. In our early model, pattern A achieved 0.62 for accuracy, 0.56 for precision, 0.94 for recall, and 0.71 for F1-score; for pattern B, the scores were 0.79, 0.77, 0.96, and 0.86, respectively. The model was sufficiently accurate to distinguish images of feet with hallux valgus from those of normal feet. With further refinement, this tool could be used for easy screening of hallux valgus.
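The reported accuracy, precision, recall, and F1-score are all derived from the confusion matrix. A small pure-Python sketch with hypothetical counts chosen to mimic a high-recall screening setting (these are not the study's actual counts):

```python
def classification_scores(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# hypothetical counts: few hallux valgus cases missed (high recall)
# at the cost of some false positives (lower precision)
acc, prec, rec, f1 = classification_scores(tp=48, fp=14, fn=2, tn=36)
print(f"accuracy={acc:.2f} precision={prec:.2f} "
      f"recall={rec:.2f} f1={f1:.2f}")
```

For a screening tool, the high recall is the priority: a false positive only triggers a closer look, while a false negative delays treatment.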
Affiliation(s)
- Mitsumasa Hida
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Shinji Eto
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Hibikino 2-4, Wakamatsu-ku, Kitakyushu 808-0135, Japan
- Chikamune Wada
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Hibikino 2-4, Wakamatsu-ku, Kitakyushu 808-0135, Japan
- Kodai Kitagawa
- Department of Industrial Systems Engineering, National Institute of Technology, Hachinohe College, 16-1 Uwanotai, Tamonoki, Hachinohe 039-1192, Japan
- Masakazu Imaoka
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Misa Nakamura
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Ryota Imai
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Takanari Kubo
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Takao Inoue
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Keiko Sakai
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Junya Orui
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Fumie Tazaki
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Masatoshi Takeda
- Department of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Graduate School of Rehabilitation, Osaka Kawasaki Rehabilitation University, Mizuma 158, Kaizuka 597-0104, Japan
- Ayuna Hasegawa
- Department of Rehabilitation, Takata-Kamitani Hospital, Kamiyamaguchi 4-26-14, Yamaguchi, Nishinomiya 651-1421, Japan
- Kota Yamasaka
- Department of Rehabilitation, Takata-Kamitani Hospital, Kamiyamaguchi 4-26-14, Yamaguchi, Nishinomiya 651-1421, Japan
- Hidetoshi Nakao
- Department of Physical Therapy, Josai International University, 1 Gumyo, Togane 283-8555, Japan