1
Alty J, Goldberg LR, Roccati E, Lawler K, Bai Q, Huang G, Bindoff AD, Li R, Wang X, St George RJ, Rudd K, Bartlett L, Collins JM, Aiyede M, Fernando N, Bhagwat A, Giffard J, Salmon K, McDonald S, King AE, Vickers JC. Development of a smartphone screening test for preclinical Alzheimer's disease and validation across the dementia continuum. BMC Neurol 2024; 24:127. PMID: 38627686. DOI: 10.1186/s12883-024-03609-z.
Abstract
BACKGROUND Dementia prevalence is predicted to triple to 152 million globally by 2050. Alzheimer's disease (AD) constitutes 70% of cases. There is an urgent need to identify individuals with preclinical AD, a 10-20-year period of progressive brain pathology without noticeable cognitive symptoms, for targeted risk reduction. Current tests of AD pathology are too invasive, specialised, or expensive for population-level assessment, and cognitive tests are normal in preclinical AD. Emerging evidence demonstrates that movement analysis is sensitive to AD across the disease continuum, including preclinical AD. Our new smartphone test, TapTalk, combines analysis of hand and speech-like movements to detect AD risk. This study aims to (1) determine which combinations of hand-speech movement data most accurately predict preclinical AD; (2) determine the usability, reliability, and validity of TapTalk in cognitively asymptomatic older adults; and (3) prospectively validate TapTalk against cognitive tests and clinical diagnoses of Mild Cognitive Impairment and AD dementia in older adults who have cognitive symptoms. METHODS Aim 1 will be addressed in a cross-sectional study of at least 500 cognitively asymptomatic older adults who will complete computerised tests comprising measures of hand motor control (finger tapping) and oro-motor control (syllabic diadochokinesis). So far, 1382 adults, mean (SD) age 66.20 (7.65) years, range 50-92 (72.07% female), have been recruited. Motor measures will be compared to a blood-based AD biomarker, phosphorylated tau 181, to develop an algorithm that classifies preclinical AD risk.
Aim 2 comprises three sub-studies in cognitively asymptomatic adults: (i) a cross-sectional study of 30-40 adults to determine the validity of data collection from different types of smartphones; (ii) a prospective cohort study of 50-100 adults aged ≥ 50 years to determine usability and test-retest reliability; and (iii) a prospective cohort study of ~1,000 adults aged ≥ 50 years to validate against cognitive measures. Aim 3 will be addressed in a cross-sectional study of ~200 participants with cognitive symptoms to validate TapTalk against the Montreal Cognitive Assessment and interdisciplinary consensus diagnosis. DISCUSSION This study will establish the precision of TapTalk to identify preclinical AD and estimate risk of cognitive decline. If accurate, this innovative smartphone app will enable low-cost, accessible screening of individuals for AD risk, with wide applications in public health initiatives and clinical trials. TRIAL REGISTRATION ClinicalTrials.gov identifier: NCT06114914, 29 October 2023. Retrospectively registered.
Affiliation(s)
- Jane Alty
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- School of Medicine, University of Tasmania, Hobart, TAS, 7001, Australia
- Royal Hobart Hospital, Hobart, TAS, 7001, Australia
- Lynette R Goldberg
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Eddy Roccati
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Katherine Lawler
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- School of Allied Health, Human Services and Sport, La Trobe University, Melbourne, VIC, 3086, Australia
- Quan Bai
- School of Information and Communication Technology, University of Tasmania, Hobart, TAS, 7005, Australia
- Guan Huang
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Aidan D Bindoff
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Renjie Li
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- School of Information and Communication Technology, University of Tasmania, Hobart, TAS, 7005, Australia
- Xinyi Wang
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Rebecca J St George
- School of Psychological Sciences, University of Tasmania, Hobart, TAS, 7005, Australia
- Kaylee Rudd
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Larissa Bartlett
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Jessica M Collins
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Mimieveshiofuo Aiyede
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Anju Bhagwat
- Royal Hobart Hospital, Hobart, TAS, 7001, Australia
- Julia Giffard
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Katharine Salmon
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Royal Hobart Hospital, Hobart, TAS, 7001, Australia
- Scott McDonald
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- Anna E King
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
- James C Vickers
- Wicking Dementia Research and Education Centre, University of Tasmania, Liverpool Street, Hobart, TAS, 7001, Australia
2
Tan TF, Yap CL, Peterson CL, Wong D, Wong TY, Cheung CMG, Schmetterer L, Tan ACS. Defining the structure-function relationship of specific lesions in early and advanced age-related macular degeneration. Sci Rep 2024; 14:8724. PMID: 38622152. PMCID: PMC11018739. DOI: 10.1038/s41598-024-54619-3.
Abstract
The objective of this study was to define structure-function relationships of pathological lesions related to age-related macular degeneration (AMD) using microperimetry and multimodal retinal imaging. We conducted a cross-sectional study of 87 patients with AMD (30 eyes with early and intermediate AMD and 110 eyes with advanced AMD), compared to 33 normal controls (66 eyes), recruited from a single tertiary center. All participants underwent en face and cross-sectional optical coherence tomography (Heidelberg HRA-2), OCT angiography, color and infrared (IR) fundus photography, and microperimetry (MP; Nidek MP-3). Multimodal images were graded for specific AMD pathological lesions. A custom marking tool was used to demarcate lesion boundaries on corresponding en face IR images, which were subsequently superimposed onto MP color fundus photographs with retinal sensitivity points. The resulting overlay was used to correlate pathological structural changes with zonal functional changes. Mean ages of patients with early/intermediate AMD, advanced AMD, and controls were 73 (SD = 8.2), 70.8 (SD = 8), and 65.4 (SD = 7.7) years, respectively. Mean retinal sensitivity (MRS) of both early/intermediate (23.1 dB; SD = 5.5) and advanced AMD (18.1 dB; SD = 7.8) eyes was significantly worse than in controls (27.8 dB; SD = 4.3) (p < 0.01). Advanced AMD eyes had significantly more unstable fixation (70%; SD = 63.6), a larger mean fixation area (3.9 mm2; SD = 3.0), and a focal fixation point further from the fovea (0.7 mm; SD = 0.8) than controls (29%; SD = 43.9; 2.6 mm2; SD = 1.9; 0.4 mm; SD = 0.3) (p ≤ 0.01). Notably, 22 fellow eyes of AMD eyes (25.7 dB; SD = 3.0), with no AMD lesions, still had lower MRS than controls (p = 0.04). Among specific AMD-related lesions, end-stage changes such as fibrosis (5.5 dB; SD = 5.4) and atrophy (6.2 dB; SD = 7.0) had the lowest MRS, while drusen and pigment epithelial detachment (17.7 dB; SD = 8.0) had the highest.
Peri-lesional areas (20.2 dB; SD = 7.6) and surrounding structurally normal areas (22.2 dB; SD = 6.9) of the retina with no AMD lesions still had lower MRS than controls (27.8 dB; SD = 4.3) (p < 0.01). Our detailed topographic structure-function correlation identified specific AMD pathological changes associated with poorer visual function. This can add value to the assessment of visual function and help optimize outcomes with existing and potential future therapies.
Affiliation(s)
- Ting Fang Tan
- Singapore National Eye Centre, Singapore General Hospital, 11 Third Hospital Avenue, Singapore, 119228, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Chun Lin Yap
- Singapore Eye Research Institute, Singapore, Singapore
- Claire L Peterson
- Singapore National Eye Centre, Singapore General Hospital, 11 Third Hospital Avenue, Singapore, 119228, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Damon Wong
- Singapore Eye Research Institute, Singapore, Singapore
- Tien Yin Wong
- Singapore National Eye Centre, Singapore General Hospital, 11 Third Hospital Avenue, Singapore, 119228, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Chui Ming Gemmy Cheung
- Singapore National Eye Centre, Singapore General Hospital, 11 Third Hospital Avenue, Singapore, 119228, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore, Singapore
- Leopold Schmetterer
- Singapore National Eye Centre, Singapore General Hospital, 11 Third Hospital Avenue, Singapore, 119228, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore, Singapore
- School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore, Singapore
- Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Anna Cheng Sim Tan
- Singapore National Eye Centre, Singapore General Hospital, 11 Third Hospital Avenue, Singapore, 119228, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-National University of Singapore Medical School, Singapore, Singapore
3
Liu Z, Jia J, Bai F, Ding Y, Han L, Bai G. Predicting rectal cancer tumor budding grading based on MRI and CT with multimodal deep transfer learning: A dual-center study. Heliyon 2024; 10:e28769. PMID: 38590908. PMCID: PMC11000007. DOI: 10.1016/j.heliyon.2024.e28769.
Abstract
Objective To investigate the effectiveness of a multimodal deep learning model in predicting tumor budding (TB) grading in rectal cancer (RC) patients. Materials and methods A retrospective analysis was conducted on 355 patients with rectal adenocarcinoma from two hospitals. Of these, 289 patients from our institution were randomly divided into an internal training cohort (n = 202) and an internal validation cohort (n = 87) in a 7:3 ratio, while an additional 66 patients from another hospital constituted an external validation cohort. Various deep learning models were constructed and their performance compared using T1CE and contrast-enhanced CT images, and the optimal models were selected to create a multimodal fusion model. Using univariable and multivariable logistic regression, clinical N staging and fecal occult blood were identified as independent risk factors and used to construct the clinical model. Decision-level fusion was employed to integrate these two models into an ensemble model. The predictive performance of each model was evaluated using the area under the curve (AUC), DeLong's test, calibration curves, and decision curve analysis (DCA). Gradient-weighted Class Activation Mapping (Grad-CAM) was performed for model visualization and interpretation. Results The multimodal fusion model demonstrated superior performance compared to single-modal models, with AUC values of 0.869 (95% CI: 0.761-0.976) in the internal validation cohort and 0.848 (95% CI: 0.721-0.975) in the external validation cohort. The final ensemble model exhibited the best performance, with AUC values of 0.898 (95% CI: 0.820-0.975) in the internal validation cohort and 0.868 (95% CI: 0.768-0.968) in the external validation cohort.
Conclusion Multimodal deep learning models can effectively and non-invasively provide individualized predictions of TB grading in RC patients, offering valuable guidance for treatment selection and prognosis assessment.
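The ensemble step described in this abstract, decision-level fusion of the imaging model and the clinical model, reduces in its simplest form to a weighted average of the two models' predicted probabilities. A minimal sketch (the weight and all variable names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def decision_level_fusion(p_imaging, p_clinical, w=0.6):
    """Fuse two classifiers at the decision level by weighted averaging
    of their predicted probabilities of high-grade tumor budding.
    w is a hypothetical weight given to the imaging model."""
    p_imaging = np.asarray(p_imaging, dtype=float)
    p_clinical = np.asarray(p_clinical, dtype=float)
    return w * p_imaging + (1.0 - w) * p_clinical

# Three toy patients on whom the two models partly disagree.
p_img = [0.9, 0.2, 0.6]   # multimodal imaging model (MRI + CT)
p_clin = [0.7, 0.4, 0.8]  # clinical model (N stage, fecal occult blood)
p_ens = decision_level_fusion(p_img, p_clin)  # -> [0.82, 0.28, 0.68]
```

In practice the weight would be tuned on the internal validation cohort, and the fused probabilities fed into the AUC and DeLong comparisons the study reports.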
Affiliation(s)
- Ziyan Liu
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Jianye Jia
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Fan Bai
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Yuxin Ding
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Lei Han
- Department of Medical Imaging, Huaian Hospital Affiliated to Xuzhou Medical University, Huaian, Jiangsu, China
- Genji Bai
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
4
Veras Magalhães G, L. de S. Santos R, H. S. Vogado L, Cardoso de Paiva A, de Alcântara dos Santos Neto P. XRaySwinGen: Automatic medical reporting for X-ray exams with multimodal model. Heliyon 2024; 10:e27516. PMID: 38560155. PMCID: PMC10979158. DOI: 10.1016/j.heliyon.2024.e27516.
Abstract
The importance of radiology in modern medicine is acknowledged for its non-invasive diagnostic capabilities, yet the manual formulation of unstructured medical reports poses time constraints and error risks. This study addresses a common limitation of artificial intelligence applications in medical image captioning, which typically focus on classification problems and lack detailed information about the patient's condition. Despite advances in AI-generated medical reports, incorporating the descriptive details from X-ray images that are essential for comprehensive reports remains a challenge. The proposed solution is a multimodal model that uses computer vision for image representation and natural language processing for textual report generation. A notable contribution is the innovative use of the Swin Transformer as the image encoder, enabling hierarchical mapping and enhanced model perception without a surge in parameters or computational cost. The model incorporates GPT-2 as the textual decoder, integrating cross-attention layers, and is trained bilingually on datasets in Portuguese (PT-BR) and English. Promising results were obtained on the proposed dataset (ROUGE-L 0.748, METEOR 0.741) and on the NIH Chest X-ray dataset (ROUGE-L 0.404, METEOR 0.393).
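The coupling described here, a GPT-2 decoder attending to Swin Transformer image features through cross-attention layers, reduces to scaled dot-product attention in which queries come from text positions and keys/values from image patches. A single-head sketch under that assumption (projection matrices and multi-head structure omitted; this is not the XRaySwinGen code):

```python
import numpy as np

def cross_attention(text_queries, image_features):
    """Single-head scaled dot-product cross-attention.

    text_queries:   (T, d) decoder states, one per generated token.
    image_features: (P, d) encoder outputs, one per image patch.
    Returns (T, d): each token position as a softmax-weighted mix
    of image patch features.
    """
    d = text_queries.shape[-1]
    scores = text_queries @ image_features.T / np.sqrt(d)  # (T, P) similarities
    scores -= scores.max(axis=-1, keepdims=True)           # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over patches
    return weights @ image_features                        # attend to the image

rng = np.random.default_rng(0)
attended = cross_attention(rng.normal(size=(4, 8)),    # 4 toy tokens
                           rng.normal(size=(16, 8)))   # 16 toy patches
```

In the full model, learned query/key/value projections and several such heads sit inside each decoder block, letting every generated word condition on the relevant image regions.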
Affiliation(s)
- Luis H. S. Vogado
- Departamento de Computação, Universidade Federal do Piauí, Teresina, Brazil
5
Schmid S, Uecker C, Fröhlich A, Langhorst J. Effects of an integrative multimodal inpatient program on fatigue and work ability in patients with Post-COVID Syndrome: a prospective observational study. Eur Arch Psychiatry Clin Neurosci 2024. PMID: 38578435. DOI: 10.1007/s00406-024-01792-1.
Abstract
Post-COVID syndrome (PCS) is characterized by a variety of non-specific symptoms, one of the most prominent being fatigue. So far, no evidence-based causal therapy has been established, and treatment of PCS is primarily symptom-oriented. The Clinic for Internal and Integrative Medicine in Bamberg, Germany, offers a comprehensive multimodal integrative inpatient therapy for PCS patients. Within a prospective uncontrolled observational study, the results of N = 79 patients were analysed. Post-COVID fatigue patients were hospitalized for up to 14 days. The treatment consists of individual modules depending on the patient's needs and includes a wide range of integrative non-pharmacological treatment modalities. Outcomes were assessed before and after the inpatient treatment, as well as 6 months after discharge from the hospital. Fatigue of the post-COVID patients in this study (M = 76.30, SD = 10.18, N = 64) was initially significantly higher than in the subsample of women aged 60-92 years from the general German population (M = 51.5; Schwarz et al., Onkologie 26:140-144, 2003), T(63) = 19.50, p < .001. Fatigue showed a significant and clinically relevant reduction directly after discharge (MT1 = 76.21, SD = 11.38, N = 42; MT2 = 66.57, SD = 15.55, N = 42), F(1, 41) = 19.80, p < .001, partial eta squared = .326, as well as six months after discharge (MT3 = 65.31, SD = 17.20, N = 42), F(1, 41), p < .001, partial eta squared = .371. Additionally, self-reported ability to work (NRS, 0-10) improved significantly from admission (MT1 = 2.54, SD = 2.23, N = 39) to discharge (MT2 = 4.26, SD = 2.60, N = 39), F(1, 38) = 26.37, p < .001, partial eta squared = .410, as well as to six months later (MT3 = 4.41, SD = 3.23, N = 39), F(1, 38) = 15.00, p < .001, partial eta squared = .283.
The study showed that patients suffering from chronic post-COVID syndrome for several months can achieve a significant improvement in their leading fatigue symptoms, and in the subjective assessment of their ability to work, through a comprehensive two-week multimodal integrative inpatient program.
Affiliation(s)
- Sarah Schmid
- Department of Internal and Integrative Medicine, Sozialstiftung Bamberg, 96049, Bamberg, Germany
- Christine Uecker
- Department of Internal and Integrative Medicine, Sozialstiftung Bamberg, 96049, Bamberg, Germany
- Antje Fröhlich
- Department of Internal and Integrative Medicine, Sozialstiftung Bamberg, 96049, Bamberg, Germany
- Jost Langhorst
- Department of Internal and Integrative Medicine, Sozialstiftung Bamberg, 96049, Bamberg, Germany
- Department of Integrative Medicine, Medical Faculty, University of Duisburg-Essen, 96049, Bamberg, Germany
6
Pan L, Peng Y, Li Y, Wang X, Liu W, Xu L, Liang Q, Peng S. SELECTOR: Heterogeneous graph network with convolutional masked autoencoder for multimodal robust prediction of cancer survival. Comput Biol Med 2024; 172:108301. PMID: 38492453. DOI: 10.1016/j.compbiomed.2024.108301.
Abstract
Accurately predicting the survival rate of cancer patients is crucial for aiding clinicians in planning appropriate treatment, reducing cancer-related medical expenses, and significantly enhancing patients' quality of life. Multimodal prediction of cancer patient survival offers a more comprehensive and precise approach. However, existing methods still grapple with challenges related to missing multimodal data and information interaction within modalities. This paper introduces SELECTOR, a heterogeneous graph-aware network based on a convolutional masked autoencoder for robust multimodal prediction of cancer patient survival. SELECTOR comprises feature edge reconstruction, convolutional masked encoder, feature cross-fusion, and multimodal survival prediction modules. Initially, we construct a multimodal heterogeneous graph and employ the meta-path method for feature edge reconstruction, ensuring comprehensive incorporation of feature information from graph edges and effective embedding of nodes. To mitigate the impact of missing features within a modality on prediction accuracy, we devised a convolutional masked autoencoder (CMAE) to process the heterogeneous graph after feature reconstruction. Subsequently, the feature cross-fusion module facilitates communication between modalities, ensuring that output features encompass all features of the modality and relevant information from other modalities. Extensive experiments and analysis on six cancer datasets from TCGA demonstrate that our method significantly outperforms state-of-the-art methods in both the missing-modality and complete-information settings. Our code is available at https://github.com/panliangrui/Selector.
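The core trick behind a masked autoencoder such as the CMAE module above is to corrupt a random fraction of the input features and score reconstruction only on the corrupted positions. A generic sketch of that idea (only the masking and the masked loss are shown; the convolutional encoder/decoder and the heterogeneous graph are omitted, and nothing here is the paper's implementation):

```python
import numpy as np

def mask_features(x, mask_ratio=0.3, rng=None):
    """Zero out a random fraction of entries in a feature matrix.
    Returns the corrupted matrix and a boolean mask (True = masked)."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(x.shape) < mask_ratio
    return np.where(mask, 0.0, x), mask

def masked_mse(x, x_hat, mask):
    """Mean squared reconstruction error over masked entries only."""
    return float(((x - x_hat) ** 2 * mask).sum() / max(mask.sum(), 1))

x = np.ones((5, 4))  # toy per-node feature matrix
x_corrupt, mask = mask_features(x, mask_ratio=0.5)
# An encoder/decoder pair would be trained to map x_corrupt back to x;
# a perfect reconstruction drives masked_mse(x, x_hat, mask) to zero.
```

Training against this objective forces the model to infer masked (or genuinely missing) features from the surrounding context, which is what makes the approach robust to missing modality data.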
Affiliation(s)
- Liangrui Pan
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China
- Yijun Peng
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China
- Yan Li
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China
- Xiang Wang
- Department of Thoracic Surgery, The Second Xiangya Hospital, Central South University, Changsha, 410011, Hunan, China
- Wenjuan Liu
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China
- Liwen Xu
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China
- Qingchun Liang
- Department of Pathology, The Second Xiangya Hospital, Central South University, Changsha, 410011, Hunan, China
- Shaoliang Peng
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, 410083, Hunan, China
7
Lu H, Mao Y, Li J, Zhu L. Multimodal deep learning-based diagnostic model for BPPV. BMC Med Inform Decis Mak 2024; 24:82. PMID: 38515156. PMCID: PMC10956181. DOI: 10.1186/s12911-024-02438-x.
Abstract
BACKGROUND Benign paroxysmal positional vertigo (BPPV) is a prevalent form of vertigo that requires a skilled physician to diagnose, by observing the nystagmus and vertigo resulting from specific changes in the patient's position. In this study, we explore the integration of eye movement video and positional information for BPPV diagnosis and apply artificial intelligence (AI) methods to improve diagnostic accuracy. METHODS We collected eye movement videos and diagnostic data from 518 patients with BPPV who visited the hospital for examination from January to March 2021 and developed a BPPV dataset. Based on the characteristics of the dataset, we propose a multimodal deep learning diagnostic model that combines a video understanding model, an autoencoder, and a cross-attention mechanism. RESULTS Validation on the test set showed that the average accuracy of the model reached 81.7%, demonstrating the effectiveness of the proposed multimodal deep learning method for BPPV diagnosis. We further demonstrated the necessity of head-position information for the diagnostic model and the contribution of the cross-attention mechanism to the fusion of postural and oculomotor information. Our results underscore the potential of AI-based methods to improve the accuracy of BPPV diagnosis and the importance of considering both postural and oculomotor information.
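For contrast with the cross-attention fusion this abstract credits, the simplest multimodal baseline is feature-level concatenation: embed each modality separately, concatenate, and classify. A sketch with hypothetical dimensions (a 64-d eye-movement embedding, a 16-d head-position embedding, 4 toy classes; none of this is the authors' model):

```python
import numpy as np

def softmax(z):
    z = z - z.max()       # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def concat_fusion_predict(eye_emb, pos_emb, head_weights):
    """Concatenate per-modality embeddings and apply a linear
    classification head followed by softmax."""
    fused = np.concatenate([eye_emb, pos_emb])  # (64 + 16,) = (80,)
    return softmax(head_weights @ fused)        # class probabilities

rng = np.random.default_rng(1)
probs = concat_fusion_predict(rng.normal(size=64),       # eye-movement video embedding
                              rng.normal(size=16),       # head-position embedding
                              rng.normal(size=(4, 80)))  # untrained linear head
```

Cross-attention, as used in the paper, replaces this static concatenation with a learned, content-dependent weighting of one modality by the other, which is where the reported fusion benefit comes from.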
Affiliation(s)
- Hang Lu
- State Key Laboratory of Power Transmission Equipment Technology, School of Electrical Engineering, Chongqing University, Chongqing, China
- Yuxing Mao
- State Key Laboratory of Power Transmission Equipment Technology, School of Electrical Engineering, Chongqing University, Chongqing, China
- Jinsen Li
- State Key Laboratory of Power Transmission Equipment Technology, School of Electrical Engineering, Chongqing University, Chongqing, China
- Lin Zhu
- State Key Laboratory of Power Transmission Equipment Technology, School of Electrical Engineering, Chongqing University, Chongqing, China
8
Koga S, Du W. ChatGPT's limited accuracy in generating anatomical images for medical education. Skeletal Radiol 2024. PMID: 38506966. DOI: 10.1007/s00256-024-04655-x.
Affiliation(s)
- Shunsuke Koga
- Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA, 19104, USA
- Wei Du
- Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA, 19104, USA
9
Tian S, Chen L, Wang X, Li G, Fu Z, Ji Y, Lu J, Wang X, Shan S, Bi Y. Vision matters for shape representation: Evidence from sculpturing and drawing in the blind. Cortex 2024; S0010-9452(24)00072-8. PMID: 38582629. DOI: 10.1016/j.cortex.2024.02.016.
Abstract
Shape is a property that can be perceived by both vision and touch, and is classically considered supramodal. While there is mounting evidence for a shared cognitive and neural representation space between visual and tactile shape, previous research has tended to rely on dissimilarity structures between objects and has not examined the detailed properties of shape representation in the absence of vision. To address this gap, we conducted three explicit object-shape knowledge production experiments with congenitally blind and sighted participants, who were asked to produce verbal features, 3D clay models, and 2D drawings of familiar objects with varying levels of tactile exposure, including tools, large nonmanipulable objects, and animals. We found that the absence of visual experience (i.e., in the blind group) led to stronger differences for animals than for tools and large objects, suggesting that direct tactile experience of objects is essential for shape representation when vision is unavailable. For tools, with their rich tactile/manipulation experience, the blind produced overall good shapes comparable to the sighted, yet also showed intriguing differences: the blind group had more variation and a systematic bias in the geometric properties of tools (making them stubbier than the sighted did), indicating that visual experience contributes to aligning internal representations and calibrating overall object configurations, at least for tools. Taken together, object shape representation reflects an intricate orchestration of vision, touch, and language.
Affiliation(s)
- Shuang Tian
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Lingjuan Chen
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Guochao Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ze Fu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yufeng Ji
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Jiahui Lu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaosha Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Shiguang Shan
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China; Chinese Institute for Brain Research, Beijing, China
Collapse
|
10
|
Hagoort P, Özyürek A. Extending the Architecture of Language From a Multimodal Perspective. Top Cogn Sci 2024. [PMID: 38493475 DOI: 10.1111/tops.12728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2023] [Revised: 02/26/2024] [Accepted: 02/27/2024] [Indexed: 03/19/2024]
Abstract
Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture, which needs to include the visual signals concomitant to speech. We present the evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
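The proposal that speech and co-speech gestures meet in a common distributional-semantic representation can be sketched with a toy similarity computation; the 3-dimensional vectors and the cosine measure below are illustrative assumptions, not the authors' actual model:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings": a spoken word and two candidate gestures
# projected into one shared semantic space.
word_drink = [0.9, 0.1, 0.2]
gesture_cup_to_mouth = [0.8, 0.2, 0.1]
gesture_hammering = [0.1, 0.9, 0.3]

# The semantically congruent gesture lies closer to the word.
assert cosine(word_drink, gesture_cup_to_mouth) > cosine(word_drink, gesture_hammering)
```

The point of the sketch is only that a single vector space lets spoken and visual signals be compared with one similarity measure.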
Affiliation(s)
- Peter Hagoort: Max Planck Institute for Psycholinguistics, Nijmegen; Donders Institute for Brain, Cognition and Behaviour, Nijmegen
- Aslı Özyürek: Max Planck Institute for Psycholinguistics, Nijmegen; Donders Institute for Brain, Cognition and Behaviour, Nijmegen

11
Soloukey S, Generowicz B, Warnert E, Springeling G, Schouten J, De Zeeuw C, Dirven C, Vincent A, Kruizinga P. Patient-Specific Vascular Flow Phantom for MRI- and Doppler Ultrasound Imaging. Ultrasound Med Biol 2024:S0301-5629(24)00109-1. [PMID: 38471997 DOI: 10.1016/j.ultrasmedbio.2024.02.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/21/2023] [Revised: 01/29/2024] [Accepted: 02/16/2024] [Indexed: 03/14/2024]
Abstract
OBJECTIVE Intraoperative Doppler ultrasound imaging of human brain vasculature is an emerging neuroimaging modality that offers vascular brain mapping with unprecedented spatiotemporal resolution. At present, however, access to the human brain with Doppler ultrasound is only possible in this intraoperative context, posing a significant challenge for the validation of imaging techniques. This challenge necessitates the development of realistic flow phantoms outside the neurosurgical operating room as external platforms for testing hardware and software. An ideal ultrasound flow phantom should provide reference-like values in standardized topologies such as a slanted pipe, and allow measurements in structures closely resembling the vascular morphology of actual patients. Additionally, the phantom should be compatible with other clinical cerebrovascular imaging modalities. To meet these criteria, we developed and validated a versatile, multimodal MRI- and Doppler-ultrasound phantom. METHODS Our approach incorporates the latest advancements in phantom research, using tissue-mimicking material and 3D printing with water-soluble resin to create wall-less, patient-specific lumens compatible with both ultrasound and MRI. RESULTS We successfully produced three distinct phantoms: a slanted pipe, a Y-shaped phantom representing a bifurcating vessel, and an arteriovenous malformation (AVM) derived from clinical digital subtraction angiography (DSA) data of the brain. We present 3D ultrafast power Doppler imaging results from these phantoms, demonstrating their ability to mimic complex flow patterns as observed in the human brain. Furthermore, we showcase the compatibility of our phantom with magnetic resonance imaging (MRI). CONCLUSION We developed an MRI- and Doppler-ultrasound-compatible flow phantom using customizable, water-soluble resin prints ranging from geometrical forms to patient-specific vasculature.
Affiliation(s)
- Sadaf Soloukey: Department of Neuroscience, Erasmus MC, Rotterdam, The Netherlands; Department of Neurosurgery, Erasmus MC, Rotterdam, The Netherlands
- Esther Warnert: Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Erasmus MC Cancer Institute, Rotterdam, The Netherlands
- Geert Springeling: Department of Experimental Medical Instrumentation, Erasmus MC, Rotterdam, The Netherlands
- Joost Schouten: Department of Neurosurgery, Erasmus MC, Rotterdam, The Netherlands
- Chris De Zeeuw: Department of Neuroscience, Erasmus MC, Rotterdam, The Netherlands; Netherlands Institute for Neuroscience, Royal Dutch Academy for Arts and Sciences, Amsterdam, Netherlands
- Clemens Dirven: Department of Neurosurgery, Erasmus MC, Rotterdam, The Netherlands
- Arnaud Vincent: Department of Neurosurgery, Erasmus MC, Rotterdam, The Netherlands
- Pieter Kruizinga: Department of Neuroscience, Erasmus MC, Rotterdam, The Netherlands

12
Lin J, Yang J, Yin M, Tang Y, Chen L, Xu C, Zhu S, Gao J, Liu L, Liu X, Gu C, Huang Z, Wei Y, Zhu J. Development and Validation of Multimodal Models to Predict the 30-Day Mortality of ICU Patients Based on Clinical Parameters and Chest X-Rays. J Imaging Inform Med 2024:10.1007/s10278-024-01066-1. [PMID: 38448758 DOI: 10.1007/s10278-024-01066-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Revised: 02/21/2024] [Accepted: 02/22/2024] [Indexed: 03/08/2024]
Abstract
We aimed to develop and validate multimodal ICU patient prognosis models that combine clinical parameters data and chest X-ray (CXR) images. A total of 3798 subjects with clinical parameters and CXR images were extracted from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database and an external hospital (the test set). The primary outcome was 30-day mortality after ICU admission. Automated machine learning (AutoML) and convolutional neural networks (CNNs) were used to construct single-modal models based on clinical parameters and CXR separately. An early fusion approach was used to integrate both modalities (clinical parameters and CXR) into a multimodal model named PrismICU. Compared to the single-modal models, i.e., the clinical parameter model (AUC = 0.80, F1-score = 0.43) and the CXR model (AUC = 0.76, F1-score = 0.45) and the scoring system APACHE II (AUC = 0.83, F1-score = 0.77), PrismICU (AUC = 0.95, F1 score = 0.95) showed improved performance in predicting the 30-day mortality in the validation set. In the test set, PrismICU (AUC = 0.82, F1-score = 0.61) was also better than the clinical parameters model (AUC = 0.72, F1-score = 0.50), CXR model (AUC = 0.71, F1-score = 0.36), and APACHE II (AUC = 0.62, F1-score = 0.50). PrismICU, which integrated clinical parameters data and CXR images, performed better than single-modal models and the existing scoring system. It supports the potential of multimodal models based on structured data and imaging in clinical management.
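The early-fusion design described above (concatenating clinical features with image-derived features before a single model scores them) can be sketched as follows; the two-number image summary and logistic scorer are stand-ins for the paper's AutoML/CNN pipeline, not its actual implementation:

```python
from math import exp

def image_features(pixels):
    # Stand-in for CNN features: mean intensity and fraction of bright pixels.
    n = len(pixels)
    return [sum(pixels) / n, sum(p > 0.5 for p in pixels) / n]

def early_fusion(clinical, pixels):
    # Early fusion: concatenate both modalities into one feature vector
    # before any model sees them.
    return clinical + image_features(pixels)

def predict_mortality(features, weights, bias=0.0):
    # Minimal logistic scorer over the fused vector.
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + exp(-z))

clinical = [0.7, 0.3]           # e.g. scaled age and a lab value (illustrative)
pixels = [0.2, 0.6, 0.9, 0.4]   # toy CXR intensities in [0, 1]
fused = early_fusion(clinical, pixels)
assert len(fused) == 4
risk = predict_mortality(fused, weights=[0.5, 1.2, 0.8, 0.6])
assert 0.0 < risk < 1.0
```

The design choice early fusion makes is that a single downstream model can learn interactions between modalities, at the cost of requiring both modalities at inference time.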
Affiliation(s)
- Jiaxi Lin: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Jin Yang: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Minyue Yin: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Yuxiu Tang: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Liquan Chen: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Chang Xu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Shiqi Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Jingwen Gao: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Lu Liu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Xiaolin Liu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Chenqi Gu: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Zhou Huang: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Yao Wei: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China
- Jinzhou Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Jiangsu, Suzhou 215006, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China

13
Bae EB, Han KM. A structural equation modeling approach using behavioral and neuroimaging markers in major depressive disorder. J Psychiatr Res 2024; 171:246-255. [PMID: 38325105 DOI: 10.1016/j.jpsychires.2024.02.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/18/2023] [Revised: 12/16/2023] [Accepted: 02/01/2024] [Indexed: 02/09/2024]
Abstract
Major depressive disorder (MDD) has consistently proven to be a multifactorial and highly comorbid disease. Although recent depression research has demonstrated causal links between MDD-related factors and a small number of variables, including brain structural changes, a high-statistical-power analysis of the various factors has yet to be conducted. We retrospectively analyzed data from 155 participants (84 healthy controls and 71 patients with MDD). We used magnetic resonance imaging and diffusion tensor imaging data, along with scales assessing childhood trauma, depression severity, cognitive dysfunction, impulsivity, and suicidal ideation. To evaluate the causal relations among multiple variables simultaneously, we implemented two types of MDD-specific structural equation models (SEM): a behavioral and a neurobehavioral model. The behavioral SEM showed an excellent fit in the MDD group (Comparative Fit Index [CFI] = 1.000, Root Mean Square Error of Approximation [RMSEA] = 0.000), with strong correlations among the scales for childhood trauma, depression severity, suicidal ideation, impulsivity, and cognitive dysfunction. Based on the behavioral SEM, we established neurobehavioral models showing the best fit in MDD, especially when including the right cingulate cortex, the central to posterior corpus callosum, the right putamen, pallidum, whole brainstem, and ventral diencephalon including the thalamus (CFI > 0.96, RMSEA < 0.05). Our MDD-specific model revealed that the limbic-associated regions are strongly connected with childhood trauma rather than depression severity, and that they independently affect suicidal ideation and cognitive dysfunction. Furthermore, cognitive dysfunction could affect impulsivity.
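The fit indices quoted above (CFI, RMSEA) have standard closed forms; a minimal sketch of how the reported "perfect" values arise, with invented chi-square inputs for illustration:

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation (standard formula)."""
    return sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2, df, chi2_null, df_null):
    """Comparative Fit Index relative to the baseline (null) model."""
    d_model = max(chi2 - df, 0)
    d_null = max(chi2_null - df_null, d_model, 0)
    return 1.0 if d_null == 0 else 1.0 - d_model / d_null

# A model whose chi-square does not exceed its degrees of freedom
# reproduces CFI = 1.000 and RMSEA = 0.000, as reported for the
# behavioral SEM; chi-square values here are hypothetical.
assert rmsea(chi2=8.0, df=10, n=71) == 0.0
assert cfi(chi2=8.0, df=10, chi2_null=250.0, df_null=15) == 1.0
```

This also clarifies why CFI = 1.000 / RMSEA = 0.000 indicates fit rather than statistical significance: both indices saturate whenever the model chi-square falls below its degrees of freedom.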
Affiliation(s)
- Eun Bit Bae: Research Institute for Medical Bigdata Science, Korea University, Seoul, Republic of Korea; Department of Psychiatry, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Kyu-Man Han: Department of Psychiatry, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea

14
Talty A, Morris R, Deighan C. Home-based self-management multimodal cancer interventions & cardiotoxicity: a scoping review. Cardiooncology 2024; 10:12. [PMID: 38424647 PMCID: PMC10903028 DOI: 10.1186/s40959-024-00204-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Accepted: 01/17/2024] [Indexed: 03/02/2024]
Abstract
BACKGROUND Due to advancements in methods of cancer treatment, the population of people living with and beyond cancer is growing dramatically. The number of cancer survivors developing cardiovascular diseases and heart failure is also rising, due in part to the cardiotoxic nature of many cancer treatments. Guidelines are increasingly being released, emphasising the need for interdisciplinary action to address this gap in survivorship care. However, the extent to which interventions exist that incorporate the recommendations of cardio-oncology research remains undetermined. OBJECTIVE The aim of this scoping review is to assess the nature, extent and remit of existing home-based self-management multimodal interventions (HSMIs) in cancer care and their integration of cardio-oncology principles. METHODS The review was conducted in accordance with the PRISMA Extension for Scoping Reviews Guidelines. Databases were independently searched for articles from 2010 to 2022 by two members of the research team. Data were charted and synthesised using the following criteria: (a) the focus of the intervention, (b) the medium of delivery, (c) the duration, (d) the modalities included in the interventions, (e) the research articles associated with each intervention, (f) the type of studies conducted, (g) key measures used, and (h) outcomes reported. RESULTS Interventions encompassed six key modalities: Psychological Support, Physical Activity, Nutrition, Patient Education, Lifestyle and Caregiver Support. The focus, medium of delivery and duration of interventions varied significantly. While a considerable number of study protocols and pilot studies documenting HSMIs exist, only 25% appear to have progressed beyond this stage of development. Of those that have, the present review did not identify any 'feasible' interventions that covered each of the six modalities while being generalisable to all cancer survivors and incorporating the recommendations from cardio-oncology research.
CONCLUSION Despite the substantial volume of research and evidence from the field of cardio-oncology, the findings of this scoping review suggest that the recommendations from guidelines have yet to be successfully translated from theory to practice. There is an opportunity, if not necessity, for cardiac rehabilitation to expand to meet the needs of those living with and beyond cancer.
Affiliation(s)
- Anna Talty: The Heart Manual Department, Astley Ainslie Hospital, Grange Loan, Edinburgh, Scotland, UK, EH9 2HL
- Roseanne Morris: The Heart Manual Department, Astley Ainslie Hospital, Grange Loan, Edinburgh, Scotland, UK, EH9 2HL
- Carolyn Deighan: The Heart Manual Department, Astley Ainslie Hospital, Grange Loan, Edinburgh, Scotland, UK, EH9 2HL

15
Li W, Zhang Y, Zhou X, Quan X, Chen B, Hou X, Xu Q, He W, Chen L, Liu X, Zhang Y, Xiang T, Li R, Liu Q, Wu SN, Wang K, Liu W, Zheng J, Luan H, Yu X, Chen A, Xu C, Luo T, Hu Z. Ensemble learning-assisted prediction of prolonged hospital length of stay after spine correction surgery: a multi-center cohort study. J Orthop Surg Res 2024; 19:112. [PMID: 38308336 PMCID: PMC10838003 DOI: 10.1186/s13018-024-04576-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/04/2023] [Accepted: 01/23/2024] [Indexed: 02/04/2024] Open
Abstract
PURPOSE This research aimed to develop a machine learning model to predict, before operation, the potential risk of a prolonged hospital length of stay, which can be used to strengthen patient management. METHODS Patients who underwent posterior spinal deformity surgery (PSDS) at eleven medical institutions in China between 2015 and 2022 were included. Detailed preoperative patient data, including demographics, medical history, comorbidities, preoperative laboratory results, and surgery details, were collected from their electronic medical records. The cohort was randomly divided into a training dataset and a validation dataset with a ratio of 70:30. Based on the Boruta algorithm, nine different machine learning algorithms and a stacked ensemble model were trained after hyperparameter tuning with visualization, and evaluated on the area under the receiver operating characteristic curve (AUROC), precision-recall curve, calibration, and decision curve analysis. Visualization with the Shapley Additive exPlanations method finally contributed to explaining model predictions. RESULTS Of the 162 included patients, the K Nearest Neighbors algorithm performed best in the validation group compared with the other machine learning models (yielding an AUROC of 0.8191 and a PRAUC of 0.6175). The top five contributing variables were preoperative hemoglobin, height, body mass index, age, and preoperative white blood cells. A web-based calculator was further developed to improve the predictive model's clinical operability. CONCLUSIONS Our study established and validated a clinical predictive model for prolonged postoperative hospitalization duration in patients who underwent PSDS, which offers valuable prognostic information for preoperative planning and postoperative care. Trial registration: ClinicalTrials.gov identifier NCT05867732, retrospectively registered May 22, 2023, https://classic.clinicaltrials.gov/ct2/show/NCT05867732.
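The two central mechanics of the study design, a random 70:30 split and a K Nearest Neighbors classifier, can be sketched in a few lines; the toy features, labels, and k = 3 are illustrative assumptions, not the study's tuned model:

```python
import random
from collections import Counter

def train_validation_split(rows, train_frac=0.7, seed=0):
    # Random 70:30 split, as in the study design.
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_frac)
    return rows[:cut], rows[cut:]

def knn_predict(train, x, k=3):
    # k-nearest-neighbours by Euclidean distance; majority vote on labels.
    dist = lambda a: sum((ai - xi) ** 2 for ai, xi in zip(a, x)) ** 0.5
    nearest = sorted(train, key=lambda row: dist(row[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy rows: (scaled features, prolonged_stay label); the two features
# stand in for e.g. preoperative hemoglobin and BMI.
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.15, 0.25], 0),
        ([0.8, 0.9], 1), ([0.9, 0.8], 1), ([0.85, 0.95], 1)]
train, valid = train_validation_split(data)
assert len(train) == 4 and len(valid) == 2
assert knn_predict(data, [0.88, 0.9]) == 1
```

KNN needs scaled features (as the toy data assumes), since Euclidean distance otherwise lets large-ranged variables dominate the vote.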
Affiliation(s)
- Wenle Li: State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics and Center for Molecular Imaging and Translational Medicine, School of Public Health, Xiamen University, Xiamen, China; Key Laboratory of Neurological Diseases, The Second Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu, China; Department of Spinal Surgery, Guangxi Medical University Affiliated Liuzhou People's Hospital, Liuzhou, China
- Yusi Zhang: Cancer Center, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China; Precision Medicine Center, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China; Department of Medical Oncology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Xin Zhou: Third Hospital of Shanxi Medical University, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Taiyuan, 030032, China
- Xubin Quan: Department of Spinal Surgery, Guangxi Medical University Affiliated Liuzhou People's Hospital, Liuzhou, China
- Binghao Chen: Department of Spinal Surgery, Guangxi Medical University Affiliated Liuzhou People's Hospital, Liuzhou, China
- Xuewen Hou: Department of Radiology, The First Dongguan Affiliated Hospital, Guangdong Medical University, Dongguan, China
- Qizhong Xu: Department of Radiology, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, China
- Weiheng He: Department of Radiology, People's Hospital of Ningxia Hui Autonomous Region, Yinchuan, China
- Liang Chen: Department of Radiology, Hubei Provincial Hospital of Traditional Chinese Medicine, Wuhan, China
- Xiaozhu Liu: Department of Critical Care Medicine, Beijing Shijitan Hospital, Capital Medical University, Beijing, China; Department of Cardiology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Yang Zhang: College of Medical Informatics, Chongqing Medical University, Chongqing, China; Medical Data Science Academy, Chongqing Medical University, Chongqing, China
- Tianyu Xiang: Information Center, The University-Town Hospital of Chongqing Medical University, Chongqing, China
- Runmin Li: Department of Foot and Ankle Surgery, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province, China
- Qiang Liu: Department of Orthopedics, Xianyang Central Hospital, Xianyang, Shaanxi, China
- Shi-Nan Wu: Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, Fujian, China
- Kai Wang: Key Laboratory of Neurological Diseases, The Second Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu, China
- Wencai Liu: Department of Orthopedics, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200233, China
- Jialiang Zheng: Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Haopeng Luan: Department of Spine Surgery, The Sixth Affiliated Hospital of Xinjiang Medical University, Urumqi, Xinjiang, China
- Xiaolin Yu: Department of Orthopedics, Affiliated Hospital of Guizhou Medical University, Guiyang, Guizhou, China
- Anfa Chen: Department of Orthopedics, Jiangxi Province Hospital of Integrated Chinese and Western Medicine, Nanchang, China
- Chan Xu: State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics and Center for Molecular Imaging and Translational Medicine, School of Public Health, Xiamen University, Xiamen, China
- Tongqing Luo: Department of Spinal Surgery, Guangxi Medical University Affiliated Liuzhou People's Hospital, Liuzhou, China
- Zhaohui Hu: Department of Spinal Surgery, Guangxi Medical University Affiliated Liuzhou People's Hospital, Liuzhou, China

16
Nievas Offidani MA, Delrieux CA. Dataset of clinical cases, images, image labels and captions from open access case reports from PubMed Central (1990-2023). Data Brief 2024; 52:110008. [PMID: 38235175 PMCID: PMC10792687 DOI: 10.1016/j.dib.2023.110008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2023] [Revised: 12/19/2023] [Accepted: 12/20/2023] [Indexed: 01/19/2024] Open
Abstract
This paper details the acquisition, structure and preprocessing of the MultiCaRe Dataset, a multimodal case report dataset which contains data from 75,382 open access PubMed Central articles spanning the period from 1990 to 2023. The dataset includes 96,428 clinical cases, 135,596 images, and their corresponding labels and captions. Data extraction was performed using different APIs and packages such as Biopython, requests, Beautifulsoup, the BioC API for PMC and the EuropePMC RESTful API. Image labels were created based on the contents of their corresponding captions, using Spark NLP for Healthcare and manual annotations. Images were preprocessed with OpenCV in order to remove borders and split figures containing multiple images, data were analyzed and described, and a subset was randomly selected for quality assessment. The dataset's structure allows for seamless integration of different types of data, making it a valuable resource for training or fine-tuning medical language, computer vision, or multimodal models.
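The border-removal step can be illustrated without OpenCV: on a grayscale image, cropping reduces to discarding the rows and columns that contain only background. A minimal sketch of that logic (the real pipeline used OpenCV and additionally split multi-panel figures):

```python
def trim_border(img, bg=255):
    """Remove uniform background rows/columns framing an image.

    `img` is a list of rows of grayscale pixel values; the background
    value `bg` defaults to white (255).
    """
    rows = [i for i, row in enumerate(img) if any(p != bg for p in row)]
    if not rows:          # blank image: nothing to keep
        return []
    cols = [j for j in range(len(img[0]))
            if any(row[j] != bg for row in img)]
    r0, r1 = rows[0], rows[-1]
    c0, c1 = cols[0], cols[-1]
    return [row[c0:c1 + 1] for row in img[r0:r1 + 1]]

framed = [
    [255, 255, 255, 255],
    [255,  10,  20, 255],
    [255,  30,  40, 255],
    [255, 255, 255, 255],
]
assert trim_border(framed) == [[10, 20], [30, 40]]
```

In practice a tolerance around `bg` would be needed for scanned or JPEG-compressed figures; the sketch assumes exact background values.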
Affiliation(s)
- Mauro Andrés Nievas Offidani: Department of Electrical and Computer Engineering, National University of the South, Avda. Alem 1253 - Body A - 1st Floor, B8000CPB Bahía Blanca, Argentina
- Claudio Augusto Delrieux: Department of Electrical and Computer Engineering, National University of the South, Avda. Alem 1253 - Body A - 1st Floor, B8000CPB Bahía Blanca, Argentina

17
Yang Z, Xiao S, Su T, Gong J, Qi Z, Chen G, Chen P, Tang G, Fu S, Yan H, Huang L, Wang Y. A multimodal meta-analysis of regional functional and structural brain abnormalities in obsessive-compulsive disorder. Eur Arch Psychiatry Clin Neurosci 2024; 274:165-180. [PMID: 37000246 DOI: 10.1007/s00406-023-01594-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Accepted: 03/14/2023] [Indexed: 04/01/2023]
Abstract
Numerous neuroimaging studies of resting-state functional imaging and voxel-based morphometry (VBM) have revealed abnormalities in specific brain regions in obsessive-compulsive disorder (OCD), but results have been inconsistent. We conducted a whole-brain voxel-wise meta-analysis on resting-state functional imaging and VBM studies that investigated differences of functional activity and gray matter volume (GMV) between patients with OCD and healthy controls (HCs) using seed-based d mapping (SDM) software. A total of 41 independent studies (51 datasets) for resting-state functional imaging and 42 studies (46 datasets) for VBM were included by a systematic literature search. Overall, patients with OCD displayed increased spontaneous functional activity in the bilateral inferior frontal gyrus (IFG) (extending to the bilateral insula) and bilateral medial prefrontal cortex/anterior cingulate cortex (mPFC/ACC), as well as decreased spontaneous functional activity in the bilateral paracentral lobule, bilateral cerebellum, left caudate nucleus, left inferior parietal gyri, and right precuneus cortex. For the VBM meta-analysis, patients with OCD displayed increased GMV in the bilateral thalamus (extending to the bilateral cerebellum), right striatum, and decreased GMV in the bilateral mPFC/ACC and left IFG (extending to the left insula). The conjunction analyses found that the bilateral mPFC/ACC, left IFG (extending to the left insula) showed decreased GMV with increased intrinsic function in OCD patients compared to HCs. This meta-analysis demonstrated that OCD exhibits abnormalities in both function and structure in the bilateral mPFC/ACC, insula, and IFG. A few regions exhibited only functional or only structural abnormalities in OCD, such as the default mode network, striatum, sensorimotor areas, and cerebellum. 
These findings may provide useful insights into the underlying pathophysiology of OCD and inform the development of more targeted and efficacious treatment and intervention strategies.
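A voxel-wise meta-analysis of this kind ultimately rests on pooling per-study effect sizes; a minimal fixed-effect inverse-variance sketch of that principle (not the SDM algorithm itself, which additionally models spatial uncertainty with anisotropic kernels):

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of per-study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Three hypothetical studies reporting the same direction of effect:
# the most precise study (smallest variance) dominates the estimate,
# and pooling shrinks the variance below any single study's.
d, v = pooled_effect([0.5, 0.3, 0.4], [0.04, 0.16, 0.08])
assert 0.3 < d < 0.5
assert v < min(0.04, 0.16, 0.08)
```

The same weighting is applied at every voxel in map-based meta-analysis, which is why consistent findings across datasets survive pooling while idiosyncratic ones wash out.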
Affiliation(s)
- Zibin Yang: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- Shu Xiao: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- Ting Su: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- Jiayin Gong: Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China; Department of Radiology, Sixth Affiliated Hospital of Sun Yat-Sen University, Guangzhou, 510655, China
- Zhangzhang Qi: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- Guanmao Chen: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- Pan Chen: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- Guixian Tang: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- SiYing Fu: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- Hong Yan: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- Li Huang: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China
- Ying Wang: Medical Imaging Center, First Affiliated Hospital of Jinan University, Guangzhou, 510630, China; Institute of Molecular and Functional Imaging, Jinan University, Guangzhou, 510630, China

18
Geiser N, Kaufmann BC, Knobel SEJ, Cazzoli D, Nef T, Nyffeler T. Comparison of uni- and multimodal motion stimulation on visual neglect: A proof-of-concept study. Cortex 2024; 171:194-203. [PMID: 38007863 DOI: 10.1016/j.cortex.2023.10.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Revised: 08/31/2023] [Accepted: 10/16/2023] [Indexed: 11/28/2023]
Abstract
Spatial neglect is characterized by a failure to attend to stimuli presented in the contralesional space. Typically, the visual modality is more severely impaired than the auditory one. This dissociation offers the possibility of cross-modal interactions, whereby auditory stimuli may have beneficial effects on the visual modality. A new auditory motion stimulation method, with music dynamically moving from the right to the left hemispace, has recently been shown to improve visual neglect. The aim of the present study was twofold: a) to compare the effects of unimodal auditory stimulation against visual motion stimulation, i.e., smooth pursuit training, an established therapeutic approach in neglect therapy, and b) to explore whether a combination of auditory and visual motion stimulation, i.e., multimodal motion stimulation, would be more effective than either unimodal stimulation alone. 28 patients with left-sided neglect due to a first-ever, right-hemispheric subacute stroke were included. Patients received auditory, visual, or multimodal motion stimulation. The between-group effect of each motion stimulation condition, as well as of a control group without motion stimulation, was investigated by means of a one-way ANOVA with the patients' visual exploration behaviour as the outcome variable. Our results showed that unimodal auditory motion stimulation is as effective as unimodal visual motion stimulation: both interventions significantly improved neglect compared to the control group. Multimodal motion stimulation also significantly improved neglect but did not yield greater improvement than unimodal auditory or visual motion stimulation alone. Besides the established visual motion stimulation, this proof-of-concept study suggests that auditory motion stimulation is a promising alternative therapeutic approach for improving visual attention in neglect patients. Multimodal motion stimulation does not lead to any additional therapeutic gain; in neurorehabilitation, implementing either auditory or visual motion stimulation therefore seems reasonable.
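The between-group analysis described in this abstract is a one-way ANOVA over four conditions (auditory, visual, multimodal, control). As an illustrative sketch only (not the authors' analysis code; the sample data are hypothetical), the F statistic reduces to:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of independent samples."""
    k = len(groups)                         # number of groups
    n = sum(len(g) for g in groups)         # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical visual-exploration scores for four patient groups
f_value = one_way_anova_f([[62, 70, 66], [64, 71, 69], [48, 50, 55], [45, 47, 49]])
```

The resulting F value would then be referred to an F distribution with (k - 1, n - k) degrees of freedom to obtain a p value.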
Affiliation(s)
- Nora Geiser: Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Graduate School for Health Sciences, University of Bern, Switzerland
- Brigitte Charlotte Kaufmann: Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland; Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, Paris, France
- Dario Cazzoli: Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Psychology, University of Bern, Bern, Switzerland
- Tobias Nef: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Gerontechnology & Rehabilitation Group, University of Bern, Bern, Switzerland; Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Thomas Nyffeler: Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
19. Deng L, Lan Q, Zhi Q, Huang S, Wang J, Yang X. Deep learning-based 3D brain multimodal medical image registration. Med Biol Eng Comput 2024; 62:505-519. PMID: 37938452. DOI: 10.1007/s11517-023-02941-9.
Abstract
Medical image registration is a critical preprocessing step in medical image analysis. While traditional medical image registration techniques have matured, their registration speed and accuracy still fall short of clinical requirements. In this paper, we propose an improved VoxelMorph network incorporating ResNet modules and the convolutional block attention module (CBAM), termed RCV-Net, for 3D multimodal unsupervised registration. Unlike popular convolution-based U-shaped registration networks such as VoxelMorph, RCV-Net incorporates CBAM during the convolution process. This inclusion enhances feature-map information extraction during training and effectively prevents information loss. Additionally, we introduce a lightweight residual network module at the network's base, which enhances learning ability without significantly increasing the number of training parameters. To evaluate the registration model, we use evaluation metrics such as structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and mean squared error (MSE). Experimental results demonstrate that our proposed network structure outperforms current state-of-the-art methods, yielding better performance in multimodal registration tasks. Furthermore, generalization testing on databases outside the training set confirmed the registration effectiveness of our model.
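Two of the evaluation metrics named in this abstract, MSE and PSNR, follow directly from their standard definitions. A minimal sketch (illustrative only, treating images as flat lists of pixel intensities; not the paper's implementation):

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized images (flat pixel lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images match better."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / err)
```

In a registration context, these would be computed between the warped moving image and the fixed image after applying the predicted deformation field.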
Affiliation(s)
- Liwei Deng: Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, Heilongjiang, China; School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, Heilongjiang, 150080, China
- Qi Lan: Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, Heilongjiang, China
- Qiang Zhi: Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, Heilongjiang, China
- Sijuan Huang: Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, Guangdong, China
- Jing Wang: Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, Guangdong, China
- Xin Yang: Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, Guangdong, China
20. Parvin S, Nimmy SF, Kamal MS. Convolutional neural network based data interpretable framework for Alzheimer's treatment planning. Vis Comput Ind Biomed Art 2024; 7:3. PMID: 38296864. PMCID: PMC10830981. DOI: 10.1186/s42492-024-00154-x.
Abstract
Alzheimer's disease (AD) is a neurological disorder that predominantly affects the brain. Its prevalence is expected to rise rapidly in the coming years, while progress in diagnostic techniques remains limited. Various machine learning (ML) and artificial intelligence (AI) algorithms have been employed to detect AD using single-modality data. However, recent developments in ML have enabled the application of these methods to multiple data sources and input modalities for AD prediction. In this study, we developed a framework that utilizes multimodal data (tabular data, magnetic resonance imaging (MRI) images, and genetic information) to classify AD. As part of the pre-processing phase, we generated a knowledge graph from the tabular data and MRI images, employing graph neural networks for knowledge graph creation and a region-based convolutional neural network approach for image-to-knowledge-graph generation. Additionally, we integrated various explainable AI (XAI) techniques to interpret and elucidate the prediction outcomes derived from the multimodal data. Layer-wise relevance propagation was used to explain the layer-wise outcomes in the MRI images, and submodular pick local interpretable model-agnostic explanations were incorporated to interpret the decision-making process based on the tabular data. Because genetic expression values play a crucial role in AD analysis, we used a graphical gene tree to identify genes associated with the disease. Moreover, a dashboard was designed to display the XAI outcomes, enabling experts and medical professionals to easily comprehend the prediction results.
Affiliation(s)
- Sazia Parvin: Information Technology, Melbourne Polytechnic, Melbourne, VIC 3072, Australia
- Sonia Farhana Nimmy: Faculty of Economics and Business, University of New South Wales, Sydney, ACT 2612, Australia
- Md Sarwar Kamal: School of Computer Science, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
21. Thirugnanasambandam K, Murugan J, Ramalingam R, Rashid M, Raghav RS, Kim TH, Sampedro GA, Abisado M. Optimizing multimodal feature selection using binary reinforced cuckoo search algorithm for improved classification performance. PeerJ Comput Sci 2024; 10:e1816. PMID: 38435570. PMCID: PMC10909206. DOI: 10.7717/peerj-cs.1816.
Abstract
Background: Feature selection is a vital process in data mining and machine learning, determining which of the available characteristics are most appropriate for categorization or knowledge representation. The challenging task is finding a subset of elements from a given set of features that represents or extracts knowledge from raw data; the number of features selected should be limited yet substantial enough to prevent results from deviating from accuracy. Feature selection is also crucial for computational time cost. This study puts forward a feature selection model to address the feature selection issue for multimodal data. Methods: We propose the Binary Reinforced Cuckoo Search Algorithm (BRCSA), a novel optimization algorithm inspired by the behavior of cuckoo birds, and apply a BRCSA-based classification approach to multimodal feature selection. The proposed method aims to select the most relevant features from multiple modalities to improve the model's classification performance. The BRCSA algorithm is used to optimize the feature selection process, and a binary encoding scheme is employed to represent the selected features. Results: Experiments are conducted on several benchmark datasets, and the results are compared with other state-of-the-art feature selection methods to evaluate the effectiveness of the proposed method. The experimental results demonstrate that the proposed BRCSA-based approach outperforms other methods in terms of classification accuracy, indicating its potential applicability in real-world applications. Specifically, in average classification accuracy, the proposed algorithm outperforms existing methods such as DGUFS by 32%, MBOICO by 24%, MBOLF by 29%, WOASAT by 22%, BGSA by 28%, HGSA by 39%, FS-BGSK by 37%, FS-pBGSK by 42%, and BSSA by 40%.
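The abstract mentions a binary encoding scheme for representing selected features. In binary metaheuristics, a continuous search position is commonly mapped to a 0/1 feature mask via a sigmoid transfer function, and fitness trades accuracy against subset size. The sketch below illustrates that general pattern only; the function names and the alpha weighting are assumptions, not details taken from the paper:

```python
import math

def binarize(position, threshold=0.5):
    """Map a continuous search position to a binary feature mask via sigmoid."""
    return [1 if 1 / (1 + math.exp(-x)) > threshold else 0 for x in position]

def fitness(mask, accuracy, alpha=0.99):
    """Weighted objective: favour high classifier accuracy and few features."""
    return alpha * accuracy - (1 - alpha) * sum(mask) / len(mask)
```

Each candidate mask would be scored by training a classifier on only the selected feature columns and feeding its accuracy into `fitness`.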
Affiliation(s)
- Kalaipriyan Thirugnanasambandam: Centre for Smart Grid Technologies, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- Jayalakshmi Murugan: Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, India
- Rajakumar Ramalingam: Centre for Automation, School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India
- Mamoon Rashid: Department of Computer Engineering, Faculty of Science and Technology, Vishwakarma University, Pune, India
- R. S. Raghav: School of Computing, SASTRA Deemed University, Villupuram, India
- Tai-hoon Kim: School of Electrical and Computer Engineering, Chonnam National University, Daehak-7, Republic of Korea
- Gabriel Avelino Sampedro: Faculty of Information and Communication Studies, University of the Philippines Open University, Los Baños, Philippines; Center for Computational Imaging and Visual Innovations, De La Salle University, Malate, Philippines
- Mideth Abisado: College of Computing and Information Technologies, National University, Manila, Philippines
22. Chen Z, Ye L, Zhu M, Xia C, Fan J, Chen H, Li Z, Mou S. Single cell multi-omics of fibrotic kidney reveal epigenetic regulation of antioxidation and apoptosis within proximal tubule. Cell Mol Life Sci 2024; 81:56. PMID: 38270638. PMCID: PMC10811088. DOI: 10.1007/s00018-024-05118-1.
Abstract
BACKGROUND Until now, there has been no particularly effective treatment for chronic kidney disease (CKD). Fibrosis is a common pathological change in CKD. METHODS To better understand the transcriptional dynamics in the fibrotic kidney, we use single-nucleus assay for transposase-accessible chromatin sequencing (snATAC-seq) and single-cell RNA sequencing (scRNA-seq) data from GEO datasets and perform scRNA-seq of human biopsies to seek transcription factors (TFs) that may regulate target genes during kidney fibrosis across mouse and human kidneys. RESULTS Our analysis displays chromatin accessibility, gene expression patterns, and cell-cell communication at the single-cell level in kidneys suffering from unilateral ureteral obstruction (UUO) or chronic interstitial nephritis (CIN). The multimodal data reveal epigenetic regulation that reduces Sod1 and Sod2 mRNA within the proximal tubule, leaving it poorly equipped to withstand oxidative stress during fibrosis. Meanwhile, the transcription factor Nfix, identified from the multimodal data as promoting expression of the apoptosis-related gene Ifi27, was validated in an in vitro study, and upregulation of Ifi27 by in situ AAV injection within the kidney cortex aggravated kidney fibrosis. CONCLUSIONS Given that oxidative stress and apoptosis are injurious factors during fibrosis, enhancing antioxidation and inhibiting the Nfix-Ifi27 pathway to suppress apoptosis could be a potential treatment for kidney fibrosis.
Affiliation(s)
- Zhejun Chen: Department of Nephrology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, 310000, Zhejiang, China
- Liqing Ye: Department of Nephrology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, 310000, Zhejiang, China
- Minyan Zhu: Department of Nephrology, Molecular Cell Lab for Kidney Disease, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, No 1630, Dong Fang Road, Shanghai, 200127, China
- Cong Xia: Department of Nephrology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, 310000, Zhejiang, China
- Junfen Fan: Department of Nephrology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, 310000, Zhejiang, China
- Hongbo Chen: Department of Nephrology, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, 310000, Zhejiang, China
- Zhijian Li: Broad Institute of Harvard and MIT, Cambridge, MA, 02142, USA
- Shan Mou: Department of Nephrology, Molecular Cell Lab for Kidney Disease, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, No 1630, Dong Fang Road, Shanghai, 200127, China
23. Levitis E, Liu S, Whitman ET, Warling A, Torres E, Clasen LS, Lalonde FM, Sarlls J, Alexander DC, Raznahan A. The Variegation of Human Brain Vulnerability to Rare Genetic Disorders and Convergence With Behaviorally Defined Disorders. Biol Psychiatry 2024; 95:136-146. PMID: 37480975. PMCID: PMC10799187. DOI: 10.1016/j.biopsych.2023.07.008.
Abstract
BACKGROUND Diverse gene dosage disorders (GDDs) increase risk for psychiatric impairment, but characterization of GDD effects on the human brain has so far been piecemeal, with few simultaneous analyses of multiple brain features across different GDDs. METHODS Here, through multimodal neuroimaging of 3 aneuploidy syndromes (XXY [total n = 191, 92 control participants], XYY [total n = 81, 47 control participants], and trisomy 21 [total n = 69, 41 control participants]), we systematically mapped the effects of supernumerary X, Y, and chromosome 21 dosage across a breadth of 15 different macrostructural, microstructural, and functional imaging-derived phenotypes (IDPs). RESULTS The results revealed considerable diversity in cortical changes across GDDs and IDPs. This variegation of IDP change underlines the limitations of studying GDD effects unimodally. Integration across all IDP change maps revealed highly distinct architectures of cortical change in each GDD along with partial coalescence onto a common spatial axis of cortical vulnerability that is evident in all 3 GDDs. This common axis shows strong alignment with shared cortical changes in behaviorally defined psychiatric disorders and is enriched for specific molecular and cellular signatures. CONCLUSIONS Use of multimodal neuroimaging data in 3 aneuploidies indicates that different GDDs impose unique fingerprints of change in the human brain that differ widely depending on the imaging modality that is being considered. Embedded in this variegation is a spatial axis of shared multimodal change that aligns with shared brain changes across psychiatric disorders and therefore represents a major high-priority target for future translational research in neuroscience.
Affiliation(s)
- Elizabeth Levitis: Section on Developmental Neurogenomics, National Institute of Mental Health, Bethesda, Maryland; Center for Medical Image Computing, Department of Computer Science, UCL, London, UK
- Siyuan Liu: Section on Developmental Neurogenomics, National Institute of Mental Health, Bethesda, Maryland
- Ethan T Whitman: Section on Developmental Neurogenomics, National Institute of Mental Health, Bethesda, Maryland
- Allysa Warling: Section on Developmental Neurogenomics, National Institute of Mental Health, Bethesda, Maryland
- Erin Torres: Section on Developmental Neurogenomics, National Institute of Mental Health, Bethesda, Maryland
- Liv S Clasen: Section on Developmental Neurogenomics, National Institute of Mental Health, Bethesda, Maryland
- François M Lalonde: Section on Developmental Neurogenomics, National Institute of Mental Health, Bethesda, Maryland
- Joelle Sarlls: National Institutes of Health MRI Research Facility, National Institute of Mental Health, Bethesda, Maryland
- Daniel C Alexander: Center for Medical Image Computing, Department of Computer Science, UCL, London, UK
- Armin Raznahan: Section on Developmental Neurogenomics, National Institute of Mental Health, Bethesda, Maryland
24. Lorenz EA, Su X, Skjæret-Maroni N. A review of combined functional neuroimaging and motion capture for motor rehabilitation. J Neuroeng Rehabil 2024; 21:3. PMID: 38172799. PMCID: PMC10765727. DOI: 10.1186/s12984-023-01294-6.
Abstract
BACKGROUND Technological advancements in functional neuroimaging and motion capture have led to the development of novel methods that facilitate the diagnosis and rehabilitation of motor deficits. These advancements allow for the synchronous acquisition and analysis of complex signal streams of neurophysiological data (e.g., EEG, fNIRS) and behavioral data (e.g., motion capture). The fusion of those data streams has the potential to provide new insights into cortical mechanisms during movement, guide the development of rehabilitation practices, and become a tool for assessment and therapy in neurorehabilitation. RESEARCH OBJECTIVE This paper aims to review the existing literature on the combined use of motion capture and functional neuroimaging in motor rehabilitation. The objective is to understand the diversity and maturity of the technological solutions employed and to explore the clinical advantages of this multimodal approach. METHODS This paper reviews literature on the combined use of functional neuroimaging and motion capture for motor rehabilitation following the PRISMA guidelines. Besides study and participant characteristics, technological aspects of the systems used, signal processing methods, and the nature of multimodal feature synchronization and fusion were extracted. RESULTS Out of 908 publications, 19 were included in the final review. Basic or translational studies were mainly represented, based predominantly on healthy participants or stroke patients. EEG and mechanical motion capture were the technologies most used for data acquisition, and their subsequent processing was based mainly on traditional methods. System synchronization techniques were largely underreported. The fusion of multimodal features mainly supported the identification of movement-related cortical activity, and statistical methods were occasionally employed to examine cortico-kinematic relationships.
CONCLUSION The fusion of motion capture and functional neuroimaging might offer advantages for motor rehabilitation in the future. Besides facilitating the assessment of cognitive processes in real-world settings, it could also improve rehabilitative devices' usability in clinical environments. Further, by better understanding cortico-peripheral coupling, new neuro-rehabilitation methods can be developed, such as personalized proprioceptive training. However, further research is needed to advance our knowledge of cortical-peripheral coupling, evaluate the validity and reliability of multimodal parameters, and enhance user-friendly technologies for clinical adaptation.
Affiliation(s)
- Emanuel A Lorenz: Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
- Xiaomeng Su: Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
- Nina Skjæret-Maroni: Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
25. Bräutigam LC, Leuthold H, Mackenzie IG, Mittelstädt V. Exploring behavioral adjustments of proportion congruency manipulations in an Eriksen flanker task with visual and auditory distractor modalities. Mem Cognit 2024; 52:91-114. PMID: 37548866. PMCID: PMC10806239. DOI: 10.3758/s13421-023-01447-x.
Abstract
The present study investigated global behavioral adaptation effects to conflict arising from different distractor modalities. Three experiments were conducted using an Eriksen flanker paradigm with constant visual targets, but randomly varying auditory or visual distractors. In Experiment 1, the proportion of congruent to incongruent trials was varied for both distractor modalities, whereas in Experiments 2A and 2B, this proportion congruency (PC) manipulation was applied to trials with one distractor modality (inducer) to test potential behavioral transfer effects to trials with the other distractor modality (diagnostic). In all experiments, mean proportion congruency effects (PCEs) were present in trials with a PC manipulation, but there was no evidence of transfer to diagnostic trials in Experiments 2A and 2B. Distributional analyses (delta plots) provided further evidence for distractor modality-specific global behavioral adaptations by showing differences in the slope of delta plots with visual but not auditory distractors when increasing the ratio of congruent trials. Thus, it is suggested that distractor modalities constrain global behavioral adaptation effects due to the learning of modality-specific memory traces (e.g., distractor-target associations) and/or the modality-specific cognitive control processes (e.g., suppression of modality-specific distractor-based activation). Moreover, additional analyses revealed partial transfer of the congruency sequence effect across trials with different distractor modalities suggesting that distractor modality may differentially affect local and global behavioral adaptations.
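The delta plots referenced in this abstract chart the congruency effect (incongruent minus congruent reaction time) across RT quantiles, so that a change in slope reveals where in the distribution the effect grows or shrinks. A minimal sketch of that computation (illustrative only, with linear-interpolated quantiles; not the authors' analysis code):

```python
def quantiles(rts, probs=(0.25, 0.5, 0.75)):
    """Linear-interpolated quantiles of a reaction-time sample."""
    s = sorted(rts)
    out = []
    for p in probs:
        idx = p * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        out.append(s[lo] + (idx - lo) * (s[hi] - s[lo]))
    return out

def delta_plot(congruent, incongruent, probs=(0.25, 0.5, 0.75)):
    """Per-quantile congruency effect: incongruent minus congruent RT."""
    return [i - c for c, i in zip(quantiles(congruent, probs),
                                  quantiles(incongruent, probs))]
```

Plotting these deltas against the mean RT per quantile and comparing slopes between conditions reproduces the kind of distributional analysis described above.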
Affiliation(s)
- Linda C Bräutigam: Department of Psychology, University of Tübingen, Schleichstrasse 4, 72076, Tübingen, Germany
- Hartmut Leuthold: Department of Psychology, University of Tübingen, Schleichstrasse 4, 72076, Tübingen, Germany
- Ian G Mackenzie: Department of Psychology, University of Tübingen, Schleichstrasse 4, 72076, Tübingen, Germany
- Victor Mittelstädt: Department of Psychology, University of Tübingen, Schleichstrasse 4, 72076, Tübingen, Germany
26. Dhaliwal A, Ma J, Zheng M, Lyu Q, Rajora MA, Ma S, Oliva L, Ku A, Valic M, Wang B, Zheng G. Deep learning for automatic organ and tumor segmentation in nanomedicine pharmacokinetics. Theranostics 2024; 14:973-987. PMID: 38250039. PMCID: PMC10797295. DOI: 10.7150/thno.90246.
Abstract
Rationale: Multimodal imaging provides important pharmacokinetic and dosimetry information during nanomedicine development and optimization. However, accurate quantitation is time-consuming, resource intensive, and requires anatomical expertise. Methods: We present NanoMASK: a 3D U-Net adapted deep learning tool capable of rapid, automatic organ segmentation of multimodal imaging data that can output key clinical dosimetry metrics without manual intervention. This model was trained on 355 manually-contoured PET/CT data volumes of mice injected with a variety of nanomaterials and imaged over 48 hours. Results: NanoMASK produced 3-dimensional contours of the heart, lungs, liver, spleen, kidneys, and tumor with high volumetric accuracy (pan-organ average %DSC of 92.5). Pharmacokinetic metrics including %ID/cc, %ID, and SUVmax achieved correlation coefficients exceeding R = 0.987 and relative mean errors below 0.2%. NanoMASK was applied to novel datasets of lipid nanoparticles and antibody-drug conjugates with a minimal drop in accuracy, illustrating its generalizability to different classes of nanomedicines. Furthermore, 20 additional auto-segmentation models were developed using training data subsets based on image modality, experimental imaging timepoint, and tumor status. These were used to explore the fundamental biases and dependencies of auto-segmentation models built on a 3D U-Net architecture, revealing significant differential impacts on organ segmentation accuracy. Conclusions: NanoMASK is an easy-to-use, adaptable tool for improving accuracy and throughput in imaging-based pharmacokinetic studies of nanomedicine. It has been made publicly available to all readers for automatic segmentation and pharmacokinetic analysis across a diverse array of nanoparticles, expediting agent development.
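Two of the metrics reported in this abstract, the Dice similarity coefficient (%DSC) and percent injected dose per cubic centimetre (%ID/cc), have standard definitions. A minimal sketch (illustrative only, with masks as flat 0/1 voxel lists; not the NanoMASK code):

```python
def dice_percent(pred, truth):
    """Dice similarity coefficient (%DSC) between two binary voxel masks."""
    inter = sum(p and t for p, t in zip(pred, truth))   # overlapping voxels
    total = sum(pred) + sum(truth)
    return 100.0 if total == 0 else 100.0 * 2 * inter / total

def percent_id_per_cc(organ_activity, injected_dose, organ_volume_cc):
    """Percent of the injected dose found per cubic centimetre of tissue."""
    return 100.0 * organ_activity / injected_dose / organ_volume_cc
```

The reported pan-organ average %DSC of 92.5 corresponds to averaging `dice_percent` over the six contoured structures across the test volumes.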
Affiliation(s)
- Alex Dhaliwal: Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada; Department of Medical Biophysics, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Jun Ma: Department of Laboratory Medicine and Pathobiology, University of Toronto, 1 King's College Circle, Toronto, M5S 1A8, Ontario, Canada; Peter Munk Cardiac Centre, University Health Network, 190 Elizabeth St, Toronto, M5G 2C4, Ontario, Canada; Vector Institute for Artificial Intelligence, 661 University Avenue, Toronto, M4G 1M1, Ontario, Canada
- Mark Zheng: Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Qing Lyu: Department of Computer Science, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Maneesha A. Rajora: Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada; Institute of Biomedical Engineering, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Shihao Ma: Department of Computer Science, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada; Vector Institute for Artificial Intelligence, 661 University Avenue, Toronto, M4G 1M1, Ontario, Canada
- Laura Oliva: Techna Institute, University Health Network, 190 Elizabeth Street, Toronto, M5G 2C4, Ontario, Canada
- Anthony Ku: Department of Radiology, Stanford University, 1201 Welch Road, Stanford, 94305-5484, California, United States of America
- Michael Valic: Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada; Institute of Biomedical Engineering, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Bo Wang: Department of Laboratory Medicine and Pathobiology, University of Toronto, 1 King's College Circle, Toronto, M5S 1A8, Ontario, Canada; Peter Munk Cardiac Centre, University Health Network, 190 Elizabeth St, Toronto, M5G 2C4, Ontario, Canada; Department of Computer Science, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada; Vector Institute for Artificial Intelligence, 661 University Avenue, Toronto, M4G 1M1, Ontario, Canada
- Gang Zheng: Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada; Department of Medical Biophysics, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada; Peter Munk Cardiac Centre, University Health Network, 190 Elizabeth St, Toronto, M5G 2C4, Ontario, Canada
27. Schilcher J, Nilsson A, Andlid O, Eklund A. Fusion of electronic health records and radiographic images for a multimodal deep learning prediction model of atypical femur fractures. Comput Biol Med 2024; 168:107704. PMID: 37980797. DOI: 10.1016/j.compbiomed.2023.107704.
Abstract
Atypical femur fractures (AFF) represent a very rare type of fracture that can be difficult to discriminate radiologically from normal femur fractures (NFF). AFFs are associated with drugs that are administered to prevent osteoporosis-related fragility fractures, which are highly prevalent in the elderly population. Given that these fractures are rare and the radiologic changes are subtle, currently only 7% of AFFs are correctly identified, which hinders adequate treatment for most patients with AFF. Deep learning models could be trained to automatically classify a fracture as AFF or NFF, thereby assisting radiologists in detecting these rare fractures. Historically, only imaging data have been used for this classification task, applying convolutional neural networks (CNNs) or vision transformers to radiographs. However, to mimic situations in which all available data are used to arrive at a diagnosis, we adopted a deep learning approach based on the integration of image data and tabular data (from electronic health records) for 159 patients with AFF and 914 patients with NFF. We hypothesized that the combined data, compiled from all the radiology departments of 72 hospitals in Sweden and the Swedish National Patient Register, would improve classification accuracy compared to using only one modality. At the patient level, the area under the ROC curve (AUC) increased from 0.966 to 0.987 when using the integrated set of imaging data and seven pre-selected variables, as compared to using imaging data alone. More importantly, the sensitivity increased from 0.796 to 0.903. We found a greater impact of data fusion when only a randomly selected subset of the available images was used to make the image and tabular data more balanced for each patient: the AUC then increased from 0.949 to 0.984, and the sensitivity from 0.727 to 0.849. These AUC improvements are not large, mainly because of the already excellent performance of the CNN (AUC of 0.966) when only images are used. However, the improvement is clinically highly relevant given the importance of accuracy in medical diagnostics. We expect an even greater effect when imaging data from a clinical workflow, comprising a more diverse set of diagnostic images, are used.
Affiliation(s)
- Jörg Schilcher
- Department of Orthopedics and Experimental and Clinical Medicine, Faculty of Health Science, Linköping University, Linköping, Sweden; Wallenberg Centre for Molecular Medicine, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Alva Nilsson
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
- Oliver Andlid
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
- Anders Eklund
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden; Division of Statistics and Machine Learning, Department of Computer and Information Science, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden.
28
Sulfaro AA, Robinson AK, Carlson TA. Properties of imagined experience across visual, auditory, and other sensory modalities. Conscious Cogn 2024; 117:103598. [PMID: 38086154 DOI: 10.1016/j.concog.2023.103598]
Abstract
Little is known about the perceptual characteristics of mental images or how they vary across sensory modalities. We conducted an exhaustive survey into how mental images are experienced across modalities, mainly targeting visual and auditory imagery of a single stimulus, the letter "O", to facilitate direct comparisons. We investigated temporal properties of mental images (e.g. onset latency, duration), spatial properties (e.g. apparent location), effort (e.g. ease, spontaneity, control), movement requirements (e.g. eye movements), real-imagined interactions (e.g. inner speech while reading), beliefs about imagery norms and terminologies, as well as respondent confidence. Participants also reported on the five traditional senses and their prominence during thinking, imagining, and dreaming. Overall, visual and auditory experiences dominated mental events, although auditory mental images were superior to visual mental images on almost every metric tested except for spatial properties. Our findings suggest that modality-specific differences in mental imagery may parallel those of other sensory neural processes.
Affiliation(s)
- Alexander A Sulfaro
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown 2006, New South Wales, Australia.
- Amanda K Robinson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown 2006, New South Wales, Australia; Queensland Brain Institute, The University of Queensland, St Lucia 4072, Queensland, Australia.
- Thomas A Carlson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown 2006, New South Wales, Australia.
29
Zeng L, Liu B, Duan L, Gao G. Tough, recyclable and biocompatible carrageenan-modified polyvinyl alcohol ionic hydrogel with physical cross-linked for multimodal sensing. Int J Biol Macromol 2023; 253:126954. [PMID: 37734518 DOI: 10.1016/j.ijbiomac.2023.126954]
Abstract
Biocompatible hydrogel conductors are considered sustainable bio-electronic materials for wearable sensors and implantable devices. However, they mostly face the limitations of mismatched mechanical properties with skin tissue and the difficulty of recycling. In this regard, a biocompatible, tough, reusable sensor based on a physically crosslinked polyvinyl alcohol (PVA) ionic hydrogel modified with an ι-carrageenan (ι-CG) helical network is reported here. By simulating the ion transport and network structure of biological systems, the ionic hydrogels with skin-like mechanical features exhibit a large tensile strain of 640 %, robust fracture strength of 800 kPa, a soft modulus, and high fatigue resistance. Meanwhile, the ionic hydrogel-based sensors possess a high response to strain/pressure over a wide range and could be utilized for multimodal sensing of human activity signals. The biosafety and temperature reversibility of ι-CG and PVA endow the hydrogels not only with biocompatibility but also with meaningful recyclability. The as-prepared hydrogels could be freely reconstructed into new flexible electronics and safely integrated with human skin. It can be anticipated that the physically cross-linked ionic hydrogel conductor will expand the options for next-generation bio-based sensors.
Affiliation(s)
- Lingjun Zeng
- Polymeric and Soft Materials Laboratory, School of Chemical Engineering, Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, PR China
- Bo Liu
- Polymeric and Soft Materials Laboratory, School of Chemical Engineering, Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, PR China
- Lijie Duan
- Polymeric and Soft Materials Laboratory, School of Chemical Engineering, Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, PR China
- Guanghui Gao
- Polymeric and Soft Materials Laboratory, School of Chemical Engineering, Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, PR China.
30
Zheng L, Jiang Y, Huang F, Wu Q, Lou Y. A colorimetric, photothermal, and fluorescent triple-mode CRISPR/cas biosensor for drug-resistance bacteria detection. J Nanobiotechnology 2023; 21:493. [PMID: 38115051 PMCID: PMC10731848 DOI: 10.1186/s12951-023-02262-x]
Abstract
A multimodal analytical strategy utilizing different modalities to cross-validate each other can effectively minimize false positives or negatives and ensure the accuracy of detection results. Herein, we establish a colorimetric, photothermal, and fluorescent triple-mode CRISPR/Cas12a detection platform (CPF-CRISPR). An MNPs-ssDNA-HRP signal probe is designed to act as a substrate to trigger three signal outputs. In the presence of the DNA target, MNPs-ssDNA-HRP is cleaved by the activated CRISPR/Cas12a, resulting in the release of HRP and generating short DNA strands with 3′-terminal hydroxyls on magnetic beads. The released HRP subsequently catalyzes the TMB-H2O2 reaction, and the oxidized TMB is used for colorimetric and photothermal signal detection. Under the catalysis of terminal deoxynucleotidyl transferase (TdT), the remaining short DNA strands serve as primers to form poly-T and function as scaffolds to form copper nanoclusters for fluorescent signal output. To verify the practical application of CPF-CRISPR, we employed MRSA as a model. The results demonstrate the platform's high accuracy and sensitivity, with a limit of detection of 10¹ CFU/mL when combined with recombinase polymerase amplification. Therefore, by harnessing the programmability of CRISPR/Cas12a, the biosensor has the potential to detect various drug-resistant bacteria, demonstrating significant practical applicability.
Affiliation(s)
- Laibao Zheng
- Wenzhou Key Laboratory of Sanitary Microbiology, Key Laboratory of Laboratory Medicine, School of Laboratory Medicine and Life Sciences, Ministry of Education, Wenzhou Medical University, Wenzhou, Zhejiang, China.
- Yayun Jiang
- Department of Clinical Laboratory, People's Hospital of Deyang City, Deyang, China
- Fuyuan Huang
- Wenzhou Key Laboratory of Sanitary Microbiology, Key Laboratory of Laboratory Medicine, School of Laboratory Medicine and Life Sciences, Ministry of Education, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Qiaoli Wu
- Wenzhou Key Laboratory of Sanitary Microbiology, Key Laboratory of Laboratory Medicine, School of Laboratory Medicine and Life Sciences, Ministry of Education, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Yongliang Lou
- Wenzhou Key Laboratory of Sanitary Microbiology, Key Laboratory of Laboratory Medicine, School of Laboratory Medicine and Life Sciences, Ministry of Education, Wenzhou Medical University, Wenzhou, Zhejiang, China.
31
Shao M, Zhang W, Li Y, Tang L, Hao ZZ, Liu S. Patch-seq: Advances and Biological Applications. Cell Mol Neurobiol 2023; 44:8. [PMID: 38123823 DOI: 10.1007/s10571-023-01436-3]
Abstract
Multimodal analysis of gene-expression patterns, electrophysiological properties, and morphological phenotypes at the single-cell/single-nucleus level has been arduous because of the diversity and complexity of neurons. The emergence of Patch-sequencing (Patch-seq) directly links transcriptomics, morphology, and electrophysiology, taking neuroscience research into a multimodal era. In this review, we summarize the development of Patch-seq and its recent applications in the cortex, hippocampus, and other nervous systems. Through generating multimodal cell type atlases, targeting specific cell populations, and correlating transcriptomic data with phenotypic information, Patch-seq has provided new insight into outstanding questions in neuroscience. We highlight the challenges and opportunities of Patch-seq in neuroscience and hope to shed new light on future neuroscience research.
Affiliation(s)
- Mingting Shao
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, 510060, China
- Wei Zhang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, 510060, China
- Ye Li
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, 510060, China
- Lei Tang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, 510060, China
- Zhao-Zhe Hao
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, 510060, China
- Sheng Liu
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, 510060, China.
- Guangdong Province Key Laboratory of Brain Function and Disease, Guangzhou, 510080, China.
32
Wu Z, Sun DW, Pu H. CRISPR/Cas12a and G-quadruplex DNAzyme-driven multimodal biosensor for visual detection of Aflatoxin B1. Spectrochim Acta A Mol Biomol Spectrosc 2023; 302:123121. [PMID: 37579713 DOI: 10.1016/j.saa.2023.123121]
Abstract
Aflatoxin B1 (AFB1) contamination severely threatens human and animal health; it is thus critical to construct a strategy for its rapid, accurate, and visual detection. Herein, a multimodal biosensor was proposed based on CRISPR/Cas12a-cleaved G-quadruplex (G4) for AFB1 detection. Briefly, specific binding of AFB1 to the aptamer occupied the binding site of the complementary DNA (cDNA), and the cDNA then activated Cas12a to cleave G4 into fragments. Meanwhile, the intact G4-DNAzyme could catalyze 3,3',5,5'-tetramethylbenzidine (TMB) to form colourimetric/SERS/fluorescent signal-enhanced TMBox, and the yellow solution produced by TMBox under acidic conditions could be integrated with a smartphone application for visual detection. The colourimetric/SERS/fluorescent biosensor yielded detection limits of 0.85, 0.79, and 1.65 pg·mL⁻¹, respectively, and was applied for detecting AFB1 in peanut, maize, and badam samples. The method is suitable for visual detection in naturally contaminated peanut samples and has prospective applications in the food industry.
Affiliation(s)
- Zhihui Wu
- School of Food Science and Engineering, South China University of Technology, Guangzhou 510641, China; Academy of Contemporary Food Engineering, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China; Engineering and Technological Research Centre of Guangdong Province on Intelligent Sensing and Process Control of Cold Chain Foods, & Guangdong Province Engineering Laboratory for Intelligent Cold Chain Logistics Equipment for Agricultural Products, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China
- Da-Wen Sun
- School of Food Science and Engineering, South China University of Technology, Guangzhou 510641, China; Academy of Contemporary Food Engineering, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China; Engineering and Technological Research Centre of Guangdong Province on Intelligent Sensing and Process Control of Cold Chain Foods, & Guangdong Province Engineering Laboratory for Intelligent Cold Chain Logistics Equipment for Agricultural Products, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China; Food Refrigeration and Computerized Food Technology (FRCFT), Agriculture and Food Science Centre, University College Dublin, National University of Ireland, Belfield, Dublin 4, Ireland.
- Hongbin Pu
- School of Food Science and Engineering, South China University of Technology, Guangzhou 510641, China; Academy of Contemporary Food Engineering, South China University of Technology, Guangzhou Higher Education Mega Center, Guangzhou 510006, China; Engineering and Technological Research Centre of Guangdong Province on Intelligent Sensing and Process Control of Cold Chain Foods, & Guangdong Province Engineering Laboratory for Intelligent Cold Chain Logistics Equipment for Agricultural Products, Guangzhou Higher Education Mega Centre, Guangzhou 510006, China
33
Wu Y, Ridwan AR, Niaz MR, Bennett DA, Arfanakis K. High resolution 0.5mm isotropic T1-weighted and diffusion tensor templates of the brain of non-demented older adults in a common space for the MIITRA atlas. Neuroimage 2023; 282:120387. [PMID: 37783362 PMCID: PMC10625170 DOI: 10.1016/j.neuroimage.2023.120387]
Abstract
High quality, high resolution T1-weighted (T1w) and diffusion tensor imaging (DTI) brain templates located in a common space can enhance the sensitivity and precision of template-based neuroimaging studies. However, such multimodal templates have not been constructed for the older adult brain. The purpose of this work, which is part of the MIITRA atlas project, was twofold: (A) to develop 0.5 mm isotropic resolution T1w and DTI templates that are representative of the brain of non-demented older adults and are located in the same space, using advanced multimodal template construction techniques and principles of super resolution on data from a large, diverse, community cohort of 400 non-demented older adults, and (B) to systematically compare the new templates to other standardized templates. It was demonstrated that the new MIITRA-0.5mm T1w and DTI templates are well-matched in space, exhibit good definition of brain structures, including fine structures, exhibit higher image sharpness than other standardized templates, and are free of artifacts. The MIITRA-0.5mm T1w and DTI templates allowed higher intra-modality inter-subject spatial normalization precision as well as higher inter-modality intra-subject spatial matching of older adult T1w and DTI data compared to other available templates. Consequently, MIITRA-0.5mm templates allowed detection of smaller inter-group differences for older adult data compared to other templates. The MIITRA-0.5mm templates were also shown to be most representative of the brain of non-demented older adults compared to other templates with submillimeter resolution. The new templates constructed in this work constitute two of the final products of the MIITRA atlas project and are anticipated to have important implications for the sensitivity and precision of studies on older adults.
Affiliation(s)
- Yingjuan Wu
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, United States
- Abdur Raquib Ridwan
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, United States
- Mohammad Rakeen Niaz
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, United States
- David A Bennett
- Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, IL, United States
- Konstantinos Arfanakis
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, United States; Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, IL, United States.
34
Pang Y, Zhu X, Liu S, Lee C. A Natural Gradient Biological-Enabled Multimodal Triboelectric Nanogenerator for Driving Safety Monitoring. ACS Nano 2023; 17:21878-21892. [PMID: 37924297 DOI: 10.1021/acsnano.3c08102]
Abstract
A key element in ensuring driving safety is providing a sufficient braking distance. Inspired by the natural triply periodic minimal surface (TPMS), a gradient and multimodal triboelectric nanogenerator (GM-TENG) is proposed with high sensitivity and excellent multimodal monitoring. The gradient TPMS structure exhibits the multi-stage stress-strain properties of typical porous metamaterials. Significantly, the multimodal monitoring capability depends on the implicit function of the defined level constant c, which directly contributes to multimodal driving safety monitoring. The mechanical and electrical response behavior of the GM-TENG is analyzed to identify the applied speed, load, and working mode. In addition, an optimized peak open-circuit voltage (Voc) is demonstrated for self-awareness of the braking condition. The braking distance factor (L) is conceived to construct the self-aware equation of the friction coefficient based on the integration of Voc with respect to time. Importantly, an R-squared of up to 94.29% can be obtained, which improves self-aware accuracy and real-time capabilities. This natural structure and self-aware device provide an effective strategy to improve driving safety, contributing to the improvement of road safety and presenting self-powered sensing with potential applications in intelligent transportation systems.
Affiliation(s)
- Yafeng Pang
- Key Laboratory of Road and Traffic Engineering of Ministry of Education, Tongji University, Shanghai 200092, P. R. China
- Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117576, Singapore
- Center for Intelligent Sensors and MEMS, National University of Singapore, Block E6 #05-11, 5 Engineering Drive 1, Singapore 117608, Singapore
- Xingyi Zhu
- Key Laboratory of Road and Traffic Engineering of Ministry of Education, Tongji University, Shanghai 200092, P. R. China
- Shuainian Liu
- Key Laboratory of Road and Traffic Engineering of Ministry of Education, Tongji University, Shanghai 200092, P. R. China
- Chengkuo Lee
- Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117576, Singapore
- Center for Intelligent Sensors and MEMS, National University of Singapore, Block E6 #05-11, 5 Engineering Drive 1, Singapore 117608, Singapore
35
Chang MC, Park D. Algorithm for multimodal medication therapy in patients with complex regional pain syndrome. J Yeungnam Med Sci 2023; 40:S125-S128. [PMID: 37434359 DOI: 10.12701/jyms.2023.00360]
Abstract
Complex regional pain syndrome (CRPS), previously known as reflex sympathetic dystrophy and causalgia, is a clinical entity characterized by classic neuropathic pain, autonomic involvement, motor symptoms, and trophic changes in the skin, nails, and hair. Although various therapeutic modalities are used to control CRPS-related pain, severe pain due to CRPS often persists and progresses to the chronic phase. In this study, we constructed an algorithm for multimodal medication therapy for CRPS based on the established pathology of CRPS. Oral steroid pulse therapy is recommended for initial pain management in patients with CRPS. Oral steroid therapy can reduce peripheral and central neuroinflammation, contributing to the development of neuropathic pain during the acute and chronic phases. If steroid pulse therapy offers poor relief or is ineffective, treatment to control central sensitization in the chronic phase should be initiated. If pain persists despite all drug adjustments, ketamine with midazolam 2 mg before and after ketamine injection can be administered intravenously to inhibit the N-methyl D-aspartate receptor. If this treatment fails to achieve sufficient efficacy, intravenous lidocaine can be administered for 2 weeks. We hope that our proposed drug treatment algorithm to control CRPS pain will help clinicians appropriately treat patients with CRPS. Further clinical studies assessing patients with CRPS are warranted to establish this treatment algorithm in clinical practice.
Affiliation(s)
- Min Cheol Chang
- Department of Physical Medicine and Rehabilitation, Yeungnam University College of Medicine, Daegu, Korea
- Donghwi Park
- Department of Rehabilitation Medicine, Daegu Fatima Hospital, Daegu, Korea
36
Muniyappan S, Rayan AXA, Varrieth GT. EGeRepDR: An enhanced genetic-based representation learning for drug repurposing using multiple biomedical sources. J Biomed Inform 2023; 147:104528. [PMID: 37858852 DOI: 10.1016/j.jbi.2023.104528]
Abstract
MOTIVATION Drug repurposing (DR) is an imminent approach for identifying novel therapeutic indications for available drugs and discovering novel drugs for previously untreatable diseases. DR has attracted major attention in the pharmaceutical industry because of the high cost and long timelines of launching new drugs to the market through traditional drug development. The DR task depends largely on genetic information, since drugs revert the modified gene expression (GE) of diseases to normal. Many existing studies have not considered this genetic importance when predicting potential candidates. METHOD We proposed a novel multimodal framework that utilizes genetic aspects of drugs and diseases, such as genes, pathways, gene signatures, or expression, to enhance the performance of DR using various data sources. Firstly, a heterogeneous biological network (HBN) is constructed with three types of nodes, namely drug, disease, and gene, and four types of edges: similarity edges (drug, gene, and disease), drug-gene, gene-disease, and drug-disease. Next, a modified graph auto-encoder (GAE*) model is applied to learn the representation of drug and disease nodes using the topological structure and edge information. Secondly, the HBN is enhanced with information extracted from biomedical literature and ontology, using a novel semi-supervised pattern embedding-based bootstrapping model and novel DR-perspective representation learning, respectively, to improve prediction performance. Finally, our proposed system uses a neural network model to generate the probability score of drug-disease pairs. RESULTS We demonstrate the efficiency of the proposed model on various datasets and achieved outstanding performance in 5-fold cross-validation (AUC = 0.99, AUPR = 0.98). Further, we validated the top-ranked potential candidates using pathway analysis and proved that the known and predicted candidates share common genes in the pathways.
Affiliation(s)
- Saranya Muniyappan
- Computer Science and Engineering, CEG Campus, Anna University, Chennai, Tamil Nadu, India.
37
Clough S, Padilla VG, Brown-Schmidt S, Duff MC. Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia 2023; 189:108665. [PMID: 37619936 PMCID: PMC10592037 DOI: 10.1016/j.neuropsychologia.2023.108665]
Abstract
PURPOSE Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying "He searched for a new recipe" while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and whether information from gesture persists across delays. METHODS 60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20 min later, and one week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., "He searched for a new recipe"), a Gesture Match (e.g., "He searched for a new recipe online"), or Other ("He looked for a new recipe"). We also examined whether participants produced representative gestures themselves when retelling these details. RESULTS Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and produce representative gestures themselves one week later compared to immediately after hearing the story. CONCLUSION We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
Affiliation(s)
- Sharice Clough
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States.
- Victoria-Grace Padilla
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Sarah Brown-Schmidt
- Department of Psychology and Human Development, Vanderbilt University, United States
- Melissa C Duff
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
38
Meng H, Ding SX, Zhang Y, Zhu FY, Wang J, Wang JN, Gao BL, Yin XP. Value of Multimodal Diffusion-weighted Imaging in Preoperative Evaluation of Ki-67 Expression in Endometrial Carcinoma. Curr Med Imaging 2023; 20:CMIR-EPUB-133564. [PMID: 37876269 DOI: 10.2174/1573405620666230811142710]
Abstract
PURPOSE To investigate the value of multimodal diffusion-weighted imaging (DWI) in the preoperative evaluation of Ki-67 expression in endometrial carcinoma (EC). MATERIALS AND METHODS Patients who had undergone pelvic DWI, intravoxel incoherent motion (IVIM), and diffusion kurtosis imaging (DKI) sequence MRI scans before surgery were retrospectively enrolled. A single index model, double index model, and DKI were used for post-processing of the DWI data, and the apparent diffusion coefficient (ADC), real diffusion coefficient (D), pseudo diffusion coefficient (D*), perfusion fraction (f), non-Gaussian mean diffusion kurtosis (MK), mean diffusion coefficient (MD), and fractional anisotropy (FA) were calculated and compared between the Ki-67 high (≥50%) and low (<50%) expression groups. RESULTS Forty-two patients with a median age of 56 (range 37 - 75) years were enrolled, including 15 patients with high Ki-67 (≥50%) expression and 27 with low Ki-67 (<50%) expression. The MK (0.91 ± 0.12 vs. 0.76 ± 0.12) was significantly (P<0.05) higher, while MD (0.99 ± 0.17 vs. 1.16 ± 0.22), D (0.55 ± 0.06 vs. 0.62 ± 0.08), and f (0.21 vs. 0.28) were significantly (P<0.05) lower, in the high- than in the low-expression group. The combined model of MK, MD, D, and f values had the largest area under the curve (AUC) of 0.869 (95% CI: 0.764-0.974), sensitivity 0.733 and specificity 0.852, followed by the MK value with an AUC of 0.827 (95% CI: 0.700-0.954), sensitivity 0.733 and specificity 0.815. CONCLUSIONS IVIM and DKI have certain diagnostic value for the preoperative evaluation of Ki-67 expression in EC, and the combined model has the highest diagnostic efficiency.
Affiliation(s)
- Huan Meng: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Si-Xuan Ding: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Yu Zhang: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Feng-Ying Zhu: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Jing Wang: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Jia-Ning Wang: Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Bu-Lang Gao: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Xiao-Ping Yin: Department of Radiology, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China

39
Lakhan A, Mohammed MA, Abdulkareem KH, Hamouda H, Alyahya S. Autism Spectrum Disorder detection framework for children based on federated learning integrated CNN-LSTM. Comput Biol Med 2023; 166:107539. PMID: 37804778. DOI: 10.1016/j.compbiomed.2023.107539.
Abstract
The incidence of Autism Spectrum Disorder (ASD) among children, attributed to genetic and environmental factors, has been rising steadily. ASD is a non-curable neurodevelopmental disorder that affects children's communication, behavior, social interaction, and learning skills. While machine learning has been employed for ASD detection in children, existing ASD frameworks offer limited services to monitor and improve the health of ASD patients. This paper presents an efficient ASD framework with comprehensive services that improves on existing ASD frameworks. Our proposed approach is the Federated Learning-enabled CNN-LSTM (FCNN-LSTM) scheme, designed for ASD detection in children using multimodal datasets. The ASD framework is built in a distributed computing environment where different ASD laboratories are connected to the central hospital. The FCNN-LSTM scheme enables local laboratories to train and validate different datasets, including the Ages and Stages Questionnaires (ASQ), Facial Communication and Symbolic Behavior Scales (CSBS), Parents' Evaluation of Developmental Status (PEDS), Modified Checklist for Autism in Toddlers (M-CHAT), and Screening Tool for Autism in Toddlers and Children (STAT) datasets, at different computing laboratories. To ensure the security of patient data, we have implemented a security mechanism based on the Advanced Encryption Standard (AES) within the federated learning environment, which allows all laboratories to offload and download data securely. We integrate all trained models at the aggregation nodes and make the final decision for ASD patients based on a decision-process tree. Additionally, we have designed various Internet of Things (IoT) applications to support ASD patients and achieve better learning results. Simulation results demonstrate that our proposed framework achieves an ASD detection accuracy of approximately 99%, compared to existing ASD frameworks.
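The central aggregation step of a federated scheme like the one described can be sketched as weighted averaging of locally trained parameters (FedAvg). This is a generic toy illustration with hypothetical lab weights and sample counts, not the paper's FCNN-LSTM implementation:

```python
def fed_avg(local_weights, sample_counts):
    """Federated averaging: each laboratory's weight vector is
    averaged, weighted by how many samples it trained on."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Three hypothetical labs (e.g. ASQ, M-CHAT, STAT sites), 2-parameter models
labs = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
counts = [100, 200, 100]
print(fed_avg(labs, counts))  # weighted per-parameter mean
```

In a real federation the weight vectors would be the CNN-LSTM parameters, exchanged in encrypted form (AES, per the abstract) rather than raw data.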
Affiliation(s)
- Abdullah Lakhan: Department of Cybersecurity and Computer Science, Dawood University of Engineering and Technology, Karachi City 74800, Sindh, Pakistan
- Mazin Abed Mohammed: Department of Artificial Intelligence, College of Computer Science and Information Technology, University of Anbar, Anbar 31001, Iraq
- Hassen Hamouda: College of Science and Humanities at Alghat, Majmaah University, Al-Majmaah 11952, Saudi Arabia
- Saleh Alyahya: Department of Electrical Engineering, College of Engineering and Information Technology, Onaizah Colleges, Onaizah 2053, Saudi Arabia

40
Bresee CS, Belli HM, Luo Y, Hartmann MJZ. Comparative morphology of the whiskers and faces of mice (Mus musculus) and rats (Rattus norvegicus). J Exp Biol 2023; 226:jeb245597. PMID: 37577985. PMCID: PMC10617617. DOI: 10.1242/jeb.245597.
Abstract
Understanding neural function requires quantification of the sensory signals that an animal's brain evolved to interpret. These signals in turn depend on the morphology and mechanics of the animal's sensory structures. Although the house mouse (Mus musculus) is one of the most common model species used in neuroscience, the spatial arrangement of its facial sensors has not yet been quantified. To address this gap, the present study quantifies the facial morphology of the mouse, with a particular focus on the geometry of its vibrissae (whiskers). The study develops equations that establish relationships between the three-dimensional (3D) locations of whisker basepoints, whisker geometry (arclength, curvature), and the 3D angles at which the whiskers emerge from the face. Additionally, the positions of facial sensory organs are quantified relative to bregma-lambda. Comparisons with the Norway rat (Rattus norvegicus) indicate that when normalized for head size, the whiskers of these two species have similar spacing density. The rostral-caudal distances between facial landmarks of the rat are a factor of ∼2.0 greater than those of the mouse, while the scale of bilateral distances is larger and more variable. We interpret these data to suggest that the larger size of rats compared with mice is a derived (apomorphic) trait. As rodents are increasingly important models in behavioral neuroscience, the morphological model developed here will help researchers generate naturalistic, multimodal patterns of stimulation for neurophysiological experiments and allow the generation of synthetic datasets and simulations to close the loop between brain, body and environment.
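The kind of equation the study describes, relating whisker basepoint position to whisker geometry, can be illustrated with a plain least-squares fit. The coordinates and arclengths below are invented for the sketch, not measured values from the paper:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ≈ a*x + b (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical basepoint coordinate (mm, caudal→rostral) vs arclength (mm):
# caudal whiskers are longer, so the slope should come out negative.
col_position = [0.0, 1.0, 2.0, 3.0, 4.0]
arclength = [20.0, 16.0, 12.5, 9.0, 5.5]
slope, intercept = linear_fit(col_position, arclength)
print(slope, intercept)  # slope ≈ -3.6 mm per position, intercept ≈ 19.8 mm
```

The published model fits 3D basepoint coordinates and emergence angles jointly; this 1D version only shows the regression mechanic.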
Affiliation(s)
- Chris S. Bresee: Northwestern University Institute for Neuroscience, Northwestern University, Evanston, IL 60208, USA
- Hayley M. Belli: Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, USA
- Yifu Luo: Department of Mechanical Engineering, Northwestern University, Evanston, IL 60208, USA
- Mitra J. Z. Hartmann: Department of Biomedical Engineering and Department of Mechanical Engineering, Northwestern University, Evanston, IL 60208, USA

41
Luo N, Zhong X, Su L, Cheng Z, Ma W, Hao P. Artificial intelligence-assisted dermatology diagnosis: From unimodal to multimodal. Comput Biol Med 2023; 165:107413. PMID: 37703714. DOI: 10.1016/j.compbiomed.2023.107413.
Abstract
Artificial Intelligence (AI) is progressively permeating medicine, notably in the realm of assisted diagnosis. However, traditional unimodal AI models, reliant on large volumes of accurately labeled data and on a single data type, prove insufficient for assisting dermatological diagnosis. Augmenting these models with text data from patient narratives and laboratory reports, and with image data from skin lesions, dermoscopy, and pathology, could significantly enhance their diagnostic capacity. Large-scale pre-trained multimodal models offer a promising solution, exploiting the burgeoning reservoir of clinical data and amalgamating various data types. This paper delves into unimodal models' methodologies, applications, and shortcomings while exploring how multimodal models can enhance accuracy and reliability. Furthermore, integrating cutting-edge technologies such as federated learning and multi-party privacy computing with AI can substantially mitigate patient privacy concerns around dermatological datasets and further foster a move towards high-precision self-diagnosis. Diagnostic systems underpinned by large-scale pre-trained multimodal models can help dermatologists formulate effective diagnostic and treatment strategies and herald a transformative era in healthcare.
Affiliation(s)
- Nan Luo: Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China
- Xiaojing Zhong: Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China
- Luxin Su: Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China
- Zilin Cheng: Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China
- Wenyi Ma: Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China
- Pingsheng Hao: Hospital of Chengdu University of Traditional Chinese Medicine, No. 39 Shi-er-qiao Road, Chengdu, 610075, Sichuan, China

42
Zhou Z, Hong B, Qian X, Hu J, Shen M, Ji J, Dai Y. macJNet: weakly-supervised multimodal image deformable registration using joint learning framework and multi-sampling cascaded MIND. Biomed Eng Online 2023; 22:91. PMID: 37726780. PMCID: PMC10510294. DOI: 10.1186/s12938-023-01143-6.
Abstract
Deformable multimodal image registration plays a key role in medical image analysis. It remains a challenge to find accurate dense correspondences between multimodal images due to significant intensity distortion and large deformation. macJNet is proposed to align multimodal medical images; it is a weakly-supervised multimodal image deformable registration method using a joint learning framework and a multi-sampling cascaded modality-independent neighborhood descriptor (macMIND). The joint learning framework consists of a multimodal image registration network and two segmentation networks. The proposed macMIND is a modality-independent image structure descriptor that provides dense correspondence for registration, incorporating multi-orientation and multi-scale sampling patterns to build self-similarity context. It greatly enhances the representation ability of cross-modal features in the registration network. The semi-supervised segmentation networks generate anatomical labels that provide semantic correspondence for registration, and the registration network helps to improve the performance of multimodal image segmentation by enforcing the consistency of anatomical labels. A 3D CT-MR liver image dataset with 118 samples was built for evaluation, and comprehensive experiments demonstrate that macJNet achieves superior performance over state-of-the-art multi-modality medical image registration methods.
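The MIND-family descriptors underlying macMIND encode local self-similarity rather than raw intensity, which is what makes them comparable across modalities. A heavily simplified 1D sketch (not the paper's multi-orientation, multi-scale macMIND) shows the core idea: descriptors match even when image contrast is inverted:

```python
import math

def mind_1d(img, r=1):
    """Simplified 1D modality-independent neighborhood descriptor:
    for each pixel, squared differences to its neighbors are turned
    into exp(-d/v) responses, which depend on local structure rather
    than on the absolute intensity scale."""
    n = len(img)
    desc = []
    for i in range(r, n - r):
        d_left = (img[i] - img[i - r]) ** 2
        d_right = (img[i] - img[i + r]) ** 2
        v = (d_left + d_right) / 2 or 1e-6   # local variance estimate
        desc.append((math.exp(-d_left / v), math.exp(-d_right / v)))
    return desc

# Two "modalities" of the same edge: inverted contrast, same structure
a = [0, 0, 1, 1]
b = [1, 1, 0, 0]
assert mind_1d(a) == mind_1d(b)  # identical descriptors despite inversion
```

A registration loss can then compare descriptors (e.g. sum of squared descriptor differences) instead of intensities, which is the role macMIND plays in the full 3D network.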
Affiliation(s)
- Zhiyong Zhou: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, Jiangsu, China
- Ben Hong: School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
- Xusheng Qian: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, Jiangsu, China
- Jisu Hu: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, Jiangsu, China
- Minglei Shen: School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
- Jiansong Ji: Key Laboratory of Imaging Diagnosis and Minimally Invasive Intervention Research, The Fifth Affiliated Hospital of Wenzhou Medical University, Lishui, Zhejiang, China
- Yakang Dai: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, Jiangsu, China

43
Hootsmans N, Parmiter S, Connors K, Badve SB, Snyder E, Turcotte JJ, Jayaraman SS, Zahiri HR. Outcomes of an enhanced recovery after surgery (ERAS) program to limit perioperative opioid use in outpatient minimally invasive GI and hernia surgeries. Surg Endosc 2023; 37:7192-7198. PMID: 37353653. DOI: 10.1007/s00464-023-10217-4.
Abstract
BACKGROUND Perioperative pain management is important for patient satisfaction while returning patients to homeostasis as safely as possible. Studies show that patients do not require as many opioids as once thought. The benefits of ERAS pathways extend beyond enhancing patients' perioperative experience: they include reducing opioid prescriptions in the face of the ongoing nationwide opioid crisis and the evidence that prescription opioids are a contributor to it. METHODS We performed a retrospective cohort study of patients undergoing same-day minimally invasive surgery (MIS) procedures for GI and hernia disease using a minimal-opioid ERAS protocol at two community hospitals between January 2020 and May 2022. We included elective laparoscopic cholecystectomy (LC), laparoscopic appendectomy (LA) for acute appendicitis without perforation, and minimally invasive (laparoscopic and robotic) inguinal and ventral hernia repair or abdominal wall reconstruction (AWR). The primary outcome was postoperative opioid use. RESULTS A total of 509 patients were included, undergoing MIS hernia repair (52.5%), LC (43.6%), and LA (7.9%). Only 9.4% of patients received opioid prescriptions at discharge, with no difference between groups. Among the patients receiving a prescription at discharge, there was a significant difference in morphine milligram equivalents (MME) prescribed (25.0 ± 0.0 in the LA group, 65.0 ± 41.4 in the LC group, 100.6 ± 46.2 in the MIS hernia/AWR group; P = 0.015). Nine percent of patients called with pain management concerns postoperatively. An ASA score ≥ 3 was associated with increased odds of a postoperative opioid prescription (OR 2.084; P = 0.014). CONCLUSIONS We demonstrate that an opioid-sparing ERAS program effectively manages pain for patients undergoing multiple outpatient MIS GI/hernia procedures, and our findings suggest generalizability across a diverse range of operations. The use of ERAS may therefore safely and effectively expand beyond inpatient MIS and open surgeries that target reduced length of stay, to also minimize opioids for outpatient procedures.
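MME, the unit reported above, is simple arithmetic over per-drug conversion factors. A minimal sketch follows; the factors are illustrative values commonly published by the CDC and should be checked against current guidance before any real use:

```python
# Approximate oral MME conversion factors (illustrative only; verify
# against current CDC guidance before clinical use).
MME_FACTOR = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "tramadol": 0.1,
}

def total_mme(prescriptions):
    """Total morphine milligram equivalents for a discharge prescription.
    prescriptions: list of (drug, dose_mg_per_tablet, tablet_count)."""
    return sum(MME_FACTOR[drug] * dose * count
               for drug, dose, count in prescriptions)

# e.g. ten 5 mg oxycodone tablets
print(total_mme([("oxycodone", 5, 10)]))  # → 75.0
```

Group means like "100.6 ± 46.2 MME" are then just averages of this quantity over the patients who received a prescription.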
Affiliation(s)
- Norbert Hootsmans: Luminis Health Anne Arundel Medical Center, 2001 Medical Pkwy, Annapolis, MD, USA
- Sara Parmiter: Luminis Health Anne Arundel Medical Center, 2001 Medical Pkwy, Annapolis, MD, USA
- Kevin Connors: Luminis Health Anne Arundel Medical Center, 2001 Medical Pkwy, Annapolis, MD, USA
- Shivani B Badve: Luminis Health Anne Arundel Medical Center, 2001 Medical Pkwy, Annapolis, MD, USA
- Elise Snyder: Luminis Health Anne Arundel Medical Center, 2001 Medical Pkwy, Annapolis, MD, USA
- Justin J Turcotte: Luminis Health Anne Arundel Medical Center, 2001 Medical Pkwy, Annapolis, MD, USA
- H Reza Zahiri: Luminis Health Anne Arundel Medical Center, 2001 Medical Pkwy, Annapolis, MD, USA; Luminis Health Doctors Community Medical Center, Lanham, MD, USA

44
Mostafavi MA, Bahloul M, Boucher N, Routhier F. A Novel Geospatial Assistive Navigation Technology for Seamless Multimodal Mobility of Wheelchair Users. Stud Health Technol Inform 2023; 306:409-415. PMID: 37638943. DOI: 10.3233/shti230652.
Abstract
Mobility is fundamental to the social participation of people with disabilities (PWD). Unfortunately, urban environments, including their infrastructure and services, have traditionally been designed largely around a standard perception of an independent, fully functional citizen without disability, which limits the mobility and social participation of PWD. This paper presents the design and development of a novel geospatial assistive navigation technology to support the multimodal mobility of people with disabilities, especially manual wheelchair users, in urban areas.
Affiliation(s)
- Mir Abolfazl Mostafavi: Centre de recherche en données et en intelligence géospatiale (CRDIG), Université Laval, Québec, Canada; Centre interdisciplinaire de recherche en réadaptation et en intégration sociale (CIRRIS), Université Laval, Québec, Canada
- Mohamed Bahloul: Centre de recherche en données et en intelligence géospatiale (CRDIG), Université Laval, Québec, Canada; Centre interdisciplinaire de recherche en réadaptation et en intégration sociale (CIRRIS), Université Laval, Québec, Canada
- Normand Boucher: Centre interdisciplinaire de recherche en réadaptation et en intégration sociale (CIRRIS), Université Laval, Québec, Canada
- François Routhier: Centre interdisciplinaire de recherche en réadaptation et en intégration sociale (CIRRIS), Université Laval, Québec, Canada

45
Qu S, Shi S, Quan Z, Gao Y, Wang M, Wang Y, Pan G, Lai HY, Roe AW, Zhang X. Design and application of a multimodality-compatible 1Tx/6Rx RF coil for monkey brain MRI at 7T. Neuroimage 2023; 276:120185. PMID: 37244320. DOI: 10.1016/j.neuroimage.2023.120185.
Abstract
OBJECTIVE Blood-oxygen-level-dependent functional MRI allows investigation of neural activity and connectivity. The non-human primate plays an essential role in neuroscience research, and multimodal methods combining functional MRI with other neuroimaging and neuromodulation techniques enable us to understand the brain network at multiple scales. APPROACH In this study, a tight-fitting helmet-shaped receive array with a single transmit loop for anesthetized macaque brain MRI at 7T was fabricated, with four openings constructed in the coil housing to accommodate multimodal devices; the coil performance was quantitatively evaluated and compared to a commercial knee coil. In addition, experiments on three macaques with infrared neural stimulation (INS), focused ultrasound stimulation (FUS), and transcranial direct current stimulation (tDCS) were conducted. MAIN RESULTS The RF coil showed higher transmit efficiency, comparable homogeneity, improved SNR, and enlarged signal coverage over the macaque brain. Infrared neural stimulation was applied to the amygdala, a deep brain region, and activations at the stimulation site and connected sites were detected, with connectivity consistent with anatomical information. Focused ultrasound stimulation was applied to the left visual cortex, and activations were acquired along the ultrasound traveling path, with all time-course curves consistent with the pre-designed paradigms. The presence of transcranial direct current stimulation electrodes caused no interference to the RF system, as evidenced by high-resolution MPRAGE structural images. SIGNIFICANCE This pilot study demonstrates the feasibility of brain investigation at multiple spatiotemporal scales, which may advance our understanding of dynamic brain networks.
Affiliation(s)
- Shuxian Qu: The Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China
- Sunhang Shi: The Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China
- Zhiyan Quan: The Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China
- Yang Gao: The Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China; College of Electrical Engineering, Zhejiang University, Hangzhou, China
- Minmin Wang: Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
- Yueming Wang: Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China; State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
- Gang Pan: MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China; State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
- Hsin-Yi Lai: The Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China; Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Anna Wang Roe: The Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China; Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiaotong Zhang: The Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China; MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China; Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China; College of Electrical Engineering, Zhejiang University, Hangzhou, China

46
Streri A, de Hevia MD. How do human newborns come to understand the multimodal environment? Psychon Bull Rev 2023; 30:1171-1186. PMID: 36862372. DOI: 10.3758/s13423-023-02260-y.
Abstract
For a long time, newborns were considered human beings devoid of perceptual abilities who had to learn everything about their physical and social environment with effort. Extensive empirical evidence gathered in recent decades has systematically invalidated this notion. Despite the relatively immature state of their sensory modalities, newborns have perceptual capacities that are acquired through, and triggered by, contact with the environment. More recently, the study of the fetal origins of the sensory modes has revealed that in utero all the senses prepare to operate, except for vision, which becomes functional only in the first minutes after birth. This discrepancy between the maturation of the different senses raises the question of how human newborns come to understand our multimodal and complex environment, and more precisely, how the visual mode interacts with the tactile and auditory modes from birth. After defining the tools that newborns use to interact with other sensory modalities, we review studies across different fields of research, such as intermodal transfer between touch and vision, auditory-visual speech perception, and the existence of links between the dimensions of space, time, and number. Overall, evidence from these studies supports the idea that human newborns are spontaneously driven, and cognitively equipped, to link information collected by the different sensory modes in order to create a representation of a stable world.
Affiliation(s)
- Arlette Streri: Université Paris Cité, CNRS, Integrative Neuroscience and Cognition Center, F-75006, Paris, France
- Maria Dolores de Hevia: Université Paris Cité, CNRS, Integrative Neuroscience and Cognition Center, F-75006, Paris, France

47
de Figueiredo M, Saugy J, Saugy M, Faiss R, Salamin O, Nicoli R, Kuuranne T, Rudaz S, Botrè F, Boccard J. A new multimodal paradigm for biomarkers longitudinal monitoring: a clinical application to women steroid profiles in urine and blood. Anal Chim Acta 2023; 1267:341389. PMID: 37257979. DOI: 10.1016/j.aca.2023.341389.
Abstract
BACKGROUND Most current state-of-the-art strategies for generating individual adaptive reference ranges are designed to monitor one clinical parameter at a time. An innovative methodology is proposed for the simultaneous longitudinal monitoring of multiple biomarkers. Individual thresholds are estimated by applying a Bayesian modeling strategy to a multivariate score integrating several biomarkers (compound concentrations and/or ratios). This multimodal monitoring was applied to data from a clinical study involving 14 female volunteers with normal menstrual cycles receiving testosterone via the transdermal route, to test its ability to detect testosterone administration. The study samples consisted of urine and blood collected during 4 weeks of a control phase and 4 weeks with daily testosterone gel application. RESULTS Integrating multiple biomarkers improved the detection of testosterone gel administration with substantially higher sensitivity than the separate follow-up of each biomarker, when applied to selected urine and serum steroid biomarkers as well as to the combination of both. Among the 175 known positive samples, 38% were identified by the multimodal approach using urine biomarkers, 79% using serum biomarkers, and 83% by combining biomarkers from both biological matrices, whereas 10%, 67%, and 64%, respectively, were detected using standard unimodal monitoring. SIGNIFICANCE AND NOVELTY The detection of abnormal patterns can be improved using multimodal approaches. The combination of urine and serum biomarkers reduced the overall number of false negatives, evidencing a promising complementarity between urine and blood sampling for doping control, as highlighted here for transdermal testosterone preparations. The generation of adaptive, personalized reference ranges in a multimodal setting opens up new opportunities in clinical and anti-doping profiling. The integration of multiple parameters in longitudinal monitoring is expected to provide a more complete evaluation of individual profiles, generating actionable intelligence to further guide sample collection, analysis protocols, and decision-making in clinical and anti-doping settings.
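The Bayesian individualization step can be sketched with the simplest conjugate case: a normal prior on an individual's score that narrows as that person's own observations arrive. All numbers here are hypothetical, and the published model is considerably richer than this:

```python
def update_normal(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update of an individual's mean
    biomarker score after one new observation."""
    precision = 1 / prior_var + 1 / obs_var
    post_var = 1 / precision
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Start from a population-level prior on the multivariate score,
# then narrow the individual range as (hypothetical) samples come in.
mean, var = 0.0, 1.0
for score in [0.4, 0.5, 0.45]:
    mean, var = update_normal(mean, var, score, obs_var=0.25)

# Individual upper reference limit tightens around the subject's own level
upper_limit = mean + 1.96 * var ** 0.5
```

A new sample exceeding `upper_limit` would then flag an atypical (possibly doping-related) result for that individual rather than against a population range.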
Affiliation(s)
- Miguel de Figueiredo: School of Pharmaceutical Sciences, University of Geneva, Geneva, Switzerland; Institute of Pharmaceutical Sciences of Western Switzerland, University of Geneva, Geneva, Switzerland
- Jonas Saugy: Center of Research and Expertise in Anti-Doping Sciences, Institute of Sport Sciences, University of Lausanne, Lausanne, Switzerland
- Martial Saugy: Center of Research and Expertise in Anti-Doping Sciences, Institute of Sport Sciences, University of Lausanne, Lausanne, Switzerland
- Raphaël Faiss: Center of Research and Expertise in Anti-Doping Sciences, Institute of Sport Sciences, University of Lausanne, Lausanne, Switzerland
- Olivier Salamin: Center of Research and Expertise in Anti-Doping Sciences, Institute of Sport Sciences, University of Lausanne, Lausanne, Switzerland; Swiss Laboratory for Doping Analyses, University Center of Legal Medicine Lausanne and Geneva, Lausanne University Hospital and University of Lausanne, Switzerland
- Raul Nicoli: Swiss Laboratory for Doping Analyses, University Center of Legal Medicine Lausanne and Geneva, Lausanne University Hospital and University of Lausanne, Switzerland
- Tiia Kuuranne: Swiss Laboratory for Doping Analyses, University Center of Legal Medicine Lausanne and Geneva, Lausanne University Hospital and University of Lausanne, Switzerland
- Serge Rudaz: School of Pharmaceutical Sciences, University of Geneva, Geneva, Switzerland; Institute of Pharmaceutical Sciences of Western Switzerland, University of Geneva, Geneva, Switzerland
- Francesco Botrè: Center of Research and Expertise in Anti-Doping Sciences, Institute of Sport Sciences, University of Lausanne, Lausanne, Switzerland
- Julien Boccard: School of Pharmaceutical Sciences, University of Geneva, Geneva, Switzerland; Institute of Pharmaceutical Sciences of Western Switzerland, University of Geneva, Geneva, Switzerland

48
Azher ZL, Suvarna A, Chen JQ, Zhang Z, Christensen BC, Salas LA, Vaickus LJ, Levy JJ. Assessment of emerging pretraining strategies in interpretable multimodal deep learning for cancer prognostication. BioData Min 2023; 16:23. PMID: 37481666. PMCID: PMC10363299. DOI: 10.1186/s13040-023-00338-w.
Abstract
BACKGROUND Deep learning models can infer cancer patient prognosis from molecular and anatomic pathology information. Recent studies that leveraged information from complementary multimodal data improved prognostication, further illustrating the potential utility of such methods. However, current approaches 1) do not comprehensively leverage biological and histomorphological relationships and 2) do not make use of emerging strategies to "pretrain" models (i.e., train models on a slightly orthogonal dataset/modeling objective), which may aid prognostication by reducing the amount of information required to achieve optimal performance. In addition, model interpretation is crucial for facilitating the clinical adoption of deep learning methods by fostering practitioner understanding and trust in the technology. METHODS Here, we develop an interpretable multimodal modeling framework that combines DNA methylation, gene expression, and histopathology (i.e., tissue slides) data, and we compare the performance of crossmodal pretraining, contrastive learning, and transfer learning against the standard procedure. RESULTS Our models outperform the existing state-of-the-art method (average 11.54% C-index increase) and baseline clinically driven models (average 11.7% C-index increase). Model interpretations show that biologically meaningful factors are considered when making prognosis predictions. DISCUSSION Our results demonstrate that the selection of pretraining strategy is crucial for obtaining highly accurate prognostication models, even more so than devising an innovative model architecture, and further emphasize the all-important role of the tumor microenvironment in disease progression.
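The C-index gains reported in this abstract refer to the concordance index, the standard metric for survival-prognosis models. A minimal sketch of how Harrell's C-index is computed from predicted risk scores follows; the function name and toy data are illustrative, not taken from the paper:

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs whose
    predicted risk ordering agrees with their observed survival ordering."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # Order the pair so patient `a` has the earlier observed time.
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if not events[a]:
            continue  # earlier time is censored: pair is not comparable
        comparable += 1
        if risks[a] > risks[b]:
            concordant += 1.0  # higher predicted risk died earlier: concordant
        elif risks[a] == risks[b]:
            concordant += 0.5  # tied predictions count half
    return concordant / comparable

# Toy cohort: risk scores perfectly ordered with death times -> C-index = 1.0
times = [5, 10, 15, 20]
events = [1, 1, 1, 0]  # last patient is censored
risks = [0.9, 0.7, 0.4, 0.1]
print(concordance_index(times, events, risks))  # 1.0
```

A C-index of 0.5 corresponds to random ordering, so the reported ~11% increases represent substantial gains in pairwise ranking accuracy.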
Affiliation(s)
- Zarif L Azher: Thomas Jefferson High School for Science and Technology, Alexandria, VA, USA
- Anish Suvarna: Thomas Jefferson High School for Science and Technology, Alexandria, VA, USA
- Ji-Qing Chen: Cancer Biology Graduate Program, Dartmouth College Geisel School of Medicine, Hanover, NH, USA; Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, NH, USA; Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH, USA
- Ze Zhang: Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, NH, USA; Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH, USA
- Brock C Christensen: Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH, USA; Department of Molecular and Systems Biology, Dartmouth College Geisel School of Medicine, Hanover, NH, USA; Department of Community and Family Medicine, Dartmouth College Geisel School of Medicine, Hanover, NH, USA
- Lucas A Salas: Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH, USA; Department of Molecular and Systems Biology, Dartmouth College Geisel School of Medicine, Hanover, NH, USA; Integrative Neuroscience at Dartmouth (IND) Graduate Program, Dartmouth College Geisel School of Medicine, Hanover, NH, USA
- Louis J Vaickus: Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH, USA
- Joshua J Levy: Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, NH, USA; Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH, USA; Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH, USA; Department of Dermatology, Dartmouth Health, Lebanon, NH, USA
49
Alharbi B, Alshanbari HS. Face-voice based multimodal biometric authentication system via FaceNet and GMM. PeerJ Comput Sci 2023; 9:e1468. PMID: 37547388; PMCID: PMC10403184; DOI: 10.7717/peerj-cs.1468.
Abstract
Information security has become an inseparable aspect of information technology as the field has advanced, and authentication is crucial to security: a user is identified through biometrics based on physiological and behavioral markers. Many systems require trustworthy personal recognition schemes to confirm or establish the identity of an individual requesting their services, the goal being to ensure that the offered services are accessible only to authorized users. This study improves the accuracy of multimodal biometric authentication based on voice and face, thereby reducing the equal error rate. The proposed scheme uses a Gaussian mixture model for voice recognition, the FaceNet model for face recognition, and score-level fusion to determine the identity of the user. The results show that the proposed scheme achieves a lower equal error rate than previous work.
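Score-level fusion, as described in this abstract, combines the match scores of independent matchers after bringing them onto a common scale. A minimal sketch follows; the min-max normalization, equal weights, and toy scores are assumptions for illustration, not the paper's values:

```python
def min_max_normalize(scores):
    """Map raw matcher scores onto [0, 1] so that face and voice
    scores on different scales become comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(face_scores, voice_scores, w_face=0.5):
    """Weighted-sum score-level fusion of two matchers' normalized scores."""
    face = min_max_normalize(face_scores)
    voice = min_max_normalize(voice_scores)
    return [w_face * f + (1 - w_face) * v for f, v in zip(face, voice)]

# Toy scores for four verification attempts (higher = more likely genuine).
face = [0.2, 0.9, 0.4, 0.8]          # e.g. FaceNet embedding similarities
voice = [10.0, 80.0, 30.0, 95.0]     # e.g. GMM log-likelihood ratios
fused = fuse_scores(face, voice)
decisions = [s >= 0.5 for s in fused]  # accept/reject at a fixed threshold
```

Sweeping the threshold and finding where the false-accept rate equals the false-reject rate yields the equal error rate the authors report.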
50
de Paz C, Travieso D. A direct comparison of sound and vibration as sources of stimulation for a sensory substitution glove. Cogn Res Princ Implic 2023; 8:41. PMID: 37402032; DOI: 10.1186/s41235-023-00495-w.
Abstract
Sensory substitution devices (SSDs) facilitate the detection of environmental information by enhancing touch and/or hearing. Research has demonstrated that several tasks can be completed successfully with acoustic, vibrotactile, and multimodal devices, and the suitability of a substituting modality depends on the type of information the specific task requires. The present study tested the adequacy of touch and hearing in a grasping task using a sensory substitution glove that signals the distance between the fingers and the object through increases in stimulation intensity. In a psychophysical magnitude-estimation experiment, forty blindfolded sighted participants discriminated the intensity of vibrotactile and acoustic stimulation equally well, although they had some difficulty with the most intense stimuli. In a subsequent grasping task with cylindrical objects of varying diameters, distances, and orientations, thirty blindfolded sighted participants were divided into vibration, sound, and multimodal groups. Performance was high (84% correct grasps), with equivalent success rates across groups, and movement variables showed greater precision and confidence in the multimodal condition. In a questionnaire, the multimodal group indicated a preference for using a multimodal SSD in daily life and identified vibration as their primary source of stimulation. These results demonstrate that specific-purpose SSDs improve performance when the information necessary for a task is identified and coupled with the delivered stimulation, and they suggest that functional equivalence between substituting modalities is achievable when these steps are met.
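The distance-to-intensity coupling described above can be sketched as a simple mapping from finger-object distance to stimulation level. This is an illustration only; the 30 cm range and the inverse-linear law are assumptions, not the glove's actual calibration:

```python
def stimulation_intensity(distance_cm, max_distance_cm=30.0):
    """Map finger-to-object distance to a stimulation level in [0, 1]:
    closer objects produce stronger vibration/sound; beyond the sensing
    range the actuator stays silent."""
    if distance_cm >= max_distance_cm:
        return 0.0
    return 1.0 - distance_cm / max_distance_cm

# Closer fingers produce stronger stimulation, regardless of whether the
# level drives a vibration motor amplitude or a sound volume.
print(stimulation_intensity(0.0))   # 1.0 (touching)
print(stimulation_intensity(15.0))  # 0.5 (mid-range)
print(stimulation_intensity(45.0))  # 0.0 (out of range)
```

The same scalar can drive either substituting modality, which is what allows the study to compare sound and vibration under an identical information-delivery scheme.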
Affiliation(s)
- Carlos de Paz: Facultad de Psicología, Universidad Autónoma de Madrid, 28049, Madrid, Spain
- David Travieso: Facultad de Psicología, Universidad Autónoma de Madrid, 28049, Madrid, Spain