1
Yuan G, Wang T, Ju W, Fu S. A portable affective computing system for identifying mate preference. Sci Rep 2024; 14:17735. PMID: 39085370; PMCID: PMC11292018; DOI: 10.1038/s41598-024-68772-2. Received 29 Jan 2024; accepted 29 Jul 2024.
Abstract
Recognizing an individual's preference state for potential romantic partners based on electroencephalogram (EEG) signals holds significant practical value in enhancing matchmaking success rates and preventing romance fraud. Although some progress has been made in this field, challenges such as a high-dimensional feature space and channel redundancy have limited the technology's practical application. The aim of this study is to identify the most discriminative EEG features and channels in order to enhance the recognition performance of the system while maximizing the portability and practical value of EEG-based systems for recognizing romantic attraction. To achieve this goal, we first conducted a simulated dating experiment to collect the necessary data. Next, EEG features were extracted across several dimensions, including band power and asymmetry index features. Then, we introduced a novel method for EEG feature and channel selection that combines the sequential forward selection (SFS) algorithm with the frequency-based feature subset integration (FFSI) algorithm. Finally, we used a random forest classifier (RFC) to determine a person's preference state for potential romantic partners. Experimental results indicate that the optimal feature subset, selected using the SFS-FFSI method, attained an average classification accuracy of 88.42%. Notably, these features were predominantly drawn from asymmetry index features of electrodes situated over the frontal, parietal, and occipital lobes.
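The SFS step of the pipeline described in this abstract can be sketched as a greedy wrapper loop. This is a minimal illustration only, not the paper's implementation: the feature names and scoring function below are hypothetical stand-ins, and the paper's novel FFSI integration step (and the RFC scorer) are omitted.

```python
def sequential_forward_selection(features, score_fn, k):
    """Greedy SFS: start from an empty subset and repeatedly add the
    single feature that most improves score_fn(subset), up to k features."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best_feat, best_score = None, float("-inf")
        for f in remaining:
            s = score_fn(selected + [f])  # evaluate candidate subset
            if s > best_score:
                best_feat, best_score = f, s
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected

# Toy scorer standing in for cross-validated classifier accuracy
# (hypothetical feature names and weights, for illustration only):
weights = {"alpha_asym": 3, "beta_power": 2, "theta_power": 1}
score_fn = lambda subset: sum(weights[f] for f in subset)
print(sequential_forward_selection(list(weights), score_fn, 2))
```

In the paper's setting, `score_fn` would instead wrap the random forest classifier's cross-validated accuracy on the EEG feature subset.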
Affiliation(s)
- Guangjie Yuan
- School of Psychology, Qufu Normal University, Shandong, China.
- College of Electronic and Information Engineering, Southwest University, Chongqing, China.
- Tao Wang
- School of Psychology, Qufu Normal University, Shandong, China.
- Wei Ju
- School of Psychology, Qufu Normal University, Shandong, China.
- Sai Fu
- Faculty of Education, Southwest University, Chongqing, China.
2
Poiret C, Bouyeure A, Patil S, Boniteau C, Duchesnay E, Grigis A, Lemaitre F, Noulhiane M. Attention-gated 3D CapsNet for robust hippocampal segmentation. J Med Imaging (Bellingham) 2024; 11:014003. PMID: 38173654; PMCID: PMC10760147; DOI: 10.1117/1.jmi.11.1.014003. Received 14 Nov 2022; revised 18 Nov 2023; accepted 4 Dec 2023.
Abstract
Purpose: The hippocampus is organized into subfields (HSF) involved in learning and memory processes and widely implicated in pathologies at different ages of life, from neonatal hypoxia to temporal lobe epilepsy or Alzheimer's disease. Obtaining a highly accurate and robust delineation of sub-millimetric regions such as HSF to investigate anatomo-functional hypotheses is a challenge. One of the main difficulties faced by such methods is the small size and anatomical variability of HSF, which result in the scarcity of manually labeled data. Recently introduced, capsule networks solve analogous problems in medical imaging, providing deep learning architectures with rotational equivariance. Nonetheless, capsule networks remain two-dimensional and have not been assessed for the segmentation of HSF. Approach: We released a public 3D capsule network (3D-AGSCaps, https://github.com/clementpoiret/3D-AGSCaps) and compared it to equivalent architectures using classical convolutions on the automatic segmentation of HSF on small and atypical datasets (incomplete hippocampal inversion, IHI). We tested 3D-AGSCaps on three datasets with manually labeled hippocampi. Results: Our main results were: (1) 3D-AGSCaps produced segmentations with a better Dice coefficient than CNNs on rotated hippocampi (p = 0.004, Cohen's d = 0.179); (2) on typical subjects, 3D-AGSCaps produced segmentations with a Dice coefficient similar to CNNs while having 15 times fewer parameters (2.285M versus 35.069M). This may greatly facilitate the study of atypical subjects, including healthy and pathological cases such as those presenting an IHI. Conclusion: We expect our newly introduced 3D-AGSCaps to allow a more accurate and fully automated segmentation of atypical populations and small datasets, as well as of large cohorts where manual segmentation is nearly intractable.
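The Dice coefficient used in this abstract to compare 3D-AGSCaps with CNNs is the standard overlap measure between a predicted and a reference binary mask, Dice = 2|A∩B| / (|A| + |B|). A minimal sketch on flattened 0/1 masks (illustrative only; segmentation toolkits compute this per label over 3D volumes):

```python
def dice_coefficient(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    denom = sum(pred) + sum(target)
    # Both masks empty: conventionally treated as perfect agreement.
    return 1.0 if denom == 0 else 2.0 * inter / denom

# Half of the predicted voxels overlap the reference here:
print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```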
Affiliation(s)
- Clement Poiret
- UNIACT, NeuroSpin, Institut Joliot, CEA Paris-Saclay, Gif-sur-Yvette, France.
- Université Paris Cité, InDEV team, U1141 NeuroDiderot, Inserm, Paris, France.
- Antoine Bouyeure
- UNIACT, NeuroSpin, Institut Joliot, CEA Paris-Saclay, Gif-sur-Yvette, France.
- Université Paris Cité, InDEV team, U1141 NeuroDiderot, Inserm, Paris, France.
- Sandesh Patil
- UNIACT, NeuroSpin, Institut Joliot, CEA Paris-Saclay, Gif-sur-Yvette, France.
- Université Paris Cité, InDEV team, U1141 NeuroDiderot, Inserm, Paris, France.
- Cécile Boniteau
- UNIACT, NeuroSpin, Institut Joliot, CEA Paris-Saclay, Gif-sur-Yvette, France.
- Université Paris Cité, InDEV team, U1141 NeuroDiderot, Inserm, Paris, France.
- Edouard Duchesnay
- UNIACT, NeuroSpin, Institut Joliot, CEA Paris-Saclay, Gif-sur-Yvette, France.
- Antoine Grigis
- UNIACT, NeuroSpin, Institut Joliot, CEA Paris-Saclay, Gif-sur-Yvette, France.
- Frederic Lemaitre
- Université de Rouen, CETAPS EA 3832, Rouen, France.
- CRIOBE, UAR 3278, CNRS-EPHE-UPVD, Mooréa, Polynésie Française.
- Marion Noulhiane
- UNIACT, NeuroSpin, Institut Joliot, CEA Paris-Saclay, Gif-sur-Yvette, France.
- Université Paris Cité, InDEV team, U1141 NeuroDiderot, Inserm, Paris, France.
4
Wang K, He R, Wang S, Liu L, Yamauchi T. The Efficient-CapsNet model for facial expression recognition. Appl Intell 2022. DOI: 10.1007/s10489-022-04349-8.
6
Expression Recognition Method Using Improved VGG16 Network Model in Robot Interaction. Journal of Robotics 2021. DOI: 10.1155/2021/9326695.
Abstract
To address the poor representation ability and limited feature data of traditional expression recognition methods in intelligent applications, an expression recognition method based on an improved VGG16 network is proposed. First, the VGG16 network is improved by using large convolution kernels instead of small ones and by removing some fully connected layers, reducing the complexity and parameter count of the model. Then, the high-dimensional abstract features output by the improved VGG16 are fed into a convolutional neural network (CNN) for training, which outputs the expression classes with high accuracy. Finally, the expression recognition method combining the improved VGG16 and the CNN model is applied to human-computer interaction with the NAO robot, which performs different interactive actions according to different expressions. Experimental results on the CK+ dataset show that the improved VGG16 network has strong supervised learning ability: it extracts features well for different expression types, and its overall recognition accuracy is close to 90%. Across multiple tests, the interaction results show that the robot can stably recognize emotions and perform the corresponding actions.
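Why removing fully connected layers shrinks VGG16 so much can be seen with simple layer arithmetic. The figures below are for the standard VGG16, not the paper's exact modified model, and are purely illustrative:

```python
def conv_params(k, c_in, c_out):
    """Parameters of a k x k convolution: (k*k*c_in weights + 1 bias) per output channel."""
    return (k * k * c_in + 1) * c_out

def fc_params(n_in, n_out):
    """Parameters of a fully connected layer, including biases."""
    return (n_in + 1) * n_out

# In standard VGG16, the first fully connected layer alone (7x7x512 -> 4096)
# holds ~102.8M parameters...
fc1 = fc_params(7 * 7 * 512, 4096)
# ...dwarfing even the largest 3x3 convolution layer (512 -> 512, ~2.4M):
conv = conv_params(3, 512, 512)
print(fc1, conv)  # 102764544 2359808
```

Hence trimming fully connected layers, as the abstract describes, cuts far more parameters than any change to the convolutional kernels.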