Reference Citation Analysis
For: Livingstone SR, Russo FA. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS One 2018;13:e0196391. [PMID: 29768426; PMCID: PMC5955500; DOI: 10.1371/journal.pone.0196391]. Citations in RCA: 167; impact index per article: 27.8. Open access.
Cited by other articles:
1. Chong CS, Davis C, Kim J. A Cantonese Audio-Visual Emotional Speech (CAVES) dataset. Behav Res Methods 2024;56:5264-5278. [PMID: 38017201; PMCID: PMC11289252; DOI: 10.3758/s13428-023-02270-7]
2. von Eiff CI, Kauk J, Schweinberger SR. The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities. Behav Res Methods 2024;56:5103-5115. [PMID: 37821750; PMCID: PMC11289065; DOI: 10.3758/s13428-023-02249-4]
3. Yue L, Hu P, Zhu J. Advanced differential evolution for gender-aware English speech emotion recognition. Sci Rep 2024;14:17696. [PMID: 39085418; PMCID: PMC11291894; DOI: 10.1038/s41598-024-68864-z]
4. Alroobaea R. Cross-corpus speech emotion recognition with transformers: Leveraging handcrafted features and data augmentation. Comput Biol Med 2024;179:108841. [PMID: 39002317; DOI: 10.1016/j.compbiomed.2024.108841]
5. Munsif M, Sajjad M, Ullah M, Tarekegn AN, Cheikh FA, Tsakanikas P, Muhammad K. Optimized efficient attention-based network for facial expressions analysis in neurological health care. Comput Biol Med 2024;179:108822. [PMID: 38986286; DOI: 10.1016/j.compbiomed.2024.108822]
6. Becker C, Conduit R, Chouinard PA, Laycock R. Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuli. Behav Res Methods 2024 (epub ahead of print). [PMID: 38834812; DOI: 10.3758/s13428-024-02443-y]
7. Thomas AL, Assmann PF. Speech production and perception data collection in R: A tutorial for web-based methods using speechcollectr. Behav Res Methods 2024 (epub ahead of print). [PMID: 38829553; DOI: 10.3758/s13428-024-02399-z]
8. Cooper A, Eitel M, Fecher N, Johnson E, Cirelli LK. Who is singing? Voice recognition from spoken versus sung speech. JASA Express Lett 2024;4:065203. [PMID: 38888432; DOI: 10.1121/10.0026385]
9. Wurzberger F, Schwenker F. Learning in Deep Radial Basis Function Networks. Entropy (Basel) 2024;26:368. [PMID: 38785617; PMCID: PMC11120405; DOI: 10.3390/e26050368]
10. Wu D, Jia X, Rao W, Dou W, Li Y, Li B. Construction of a Chinese traditional instrumental music dataset: A validated set of naturalistic affective music excerpts. Behav Res Methods 2024;56:3757-3778. [PMID: 38702502; PMCID: PMC11133124; DOI: 10.3758/s13428-024-02411-6]
11. Kim HN, Taylor S. Differences of people with visual disabilities in the perceived intensity of emotion inferred from speech of sighted people in online communication settings. Disabil Rehabil Assist Technol 2024;19:633-640. [PMID: 35997772; DOI: 10.1080/17483107.2022.2114555]
12. Leung FYN, Stojanovik V, Jiang C, Liu F. Investigating implicit emotion processing in autism spectrum disorder across age groups: A cross-modal emotional priming study. Autism Res 2024;17:824-837. [PMID: 38488319; DOI: 10.1002/aur.3124]
13. Krumpholz C, Quigley C, Fusani L, Leder H. Vienna Talking Faces (ViTaFa): A multimodal person database with synchronized videos, images, and voices. Behav Res Methods 2024;56:2923-2940. [PMID: 37950115; PMCID: PMC11133183; DOI: 10.3758/s13428-023-02264-5]
14. Sadok S, Leglaive S, Girin L, Alameda-Pineda X, Séguier R. A multimodal dynamical variational autoencoder for audiovisual speech representation learning. Neural Netw 2024;172:106120. [PMID: 38266474; DOI: 10.1016/j.neunet.2024.106120]
15. Hsu JH, Wu CH, Lin ECL, Chen PS. MoodSensing: A smartphone app for digital phenotyping and assessment of bipolar disorder. Psychiatry Res 2024;334:115790. [PMID: 38401488; DOI: 10.1016/j.psychres.2024.115790]
16. Diemerling H, Stresemann L, Braun T, von Oertzen T. Implementing machine learning techniques for continuous emotion prediction from uniformly segmented voice recordings. Front Psychol 2024;15:1300996. [PMID: 38572198; PMCID: PMC10987695; DOI: 10.3389/fpsyg.2024.1300996]
17. Lingelbach K, Vukelić M, Rieger JW. GAUDIE: Development, validation, and exploration of a naturalistic German AUDItory Emotional database. Behav Res Methods 2024;56:2049-2063. [PMID: 37221343; PMCID: PMC10991051; DOI: 10.3758/s13428-023-02135-z]
18. Cooper H, Jennings BJ, Kumari V, Willard AK, Bennetts RJ. The association between childhood trauma and emotion recognition is reduced or eliminated when controlling for alexithymia and psychopathy traits. Sci Rep 2024;14:3413. [PMID: 38341493; PMCID: PMC10858958; DOI: 10.1038/s41598-024-53421-5]
19. Islam B, McElwain NL, Li J, Davila MI, Hu Y, Hu K, Bodway JM, Dhekne A, Roy Choudhury R, Hasegawa-Johnson M. Preliminary Technical Validation of LittleBeats™: A Multimodal Sensing Platform to Capture Cardiac Physiology, Motion, and Vocalizations. Sensors (Basel) 2024;24:901. [PMID: 38339617; PMCID: PMC10857055; DOI: 10.3390/s24030901]
20. Ge Y, Tang C, Li H, Chen Z, Wang J, Li W, Cooper J, Chetty K, Faccio D, Imran M, Abbasi QH. A comprehensive multimodal dataset for contactless lip reading and acoustic analysis. Sci Data 2023;10:895. [PMID: 38092796; PMCID: PMC10719268; DOI: 10.1038/s41597-023-02793-w]
21. Billah MM, Sarker ML, Akhand M. KBES: A dataset for realistic Bangla speech emotion recognition with intensity level. Data Brief 2023;51:109741. [PMID: 37965597; PMCID: PMC10641593; DOI: 10.1016/j.dib.2023.109741]
22. Won NR, Son YD, Kim SM, Bae S, Kim JH, Kim JH, Han DH. Attention Circuits Mediate the Connection between Emotional Experience and Expression within the Emotional Circuit. Clin Psychopharmacol Neurosci 2023;21:715-723. [PMID: 37859444; PMCID: PMC10591168; DOI: 10.9758/cpn.22.1029]
23. Rezapour Mashhadi MM, Osei-Bonsu K. Speech emotion recognition using machine learning techniques: Feature extraction and comparison of convolutional neural network and random forest. PLoS One 2023;18:e0291500. [PMID: 37988352; PMCID: PMC10662716; DOI: 10.1371/journal.pone.0291500]
24. Li N, Ross R. Invoking and identifying task-oriented interlocutor confusion in human-robot interaction. Front Robot AI 2023;10:1244381. [PMID: 38054199; PMCID: PMC10694506; DOI: 10.3389/frobt.2023.1244381]
25. Franca M, Bolognini N, Brysbaert M. Seeing emotions in the eyes: a validated test to study individual differences in the perception of basic emotions. Cogn Res Princ Implic 2023;8:67. [PMID: 37919608; PMCID: PMC10622392; DOI: 10.1186/s41235-023-00521-x]
26. Caulley D, Alemu Y, Burson S, Cárdenas Bautista E, Abebe Tadesse G, Kottmyer C, Aeschbach L, Cheungvivatpant B, Sezgin E. Objectively Quantifying Pediatric Psychiatric Severity Using Artificial Intelligence, Voice Recognition Technology, and Universal Emotions: Pilot Study for Artificial Intelligence-Enabled Innovation to Address Youth Mental Health Crisis. JMIR Res Protoc 2023;12:e51912. [PMID: 37870890; PMCID: PMC10628686; DOI: 10.2196/51912]
27. Zhou D, Cheng Y, Wen L, Luo H, Liu Y. Drivers' Comprehensive Emotion Recognition Based on HAM. Sensors (Basel) 2023;23:8293. [PMID: 37837124; PMCID: PMC10574905; DOI: 10.3390/s23198293]
28. Balel Y, Mercuri LG. Does Emotional State Improve Following Temporomandibular Joint Total Joint Replacement? J Oral Maxillofac Surg 2023;81:1196-1203. [PMID: 37490998; DOI: 10.1016/j.joms.2023.06.030]
29. K A, Prasad S, Chakrabarty M. Trait anxiety modulates the detection sensitivity of negative affect in speech: an online pilot study. Front Behav Neurosci 2023;17:1240043. [PMID: 37744950; PMCID: PMC10512416; DOI: 10.3389/fnbeh.2023.1240043]
30. Şentürk YD, Tavacioglu EE, Duymaz İ, Sayim B, Alp N. The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions. Behav Res Methods 2023;55:3078-3099. [PMID: 36018484; DOI: 10.3758/s13428-022-01951-z]
31. Alhinti L, Cunningham S, Christensen H. The Dysarthric Expressed Emotional Database (DEED): An audio-visual database in British English. PLoS One 2023;18:e0287971. [PMID: 37549162; PMCID: PMC10406321; DOI: 10.1371/journal.pone.0287971]
32. Johnson KT, Narain J, Quatieri T, Maes P, Picard RW. ReCANVo: A database of real-world communicative and affective nonverbal vocalizations. Sci Data 2023;10:523. [PMID: 37543663; PMCID: PMC10404278; DOI: 10.1038/s41597-023-02405-7]
33. Pulatov I, Oteniyazov R, Makhmudov F, Cho YI. Enhancing Speech Emotion Recognition Using Dual Feature Extraction Encoders. Sensors (Basel) 2023;23:6640. [PMID: 37514933; PMCID: PMC10383041; DOI: 10.3390/s23146640]
34. Ullah R, Asif M, Shah WA, Anjam F, Ullah I, Khurshaid T, Wuttisittikulkij L, Shah S, Ali SM, Alibakhshikenari M. Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer. Sensors (Basel) 2023;23:6212. [PMID: 37448062; DOI: 10.3390/s23136212]
35. John V, Kawanishi Y. Progressive Learning of a Multimodal Classifier Accounting for Different Modality Combinations. Sensors (Basel) 2023;23:4666. [PMID: 37430579; DOI: 10.3390/s23104666]
36. Razzaq MA, Hussain J, Bang J, Hua CH, Satti FA, Rehman UU, Bilal HSM, Kim ST, Lee S. A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions. Sensors (Basel) 2023;23:4373. [PMID: 37177574; PMCID: PMC10181635; DOI: 10.3390/s23094373]
37. Heffer N, Dennie E, Ashwin C, Petrini K, Karl A. Multisensory processing of emotional cues predicts intrusive memories after virtual reality trauma. Virtual Real 2023;27:2043-2057. [PMID: 37614716; PMCID: PMC10442266; DOI: 10.1007/s10055-023-00784-1]
38. Tanko D, Demir FB, Dogan S, Sahin SE, Tuncer T. Automated speech emotion polarization for a distance education system based on orbital local binary pattern and an appropriate sub-band selection technique. Multimed Tools Appl 2023:1-18. [PMID: 37362680; PMCID: PMC10068203; DOI: 10.1007/s11042-023-14648-y]
39. Gong B, Li N, Li Q, Yan X, Chen J, Li L, Wu X, Wu C. The Mandarin Chinese auditory emotions stimulus database: A validated set of Chinese pseudo-sentences. Behav Res Methods 2023;55:1441-1459. [PMID: 35641682; DOI: 10.3758/s13428-022-01868-7]
40. Reece A, Cooney G, Bull P, Chung C, Dawson B, Fitzpatrick C, Glazer T, Knox D, Liebscher A, Marin S. The CANDOR corpus: Insights from a large multimodal dataset of naturalistic conversation. Sci Adv 2023;9:eadf3197. [PMID: 37000886; PMCID: PMC10065445; DOI: 10.1126/sciadv.adf3197]
41. Cronin SL, Lipp OV, Marinovic W. Pupil Dilation During Encoding, But Not Type of Auditory Stimulation, Predicts Recognition Success in Face Memory. Biol Psychol 2023;178:108547. [PMID: 36972756; DOI: 10.1016/j.biopsycho.2023.108547]
42. Hajek P, Munk M. Speech emotion recognition and text sentiment analysis for financial distress prediction. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08470-8]
43. Olatinwo DD, Abu-Mahfouz A, Hancke G, Myburgh H. IoT-Enabled WBAN and Machine Learning for Speech Emotion Recognition in Patients. Sensors (Basel) 2023;23:2948. [PMID: 36991659; PMCID: PMC10056097; DOI: 10.3390/s23062948]
44. Aspect-Based Sentiment Analysis of Customer Speech Data Using Deep Convolutional Neural Network and BiLSTM. Cognit Comput 2023. [DOI: 10.1007/s12559-023-10127-6]
45. van Rijn P, Larrouy-Maestri P. Modelling individual and cross-cultural variation in the mapping of emotions to speech prosody. Nat Hum Behav 2023;7:386-396. [PMID: 36646838; PMCID: PMC10038802; DOI: 10.1038/s41562-022-01505-5]
46. Xia W, Zhang Y, Yang Y, Xue JH, Zhou B, Yang MH. GAN Inversion: A Survey. IEEE Trans Pattern Anal Mach Intell 2023;45:3121-3138. [PMID: 37022469; DOI: 10.1109/tpami.2022.3181070]
47. Mustaqeem, El Saddik A, Alotaibi FS, Pham NT. AAD-Net: Advanced end-to-end speech signal system for human emotion detection & recognition using attention-based deep echo state network. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110525]
48. Ahmad M, Sanawar S, Alfandi O, Qadri SF, Saeed IA, Khan S, Hayat B, Ahmad A. Facial expression recognition using lightweight deep learning modeling. Math Biosci Eng 2023;20:8208-8225. [PMID: 37161193; DOI: 10.3934/mbe.2023357]
49. Pucci F, Fedele P, Dimitri GM. Speech emotion recognition with artificial intelligence for contact tracing in the COVID-19 pandemic. Cogn Comput Syst 2023. [DOI: 10.1049/ccs2.12076]
50. Leung FYN, Stojanovik V, Micai M, Jiang C, Liu F. Emotion recognition in autism spectrum disorder across age groups: A cross-sectional investigation of various visual and auditory communicative domains. Autism Res 2023;16:783-801. [PMID: 36727629; DOI: 10.1002/aur.2896]
Page 1 of 4.
© 2004-2024 Baishideng Publishing Group Inc. All rights reserved. 7041 Koll Center Parkway, Suite 160, Pleasanton, CA 94566, USA