Predicting Depression and Emotions in the Cross-Roads of Cultures, Para-Linguistics, and Non-Linguistics

Date
2019
Publisher
Association for Computing Machinery (ACM)
Green Open Access
No
Publicly Funded
No
Abstract
Cross-language, cross-cultural emotion recognition and accurate prediction of affective disorders are two of the major challenges in affective computing today. In this work, we compare several systems for the Detecting Depression with AI Sub-challenge (DDS) and the Cross-cultural Emotion Sub-challenge (CES), organized as part of the Audio-Visual Emotion Challenge (AVEC) 2019. For both sub-challenges, we build on the challenge baselines while introducing our own features and regression models. For the DDS challenge, where ASR transcripts are provided by the organizers, we propose simple linguistic and word-duration features. These ASR-transcript-based features are shown to outperform the state-of-the-art audio-visual features for this task, reaching a test set Concordance Correlation Coefficient (CCC) of 0.344 compared to the challenge baseline of 0.120. Our results show that the non-verbal parts of the signal are important for depression detection, and that combining them with linguistic information yields the best results. For CES, the proposed systems using unsupervised feature adaptation outperform the challenge baselines on the emotional primitives, reaching test set CCCs of 0.466 and 0.499 for arousal and valence, respectively.
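For context, the Concordance Correlation Coefficient used to report these results combines Pearson correlation with a penalty for mean and variance mismatch between predictions and gold annotations. The following minimal NumPy sketch shows the standard CCC computation; it only illustrates the metric and is not the challenge's official evaluation code.

```python
import numpy as np

def ccc(gold, pred):
    """Concordance Correlation Coefficient between gold labels and predictions."""
    gold = np.asarray(gold, dtype=float)
    pred = np.asarray(pred, dtype=float)
    mean_g, mean_p = gold.mean(), pred.mean()
    var_g, var_p = gold.var(), pred.var()
    # Covariance between gold and predicted values (biased estimate, matching np.var's default).
    cov = np.mean((gold - mean_g) * (pred - mean_p))
    return 2.0 * cov / (var_g + var_p + (mean_g - mean_p) ** 2)

# Example: perfect agreement gives CCC = 1.0.
print(ccc([0.1, 0.4, 0.7], [0.1, 0.4, 0.7]))
```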
Keywords
Affective Computing, Depression Severity Prediction, PTSD, Cross-Cultural Emotion Recognition
Fields of Science
0202 electrical engineering, electronic engineering, information engineering, 02 engineering and technology
WoS Q
N/A
Scopus Q
N/A

OpenCitations Citation Count
25
Source
9th International on Audio/Visual Emotion Challenge and Workshop (AVEC), 21 October 2019, Nice, France
Start Page
27
End Page
35
PlumX Metrics
Citations
CrossRef: 26
Scopus: 40
Captures
Mendeley Readers: 52