
Dataset Information


Vector learning representation for generalized speech emotion recognition


ABSTRACT: Highlights
• A verify-to-classify framework was designed to improve generalization and overall performance.
• The implemented verify-to-classify framework works well in both verification (in-domain) and recognition (out-of-domain).
• Our softmax with angular prototypical loss works well with emotion vectors and helps improve classification performance.

Speech emotion recognition (SER) plays an important role in global business today by improving service efficiency. In the SER literature, many techniques use deep learning to extract and learn features. Recently, we proposed end-to-end learning for a deep residual local feature learning block (DeepResLFLB). The advantages of end-to-end learning are low engineering effort and less hyperparameter tuning; nevertheless, this learning method is prone to overfitting. This paper therefore describes a "verify-to-classify" framework applied to vectors learned from the feature space of emotional information. The framework consists of two parts: speech emotion learning and recognition. Speech emotion learning comprises two steps, speech emotion verification enrolled training and prediction; residual learning (ResNet) with a squeeze-excitation (SE) block serves as the core component of both steps, extracting emotional state vectors and building an emotion model during the enrolled training step. The in-domain pre-trained weights of the trained emotion model are then transferred to the prediction step. As a result of speech emotion learning, the accepted model, validated by equal error rate (EER), is transferred to speech emotion recognition as out-of-domain pre-trained weights, ready for classification using a classical ML method. In this setting, a loss function suited to emotional vectors is important.
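The squeeze-excitation (SE) recalibration named above as the core component can be sketched as follows. This is a minimal numpy illustration of the generic SE mechanism (squeeze by global average pooling, excitation by a bottleneck MLP with sigmoid gating), not the authors' implementation; the weight shapes and reduction ratio are standard SE conventions, not taken from the paper.

```python
import numpy as np

def squeeze_excitation(feature_map, w1, w2):
    """Squeeze-and-excitation channel recalibration (minimal sketch).

    feature_map: (C, H, W) conv activations.
    w1: (C // r, C) and w2: (C, C // r) are the two FC layers of the
    excitation bottleneck, with r the reduction ratio (an assumption
    here; the paper does not specify these shapes).
    """
    # squeeze: global average pooling -> one descriptor per channel
    z = feature_map.mean(axis=(1, 2))            # (C,)
    # excitation: bottleneck MLP, ReLU then sigmoid gating
    s = np.maximum(w1 @ z, 0.0)                  # (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))       # (C,), each in (0, 1)
    # scale: reweight each channel of the feature map by its gate
    return feature_map * gate[:, None, None]
```

Each channel is attenuated by a learned gate in (0, 1), letting the residual blocks emphasize emotion-relevant channels.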
Here, two loss functions were proposed: angular prototypical loss and softmax with angular prototypical loss. Experiments were based on two publicly available datasets, Emo-DB and RAVDESS, covering both high- and low-quality recording environments. The experimental results show that our proposed method can significantly improve generalized performance and yield explainable emotion results when evaluated by standard metrics: EER, accuracy, precision, recall, and F1-score.

Keywords: Speech emotion recognition; Residual squeeze excitation network; Normalized log mel spectrogram; Speech emotion verification; Verify-to-classify framework; Softmax with angular prototypical loss; Cross environment; End-to-end learning
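The angular prototypical loss mentioned above can be sketched as follows. This is a minimal numpy illustration under standard assumptions (prototype = mean of the support utterances per class, cosine similarity scaled by a learnable weight and bias, softmax cross-entropy over classes); the scale and bias values are placeholders, not the paper's settings.

```python
import numpy as np

def angular_prototypical_loss(embeddings, w=10.0, b=-5.0):
    """Angular prototypical loss (minimal sketch).

    embeddings: (N, M, D) -- N emotion classes, M utterances each,
    D-dimensional emotion vectors. The last utterance of each class
    is the query; the prototype is the mean of the remaining M - 1.
    w, b are the learnable scale and bias (values here are assumed).
    """
    queries = embeddings[:, -1, :]                    # (N, D)
    prototypes = embeddings[:, :-1, :].mean(axis=1)   # (N, D)

    # cosine similarity between every query and every prototype
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = w * (q @ p.T) + b                        # (N, N)

    # softmax cross-entropy; the matching class sits on the diagonal
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

The softmax-with-angular-prototypical variant would add a conventional softmax classification term to this quantity; well-separated class embeddings drive the loss toward zero.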

SUBMITTER: Singkul S 

PROVIDER: S-EPMC9280549 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC9377622 | biostudies-literature
| S-EPMC9138108 | biostudies-literature
| S-EPMC9571288 | biostudies-literature
| S-EPMC9049856 | biostudies-literature
| S-EPMC4066940 | biostudies-other
| S-EPMC8589823 | biostudies-literature
| S-EPMC4184843 | biostudies-literature
| S-EPMC6236866 | biostudies-literature
| S-EPMC4482447 | biostudies-other
| S-EPMC9523358 | biostudies-literature