• List of Articles: Emotion Recognition

      • Open Access Article

        1 - Emotion Recognition of Speech Signals Based on Filter Methods
        Narjes Yazdanian, Hamid Mahmoodian
        Abstract: Speech is the basic means of communication among human beings. With the increase of interaction between humans and machines, the necessity of automatic dialogue and removing the human factor has been considered. The aim of this study was to determine a set of affective features of the speech signal that are based on emotions. In this study, a system was designed that includes three main sections: feature extraction, feature selection, and classification. After extraction of useful features such as mel-frequency cepstral coefficients (MFCC), linear prediction cepstral coefficients (LPC), perceptual linear prediction coefficients (PLP), formant frequency, zero crossing rate, cepstral coefficients, pitch frequency, mean, jitter, shimmer, energy, minimum, maximum, amplitude, and standard deviation, at a later stage filter methods such as the Pearson correlation coefficient, t-test, relief, and information gain were used to rank and select effective features for emotion recognition. The results are then given to the classification system as a subset of the input. In the classification stage, a multi-class support vector machine is used to classify seven types of emotion. According to the results, the relief method, together with the multi-class support vector machine, achieves the highest classification accuracy, with an emotion recognition rate of 93.94%.
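The filter-ranking step described above can be sketched in a few lines. This is not the authors' implementation; it is a minimal illustration of one of the four filter methods (the Pearson correlation coefficient), assuming a feature matrix `X` and a numeric label vector `y`:

```python
import numpy as np

def pearson_rank(X, y):
    """Rank features by |Pearson correlation| with the labels.

    X: (n_samples, n_features) feature matrix; y: (n_samples,) labels.
    Returns feature indices sorted from most to least correlated.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    # correlation of each feature column with y
    num = Xc.T @ yc
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    r = num / den
    return np.argsort(-np.abs(r))

# Toy example: feature 1 tracks the label, feature 0 is pure noise.
rng = np.random.default_rng(0)
y = np.array([0, 0, 1, 1, 0, 1, 1, 0], dtype=float)
X = np.column_stack([rng.normal(size=8), y + 0.1 * rng.normal(size=8)])
order = pearson_rank(X, y)  # feature 1 should rank first
```

Keeping only the top-ranked columns of `X` then yields the reduced subset that is passed to the classifier.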
      • Open Access Article

        2 - Speech Emotion Recognition Using a Combination of Transformer and Convolutional Neural networks
        Yousef Pourebrahim, Farbod Razzazi, Hossein Sameti
        Speech emotion recognition has attracted many researchers in recent years due to its various applications, alongside the extension of deep neural network training methods and their widespread usage. In this paper, the application of convolutional and transformer networks in a new combination for the recognition of speech emotions has been investigated, which is easier to implement than existing methods and has good performance. For this purpose, basic convolutional neural networks and transformers are introduced, and then, based on them, a new model resulting from the combination of convolutional networks and transformers is presented, in which the output of the basic convolutional network is the input of the basic transformer network. The results show that transformer neural networks perform better than the convolutional neural network-based method in recognizing some emotional categories. This paper also shows that simple neural networks used in combination can give better performance in recognizing emotions through speech. In this regard, recognition of speech emotions using a combination of convolutional neural networks and a transformer, called the convolutional-transformer (CTF), achieved an accuracy of 80.94% on the RAVDESS dataset, while a simple convolutional neural network achieved an accuracy of about 72.7%. The combination of simple neural networks can not only increase recognition accuracy but also reduce training time and the need for labeled training samples.
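The data flow of the combined model, a convolutional stage whose output feeds a transformer-style attention stage, can be illustrated with a toy NumPy sketch. This is not the CTF model from the paper; the convolution, single-head attention, and output layer below use random weights and hypothetical dimensions (13 MFCC inputs, 7 emotion classes) purely to show the shapes involved:

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1-D convolution: x is (T, C_in), kernels is (K, C_in, C_out)."""
    K, _, c_out = kernels.shape
    T = x.shape[0] - K + 1
    out = np.zeros((T, c_out))
    for t in range(T):
        out[t] = np.einsum('kc,kco->o', x[t:t + K], kernels)
    return np.maximum(out, 0.0)  # ReLU

def self_attention(x):
    """Single-head scaled dot-product self-attention (identity projections)."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # row-wise softmax
    return w @ x

rng = np.random.default_rng(1)
frames = rng.normal(size=(100, 13))                        # 100 frames of 13 MFCCs
feat = conv1d(frames, rng.normal(size=(5, 13, 32)) * 0.1)  # local patterns
ctx = self_attention(feat)                                 # global context over frames
logits = ctx.mean(axis=0) @ rng.normal(size=(32, 7))       # one score per class
```

The key design point from the abstract is visible here: the convolution captures local frame patterns, and attention then relates frames across the whole utterance before pooling.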
      • Open Access Article

        3 - Evaluation of Deep Neural Networks in Emotion Recognition Using Electroencephalography Signal Patterns
        Azin Kermanshahian, Mahdi Khezri
        In this study, the design of a reliable detection system that is able to identify different emotions with the desired accuracy has been considered. To reach this goal, two different structures for the emotion recognition system are considered to identify emotional states: 1) using linear and non-linear features of the electroencephalography (EEG) signal along with common classifiers, and 2) using the EEG signal in a deep learning structure. To design the system, the EEG signals of the DEAP database, which were recorded from 32 subjects while they watched emotional videos, were used. After preparation and noise removal, linear and non-linear features such as skewness, kurtosis, Hjorth parameters, Lyapunov exponent, Shannon entropy, correlation and fractal dimensions, and time reversibility were extracted from the alpha, beta, and gamma subbands of the EEG signals. Then, according to structure 1, the features were applied as inputs to common classifiers such as the decision tree (DT), k-nearest neighbor (kNN), and support vector machine (SVM). In structure 2, the EEG signal was considered as the input of a convolutional neural network (CNN). The goal is to evaluate the results of deep learning networks against other methods for emotion recognition. According to the obtained results, the SVM achieved the best performance for identifying four emotional states, with 94.1% accuracy. The proposed CNN identified the desired emotional states with an accuracy of 86%. Deep learning methods are superior to simple classifiers because they do not require extraction of signal features and are resistant to different noises. Using a shorter time period of the signals and performing near-optimal preprocessing and conditioning can further improve the results of deep neural networks.
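A few of the hand-crafted features named above (skewness, kurtosis, and the Hjorth parameters) have standard definitions and can be computed directly. The sketch below, run on a synthetic one-channel signal, only illustrates structure 1's feature-extraction step; it is not the authors' code:

```python
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def moments(x):
    """Skewness and excess kurtosis of a 1-D signal."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3), np.mean(z ** 4) - 3.0

# Synthetic stand-in for one EEG channel: 10 Hz alpha-like tone plus noise.
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10 * np.linspace(0, 1, 256)) + 0.1 * rng.normal(size=256)
skew, kurt = moments(eeg)
act, mob, comp = hjorth(eeg)
features = np.array([skew, kurt, act, mob, comp])  # one channel's feature vector
```

In the paper's pipeline, a vector like this (extended with the remaining features) would be computed per subband and passed to the DT, kNN, or SVM classifier.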
      • Open Access Article

        4 - Wavelet Packet Entropy in Speaker-Independent Emotional State Detection from Speech Signal
        Mina Kadkhodaei Elyaderani, Hamid Mahmoodian, Ghazaal Sheikhi
        In this paper, wavelet packet entropy is proposed for speaker-independent emotion detection from speech. After pre-processing, the wavelet packet decomposition using wavelet type db3 at level 4 is computed, and the Shannon entropy in its nodes is calculated to be used as features. In addition, prosodic features such as the first four formants, jitter (pitch deviation amplitude), and shimmer (energy variation amplitude), besides MFCC features, are applied to complete the feature vector. Then, a support vector machine (SVM) is used to classify the vectors in a multi-class (all emotions) or two-class (each emotion versus the normal state) format. 46 different utterances of a single sentence from the Berlin Emotional Speech Dataset are selected. These are uttered by 10 speakers in sadness, happiness, fear, boredom, anger, and the normal emotional state. Experimental results show that the proposed features can improve emotional state detection accuracy in the multi-class situation. Furthermore, adding wavelet entropy coefficients to the other features increases the accuracy of two-class detection for anger, fear, and happiness.
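The entropy-feature computation can be sketched as follows. For brevity this uses the Haar wavelet rather than the paper's db3, and a synthetic frame instead of real speech; it only illustrates the idea of taking the Shannon entropy of each terminal node of a level-4 wavelet packet tree:

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_packet_nodes(x, level):
    """Full wavelet packet tree: both halves of every node are split again."""
    nodes = [x]
    for _ in range(level):
        nxt = []
        for n in nodes:
            a, d = haar_step(n)
            nxt.extend([a, d])
        nodes = nxt
    return nodes  # 2**level terminal nodes

def shannon_entropy(c):
    """Shannon entropy of the normalized squared coefficients of one node."""
    p = c ** 2 / np.sum(c ** 2)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
frame = rng.normal(size=256)  # stand-in for one windowed speech frame
entropies = [shannon_entropy(n) for n in wavelet_packet_nodes(frame, 4)]
# 2**4 = 16 entropy values, one per terminal node, to append to the feature vector
```

Because the decomposition is full (both the approximation and detail branches are split at every level), a level-4 tree always yields 16 entropy features per frame.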
      • Open Access Article

        5 - Computational Intelligence Methods for Facial Emotion Recognition: A Comparative Study
        Fatemeh Shahrabi Farahani, Mansour Sheikhan
      • Open Access Article

        6 - Bionic Wavelet Transform Entropy in Speaker-Independent and Context-Independent Emotional State Detection from Speech Signal
        Mina Kadkhodaei Elyaderani, Hamid Mahmoodian
      • Open Access Article

        7 - A Comparison of Recognition of Facial Emotion Expressions in Children with Autism Spectrum Disorder and Normal Children
        Mahsa Ahadian, Hamid Poursharifi, Layli Panaghi
        Typically, children recognize other people’s emotional expressions through an emotional transmission process that is largely automatic. Happiness, sadness, fear, and anger are some of the main internal emotions which can be directly perceived from facial expressions. Children’s reactions to these emotions demonstrate their ability to recognize and interpret them. Research evaluating deficits in the emotional recognition of autistic children has led to conflicting results. The aim of this study is to compare the recognition of six basic facial expressions and reaction times in children with and without autism spectrum disorder. For this purpose, using a comparative research design, 20 children with autism spectrum disorder, selected by a convenience sampling method, were compared with 20 typically developing age-matched control subjects on facial emotion recognition and reaction time. Multivariate analysis of covariance results showed significant differences between the two groups in reaction time and recognition of emotions. It was revealed that the performance of autistic children in facial emotion recognition was slower and less accurate than that of normal children. This study confirms that children with autism spectrum disorder have a deficit in recognizing emotional expressions in faces.