Journal of Computer & Robotics 18 (2), Summer and Autumn 2025, 53-59
Recognition of EEG Signal Patterns for Emotion Identification Using Feature Learning Methods
Malihe Mohamadi a, Amir Masoud Eftekhari Moghadam a, *
a Faculty of Computer and Information Technology Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
Received 12 August 2019, Accepted 01 October 2024.
Abstract
Emotions play an important role in daily human life; hence, the need to recognize feelings in order to improve human-computer communication has increased. Recognizing emotions from EEG signals, which reflect people's internal emotional state, is particularly important compared to other methods. One modern approach to emotion detection is the use of electroencephalography (EEG) signals. Using signal processing techniques and feature learning methods, the patterns in the recorded signals are examined. A new method for improving emotion recognition is presented in this paper. The paper explores how emotion recognition accuracy depends on the frequency band and the number of EEG channels used, and extracts recognizable patterns from the signals. The proposed method relies on the brain's alpha waves and on extracting and characterizing features of the recorded signals in an attempt to improve emotion recognition. After recording and preprocessing, features are extracted with two methods, PCA and PSD, and the signals are classified with a decision tree (DT) classifier. The proposed algorithm was evaluated on recordings from 10 people watching 2 videos, 4 happy images and 4 sad images. The results obtained from 6 electrodes provide an acceptable level of accuracy. Despite the reduced number of electrodes and the reduced processing, an accuracy of 88.73% is achieved in recognizing happiness and 86.31% in recognizing sadness.
Keywords: EEG Signal, Feature Learning, Emotion Recognition, Decision Tree, Pattern Recognition
1. Introduction
* Corresponding author. E-mail: eftekhari@qiau.ac.ir
Emotions are formed in the human brain only after the body has reacted [1].
Regarding emotion, the functions of the two brain hemispheres differ [2]. When a person feels happy, the left hemisphere is more active, and when a person feels sad, the right hemisphere becomes more active; in both cases the occipital and frontal lobes are involved. The emotions include six basic feelings of happiness, sadness, anger, fear, disgust and surprise [3], which were introduced by Ekman [4] and can be placed on a two-dimensional plane. These emotions are located on a two-dimensional model based on arousal level (high/low) and valence (positive/negative). The two-dimensional model is shown in Fig 1.
Fig 1: Arousal / Valence Two-dimensional emotion model
• Cerebral cortex
The cerebral cortex is responsible for many high-level functions such as problem solving, language perception, and processing complex visual information. It can be divided into different areas that are responsible for different functions; these areas are described in Table 1.
This layer is made up of brain nerve cells. Its thickness varies across different areas of the brain but is about 2 to 4 mm almost everywhere. At the cellular level, the cerebral cortex consists of 6 layers stacked on top of each other. However, the thickness of each layer varies in different areas of the cortex, and in some areas of the brain some of these layers may be absent. The cerebral cortex is responsible for all voluntary human behaviors, and human cognitive behaviors also originate from this organ.
Table 1: Cortical areas and their function
Cortical area | function |
Auditory Association Area | Complex processing of auditory information |
Auditory Cortex | Detection of sound quality (loudness, tone) |
Broca's Area | Speech production and articulation |
Prefrontal Cortex | Problem Solving, Emotion, Complex Thought |
Motor Association Cortex | Coordination of complex movement |
Primary Motor Cortex | Initiation of voluntary movement |
Primary Somatosensory Cortex | Receives tactile information from the body |
Sensory Association Area | Processing of multisensory information |
Visual Association Area | Complex processing of visual information |
Wernicke's Area | Language comprehension |
• Detection of brain signals
In 1875, the English physician Richard Caton discovered the existence of electrical potentials on the surface of the cortex in animals such as rabbits and monkeys. He also reported that when light was shone into the animal's eye, changes occurred in the potential of the opposite hemisphere. Similar research was conducted in Russia and Finland during the same years [5]. However, the German physician and psychiatrist Hans Berger was the first to record human brain signals. After learning about the results of Caton's research on animals, he focused his research on humans.
Over the following years, Berger continued recording to make sure that what he observed was not due to artifacts produced by the bloodstream or the scalp, until finally, in 1929, he wrote [6]:
"EEG is a curve with continuous oscillation that can detect the existence of type I waves with an average period of 90 ms and type II waves with a smaller amplitude and an average period of 35 ms. Oscillations with a maximum range of 150-200 microvolts have been measured."
2. Related Work
Today, the study of emotions in human-computer interaction has increased. Many methods have been used to identify emotions, and one of the proposed methods is the use of brain signals. Using EEG, more acceptable results can be achieved. In recent years, researchers have increasingly used EEG signals to recognize emotions because they are reliable. The EEG signal is inherently non-stationary [7]. Because EEG data are high-dimensional and complex, large data sets are needed to train feature learning methods for EEG analysis and classification [8]. However, feature learning and classification outcomes are often not good enough.
Rafael Ramirez et al. (2012) [9] used two techniques, LDA for feature learning and SVM for classification, and attempted to detect emotions from EEG signals recorded while subjects watched happy and sad images. In this method, the classification accuracy for the feelings of happiness and sadness was 83.33% and 86.35%, respectively.
Li et al. (2015) [10] used a deep feature learning method based on DBNs to investigate the extraction of emotional features beyond traditional models. Li et al. (2017) [11] proposed a combined KNN and HCNN model for three emotional states: positive, negative and neutral. By locating the stimulated points of the brain and classifying these points according to the introduced emotions with an SVM classifier, they reached an accuracy of 75.21% for emotion recognition across the three general conditions.
Lin et al. (2010) [12] used machine learning algorithms to dynamically classify EEG signals recorded while listening to music, in order to obtain the emotional state. For the classification of the four emotional states of happiness, sadness, anger and pleasure, studying 30 subjects and using SVM, an average classification accuracy of 82.29% was obtained.
The purpose of this study is to find a proper relationship between EEG signals and emotions using feature learning methods and a decision tree classifier, in order to obtain a more appropriate classification accuracy. Figure 2 shows the different steps of our approach. Section 3 presents the methodology of this research, Section 4 describes the data collection process and the experimental results, and Section 5 concludes and outlines further research.
Fig 2: Flowchart of Steps to record and extract emotional state
3. Methodology
A. Brain frequency: Brain waves have different types and functions and are divided into different types in terms of frequency. All these waves exist at all times, but under different conditions a certain wave dominates the others [13]. The different brain waves and their frequencies are the Delta wave (0.5-3 Hz), Theta wave (4-7 Hz), Alpha wave (8-12 Hz), Beta wave (13-30 Hz), and Gamma wave (31-50 Hz).
Fig 3: Brain Wave Model
According to the experiments performed in this study, the alpha band is considered the most suitable frequency range for finding emotions.
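As an illustration, a minimal sketch of how the alpha band can be isolated from a recorded channel is given below. It uses SciPy, which is not part of the original EEGLAB-based workflow, and assumes a hypothetical 128 Hz sampling rate; the band limits follow the list given above, with the beta band taken as 13-30 Hz.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # hypothetical sampling rate (Hz); the actual rate depends on the device

# EEG frequency bands as listed above (beta assumed to be 13-30 Hz).
BANDS = {
    "delta": (0.5, 3), "theta": (4, 7), "alpha": (8, 12),
    "beta": (13, 30), "gamma": (31, 50),
}

def bandpass(signal, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter for one EEG channel."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

# Example: keep only the alpha band of a simulated 10-second channel.
eeg_channel = np.random.randn(10 * FS)          # placeholder signal
alpha_wave = bandpass(eeg_channel, *BANDS["alpha"])
```

A zero-phase filter (filtfilt) is used so that filtering does not shift the signal in time.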
B. Electrode location: The international 10-20 system is a well-known method for describing the placement of scalp electrodes in EEG studies. It is a standard that ensures the results of studies can be collected, replicated, analyzed effectively and compared scientifically with other work [14]. This system is based on the relationship between the location of an electrode and the underlying area of the brain [15], especially the cerebral cortex. The position of the electrodes in the 10-20 system is shown in Fig 4.
Fig 4: Position of electrodes in the 10-20 system
In this research, three pairs of electrodes were used for recording signals. The electrode locations were the standard Fp1, Fp2, T7, T8, O1 and O2 positions of the 10-20 EEG recording system.
After examining brain function, feature learning and the role of emotions in human life, we look for a better way to identify and classify emotions based on brain signals. From previous work we know that classifying emotions is difficult and that the number of electrodes used varies from 14 to 32. In this paper we increase the processing speed by reducing the number of electrodes and the processing time, by changing the classification method and using feature learning methods.
Fig 5: processing in emotion
In research on emotion in brain signals, all five brain waves are usually examined. Examining all 5 brain waves requires more time and processing to obtain the classification result for the desired signal. In this research, we have achieved better results using only alpha waves and processing this wave alone; in addition, we have shortened the processing and execution time. As a result, the alpha band is considered the best frequency range for finding emotions in this work. Stimuli are used during signal recording to elicit an appropriate response in the recorded signals.
In this work, images and videos labeled by other people were used to examine the feelings of happiness and sadness; those people did not participate in the main experiment. The signals received in each part were recorded using the EEGLAB toolbox, and the preprocessing and noise removal steps mentioned above were applied to them. The final signals were processed with PCA and PSD to select the desired features of each signal. Classification of the feelings of happiness and sadness was obtained using the DT classifier. Each data set used 70% of the samples for training and 30% for testing, so that the data are well examined and classified at the different stages and the resulting accuracy is acceptable.
C. Preprocessing: The first step in data processing is noise elimination. Noise occurs for various reasons, such as head movement or blinking. To remove this noise, a high-pass filter with a 1 Hz cutoff is applied [16]. Power-line interference also adds noise to the signals, and a 50 Hz notch filter removes it [17]. Noisy segments appear as sudden increases in amplitude, which can be clearly seen; these parts of the data are removed from all selected channels. Because of the position of the electrodes on the forehead, blinking creates artifacts, and an EOG filter is used to eliminate blink noise.
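A minimal preprocessing sketch of the two filtering steps described above (1 Hz high-pass and 50 Hz notch) is shown below, again using SciPy and a hypothetical 128 Hz sampling rate; blink/EOG artifact removal is left to dedicated tools such as EEGLAB and is not shown.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 128  # hypothetical sampling rate (Hz)

def preprocess(eeg, fs=FS):
    """Apply a 1 Hz high-pass and a 50 Hz notch filter channel-wise."""
    # 1 Hz high-pass removes slow drifts caused by movement or sweating.
    b_hp, a_hp = butter(4, 1.0, btype="highpass", fs=fs)
    eeg = filtfilt(b_hp, a_hp, eeg, axis=-1)
    # 50 Hz notch removes power-line interference.
    b_n, a_n = iirnotch(w0=50.0, Q=30.0, fs=fs)
    return filtfilt(b_n, a_n, eeg, axis=-1)

raw = np.random.randn(6, 60 * FS)   # 6 electrodes, 60 s placeholder recording
clean = preprocess(raw)
```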
To better analyze the signals before processing, the wavelet transform is used to resolve resolution problems in the time-frequency domain [18]. The continuous wavelet transform maps a continuous function of time into time-frequency space; the bases of the new space are wavelet functions. In mathematics, the continuous wavelet transform of a square-integrable function x(t), at scale a > 0 and position b ∈ R, is defined as:
X_w(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} x(t)\, \Psi^{*}\!\left(\frac{t-b}{a}\right) dt \qquad (1)
where Ψ(t) is a function continuous in both time and frequency, known as the mother wavelet, and * denotes complex conjugation. The wavelet transform provides better results and is used for analyzing time-series data.
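Equation (1) can be evaluated numerically; the sketch below uses the PyWavelets package (an assumed dependency, not used in the original work) with a Morlet mother wavelet, choosing scales whose pseudo-frequencies cover the alpha band.

```python
import numpy as np
import pywt

FS = 128  # hypothetical sampling rate (Hz)
signal = np.random.randn(10 * FS)      # placeholder single-channel EEG

# Scales chosen so that the corresponding pseudo-frequencies
# roughly cover the alpha band (8-12 Hz) for the 'morl' wavelet.
frequencies = np.linspace(8, 12, 20)
scales = pywt.central_frequency("morl") * FS / frequencies

coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / FS)
# coeffs has shape (len(scales), len(signal)): a time-frequency map.
```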
D. Algorithms: In this paper, in order to use the most important features, after removing the noise and applying the wavelet functions, two well-known methods, PSD and PCA, were used to extract the features. To classify the emotional state after obtaining appropriate features, a decision tree is used and the classification is evaluated.
E. Principal component analysis: PCA is a statistical method with many applications, among them dimensionality reduction, face recognition, and finding suitable patterns. The algorithm is very popular for processing data collected for signal processing. In mathematical terms, principal component analysis is an orthogonal linear transformation that takes the data to a new coordinate system such that the largest variance lies on the first coordinate axis, the second largest variance on the second axis, and so on for the remaining components. For a data matrix X^T with zero empirical mean, in which each row is a set of observations and each column corresponds to one variable, the principal component transformation is defined as follows:
\mathbf{Y} = \mathbf{X}^{T}\mathbf{W} = \mathbf{U}\boldsymbol{\Sigma} \qquad (2)
where X^T = UΣW^T is the singular value decomposition of the matrix X^T and W is the matrix of principal directions. Based on the definition of principal component analysis, the purpose of this transformation is to map the data set X with dimension M to the data set Y with dimension L. It is therefore assumed that the matrix X consists of the vectors x_1, ..., x_N, each placed in a column of the matrix; according to the dimension of the vectors (M), the data matrix is M × N. After dimensionality reduction by PCA, band-power features of the EEG signals are computed using the PSD, and the features are normalized using formula (3):
x_{\text{norm}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \qquad (3)
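A sketch of the feature pipeline described above is given below: Welch PSD band power per channel, PCA reduction from 14 to 11 features (the counts reported in Section 4), and min-max normalization as the assumed form of Eq. (3). The epoch count, channel count and sampling rate are placeholders, not values from the original experiment.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA

FS = 128  # hypothetical sampling rate (Hz)

def band_power(epochs, fs=FS, band=(8, 12)):
    """Welch PSD averaged over a frequency band, per epoch and channel."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)          # shape: (n_epochs, n_channels)

def minmax_normalize(x):
    """Assumed form of Eq. (3): scale each feature to the [0, 1] range."""
    mn, mx = x.min(axis=0), x.max(axis=0)
    return (x - mn) / (mx - mn + 1e-12)

# Placeholder epochs: 80 trials, 14 feature channels, 10 s each.
epochs = np.random.randn(80, 14, 10 * FS)
bp = band_power(epochs)                              # alpha band power, (80, 14)
reduced = PCA(n_components=11).fit_transform(bp)     # 14 -> 11 features
features = minmax_normalize(reduced)
```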
F. Classification: At this stage, an appropriate classification algorithm is used to improve the accuracy of emotion recognition, taking as input the feature vector obtained from the combination of the PCA and PSD methods. The proposed method of this paper uses the decision tree (DT) algorithm for classification: the selected features are delivered to the DT classifier (a minimal classification sketch follows the list below). Among decision support tools, decision trees and decision diagrams have the following advantages:
1. Simple to understand: anyone can learn how to work with a decision tree with little study and training.
2. Working with large and complex data: while remaining simple, the decision tree can easily handle complex data and make decisions on it.
3. Easy reuse: once a decision tree is built for a problem, different instances of that problem can be evaluated with the same tree.
4. Ability to combine with other methods: the decision tree can be combined with other decision-making techniques to obtain better results.
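The classification sketch referred to above, assuming the normalized PCA/PSD feature matrix from the previous step, a scikit-learn decision tree, and the 70/30 train/test split described in Section 3:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Placeholder features/labels; in practice these come from the PCA/PSD stage.
X = np.random.randn(80, 11)
y = np.random.randint(0, 2, size=80)   # 0 = sad, 1 = happy

# 70% of the epochs for training, 30% for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0, stratify=y)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```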
We examined the signals after feature extraction in EEGLAB. Figure 6 shows seconds five to ten of the recording from the six electrodes in the EEGLAB toolbox. According to this figure, during these five seconds, after the event and the presentation of the stimulus, the brain responded; given the position of the signal and the brain's response time, the changes in the recorded signals indicate the response to the stimulus.
Fig 6: Seconds five to ten of the signal recorded by six electrodes in the EEGLAB toolbox
4. Experiments and Results
In this experiment, labeled images and videos were used to elicit the emotional states of happiness and sadness. Two emotional states are examined: happiness, with positive valence and high arousal, and sadness, with negative valence and low arousal. In each of these two conditions, four images and one video are used to stimulate the emotion. For labeling, 20 images and 5 videos of about 3 minutes each were initially considered for each feeling. The images and videos were displayed to 10 people aged 19 to 35. Each video and image was then given a score from 1 to 5 using a questionnaire, and the four images and one video with the highest scores were selected as stimuli for each of the emotions of happiness and sadness.
The people who performed the labeling did not take part in the signal recording test. For the recording test, 10 people aged 20 to 35 (6 men and 4 women), all in healthy mental and physical condition, were selected, and the same laboratory environment was used for all subjects. Each person sat in the room for 5 minutes before the experiment to get used to the BCI headset and the environment. After the participant announced their readiness, the sad video was played first, followed by an empty screen for 5 seconds, and then the sad-labeled pictures, each for 10 seconds with 2-second intervals. Nothing was played for one minute, and then the happiness-labeled pictures and video were played in the same way.
We examined the signals obtained after preprocessing and processing, using the DT classifier, in the EEGLAB toolbox. According to the images in Fig 7, the mean topography of the emotions of happiness and sadness involves the posterior and frontal regions of the brain. Based on this observation, using three pairs of electrodes at the O1-O2, T7-T8 and Fp1-Fp2 positions, we achieved a better result.
Fig 7: Stimulation of occipital lobe and frontal lobe during emotion detection
The recorded signals were first preprocessed and de-noised using the wavelet functions and the EEGLAB toolbox. Applying PCA to the preprocessed data reduced the dimensionality from 14 features to 11. The data were then converted to the frequency domain using the PSD so that they could be processed there; the result is shown in Fig 8.
Fig 8: Recorded signal displayed graphically
According to the resulting signal displays, the posterior and frontal parts of the brain are involved in each of the emotions. In Figure 9, part (a) shows the functions and topography of the feeling of happiness, and part (b) shows those of the feeling of sadness. In both spectra, the blue and red colors indicate greater activity in the highlighted regions.
Fig 9: (a) The topographic functions of happiness
Fig 9: (b) The topographic functions of sadness
In this experiment, by identifying the signals of happiness and sadness for each person more accurately, we achieved a higher accuracy than the SVM classifier. By reducing the number of electrodes, and thus the processing carried out in each step, the recognition process is faster in addition to the better identification of each signal. Comparing the previous methods with the method proposed in this article in Table 3, the improvement in emotion detection performance can be seen.
The signals were examined in EEGLAB after feature extraction, and the accuracy of detecting happiness and sadness for each of the 10 individuals is shown in Table 2.
According to this table, an average classification accuracy of 88.73% was obtained for happiness and 86.31% for sadness.
Table 2: Per-subject accuracy obtained from the DT classifier
Emotion | 1 | 2 | 3 | 4 | 5 |
Sad | 90.66% | 87.19% | 81.8% | 87.28% | 84.89% |
Happy | 93.3% | 89.22% | 83.98% | 90.33% | 86.36% |
Emotion | 6 | 7 | 8 | 9 | 10 |
Sad | 90.69% | 90.32% | 84.68% | 85.79% | 79.8% |
Happy | 90.54% | 93.67% | 89.76% | 89.74% | 80.45% |
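The overall accuracies reported in the text can be reproduced by averaging the per-subject values of Table 2:

```python
# Per-subject DT accuracies from Table 2 (subjects 1-10).
sad   = [90.66, 87.19, 81.80, 87.28, 84.89, 90.69, 90.32, 84.68, 85.79, 79.80]
happy = [93.30, 89.22, 83.98, 90.33, 86.36, 90.54, 93.67, 89.76, 89.74, 80.45]

print(f"Mean sad accuracy:   {sum(sad) / len(sad):.2f}%")     # 86.31%, as reported
print(f"Mean happy accuracy: {sum(happy) / len(happy):.2f}%") # 88.735%, reported as 88.73%
```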
Table 3: Classification accuracy of the SVM and DT methods
Method | Feature selection | Classifier | Happy | Sad |
Rafael Ramirez et al. [9] | LDA | SVM | 83.33% | 86.35% |
Li et al. [11] | KNN-HCNN | SVM | 75.21% | 75.4% |
This method | PCA-PSD | DT | 88.73% | 86.31% |
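The improvement discussed below follows directly from the accuracies in Table 3:

```python
# Accuracy values from Table 3.
happy_dt, happy_svm = 88.73, 83.33   # this method vs. Ramirez et al. [9]
sad_dt,   sad_svm   = 86.31, 86.35

print(f"Happiness improvement: {happy_dt - happy_svm:+.2f} pp")  # about +5.4
print(f"Sadness difference:    {sad_dt - sad_svm:+.2f} pp")      # about -0.04
```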
According to this table and the comparison with previous methods, the proposed method improves happiness recognition accuracy by about 5.4%, while the accuracy for sadness is comparable to the previous methods. Signals of sadness, being close to neutral states, are more difficult to identify than positive emotional states.
5. Conclusion
In this paper, we have proposed an EEG-based emotion recognition methodology for two emotional states, sadness and happiness. Considering the characteristics of each recorded signal and the importance of feature extraction and classification of EEG signals, feature learning methods were used: PCA and PSD were selected to reduce the dimensionality of each signal, and a two-class decision tree classifier was used to classify the feelings of happiness and sadness and improve each category. In contrast to approaches based on facial expression detection, only two emotions were investigated here, on 10 subjects viewing happy and sad images that had been labeled by 10 other people. After preprocessing, processing and appropriate classification, an accuracy of 88.73% was obtained for happiness and 86.31% for sadness. The number of electrodes was reduced to 6, and only the alpha wave was used in this experiment. In future work, the number of emotions can be extended, and fewer electrodes at other scalp locations can be examined.
References
[1] L. Cominelli, D. Mazzei, D. E. De Rossi, "SEAI: Social Emotional Artificial Intelligence Based on Damasio's Theory of Mind", Frontiers in Robotics and AI, Volume 5, 2018.
[2] D. Tranel, A. R. Damasio, "Dissociable neural systems for recognizing emotions", Brain and Cognition, Volume 52, Issue 1, June 2003, Pages 61-69.
[3] R. W. Picard, "Affective computing: Challenges", International Journal of Human-Computer Studies, 2003.
[4] P. Ekman, "Unmasking the Face", ISBN 0-13-938183-X.
[5] G. J. DuPaul, "Parent and teacher ratings of ADHD symptoms: psychometric properties in a community-based sample", Journal of Clinical Child Psychology, 1991; 20: 245-253.
[6] E. Niedermeyer and F. Lopes da Silva, editors, Electroencephalography, chapter 9, pages 149-173, Lippincott Williams & Wilkins, 1999.
[7] M. Mohamadi and A. M. Eftekhari Moghadam, "Improvement of EEG signal-based emotion recognition based on feature learning methods", 2018 9th Conference on Artificial Intelligence and Robotics and 2nd Asia-Pacific International Symposium, 2018.
[8] S. Haidong, J. Hongkai, Z. Huiwei and W. Fuan, "A novel deep autoencoder feature learning method for rotating machinery fault diagnosis", Mechanical Systems and Signal Processing, Volume 95, October 2017, Pages 187-204.
[9] R. Ramirez and Z. Vamvakousis, "Detecting Emotion from EEG Signals Using the Emotive Epoc Device", Universitat Pompeu Fabra, 2012.
[10] X. Li, P. Zhang, D. Song, G. Yu, "EEG Based Emotion Identification Using Unsupervised Deep Feature Learning", SIGIR 2015 Workshop on Neuro-Physiological Methods in IR Research, 13 August 2015, Santiago, Chile.
[11] J. Li, Z. Zhang and H. He, "Hierarchical Convolutional Neural Networks for EEG-Based Emotion Recognition", Springer Science+Business Media, LLC, part of Springer Nature, 2017.
[12] Y.-P. Lin, C.-H. Wang and T.-P. Jung, "EEG-Based Emotion Recognition in Music Listening", IEEE Transactions on Biomedical Engineering, vol. 57, no. 7, July 2010.
[13] J. S. Rahman, T. Gedeon, S. Caldwell and R. Jones, "Brain Melody Informatics: Analysing Effects of Music on Brainwave Patterns", 2020 International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1-8, doi: 10.1109/IJCNN48605.2020.9207392.
[14] H. Candra, M. Yuwono, R. Chai, H. T. Nguyen and S. Su, "EEG emotion recognition using reduced channel wavelet entropy and average wavelet coefficient features with normal Mutual Information method", 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2017, pp. 463-466, doi: 10.1109/EMBC.2017.8036862.
[15] B. Asheri, V. Rostami, M. B. Menhaj, "The ability to control a brain-computer interactive game by concentration", Journal of Biomedical Science and Engineering, 2018, 3, 390-396.
[16] F. Fürbass, M. Aykut Kural, G. Gritsch, "An artificial intelligence-based EEG algorithm for detection of epileptiform EEG discharges: Validation against the diagnostic gold standard", Clinical Neurophysiology, Volume 131, Issue 6, June 2020, Pages 1174-1179.
[17] Md. A. Rahman, Md. F. Hossain, M. Hossain, R. Ahmmed, "Employing PCA and t-statistical approach for feature extraction and classification of emotion from multichannel EEG signal", Egyptian Informatics Journal, Volume 21, Issue 1, 2020.
[18] M. R. Islam and M. Ahmad, "Wavelet Analysis Based Classification of Emotion from EEG Signal", 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), 2019, pp. 1-6, doi: 10.1109/ECACE.2019.86791.