Emotion classification on eye-tracking and electroencephalograph fused signals employing deep gradient neural networks

Wu, Q., Dey, N., Shi, F., González Crespo, R. and Sherratt, S. (ORCID: https://orcid.org/0000-0001-7899-4445) (2021) Emotion classification on eye-tracking and electroencephalograph fused signals employing deep gradient neural networks. Applied Soft Computing, 110, 107752. ISSN 1568-4946.
It is advisable to refer to the publisher's version if you intend to cite from this work. DOI: 10.1016/j.asoc.2021.107752

Abstract

Emotion produces complex neural processes and physiological changes under appropriate event stimulation. Physiological signals reflect a person's actual emotional state better than facial expressions or voice signals. An electroencephalogram (EEG) is a signal obtained by collecting, amplifying, and recording the human brain's weak bioelectric signals on the scalp. The eye-tracking (E.T.) signal records the potential difference between the retina and the cornea and the potential generated by the eye movement muscles. Furthermore, the different modalities of physiological signals contain different representations of human emotions, and capturing this cross-modal information helps achieve higher recognition accuracy. In this research, the E.T. and EEG signals are synchronized and fused, and an effective deep learning (DL) method is used to combine the different modalities. This article proposes a technique based on a fusion model that combines a Gaussian mixture model (GMM) with Butterworth and Chebyshev signal filters. Features are then extracted from the EEG and E.T. signals: self-similarity (SSIM), energy (E), complexity (C), high-order crossing (HOC), and power spectral density (PSD) for EEG, and electrooculography power density estimation (EOG-PDE), center-of-gravity frequency (CGF), frequency variance (F.V.), and root mean square frequency (RMSF) for E.T.; the max-min method is applied for vector normalization. Finally, a deep gradient neural network (DGNN) for EEG and E.T. multimodal signal classification is proposed. The proposed neural network predicted emotions with 88.10% accuracy in an experiment with eight emotion-eliciting event stimuli. On the evaluation indices of accuracy (Ac), precision (Pr), recall (Re), F-measure (Fm), the precision-recall (P.R.) curve, the true-positive rate (TPR) of the receiver operating characteristic (ROC) curve, the area under the curve (AUC), the true-accept rate (TAR), and intersection over union (IoU), the proposed method also performs with high efficiency compared with several typical neural networks, including the artificial neural network (ANN), SqueezeNet, GoogleNet, ResNet-50, DarkNet-53, ResNet-18, Inception-ResNet, Inception-v3, and ResNet-101.
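For orientation, the following is a minimal Python sketch of the preprocessing steps named in the abstract (band-pass filtering with Butterworth and Chebyshev filters, a PSD feature via Welch's method, and max-min normalization), using NumPy/SciPy. The sampling rate, band edges, filter orders, and ripple value are illustrative assumptions, not values from the paper, and the GMM fusion and DGNN classifier themselves are not shown.

```python
# Minimal sketch of the abstract's preprocessing steps; all numeric
# parameters below are assumed for illustration, not taken from the paper.
import numpy as np
from scipy.signal import butter, cheby1, filtfilt, welch

fs = 256.0                              # assumed EEG sampling rate (Hz)
eeg = np.random.randn(10 * int(fs))     # placeholder single-channel EEG segment

# Butterworth band-pass filter (illustrative 1-45 Hz band, order 4)
b_bw, a_bw = butter(4, [1.0, 45.0], btype="bandpass", fs=fs)
eeg_bw = filtfilt(b_bw, a_bw, eeg)

# Chebyshev type-I band-pass alternative (1 dB passband ripple)
b_ch, a_ch = cheby1(4, 1.0, [1.0, 45.0], btype="bandpass", fs=fs)
eeg_ch = filtfilt(b_ch, a_ch, eeg)

# Power spectral density (PSD) feature via Welch's method
freqs, psd = welch(eeg_bw, fs=fs, nperseg=int(fs))

# Max-min normalization of a feature vector, as named in the abstract
def max_min_normalize(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

psd_norm = max_min_normalize(psd)
print(psd_norm.shape, psd_norm.min(), psd_norm.max())
```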