

Emotion classification on eye-tracking and electroencephalograph fused signals employing deep gradient neural networks

Wu, Q., Dey, N., Shi, F., González Crespo, R. and Sherratt, S. ORCID: https://orcid.org/0000-0001-7899-4445 (2021) Emotion classification on eye-tracking and electroencephalograph fused signals employing deep gradient neural networks. Applied Soft Computing, 110. 107752. ISSN 1568-4946

Text - Accepted Version (1MB) · Available under License Creative Commons Attribution Non-commercial No Derivatives.

It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

To link to this item, use its DOI: 10.1016/j.asoc.2021.107752

Abstract/Summary

Emotion produces complex neural processes and physiological changes under appropriate event stimulation. Physiological signals have the advantage of reflecting a person's actual emotional state better than facial expressions or voice signals. An electroencephalogram (EEG) is a signal obtained by collecting, amplifying, and recording the brain's weak bioelectric signals on the scalp. The eye-tracking (E.T.) signal records the potential difference between the retina and the cornea and the potential generated by the eye-movement muscles. Furthermore, different modalities of physiological signals contain different representations of human emotion, and exploiting this cross-modal information is of great help in achieving higher recognition accuracy. In this research, the E.T. and EEG signals are synchronized and fused, and an effective deep learning (DL) method is used to combine the two modalities. This article proposes a technique based on a fusion model of the Gaussian mixture model (GMM) with Butterworth and Chebyshev signal filters. Features are then extracted from the two signals: the self-similarity (SSIM), energy (E), complexity (C), higher-order crossings (HOC), and power spectral density (PSD) of the EEG, and the electrooculography power density estimation (EOG-PDE), center-of-gravity frequency (CGF), frequency variance (F.V.), and root-mean-square frequency (RMSF) of the E.T. signal; the max-min method is applied for vector normalization. Finally, a deep gradient neural network (DGNN) for EEG and E.T. multimodal signal classification is proposed. The proposed network predicted emotions under an eight-emotion event-stimuli experiment with 88.10% accuracy. On the evaluation indices of accuracy (Ac), precision (Pr), recall (Re), F-measure (Fm), the precision-recall (P.R.) curve, the true-positive rate (TPR) of the receiver operating characteristic (ROC) curve, the area under the curve (AUC), the true-accept rate (TAR), and intersection over union (IoU), the proposed method also performs with high efficiency compared with several typical neural networks, including an artificial neural network (ANN), SqueezeNet, GoogLeNet, ResNet-50, DarkNet-53, ResNet-18, Inception-ResNet, Inception-v3, and ResNet-101.
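
The record itself gives no implementation details, but the front end of the pipeline the abstract outlines (Butterworth/Chebyshev filtering, PSD feature extraction, max-min normalization) maps onto standard signal-processing routines. A minimal sketch, assuming SciPy/NumPy; the sampling rate, cutoff frequencies, filter orders, and ripple are illustrative assumptions, not values taken from the paper:

    # Sketch of the kind of preprocessing the abstract describes:
    # band-pass filtering, PSD feature extraction, max-min normalization.
    # All parameter values here are illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, cheby1, filtfilt, welch

    FS = 256  # assumed EEG sampling rate in Hz

    def bandpass_butter(x, low=0.5, high=45.0, fs=FS, order=4):
        """Zero-phase Butterworth band-pass filter."""
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def bandpass_cheby(x, low=0.5, high=45.0, fs=FS, order=4, ripple=0.5):
        """Zero-phase Chebyshev type-I band-pass filter (ripple in dB)."""
        b, a = cheby1(order, ripple,
                      [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def psd_feature(x, fs=FS):
        """Mean power spectral density via Welch's method."""
        _, pxx = welch(x, fs=fs, nperseg=fs * 2)
        return pxx.mean()

    def max_min(v):
        """Max-min normalization of a feature vector to [0, 1]."""
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (v.max() - v.min() + 1e-12)

    # Example: one simulated channel -> two filter branches -> PSD features
    rng = np.random.default_rng(0)
    sig = rng.standard_normal(FS * 10)  # 10 s of synthetic data
    feats = [psd_feature(bandpass_butter(sig)),
             psd_feature(bandpass_cheby(sig))]
    print(max_min(feats))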

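Likewise, the evaluation indices listed in the abstract are standard classification metrics; a minimal sketch of computing them with scikit-learn, using hypothetical labels, predictions, and scores purely for illustration:

    # Standard classification metrics named in the abstract, computed
    # on hypothetical binary labels/predictions with scikit-learn.
    import numpy as np
    from sklearn.metrics import (accuracy_score, precision_score,
                                 recall_score, f1_score, roc_auc_score)

    y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])   # hypothetical labels
    y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1])   # hypothetical predictions
    y_score = np.array([.1, .9, .4, .2, .8, .3, .7, .6])  # class-1 scores

    print("Ac :", accuracy_score(y_true, y_pred))
    print("Pr :", precision_score(y_true, y_pred))
    print("Re :", recall_score(y_true, y_pred))
    print("Fm :", f1_score(y_true, y_pred))
    print("AUC:", roc_auc_score(y_true, y_score))
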
Item Type: Article
Refereed: Yes
Divisions: Life Sciences > School of Biological Sciences > Biomedical Sciences; Life Sciences > School of Biological Sciences > Department of Bio-Engineering
ID Code: 99520
Publisher: Elsevier

