Facial emotion recognition from feature loss media: human versus machine learning algorithms

Dube, D. Y., Sannasi, M. V., Kyritsis, M. ORCID: https://orcid.org/0000-0002-7151-1698 and Gulliver, S. R. ORCID: https://orcid.org/0000-0002-4503-5448 (2026) Facial emotion recognition from feature loss media: human versus machine learning algorithms. Computers in Human Behavior, 174. 108806. ISSN 0747-5632

Text (Accepted Version): Manuscript_FER_with_information_loss_R&R_minorFinalSep18.docx, 129kB
· Restricted to Repository staff only
· The copyright of this document has not yet been checked, which may affect its availability.

It is advisable to refer to the publisher's version if you intend to cite from this work.

To link to this item DOI: 10.1016/j.chb.2025.108806

Abstract/Summary

The automatic identification of human emotion from low-resolution cameras is important for remote monitoring, interactive software, proactive marketing, and dynamic customer experience management. Even though facial identification and emotion classification are active fields of research, no studies, to the best of our knowledge, have compared the performance of humans and Machine Learning Algorithms (MLAs) when classifying facial emotions from media suffering from systematic feature loss. In this study, we used singular value decomposition to systematically reduce the number of features contained within facial emotion images. Human participants were then asked to identify the facial emotion contained within the onscreen images, where image granularity was varied in a stepwise manner (from low to high). By clicking a button, participants added feature vectors until they were confident that they could categorise the emotion. The results of the human performance trials were compared against those of a Convolutional Neural Network (CNN), which classified facial emotions from the same media images. Findings showed that human participants were able to cope with significantly greater levels of granularity, achieving 85% accuracy with only three singular image vectors. Humans were also faster when classifying happy faces. CNNs are as accurate as humans when given mid- and high-resolution images, achieving 80% accuracy at twelve singular image vectors or above. The authors believe that this comparison of the differences and limitations of humans and MLAs is critical to (i) the effective use of CNNs with lower-resolution video, and (ii) the development of usable facial recognition heuristics.
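The feature-reduction step described in the abstract, reconstructing an image from only its first k singular vectors, can be sketched as follows. This is an illustrative example using NumPy on a synthetic array, not the study's actual preprocessing pipeline; the image size and the function name `truncate_svd` are assumptions for demonstration.

```python
import numpy as np

def truncate_svd(image: np.ndarray, k: int) -> np.ndarray:
    """Return a rank-k approximation of a 2-D grayscale image,
    keeping only the first k singular vectors."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Synthetic 48x48 "image"; k=3 mirrors the three singular vectors
# at which human participants reached 85% accuracy in the study.
rng = np.random.default_rng(0)
img = rng.random((48, 48))
low_rank = truncate_svd(img, k=3)
```

Increasing k in a stepwise manner reproduces the low-to-high granularity progression shown to participants.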

Item Type: Article
Refereed: Yes
Divisions: Henley Business School > Digitalisation, Marketing and Entrepreneurship
ID Code: 124519
Publisher: Elsevier
