Accent and gender recognition from English language speech and audio using signal processing and deep learning

Shergill, J. S., Pravin, C. and Ojha, V. (2021) Accent and gender recognition from English language speech and audio using signal processing and deep learning. In: International Conference on Hybrid Intelligent Systems, 14-16 Dec 2020, pp. 62-72.

Text - Accepted Version · Restricted to Repository staff only until 17 April 2023.

It is advisable to refer to the publisher's version if you intend to cite from this work.

To link to this item DOI: 10.1007/978-3-030-73050-5_7


This research is concerned with taking user input in the form of speech data and predicting both the region of the United Kingdom the speaker is from and their gender. The work covers regional accents, data preprocessing, Fourier transforms, and deep learning modelling. Because no suitable public dataset exists for this task, a dataset was created from scratch (12 regions with a 1:1 gender ratio). In this paper, we model voice accent and gender recognition as a classification task. We used a deep convolutional neural network and experimentally developed an architecture that maximizes the classification accuracy of both tasks simultaneously. We also tested the model on publicly available spoken digit datasets. We find that gender is easier to predict with high accuracy than accent in our proposed multi-class classification model; accent classification proved difficult because regional accents overlap, which prevents them from being separated with high accuracy.
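The Fourier-transform preprocessing the abstract mentions typically turns a raw waveform into a time-frequency spectrogram before it is fed to a convolutional network. As a minimal sketch of that step (using only NumPy; the frame length, hop size, and window choice here are illustrative assumptions, not the parameters used in the paper):

```python
import numpy as np

def stft_spectrogram(signal, frame_len=512, hop=256):
    """Log-magnitude short-time Fourier transform spectrogram.

    frame_len and hop are illustrative values, not those from the paper.
    """
    window = np.hanning(frame_len)  # taper each frame to reduce spectral leakage
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One-sided FFT magnitudes per frame; log compresses the dynamic range
    mags = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mags)

# Toy input: 1 second of a 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
spec = stft_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (n_frames, frame_len // 2 + 1)
```

The resulting 2-D array can be treated like a single-channel image, which is what makes a convolutional architecture a natural fit for the joint accent/gender classification task described above.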

Item Type: Conference or Workshop Item (Paper)
Divisions: Interdisciplinary Research Centres (IDRCs) > Centre for the Mathematics of Planet Earth (CMPE)
Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 97785
ID Code:97785