Machine learning based speaker gender classification using transformed features

Ahmed I. Ahmed, John Chiverton, David L. Ndzi, Mahmoud Al-Faris

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Speech and image processing are fundamental components of artificial intelligence technology. Speech processing can be deployed to acquire unique features of a person's voice. These can then be used for speaker identification as well as gender and age classification. This paper studies the effect of the relative degree of correlation in speech features on gender classification. To this end, gender classification performance is evaluated using orthogonally transformed speech features and compared with the performance obtained when the speech features are used without transformation. Two machine learning approaches are used in the evaluation: one based primarily on Gaussian Mixture Models (GMM) and the other on Support Vector Machines (SVM). The results show that the less correlated speech features obtained after the orthogonal transformation provide better classification performance.
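The sketch below is a minimal illustration of the comparison described in the abstract, not the authors' code: it trains an SVM on raw (correlated) features and on the same features after an orthogonal PCA transformation, then compares accuracy. The synthetic feature matrix, class labels, and scikit-learn pipeline are assumptions standing in for the real speech features and experimental setup.

```python
# Hypothetical sketch: SVM gender classification on raw vs. PCA-decorrelated features.
# Synthetic correlated features stand in for real speech features (e.g. MFCC-style vectors).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two classes (0 = female, 1 = male); the mixing matrix induces feature correlation.
n_per_class, n_features = 500, 13
mixing = rng.normal(size=(n_features, n_features))
X0 = rng.normal(0.0, 1.0, (n_per_class, n_features)) @ mixing
X1 = rng.normal(0.5, 1.0, (n_per_class, n_features)) @ mixing
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline: SVM trained directly on the correlated features.
raw_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
raw_clf.fit(X_tr, y_tr)
print("raw features accuracy:", raw_clf.score(X_te, y_te))

# Orthogonal transform: PCA decorrelates the features before the SVM.
pca_clf = make_pipeline(StandardScaler(), PCA(whiten=True), SVC(kernel="rbf"))
pca_clf.fit(X_tr, y_tr)
print("PCA-transformed features accuracy:", pca_clf.score(X_te, y_te))
```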

Original language: English
Title of host publication: International Conference on Communication and Information Technology, ICICT 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 13-18
Number of pages: 6
ISBN (Electronic): 9781665439145
ISBN (Print): 9781665439152
DOIs
Publication status: Published - 26 Oct 2021
Event: 2021 International Conference on Communication and Information Technology, ICICT 2021 - Basrah, Iraq
Duration: 5 Jun 2021 - 6 Jun 2021

Conference

Conference: 2021 International Conference on Communication and Information Technology, ICICT 2021
Country/Territory: Iraq
City: Basrah
Period: 5/06/21 - 6/06/21

Keywords

  • machine learning
  • principal component analysis
  • speaker gender classification
