Channel variability synthesis in i-vector speaker recognition

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

In this paper, we tackle a practical problem that arises when building an i-vector speaker recognition system with limited resources: the development set lacks multiple recordings per speaker. When only one recording is available for each speaker in the development set, phonetic variability can be synthesised simply by dividing that recording into segments, provided it is of sufficient length. For channel variability, we pass each recording through a Gaussian channel to produce a second set of recordings, referred to here as Gaussian version recordings. The proposed method for channel variability synthesis yields a total relative improvement in EER of 5%.
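The abstract does not spell out the exact Gaussian channel used, so the sketch below is only one illustrative reading of the idea, assuming the channel is modelled as additive white Gaussian noise at a chosen SNR and that splitting a long recording into segments stands in for phonetic variability synthesis. The function names and parameters are hypothetical, not the authors' implementation.

```python
import numpy as np

_rng = np.random.default_rng(0)  # fixed seed so the synthetic channel is reproducible

def split_recording(signal, num_segments=2):
    """Divide one long recording into equal-length segments (phonetic variability)."""
    seg_len = len(signal) // num_segments
    return [signal[i * seg_len:(i + 1) * seg_len] for i in range(num_segments)]

def gaussian_version(signal, snr_db=20.0):
    """Return a 'Gaussian version' of a recording.

    Here the Gaussian channel is assumed to be additive white Gaussian noise
    at snr_db; the paper's exact channel model may differ.
    """
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    noise = _rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

if __name__ == "__main__":
    # Toy waveform standing in for a single development recording of one speaker.
    t = np.arange(16000) / 16000.0
    x = np.sin(2 * np.pi * 440.0 * t)
    segments = split_recording(x, num_segments=2)                 # phonetic variability
    gauss = [gaussian_version(s, snr_db=20.0) for s in segments]  # channel variability
    # The speaker now has two original-channel and two Gaussian-channel segments,
    # giving within-speaker variability for training the i-vector back end.
    print(len(segments), len(gauss))
```

With this kind of augmentation, each development speaker contributes several sessions instead of one, which is what the back-end training (e.g. of session variability) requires.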
Original language: English
Title of host publication: IET 3rd International Conference on Intelligent Signal Processing (ISP 2017)
Place of Publication: London, UK
Publisher: IET Conference Publications
Pages: 1-6
Number of pages: 6
ISBN (Electronic): 978-1-78561-708-9
ISBN (Print): 978-1-78561-707-2
DOIs:
Publication status: Published - 21 May 2018
Event: IET 3rd International Conference on Intelligent Signal Processing (ISP 2017) - Savoy Place, IET Headquarters, London, United Kingdom
Duration: 4 Dec 2017 - 5 Dec 2017
Conference number: 3
http://digital-library.theiet.org/content/conferences/cp731

Conference

Conference: IET 3rd International Conference on Intelligent Signal Processing (ISP 2017)
Abbreviated title: IET ISP 2017
Country/Territory: United Kingdom
City: London
Period: 4/12/17 - 5/12/17
Internet address: http://digital-library.theiet.org/content/conferences/cp731

Keywords

  • multi-condition training
  • i-vector
  • session variability
