Joint Speech Enhancement and Speaker Identification Using Approximate Bayesian Inference


dc.contributor.author Ciira wa Maina
dc.date.accessioned 2019-03-11T08:18:31Z
dc.date.available 2019-03-11T08:18:31Z
dc.date.issued 2011-08
dc.identifier.issn 1558-7916
dc.identifier.uri http://41.89.227.156:8080/xmlui/handle/123456789/835
dc.description.abstract We present a variational Bayesian algorithm for joint speech enhancement and speaker identification that makes use of speaker-dependent speech priors. Our work is built on the intuition that speaker-dependent priors should work better than priors that attempt to capture global speech properties. We derive an iterative algorithm that exchanges information between the speech enhancement and speaker identification tasks: with cleaner speech we are able to make better identification decisions, and with the speaker-dependent priors we are able to improve speech enhancement performance. We present experimental results on the TIMIT data set which confirm the speech enhancement performance of the algorithm, measured by signal-to-noise ratio (SNR) improvement and by perceptual quality improvement via the Perceptual Evaluation of Speech Quality (PESQ) score. We also demonstrate the ability of the algorithm to perform voice activity detection (VAD). The experimental results further show that speaker identification accuracy is improved. en_US
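
The abstract describes an iterative exchange between enhancement and identification: enhance the noisy signal using a speaker-dependent prior, then re-score the enrolled speakers on the cleaner signal, and repeat. The sketch below illustrates that alternation only; it is not the paper's variational Bayesian derivation. The speaker-dependent diagonal-covariance GMM priors, the Wiener-style gain, and all function names (enhance_given_speaker, speaker_log_likelihood, joint_enhance_identify) are illustrative assumptions.

```python
# Minimal sketch of the enhance-then-identify alternation, assuming
# speaker-dependent diagonal GMM priors and a Wiener-style gain.
# This is not the authors' variational Bayesian algorithm.
import numpy as np

def speaker_log_likelihood(features, means, variances, weights):
    """Log-likelihood of enhanced features (T, D) under one speaker's diagonal GMM."""
    diff = features[:, None, :] - means[None, :, :]                      # (T, K, D)
    log_comp = -0.5 * np.sum(diff**2 / variances
                             + np.log(2.0 * np.pi * variances), axis=2)  # (T, K)
    log_comp += np.log(weights)
    return float(np.sum(np.logaddexp.reduce(log_comp, axis=1)))

def enhance_given_speaker(noisy_power, noise_power, prior_speech_power):
    """Wiener-style gain built from a speaker-dependent clean-speech power estimate."""
    gain = prior_speech_power / (prior_speech_power + noise_power)
    return gain * noisy_power

def joint_enhance_identify(noisy_power, noise_power, speaker_priors, n_iters=5):
    """Alternate enhancement and speaker identification for a few iterations.

    speaker_priors: one dict per enrolled speaker with keys
      'means', 'vars', 'weights' (GMM prior) and 'avg_power' (clean-speech power).
    Returns the enhanced power spectrogram and the index of the identified speaker.
    """
    n_speakers = len(speaker_priors)
    post = np.full(n_speakers, 1.0 / n_speakers)   # start from a uniform speaker posterior
    enhanced = noisy_power
    for _ in range(n_iters):
        # 1) Enhance using the posterior-weighted speaker-dependent prior.
        prior_power = sum(p * sp['avg_power'] for p, sp in zip(post, speaker_priors))
        enhanced = enhance_given_speaker(noisy_power, noise_power, prior_power)
        # 2) Re-score the enrolled speakers on the (cleaner) enhanced features.
        log_liks = np.array([
            speaker_log_likelihood(np.log(enhanced + 1e-10),
                                   sp['means'], sp['vars'], sp['weights'])
            for sp in speaker_priors
        ])
        post = np.exp(log_liks - np.logaddexp.reduce(log_liks))          # normalize
    return enhanced, int(np.argmax(post))
```

In this toy form, a sharper speaker posterior pulls the prior power estimate toward one speaker's model, which in turn improves the gain applied to the noisy spectrogram, mirroring the mutual reinforcement the abstract describes.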
dc.language.iso en en_US
dc.publisher IEEE Transactions on Audio, Speech, and Language Processing en_US
dc.subject Speech enhancement en_US
dc.subject speaker identification en_US
dc.subject variational Bayesian inference en_US
dc.title Joint Speech Enhancement and Speaker Identification Using Approximate Bayesian Inference en_US
dc.type Article en_US

