ISCA Archive, Interspeech 2018

Joint Learning Using Denoising Variational Autoencoders for Voice Activity Detection

Youngmoon Jung, Younggwan Kim, Yeunju Choi, Hoirin Kim

Voice activity detection (VAD) is a challenging task in very low signal-to-noise ratio (SNR) environments. A promising approach to this problem is to map noisy speech features to the corresponding clean features and to perform VAD on the generated clean features. This can be implemented by concatenating a speech enhancement (SE) network and a VAD network whose parameters are jointly updated. In this paper, we propose denoising variational autoencoder (DVAE) based speech enhancement within the joint learning framework. Moreover, we feed not only the enhanced feature but also the latent code from the DVAE into the VAD network. We show that the proposed joint learning approach outperforms the conventional denoising autoencoder-based joint learning approach.
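The abstract describes the pipeline only at a high level. Below is a minimal sketch of that architecture in PyTorch, under stated assumptions: the layer sizes, feature dimension (40), latent dimension (16), loss weights, and the names DVAE, VADNet, and joint_loss are all illustrative choices, not the paper's actual configuration.

# Hypothetical sketch of the joint DVAE + VAD architecture described above.
# All dimensions and loss weights are assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DVAE(nn.Module):
    """Denoising VAE: maps a noisy feature frame to a clean estimate."""
    def __init__(self, feat_dim=40, latent_dim=16, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def forward(self, noisy):
        h = self.enc(noisy)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), z, mu, logvar

class VADNet(nn.Module):
    """Frame-level speech/non-speech classifier."""
    def __init__(self, feat_dim=40, latent_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + latent_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, enhanced, z):
        # Both the enhanced feature and the DVAE latent code feed the VAD.
        return self.net(torch.cat([enhanced, z], dim=-1)).squeeze(-1)

def joint_loss(dvae, vad, noisy, clean, label, alpha=1.0, beta=1.0):
    enhanced, z, mu, logvar = dvae(noisy)
    recon = F.mse_loss(enhanced, clean)                       # SE objective
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    vad_bce = F.binary_cross_entropy_with_logits(vad(enhanced, z), label)
    return vad_bce + alpha * recon + beta * kl                # joint objective

In training, noisy and clean are parallel noisy/clean feature frames and label is the frame-level speech/non-speech target; backpropagating through joint_loss updates the SE and VAD networks together, which is the joint learning the abstract refers to.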

@inproceedings{jung18_interspeech,
  title     = {Joint Learning Using Denoising Variational Autoencoders for Voice Activity Detection},
  author    = {Youngmoon Jung and Younggwan Kim and Yeunju Choi and Hoirin Kim},
  year      = {2018},
  booktitle = {Interspeech 2018},
  pages     = {1210--1214},
  doi       = {10.21437/Interspeech.2018-1151},
  issn      = {2958-1796},
}

Cite as: Jung, Y., Kim, Y., Choi, Y., Kim, H. (2018) Joint Learning Using Denoising Variational Autoencoders for Voice Activity Detection. Proc. Interspeech 2018, 1210-1214, doi: 10.21437/Interspeech.2018-1151

doi:10.21437/Interspeech.2018-1151
