ISCA Archive - Interspeech 2016

Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features

Barlas Oğuz, Issac Alphonso, Shuangyu Chang

Non-negative matrix based language models have been recently introduced [1] as a computationally efficient alternative to other feature-based models such as maximum-entropy models. We present a new entropy based pruning algorithm for this class of language models, which is fast and scalable. We present perplexity and word error rate results and compare these against regular n-gram pruning. We also train models with location and personalization features and report results at various pruning thresholds. We demonstrate that contextual features are helpful over the vanilla model even after pruning to a similar size.
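The abstract does not spell out the pruning criterion; the standard reference point for this family of methods is Stolcke-style entropy pruning, where a parameter is removed when the weighted relative entropy between the original and pruned model falls below a threshold. The sketch below illustrates that criterion on a toy bigram model. The model structure, function names, and threshold are assumptions made for illustration only, not the authors' algorithm for non-negative matrix based models.

import math

# Toy sketch of entropy-based pruning in the spirit of Stolcke (1998).
# The paper's actual method targets non-negative matrix based LMs with
# contextual features; everything below is an illustrative assumption.

def kl_divergence(p, q):
    """D(p || q) in nats over a shared vocabulary."""
    return sum(pw * math.log(pw / q[w]) for w, pw in p.items() if pw > 0)

def prune_entry(dist, unigrams, word):
    """Candidate distribution with `word` backed off to the unigram
    model, renormalized so it still sums to one."""
    q = dict(dist)
    q[word] = unigrams[word]
    z = sum(q.values())
    return {w: p / z for w, p in q.items()}

def entropy_prune(bigrams, unigrams, history_prior, threshold=1e-4):
    """Back off bigram entries whose removal costs less than `threshold`
    nats of relative entropy, weighted by the history's probability."""
    pruned = {}
    for h, dist in bigrams.items():
        kept = dict(dist)
        for w in list(dist):
            candidate = prune_entry(kept, unigrams, w)
            cost = history_prior[h] * kl_divergence(kept, candidate)
            if cost < threshold:
                kept = candidate  # cheap to remove: fall back to unigram
        pruned[h] = kept
    return pruned

if __name__ == "__main__":
    unigrams = {"a": 0.5, "b": 0.3, "c": 0.2}
    bigrams = {"<s>": {"a": 0.6, "b": 0.3, "c": 0.1},
               "a":   {"a": 0.2, "b": 0.5, "c": 0.3}}
    history_prior = {"<s>": 0.5, "a": 0.5}
    print(entropy_prune(bigrams, unigrams, history_prior))

Note that in this toy version pruned entries are replaced by their backoff values rather than deleted; a real implementation would drop the explicit parameters to shrink the model, which is where the size reduction reported in the paper comes from.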

@inproceedings{oguz16_interspeech,
  title     = {Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features},
  author    = {Barlas Oğuz and Issac Alphonso and Shuangyu Chang},
  year      = {2016},
  booktitle = {Interspeech 2016},
  pages     = {2328--2332},
  doi       = {10.21437/Interspeech.2016-130},
  issn      = {2958-1796},
}

Cite as: Oğuz, B., Alphonso, I., Chang, S. (2016) Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features. Proc. Interspeech 2016, 2328-2332, doi: 10.21437/Interspeech.2016-130

doi:10.21437/Interspeech.2016-130
