This paper presents our investigations of recurrent neural networks (RNNs) for the phrase break prediction task. With the advent of deep learning, there have been attempts to apply deep neural networks (DNNs) to phrase break prediction. While deep neural networks are able to effectively capture dependencies across features, they lack the ability to capture long-term relations that are spread over time. On the other hand, RNNs are able to capture long-term temporal relations and thus are better suited for tasks where sequences have to be modeled. We model the phrase break prediction task as a sequence labeling task, and show by means of experimental results that RNNs perform better at phrase break prediction as compared to conventional DNN systems.
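Framing phrase break prediction as sequence labeling means emitting one break/no-break tag per word, with the recurrent hidden state carrying context across the sentence. As a minimal illustrative sketch (not the paper's actual architecture: the weight shapes, random initialization, and function names here are assumptions for demonstration), a simple Elman-style recurrent pass over word embeddings looks like this:

```python
import numpy as np

# Illustrative Elman RNN that tags each word in a sentence with a binary
# phrase-break label (1 = break after the word, 0 = no break). Dimensions,
# weights, and names are hypothetical; the paper's trained architectures
# and word embeddings differ.

def rnn_tag(embeddings, Wxh, Whh, Why, bh, by):
    """Run a recurrent pass and emit one binary label per time step."""
    h = np.zeros(Whh.shape[0])
    labels = []
    for x in embeddings:                       # one word embedding per step
        h = np.tanh(Wxh @ x + Whh @ h + bh)    # hidden state carries context
        logits = Why @ h + by                  # 2-way scores: {no-break, break}
        labels.append(int(np.argmax(logits)))
    return labels

# Toy usage: 3 words, 4-dim embeddings, 5 hidden units, random weights.
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 4))
Wxh = rng.normal(size=(5, 4))
Whh = rng.normal(size=(5, 5))
Why = rng.normal(size=(2, 5))
bh = np.zeros(5)
by = np.zeros(2)
tags = rnn_tag(emb, Wxh, Whh, Why, bh, by)
```

Because the hidden state `h` is fed back at every step, the label for a word can depend on all preceding words, which is the long-term temporal modeling that feed-forward DNNs with fixed context windows lack.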
@inproceedings{vadapalli16_interspeech,
  title     = {An Investigation of Recurrent Neural Network Architectures Using Word Embeddings for Phrase Break Prediction},
  author    = {Anandaswarup Vadapalli and Suryakanth V. Gangashetty},
  year      = {2016},
  booktitle = {Interspeech 2016},
  pages     = {2308--2312},
  doi       = {10.21437/Interspeech.2016-885},
  issn      = {2958-1796},
}
Cite as: Vadapalli, A., Gangashetty, S.V. (2016) An Investigation of Recurrent Neural Network Architectures Using Word Embeddings for Phrase Break Prediction. Proc. Interspeech 2016, 2308-2312, doi: 10.21437/Interspeech.2016-885