joshualoehr/ngram-language-model

Python implementation of an N-gram language model with Laplace smoothing and sentence generation.

Some NLTK functions are used (nltk.ngrams, nltk.FreqDist), but nearly everything is implemented by hand.
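For readers unfamiliar with those two helpers: nltk.ngrams slides a window of length n over a token list, and nltk.FreqDist tallies the resulting tuples. A pure-Python equivalent (illustrative only, not the repository's code) is:

```python
from collections import Counter

def ngrams(tokens, n):
    """Yield successive n-gram tuples, like nltk.ngrams."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

tokens = "the cat sat on the mat".split()
# nltk.FreqDist over the n-grams is essentially a Counter of the tuples
bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts[("the", "cat")])  # each adjacent pair appears once here
```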

Note: the LanguageModel class expects to be given data which is already tokenized by sentences. If using the included load_data function, the train.txt and test.txt files should already be processed such that:

  1. punctuation is removed
  2. each sentence is on its own line

See the data/ directory for examples.
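A minimal preprocessing step satisfying those two requirements could look like the following sketch. The period-based sentence split is a naive assumption for illustration; the repository's actual load_data may differ:

```python
import string

def preprocess(raw_text):
    """Strip punctuation and put each sentence on its own line."""
    strip_punct = str.maketrans("", "", string.punctuation)
    lines = []
    for sentence in raw_text.split(". "):  # naive sentence split (assumption)
        cleaned = sentence.translate(strip_punct).strip()
        if cleaned:
            lines.append(cleaned)
    return "\n".join(lines)

print(preprocess("The company said it agreed. Shares rose, slightly."))
```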


Example output for a trigram model trained on data/train.txt and tested against data/test.txt:

Loading 3-gram model...
Vocabulary size: 23505
Generating sentences...
...
<s> <s> the company said it has agreed to sell its shares in a statement </s> (0.03163)
<s> <s> he said the company also announced measures to boost its domestic economy and could be a long term debt </s> (0.01418)
<s> <s> this is a major trade bill that would be the first quarter of 1987 </s> (0.02182)
...
Model perplexity: 51.555

The numbers in parentheses beside the generated sentences are the cumulative probabilities of those sentences occurring.
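Both the sentence probabilities and the perplexity figure follow from Laplace-smoothed n-gram estimates: P(w | context) = (count + λ) / (context count + λ·|V|), with perplexity being the exponential of the negative mean log-probability. A minimal sketch with hypothetical helper names, not the repository's code:

```python
import math
from collections import Counter

def laplace_prob(ngram, ngram_counts, context_counts, vocab_size, lam=0.01):
    """(count(w1..wn) + lam) / (count(w1..wn-1) + lam * |V|)."""
    return (ngram_counts[ngram] + lam) / (context_counts[ngram[:-1]] + lam * vocab_size)

def perplexity(test_ngrams, ngram_counts, context_counts, vocab_size, lam=0.01):
    """exp of the negative mean log-probability over the test n-grams."""
    total = sum(math.log(laplace_prob(g, ngram_counts, context_counts, vocab_size, lam))
                for g in test_ngrams)
    return math.exp(-total / len(test_ngrams))

# Toy bigram counts; lam=1 corresponds to classic add-1 smoothing
ngram_counts = Counter({("a", "b"): 2, ("b", "a"): 1})
context_counts = Counter({("a",): 2, ("b",): 1})
print(perplexity([("a", "b"), ("b", "a")], ngram_counts, context_counts, vocab_size=2, lam=1))
```

Because λ adds a pseudo-count to every n-gram, unseen n-grams in the test set receive a small nonzero probability instead of driving the perplexity to infinity.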


Usage info:

usage: N-gram Language Model [-h] --data DATA --n N [--laplace LAPLACE] [--num NUM]

optional arguments:
  -h, --help         show this help message and exit
  --data DATA        Location of the data directory containing train.txt and test.txt
  --n N              Order of N-gram model to create (i.e. 1 for unigram, 2 for bigram, etc.)
  --laplace LAPLACE  Lambda parameter for Laplace smoothing (default is 0.01 -- use 1 for add-1 smoothing)
  --num NUM          Number of sentences to generate (default 10)
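That help text corresponds to a standard argparse setup; a rough reconstruction from the usage string (not the actual source) would be:

```python
import argparse

parser = argparse.ArgumentParser("N-gram Language Model")
parser.add_argument("--data", required=True,
                    help="Location of the data directory containing train.txt and test.txt")
parser.add_argument("--n", type=int, required=True,
                    help="Order of N-gram model to create")
parser.add_argument("--laplace", type=float, default=0.01,
                    help="Lambda parameter for Laplace smoothing")
parser.add_argument("--num", type=int, default=10,
                    help="Number of sentences to generate")

# e.g. a trigram model over the included data, with the defaults above
args = parser.parse_args(["--data", "data/", "--n", "3"])
print(args.data, args.n, args.laplace, args.num)
```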

Originally authored by Josh Loehr and Robin Cosbey, with slight modifications. Last edited Feb. 8, 2018.
