textrank
README
TextRank on Go
This source code is an implementation of the TextRank algorithm, released under the MIT license.
The minimum required Go version is 1.8.
MOTIVATION
If there was a program that could continuously rank the words, phrases and sentences of book-sized texts on multiple threads, that was open to modification through objects, written in a simple, secure, static language, and that was very well documented... Now, here it is.
DEMO
The following link, Recona, is a simple, pre-programmed AI that uses this library to rank raw texts. It visualizes how ranking works and shows how it could be used for different purposes: Recona.app
FEATURES
- Find the most important phrases.
- Find the most important words.
- Find the most important N sentences.
- Importance by phrase weights.
- Importance by word occurrence.
- Find the first N sentences, starting from the Xth sentence.
- Find sentences by phrase chains ordered by position in text.
- Access to the whole ranked data.
- Support for multiple languages.
- The weighting algorithm can be modified by implementing an interface.
- The parser can be modified by implementing an interface.
- Multi-thread support.
INSTALL
You can install TextRank with go get:
go get github.com/DavidBelicza/TextRank
TextRank uses dep as its vendoring tool, so the required dependencies are versioned under the vendor folder. The exact version numbers are defined in Gopkg.toml. If you want to reinstall the dependencies, use dep: flush the vendor folder and run:
dep ensure
DOCKER
Using Docker with TextRank isn't necessary; it's just an option.
Build image from the repository's root directory:
docker build -t go_text_rank_image .
Create container from the image:
docker run -dit --name textrank go_text_rank_image:latest
Run go test -v . inside the container:
docker exec -i -t textrank go test -v .
Stop, start or remove the container:
docker stop textrank
docker start textrank
docker rm textrank
HOW DOES IT WORK
To see how it works, the easiest way is to use the sample text, which can be found in the textrank_test.go file. It is a short text about GNOME Shell.
- TextRank reads the text,
- parses it,
- removes the unnecessary stop words,
- tokenizes it,
- counts the occurrences of the words and phrases,
- and then it starts weighting
- by the occurrence of words and phrases and their relations.
- After the weights are calculated, TextRank normalizes them to between 0 and 1.
- Then the different finder methods can find the most important words, phrases or sentences.
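These steps map directly onto the public API used in the examples below; here is a minimal sketch of the full pipeline (the raw text is only a placeholder string):

package main

import (
	"fmt"

	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "GNOME Shell provides core user interface functions..."

	tr := textrank.NewTextRank()                // graph holding words, phrases and sentences
	rule := textrank.NewDefaultRule()           // how text is split into sentences and words
	language := textrank.NewDefaultLanguage()   // which stop words are filtered out
	algorithm := textrank.NewDefaultAlgorithm() // how weights are calculated and normalized

	// Read, parse, filter stop words, tokenize and count occurrences.
	tr.Populate(rawText, language, rule)
	// Weight words, phrases and relations, then normalize to between 0 and 1.
	tr.Ranking(algorithm)

	// The finder methods work on the ranked graph.
	fmt.Println(textrank.FindPhrases(tr)[0])
}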
The most important phrases from the sample text are:
| Phrase | Occurrence | Weight |
| --- | --- | --- |
| gnome - shell | 5 | 1 |
| extension - gnome | 3 | 0.50859946 |
| icons - tray | 3 | 0.49631447 |
| gnome - caffeine | 2 | 0.27027023 |
The word gnome is the most frequently used word in this text, and shell is also used multiple times. The two of them are used together as a phrase 5 times. This is the highest occurrence in this text, so it is the most important phrase.
The following two important phrases have the same occurrence of 3, yet they are not equal. This is because the extension - gnome phrase contains the word gnome, the most popular word in the text, and that increases the phrase's weight. It increases the weight of any word related to it, but not so much that it overcomes other important phrases that don't contain the word gnome.
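These occurrence and weight values can also be read programmatically from the rank graph. A small fragment, based on the GetRankData example later in this document and assuming tr already holds the ranked sample text:

// Inspect the word "gnome" in the ranked graph.
rankData := tr.GetRankData()
wordId := rankData.WordValID["gnome"]

// Occurrence of the word and its normalized weight (between 0 and 1).
fmt.Println(rankData.Words[wordId].Qty)
fmt.Println(rankData.Words[wordId].Weight)

// Phrases sorted by weight; in the sample text the first one is "gnome - shell".
fmt.Println(textrank.FindPhrases(tr)[0])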
The exact algorithm can be found in the algorithm.go file.
TEXTRANK OR AUTOMATIC SUMMARIZATION
Automatic summarization is the process of reducing a text document with a computer program in order to create a summary that retains the most important points of the original document. Technologies that can make a coherent summary take into account variables such as length, writing style and syntax. Automatic data summarization is part of machine learning and data mining. The main idea of summarization is to find a representative subset of the data, which contains the information of the entire set. Summarization technologies are used in a large number of sectors in industry today. - Wikipedia
EXAMPLES
Find the most important phrases
This is the most basic and simplest usage of textrank.
package main

import (
	"fmt"

	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	// Get all phrases by weight.
	rankedPhrases := textrank.FindPhrases(tr)

	// Most important phrase.
	fmt.Println(rankedPhrases[0])
	// Second important phrase.
	fmt.Println(rankedPhrases[1])
}
All possible pre-defined finder queries
After ranking, the graph contains a lot of valuable data. There are functions in the textrank package that contain the logic to retrieve that data from the graph.
package main

import (
	"fmt"

	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	// Get all phrases ordered by weight.
	rankedPhrases := textrank.FindPhrases(tr)
	// Most important phrase.
	fmt.Println(rankedPhrases[0])

	// Get all words ordered by weight.
	words := textrank.FindSingleWords(tr)
	// Most important word.
	fmt.Println(words[0])

	// Get the most important 10 sentences. Importance by phrase weights.
	sentences := textrank.FindSentencesByRelationWeight(tr, 10)
	// Found sentences
	fmt.Println(sentences)

	// Get the most important 10 sentences. Importance by word occurrence.
	sentences = textrank.FindSentencesByWordQtyWeight(tr, 10)
	// Found sentences
	fmt.Println(sentences)

	// Get the first 10 sentences, starting from the 5th sentence.
	sentences = textrank.FindSentencesFrom(tr, 5, 10)
	// Found sentences
	fmt.Println(sentences)

	// Get sentences by phrase/word chains ordered by position in text.
	sentencesPh := textrank.FindSentencesByPhraseChain(tr, []string{"gnome", "shell", "extension"})
	// Found sentence.
	fmt.Println(sentencesPh[0])
}
Access to everything
After ranking, the graph contains a lot of valuable data. The GetRankData function gives access to the graph, and every piece of data can be retrieved from this structure.
package main

import (
	"fmt"

	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	// Get the rank graph.
	rankData := tr.GetRankData()

	// Get word ID by token/word.
	wordId := rankData.WordValID["gnome"]

	// Word's weight.
	fmt.Println(rankData.Words[wordId].Weight)
	// Word's quantity/occurrence.
	fmt.Println(rankData.Words[wordId].Qty)
	// All sentences that contain this word.
	fmt.Println(rankData.Words[wordId].SentenceIDs)
	// All other words related to this word on its left side.
	fmt.Println(rankData.Words[wordId].ConnectionLeft)
	// All other words related to this word on its right side.
	fmt.Println(rankData.Words[wordId].ConnectionRight)
	// The node of this word; it contains the related words and the relation weights.
	fmt.Println(rankData.Relation.Node[wordId])
}
Adding text continuously
It is possible to add more text after other texts have already been added. The Ranking function merges these texts and recalculates the weights and all related data.
package main

import (
	"fmt"

	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	rawText2 := "Another book or article..."
	rawText3 := "Third another book or article..."

	// Add text to the previously added text.
	tr.Populate(rawText2, language, rule)
	// Add text to the previously added text.
	tr.Populate(rawText3, language, rule)

	// Run the ranking on the whole composed text.
	tr.Ranking(algorithmDef)

	// Get all phrases by weight.
	rankedPhrases := textrank.FindPhrases(tr)

	// Most important phrase.
	fmt.Println(rankedPhrases[0])
	// Second important phrase.
	fmt.Println(rankedPhrases[1])
}
Using a different algorithm to rank text
Two algorithms are implemented, and it is possible to write a custom algorithm by implementing the Algorithm interface and using it instead of the defaults.
package main

import (
	"fmt"

	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Using a slightly more complex algorithm to rank text.
	algorithmChain := textrank.NewChainAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmChain)

	// Get all phrases by weight.
	rankedPhrases := textrank.FindPhrases(tr)

	// Most important phrase.
	fmt.Println(rankedPhrases[0])
	// Second important phrase.
	fmt.Println(rankedPhrases[1])
}
Using multiple graphs
Separate graphs exist because it is possible to run multiple independent text-ranking processes at the same time.
package main

import (
	"fmt"

	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// 1st TextRank object
	tr1 := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()
	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr1.Populate(rawText, language, rule)
	// Run the ranking.
	tr1.Ranking(algorithmDef)

	// 2nd TextRank object
	tr2 := textrank.NewTextRank()
	// Using a slightly more complex algorithm to rank text.
	algorithmChain := textrank.NewChainAlgorithm()

	// Add text to the second graph.
	tr2.Populate(rawText, language, rule)
	// Run the ranking on the second graph.
	tr2.Ranking(algorithmChain)

	// Get all phrases by weight from the first graph.
	rankedPhrases := textrank.FindPhrases(tr1)

	// Most important phrase from the first graph.
	fmt.Println(rankedPhrases[0])
	// Second important phrase from the first graph.
	fmt.Println(rankedPhrases[1])

	// Get all phrases by weight from the second graph.
	rankedPhrases2 := textrank.FindPhrases(tr2)

	// Most important phrase from the second graph.
	fmt.Println(rankedPhrases2[0])
	// Second important phrase from the second graph.
	fmt.Println(rankedPhrases2[1])
}
Using different non-English languages
English is used by default, but it is possible to add any language. To use other languages a stop word list is required, which you can find here: https://github.com/stopwords-iso
package main

import (
	"fmt"

	"github.com/DavidBelicza/TextRank"
)

func main() {
	rawText := "Your long raw text, it could be a book. Lorem ipsum..."
	// TextRank object
	tr := textrank.NewTextRank()
	// Default Rule for parsing.
	rule := textrank.NewDefaultRule()
	// Default Language for filtering stop words.
	language := textrank.NewDefaultLanguage()

	// Add Spanish stop words (just an example).
	language.SetWords("es", []string{"uno", "dos", "tres", "yo", "es", "eres"})
	// Activate Spanish.
	language.SetActiveLanguage("es")

	// Default algorithm for ranking text.
	algorithmDef := textrank.NewDefaultAlgorithm()

	// Add text.
	tr.Populate(rawText, language, rule)
	// Run the ranking.
	tr.Ranking(algorithmDef)

	// Get all phrases by weight.
	rankedPhrases := textrank.FindPhrases(tr)

	// Most important phrase.
	fmt.Println(rankedPhrases[0])
	// Second important phrase.
	fmt.Println(rankedPhrases[1])
}
Asynchronous usage by goroutines
It is thread safe. Independent graphs can receive texts at the same time, and each can also be extended with more text at the same time.
package main

import (
	"fmt"
	"time"

	"github.com/DavidBelicza/TextRank"
)

func main() {
	// A flag to signal when the program has to stop.
	stopProgram := false
	// Channel.
	stream := make(chan string)
	// TextRank object.
	tr := textrank.NewTextRank()

	// Open a new thread/routine.
	go func(tr *textrank.TextRank) {
		// 3 texts.
		rawTexts := []string{
			"Very long text...",
			"Another very long text...",
			"Second another very long text...",
		}

		// Add the 3 texts to the stream channel, one by one.
		for _, rawText := range rawTexts {
			stream <- rawText
		}
	}(tr)

	// Open a new thread/routine.
	go func() {
		// Counter for how many texts have been added to the ranking.
		i := 1

		for {
			// Get a text from the stream channel when a new one arrives.
			rawText := <-stream

			// Default Rule for parsing.
			rule := textrank.NewDefaultRule()
			// Default Language for filtering stop words.
			language := textrank.NewDefaultLanguage()
			// Default algorithm for ranking text.
			algorithm := textrank.NewDefaultAlgorithm()

			// Add text.
			tr.Populate(rawText, language, rule)
			// Run the ranking.
			tr.Ranking(algorithm)

			// Set the stopProgram flag to true when all 3 texts have been added.
			if i == 3 {
				stopProgram = true
			}

			i++
		}
	}()

	// The main thread has to run while the goroutines run. When stopProgram is
	// true, the loop finishes.
	for !stopProgram {
		time.Sleep(time.Second * 1)
	}

	// Get all phrases by weight.
	phrases := textrank.FindPhrases(tr)
	// Most important phrase.
	fmt.Println(phrases[0])
}
A SIMPLE VISUAL REPRESENTATION
The image below is a representation of how the simplest text-ranking algorithm works. This algorithm can be replaced by another one by injecting a different Algorithm interface implementation.

Documentation
Overview
Package textrank is an implementation of the Text Rank algorithm in Go with extendable features (automatic summarization, phrase extraction). It supports multithreading by goroutines. The package is under The MIT Licence.
Index
- func FindPhrases(textRank *TextRank) []rank.Phrase
- func FindSentencesByPhraseChain(textRank *TextRank, phrases []string) []rank.Sentence
- func FindSentencesByRelationWeight(textRank *TextRank, limit int) []rank.Sentence
- func FindSentencesByWordQtyWeight(textRank *TextRank, limit int) []rank.Sentence
- func FindSentencesFrom(textRank *TextRank, sentenceID int, limit int) []rank.Sentence
- func FindSingleWords(textRank *TextRank) []rank.SingleWord
- func NewChainAlgorithm() *rank.AlgorithmChain
- func NewDefaultAlgorithm() *rank.AlgorithmDefault
- func NewDefaultLanguage() *convert.LanguageDefault
- func NewDefaultRule() *parse.RuleDefault
- type TextRank
Constants
This section is empty.
Variables
This section is empty.
Functions
func FindPhrases
func FindPhrases(textRank *TextRank) []rank.Phrase
FindPhrases function retrieves a slice of Phrase structures from a TextRank object. The return value contains the phrases with IDs, words, weights and quantities, sorted by weight from 1 to 0. The weight is calculated from the quantity of relations between two words. A single phrase is made of exactly two words - not less, not more. (But it's possible to find a chain of phrases with the FindSentencesByPhraseChain function.)
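A short usage fragment; tr is assumed to be a TextRank object that has already been populated and ranked, as in the examples above:

// Phrases ordered by weight, most important first.
phrases := textrank.FindPhrases(tr)

// Each element is a rank.Phrase; printing it shows its words, weight and quantity.
fmt.Println(phrases[0])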
func FindSentencesByPhraseChain
func FindSentencesByPhraseChain(textRank *TextRank, phrases []string) []rank.Sentence
FindSentencesByPhraseChain function retrieves a slice of Sentence structures from a TextRank object and a slice of phrases. The return value contains the ID of the sentence and the sentence text itself. The slice is sorted by weight of word quantities from 1 to 0.
textRank is the TextRank object.
phrases []string is a slice of phrases. A single phrase is made of two words, so when the slice contains 3 words the inner method will search for two phrases. The search algorithm checks len(phrases)! combinations. In the case of three items the number of possible combinations is 3 factorial (3!) = 3 * 2 * 1 = 6.
rawText := "Long raw text, lorem ipsum..."rule := NewDefaultRule()language := NewDefaultLanguage()algorithm := NewDefaultAlgorithm()Append(rawText, language, rule, 1)Ranking(1, algorithm)FindSentencesByPhraseChain(1, []string{ "captain", "james", "kirk",})
The above code searches for the captain james kirk, captain kirk james, james captain kirk, james kirk captain, kirk captain james and kirk james captain combinations in the graph. The 3 words have to be related to each other in the same sentence, but the search algorithm ignores the stop words. So if there is a sentence "James Kirk is the Captain of the Enterprise." the sentence will be returned, because the words "is" and "the" are stop words.
func FindSentencesByRelationWeight
func FindSentencesByRelationWeight(textRank *TextRank, limit int) []rank.Sentence
FindSentencesByRelationWeight function retrieves a slice of Sentence structures from a TextRank object. The return value contains the ID of the sentence and the sentence text itself. The slice is sorted by weight of phrases from 1 to 0.
func FindSentencesByWordQtyWeight
func FindSentencesByWordQtyWeight(textRank *TextRank, limit int) []rank.Sentence
FindSentencesByWordQtyWeight function retrieves a slice of Sentence structures from a TextRank object. The return value contains the ID of the sentence and the sentence text itself. The slice is sorted by weight of word quantities from 1 to 0.
func FindSentencesFrom
func FindSentencesFrom(textRank *TextRank, sentenceID int, limit int) []rank.Sentence
FindSentencesFrom function retrieves a slice of Sentence structures from a TextRank object, starting from a given sentence ID. The return value contains the sentence text itself. The returned slice contains sentences sorted by their IDs in ascending order, starting from the given sentence ID.
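For example, to get 10 sentences in their original order starting from the 5th sentence (the same call as in the finder-query example above; tr is assumed to be already populated and ranked):

// First 10 sentences, starting from the 5th sentence.
sentences := textrank.FindSentencesFrom(tr, 5, 10)
fmt.Println(sentences)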
func FindSingleWords
func FindSingleWords(textRank *TextRank) []rank.SingleWord
FindSingleWords function retrieves a slice of SingleWord structures from a TextRank object. The return value contains the words with IDs, weights and quantities, sorted by weight from 1 to 0. The weight is calculated from the quantity of the word.
func NewChainAlgorithm
func NewChainAlgorithm() *rank.AlgorithmChain
NewChainAlgorithm function retrieves an Algorithm object. It defines how the text-ranking algorithm, the weighting, should work. This is an alternative way of ranking words, weighting by the number of words. Because Algorithm is an interface, it's possible to modify the ranking algorithm by injecting a different implementation. This is the 4th step to use TextRank.
func NewDefaultAlgorithm
func NewDefaultAlgorithm() *rank.AlgorithmDefault
NewDefaultAlgorithm function retrieves an Algorithm object. It defines how the text-ranking algorithm, the weighting, should work. This is the general text rank, weighting the connections between the words to find the strongest phrases. Because Algorithm is an interface, it's possible to modify the ranking algorithm by injecting a different implementation. This is the 4th step to use TextRank.
func NewDefaultLanguage
func NewDefaultLanguage() *convert.LanguageDefault
NewDefaultLanguage function retrieves a default Language object. It defines which words are real and which words are just stop words or useless junk words. It uses the default English stop words, but it's possible to set different stop words in English or any other language. Because Language is an interface, it's possible to modify the ranking by injecting a different Language implementation. This is the 3rd step to use TextRank.
func NewDefaultRule
func NewDefaultRule() *parse.RuleDefault
NewDefaultRule function retrieves a default Rule object that works in most cases in English or similar Latin languages like French or Spanish. The Rule defines how raw text should be split into sentences and words. Because Rule is an interface, it's possible to modify the ranking by injecting a different Rule implementation. This is the 2nd step to use TextRank.
Types
type TextRank
type TextRank struct {
	// contains filtered or unexported fields
}
TextRank structure contains the Rank data object. This structure is a wrapper around the whole text-ranking functionality.
func NewTextRank
func NewTextRank() *TextRank
NewTextRank constructor retrieves a TextRank pointer. This is the 1st step to use TextRank.
func (*TextRank) GetRankData
GetRankData method retrieves the Rank data, in case the developer wants access to the whole graph and to the sentences, words, weights and all of the data, in order to analyze it or to implement a new search logic or finder method.
func (*TextRank) Populate
Populate method adds a raw text to the text-ranking graph. It parses and tokenizes the raw text and prepares it for weighting and scoring. It's possible to append a new raw text to an existing one, even if the previous text has already been ranked. This is the 5th step to use TextRank.
text string must be plain text from a TXT or PDF or any document; it can contain new lines, line breaks or any unnecessary text parts, but it should not contain HTML tags or code.
lang Language object can be loaded from NewDefaultLanguage function.
rule Rule object can be loaded from NewDefaultRule function.