RELATED APPLICATIONS- This is a continuation of U.S. patent application Ser. No. 09/609,202, filed Jun. 20, 2000, now U.S. Pat. No. ______, which claims priority to provisional patent application No. 60/163,850, entitled “An iterative method for lexicon, word segmentation and language model joint optimization”, filed on Nov. 5, 1999 by the inventors of this application, each of which is incorporated herein by reference.[0001] 
TECHNICAL FIELD- This invention generally relates to language modeling and, more specifically, to a system and iterative method for lexicon, word segmentation and language model joint optimization.[0002] 
BACKGROUND- Recent advances in computing power and related technology have fostered the development of a new generation of powerful software applications, including web-browsers, word processing and speech recognition applications. The latest generation of web-browsers, for example, anticipates a uniform resource locator (URL) address once a few of the initial characters of the domain name have been entered. Word processors offer improved spelling and grammar checking capabilities, word prediction, and language conversion. Newer speech recognition applications similarly offer a wide variety of features with impressive recognition and prediction accuracy rates. To be useful to an end-user, these features must execute in substantially real-time. To provide this performance, many applications rely on a tree-like data structure to build a simple language model.[0003] 
- Simplistically, a language model measures the likelihood of any given sentence. That is, a language model can take any sequence of items (words, characters, letters, etc.) and estimate the probability of the sequence. A common approach to building a prior art language model is to utilize a prefix tree-like data structure to build an N-gram language model from a known training set of a textual corpus.[0004] 
- The use of a prefix tree data structure (a.k.a. a suffix tree, or a PAT tree) enables a higher-level application to quickly traverse the language model, providing the substantially real-time performance characteristics described above. Simplistically, the N-gram language model counts the number of occurrences of a particular item (word, character, etc.) in a string (of size N) throughout a text. The counts are used to calculate the probability of the use of the item strings. Traditionally, a tri-gram (N-gram where N=3) approach involves the following steps:[0005] 
- (a) a textual corpus is dissected into a plurality of items (characters, letters, numbers, etc.);[0006] 
- (b) the items (e.g., characters (C)) are segmented (e.g., into words (W)) in accordance with a small, pre-defined lexicon and a simple, pre-defined segmentation algorithm, wherein each W is mapped in the tree to one or more C's;[0007] 
- (c) a language model is trained on the dissected corpus by counting the occurrences of item strings, from which the probability of a sequence of words (W_1, W_2, . . . , W_M) is predicted from the previous two words:[0008] 
- P(W_1, W_2, W_3, . . . , W_M) ≈ Π P(W_i | W_{i-1}, W_{i-2})   (1) 
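- By way of illustration only, the following Python sketch shows the conventional tri-gram training and scoring summarized in steps (a)-(c) and equation (1). The function names and the use of raw maximum-likelihood counts (with no smoothing of unseen events) are illustrative assumptions and are not part of the patent disclosure.

```python
from collections import defaultdict

def train_trigram(words):
    """Count tri-gram and bi-gram occurrences over a segmented corpus
    (a flat list of words) for maximum-likelihood estimation."""
    tri_counts = defaultdict(int)
    bi_counts = defaultdict(int)
    for i in range(2, len(words)):
        tri_counts[(words[i - 2], words[i - 1], words[i])] += 1
        bi_counts[(words[i - 2], words[i - 1])] += 1
    return tri_counts, bi_counts

def trigram_prob(w, w1, w2, tri_counts, bi_counts):
    """P(w | w1, w2) estimated from raw counts; unseen contexts return 0.0
    (a practical model would smooth these)."""
    context = bi_counts.get((w1, w2), 0)
    if context == 0:
        return 0.0
    return tri_counts.get((w1, w2, w), 0) / context

def sentence_prob(words, tri_counts, bi_counts):
    """Approximate P(W_1 ... W_M) as a product of tri-gram probabilities,
    per equation (1)."""
    p = 1.0
    for i in range(2, len(words)):
        p *= trigram_prob(words[i], words[i - 2], words[i - 1],
                          tri_counts, bi_counts)
    return p
```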
- The N-gram language model is limited in a number of respects. First, the counting process utilized in constructing the prefix tree is very time consuming, so only small N-gram models (typically bi-gram or tri-gram) can practically be achieved. Second, as the string size (N) of the N-gram language model increases, the memory required to store the prefix tree increases by 2^N. Thus, the memory required to store the N-gram language model, and the access time required to utilize a large N-gram language model, are prohibitively large for N-grams larger than three (i.e., a tri-gram).[0009] 
- Prior art N-gram language models tend to use a fixed (small) lexicon, a simplistic segmentation algorithm, and will typically only rely on the previous two words to predict the current word (in a tri-gram model).[0010] 
- A fixed lexicon limits the ability of the model to select the best words, whether in general or for a specific task. If a word is not in the lexicon, it does not exist as far as the model is concerned. Thus, a small lexicon is not likely to cover the intended linguistic content.[0011] 
- The segmentation algorithms are often ad-hoc and not based on any statistical or semantic principles. A simplistic segmentation algorithm typically errs in favor of larger words over smaller words. Thus, the model is unable to accurately predict smaller words contained within larger strings that the lexicon accepts.[0012] 
- As a result of the foregoing limitations, a language model built with prior art lexicons and segmentation algorithms tends to be error prone. That is, any errors made in the lexicon or segmentation stage are propagated throughout the language model, thereby limiting its accuracy and predictive attributes.[0013] 
- Finally, restricting the context to at most the previous two words (in a tri-gram language model) is itself limiting, in that a greater context might be required to accurately predict the likelihood of a word. The limitations on these three aspects of the language model often result in poor predictive qualities.[0014] 
- Thus, a system and method for lexicon, segmentation algorithm and language model joint optimization is required, unencumbered by the deficiencies and limitations commonly associated with prior art language modeling techniques. Just such a solution is provided below.[0015] 
SUMMARY- This invention concerns a system and iterative method for lexicon, segmentation and language model joint optimization. To overcome the limitations commonly associated with the prior art, the present invention does not rely on a predefined lexicon or segmentation algorithm; rather, the lexicon and segmentation are dynamically generated in an iterative process of optimizing the language model. According to one implementation, a method for improving language model performance is presented comprising developing an initial language model from a lexicon and segmentation derived from a received corpus using a maximum match technique, and iteratively refining the initial language model by dynamically updating the lexicon and re-segmenting the corpus according to statistical principles until a threshold of predictive capability is achieved.[0016] 
BRIEF DESCRIPTION OF THE DRAWINGS- The same reference numbers are used throughout the figures to reference like components and features.[0017] 
- FIG. 1 is a block diagram of a computer system incorporating the teachings of the present invention;[0018] 
- FIG. 2 is a block diagram of an example modeling agent to iteratively develop a lexicon, segmentation and language model, according to one implementation of the present invention;[0019] 
- FIG. 3 is a graphical representation of a DOMM tree according to one aspect of the present invention;[0020] 
- FIG. 4 is a flow chart of an example method for building a DOMM tree;[0021] 
- FIG. 5 is a flow chart of an example method for lexicon, segmentation and language model joint optimization, according to the teachings of the present invention;[0022] 
- FIG. 6 is a flow chart detailing the method steps for generating an initial lexicon, and iteratively altering a dynamically generated lexicon, segmentation and language model until convergence, according to one implementation of the present invention; and[0023] 
- FIG. 7 is a storage medium with a plurality of executable instructions which, when executed, implement the innovative modeling agent of the present invention, according to an alternate embodiment of the present invention.[0024] 
DETAILED DESCRIPTION- This invention concerns a system and iterative method for lexicon, segmentation and language model joint optimization. In describing the present invention, an innovative language model, the Dynamic Order Markov Model (DOMM), is referenced. A detailed description of DOMM is presented in copending U.S. patent application Ser. No. 09/608,526 entitled A Method and Apparatus for Generating and Managing a Language Model Data Structure, by Lee, et al., the disclosure of which is expressly incorporated herein by reference.[0025] 
- In the discussion herein, the invention is described in the general context of computer-executable instructions, such as program modules, being executed by one or more conventional computers. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with other computer system configurations, including hand-held devices, personal digital assistants, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. In a distributed computer environment, program modules may be located in both local and remote memory storage devices. It is noted, however, that modification to the architecture and methods described herein may well be made without deviating from the spirit and scope of the present invention.[0026] 
- Example Computer System[0027] 
- FIG. 1 illustrates an example computer system 102 including an innovative language modeling agent 104, to jointly optimize a lexicon, segmentation and language model according to the teachings of the present invention. It should be appreciated that although depicted as a separate, stand alone application in FIG. 1, language modeling agent 104 may well be implemented as a function of an application, e.g., word processor, web browser, speech recognition system, etc. Moreover, although depicted as a software application, those skilled in the art will appreciate that the innovative modeling agent may well be implemented in hardware, e.g., a programmable logic array (PLA), a special purpose processor, an application specific integrated circuit (ASIC), microcontroller, and the like.[0028] 
- It will be evident, from the discussion to follow, that computer 102 is intended to represent any of a class of general or special purpose computing platforms which, when endowed with the innovative language modeling agent (LMA) 104, implement the teachings of the present invention in accordance with the first example implementation introduced above. It is to be appreciated that although the language modeling agent is depicted herein as a software application, computer system 102 may alternatively support a hardware implementation of LMA 104 as well. In this regard, but for the description of LMA 104, the following description of computer system 102 is intended to be merely illustrative, as computer systems of greater or lesser capability may well be substituted without deviating from the spirit and scope of the present invention.[0029] 
- As shown, computer 102 includes one or more processors or processing units 132, a system memory 134, and a bus 136 that couples various system components including the system memory 134 to processors 132.[0030] 
- The bus 136 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. The system memory includes read only memory (ROM) 138 and random access memory (RAM) 140. A basic input/output system (BIOS) 142, containing the basic routines that help to transfer information between elements within computer 102, such as during start-up, is stored in ROM 138. Computer 102 further includes a hard disk drive 144 for reading from and writing to a hard disk, not shown, a magnetic disk drive 146 for reading from and writing to a removable magnetic disk 148, and an optical disk drive 150 for reading from or writing to a removable optical disk 152 such as a CD ROM, DVD ROM or other such optical media. The hard disk drive 144, magnetic disk drive 146, and optical disk drive 150 are connected to the bus 136 by a SCSI interface 154 or some other suitable bus interface. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for computer 102.[0031] 
- Although the exemplary environment described herein employs a hard disk 144, a removable magnetic disk 148 and a removable optical disk 152, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROMs), and the like, may also be used in the exemplary operating environment.[0032] 
- A number of program modules may be stored on the hard disk 144, magnetic disk 148, optical disk 152, ROM 138, or RAM 140, including an operating system 158, one or more application programs 160 including, for example, the innovative LMA 104 incorporating the teachings of the present invention, other program modules 162, and program data 164 (e.g., resultant language model data structures, etc.). A user may enter commands and information into computer 102 through input devices such as keyboard 166 and pointing device 168. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are connected to the processing unit 132 through an interface 170 that is coupled to bus 136. A monitor 172 or other type of display device is also connected to the bus 136 via an interface, such as a video adapter 174. In addition to the monitor 172, personal computers often include other peripheral output devices (not shown) such as speakers and printers.[0033] 
- As shown, computer 102 operates in a networked environment using logical connections to one or more remote computers, such as a remote computer 176. The remote computer 176 may be another personal computer, a personal digital assistant, a server, a router or other network device, a network “thin-client” PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer 102, although only a memory storage device 178 has been illustrated in FIG. 1.[0034] 
- As shown, the logical connections depicted in FIG. 1 include a local area network (LAN) 180 and a wide area network (WAN) 182. Such networking environments are commonplace in offices, enterprise-wide computer networks, Intranets, and the Internet. In one embodiment, remote computer 176 executes an Internet Web browser program such as the “Internet Explorer” Web browser manufactured and distributed by Microsoft Corporation of Redmond, Wash. to access and utilize online services.[0035] 
- When used in a LAN networking environment, computer 102 is connected to the local network 180 through a network interface or adapter 184. When used in a WAN networking environment, computer 102 typically includes a modem 186 or other means for establishing communications over the wide area network 182, such as the Internet. The modem 186, which may be internal or external, is connected to the bus 136 via an input/output (I/O) interface 156. In addition to network connectivity, I/O interface 156 also supports one or more printers 188. In a networked environment, program modules depicted relative to the personal computer 102, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.[0036] 
- Generally, the data processors of computer 102 are programmed by means of instructions stored at different times in the various computer-readable storage media of the computer. Programs and operating systems are typically distributed, for example, on floppy disks or CD-ROMs. From there, they are installed or loaded into the secondary memory of a computer. At execution, they are loaded at least partially into the computer's primary electronic memory. The invention described herein includes these and other various types of computer-readable storage media when such media contain instructions or programs for implementing the innovative steps described below in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described below. Furthermore, certain sub-components of the computer may be programmed to perform the functions and steps described below. The invention includes such sub-components when they are programmed as described. In addition, the invention described herein includes data structures, described below, as embodied on various types of memory media.[0037] 
- For purposes of illustration, programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computer, and are executed by the data processor(s) of the computer.[0038] 
- Example Language Modeling Agent[0039] 
- FIG. 2 illustrates a block diagram of an example language modeling agent (LMA) 104, incorporating the teachings of the present invention. As shown, language modeling agent 104 is comprised of one or more controllers 202, innovative analysis engine 204, storage/memory device(s) 206 and, optionally, one or more additional applications (e.g., graphical user interface, prediction application, verification application, estimation application, etc.) 208, each communicatively coupled as shown. It will be appreciated that although depicted in FIG. 2 as a number of disparate blocks, one or more of the functional elements of the LMA 104 may well be combined. In this regard, modeling agents of greater or lesser complexity which iteratively jointly optimize a dynamic lexicon, segmentation and language model may well be employed without deviating from the spirit and scope of the present invention.[0040] 
- As alluded to above, although depicted as a separate functional element, LMA 104 may well be implemented as a function of a higher level application, e.g., a word processor, web browser, speech recognition system, or a language conversion system. In this regard, controller(s) 202 of LMA 104 are responsive to one or more instructional commands from a parent application to selectively invoke the features of LMA 104. Alternatively, LMA 104 may well be implemented as a stand-alone language modeling tool, providing a user with a user interface (208) to selectively implement the features of LMA 104 discussed below.[0041] 
- In either case, controller(s) 202 of LMA 104 selectively invoke one or more of the functions of analysis engine 204 to optimize a language model from a dynamically generated lexicon and segmentation algorithm. Thus, except as configured to effect the teachings of the present invention, controller 202 is intended to represent any of a number of alternate control systems known in the art including, but not limited to, a microprocessor, a programmable logic array (PLA), a micro-machine, an application specific integrated circuit (ASIC) and the like. In an alternate implementation, controller 202 is intended to represent a series of executable instructions to implement the control logic described above.[0042] 
- As shown, the innovative analysis engine 204 is comprised of a Markov probability calculator 212, a data structure generator 210 including a frequency calculation function 213, a dynamic lexicon generation function 214 and a dynamic segmentation function 216, and a data structure memory manager 218. Upon receiving an external indication, controller 202 selectively invokes an instance of the analysis engine 204 to develop, modify and optimize a statistical language model (SLM). More specifically, in contrast to prior art language modeling techniques, analysis engine 204 develops a statistical language model data structure fundamentally based on the Markov transition probabilities between individual items (e.g., characters, letters, numbers, etc.) of a textual corpus (e.g., one or more sets of text). Moreover, as will be shown, analysis engine 204 utilizes as much data (referred to as “context” or “order”) as is available to calculate the probability of an item string. In this regard, the language model of the present invention is aptly referred to as a Dynamic Order Markov Model (DOMM).[0043] 
- When invoked by controller 202 to establish a DOMM data structure, analysis engine 204 selectively invokes the data structure generator 210. In response, data structure generator 210 establishes a tree-like data structure comprised of a plurality of nodes (associated with each of the plurality of items) and denoting inter-node dependencies. As described above, the tree-like data structure is referred to herein as a DOMM data structure, or DOMM tree. Controller 202 receives the textual corpus and stores at least a subset of the textual corpus in memory 206 as a dynamic training set 222 from which the language model is to be developed. It will be appreciated that, in alternate embodiments, a predetermined training set may also be used.[0044] 
- Once the dynamic training set is received, at least a subset of the training set 222 is retrieved by frequency calculation function 213 for analysis. Frequency calculation function 213 identifies a frequency of occurrence for each item (character, letter, number, word, etc.) in the training set subset. Based on inter-node dependencies, data structure generator 210 assigns each item to an appropriate node of the DOMM tree, with an indication of the frequency value (C_i) and a compare bit (b_i).[0045] 
- The Markov probability calculator 212 calculates the probability of an item (character, letter, number, etc.) from a context (j) of associated items. More specifically, according to the teachings of the present invention, the Markov probability of a particular item (C_i) is dependent on as many previous characters as the data “allows”; in other words:[0046] 
- P(C_1, C_2, C_3, . . . , C_N) ≈ Π P(C_i | C_{i-1}, C_{i-2}, C_{i-3}, . . . , C_j)   (2) 
- The number of characters employed as context (j) by Markov probability calculator 212 is a “dynamic” quantity that is different for each sequence of characters C_i, C_{i-1}, C_{i-2}, C_{i-3}, etc. According to one implementation, the number of characters relied upon for context (j) by Markov probability calculator 212 is dependent, at least in part, on a frequency value for each of the characters, i.e., the rate at which they appear throughout the corpus. More specifically, if in identifying the items of the corpus Markov probability calculator 212 does not identify at least a minimum occurrence frequency for a particular item, it may be “pruned” (i.e., removed) from the tree as being statistically irrelevant. According to one embodiment, the minimum frequency threshold is three (3).[0047] 
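- As a rough, non-authoritative sketch of the dynamic-order computation behind equation (2), the code below backs off to the longest context whose observed frequency meets the minimum threshold of three. The brute-force counting and the particular back-off rule are assumptions made for illustration; the actual calculator operates over the DOMM tree described below.

```python
from collections import defaultdict

MIN_FREQ = 3  # minimum occurrence frequency; rarer contexts are treated as pruned

def count_contexts(corpus, max_order=8):
    """Count every (context, item) pair for contexts of up to max_order items."""
    counts = defaultdict(int)
    for i in range(len(corpus)):
        for order in range(0, max_order + 1):
            if i - order < 0:
                break
            context = tuple(corpus[i - order:i])
            counts[(context, corpus[i])] += 1
    return counts

def dynamic_order_prob(item, history, counts):
    """P(item | history) using the longest suffix of the history whose total
    count still meets MIN_FREQ, per equation (2)."""
    for order in range(len(history), -1, -1):          # longest context first
        context = tuple(history[len(history) - order:])
        total = sum(c for (ctx, _), c in counts.items() if ctx == context)
        if total >= MIN_FREQ:
            return counts.get((context, item), 0) / total
    return 0.0
```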
- As alluded to above, analysis engine 204 does not rely on a fixed lexicon or a simple segmentation algorithm (both of which tend to be error prone). Rather, analysis engine 204 selectively invokes a dynamic segmentation function 216 to segment items (characters or letters, for example) into strings (e.g., words). More precisely, segmentation function 216 segments the training set 222 into subsets (chunks) and calculates a cohesion score (i.e., a measure of the similarity between items within the subset). The segmentation and cohesion calculation is iteratively performed by segmentation function 216 until the cohesion score for each subset reaches a predetermined threshold.[0048] 
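- The patent does not prescribe a particular cohesion measure. Purely as an illustration, the sketch below uses the average pointwise mutual information (PMI) of adjacent items as the cohesion score and repeatedly splits a chunk at its weakest internal link until every chunk meets a threshold; the PMI choice, the threshold value, and the splitting rule are all assumptions.

```python
import math
from collections import Counter

def cohesion(chunk, item_counts, pair_counts, total):
    """Average pointwise mutual information of adjacent items in a chunk
    (an illustrative cohesion score, not the patent's definition)."""
    links = list(zip(chunk, chunk[1:]))
    if not links:
        return float("inf")              # a single item is trivially cohesive
    score = 0.0
    for a, b in links:
        p_a = item_counts[a] / total
        p_b = item_counts[b] / total
        p_ab = pair_counts.get((a, b), 0) / total
        if p_ab == 0.0:
            return float("-inf")
        score += math.log(p_ab / (p_a * p_b))
    return score / len(links)

def segment_by_cohesion(items, threshold=1.0):
    """Repeatedly split the chunk with the weakest adjacent link until every
    chunk's cohesion score meets the threshold."""
    item_counts = Counter(items)
    pair_counts = Counter(zip(items, items[1:]))
    total = len(items)
    chunks = [list(items)]
    changed = True
    while changed:
        changed = False
        next_chunks = []
        for chunk in chunks:
            if cohesion(chunk, item_counts, pair_counts, total) >= threshold:
                next_chunks.append(chunk)
                continue
            links = list(zip(chunk, chunk[1:]))
            weakest = min(range(len(links)),
                          key=lambda i: cohesion(links[i], item_counts,
                                                 pair_counts, total))
            next_chunks.append(chunk[:weakest + 1])   # split at the weakest link
            next_chunks.append(chunk[weakest + 1:])
            changed = True
        chunks = next_chunks
    return chunks
```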
- The lexicon generation function 214 is invoked to dynamically generate and maintain a lexicon 220 in memory 206. According to one implementation, lexicon generation function 214 analyzes the segmentation results and generates a lexicon from item strings with a Markov transition probability that exceeds a threshold. In this regard, lexicon generation function 214 develops a dynamic lexicon 220 from item strings which exceed a pre-determined Markov transition probability taken from one or more language models developed by analysis engine 204. Accordingly, unlike prior art language models which rely on a known, fixed lexicon that is prone to error, analysis engine 204 dynamically generates a lexicon of statistically significant, statistically accurate item strings from one or more language models developed over a period of time. According to one embodiment, the lexicon 220 comprises a “virtual corpus” that Markov probability calculator 212 relies upon (in addition to the dynamic training set) in developing subsequent language models.[0049] 
- When invoked to modify or utilize the DOMM language model data structure, analysis engine 204 selectively invokes an instance of data structure memory manager 218. According to one aspect of the invention, data structure memory manager 218 utilizes system memory as well as extended memory to maintain the DOMM data structure. More specifically, as will be described in greater detail below with reference to FIGS. 6 and 7, data structure memory manager 218 employs a WriteNode function and a ReadNode function (not shown) to maintain a subset of the most recently used nodes of the DOMM data structure in a first level cache 224 of system memory 206, while relegating least recently used nodes to extended memory (e.g., disk files in hard drive 144, or some remote drive), to provide for improved performance characteristics. In addition, a second level cache of system memory 206 is used to aggregate write commands until a predetermined threshold has been met, at which point data structure memory manager 218 makes one aggregate WriteNode call to an appropriate location in memory. Although depicted as a separate functional element, those skilled in the art will appreciate that data structure memory manager 218 may well be combined as a functional element of controller(s) 202 without deviating from the spirit and scope of the present invention.[0050] 
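- The following sketch illustrates the two-level caching scheme just described: a first level LRU cache of nodes in memory, a second level write buffer that aggregates WriteNode operations, and per-node disk files standing in for extended memory. Only the ReadNode/WriteNode roles come from the description above; the class name, file layout, serialization and threshold values are illustrative assumptions.

```python
import os
import pickle
from collections import OrderedDict

class NodeCache:
    """Two-level node store: an in-memory LRU cache, a write buffer that
    batches updates, and one small disk file per node as extended memory."""

    def __init__(self, cache_dir, cache_size=10000, flush_threshold=1000):
        self.cache_dir = cache_dir
        self.cache_size = cache_size
        self.flush_threshold = flush_threshold
        self.lru = OrderedDict()     # node_id -> node, most recently used last
        self.write_buffer = {}       # node_id -> node awaiting an aggregate write
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, node_id):
        return os.path.join(self.cache_dir, f"{node_id}.node")

    def read_node(self, node_id):
        """ReadNode: serve from the LRU cache when possible, otherwise fall
        back to the write buffer or to extended (disk) memory."""
        if node_id in self.lru:
            self.lru.move_to_end(node_id)
            return self.lru[node_id]
        if node_id in self.write_buffer:
            return self.write_buffer[node_id]
        with open(self._path(node_id), "rb") as f:
            node = pickle.load(f)
        self._cache(node_id, node)
        return node

    def write_node(self, node_id, node):
        """WriteNode: updates accumulate in the second level cache and are
        flushed to disk in one batch once the threshold is reached."""
        self._cache(node_id, node)
        self.write_buffer[node_id] = node
        if len(self.write_buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        for node_id, node in self.write_buffer.items():
            with open(self._path(node_id), "wb") as f:
                pickle.dump(node, f)
        self.write_buffer.clear()

    def _cache(self, node_id, node):
        self.lru[node_id] = node
        self.lru.move_to_end(node_id)
        while len(self.lru) > self.cache_size:
            self.lru.popitem(last=False)   # evict the least recently used node
```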
- Example Data Structure—Dynamic Order Markov Model (DOMM) Tree[0051] 
- FIG. 3 graphically represents a conceptual illustration of an example Dynamic Order Markov Model tree-like data structure 300, according to the teachings of the present invention. To conceptually illustrate how a DOMM tree data structure 300 is configured, FIG. 3 presents an example DOMM data structure 300 for a language model developed from the English alphabet, i.e., A, B, C, . . . Z. As shown, the DOMM tree 300 is comprised of one or more root nodes 302 and one or more subordinate nodes 304, each associated with an item (character, letter, number, word, etc.) of a textual corpus, logically coupled to denote dependencies between nodes. According to one implementation of the present invention, root nodes 302 are comprised of an item and a frequency value (e.g., a count of how many times the item occurs in the corpus). At some level below the root node level 302, the subordinate nodes are arranged in binary sub-trees, wherein each node includes a compare bit (b_i), an item with which the node is associated (A, B, . . . ), and a frequency value (C_N) for the item.[0052] 
- Thus, beginning with the root node associated with the item B 306, a binary sub-tree is comprised of subordinate nodes 308-318 denoting the relationships between nodes and the frequency with which they occur. Given this conceptual example, it should be appreciated that starting at a root node, e.g., 306, the complexity of a search of the DOMM tree approximates log(N), where N is the total number of nodes to be searched.[0053] 
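- As a rough sketch of the node layout just described (item, frequency count, compare bit, and binary sub-tree links), consider the following. The PATRICIA-style bit test used to descend the sub-tree is an assumption; the figure conveys only the structure and the approximately log(N) search cost.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DommNode:
    """One node of a DOMM binary sub-tree: the item it represents, its
    occurrence count, its compare bit, and its two children."""
    item: str
    count: int = 0
    compare_bit: int = 0
    left: Optional["DommNode"] = None
    right: Optional["DommNode"] = None

def find_node(root: Optional[DommNode], item: str) -> Optional[DommNode]:
    """Descend the binary sub-tree, branching on the bit of the item's code
    selected by each node's compare bit; roughly log(N) comparisons for a
    balanced sub-tree."""
    code = ord(item[0])
    node = root
    while node is not None:
        if node.item == item:
            return node
        bit = (code >> node.compare_bit) & 1
        node = node.right if bit else node.left
    return None
```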
- As alluded to above, the size of the DOMM tree 300 may exceed the space available in the memory device 206 of LMA 104 and/or the main memory 140 of computer system 102. Accordingly, data structure memory manager 218 facilitates storage of a DOMM tree data structure 300 across main memory (e.g., 140 and/or 206) into an extended memory space, e.g., disk files on a mass storage device such as hard drive 144 of computer system 102.[0054] 
- Example Operation and Implementation[0055] 
- Having introduced the functional and conceptual elements of the present invention with reference to FIGS. 1-3, the operation of the innovative language modeling agent 104 will now be described with reference to FIGS. 4-7.[0056] 
- Building DOMM Tree Data Structure[0057] 
- FIG. 4 is a flow chart of an example method for building a Dynamic Order Markov Model (DOMM) data structure, according to one aspect of the present invention. As alluded to above, language modeling agent 104 may be invoked directly by a user or a higher-level application. In response, controller 202 of LMA 104 selectively invokes an instance of analysis engine 204, and a textual corpus (e.g., one or more documents) is loaded into memory 206 as a dynamic training set 222 and split into subsets (e.g., sentences, lines, etc.), block 402. In response, data structure generator 210 assigns each item of the subset to a node in the data structure and calculates a frequency value for the item, block 404. According to one implementation, once data structure generator 210 has populated the data structure with the subset, frequency calculation function 213 is invoked to identify the occurrence frequency of each item within the training set subset.[0058] 
- In block 406, data structure generator 210 determines whether additional subsets of the training set remain and, if so, the next subset is read in block 408 and the process continues with block 404. In an alternate implementation, data structure generator 210 completely populates the data structure, a subset at a time, before invocation of the frequency calculation function 213. In an alternate embodiment, frequency calculation function 213 simply counts each item as it is placed into associated nodes of the data structure.[0059] 
- If, in block 406, data structure generator 210 has completely loaded the data structure 300 with items of the training set 222, data structure generator 210 may optionally prune the data structure, block 410. A number of mechanisms may be employed to prune the resultant data structure 300.[0060] 
- Example Method for Lexicon, Segmentation and Language Model Joint Optimization[0061] 
- FIG. 5 is a flow chart of an example method for lexicon, segmentation and language model joint optimization, according to the teachings of the present invention. As shown, the method begins with block 400 wherein LMA 104 is invoked and a prefix tree of at least a subset of the received corpus is built. More specifically, as detailed in FIG. 4, data structure generator 210 of modeling agent 104 analyzes the received corpus and selects at least a subset as a training set, from which a DOMM tree is built.[0062] 
- In block 502, a very large lexicon is built from the prefix tree and pre-processed to remove some obviously illogical words. More specifically, lexicon generation function 214 is invoked to build an initial lexicon from the prefix tree. According to one implementation, the initial lexicon is built from the prefix tree using all sub-strings whose length is less than some pre-defined value, say ten (10) items (i.e., the sub-string is ten nodes or less from root to the most subordinate node). Once the initial lexicon is compiled, lexicon generation function 214 prunes the lexicon by removing some obviously illogical words (see, e.g., block 604, below). According to one implementation, lexicon generation function 214 appends a pre-defined lexicon with the new, initial lexicon generated from at least the training set of the received corpus.[0063] 
- In block 504, at least the training set of the received corpus is segmented, using the initial lexicon. More particularly, dynamic segmentation function 216 is invoked to segment at least the training set of the received corpus to generate an initial segmented corpus. Those skilled in the art will appreciate that there are a number of ways in which the training corpus could be segmented, e.g., fixed-length segmentation, Maximum Match, etc. To do so without having yet generated a statistical language model (SLM) from the received corpus, dynamic segmentation function 216 utilizes a Maximum Match technique to provide an initial segmented corpus. Accordingly, segmentation function 216 starts at the beginning of an item string (or branch of the DOMM tree) and searches the lexicon to see if the initial item (I_1) is a one-item “word”. Segmentation function 216 then combines it with the next item in the string to see if the combination (e.g., I_1I_2) is found as a “word” in the lexicon, and so on. According to one implementation, the longest string (I_1, I_2, . . . , I_N) of items found in the lexicon is deemed to be the correct segmentation for that string. It is to be appreciated that more complex Maximum Match algorithms may well be utilized by segmentation function 216 without deviating from the scope and spirit of the present invention.[0064] 
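- A minimal sketch of the forward Maximum Match segmentation just described follows; the ten-item bound mirrors the sub-string limit mentioned above, and the greedy, left-to-right variant shown is the simplest member of the Maximum Match family.

```python
def max_match(items, lexicon, max_word_len=10):
    """Greedy forward Maximum Match over a string of single-character items:
    at each position take the longest item string found in the lexicon,
    falling back to a single item."""
    words = []
    i = 0
    while i < len(items):
        match = items[i]                              # single-item fallback
        for j in range(min(len(items), i + max_word_len), i + 1, -1):
            candidate = "".join(items[i:j])
            if candidate in lexicon:
                match = candidate
                break
        words.append(match)
        i += len(match)
    return words
```

For example, max_match("thisisatest", {"this", "is", "a", "test"}) yields ["this", "is", "a", "test"].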
- Having developed an initial lexicon and segmentation from the training corpus, an iterative process is entered wherein the lexicon, segmentation and language model are jointly optimized, block 506. More specifically, as will be shown in greater detail below, the innovative iterative optimization employs a statistical language modeling approach to dynamically adjust the segmentation and lexicon to provide an optimized language model. That is, unlike prior art language modeling techniques, modeling agent 104 does not rely on a pre-defined static lexicon, or simplistic segmentation algorithm to generate a language model. Rather, modeling agent 104 utilizes the received corpus, or at least a subset thereof (training set), to dynamically generate a lexicon and segmentation to produce an optimized language model. In this regard, language models generated by modeling agent 104 do not suffer from the drawbacks and limitations commonly associated with prior art modeling systems.[0065] 
- Having introduced the innovative process in FIG. 5, FIG. 6 presents a more detailed flow chart for generating an initial lexicon, and the iterative process of refining the lexicon and segmentation to optimize the language model, according to one implementation of the present invention. As before, the method begins with step 400 (FIG. 4) of building a prefix tree from the received corpus. As discussed above, the prefix tree may be built using the entire corpus or, alternatively, using a subset of the entire corpus (referred to as a training corpus).[0066] 
- In block 502, the process of generating an initial lexicon begins with block 602, wherein lexicon generation function 214 generates an initial lexicon from the prefix tree by identifying substrings (or branches of the prefix tree) with less than a select number of items. According to one implementation, lexicon generation function 214 identifies substrings of ten (10) items or less to comprise the initial lexicon. In block 604, lexicon generation function 214 analyzes the initial lexicon generated in step 602 for obviously illogical substrings, removing these substrings from the initial lexicon. That is, lexicon generation function 214 analyzes the initial lexicon of substrings for illogical, or improbable, words and removes these words from the lexicon. For the initial pruning, dynamic segmentation function 216 is invoked to segment at least the training set of the received corpus to generate a segmented corpus. According to one implementation, the Maximum Match algorithm is used to segment based on the initial lexicon. Then the frequency calculation function 213 is invoked to compute the frequency of occurrence in the received corpus for each word in the lexicon, sorting the lexicon according to the frequency of occurrence. The word with the lowest frequency of occurrence is identified and deleted from the lexicon; the threshold for this deletion and re-segmentation may be determined according to the size of the corpus. According to one implementation, a corpus of 600M items may well utilize a frequency threshold of 500 for a word to be included within the lexicon. In this way, most of the obviously illogical words can be deleted from the initial lexicon.[0067] 
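- The sketch below ties blocks 602 and 604 together under simplifying assumptions: candidate words are all sub-strings of ten items or less, the corpus is segmented with the max_match sketch shown earlier, and every word whose observed frequency falls below the cutoff is dropped in a single pass (the text describes removing the rarest words; a single-pass cutoff is used here only for brevity).

```python
from collections import Counter

def initial_lexicon(corpus, max_len=10):
    """Block 602: enumerate every sub-string of up to max_len items as a
    candidate word (emulating enumeration of the prefix-tree branches)."""
    candidates = set()
    for i in range(len(corpus)):
        for j in range(i + 1, min(i + max_len, len(corpus)) + 1):
            candidates.add(corpus[i:j])
    return candidates

def prune_lexicon(corpus, lexicon, freq_threshold=500):
    """Block 604: segment with Maximum Match, count how often each lexicon
    word actually occurs, and drop words below the frequency threshold
    (500 follows the 600M-item example in the text)."""
    counts = Counter(max_match(corpus, lexicon))
    return {word for word in lexicon if counts[word] >= freq_threshold}
```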
- Once the initial lexicon is generated and pruned, step 502, the received corpus is segmented based, at least in part, on the initial lexicon, block 504. As described above, according to one implementation, the initial segmentation of the corpus is performed using a maximum matching process.[0068] 
- Once the initial lexicon and corpus segmentation process is complete, an iterative process of dynamically altering the lexicon and segmentation begins to optimize a statistical language model (SLM) from the received corpus (or training set), block 506. As shown, the process begins in block 606, wherein the Markov probability calculator 212 utilizes the initial lexicon and segmentation to begin language model training using the segmented corpus. That is, given the initial lexicon and an initial segmentation, a statistical language model may be generated therefrom. It should be noted that although the language model does not yet benefit from a refined lexicon and a statistically based segmentation (which will evolve in the steps to follow), it is nonetheless fundamentally based on the received corpus itself.[0069] 
- In block 608, having performed initial language model training, the segmented corpus (or training set) is re-segmented using SLM-based segmentation. Given a sentence w_1, w_2, . . . , w_n, there are M possible ways to segment it (where M ≥ 1). Dynamic segmentation function 216 computes a probability (p_i) of each segmentation (S_i) based on an N-gram statistical language model. According to one implementation, segmentation function 216 utilizes a tri-gram (i.e., N=3) statistical language model for determining the probability of any given segmentation. A Viterbi search algorithm is employed to find the most probable segmentation S_k, where:[0070] 
- S_k = argmax_i(p_i)   (3) 
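- The dynamic-programming search below recovers S_k of equation (3) over all segmentations of a sentence. To keep the sketch short it scores each candidate word independently from a word-probability table rather than with the tri-gram context described above; the table, the word-length bound, and the log-probability scoring are illustrative assumptions.

```python
import math

def best_segmentation(chars, lexicon_probs, max_word_len=10):
    """Viterbi-style search for the most probable segmentation, where
    lexicon_probs maps a word to its probability."""
    n = len(chars)
    best_score = [float("-inf")] * (n + 1)
    best_prev = [0] * (n + 1)
    best_score[0] = 0.0
    for end in range(1, n + 1):
        for start in range(max(0, end - max_word_len), end):
            word = chars[start:end]
            p = lexicon_probs.get(word)
            if p is None or best_score[start] == float("-inf"):
                continue
            score = best_score[start] + math.log(p)
            if score > best_score[end]:
                best_score[end] = score
                best_prev[end] = start
    if best_score[n] == float("-inf"):
        return None                       # no segmentation covers the sentence
    words, end = [], n
    while end > 0:
        start = best_prev[end]
        words.append(chars[start:end])
        end = start
    return list(reversed(words))
```

For example, best_segmentation("thisisatest", {"this": 0.3, "is": 0.2, "a": 0.1, "test": 0.2}) returns ["this", "is", "a", "test"].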
- In block 610, the lexicon is updated using the re-segmented corpus resulting from the SLM-based segmentation described above. According to one implementation, modeling agent 104 invokes frequency calculation function 213 to compute the frequency of occurrence in the received corpus for each word in the lexicon, sorting the lexicon according to the frequency of occurrence. The word with the lowest frequency is identified and deleted from the lexicon. All occurrences of that word must then be re-segmented into smaller words, and the unigram counts for those smaller words are re-computed. The threshold for this deletion and re-segmentation may be determined according to the size of the corpus. According to one implementation, a corpus of 600M items may well utilize a frequency threshold of 500 for a word to be included within the lexicon.[0071] 
- In block 612, the language model is updated to reflect the dynamically generated lexicon and the SLM-based segmentation, and a measure of the language model perplexity (i.e., an inverse probability measure) is computed by Markov probability calculator 212. If the perplexity continues to converge (toward zero (0)), i.e., improve, the process continues with block 608, wherein the lexicon and segmentation are once again modified with the intent of further improving the language model performance (as measured by perplexity). If in block 614 it is determined that the language model has not improved as a result of the recent modifications to the lexicon and segmentation, a further determination of whether the perplexity has reached an acceptable threshold is made, block 616. If so, the process ends.[0072] 
- If, however, the language model has not yet reached an acceptable perplexity threshold, lexicon generation function 214 deletes the word with the smallest frequency of occurrence in the corpus from the lexicon, re-segmenting the word into smaller words, block 618, and the process continues with block 610.[0073] 
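- Pulling blocks 606 through 618 together, the outline below shows one way the convergence test might drive the loop: retrain, re-segment, measure perplexity, and, when perplexity stops improving but has not yet reached the target, drop the rarest lexicon word before continuing. The train_slm and segment_with_slm callables, the model.log_prob interface, the set-typed lexicon and the iteration cap are all assumed placeholders for the tri-gram training and Viterbi segmentation steps described above; the initial segmentation reuses the max_match sketch shown earlier.

```python
import math
from collections import Counter

def perplexity(log_prob_sum, num_words):
    """Perplexity as an inverse probability measure: exp of the negative
    average log-probability over the segmented corpus."""
    return math.exp(-log_prob_sum / num_words)

def joint_optimize(corpus, lexicon, train_slm, segment_with_slm,
                   target_perplexity, max_iterations=50):
    """Iteratively refine the lexicon, segmentation and language model until
    perplexity stops improving or reaches the target."""
    segmentation = max_match(corpus, lexicon)        # initial segmentation (block 504)
    best_ppl = float("inf")
    for _ in range(max_iterations):
        model = train_slm(segmentation)                           # blocks 606/612
        segmentation = segment_with_slm(corpus, lexicon, model)   # block 608
        ppl = perplexity(model.log_prob(segmentation), len(segmentation))
        if ppl < best_ppl:                           # still converging (block 614)
            best_ppl = ppl
            continue
        if best_ppl <= target_perplexity:            # acceptable threshold (block 616)
            break
        # block 618: drop the rarest word so its occurrences are re-segmented
        # into smaller words on the next pass
        counts = Counter(segmentation)
        rarest = min(lexicon, key=lambda w: counts.get(w, 0))
        lexicon.discard(rarest)
    return lexicon, segmentation, best_ppl
```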
- It is to be appreciated, based on the foregoing, that innovative language modeling agent 104 generates an optimized language model premised on a dynamically generated lexicon and segmentation rules statistically predicated on at least a subset of the received corpus. In this regard, the resultant language model has improved computational and predictive capability when compared to prior art language models.[0074] 
ALTERNATE EMBODIMENTS- FIG. 7 is a block diagram of a storage medium having stored thereon a plurality of instructions including instructions to implement the innovative modeling agent of the present invention, according to yet another embodiment of the present invention. In general, FIG. 7 illustrates a storage medium/device 700 having stored thereon a plurality of executable instructions 702, at least a subset of which, when executed, implement the innovative modeling agent 104 of the present invention. When executed by a processor of a host system, the executable instructions 702 implement the modeling agent to generate a statistical language model representation of a textual corpus for use by any of a host of other applications executing on or otherwise available to the host system.[0075] 
- As used herein, storage medium 700 is intended to represent any of a number of storage devices and/or storage media known to those skilled in the art such as, for example, volatile memory devices, non-volatile memory devices, magnetic storage media, optical storage media, and the like. Similarly, the executable instructions are intended to reflect any of a number of software languages known in the art such as, for example, C++, Visual Basic, Hypertext Markup Language (HTML), Java, eXtensible Markup Language (XML), and the like. Moreover, it is to be appreciated that the storage medium/device 700 need not be co-located with any host system. That is, storage medium/device 700 may well reside within a remote server communicatively coupled to and accessible by an executing system. Accordingly, the software implementation of FIG. 7 is to be regarded as illustrative, as alternate storage media and software embodiments are anticipated within the spirit and scope of the present invention.[0076] 
- Although the invention has been described in language specific to structural features and/or methodological steps, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or steps described. Rather, the specific features and steps are disclosed as exemplary forms of implementing the claimed invention.[0077]