Specific Embodiments
In the following description, specific details such as particular system structures and techniques are set forth for purposes of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or", as used in this specification and the appended claims, refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In order to illustrate the technical solutions described herein, specific embodiments are described below.
Fig. 1 is a schematic flowchart of a text clustering method provided by an embodiment of the present application. As shown in the figure, the method may include the following steps.
Step S101: obtain a training text, and perform word segmentation preprocessing on the training text to obtain a plurality of words to be trained.
The minimum unit of English is the word, and words are separated by spaces. The minimum unit of Chinese, by contrast, is the character: characters appear consecutively, without any explicit separation. From the perspective of semantic analysis, the word is the atomic semantic unit, so a text must first be correctly segmented into words before its meaning can be properly understood; Chinese text therefore needs to be segmented before it is classified or clustered. Word segmentation of Chinese text refers to splitting a continuous character string, according to certain specifications, into individual words that each carry a definite meaning.
In one embodiment, performing word segmentation preprocessing on the training text to obtain the plurality of words to be trained includes the following steps.
Remove the punctuation marks in the training text to obtain a first preprocessed text.
Remove the stop words in the first preprocessed text to obtain a second preprocessed text.
Perform word segmentation on the second preprocessed text to obtain a plurality of text feature words.
In practical applications, the text needs to be preprocessed before word segmentation: punctuation marks such as ".", "*", "/" and "+" are removed, and meaningless function words such as "the", "a", "an", "that", "you", "I", "they", "want to", "open" and "can" are removed as stop words, so as to obtain the text feature words required for training.
Here, stop words are certain characters or words that are automatically filtered out before or after natural language data (or text) is processed in information retrieval, in order to save storage space and improve retrieval efficiency. Stop words are usually entered manually rather than generated automatically, and the generated stop words form a stop-word list.
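By way of illustration only, the following Python sketch shows one possible implementation of the preprocessing steps in the order described above; the stop-word list shown is hypothetical, and the use of the jieba segmenter is an assumption made for illustration rather than a part of the method.

import re
import jieba  # assumed third-party Chinese word segmenter, used only for illustration

# Hypothetical stop-word list; in practice it is loaded from a manually built stop-word file.
STOP_WORDS = {"的", "了", "你", "我", "他们", "想要", "打开", "可以", "the", "a", "an", "that"}

def preprocess(text):
    # Remove punctuation marks such as ". * / +" to obtain the first preprocessed text.
    first_text = re.sub(r"[.\*/+,!?;:。，！？；：、]", "", text)
    # Remove stop words to obtain the second preprocessed text.
    second_text = first_text
    for w in STOP_WORDS:
        second_text = second_text.replace(w, "")
    # Segment the second preprocessed text into individual text feature words.
    return [w for w in jieba.lcut(second_text) if w.strip()]

In practice, stop-word filtering is often applied again after segmentation, since removing substrings before segmentation may split legitimate words; either ordering produces the text feature words used in the following steps.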
Step S102: train a preset transformation model by using the words to be trained, to obtain a trained transformation model.
In one embodiment, training the preset transformation model by using the words to be trained to obtain the trained transformation model includes the following steps.
Separately count the word frequency with which each word to be trained occurs in the training text, and construct a Huffman tree according to the word frequencies.
Obtain initial information, and train the words to be trained according to the initial information and the constructed Huffman tree, to obtain the trained transformation model.
The initial information includes a preset window, an initial parameter vector, and initial word vectors.
A Huffman tree is a binary tree with the smallest weighted path length, also called an optimal binary tree. Referring to Fig. 4, Fig. 4 is a schematic diagram of binary trees provided by an embodiment of the present application. As shown, the weighted path length in Fig. 4(a) is WPL = 5*2 + 7*2 + 2*2 + 13*2 = 54, while the weighted path length in Fig. 4(b) is WPL = 5*3 + 2*3 + 7*2 + 13*1 = 48. Since the weighted path length of Fig. 4(b) is smaller, Fig. 4(b) is the Huffman tree.
In practice, a Huffman tree can be created as follows. Assume there are n nodes with weights w1, w2, ..., wn, and the set of binary trees they form is F = {T1, T2, ..., Tn}; a Huffman tree containing n leaf nodes can then be constructed by the following steps:
1) choose from F the two trees whose root nodes have the smallest weights as the left and right subtrees of a new binary tree, the weight of the new binary tree's root being the sum of the weights of the roots of its left and right subtrees;
2) delete from F the two binary trees chosen in the previous step, and add the newly constructed tree to F;
3) repeat steps 1) and 2) until F contains only one tree.
Illustratively, referring to Fig. 5, Fig. 5 is a schematic diagram of the construction process of a Huffman tree provided by an embodiment of the present application. As shown, there are 5 nodes with weights 1, 3, 2, 5 and 4. First, the two nodes with the smallest weights, i.e., 1 and 2, are chosen to build a binary tree, and their sum, 3, becomes the root of the new binary tree. The node with the next smallest weight, 3, is then combined with this new tree, and the sum of 1, 2 and 3 gives a new root node with weight 6. Continuing in this way yields the binary tree shown in step 5 of Fig. 5. A code sketch of this construction procedure is given below.
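The following Python sketch implements the construction steps listed above, using a priority queue as the set F; the nested-tuple representation of tree nodes is an illustrative assumption.

import heapq
import itertools

def build_huffman_tree(weights):
    # F initially contains one single-node tree per weight; the counter is a tie-breaker
    # so that the heap never has to compare tree structures of equal weight.
    counter = itertools.count()
    forest = [(w, next(counter), ("leaf", w)) for w in weights]
    heapq.heapify(forest)
    while len(forest) > 1:
        # 1) take the two trees with the smallest root weights as left and right subtrees;
        w1, _, left = heapq.heappop(forest)
        w2, _, right = heapq.heappop(forest)
        # the weight of the new root is the sum of the weights of its two subtree roots.
        heapq.heappush(forest, (w1 + w2, next(counter), ("node", left, right)))
        # 2) the two chosen trees have been removed from F and the new tree added to F.
    # 3) repeated until F contains only one tree: the Huffman tree.
    return forest[0][2]

root = build_huffman_tree([1, 3, 2, 5, 4])  # the example weights of Fig. 5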
In one embodiment, training the words to be trained according to the initial information and the constructed Huffman tree, to obtain the trained transformation model, includes the following steps.
Obtain the context of the word to be trained according to the preset window in the initial information, and calculate the sum of the word vectors of the words to be trained included in that context, to obtain a sum vector.
Determine the path from the root node of the Huffman tree to the word to be trained.
Calculate a probability corresponding to the path by using a Bayesian formula and based on the sum vector.
Take the logarithm of the probability to obtain an objective function, and use the objective function as the trained transformation model.
In one embodiment, after the logarithm of the probability is taken to obtain the objective function, the method further includes the following steps.
Take the derivative of the objective function with respect to the initial parameter vector in the initial information to obtain a first increment, and update the initial parameter vector by using θ' = θ0 + αη1.
Take the derivative of the objective function with respect to the sum vector to obtain a second increment, and update the initial word vector by using X' = X0 + βη2.
Here, θ' is the updated parameter vector, θ0 is the initial parameter vector, α is a first preset weight, η1 is the first increment, X' is the updated word vector of the word to be trained, X0 is the initial word vector of the word to be trained, β is a second preset weight, and η2 is the second increment.
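As a non-limiting sketch, the following numpy code shows one possible form of a single update under the formulas above, modeled on hierarchical-softmax-style training. The sigmoid form of the per-node probability, the 0/1 path codes, and the application of the second increment to the word's own vector are assumptions added for illustration and are not recited above.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_one_word(context_vectors, word_vector, path_thetas, path_codes, alpha, beta):
    # context_vectors: word vectors of the words inside the preset window (1-D numpy arrays)
    # word_vector:     X0, the initial word vector of the word to be trained
    # path_thetas:     theta_0 parameter vectors of the non-leaf nodes on the Huffman path
    # path_codes:      0/1 branch code of each node on the path from the root to the word
    # alpha, beta:     the first and second preset weights
    X_sum = np.sum(context_vectors, axis=0)      # the sum vector of the context word vectors
    eta2 = np.zeros_like(X_sum)                  # second increment: d(objective)/d(sum vector)
    for j, (theta0, d) in enumerate(zip(path_thetas, path_codes)):
        q = sigmoid(theta0 @ X_sum)              # per-node probability along the path
        g = (1 - d) - q                          # gradient factor of the log objective
        eta2 += g * theta0                       # accumulate before theta is updated
        eta1 = g * X_sum                         # first increment: d(objective)/d(theta_0)
        path_thetas[j] = theta0 + alpha * eta1   # theta' = theta_0 + alpha * eta_1
    return path_thetas, word_vector + beta * eta2  # X' = X_0 + beta * eta_2

Repeating this update over all words to be trained adjusts the parameter vectors and word vectors, giving the trained transformation model used in step S104.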
Step S103: obtain a text to be clustered, and perform word segmentation preprocessing on the text to be clustered to obtain a plurality of text feature words.
Step S103 is similar to step S101; for details, reference may be made to the description of step S101.
Step S104: convert each text feature word into a word vector by using the trained transformation model, and superimpose all the word vectors in the text to be clustered to obtain a text vector of the text to be clustered.
In one embodiment, superimposing all the word vectors in the text to be clustered to obtain the text vector of the text to be clustered includes the following steps.
Calculate the weight of each text feature word by using the TF-IDF algorithm.
Multiply the word vector of each text feature word by the weight corresponding to that text feature word to obtain a feature vector of the text feature word.
Superimpose the feature vectors of all the text feature words to obtain the text vector of the text to be clustered.
Here, TF-IDF (term frequency-inverse document frequency) is a weighting technique commonly used in information retrieval and data mining. TF stands for term frequency, and IDF for inverse document frequency. TF-IDF is a statistical method for evaluating how important a word is to a document in a document collection or corpus: the importance of a word increases in proportion to the number of times it appears in the document, but decreases in inverse proportion to the frequency with which it appears in the corpus.
To calculate the weight of a text feature word, first calculate its TF, i.e., term frequency, then calculate its IDF, i.e., inverse document frequency, and finally multiply TF by IDF to obtain the weight of the text feature word.
Illustratively, if a document contains 100 words in total and the word "cow" appears 3 times, the term frequency of "cow" in this document is TF = 3/100 = 0.03. One way to calculate the inverse document frequency (IDF) is to divide the total number of documents in the collection by the number of documents in which "cow" appears. So if "cow" appears in 1,000 documents and the total number of documents is 10,000,000, the inverse document frequency is IDF = lg(10,000,000/1,000) = 4. The weight of the word "cow" is then 0.03 * 4 = 0.12.
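A minimal Python sketch of this weighting and superposition, assuming the trained word vectors are available as a dictionary lookup; the corpus and variable names are illustrative only.

import math
import numpy as np

def tfidf_weight(word, doc_words, corpus_docs):
    tf = doc_words.count(word) / len(doc_words)               # term frequency of the word in the document
    containing = sum(1 for d in corpus_docs if word in d)     # number of documents containing the word
    idf = math.log10(len(corpus_docs) / max(containing, 1))   # inverse document frequency
    return tf * idf                                           # weight = TF * IDF

def text_vector(doc_words, corpus_docs, word_vectors):
    # Superimpose the weighted word vectors of one text to obtain its text vector.
    vec = np.zeros_like(next(iter(word_vectors.values())), dtype=float)
    for w in set(doc_words):
        if w in word_vectors:
            vec += tfidf_weight(w, doc_words, corpus_docs) * np.asarray(word_vectors[w], dtype=float)
    return vec

With 100 words per document, 3 occurrences of "cow", 1,000 documents containing "cow" and 10,000,000 documents in total, tfidf_weight returns 0.03 * 4 = 0.12, matching the example above.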
Step S105: cluster the text vectors to obtain a clustering result.
In one embodiment, clustering the text vectors to obtain the clustering result includes the following steps.
Obtain initialization parameters, the initialization parameters including a preset threshold and a preset learning rate.
Select one text vector from all the text vectors and mark it as a center vector, mark the text vectors other than the center vector as vectors to be clustered, and input each vector to be clustered into the clustering model in turn for clustering.
After all the vectors to be clustered have been input into the clustering model, output the clustering result.
In practical applications, one of the text vectors is randomly selected as the center vector, and the remaining text vectors are then input into the clustering model in sequence. The center vector is equivalent to a cluster center in a clustering algorithm. In other words, in this embodiment of the application, one cluster center is first determined at random, and the other text vectors are then clustered against this cluster center in turn; the specific clustering process is described in the following embodiment.
In one embodiment, inputting each vector to be clustered into the clustering model in turn for clustering includes the following steps.
Calculate the activation value between the vector to be clustered and each center vector by netij = Wi·Xj, where netij is the activation value between the j-th vector to be clustered and the i-th center vector, Wi is the i-th center vector, and Xj is the j-th vector to be clustered.
Select the maximum activation value from all the calculated activation values between the vector to be clustered and the center vectors, take the center vector corresponding to the maximum activation value as a target vector, and determine whether the maximum activation value is greater than the preset threshold.
If the maximum activation value is greater than the preset threshold, update the target vector by using Wt = Wt + ηXj, where Wt is the target vector and η is the preset learning rate.
If the maximum activation value is less than or equal to the preset threshold, mark the vector to be clustered as a new center vector, and increase the number of center vectors by 1.
In practical applications, if the maximum of the activation values between a text vector and the existing cluster centers is greater than the preset threshold, the text vector belongs to the same class as the cluster center corresponding to that maximum activation value; the text vector is then used to update this cluster center, i.e., a new cluster center is redetermined for the class. If the maximum of the activation values between a text vector and the existing cluster centers is less than or equal to the preset threshold, the text vector does not belong to any existing cluster center; the text vector is then defined as a new cluster center, i.e., the number of cluster centers is increased by 1.
When there are multiple text vectors, a single cluster center is determined at the beginning, and several new cluster centers may be generated during clustering; each time a new cluster center is generated, the activation values between the next input text vector and all existing cluster centers need to be calculated.
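A sketch of this incremental clustering procedure, under the assumption that the activation value netij = Wi·Xj is an inner product; for simplicity the first text vector, rather than a randomly selected one, is taken as the initial center vector.

import numpy as np

def cluster(text_vectors, threshold, eta):
    # text_vectors: list of 1-D numpy arrays; threshold and eta are the preset threshold
    # and the preset learning rate from the initialization parameters.
    centers = [text_vectors[0].astype(float)]   # the initial center vector
    labels = [0]
    for x in text_vectors[1:]:                  # each remaining vector is a vector to be clustered
        activations = [w @ x for w in centers]  # net_ij = W_i . X_j for every center vector
        i = int(np.argmax(activations))         # center vector with the maximum activation value
        if activations[i] > threshold:
            centers[i] = centers[i] + eta * x   # W_t = W_t + eta * X_j : update the target vector
            labels.append(i)
        else:
            centers.append(x.astype(float))     # mark the vector to be clustered as a new center vector
            labels.append(len(centers) - 1)     # the number of center vectors is increased by 1
    return centers, labels

The returned labels give, for every text vector, the class (cluster center) it was assigned to, which constitutes the clustering result.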
In one embodiment, after the clustering result is obtained, the method further includes the following steps.
Obtain the center vector and the text vectors included in each class in the clustering result, and count the number of center vectors, taking the number of center vectors as the number of classes.
Calculate a clustering index by using DB = (1/K) Σ max_{n≠m} ((Dm + Dn)/Cmn), where the sum runs over m = 1, ..., K and the maximum is taken over n ≠ m, and determine whether the clustering index is within a preset range.
If the clustering index is within the preset range, the text vectors of the text to be clustered are not re-clustered with the preset clustering model.
Here, DB is the clustering index, K is the number of classes, Dm denotes the average distance of all text vectors in the m-th class to the center vector of the m-th class, Dn denotes the average distance of all text vectors in the n-th class to the center vector of the n-th class, and Cmn denotes the distance between the center vector of the m-th class and the center vector of the n-th class.
Through the foregoing embodiment, the clustering result can be evaluated: the smaller the DB value, the lower the similarity between classes, and the better the clustering result.
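The index defined above has the form of the Davies-Bouldin index; the sketch below computes it from the center vectors and class assignments produced by the previous sketch, using the Euclidean distance as an assumed distance measure.

import numpy as np

def db_index(text_vectors, labels, centers):
    # DB = (1/K) * sum over m of max over n != m of (D_m + D_n) / C_mn
    K = len(centers)
    D = []
    for m in range(K):
        # D_m: average distance of all text vectors in class m to the center vector of class m
        members = [x for x, lab in zip(text_vectors, labels) if lab == m]
        D.append(np.mean([np.linalg.norm(x - centers[m]) for x in members]) if members else 0.0)
    total = 0.0
    for m in range(K):
        # C_mn: distance between the center vector of class m and the center vector of class n
        ratios = [(D[m] + D[n]) / np.linalg.norm(centers[m] - centers[n])
                  for n in range(K) if n != m]
        total += max(ratios) if ratios else 0.0
    return total / K  # the smaller the DB value, the better the clustering result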
In the embodiments of the present application, a training text is obtained, word segmentation preprocessing is performed on the training text to obtain a plurality of words to be trained, and a preset transformation model is trained by using the words to be trained; in this way, a trained transformation model can be obtained. A text to be clustered is then obtained, word segmentation preprocessing is performed on the text to be clustered to obtain a plurality of text feature words, and the text feature words are converted into word vectors by using the trained transformation model; the trained transformation model can convert the text feature words of the text to be clustered into word vectors more accurately. All the word vectors in the text to be clustered are superimposed to obtain the text vector of the text to be clustered, and the text vectors are clustered to obtain a clustering result. By the above method, accurate word vectors can be obtained, which in turn effectively improves the accuracy of the text clustering result.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 2 is a schematic diagram of a text clustering apparatus provided by an embodiment of the present application. For ease of description, only the parts related to the embodiments of the present application are shown.
The text clustering apparatus shown in Fig. 2 may be a software unit, a hardware unit, or a unit combining software and hardware built into an existing terminal device, may be integrated into the terminal device as an independent add-on, or may exist as an independent terminal device.
The text clustering apparatus 2 includes:
an acquiring unit 21, configured to obtain a training text, and perform word segmentation preprocessing on the training text to obtain a plurality of words to be trained;
a training unit 22, configured to train a preset transformation model by using the words to be trained, to obtain a trained transformation model;
a preprocessing unit 23, configured to obtain a text to be clustered, and perform word segmentation preprocessing on the text to be clustered to obtain a plurality of text feature words;
a superposition unit 24, configured to convert the text feature words into word vectors by using the trained transformation model, and superimpose all the word vectors in the text to be clustered to obtain a text vector of the text to be clustered; and
a clustering unit 25, configured to cluster the text vectors to obtain a clustering result.
Optionally, the acquiring unit 21 includes:
a first removal module, configured to remove the punctuation marks in the training text to obtain a first preprocessed text;
a second removal module, configured to remove the stop words in the first preprocessed text to obtain a second preprocessed text; and
a word segmentation module, configured to perform word segmentation on the second preprocessed text to obtain a plurality of text feature words.
Optionally, the training unit 22 includes:
a statistics module, configured to separately count the word frequency with which each word to be trained occurs in the training text, and construct a Huffman tree according to the word frequencies; and
a building module, configured to obtain initial information, and train the words to be trained according to the initial information and the constructed Huffman tree, to obtain the trained transformation model.
The initial information includes a preset window, an initial parameter vector, and initial word vectors.
Optionally, the building module includes:
a first calculation submodule, configured to obtain the context of the word to be trained according to the preset window in the initial information, and calculate the sum of the word vectors of the words to be trained included in that context, to obtain a sum vector;
a determination submodule, configured to determine the path from the root node of the Huffman tree to the word to be trained;
a second calculation submodule, configured to calculate a probability corresponding to the path by using a Bayesian formula and based on the sum vector; and
a third calculation submodule, configured to take the logarithm of the probability to obtain an objective function, and use the objective function as the trained transformation model.
Optionally, the building module further includes:
a fourth calculation submodule, configured to, after the logarithm of the probability is taken to obtain the objective function, take the derivative of the objective function with respect to the initial parameter vector in the initial information to obtain a first increment, and update the initial parameter vector by using θ' = θ0 + αη1; and
a derivation submodule, configured to take the derivative of the objective function with respect to the sum vector to obtain a second increment, and update the initial word vector by using X' = X0 + βη2.
Here, θ' is the updated parameter vector, θ0 is the initial parameter vector, α is a first preset weight, η1 is the first increment, X' is the updated word vector of the word to be trained, X0 is the initial word vector of the word to be trained, β is a second preset weight, and η2 is the second increment.
Optionally, the superposition unit 24 includes:
a calculation module, configured to calculate the weight of each text feature word by using the TF-IDF algorithm;
a multiplication module, configured to multiply the word vector of each text feature word by the weight corresponding to that text feature word to obtain a feature vector of the text feature word; and
a superposition module, configured to superimpose the feature vectors of all the text feature words to obtain the text vector of the text to be clustered.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is taken as an example; in practical applications, the above functions may be allocated to and completed by different functional units or modules as required, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not intended to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Fig. 3 is a schematic diagram of a terminal device provided by an embodiment of the present application. As shown in Fig. 3, the terminal device 3 of this embodiment includes a processor 30, a memory 31, and a computer program 32 stored in the memory 31 and executable on the processor 30. When executing the computer program 32, the processor 30 implements the steps in each of the above text clustering method embodiments, for example, steps S101 to S105 shown in Fig. 1. Alternatively, when executing the computer program 32, the processor 30 implements the functions of the modules/units in each of the above apparatus embodiments, for example, the functions of the modules 21 to 25 shown in Fig. 2.
Illustratively, the computer program 32 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 31 and executed by the processor 30 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 32 in the terminal device 3. For example, the computer program 32 may be divided into an acquiring unit, a training unit, a preprocessing unit, a superposition unit, and a clustering unit, whose specific functions are as follows:
an acquiring unit, configured to obtain a training text, and perform word segmentation preprocessing on the training text to obtain a plurality of words to be trained;
a training unit, configured to train a preset transformation model by using the words to be trained, to obtain a trained transformation model;
a preprocessing unit, configured to obtain a text to be clustered, and perform word segmentation preprocessing on the text to be clustered to obtain a plurality of text feature words;
a superposition unit, configured to convert the text feature words into word vectors by using the trained transformation model, and superimpose all the word vectors in the text to be clustered to obtain a text vector of the text to be clustered; and
a clustering unit, configured to cluster the text vectors to obtain a clustering result.
Optionally, the acquiring unit includes:
a first removal module, configured to remove the punctuation marks in the training text to obtain a first preprocessed text;
a second removal module, configured to remove the stop words in the first preprocessed text to obtain a second preprocessed text; and
a word segmentation module, configured to perform word segmentation on the second preprocessed text to obtain a plurality of text feature words.
Optionally, the training unit includes:
a statistics module, configured to separately count the word frequency with which each word to be trained occurs in the training text, and construct a Huffman tree according to the word frequencies; and
a building module, configured to obtain initial information, and train the words to be trained according to the initial information and the constructed Huffman tree, to obtain the trained transformation model.
The initial information includes a preset window, an initial parameter vector, and initial word vectors.
Optionally, the building module includes:
a first calculation submodule, configured to obtain the context of the word to be trained according to the preset window in the initial information, and calculate the sum of the word vectors of the words to be trained included in that context, to obtain a sum vector;
a determination submodule, configured to determine the path from the root node of the Huffman tree to the word to be trained;
a second calculation submodule, configured to calculate a probability corresponding to the path by using a Bayesian formula and based on the sum vector; and
a third calculation submodule, configured to take the logarithm of the probability to obtain an objective function, and use the objective function as the trained transformation model.
Optionally, the building module further includes:
a fourth calculation submodule, configured to, after the logarithm of the probability is taken to obtain the objective function, take the derivative of the objective function with respect to the initial parameter vector in the initial information to obtain a first increment, and update the initial parameter vector by using θ' = θ0 + αη1; and
a derivation submodule, configured to take the derivative of the objective function with respect to the sum vector to obtain a second increment, and update the initial word vector by using X' = X0 + βη2.
Here, θ' is the updated parameter vector, θ0 is the initial parameter vector, α is a first preset weight, η1 is the first increment, X' is the updated word vector of the word to be trained, X0 is the initial word vector of the word to be trained, β is a second preset weight, and η2 is the second increment.
Optionally, the superposition unit includes:
a calculation module, configured to calculate the weight of each text feature word by using the TF-IDF algorithm;
a multiplication module, configured to multiply the word vector of each text feature word by the weight corresponding to that text feature word to obtain a feature vector of the text feature word; and
a superposition module, configured to superimpose the feature vectors of all the text feature words to obtain the text vector of the text to be clustered.
The terminal device 3 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 30 and the memory 31. Those skilled in the art will understand that Fig. 3 is only an example of the terminal device 3 and does not constitute a limitation on the terminal device 3; the terminal device may include more or fewer components than shown, or combine certain components, or have different components, and may, for example, further include input/output devices, network access devices, buses, and the like.
The processor 30 may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 31 may be an internal storage unit of the terminal device 3, such as a hard disk or internal memory of the terminal device 3. The memory 31 may also be an external storage device of the terminal device 3, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal device 3. Further, the memory 31 may include both an internal storage unit and an external storage device of the terminal device 3. The memory 31 is used to store the computer program and other programs and data required by the terminal device. The memory 31 may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, the description of each embodiment has its own emphasis; for parts that are not described or recorded in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered to be beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative; for example, the division of the modules or units is only a logical functional division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing relevant hardware through a computer program, and the computer program may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of each of the above method embodiments can be implemented. The computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, some intermediate forms, or the like. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately added or removed according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of the technical features thereof can be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and all of them should be included within the protection scope of the present application.