Disclosure of Invention
According to the model application portrait-based multi-model fusion method and device, prediction results can be integrated by attention weighting and model fusion can be customized by multi-level random sampling, according to the application portraits of the basic models combined with the distribution characteristics of the application data. The method can perform personalized model fusion according to the distribution characteristics of the data and the prediction capability of each basic model on the specific application data, so as to better meet practical application requirements.
In order to achieve the above object, in a first aspect, the present invention provides a multi-model fusion method based on model application portraits, comprising:
extracting feature data of a plurality of basic models to generate application portraits corresponding to the basic models respectively;
screening the plurality of basic models according to the application data and the plurality of application portraits;
and fusing the screened plurality of basic models according to the distribution differences of their prediction results on the application data.
In one embodiment, the extracting feature data of the plurality of basic models to generate application portraits corresponding to the plurality of basic models respectively includes:
extracting performance evaluation parameters, total prediction accuracy, new data prediction accuracy, partial sample data prediction accuracy and authority evaluation parameters of the plurality of basic models;
and generating application portraits corresponding to the plurality of basic models respectively according to the performance evaluation parameters, the total prediction accuracy, the new data prediction accuracy, the partial sample data prediction accuracy and the authority evaluation parameters.
In an embodiment, the screening the plurality of basic models according to the application data and the plurality of application portraits includes:
determining a distribution difference of the application data;
determining performance parameters and prediction accuracy corresponding to the plurality of basic models according to the distribution difference and the plurality of application portraits;
and screening the plurality of basic models according to the performance parameters, a preset performance parameter threshold, the prediction accuracy and a preset prediction accuracy threshold.
In an embodiment, the fusing the screened plurality of basic models according to the distribution differences of their prediction results on the application data includes:
generating a plurality of fusion sub-models from the screened plurality of basic models by using a multi-level random sampling method;
clustering the prediction results of the plurality of fusion sub-models;
and fusing the plurality of basic models according to the clustered prediction results and the distribution differences of the prediction results of the plurality of basic models on the application data.
In one embodiment, the method for multi-model fusion based on model application portrait further includes:
and generating a basic model corresponding to each of a plurality of algorithms according to the plurality of algorithms and the training data.
In a second aspect, the present invention provides a model application portrait-based multi-model fusion apparatus, comprising:
the feature data extraction module is used for extracting feature data of a plurality of basic models to generate application portraits corresponding to the basic models respectively;
the basic model screening module is used for screening the plurality of basic models according to the application data and the plurality of application portraits;
and the basic model fusion module is used for fusing the screened plurality of basic models according to the distribution differences of their prediction results on the application data.
In one embodiment, the feature data extraction module includes:
the parameter extraction unit is used for extracting performance evaluation parameters, total prediction accuracy, new data prediction accuracy, partial sample data prediction accuracy and authority evaluation parameters of the plurality of basic models;
and the portrait generation unit is used for generating application portraits corresponding to the plurality of basic models according to the performance evaluation parameters, the total prediction accuracy, the new data prediction accuracy, the partial sample data prediction accuracy and the authority evaluation parameters.
In one embodiment, the base model screening module comprises:
a distribution difference determination unit configured to determine a distribution difference of the application data;
a parameter determining unit, configured to determine, according to the distribution difference and the application portraits, performance parameters and prediction accuracy corresponding to the basic models respectively;
and the basic model screening unit is used for screening the plurality of basic models according to the performance parameters, a preset performance parameter threshold, the prediction accuracy and a preset prediction accuracy threshold.
In one embodiment, the base model fusion module comprises:
a sub-model generating unit, which is used for generating a plurality of fusion sub-models according to the screened plurality of basic models by utilizing a multi-level random sampling method;
the result clustering unit is used for clustering the prediction results of the plurality of fusion sub-models;
and the basic model fusion unit is used for fusing the plurality of basic models according to the clustered prediction results and the distribution differences of the prediction results of the plurality of basic models on the application data.
In one embodiment, the model application portrait-based multi-model fusion apparatus further includes:
and the basic model generating module is used for generating basic models corresponding to the algorithms according to the algorithms and the training data.
In a third aspect, the present invention provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the model application portrait-based multi-model fusion method when executing the program.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the model application portrait-based multi-model fusion method.
As can be seen from the above description, in the multi-model fusion method and apparatus based on model application portraits provided in the embodiments of the present invention, feature data of a plurality of basic models are first extracted to generate application portraits corresponding to the plurality of basic models; then, the plurality of basic models are screened according to the application data and the plurality of application portraits; finally, the screened plurality of basic models are fused according to the distribution differences of their prediction results on the application data. The method performs attention-weighted integration of the prediction results and multi-level random sampling fusion customization according to the basic model application portraits combined with the distribution characteristics of the application data, and can therefore perform personalized model fusion according to the distribution characteristics of the data and the prediction capability of each basic model on the specific application data, meeting practical application requirements.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The embodiment of the invention provides a specific implementation of the multi-model fusion method based on model application portraits; referring to fig. 1, the method specifically includes the following steps:
step 100: and extracting the characteristic data of the plurality of basic models to generate application images corresponding to the plurality of basic models respectively.
It can be understood that the application portrait in step 100 refers to a full-view information profile abstracted from a basic model. In step 100, the evaluation results of each basic model are analyzed in depth, and the application portrait of the basic model is dynamically generated along multiple dimensions: performance, overall prediction accuracy, new-data prediction accuracy and small-sample-data prediction accuracy.
Step 200: and screening the plurality of basic models according to the application data and the plurality of application images.
Screening follows two principles: first, basic models whose performance does not meet the task requirements are removed; second, models whose prediction accuracy is below a predefined threshold are removed.
Step 300: and fusing the plurality of basic models according to the distribution difference of the plurality of screened basic models aiming at the prediction result of the application data.
Model fusion refers to training multiple models and then integrating them according to some method. If the fused models are homogeneous, such as all linear regressions or all decision trees, each is called a base learner; if the fused models are heterogeneous, such as a decision tree combined with a neural network, each is called a component learner. A neural network model is non-convex and has many local optima, so fusing several models trained from different initial values can yield a solution closer to the global optimum.
Fusion methods are divided into four types, including hybrid types:
Bagging: a plurality of basic models are trained on different data subsets drawn randomly with replacement, and the final prediction is obtained by voting among the basic models. Commonly used in the Random Forest algorithm;
Boosting: models are trained iteratively, and the importance of each training example is updated after every iteration. Commonly used in gradient boosting algorithms (Gradient Boosting);
Blending: a number of different types of basic models are trained and make predictions on a holdout set; a new model is retrained on their predictions and then predicts on the test set (stacking with a holdout set);
Stacking: a plurality of different types of basic models are trained and make predictions on the k folds of the data set; a new model is retrained on their predictions and then predicts on the test set.
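For illustration only (not part of the claimed method), the following minimal scikit-learn sketch contrasts two of these strategies, bagging and stacking; the synthetic dataset and the choice of estimators are assumptions made for the example:

```python
# Illustrative sketch only: contrasts bagging and stacking with scikit-learn.
# The synthetic dataset and estimator choices are assumptions for demonstration.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: homogeneous base learners trained on bootstrap samples, then voting.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                            random_state=0).fit(X_train, y_train)

# Stacking: heterogeneous component learners; a meta-model is retrained on
# their cross-validated predictions.
stacking = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(), cv=5).fit(X_train, y_train)

print("bagging:", bagging.score(X_test, y_test))
print("stacking:", stacking.score(X_test, y_test))
```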
as can be seen from the above description, in the multi-model fusion method based on model application portraits provided in the embodiments of the present invention, first, feature data of a plurality of basic models are extracted to generate application portraits corresponding to the plurality of basic models; then, screening a plurality of basic models according to the application data and the plurality of application images; and finally, fusing the plurality of basic models according to the distribution difference of the plurality of screened basic models aiming at the prediction result of the application data. According to the method, the prediction result is subjected to attention weighted integration and multi-level random sampling model fusion customization according to the basic model application portrait and by combining the application data distribution characteristics. According to the invention, personalized model fusion can be carried out according to the distribution characteristics of the data and the prediction capability of the basic model in specific application data, and the actual application requirements are met.
In one embodiment, referring to fig. 2, step 100 comprises:
step 101: extracting performance evaluation parameters, total prediction accuracy, new data prediction accuracy, partial sample data prediction accuracy and authority evaluation parameters of the plurality of basic models;
basic model application sketch: performance score, authority score, overall predictive power, new sample distribution generalization power, small sample data predictive power }
Performance evaluation parameters: each basic model is ranked along multiple performance dimensions such as training speed, inference speed and convergence behavior, and a score Score(rank) is assigned according to its rank; preferably, the models are ranked on training speed (train), inference speed (inference) and convergence (loss).
The performance score Score_Performance of a model is defined as follows:
Score_Performance = α1 × Score(train) + β1 × Score(inference) + γ1 × Score(loss),
where α1, β1 and γ1 are predefined weights.
Authority evaluation: the overall prediction capability, the new-data-distribution prediction capability and the small-sample-data prediction capability of the model are evaluated, and the authority of the basic model is calculated on this basis.
Total prediction capability (Acc_All): taking text analysis as an example, quality indexes of the prediction results such as accuracy, precision and recall are evaluated.
New-sample-distribution generalization capability (Acc_New): prediction accuracy on samples from a new distribution. If the difference DKL(S||T) between the distribution of a test sample S and the training set distribution T is greater than a predefined threshold k, the sample is considered a new-distribution example.
Small-sample-data prediction capability (Acc_Few): prediction accuracy on the small-sample data subsets of the test data. For example, in text classification, if the number of training samples of a class Ci is less than n (n <= 20), Ci belongs to the small-sample classes.
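A minimal sketch of how these two checks might be implemented is given below; the histogram-based KL estimate, the threshold k and the count threshold n are illustrative assumptions rather than values prescribed by the method:

```python
# Illustrative sketch: flag new-distribution samples via a histogram-based KL
# estimate, and identify small-sample classes by a training-count rule.
# The binning scheme, threshold k and count threshold n are assumed values.
from collections import Counter

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def is_new_distribution(test_feats, train_feats, k=0.5, bins=20):
    """True if D_KL(S || T) between test and training histograms exceeds k."""
    lo = min(train_feats.min(), test_feats.min())
    hi = max(train_feats.max(), test_feats.max())
    s_hist, _ = np.histogram(test_feats, bins=bins, range=(lo, hi))
    t_hist, _ = np.histogram(train_feats, bins=bins, range=(lo, hi))
    return kl_divergence(s_hist, t_hist) > k

def small_sample_classes(train_labels, n=20):
    """Classes Ci with fewer than n training samples."""
    return {c for c, cnt in Counter(train_labels).items() if cnt < n}
```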
The authority score Authority of a model is defined as follows:
Authority = α2 × Score(Acc_All) + β2 × Score(Acc_New) + γ2 × Score(Acc_Few),
where α2, β2 and γ2 are predefined weights.
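The two weighted scores above can be sketched as follows; all scores and weights (α1, β1, γ1, α2, β2, γ2) are assumed example values:

```python
# Illustrative sketch: combine rank-based scores into the performance and
# authority scores defined above. Scores and weights are assumed example values.
def weighted_score(scores, weights):
    return sum(w * s for w, s in zip(weights, scores))

# Rank-based scores for one basic model (assumed values).
score_train, score_inference, score_loss = 0.9, 0.7, 0.8
score_acc_all, score_acc_new, score_acc_few = 0.85, 0.6, 0.7

# Score_Performance = α1*Score(train) + β1*Score(inference) + γ1*Score(loss)
score_performance = weighted_score([score_train, score_inference, score_loss],
                                   [0.4, 0.4, 0.2])  # assumed α1, β1, γ1

# Authority = α2*Score(Acc_All) + β2*Score(Acc_New) + γ2*Score(Acc_Few)
authority = weighted_score([score_acc_all, score_acc_new, score_acc_few],
                           [0.5, 0.3, 0.2])          # assumed α2, β2, γ2

print(score_performance, authority)
```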
Step 102: and generating application figures corresponding to the plurality of basic models respectively according to the performance evaluation parameters, the total prediction precision, the new data prediction precision, the partial sample data prediction precision and the authority evaluation parameters.
Specifically, the evaluation results of each basic model are analyzed in depth, and the basic model application portrait is dynamically generated along the dimensions of performance, overall prediction accuracy, new-data prediction accuracy and small-sample-data prediction accuracy.
In one embodiment, referring to fig. 3, step 200 comprises:
step 201: determining a distribution difference of the application data;
It is necessary to determine whether the overall distribution of the target data is divergent or concentrated, in which frequency band the median falls, and in which data blocks 80% of the data are concentrated. Optional tools include the histogram, box plot, normal distribution plot, dot plot and Pareto chart.
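A minimal sketch of such a distribution check, using quantile statistics as an assumed stand-in for the charts listed above, could look like this:

```python
# Illustrative sketch: summarize whether application data are divergent or
# concentrated using simple quantile statistics; the 10%-90% interval as the
# block holding ~80% of the data is an assumed convention.
import numpy as np

def describe_distribution(values):
    values = np.asarray(values, dtype=float)
    q10, q50, q90 = np.percentile(values, [10, 50, 90])
    return {
        "median": q50,
        "central_80pct_block": (q10, q90),  # where ~80% of the data lie
        "spread": float(values.std()),      # large -> divergent, small -> concentrated
    }

print(describe_distribution(np.random.default_rng(0).normal(5.0, 1.5, 10_000)))
```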
Step 202: determining performance parameters and prediction accuracy corresponding to the plurality of basic models according to the distribution difference and the plurality of application images;
step 203: screening the plurality of basic models according to the performance parameters, a preset performance parameter threshold, the prediction accuracy and a preset prediction accuracy threshold.
Specifically, the trained models are screened according to the application portrait of each basic model under the current data distribution, and the preferred basic models are output.
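For example, a threshold-based screening step might be sketched as follows; the portrait field names and threshold values are assumptions:

```python
# Illustrative sketch: keep only basic models whose portrait meets both
# thresholds. The portrait fields and threshold values are assumptions.
def screen_models(portraits, perf_threshold=0.6, acc_threshold=0.8):
    """portraits: dict of model name -> {'performance': ..., 'accuracy': ...}."""
    return [name for name, p in portraits.items()
            if p["performance"] >= perf_threshold and p["accuracy"] >= acc_threshold]

preferred = screen_models({
    "M1": {"performance": 0.9, "accuracy": 0.85},
    "M2": {"performance": 0.4, "accuracy": 0.90},  # dropped: performance too low
    "M3": {"performance": 0.8, "accuracy": 0.75},  # dropped: accuracy too low
})
print(preferred)  # ['M1']
```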
In one embodiment, referring to fig. 4, step 300 comprises:
step 301: generating a plurality of fusion sub-models from the screened plurality of basic models by using a multi-level random sampling method;
the random sampling method is a sampling survey which is performed completely according to the principle of chance equalization and is called as an 'equal probability', namely, every part in the survey object population has the equal possibility of being sampled. The representativeness of the sample is guaranteed on a stochastic basis, i.e., by ensuring that each object in the population has a known, non-zero probability of being selected as the object under study.
There are four basic forms of random sampling, namely simple random sampling, equidistant sampling, type sampling and whole group sampling. The method has the greatest advantage that when the totality is deduced according to the sample data, the reliability of the deduced value can be objectively measured in a probabilistic mode, so that the deduction is established on a scientific basis. Because of this, random sampling is widely used in social research and social research. The common random sampling methods mainly include pure random sampling, hierarchical sampling, systematic sampling, whole group sampling, multi-stage sampling and the like.
In the implementation of step 301, the prediction result categories {Cd1, Cd2, … Cdk} to be sampled are first selected at random according to a predefined proportion; then, within each selected category Cdi ∈ {Cd1, Cd2, … Cdk}, samples are drawn at random according to a predefined proportion, as sketched below.
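A minimal sketch of this two-level sampling, with assumed proportions, follows:

```python
# Illustrative sketch of the two-level random sampling described above: first
# sample prediction-result categories, then sample within each chosen category.
# The proportions and seed are assumed example values.
import random

def multilevel_sample(results_by_category, cat_ratio=0.5, sample_ratio=0.3, seed=0):
    """results_by_category: dict mapping category Cd_i -> list of predictions."""
    rng = random.Random(seed)
    # Level 1: randomly select a subset of categories {Cd1, ..., Cdk}.
    n_cats = max(1, int(len(results_by_category) * cat_ratio))
    cats = rng.sample(list(results_by_category), n_cats)
    # Level 2: randomly sample predictions within each selected category Cdi.
    return {c: rng.sample(results_by_category[c],
                          max(1, int(len(results_by_category[c]) * sample_ratio)))
            for c in cats}
```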
Step 302: clustering the prediction results of the plurality of fusion sub-models;
and clustering { C1, C2 and … … Cn } the basic models according to the multi-dimensional portrait of the basic models, and classifying the prediction result of each basic model into a corresponding category.
Step 303: and fusing the plurality of basic models according to the clustered prediction results and the distribution difference of the plurality of basic models aiming at the prediction results of the application data.
The customized objective function for model fusion is defined as follows: given a prediction result data sample (xi, yi), the loss Li is minimized, where LiNLL is the NLL loss of fusion sub-model i and LiKL is the mean of the distribution differences of the prediction results between the fusion sub-models:
Li = LiNLL + α × LiKL,
LiNLL = -log P1w(yi|xi) - log P2w(yi|xi),
where yi is the correct answer and Xi is the prediction result sequence weighted by the attention of each basic model:
Xi = {x1i: Att(M1, x1i), x2i: Att(M2, x2i), …, xki: Att(Mk, xki)},
where Att(Mj, xji) is the degree of attention received by prediction result xji of basic model Mj, and DKL(P1w(yi|xi) || P2w(yi|xi)) is the KL divergence between the distributions P1w(yi|xi) and P2w(yi|xi).
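A minimal PyTorch sketch of this objective for two fusion sub-models follows; the sub-model outputs are assumed to be log-probabilities, and α is an example value:

```python
# Illustrative sketch: Li = LiNLL + α × LiKL for two fusion sub-models.
# logp1/logp2 are assumed log-probability outputs; alpha is an example value.
import torch
import torch.nn.functional as F

def fusion_loss(logp1, logp2, target, alpha=0.1):
    """logp1, logp2: (batch, classes) log-probabilities from two sub-models."""
    # LiNLL: negative log-likelihood of the correct answer under both sub-models.
    nll = F.nll_loss(logp1, target) + F.nll_loss(logp2, target)
    # LiKL: KL divergence between the sub-models' predicted distributions.
    kl = F.kl_div(logp2, logp1, log_target=True, reduction="batchmean")
    return nll + alpha * kl

logp1 = torch.log_softmax(torch.randn(8, 5), dim=-1)
logp2 = torch.log_softmax(torch.randn(8, 5), dim=-1)
target = torch.randint(0, 5, (8,))
print(fusion_loss(logp1, logp2, target))
```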
After fusion, the fusion model is evaluated; if it has converged, the fusion model is output, otherwise iterative training continues.
In an embodiment, referring to fig. 5, the multi-model fusion method based on model application portraits further includes:
step 400: generating a basic model corresponding to each of a plurality of algorithms according to the plurality of algorithms and the training data.
Specifically, a plurality of algorithms are selected from an algorithm library and learned on an original training data set to generate a plurality of basic models.
To further illustrate the present solution, the present application provides a specific application example of the multi-model fusion method based on model application portraits, see fig. 6.
Referring to fig. 7, in general, the multi-model fusion method based on model application portrait is mainly divided into two stages:
stage 1: training a basic model and dynamically generating a model application portrait.
Training a basic model: a plurality of algorithms are selected from an algorithm library, and a plurality of base models are learned on an original training data set.
Multi-dimensional basic model application portrait generation: the evaluation results of each basic model are analyzed in depth, and the application portrait of the basic model is dynamically generated along the dimensions of performance, overall prediction accuracy, new-data prediction accuracy and small-sample-data prediction accuracy.
Stage 2: model fusion customization based on the basic model application portraits.
Prediction result integration based on model attention: after the basic models generate prediction result sequences on the application data set, the application portraits of the basic models are dynamically adjusted according to the distribution differences of the application data, and an attention weight is dynamically generated for each prediction result.
S1: Basic model training.
An available algorithm set is selected from the algorithm library according to the application task requirements, model training and evaluation are performed on the training data set, and the test results are output.
S2: Multi-dimensional basic model application portrait generation.
The basic model application portrait is dynamically generated from the model performance analysis and the evaluation result analysis.
S3: Prediction result integration based on model attention.
Given an application data set, the preferred basic models {M1, M2, … Mn} are run to obtain a prediction result set {x1, x2, … xk}.
Prediction result integration based on model attention: given a set of prediction results {x1, x2, … xk}, each prediction result is attention-weighted according to the model application portraits.
Given a prediction result sequence Xi = {x1i, x2i, … xsi, …, xki}, the following attention-weighted sequence is obtained:
X' = {x1i: Att(M1, x1i), x2i: Att(M2, x2i), … xsi: Att(Ms, xsi), … xki: Att(Mk, xki)}
The degree of attention received by prediction result xsi of basic model Ms is calculated from the authority of the basic model, the confidence of the prediction result, and its degree of difference from the prediction results of the other basic models:
Att(Ms, xsi) = α × Authority(Ms) × Confidence(xsi) + β × (1 - Divergence(xsi, Ms)),
where α and β are predefined weights, Authority(Ms) is the authority of model Ms, Confidence(xsi) is the confidence of prediction result xsi of Ms, and Divergence(xsi, Ms) is the degree of difference of prediction result xsi of model Ms.
The degree of difference Divergence(xsi, Ms) of prediction result xsi of model Ms is the mean of the distribution differences between its prediction result and the prediction results of the other models:
Divergence(xsi, Ms) = LiKL = 1/(K-1) × Σ DKL(xsi || xti), where 1 <= t <= K and s ≠ t,
and DKL(xsi || xti) is the KL divergence between prediction results xsi and xti.
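A sketch of this attention computation, under the assumption that each prediction result is a class probability distribution, might read:

```python
# Illustrative sketch of Att(Ms, xsi): authority × confidence plus an
# agreement term. Each prediction result is assumed to be a class probability
# distribution; alpha, beta and the authorities are example values.
import numpy as np

def kl(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def attention(preds, authorities, alpha=0.7, beta=0.3):
    """preds: (K, C) array of per-model predicted distributions for one input."""
    K = len(preds)
    att = []
    for s in range(K):
        confidence = float(preds[s].max())            # Confidence(xsi)
        divergence = np.mean([kl(preds[s], preds[t])  # mean D_KL(xsi || xti)
                              for t in range(K) if t != s])
        att.append(alpha * authorities[s] * confidence
                   + beta * (1.0 - divergence))
    return att

preds = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
print(attention(preds, authorities=[0.9, 0.8, 0.6]))
```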
Model authority dynamic weighting based on application data distribution differences: the authority of the model may be dynamically weighted according to the difference in the distribution of the current application data and the training data.
If the current application data are distributed homologously with the small-sample training data, the authority of models with strong small-sample prediction capability is up-weighted by δ1, i.e., Authority(Ms) = Authority0(Ms) × (1 + δ1);
if the current application data belong to a new data distribution, the authority of models with strong new-sample generalization capability is up-weighted by δ2, i.e., Authority(Ms) = Authority0(Ms) × (1 + δ2).
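A minimal sketch of this dynamic re-weighting, with assumed δ values and externally supplied distribution flags, could be:

```python
# Illustrative sketch of the dynamic authority re-weighting; the delta values
# and the two distribution flags are assumed inputs computed elsewhere.
def adjust_authority(base_authority, homologous_with_few_shot, new_distribution,
                     delta1=0.2, delta2=0.2):
    authority = base_authority
    if homologous_with_few_shot:   # same distribution as small-sample training data
        authority *= 1 + delta1    # boost models strong on small samples
    if new_distribution:           # application data come from a new distribution
        authority *= 1 + delta2    # boost models with strong generalization
    return authority

print(adjust_authority(0.8, homologous_with_few_shot=False, new_distribution=True))
```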
S4: Model fusion customization based on the basic model application portraits and multi-level random sampling.
Prediction result integration based on model attention: after the basic models generate prediction result sequences on the application data set, the application portraits of the basic models are dynamically adjusted according to the distribution differences of the application data, and an attention weight is dynamically generated for each prediction result. Given the attention-weighted set of basic model prediction results, different fusion sub-models are customized by multi-level random sampling, so that the differences between the final outputs of the fusion sub-models are as small as possible.
As can be seen from the above description, in the multi-model fusion method based on model application portraits provided in the embodiments of the present invention, feature data of a plurality of basic models are first extracted to generate application portraits corresponding to the plurality of basic models; then, the plurality of basic models are screened according to the application data and the plurality of application portraits; finally, the screened plurality of basic models are fused according to the distribution differences of their prediction results on the application data. The method performs attention-weighted integration of the prediction results and multi-level random sampling fusion customization according to the basic model application portraits combined with the distribution characteristics of the application data, and can therefore perform personalized model fusion according to the distribution characteristics of the data and the prediction capability of each basic model on the specific application data, meeting practical application requirements.
Based on the same inventive concept, the embodiment of the present application further provides a multi-model fusion device based on model application portraits, which can be used to implement the methods described in the above embodiments, as in the following embodiments. Because the principle by which the model application portrait-based multi-model fusion device solves the problem is similar to that of the model application portrait-based multi-model fusion method, the implementation of the device can refer to the implementation of the method, and repeated details are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. While the system described in the embodiments below is preferably implemented in software, implementations in hardware, or a combination of software and hardware, are also possible and contemplated.
The embodiment of the present invention provides a specific implementation manner of a model application portrait based multi-model fusion device capable of implementing a model application portrait based multi-model fusion method, and referring to fig. 8, the model application portrait based multi-model fusion device specifically includes the following contents:
a feature data extraction module 10, configured to extract feature data of a plurality of basic models to generate application portraits corresponding to the plurality of basic models respectively;
a basic model screening module 20, configured to screen the plurality of basic models according to the application data and the plurality of application portraits;
and a basic model fusion module 30, configured to fuse the screened plurality of basic models according to the distribution differences of their prediction results on the application data.
In one embodiment, referring to fig. 9, the feature data extraction module 10 includes:
a parameter extraction unit 101, configured to extract performance evaluation parameters, total prediction accuracy, new data prediction accuracy, partial sample data prediction accuracy, and authority evaluation parameters of the plurality of basic models;
and a portrait generation unit 102, configured to generate application portraits corresponding to the plurality of basic models respectively according to the performance evaluation parameters, the total prediction accuracy, the new data prediction accuracy, the partial sample data prediction accuracy and the authority evaluation parameters.
In one embodiment, referring to fig. 10, the basic model screening module 20 includes:
a distribution difference determination unit 201, configured to determine a distribution difference of the application data;
a parameter determining unit 202, configured to determine, according to the distribution difference and the application portraits, performance parameters and prediction accuracies corresponding to the basic models respectively;
and a basic model screening unit 203, configured to screen the plurality of basic models according to the performance parameters, a preset performance parameter threshold, the prediction accuracy, and a preset prediction accuracy threshold.
In one embodiment, referring to fig. 11, the basic model fusion module 30 includes:
a sub-model generating unit 301, configured to generate a plurality of fusion sub-models from the screened plurality of basic models by using a multi-level random sampling method;
a result clustering unit 302, configured to cluster the prediction results of the plurality of fusion sub-models;
and a basic model fusion unit 303, configured to fuse the plurality of basic models according to the clustered prediction results and the distribution differences of the prediction results of the plurality of basic models on the application data.
In one embodiment, referring to fig. 12, the model application portrait-based multi-model fusion apparatus further includes:
and a basic model generating module 40, configured to generate a basic model corresponding to each of the plurality of algorithms according to the plurality of algorithms and the training data.
As can be seen from the above description, the multi-model fusion apparatus based on model application portraits provided in the embodiments of the present invention first extracts feature data of a plurality of basic models to generate application portraits corresponding to the plurality of basic models; then screens the plurality of basic models according to the application data and the plurality of application portraits; and finally fuses the screened plurality of basic models according to the distribution differences of their prediction results on the application data. The apparatus performs attention-weighted integration of the prediction results and multi-level random sampling fusion customization according to the basic model application portraits combined with the distribution characteristics of the application data, and can therefore perform personalized model fusion according to the distribution characteristics of the data and the prediction capability of each basic model on the specific application data, meeting practical application requirements.
An embodiment of the present application further provides a specific implementation manner of an electronic device, which is capable of implementing all steps in the multi-model fusion method based on model application portrait in the foregoing embodiment, and referring to fig. 13, the electronic device specifically includes the following contents:
a processor (processor) 1201, a memory (memory) 1202, a communication interface 1203, and a bus 1204;
the processor 1201, the memory 1202 and the communication interface 1203 communicate with each other through the bus 1204; the communication interface 1203 is configured to implement information transmission between related devices, such as a server-side device, a power measurement device, and a client device.
The processor 1201 is configured to invoke a computer program in the memory 1202; when executing the program, the processor implements all the steps of the model application portrait-based multi-model fusion method in the above embodiments, for example the following steps:
step 100: extracting feature data of a plurality of basic models to generate application portraits corresponding to the basic models respectively;
step 200: screening the plurality of basic models according to the application data and the plurality of application portraits;
step 300: fusing the screened plurality of basic models according to the distribution differences of their prediction results on the application data.
Embodiments of the present application further provide a computer-readable storage medium capable of implementing all steps of the model application portrait-based multi-model fusion method in the above embodiments; the computer-readable storage medium stores a computer program which, when executed by a processor, implements all the steps of that method, for example the following steps:
step 100: extracting feature data of a plurality of basic models to generate application portraits corresponding to the basic models respectively;
step 200: screening the plurality of basic models according to the application data and the plurality of application portraits;
step 300: fusing the screened plurality of basic models according to the distribution differences of their prediction results on the application data.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class embodiment, since it is substantially similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Although the present application provides method steps as in an embodiment or a flowchart, more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an actual apparatus or client product executes, it may execute sequentially or in parallel (e.g., in the context of parallel processors or multi-threaded processing) according to the embodiments or methods shown in the figures.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and the implementation mode of the invention are explained by applying specific embodiments in the invention, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.