In a specific implementation process, before vector division is performed, the vector dimension of each divided vector needs to be determined, and vector division can then be performed on the feature data and the model parameters according to that vector dimension. It should be noted that the vector dimensions of the feature vectors and the parameter vectors obtained by the division should be consistent with the vector dimension supported by the preset vector floating-point multiply-add instruction. Therefore, in an alternative embodiment, the vector division of the model parameters and of the feature data of each training sample may include: obtaining the vector dimension supported by the vector floating-point multiply-add instruction; and, based on the vector dimension, performing vector division on the model parameters to obtain m n-dimensional parameter vectors forming the parameter vector sequence, and performing vector division on the feature data of each training sample to obtain m n-dimensional feature vectors forming the feature vector sequence, where m is an integer greater than or equal to 1 and n is an integer greater than or equal to 2.
Specifically, in an application scenario, if the number of features contained in the feature data is greater than the vector dimension n supported by the preset vector floating-point multiply-add instruction, the number of vector partitions is greater than or equal to 2, that is, m is greater than or equal to 2. In this case, the process of performing vector division on the model parameters based on the vector dimension n to obtain m n-dimensional parameter vectors, and performing vector division on the feature data of each training sample to obtain m n-dimensional feature vectors, may include: determining the vector division number m based on the vector dimension n supported by the vector floating-point multiply-add instruction and the feature number; constructing m n-dimensional first initial vectors and m n-dimensional second initial vectors according to the determined vector division number m and the vector dimension n; and sequentially assigning the model parameters to the elements of the m constructed first initial vectors in a preset order to obtain m n-dimensional parameter vectors, and sequentially assigning the features contained in the feature data to the elements of the m constructed second initial vectors in the same preset order to obtain m n-dimensional feature vectors. It should be noted that the model parameters and the feature data are divided into vectors in the same manner, that is, the values are assigned sequentially according to the same preset order.
In addition, in the process of vector division of the feature data of each training sample, each feature in the feature data is divided into exactly one feature vector: the features contained within one feature vector are distinct, and different feature vectors contain different features. Similarly, in the process of dividing the model parameters into vectors, each model parameter is divided into exactly one parameter vector: the model parameters contained within one parameter vector are distinct, and different parameter vectors contain different model parameters.
In the vector division process, if the elements of a feature vector or a parameter vector are not fully filled, that is, if the number of features contained in a divided feature vector is less than the vector dimension supported by the preset vector floating-point multiply-add instruction, or the number of model parameters contained in a parameter vector is less than that vector dimension, the unfilled elements in the feature vector or the parameter vector are assigned preset values. For example, if one of the divided feature vectors can contain only 3 features while the vector dimension supported by the vector floating-point multiply-add instruction is 5, the other two elements in that feature vector need to be assigned preset values; the same applies to the vector division of the model parameters. The preset value is set according to the specific target value to be calculated; for example, the preset value may be 0 when the target value is a hypothesis function value, and may be 0 or another specified value when the target value is a model parameter value in the gradient descent process.
It should be further noted that, in the vector division process, the division order, i.e., the preset order, is not limited and can be set according to actual needs, provided that no feature is divided into more than one feature vector and no model parameter is divided into more than one parameter vector.
For example, assume that the feature data includes 18 features, denoted x0 to x17, and that the number of model parameters is also 18, denoted θ0 to θ17. If the vector dimension supported by the preset vector floating-point multiply-add instruction is 5, the feature data may be divided into 4 feature vectors. Specifically, the feature data may be divided into four feature vectors starting from x0 in front-to-back order: x0 to x4 are divided into the first feature vector of the feature vector sequence, x5 to x9 into the second feature vector, x10 to x14 into the third feature vector, and x15 to x17 into the fourth feature vector, with the two unfilled elements in the fourth feature vector assigned preset values; correspondingly, the model parameters are also divided into 4 parameter vectors in the same way. Alternatively, the reverse order can be used, dividing the feature data into four feature vectors starting from x17 in back-to-front order, with the model parameters correspondingly divided into 4 parameter vectors in the same way. Alternatively, other orders may be used: for example, x0, x2, x4, x6, x8 are divided into the first feature vector of the feature vector sequence, x10, x12, x14, x16, x1 into the second feature vector, x3, x5, x7, x9, x11 into the third feature vector, and x13, x15, x17 into the fourth feature vector, with the model parameters divided into vectors in the same order.
In addition, in an application scenario, if the number of features contained in the feature data is less than or equal to the vector dimension supported by the preset vector floating-point multiply-add instruction, the feature vector sequence and the parameter vector sequence each contain one vector. Specifically, if the number of features contained in the feature data is smaller than the vector dimension supported by the vector floating-point multiply-add instruction, the unfilled elements in the feature vector after division need to be assigned preset values; for example, if the feature data contains 6 features and the vector dimension supported by the vector floating-point multiply-add instruction is 10, the 4 unfilled elements need to be assigned preset values. If the number of features contained in the feature data is equal to the vector dimension supported by the vector floating-point multiply-add instruction, the features can be divided into exactly one feature vector. The same applies to the vector division of the model parameters.
In the specific implementation process, assume that the number of features contained in the feature data of a training sample is DIM, and that the vector dimension supported by the vector floating-point multiply-add instruction is n. It is understood that DIM is only an example of a variable name for the feature count, and other variable names commonly used to represent quantities, such as M or N, may be substituted. In one embodiment, the division number of the feature vectors and the parameter vectors may be determined by the following equation:
m = ⌊(DIM + n − 1) / n⌋
That is, the vector division number m is obtained by adding the vector dimension n to the feature number DIM, subtracting 1, dividing by the vector dimension n, and rounding down. For example, if n is 3 and DIM is 10, then m is 4. Alternatively, in other embodiments of the present specification, the division number of the feature vectors and the parameter vectors may be obtained by rounding DIM/n down and then adding 1.
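As an illustrative sketch (the helper name partition_pad is hypothetical, and C is used here only for illustration, not as this specification's reference code), the partition count and the zero-padding described above can be combined as follows, taking the preset value to be 0:

    #include <stdlib.h>
    #include <string.h>

    /* Divide DIM scalars into m = ceil(DIM / n) contiguous n-dimensional
     * vectors in front-to-back order; calloc zero-fills, so the unfilled
     * elements of the last vector already hold the preset value 0. */
    float *partition_pad(const float *data, int DIM, int n, int *m_out) {
        int m = (DIM + n - 1) / n;          /* integer division rounds down */
        float *vecs = calloc((size_t)m * n, sizeof(float));
        memcpy(vecs, data, (size_t)DIM * sizeof(float));
        *m_out = m;
        return vecs;                        /* caller frees */
    }

For the example above (DIM = 18, n = 5), this yields m = 4, with the last two elements of the fourth vector set to 0.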
It is understood that the training process of the target model includes multiple rounds of iterative training, and after the vector division of the feature data of the training samples and of the model parameters is completed, the following step S102 may be performed.
Step S102: for a training sample in each round of iterative training, invoking the preset vector floating-point multiply-add instruction to perform multiply-add processing on the parameter vector sequence and the feature vector sequence to obtain a target value of the training sample.
Specifically, the preset vector floating-point multiply-add instruction may be invoked for each training sample in each round of iterative training, and the parameter vector sequence and the feature vector sequence of the training sample may be subjected to multiply-add processing to obtain the target value of the training sample. Alternatively, in other embodiments of the present disclosure, step S102 may be performed only on some of the training samples in each round of iterative training to obtain the target value of each of those training samples.
It can be understood that the target value is a value obtained by performing multiply-add processing based on the feature data of the training sample and the model parameters in the iterative training process. For example, the target value may be a hypothesis function value hθ(X), and/or an updated calculated value of the parameter θ′. It can be appreciated that the calculation of the hypothesis function hθ(X) in a linear machine learning model includes the calculation of θᵀX, whose value can be obtained based on the result of the multiply-add processing of the parameter vector sequence and the feature vector sequence; the specific process is described hereinafter.
For example, an exemplary linear regression model has the hypothesis function hθ(X) = θᵀX, and an exemplary logistic regression model has the hypothesis function hθ(X) = 1/(1 + e^(−θᵀX)).
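As an illustrative sketch (the helper names h_linear and h_logistic are hypothetical, and C is used here only for illustration), once the dot product z = θᵀX is available, the two hypothesis function values differ only in a final scalar step:

    #include <math.h>

    /* Hypothesis value from the dot product z = theta^T X:
     * linear regression uses z directly, logistic regression
     * applies the sigmoid 1 / (1 + exp(-z)). */
    static inline float h_linear(float z)   { return z; }
    static inline float h_logistic(float z) { return 1.0f / (1.0f + expf(-z)); }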
For another example, in an application scenario, the gradient descent parameter update may be calculated, in its standard form for a linear model,

θ := θ − (α/NUM) · Σi (hθ(Xi) − Yi) · Xi

where α is the learning rate, NUM is the number of samples in each iteration, and Y is the sample label.
The following description mainly takes these two kinds of target values as examples and details their calculation processes. Of course, in a specific implementation, the target value may also be another suitable calculated parameter value in the model training process, which is not limited herein.
In an alternative embodiment of the present disclosure, the target value may include a hypothesis function value, for example, the hypothesis function value when the target model is a linear regression model, or the hypothesis function value when the target model is a logistic regression model. In this case, in step S102, the step of invoking the preset vector floating-point multiply-add instruction to perform multiply-add processing on the parameter vector sequence and the feature vector sequence to obtain the target value of the training sample may include: invoking the vector floating-point multiply-add instruction to sequentially perform multiply-add processing on the parameter vector arranged at the i-th position in the parameter vector sequence, the feature vector arranged at the i-th position in the feature vector sequence, and a preset initial vector to obtain a current result vector, and taking the current result vector as the initial vector of the next multiply-add processing to execute the next multiply-add processing, where i takes integer values from 0 to m−1 and m is the number of parameter vectors in the parameter vector sequence; then, after the parameter vector sequence and the feature vector sequence have been traversed, accumulating the elements in the current result vector, obtaining the hypothesis function value of the training sample based on the accumulation result, and taking the hypothesis function value as the target value of the training sample.
Specifically, for a training sample in each round of iterative training, the parameter vector arranged first in the parameter vector sequence may be taken as the current first vector θ0, the feature vector X0 arranged first in the feature vector sequence as the current second vector, and the preset initial vector R0 as the current third vector.
Further, a vector multiply-add step is performed: the vector floating-point multiply-add instruction is used to perform vector multiply-add processing on the current first vector, the current second vector, and the current third vector to obtain a current result vector. For example, this can be expressed as R = VFMADD(θ0, X0, R0).
Then, the next parameter vector θ1 in the parameter vector sequence is taken as the current first vector, the next feature vector X1 in the feature vector sequence as the current second vector, and the current result vector R as the current third vector R1, and the vector multiply-add step is repeated, and so on, until all vectors in the parameter vector sequence and the feature vector sequence have been traversed. At this time, with the preset value being 0, the elements in the current result vector obtained in the last round are accumulated to obtain the value of θᵀX, which is then substituted into the hypothesis function to obtain the hypothesis function value of the training sample.
That is, the above multi-round multiply-add process can be expressed as:
R = VFMADD(θi, Xi, R)
where R represents the current result vector, θi represents the parameter vector arranged at the i-th position in the parameter vector sequence, and Xi represents the feature vector arranged at the i-th position in the feature vector sequence. The initial value of R is the preset initial vector, whose dimension is the same as that of the feature vectors and the parameter vectors, and each element of the initial vector is assigned the value 0.
Then, all elements in the current result vector obtained in the last round are accumulated according to the following formula:

θᵀX = r0 + r1 + … + r(n−1)

where n is the vector dimension supported by the preset vector floating-point multiply-add instruction, i.e., the dimension of the feature vectors and the parameter vectors, and ri is the i-th element in the current result vector.
It can be understood that, assuming the vector dimension supported by the vector floating-point multiply-add instruction is n, a computation that would otherwise require n multiply instructions and n add instructions can be completed by calling a single vector floating-point multiply-add instruction. Therefore, in the above calculation of θᵀX, compared with using separate multiply and add instructions for all model parameters and feature data, the embodiments of the present specification, by first performing vector division on the model parameters and the feature data and then invoking the vector floating-point multiply-add instruction on the divided vectors, can reduce the number of computing instructions required for the time-consuming calculation of θᵀX to approximately 1/(2n) of the original, greatly reducing the occupation of the computing resources of the computing device by the modeling process.
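As a minimal illustrative sketch (assuming n = 8 with single-precision AVX2/FMA intrinsics and arrays already zero-padded as described above; the function name dot_fma is hypothetical, and this is not this specification's reference implementation), the θᵀX computation can be expressed as one fused multiply-add per vector pair:

    #include <immintrin.h>

    /* theta and x each hold m contiguous 8-dimensional vectors.
     * Each loop iteration is one VFMADD: r = theta_i * x_i + r,
     * replacing 8 multiply and 8 add instructions. */
    float dot_fma(const float *theta, const float *x, int m) {
        __m256 r = _mm256_setzero_ps();     /* initial vector R, all zeros */
        for (int i = 0; i < m; i++) {
            __m256 t = _mm256_loadu_ps(theta + 8 * i);
            __m256 v = _mm256_loadu_ps(x + 8 * i);
            r = _mm256_fmadd_ps(t, v, r);
        }
        float buf[8];                       /* horizontal accumulation of r */
        _mm256_storeu_ps(buf, r);
        float sum = 0.0f;
        for (int k = 0; k < 8; k++) sum += buf[k];
        return sum;
    }

The final horizontal sum corresponds to the accumulation formula above, and the returned value can then be substituted into the hypothesis function.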
In an alternative embodiment of the present disclosure, the target value may include the updated parameter values during gradient descent. In this case, in step S102, the step of invoking the preset vector floating-point multiply-add instruction to perform multiply-add processing on the parameter vector sequence and the feature vector sequence to obtain the target value of the training sample may include: invoking the vector floating-point multiply-add instruction to perform multiply-add processing on a gradient coefficient vector obtained in advance, the feature vector arranged at the j-th position in the feature vector sequence, and the parameter vector arranged at the j-th position in the parameter vector sequence before descent, so as to obtain the descended parameter vector sequence, and taking the model parameters in the descended parameter vector sequence as the target value of the training sample, where j takes integer values from 0 to m−1 and m is the number of parameter vectors in the parameter vector sequence. Since j takes m values in total, the multiply-add processing is executed m times to obtain the descended parameter vector sequence. It should be noted that the embodiments of the present specification do not limit the parameter updating manner adopted by the model training; for example, the method may be applied to any of full-batch, mini-batch, or SGD (Stochastic Gradient Descent) updating.
Of course, before the above multiply-add processing is performed, the gradient coefficient vector needs to be obtained first. Specifically, obtaining the gradient coefficient vector may include: acquiring the gradient descent coefficient of the gradient descent process in the current round of iterative training; and constructing a gradient coefficient vector according to the dimension of the parameter vectors and assigning each element of the gradient coefficient vector the value of the gradient descent coefficient.
Assuming that the gradient coefficient vector is denoted A, the dimension of A coincides with the dimension of the parameter vectors and of the feature vectors. Let the parameter vector sequence before descent be {θ′k0, θ′k1, …, θ′k(m−1)} and the feature vector sequence of the current training sample be {X0, X1, …, X(m−1)}. The vector floating-point multiply-add instruction is then invoked, namely:
θ′(k+1)j = VFMADD(A, Xj, θ′kj)
where θ′(k+1)j on the left side of the equal sign is the parameter vector arranged at the j-th position in the parameter vector sequence at the next moment, θ′kj on the right side of the equal sign is the parameter vector arranged at the j-th position in the parameter vector sequence at the current moment, and Xj represents the feature vector arranged at the j-th position in the feature vector sequence. By invoking the vector floating-point multiply-add instruction m times to perform multiply-add processing on the gradient coefficient vector A, the vectors Xj in the feature vector sequence of the current training sample, and the vectors θ′kj in the current parameter vector sequence, the parameter vector sequence at the next moment, which can be expressed as {θ′(k+1)0, θ′(k+1)1, …, θ′(k+1)(m−1)}, is obtained, i.e., the values of the model parameters at the next moment are quickly obtained. The parameter vectors in the parameter vector sequence at the next moment can then be used as the current parameter vector sequence for the next training sample in the current round of iteration, and the above steps are repeated until all training samples used in the current round of iterative training have been traversed. The updated model parameters can then be used as the model parameters for the next round of iterative training.
For example, in an application scenario, the descended model parameters can be obtained by the above formula θ′(k+1)j = VFMADD(A, Xj, θ′kj), where the gradient descent coefficient (for a linear model, for example, −(α/NUM)·(hθ(X) − Y)) is assigned to each element of the constructed gradient coefficient vector A. The parameter vector sequence at the next moment is then obtained according to this formula, i.e., the value of each model parameter at the next moment is obtained.
When the number of model parameters contained in a parameter vector is less than n (assuming the vector length applicable to the vector floating-point multiply-add instruction is n), the unfilled elements are assigned the preset value; these elements are not real model parameters and are not considered when the model parameters are updated.
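As a companion sketch under the same assumptions as before (n = 8, single-precision AVX2/FMA intrinsics, zero-padded arrays; update_params_fma and the scalar a are hypothetical names, and the exact form of the gradient descent coefficient depends on the chosen update rule), one descended-parameter update can be expressed as m calls of the fused multiply-add:

    #include <immintrin.h>

    /* theta'_j = A * X_j + theta_j for j = 0..m-1, where every lane of A
     * holds the same gradient descent coefficient a, e.g.
     * a = -(alpha / NUM) * (h_theta(X) - Y) for a linear model. */
    void update_params_fma(float *theta, const float *x, float a, int m) {
        __m256 A = _mm256_set1_ps(a);                    /* gradient coefficient vector A */
        for (int j = 0; j < m; j++) {
            __m256 t = _mm256_loadu_ps(theta + 8 * j);   /* theta'_kj */
            __m256 v = _mm256_loadu_ps(x + 8 * j);       /* X_j */
            t = _mm256_fmadd_ps(A, v, t);                /* VFMADD(A, X_j, theta'_kj) */
            _mm256_storeu_ps(theta + 8 * j, t);          /* theta'_(k+1)j */
        }
    }

Because the padded feature elements are 0, the corresponding padded parameter lanes are left unchanged by the update, matching the note above that the padding elements are not real model parameters.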
It can be understood that, assuming the vector dimension supported by the vector floating-point multiply-add instruction is n, a computation that would otherwise require n multiply instructions and n add instructions can be completed by calling a single vector floating-point multiply-add instruction. Therefore, in the process of calculating the updated model parameters θ′, compared with using separate multiply and add instructions for all model parameters and feature data, the embodiments of the present specification, by vector-dividing the model parameters and the feature data and then invoking the vector floating-point multiply-add instruction to calculate the updated model parameters θ′, can reduce the number of computing instructions required for this time-consuming calculation to approximately 1/(2n) of the original, greatly reducing the occupation of the computing resources of the computing device by the modeling process.
In a specific implementation process, the feature data of the training samples and the model parameters of the target model can be vector-divided according to actual needs, and the preset vector floating-point multiply-add instruction can then be invoked in the calculation of the hypothesis function hθ(X) and/or in the calculation of the updated model parameters θ′ during gradient descent. This greatly reduces the number of computing instructions required for the main time-consuming calculations in the model training process, which effectively increases the modeling speed and improves modeling efficiency, and reduces the occupation of the computing resources of the computing device by the modeling process, so that the internal resource management of the computing device can be optimized and the computing device can process more computing tasks, thereby improving processing efficiency.
After the target values of the training samples are obtained, the following step S104 may then be performed to continue training the target model with the target values of the training samples.
Step S104: obtaining a trained target model based on the target values of the training samples in each round of iterative training.
After the target value is obtained by the calculation in step S102, the target value may be used in subsequent calculations in the training process, for example, the calculation of a loss function value, until the training is completed and a trained target model is obtained for use. It should be noted that the process of using the target value in the subsequent calculations of the training process to obtain the trained target model is the same as in existing model training implementations, and is therefore not described in detail here.
The method for accelerating modeling of a computing device provided by the embodiments of the present specification can complete multiple multiply and add calculations in the model training process with a single call of the vector floating-point multiply-add instruction by first vector-dividing the feature data and the model parameters. This greatly reduces the number of times multiply and add instructions are invoked separately, i.e., greatly reduces the number of computing instructions required in the model training process. It thereby effectively increases the modeling speed of the computing device, reduces the time consumed by modeling, and improves modeling efficiency, which helps ensure model performance while allowing the model to be put into use quickly. It also greatly reduces the occupation of computing resources in the computing device by the modeling process, so that the internal resource management of the computing device can be optimized and the computing device can process more computing tasks, thereby improving processing efficiency.
In a second aspect, based on the same inventive concept as the method for accelerating modeling of a computing device provided in the foregoing first aspect, an embodiment of the present specification further provides an apparatus for accelerating modeling of a computing device, which is run on a computing device supporting a vector floating-point multiply-add instruction. As shown in fig. 2, the apparatus 20 includes:
the vector division module 21 is configured to perform vector division on the model parameters and the respective feature data of each training sample in a training process of the target model to obtain a parameter vector sequence of the model parameters and a respective feature vector sequence of each training sample, where the training process of the target model includes multiple rounds of iterative training;
the multiplication and addition module 22 is configured to call a preset vector floating point multiplication and addition instruction for a training sample in each iteration training process, and perform multiplication and addition processing on the parameter vector sequence and the feature vector sequence to obtain a target value of the training sample;
and the model determining module 23 is configured to obtain a trained target model based on the target value of the training sample in each iteration training process.
In an alternative embodiment, the vector dividing module 21 includes:
an obtaining submodule 211, configured to obtain a vector dimension supported by the vector floating-point multiply-add instruction;
and a partitioning submodule 212, configured to perform vector partitioning on the model parameters based on the vector dimensions to obtain m n-dimensional parameter vectors, to form the parameter vector sequence, and perform vector partitioning on respective feature data of each training sample to obtain m n-dimensional feature vectors, to form the feature vector sequence, where m is an integer greater than or equal to 1, and n is an integer greater than or equal to 2.
In an alternative embodiment, the partitioning sub-module is configured to:
if the feature quantity contained in the feature data is larger than the vector dimension, determining the vector division number based on the vector dimension and the feature quantity;
according to the vector division number and the vector dimension, m n-dimensional first initial vectors and m n-dimensional second initial vectors are constructed;
and sequentially assigning the model parameters to elements in the m constructed first initial vectors according to a preset sequence to obtain m n-dimensional parameter vectors, and sequentially assigning the features contained in the feature data to elements in the m constructed second initial vectors according to the preset sequence to obtain m n-dimensional feature vectors.
In an optional embodiment, in the process of vector division of the feature data of each training sample, each feature is divided into exactly one feature vector, so that the features contained within one feature vector are distinct and different feature vectors contain different features; likewise, in the process of vector division of the model parameters, each model parameter is divided into exactly one parameter vector, so that the model parameters contained within one parameter vector are distinct and different parameter vectors contain different model parameters.
In an alternative embodiment, the apparatus 20 further comprises:
an assignment module, configured to, in the vector division process, assign the unfilled elements in a feature vector and a parameter vector to preset values if the number of features contained in the feature vector is less than the vector dimension supported by the vector floating-point multiply-add instruction and the number of model parameters contained in the parameter vector is less than the vector dimension supported by the vector floating-point multiply-add instruction.
In an alternative embodiment, the multiply-add module 22 includes:
the first processing sub-module 221 is configured to invoke the vector floating-point multiply-add instruction, sequentially perform multiply-add processing on the parameter vector arranged at the i-th position in the parameter vector sequence, the feature vector arranged at the i-th position in the feature vector sequence, and a preset initial vector to obtain a current result vector, and take the current result vector as the initial vector of the next multiply-add processing to execute the next multiply-add processing, where i is an integer from 0 to m−1, and m is the number of parameter vectors in the parameter vector sequence;
the second processing sub-module 222 is configured to, after the parameter vector sequence and the feature vector sequence have been traversed, perform accumulation processing on the elements in the current result vector, obtain a hypothesis function value of the training sample based on the accumulation result, and use the hypothesis function value as the target value of the training sample.
In an alternative embodiment, the multiply-add module 22 includes:
the third processing sub-module 223 is configured to invoke the vector floating point multiply-add instruction, perform multiply-add processing on a gradient coefficient vector obtained in advance, a feature vector arranged at the jth position in the feature vector sequence, and a parameter vector arranged at the jth position in the parameter vector sequence before descent, to obtain a parameter vector sequence after descent, and use a model parameter in the parameter vector sequence after descent as a target value of the training sample, where j is an integer between 0 and m-1, and m is the number of parameter vectors in the parameter vector sequence.
In an alternative embodiment, the above-mentioned multiplication and addition module 22 further includes:
the construction submodule is used for acquiring a gradient descent coefficient of a gradient descent process in the iterative training process; and constructing a gradient coefficient vector according to the dimension of the parameter vector, and assigning each element of the gradient coefficient vector as the gradient descent coefficient.
In an alternative embodiment, the target model is a linear machine learning model, and the linear machine learning model includes a linear regression model and a logistic regression model.
It should be noted that, in the apparatus 20 for accelerating computing device modeling provided in the embodiment of the present specification, the specific manner in which each module performs operations has been described in detail in the method embodiment provided in the foregoing first aspect, and the specific implementation process may refer to the method embodiment provided in the foregoing first aspect, which will not be described in detail here.
In a third aspect, based on the same inventive concept as the method for accelerating modeling of a computing device provided in the foregoing embodiments, the present specification further provides a computing device supporting the use of a vector floating-point multiply-add instruction, such as Intel's VFMADD instruction. As shown in fig. 3, the computing device comprises a memory 304, one or more processors 302, and a computer program stored on the memory 304 and executable on the processors 302, and the processor 302, when executing the program, implements the steps of any embodiment of the method for accelerating modeling of a computing device provided in the foregoing first aspect.
In fig. 3, a bus architecture is represented by bus 300. Bus 300 may include any number of interconnected buses and bridges, linking together various circuits including one or more processors, represented by processor 302, and memory, represented by memory 304. Bus 300 may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and are therefore not described further herein. A bus interface 305 provides an interface between bus 300 and receiver 301 and transmitter 303. Receiver 301 and transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other apparatus over a transmission medium. Processor 302 is responsible for managing bus 300 and general processing, and memory 304 may be used for storing data used by processor 302 in performing operations.
It will be appreciated that the configuration shown in fig. 3 is merely illustrative, and the computing device provided by the embodiments of the present specification may include more or fewer components than shown in fig. 3, or have a different configuration than shown in fig. 3. The components shown in fig. 3 may be implemented in hardware, software, or a combination thereof.
In a fourth aspect, based on the same inventive concept as the method for accelerating modeling of a computing device provided in the foregoing embodiments, the present specification embodiment further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of any of the embodiments of the method for accelerating modeling of a computing device provided in the foregoing first aspect.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present specification have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all changes and modifications that fall within the scope of the specification.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present specification without departing from the spirit and scope of the specification. Thus, if such modifications and variations of the present specification fall within the scope of the claims of the present specification and their equivalents, the specification is intended to include such modifications and variations.