Disclosure of Invention
The embodiments of the present application provide a network performance optimization method, a network performance optimization apparatus and a storage medium, which can realize online capacity expansion and contraction at the VNF and VNFC levels, thereby dynamically optimizing network performance.
The first aspect of the present application provides a method for optimizing network performance, which may include:
determining the CPU utilization corresponding to the current virtual resource of a network cloud;
judging whether the CPU utilization reaches a preset CPU utilization high-limit threshold;
if so, determining the required number of virtual network element components (VNFCs) corresponding to the virtual resource according to a relation model and the CPU utilization, wherein the relation model is a model obtained by training network management performance data through a machine learning algorithm, and is used for predicting the relationship between user behavior data and the CPU utilization occupied by virtual resources;
determining an optimal configuration model of the virtual network element (VNF) corresponding to the virtual resource according to the required number of VNFCs;
comparing the optimal configuration model with the current configuration model of the VNF to obtain a comparison result;
and adjusting the VNF according to the comparison result.
In one possible design, the method further comprises:
acquiring network management performance data within a preset duration, wherein the network management performance data comprises user behavior data and CPU utilizations corresponding to a plurality of VNFCs;
preprocessing the network management performance data to obtain training data;
Performing iterative training on the initial relation model according to the training data until a preset iteration termination condition is reached;
And determining the initial relation model reaching the preset iteration termination condition as the relation model.
In one possible design, the adjusting the VNF according to the comparison result includes:
Judging whether a resource allocation adjustment condition is reached;
If yes, the VNF is expanded or contracted according to the comparison result.
In one possible design, after the number of VNFCs is expanded or contracted according to the optimization policy corresponding to the VNF, the method further includes:
Collecting network management performance data corresponding to the VNFC after capacity expansion or capacity contraction;
Analyzing the network management performance data corresponding to the VNFC after capacity expansion or capacity shrinkage based on the relation model to obtain an analysis result;
and executing corresponding operation according to the analysis result.
In one possible design, the method further comprises:
judging whether the number of iterations reaches a preset value, and if so, determining that the preset iteration termination condition is met;
or,
judging whether the model parameters of the initial relation model converge, and if so, determining that the preset iteration termination condition is met.
In one possible design, comparing the optimal configuration model with the current configuration model of the VNF to obtain a comparison result includes:
Determining a first virtual resource type corresponding to the optimal configuration model and an adjustment range corresponding to the first virtual resource type;
determining a second virtual resource type corresponding to the current configuration model of the VNF;
And comparing the first virtual resource type and the adjustment range corresponding to the first virtual resource type with the second virtual resource type respectively to obtain the comparison result.
A second aspect of the present application provides a network performance optimization apparatus, including:
The determining unit is used for determining the CPU utilization rate corresponding to the current virtual resource of the network cloud;
the judging unit is used for judging whether the CPU utilization rate reaches a preset CPU utilization rate high-limit threshold value or not;
The determining unit is further configured to determine, according to the relationship model and the CPU utilization, a required number of virtual network element components VNFCs corresponding to the virtual resources if the CPU utilization reaches a preset CPU utilization high-limit threshold, where the relationship model is a model obtained by training network management performance data through a machine learning algorithm, and the relationship model is a relationship model for predicting a relationship between user behavior data and a virtual resource occupation CPU utilization;
The determining unit is further configured to determine an optimal configuration model of a virtual network element VNF corresponding to the virtual resource according to the required number of VNFCs;
The comparison unit is used for comparing the optimal configuration model with the current configuration model of the VNF to obtain a comparison result;
And the adjusting unit is used for adjusting the VNF according to the comparison result.
In one possible design, the apparatus further comprises:
A model training unit for:
acquiring network management performance data within a preset duration, wherein the network management performance data comprises user behavior data and CPU utilizations corresponding to a plurality of VNFCs;
preprocessing the network management performance data to obtain training data;
Performing iterative training on the initial relation model according to the training data until a preset iteration termination condition is reached;
And determining the initial relation model reaching the preset iteration termination condition as the relation model.
In one possible design, the adjusting unit is specifically configured to:
Judging whether a resource allocation adjustment condition is reached;
If yes, the VNF is expanded or contracted according to the comparison result.
In a possible design, the adjusting unit is further configured to:
Collecting network management performance data corresponding to the VNFC after capacity expansion or capacity contraction;
Analyzing the network management performance data corresponding to the VNFC after capacity expansion or capacity shrinkage based on the relation model to obtain an analysis result;
and executing corresponding operation according to the analysis result.
In a possible design, the model training unit is further configured to:
judging whether the number of iterations reaches a preset value, and if so, determining that the preset iteration termination condition is met;
or,
judging whether the model parameters of the initial relation model converge, and if so, determining that the preset iteration termination condition is met.
In one possible design, the comparison unit is specifically configured to:
Determining a first virtual resource type corresponding to the optimal configuration model and an adjustment range corresponding to the first virtual resource type;
determining a second virtual resource type corresponding to the current configuration model of the VNF;
And comparing the first virtual resource type and the adjustment range corresponding to the first virtual resource type with the second virtual resource type respectively to obtain the comparison result.
A third aspect of the present application provides a server comprising a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform operations of the method for optimizing network performance.
A fourth aspect of the application provides a computer readable storage medium having stored therein at least one executable instruction which, when run on a computing device, causes the computing device to perform a method of optimizing network performance according to the first aspect of the application.
A fifth aspect of the application discloses a computer program product for causing a computer to carry out the method of optimizing network performance according to the first aspect of the application when the computer program product is run on the computer.
A sixth aspect of the present application discloses an application publishing platform for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform the method for optimizing network performance according to the first aspect of the present application.
It can be seen from the above technical solutions that the embodiments of the present application have the following advantages:
when the CPU utilization reaches the preset CPU utilization high-limit threshold, the required number of virtual network element components is determined according to a pre-trained relationship model between user behavior data and the CPU utilization occupied by virtual resources; an optimal configuration model of the virtual network element is determined according to the required number of virtual network element components; the optimal configuration model is compared with the current configuration model of the virtual network element to obtain a comparison result; and the virtual network element is adjusted according to the comparison result. Thus, online capacity expansion and contraction at the VNF and VNFC levels can be realized.
Detailed Description
In order that those skilled in the art will better understand the present application, reference will now be made to the accompanying drawings in which embodiments of the application are illustrated, it being apparent that the described embodiments are only some, but not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
In the early stage of NFV commercialization, most operators and vendors regard elastic scaling as an important characteristic of NFV and one of its major driving forces. In an ideal case, the NFVO+ can initiate elastic scaling control for an instantiated VNF/NS, and the elastic scaling command is sent to the VNFM corresponding to each VNF, with the VNFM responsible for executing the command, thereby making the network more efficient and reducing operating costs. However, in order to ensure stable operation of the telecommunications network, and owing to limitations of the prior art as well as comprehensive risk and cost considerations, NFV has not yet realized online capacity expansion and contraction at the VNF and VNFC levels.
Because the construction of the cloud resource pool and the construction of the VNF applications are not synchronized, the cloud resource pool must forecast demand and be deployed in advance, and a multi-phase cloud resource pool project may correspond to the VNF deployments of multiple projects. As a result, differences in forecasting VNF resource demands across time and across projects lead to the problem that the deployed resources cannot be optimized.
In view of this, an embodiment of the present application provides a method for optimizing network performance and related equipment: the CPU utilization corresponding to the current virtual resource of the network cloud is determined; when the CPU utilization reaches a preset CPU utilization high-limit threshold, the required number of virtual network element components is determined according to a pre-trained relationship model between user behavior data and the CPU utilization occupied by virtual resources; an optimal configuration model of the virtual network element is determined according to the required number of virtual network element components; the optimal configuration model is compared with the current configuration model of the virtual network element to obtain a comparison result; and the virtual network element is adjusted according to the comparison result. In this way, online capacity expansion and contraction at the VNF and VNFC levels can be realized.
The method for optimizing network performance and the related equipment provided by the present application can be applied to optimizing the resource configuration models of cloudified 2G, 3G, 4G, 5G and IP Multimedia Subsystem (IMS) networks, and can be extended to optimizing the resource performance of other cloud computing systems.
Referring to fig. 1, fig. 1 is a network architecture diagram of a 5G core network according to an embodiment of the present application, where:
The 5G core network comprises the control plane functions, namely the Authentication Server Function (AUSF), Access and Mobility Management Function (AMF), Policy Control Function (PCF), Session Management Function (SMF), Unified Data Management (UDM), Charging Function (CHF), Network Slice Selection Function (NSSF), Unstructured Data Storage Function (UDSF), NF Repository Function (NRF) and Unified Data Repository (UDR), as well as the User Plane Function (UPF), which are specifically as follows:
1. The AUSF receives the authentication request for a User Equipment (UE) from the AMF, requests the key from the UDM, and forwards the key issued by the UDM to the AMF for authentication.
2. The AMF is the termination point of the RAN signaling interface (N2) and of the Non-Access Stratum (NAS) signaling (N1, MM messages), and is responsible for encryption and integrity protection of NAS messages, as well as for registration, access, mobility, authentication, transparent transmission of short messages, and other functions.
3. The PCF supports a unified policy framework to govern network behavior, provides policy rules to network entities for enforcement, and accesses subscription information in the Unified Data Repository (UDR).
4. The SMF is the termination point of the session management (SM) part of NAS messages, and is responsible for establishing, modifying and releasing sessions, allocating and managing UE IP addresses, implementing the DHCP function, and so on;
5. The UDM is responsible for managing user identifiers, subscription data and authentication data, and for registration management of the network elements serving a user (such as the AMF and SMF currently serving the terminal; for example, when the user switches to a new visited AMF, the UDM may also initiate a deregistration message to the old AMF, requiring the old AMF to delete the user-related information);
6. The CHF is mainly composed of the AGF, CDF and CGF parts, and supports three scenarios: online charging, offline charging and converged charging.
7. The NSSF selects the set of network slice instances serving the UE, determines the allowed NSSAI and, if needed, the mapping to the subscribed S-NSSAIs, and determines the configured NSSAI and, if needed, the mapping to the subscribed S-NSSAIs;
8. The UDSF allows any NF to store and retrieve its unstructured data;
9. The NRF performs NF registration, management and status detection, thereby realizing automated management of all NFs;
10. The UDR stores and retrieves subscription data, stores and retrieves policy data for the PCF, provides storage and retrieval of structured data for exposure, and stores the application data of the NEF;
11. The UPF is responsible for packet routing and Quality of Service (QoS) flow mapping.
The method for optimizing the network performance provided by the application is specifically described below from the viewpoint of a network performance optimizing device.
Referring to fig. 2, fig. 2 is a schematic diagram of an embodiment of a method for optimizing network performance according to an embodiment of the present application, including:
201. Determining the CPU utilization corresponding to the current virtual resource of the network cloud.
In this embodiment, when optimizing the network performance of the network cloud, the network performance optimization device may determine the CPU utilization corresponding to the current virtual resource of the network cloud. The manner of determining the CPU utilization is not specifically limited herein; for example, the CPU utilization of each virtual resource in the network cloud may be monitored.
202. Judging whether the CPU utilization reaches a preset CPU utilization high-limit threshold, and if so, executing step 203.
In this embodiment, after determining the CPU utilization corresponding to the current virtual resource of the network cloud, the network performance optimization device may judge whether the CPU utilization reaches the preset CPU utilization high-limit threshold, and if so, execute step 203. The preset CPU utilization high-limit threshold is the value at which the CPU utilization of the server corresponding to the network cloud reaches its bottleneck, and it may be set according to tests or empirical values. That is, if the CPU utilization of the server corresponding to the network cloud reaches the bottleneck, step 203 is executed.
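By way of a non-limiting illustration of steps 201 and 202, the following minimal Python sketch samples CPU utilization and checks it against the high-limit threshold. The psutil-based local sampling and the 85% threshold are assumptions made for the sketch (in a real network cloud the utilization would come from the virtual infrastructure or network manager metrics), not features prescribed by the present application.

```python
# Minimal sketch of steps 201-202; the psutil-based sampling and the 85% threshold
# are illustrative assumptions, not mandated by this application.
import statistics
import psutil


def current_cpu_utilization(samples: int = 5, interval: float = 1.0) -> float:
    """Sample overall CPU utilization a few times and return the mean (percent)."""
    readings = [psutil.cpu_percent(interval=interval) for _ in range(samples)]
    return statistics.mean(readings)


CPU_HIGH_LIMIT_THRESHOLD = 85.0  # assumed empirical high-limit value, in percent

if __name__ == "__main__":
    utilization = current_cpu_utilization()
    if utilization >= CPU_HIGH_LIMIT_THRESHOLD:
        print(f"CPU utilization {utilization:.1f}% reached the high-limit threshold; "
              "proceed to step 203 (determine the required number of VNFCs).")
    else:
        print(f"CPU utilization {utilization:.1f}% is below the threshold; no scaling needed.")
```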
203. Determining the required number of virtual network element components (VNFCs) corresponding to the virtual resource according to the relation model and the CPU utilization.
In this embodiment, when determining that the CPU utilization reaches the preset CPU utilization high-limit threshold, the network performance optimization device may determine the required number of VNFCs of the virtual network element component corresponding to the virtual resource, that is, the number of VNFCs currently required to be expanded, through a pre-trained relationship model between predicted user behavior data and the virtual resource occupation CPU utilization. The following describes the training mode of the relation model:
Step 1: acquiring network management performance data within a preset duration, wherein the network management performance data comprises user behavior data and CPU utilizations corresponding to a plurality of VNFCs.
In this step, the collected network management performance data are described by taking the 5G core network as an example. For the 5G core network, the network management performance data include, but are not limited to, the items listed in Table 1; in addition to the data in Table 1, CPU utilization statistics of VNFCs such as the PBU and OMU are also collected. The network management performance data should cover a relatively long period, for example 3 months or more, for the subsequent machine learning and modeling.
TABLE 1
Step 2: preprocessing the network management performance data to obtain training data.
Step 3: performing iterative training on the initial relation model according to the training data until a preset iteration termination condition is reached.
In this step, after obtaining the network management performance data, the network performance optimization device may preprocess the data and iterate on the preprocessed data through a machine learning algorithm until the preset iteration termination condition is reached. The network performance optimization device models and predicts the network management performance data using a Long Short-Term Memory (LSTM) neural network algorithm. LSTM is mainly used for modeling and predicting time-dependent events; its application fields include text generation, machine translation, speech recognition, fault diagnosis, traffic prediction, and the like. In the present application, the performance data collected by the network manager of the 5G core network, namely the user behavior statistics and the CPU utilization statistics of the virtual network element components (VNFCs), are modeled and predicted using the LSTM neural network algorithm; a relationship model between the user behavior data and the CPU utilization occupied by resources is established, and future predictions are made from historical data, thereby forming the CPU utilization demand.
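By way of a non-limiting illustration of the LSTM modeling described above, the following sketch defines a relationship model that maps windows of user behavior statistics to a predicted CPU utilization. The feature count, window length and layer sizes are assumptions made for the sketch rather than values specified by the present application.

```python
# Illustrative LSTM relationship model between user-behavior features and CPU
# utilization (PyTorch). Feature count, window length and layer sizes are
# assumptions for the sketch, not values specified by this application.
import torch
import torch.nn as nn


class CpuUtilizationLSTM(nn.Module):
    def __init__(self, n_features: int = 8, hidden_size: int = 64, num_layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # predicted CPU utilization (0..1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features) -- windows of user-behavior statistics
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)  # prediction from the last time step


# Example shapes: 32 windows of 24 time steps with 8 behavior features each.
model = CpuUtilizationLSTM()
example = torch.randn(32, 24, 8)
prediction = model(example)  # tensor of shape (32,)
```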
Step 4: determining the initial relation model that reaches the preset iteration termination condition as the relation model.
In this step, during model training with the LSTM, after each iteration it may be judged whether the number of iterations reaches a preset value; if so, the preset iteration termination condition is met. Alternatively, it may be judged whether the model parameters of the initial relation model have converged; if so, the preset iteration termination condition is met. The initial relation model that reaches the preset iteration termination condition is determined as the relation model.
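A minimal training-loop sketch for steps 3 and 4, assuming the PyTorch model sketched above, is shown below; it terminates either when the preset maximum number of iterations is reached or when the model parameters converge. The optimizer, learning rate and tolerance are illustrative assumptions.

```python
# Sketch of the iterative training with the two termination conditions described
# above: a maximum iteration count, or convergence of the model parameters.
# Optimizer choice, learning rate and tolerance are illustrative assumptions.
import torch
import torch.nn as nn


def train_relationship_model(model: nn.Module, inputs: torch.Tensor, targets: torch.Tensor,
                             max_iterations: int = 500, tolerance: float = 1e-4) -> nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    previous = torch.nn.utils.parameters_to_vector(model.parameters()).detach().clone()

    for iteration in range(1, max_iterations + 1):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

        current = torch.nn.utils.parameters_to_vector(model.parameters()).detach()
        if torch.norm(current - previous) < tolerance:   # parameter convergence
            print(f"Converged after {iteration} iterations (loss={loss.item():.4f}).")
            break
        previous = current.clone()
    # Reaching max_iterations also satisfies the preset termination condition.
    return model


# e.g. train_relationship_model(CpuUtilizationLSTM(), torch.randn(128, 24, 8), torch.rand(128))
```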
204. Determining the optimal configuration model of the virtual network element (VNF) corresponding to the virtual resource according to the required number of VNFCs.
In this embodiment, after determining the required number of VNFCs, the network performance optimization device may determine the optimal configuration model of the VNF corresponding to the virtual resource according to the required number of VNFCs; by deriving the required number of VNFCs from the CPU demand, the purpose of optimizing the VNF configuration is achieved. NFV is a virtualization technology and concept that addresses the deployment of network functions on general-purpose hardware. A VNF is a specific virtualized network function that provides a certain network service; it is software deployed in a virtual machine, a container or on a bare-metal physical machine using the infrastructure provided by the NFVI.
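As a non-limiting sketch of steps 203 and 204, the required number of VNFCs may be derived from the predicted CPU demand and a per-VNFC capacity, and an optimal configuration model may then be assembled; the per-VNFC capacity figure and the configuration fields used here are assumptions for illustration only.

```python
# Sketch of steps 203-204: deriving the required number of VNFCs from the
# predicted CPU demand, then an optimal VNF configuration model. The per-VNFC
# capacity figure and the configuration fields are illustrative assumptions.
import math
from dataclasses import dataclass


@dataclass
class VnfConfigurationModel:
    vnfc_count: int
    vcpus_per_vnfc: int
    memory_gb_per_vnfc: int


def required_vnfc_count(predicted_cpu_demand: float, per_vnfc_capacity: float,
                        high_limit_threshold: float = 0.85) -> int:
    """Number of VNFCs needed so that each stays below the high-limit threshold."""
    usable = per_vnfc_capacity * high_limit_threshold
    return max(1, math.ceil(predicted_cpu_demand / usable))


def optimal_configuration(predicted_cpu_demand: float) -> VnfConfigurationModel:
    count = required_vnfc_count(predicted_cpu_demand, per_vnfc_capacity=8.0)
    return VnfConfigurationModel(vnfc_count=count, vcpus_per_vnfc=8, memory_gb_per_vnfc=16)


# e.g. a predicted demand of 40 vCPU-equivalents yields ceil(40 / 6.8) = 6 VNFCs.
print(optimal_configuration(40.0))
```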
205. Comparing the optimal configuration model with the current configuration model of the VNF to obtain a comparison result.
In this embodiment, after obtaining the optimal configuration model, the network performance optimization device may compare it with the current configuration model of the VNF to obtain a comparison result. During the comparison, a first virtual resource type corresponding to the optimal configuration model and the adjustment range corresponding to the first virtual resource type may be determined, a second virtual resource type corresponding to the current configuration model of the VNF may be determined, and the first virtual resource type and its corresponding adjustment range may be compared with the second virtual resource type to obtain the comparison result. The comparison result is the resource type to be adjusted and its adjustment range, that is, it identifies which resource type needs to be adjusted and by how much.
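A minimal sketch of the comparison in step 205 is given below, assuming that both configuration models are represented as mappings from resource type to target quantity; the resource-type names are illustrative assumptions.

```python
# Sketch of step 205: comparing the optimal configuration model with the current
# configuration model, yielding the resource types to adjust and their adjustment
# ranges. The resource-type names are illustrative assumptions.
from typing import Dict


def compare_configurations(optimal: Dict[str, int], current: Dict[str, int]) -> Dict[str, int]:
    """Return, per resource type, the delta (positive = expand, negative = contract)."""
    deltas = {}
    for resource_type, target in optimal.items():
        delta = target - current.get(resource_type, 0)
        if delta != 0:
            deltas[resource_type] = delta
    return deltas


optimal_model = {"vnfc_count": 6, "vcpus_per_vnfc": 8, "memory_gb_per_vnfc": 16}
current_model = {"vnfc_count": 4, "vcpus_per_vnfc": 8, "memory_gb_per_vnfc": 16}
print(compare_configurations(optimal_model, current_model))  # {'vnfc_count': 2}
```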
206. Adjusting the VNF according to the comparison result.
In this embodiment, after obtaining the comparison result, the network performance optimization device may adjust the VNF according to the comparison result. Specifically, it may be judged whether a resource configuration adjustment condition is reached; the resource configuration adjustment condition may be a manual decision or an automatic decision on whether to adjust the resource configuration. If the condition is reached, the VNF is expanded or contracted according to the comparison result; of course, the VNFCs may also be expanded or contracted.
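The adjustment in step 206 may be sketched as follows, assuming a hypothetical OrchestratorClient standing in for the NFVO+/VNFM interface (its scale_vnf() call is not an actual API of any management system); the adjustment condition is modeled as either automatic approval or a manual confirmation.

```python
# Sketch of step 206: deciding whether the resource-configuration adjustment
# condition is reached and then scaling the VNF out or in. OrchestratorClient
# and its scale_vnf() call are hypothetical stand-ins for an NFVO+/VNFM API.
from typing import Dict


class OrchestratorClient:           # hypothetical management-system client
    def scale_vnf(self, vnf_id: str, resource_type: str, delta: int) -> None:
        action = "scale out" if delta > 0 else "scale in"
        print(f"{action} {vnf_id}: {resource_type} by {abs(delta)}")


def adjust_vnf(vnf_id: str, comparison: Dict[str, int], auto_approve: bool = False) -> None:
    # Adjustment condition: automatic approval, or an operator confirming manually.
    approved = auto_approve or input(f"Apply {comparison} to {vnf_id}? [y/N] ").lower() == "y"
    if not approved:
        return
    client = OrchestratorClient()
    for resource_type, delta in comparison.items():
        client.scale_vnf(vnf_id, resource_type, delta)


adjust_vnf("vnf-smf-01", {"vnfc_count": 2}, auto_approve=True)
```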
When it is decided to adopt the optimal configuration model, the optimal configuration model is issued through the network management system of the cloud resource management pool (such as the NFVO+) and deployed by instantiation in the cloud resource pool.
In addition, the network performance optimization device may collect the network management performance data corresponding to the VNFC after capacity expansion or contraction, analyze these data based on the relation model to obtain an analysis result, and execute a corresponding operation according to the analysis result. That is, after the optimized NFV network instance is deployed, the corresponding network management performance data are collected, and analysis and prediction are performed on this basis, forming a closed-loop flow. This ensures the effectiveness of the optimal configuration model and allows dynamic adjustment as traffic and user behavior change.
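The closed-loop flow described above may be sketched as follows; collect_performance_data() and predict_cpu_utilization() are hypothetical hooks standing in for the network management data collection and the relationship model, respectively.

```python
# Sketch of the closed-loop flow described above: after scaling, collect fresh
# network management performance data, analyse it with the relationship model and
# trigger a further adjustment if the prediction still exceeds the threshold.
# collect_performance_data() and predict_cpu_utilization() are hypothetical hooks.
from typing import Callable, Sequence


def closed_loop_check(collect_performance_data: Callable[[], Sequence[float]],
                      predict_cpu_utilization: Callable[[Sequence[float]], float],
                      high_limit_threshold: float = 0.85) -> str:
    samples = collect_performance_data()                 # post-scaling network data
    predicted = predict_cpu_utilization(samples)         # relationship-model output
    if predicted >= high_limit_threshold:
        return "re-optimize"      # trigger a further expansion/contraction cycle
    return "stable"               # optimal configuration model remains effective


# Example with stubbed hooks: a predicted utilization of about 0.61 keeps the configuration stable.
print(closed_loop_check(lambda: [0.60, 0.65, 0.58], lambda s: sum(s) / len(s)))
```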
In summary, in the embodiments provided by the present application, the CPU utilization corresponding to the current virtual resource of the network cloud is determined; when the CPU utilization reaches the preset CPU utilization high-limit threshold, the required number of virtual network element components is determined according to the pre-trained relationship model between user behavior data and the CPU utilization occupied by virtual resources; the optimal configuration model of the virtual network element is determined according to the required number of virtual network element components; the optimal configuration model is compared with the current configuration model of the virtual network element to obtain a comparison result; and the virtual network element is adjusted according to the comparison result. Thus, online capacity expansion and contraction at the VNF and VNFC levels can be realized.
The embodiments of the present application are described above in terms of a network performance optimization method, and the embodiments of the present application are described below in terms of a network performance optimization device:
Referring to fig. 3, fig. 3 is a schematic diagram of an embodiment of a network performance optimization apparatus according to an embodiment of the present application, where the network performance optimization apparatus 300 includes:
a determining unit 301, configured to determine a CPU utilization corresponding to a current virtual resource of the network cloud;
A judging unit 302, configured to judge whether the CPU utilization reaches a preset CPU utilization high-limit threshold;
The determining unit 301 is further configured to determine, according to the relationship model and the CPU utilization, a required number of virtual network element components VNFCs corresponding to the virtual resources if the CPU utilization reaches a preset CPU utilization high-limit threshold, where the relationship model is a model obtained by training network management performance data through a machine learning algorithm, and the relationship model is a relationship model for predicting a relationship between user behavior data and a virtual resource occupation CPU utilization;
The determining unit 301 is further configured to determine an optimal configuration model of a virtual network element VNF corresponding to the virtual resource according to the required number of VNFCs;
a comparison unit 303, configured to compare the optimal configuration model with the current configuration model of the VNF, to obtain a comparison result;
and the adjusting unit 304 is configured to adjust the VNF according to the comparison result.
In one possible design, the apparatus further comprises:
a model training unit 305 for:
acquiring network management performance data within a preset duration, wherein the network management performance data comprises user behavior data and CPU utilizations corresponding to a plurality of VNFCs;
preprocessing the network management performance data to obtain training data;
Performing iterative training on the initial relation model according to the training data until a preset iteration termination condition is reached;
And determining the initial relation model reaching the preset iteration termination condition as the relation model.
In one possible design, the adjusting unit 304 is specifically configured to:
Judging whether a resource allocation adjustment condition is reached;
If yes, the VNF is expanded or contracted according to the comparison result.
In a possible design, the adjusting unit 304 is further configured to:
Collecting network management performance data corresponding to the VNFC after capacity expansion or capacity contraction;
Analyzing the network management performance data corresponding to the VNFC after capacity expansion or capacity shrinkage based on the relation model to obtain an analysis result;
and executing corresponding operation according to the analysis result.
In a possible design, the model training unit 305 is further configured to:
judging whether the number of iterations reaches a preset value, and if so, determining that the preset iteration termination condition is met;
or,
judging whether the model parameters of the initial relation model converge, and if so, determining that the preset iteration termination condition is met.
In a possible design, the comparing unit 303 is specifically configured to:
Determining a first virtual resource type corresponding to the optimal configuration model and an adjustment range corresponding to the first virtual resource type;
determining a second virtual resource type corresponding to the current configuration model of the VNF;
And comparing the first virtual resource type and the adjustment range corresponding to the first virtual resource type with the second virtual resource type respectively to obtain the comparison result.
The embodiment of the present application also provides a server, which is used for deploying the network performance optimization apparatus and executing the network performance optimization method. The server comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus, and the memory is used for storing at least one executable instruction that causes the processor to perform the operations of the network performance optimization method. Specifically, referring to fig. 4, fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application. The server 400 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 422 (e.g., one or more processors), a memory 432, and one or more storage media 430 (e.g., one or more mass storage devices) storing application programs 442 or data 444. The memory 432 and the storage medium 430 may be transitory or persistent storage. The program stored on the storage medium 430 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Furthermore, the central processing unit 422 may be configured to communicate with the storage medium 430 and execute on the server 400 the series of instruction operations stored in the storage medium 430.
The server 400 may also include one or more power supplies 426, one or more wired or wireless network interfaces 450, one or more input/output interfaces 458, and/or one or more operating systems 441, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the network performance optimization apparatus in the above embodiments may be based on the server structure shown in fig. 4.
The present application also provides a computer readable storage medium, where at least one executable instruction is stored, where the executable instruction when executed on a computing device causes the computing device to perform the method for optimizing network performance according to any of the embodiments described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), and the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
While the application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the foregoing embodiments may be modified or equivalents may be substituted for some of the features thereof, and that the modifications or substitutions do not depart from the spirit and scope of the embodiments of the application.