Disclosure of Invention
In view of the above, it is necessary to provide a model management method, system, computer device and storage medium for addressing the above technical problems.
According to an aspect of the present invention, there is provided a model management method, the method including:
the method comprises the steps that a main node server receives a model instruction sent by a client, generates a task instruction and sends the task instruction to a first sub-node server, wherein the main node server determines that the first sub-node server is a receiving object of the task instruction according to the model instruction or the state of the first sub-node server;
the first sub-node server executes the task according to the task instruction and returns an operation result to the main node server;
and the master node server returns the operation result to the client.
In one embodiment, the receiving, by the master node server, the model instruction sent by the client, generating a task instruction, and sending the task instruction to the first child node server includes:
the master node server receives a model instruction sent by the client;
and the main node server sends a task instruction to the first sub-node server according to the resource occupancy rate of the first sub-node server, the resource occupancy rate of the second sub-node server and the model instruction, wherein the resource occupancy rate of the first sub-node server is smaller than the resource occupancy rate of the second sub-node server.
In one embodiment, the sending, by the master node server, the task instruction to the first child node server according to the resource occupancy of the first child node server, the resource occupancy of the second child node server, and the model instruction includes:
the main node server acquires the resource occupancy rate of the first sub-node server and the resource occupancy rate of the second sub-node server in a first period;
and in the first period, the main node server sends a task instruction to the first sub-node server according to the model instruction, wherein in the first period, the resource occupancy rate of the first sub-node server is less than that of the second sub-node server.
In one embodiment, before the master node server receives a model calling instruction sent by a client, the method includes:
the main node server receives a resource display instruction of the client, and acquires the resource occupancy rate of the first sub-node server and the resource occupancy rate of the second sub-node server according to the resource display instruction;
the main node server returns the resource occupancy rate of the first sub-node server and the resource occupancy rate of the second sub-node server to the client;
and the client sends the model instruction to the main node server, and the model instruction instructs the main node server to send the task instruction to the first sub-node server.
In one embodiment, the master node server receiving a model call instruction sent by a client, sending a task instruction to a first child node server according to the model call instruction, and the first child node server calling a model according to the task instruction includes:
the main node server receives the model calling instruction and the additional information sent by the client side, and sends a task instruction and the additional information to the first sub-node server according to the model calling instruction;
and the first child node server calls a model according to the task instruction and the additional information.
In one embodiment, the first child node server executing the task according to the task instruction includes:
and under the condition that the main node server receives a modification instruction sent by a client, sending a task modification instruction to a first sub-node server according to the modification instruction, and executing a task by the first sub-node server according to the task modification instruction.
In one embodiment, the master node server returning the operation result to the client includes:
and the main node server acquires a contact object corresponding to the client and returns the operation result to the contact object.
According to another aspect of the present invention, there is also provided a model management system, the system including a master node server, a first child node server, and a client:
the main node server is used for receiving a model instruction sent by a client and sending a task instruction to a first sub-node server according to the model instruction, wherein the main node server determines that the first sub-node server is a receiving object of the task instruction according to the model instruction or the state of the first sub-node server;
the first child node server is used for executing tasks according to the task instructions and returning operation results to the main node server;
and the main node server is also used for returning the operation result to the client.
In one embodiment, the master node server is further configured to send a task instruction to the first child node server according to the resource occupancy rate of the first child node server, the resource occupancy rate of the second child node server, and the model instruction, where the resource occupancy rate of the first child node server is smaller than the resource occupancy rate of the second child node server.
According to another aspect of the present invention, there is also provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above model management method when executing the computer program.
According to another aspect of the present invention, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described model management method.
According to the model management method, the model management system, the computer equipment and the storage medium, the main node server receives the model instruction sent by the client, generates the task instruction and sends the task instruction to the first sub-node server, the first sub-node server executes the task according to the task instruction and returns the operation result to the main node server, and the main node server returns the operation result to the client, so that the model management and scheduling are more orderly, the operation on the model is completed through the client, and the efficiency is higher.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof used in this application are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References herein to the terms "first," "second," "third," and the like are merely to distinguish similar objects and do not denote a particular ordering of the objects.
Fig. 1 is an application scenario diagram of a model management method according to an embodiment of the present invention, and the model management method provided in the present application may be applied to the application environment shown in fig. 1. The terminal 102 communicates with the master node server 104 through a network, the master node server 104 communicates with the first child node server 106 through the network, the master node server 104 receives a model instruction sent by a client on the terminal 102, generates a task instruction and sends the task instruction to the first child node server 106, the first child node server 106 executes a task according to the task instruction and returns an operation result to the master node server 104, and the master node server 104 returns the operation result to the client on the terminal 102. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In an embodiment, fig. 2 is a first flowchart of a model management method according to an embodiment of the present invention, and as shown in fig. 2, a model management method is provided, which is described by taking an application scenario in fig. 1 as an example, and includes the following steps:
step S210, the master node server receives the model instruction sent by the client, generates a task instruction, and sends the task instruction to the first child node server, where the master node server determines that the first child node server is a receiving object of the task instruction according to the model instruction or the state of the first child node server. In step S210, the master node server is responsible for managing and scheduling all child node servers connected to the master node server, receiving a request from the client, and converting the request into a task instruction to be issued to the corresponding child node server. The main node server receives a model instruction sent by the client, the model instruction comprises a model uploading instruction, a model modifying instruction, a model calling instruction and the like, and the model instruction can comprise an identifier of a model, a storage path of the model, a model parameter, an operation type, an identifier of a child node server and the like. And under the condition that the model instruction is a model calling instruction and the model instruction comprises the identifier of the child node server, the main node server forwards the information such as the identifier of the model to be called and the model parameter as a task instruction to the child node server corresponding to the identifier of the child node server. Under the condition that the model instruction is a model calling instruction and the model instruction does not include an identifier of an operating server, selecting a child node server for executing model calling by the main node server, wherein the selection process can be based on a selection algorithm preset in the main node server, for example, a polling method, a random method, a weighted polling method, a weighted random method and the like in a load balancing algorithm to randomly select the child node server; the selection may also be based on CPU resource occupancy in the respective servers. And under the condition that the main node server selects the first sub-node server as the execution sub-node server according to the selection algorithm, the main node server sends the information related to execution in the model instruction as a task instruction to the first sub-node server. For example, when the model instruction is a model uploading instruction, the master node server stores the model, and selects and sends a task instruction to the plurality of child node servers, wherein the task instruction includes the newly uploaded model, and the child node servers receiving the task instruction all serve as backup storage of the uploaded model; in the case that the model command is a model modification command, the master node server broadcasts the task command to all child node servers storing the modified model.
Step S220, the first child node server executes the task according to the task instruction and returns the operation result to the master node server. In step S220, the first child node server, as the selected child node server, executes the task according to the information in the task instruction when it receives the task instruction from the master node server; for example, it calls the corresponding model according to the model input parameters and the model identifier and obtains the operation result, where the model identifier is the identifier, carried in the model instruction of step S210, of the model the client intends to call on the child node server, and optionally the corresponding model is called through the storage path of the model on the child node server. The identifiers of the models on the child node servers, or the storage paths of the models on the child node servers, are stored on the master node server and managed uniformly by it. The task instruction may also include an information feedback instruction; for example, when the master node server needs to collect the models stored on each child node server or the operating states of the child node servers, the collection can be completed through such a task instruction.
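As a minimal sketch of step S220 on the child node side, assuming a local registry keyed by model identifier and an input-parameter field in the task instruction (all names here are illustrative, not part of the claimed method):

```python
# Local registry: model identifier -> a callable standing in for a loaded model.
MODEL_REGISTRY = {"model-A": lambda inputs: {"score": sum(inputs) / max(len(inputs), 1)}}

def handle_task_instruction(task: dict) -> dict:
    """Run the named model with the supplied parameters and package the result."""
    model = MODEL_REGISTRY[task["model_id"]]                    # resolve by identifier
    result = model(task.get("params", {}).get("inputs", []))    # run with input parameters
    return {"task_id": task["task_id"], "status": "ok", "result": result}

# Example: the operation result that would be returned to the master node.
print(handle_task_instruction({"task_id": "t-1", "model_id": "model-A",
                               "params": {"inputs": [0.2, 0.4, 0.9]}}))
```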
In step S230, the master node server returns the operation result to the client. In step S230, the master node server serves as the interface for communicating with the client and is responsible for forwarding the operation result returned by the first child node server to the client.
Optionally, since the master node server is responsible for model storage, management and scheduling, a plurality of master node servers may be set to make the system more stable, thereby implementing redundant backup of the master node servers. The number of child node servers can be adjusted according to the data processing demand of the application scenario, that is, child node servers are brought online and offline according to the data processing demand. For example, when the existing child node servers are all occupied and the demand for model calls is still increasing, a child node server connected to the master node server is added; after the new child node server is connected to the master node server, all models are backed up to it, the master node server adds it to the list of selectable child node servers, and the newly added child node server is considered in subsequent task assignment. In addition, when a child node server fails, the master node server removes it from the list of selectable child node servers once the current task on the faulty child node server has finished, and selects other child node servers to execute subsequent model tasks, so that the new and old child node servers transition smoothly.
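A minimal sketch of how the master node could track the selectable child node list when scaling up or retiring a faulty node, as described above; the replicate and wait_for_tasks callables stand in for real model replication and task draining and are assumptions, not part of the claimed method.

```python
selectable_nodes = ["node-a", "node-b"]
stored_models = ["model-A", "model-B"]

def add_child_node(node_id, replicate):
    """Bring a new child node online: back up every model to it, then make it selectable."""
    for model_id in stored_models:
        replicate(node_id, model_id)
    selectable_nodes.append(node_id)      # considered in subsequent task assignment

def retire_child_node(node_id, wait_for_tasks):
    """Take a faulty node offline: stop assigning it new tasks, let current tasks finish."""
    if node_id in selectable_nodes:
        selectable_nodes.remove(node_id)
    wait_for_tasks(node_id)               # smooth transition to the remaining nodes

add_child_node("node-c", replicate=lambda node, model: None)
retire_child_node("node-a", wait_for_tasks=lambda node: None)
print(selectable_nodes)  # -> ['node-b', 'node-c']
```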
In the above method, the master node server receives the model instruction sent by the client, generates a task instruction and sends it to the first child node server; the first child node server executes the task according to the task instruction and returns the operation result to the master node server; and the master node server returns the operation result to the client. The master node server acts as the interface facing the client, managing the models and scheduling the model tasks, while the first child node server carries out the specific model calling process, so that model management and scheduling are more orderly. The operation on the model is completed through the client, the step of logging in to the server is omitted, the operation result can be obtained directly at the client, and the model calling efficiency is higher.
In an embodiment, fig. 3 is a flowchart of a second method for model management according to an embodiment of the present invention, and as shown in fig. 3, the receiving, by a master node server, a model instruction sent by a client, generating a task instruction, and sending the task instruction to a first child node server includes:
step S310, a master node server receives a model instruction sent by a client;
step S320, the main node server sends a task instruction to the first sub-node server according to the resource occupancy rate of the first sub-node server, the resource occupancy rate of the second sub-node server and the model instruction, wherein the resource occupancy rate of the first sub-node server is smaller than the resource occupancy rate of the second sub-node server.
In steps S310 to S320, mainly for the model call instruction, the master node server obtains the CPU resource occupancy rates of the respective child node servers, compares them, and takes the first child node server, whose resource occupancy rate is lower, as the execution server. Optionally, the resource occupancy may be real-time resource occupancy, that is, the actual resource occupancy of each child node server at the moment of task allocation, or an empirical value obtained by statistical analysis of historical data. In the implementation provided by this embodiment, the child node server that executes the model call is selected according to the resource occupancy rates of the child node servers, so that load balancing is better achieved and model management efficiency is improved.
In an embodiment, fig. 4 is a flowchart three of a model management method according to an embodiment of the present invention, and as shown in fig. 4, the sending, by the master node server, the task instruction to the first child node server according to the resource occupancy of the first child node server, the resource occupancy of the second child node server, and the model instruction includes:
step S410, the main node server obtains the resource occupancy rate of a first sub-node server and the resource occupancy rate of a second sub-node server in a first time period;
step S420, in a first time period, the main node server sends a task instruction to the first sub-node server according to the model instruction, wherein in the first time period, the resource occupancy rate of the first sub-node server is smaller than that of the second sub-node server.
In steps S410 to S420, the master node server obtains the resource occupancy rate of each child node server over a period of time, where the period may be one day, one week or one month, and divides it into time periods; for example, one day of resource occupancy data may be divided into 24 hourly time periods, and the resource occupancy rate of each time period is calculated, so as to obtain the resource occupancy rate of each child node server in each time period of a day, which is stored on the master node server as a reference value. When the master node server needs to assign a task, it selects, according to these reference values, the child node server with the lower resource occupancy rate in the current time period and sends the task instruction to that child node server. In the implementation provided by this embodiment, the child node server with the lower resource occupancy rate in the time period in which the model call is executed is selected through statistical analysis of the historical resource occupancy rates of the child node servers, which can further improve model management efficiency.
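A minimal Python sketch of the hourly reference values described above: one day's occupancy samples per child node are bucketed into 24 hourly periods, the average of each bucket is stored as the reference, and the node with the lowest reference for the current hour is picked. The data shapes are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime

def hourly_reference(samples):
    """samples: list of (timestamp, occupancy) pairs for one child node."""
    buckets = defaultdict(list)
    for ts, occupancy in samples:
        buckets[ts.hour].append(occupancy)
    return {hour: sum(v) / len(v) for hour, v in buckets.items()}

def pick_node(references, now=None):
    """Pick the node whose reference occupancy is lowest for the current hour."""
    hour = (now or datetime.now()).hour
    return min(references, key=lambda node: references[node].get(hour, 1.0))

# Example with two nodes and two historical samples each at 10 am.
refs = {
    "node-a": hourly_reference([(datetime(2023, 1, 1, 10), 0.7), (datetime(2023, 1, 2, 10), 0.9)]),
    "node-b": hourly_reference([(datetime(2023, 1, 1, 10), 0.3), (datetime(2023, 1, 2, 10), 0.5)]),
}
print(pick_node(refs, datetime(2023, 1, 3, 10)))  # -> "node-b"
```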
In an embodiment, fig. 5 is a fourth flowchart of a model management method according to an embodiment of the present invention, and as shown in fig. 5, before the master node server receives a model call instruction sent by a client, the method includes:
step S510, the main node server receives a resource display instruction of the client, and acquires the resource occupancy rate of the first sub-node server and the resource occupancy rate of the second sub-node server according to the resource display instruction;
step S520, the main node server returns the resource occupancy rate of the first sub-node server and the resource occupancy rate of the second sub-node server to the client;
step S530, the client sends a model instruction to the main node server, and the model instruction instructs the main node server to send the task instruction to the first sub-node server.
In steps S510 to S530, the client requests the resource occupation status of each child node server from the master node server, the master node server feeds back the resource occupation status of each child node server to the client, and the client selects the child node server for the model call according to that status. This embodiment provides a way to monitor the resource occupancy rate of each child node server and to designate, through the client, the child node server that executes the model operation, so that model scheduling is more reasonable and efficient.
In one embodiment, the master node server receiving a model calling instruction sent by the client, sending a task instruction to the first child node server according to the model calling instruction, and the first child node server calling the model according to the task instruction includes: the master node server receives the model calling instruction and additional information sent by the client, and sends the task instruction and the additional information to the first child node server according to the model calling instruction; and the first child node server calls the model according to the task instruction and the additional information. In this embodiment, the model calling instruction includes not only the identifier of the model and the action to be performed on the model, but also additional information such as a number of runs and an execution time. When the master node server issues the task instruction, the additional information is sent to the child node server as well. The number of runs refers to how many times the model is run: in many application environments, if the model is run only once, the obtained result is highly contingent and cannot reliably reflect the real behaviour of the model, so specifying a number of runs in the additional information makes the result more reliable. In addition, a model running time can be preset; on the one hand, this makes the running of the model better match the requirements of a test or application environment, and on the other hand, the preset execution time allows the model call to be scheduled in a period in which the server's resource occupancy rate is lower. These items of additional information can also be combined; for example, the additional information can be set so that model A is run ten times on a preset child node server in a fixed period every day. This makes better use of server resources and improves model calling efficiency.
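A minimal sketch of a child node honouring such additional information: the named model is run a preset number of times, optionally only inside a preset execution window, and all results are returned together. The field names (run_count, window) are illustrative assumptions, not part of the claimed method.

```python
from datetime import datetime

def run_with_additional_info(call_model, task: dict) -> list:
    """call_model(model_id) performs one model run and returns its result."""
    extra = task.get("additional_info", {})
    runs = extra.get("run_count", 1)      # e.g. ten repeated runs for a stabler result
    window = extra.get("window")          # e.g. a (start, end) pair of datetime.time values
    if window and not (window[0] <= datetime.now().time() <= window[1]):
        return []                         # outside the preset low-occupancy period
    return [call_model(task["model_id"]) for _ in range(runs)]

# Example usage with a stand-in model call.
results = run_with_additional_info(lambda m: {"model": m, "score": 0.5},
                                   {"model_id": "model-A",
                                    "additional_info": {"run_count": 10}})
print(len(results))  # -> 10
```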
In one embodiment, the first child node server performing the task according to the task instruction includes: when the master node server receives a modification instruction sent by the client, it sends a task modification instruction to the first child node server according to the modification instruction, and the first child node server executes the task according to the task modification instruction. In this embodiment, an implementation of modifying a task instruction is further provided: after a preset task instruction has been sent to a child node server, if the client modifies, deletes or suspends the task, the master node server sends a corresponding modification instruction to that child node server. This not only makes the setting and execution of tasks more flexible, but also allows the master node server to monitor the tasks and modifications on each child node server and thus manage and schedule the child node servers better.
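A minimal sketch of such task modification, assuming the master node keeps a record of which child node owns each task and the child node keeps its scheduled tasks in a local table; all names are hypothetical.

```python
task_locations = {"task-1": "node-a"}                              # on the master node
scheduled_tasks = {"task-1": {"run_count": 10, "paused": False}}   # on the child node

def forward_modification(task_id: str, change: dict, send):
    """Master side: route the modification to the node that owns the task."""
    send(task_locations[task_id], {"task_id": task_id, "change": change})

def apply_modification(instruction: dict):
    """Child node side: apply the change, or drop the task entirely if deleted."""
    task = scheduled_tasks.get(instruction["task_id"])
    if task is None:
        return
    if instruction["change"].get("delete"):
        scheduled_tasks.pop(instruction["task_id"])
    else:
        task.update(instruction["change"])     # e.g. pause the task or change its run count

forward_modification("task-1", {"paused": True}, lambda node, msg: apply_modification(msg))
print(scheduled_tasks)  # -> {'task-1': {'run_count': 10, 'paused': True}}
```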
In one embodiment, the master node server returning the operation result to the client includes: the master node server acquires the contact object corresponding to the client and returns the operation result to the contact object. The contact modes include, but are not limited to, mail, webhook and the like, and the contact object may be set by the client or may be a contact list preset on the master node server. During execution of a model task, if a preset situation requiring notification occurs, the master node server is triggered to send a notification; for example, after model A is successfully called, the child node server returns the operation result to the master node server, and the master node server sends the operation result by mail to the e-mail address corresponding to model A; if model A behaves abnormally during running, for example because the run times out, the master node server also collects the log of the abnormal run and sends it to that e-mail address. This embodiment removes the need to log in to the server to check the operation result, further improves the efficiency of operations on the model, and feeds back the operation result more promptly.
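A minimal sketch of a notification dispatcher on the master node: when a run finishes or fails, the result (or the collected error log) is pushed to the contact object by e-mail or by webhook, using only standard-library calls. The addresses, hosts and field names are illustrative assumptions.

```python
import json
import smtplib
from email.message import EmailMessage
from urllib import request

def notify(contact: dict, subject: str, payload: dict) -> None:
    if contact.get("type") == "webhook":
        req = request.Request(contact["url"],
                              data=json.dumps(payload).encode(),
                              headers={"Content-Type": "application/json"})
        request.urlopen(req)                       # POST the result to the webhook
    else:                                          # default to mail
        msg = EmailMessage()
        msg["To"] = contact["email"]
        msg["From"] = "model-platform@example.com" # assumed sender address
        msg["Subject"] = subject
        msg.set_content(json.dumps(payload, indent=2))
        with smtplib.SMTP(contact.get("smtp_host", "localhost")) as smtp:
            smtp.send_message(msg)

# e.g. notify({"type": "webhook", "url": "https://example.com/hook"},
#             "model-A finished", {"status": "ok", "result": 0.93})
```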
It should be understood that although the various steps in the flow charts of fig. 2-5 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict order restriction on the performance of these steps, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-5 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of performance of these sub-steps or stages is not necessarily sequential; they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
In accordance with another aspect of the present invention, fig. 6 is a schematic diagram of a model management system according to an embodiment of the present invention, and as shown in fig. 6, there is provided a model management system including a master node server 62, a first child node server 64, and a client 66, wherein:
the master node server 62 is configured to receive a model instruction sent by the client 66, and send a task instruction to the first child node server 64 according to the model instruction, where the master node server determines that the first child node server is a receiving object of the task instruction according to the model instruction or a state of the first child node server;
the first child node server 64 is configured to execute a task according to the task instruction and return an operation result to the master node server 62;
the master node server 62 is also used to return the operation result to the client 66.
In one embodiment, the system further comprises a second child node server, and the master node server 62 is further configured to send a task instruction to the first child node server according to the resource occupancy rate of the first child node server 64, the resource occupancy rate of the second child node server, and the model instruction, wherein the resource occupancy rate of the first child node server 64 is less than the resource occupancy rate of the second child node server.
For the specific definition of the model management system, reference may be made to the above definition of the model management method, which is not described herein again. The various modules in the model management system described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
According to the model management system, the main node server receives the model instruction sent by the client, generates the task instruction and sends the task instruction to the first sub-node server, the first sub-node server executes the task according to the task instruction and returns the operation result to the main node server, and the main node server returns the operation result to the client, so that the model is managed and scheduled more orderly, the operation on the model is completed through the client, and the efficiency is higher.
In one embodiment, a model management system is established, which is divided into two parts: a model background main node, namely the main node server, and model task sub-nodes, namely the sub-node servers. The number of main nodes is usually one or more so as to achieve backup and redundancy, and the number of sub-nodes is set according to the number of models and the number of calling clients. During use, if the current number of sub-nodes is difficult to support the current number of models and calling tasks, the number of sub-nodes can be increased, so that the capacity is expanded.
The model background main node is mainly used for managing the users and logs of the client, managing the task sub-nodes, and communicating with the task sub-nodes to notify them to schedule model tasks; the model task sub-node is mainly used for registering its server ip and port to the main node, receiving the main node's scheduling notification, scheduling the model task and returning the result to the main node. The scheduler of a task sub-node executes the corresponding command according to the received scheduling task to schedule the corresponding model, and returns the result to the main node. Optionally, the scheme is developed with GO and python and uses a mysql database for storage, and the models can be used for data processing and calculation as well as big data and artificial intelligence information processing. This embodiment comprises the following steps:
step S11, configuring a main node server for configuring storage rules, management sub-nodes and models;
step S12, configuring one or more child node servers, namely a child node cluster for receiving model task scheduling of a main node and providing a model operation basic environment;
step S13, the model is uploaded to the main node server through the configured platform client, and the main node server manages the model in a unified way;
step S14, configuring a timed task for running the model on the corresponding child node, so that the child node system runs the model regularly and in fixed quantities according to the configured rule; if necessary, for testing, the model can also be run once directly through the platform, with a result returned after the run finishes, and the model can be viewed, modified and updated directly on the platform;
step S15, when the timed model task obtains an operation result or the run is abnormal, for example because the model times out, the maintenance personnel of the client are notified in time through functions such as mail notification or webhook notification, so that they receive the relevant results of the model run;
step S16, if the model task needs to be modified, the functions of deleting, modifying, suspending and the like of the model task can be completed through the client;
step S17, the master node server monitors the resource utilization of all child node servers at regular intervals and coordinates task scheduling through a scheduling algorithm; for example, if it is found that the resource occupancy rate of child node A exceeds 80% at 10 am while the resource occupancy rate of child node B is 40% at the same time, the master node will move the tasks that child node A runs in this time period onto child node B (a minimal sketch of this rebalancing check follows these steps).
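The following sketch illustrates the periodic rebalancing check of step S17 under assumed data structures: if a node's occupancy passes a threshold while another node has spare capacity, the tasks scheduled on the busy node for the current period are reassigned. The 80% threshold mirrors the example above; the dictionaries and names are illustrative only.

```python
def rebalance(occupancy: dict, assignments: dict, threshold: float = 0.8) -> dict:
    """occupancy: node -> current utilisation; assignments: node -> list of task ids."""
    for busy in [n for n, u in occupancy.items() if u > threshold]:
        idle = min(occupancy, key=occupancy.get)   # node with the lowest occupancy
        if occupancy[idle] >= threshold:
            break                                  # nowhere to move tasks to
        assignments.setdefault(idle, []).extend(assignments.pop(busy, []))
    return assignments

print(rebalance({"A": 0.85, "B": 0.40}, {"A": ["task-1"], "B": []}))
# -> {'B': ['task-1']} : A's tasks for this period are moved to B
```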
The model management system and method in the above embodiment realize unified management of the models and operation results, make it easy to locate models, simplify the tools to be used, and improve model management efficiency through unified operation from the client; server resources are arranged reasonably, model deployment and the related server resource occupation can be queried clearly through the platform, and if the current server's resources are excessively occupied, the platform can distribute model tasks to servers with lower resource occupation to run.
Optionally, the system may also provide services for a part of users without actual server resources, and the users only need to provide models and related configurations and upload the models and the related configurations through the client, and the system provides related resources to run model tasks of the users, thereby saving server hardware cost and server maintenance cost of the users.
In one embodiment, a computer device is provided, which may be a server; fig. 7 is a block diagram of a model management computer device according to an embodiment of the present invention, and the internal structure of the computer device may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing model data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the model management method.
According to the model management computer equipment, the main node server receives the model instruction sent by the client, generates the task instruction and sends the task instruction to the first sub-node server, the first sub-node server executes the task according to the task instruction and returns the operation result to the main node server, and the main node server returns the operation result to the client, so that the model is managed and scheduled more orderly, the model is operated through the client, and the efficiency is higher.
According to another aspect of the present invention, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described model management method. According to the model management storage medium, the main node server receives the model instruction sent by the client, generates the task instruction and sends the task instruction to the first sub-node server, the first sub-node server executes the task according to the task instruction and returns the operation result to the main node server, and the main node server returns the operation result to the client, so that the model is managed and scheduled more orderly, the operation on the model is completed through the client, and the efficiency is higher.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.