Disclosure of Invention
In order to solve the technical problem of low resource utilization of capacity-expansion nodes, the present application provides a task deployment method and device, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides a task deployment method, including:
acquiring all computing tasks to be deployed;
determining, from all the computing tasks to be deployed, a target computing task that meets the containerized deployment requirement;
determining, in a target container cluster, a target node for deploying the target computing task, wherein idle remaining computing resources exist in the target node;
and deploying the target computing task into a target container instance of the target node.
Optionally, in the foregoing method, determining, among all the computing tasks to be deployed, a target computing task that meets the requirement of containerized deployment includes:
determining the priority of each computing task to be deployed;
determining, according to the priorities, a low-priority computing task and a high-priority computing task other than the low-priority computing task among all the computing tasks to be deployed, wherein the priority of the low-priority computing task is lower than that of the high-priority computing task, and the priorities are used for indicating the order in which the computing tasks to be deployed are deployed through physical computing resources;
and determining the target computing task among all the low-priority computing tasks.
Optionally, in the foregoing method, after the determining, according to the priorities, the low-priority computing task and the high-priority computing task other than the low-priority computing task among all the computing tasks to be deployed, the method further includes:
deploying the high-priority computing task into a target device that provides physical computing resources, wherein the target device includes a cloud host and a cloud physical machine.
Optionally, in the foregoing method, determining, in the target container cluster, the target node for deploying the target computing task includes:
acquiring all candidate nodes in the target container cluster;
determining a remaining resource amount of the remaining computing resources in each of the candidate nodes;
and selecting, from all the candidate nodes according to the remaining resource amount of the remaining computing resources, the target node with the most remaining computing resources.
Optionally, in the foregoing method, before determining the target computing task for containerized deployment in all the computing tasks to be deployed, the method further includes:
obtaining target information of a target type under a current situation, wherein the target type includes at least one of the following: a load amount of all the computing tasks to be deployed, and a current point in time;
and under the condition that the target information meets a preset condition, executing a jump operation for jumping to the step of determining the target computing task for containerized deployment among all the computing tasks to be deployed.
Optionally, as in the previous method, before the deploying the target computing task into the target container instance of the target node, the method further comprises:
creating, in the target node, a target resource of a target computing resource amount according to the target computing resource amount of the target computing task;
integrating a target service into a target image, wherein the target service is used for acquiring the target computing task;
and creating the target container instance in the target resource through the target image.
Optionally, in the foregoing method, the deploying the target computing task to the target container instance of the target node includes:
requesting, through the target service, to acquire the target computing task from a target scheduling component, wherein the target scheduling component is used for creating the target resource corresponding to the target computing task in the target node;
And deploying the target computing task into a target container instance of the target node.
In a second aspect, an embodiment of the present application provides a task deployment device, including:
the acquisition module is used for acquiring all the computing tasks to be deployed;
the first determining module is used for determining a target computing task meeting the containerized deployment requirement from all the computing tasks to be deployed;
the second determining module is used for determining, in a target container cluster, a target node for deploying the target computing task, wherein idle remaining computing resources exist in the target node;
and the deployment module is used for deploying the target computing task into a target container instance of the target node.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory is used for storing a computer program;
The processor is configured to implement a method as claimed in any one of the preceding claims when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, the storage medium comprising a stored program, wherein the program when run performs a method according to any one of the preceding claims.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
According to the method provided by the embodiment of the present application, the target computing task among the computing tasks to be deployed is deployed in a containerized manner, so that the target computing task does not occupy physical computing resources; the idle resources of the target container cluster are used to deploy the target computing task, which amounts to expanding capacity for the target computing task. In this way, the computing resources of the target container cluster can be fully utilized, adaptive capacity expansion according to the computing tasks is achieved, and computing power can be supplemented in time.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
With the development of big data and AI technology, various industries are becoming more and more aware of the value of big data technology for their own products and services. Big data applications and big data platforms have become core technologies of IT and internet companies, and many cloud companies have introduced big data platforms built on the IaaS layer of cloud services to provide collection, storage, processing, and presentation of massive data, thereby enabling enterprises to concentrate on their business, improving working efficiency, and reducing the cost and cycle of enterprise construction.
According to one aspect of the embodiments of the present application, a task deployment method is provided. Optionally, in this embodiment, the task deployment method described above may be applied to a hardware environment formed by a terminal and a server. The server is connected to the terminal through a network and may be used to provide services (such as cloud computing services) for the terminal or a client installed on the terminal; a database may be set on the server, or independently of the server, to provide data storage services for the server.
The network may include, but is not limited to, at least one of: a wired network and a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, and a local area network; the wireless network may include, but is not limited to, at least one of: Wi-Fi (Wireless Fidelity) and Bluetooth. The terminal may be, but is not limited to, a PC, a mobile phone, a tablet computer, or the like.
The task deployment method of the embodiment of the present application may be executed by a server, by a terminal, or by both. When executed by the terminal, the task deployment method of the embodiment of the present application may also be executed by a client installed on the terminal.
Taking the case where the task deployment method in this embodiment is executed by the server as an example, fig. 1 shows a task deployment method provided in an embodiment of the present application, which includes the following steps:
Step S101, obtaining all computing tasks to be deployed.
The task deployment method in this embodiment can be applied to scenarios in which a computing task needs to be deployed, for example, deploying an offline computing task to a cloud server, deploying a computing task to a big data cluster, and the like; it may also be a scenario in which other computing tasks are deployed to other servers. In the embodiment of the present application, the task deployment method is described by taking the deployment of offline computing tasks to a cloud server as an example, and the method is likewise applicable to other deployment scenarios where no contradiction arises.
Taking the scenario of deploying offline computing tasks to a cloud server as an example, the target computing tasks among the offline computing tasks to be deployed are deployed in a containerized manner, so that all the computing tasks to be deployed can be processed without adding cloud hosts.
The computing task to be deployed may be a computing task uploaded by a demander to the cloud server, for example, an offline computing task that does not require real-time computation, such as a data-reporting type of service.
Step S102, determining a target computing task meeting the containerized deployment requirement from all computing tasks to be deployed.
After all the computing tasks to be deployed are obtained, the deployment mode of each of them can be determined.
The deployment mode of each computing task to be deployed is one of two options: deployment using physical resources, or deployment using container resources. Therefore, the target computing tasks meeting the containerized deployment requirement can be determined from all the computing tasks to be deployed.
The target computing tasks meeting the containerized deployment requirement are the computing tasks to be deployed that are screened out for deployment through container resources.
For example, offline tasks (i.e., computing tasks to be deployed) are generally executed at night, when the container cluster service is idle. Therefore, the target computing tasks among the offline tasks that can be deployed through containerization are determined, so that they can be processed by the container cluster service, and the number of computing devices in a cluster used only for processing offline tasks can be reduced.
Step S103, determining, in the target container cluster, a target node for deploying the target computing task, wherein idle remaining computing resources exist in the target node.
After the target computing task is determined, a target node to which the target computing task is specifically required to be deployed can be determined in a target container cluster for deploying the target computing task.
The target container cluster may include a plurality of nodes that can be deployed in a containerized manner, and a plurality of container instances may be created in the target node so that the target computing task is started through one of the container instances.
Step S104, deploying the target computing task into a target container instance of the target node.
After determining the target node to which the target computing task needs to be deployed, the target computing task may be deployed into a target container instance of the target node.
The target container instance may be a Pod created in the target node, and the Pod may be created through Docker.
Deploying the target computing task into the target container instance of the target node means deploying the target computing task into the container resources of the target container instance, thereby achieving containerized deployment of the target computing task.
By the method in this embodiment, the target computing task among the computing tasks to be deployed is deployed in a containerized manner, so that the target computing task does not occupy physical computing resources; the idle resources of the target container cluster are used to deploy the target computing task, which amounts to expanding capacity for the target computing task. In this way, the computing resources of the target container cluster can be fully utilized, adaptive capacity expansion according to the computing tasks is achieved, and computing power can be supplemented in time.
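For illustration only, the following is a minimal Python sketch of the overall flow of steps S101 to S104. Every helper function (fetch_pending_tasks, meets_containerization_requirement, select_node_with_most_remaining_resources, create_container_instance) is a hypothetical placeholder for the concrete mechanisms described in the embodiments below, not part of the method itself.

def deploy_pending_tasks(container_cluster):
    # Step S101: acquire all computing tasks to be deployed.
    pending_tasks = fetch_pending_tasks()
    # Step S102: keep only the tasks that meet the containerized deployment requirement.
    target_tasks = [t for t in pending_tasks if meets_containerization_requirement(t)]
    for task in target_tasks:
        # Step S103: pick the node with the most idle remaining computing resources.
        node = select_node_with_most_remaining_resources(container_cluster)
        # Step S104: deploy the task into a container instance on that node.
        instance = create_container_instance(node, task)
        instance.run(task)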
As shown in fig. 2, as an alternative embodiment, the step S102 of determining, among all the computing tasks to be deployed, the target computing task that meets the requirement of containerized deployment includes the following steps:
step S201, determining the priority of each computing task to be deployed.
In order to realize the flexible use of the computing resources, the computing tasks to be deployed are preferably deployed by using the physical resources under the condition that the physical resources are enough, and when the physical resources are insufficient to deploy all the computing tasks to be deployed, whether each computing task to be deployed is deployed by using the physical resources or by using the container resources needs to be judged.
Whether to deploy using physical resources or container resources for each computing task to be deployed may be determined by determining a priority of each computing task to be deployed.
For each computing task to be deployed, its priority may be determined according to factors such as the time at which the computing task to be deployed was acquired or the level of the service corresponding to the computing task to be deployed.
Step S202, determining a low-priority computing task and a high-priority computing task except the low-priority computing task in all computing tasks to be deployed according to the priorities, wherein the priorities of the low-priority computing tasks are lower than those of the high-priority computing tasks, and the priorities are used for indicating the sequence of deploying the computing tasks to be deployed through physical computing resources.
After determining the priority of each of the computing tasks to be deployed, the order relationship between the computing tasks to be deployed may be determined by priority (e.g., ordered from high to low priority or from low to high priority).
After the order relation is obtained, the low-priority computing task and the high-priority computing task can be determined from all the computing tasks to be deployed. And, the low priority computing tasks have a lower priority than the high priority computing tasks.
For example, the number of computing tasks to be deployed and the amount of physical computing resources (for example, cloud host resources) may be determined, so as to determine a first number of computing tasks to be deployed that can be deployed through the physical computing resources. The first number of computing tasks to be deployed, taken in order of priority, are regarded as high-priority computing tasks, and the remaining computing tasks to be deployed other than the high-priority computing tasks are regarded as low-priority computing tasks, where a low-priority computing task may be deployed only through container resources, or deployed through a mixture of physical computing resources and container resources.
Step S203, determining a target computing task in all low-priority computing tasks.
Since a low-priority computing task may be deployed only through container resources, or through a mixture of physical computing resources and container resources, after the low-priority computing tasks are determined, the target computing task for containerized deployment is further determined from the low-priority computing tasks.
For example, for a cloud computing platform, a resource management module may be provided on the cloud computing platform. The resource management module may split the Yarn queue (which contains a plurality of computing tasks to be deployed) by service into a plurality of high-priority queues and low-priority queues: the high-priority computing tasks in the high-priority queues use only physical resources and do not use container resources, while the low-priority computing tasks in the low-priority queues may be deployed using a mixture of cloud host resources and container resources, so that container resources can be used to supplement the computing power of non-critical tasks.
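As a purely illustrative sketch of the split described above (assuming each computing task to be deployed carries a numeric priority attribute, which is an assumption of this example rather than a requirement of the method):

def split_by_priority(pending_tasks, first_number):
    # Sort so that higher-priority tasks come first; the priority indicates the
    # order in which tasks are deployed through physical computing resources.
    ordered = sorted(pending_tasks, key=lambda task: task.priority, reverse=True)
    # The first `first_number` tasks (those the physical computing resources can
    # host) are high-priority; the rest are low-priority and may use container
    # resources, or a mixture of physical and container resources.
    high_priority = ordered[:first_number]
    low_priority = ordered[first_number:]
    return high_priority, low_priority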
As an alternative embodiment, as in the foregoing method, after determining, in step S202, the low-priority computing task and the high-priority computing task other than the low-priority computing task among all the computing tasks to be deployed according to the priorities, the method further includes the following steps:
Step S301, deploying the high-priority computing task to a target device that provides physical computing resources, where the target device includes a cloud host and a cloud physical machine.
After the high-priority computing task is determined, it may be deployed in a manner different from that of the low-priority computing task, i.e., using target devices that provide physical computing resources (e.g., cloud hosts and cloud physical machines).
Furthermore, by adopting the method in the embodiment, under the condition that the target computing task is deployed through the container resource, physical computing resources allocated to part of computing tasks to be deployed still exist, so that the processing efficiency of the high-priority computing task can be effectively improved by adopting the physical computing resources to process the high-priority computing task.
As shown in fig. 3, as an alternative embodiment, the determining, in the target container cluster, the target node for deploying the target computing task in step S103 includes the following steps:
step S401, obtaining all candidate nodes in the target container cluster.
The target container cluster may be a preset cluster formed by a plurality of nodes for performing containerized deployment and providing container resources. Thus, all candidate nodes in the target container cluster may be determined.
The candidate nodes may be nodes in the target container cluster that may be deployed in a containerized manner.
Step S402, determining the remaining resource amount of the remaining computing resources in each candidate node.
After all candidate nodes are determined, the remaining computing resources in each candidate node may be determined.
The remaining computing resources may be computing resources of the candidate node that are not used to create the container.
After the remaining computing resources are obtained, the remaining resource amount corresponding to each candidate node can be obtained through statistics, and the remaining resource amount is used for indicating the resource amount of the remaining computing resources in the candidate node.
Step S403, selecting, from all the candidate nodes according to the remaining resource amount of the remaining computing resources, the target node with the most remaining computing resources.
After the remaining resource amount of each candidate node is determined, the candidate node with the largest remaining resource amount, i.e., the candidate node with the most remaining computing resources, is taken as the target node.
For example, the target node may be selected through yarn-resourcemanager, the core scheduler component of Yarn. After determining that a target computing task needs to be deployed using container resources, yarn-resourcemanager obtains the specification and amount of idle computing power that the current target container cluster can provide, and invokes the Kubernetes API to create container resources corresponding to the resources (e.g., CPU resources, memory resources, etc.) required by the target computing task; a scheduler extension then ensures that the Pod is created on the node with more remaining resources, and the Pod is responsible for starting the Yarn service.
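A minimal sketch of steps S401 to S403, assuming a hypothetical remaining_resources() helper (for example backed by the Kubernetes API or by node monitoring data); how the remaining amount is measured is not fixed by the method:

def select_target_node(candidate_nodes):
    # remaining_resources() is a hypothetical helper that reports the idle
    # remaining computing resources of a node, e.g. allocatable capacity minus
    # the resources already requested by containers running on it.
    def remaining(node):
        resources = remaining_resources(node)
        return (resources["cpu"], resources["memory"])  # compare CPU first, then memory
    # Step S403: the candidate node with the most remaining computing resources.
    return max(candidate_nodes, key=remaining)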
As an alternative embodiment, as in the foregoing method, before determining the target computing task for containerized deployment in all the computing tasks to be deployed in step S102, the method further includes the following steps:
Step S501, obtaining target information of a target type under a current situation, where the target type includes at least one of the following: the load amount of all the computing tasks to be deployed, and the current point in time.
In the case that the computing task to be deployed is an offline task, a trigger condition is required to determine whether or not the containerized deployment of at least one computing task to be deployed is required.
The triggering condition can be triggered according to the load quantity of all the computing tasks to be deployed or according to time, namely, the target information of the target type is obtained under the current condition.
The current situation may be the current time, or all the computing tasks currently to be deployed.
The target type may be the load amount of all the computing tasks to be deployed, or the current point in time.
For example, when the target type is the load amount of all the computing tasks to be deployed, whether the computing tasks to be deployed need containerized deployment is judged according to that load amount; when the target type is the current point in time, whether the computing tasks to be deployed need containerized deployment is judged according to the current time.
Step S502, under the condition that the target information meets the preset condition, executing a jump operation for jumping to the step of determining the target computing task for containerized deployment among all the computing tasks to be deployed.
After the target information is determined, whether the target computing task for containerized deployment needs to be determined or not can be judged according to the target information.
Before judging whether the target computing task for containerized deployment needs to be determined, whether the target information meets the preset condition is judged.
The preset condition may be information consistent with the target type of the target information. For example, when the target type is the load amount of all the computing tasks to be deployed, the preset condition may be a preset value used to indicate a load threshold, such as a preset value for the required amount of memory, the number of CPUs, or the number of GPUs; when the target type is the current point in time, the preset condition may be a preset value used to indicate a point in time, for example 19:00.
For example, as shown in fig. 4, a task deployment architecture applying the method in the foregoing embodiments is provided. The task deployment architecture includes an elastic scaling module (yarn-autoscaler), which contains a cluster management unit, a resource management unit, a task management unit, a cluster load monitoring unit (used to determine whether the target information meets the preset condition when the target type is the load amount of all the computing tasks to be deployed), and a Pod lifecycle management unit (used to determine whether the target information meets the preset condition when the target type is the point in time). The elastic scaling module provides two scaling modes: elastic scaling by load and elastic scaling by time. For scaling by load, the user may set thresholds (i.e., preset conditions) for different metrics to trigger scaling, such as availableVCore, pendingVCore, availableMem, and pendingMem of the Yarn queue. Time-based scaling rules may also be used to specify triggers by day, week, month, and the like. The big data cluster is a cluster using physical computing resources, and the container cluster is a cluster using container resources.
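An illustrative sketch of the two trigger modes of the elastic scaling module; the metric names follow the Yarn queue metrics mentioned above, while the threshold values and the 19:00 trigger hour are arbitrary example values, not part of the method:

from datetime import datetime

def should_scale_out(queue_metrics, scale_hour=19,
                     pending_vcore_threshold=100, pending_mem_mb_threshold=64 * 1024,
                     now=None):
    now = now or datetime.now()
    # Trigger by load: pending demand in the Yarn queue exceeds the user-set thresholds.
    by_load = (queue_metrics.get("pendingVCore", 0) > pending_vcore_threshold
               or queue_metrics.get("pendingMem", 0) > pending_mem_mb_threshold)
    # Trigger by time: e.g. offline tasks are usually executed at night.
    by_time = now.hour >= scale_hour
    return by_load or by_time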
As an alternative embodiment, as in the foregoing method, before the step S104 of deploying the target computing task into the target container instance of the target node, the method further includes the following steps:
Step S601, creating a target resource of the target computing resource amount in the target node according to the target computing resource amount of the target computing task.
After the target computing task is determined, the target computing resource amount required by the target computing task may be determined according to information such as the business type and the data amount of the target computing task. Then, a target resource of the target computing resource amount is created in the target node based on the target computing resource amount.
Further, in the case where there are N target computing tasks, N target resources may be created on the target node, each target resource corresponding to one of the target computing tasks.
In step S602, a target service is integrated into the target image, where the target service is used to obtain a target computing task.
To create the container instance, a corresponding target image is needed; the target service is integrated into the target image, so that the target computing task can conveniently be deployed into the target container instance later.
Step S603, creating the target container instance in the target resource through the target image.
After the target image is obtained, the target image can be run, thereby creating the target container instance in the target resource. The target computing task can then conveniently be processed using the target resource.
With the method in this embodiment, by integrating the target service into the target image, the target computing task can later be automatically deployed into the target container instance, which improves deployment efficiency.
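A sketch of steps S601 to S603, assuming the official Kubernetes Python client is used to create the target container instance; the image name, the node pinning via node_name, the namespace, and the resource figures are illustrative assumptions rather than requirements of the method:

from kubernetes import client, config

def create_target_pod(task_name, target_node, cpu="4", memory="8Gi",
                      image="registry.example.com/yarn-nodemanager:latest"):
    # Load the cluster credentials (in-cluster configuration could be used instead).
    config.load_kube_config()
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=f"yarn-task-{task_name}"),
        spec=client.V1PodSpec(
            node_name=target_node,          # pin the Pod to the selected target node
            restart_policy="Never",
            containers=[client.V1Container(
                name="task-runner",
                image=image,                # target image with the target service built in
                resources=client.V1ResourceRequirements(
                    requests={"cpu": cpu, "memory": memory},
                    limits={"cpu": cpu, "memory": memory},
                ),
            )],
        ),
    )
    # Create the target container instance (Pod) holding the target resource.
    return client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)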
As an alternative embodiment, the deploying, in step S104, the target computing task to the target container instance of the target node includes the following steps:
Step S701, requesting, through the target service, to acquire the target computing task from a target scheduling component, where the target scheduling component is configured to create, in the target node, the target resource corresponding to the target computing task.
As can be seen from the foregoing embodiments, the target service is integrated into the target image, so the target service also runs when the target image is run. The target service can therefore request the target computing task from the target scheduling component, and the target scheduling component is configured to create, in the target node, the target resource corresponding to the target computing task.
Step S702, deploying the target computing task into a target container instance of the target node.
After the target computing task is obtained, the target computing task can be deployed into the target container instance in a containerized deployment mode, so that the purpose of containerized deployment of the target computing task is achieved, and the effect of elastic capacity expansion according to the task can be achieved.
For example, as shown in fig. 5, the yarn-nodemanager service is integrated into the Pod and built into an image through Docker, so that yarn-nodemanager is started when the Pod is created; yarn-nodemanager then automatically registers with yarn-resourcemanager and acquires the tasks to be executed, so that rapid elastic capacity expansion can be achieved.
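The behaviour of the target service inside the Pod can be pictured with the following sketch; register_with_scheduler() and request_task() are hypothetical stand-ins for the registration and task-acquisition calls (in the example above, yarn-nodemanager registering with yarn-resourcemanager), since the concrete interface of the scheduling component is not specified here:

import time

def run_target_service(scheduler_address, poll_interval_seconds=5):
    # Hypothetical registration with the scheduling component.
    register_with_scheduler(scheduler_address)
    while True:
        # Hypothetical request for a computing task to execute (step S701).
        task = request_task(scheduler_address)
        if task is None:
            time.sleep(poll_interval_seconds)   # nothing pending; poll again later
            continue
        # Step S702: run the acquired task inside this container instance.
        task.execute()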
As shown in fig. 6, according to an embodiment of another aspect of the present application, there is also provided a task deployment device, including:
The acquisition module 1 is used for acquiring all computing tasks to be deployed;
The first determining module 2 is used for determining a target computing task meeting the containerized deployment requirement in all computing tasks to be deployed;
A second determining module 3, configured to determine, in the target container cluster, a target node for deploying a target computing task, where there are idle remaining computing resources in the target node;
a deployment module 4, configured to deploy the target computing task into a target container instance of the target node.
In particular, the specific process of implementing the functions of each module in the apparatus of the embodiment of the present invention may be referred to the related description in the method embodiment, which is not repeated herein.
According to another embodiment of the present application, there is also provided an electronic device. As shown in fig. 7, the electronic device may include: a processor 1501, a communication interface 1502, a memory 1503, and a communication bus 1504, where the processor 1501, the communication interface 1502, and the memory 1503 communicate with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
The processor 1501 is configured to execute the program stored in the memory 1503, thereby implementing the steps of the method embodiment described above.
The bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include a random access memory (Random Access Memory, RAM), or may include a non-volatile memory (Non-Volatile Memory, NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), and the like; it may also be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The embodiment of the application also provides a computer readable storage medium, wherein the storage medium comprises a stored program, and the program executes the method steps of the method embodiment.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.