CN113835865B - Task deployment method and device, electronic equipment and storage medium

Info

Publication number
CN113835865B
Authority
CN
China
Prior art keywords
target
computing
task
computing task
deployed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111165100.6A
Other languages
Chinese (zh)
Other versions
CN113835865A (en)
Inventor
赵宇
侯雪峰
王东
王亚洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202111165100.6A
Publication of CN113835865A
Application granted
Publication of CN113835865B
Active
Anticipated expiration

Abstract

The application relates to a task deployment method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring all computing tasks to be deployed; determining, among all the computing tasks to be deployed, a target computing task meeting the containerized deployment requirement; determining, in a target container cluster, a target node for deploying the target computing task, wherein idle remaining computing resources exist in the target node; and deploying the target computing task into a target container instance of the target node. By containerizing the target computing task among the computing tasks to be deployed, the target computing task does not occupy physical computing resources; the idle resources of the target container cluster are used to deploy it, so the computing resources of the target container cluster are fully utilized, capacity is expanded adaptively according to the computing tasks, and computing power is supplemented in time.

Description

Task deployment method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of cloud computing technologies, and in particular, to a task deployment method and apparatus, an electronic device, and a storage medium.
Background
A big data cluster is mainly composed of two parts, data and computation: the data stored in the cluster is processed using computing power provided by the big data components deployed within the cluster, i.e. the traditional Hadoop ecosystem. Computing power is provided through resource scheduling of the big data components (the main scheduling component being Yarn), and Yarn is used to manage and schedule computing resources, whose usage generally follows a pronounced cycle: real-time computing clusters consume resources mainly in the daytime, while data reporting services are arranged in offline computing clusters. The primary problem with deploying these separately from online services is low resource utilization and high cost.
In the related art, the main way to solve this problem is to expand the capacity of the cluster by adding cloud hosts and to deploy components on the added cloud hosts so as to provide computing services for offline services.
However, providing computing services for offline services through such capacity expansion brings the following problem: the cloud hosts added for offline services are used only for part of the offline workload and are idle most of the time, so the resource utilization of the corresponding expansion nodes is low and the cost of use is high.
Aiming at the technical problem of low resource utilization rate of the capacity expansion node in the related technology, no effective solution is provided at present.
Disclosure of Invention
In order to solve the technical problem of low resource utilization rate of the capacity expansion node, the application provides a task deployment method and device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present application provides a task deployment method, including:
acquiring all computing tasks to be deployed;
Determining a target computing task meeting the containerized deployment requirement from all computing tasks to be deployed;
determining a target node for deploying the target computing task in a target container cluster, wherein idle residual computing resources exist in the target node;
And deploying the target computing task into a target container instance of the target node.
Optionally, in the foregoing method, determining, among all the computing tasks to be deployed, a target computing task that meets the requirement of containerized deployment includes:
Determining the priority of each computing task to be deployed;
Determining a low-priority computing task and a high-priority computing task except the low-priority computing task in all computing tasks to be deployed according to the priorities, wherein the priorities of the low-priority computing tasks are lower than those of the high-priority computing tasks, and the priorities are used for indicating the sequence of deploying the computing tasks to be deployed through physical computing resources;
and determining the target computing task in all the low-priority computing tasks.
Optionally, as in the foregoing method, after the determining, according to the priority, the target computing task and a high-priority computing task other than the target computing task in the all to-be-deployed computing tasks, the method further includes:
Deploying the high-priority computing task into a target device that provides physical computing resources, wherein the target device comprises: cloud host and cloud physical machine.
Optionally, the method, wherein determining, in the target container cluster, a target node for deploying the target computing task includes:
acquiring all candidate nodes in the target container cluster;
Determining a remaining amount of the remaining computing resources in each of the candidate nodes;
And selecting the target node with the largest residual computing resource from all the candidate nodes according to the residual resource quantity of the residual computing resource.
Optionally, in the foregoing method, before determining the target computing task for containerized deployment in all the computing tasks to be deployed, the method further includes:
Obtaining target information of a target type under the current condition, wherein the target type comprises at least one of the following: the load capacity of all the computing tasks to be deployed, a current point in time;
And under the condition that the target information meets the preset condition, executing a jump operation for jumping to the step of determining the target computing task for containerized deployment in all the computing tasks to be deployed.
Optionally, as in the previous method, before the deploying the target computing task into the target container instance of the target node, the method further comprises:
Creating a target resource of the target computing resource amount in the target node according to the target computing resource amount of the target computing task;
Integrating a target service into a target image, wherein the target service is used for acquiring the target computing task;
And creating the target container instance in the target resource through the target image.
Optionally, in the foregoing method, the deploying the target computing task to the target container instance of the target node includes:
Requesting to acquire the target computing task from a target scheduling component through the target service, wherein the target scheduling component is used for creating a target resource corresponding to the target computing task in the target node;
And deploying the target computing task into a target container instance of the target node.
In a second aspect, an embodiment of the present application provides a task deployment device, including:
the acquisition module is used for acquiring all the computing tasks to be deployed;
The first determining module is used for determining a target computing task meeting the containerized deployment requirement from all computing tasks to be deployed;
the second determining module is used for determining a target node for deploying the target computing task in a target container cluster, wherein idle residual computing resources exist in the target node;
and the deployment module is used for deploying the target computing task into a target container instance of the target node.
In a third aspect, an embodiment of the present application provides an electronic device, including: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
The processor is configured to implement a method as claimed in any one of the preceding claims when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, the storage medium comprising a stored program, wherein the program when run performs a method according to any one of the preceding claims.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
According to the method provided by the embodiments of the application, the target computing task among the computing tasks to be deployed is containerized, so that it does not occupy physical computing resources; the idle resources of the target container cluster are used to deploy it, so the computing resources of the target container cluster are fully utilized, capacity is expanded adaptively according to the computing tasks, and computing power is supplemented in time.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a task deployment method according to an embodiment of the present application;
FIG. 2 is a flowchart of a task deployment method according to another embodiment of the present application;
FIG. 3 is a flowchart of a task deployment method according to another embodiment of the present application;
Fig. 4 is a schematic diagram of a system architecture capable of implementing a task deployment method according to an embodiment of the present application;
FIG. 5 is a schematic representation of deployment results in one embodiment of the application;
FIG. 6 is a block diagram of a task deployment device provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
With the development of big data and AI technology, industries are becoming increasingly aware of the value of big data technology for their own products and services. Big data applications and big data platforms have become core technologies of IT and Internet companies, and many cloud vendors have introduced big data platforms built on the IaaS layer of their cloud services to provide collection, storage, processing and presentation of massive data, enabling enterprises to concentrate on their business, improve efficiency, and reduce the cost and time of building such systems themselves.
According to one aspect of the embodiment of the application, a task deployment method is provided. Alternatively, in the present embodiment, the task deployment method described above may be applied to a hardware environment constituted by a terminal and a server. The server is connected with the terminal through a network, can be used for providing services (such as cloud computing services and the like) for the terminal or a client installed on the terminal, and can be used for providing data storage services for the server by setting a database on the server or independent of the server.
The network may include, but is not limited to, at least one of: a wired network, a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, a local area network; the wireless network may include, but is not limited to, at least one of: WiFi (Wireless Fidelity), Bluetooth. The terminal may be, but is not limited to, a PC, a mobile phone, a tablet computer, or the like.
The task deployment method of the embodiments of the application may be executed by a server, by a terminal, or by both; when executed by a terminal, it may also be executed by a client installed on the terminal.
Taking the case where the task deployment method of this embodiment is executed by a server as an example, fig. 1 shows a task deployment method provided by an embodiment of the present application, which includes the following steps:
Step S101, obtaining all computing tasks to be deployed.
The task deployment method in this embodiment can be applied to scenarios where a computing task needs to be deployed, for example: deploying an offline computing task to a cloud server, deploying a computing task to a big data cluster, and so on; it may also be applied to deploying other computing tasks to other servers. In the embodiments of the application, the method is described by taking the deployment of offline computing tasks to a cloud server as an example; where no contradiction arises, it is likewise applicable to the other scenarios mentioned above.
Taking the scenario of deploying offline computing tasks to a cloud server as an example, the target computing task among the offline computing tasks to be deployed is containerized, so that all the computing tasks to be deployed can be processed without adding cloud hosts.
The computing task to be deployed may be a computing task uploaded by the demanding party to the cloud server, for example an offline computing task that does not require real-time computation, such as a data reporting service. A sketch of how such pending tasks might be represented is given below.
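For illustration only, the following sketch shows one way the computing tasks to be deployed might be represented and collected in step S101; it is not taken from the disclosure, and the class name PendingTask, its fields, and the fetch_pending_tasks helper are assumptions made for the example.

```python
# Minimal sketch (assumptions, not from the disclosure) of how pending computing
# tasks could be modeled when they are collected in step S101.
from dataclasses import dataclass
from typing import List

@dataclass
class PendingTask:
    task_id: str
    priority: int          # higher value = higher priority
    cpu_vcores: int        # requested virtual CPU cores
    memory_mb: int         # requested memory in MB
    realtime: bool = False # offline tasks (False) are candidates for containerization

def fetch_pending_tasks() -> List[PendingTask]:
    """Placeholder for step S101: collect all computing tasks awaiting deployment,
    e.g. offline tasks submitted by the demanding party to the cloud server."""
    return [
        PendingTask("daily-report", priority=1, cpu_vcores=2, memory_mb=4096),
        PendingTask("stream-join", priority=9, cpu_vcores=8, memory_mb=16384, realtime=True),
    ]
```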
Step S102, determining a target computing task meeting the containerized deployment requirement from all computing tasks to be deployed.
After all the computing tasks to be deployed are determined, the deployment mode of each computing task to be deployed in the computing tasks to be deployed can be determined.
The deployment mode of each computing task to be deployed is one of two options: deployment using physical resources, or deployment using container resources. Therefore, the target computing task meeting the containerized deployment requirement can be determined from all the computing tasks to be deployed.
The target computing tasks meeting the containerized deployment requirement are the computing tasks screened out for deployment through container resources.
For example, offline tasks (i.e., computing tasks to be deployed) are generally executed at night, when the container cluster's services are idle; the target computing tasks that can be deployed through containerization are therefore determined among the offline tasks, so that they can be processed by the container cluster and the number of computing devices in a cluster dedicated to offline tasks can be reduced.
Step S103, determining a target node for deploying the target computing task in the target container cluster, wherein idle residual computing resources exist in the target node.
After the target computing task is determined, a target node to which the target computing task is specifically required to be deployed can be determined in a target container cluster for deploying the target computing task.
The target container cluster may include a plurality of nodes on which containerized deployment is possible, and a plurality of container instances may be created on the target node, so that a target computing task can be started through one of the container instances.
Step S104, deploying the target computing task into a target container instance of the target node.
After determining the target node to which the target computing task needs to be deployed, the target computing task may be deployed into a target container instance of the target node.
The target container instance may be a Pod created in the target node, and the Pod may be created through Docker.
The target computing task is deployed into the target container instance of the target node by deploying the target computing task into the container resource in the target container instance, so that the purpose of containerized deployment of the target computing task is achieved.
With the method of this embodiment, the target computing task among the computing tasks to be deployed is containerized, so that it does not occupy physical computing resources; the idle resources of the target container cluster are used to deploy it, so the computing resources of the target container cluster are fully utilized, capacity is expanded adaptively according to the computing tasks, and computing power is supplemented in time.
As shown in fig. 2, as an alternative embodiment, the step S102 of determining, among all the computing tasks to be deployed, the target computing task that meets the requirement of containerized deployment includes the following steps:
step S201, determining the priority of each computing task to be deployed.
In order to realize the flexible use of the computing resources, the computing tasks to be deployed are preferably deployed by using the physical resources under the condition that the physical resources are enough, and when the physical resources are insufficient to deploy all the computing tasks to be deployed, whether each computing task to be deployed is deployed by using the physical resources or by using the container resources needs to be judged.
Whether to deploy using physical resources or container resources for each computing task to be deployed may be determined by determining a priority of each computing task to be deployed.
For each computing task to be deployed, determining the priority of each computing task to be deployed according to the time for acquiring the computing task to be deployed or the level of the service corresponding to the computing task to be deployed and other factors.
Step S202, determining a low-priority computing task and a high-priority computing task except the low-priority computing task in all computing tasks to be deployed according to the priorities, wherein the priorities of the low-priority computing tasks are lower than those of the high-priority computing tasks, and the priorities are used for indicating the sequence of deploying the computing tasks to be deployed through physical computing resources.
After determining the priority of each of the computing tasks to be deployed, the order relationship between the computing tasks to be deployed may be determined by priority (e.g., ordered from high to low priority or from low to high priority).
After the order relation is obtained, the low-priority computing task and the high-priority computing task can be determined from all the computing tasks to be deployed. And, the low priority computing tasks have a lower priority than the high priority computing tasks.
For example, the number of computing tasks to be deployed and the amount of physical computing resources (for example, cloud host resources) may be determined, so as to obtain a first number of computing tasks to be deployed that can be handled by the physical computing resources; the first number of tasks with the highest priority are treated as high-priority computing tasks, and the remaining computing tasks to be deployed are treated as low-priority computing tasks. The low-priority computing tasks may be deployed only through container resources, or through a mixture of physical computing resources and container resources.
Step S203, determining a target computing task in all low-priority computing tasks.
After determining the low-priority computing task, the low-priority computing task can be deployed only through the container resource, or can be deployed through the mixture of the physical computing resource and the container resource, so that the target computing task for performing containerization deployment is also determined from the low-priority computing task.
For example, for a cloud computing platform, a resource management module can be provided on the platform. The resource management module can split the Yarn queue (which contains a plurality of computing tasks to be deployed) by service into several queues: the high-priority computing tasks in the high-priority queues use only physical resources and never container resources, while the low-priority computing tasks in the low-priority queues can be deployed using a mixture of cloud host resources and container resources; container resources can thus be used to supplement the computing capacity of non-critical tasks. A simplified sketch of such a priority split is given below.
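As a rough illustration of the priority split described in steps S201-S203 (and not a definitive implementation of the resource management module), the sketch below reuses the PendingTask class from the earlier example; the cutoff rule based on available physical slots is an assumption.

```python
# Hedged sketch of steps S201-S203: split the task queue by priority and keep
# only offline, low-priority tasks as containerization targets. The cutoff
# logic (physical_slots) is an assumption for illustration.
from typing import List, Tuple

def split_by_priority(tasks: List[PendingTask],
                      physical_slots: int) -> Tuple[List[PendingTask], List[PendingTask]]:
    ordered = sorted(tasks, key=lambda t: t.priority, reverse=True)
    high_priority = ordered[:physical_slots]   # deployed on cloud hosts / physical machines
    low_priority = ordered[physical_slots:]    # may mix physical and container resources
    # Among the low-priority tasks, offline tasks are the containerization targets.
    target_tasks = [t for t in low_priority if not t.realtime]
    return high_priority, target_tasks
```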
As an alternative embodiment, as in the foregoing method, after determining, in step S202, the target computing task and the high-priority computing task other than the target computing task among all the computing tasks to be deployed according to the priorities, the method further includes the following steps:
step S301, deploying a high-priority computing task to a target device that provides a physical computing resource, where the target device includes: cloud host and cloud physical machine.
After determining the high-priority computing tasks, they may be deployed in a different manner from the low-priority computing tasks, namely on target devices that provide physical computing resources (e.g., cloud hosts, cloud physical machines).
Furthermore, with the method of this embodiment, even when the target computing task is deployed through container resources, physical computing resources remain allocated to part of the computing tasks to be deployed, so processing the high-priority computing tasks with physical computing resources effectively improves their processing efficiency.
As shown in fig. 3, as an alternative embodiment, the determining, in the target container cluster, the target node for deploying the target computing task in step S103 includes the following steps:
step S401, obtaining all candidate nodes in the target container cluster.
The target container cluster may be a preset cluster formed by a plurality of nodes for performing containerized deployment and providing container resources. Thus, all candidate nodes in the target container cluster may be determined.
The candidate nodes may be nodes in the target container cluster that may be deployed in a containerized manner.
Step S402, determining the residual resource quantity of the residual computing resources in each candidate node.
After all candidate nodes are determined, the remaining computing resources in each candidate node may be determined.
The remaining computing resources may be computing resources of the candidate node that are not used to create the container.
After the remaining computing resources are obtained, the remaining resource amount corresponding to each candidate node can be obtained through statistics, and the remaining resource amount is used for indicating the resource amount of the remaining computing resources in the candidate node.
Step S403, selecting the target node with the most residual computing resource from all candidate nodes according to the residual resource quantity of the residual computing resource.
After determining the residual resource amount of each candidate node, the candidate node with the largest residual resource amount can be determined, and then the candidate node with the largest residual resource amount, namely, the candidate node with the largest residual computing resource is taken as the target node.
For example, the target node may be selected with the help of yarn-resourcemanager, the core scheduling component of Yarn. After determining that a target computing task needs to be deployed using container resources, yarn-resourcemanager obtains the specification and amount of idle computing power currently available in the target container cluster and invokes the Kubernetes API to create container resources matching the resources (e.g., CPU and memory) required by the target computing task; an extender scheduler ensures that the Pod is created on the node with more remaining resources, and that Pod is responsible for starting the Yarn services. A sketch of selecting the node with the most free resources through the Kubernetes API follows.
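The following sketch shows one way the node with the most free resources could be picked using the official Kubernetes Python client; it considers only CPU for brevity and is an illustration under stated assumptions, not the patent's actual scheduler extension.

```python
# Sketch of steps S401-S403 with the Kubernetes Python client (pip install kubernetes):
# rank candidate nodes by free CPU (allocatable minus the CPU requests of pods
# already running there) and return the node with the most headroom.
from kubernetes import client, config

def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity such as '2' or '500m' to cores."""
    return float(quantity[:-1]) / 1000 if quantity.endswith("m") else float(quantity)

def pick_target_node() -> str:
    config.load_kube_config()  # use config.load_incluster_config() when running in a Pod
    v1 = client.CoreV1Api()
    best_node, best_free = None, float("-inf")
    for node in v1.list_node().items:
        name = node.metadata.name
        allocatable = parse_cpu(node.status.allocatable["cpu"])
        # Sum the CPU requests of pods already scheduled on this node.
        pods = v1.list_pod_for_all_namespaces(
            field_selector=f"spec.nodeName={name},status.phase=Running").items
        requested = 0.0
        for pod in pods:
            for c in pod.spec.containers:
                reqs = c.resources.requests if c.resources and c.resources.requests else {}
                if "cpu" in reqs:
                    requested += parse_cpu(reqs["cpu"])
        free = allocatable - requested
        if free > best_free:
            best_node, best_free = name, free
    return best_node
```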
As an alternative embodiment, as in the foregoing method, before determining the target computing task for containerized deployment in all the computing tasks to be deployed in step S102, the method further includes the following steps:
Step S501, obtaining target information of a target type under the current situation, where the target type includes at least one of the following: the load of all the computing tasks to be deployed, and the current point in time.
In the case that the computing task to be deployed is an offline task, a trigger condition is required to determine whether or not the containerized deployment of at least one computing task to be deployed is required.
The triggering condition can be triggered according to the load quantity of all the computing tasks to be deployed or according to time, namely, the target information of the target type is obtained under the current condition.
In the current case, the current time or all the current computing tasks to be deployed can be used.
The target type may be the load amount of all the computing tasks to be deployed, the current point in time.
For example, when the target type is the load capacity of all the computing tasks to be deployed, judging whether the computing tasks to be deployed need to be subjected to containerized deployment according to the load capacity of all the computing tasks to be deployed; and judging whether the calculation task to be deployed needs to be subjected to containerized deployment according to the current time when the target type is the current time point.
Step S502, executing a jump operation for jumping to the step of determining the target computing task for containerized deployment in all computing tasks to be deployed under the condition that the target information meets the preset condition.
After the target information is determined, whether the target computing task for containerized deployment needs to be determined or not can be judged according to the target information.
And judging whether the target information meets the preset condition before judging whether the target computing task for containerized deployment needs to be determined or not.
The preset condition may be information consistent with a target type of the target information, for example, in a case where the target type is a load amount of all the computing tasks to be deployed, the target condition may be a preset value for indicating a load amount threshold, for example: preset values for indicating the required amount of memory, the number of CPUs, and the number of GPUs; in the case where the target type is the current time point, the target condition may then be a preset value, for example 19:00, for indicating the time point.
For example, as shown in fig. 4, a task deployment architecture applying the method of the foregoing embodiments includes an elastic scaling module (yarn-autoscaler) comprising a cluster management unit, a resource management unit, a task management unit, a cluster load monitoring unit (used to judge whether the target information meets the preset condition when the target type is the load of all the computing tasks to be deployed) and a Pod lifecycle management unit (used to judge whether the target information meets the preset condition when the target type is a time point). The elastic scaling module provides two scaling modes: scaling by load and scaling by time. For scaling by load, the user may set thresholds (i.e., preset conditions) on different metrics to trigger scaling, such as the available vcores, pending vcores, available memory and pending memory of the Yarn queue. Time-based scaling rules can also be specified, triggering by day, week, month and so on. The big data cluster is a cluster using physical computing resources, and the container cluster is a cluster using container resources. A small sketch of both trigger styles is given below.
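As a simplified illustration of the two trigger styles (scaling by load and scaling by time), the sketch below checks Yarn queue metrics against user-set thresholds and an offline time window; the metric keys, threshold values and the 19:00 window are assumptions, not values from the disclosure.

```python
# Hedged sketch of the step S501-S502 trigger logic: containerized deployment is
# triggered either by queue pressure (load-based) or by entering an offline
# time window (time-based). All concrete numbers here are illustrative.
from datetime import datetime
from typing import Dict, Optional

LOAD_THRESHOLDS = {"pending_vcores": 100, "pending_memory_mb": 200_000}  # assumed values
OFFLINE_WINDOW_START_HOUR = 19  # e.g. evenings, when online container services are idle

def should_containerize(queue_metrics: Dict[str, int],
                        now: Optional[datetime] = None) -> bool:
    now = now or datetime.now()
    # Load-based trigger: too much pending work in the Yarn queue.
    overloaded = any(queue_metrics.get(k, 0) >= v for k, v in LOAD_THRESHOLDS.items())
    # Time-based trigger: inside the assumed offline window.
    in_offline_window = now.hour >= OFFLINE_WINDOW_START_HOUR
    return overloaded or in_offline_window
```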
As an alternative embodiment, as in the foregoing method, before the step S104 of deploying the target computing task into the target container instance of the target node, the method further includes the following steps:
Step S601, creating a target resource of the target computing resource amount in the target node according to the target computing resource amount of the target computing task.
After the target computing task is determined, the target computing resource amount required by the target computing task can be determined according to information such as the service type and data volume of the target computing task. A target resource of the target computing resource amount is then created in the target node based on this amount.
Further, in the case that the target computing tasks include N, N target resources may also be created at the target node, where each target resource corresponds to one of the target computing tasks.
In step S602, a target service is integrated into the target image, where the target service is used to obtain a target computing task.
In order to create the container instance, a corresponding target image is needed, and the target service is integrated into the target image, so that the target computing task can later be deployed into the target container instance conveniently.
Step S603, creating the target container instance in the target resource through the target image.
After the target image is obtained, it can be run, so that the target container instance is created using the target resource; the target computing task can subsequently be processed with that resource.
By integrating the target service into the target image, the method of this embodiment facilitates automatic deployment of the target computing task into the target container instance later on, improving deployment efficiency. A sketch of creating such a container instance through the Kubernetes API is shown below.
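For illustration, the sketch below creates a container instance (Pod) on the chosen node, sized to the task's resource demand, from an image that bundles the yarn-nodemanager service; the image name, namespace and labels are placeholders, and pinning the Pod via node_name stands in for the extended scheduler mentioned above.

```python
# Sketch of steps S601-S603 using the Kubernetes Python client: create a Pod on
# the selected node with resource requests matching the target computing task.
from kubernetes import client, config

def create_task_pod(task_id: str, node_name: str, cpu_vcores: int, memory_mb: int):
    config.load_kube_config()
    v1 = client.CoreV1Api()
    resources = client.V1ResourceRequirements(
        requests={"cpu": str(cpu_vcores), "memory": f"{memory_mb}Mi"},
        limits={"cpu": str(cpu_vcores), "memory": f"{memory_mb}Mi"},
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=f"yarn-nm-{task_id}",
                                     labels={"app": "yarn-nodemanager"}),
        spec=client.V1PodSpec(
            node_name=node_name,  # the node picked in step S403
            containers=[client.V1Container(
                name="nodemanager",
                image="registry.example.com/yarn-nodemanager:latest",  # placeholder image
                resources=resources,
            )],
            restart_policy="Never",
        ),
    )
    return v1.create_namespaced_pod(namespace="big-data", body=pod)  # assumed namespace
```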
As an alternative embodiment, the deploying, in step S104, the target computing task to the target container instance of the target node includes the following steps:
in step S701, a target scheduling component requests to acquire a target computing task through a target service, where the target scheduling component is configured to create a target resource corresponding to the target computing task in a target node.
As described in the foregoing embodiments, the target service is integrated into the target image, so the target service also runs when the target image is run; thus, the target service can request the target computing task from the target scheduling component, where the target scheduling component is configured to create, in the target node, the target resource corresponding to the target computing task.
Step S702, deploying the target computing task into a target container instance of the target node.
After the target computing task is obtained, the target computing task can be deployed into the target container instance in a containerized deployment mode, so that the purpose of containerized deployment of the target computing task is achieved, and the effect of elastic capacity expansion according to the task can be achieved.
For example, as shown in fig. 5, the yarn-nodemanager service is integrated in the Pod and packaged into an image through Docker, so that yarn-nodemanager starts when the Pod is created, automatically registers with yarn-resourcemanager and acquires the tasks to be executed, enabling quick elastic scaling. A sketch of such a Pod entrypoint is shown below.
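A hedged sketch of what such a Pod's entrypoint could look like is given below: it points the NodeManager at the cluster's ResourceManager and starts it in the foreground, after which the NodeManager registers itself and begins receiving work. The environment variable name, service hostname and configuration path are assumptions about how the image is laid out.

```python
# Illustrative Pod entrypoint: write a minimal yarn-site.xml pointing at the
# ResourceManager, then start the NodeManager in the foreground so it registers
# and pulls tasks to execute. Paths and variable names are assumptions.
import os
import subprocess

def start_nodemanager() -> None:
    rm_host = os.environ.get("YARN_RESOURCEMANAGER_HOST", "yarn-rm.big-data.svc")
    yarn_site = f"""<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>{rm_host}</value>
  </property>
</configuration>
"""
    with open("/etc/hadoop/conf/yarn-site.xml", "w") as f:
        f.write(yarn_site)
    # 'yarn nodemanager' runs the NodeManager process in the foreground.
    subprocess.run(["yarn", "nodemanager"], check=True)

if __name__ == "__main__":
    start_nodemanager()
```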
As shown in fig. 6, according to an embodiment of another aspect of the present application, there is also provided a task deployment device, including:
The acquisition module 1 is used for acquiring all computing tasks to be deployed;
The first determining module 2 is used for determining a target computing task meeting the containerized deployment requirement in all computing tasks to be deployed;
A second determining module 3, configured to determine, in the target container cluster, a target node for deploying a target computing task, where there are idle remaining computing resources in the target node;
a deployment module 4, configured to deploy the target computing task into a target container instance of the target node.
In particular, the specific process of implementing the functions of each module in the apparatus of the embodiment of the present invention may be referred to the related description in the method embodiment, which is not repeated herein.
According to another embodiment of the present application, there is also provided an electronic apparatus including: as shown in fig. 7, the electronic device may include: the device comprises a processor 1501, a communication interface 1502, a memory 1503 and a communication bus 1504, wherein the processor 1501, the communication interface 1502 and the memory 1503 are in communication with each other through the communication bus 1504.
A memory 1503 for storing a computer program;
The processor 1501 is configured to execute the program stored in the memory 1503, thereby implementing the steps of the method embodiment described above.
The bus mentioned for the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include Random Access Memory (RAM) or Non-Volatile Memory (NVM), such as at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The embodiment of the application also provides a computer readable storage medium, wherein the storage medium comprises a stored program, and the program executes the method steps of the method embodiment.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

The first determining module is configured to determine, among all the computing tasks to be deployed, a target computing task that meets a containerized deployment requirement, where the determining module includes: determining the priority of each computing task to be deployed; determining a low-priority computing task and a high-priority computing task except the low-priority computing task in all computing tasks to be deployed according to the priorities, wherein the priorities of the low-priority computing tasks are lower than those of the high-priority computing tasks, and the priorities are used for indicating the sequence of deploying the computing tasks to be deployed through physical computing resources; determining the target computing task from all the low-priority computing tasks;
CN202111165100.6A (priority date 2021-09-30, filing date 2021-09-30): Task deployment method and device, electronic equipment and storage medium; granted as CN113835865B (Active)

Priority Applications (1)

Application Number: CN202111165100.6A (CN113835865B); Priority Date: 2021-09-30; Filing Date: 2021-09-30; Title: Task deployment method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number: CN202111165100.6A (CN113835865B); Priority Date: 2021-09-30; Filing Date: 2021-09-30; Title: Task deployment method and device, electronic equipment and storage medium

Publications (2)

Publication Number: CN113835865A (en); Publication Date: 2021-12-24
Publication Number: CN113835865B; Publication Date: 2024-09-13

Family

ID=78967990

Family Applications (1)

Application Number: CN202111165100.6A (CN113835865B, Active); Priority Date: 2021-09-30; Filing Date: 2021-09-30; Title: Task deployment method and device, electronic equipment and storage medium

Country Status (1)

Country: CN; Link: CN113835865B (en)

Also Published As

Publication Number: CN113835865A (en); Publication Date: 2021-12-24


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
