CN113900811A - Event-driven task scheduling method and device - Google Patents

Event-driven task scheduling method and device

Info

Publication number
CN113900811A
CN113900811A (Application CN202111182687.1A)
Authority
CN
China
Prior art keywords
message queue
request message
module
message
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111182687.1A
Other languages
Chinese (zh)
Inventor
柳恒建
杨军
陈挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wanyi Technology Co Ltd
Original Assignee
Wanyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wanyi Technology Co Ltd
Priority to CN202111182687.1A
Publication of CN113900811A
Legal status: Pending

Abstract

The application provides an event-driven task scheduling method and device. The method is used in a business function support system comprising a function creation module, a gateway module, a message queue module, and a worker module. The function creation module receives a first request message and creates a function; the gateway module receives a second request message requesting entry into the workflow; the message queue module stores the first and second request messages to form a first message queue; when the worker module detects the first message queue, it creates a work task corresponding to that queue, processes the task, and stores the result in the message queue module to form a second message queue; when the gateway module detects the second message queue, it sends the result to the user to provide a callback service. This event-driven approach achieves safe isolation of requests and schedules tasks without omission and with low latency.

Description

Event-driven task scheduling method and device
Technical Field
The present application relates to the field of open-source function computing technologies, and in particular, to an event-driven task scheduling method and a related apparatus.
Background
At present, the open function-as-a-service (OpenFaaS) support system mainly aims to simplify serverless functions by using Docker containers. The OpenFaaS architecture is based on cloud-native standards and includes the following components: an API gateway, Watchdog, the container orchestrators Kubernetes and Docker Swarm, Prometheus, and Docker.
Existing public cloud function-as-a-service (FaaS) support systems impose strict limits on function execution duration, cannot support hour-level execution times, and place many restrictions on resource usage.
Disclosure of Invention
The application provides an event-driven task scheduling method and a related device, aiming to construct a general FaaS framework that solves the problem that existing FaaS frameworks support neither ultra-long execution times nor beyond-standard CPU/memory configurations, and to achieve safe isolation of requests while scheduling tasks without omission and with low latency.
In a first aspect, an embodiment of the present application provides an event-driven task scheduling method, where the event-driven task scheduling method is used for a business function support system, where the business function support system includes a gateway module, a message queue module, and a worker module; the method comprises the following steps:
the gateway module receives a first request message, analyzes a request parameter of the first request message, and pushes the first request message to the message queue module;
the message queue module stores the first request message to form a first message queue;
the worker module monitors the first message queue, creates a work task corresponding to the first request message, and pushes a processing result to the message queue module as a second request message by the work task;
the message queue module stores the second request message to form a second message queue;
and the gateway module monitors the second message queue and sends the second request message to a user to provide callback service.
In a second aspect, an embodiment of the present application provides an apparatus for event-driven task scheduling, including:
a receiving unit, configured to receive a first request message and a third request message, where the first request message is used to indicate to enter a flow, and the third request message is used to indicate to create a target service function;
the gateway unit is connected with the receiving unit and is used for analyzing the request parameters of the first request message;
the storage unit is connected with the gateway unit and used for storing the first request message;
the function definition unit is connected with the storage unit and is used for creating a work task corresponding to the first message queue; the work unit is the component of the business function support system used for creating work tasks;
and the message queue unit is connected with the gateway unit and the work unit and is used for storing the first request message and the second request message to form a first message queue and a second message queue; entries in the first message queue correspond one-to-one with first request messages, and the second request message indicates the processing result of the work task.
In a third aspect, an embodiment of the present application provides an electronic device, where the electronic device includes:
one or more processors;
one or more memories for storing programs, and one or more communication interfaces for wireless communication; the memories and the communication interfaces are connected to each other and communicate with each other; the one or more memories and the program are configured to cause the one or more processors to control the apparatus to perform some or all of the steps described in any of the methods of the first aspect of the embodiments of the application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein for electronic data exchange, the computer program comprising executable instructions for performing some or all of the steps as described in any one of the methods of the first aspect of embodiments of the present application.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a computer program operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the present application, the function creation module receives a first request message and creates a function; the gateway module receives a second request message requesting entry into the workflow; the message queue module stores the first and second request messages to form a first message queue; if the worker module detects the first message queue, it creates a corresponding work task, processes it, and stores the result in the message queue module to form a second message queue; and if the gateway module detects the second message queue, it sends the result to the user to provide a callback service. This event-driven approach achieves safe isolation of requests and schedules tasks without omission and with low latency.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a service function support system according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for event-driven task scheduling according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an apparatus for event-driven task scheduling according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of another apparatus for event-driven task scheduling according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps is not limited to only those steps recited, but may alternatively include other steps not recited, or may alternatively include other steps inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In related designs, the original worker module (faas-worker) had to maintain a long-lived connection with the job task (Job). This synchronous connection brings two main problems: a performance problem, since the number of connections must not grow too large; and a timeout problem, since a synchronous connection needs a timeout, usually a short one, so that connection resources are released as soon as possible to avoid a system crash.
In order to solve the above problems, an embodiment of the present application provides an event-driven task scheduling method and device. The method is used in a business function support system comprising a function creation module, a gateway module, a message queue module, and a worker module. The function creation module receives a first request message and creates a function; the gateway module receives a second request message requesting entry into the workflow; the message queue module stores the first and second request messages to form a first message queue; if the worker module detects the first message queue, it creates a corresponding work task, processes it, and stores the result in the message queue module to form a second message queue; and if the gateway module detects the second message queue, it sends the result to the user to provide a callback service. This event-driven approach achieves safe isolation of requests and schedules tasks without omission and with low latency.
In order to better understand the method and the apparatus for event-driven task scheduling disclosed in the embodiments of the present invention, the following describes embodiments of the present invention in detail.
A description will now be given of a network architecture to which the embodiment of the present invention is applicable. Referring to fig. 1, fig. 1 is a schematic structural diagram of a service function support system according to an embodiment of the present invention. As shown in fig. 1, the network architecture 10 specifically includes:
the function creation module 110: thefunction creating module 110 may be referred to as FaaS-builder, and is configured to receive a request for creating a target service function from a user and define a function.
The gateway module 120: thegateway module 120 may be an application program interface API gateway module FaaS-gateway, which is an entry gateway of the servicefunction support system 10 and is configured to receive a request and return a response.
The worker module 130: theworker module 130 may be a FaaS-worker, which is a component of the businessfunction support system 10 for creating the work task Job.
The container orchestration component 140: thecontainer arrangement component 140 may be a kubernets K8s module, configured to scale the scaling policy of the target service function instance.
The target service function example may include a drawing inference function, a primitive identification function, a structure rule review function, and the like in the drawing inspection service, which is not limited herein.
The message queue component 150: the message queue component 150 may be RabbitMQ, a message queue developed in the Erlang language and implemented on the Advanced Message Queuing Protocol (AMQP). A message queue is a means of communication between application programs, and message queues are widely used in the development of distributed systems.
Specifically, in the service function support system proposed in this embodiment, the message queue module 150 changes from being the communication medium between the gateway module 120 and the worker module 130 to being the communication medium among the gateway module 120, the worker module 130, and the job task (Job); after this modification, the worker module 130 and the Job no longer maintain a synchronous connection, and communication is adjusted from synchronous to asynchronous.
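The asynchronous decoupling described above can be illustrated with a minimal in-process sketch. Python's standard `queue.Queue` stands in for the RabbitMQ first and second message queues, and all names and payload shapes are illustrative, not taken from the patent:

```python
import queue
import threading

# Stand-ins for the first and second message queues (RabbitMQ in the patent).
request_queue = queue.Queue()
result_queue = queue.Queue()

def gateway_receive(request):
    """Gateway side: push the parsed request into the message queue."""
    request_queue.put(request)

def worker_loop():
    """Worker side: consume requests asynchronously, push results back."""
    while True:
        req = request_queue.get()
        if req is None:                      # sentinel to end the demo
            break
        result_queue.put({"request_id": req["request_id"],
                          "result": req["payload"].upper()})

t = threading.Thread(target=worker_loop)
t.start()
gateway_receive({"request_id": 1, "payload": "hello"})
request_queue.put(None)
t.join()
res = result_queue.get()
```

Because the gateway and worker only share the queues, neither side holds a long-lived connection to the other, which is the point of the synchronous-to-asynchronous adjustment.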
Based on this, please refer to fig. 2, where fig. 2 is a schematic flowchart of a method for scheduling event-driven tasks according to an embodiment of the present application, and the method is applied to a business function support system, where the business function support system includes a function creation module, a gateway module, a message queue module, and a worker module; as shown in the figure, the event-driven task scheduling method includes the following operation flows:
step 201, the gateway module receives a first request message, parses a request parameter of the first request message, and pushes the first request message to the message queue module.
Illustratively, the user transmits a request to enter the workflow to the system through the first request message, and the gateway module FaaS-gateway receives and processes first request messages from thousands of concurrent API calls.
Specifically, the request message includes, but is not limited to, traffic management, authorization and access control, monitoring, and the like.
Step 202, the message queue module stores the first request message to form a first message queue.
Step 203, the worker module monitors the first message queue, creates a work task corresponding to the first request message, and the work task pushes a processing result to the message queue module as a second request message.
Illustratively, when the worker module observes that the first message queue in the message queue module is not empty, it creates a corresponding work task and passes the information contained in each request to that work task.
Further, each work task may be dynamically configured with resources, including but not limited to settings such as timeout, memory, and CPU.
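A per-task resource configuration of this kind could be sketched as follows. The field names, defaults, and the request-type rule are assumptions for illustration; the patent only states that timeout, memory, and CPU may be set per work task:

```python
from dataclasses import dataclass

@dataclass
class WorkTask:
    """One work task per request; resources configured dynamically.
    Field names and defaults are illustrative, not from the patent."""
    request_id: str
    timeout_s: int = 300      # default timeout
    memory_mb: int = 512      # default memory
    cpu_cores: float = 0.5    # default CPU

def create_task(request_id: str, request_type: str) -> WorkTask:
    # Assumed rule: heavier request types get larger allocations.
    if request_type == "drawing_inference":
        return WorkTask(request_id, timeout_s=3600,
                        memory_mb=4096, cpu_cores=2.0)
    return WorkTask(request_id)

task = create_task("req-1", "drawing_inference")
```

Setting an hour-scale timeout per task is what lets this design escape the short execution limits of existing FaaS platforms mentioned in the background.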
Step 204, the message queue module stores the second request message to form a second message queue.
Illustratively, the second request message indicates the processing result of the work task. In step 203, each created work task has a corresponding work instance to process. After the work task is created, it processes each work instance internally to generate a corresponding processing result, which forms the second request message.
Further, the second request messages are sequentially stored in the message queue module to form a second message queue.
In a possible example, the message queue module is further configured to obtain the number of the working instances, and obtain the number of the user requests from the first message queue and obtain the number of the processing results from the second message queue.
Step 205, the gateway module monitors the second message queue, and sends the second request message to the user to provide the callback service.
Illustratively, the gateway module monitors the second message queue of step 204; if the second message queue is not empty, the gateway module obtains the processing result information stored in the queue and sends it to the user to provide the callback service.
In a possible embodiment, one way to implement the gateway module's monitoring is as follows: the gateway module registers as a consumer of the message queue module (MQ) and thereby monitors the tasks in the MQ.
Illustratively, in step 202, the first request messages form the first message queue in the message queue module, and by registering with the message queue the gateway module can obtain a listening identifier corresponding to each request in the first message queue.
Further, when the processing result of each working instance is stored in the message queue module, the processing result also carries the identification information corresponding to the first message queue.
Further, when the gateway module monitors identification information corresponding to one working instance, the processing result of the current task is provided to the user callback service.
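The identifier-based matching of results to callbacks can be sketched as below. The dictionary-based registry and the returned tuple are illustrative assumptions; in the real system the gateway would POST the result to the callback address:

```python
# Map each request's listening identifier to its callback address.
pending_callbacks = {}

def register_request(request_id: str, callback_url: str) -> None:
    """Called when a first request message enters the first message queue."""
    pending_callbacks[request_id] = callback_url

def on_result_message(message: dict):
    """Invoked when the gateway, registered as an MQ consumer, receives a
    message from the second message queue."""
    callback_url = pending_callbacks.pop(message["request_id"], None)
    if callback_url is None:
        return None                 # unknown or already-delivered result
    # In the real system this pair would drive an HTTP POST to the user.
    return (callback_url, message["result"])

register_request("req-7", "https://example.com/cb")
delivery = on_result_message({"request_id": "req-7", "result": "ok"})
```

Popping the identifier on delivery ensures each result is handed to its callback exactly once.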
In one possible embodiment, the user can customize the gateway module to expose a REST API and operate on the function when the REST API is called by a client.
Specifically, the service function support system supports a user to manually trigger a function through an API/console, and helps the user to debug and use the cloud function more conveniently and more clearly.
Furthermore, the service function support system provides monitoring indicators for function calls, together with collection and display of running logs, making it convenient for a user to check the running state of a function.
Specifically, function monitoring provides each user with key indicators such as call count, error count, rejection count, and call latency, helping the user understand the overall operation of the function. The function's running log records its execution and provides a flexible log query capability, making it convenient for the user to check the function's behavior and to perform debugging and auditing.
It can be seen that, in the embodiment of the present application, the function creation module receives a first request message and creates a function; the gateway module receives a second request message requesting entry into the workflow; the message queue module stores the first and second request messages to form a first message queue; if the worker module detects the first message queue, it creates a corresponding work task, processes it, and stores the result in the message queue module to form a second message queue; and if the gateway module detects the second message queue, it sends the result to the user to provide a callback service. This event-driven approach achieves safe isolation of requests and schedules tasks without omission and with low latency. Meanwhile, because the gateway module monitors by registering as a consumer, processing results can be delivered to the corresponding user callback service promptly and accurately.
In one possible example, creating a work task corresponding to the first message queue when the worker module detects it includes: whenever the worker module observes a first request message, it creates a corresponding work task according to that message's request parameters and dynamically configures resources for the work task, where dynamic resource configuration includes setting at least one of a distinct timeout, memory allocation, and CPU allocation for each work task.
Illustratively, different resource configurations are specifically set for each of the work tasks according to different request types in the first message queue.
It can be seen that, in the embodiment of the present application, the worker module dynamically generates the work task and dynamically configures the resource of the work task. The service function support system can support ultra-long execution time and ultra-conventional CPU/memory setting. Meanwhile, the reasonable utilization of system resources can be realized by dynamically allocating the resources.
In one possible example, the request parameters include a function name, a service parameter, and a callback address; the function name is a mandatory option, and the service parameter and the callback address are selectable options.
Illustratively, after receiving a first request message, the gateway module performs parameter resolution on the first request message, where the parameters include a function name, a service parameter, and a callback address; specifically, the function name is a mandatory option, and the service parameter and the callback address are optional items.
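The parameter rule above (function name mandatory; service parameters and callback address optional) could be enforced with a small validator. The parameter names and the error behavior are assumptions for illustration:

```python
def parse_request_params(params: dict) -> dict:
    """Validate request parameters: the function name is mandatory,
    service parameters and callback address are optional.
    Key names are illustrative, not from the patent."""
    if not params.get("function_name"):
        raise ValueError("function_name is a mandatory parameter")
    return {
        "function_name": params["function_name"],
        "service_params": params.get("service_params", {}),
        "callback_url": params.get("callback_url"),
    }

parsed = parse_request_params({"function_name": "draw_check"})
```

A request carrying only a function name is accepted; the optional fields fall back to empty defaults.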
In actual operation, when editing function code the user is supported with project-style management: files and folders can be created and edited.
Illustratively, the builder module modifies an existing function configuration template, with each configuration entry divided into a default value and a dynamic value. The dynamic value is extracted from the call parameters; if it is not set in the call parameters, the default value is used directly.
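This default-versus-dynamic resolution amounts to a simple dictionary merge; a minimal sketch, with illustrative key names:

```python
def resolve_config(template: dict, call_params: dict) -> dict:
    """Each configuration entry has a default value from the template;
    a dynamic value from the call parameters, when present, wins."""
    return {key: call_params.get(key, default)
            for key, default in template.items()}

# Illustrative template with assumed defaults.
template = {"timeout_s": 300, "memory_mb": 512, "cpu_cores": 0.5}
config = resolve_config(template, {"memory_mb": 4096})
```

Only keys present in the template are resolved, so stray call parameters cannot inject unknown configuration entries.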
Further, the container composer K8s stores the above function configuration information in the configuration command table.
Illustratively, the first request message is a request characterizing the user's incoming flow, the contents of which include, but are not limited to, traffic management, authorization and access control, monitoring, and the like. And the gateway module carries out parameter processing on the first request message, wherein the parameters comprise a function name, service parameters and a callback address.
The function name is a mandatory option, and the service parameter and the callback address are selectable options.
It can be seen that, in the embodiment of the present application, by modifying the existing function configuration template and dividing each configuration into a default value and a dynamic value, the flexibility of configuration in the function definition process can be improved. And, storing the setting of the relevant parameter in the configuration command table can facilitate the acquisition and utilization of the subsequent steps.
In one possible example, the business function support system further includes a function creation module, the method including: the function creating module is used for creating a target service function and storing the configuration information of the target service function in a configuration command table.
Illustratively, the function creation module receives a third request message, where the third request message requests creation of the target business function. The function creation module sets configuration information for the third request message through the function service; the configuration information is stored in the configuration command table of the portable container orchestration tool Kubernetes (K8s).
Illustratively, the user passes a request to create a function to the system via the third request message, and this request may be regarded as an event. Driven by that event, after the function creation module FaaS-builder receives the request, the service function code is written and the operating conditions are set through the function service.
Specifically, a function is the combination of code, runtime, resources, and settings required to implement a piece of functionality, and it is the smallest unit that can run independently. A function is triggered by a Trigger, which automatically schedules the needed resources and environment to realize the intended functionality.
It can be seen that, in the embodiment of the application, the target function is created through the function service and the concept of a function application is introduced, so that when creating functions the user can create a function application according to the business logic and create a plurality of functions under it, which facilitates managing a group of functions related to one piece of business.
In one possible example, before creating the work task corresponding to the first message queue if the worker module monitors the first message queue, the method further includes: the worker module acquires the configuration information of the target service function from the configuration command table according to the request parameter of the first request message; and generating the work task corresponding to the first request message according to the configuration information of the target service function.
Illustratively, the work tasks corresponding to the first message queue are generated according to the function's definition information, and the target task function corresponds one-to-one, via a unique identifier, with the work tasks in the first message queue.
It can be seen that, in the embodiment of the present application, the work tasks may be created according to the definition information stored in the configuration command table. The method and the device can realize orderly generation of the work tasks and orderly transmission to the next module for work task processing.
In one possible example, after the worker module detects the first message queue and creates the corresponding work task, the method further includes: the worker module creates a corresponding number of container sets (pods) to process the first request messages, according to the number of first request messages in the first message queue, where a pod is the smallest scheduling unit of K8s.
In particular, the pod is the smallest unit of operation in K8s. The processing progress of the first request messages is judged according to the pods, and a scale-up/scale-down policy is then selected.
For example, the system may configure a preset scaling expression for the working instances in advance, such as: (number of first request messages) / (number of working instances) > 10, which indicates that the working instances handle more than 10 requests on average and that working instances must be added (capacity expansion).
In one possible example, an optional method for determining the capacity expansion policy for the working instances is:
Obtain the number of working instances configured for the first request messages corresponding to the target service function.
Subtract the working instances' average-processing-task threshold from their average number of processing tasks to obtain the average-task excess.
Multiply the average-task excess by the number of configured working instances to obtain the total task excess.
Divide the total task excess by the average-processing-task threshold and round to obtain the number of working instances to add.
Determine the capacity expansion policy for the number of first request messages corresponding to the target service function according to the number of working instances to add.
For example, assuming that the working instances' average-processing-task threshold is 10, their average number of processing tasks is 20, and the number of configured working instances is 15, the number of working instances to add is 15 × (20 − 10) ÷ 10 = 15.
In this possible example, an optional method for determining the capacity reduction policy for the number of first request messages is:
Obtain the number of working instances configured for the first request messages.
Subtract the working instances' average number of processing tasks from their average-processing-task threshold to obtain the average-task shortfall.
Multiply the average-task shortfall by the number of configured working instances to obtain the total task shortfall.
Divide the total task shortfall by the average-processing-task threshold and round to obtain the number of working instances to remove.
Determine the capacity reduction policy for the number of first request messages according to the number of working instances to remove.
For example, assuming that the working instances' average-processing-task threshold is 20, their average number of processing tasks is 15, and the number of configured working instances is 20, the number of working instances to remove is 20 × (20 − 15) ÷ 20 = 5.
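The scale-down calculation mirrors the scale-up one; a sketch reproducing the worked example, again assuming floor rounding:

```python
def instances_to_remove(avg_tasks: int, threshold: int,
                        configured_instances: int) -> int:
    """Capacity-reduction rule from the example: per-instance shortfall
    below the threshold, times the instance count, divided by the
    threshold. Floor rounding is an assumption."""
    shortfall_per_instance = threshold - avg_tasks
    total_shortfall = shortfall_per_instance * configured_instances
    return total_shortfall // threshold

n = instances_to_remove(avg_tasks=15, threshold=20, configured_instances=20)
```

With the example's numbers this gives 20 × (20 − 15) ÷ 20 = 5 removed instances, matching the text.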
In another possible example, the maximum-tolerable-time setting in the container orchestrator module may be made configurable, so that it can be set according to actual conditions.
Illustratively, the container orchestrator module may be Kubernetes (K8s), whose role is container deployment and management; it is an open-source platform intended to automate container operations.
Automation here means: supporting application-container deployment, scaling, and other operations across a cluster of multiple hosts. The service function support system is modular, so other application programs or frameworks can be integrated easily. Kubernetes also provides additional functionality, such as self-healing services, including auto-placement, auto-replication, and auto-restart of containers.
It can be seen that, in the embodiment of the present application, by making the maximum tolerable time in the container orchestrator module configurable, it can be set according to the actual situation; meanwhile, the system can determine an accurate scaling strategy from the average task load of the work instances, thereby improving accuracy.
In one possible example, after the creating of the work task corresponding to the first request message, the method further comprises: the worker module creates an asynchronous template for the work task, and the asynchronous template is used for writing in a processing result of the work task; the processing result forms a second message queue.
Illustratively, in order to obtain the execution result, a new asynchronous template is created for the job instance; after the job completes, the processing result is put into a message queue under a specific MQ topic, namely the second message queue.
Specifically, the template can be regarded as a fill-in-the-blank exercise: the blank is filled with the business algorithm and logic, while the fixed part of the template writes the execution result of that blank into the MQ. The template thus provides an abstraction capability: no matter what message request is executed, its result is ultimately written to the MQ.
It can be seen that, in the embodiment of the present application, by creating a new asynchronous template for each job instance, the result of each processed job is stored in the message queue, where it can be monitored by the gateway module and fed back to the user's callback service, thereby implementing asynchronous message scheduling.
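The fill-in-the-blank template idea can be sketched as follows. This is a minimal illustration, with an in-memory `queue.Queue` standing in for the MQ topic; the class and variable names (`AsyncJobTemplate`, `second_message_queue`) are invented for the example:

```python
import queue
from typing import Any, Callable

class AsyncJobTemplate:
    """Fixed part of the template: whatever the blank computes,
    its result is always written to the result queue (the 'MQ')."""
    def __init__(self, result_queue: "queue.Queue[Any]"):
        self.result_queue = result_queue  # stands in for the second message queue

    def run(self, business_logic: Callable[[dict], Any], request: dict) -> None:
        result = business_logic(request)  # the blank: business algorithm and logic
        self.result_queue.put(result)     # fixed part: write the result to the MQ

second_message_queue: "queue.Queue[Any]" = queue.Queue()
template = AsyncJobTemplate(second_message_queue)
template.run(lambda req: req["a"] + req["b"], {"a": 1, "b": 2})
print(second_message_queue.get())  # prints 3; the gateway would monitor this queue
```

The design point is that callers supply only the business logic; delivery of the result to the queue is guaranteed by the template, so no job's result can be forgotten.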
Referring to fig. 3, in accordance with the embodiment shown in fig. 2, fig. 3 is a schematic structural diagram of an event-driven task scheduling device according to an embodiment of the present application, as shown in fig. 3:
an event-driven task scheduler, said apparatus comprising:
301: a receiving unit, configured to receive a first request message and a third request message, where the first request message is used to indicate to enter a flow, and the third request message is used to indicate to create a target service function.
302: a gateway unit, connected to the receiving unit, configured to parse the request parameters of the first request message.
303: a storage unit, connected to the gateway unit, configured to store the first request message.
304: a work unit, connected to the storage unit, configured to create a work task corresponding to the first message queue; the work unit is the component of the business function support system used for creating work tasks.
305: a message queue unit, connected to the gateway unit and the work unit, configured to store the first request message and the second request message to form a first message queue and a second message queue; the first message queue corresponds one-to-one to the first request messages, and the second request message is used to indicate the processing result of the work task.
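The interaction among these units can be sketched end to end. This is a simplified single-process illustration in which `queue.Queue` stands in for the message queue unit; all function names and message fields are invented for the example:

```python
import queue

first_message_queue = queue.Queue()   # request messages stored by the message queue unit
second_message_queue = queue.Queue()  # processing results (second request messages)

def gateway_receive(request: dict) -> None:
    # Gateway unit: parse the request parameters, then store the request.
    params = request.get("params", {})
    first_message_queue.put(params)

def worker_poll() -> None:
    # Work unit: monitor the first message queue and create a work task per message.
    params = first_message_queue.get()
    result = {"status": "done", "echo": params}  # stand-in for real task processing
    second_message_queue.put(result)             # result forms the second message queue

def gateway_callback() -> dict:
    # Gateway unit: monitor the second message queue and feed the result back to the user.
    return second_message_queue.get()

gateway_receive({"params": {"func": "demo"}})
worker_poll()
print(gateway_callback())
```

In the actual system the two queues would be topics in an external MQ and the worker would run in separate instances, so requests and task execution stay safely isolated.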
It can be seen that, in the embodiment of the present application, the service operation executed by the target terminal is monitored; if a target trigger event in the business operation is monitored, acquiring user data of the target trigger event, wherein the user data comprises attribute information and behavior information, the attribute information is used for describing static characteristics of a user, the behavior information is used for describing an operation record of the user, and the target trigger event is a preset event used for indicating event-driven task scheduling; and sending the user data to a database of a server. By adopting the method of the embodiment of the application, the client data of the non-buried point event-driven task scheduling is reported to the database of the server in real time, the high-throughput processing data can be realized, and mass data storage and high-efficiency aggregation and real-time analysis of the data are supported.
Specifically, in the embodiment of the present application, the functional units of the event-driven task scheduling device may be divided according to the above method, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Referring to fig. 4, in accordance with the embodiment shown in fig. 2, fig. 4 is a schematic structural diagram of another event-driven task scheduling apparatus provided in the embodiment of the present application, as shown in fig. 4:
an electronic device, comprising:
one or more processors; one or more memories for storing programs; and one or more communication interfaces for wireless communication. The memories and communication interfaces are connected to each other and communicate with one another; the one or more memories and the program are configured so that, through the one or more processors, the apparatus is controlled to perform some or all of the steps described in any method of the first aspect of the embodiments of the application.
The memory may be a volatile memory such as a dynamic random access memory (DRAM), or a non-volatile memory such as a mechanical hard disk. The memory is used for storing a set of executable program code, and the processor is used for calling the executable program code stored in the memory to execute part or all of the steps of any event-driven task scheduling method described in the method embodiments above.
The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex-Long Term Evolution), and TDD-LTE (Time Division Duplex-Long Term Evolution).
The present application provides a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program comprises instructions for executing part or all of the steps of any event-driven task scheduling method described in the above method embodiments; the computer includes an electronic terminal device.
Embodiments of the present application provide a computer program product, wherein the computer program product comprises a computer program operable to cause a computer to perform some or all of the steps of any one of the methods of event-driven task scheduling as described in the above method embodiments, and the computer program product may be a software installation package.
It should be noted that, for simplicity, the foregoing method embodiments of event-driven task scheduling are described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the actions involved are not necessarily all required by the present application.
The above embodiments of the present application are described in detail, and the principles and implementations of a method and apparatus for event-driven task scheduling according to the present application are described herein with specific examples, and the description of the above embodiments is only used to help understand the method and core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the method and apparatus for event-driven task scheduling of the present application, the specific implementation and application scope may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present application.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, hardware products and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. The memory may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Those skilled in the art will appreciate that all or part of the steps of the various methods of any of the above described method embodiments of event-driven task scheduling may be performed by associated hardware as instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
It will be appreciated that any products controlled or configured to perform the methods of processing of the flowcharts described in the method embodiments of event-driven task scheduling of the present application, such as the apparatus of the flowcharts described above, and computer program products, are within the scope of the related products described herein.
It is apparent that those skilled in the art can make various changes and modifications to the method and apparatus for event-driven task scheduling provided herein without departing from the spirit and scope of the present application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

CN202111182687.1A2021-10-112021-10-11Event-driven task scheduling method and devicePendingCN113900811A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111182687.1A (published as CN113900811A) | 2021-10-11 | 2021-10-11 | Event-driven task scheduling method and device


Publications (1)

Publication Number | Publication Date
CN113900811A | 2022-01-07

Family

ID=79191420

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111182687.1A (CN113900811A, pending) | Event-driven task scheduling method and device | 2021-10-11 | 2021-10-11

Country Status (1)

Country | Link
CN | CN113900811A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2016177138A1 * | 2015-08-27 | 2016-11-10 | 中兴通讯股份有限公司 | Method, device and system for scheduling task
CN107632894A * | 2017-08-09 | 2018-01-26 | 中国电力科学研究院 | Implementation method and device for power market transaction service call
CN110069353A * | 2019-03-18 | 2019-07-30 | 中科恒运股份有限公司 | Business asynchronous processing method and device
CN111506412A * | 2020-04-22 | 2020-08-07 | 上海德拓信息技术股份有限公司 | Distributed asynchronous task construction and scheduling system and method based on Airflow


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114741174A * | 2022-04-13 | 2022-07-12 | 武汉大学 | Hybrid cloud native high computing power and high concurrency solution method and device based on message queue
CN114741174B * | 2022-04-13 | 2024-11-15 | 武汉大学 | Hybrid cloud native high computing power and high concurrency solution and device based on message queue
CN114706699A * | 2022-04-22 | 2022-07-05 | 美的集团股份有限公司 | Message processing method and device
CN116094898A * | 2023-01-09 | 2023-05-09 | 上海爱数信息技术股份有限公司 | Efficient content processing control method and gateway based on object storage
CN116032671A * | 2023-03-30 | 2023-04-28 | 杭州华卓信息科技有限公司 | Communication method and network system based on hybrid cloud


Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2022-01-07
