Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings denote the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the appended claims.
FIG. 1 is a schematic diagram illustrating an implementation environment in accordance with an exemplary implementation. The implementation environment comprises: a server 200 and at least one terminal 100 (only two terminals are shown in FIG. 1).
The terminal 100 may be an electronic device such as a laptop, a desktop, or a smart phone, which may run an application client. The server 200 is a server corresponding to the client running on the terminal 100, and is configured to exchange data with the application client running on the terminal 100 so as to provide a corresponding service for the application client. For example, code data is written on the client of the terminal; the server performs the switching operation of the multiple tasks in the code data according to the technical scheme of the present disclosure, and returns an operation result to the terminal 100, or returns the operation progress to the terminal 100 during operation, where the operation progress is displayed to the user by the terminal 100.
The terminal 100 and the server 200 establish a network connection to communicate; the association between the terminal 100 and the server 200 may include a network association, a hardware protocol, and/or a data association between the terminal and the server.
FIG. 2 is a block diagram illustrating a server 200 according to an exemplary embodiment, which server 200 may be used to implement task scheduling in accordance with the methods of the present disclosure.
It should be noted that this server is only one example adapted to the present disclosure, and should not be construed as providing any limitation on the scope of use of the present disclosure. Nor should the server be construed as necessarily relying on or necessarily having one or more of the components of the exemplary server 200 shown in FIG. 2.
The hardware structure of the server may vary widely depending on configuration or performance. As shown in FIG. 2, the server 200 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
Wherein, the power supply 210 is used for providing an operating voltage for each hardware device on the server 200.
The interface 230 includes at least one wired or wireless network interface 231, at least one serial-to-parallel interface 233, at least one input-output interface 235, at least one USB interface 237, and the like, for communicating with external devices.
The memory 250 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, where the stored resources include an operating system 251, application programs 253, and data 255, and the storage mode may be transient storage or permanent storage. The operating system 251, which may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like, is used for managing and controlling the various hardware devices and application programs 253 on the server 200, so as to implement the computation and processing of the mass data 255 by the central processor 270. An application program 253 is a computer program that performs at least one specific task based on the operating system 251, and may include at least one module (not shown in FIG. 2), each of which may respectively include a series of computer readable instructions for the server 200. The data 255 may be code data stored in a disk, or the like.
The central processor 270 may include one or more processors and is configured to communicate with the memory 250 via a bus for computing and processing the mass data 255 in the memory 250.
As described in detail above, the server 200 to which the present disclosure is applied performs the task scheduling method by having the central processor 270 execute a series of computer readable instructions stored in the memory 250.
In an exemplary embodiment, the server 200 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors, or other electronic components for executing the methods described below. Thus, implementation of the present invention is not limited to any specific hardware circuitry, software, or combination of the two.
FIG. 3 is a flowchart illustrating a method of task scheduling according to an exemplary embodiment. The task scheduling method is used in the server 200 of the implementation environment shown in FIG. 1. As shown in FIG. 3, the task scheduling method includes:
Step S110, obtaining coroutine code data obtained by performing program syntax conversion on multithreaded code data, wherein the multithreaded code data is used for executing a plurality of tasks in parallel by multiple threads.
Before the specific discussion, the differences among executing multiple tasks in a coroutine manner, a multithreaded manner, and a single-threaded manner are explained:
For example, the tasks to be executed are task A and task B; in an ideal state, both tasks are arithmetic operations, and there are no problems of contention or data sharing.
When the two tasks are executed in a multithreaded manner, task A is executed by thread A and task B is executed by thread B, so that parallel execution of task A and task B is realized.
When the two tasks are executed in a single-threaded manner, the tasks are executed in the order set in the code data, for example task A first and then task B; in the execution process, task B can be executed only after task A has been completed.
When the two tasks are executed in a coroutine manner, task A may be executed first and then task B; however, if task A is blocked, the execution of task A may be suspended, execution may be switched to task B, and the execution of task A may be resumed afterwards. This can also be regarded as yielding between coroutines, i.e., coroutine A performs task A and coroutine B performs task B, but only one coroutine task is executed at a time.
From the above, in multithreading, multiple tasks may be executed in parallel (i.e., multiple tasks may be executing at the same time). In a single thread, multiple tasks are executed sequentially, i.e., only one task can be executed at a time, and the next task can begin only after the previous task is completed. In a coroutine, only one task is executing at any given moment, but the execution order among the tasks can be switched.
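The cooperative switching described above can be illustrated with a minimal Python sketch, using generators to stand in for coroutines; the task bodies and the round-robin loop below are illustrative assumptions, not the implementation of the present disclosure:

```python
from collections import deque

def task(name, steps):
    """A cooperative task: yields control back to the scheduler after each step."""
    for i in range(steps):
        print(f"task {name}: step {i}")
        yield  # voluntarily give up control (the cooperative switch point)

# Only one task runs at a time; the loop decides the switching order.
ready = deque([task("A", 2), task("B", 2)])
while ready:
    current = ready.popleft()
    try:
        next(current)           # run until the task's next yield
        ready.append(current)   # not finished: requeue it
    except StopIteration:
        pass                    # task completed
```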
In addition, it should be noted that, in multithreading, tasks may also be switched between threads. For example, thread I needs to execute two tasks C and D, while thread II temporarily has no task to process; during execution, a task in thread I, for example task C, may be switched to thread II for execution, that is, task D is executed by thread I and task C is executed by thread II. The scheduling among threads is realized by the operating system, whereas the scheduling of tasks in coroutines is performed by the corresponding code data, for example, yielding between tasks by using coroutine keywords such as yield in the coroutine code data. Thus the cost of switching between threads is greater than the cost of coroutine switching, and thread switching is slower than coroutine switching.
The switching execution, in a single thread, of the plurality of tasks of the multithreaded code data realized by the technical scheme of the present disclosure is switching in the coroutine manner, that is, the switching is realized by the code data itself.
The multithreaded code data refers to code data that runs in a multithreaded manner, where whether code data can run in a multithreaded manner is limited by its programming language. For example, if the function library of a programming language provides a corresponding thread library that supports the multithreaded operation of code data, code written with the thread library of that programming language is multithreaded code data, and such multithreaded code data can run in a multithreaded manner in an environment supporting that programming language. Programming languages such as C, C++, and Python support multithreaded operation; the specific programming language of the multithreaded code data is not limited herein.
Coroutine code data refers to code data that runs in a coroutine manner; that is, as mentioned above, when running in a coroutine manner, the plurality of tasks to be executed in the coroutine code data may be switched during execution, but only one task is executed at a time. Similarly, coroutine code data is also limited by the programming language, i.e., whether the programming language provides the coroutine-related functions by which code data executes in a coroutine manner, such as the coroutine keywords mentioned below; code data written with the coroutine-related functions of such a programming language is coroutine code data. In the technical scheme of the present disclosure, the multithreaded code data is converted into the coroutine code data by performing static analysis on the multithreaded code data and adding corresponding coroutine keywords to the task functions.
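As an illustration of such a conversion, a minimal hand-written Python sketch is shown below; the function names are invented for illustration, and the present disclosure performs the conversion automatically through static analysis rather than by hand:

```python
import asyncio
import threading
import time

# Before: multithreaded code data, two tasks executed in parallel by two threads.
def task_a():
    time.sleep(1)            # blocking delay inside thread A
    print("task A done")

def task_b():
    print("task B done")

threading.Thread(target=task_a).start()
threading.Thread(target=task_b).start()

# After: coroutine code data, the same tasks with coroutine keywords added,
# executed by cooperative switching inside a single thread.
async def task_a_co():
    await asyncio.sleep(1)   # suspension point: the scheduler switches to task B
    print("task A done")

async def task_b_co():
    print("task B done")

async def main():
    await asyncio.gather(task_a_co(), task_b_co())

asyncio.run(main())
```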
The manner in which code data of different programming languages runs (i.e., single-threaded, multithreaded, or coroutine) is limited, on the one hand, by the actual running environment of the code data, i.e., whether that environment supports the corresponding running mode, and, on the other hand, by the programming language, i.e., whether the language provides the function libraries and interfaces of the corresponding running mode. For example, code data written in JavaScript can only run in a single-threaded manner when running in a browser, while code data written in programming languages such as Python, C, and C++ can support both the multithreaded manner and the coroutine manner; for instance, code data written with the multithreading-related functions provided by Python can run in a multithreaded manner, and code data written with the coroutine-related functions provided by Python can run in a coroutine manner.
Of course, after the multithreaded code data is subjected to program syntax conversion to obtain the coroutine code data, the tasks executed by the coroutine code data at runtime are still the plurality of tasks to be executed by the corresponding multithreaded code data.
In a specific embodiment, the multithreaded code data may be written on a client of the terminal; the server obtains the multithreaded code data from the terminal and performs program syntax conversion on it to obtain the coroutine code data, so that the server directly executes the obtained coroutine code data according to the task scheduling method of the present disclosure. Of course, the coroutine code data obtained by performing program syntax conversion on the multithreaded code data may also be stored directly in the server, so that the server may directly retrieve the coroutine code data and execute it according to the task scheduling method of the present disclosure.
In step S130, a plurality of tasks are added to the scheduler.
The scheduler is used for scheduling the code data so as to execute the plurality of tasks in the code data in a coroutine manner, i.e., the scheduler is a coroutine scheduler that supports executing a plurality of tasks in a coroutine manner. If a programming language provides coroutine-related functions for a user to write coroutine code data, the programming language also provides a coroutine scheduler that executes the plurality of tasks in the coroutine code data in a coroutine manner.
Adding the plurality of tasks to the scheduler means adding the task functions corresponding to the plurality of tasks to the coroutine scheduler, so that the scheduler may schedule the plurality of tasks as described in step S150.
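Taking Python's asyncio as a concrete but merely illustrative coroutine scheduler, adding tasks to the scheduler corresponds to registering coroutines with the event loop:

```python
import asyncio

async def task_a():
    await asyncio.sleep(0)   # a switch point handed to the scheduler
    print("task A")

async def task_b():
    print("task B")

async def main():
    # Adding the plurality of tasks to the scheduler: the event loop now owns them.
    t1 = asyncio.create_task(task_a())
    t2 = asyncio.create_task(task_b())
    await t1
    await t2

asyncio.run(main())
```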
In one embodiment, if the programming language used by the multithreaded code data does not support writing coroutine code data, the multithreaded code data needs to be compiled into another language that does support writing coroutine code data; that is, the multithreaded code data is compiled into multithreaded code data of another programming language, and program syntax conversion is then performed on the compiled multithreaded code data to obtain the coroutine code data. Thus, in step S130, the plurality of tasks to be executed by the multithreaded code data are added to the coroutine scheduler provided by the coroutine-supporting programming language.
Step S150, scheduling and controlling, by the scheduler, the switching execution of the plurality of tasks in a single thread according to the coroutine code data.
That is, in step S150, although the coroutine code data is executed in a single thread, the scheduler may schedule according to the coroutine code data during execution, so as to realize the switching execution of the plurality of tasks in the single thread. The switching execution of the plurality of tasks in step S150 is described in detail below.
In the technical scheme of the present disclosure, the multithreaded code data is converted into the coroutine code data through program syntax conversion, and the scheduler is utilized to schedule and control the switching execution of the plurality of tasks in a single thread according to the coroutine code data, so that the multithreaded code data runs in a single thread, which addresses the prior-art difficulty of running a general-purpose programming language in a browser.
The present disclosure may be applied to programming software, particularly programming software that uses a browser as its platform. After multithreaded code data is written in a general-purpose programming language, the technical scheme of the present disclosure can be utilized to run that code data in a single thread; for example, the multithreaded code data is subjected to program syntax conversion to obtain the coroutine code data, and the coroutine code data is compiled into the JavaScript language, thereby realizing operation in a browser.
In an exemplary embodiment, the multithreaded code data includes task functions for the multiple threads to execute the plurality of tasks in parallel. As shown in FIG. 4, before step S110, the method further includes:
Step S011, performing static analysis on the multithreaded code data, and determining, through the static analysis, the type corresponding to each task function in the multithreaded code data.
Static analysis refers to scanning the multithreaded code data through techniques such as lexical analysis, syntax analysis, control-flow analysis, and data-flow analysis without running the multithreaded code data, thereby determining the type corresponding to each task function in the multithreaded code data.
The static analysis of the multithreaded code data may be performed using static analysis tools such as FindBugs, PMD, Checkstyle, BlueMorpho, Klocwork, LDRA Testbed, HP Fortify, Parasoft C++test, and the like, which are not specifically limited herein. Of course, a static analysis tool has requirements on the programming language of the code data; for example, the Parasoft C++test tool supports static analysis of code data in the C and C++ languages, so it cannot be used to statically analyze code data written in Python. The static analysis tool used therefore depends on the programming language of the multithreaded code data.
During the running of the code data, executing a task means executing the task function corresponding to that task; of course, the task function may include one or more sub-functions, which together form the task function.
Through static analysis, it can be judged whether the task corresponding to a certain task function is likely to be blocked. Task blocking may be caused by deferred execution of a task function, for example, a task function containing a time.sleep() deferred-execution function; it may also occur in a task function that needs to read data from, or write data to, other devices or files through an I/O interface, so that the execution time of the corresponding task is long. Such a task function is a blocking function.
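A minimal sketch of such a static classification, assuming Python source code and the standard ast module; the set of call names treated as blocking is an illustrative assumption:

```python
import ast

# Call names assumed, for illustration only, to mark a task function as blocking.
BLOCKING_CALLS = {"sleep", "read", "write", "recv", "send"}

def is_blocking_function(source: str, func_name: str) -> bool:
    """Return True if the named function contains a call judged likely to block."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            for call in ast.walk(node):
                if isinstance(call, ast.Call):
                    f = call.func
                    name = f.attr if isinstance(f, ast.Attribute) else getattr(f, "id", "")
                    if name in BLOCKING_CALLS:
                        return True
    return False

code = "import time\ndef task_a():\n    time.sleep(1)\n"
print(is_blocking_function(code, "task_a"))  # True
```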
Step S013, adding corresponding coroutine keywords into the task functions according to the type corresponding to each task function, so as to obtain the coroutine code data; during the running of the coroutine code data in a single thread, the scheduler switches the tasks corresponding to the task functions among different task queues in the single thread through the coroutine keywords, thereby realizing the switching execution of the plurality of tasks.
The coroutine keywords to be added depend on the programming language and on the type of the task function; for example, the Python language supports yield, await, and the like, and when a coroutine keyword is added, yield or await is chosen according to the type of the corresponding task function, which is not particularly limited herein.
For example, in the multithreaded code data, the execution of the calling thread is deferred through time.sleep(1000); this function is a deferred-execution function, so the task function containing it is a deferred-execution function, where the number 1000 in parentheses indicates the delay duration. After static analysis determines that the task function where this function is located is a deferred-execution function, a corresponding coroutine keyword await is added to obtain coroutine code data along the lines of await sleep(1000), so that when the coroutine code data runs to the task function where this function is located, the task corresponding to that task function is considered a task that may be blocked, and the task is moved between task queues so that another task is executed first.
For another example, if it is determined by static analysis that the function block() in the multithreaded code data is a blocking function, a corresponding coroutine keyword yield is added to the function to obtain the coroutine code data yield block(). If another function call_block() calls the blocking function block(), the function call_block() is also regarded as a blocking function, and a corresponding coroutine keyword yield is added to obtain the coroutine code data yield call_block(). Therefore, during the execution of the coroutine code data, when a coroutine keyword is reached, the task corresponding to the task function where the keyword is located is judged to be blocked, the running of the task is suspended, and execution is switched to another task.
The coroutine keywords supported by different programming languages also differ; for example, the Python language supports the yield and await keywords. Meanwhile, the coroutine keywords added for different types of task functions also differ; for example, the await coroutine keyword is added for deferred-execution functions, and the yield coroutine keyword is added for blocking functions. Of course, a specific programming language may have other types of task functions and other corresponding coroutine keywords that can be added, which are not specifically limited herein. The foregoing is merely exemplary and is not to be construed as limiting the scope of application of the present disclosure.
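A sketch of this keyword-insertion step, again assuming Python and a line-based source rewrite driven by the ast module; a real converter would also manage imports and rewrite every caller of a converted function transitively, as described above:

```python
import ast

def convert(source: str) -> str:
    """Illustrative rewrite: time.sleep(n) -> await asyncio.sleep(n),
    marking each containing function as 'async def'."""
    tree = ast.parse(source)
    lines = source.splitlines()
    for func in ast.walk(tree):
        if isinstance(func, ast.FunctionDef):
            for call in ast.walk(func):
                if (isinstance(call, ast.Call)
                        and isinstance(call.func, ast.Attribute)
                        and call.func.attr == "sleep"
                        and getattr(call.func.value, "id", "") == "time"):
                    row = call.lineno - 1          # add the coroutine keyword
                    lines[row] = lines[row].replace("time.sleep", "await asyncio.sleep")
                    hdr = func.lineno - 1          # promote to a coroutine function
                    if not lines[hdr].lstrip().startswith("async "):
                        lines[hdr] = lines[hdr].replace("def ", "async def ", 1)
    return "\n".join(lines)

print(convert("import time\ndef task_a():\n    time.sleep(1)\n"))
# import time
# async def task_a():
#     await asyncio.sleep(1)
```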
In an exemplary embodiment, the task queues include an execution queue for storing the currently executed task, a blocking queue for storing blocked tasks, and a preparation queue for storing tasks waiting to be executed. As shown in FIG. 5, step S150 includes:
In step S151, during the running of the coroutine code data in a single thread, whether the task located in the execution queue is blocked is judged according to the coroutine keywords.
In step S153, if the task in the execution queue is blocked, execution of the blocked task in the execution queue is suspended, the blocked task is moved to the blocking queue by the scheduler, and the task in the preparation queue is moved to the execution queue to execute the task moved to the execution queue.
In the running process of the coroutine code data, when execution reaches a task corresponding to a task function containing a coroutine keyword, the task is considered to be blocked; the task is therefore moved from the execution queue to the blocking queue by the scheduler, and a task waiting in the preparation queue is moved to the execution queue, so that the task moved to the execution queue is executed, realizing the switching execution of the plurality of tasks.
There may be one or more tasks in the preparation queue. In a specific embodiment, tasks are fetched from the preparation queue and moved to the execution queue in the order in which they were placed in the preparation queue, i.e., in first-in-first-out order. In another embodiment, tasks are fetched according to their priorities in the preparation queue, i.e., the task with the highest priority is fetched first and moved to the execution queue to be executed.
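The two fetch orders can be sketched in Python as follows; the task names and priority values are illustrative only:

```python
import heapq
from collections import deque

# First-in-first-out preparation queue: fetch tasks in arrival order.
prep_fifo = deque(["task A", "task B"])
next_task = prep_fifo.popleft()              # -> "task A"

# Priority preparation queue: fetch the highest-priority task first
# (here a smaller number denotes a higher priority).
prep_prio = [(2, "task A"), (1, "task B")]
heapq.heapify(prep_prio)
_, next_task = heapq.heappop(prep_prio)      # -> "task B"
```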
In an exemplary embodiment, as shown in FIG. 6, after step S151, the method further includes:
Step S161, monitoring the tasks located in the blocking queue.
In step S162, if it is monitored that the blocking condition causing the blocking of the task located in the blocking queue is eliminated, the task located in the blocking queue is moved to the preparation queue by the scheduler to wait for resuming the execution of the blocked task.
After the blocked task is moved to the blocking queue, if the condition causing the task to be blocked is eliminated, the task is moved from the blocking queue to the preparation queue to wait for its execution to be resumed. The reason a task is blocked may be that the task calls another task during its execution; the scheduler therefore moves the calling task to the blocking queue and moves the called task to the execution queue to execute it. When the execution of the called task is completed, or an exception occurs, the condition causing the task to be blocked is considered eliminated, so the task is moved from the blocking queue back to the preparation queue to wait for its execution to be resumed.
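The queue transitions of steps S151 through S162 can be sketched as a single-threaded loop in Python; the generator-based tasks, the probe callable that reports whether a blocking condition has cleared, and the timing are all illustrative assumptions rather than the exact implementation of the present disclosure:

```python
from collections import deque
import time

def io_task():
    print("io_task: start, then block")
    deadline = time.monotonic() + 0.01
    yield lambda: time.monotonic() >= deadline   # probe: has the block cleared?
    print("io_task: resumed and finished")

def cpu_task():
    print("cpu_task: ran to completion")
    yield from ()                                # a generator with no suspension

execution = deque([io_task()])                   # currently executed task
preparation = deque([cpu_task()])                # tasks waiting to be executed
blocking = []                                    # blocked tasks with their probes

while execution or preparation or blocking:
    if not execution and preparation:            # refill the execution queue
        execution.append(preparation.popleft())
    if execution:
        task = execution.popleft()
        try:
            probe = next(task)                   # S151: run until a blocking point
            blocking.append((task, probe))       # S153: move to the blocking queue
        except StopIteration:
            pass                                 # S252: completed, leaves the queues
    for task, probe in blocking[:]:              # S161: monitor the blocking queue
        if probe():                              # S162: blocking condition eliminated
            blocking.remove((task, probe))
            preparation.append(task)
```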
The following is an example of the switching, between different task queues, of a task corresponding to a task function containing the yield coroutine keyword:
During the single-threaded execution of the coroutine code data, a function containing the yield coroutine keyword suspends execution of its generator when the yield is reached. The value of the expression following the yield coroutine keyword is returned to the caller of the generator. The yield coroutine keyword actually returns an IteratorResult object with two attributes, value and done, where the value attribute is the result of evaluating the expression where the yield coroutine keyword is located, and done indicates whether the generator function has run to completion.
In the execution process, once the yield coroutine keyword is encountered, the function containing the yield coroutine keyword is suspended, and the scheduler moves the task corresponding to the task function where the function is located to the blocking queue and executes the task moved from the preparation queue to the execution queue.
When next() of the generator is called, the suspended function resumes execution. Calling next() of the generator corresponds to moving the blocked task from the preparation queue back to the execution queue after the condition that caused the task to block has been eliminated. Of course, whether next() of the generator is called depends on whether the condition causing the task blocking has been eliminated.
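A minimal Python illustration of this suspend/resume mechanism; note that Python's next() returns the yielded value directly and signals completion with a StopIteration exception, whereas the value/done attribute pair described above is how JavaScript generators report the same information:

```python
def blocking_task():
    print("task: start")
    result = yield "blocked on I/O"   # suspend; the yielded value goes to the caller
    print(f"task: resumed with {result!r}")

gen = blocking_task()
value = next(gen)                     # runs until the yield, then suspends
print(f"scheduler sees: {value}")     # here the scheduler would run other tasks
try:
    gen.send("io payload")            # the blocking condition cleared: resume
except StopIteration:
    print("task: finished")
```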
In an exemplary embodiment, as shown in FIG. 7, step S150 includes:
step S251, performing execution status monitoring on the tasks located in the execution queue.
In step S252, if it is monitored that the task in the execution queue is completed, the task whose execution is completed is removed from the execution queue by the scheduler, and the task in the preparation queue is moved to the execution queue to execute the task moved to the execution queue.
By removing a task that has completed execution from the execution queue, the other tasks waiting in the preparation queue can then be executed.
In an exemplary embodiment, before step S151, the method further includes:
According to the initial execution order of the plurality of tasks in the coroutine code data, allocating the task to be executed first among the plurality of tasks to the execution queue, and allocating the other tasks among the plurality of tasks to the preparation queue.
In an exemplary embodiment, as shown in FIG. 8, before step S150, the method further includes:
Compiling the coroutine code data so as to meet the programming-language requirement imposed on the running code data by the running environment in which the single thread is located.
Step S150 includes:
Step S250, scheduling and controlling, by the scheduler, the switching execution of the plurality of tasks in a single thread of the running environment according to the compiled coroutine code data.
For example, as mentioned above, in order to run code data in a browser, the code data needs to be compiled into code data in the JavaScript language. Thus, in order to meet the programming-language requirements of the actual running environment, the coroutine code data needs to be compiled so that it can run in the actual single-threaded environment.
By compiling the coroutine code data, code data of various programming languages can be run in a given running environment, giving the scheme a wide application range.
The following is an apparatus embodiment of the present disclosure, which may be used to execute the task scheduling method embodiment executed by the server 200 of the present disclosure. For details not disclosed in the apparatus embodiment of the present disclosure, please refer to the task scheduling method embodiments of the present disclosure.
FIG. 9 is a block diagram illustrating a task scheduling device according to an exemplary embodiment, which may be used in the server 200 of the implementation environment shown in FIG. 1 to perform all or part of the steps of the task scheduling method shown in any of the above method embodiments. As shown in FIG. 9, the task scheduling device includes:
An acquisition module 110 configured to perform: acquiring coroutine code data obtained by performing program syntax conversion on multithreaded code data, wherein the multithreaded code data is used for executing a plurality of tasks in parallel by multiple threads.
An adding module 130, connected to the acquisition module 110 and configured to perform: adding a plurality of tasks to the scheduler.
A scheduling control module 150, connected to the adding module 130 and configured to perform: controlling, by the scheduler, the switching execution of the plurality of tasks in a single thread according to the coroutine code data.
The implementation process of the functions and roles of each module in the above device is specifically shown in the implementation process of the corresponding steps in the task scheduling method, and will not be described herein.
In an exemplary embodiment, the multithreaded code data includes task functions for the multiple threads to execute the plurality of tasks in parallel, and the task scheduling device further includes:
A static analysis module configured to perform: performing static analysis on the multithreaded code data, and determining the type corresponding to each task function in the multithreaded code data through the static analysis.
A coroutine keyword adding module configured to perform: adding corresponding coroutine keywords into the task functions according to the type corresponding to each task function, so as to obtain the coroutine code data; during the running of the coroutine code data in a single thread, the scheduler switches the tasks corresponding to the task functions among different task queues in the single thread through the coroutine keywords, thereby realizing the switching execution of the plurality of tasks.
After the static analysis module and the coroutine keyword adding module perform their corresponding operations, the acquisition module 110 may directly acquire the corresponding coroutine code data.
In an exemplary embodiment, the task queues include an execution queue for storing the currently executed task, a blocking queue for storing blocked tasks, and a preparation queue for storing tasks waiting to be executed, and the scheduling control module includes:
A blocking judging unit configured to perform: during the running of the coroutine code data in a single thread, judging whether the task located in the execution queue is blocked according to the coroutine keywords.
A schedule switching unit configured to perform: if the task in the execution queue is blocked, suspending executing the blocked task in the execution queue, moving the blocked task to the blocking queue through the scheduler, and moving the task in the preparation queue to the execution queue to execute the task moved to the execution queue.
In an exemplary embodiment, the task scheduling device further includes:
a first monitoring module configured to perform: tasks located in the blocking queue are monitored.
A first transfer module configured to perform: if the blocking condition causing the blocking of the task located in the blocking queue is monitored to be eliminated, the task located in the blocking queue is moved to a preparation queue by a scheduler to wait for resuming the execution of the blocked task.
In an exemplary embodiment, the task scheduling device further includes:
An allocation module configured to perform: according to the initial execution order of the plurality of tasks in the coroutine code data, allocating the task to be executed first among the plurality of tasks to the execution queue, and allocating the other tasks among the plurality of tasks to the preparation queue.
In an exemplary embodiment, the scheduling control module further includes:
a monitoring unit configured to perform: and performing execution state monitoring on the tasks in the execution queue.
A transfer unit configured to perform: if the task in the execution queue is monitored to be completed, the task in the execution queue after completion of execution is removed from the execution queue by the scheduler, and the task in the preparation queue is moved to the execution queue to execute the task moved to the execution queue.
In an exemplary embodiment, the task scheduling device further includes:
A compilation module configured to perform: compiling the coroutine code data so as to meet the programming-language requirement imposed on the running code data by the running environment in which the single thread is located.
The scheduling control module comprises:
A scheduling control unit configured to perform: scheduling and controlling, by the scheduler, the switching execution of the plurality of tasks in a single thread of the running environment according to the compiled coroutine code data.
The implementation process of the functions and roles of each module in the above device is specifically shown in the implementation process of the corresponding steps in the task scheduling method, and will not be described herein.
It is to be understood that these modules may be implemented in hardware, software, or a combination of both. When implemented in hardware, these modules may be implemented as one or more hardware modules, such as one or more application specific integrated circuits. When implemented in software, the modules may be implemented as one or more computer programs executing on one or more processors, such as a program stored in the memory 250 and executed by the central processor 270 of FIG. 2.
The present disclosure also provides a task scheduling device, as shown in FIG. 10, which may be used in the server 200 of the implementation environment shown in FIG. 1, where the task scheduling device 1000 includes:
a processor 1001; and
a memory 1002, on which computer readable instructions are stored which, when executed by the processor 1001, implement the task scheduling method in any of the task scheduling method embodiments described above, wherein the processor 1001, during execution, reads the computer readable instructions in the memory 1002 via the bus/communication line 1003.
The specific manner in which the processor of the apparatus in this embodiment performs the operations has been described in detail in relation to this embodiment of the task scheduling method and will not be described in detail here.
Optionally, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the task scheduling method in any of the task scheduling method embodiments above. The computer readable storage medium includes, for example, the memory 250 storing a computer program executable by the central processor 270 of the server 200 to implement the task scheduling method described above.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings and described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.