Disclosure of Invention
The exemplary embodiment of the disclosure provides a task scheduling method and device of a distributed storage system, which are used to improve the processing efficiency of latency-sensitive tasks.
A first aspect of the present disclosure provides a task scheduling method of a distributed storage system, the method being applied to the distributed storage system, the distributed storage system including a plurality of multi-core processors, each of the multi-core processors including a plurality of processor cores, the method comprising:
responding to a task scheduling request, and determining a task to be executed based on the task scheduling request;
determining a target task scheduling domain corresponding to the task to be executed in a task scheduling domain of the distributed storage system according to the type of the task to be executed, wherein any one task scheduling domain comprises at least one processor core of a multi-core processor in the distributed storage system, and processor cores included in different task scheduling domains are different;
and storing the task to be executed in a task queue corresponding to the target task scheduling domain so that a processor core of the target task scheduling domain executes the task to be executed in the task queue.
In this embodiment, the target task scheduling domain is determined according to the type of the task to be executed, the task to be executed is then stored in the task queue of the target task scheduling domain, and the task to be executed is then executed by the processor core corresponding to the target task scheduling domain. Therefore, each processor core in the scheme only processes the task types of its own task scheduling domain, so that the processor cores of different task scheduling domains do not influence each other, the processing of latency-sensitive tasks is prevented from being affected, and the processing efficiency of latency-sensitive tasks is improved.
In one embodiment, the determining, according to the type of the task to be executed, a target task scheduling domain corresponding to the task to be executed in a task scheduling domain of the distributed storage system includes:
and determining a target task scheduling domain corresponding to the type of the task to be executed by utilizing the preset corresponding relation between the task type and the task scheduling domain.
In one embodiment, the task scheduling domains include a front-end task scheduling domain, a back-end task scheduling domain, and a background task scheduling domain, wherein:
the front-end task scheduling domain is responsible for processing part or all of a storage protocol task, a quality of service QoS task of a user input/output IO, a user IO query read cache task, a user IO write-in buffer task and a read-write cache disk task;
the back-end task scheduling domain is responsible for processing part or all of a read-write capacity disk task, a data transmission task between storage nodes, a log disk-drop task and a data reconstruction task;
the background task scheduling domain is responsible for processing part or all of management surface interaction tasks, periodic write buffer area disk-dropping tasks, cluster heartbeat keep-alive tasks, disk data inspection tasks and disk event processing tasks.
In this embodiment, three types of task scheduling domains are divided, and each task scheduling domain has corresponding task types, so that the processor cores of different task scheduling domains do not affect each other, the processing of latency-sensitive tasks is prevented from being affected, and the processing efficiency of latency-sensitive tasks is further improved.
In one embodiment, the storing the task to be performed in a task queue corresponding to the target task scheduling domain includes:
determining each task queue corresponding to the target task scheduling domain by utilizing the corresponding relation between the task scheduling domain and the queue;
and obtaining a target task queue from the task queues based on the priority of the task to be executed, and storing the task to be executed to the target task queue.
In this embodiment, the task to be executed is stored in the target task queue corresponding to its priority, so that the processing efficiency of latency-sensitive tasks is further improved.
In one embodiment, the obtaining a target task queue in each task queue based on the priority of the task to be executed includes:
and determining a task queue corresponding to the priority of the task to be executed by utilizing the corresponding relation between the priority of the task to be executed and the task queue, and determining the task queue as the target task queue.
In this embodiment, the task queue corresponding to the priority of the task to be executed is obtained by utilizing the corresponding relation between the priority of the task to be executed and the task queue, so that latency-sensitive tasks are ensured to be processed first, further improving the processing efficiency of latency-sensitive tasks.
In one embodiment, the storing the task to be executed in a task queue corresponding to the target task scheduling domain, so that a processor core of the target task scheduling domain executes the task to be executed in the task queue, includes:
and storing the task to be executed in a task queue corresponding to the target task scheduling domain, so that each processor core in the target task scheduling domain can execute the task to be executed in the task queue by using a corresponding thread.
In this embodiment, a thread is created for each processor core in each task scheduling domain, so that the threads in the same task scheduling domain do not need to be switched between different processor cores, thereby further improving the processing efficiency of latency-sensitive tasks.
A second aspect of the present disclosure provides a task scheduling apparatus of a distributed storage system, the apparatus for performing a task scheduling method of the distributed storage system, the method being applied to the distributed storage system, the distributed storage system including a plurality of multi-core processors each including a plurality of processor cores, the apparatus comprising:
a task to be executed determining module, configured to respond to a task scheduling request and determine a task to be executed based on the task scheduling request;
a target task scheduling domain determining module, configured to determine, according to the type of the task to be executed, a target task scheduling domain corresponding to the task to be executed in a task scheduling domain of the distributed storage system, where any one task scheduling domain includes at least one processor core of a multi-core processor in the distributed storage system, and processor cores included in different task scheduling domains are different;
and a storage module, configured to store the task to be executed in a task queue corresponding to the target task scheduling domain, so that the processor core of the target task scheduling domain can execute the task to be executed in the task queue.
In one embodiment, the target task scheduling domain determining module is specifically configured to:
and determining a target task scheduling domain corresponding to the type of the task to be executed by utilizing the preset corresponding relation between the task type and the task scheduling domain.
In one embodiment, the task scheduling domains include a front-end task scheduling domain, a back-end task scheduling domain, and a background task scheduling domain, wherein:
the front-end task scheduling domain is responsible for processing part or all of a storage protocol task, a quality of service QoS task of a user input/output IO, a user IO query read cache task, a user IO write-in buffer task and a read-write cache disk task;
the back-end task scheduling domain is responsible for processing part or all of a read-write capacity disk task, a data transmission task between storage nodes, a log disk-drop task and a data reconstruction task;
the background task scheduling domain is responsible for processing part or all of management surface interaction tasks, periodic write buffer area disk-dropping tasks, cluster heartbeat keep-alive tasks, disk data inspection tasks and disk event processing tasks.
In one embodiment, the storage module is specifically configured to:
determining each task queue corresponding to the target task scheduling domain by utilizing the corresponding relation between the task scheduling domain and the queue;
and obtaining a target task queue from the task queues based on the priority of the task to be executed, and storing the task to be executed to the target task queue.
In one embodiment, when obtaining a target task queue from the task queues based on the priority of the task to be executed, the storage module is specifically configured to:
and determining a task queue corresponding to the priority of the task to be executed by utilizing the corresponding relation between the priority of the task to be executed and the task queue, and determining the task queue as the target task queue.
In one embodiment, the storage module is specifically configured to:
and storing the task to be executed in a task queue corresponding to the target task scheduling domain, so that each processor core in the target task scheduling domain can execute the task to be executed in the task queue by using a corresponding thread.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to perform the method as described in the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer storage medium storing a computer program for performing the method according to the first aspect.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, are intended to be within the scope of this disclosure.
The term "and/or" in the embodiments of the present disclosure describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The application scenario described in the embodiments of the present disclosure is intended to more clearly describe the technical solutions of the embodiments of the present disclosure, and does not constitute a limitation on the technical solutions provided by the embodiments of the present disclosure. As a person of ordinary skill in the art can appreciate, with the emergence of new application scenarios, the technical solutions provided by the embodiments of the present disclosure are equally applicable to similar technical problems. In the description of the present disclosure, unless otherwise indicated, the meaning of "a plurality" is two or more.
The present disclosure provides a task scheduling method of a distributed storage system, in which a target task scheduling domain is determined according to the type of a task to be executed, the task to be executed is then stored in a task queue of the target task scheduling domain, and the task to be executed is then executed by a processor core corresponding to the target task scheduling domain. For example, as shown in FIG. 2, the processor core corresponding to scheduling domain 1 processes only tasks of type A (e.g., latency-sensitive tasks) and the processor core corresponding to scheduling domain 2 processes only tasks of type B (e.g., time-consuming tasks such as garbage collection). Therefore, each processor core in this scheme only processes the task types of its own task scheduling domain, so that the processor cores of different task scheduling domains do not affect each other, the processing of latency-sensitive tasks is prevented from being affected, and the processing efficiency of latency-sensitive tasks is improved.
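For illustration only, the isolation described above can be pictured as each processor core draining only the task queue of its own scheduling domain. The following is a minimal, hedged Python sketch; the domain names, the callable tasks, and the queue layout are assumptions made for explanation, not the claimed implementation.

```python
from collections import deque

# Hypothetical per-domain task queues; cores of one domain never touch another domain's queue.
domain_queues = {
    "front_end": deque(),   # latency-sensitive user IO tasks (type A in FIG. 2)
    "back_end": deque(),    # disk and network persistence tasks
    "background": deque(),  # periodic and management tasks (e.g. type B, garbage collection)
}

def core_worker(domain: str) -> None:
    """Worker loop for a processor core bound to one scheduling domain: it only
    executes tasks from that domain's queue, so time-consuming tasks of other
    domains never delay the latency-sensitive domain."""
    queue = domain_queues[domain]
    while queue:
        task = queue.popleft()
        task()  # tasks are modeled as plain callables in this sketch
```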
In order to better understand the technical solution of the present disclosure, the distributed storage system in the present disclosure is described first. Fig. 3 is an architecture diagram of the distributed storage system, which mainly includes three major parts, namely a data plane layer, a management plane layer and a hardware resource layer. The three parts are described below:
(1) The data plane layer mainly comprises a storage protocol layer, a storage service layer and a storage persistence layer. The storage protocol layer is mainly used for block storage, file storage, object storage and the like. The storage service layer is mainly used for address indexing, cache service, log service, snapshot, erasure compression and the like. The storage persistence layer is mainly used for data reconstruction, copy storage and the like.
(2) The management plane layer is mainly used for cluster management, equipment authentication and the like.
(3) The hardware resource layer mainly comprises hardware resources such as a CPU (Central Processing Unit), a memory, a disk and a network.
For better understanding of the technical solutions of the present disclosure, the following explains technical terms in the present disclosure:
multi-core processor: multiple complete compute engines (processor cores) are integrated in one processor.
The following describes aspects of the present disclosure in detail with reference to the accompanying drawings.
As shown in fig. 4, an application scenario of a task scheduling method of a distributed storage system includes a server 410 and terminal devices 420; in fig. 4, one terminal device 420 is taken as an example, and the number of terminal devices 420 is not limited in practice. The terminal device 420 may be a mobile phone, a tablet computer, a personal computer, etc. The server 410 may be implemented by a single server or by a plurality of servers. The server 410 may be implemented by a physical server or may be implemented by a virtual server.
In one possible application scenario, a user sends a task scheduling request to the server 410 through the terminal device 420. After receiving the task scheduling request, the server 410 responds to it and determines a task to be executed based on the request. The server 410 then determines, according to the type of the task to be executed, a target task scheduling domain corresponding to the task to be executed in the task scheduling domains of the distributed storage system, and stores the task to be executed in a task queue corresponding to the target task scheduling domain, so that a processor core of the target task scheduling domain executes the task to be executed in the task queue. The execution result of the task to be executed is then sent to the terminal device 420 for display.
Any one task scheduling domain comprises at least one processor core of a multi-core processor in the distributed storage system, and the processor cores included in different task scheduling domains are different.
FIG. 5 is a flow chart of a task scheduling method of the distributed storage system. The method is applied to the distributed storage system, the distributed storage system comprises a plurality of multi-core processors, and each multi-core processor comprises a plurality of processor cores. The method may comprise the following steps:
Step 501: responding to a task scheduling request, and determining a task to be executed based on the task scheduling request;
The task scheduling request comprises the task to be executed.
Step 502: determining a target task scheduling domain corresponding to the task to be executed in a task scheduling domain of the distributed storage system according to the type of the task to be executed, wherein any one task scheduling domain comprises at least one processor core of a multi-core processor in the distributed storage system, and processor cores included in different task scheduling domains are different;
the number of processor cores corresponding to each task scheduling domain is preset, for example, 4 processor cores responsible for storage in the distributed storage system are respectively processor core 1, processor core 2, processor core 3 and processor core 4. Processor core 1 and processor core 2 may be partitioned into task scheduling domain 1, processor core 3 into task scheduling domain 2, and processor core 4 into task scheduling domain 3. Note that, the number of processor cores corresponding to a specific task scheduling domain may be set according to actual situations, and the embodiment is not limited herein.
In one embodiment, the target task scheduling domain corresponding to the task to be performed is determined by:
and determining a target task scheduling domain corresponding to the type of the task to be executed by utilizing the preset corresponding relation between the task type and the task scheduling domain. Table 1 shows the correspondence between the task type and the task scheduling domain:
| Task type | Task scheduling domain |
| --- | --- |
| Task type 1 | Front-end task scheduling domain |
| Task type 2 | Back-end task scheduling domain |
| Task type 3 | Background task scheduling domain |
| … | … |

TABLE 1
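Such a preset corresponding relation can be held as a simple lookup table. The sketch below mirrors Table 1 with placeholder task-type names (assumptions for illustration only):

```python
# Preset corresponding relation between the task type and the task scheduling domain (cf. Table 1).
TASK_TYPE_TO_DOMAIN = {
    "task_type_1": "front-end task scheduling domain",
    "task_type_2": "back-end task scheduling domain",
    "task_type_3": "background task scheduling domain",
}

def determine_target_domain(task_type: str) -> str:
    """Determine the target task scheduling domain corresponding to the type of the task."""
    return TASK_TYPE_TO_DOMAIN[task_type]
```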
Next, the task scheduling domains in this embodiment are described in detail. Fig. 6 is a structure diagram of the task scheduling domains, which mainly include the following three types:
1. Front-end task scheduling domain
The front-end task scheduling domain is responsible for processing tasks such as a storage protocol task, a quality of service QoS task of user input/output IO, a user IO query read cache task, a user IO write-in buffer task, a read-write cache disk task and the like.
The front-end task scheduling domain in this embodiment is responsible for processing tasks directly related to user IO; after such a task is executed, the execution result can be returned directly in response to the user IO. The front-end task scheduling domain typically handles latency-sensitive tasks, such as the user IO query read cache task and the user IO write-in buffer task described above.
2. Back-end task scheduling domain
The back-end task scheduling domain is responsible for processing read-write capacity disk tasks, data transmission tasks among storage nodes, log disk-drop tasks, data reconstruction tasks and the like.
The back-end task scheduling domain in this embodiment is responsible for handling tasks related to back-end disk dropping and back-end network transmission; these are also typically latency-sensitive tasks, such as the data transmission tasks between storage nodes and the log disk-drop tasks described above.
3. Background task scheduling domain
The background task scheduling domain is responsible for processing management surface interaction tasks, periodically writing buffer area disk-dropping tasks, cluster heartbeat keep-alive tasks, disk data inspection tasks, disk event processing tasks and the like.
The background task scheduling domain in this embodiment is responsible for processing tasks that interact with the management plane, as well as tasks that are activated to run only when certain conditions are met; these are usually latency-insensitive tasks, for example the periodic write buffer area disk-dropping task described above.
It should be noted that the division of the task scheduling domains and the tasks that each task scheduling domain is responsible for processing in this embodiment are only for illustration and are not limiting; they may be set according to actual situations, and this embodiment is not limited herein.
Step 503: and storing the task to be executed in a task queue corresponding to the target task scheduling domain so that a processor core of the target task scheduling domain executes the task to be executed in the task queue.
In one embodiment, step 503 may be embodied as:
determining each task queue corresponding to the target task scheduling domain by utilizing the corresponding relation between the task scheduling domain and the queue; and obtaining a target task queue from the task queues based on the priority of the task to be executed, and storing the task to be executed to the target task queue.
The target task queue may be determined as follows: determining a task queue corresponding to the priority of the task to be executed by utilizing the corresponding relation between the priority of the task to be executed and the task queue, and determining the task queue as the target task queue. Table 2 shows the correspondence between the priorities of the tasks to be executed and the task queues:
| Priority of the task to be executed | Task queue |
| --- | --- |
| High | High-level task queue |
| Medium | Medium-level task queue |
| Low | Low-level task queue |
| … | … |

TABLE 2
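Combining the domain's task queues with Table 2, storing the task then amounts to appending it to the queue that matches its priority. The sketch below is illustrative; the queue layout and priority labels are assumptions rather than the claimed implementation.

```python
from collections import deque

# Each task scheduling domain owns one task queue per priority level (cf. Table 2).
DOMAIN_QUEUES = {
    "front-end task scheduling domain": {
        "high": deque(), "medium": deque(), "low": deque(),
    },
    # the back-end and background domains would be laid out in the same way
}

def store_task(target_domain: str, priority: str, task) -> None:
    """Obtain the target task queue of the target domain based on the task's
    priority, then store the task to be executed into that queue."""
    DOMAIN_QUEUES[target_domain][priority].append(task)
```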
As shown in fig. 7, which is a schematic diagram of the task queues corresponding to task scheduling domain A, the task queues corresponding to task scheduling domain A are a high-level task queue, a medium-level task queue and a low-level task queue. The tasks of the queues are executed in the following order: high-level task queue, then medium-level task queue, then low-level task queue. The tasks in the medium-level task queue are executed only after all the tasks in the high-level task queue have been executed, and the tasks in the low-level task queue are executed only after the tasks in the medium-level task queue have been executed. Each task queue follows the first-in first-out principle.
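A hedged sketch of this execution order, reusing the per-domain queue layout assumed above: the processor core first drains the high-level queue, then the medium-level queue, then the low-level queue, and each queue is consumed first-in first-out.

```python
def run_domain_tasks(queues: dict) -> None:
    """Execute the tasks of one scheduling domain strictly in priority order:
    all high-level tasks first, then medium-level, then low-level; within each
    queue the tasks are taken first-in first-out."""
    for level in ("high", "medium", "low"):
        queue = queues[level]
        while queue:
            task = queue.popleft()  # FIFO within a single task queue
            task()
```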
In order to avoid frequent thread context switching and further improve the processing efficiency of latency-sensitive tasks, in one embodiment, the task to be executed is stored in a task queue corresponding to the target task scheduling domain, so that each processor core in the target task scheduling domain executes the tasks to be executed in the task queue by using its own corresponding thread. For example, if the number of processor cores in task scheduling domain 1 is 2, the number of threads in task scheduling domain 1 is also 2, i.e., each processor core in the task scheduling domain has a corresponding thread.
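One possible way to realize the one-thread-per-core arrangement is sketched below, under the assumption of a Linux host where `os.sched_setaffinity` pins the calling thread to a given core so that it never migrates between processor cores; the function name and the `drain` callback are illustrative assumptions only.

```python
import os
import threading
from typing import Callable

def start_domain_threads(cores: set, drain: Callable[[], None]) -> list:
    """Create exactly one worker thread per processor core of a scheduling
    domain and pin each thread to its core, so threads of the same domain
    never need to switch between different processor cores."""
    threads = []
    for core in cores:
        def worker(core=core):
            # Pin the calling thread to a single core (Linux-specific call).
            os.sched_setaffinity(0, {core})
            drain()  # execute the tasks waiting in this domain's task queues
        thread = threading.Thread(target=worker, daemon=True)
        thread.start()
        threads.append(thread)
    return threads
```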
For a further understanding of the technical solution of the present disclosure, a detailed description is given below with reference to fig. 8; the flow may include the following steps:
Step 801: responding to a task scheduling request, and determining a task to be executed based on the task scheduling request;
Step 802: determining a target task scheduling domain corresponding to the type of the task to be executed by utilizing the preset corresponding relation between the task type and the task scheduling domain;
Step 803: determining each task queue corresponding to the target task scheduling domain by utilizing the corresponding relation between the task scheduling domain and the queue;
Step 804: determining a task queue corresponding to the priority of the task to be executed by utilizing the corresponding relation between the priority of the task to be executed and the task queue, and determining the task queue as the target task queue;
Step 805: storing the task to be executed to the target task queue so that the processor core of the target task scheduling domain executes the task to be executed in the target task queue.
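Putting steps 801 to 805 together, the whole scheduling path reduces to a few table lookups followed by an enqueue. The sketch below is illustrative only; the task types, domain names and priority labels are placeholder assumptions, not the claimed implementation.

```python
from collections import deque

# Step 802: preset corresponding relation between task type and task scheduling domain.
TYPE_TO_DOMAIN = {"user_io_read_cache": "front_end", "log_flush": "back_end"}

# Step 803: corresponding relation between a task scheduling domain and its task queues.
DOMAIN_TO_QUEUES = {
    "front_end": {"high": deque(), "medium": deque(), "low": deque()},
    "back_end": {"high": deque(), "medium": deque(), "low": deque()},
}

def schedule(task, task_type: str, priority: str) -> None:
    # Step 801: the task to be executed has been determined from the scheduling request.
    # Step 802: determine the target task scheduling domain from the task type.
    domain = TYPE_TO_DOMAIN[task_type]
    # Steps 803-804: pick the target task queue of that domain by the task's priority.
    target_queue = DOMAIN_TO_QUEUES[domain][priority]
    # Step 805: store the task so that the domain's processor cores execute it from the queue.
    target_queue.append(task)

# Usage: a latency-sensitive read-cache lookup is routed to the front-end domain.
schedule(lambda: print("query read cache"), "user_io_read_cache", "high")
```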
Based on the same disclosure concept, the task scheduling method of the distributed storage system as described above in the disclosure may also be implemented by a task scheduling device of the distributed storage system. The task scheduling device of the distributed storage system has similar effects to those of the foregoing method, and will not be described herein.
Fig. 9 is a schematic structural view of a task scheduling device of a distributed storage system according to an embodiment of the present disclosure.
As shown in fig. 9, a task scheduling device 900 of the distributed storage system of the present disclosure may include a task to be performed determination module 910, a target task scheduling domain determination module 920, and a storage module 930.
The task to be executed determining module 910 is configured to respond to a task scheduling request, and determine a task to be executed based on the task scheduling request;
A target task scheduling domain determining module 920, configured to determine, according to the type of the task to be executed, a target task scheduling domain corresponding to the task to be executed in a task scheduling domain of the distributed storage system, where any one task scheduling domain includes at least one processor core of a multi-core processor in the distributed storage system, and processor cores included in different task scheduling domains are different;
and a storage module 930, configured to store the task to be executed in a task queue corresponding to the target task scheduling domain, so that a processor core of the target task scheduling domain executes the task to be executed in the task queue.
In one embodiment, the target task scheduling domain determining module 920 is specifically configured to:
and determining a target task scheduling domain corresponding to the type of the task to be executed by utilizing the preset corresponding relation between the task type and the task scheduling domain.
In one embodiment, the task scheduling domains include a front-end task scheduling domain, a back-end task scheduling domain, and a background task scheduling domain, wherein:
the front-end task scheduling domain is responsible for processing part or all of a storage protocol task, a quality of service QoS task of a user input/output IO, a user IO query read cache task, a user IO write-in buffer task and a read-write cache disk task;
the back-end task scheduling domain is responsible for processing part or all of a read-write capacity disk task, a data transmission task between storage nodes, a log disk-drop task and a data reconstruction task;
the background task scheduling domain is responsible for processing part or all of management surface interaction tasks, periodic write buffer area disk-dropping tasks, cluster heartbeat keep-alive tasks, disk data inspection tasks and disk event processing tasks.
In one embodiment, the storage module 930 is specifically configured to:
determining each task queue corresponding to the target task scheduling domain by utilizing the corresponding relation between the task scheduling domain and the queue;
and obtaining a target task queue from the task queues based on the priority of the task to be executed, and storing the task to be executed to the target task queue.
In one embodiment, when obtaining a target task queue from the task queues based on the priority of the task to be executed, the storage module 930 is specifically configured to:
and determining a task queue corresponding to the priority of the task to be executed by utilizing the corresponding relation between the priority of the task to be executed and the task queue, and determining the task queue as the target task queue.
In one embodiment, the storage module 930 is specifically configured to:
and storing the task to be executed in a task queue corresponding to the target task scheduling domain, so that each processor core in the target task scheduling domain can execute the task to be executed in the task queue by using a corresponding thread.
Having described a task scheduling method and apparatus of a distributed storage system according to an exemplary embodiment of the present disclosure, next, an electronic device according to another exemplary embodiment of the present disclosure is described.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module" or "system."
In some possible implementations, an electronic device according to the present disclosure may include at least one processor, and at least one computer storage medium. Wherein the computer storage medium stores program code which, when executed by a processor, causes the processor to perform the steps in the task scheduling method of the distributed storage system according to various exemplary embodiments of the present disclosure described above in the present specification. For example, the processor may perform steps 501-503 as shown in FIG. 5.
An electronic device 1000 according to such an embodiment of the present disclosure is described below with reference to fig. 10. The electronic device 1000 shown in fig. 10 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 10, the electronic device 1000 is embodied in the form of a general-purpose electronic device. Components of electronic device 1000 may include, but are not limited to: the at least one processor 1001, the at least one computer storage medium 1002, and a bus 1003 that connects the various system components, including the computer storage medium 1002 and the processor 1001.
Bus 1003 represents one or more of several types of bus structures, including a computer storage media bus or computer storage media controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
Computer storage media 1002 may include readable media in the form of volatile computer storage media, such as random access computer storage media (RAM) 1021 and/or cache storage media 1022, and may further include read only computer storage media (ROM) 1023.
Computer storage media 1002 may also include program/utility 1025 having a set (at least one) of program modules 1024, such program modules 1024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The electronic device 1000 can also communicate with one or more external devices 1004 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 1000, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 1000 to communicate with one or more other electronic devices. Such communication may occur through an input/output (I/O) interface 1005. Also, the electronic device 1000 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through the network adapter 1006. As shown, the network adapter 1006 communicates with other modules of the electronic device 1000 over the bus 1003. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with the electronic device 1000, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In some possible embodiments, aspects of a task scheduling method of a distributed storage system provided by the present disclosure may also be implemented in the form of a program product including program code for causing a computer device to perform the steps of the task scheduling method of a distributed storage system according to various exemplary embodiments of the present disclosure described above when the program product is run on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access computer storage medium (RAM), a read-only computer storage medium (ROM), an erasable programmable read-only computer storage medium (EPROM or flash memory), an optical fiber, a portable compact disc read-only computer storage medium (CD-ROM), an optical computer storage medium, a magnetic computer storage medium, or any suitable combination of the foregoing.
The program product of task scheduling for a distributed storage system of embodiments of the present disclosure may employ a portable compact disk read-only computer storage medium (CD-ROM) and include program code and may run on an electronic device. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on the remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., connected through the internet using an internet service provider).
It should be noted that although several modules of the apparatus are mentioned in the detailed description above, this division is merely exemplary and not mandatory. Indeed, the features and functions of two or more modules described above may be embodied in one module in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module described above may be further divided into a plurality of modules to be embodied.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
It will be apparent to those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk computer storage media, CD-ROM, optical computer storage media, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the disclosure. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable computer storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable computer storage medium produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present disclosure without departing from the spirit or scope of the disclosure. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.