Disclosure of Invention
Technical problem
Currently, mainstream processor systems support simultaneous multithreading (SMT). Simultaneous multithreading refers to the concurrent execution of multiple threads on a single Central Processing Unit (CPU) core; although multiple threads execute simultaneously within a core, their execution contexts are isolated from one another, and no software intervention is required for context switching. Due to the growing gap between the high clock frequency of the processor core and the high latency of external input/output devices, program execution on the core is often stalled because the core cannot execute subsequent instructions while waiting for a series of input/output operations. At that point, the core either waits or is switched to another thread through a context switch; however, thread switching also takes a long time to complete, which reduces the utilization of the core's computing resources.
Currently, resource allocation mainly takes two forms: static allocation and dynamic allocation. Static allocation means that once allocation is complete, the allocated resource is supplied to only one thread; even if that thread cannot use the resource, the resource cannot be transferred to another thread in the meantime. Static allocation is simple to manage, but its allocation efficiency is low. Dynamic allocation means that while threads execute, a scheduling module allocates resources according to the execution state of each thread; this mode can effectively improve resource allocation efficiency. Resources themselves can be divided into those requiring ordered (in-order) release and those allowing unordered release. In the prior art, resources requiring ordered release use the static allocation manner, and resources allowing unordered release use the dynamic allocation manner.
During thread execution, the order in which resources are released cannot be controlled, so the original allocation order of the resources can be disturbed by the release order. For example, suppose the resource scheduler allocates resources 1 and 3 to thread m and resources 2 and 4 to thread n, and the original order of the resources is 1, 2, 3, 4. Because thread n releases resource 2 first and thread m then applies for a new resource, the scheduler may reallocate resource 2 to m; at this point the resource order in m becomes 1, 3, 2, which is clearly inconsistent with the original resource order. If instead the resource scheduler insists on allocating resources strictly in the order 1, 2, 3, 4, then when thread m cannot release resource 1 for some reason, even if thread n releases all of its resources, new resources cannot be allocated until resource 1 is released, and such resource allocation is extremely inefficient.
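The ordering hazard above can be reproduced with a minimal simulation. This is an illustrative sketch only (the variable names and list-based free pool are assumptions, not part of the specification): thread n releases resource 2 first, thread m picks it up, and m's allocation order no longer matches the original resource order.

```python
# Hypothetical simulation of the release-order hazard described above.
free = []            # released resources, in release order
thread_m = [1, 3]    # resources held by thread m, in allocation order
thread_n = [2, 4]    # resources held by thread n, in allocation order

# Thread n releases resource 2 first; it returns to the free list.
free.append(thread_n.pop(0))

# Thread m applies for a new resource and is handed resource 2.
thread_m.append(free.pop(0))

print(thread_m)  # [1, 3, 2] -- no longer matches the original order 1, 2, 3, 4
```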
The existing sharing scheme for ordered resources is a static sharing mode, that is, the ordered resources are managed within each thread. Before a thread executes, the resources it will use are allocated once and fixed according to some principle. During execution, a thread cannot share its private resources with other threads on the same processor even when those other threads are short of resources, which causes the corresponding threads to stall. Clearly, static allocation of resources is inefficient and cannot fully utilize all resources, resulting in reduced overall processor performance. In addition, another problem with this resource management method is that re-partitioning the resources when a thread exits is complicated; to guarantee the correctness of resource allocation, resources are often reallocated only after the pipeline has been drained.
To achieve the capability of executing multiple threads simultaneously, all computational resources must be made visible to all threads while the context of each thread is maintained independently. One of the biggest challenges is how to make all resources dynamically available to different virtual cores so as to improve resource utilization.
Technical scheme for solving problems
The present application is directed to the dynamic shared allocation of resources that must be freed in order (e.g., the instruction reorder buffer (ROB)). At present, such resources are all statically allocated, and each thread can only use the fixed resources pre-allocated to it when it enters the pipeline. Such mechanisms are relatively simple to implement and manage, but the resources cannot be fully utilized, and quality of service (QoS) guarantees for resource allocation are difficult to achieve.
In order to improve the utilization of ordered resources and overcome the defects of the static sharing mode, the present application provides a method and a system for dynamically managing ordered resources based on tokens. The method ensures that ordered resources are dynamically shared among different threads: the resource token management module only needs to manage the quantity of allocable resources for each thread and is not responsible for the order of the resources, while the thread resource management module manages the resource order, thereby solving the disorder problem that arises when ordered resources are reused. Meanwhile, the disclosed method combines a token mechanism to ensure that a thread cannot be allocated resources beyond its limit when it applies for resources again, thereby providing a quality-of-service guarantee on resource use. When resources are released, they are put directly back into the available resource pool, so that when a thread releases its resources and exits, other threads can obtain the released resources without the resources having to be reallocated.
According to one aspect of the invention, a management system for dynamically sharing ordered resources in a multi-threaded system is disclosed, the system comprising: an available resource pool configured to manage available resources; a resource token management module configured to manage the quantity of resources each thread can be allocated; an arbitration module configured to allocate resources to each thread according to the number of available resources, the number of resources each thread can be allocated, and the number of resources each thread applies to be allocated; and thread resource management modules, wherein each thread resource management module manages the resources of one thread and is configured to release the resources of the managed thread to the available resource pool in the order in which the resources were allocated to the managed thread.
In accordance with another aspect of the present invention, a method of dynamically sharing ordered resources in a multi-threaded system is disclosed, the method comprising: managing available resources by an available resource pool; managing the quantity of allocable resources of each thread by a resource token management module; and allocating resources to each thread by an arbitration module according to the number of available resources, the number of resources each thread can be allocated, and the number of resources each thread applies to be allocated, wherein each thread resource management module manages the resources of one thread and is configured to release the resources of the managed thread in the order in which the resources were allocated to the managed thread.
According to another aspect of the invention, a processing apparatus for dynamically sharing ordered resources in a multi-threaded system is disclosed, the apparatus comprising a processing unit and a storage unit storing computer-readable instructions which, when executed by the processing unit, perform the disclosed method for dynamically sharing ordered resources in a multi-threaded system.
Advantageous effects of the invention
The application provides a management system and method capable of dynamically sharing ordered resources, ensuring that resources can be effectively allocated and released in the order in which they were allocated, effectively improving resource utilization, and achieving QoS for resource allocation in combination with a token mechanism.
Detailed Description
Embodiments of the present invention are described in detail below with reference to the accompanying drawings. The same or similar components may be denoted by similar reference numerals although they are shown in different drawings. A detailed description of configurations or processes known in the art may be omitted to avoid obscuring the subject matter of the present invention.
As previously mentioned, some resources require in-order reclamation after use (e.g., the instruction reorder buffer). In an SMT system, the currently known scheme adopts static resource allocation: resources are all allocated to the corresponding threads at the initial stage and are never reallocated during use. In a multi-threaded system, multiple threads share a core's large pool of resources to improve the utilization of those resources. However, such a resource management method does not utilize resources sufficiently: first, the programs running on different threads do not use resources in exactly the same way, so if the resources required by each thread are fixed from the beginning, some resources will sit idle; second, after a thread exits, its resources cannot all be released in time, since the pipeline must be drained before resources can be allocated again, which is inefficient.
The present scheme provides a management system and method for dynamically sharing ordered resources, which can effectively solve the problems encountered in the static allocation of ordered resources described above.
FIG. 1 illustrates an example block diagram of a management system 100 for dynamically sharing ordered resources in a multi-threaded system in accordance with this invention. Referring to FIG. 1, the management system of the present invention includes an available resource pool 101, a resource token management module 102, an arbitration module 103, and thread resource management modules 1041 to 104n, one for each thread.
As shown in FIG. 1, the available resource pool 101 is configured to manage the available resources. In particular, the available resource pool 101 is configured to mark resources as unavailable when they are allocated to threads, and to mark resources as available when they are released by threads. The available resource pool reclaims the resources released by each thread; it does not participate in allocating resources to threads, nor does it manage the order of the resources.
The available resource pool illustrated in FIG. 1 may be implemented with any suitable structure, such as a linked list (List), a vector (Vector), or a first-in-first-out queue (FIFO), as illustrated in FIG. 2. In each clock cycle, the resource token management module 102 finds one or more resources identified as available in the management structure of the available resource pool 101; after one or more of those resources are allocated to a thread, the available resource pool 101 may clear the corresponding valid bit of each allocated resource to 0 to identify it as unavailable. In addition, after a thread releases a resource, the available resource pool 101 may set the corresponding valid bit of the released resource to 1 to identify it as available, and send resource release information to the resource token management module 102 so that the resource may be used by other threads.
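The valid-bit scheme can be sketched in software as follows. This is a minimal illustrative model, not the hardware implementation; the class and method names are assumptions introduced here, and a simple Python list stands in for the bit vector.

```python
class AvailableResourcePool:
    """Sketch of the valid-bit scheme: entry i is 1 when resource i is
    available and 0 when it is allocated (names are illustrative)."""

    def __init__(self, num_resources):
        self.valid = [1] * num_resources  # all resources start out available

    def find_available(self, count):
        # Scan for up to `count` resources whose valid bit is set.
        return [i for i, v in enumerate(self.valid) if v == 1][:count]

    def mark_allocated(self, resources):
        for r in resources:
            self.valid[r] = 0   # valid bit cleared: resource is unavailable

    def mark_released(self, resources):
        for r in resources:
            self.valid[r] = 1   # valid bit set: resource is available again

pool = AvailableResourcePool(4)
pool.mark_allocated([0, 2])
print(pool.find_available(4))  # [1, 3]
```

Note that, as the text above states, the pool tracks only availability; nothing in this structure records which thread released a resource or in what order.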
It will be appreciated by those skilled in the art that the available resource pool 101 may identify resources as available or unavailable by any other suitable means. The available resource pool 101 does not manage or maintain the order of the resources: no matter in what order the threads release resources, the available resource pool 101 can find available resources according to a given algorithm, without considering which thread released which resource at what time.
As shown in FIG. 1, the management system 100 of the present invention further includes a resource token management module 102 configured to manage the number of allocable resources for each thread (i.e., the number of tokens each thread may occupy). When a thread still has a quota of available resources, the resource token management module 102 determines that the thread can participate in the resource allocation subsequently performed by the arbitration module 103. Since the resource quota of each thread can be configured by software, QoS for resource management is realized through different resource ratios. In particular, one possible implementation is that, for two or more threads set with different priorities, the software may allocate different amounts of resources to the different threads based on the priority of each thread. The algorithm by which the resource token management module 102 manages each thread's allocable resource quantity may count the number of occupied tokens upward, rather than decrementing a pool of tokens as in the traditional approach. When a thread is allocated resources from the available resource pool 101, the resource token management module 102 adds the number of tokens allocated this time (i.e., the number of allocated resources) to that thread's count; and each time resources are used up and released by the thread, the resource token management module 102 subtracts the number of released tokens (i.e., the number of released resources) in response to receiving the resource release message sent by the available resource pool 101. One specific implementation of the resource token management module 102 is described in detail below with reference to FIG. 4.
As shown in FIG. 1, the management system 100 of the present invention further includes an arbitration module 103 configured to allocate resources to each thread according to the number of available resources, the number of resources each thread can be allocated, and the number of resources each thread applies to be allocated. As shown in FIG. 1, the arbitration module 103 obtains the quantity of available resources from the available resource pool 101, obtains the quantity of allocable resources of each thread from the resource token management module 102, and obtains each thread's application for available resources from the thread resource management modules 1041 to 104n in order to perform resource allocation. It then feeds the result of the allocation back to the resource token management module 102 and the resource management modules of the corresponding threads, so that the resource token management module 102 can calculate whether the corresponding threads can subsequently participate in resource allocation.
As shown in FIG. 1, the management system 100 further includes thread resource management modules 1041 to 104n for n threads (n is a positive integer greater than 1), where each thread resource management module manages the resources of its corresponding thread and sends resource application requests to the arbitration module 103. Each thread resource management module is configured to release the resources of the managed thread to the available resource pool in the order in which the resources were allocated to that thread. That is, each thread resource management module records the order in which a plurality of resources were allocated to the thread, and when one or more of those resources are released, a later-allocated resource cannot be released before an earlier-allocated one. Specifically, referring to FIG. 3, when a particular thread is allocated resources in the order resource 3, resource 5, resource 1, resource 4, resource 2, then regardless of the order in which the thread actually uses those resources (e.g., the order 4, 1, 5, 2, 3 given in FIG. 3, or any other possible order), the thread ultimately releases them in the same order: resource 3, resource 5, resource 1, resource 4, resource 2. Of course, a thread may also be allocated other numbers of resources.
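The in-order release behavior of a thread resource management module can be sketched as a FIFO over the allocation order. This is an illustrative model under assumed names, not the specification's hardware design: a resource leaves the queue only when every resource allocated before it has already left.

```python
from collections import deque

class ThreadResourceManager:
    """Sketch of a thread resource management module: records allocation
    order and permits release only in that order (illustrative names)."""

    def __init__(self):
        self.alloc_order = deque()   # resources in the order they were allocated
        self.done = set()            # resources the thread has finished using

    def on_allocated(self, resource):
        self.alloc_order.append(resource)

    def mark_done(self, resource):
        self.done.add(resource)

    def release_ready(self):
        # Release the queue head repeatedly while it is finished; a
        # later-allocated resource never overtakes an earlier one.
        released = []
        while self.alloc_order and self.alloc_order[0] in self.done:
            released.append(self.alloc_order.popleft())
        return released

mgr = ThreadResourceManager()
for r in (3, 5, 1, 4, 2):        # allocation order from the FIG. 3 example
    mgr.on_allocated(r)
mgr.mark_done(1)                 # resource 1 finishes first, but resources 3
print(mgr.release_ready())       # [] -- and 5 were allocated earlier
mgr.mark_done(3)
mgr.mark_done(5)
print(mgr.release_ready())       # [3, 5, 1] -- released in allocation order
```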
As described above, the management system shown in FIG. 1 enables ordered resources to be flexibly and dynamically allocated to different threads, thereby implementing QoS for resource allocation and improving resource utilization. In the traditional static allocation manner, the system regards resources 1 to 5 as a single resource block: after resources 1 to 5 are allocated to a particular thread, when some resources in the block have been released but the rest have not, the already-released resources cannot be applied for or used by other threads; other threads can apply for or use them only after all of resources 1 to 5, as a whole resource block, have been released. By contrast, referring to FIG. 3, resources 1 to 5 are not treated as a single resource block: when some resources in the block have been released and the rest have not, the released resources may be applied for or used by other threads, as long as no resource allocated before them remains unreleased. That is, when the quantity of resources a thread may occupy needs to be changed, there is no need to wait for all resources to be released.
FIG. 4 illustrates an example block diagram of a resource token management module 102 in accordance with this invention. Referring to fig. 4, the resource token management module 102 may include a counter 401, a highest allocable resource register 402, and a comparator 403.
The highest allocable resource register 402 stores a threshold for the number of allocable resources (i.e., the number of occupiable tokens), which is dynamically set by software. When the value in the counter 401 reaches the threshold, the resource token management module 102 determines that the corresponding thread cannot participate in the next resource allocation; once the thread releases enough resources that the quantity of resources it occupies (i.e., its number of occupied tokens) falls below the threshold, the resource token management module 102 again determines that it can participate in the next resource allocation. In particular, full sharing of all available resources by all threads can be achieved by setting the highest allocable resource register 402 of every thread to the maximum number of available resources. Furthermore, if different threads have different priorities (set statically or dynamically), different thresholds may be set for different threads according to priority. The thresholds in the highest allocable resource registers 402 can be changed dynamically, and resource allocation deadlock is avoided even when the sum of the thresholds in the highest allocable resource registers of all threads exceeds the total number of available resources, because the comparison result of the comparator 403 for each thread is used only to decide whether that thread participates in the next resource allocation, not to reserve as many available resources as the sum of all the thresholds.
As shown in FIG. 4, the counter 401 is configured to count the occupied tokens of each thread by adding the number of tokens allocated to a thread when it is allocated resources and subtracting the number of tokens released when it releases resources. The comparator 403 shown in FIG. 4 is configured to compare each thread's number of occupied tokens with that thread's occupiable-token threshold, and the resource token management module 102 determines, based on the result from the comparator 403, whether each thread can participate in resource allocation: specifically, when the value in the counter 401 equals or exceeds the threshold in the highest allocable resource register 402, the corresponding thread is determined to be unable to participate in resource allocation (cannot be allocated resources); and when the value in the counter 401 is less than the threshold, the corresponding thread is determined to be able to participate in resource allocation (can be allocated resources). That is, as shown in FIG. 4, based on the comparison result of the comparator 403, the resource token management module 102 of FIG. 1 outputs the allocability state of each thread, namely whether the thread can or cannot be allocated resources.
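The counter, threshold register, and comparator of FIG. 4 can be modeled together in a short sketch. The class and method names below are assumptions for illustration; the logic follows the text: accumulate on allocation, subtract on release, and compare against a software-set threshold to decide eligibility.

```python
class ResourceTokenManager:
    """Sketch of FIG. 4: per-thread occupied-token counters (counter 401),
    software-set thresholds (highest allocable resource registers 402), and
    a below-threshold comparison (comparator 403). Names are illustrative."""

    def __init__(self, num_threads, thresholds):
        self.counter = [0] * num_threads       # occupied tokens per thread
        self.max_allocable = list(thresholds)  # threshold registers, set by software

    def on_allocated(self, thread, count):
        self.counter[thread] += count          # accumulate on allocation

    def on_released(self, thread, count):
        self.counter[thread] -= count          # subtract on release

    def can_allocate(self, thread):
        # Comparator: eligible only while strictly below the threshold.
        return self.counter[thread] < self.max_allocable[thread]

tokens = ResourceTokenManager(2, thresholds=[3, 5])
tokens.on_allocated(0, 3)
print(tokens.can_allocate(0))  # False -- thread 0 has reached its threshold
tokens.on_released(0, 1)
print(tokens.can_allocate(0))  # True -- back below the threshold
```

Raising an entry of `max_allocable` at runtime corresponds to the dynamic threshold change described above: the thread becomes eligible again without any resources having to be released.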
FIG. 5 shows an exemplary block diagram of the arbitration module 103 according to the present invention. Referring to FIG. 5, the arbitration module 103 allocates resources to each thread according to the number of available resources, the number of resources each thread can be allocated, and the number of resources each thread applies to be allocated. In particular, the arbitration module 103 may use different arbitration algorithms, such as a round-robin algorithm or a lottery algorithm, to achieve efficiency-oriented or fairness-oriented resource allocation. The arbitration module 103 first calculates a resource allocation weight value for each thread applying for resources, combining the states fed back by the thread resource management modules 1041 to 104n and the resource token management module 102 with the resource application requests sent by the thread resource management modules 1041 to 104n. Specifically, the arbitration module 103 may calculate the weight value by subtracting the number of occupied resources from the number of allocable resources of each thread, although any other suitable way of calculating the weight value is also possible. The arbitration module 103 then determines which thread finally obtains an available resource, according to the weight value calculated for each thread and the previous allocation result for each thread. Finally, the arbitration module 103 feeds the allocation result (i.e., which resources have been allocated) back to the thread resource management modules 1041 to 104n of the respective threads and to the resource token management module 102, and the resource token management module 102 updates each thread's occupied-token count by adding the number of newly allocated resources, in preparation for the next resource allocation.
The method of dynamically sharing ordered resources in a multi-threaded system is described in detail below in conjunction with fig. 6-8.
FIG. 6 illustrates a flow diagram of amethod 600 for dynamically sharing ordered resources in a multi-threaded system in accordance with the invention.
Referring to FIG. 6, at S610, the available resource pool 101 manages the available resources; specifically, after a thread releases resources, the available resource pool 101 marks the released resources as available and then sends resource release information to the resource token management module 102 so that it subtracts the number of released resources from that thread's occupied-token count. At S620, the resource token management module 102 manages the number of allocable resources for each thread. At S630, the thread resource management modules 1041 to 104n of the threads send out resource application requests. At S640, the arbitration module 103 allocates resources to each thread according to the number of available resources, the number of allocable resources for each thread, and the number of resources each thread applies to be allocated.
FIG. 7 illustrates a flow diagram of amethod 700 for the resource token management module 102 to manage the amount of resources that can be allocated by various threads in accordance with the present invention.
Referring to FIG. 7, at S710, the counter 401 adds the number of tokens allocated to each thread when the thread is allocated resources, and subtracts the number of tokens released when the thread releases resources. At S720, the highest allocable resource register 402 sets the threshold of tokens each thread may occupy. At S730, the comparator 403 determines whether each thread can be allocated resources by comparing the thread's number of occupied tokens with its occupiable-token threshold: specifically, when the value of the counter 401 equals or exceeds the threshold, the comparator 403 determines that the thread cannot be allocated resources, and when the value of the counter 401 is less than the threshold, it determines that the thread can be allocated resources.
FIG. 8 illustrates a flow diagram of amethod 800 for the arbitration module 103 to allocate resources to various threads in accordance with the present invention.
Referring to FIG. 8, at S810, the arbitration module 103 calculates a weight value for each thread applying for resources, combining the states fed back by the thread resource management modules 1041 to 104n and the resource token management module 102 with the resource application requests sent by the thread resource management modules 1041 to 104n. At S820, the arbitration module 103 determines which thread finally obtains an available resource, according to the weight value calculated for each thread and the previous allocation result for each thread. At S830, the arbitration module 103 feeds the allocation result (i.e., which resources have been allocated) back to the thread resource management modules 1041 to 104n of the respective threads and to the resource token management module 102, and the resource token management module 102 updates each thread's occupied-token count by adding the number of newly allocated resources, in preparation for the next resource allocation.
FIG. 9 illustrates an example block diagram of an apparatus 900 for dynamically sharing ordered resources in a multi-threaded system in accordance with this invention.
Referring to FIG. 9, a processing apparatus 900 for dynamically sharing ordered resources in a multi-threaded system may include a control unit 901, a processing unit 902, and a storage unit 903. The control unit 901 is the control center of the entire processing apparatus 900 and may include an instruction register (IR) for holding an instruction, an instruction decoder (ID) for decoding the held instruction, an operation controller (OC), and the like. The control unit 901 sequentially fetches instructions from the storage unit 903 based on a pre-written program, places them in the instruction register IR, determines what operation should be performed via the instruction decoder ID, and then has the operation controller OC send control signals to the corresponding components at the determined timing. The operation controller OC may comprise control logic such as a beat pulse generator, a control matrix, a clock pulse generator, a reset circuit, and a start-stop circuit. The processing unit 902 may perform arithmetic operations (including basic operations such as addition, subtraction, multiplication, and division, and their compound operations) and logical operations (including shifts, logical tests, and two-value comparisons). The processing unit 902 operates on command from the control unit 901; that is, the processing unit 902 is the execution unit of the processing apparatus 900, and all of its operations are controlled by control signals from the control unit 901. The storage unit 903 may include a cache and a register set and is the component of the processing apparatus 900 that temporarily stores data (including data or instructions waiting to be processed and data or instructions already processed); the time taken by the processing apparatus 900 to access a register of the storage unit 903 is shorter than the time taken to access the cache of the storage unit 903.
The register set can be divided into special-purpose registers and general-purpose registers. Special-purpose registers have fixed functions and each holds specific data or instructions, while general-purpose registers are widely used and may be designated by the programmer. As described above, the processing apparatus 900 implements the above-described method of dynamically sharing ordered resources in a multi-threaded system by having the control unit 901 control the interaction between the processing unit 902 and the storage unit 903. In order to ensure the ordered reclamation of resources, each thread must maintain the order of its ordered resources and ultimately release them in the order in which they were allocated. Resources are used in the same way as before; after a resource has been used, the thread resource management module of the corresponding thread needs to set the state of the resource to reclaimable.
In the present invention, first, all resources are visible to all threads. Second, after resources are allocated to a thread, the thread must record the order in which they were allocated; a resource that meets the releasable condition moves to the front of the queue and is released only when every resource allocated before it has already been released, as described above with reference to FIG. 3. The implementation here may be a FIFO, an age-matrix algorithm, or any other suitable approach. Finally, when a resource is released by a thread, the thread must inform the available resource pool to mark the resource as available, and corresponding resource release information is sent to the resource token management module so that it can update the number of occupied tokens accordingly. Because the management system and method share the ordered resources in this way, ordered resources can be flexibly and dynamically allocated to different threads, QoS for resource allocation is achieved, resource utilization is improved, and when the quantity of resources occupied by a thread needs to change, there is no need to wait for the pipeline to be drained.
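The three points above can be tied together in one end-to-end sketch. Everything here is illustrative (names, the two-thread setup, and the per-thread limits are assumptions): resources are granted from a shared pool subject to a token limit, each thread records its allocation order, in-order release returns resources directly to the pool, and raising a thread's limit lets it immediately pick up a resource another thread released.

```python
from collections import deque

# Minimal end-to-end sketch of the flow described above (illustrative names).
pool = deque(range(4))                    # available resource pool: resources 0..3
order = {"m": deque(), "n": deque()}      # per-thread allocation order
tokens = {"m": 0, "n": 0}                 # occupied-token counters
LIMIT = {"m": 2, "n": 2}                  # software-set token thresholds

def allocate(thread):
    # Grant one resource if the thread is under its token limit.
    if tokens[thread] < LIMIT[thread] and pool:
        r = pool.popleft()
        order[thread].append(r)
        tokens[thread] += 1
        return r
    return None

def release_front(thread):
    # Only the earliest-allocated resource of a thread may be released.
    r = order[thread].popleft()
    tokens[thread] -= 1
    pool.append(r)                        # immediately reusable by any thread
    return r

allocate("m"); allocate("n"); allocate("m"); allocate("n")   # pool drained
print(allocate("m"))     # None -- thread m is at its token limit
release_front("n")       # n releases its oldest resource back to the pool
print(allocate("m"))     # None -- m is still capped at 2 occupied tokens
LIMIT["m"] = 3           # software raises m's threshold dynamically
print(allocate("m"))     # 1 -- m picks up the resource that n released
```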
The invention provides a method and a system for dynamically managing ordered resources based on tokens. The method ensures that ordered resources are dynamically shared among different threads: the resource token management module only needs to manage the quantity of allocable resources for each thread and is not responsible for the order of the resources, while the thread resource management module manages the resource order, thereby solving the disorder problem that arises when ordered resources are reused.
Note that the names of elements and components mentioned herein, such as the available resource pool, the resource token management module, the arbitration module, the thread resource management module, the highest allocable resource register, and the like, are examples given for purposes of identification and not limitation; other elements and components that achieve the same functionality are possible in accordance with the principles of the present invention. Also, the available resource pool, resource token management module, arbitration module, thread resource management module, and highest allocable resource register referred to herein may each be implemented in software, hardware, firmware, or any combination thereof.
The techniques and methods described herein may be implemented in hardware, software, firmware, or any combination thereof unless specifically described as being implemented in a particular manner. Any features described as modules or components may also be implemented together in an integrated logic device or separately as discrete but interoperable logic devices.
Further, as used herein, "based on" should not be construed as a reference to a closed condition set. For example, an exemplary step described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" should be interpreted in the same manner as the phrase "based at least in part on".
In the drawings, similar components or features may have the same reference numerals. In addition, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference numeral is used in the specification, the description is applicable to any one of the similar components having the same first reference numeral regardless of the second or other subsequent reference numeral.
The description set forth herein, in connection with the drawings, describes example configurations, methods, and apparatus, and is not intended to represent all examples that may be implemented or that are within the scope of the claims. The term "exemplary" as used herein means "serving as an example, instance, or illustration," and not "preferred" or "superior to other examples." The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
It should be understood that the specific order or hierarchy of steps in the methods of the present invention is illustrative of exemplary processes. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented unless specifically indicated otherwise. Furthermore, although elements may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. Accordingly, the present disclosure is not limited to the examples shown, and any means for performing the functions described herein are included in aspects of the present disclosure.
The above-disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments, which fall within the true spirit and scope of the inventive concepts. Thus, the scope of the inventive concept is to be determined by the broadest permissible interpretation of the following claims and their equivalents, as permitted by law, and shall not be restricted or limited by the foregoing detailed description.