Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon for use by or in connection with an instruction execution system.
The embodiment of the disclosure provides a pose determination method, which comprises the following steps: obtaining a first image captured by a left camera and a second image captured by a right camera of a binocular camera; obtaining at least one group of matching feature points according to the first image and the second image; determining a depth value of a physical space where the binocular camera is located and an error value of the depth value according to the at least one group of matching feature points; and determining the pose of the binocular camera according to the depth value and the error value of the depth value.
According to the pose determination method, when the pose of the binocular camera is determined, the accuracy of the determined camera pose can be at least partially improved by introducing the error value of the depth value. Based on the accurate camera pose, more accurate three-dimensional structure information can be constructed, so that the degree of fusion between a virtual image rendered by the augmented reality device and the real environment is improved, and the user experience is improved.
Fig. 1 schematically illustrates an application scenario diagram of a pose determination method and apparatus, an augmented reality device, and a readable storage medium according to an embodiment of the present disclosure.
As shown in fig. 1, the application scenario 100 includes an augmented reality device 110, and the augmented reality device 110 is provided with a binocular camera 111. The binocular camera 111 may include, for example, a left camera and a right camera, both of which may capture images of the environment within their visual range, and the images captured by the left camera and the right camera are images of different angles in the same physical space.
The augmented reality device 110 may further include a display system, which may be a combination of a display screen and optical elements such as a prism and an optical waveguide, for example; an environmental image of the physical space where the user is located may be displayed through the display system.
According to the embodiment of the present disclosure, in order to render a virtual image in an image of the actual environment, it is necessary to accurately construct three-dimensional structure information of the space where the augmented reality device is located; for example, a visual SLAM (Simultaneous Localization And Mapping) technique may be used to construct the three-dimensional structure information. One of the key problems in SLAM is to solve for the pose of the binocular camera.
For a binocular camera, the left and right eye external parameters are generally calibrated, and an absolute depth value of a target position in the physical space is then calculated according to these external parameters. However, the depth value has an error, which is related to the distance between the target position and the binocular camera, the performance of the binocular camera itself, and the like. This can cause a large uncertainty in the calculated absolute depth value. In order to avoid the influence of the depth error on the pose of the binocular camera solved by the SLAM system, it may be considered to reduce this influence by using information from other sensors (for example, an inertial sensing unit). However, this approach brings extra software and hardware overhead, and the pose of the binocular camera obtained through positioning may still exhibit scale drift under the influence of the depth error.
Considering that, when the pose of the camera is solved with SLAM techniques, the pose is generally determined by minimizing an error function, the depth error can be taken as one of the error terms of that error function and included in the minimization, so that the accuracy of the determined pose is improved.
It should be noted that the pose determination method according to the embodiment of the present disclosure may be executed by the augmented reality device 110, for example. Accordingly, the pose determination apparatus of the embodiment of the present disclosure may be provided in the augmented reality device 110, for example. It is to be understood that the structure of the augmented reality device in fig. 1 is merely an example to facilitate understanding of the present disclosure, and the present disclosure is not limited thereto.
The pose determination method provided by the present disclosure will be described in detail below with reference to fig. 2 to 7.
Fig. 2 schematically shows a flowchart of a pose determination method according to a first exemplary embodiment of the present disclosure.
As shown in fig. 2, the pose determination method of this embodiment may include, for example, operations S210 to S240.
In operation S210, a first image taken by a left camera and a second image taken by a right camera of a binocular camera are obtained. The first image may be, for example, an image in a visual range captured by a left camera, and the second image may be an image in a visual range captured by a right camera. The first image and the second image are images taken at the same time and at different angles in the same physical space.
In operation S220, at least one group of matching feature points is obtained according to the first image and the second image. Operation S220 may include, for example: comparing the first image with the second image to determine a same object in the first image and the second image; then extracting features of the same object from the two images and taking these features as a group of matching feature points. Alternatively, operation S220 may be implemented, for example, through the process described in fig. 3, and will not be described in detail here.
In operation S230, depth values and error values of the depth values of the physical space where the binocular camera is located are determined according to the at least one set of matching feature points.
According to an embodiment of the present disclosure, operation S230 may employ triangulation, for example, to determine the depth value, relative to the binocular camera, of the object characterized by each group of matching feature points. The depth value of the object determined from the matching feature points has an error, and this error is related to the depth value itself, the baseline length of the binocular camera, the angle of the object relative to the binocular camera, and the like. Thus, operation S230 may determine the error value of the depth value according to the depth value and these parameters, for example. Specifically, operation S230 may determine the depth value and the error value of the depth value through the flow described in fig. 5, for example, which will not be described in detail here.
In operation S240, the pose of the binocular camera is determined according to the depth value and the error value of the depth value.
According to an embodiment of the present disclosure, operation S240 may include, for example: determining an estimated pose of the binocular camera according to the coordinate values, in the images, of each group of matching feature points among the at least one group of matching feature points determined in operation S220, and then optimizing the estimated pose by using the depth value and the error value of the depth value as constraint conditions of a predetermined optimization model to obtain an optimized pose. Finally, the optimized pose is taken as the determined pose of the binocular camera. The pose of the binocular camera may be determined, for example, through the flow described in fig. 6, and will not be described in detail here.
In summary, the pose determination method of this embodiment may at least partially improve the accuracy of the determined camera pose by introducing the error value of the depth value when determining the pose of the binocular camera. Based on the accurate camera pose, more accurate three-dimensional structure information can be constructed, so that the degree of fusion between a virtual image rendered by the augmented reality device and the real environment is improved, and the user experience is improved.
Fig. 3 schematically shows a flow chart for obtaining at least one set of matching feature points according to an embodiment of the present disclosure.
As shown in fig. 3, the operation S220 of obtaining at least one set of matching feature points may include, for example, operations S321 to S322.
In operation S321, a first image and a second image are identified, resulting in a first feature point group for the first image and a second feature point group for the second image.
According to the embodiment of the present disclosure, the first image and the second image may be identified by using, for example, a Scale-Invariant Feature Transform (SIFT) feature extraction algorithm, a Histogram of Oriented Gradients (HOG) feature extraction method, or a pre-trained neural network, so as to extract the first feature point group and the second feature point group.
The first feature point group comprises a plurality of first feature points extracted from the first image, and the second feature point group comprises a plurality of second feature points extracted from the second image. The feature points may include, for example, edge points of an object or corner points of an object, etc.
In operation S322, a first feature point and a second feature point that are matched are determined according to the first feature point group and the second feature point group, so as to obtain at least one group of matched feature points.
According to an embodiment of the present disclosure, operation S322 may include, for example: comparing each first feature point included in the first feature point group with the plurality of second feature points included in the second feature point group one by one, and determining the second feature point whose similarity to that first feature point is higher than a predetermined similarity (for example, 50%). The first feature point and the second feature point whose similarity is higher than the predetermined similarity are determined as a matched first feature point and second feature point, and the matched first feature point and second feature point form a group of matching feature points. The similarity may refer to, for example, color similarity, size similarity, edge similarity, and/or the like, and the predetermined similarity above is only given as an example to facilitate understanding of the present disclosure; the present disclosure does not limit this.
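As an illustrative aid only, the feature extraction and matching of operations S321 to S322 might be sketched in Python as follows. The sketch uses SIFT descriptors and a nearest-neighbour ratio test as one possible realization of the similarity comparison; the function and variable names, the use of OpenCV, and the ratio threshold are assumptions introduced for illustration and are not part of the described method.

```python
import cv2
import numpy as np

def match_feature_points(first_image, second_image, ratio=0.75):
    """Illustrative sketch of operations S321-S322: extract feature points from
    the first (left) and second (right) images and pair them up."""
    sift = cv2.SIFT_create()
    # First feature point group (first image) and second feature point group (second image).
    kp1, des1 = sift.detectAndCompute(first_image, None)
    kp2, des2 = sift.detectAndCompute(second_image, None)

    # Compare each first feature point with the second feature points; keep a pair
    # only if its best match is clearly more similar than the second-best one.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)
    matching_groups = []
    for best, second_best in candidates:
        if best.distance < ratio * second_best.distance:
            p1 = np.array(kp1[best.queryIdx].pt)   # coordinates in the first image
            p2 = np.array(kp2[best.trainIdx].pt)   # coordinates in the second image
            matching_groups.append((p1, p2))
    return matching_groups
```

Each element of the returned list corresponds to one group of matching feature points, i.e., a matched first feature point and second feature point.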
According to the embodiment of the disclosure, in order to determine the depth value and the error value of the depth value, the binocular camera should be calibrated in advance to obtain left and right eye external parameters of the binocular camera.
Fig. 4 schematically shows a flowchart of a pose determination method according to a second exemplary embodiment of the present disclosure.
As shown in fig. 4, the pose determination method of the embodiment may include operations S450 to S460 in addition to operations S210 to S240, and the operations S450 to S460 may be performed before operation S230, for example.
In operation S450, the binocular camera is calibrated to obtain left and right eye external parameters of the binocular camera.
According to the embodiment of the disclosure, the left and right eye external parameters can represent the pose relationship between the left camera and the right camera, and may specifically be the transformation relationship between the three-dimensional coordinate system established based on the left camera and the three-dimensional coordinate system established based on the right camera. For a point in the three-dimensional coordinate system established based on the left camera, the coordinate value of that point in the three-dimensional coordinate system established based on the right camera can be obtained through transformation by a rotation matrix R and a translation vector T. The rotation matrix R and the translation vector T are the left and right eye external parameters obtained by calibration.
In operation S460, a baseline length of the binocular camera is determined according to the left and right eye external parameters.
According to an embodiment of the present disclosure, the baseline of the binocular camera refers to the line connecting the optical center of the left camera and the optical center of the right camera, and the baseline length refers to the length of this connecting line. The baseline length may be, for example, the value of the first element of the translation vector T.
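A minimal sketch of how the left and right eye external parameters could be used, assuming they are available as a rotation matrix R and a translation vector T (NumPy arrays introduced here for illustration only):

```python
import numpy as np

def to_right_camera(point_left, R, T):
    """Transform a 3D point from the coordinate system of the left camera into
    the coordinate system of the right camera using the calibrated extrinsics."""
    return R @ point_left + T

def baseline_length(T):
    """Baseline length of the binocular camera derived from the translation
    vector T.  For a rectified, purely horizontal stereo rig this is simply
    the absolute value of the first element of T, as noted above; the vector
    norm is used here as a more general fallback."""
    return float(np.linalg.norm(T))
```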
Fig. 5 schematically illustrates a flowchart of determining depth values and error values of the depth values in a physical space where a binocular camera is located according to an embodiment of the present disclosure.
As shown in fig. 5, the operation S230 of determining the depth value of the physical space where the binocular camera is located and the error value of the depth value may include, for example, operations S531 to S532.
In operation S531, the depth values of the target positions in the physical space corresponding to the at least one group of matching feature points are determined, so that at least one depth value for the at least one group of matching feature points is obtained.
According to the embodiment of the present disclosure, assuming that the left camera and the right camera are located in the same plane (that is, their optical axes are parallel) and that the parameters of the two cameras (for example, the focal length f) are identical, the depth z = f × b / a of the target position corresponding to each group of matching feature points, measured from the cameras, can be determined according to the triangle-similarity principle. Here, f is the focal length of the two cameras and b is the baseline length between the left and right cameras. a is the disparity between a pixel point of the first image captured by the left camera and the corresponding pixel point of the second image captured by the right camera, and can be determined according to the coordinate value of the first feature point and the coordinate value of the second feature point in a group of matching feature points. The coordinate value of the first feature point refers to its coordinate value in a two-dimensional coordinate system established based on the first image, and the coordinate value of the second feature point refers to its coordinate value in a two-dimensional coordinate system established based on the second image.
Operation S531 may include, for example: for each group of matching feature points, obtaining, by the above method, the perpendicular distance between the point in the physical space corresponding to that group of matching feature points and the baseline of the binocular camera, and taking this perpendicular distance as the depth value of that group of matching feature points, so that at least one depth value corresponding to the at least one group of matching feature points is obtained in total.
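As a minimal sketch, assuming rectified images and the quantities defined above (focal length f, baseline length b, and the disparity a computed from a group of matching feature points), the depth value of operation S531 might be computed as follows; all names are illustrative assumptions:

```python
def depth_from_match(p_first, p_second, focal_length, baseline):
    """Depth z = f * b / a for one group of matching feature points, assuming
    parallel optical axes and identical camera parameters.  The disparity 'a'
    is taken as the horizontal coordinate difference of the matched points."""
    disparity = p_first[0] - p_second[0]
    if abs(disparity) < 1e-9:
        return float("inf")   # matched point effectively at infinity
    return focal_length * baseline / disparity

# One depth value per group of matching feature points:
# depths = [depth_from_match(p1, p2, f, b) for (p1, p2) in matching_groups]
```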
In operation S532, an error value of each of the at least one depth value is determined according to the at least one depth value, the baseline length, and the target error model.
According to embodiments of the present disclosure, the target error model may include, for example, a reduced model derived from triangulation and pinhole imaging models. The reduced model may be represented, for example, using the following formula:
where Error is the error value of the depth value; k is a constant determined from the difference between the true value and the measured value of the depth value determined a plurality of times; d is the depth value; b is the baseline length of the binocular camera; and θ is the angle between the line connecting the target position corresponding to the depth value with the center point of the baseline and the perpendicular to the baseline. For the at least one depth value calculated in operation S531 for the at least one group of matching feature points, the error value of each of the at least one depth value may be calculated by the above formula. It is to be understood that the above reduced model is provided only as an example to facilitate understanding of the present disclosure, and the present disclosure is not limited thereto. In particular, the reduced model may be determined from the relationship between the depth value and the parameters that affect it.
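Purely as an illustration of how the parameters listed above (k, d, b, θ) could enter a reduced error model of this kind, the following hypothetical sketch assumes a common stereo-triangulation behaviour in which the error grows roughly with the square of the depth and shrinks with the baseline length; the specific form is an assumption for illustration, not the formula of the disclosure:

```python
import math

def depth_error(d, b, theta, k):
    """Hypothetical reduced error model, for illustration only (the actual
    formula of the embodiment is not reproduced here).  It assumes the error
    grows with the square of the depth d, decreases with the baseline length b,
    and increases as the target moves away from the perpendicular of the
    baseline (angle theta)."""
    return k * d * d / (b * math.cos(theta))

# error_values = [depth_error(d, b, theta_d, k) for (d, theta_d) in zip(depths, thetas)]
```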
Fig. 6 schematically shows a flow chart for determining the pose of a binocular camera according to an embodiment of the present disclosure.
As shown in fig. 6, the operation S240 of determining the pose of the binocular camera may include, for example, operations S641 to S643.
In operation S641, an estimated pose of the binocular camera is determined according to the at least one group of matching feature points and the left and right eye external parameters.
According to an embodiment of the present disclosure, operation S641 may include, for example: first, determining the coordinate value, in the camera coordinate system, of the target position in the physical space corresponding to each group of matching feature points, according to a first coordinate value of the first feature point of that group in the two-dimensional coordinate system established based on the first image, a second coordinate value of the second feature point in the two-dimensional coordinate system established based on the second image, and the left and right eye external parameters. Then, the coordinate value of the target position in the world coordinate system is determined according to the transformation relationship between the camera coordinate system and the world coordinate system. Finally, the estimated pose of the camera is calculated according to the coordinate value of the target position in the world coordinate system and the coordinate value of the target position in the camera coordinate system. Operation S641 may calculate the initial pose of the camera by using, for example, a PnP (Perspective-n-Point) algorithm.
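As a hedged sketch of operation S641, the estimated pose could be obtained with a standard PnP solver once the 3D target positions and their pixel coordinates have been collected. The sketch below uses OpenCV's solvePnP; the variable names and the camera-matrix argument are assumptions for illustration:

```python
import cv2
import numpy as np

def estimate_pose(points_3d, points_2d, camera_matrix):
    """Estimate the camera pose from target positions in the world coordinate
    system and their 2D projections in the first image, using a PnP solver."""
    pts3 = np.asarray(points_3d, dtype=np.float64).reshape(-1, 3)
    pts2 = np.asarray(points_2d, dtype=np.float64).reshape(-1, 2)
    ok, rvec, tvec = cv2.solvePnP(pts3, pts2, camera_matrix, None)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    return R, tvec               # estimated (initial) pose of the camera
```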
In operation S642, the target optimization model is adjusted according to the depth value and the error value of the depth value, so as to obtain an adjusted optimization model.
According to the embodiment of the disclosure, in order to calculate the pose of the binocular camera, for example, an error function (for example, photometric error, reprojection residual error, 3D geometric error, and the like) using the pose of the camera as a variable may be adopted, and the pose of the binocular camera when the error corresponding to the error function takes the minimum value is taken as the accurate pose of the camera. Thus, the target optimization model may be a model that minimizes the aforementioned errors.
According to an embodiment of the present disclosure, the target optimization model may include, for example, a BA (Bundle Adjustment) model. The essence of the BA model is to optimize the pose while minimizing the reprojection residual. The reprojection residual refers to the difference between the projection of a target position in the physical space onto the image plane (a pixel point on the image acquired by the left camera or the right camera) and its reprojection (a virtual pixel point obtained by calculation). Reprojection here means the following: a first projection takes place when the camera captures the image, that is, points in the physical space are projected onto the captured image; some feature points are then triangulated using the captured images, and the positions of the corresponding points in the physical space are determined using geometric information to construct triangles; finally, a second projection is performed using the determined positions of the points in the physical space and the initial camera pose, which yields the virtual pixel points. The initial camera pose may be, for example, the estimated pose described above.
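For explanation only, the reprojection residual described above might be computed as in the following sketch, assuming a pinhole camera with intrinsic matrix K and a current pose estimate given by rotation R and translation t (all names are illustrative assumptions):

```python
import numpy as np

def reprojection_residual(point_world, pixel_observed, R, t, K):
    """Difference between the observed pixel (the first projection, i.e. the
    pixel on the captured image) and the reprojection of the triangulated 3D
    point under the current camera pose (the virtual pixel)."""
    point_cam = R @ point_world + t      # world coordinates -> camera coordinates
    uvw = K @ point_cam                  # pinhole projection
    pixel_reprojected = uvw[:2] / uvw[2]
    return pixel_observed - pixel_reprojected
```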
The adjustment of the target optimization model in operation S642 may be, for example, adding, on the basis of the reprojection residual, a residual term for the error value of the depth value. Operation S642 may be implemented, for example, through the flow described in fig. 7, and will not be described in detail here.
In operation S643, the estimated pose is optimized according to the adjusted optimization model, so as to obtain the pose of the binocular camera.
Operation S643 may include, for example: taking the estimated pose as the initial value, iteratively calculating, by methods such as the gradient descent method, the Newton method, the Gauss-Newton method, or the Levenberg-Marquardt method, the camera pose at which the adjusted optimization model takes its minimum value, and taking the camera pose at that minimum as the finally determined pose of the binocular camera.
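Operation S643 could, for example, be realized with an off-the-shelf least-squares solver. The sketch below uses SciPy's Levenberg-Marquardt implementation and assumes a helper residual_vector(pose_params) that stacks the reprojection residuals and the depth residual term for a given pose parameterization; the helper and the 6-parameter pose representation are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_pose(estimated_pose_params, residual_vector):
    """Refine the estimated pose by minimizing the adjusted optimization model.
    'estimated_pose_params' is the initial value (e.g. 3 rotation + 3 translation
    parameters) and 'residual_vector' returns all residual terms as a flat array."""
    result = least_squares(residual_vector, estimated_pose_params, method="lm")
    return result.x   # camera pose at the (local) minimum of the adjusted model
```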
FIG. 7 schematically shows a flow diagram of an adjusted optimization model according to an embodiment of the disclosure.
As shown in fig. 7, operation S642 of obtaining the adjusted optimization model may include, for example, operations S7421 to S7422.
In operation S7421, a depth residual for the binocular camera is determined according to the at least one depth value and the respective error values of the at least one depth value. According to an embodiment of the present disclosure, operation S7421 may include, for example: first calculating the square of the error value of each depth value to obtain at least one squared value, and then summing the at least one squared value to obtain the depth residual for the binocular camera.
In operation S7422, the target optimization model is adjusted according to the depth residual to obtain the adjusted optimization model. According to an embodiment of the present disclosure, operation S7422 may include, for example: taking the depth residual as one of the summation terms in the target optimization model to obtain the adjusted optimization model.
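A minimal sketch of operations S7421 to S7422, assuming the per-depth error values and the reprojection residuals are already available (the weighting factor is an illustrative assumption, not part of the described method):

```python
import numpy as np

def depth_residual(depth_error_values):
    """Operation S7421: sum of the squared error values of the depth values."""
    errors = np.asarray(depth_error_values, dtype=np.float64)
    return float(np.sum(errors ** 2))

def adjusted_objective(reprojection_residuals, depth_error_values, weight=1.0):
    """Operation S7422: the depth residual is added as one more summation term
    alongside the reprojection residual terms of the target optimization model."""
    reproj = np.asarray(reprojection_residuals, dtype=np.float64)
    return float(np.sum(reproj ** 2)) + weight * depth_residual(depth_error_values)
```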
Fig. 8 schematically shows a block diagram of the configuration of the pose determination apparatus according to the embodiment of the present disclosure.
As shown in fig. 8, the pose determination apparatus 800 may include, for example, an image obtaining module 810, a matching feature point obtaining module 820, a numerical value determination module 830, and a pose determination module 840.
The image obtaining module 810 is configured to obtain a first image obtained by the left camera and a second image obtained by the right camera of the binocular camera (operation S210).
The matching feature point obtaining module 820 is configured to obtain at least one group of matching feature points according to the first image and the second image (operation S220).
The numerical value determination module 830 is configured to determine a depth value of the physical space where the binocular camera is located and an error value of the depth value according to the at least one group of matching feature points (operation S230).
The pose determination module 840 is configured to determine the pose of the binocular camera according to the depth value and the error value of the depth value (operation S240).
According to an embodiment of the present disclosure, as shown in fig. 8, the matching feature point obtaining module 820 may include, for example, a feature point obtaining sub-module 821 and a feature point matching sub-module 822. The feature point obtaining sub-module 821 is used to identify the first image and the second image, and obtain a first feature point group for the first image and a second feature point group for the second image (operation S321). The feature point matching sub-module 822 is configured to determine a first feature point and a second feature point that are matched according to the first feature point group and the second feature point group to obtain at least one group of matching feature points (operation S322). Each group of matching feature points comprises a first feature point and a second feature point which are matched with each other.
According to an embodiment of the present disclosure, as shown in fig. 8, the pose determination apparatus 800 described above may further include, for example, a camera calibration module 850 and a baseline length determination module 860. The camera calibration module 850 is configured to calibrate the binocular camera to obtain the left and right eye external parameters of the binocular camera before the numerical value determination module 830 determines the depth value of the physical space where the binocular camera is located and the error value of the depth value (operation S450). The baseline length determination module 860 is configured to determine the baseline length of the binocular camera according to the left and right eye external parameters (operation S460).
According to an embodiment of the present disclosure, as shown in fig. 8, the numerical value determination module 830 may include, for example, a depth value determining sub-module 831 and an error value determining sub-module 832. The depth value determining sub-module 831 is configured to determine the depth values of the target positions in the physical space corresponding to the at least one group of matching feature points, and obtain at least one depth value for the at least one group of matching feature points (operation S531). The error value determining sub-module 832 is configured to determine an error value of each of the at least one depth value according to the at least one depth value, the baseline length, and the target error model (operation S532).
According to an embodiment of the disclosure, as shown in fig. 8, the pose determination module 840 may include, for example, an estimation sub-module 841, a model adjustment sub-module 842, and a pose determination sub-module 843. The estimation sub-module 841 is configured to determine the estimated pose of the binocular camera according to the at least one group of matching feature points and the left and right eye external parameters (operation S641). The model adjustment sub-module 842 is configured to adjust the target optimization model according to the depth value and the error value of the depth value, so as to obtain the adjusted optimization model (operation S642). The pose determination sub-module 843 is configured to optimize the estimated pose according to the adjusted optimization model, and obtain the pose of the binocular camera (operation S643).
According to an embodiment of the present disclosure, the depth values comprise at least one depth value for the at least one group of matching feature points, and the error values of the depth values comprise respective error values for the at least one depth value. The model adjustment sub-module 842 may be specifically configured to, for example, perform the following operations: determining a depth residual for the binocular camera according to the at least one depth value and the respective error values of the at least one depth value (operation S7421); and adjusting the target optimization model according to the depth residual to obtain the adjusted optimization model (operation S7422).
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
Fig. 9 schematically shows a block diagram of an augmented reality device according to an embodiment of the present disclosure.
As shown in fig. 9, the augmented reality device 900 includes a processor 910, a computer-readable storage medium 920, and a binocular camera 930. The augmented reality device 900 may be, for example, the augmented reality device 110 described in fig. 1, and may perform the pose determination method according to the embodiment of the present disclosure.
In particular, the processor 910 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 910 may also include onboard memory for caching purposes. The processor 910 may be a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
Computer-readable storage media 920, for example, may be non-volatile computer-readable storage media, specific examples including, but not limited to: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and so on.
The computer-readable storage medium 920 may include a computer program 921, which computer program 921 may include code/computer-executable instructions that, when executed by the processor 910, cause the processor 910 to perform a method according to an embodiment of the present disclosure, or any variation thereof.
The computer program 921 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, the code in the computer program 921 may include one or more program modules, for example including module 921A, module 921B, and so on. It should be noted that the division and number of the modules are not fixed, and those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation, so that when these program modules are executed by the processor 910, the processor 910 may execute the method according to the embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, the binocular camera 930 may include, for example, a left camera and a right camera, and the binocular camera 930 may be, for example, the binocular camera 111 described in fig. 1. The augmented reality device 900 may perform the pose determination method based on, for example, the two images captured by the two cameras of the binocular camera 930.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.