Disclosure of Invention
One of the technical problems to be solved by the present invention is to provide a novel human-computer interaction mode that can raise the interest of child users and meet user requirements.
In order to solve the above technical problem, an embodiment of the present application first provides a method for intelligent interaction in combination with a virtual maze, where the virtual maze is configured with a virtual robot and both run on an intelligent device. The method includes the following steps: starting a virtual maze scene and starting the virtual robot; determining the set task for passing through each room in the virtual maze, and outputting multi-modal output information corresponding to the set task through the virtual robot; in the scene, acquiring multi-modal interaction information between the user and the virtual robot; and parsing the multi-modal interaction information in the set-round dialogue between the current user and the virtual robot, matching it against the interaction information corresponding to the set task of the current room, and, if the matching succeeds, determining that the current room has been passed and entering the next room or ending the operation.
Preferably, the method further comprises: acquiring face information of the user and recognizing the user's current emotion; and determining the emotional state to be displayed by the virtual robot according to the user's current emotion, and generating and outputting corresponding expression output data based on that emotional state.
Preferably, the method further comprises: the virtual robot displaying the expression state according to the expression output data while simultaneously displaying an action matched with the expression state.
Preferably, when the set task is to search for a specified object, the method further comprises: acquiring image information of the physical object found by the user; performing visual recognition on the image information and determining whether the physical object found by the user matches the specified object; and, if so, determining that the current room has been passed and entering the next room or ending the operation, and outputting multi-modal data that maps a virtual image of the physical object into the virtual maze scene.
Preferably, the method further comprises: acquiring face information of the current user and identifying the user through a user database; and selecting the theme layout, levels and virtual robot character of each room of the virtual maze according to the user's identity information, and outputting corresponding multi-modal data.
According to another aspect of the embodiments of the present invention, there is also provided a system for intelligent interaction in combination with a virtual maze, wherein the virtual maze is configured with a virtual robot and both run on an intelligent device. In the system, the intelligent device starts a virtual maze scene and starts the virtual robot; determines the set task for passing through each room in the virtual maze and outputs multi-modal output information corresponding to the set task through the virtual robot; and, in the scene, acquires multi-modal interaction information between the user and the virtual robot. A game server parses the multi-modal interaction information in the set-round dialogue between the current user and the virtual robot, matches it against the interaction information corresponding to the set task of the current room, and, if the matching succeeds, determines that the current room has been passed and the next room is entered or the operation ends.
Preferably, the game server includes: an emotion calculation unit configured to acquire face information of the user and recognize the user's current emotion; and a decision unit configured to determine the emotional state to be displayed by the virtual robot according to the user's current emotion, and to generate and output corresponding expression output data based on that emotional state.
Preferably, the intelligent device controls the virtual robot to display the expression state according to the expression output data while simultaneously displaying an action matched with the expression state.
Preferably, the game server further comprises a visual recognition unit which, when the set task is to search for a specified object, acquires image information of the physical object found by the user and performs visual recognition on the image information; and the decision unit is further configured to determine whether the physical object found by the user matches the specified object and, if so, to determine that the current room has been passed and the next room is entered or the operation ends, and to output multi-modal data that maps a virtual image of the physical object into the virtual maze scene.
Preferably, the visual recognition unit further acquires face information of the current user and identifies the user through a user database; and the decision unit is further configured to select the theme layout, levels and virtual robot character of each room of the virtual maze according to the user's identity information and to output corresponding multi-modal data.
Compared with the prior art, one or more embodiments in the above scheme can have the following advantages or beneficial effects:
The embodiments of the invention provide a new user interaction mode, namely a method for intelligent interaction in combination with a virtual maze. During the interaction: a virtual maze scene is started and a virtual robot is started; in the scene, dialogue interaction information between the user and the virtual robot is acquired and parsed, the set task for passing through each room in the virtual maze is determined, and multi-modal output information corresponding to the set task is output through the virtual robot; the multi-modal interaction information in the set-round dialogue between the current user and the virtual robot is then parsed and matched against the interaction information corresponding to the set task of the current room, and if the matching succeeds, it is determined that the current room has been passed and the next room is entered or the operation ends. The embodiments of the invention can raise the interest of child users, meet user requirements and improve the user experience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure and/or process particularly pointed out in the written description and claims hereof as well as the appended drawings.
Detailed Description
The following detailed description of the embodiments of the present invention is provided with reference to the accompanying drawings and examples, so that how the technical means are applied to solve the technical problems and achieve the corresponding technical effects can be fully understood and implemented. The embodiments and the features of the embodiments can be combined with each other provided there is no conflict, and the technical solutions so formed all fall within the scope of the present invention.
Additionally, the steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
With the continuous development of computer technology, user requirements for interaction modes keep rising, and natural, convenient human-computer interaction is expected. There are currently various human-computer interaction modes, such as the most common contact-based mode, as well as many non-contact modes in which information such as the user's voice, gestures and facial expressions serves as input. Taking child users as an example, robots currently designed for children are generally physical robots, such as early-education robots and companion robots, which help children learn and entertain them through simple voice interaction. However, the functions of such physical robots are mostly fixed, and their intelligence and anthropomorphism are poor; although they can meet certain user requirements, they cannot fully satisfy the learning and entertainment needs of child users. Therefore, the embodiments of the invention provide a novel interaction mode, namely a method for intelligent interaction that combines a virtual robot with a virtual maze, so that a child user's learning ability can be improved during entertainment and the user experience is enhanced.
For a better understanding of the embodiments of the present invention, the virtual robot is described first. The virtual robot in this embodiment is mounted on an intelligent device supporting input and output modules for perception, control and the like; preferably it has a cartoon character image appealing to child users, supports multi-modal human-computer interaction, and has AI capabilities such as natural language understanding, visual perception, speech output and the output of emotional expressions and actions, so that the user enjoys a smooth, intelligent and personalized experience during the interaction.
In this example, the virtual robot is an application or executable of the system and interacts with the user at the system level. The system hardware runs an operating system, such as the built-in system of a holographic device; if the system is a PC (personal computer), it runs a Windows or Mac OS operating system. During interaction with the user, the virtual robot acquires the user's multi-modal interaction data through the hardware of the intelligent device on which it is mounted, and semantic understanding, visual recognition and emotion calculation are performed on the multi-modal interaction data at the game server.
Similarly, the virtual maze of this example may be mounted on the same hardware device as the virtual robot. In this example, the operation of the entire interaction mode is described for a child-oriented commercial application scene, such as a children's amusement park or a children's game machine.
Examples
Fig. 1 is a block diagram of a system for intelligent interaction in combination with a virtual maze according to an embodiment of the present application; in this example the application scenario of the system is a children's amusement park. The virtual robot A can be displayed to the child user U in the form of a hologram or on a display interface through the intelligent device on which it is mounted, such as a children's game machine or an advertising machine, and can output multi-modal interaction information, such as voice, expressions and motions, to the user during multi-modal interaction with the user U. As shown in Fig. 1, the system mainly includes a game server 100 and a smart device 200 that interacts multi-modally with the child. Besides a children's game machine or advertising machine, the smart device 200 may also be a conventional PC, a laptop computer or the like, or a portable terminal device that can access the Internet wirelessly through a wireless LAN, a mobile communication network, and so on. In the embodiments of the present application, such a wireless terminal includes, but is not limited to, a mobile phone, a netbook and the like, and generally has functions such as multi-modal information acquisition and data transmission.
As shown in Fig. 1, the game server 100 internally includes a control unit 110, a communication unit 120, a theme layout database D1, a virtual character database D2, a level database D3, an input unit 130, and an output unit 140. The control unit 110 is constituted by, for example, a CPU (Central Processing Unit), controls each part of the game server 100, and executes a predetermined program to realize each process described later. The communication unit 120 has a communication device such as a modem or a router and controls network communication between the game server 100 and the smart device 200. The network may be a private line network, a public line network, a wireless communication network, or the like.
The theme layout database D1 is formed of a rewritable storage device such as a hard disk drive and stores the theme layout data and version information (e.g., a time stamp) transmitted from the input unit 130. The virtual character database D2 is likewise formed of a rewritable storage device and stores the virtual character data and version information (e.g., a time stamp) transmitted from the input unit 130. The level database D3 is also formed of a rewritable storage device and stores the level data and version information (e.g., a time stamp) transmitted from the input unit 130. The administrator of the game server 100 updates the data in the databases D1 to D3 by operating the input unit 130. The output unit 140 is used to output data information to the outside.
The theme layout data includes the room layout of the virtual maze and the theme of each room. In an expedition-type theme game, taking Fig. 5 as an example, a plurality of rooms are designed, and according to the child's level-clearing progress the layout leads into rooms with different sub-themes, such as a magic rescue, battling a magical creature, desert treasure hunting, jungle scouting, braving the tiger's den, bravely scaling the peak, a deep-sea adventure and a BOSS battle. Besides the expedition theme, action, role-playing, formative, sports, flight-shooting, multiplayer and other game types may be included. The virtual character data includes the data of the virtual maze characters for different themes and of the virtual characters in different rooms. The virtual characters may include cartoon characters, historical figures, original characters and so on; as shown in Fig. 6, the virtual character in the room is an original character named "Bomb Man", who interacts with the child user through voice output. The level data includes the set task to be completed at each level (mainly, in each room), such as finding a specified object, answering a question, or completing a set expression, utterance or action. As shown in Fig. 6, the bomb will explode (the alarm lamp flashes) when Bomb Man's anxiety value reaches its peak, and the child needs to communicate with Bomb Man to reduce the anxiety value. If the child successfully helps Bomb Man cool down and calm down, Bomb Man assigns a friendship task of finding a prop (a blue shovel), and after the task is completed the blue door in the task opens; if the bomb explodes, another branch task is started and the orange door opens. A sketch of how such branching level data might be recorded is given below.
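By way of illustration only, the branching set task of the "Bomb Man" room described above might be recorded in the level database D3 roughly as in the following Python sketch; all field names, task identifiers and the helper function are hypothetical rather than part of the embodiment.

```python
# Illustrative sketch of one room's level data with a branching set task.
# Every key, identifier and door color below is hypothetical.
BOMB_ROOM_LEVEL = {
    "room_id": "bomb_room",
    "character": "Bomb Man",
    "task": {
        "id": "calm_bomb",
        "goal": "Calm Bomb Man down before his anxiety value peaks",
        "success": {                       # Bomb Man cooled down and calmed
            "next_task": "find_prop_blue_shovel",
            "open_door": "blue",
        },
        "failure": {                       # bomb explodes (alarm lamp flashes)
            "next_task": "branch_task",
            "open_door": "orange",
        },
    },
}

def next_step(level: dict, calmed_down: bool) -> dict:
    """Pick the branch of the set task according to the interaction outcome."""
    branch = "success" if calmed_down else "failure"
    return level["task"][branch]

print(next_step(BOMB_ROOM_LEVEL, calmed_down=True))
```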
The functions of the components of the control unit 110 inside the game server 100 will be described next with reference to Fig. 2. As shown in Fig. 2, the control unit 110 includes a semantic understanding unit 111, a visual recognition unit 112, an emotion calculation unit 113, and a decision unit 114.
The semantic understanding unit 111 receives the voice information forwarded from the communication unit 120 and performs speech recognition on the voice information of the set-round dialogue between the current user and the virtual robot. During speech recognition the voice information is first preprocessed, and then feature extraction and trained recognition are carried out. The preprocessing mainly comprises operations such as pre-emphasis of the speech signal, framing and windowing, and endpoint detection. After feature extraction, the feature parameters of the speech to be recognized are compared one by one with each pattern in a reference model library, the pattern with the highest similarity is output as the recognition result, the pattern matching process is completed, and semantic information is obtained.
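As a minimal sketch of the pipeline just described (pre-emphasis, framing and windowing, endpoint detection, feature extraction and pattern matching against a reference model library), the following Python fragment is illustrative only; the frame sizes, energy threshold and simplified spectral features stand in for whatever concrete parameters and acoustic features an implementation would actually use.

```python
import numpy as np

def pre_emphasis(signal: np.ndarray, alpha: float = 0.97) -> np.ndarray:
    """Pre-emphasize the speech signal to boost high-frequency components."""
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])

def frame_and_window(signal: np.ndarray, frame_len: int = 400, hop: int = 160):
    """Split the signal into overlapping frames and apply a Hamming window.
    Assumes the signal is at least one frame long."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] for i in range(n_frames)])
    return frames * np.hamming(frame_len)

def endpoint_detect(frames: np.ndarray, energy_thresh: float = 1e-3) -> np.ndarray:
    """Keep only frames whose short-time energy exceeds a (hypothetical) threshold."""
    energy = (frames ** 2).mean(axis=1)
    return frames[energy > energy_thresh]

def extract_features(frames: np.ndarray) -> np.ndarray:
    """Very simplified utterance-level spectral features (a stand-in for MFCCs)."""
    spectrum = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(spectrum).mean(axis=0)

def recognize(signal: np.ndarray, reference_models: dict) -> str:
    """Compare the utterance features with each reference pattern one by one
    and return the pattern with the highest similarity."""
    feats = extract_features(endpoint_detect(frame_and_window(pre_emphasis(signal))))
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(reference_models, key=lambda k: cosine(feats, reference_models[k]))
```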
The visual recognition unit 112 receives the image information forwarded from the communication unit 120, extracts features of the object such as line segments, regions and feature points, and finally recognizes the image according to a preset algorithm and gives a quantitative detection result. It provides an image preprocessing function, a feature extraction function, a decision function and specific application functions. Image preprocessing mainly performs basic processing on the acquired visual data, including color space conversion, edge extraction, image transformation and image thresholding. Feature extraction mainly extracts feature information such as skin color, color, texture, motion and coordinates of the target in the image. The decision function distributes the feature information, according to a certain decision strategy, to the specific applications that need it. The specific application functions realize face detection, character recognition, motion detection and so on.
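The preprocessing, feature extraction and decision steps of the visual recognition unit 112 might be organized roughly as below; this sketch uses OpenCV for illustration, and the particular operations, thresholds and histogram parameters are assumptions, not the claimed implementation.

```python
import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray):
    """Basic processing of the captured image: color space conversion,
    edge extraction and thresholding (further transforms omitted)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return gray, edges, binary

def extract_features(image_bgr: np.ndarray) -> np.ndarray:
    """Simple color descriptor of the target (a stand-in for skin color,
    texture, motion and coordinate features)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def dispatch(features: np.ndarray, applications: dict) -> dict:
    """Decision step: hand the feature information to every registered
    application (face detection, character recognition, motion detection...)."""
    return {name: handler(features) for name, handler in applications.items()}
```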
The emotion calculation unit 113 receives the multi-modal data forwarded from the communication unit 120 and calculates the user's current emotional state using emotion calculation logic (mainly emotion recognition technology). Emotion recognition is an important component of affective computing; its research covers facial expressions, voice, behavior, text, physiological signals and so on, from which the user's emotional state can be judged. The emotional state of the user may be monitored through visual emotion recognition alone, or through a combination of visual and speech emotion recognition, without limitation. In this embodiment, the emotion is preferably monitored by a combination of the two.
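A minimal sketch of combining visual and speech emotion recognition, assuming each modality already yields a probability distribution over a small emotion label set (the labels and weights below are illustrative):

```python
import numpy as np

EMOTIONS = ["happy", "neutral", "sad", "angry"]   # illustrative label set

def fuse_emotions(p_visual: np.ndarray, p_voice: np.ndarray,
                  w_visual: float = 0.6, w_voice: float = 0.4) -> str:
    """Combine per-modality emotion probability distributions and
    return the most likely current emotional state of the user."""
    fused = w_visual * p_visual + w_voice * p_voice
    return EMOTIONS[int(np.argmax(fused))]

# Example: facial expression says mostly happy, voice is ambiguous.
state = fuse_emotions(np.array([0.7, 0.2, 0.05, 0.05]),
                      np.array([0.4, 0.4, 0.1, 0.1]))
print(state)  # -> "happy"
```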
The decision unit 114 mainly integrates the analysis results of the semantic understanding unit 111, the visual recognition unit 112 and the emotion calculation unit 113, and decides which multi-modal data or instructions to output.
In task-matching mode, the decision unit matches the analysis result against the interaction information corresponding to the set task of the current room; if the matching succeeds, it determines that the current room has been passed and the next room is entered or the operation ends. In one example, when the set task is to search for a specified object, the visual recognition unit 112 obtains image information of the physical object found by the user and performs visual recognition on it to obtain object description parameters. The decision unit 114 determines whether the physical object found by the user matches the specified object, that is, whether the object description parameters have a certain similarity to the description parameters of the object to be found; if so, it determines that the current room has been passed and the next room is entered or the operation ends, and outputs the multi-modal data that maps the virtual image of the physical object into the virtual maze scene. For example, if a child searches for a designated prop, the prop is scanned in front of the camera of the smart device 200; the smart device 200 then sends the image to the visual recognition unit 112 of the game server 100 for recognition, and when the decision unit 114 determines that it is indeed the designated prop, it sends an instruction to pass the current room and enter the next room, together with multi-modal data that maps the virtual image of the prop into the virtual maze scene. In the virtual maze scene displayed by the smart device 200, the main maze character then obtains the corresponding prop and enters the next room, helping to advance the storyline.
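A minimal sketch of this matching step, assuming the object description parameters are feature vectors and "a certain similarity" is a cosine-similarity threshold (both assumptions made only for illustration):

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8   # hypothetical "certain similarity" threshold

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_found_object(object_params: np.ndarray, target_params: np.ndarray) -> dict:
    """Decide whether the physical object found by the user matches the
    specified object; on success, emit a 'pass room' instruction plus the
    multi-modal data that maps the object's virtual image into the maze."""
    if cosine_similarity(object_params, target_params) >= SIMILARITY_THRESHOLD:
        return {
            "instruction": "pass_current_room",   # enter next room / end operation
            "map_to_scene": {"virtual_image": "found_prop", "animate": True},
        }
    return {"instruction": "retry", "hint": "That does not look like the right prop."}
```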
In this example, the virtual character in the virtual maze has a natural-language "chit-chat" capability, and the child can chat with it about any topic, joke with it, ask about the story background, and so on. During such interaction, the semantic understanding unit 111 receives the voice information forwarded from the communication unit 120 and performs semantic recognition, and the decision unit 114 searches a question-answer database for the corresponding reply content according to the recognition result, sends the reply to the smart device 200, and controls the virtual robot to output speech matching the reply content.
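Purely as an illustration of this lookup, the sketch below matches the recognized utterance against a toy question-answer database by string similarity; the real server-side retrieval logic and database contents are not specified in this description.

```python
from difflib import SequenceMatcher

# Illustrative question-answer database; real entries would live on the game server.
QA_DATABASE = {
    "who are you": "I am Bomb Man, the guardian of this room!",
    "tell me a joke": "Why did the maze blush? Because it finally saw the exit!",
}

def chat_reply(recognized_text: str) -> str:
    """Find the stored question closest to the recognized utterance and
    return its reply; fall back to a generic answer below a cutoff."""
    def score(question: str) -> float:
        return SequenceMatcher(None, recognized_text.lower(), question).ratio()
    best = max(QA_DATABASE, key=score)
    return QA_DATABASE[best] if score(best) > 0.5 else "Hmm, let's keep exploring!"
```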
Moreover, the virtual character can communicate with the user more vividly during the dialogue; that is, the virtual robot may display an expression state while communicating by voice. Specifically, the emotion calculation unit 113 acquires the user's face information and recognizes the user's current emotion. The decision unit 114 determines the emotional state the virtual robot needs to display according to the user's current emotion, and generates and outputs corresponding expression output data based on that emotional state. For example, when the user's current emotion is happy, the emotional state to be displayed by the virtual robot is also "happy"; the decision unit 114 sends the expression output data corresponding to the decision result to the smart device 200 and controls the virtual robot to display the matching expression state while outputting speech.
In other examples, in order to give different child users a personalized play experience, the virtual maze scene can further be designed according to the user's identity. Taking the children's amusement park scene as an example, the visual recognition unit 112 obtains face information of the user currently standing in front of the screen of the children's game machine and identifies the user through the amusement park's user database. Specifically, the presence of a human face is first detected in the scene and its position is determined. After the face is detected, face recognition is performed, that is, the detected face is compared and matched with the known faces in the database to obtain the related information. Face recognition may use geometric facial features or a template matching method; in this example the template matching method is preferred.
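A minimal sketch of this detect-then-match flow, assuming OpenCV's bundled frontal-face Haar cascade for detection and pre-stored 64x64 grayscale face templates for the template matching method (both assumptions, not part of the embodiment):

```python
import cv2
import numpy as np

# Assumes the Haar cascade bundled with opencv-python is available.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def identify_user(frame_bgr: np.ndarray, user_templates: dict, thresh: float = 0.6):
    """Detect a face in the scene, then match it against known face templates
    and return the best-matching user identity (or None)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).astype(np.float32)
    face = (face - face.mean()) / (face.std() + 1e-6)       # normalize the probe face
    best_user, best_score = None, thresh
    for user_id, template in user_templates.items():        # templates: normalized 64x64 arrays
        score = float((face * template).mean())              # correlation-style similarity
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user
```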
The decision unit 114 obtains the pre-stored personalized data record of the user from the stored user personalization database according to the determined identity. The personalized data of a user includes personality traits and attribute information. The personality traits of the user include, for example, cheerful, humorous, gentle, and the like. The attribute information of the user includes, for example, name, gender, age, nickname, preferences, and so on. For example, when face recognition detects that the user is "Mike", the following personalized data about "Mike" may be acquired: gender male, age 8, likes Transformers, and so on.
The decision unit 114 selects the theme layout, levels and virtual robot character of each room of the virtual maze according to the user's identity information and outputs the corresponding multi-modal data. For example, for the user "Mike" mentioned above, a quest-type virtual maze scene may be selected and a Transformer chosen as the virtual robot character, and the multi-modal data concerning the theme layout, levels and virtual robot character of this scene type is then sent to the smart device 200 to enhance the user experience.
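The selection itself could be as simple as a few profile-driven rules; the sketch below is illustrative only, and the rule thresholds, keys and default values are assumptions rather than part of the embodiment.

```python
def personalize_maze(profile: dict) -> dict:
    """Pick a theme layout, level set and virtual robot character from the
    databases D1-D3 according to the identified user's profile.
    The rules below are purely illustrative."""
    theme = "quest" if profile.get("age", 6) >= 7 else "formative"
    character = profile.get("preference", "Bomb Man")       # e.g. "Transformers"
    level_pack = "hard" if profile.get("age", 6) >= 9 else "easy"
    return {"theme_layout": theme, "levels": level_pack, "robot_character": character}

# Example for the user "Mike" described above:
print(personalize_maze({"name": "Mike", "gender": "male",
                        "age": 8, "preference": "Transformers"}))
```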
The functions of the smart device 200 are explained next. Fig. 3 is a functional block diagram of the smart device 200 in the system shown in Fig. 1. As shown in Fig. 3, the smart device 200 mainly includes: human-computer interaction input and output modules (the multi-modal input module 21 and the multi-modal output module 25 in the figure), a data processing module 22, a communication module 23, a virtual maze control module 24 and an execution parameter database D4.
The multi-modal input module 21 acquires the multi-modal interaction information of the user interacting with the virtual robot. The multi-modal interaction information in this example mainly comprises voice and images, so the multi-modal input module 21 mainly includes a voice input unit 211 and an image input unit 212. The voice input unit 211 mainly comprises a microphone, an A/D converter, and the like. After the user utters voice information, the voice input unit 211 collects the analog voice signal through the microphone, converts it with the A/D converter into a signal the system can process, and then inputs the digital voice signal to the voice processing unit 221 of the data processing module 22 for preprocessing of the voice information, including filtering, amplification and so on. The image input unit 212 mainly comprises an image sensor, a data conversion device, and the like. The image sensor may be a CCD or CMOS camera device and mainly captures the user's current face image and posture image. The image input unit 212 sends the converted digital image data to the image processing unit 222 of the data processing module 22 for image preprocessing. Preprocessing of the original image generally requires filtering and denoising, gray-scale adjustment, image enhancement, geometric transformation and the like, and image segmentation generally comprises edge detection, binarization, thinning, edge linking and so on. Besides the input units mentioned above, units of other modalities may be included, such as a haptic capture system, a keyboard, a mouse, and so on. Correspondingly, the data processing module 22 includes the voice processing unit 221 and the image processing unit 222, which mainly preprocess the acquired digital voice and image data and then transmit the preprocessed data to the communication module 23.
The communication module 23 transmits the preprocessed data and receives the multi-modal output data decided by the game server 100.
The multi-modal output module 25 includes a voice output unit 251 and an image output unit 252; it receives the execution parameters sent by the virtual maze control module 24 or the multi-modal data forwarded by the communication module 23 and presents them. The image output unit 252 provides the user interface, also called the human-machine interface, which is the medium for interaction and information exchange between the system and the user and mainly displays the virtual maze scene and the state of the virtual robot on a display device. In a preset display area of the user interface, for example the central position, the started virtual maze scene, the virtual robot's image (mainly the virtual human's 3D appearance) and the execution parameters of the virtual robot's multi-modal operations (expressions, actions, etc.) are displayed. The voice output unit 251 outputs the voice data forwarded by the communication module 23 and comprises a D/A converter, an AF amplifier and a speaker. The digital voice data is converted into an analog voice signal by the D/A converter, the AF amplifier amplifies the analog voice signal, and the speaker vibrates according to the analog signal to reproduce the voice it represents.
The virtual maze control module 24 opens the virtual maze scene and starts the virtual robot; it determines the set task for passing through each room in the virtual maze and outputs multi-modal output information corresponding to the set task through the virtual robot.
Specifically, when the virtual maze is opened, the theme layout data, level data and character data sent from the communication module 23 are received, processed and forwarded to the multi-modal output module 25 to display the virtual maze scene. From these data the set task of each room can be determined; when the virtual robot is controlled to interact with the user, the set task information is converted into corresponding multi-modal output information (mainly voice) and delivered to the user. For example, for a task of searching for a specified object, the virtual robot outputs a voice message such as "Please find the xx object"; when the user is required to answer a question, the question is spoken through the virtual robot. Besides voice, the user may be informed of the set task in text, such as the text "Please help Bomb Man cool down and calm down."
In addition, the virtual maze control module 24 also receives multi-modal data for the virtual robot (e.g., expression output data) sent by the communication module 23 and, according to the expression output data, controls the virtual robot to display the expression state, or to display the expression state together with the action matched with it.
Execution parameters for facial expressions and limb actions are stored in advance in the execution parameter database D4. Taking facial expressions as an example, expression execution parameters are stored in association with different expression states. The expression states in this example mainly include happiness, anger, depression and the like. Based on facial anatomy theory, a facial action coding system and similar techniques, the relative motion parameters of each facial action area are obtained for each expression relative to the neutral state, where the neutral state is motion without emotional meaning, i.e. no movement, the face in its natural state. Then, according to the detection result and a pre-selected neutral standard mesh model, spatial mesh deformation is applied to deform the neutral standard mesh into the geometric model of the virtual robot, and these geometric models are stored. For limb actions, limb-action execution parameters are likewise stored in association with different expression states. The limb action parameters include position and orientation parameters (e.g., rotation parameters) for the torso, joint parameters for the left and right upper limbs and the left and right lower limbs, and so on. It will be readily appreciated that the database D4 may also store execution parameters for mouth movements and head movements; this example is not limiting.
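For illustration, the association between expression states and execution parameters in database D4 might look roughly like the following; every parameter name and value here is hypothetical.

```python
# Illustrative contents of the execution parameter database D4.
# Real entries would hold mesh-deformation and joint parameters per state.
EXECUTION_PARAMETERS = {
    "neutral": {
        "face": {"mouth_corner_up": 0.0, "eye_open": 0.8, "brow_raise": 0.0},
        "body": {"torso_rotation_deg": 0, "left_arm_joints": [0, 0, 0],
                 "right_arm_joints": [0, 0, 0]},
    },
    "happy": {
        "face": {"mouth_corner_up": 0.8, "eye_open": 0.9, "brow_raise": 0.3},
        "body": {"torso_rotation_deg": 0, "left_arm_joints": [20, 45, 10],
                 "right_arm_joints": [20, 45, 10]},
    },
    "angry": {
        "face": {"mouth_corner_up": -0.5, "eye_open": 0.6, "brow_raise": -0.7},
        "body": {"torso_rotation_deg": 5, "left_arm_joints": [60, 10, 0],
                 "right_arm_joints": [60, 10, 0]},
    },
}

def lookup_execution_parameters(expression_state: str) -> dict:
    """Return the facial and limb execution parameters associated with the
    requested expression state, falling back to the neutral state."""
    return EXECUTION_PARAMETERS.get(expression_state, EXECUTION_PARAMETERS["neutral"])
```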
When the virtual maze control module 24 receives the expression output data of the virtual robot, it parses the data to obtain the expression state, then retrieves the parameters of the closest model and sends them to the multi-modal output module 25, so that the required expression state can be shown well. In another example, when the virtual maze control module 24 receives the expression output data, it retrieves both the expression execution parameters and the action execution parameters and sends them to the multi-modal output module 25; after receiving the execution parameters, the multi-modal output module 25 looks up the action command in the defined command library, parses information such as the rotation angle of each key joint, and drives the corresponding joints accordingly, thus completing the virtual robot's limb action. Adding facial expressions and limb actions to the interaction between the user and the virtual robot further increases the vividness and interest of the dialogue and improves the user experience.
In addition, after receiving the theme layout data, level data and character data, the virtual maze control module 24 processes them simply and outputs them to the multi-modal output module 25 for display in a certain time sequence. For example, when it receives instruction information sent by the game server 100, such as an instruction indicating that the task is completed and the next room is to be entered or the operation is to end, the virtual maze control module 24 retrieves the data of the next room, the level data and the character data from the corresponding theme layout data and sends them to the multi-modal output module 25 for display, after which the user carries out the level-clearing operation of the next level.
Next, a multi-modal interaction flow of the system according to the embodiment of the present invention will be described with reference to fig. 4.
On the smart device 200 side, the virtual maze control module 24 starts the virtual maze scene, starts the virtual robot, determines the set task for passing through each room in the virtual maze, and outputs multi-modal output information corresponding to the set task through the virtual robot. In this scene, the multi-modal input module 21 acquires the multi-modal interaction information between the user and the virtual robot, which is preprocessed by the data processing module 22 and then sent to the game server 100 through the communication module 23. On the game server 100 side, the communication unit 120 forwards the received multi-modal data to the control unit 110, and the semantic understanding unit 111, the visual recognition unit 112 and the emotion calculation unit 113 in the control unit 110 parse the multi-modal interaction information of the set-round dialogue between the current user and the virtual robot. The decision unit 114 matches the analysis result against the interaction information corresponding to the set task of the current room; if the matching succeeds, it determines that the current room has been passed and the next room is entered or the operation ends, and sends a corresponding instruction to the smart device 200. Otherwise, the decision unit 114 decides on multi-modal data to output based on the parsing result, such as question-answer voice output for the chit-chat scenario, expression data output, or personalized data retrieved from the virtual-maze databases D1-D3. The smart device 200 receives the instructions or multi-modal data, processes them, and outputs and displays the result to the user through the multi-modal output module 25.
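The round trip just described can be summarized in a short sketch; the function and field names below are illustrative, and the matching rule (substring comparison against an expected answer) merely stands in for the decision unit's actual task-matching logic.

```python
def interaction_round(room_task: dict, parsed: dict) -> dict:
    """One round of the flow in Fig. 4, seen from the game server side.
    `parsed` holds the results of semantic understanding, visual recognition
    and emotion calculation for the current set-round dialogue."""
    expected = room_task["expected_answer"]
    if expected.lower() in parsed.get("text", "").lower():   # interaction matches the set task
        return {"instruction": "pass_current_room"}           # enter next room / end operation
    return {"multimodal_output": {
        "speech": room_task.get("hint", "Try again!"),
        "expression": "encouraging" if parsed.get("emotion") == "sad" else "neutral",
    }}

# Example: the room's set task asks the child to answer a riddle.
task = {"expected_answer": "a map", "hint": "It shows you the way..."}
print(interaction_round(task, {"text": "Is it a map?", "emotion": "happy"}))
```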
The embodiment of the invention provides a new user interaction mode, namely a method for intelligent interaction by combining a virtual maze.
The method of the present invention is described as being implemented in a computer system. The computer system may be provided in a control core processor, for example. For example, the methods described herein may be implemented as software executable with control logic that is executed by a CPU in an operating system. The functionality described herein may be implemented as a set of program instructions stored in a non-transitory tangible computer readable medium. When implemented in this manner, the computer program comprises a set of instructions which, when executed by a computer, cause the computer to perform a method capable of carrying out the functions described above. Programmable logic may be temporarily or permanently installed in a non-transitory tangible computer-readable medium, such as a read-only memory chip, computer memory, disk, or other storage medium. In addition to being implemented in software, the logic described herein may be embodied using discrete components, integrated circuits, programmable logic used in conjunction with a programmable logic device such as a Field Programmable Gate Array (FPGA) or microprocessor, or any other device including any combination thereof. All such embodiments are intended to fall within the scope of the present invention.
It is to be understood that the disclosed embodiments of the invention are not limited to the process steps disclosed herein, but extend to equivalents thereof as would be understood by those skilled in the relevant art. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
Although the embodiments of the present invention have been described above, the above description is only for the convenience of understanding the present invention, and is not intended to limit the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.