Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements, or to elements having the same or similar functions, throughout. The embodiments described below with reference to the drawings are illustrative, are intended to explain the invention, and are not to be construed as limiting the invention.
The following describes a method and an apparatus for updating an expression in an application program according to an embodiment of the present invention with reference to the drawings.
To facilitate an understanding of the present invention, before explaining specific embodiments of the present invention in detail, terms that may be used in the present invention are first explained as follows:
A social emoticon is a figure or image used to express an emotion when a user socializes in chat software or game software, such as the various emoticons shown in fig. 1(a).
A 3D chat expression is an expression presented by the face of a virtual character when the user socializes in chat software or game software, such as happiness, anger, sadness, or surprise, as shown by the various expressions in fig. 1(b).
Fig. 2 is a flowchart illustrating a method for updating an expression in an application program according to an embodiment of the present invention. The method may be applied to an intelligent terminal or a server. The intelligent terminal may be an intelligent electronic device with a camera, such as a mobile phone or a tablet computer, and has installed an application program with a social function. The following describes the method in detail by taking its application to an intelligent terminal as an example.
As shown in fig. 2, the method for updating the expression in the application program includes the following steps:
S11. Detect a face and acquire a first image including the face while the application program is running.
While the user is using the application program, the application program may invoke the front-facing camera of the intelligent terminal to detect a face and, when a face is detected, use the front-facing camera to acquire a first image including the face.
S12. Extract expression data from the first image.
The expression data represents the current facial expression of the face and may include feature information of each organ in the user's current facial expression, including but not limited to feature information of the forehead, the eyebrows, the eyes, and the like.
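Expression data of this kind can be modeled as a simple per-organ feature record. A minimal Python sketch follows; the class and field names are illustrative, not taken from the embodiment:

```python
from dataclasses import dataclass, field


@dataclass
class OrganFeatures:
    """Feature information for one facial organ (illustrative fields)."""
    name: str                                        # e.g. "eyebrows", "eyes", "mouth"
    descriptors: dict = field(default_factory=dict)  # e.g. {"corners": "widened"}


@dataclass
class ExpressionData:
    """Expression data representing the current facial expression."""
    organs: dict  # organ name -> OrganFeatures

    def feature(self, organ: str, key: str, default=None):
        """Look up one feature descriptor, returning `default` when absent."""
        entry = self.organs.get(organ)
        return entry.descriptors.get(key, default) if entry else default


data = ExpressionData(organs={
    "eyebrows": OrganFeatures("eyebrows", {"shape": "slightly bent downward"}),
    "mouth": OrganFeatures("mouth", {"corners": "widened", "teeth": "exposed"}),
})
print(data.feature("mouth", "corners"))  # -> widened
```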
When a user uses an application with a social function, such as chatting with friends in social software such as WeChat or QQ, or communicating with other players while playing Fight the Landlord, the user's true emotion often shows on the face. When the user feels happy, the facial expression is happy; when the user feels awkward, the facial expression often looks awkward. That is, the user's facial expression reflects the emotional changes in the user's mind.
Therefore, in this embodiment, while the user uses the application program, the camera built into the intelligent terminal running the application program may be used to acquire the first image including the face, and the user's expression data may be extracted from the first image.
Specifically, the front-facing camera built into the intelligent terminal can capture changes in the user's facial muscles in real time, and corresponding expression data can be acquired from those changes using a face recognition technique. Alternatively, a photo or picture may be read from the terminal's gallery, or a picture drawn by the user may be captured by the camera, to serve as the first image; a face recognition technique then identifies the face in the photo or picture, and expression data representing the user's facial expression is obtained by extracting the facial features of that face.
S13. Acquire a target expression matching the expression data.
In this embodiment, after the expression data is extracted, the target expression matching the expression data may be acquired. As an example, assume the expression data includes feature information of the forehead, the eyebrows, the eyes, and the lower half of the face; facial expressions may then be defined in advance, as shown in Table 1. The acquired expression data is matched against the feature information of each part shown in Table 1, at least one part containing multiple pieces of feature information. When the feature information of every part of one expression matches, that expression is determined to be the target expression corresponding to the user's expression data, and a social emoticon or a 3D expression is then acquired as the target expression from the expression library shown in fig. 1(a) or fig. 1(b). For example, when the expression data indicates that the eyebrows are slightly bent downward, crow's feet spread outward from the outer corners of the eyes, the mouth corners are widened, and the teeth are exposed, matching this data against the feature information of each part under each expression in Table 1 determines the matching expression to be "happy"; "happy" is the target expression and is then selected from the expression library shown in fig. 1(a).
TABLE 1
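Since the table's contents are not reproduced in this text, the matching step of S13 can only be sketched against a hypothetical Table-1-like structure mapping each expression to per-organ feature descriptions; all entries below are illustrative:

```python
# Hypothetical Table-1-style definitions: expression -> organ -> feature description.
EXPRESSION_TABLE = {
    "happy": {"eyebrows": "slightly bent downward",
              "eyes": "crow's feet spread outward",
              "mouth": "corners widened, teeth exposed"},
    "angry": {"eyebrows": "wrinkled and pressed together",
              "eyes": "wide open",
              "mouth": "lips closed, corners down"},
}


def match_target_expression(expression_data):
    """Return the expression whose feature info matches on every organ, else None.

    expression_data: dict mapping organ name -> observed feature description.
    """
    for name, organs in EXPRESSION_TABLE.items():
        if all(expression_data.get(organ) == feature
               for organ, feature in organs.items()):
            return name
    return None
```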
S14. Update the expression of the target virtual character in the application program with the target expression.
In this embodiment, after the target expression matching the expression data is obtained, the expression of the target virtual character in the application program may be updated with the target expression. The target virtual character is the virtual character used by the current user in the application program, for example, the current user's player character in Fight the Landlord.
For example, when the acquired expression data indicates that the eyebrows are wrinkled and pressed together, the eyes are wide open, the lips are closed with the lip corners turned down, and the nostrils are flared, matching against the feature information of each part in Table 1 determines the matching expression to be "angry", and the expression of the user's target virtual character in the application program can accordingly be updated to angry.
In the method for updating an expression in an application program described above, a face is detected and a first image including the face is acquired while the application program runs, expression data is extracted from the first image, a target expression matching the expression data is acquired, and the expression of the target virtual character in the application program is updated with the target expression. The expression the user wants can thus be identified automatically from the user's facial expression, without manual selection, which reduces the operational burden in the social process and improves user experience and social interaction.
While using an application program with a social function, the user often wants to send an expression that conveys a real emotion. For example, when playing Fight the Landlord, if the user or a teammate plays a strong hand, the user may want to send a "happy" expression to the opposite-end user to express excitement. To automatically display a "happy" expression in the dialog box corresponding to the user's virtual character in the application program, an embodiment of the present invention provides another method for updating an expression in an application program; fig. 3 is a flowchart of this method according to another embodiment of the present invention.
As shown in fig. 3, on the basis of the embodiment shown in fig. 2, step S14 may include the following steps:
S21. Display the target expression in the expression display area corresponding to the target virtual character, replacing the expression currently displayed in that area.
In this embodiment, after the matched target expression is obtained according to the expression data, the target expression may be displayed in an expression display area corresponding to the target virtual character in the application program.
Specifically, when the expression display area corresponding to the target virtual character in the application program shows no expression, the target expression may be displayed there directly; when the area already shows an expression, the matched target expression replaces the currently displayed one.
As an example, as shown in fig. 4(a), while the user is playing Fight the Landlord, a teammate plays a relatively strong card, the user feels happy, and the user's current facial expression shows that happiness. The intelligent terminal captures the user's facial expression through the front camera, acquires a first image including the face, extracts the user's expression data from the first image, determines from the expression data that the target expression is "happy", and displays the expression "happy" in the expression display area 401 corresponding to the target virtual character.
Further, to help the user act on the target expression displayed in the expression display area, in one possible implementation of the embodiment of the present invention a "send" button may be placed at a suitable position outside the expression display area corresponding to the virtual character, as shown in fig. 4(b). When the user triggers the "send" button 402, the intelligent terminal sends out the target expression displayed in the expression display area, as shown in fig. 4(c), and simultaneously displays it in the dialog box 403 corresponding to the user's target virtual character. The sent target expression is forwarded, via the server hosting the application program, to the intelligent terminal of the opposite-end user and is displayed in the corresponding application program there.
To further improve user experience, in one possible implementation of the embodiment of the present invention the application program may be configured to send out the target expression directly after it is displayed in the expression display area corresponding to the target virtual character, without any user operation, which further reduces the user's involvement and improves user experience.
According to the method for updating the expression in the application program, the target expression is displayed in the expression display area corresponding to the target virtual character to replace the currently displayed expression in the expression display area, so that the expression which meets the current mood of the user can be automatically displayed to the user without manual selection of the user, and the user experience is improved.
Preferably, to make the social process more realistic and enhance its social attributes, the user's facial expression may be displayed visually through the virtual character. An embodiment of the present invention therefore provides another method for updating an expression in an application program; fig. 5 is a flowchart of this method according to yet another embodiment of the present invention.
As shown in fig. 5, on the basis of the embodiment shown in fig. 2, step S14 may include the following steps:
S31. Replace the current facial expression of the target virtual character with the target expression.
In this embodiment, after the target expression matched with the expression data is obtained, the current facial expression of the target virtual character may be replaced by the target expression, so that the target virtual character presents the facial expression the same as or similar to that of the user.
For example, the obtained expression data may be compared with the feature information of each part corresponding to each expression shown in Table 1. When the feature information of every part of one expression matches the obtained expression data, that expression is determined to be the target expression; the target expression is then selected from the expression library shown in fig. 1(b) and used to replace the current facial expression of the target virtual character.
Fig. 6(a) is a schematic diagram of the process of matching a target expression. As shown in fig. 6(a), facial expression data of the player, i.e., the user, such as feature information of the mouth, eyes, and eyebrows, may be acquired through the front camera of a smartphone. The smartphone processes the acquired expression data, matches the corresponding expression, and selects the matched expression from the expression library. As can be seen from fig. 6(a), the user's current facial expression is happy, so after processing the expression data the smartphone can match an expression similar to the user's facial expression from the expression library. Finally, the matched expression is presented by the user's target virtual character, i.e., the matched target expression replaces the virtual character's current facial expression, producing the display effect shown in fig. 6(b).
According to this method for updating an expression in an application program, replacing the current facial expression of the virtual character with the target expression enables 3D display of the target expression, enhances the intuitiveness and realism of the expression display, and further improves user experience.
Further, in order to more clearly illustrate the implementation process of acquiring the target expression matched with the expression data in the above embodiment, the embodiment of the present invention provides two possible implementation manners of acquiring the target expression matched with the expression data.
As one possible implementation manner, as shown in fig. 7, on the basis of the foregoing embodiment, acquiring the target expression matched with the expression data may include the following steps:
S41. Match the expression data against a preset expression library, and obtain the degree of match between the first expression represented by the expression data and each expression in the library.
In this embodiment, after the user's expression data is obtained, it may be compared with the feature information of each part under each expression shown in Table 1 to obtain the first expression represented by the expression data; the first expression is then matched against a preset expression library to obtain its degree of match with each expression in the library.
Specifically, in one possible implementation of the embodiment of the present invention, matching the expression data in a preset expression library to obtain the degree of match between the first expression and each expression in the library may include: for each facial organ in the first expression, matching the feature data of that organ against the first feature data of the same organ in each expression in the library to obtain a per-organ matching degree; and, for each expression in the library (a second expression, which may be any expression in the library), weighting the matching degree of each facial organ of the first expression against the corresponding organ of the second expression by its preset weight to obtain the matching degree between the first expression and the second expression.
That is, for each expression in the expression library, each of its facial organs is matched against the corresponding facial organ of the first expression to obtain a per-organ matching degree; these per-organ matching degrees are then weighted and summed according to each organ's preset weight, yielding the matching degree between that library expression and the first expression.
Computing per-organ matching degrees first and then the matching degree of each expression improves the accuracy of expression matching.
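The weighted per-organ matching of S41 and the selection of S42 can be sketched as follows; the organ weights and the per-organ similarity measure are assumptions, not values from the embodiment:

```python
# Assumed preset weights for the facial organs (must sum to 1 in this sketch).
ORGAN_WEIGHTS = {"eyebrows": 0.3, "eyes": 0.3, "mouth": 0.4}


def organ_match(features_a, features_b):
    """Toy per-organ matching degree: fraction of identical feature values."""
    keys = set(features_a) | set(features_b)
    if not keys:
        return 0.0
    hits = sum(1 for k in keys if features_a.get(k) == features_b.get(k))
    return hits / len(keys)


def expression_match(first_expr, library_expr, weights=ORGAN_WEIGHTS):
    """Weighted sum of per-organ matching degrees (first expression vs. one library expression)."""
    return sum(w * organ_match(first_expr.get(o, {}), library_expr.get(o, {}))
               for o, w in weights.items())


def best_match(first_expr, library):
    """S42: pick the library expression with the highest matching degree."""
    return max(library, key=lambda name: expression_match(first_expr, library[name]))
```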
S42. Take the expression in the expression library with the highest matching degree as the target expression.
In this embodiment, after the matching degree between each expression in the expression library and the first expression represented by the expression data is obtained, the target expression may be determined from these matching degrees: the library expression with the highest matching degree with the first expression is taken as the target expression.
In this method for updating an expression in an application program, the expression data is matched in a preset expression library to obtain the matching degree between the first expression it represents and each expression in the library, and the library expression with the highest matching degree is taken as the target expression, which improves the accuracy of expression matching.
As another possible implementation manner, as shown in fig. 8, on the basis of the foregoing embodiment, acquiring the target expression matched with the expression data may include the following steps:
S51. Select one or more facial organs from all facial organs as matching facial organs.
Since a human face includes multiple facial organs, such as the nose, mouth, and eyes, the relevant facial organs differ between expressions. For example, as can be seen from Table 1, for a "sad" expression the most relevant facial organs are the eyebrows, eyes, and mouth, but not the nose; for a "disgusted" expression the most relevant organs are the eyebrows, eyes, mouth, nose, and so on.
Therefore, in this embodiment, one or more facial organs may be selected from all facial organs included in the expression data as matching facial organs for matching with expressions in the expression library.
S52. Match the feature data of the matching facial organs against the expression library, and take all expressions that include this feature data as a candidate expression set.
Wherein each candidate expression in the set of candidate expressions comprises feature data of a matching facial organ.
In this embodiment, after the matched facial organs are extracted from the expression data, matching may be performed in the expression library according to the feature data of the matched facial organs, so as to obtain a candidate expression set, where feature information of the facial organs of each expression in the candidate expression set is consistent with the feature data of the matched facial organs.
S53. Screen the candidate expression set using the feature data of the remaining facial organs to obtain the target expression.
In this embodiment, after obtaining the candidate expression set according to the matching of the matched facial organs, the candidate expression set may be further screened according to the feature data of the remaining facial organs except the matched facial organs in the expression data, so as to obtain the target expression from the candidate expression set.
Specifically, screening the candidate expression set with the feature data of the remaining facial organs to obtain the target expression may include: extracting the first feature data of the remaining facial organs from the candidate expression set; comparing the feature data of the remaining facial organs with this first feature data to determine whether the candidate expression set contains a target candidate expression, i.e., one whose first feature data for the remaining facial organs is consistent with the feature data; and, if such a target candidate expression exists, taking it as the target expression.
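The two-stage lookup of steps S51-S53 can be sketched as follows, assuming expressions are stored as organ-to-feature mappings; the data layout is illustrative:

```python
def find_target_expression(expression_data, library, matching_organs):
    """Two-stage lookup sketched from S51-S53.

    1) Keep library expressions that agree on the matching organs (candidate set).
    2) Screen the candidates with the remaining organs' feature data.
    Returns the first fully consistent expression name, or None.
    """
    # Stage 1: build the candidate expression set from the matching organs only.
    candidates = {
        name: organs for name, organs in library.items()
        if all(organs.get(o) == expression_data.get(o) for o in matching_organs)
    }
    # Stage 2: screen candidates with the remaining organs' feature data.
    remaining = [o for o in expression_data if o not in matching_organs]
    for name, organs in candidates.items():
        if all(organs.get(o) == expression_data.get(o) for o in remaining):
            return name
    return None
```

Restricting stage 1 to a few organs keeps the initial scan cheap, while stage 2 restores accuracy by checking the rest.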
In the method for updating an expression in an application program of this embodiment, one or more facial organs are selected as matching facial organs, their feature data is matched in the expression library to obtain all expressions that include that feature data as a candidate expression set, and the candidate set is then screened with the feature data of the remaining facial organs to obtain the target expression. Using the matching facial organs to build a candidate expression set reduces the complexity and computational cost of expression matching; determining the target expression from the candidate set with the feature data of the remaining organs improves matching accuracy.
To display the target expression synchronously on the opposite-end device, an embodiment of the present invention provides another method for updating an expression in an application program; fig. 9 is a flowchart of this method according to yet another embodiment of the present invention.
As shown in fig. 9, the method for updating an expression in an application program may include the following steps:
S61. Detect a face and acquire a first image including the face while the application program is running.
S62. Extract expression data from the first image.
The expression data is used for representing the current facial expression of the human face.
Specifically, extracting the expression data from the first image may include: extracting a face image from the first image; identifying each facial organ in the face image and extracting its feature data; and forming the expression data from the feature data of all facial organs.
As an example, fig. 10 is a schematic diagram of the process of acquiring the user's expression data. As shown in fig. 10, a face image is acquired first, either with the built-in camera of the intelligent terminal or by reading a photo containing a face from a static file such as the gallery. The acquired image is preprocessed so that the face can be detected accurately. Face detection is then performed on the preprocessed image using a face recognition technique, and the face image is cropped from the original image in proportions that depend on the facial feature extraction and expression classification methods used. Depending on the expression feature extraction method, the face image is further preprocessed, including geometric processing and grayscale processing. Finally, expression features are extracted from the preprocessed face image to obtain the expression data.
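The pipeline of fig. 10 can be outlined as follows; the face detector and feature extractor are injected as placeholders, since the embodiment leaves the concrete recognition techniques open:

```python
import numpy as np


def preprocess(face_img):
    """Grayscale + normalization preprocessing (geometric steps omitted for brevity)."""
    gray = face_img.mean(axis=2) if face_img.ndim == 3 else face_img
    return gray / 255.0


def extract_expression_data(first_image, detect_face, extract_features):
    """Fig. 10 pipeline: detect and crop the face, preprocess it, extract features.

    detect_face(image) -> (x, y, w, h) or None; extract_features(face) -> dict.
    Both are placeholders for the recognition techniques used in practice.
    """
    box = detect_face(first_image)
    if box is None:
        return None
    x, y, w, h = box
    face = first_image[y:y + h, x:x + w]       # crop the face region
    return extract_features(preprocess(face))  # e.g. {"mouth": ..., "eyes": ...}
```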
S63. Acquire the target expression matching the expression data.
In this embodiment, a social emoticon or a 3D expression may be obtained, according to the expression data, from the expression library shown in fig. 1(a) or fig. 1(b); for the specific acquisition process, refer to the description in the foregoing embodiments, which is not repeated here.
S64. Update the expression of the target virtual character in the application program with the target expression.
It should be noted that, in the embodiment of the present invention, the description of step S63 to step S64 may refer to the description of step S13 to step S14 in the foregoing embodiment, and the implementation principle thereof is similar, and is not described herein again.
S65. Synchronize the target expression to the opposite-end device according to the login information of the application program on that device, and update the expression of the target virtual character on the opposite-end device.
When a user starts the application program, for example, enters the Fight the Landlord client to start a game, the intelligent terminal held by the user must establish a communication connection with the intelligent terminal of the opposite-end user (the opposite-end device); communication between the two terminals uses the server to which the application program belongs as a relay. After the connection is established, the server can send the login information of the application program on the opposite-end device to the user's intelligent terminal, which stores it in memory upon receipt.
After the target expression is obtained, the user's intelligent terminal can retrieve the login information of the application program on the opposite-end device from storage, synchronize the target expression to the opposite-end device according to that login information, and simultaneously update the expression of the user's target virtual character in the application program on the opposite-end device.
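A minimal sketch of the synchronization step S65 follows, assuming the server relay accepts JSON messages; the message fields, the shape of the login information, and the transport function are all hypothetical:

```python
import json


def sync_target_expression(expression_id, peer_login_info, send_to_server):
    """S65 sketch: package the target expression with the opposite end's login
    info and hand it to the server transport.

    peer_login_info: assumed dict with "session_id" and "user_id" keys.
    send_to_server: injected transport callable (e.g. a socket or HTTP client).
    """
    message = {
        "type": "expression_update",
        "expression_id": expression_id,
        "target_session": peer_login_info["session_id"],
        "target_user": peer_login_info["user_id"],
    }
    return send_to_server(json.dumps(message))
```

The server, acting as the relay described above, would route the message to the opposite-end device, which then updates the target virtual character's expression locally.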
According to this method for updating an expression in an application program, a face is detected and a first image including the face is acquired while the application program runs, expression data is extracted from the first image, the matching target expression is acquired, and the target expression is synchronized to the opposite-end device according to the login information of the application program on that device, updating the expression of the target virtual character there. The expression can thus be displayed synchronously, so that the recognized expression appears on the opposite-end user's intelligent terminal, which further strengthens the social attributes of the application program and adds interest.
The foregoing embodiments describe in detail the implementation of the method for updating an expression in an application program when it is applied to an intelligent terminal, but this does not mean the method can only be applied to an intelligent terminal; it should be understood that the method may also be applied to a server.
When the method for updating an expression in an application program is executed by a server, the intelligent terminal still acquires the user's expression data while the user uses the application program, and sends the acquired expression data to the server. After receiving the expression data, the server recognizes the corresponding expression from it and updates the expression of the user's target virtual character in the application program accordingly.
It should be noted that the foregoing description of recognizing an expression from expression data and updating the virtual character's expression on the intelligent terminal also applies when the method is executed by a server; the implementation principle is similar, only the executing entity differs, so the server-side implementation is not described further.
When the method for updating an expression in an application program is executed by a server, the server obtains the user's expression data from the intelligent terminal, recognizes the corresponding expression from it, and updates the expression of the user's target virtual character in the application program. The expression is thus recognized automatically while avoiding the lag that excessive memory usage would cause if the intelligent terminal executed the method itself, further improving user experience.
To implement the foregoing embodiments, the present invention further provides a system for updating an expression in an application program; fig. 11 is a hardware architecture diagram of this system according to an embodiment of the present invention. As shown in fig. 11, the system includes the following hardware: a power supply for powering each hardware component; a memory for storing the programs required to implement the method, static file resources, acquired information, and the like; a processor for processing the face image and acquiring expression data; an expression matching unit, mainly for matching the target expression according to the expression data; an input unit, mainly comprising input devices such as a camera and sensors, for providing the acquired face image to the processor; and an execution unit, comprising a display panel, mainly for displaying the recognized target expression or replacing the current facial expression of the target virtual character with the target expression.
Through the system for updating the expression in the application program of the embodiment, the expression required by the user can be automatically identified according to the facial expression of the user, manual selection of the user is not needed, the complex operation degree in the social process is reduced, and the user experience and social interaction are improved.
In order to implement the above embodiments, the present invention further provides an apparatus for updating an expression in an application.
Fig. 12 is a schematic structural diagram of an apparatus for updating an expression in an application according to an embodiment of the present invention.
As shown in fig. 12, the apparatus 10 for updating an expression in an application program includes an obtaining module 110, an expression extraction module 120, an expression matching module 130, and an updating module 140.
the obtaining module 110 is configured to detect a face and acquire a first image including the face during an operation process of an application program.
The expression extraction module 120 is configured to extract expression data from the first image.
The expression data is used for representing the current facial expression of the human face.
Optionally, in a possible implementation manner of the embodiment of the present invention, the expression extraction module 120 is specifically configured to extract a face image from the first image; identify each facial organ in the face image and extract feature data for each facial organ; and form the expression data from the feature data of all facial organs.
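The extraction flow above can be sketched as follows. This is an illustrative example only: the organ list, the feature representation, and the function names are assumptions for demonstration and are not part of the disclosed embodiment; a real implementation would obtain the per-organ feature data from a face-landmark detector.

```python
# Assumed organ set; the embodiment does not enumerate the facial organs.
ORGAN_NAMES = ("eyebrows", "eyes", "nose", "mouth")

def extract_expression_data(organ_features):
    """Form expression data from per-organ feature data.

    organ_features: dict mapping an organ name to its feature vector
    (e.g. geometric measurements such as mouth openness or brow height),
    as produced by identifying each facial organ in the face image.
    """
    expression_data = {}
    for organ in ORGAN_NAMES:
        if organ not in organ_features:
            raise ValueError(f"missing feature data for {organ}")
        # Copy the feature vector so later processing cannot mutate the input.
        expression_data[organ] = list(organ_features[organ])
    return expression_data
```

The resulting dictionary is the "expression data" consumed by the expression matching module described below.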
And the expression matching module 130 is configured to acquire a target expression matched with the expression data.
Optionally, in a possible implementation manner of the embodiment of the present invention, the expression matching module 130 is specifically configured to perform matching in a preset expression library according to the expression data, obtain a matching degree between a first expression represented by the expression data and each expression in the expression library, and take the expression with the highest matching degree in the expression library as the target expression.
Further, when the expression matching module 130 obtains the matching degree between the first expression represented by the expression data and each expression in the expression library, it may, for each facial organ in the first expression, match the feature data of that facial organ against the first feature data of the same facial organ in each expression in the expression library to obtain a matching degree for that organ. Then, for each second expression in the expression library, the matching degrees of the facial organs in the first expression are weighted by their respective preset weights and summed to obtain the matching degree between the first expression and the second expression, where the second expression is any one expression in the expression library.
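The weighted matching described above can be sketched as follows. The per-organ weights and the similarity measure are assumptions chosen for illustration; the embodiment specifies only that per-organ matching degrees are weighted by preset weights and combined.

```python
# Assumed preset weights per facial organ (must sum to 1 for a normalized score).
ORGAN_WEIGHTS = {"eyebrows": 0.2, "eyes": 0.3, "nose": 0.1, "mouth": 0.4}

def organ_match_degree(features_a, features_b):
    """Similarity of two feature vectors in (0, 1]; 1 means identical.

    An L1-distance-based measure is assumed here for illustration.
    """
    dist = sum(abs(a - b) for a, b in zip(features_a, features_b))
    return 1.0 / (1.0 + dist)

def expression_match_degree(first_expression, library_expression):
    """Weighted sum of per-organ matching degrees."""
    total = 0.0
    for organ, weight in ORGAN_WEIGHTS.items():
        total += weight * organ_match_degree(first_expression[organ],
                                             library_expression[organ])
    return total

def best_match(first_expression, expression_library):
    """Return the name of the library expression with the highest matching degree."""
    return max(expression_library,
               key=lambda name: expression_match_degree(first_expression,
                                                        expression_library[name]))
```

For example, an observed expression whose organ features coincide with the library's "smile" entry yields a matching degree of 1.0 for that entry, so `best_match` selects it as the target expression.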
Optionally, in a possible implementation manner of the embodiment of the present invention, the expression matching module 130 is specifically configured to select one or more facial organs from all facial organs as matching facial organs; search the expression library according to the feature data of the matching facial organs to obtain, as a candidate expression set, all expressions that include the feature data of the matching facial organs, wherein each candidate expression in the candidate expression set includes the feature data of the matching facial organs; and screen the candidate expression set using the feature data of the remaining facial organs to obtain the target expression.
Further, when the expression matching module 130 screens the candidate expression set to obtain the target expression, it may extract the first feature data of the remaining facial organs from the candidate expression set; compare the feature data of the remaining facial organs with the first feature data to judge whether a target candidate expression exists in the candidate expression set, the target candidate expression being one whose first feature data for the remaining facial organs is consistent with the observed feature data; and, if the target candidate expression exists, take the target candidate expression as the target expression.
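The two-stage screening above can be sketched as follows. The function names and the exact-equality notion of "consistent" feature data are assumptions for illustration; a real implementation might use a tolerance when comparing features.

```python
def candidate_set(expression_library, matching_organs, expression_data):
    """Stage 1: keep library expressions whose matching-organ feature data
    equals the observed feature data for every selected matching organ."""
    return {
        name: expr
        for name, expr in expression_library.items()
        if all(expr.get(o) == expression_data[o] for o in matching_organs)
    }

def screen_candidates(candidates, remaining_organs, expression_data):
    """Stage 2: among the candidates, return the name of a target candidate
    expression whose remaining-organ feature data is also consistent with
    the observed expression data, or None if no such candidate exists."""
    for name, expr in candidates.items():
        if all(expr.get(o) == expression_data[o] for o in remaining_organs):
            return name
    return None
```

Selecting a distinctive organ (e.g. the mouth) as the matching organ narrows the library quickly, and the remaining organs then disambiguate among the candidates.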
And the updating module 140 is configured to update the expression of the target virtual character in the application program by using the target expression.
Optionally, in a possible implementation manner of the embodiment of the present invention, the updating module 140 is specifically configured to replace the current facial expression of the target virtual character with the target expression.
Optionally, in a possible implementation manner of the embodiment of the present invention, the updating module 140 is further specifically configured to display the target expression in the expression display area corresponding to the target virtual character to replace the expression currently displayed in that area.
Optionally, in another possible implementation manner of the embodiment of the present invention, the updating module 140 is further configured to synchronize the target expression to the peer device according to the login information of the application on the peer device, and to update the expression of the target virtual character on the peer device.
It should be noted that the foregoing explanation of the method embodiment for updating an expression in an application program is also applicable to the apparatus for updating an expression in an application program of this embodiment, and the implementation principle is similar, and is not described herein again.
The apparatus for updating an expression in an application program of this embodiment detects a face and acquires a first image including the face while the application program is running, extracts expression data from the first image, acquires a target expression matched with the expression data, and updates the expression of the target virtual character in the application program with the target expression. In this way, the expression required by the user can be automatically identified from the user's facial expression without manual selection by the user, which reduces operational complexity in the social process and improves user experience and social interaction.
In order to implement the foregoing embodiments, the present invention further proposes a non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for updating an expression in an application program as described in the foregoing embodiments.
In order to implement the foregoing embodiments, the present invention further provides a computer program product, wherein when instructions in the computer program product are executed by a processor, the method for updating expressions in an application program according to the foregoing embodiments is performed.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flowcharts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing steps of a custom logic function or process. Alternate implementations are included within the scope of the preferred embodiments of the present invention, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Although embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention, and that variations, modifications, substitutions, and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.