Disclosure of Invention
The invention aims to provide a fatigue state detection method and system based on key point detection and head pose, so as to solve the technical problem of improving the real-time performance of automatic facial fatigue state detection. Because face key point detection and head pose estimation are correlated tasks, they can share one network: the invention adopts an MMC multi-task prediction model whose backbone is a depthwise separable convolutional network, places the two tasks in the same network, and optimizes them simultaneously under a combined learning target. This greatly reduces the required number of parameters and amount of computation, improves the detection speed of the model, and thus achieves a real-time effect.
The invention is realized by the following technical scheme:
In one aspect, the invention provides a fatigue state detection method based on key point detection and head pose, comprising the following steps:
constructing and training an MMC multi-task prediction model whose backbone network is a depthwise separable convolutional network, to obtain a trained MMC multi-task prediction model;
acquiring a plurality of frames of face images in a unit time, each frame being taken as one image, detecting the face position in each image with an MTCNN network, and cutting out the head image;
inputting the head image into a trained MMC multitask prediction model to obtain the head attitude angle and the position information of the key points of the human face;
Judging the fatigue states of the heads in the images by using a double-threshold method according to the head posture angle of each image, and judging the fatigue states of the eyes and the mouths in the images by using the double-threshold method according to the position information of the eyes and the mouths in the face key points of each image;
comprehensively judging the fatigue state of the person according to the fatigue states of the head, eyes and mouth.
When detecting whether a person is fatigued, the judgment is generally based on the state of facial parts. However, the features of the eyes, mouth and head are complex, common deep convolutional neural network models pay little attention to image spatial information, and once a separate algorithm is built on top of the detected features to judge fatigue, the resulting convolutional neural network model becomes large, so a real-time effect is difficult to achieve in prediction.
The invention therefore exploits the fact that face key point detection and head pose estimation are both face-related and depend on the same latent facial features, so the two tasks can be carried out simultaneously in one network during training and prediction. When one network is shared, the head pose information improves the accuracy of locating the face key points, and the located key points in turn reflect the head pose; this strong correlation benefits both tasks.
Concretely, an MTCNN network first detects the face position and cuts out the head image; the preprocessed head image is input into the MMC multi-task prediction model to obtain the head attitude angle and the position information of face key points such as the eyes and mouth. During prediction, a double-threshold method determines the fatigue state of each part: the regressed head attitude angle is used to judge the head state, and the regressed coordinates of the eye and mouth feature points among the face key points are used to judge the states of the eyes and mouth respectively; finally, the fatigue state of the person is judged comprehensively. Since the backbone network of the model adopts a depthwise separable convolution architecture, the two tasks are realized in one network at the same time, the number of parameters and the amount of computation are greatly reduced, the detection speed of the model is improved, and a real-time effect can be achieved.
Further, when training the MMC multi-task prediction model, training is performed with the 300W_LP dataset, which carries face keypoint coordinates and head pose angle labels; before training the MMC multi-task prediction model with the 300W_LP dataset, the images in the dataset are preprocessed, which comprises:
cropping the redundant background in each image according to the face keypoint coordinates in the dataset, resizing the cropped images to a uniform 224x224, and performing graying and normalization on the uniformly sized images.
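The preprocessing steps above can be sketched as follows; the crop margin ratio and the nearest-neighbour resize are illustrative choices, not specified by the invention:

```python
import numpy as np

def preprocess(image, keypoints, out_size=224, margin=0.2):
    """Crop the face region around the keypoints, resize to out_size x out_size,
    convert to grayscale and normalize to [0, 1].

    image: HxWx3 uint8 array; keypoints: (N, 2) array of (x, y) coordinates.
    """
    h, w = image.shape[:2]
    x0, y0 = keypoints.min(axis=0)
    x1, y1 = keypoints.max(axis=0)
    mx, my = (x1 - x0) * margin, (y1 - y0) * margin   # expand the keypoint bbox
    x0, y0 = max(int(x0 - mx), 0), max(int(y0 - my), 0)
    x1, y1 = min(int(x1 + mx), w), min(int(y1 + my), h)
    crop = image[y0:y1, x0:x1]

    # nearest-neighbour resize to a uniform out_size x out_size
    ys = np.arange(out_size) * crop.shape[0] // out_size
    xs = np.arange(out_size) * crop.shape[1] // out_size
    resized = crop[ys][:, xs]

    gray = resized.mean(axis=2)                 # graying
    return gray.astype(np.float32) / 255.0      # normalization to [0, 1]
```

In practice a library resize (e.g. with interpolation) would be used; this sketch only mirrors the crop/resize/gray/normalize order stated above.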
Further, the process of training the MMC multitasking prediction model includes two tasks:
The face key point detection task locates the positions of the facial feature points according to the face key point coordinates in the image, measures the difference between the predicted and true coordinate values of the feature points with an L2 loss function lossa, and obtains the position information of the face key points by regression;
The head estimation task is used for predicting the angles of the head in the image in the three directions yaw, pitch and roll according to the head pose angle labels in the image, and the loss function is as follows:
lossb = (x̂1 - x1)^2 + (x̂2 - x2)^2 + (x̂3 - x3)^2 (1)
wherein (x̂1, x̂2, x̂3) is the estimation result of the head attitude angles in the three directions yaw, pitch and roll, and (x1, x2, x3) is the head attitude angle label in the three directions;
training is performed with the total loss of the MMC multi-task prediction model as the learning target, where the total loss is the sum of the losses of the face key point detection task and the head estimation task:
loss = lossa + ηlossb (2)
wherein η is the task allocation weight, set to 1.
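As a sketch, the combined learning target can be written as follows; the sum-of-squares form of lossb is an assumption consistent with the L2 keypoint loss:

```python
import numpy as np

def mmc_total_loss(pred_kpts, true_kpts, pred_angles, true_angles, eta=1.0):
    """Total learning target: L2 keypoint loss (lossa) plus eta times the
    head-pose angle loss (lossb) over (yaw, pitch, roll)."""
    loss_a = np.sum((pred_kpts - true_kpts) ** 2)      # keypoint coordinates
    loss_b = np.sum((pred_angles - true_angles) ** 2)  # three pose angles
    return loss_a + eta * loss_b
```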
Further, the process of obtaining the head attitude angle and the position information of the key points of the human face is as follows:
the backbone network of the MMC multi-task prediction model extracts and fuses the features of the input head image to obtain a feature map; the backbone network adopts an improved lightweight MobileNet-V2 convolutional network structure,
meanwhile, the CA attention module embedded in the backbone network pools the feature map along the horizontal and vertical directions respectively to obtain the position information of the feature map;
and according to the position information of the feature map, the two fully connected layers behind the backbone network regress the head attitude angle and the position information of the face key points, respectively.
Further, the first layer of the improved lightweight MobileNet-V2 network structure performs feature extraction on the input head image with 1x1, 3x3 and 5x5 convolution kernels, the convolution stride is set to 1, the pads corresponding to the kernels are set to 0, 1 and 2 respectively, and the size of the improved lightweight MobileNet-V2 network structure is 4M.
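The choice of stride 1 with pads 0, 1 and 2 can be checked with the standard convolution output-size formula; the helper below is purely illustrative:

```python
def conv_out(size, kernel, stride, pad):
    """Spatial output size of a convolution: floor((W + 2p - k) / s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# With stride 1 and pads 0/1/2, the 1x1, 3x3 and 5x5 branches all keep the
# 224x224 resolution, so their feature maps can be concatenated directly.
for k, p in [(1, 0), (3, 1), (5, 2)]:
    assert conv_out(224, k, 1, p) == 224
```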
Further, for the plurality of continuous face images within the unit time, the fatigue state of each part is judged by a double-threshold method, which comprises the following steps:
judging the fatigue state of each image head:
according to the head attitude angle of each image, judging whether the pitch attitude angle exceeds 30 degrees (head lowered); if so, the head in that image is judged to be in a fatigue state, and if the images whose head is in a fatigue state exceed 30% of all images, the head is judged to be in a fatigue state;
judging the fatigue state of eyes of each image:
calculating the eye aspect ratio according to the position information of the eye key points of each image, and judging whether it is smaller than 0.2; if so, the eyes in that image are judged to be in a fatigue state, and if the images whose eyes are in a fatigue state exceed 40% of all images, the eyes are judged to be in a fatigue state;
Judging the fatigue state of each image mouth:
calculating the mouth aspect ratio according to the position information of the mouth key points of each image, and judging whether it is larger than 0.3; if so, the mouth in that image is judged to be in a fatigue state, and if the images whose mouth is in a fatigue state exceed 40% of all images, the mouth is judged to be in a fatigue state.
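The three per-part double-threshold decisions above can be sketched with one generic helper; the function names are illustrative:

```python
def dual_threshold(values, frame_thresh, ratio_thresh, above=True):
    """Dual-threshold decision: a frame is flagged when its measured value
    crosses frame_thresh; the part is judged fatigued when the fraction of
    flagged frames in the window exceeds ratio_thresh."""
    if above:
        flags = [v > frame_thresh for v in values]
    else:
        flags = [v < frame_thresh for v in values]
    return sum(flags) / len(values) > ratio_thresh

# Per-part thresholds from the method: pitch > 30 deg in > 30% of frames,
# EAR < 0.2 in > 40% of frames, MAR > 0.3 in > 40% of frames.
def head_fatigued(pitches): return dual_threshold(pitches, 30.0, 0.30, above=True)
def eyes_fatigued(ears):    return dual_threshold(ears, 0.2, 0.40, above=False)
def mouth_fatigued(mars):   return dual_threshold(mars, 0.3, 0.40, above=True)
```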
Further, according to the influence weight of the fatigue states of the head, eyes and mouth on the fatigue state of the person, a correlation coefficient is set for the fatigue state of each part to comprehensively judge the fatigue state Z of the person:
Z=αZeye+βZmouth+λZhead
wherein Zeye represents the fatigue state of the eyes, Zmouth the fatigue state of the mouth, and Zhead the fatigue state of the head, and the correlation coefficients α, β and λ are set to 0.2, 0.3 and 0.5 respectively;
When Z is greater than or equal to 0.5, the person is judged to be in a fatigue state.
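A minimal sketch of the comprehensive judgment, using the weights and threshold stated above:

```python
def person_fatigued(z_eye, z_mouth, z_head,
                    alpha=0.2, beta=0.3, lam=0.5, threshold=0.5):
    """Combine the per-part fatigue states (0 or 1) with the weights from the
    method: Z = alpha*Zeye + beta*Zmouth + lambda*Zhead, fatigued when Z >= 0.5."""
    z = alpha * z_eye + beta * z_mouth + lam * z_head
    return z >= threshold
```

Note that with these weights a lowered head alone (Z = 0.5) or closed eyes plus yawning (Z = 0.5) already triggers the fatigue judgment, while either the eyes or the mouth alone does not.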
In another aspect, the present invention provides a fatigue state detection system based on keypoint detection and head pose, comprising:
The model training module is used for constructing and training an MMC multi-task prediction model to obtain a trained MMC multi-task prediction model;
the face position detection module is used for detecting, from the plurality of frames of face images acquired in the unit time, each frame being taken as one image, the face position in each image with an MTCNN network and cutting out the head image;
the parallel prediction module is used for inputting the head image into the trained MMC multitask prediction model to obtain the head attitude angle and the position information of the key points of the human face;
The local state detection module is used for judging the fatigue states of the heads in the images by utilizing a double-threshold method according to the head posture angle of each image, and judging the fatigue states of the eyes and the mouths in the images by utilizing the double-threshold method according to the position information of the eyes and the mouths in the key points of the faces of the images;
And the comprehensive fatigue state detection module is used for comprehensively judging the fatigue state of the person according to the fatigue states of the head, eyes and mouth.
Further, the MMC multi-task prediction model comprises a backbone network and two fully connected layers respectively used for regressing the head attitude angle and the position information of the face key points; the backbone network adopts an improved lightweight MobileNet-V2 network structure, and a CA attention module is embedded in the backbone network,
the backbone network is used for extracting and fusing the features of the input head image to obtain a feature map,
the CA attention module is used for pooling the feature map along the horizontal and vertical directions respectively to obtain the position information of the feature map.
Further, the first layer of the improved lightweight MobileNet-V2 network structure performs feature extraction on the input head image with 1x1, 3x3 and 5x5 convolution kernels, the convolution stride is set to 1, the pads corresponding to the kernels are set to 0, 1 and 2 respectively, and the size of the improved lightweight MobileNet-V2 network structure is 4M.
Compared with the prior art, the invention has the following advantages and beneficial effects:
According to the invention, a depthwise separable convolutional network is used as the backbone network of the MMC multi-task prediction model, and according to the task correlation between face key point detection and head pose estimation, the same network is used for training and predicting the two tasks, so that the required number of parameters and amount of computation are greatly reduced, the detection speed of the model is improved, and a real-time effect is achieved;
according to the invention, an improved lightweight MobileNet-V2 network structure is used as the backbone network. On one hand, the first layer of the backbone extracts features of the picture with convolutions of different scales and fuses them, obtaining different receptive fields through several convolution kernels of different sizes; considering the influence of the relative positions of the eyes, nose, mouth and other parts of the face on the attitude angle, a larger receptive field describes this correlation better, so the model trains and predicts the two tasks at high speed with strong real-time performance. On the other hand, a CA attention module is embedded in the backbone network so that accurate position information can be captured in space, achieving attention to image spatial information while remaining real-time.
Detailed Description
For the purpose of making apparent the objects, technical solutions and advantages of the present invention, the present invention will be further described in detail with reference to the following examples and the accompanying drawings, wherein the exemplary embodiments of the present invention and the descriptions thereof are for illustrating the present invention only and are not to be construed as limiting the present invention.
Example 1
As shown in fig. 1, this embodiment 1 provides a fatigue state detection method based on key point detection and head pose, including the steps of:
S1, constructing and training an MMC multi-task prediction model whose backbone network is a depthwise separable convolutional network, to obtain a trained MMC multi-task prediction model;
When training the MMC multi-task prediction model, training is performed with the 300W_LP dataset. The 300W_LP dataset is widely used for facial feature recognition and head pose analysis and is a commonly used in-the-wild 2D landmark dataset; it consists of 61225 head pose images, expanded to 122450 images by flipping, and carries face keypoint coordinates and head pose angle labels. Before training the MMC multi-task prediction model with the 300W_LP dataset, the images in the dataset are preprocessed, which comprises:
cropping the redundant background in each image according to the face keypoint coordinates in the dataset so as to improve the training effect, resizing the cropped images to a uniform 224x224, and performing graying and normalization on the uniformly sized images.
Specifically, the backbone network adopts an improved lightweight MobileNet-V2 network structure; the overall network structure is shown in fig. 2. The first layer of the backbone extracts features of the image with convolutions of different scales and fuses them, obtaining different receptive fields through convolution kernels of different sizes. Specifically, 1x1, 3x3 and 5x5 convolution kernels replace the original single 3x3 kernel for feature extraction on the input head image; the convolution stride is set to 1, and the pads corresponding to the kernels are set to 0, 1 and 2 respectively, so that the convolved branches yield features of the same spatial dimensions and can be concatenated directly. Increasing the network width in this way improves network performance. To reduce the amount of computation, the 1x1 convolution kernel performs dimensionality reduction, which cuts the number of parameters while preserving network performance, introduces more nonlinearity and improves generalization, while the 5x5 convolution kernel provides a larger receptive field. Considering the influence of the relative positions of the eyes, nose, mouth and other parts of the face on the attitude angle, a larger receptive field describes this correlation better. Meanwhile, a CA attention module is embedded in the backbone network; it captures accurate spatial position information by decomposing two-dimensional global pooling into two one-dimensional encoding processes, namely pooling operations along the horizontal and vertical directions of the input feature map respectively, thereby obtaining position information along the x and y axes of the input feature map.
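The directional pooling pair of the CA module described above can be sketched as follows (only the pooling step; the subsequent encoding and re-weighting stages of the full CA module are omitted):

```python
import numpy as np

def ca_directional_pooling(x):
    """Factorize 2-D global average pooling into two 1-D poolings over a
    feature map x of shape (C, H, W): pooling along the width gives a (C, H)
    descriptor that keeps y-axis position information, pooling along the
    height gives a (C, W) descriptor that keeps x-axis position information."""
    pool_h = x.mean(axis=2)  # (C, H): pooled along the horizontal direction
    pool_w = x.mean(axis=1)  # (C, W): pooled along the vertical direction
    return pool_h, pool_w
```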
After the improved MobileNet-V2 backbone, two fully connected layers are added: the FC1 layer performs face key point detection and regresses the coordinates of 68 feature points, and the FC2 layer performs head pose estimation and regresses the attitude angles in the three directions. The improved lightweight MobileNet-V2 network structure is 4M in size; the model stays small while the prediction precision is preserved, so real-time performance can be achieved during prediction.
More specifically, the process of training the MMC multi-task prediction model includes two tasks:
the face key point detection task is used for positioning the position of a face feature point according to the face key point coordinates in the image, measuring the difference between the predicted coordinate value and the real coordinate value of the feature point by using an L2 loss function lossa, and obtaining the position information of the face key point by regression;
The head estimation task predicts the angles of the head in the image in the three directions yaw, pitch and roll according to the head pose angle labels in the image; the learning target is to regress the three angles yaw, pitch and roll describing the head pose, and the loss function is as follows:
lossb = (x̂1 - x1)^2 + (x̂2 - x2)^2 + (x̂3 - x3)^2 (1)
wherein (x̂1, x̂2, x̂3) is the estimation result of the head attitude angles in the three directions yaw, pitch and roll, and (x1, x2, x3) is the head attitude angle label in the three directions;
training is performed with the total loss of the MMC multi-task prediction model as the learning target, where the total loss is the sum of the losses of the face key point detection task and the head estimation task:
loss=lossa+ηlossb (2)
wherein, since face keypoint detection and head pose estimation both belong to regression tasks, the task allocation weight η is set to 1.
S2, acquiring a plurality of frames of face images in a unit time, each frame being taken as one image, detecting the face position in each image with an MTCNN network, and cutting out the head image;
s3, inputting the head image into a trained MMC multitask prediction model to obtain the head attitude angle and the position information of the key points of the human face;
specifically, the process of obtaining the head attitude angle and the position information of the key points of the human face is as follows:
the backbone network of the MMC multi-task prediction model extracts and fuses the features of the input head image to obtain a feature map; the backbone network adopts an improved lightweight MobileNet-V2 convolutional network structure,
meanwhile, the CA attention module embedded in the backbone network pools the feature map along the horizontal and vertical directions respectively to obtain the position information of the feature map;
and according to the position information of the feature map, the two fully connected layers behind the backbone network regress the head attitude angle and the position information of the face key points, respectively.
S4, judging the fatigue state of the head in the images with a double-threshold method according to the head attitude angle of each image, and judging the fatigue states of the eyes and mouth in the images with the double-threshold method according to the position information of the eyes and mouth among the face key points of each image.
The fatigue state of each part is determined by the double-threshold method as follows. The unit time is set to 30 seconds, and continuous multi-frame images within the 30 seconds are acquired, each frame being taken as one image. Whether a part is in a fatigue state is determined by the proportion of the frames in which that part shows the fatigue sign to the total number of frames in the unit time: for example, the eyes (closed) and the mouth (yawning) can be judged fatigued when such video frames exceed 40% of the frames in the unit time, and the head can be judged fatigued when the video frames with a pitch attitude angle greater than 30 degrees (head lowered) exceed 30% of the frames in the unit time. The specific determination is as follows:
1. judging the fatigue state of each image head:
The three Euler angles (pitch, yaw, roll) of the head are obtained by direct regression of the MMC multi-task prediction model. Because the pitch angle changes most when a person is fatigued, to reduce the amount of computation only the change of the pitch attitude angle is monitored and a threshold is set on it. According to the head attitude angle of each image, whether the pitch attitude angle exceeds 30 degrees (head lowered) is judged; if so, the head in that image is judged to be in a fatigue state, and if the images whose head is in a fatigue state exceed 30% of all images, the head is judged to be in a fatigue state;
2. judging the fatigue state of eyes of each image:
According to the position information of the eye key points of each image: the coordinates of the eye key points reflect the degree of opening and closing of the eyes, and when the eyes close too frequently within a certain time, the eyes are judged to be in a fatigue state. The eye opening state is judged by calculating the eye aspect ratio EAR: whether the EAR is smaller than 0.2 is judged; if so, the eyes in that image are judged to be in a fatigue state, and if the images whose eyes are in a fatigue state exceed 40% of all images, the eyes are judged to be in a fatigue state. With the face key point labels as shown in fig. 3, denoting the six key points of one eye by p1 to p6 (p1 and p4 the eye corners, p2, p6 and p3, p5 the upper and lower pairs), the EAR calculation formulas are:
EARl = (||p2 - p6|| + ||p3 - p5||) / (2||p1 - p4||) (3)
EARr = (||p2 - p6|| + ||p3 - p5||) / (2||p1 - p4||) (4)
computed on the key points of the left and right eye respectively, and the final eye aspect ratio is judged comprehensively as:
EAR = (EARl + EARr) / 2 (5)
The width and height of the eye are calculated with the Euclidean distance formula. When the eye is closed, the EAR value approaches 0; the preliminary threshold is set to 0.2, with values below it treated as the eye-closed state and values above it as the eye-open state. Since the opening sizes of different eyes differ, different thresholds can be set according to the specific situation.
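A sketch of the EAR computation under the per-eye landmark ordering described above (p1/p4 the corners, p2-p6 and p3-p5 the vertical pairs):

```python
import numpy as np

def ear(eye):
    """Eye aspect ratio for one eye from its six landmarks p1..p6,
    given as a (6, 2) array in the order described in the text."""
    p1, p2, p3, p4, p5, p6 = eye
    v1 = np.linalg.norm(p2 - p6)      # first vertical distance
    v2 = np.linalg.norm(p3 - p5)      # second vertical distance
    h = np.linalg.norm(p1 - p4)       # horizontal (corner-to-corner) distance
    return (v1 + v2) / (2.0 * h)

def eye_state(left_eye, right_eye, threshold=0.2):
    """Average the two per-eye ratios and compare with the 0.2 threshold."""
    e = (ear(left_eye) + ear(right_eye)) / 2.0
    return "closed" if e < threshold else "open"
```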
3. Judging the fatigue state of each image mouth:
Similarly, according to the position information of the mouth key points of each image, the state of the mouth is distinguished by the mouth aspect ratio MAR, and whether the current state is yawning is judged by detecting the distance between the upper and lower lips together with the mouth-opening time. The MAR value is calculated from the coordinates of the highest and lowest points of the inner lips and of the mouth corners; with the face key point labels as shown in fig. 3, denoting the highest and lowest inner-lip points by ptop and pbottom and the mouth corners by pleft and pright, the formula for calculating the MAR value is:
MAR = ||ptop - pbottom|| / ||pleft - pright|| (6)
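A sketch of the MAR computation; the 0-indexed landmark positions below assume the common 68-point scheme (1-based points 61/65 for the inner mouth corners, 63/67 for the inner-lip top and bottom) and are an assumption about fig. 3:

```python
import numpy as np

def mar(landmarks68):
    """Mouth aspect ratio from a (68, 2) landmark array: vertical distance
    between the highest and lowest inner-lip points over the distance
    between the inner mouth corners."""
    top, bottom = landmarks68[62], landmarks68[66]   # inner-lip top / bottom
    left, right = landmarks68[60], landmarks68[64]   # inner mouth corners
    return np.linalg.norm(top - bottom) / np.linalg.norm(left - right)
```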
The MAR threshold is set to 0.3: whether the mouth aspect ratio is greater than 0.3 is judged; if so, the mouth in that image is judged to be in a fatigue state, and if the images whose mouth is in a fatigue state exceed 40% of all images, the mouth is judged to be in a fatigue state.
S5, comprehensively judging the fatigue state of the person according to the fatigue states of the head, eyes and mouth.
Specifically, considering that a person who keeps the head lowered for a long time is easily regarded as fatigued, and that the eyes are usually closed while yawning, the fatigue state of the person is judged comprehensively by setting a correlation coefficient for the fatigue state of each part according to the influence weight of the fatigue states of the head, eyes and mouth on the fatigue state of the person; the comprehensive fatigue state Z is:
Z=αZeye+βZmouth+λZhead (7)
wherein Zeye represents the fatigue state of the eyes, Zmouth the fatigue state of the mouth, and Zhead the fatigue state of the head; the correlation coefficients α, β and λ are set to 0.2, 0.3 and 0.5 respectively, and when Z is greater than or equal to 0.5, the person is judged to be in a fatigue state.
Example 2
As shown in fig. 4, the present embodiment provides a fatigue state detection system based on key point detection and head pose, including:
The model training module is used for constructing and training an MMC multi-task prediction model to obtain a trained MMC multi-task prediction model;
When training the MMC multi-task prediction model, training is performed with the 300W_LP dataset. The 300W_LP dataset is widely used for facial feature recognition and head pose analysis and is a commonly used in-the-wild 2D landmark dataset; it consists of 61225 head pose images, expanded to 122450 images by flipping, and carries face keypoint coordinates and head pose angle labels. Before training the MMC multi-task prediction model with the 300W_LP dataset, the images in the dataset are preprocessed, which comprises:
cropping the redundant background in each image according to the face keypoint coordinates in the dataset so as to improve the training effect, resizing the cropped images to a uniform 224x224, and performing graying and normalization on the uniformly sized images.
The MMC multi-task prediction model comprises a backbone network and two fully connected layers respectively used for regressing the head attitude angle and the position information of the face key points: the FC1 layer performs face key point detection and regresses the coordinates of 68 feature points, and the FC2 layer performs head pose estimation and regresses the attitude angles in the three directions. The backbone network adopts the improved lightweight MobileNet-V2 network structure. The first layer of the backbone extracts features of the image with convolutions of different scales and fuses them, obtaining different receptive fields through convolution kernels of different sizes: specifically, 1x1, 3x3 and 5x5 convolution kernels replace the original single 3x3 kernel for feature extraction on the input head image, the convolution stride is set to 1, and the pads corresponding to the kernels are set to 0, 1 and 2 respectively, so that the convolved branches yield features of the same spatial dimensions and can be concatenated directly; increasing the network width in this way improves network performance. To reduce the amount of computation, the 1x1 convolution kernel performs dimensionality reduction, which cuts the number of parameters while preserving network performance, introduces more nonlinearity and improves generalization, while the 5x5 convolution kernel provides a larger receptive field. Considering the influence of the relative positions of the eyes, nose, mouth and other parts of the face on the attitude angle, a larger receptive field describes this correlation better.
Meanwhile, a CA attention module is embedded in the backbone network; it captures accurate spatial position information by decomposing two-dimensional global pooling into two one-dimensional encoding processes, namely pooling operations along the horizontal and vertical directions of the input feature map respectively, thereby obtaining position information along the x and y axes of the input feature map. The improved lightweight MobileNet-V2 network structure is 4M in size; the model stays small while the prediction precision is preserved, so real-time performance can be achieved during prediction.
the face position detection module is used for detecting, from the plurality of frames of face images acquired in the unit time, each frame being taken as one image, the face position in each image with an MTCNN network and cutting out the head image;
the parallel prediction module is used for inputting the head image into the trained MMC multitask prediction model to obtain the head attitude angle and the position information of the key points of the human face;
The local state detection module is used for judging the fatigue states of the heads in the images by utilizing a double-threshold method according to the head posture angle of each image, and judging the fatigue states of the eyes and the mouths in the images by utilizing the double-threshold method according to the position information of the eyes and the mouths in the key points of the faces of the images;
The fatigue state of each part is determined by the double-threshold method. The unit time is set to 30 seconds, and continuous multi-frame images within the 30 seconds are acquired, each frame being taken as one image. Whether a part is in a fatigue state is determined by the proportion of the frames in which that part shows the fatigue sign to the total number of frames in the unit time: for example, the eyes (closed) and the mouth (yawning) can be judged fatigued when such video frames exceed 40% of the frames in the unit time, and the head can be judged fatigued when the video frames with a pitch attitude angle greater than 30 degrees (head lowered) exceed 30% of the frames in the unit time.
And the comprehensive fatigue state detection module is used for comprehensively judging the fatigue state of the person according to the fatigue states of the head, eyes and mouth. Considering that a person who keeps the head lowered for a long time is easily regarded as fatigued, and that the eyes are usually closed while yawning, a correlation coefficient is set for the fatigue state of each part according to the influence weight of the fatigue states of the head, eyes and mouth on the fatigue state of the person, and the comprehensive fatigue state Z of the person is determined with formula (7).
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments and methods may be implemented by a program instructing related hardware; the program may be stored in a computer-readable storage medium which, when executed, carries out the corresponding method steps; the storage medium may be a ROM/RAM, a magnetic disk, an optical disk, etc.
The foregoing description of the embodiments is provided to illustrate the objects, technical solutions and advantages of the invention and is not intended to limit the scope of the invention to the particular embodiments; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within the scope of the invention.